Abstract
This paper investigates the robustness of Run Transferable Libraries (RTLs) on scaled problems. RTLs provide genetic programming (GP) with a library of functions that replaces the usual primitive functions provided when approaching a problem. The RTL evolves from run to run using feedback based on function usage, and has been shown to outperform GP by an order of magnitude on a variety of scalable problems. RTLs can, however, also be applied across a domain of related problems, as well as across a range of scaled instances of a single problem. To do this successfully, the library must maintain a balanced range of functions. We introduce a problem that can deceive the system into converging to a sub-optimal set of functions, and demonstrate that this is a consequence of the greediness of the library update algorithm. We show that a much simpler, truly evolutionary, update strategy does not suffer from this problem, and exhibits far better optimisation properties than the original strategy.
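The abstract contrasts a greedy, usage-driven library update with a simpler, diversity-preserving evolutionary one. The sketch below is a hypothetical illustration of that distinction only; the function names, usage counts, and update rules are assumptions for exposition and are not the paper's actual algorithm.

```python
import random

def greedy_update(usage, library_size):
    """Greedy strategy (illustrative): keep only the most-used functions.
    On a deceptive problem this can lock the library into a sub-optimal subset."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    return ranked[:library_size]

def evolutionary_update(usage, library_size):
    """Usage-proportionate resampling (illustrative): every function keeps a
    survival chance proportional to its observed usage, preserving diversity."""
    funcs = list(usage)
    weights = [usage[f] + 1 for f in funcs]  # +1 keeps rarely used functions alive
    return random.choices(funcs, weights=weights, k=library_size)

# Toy usage counts gathered from one GP run (values are purely illustrative)
usage_counts = {"lib_f0": 42, "lib_f1": 37, "lib_f2": 3, "lib_f3": 1}
print(greedy_update(usage_counts, 2))        # deterministic: always the same two
print(evolutionary_update(usage_counts, 2))  # stochastic: low-usage functions can survive
```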
Original language | English |
---|---|
Pages (from-to) | 361-370 |
Number of pages | 10 |
Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Volume | 3447 |
DOIs | |
Publication status | Published - 2005 |
Event | 8th European Conference on Genetic Programming, EuroGP 2005, Lausanne, Switzerland (30 Mar 2005 → 1 Apr 2005) |