Abstract

This paper investigates the robustness of Run Transferable Libraries (RTLs) on scaled problems. RTLs provide genetic programming (GP) with a library of functions that replaces the usual primitive functions supplied when approaching a problem. The library evolves from run to run using feedback based on function usage, and has been shown to outperform standard GP by an order of magnitude on a variety of scalable problems. RTLs can, however, also be applied across a domain of related problems, as well as across a range of scaled instances of a single problem. To do this successfully, the library must maintain a balanced range of functions. We introduce a problem that can deceive the system into converging to a sub-optimal set of functions, and demonstrate that this is a consequence of the greediness of the library update algorithm. We show that a much simpler, truly evolutionary, update strategy does not suffer from this problem and exhibits far better optimization properties than the original strategy.
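As a rough illustration of the distinction the abstract draws, the sketch below contrasts a greedy, usage-ranked library update with a usage-proportional, mutation-based one. This is not the paper's actual update rule: the function names, the +1 weighting, and the `mutate` hook are illustrative assumptions only.

```python
import random

def greedy_update(library, usage_counts, size):
    """Keep only the most-used functions. This is greedy: functions that
    happen to suit early runs crowd out the rest of the library."""
    ranked = sorted(library, key=lambda f: usage_counts.get(f, 0), reverse=True)
    return ranked[:size]

def proportional_update(library, usage_counts, size, mutate, p_mutate=0.1):
    """Sample the next library in proportion to usage, with occasional
    mutation, so rarely used but potentially useful functions can survive."""
    weights = [usage_counts.get(f, 0) + 1 for f in library]  # +1 keeps every function selectable
    new_library = random.choices(library, weights=weights, k=size)
    return [mutate(f) if random.random() < p_mutate else f for f in new_library]

if __name__ == "__main__":
    # Hypothetical library members and usage counts gathered from one batch of runs.
    library = ["add", "sub", "mul", "div", "if_lt"]
    usage = {"add": 40, "mul": 35, "sub": 3, "div": 1, "if_lt": 2}
    print(greedy_update(library, usage, size=3))            # drops rarely used functions outright
    print(proportional_update(library, usage, size=3,
                              mutate=lambda f: f + "_mut"))  # retains some diversity
```

The point of the contrast is that the stochastic, selection-plus-mutation update never deterministically discards low-usage functions, which is the property the abstract attributes to the simpler evolutionary strategy.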

Original language: English
Pages (from-to): 361-370
Number of pages: 10
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3447
Publication status: Published - 2005
Event: 8th European Conference on Genetic Programming, EuroGP 2005 - Lausanne, Switzerland
Duration: 30 Mar 2005 - 1 Apr 2005
