Lee Spector
1-3 of 3 results
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 88 (July 22–26, 2024). doi: 10.1162/isal_a_00832
Untangling the Effects of Down-Sampling and Selection in Genetic Programming
Abstract
Genetic programming systems often use large training sets to evaluate the quality of candidate solutions for selection, which can be computationally expensive. Down-sampling training sets has long been used to decrease the computational cost of evaluation in a wide range of application domains. More specifically, recent studies have shown that both random and informed down-sampling can substantially improve problem-solving success for GP systems that use the lexicase parent selection algorithm. We test whether these down-sampling techniques can also improve problem-solving success in the context of three other commonly used selection methods (fitness-proportionate, tournament, and implicit fitness sharing plus tournament selection) across six program synthesis GP problems. We verified that down-sampling can significantly improve problem-solving success for all three of these other selection schemes, demonstrating its general efficacy. We find that the selection pressure imposed by the selection scheme does not interact with the down-sampling method. However, we find that informed down-sampling can improve problem-solving success significantly over random down-sampling when the selection scheme has a mechanism for diversity maintenance, such as lexicase or implicit fitness sharing. Overall, our results suggest that down-sampling should be considered more often when solving test-based problems, regardless of the selection scheme in use.
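The core idea of random down-sampling with a standard selection scheme can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the individuals, training cases, and error function below are all made up for the example.

```python
import random

# Hypothetical per-case error: how far an individual's output is from the
# target output for one (input, target) training case.
def error(individual, case):
    return abs(individual(case[0]) - case[1])

def down_sample(training_set, rate, rng):
    """Random down-sampling: keep only a fraction `rate` of the training
    cases this generation, cutting the cost of evaluation proportionally."""
    k = max(1, int(len(training_set) * rate))
    return rng.sample(training_set, k)

def tournament_select(population, cases, size, rng):
    """Tournament selection using aggregate error on the sampled cases only."""
    contestants = rng.sample(population, size)
    return min(contestants, key=lambda ind: sum(error(ind, c) for c in cases))

# Toy usage: candidate "programs" are just linear functions x -> m*x,
# and the target behavior is f(x) = 2x.
rng = random.Random(0)
training = [(x, 2 * x) for x in range(100)]
population = [(lambda m: (lambda x: m * x))(m) for m in range(-5, 6)]

sample = down_sample(training, rate=0.1, rng=rng)   # 10 cases instead of 100
parent = tournament_select(population, sample, size=5, rng=rng)
```

Each selection event here evaluates contestants on only 10 of 100 cases; a GP run would re-draw the subsample every generation, which is what lets the same evaluation budget cover more generations or larger populations.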
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 303–311 (July 13–18, 2020). doi: 10.1162/isal_a_00326
Genetic Source Sensitivity and Transfer Learning in Genetic Programming
Abstract
Genetic programming uses biologically-inspired processes of variation and selection to synthesize computer programs that solve problems. Here we investigate the sensitivity of genetic programming to changes in the probability that particular instructions and constants will be chosen for inclusion in randomly generated programs or for introduction by mutation. We find, contrary to conventional wisdom within the field, that genetic programming can be highly sensitive to changes in this source of new genetic material. Additionally, we find that genetic sources can be tuned to significantly improve adaptation across sets of related problems. We study the evolution of solutions to software synthesis problems using untuned genetic sources and sources that have been tuned on the basis of problem statements, human intuition, or prevalence in prior solution programs. We find significant differences in performance across these approaches, and use these lessons to develop a method for tuning genetic sources on the basis of evolved solutions to related problems. This “transfer learning” approach tunes genetic sources nearly as well as humans do, but by means of a fully automated process that can be applied to previously unsolved problems.
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 341–349 (July 13–18, 2020). doi: 10.1162/isal_a_00334
Explaining and Exploiting the Advantages of Down-sampled Lexicase Selection
Abstract
In genetic programming, parent selection is ordinarily based on aggregate measures of performance across an entire training set. Lexicase selection, by contrast, selects on the basis of performance on random sequences of test cases; this has been shown to enhance problem-solving power in many circumstances. Lexicase selection can also be seen as better reflecting biological evolution, by modeling sequences of challenges that organisms face over their lifetimes. Recent work has demonstrated that the advantages of lexicase selection can be amplified by down-sampling, meaning that only a random subsample of the training cases is used each generation, which can also be seen as modeling environmental change over time. Here we provide the most extensive benchmarking of down-sampled lexicase selection to date, showing that its benefits hold up to increased scrutiny. The reasons that down-sampling helps, however, are not yet fully understood. Hypotheses include that down-sampling allows for more generations to be processed with the same budget of program evaluations; that the variation of training data across generations acts as a changing environment, encouraging adaptation; or that it reduces overfitting, leading to more general solutions. We systematically evaluate these hypotheses, finding evidence against all three, and instead draw the conclusion that down-sampled lexicase selection's main benefit stems from the fact that it allows GP to examine more individuals within the same computational budget, even though each individual is examined less completely.
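Down-sampled lexicase selection, as described above, can be sketched compactly: shuffle the (sub)sampled test cases, then filter the population case by case, keeping only the candidates with the best error on each. The individuals and error table below are invented for illustration; only the selection logic reflects the algorithm.

```python
import random

def lexicase_select(population, cases, errors, rng):
    """Lexicase selection: process test cases in a random order, keeping
    only individuals with the best (lowest) error on each case in turn."""
    order = list(cases)
    rng.shuffle(order)
    pool = list(population)
    for case in order:
        best = min(errors[ind][case] for ind in pool)
        pool = [ind for ind in pool if errors[ind][case] == best]
        if len(pool) == 1:
            break
    return rng.choice(pool)

# Toy setup: four individuals, five cases, precomputed error table.
errors = {
    "a": {0: 0, 1: 0, 2: 3, 3: 3, 4: 3},   # specialist on cases 0-1
    "b": {0: 3, 1: 3, 2: 0, 3: 0, 4: 0},   # specialist on cases 2-4
    "c": {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},   # generalist, never the best
    "d": {0: 3, 1: 3, 2: 3, 3: 3, 4: 3},   # poor everywhere
}
population = list(errors)

rng = random.Random(1)
# Down-sampling: each generation uses only a random subsample of the cases,
# so every selection event is cheaper and more generations fit in the budget.
subsample = rng.sample(range(5), 2)
parent = lexicase_select(population, subsample, errors, rng)
```

Note how lexicase rewards specialists: whichever case comes first, one of the specialists has the strictly best error on it, so the generalist "c" and the weak "d" are filtered out, which is one way diversity is maintained.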