Evolutionary Computation Advance Access
https://direct.mit.edu/evco
Language: en-us
Published: Thu, 05 Sep 2024 00:00:00 GMT (last built Fri, 06 Sep 2024 22:46:00 GMT)
Generator: Silverchair

BUSTLE: A Versatile Tool for the Evolutionary Learning of STL Specifications from Data
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00347/119669/BUSTLE-A-Versatile-Tool-for-the-Evolutionary
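As a rough illustration of the bi-level mechanism this article's abstract describes (a global search over formula structures plus a local search over parameter values), here is a toy sketch. Everything in it is an assumption for illustration: the "structures" are plain integers rather than STL formulae, the inner tuner is a simple grid refinement, and the outer pass is exhaustive rather than evolutionary.

```python
def bilevel_search(structures, param_fitness, rounds=8):
    """Toy bi-level search: an outer (global) pass over discrete structures
    and an inner (local) coarse-to-fine search over each structure's single
    real parameter. All names and the toy fitness are illustrative; BUSTLE
    itself evolves STL formula structures and tunes their numeric thresholds."""

    def tune(structure):
        # Inner level: successive grid refinement around the best parameter.
        lo, hi = -2.0, 2.0
        theta = 0.0
        for _ in range(rounds):
            grid = [lo + i * (hi - lo) / 10 for i in range(11)]
            theta = max(grid, key=lambda t: param_fitness(structure, t))
            span = (hi - lo) / 10
            lo, hi = theta - span, theta + span  # zoom in around the best point
        return theta, param_fitness(structure, theta)

    # Outer level: score every structure with its locally tuned parameter.
    scored = [(s, *tune(s)) for s in structures]
    return max(scored, key=lambda t: t[2])  # (structure, theta, fitness)

# Toy landscape: each "structure" s shifts the optimum to theta = s/2.
fit = lambda s, th: s - (th - s / 2) ** 2
best = bilevel_search([0, 1, 2], fit)
```

On this toy landscape the outer pass picks structure 2, for which the inner tuner converges to theta near 1.0.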
Thu, 05 Sep 2024 00:00:00 GMT

Abstract: Describing the properties of complex systems that evolve over time is a crucial requirement for monitoring and understanding them. Signal Temporal Logic (STL) is a framework that has proved effective for this aim because it is expressive and allows stating properties as human-readable formulae. Crafting STL formulae that fit a particular system is, however, a difficult task. For this reason, a few approaches have recently been proposed for the automatic learning of STL formulae from observations of the system. In this paper, we propose BUSTLE (Bi-level Universal STL Evolver), an approach based on evolutionary computation for learning STL formulae from data. BUSTLE advances the state of the art because it (i) applies to a broader class of problems, in terms of what is known about the state of the system during its observation, and (ii) generates both the structure and the parameter values of the formulae, employing a bi-level search mechanism (global for the structure, local for the parameters). We consider two cases where (a) observations of the system in both anomalous and regular states are available, or (b) only observations of the regular state are available. We experimentally evaluate BUSTLE on problem instances corresponding to the two cases and compare it against previous approaches. We show that the evolved STL formulae are effective and human-readable: the versatility of BUSTLE does not come at the cost of lower effectiveness.

DOI: 10.1162/evco_a_00347

Virtual Position Guided Strategy for Particle Swarm Optimization Algorithms on Multimodal Problems
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00352/121190/Virtual-Position-Guided-Strategy-for-Particle
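A minimal sketch of the core idea summarized in the abstract below: compare diversity values of two populations against a baseline and pick one of three per-iteration search behaviors. The diversity measure, the comparison logic, and all names here are illustrative stand-ins, not the paper's exact scheme.

```python
import math

def diversity(swarm):
    """Mean Euclidean distance of the particles to the swarm centroid
    (one common swarm-diversity measure; the paper's may differ)."""
    dim = len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / len(swarm) for d in range(dim)]
    return sum(math.dist(p, centroid) for p in swarm) / len(swarm)

def choose_phase(swarm, virtual_positions, baseline):
    """Pick a VPG-style phase for this iteration: the thresholds below are
    an illustrative guess at how the diversity values and baseline interact."""
    d_real = diversity(swarm)
    d_virtual = diversity(virtual_positions)
    if d_real < baseline:
        return "divergence"      # swarm is collapsing: push particles apart
    if d_virtual > d_real:
        return "acceleration"    # virtual population more spread out: exploit
    return "normal"
```

Each phase would then apply its own velocity-update scheme, which is where the exploration/exploitation balancing happens.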
Mon, 05 Aug 2024 00:00:00 GMT

Abstract: Premature convergence is a thorny problem for particle swarm optimization (PSO) algorithms, especially on multimodal problems, where maintaining swarm diversity is crucial. However, most enhancement strategies for PSO, including the existing diversity-guided strategies, have not fully addressed this issue. This paper proposes the virtual position guided (VPG) strategy for PSO algorithms. The VPG strategy calculates diversity values for two different populations and establishes a diversity baseline. It then dynamically guides the algorithm to conduct different search behaviors, through three phases (divergence, normal, and acceleration) in each iteration, based on the relationships among these diversity values and the baseline. Collectively, these phases orchestrate different schemes to balance exploration and exploitation, collaboratively steering the algorithm away from local optima and towards enhanced solution quality. The introduction of the "virtual position" caters to the strategy's adaptability across various PSO algorithms, ensuring the generality and effectiveness of the proposed VPG strategy. With a single hyperparameter and a recommended default setting, VPG is easy to implement. The experimental results demonstrate that the VPG strategy is superior to several canonical and state-of-the-art diversity-guidance strategies, and is effective in improving the search performance of most PSO algorithms on multimodal problems of various dimensionalities.

DOI: 10.1162/evco_a_00352

A Layered Learning Approach to Scaling in Learning Classifier Systems for Boolean Problems
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00351/120932/A-Layered-Learning-Approach-to-Scaling-in-Learning
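The layered-learning loop the abstract below describes (solve a curriculum of subordinate problems, reusing what was learned earlier) can be caricatured in a few lines. This toy builds a 2-to-1 multiplexer out of previously "learned" NOT and AND functions; the curriculum format and all names are illustrative, not XCSCF*'s actual representation, which uses Code Fragments.

```python
def layered_learn(curriculum, knowledge=None):
    """Toy layered-learning loop: each lesson builds its solution out of
    the functions learned in earlier lessons (names are illustrative)."""
    knowledge = dict(knowledge or {})
    for name, build in curriculum:
        knowledge[name] = build(knowledge)  # reuse earlier knowledge here
    return knowledge

# Curriculum: NOT and AND first, then a 2-to-1 multiplexer reusing both.
curriculum = [
    ("NOT", lambda k: lambda a: 1 - a),
    ("AND", lambda k: lambda a, b: a & b),
    ("MUX", lambda k: lambda s, x0, x1:
        k["AND"](k["NOT"](s), x0) | k["AND"](s, x1)),
]
skills = layered_learn(curriculum)
```

The point of the decomposition is that the later lesson's solution is expressed in terms of the earlier ones, so it generalizes with them instead of being refit from scratch.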
Mon, 05 Aug 2024 00:00:00 GMT

Abstract: Evolutionary Computation (EC) often throws away learned knowledge as it is reset for each new problem addressed. Conversely, humans can learn from small-scale problems, retain this knowledge (plus functionality), and then successfully reuse it in larger-scale and/or related problems. Linking solutions to problems has been achieved through layered learning, where an experimenter sets a series of simpler related problems to solve a more complex task. Recent work on Learning Classifier Systems (LCSs) has shown that knowledge reuse through the adoption of Code Fragments, GP-like tree-based programs, is plausible. However, random reuse is inefficient. Thus, the research question is how an LCS can adopt a layered-learning framework such that increasingly complex problems can be solved efficiently. An LCS (named XCSCF*) has been developed to include the required base axioms necessary for learning, refined methods for transfer learning, and learning recast as a decomposition into a series of subordinate problems. These subordinate problems can be set as a curriculum by a teacher, but this does not mean that an agent can learn from it, especially if it only extracts over-fitted knowledge of each problem rather than the underlying scalable patterns and functions. Results show that, starting from a conventional tabula rasa with only a vague notion of which subordinate problems might be relevant, XCSCF* captures the general logic behind the tested domains and can therefore solve any n-bit Multiplexer, n-bit Carry-one, n-bit Majority-on, and n-bit Even-parity problem. This work demonstrates a step towards continual learning, as learned knowledge is effectively reused in subsequent problems.

DOI: 10.1162/evco_a_00351

Synthesising Diverse and Discriminatory Sets of Instances Using Novelty Search in Combinatorial Domains
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00350/120931/Synthesising-Diverse-and-Discriminatory-Sets-of
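The linearly weighted fitness the abstract below mentions can be sketched with the standard novelty-search sparseness measure (mean distance to the k nearest archive members). The weight, the distance metric, and the feature representation are illustrative choices here, not the paper's exact descriptors.

```python
import math

def novelty(candidate, archive, k=3):
    """Sparseness: mean distance to the k nearest neighbours in the archive
    (the usual novelty-search measure; features here are plain float vectors)."""
    if not archive:
        return float("inf")  # nothing seen yet: maximally novel
    dists = sorted(math.dist(candidate, a) for a in archive)
    k = min(k, len(dists))
    return sum(dists[:k]) / k

def combined_fitness(candidate, archive, performance, w=0.7):
    """Linearly weighted fitness balancing novelty against solver performance,
    in the spirit of the paper's approach; w = 0.7 is an illustrative default."""
    return w * novelty(candidate, archive) + (1 - w) * performance
```

Sweeping w between 0 and 1 trades instance diversity against the size of the performance gap favouring the target solver.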
Mon, 05 Aug 2024 00:00:00 GMT

Abstract: Gathering sufficient instance data to either train algorithm-selection models or understand algorithm footprints within an instance space can be challenging. We propose an approach to generating synthetic instances that are tailored to perform well with respect to a target algorithm belonging to a predefined portfolio but are also diverse with respect to their features. Our approach uses a novelty search algorithm with a linearly weighted fitness function that balances novelty and performance to generate a large set of diverse and discriminatory instances in a single run of the algorithm. We consider two definitions of novelty: (1) with respect to discriminatory performance within a portfolio of solvers; (2) with respect to the features of the evolved instances. We evaluate the proposed method on its ability to generate diverse and discriminatory instances in two domains (knapsack and bin-packing), comparing it to another well-known quality-diversity method, Multi-dimensional Archive of Phenotypic Elites (MAP-Elites), and to an evolutionary algorithm that evolves only for discriminatory behaviour. The results demonstrate that the novelty search method outperforms its competitors in terms of coverage of the space and its ability to generate instances that are diverse with regard to the relative size of the "performance gap" between the target solver and the remaining solvers in the portfolio. Moreover, for the knapsack domain, we also show that we are able to generate novel instances in regions of the instance space not covered by existing benchmarks, using a portfolio of state-of-the-art solvers. Finally, we demonstrate that the method is robust to different portfolios of solvers (stochastic approaches, deterministic heuristics, and state-of-the-art methods), thereby providing further evidence of its generality.

DOI: 10.1162/evco_a_00350

OneMax Is Not the Easiest Function for Fitness Improvements
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00348/120278/OneMax-Is-Not-the-Easiest-Function-for-Fitness
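The (1 : s+1) success rule studied in the article below is commonly implemented as a multiplicative update of the offspring population size: shrink lambda after a generation that improves fitness, grow it by F**(1/s) otherwise, so that one success per s+1 generations keeps lambda roughly stable. A minimal sketch, with F = 1.5 as an illustrative update strength:

```python
def adjust_lambda(lam, success, s=1.0, F=1.5, lam_min=1.0):
    """One application of the (1 : s+1) success rule for the self-adjusting
    (1,lambda)-EA. Constants are illustrative; the asymmetric exponents make
    one success balance exactly s failures."""
    if success:
        return max(lam_min, lam / F)        # easy progress: fewer offspring
    return lam * F ** (1.0 / s)             # stagnation: more offspring
```

The failure mode analysed in the paper arises when improvements are so frequent that lambda is driven down to its floor and kept there, which on some landscapes prevents the (1,lambda)-EA from retaining progress.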
Mon, 05 Aug 2024 00:00:00 GMT

Abstract: We study the (1 : s+1) success rule for controlling the population size of the (1,λ)-EA. It was shown by Hevia Fajardo and Sudholt that this parameter control mechanism can run into problems for large s if the fitness landscape is too easy. They conjectured that this problem is worst for the OneMax benchmark, since in some well-established sense OneMax is known to be the easiest fitness landscape. In this paper, we disprove this conjecture. We show that there exist s and ɛ such that the self-adjusting (1,λ)-EA with the (1 : s+1)-rule optimizes OneMax efficiently when started with ɛn zero-bits, but does not find the optimum in polynomial time on Dynamic BinVal. Hence, we show that there are landscapes where the problem of the (1 : s+1)-rule for controlling the population size of the (1,λ)-EA is more severe than for OneMax. The key insight is that, while OneMax is the easiest function for decreasing the distance to the optimum, it is not the easiest fitness landscape with respect to finding fitness-improving steps.

DOI: 10.1162/evco_a_00348

Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms
https://direct.mit.edu/evco/article/doi/10.1162/evco_a_00349/120277/Drift-Analysis-with-Fitness-Levels-for-Elitist
Mon, 05 Aug 2024 00:00:00 GMT

Abstract: The fitness level method is a popular tool for analyzing the hitting time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and to estimate lower and upper bounds on the hitting time using transition probabilities between fitness levels. However, the lower bound generated by this method is often loose. An open question regarding the fitness level method is what the tightest lower and upper time bounds are that can be constructed from transition probabilities between fitness levels. To answer this question, we combine drift analysis with fitness levels and define the tightest-bound problem as a constrained multiobjective optimization problem subject to fitness levels. The tightest metric bounds by fitness levels are constructed and proven for the first time. Linear bounds are then derived from the metric bounds, and a framework is established that can be used to develop different fitness level methods for different types of linear bounds. The framework is generic and promising, as it can be used to draw tight time bounds on fitness landscapes both with and without shortcuts. This is demonstrated with the example of the (1+1) EA maximizing the TwoMax1 function.

DOI: 10.1162/evco_a_00349
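For context, the classical fitness-level upper bound that this line of work refines (the textbook bound, not the paper's new tightest construction) reads as follows: partition the search space into fitness levels A_1 <_f ... <_f A_m ordered by fitness, and let s_i be a lower bound on the probability that the elitist algorithm leaves level A_i towards a higher level in one step. Then the expected hitting time T of the top level satisfies

```latex
% A_1 <_f \dots <_f A_m : fitness levels,
% s_i : lower bound on the one-step probability of leaving A_i upwards.
\mathbb{E}[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{s_i}
```

The naive lower-bound counterpart replaces the s_i with upper bounds on the same transition probabilities, but it is loose precisely because the algorithm may skip levels; this gap is what the drift-based framework above addresses.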