Christine Zarges
1-3 of 3
Journal Articles
Evolutionary Computation (2015) 23 (4): 513–541.
Published: 01 December 2015
Abstract
Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from a lack of a generally accepted analytical framework as well as a lack of widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis as well as introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of thorough theoretical and statistical analysis for mutation-based evolutionary algorithms and artificial immune systems.
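The abstract describes a family of dynamic example problems with a bi-stable dynamic, but the concrete benchmark is not reproduced in this listing. The following is therefore only a minimal sketch of the general setup it refers to: a simple mutation-based (1+1)-style algorithm that re-evaluates a fitness function whose optimum alternates between two states. All names and parameters here (make_bistable_fitness, period, the all-ones/all-zeros targets) are illustrative assumptions, not the problem analysed in the article.

```python
import random

def make_bistable_fitness(period):
    """Hypothetical stand-in for a bi-stable dynamic problem: the target
    string flips between all-ones and all-zeros every `period` evaluations,
    so the optimum alternates between two stable states. This is NOT the
    benchmark from the article, only an illustration of a fitness function
    that changes over time."""
    def fitness(x, t):
        target_is_ones = (t // period) % 2 == 0
        return sum(b == 1 for b in x) if target_is_ones else sum(b == 0 for b in x)
    return fitness

def one_plus_one_ea_dynamic(n=50, period=200, steps=2000, seed=0):
    """(1+1) EA with standard bit mutation (rate 1/n) that re-evaluates the
    parent in every iteration so it can track the moving optimum."""
    rng = random.Random(seed)
    f = make_bistable_fitness(period)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(steps):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        # Compare offspring and parent under the current (possibly changed) fitness.
        if f(y, t) >= f(x, t):
            x = y
    return x

if __name__ == "__main__":
    print(one_plus_one_ea_dynamic())
```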
Journal Articles
Evolutionary Computation (2013) 21 (1): 1–27.
Published: 01 March 2013
Abstract
Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n^(3/2)). However, for sufficiently large constant c, we present a strictly monotonic function such that the (1+1) EA with overwhelming probability needs exponentially many iterations to find the optimum. This is the first time that we observe that a constant factor change of the mutation probability changes the runtime by more than a constant factor.
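The algorithm in this abstract is the standard (1+1) EA with bit-wise mutation at rate p(n) = c/n. Below is a minimal sketch of that algorithm; OneMax stands in for a strictly monotone function, since the hard function constructed in the article is not reproduced here, and the parameter values are illustrative assumptions.

```python
import random

def one_max(x):
    """OneMax: a simple strictly monotone pseudo-Boolean function, used here
    only as a stand-in; the hard monotone function constructed in the article
    is more involved and not reproduced."""
    return sum(x)

def one_plus_one_ea(f, n=100, c=0.9, max_iters=10**6, seed=0):
    """(1+1) EA with standard bit mutation at rate p(n) = c/n.
    Returns the number of iterations until the all-ones optimum is found,
    or max_iters if it is not reached."""
    rng = random.Random(seed)
    p = c / n
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ 1 if rng.random() < p else b for b in x]  # flip each bit with prob. p
        fy = f(y)
        if fy >= fx:          # accept the offspring if it is not worse
            x, fx = y, fy
        if fx == n:           # global optimum of OneMax
            return t
    return max_iters

if __name__ == "__main__":
    for c in (0.5, 1.0, 2.0):
        print(c, one_plus_one_ea(one_max, n=100, c=c))
```

Note that on OneMax all of these mutation rates succeed quickly; the drastic runtime gap described in the abstract appears only on the specially constructed monotone function, not on this stand-in.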
Journal Articles
Evolutionary Computation (2009) 17 (1): 1–2.
Published: 01 March 2009