Thomas Jansen
1–8 of 8 results
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2015) 23 (4): 513–541.
Published: 01 December 2015
Abstract
Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from the lack of both a generally accepted analytical framework and widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis and by introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of a thorough theoretical and statistical analysis of mutation-based evolutionary algorithms and artificial immune systems.
Evolutionary Computation (2013) 21 (1): 1–27.
Published: 01 March 2013
Abstract
Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n^{3/2}). However, for c ≥ 16, we present a strictly monotonic function such that the (1+1) EA with overwhelming probability needs 2^{Ω(n)} iterations to find the optimum. This is the first time that we observe that a constant-factor change of the mutation probability changes the runtime by more than a constant factor.
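As an illustration of the algorithm this abstract studies, the (1+1) EA with standard bit mutation at rate c/n can be sketched as follows. This is a minimal sketch, not the paper's construction: the fitness function OneMax, the parameter values, and the iteration budget are all assumptions made for the example.

```python
import random

def one_plus_one_ea(f, n, c=1.0, max_iters=100_000):
    """(1+1) EA sketch: keep a single bit string, flip each bit
    independently with probability c/n, and accept the offspring
    whenever it is not worse (elitist acceptance)."""
    x = [random.randint(0, 1) for _ in range(n)]
    p = c / n
    for t in range(max_iters):
        y = [1 - b if random.random() < p else b for b in x]
        if f(y) >= f(x):
            x = y
        if f(x) == n:          # OneMax optimum (all bits set) reached
            return t + 1       # number of iterations used
    return None                # budget exhausted

def onemax(x):
    """OneMax: number of 1-bits; a simple strictly monotone function."""
    return sum(x)

iters = one_plus_one_ea(onemax, n=50, c=0.9)
```

With c < 1, as in this call, the paper's result guarantees Θ(n log n) expected iterations on every strictly monotone function; for large c the very same loop can need exponential time on specially constructed monotone functions.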
Evolutionary Computation (2010) 18 (3): 333–334.
Published: 01 September 2010
Evolutionary Computation (2010) 18 (1): 1–26.
Published: 01 March 2010
Abstract
Evolutionary algorithms are general randomized search heuristics and typically perform an unbiased random search that is guided only by the fitness of the search points encountered. In applications, however, there is often problem-specific knowledge that suggests some additional bias. The use of appropriately biased variation operators may speed up the search considerably. Problems defined over bit strings of finite length often have the property that good solutions have only very few 1-bits or very few 0-bits. A mutation operator tailored to such situations is studied from different perspectives and in a rigorous way, discussing its assets and drawbacks. We consider the runtime of evolutionary algorithms using biased mutations on illustrative example functions as well as on function classes. A comparison with unbiased operators shows on which functions biased mutations lead to a speedup, on which functions biased mutations increase the runtime, and in which settings there is almost no difference in performance. The main focus is on theoretical runtime analysis yielding asymptotic results. These findings are accompanied by the results of empirical investigations that deliver additional insights.
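One plausible form of such a biased mutation operator scales the flip probability of 1-bits and 0-bits by their respective counts, so that the operator favors strings with few 1-bits or few 0-bits. The concrete rates below are an assumption for the sketch, not necessarily the paper's exact operator:

```python
import random

def biased_mutation(x):
    """Asymmetric mutation sketch: flip each 1-bit with probability
    1/(2 * #ones) and each 0-bit with probability 1/(2 * #zeros), so
    that on average half a bit of each kind is flipped regardless of
    how unbalanced the string is. Rates are illustrative assumptions."""
    ones = sum(x)
    zeros = len(x) - ones
    y = []
    for b in x:
        if b == 1:
            p = 1 / (2 * ones) if ones > 0 else 0.0
        else:
            p = 1 / (2 * zeros) if zeros > 0 else 0.0
        y.append(1 - b if random.random() < p else b)
    return y
```

In contrast to standard bit mutation at rate 1/n, this operator becomes increasingly reluctant to add 1-bits when the string already contains many of them, which is exactly the kind of bias that helps when good solutions are known to be sparse.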
Evolutionary Computation (2009) 17 (1): 1–2.
Published: 01 March 2009
Evolutionary Computation (2005) 13 (4): 413–440.
Published: 01 December 2005
Abstract
Evolutionary algorithms (EAs) generally come with a large number of parameters that have to be set before the algorithm can be used. Finding appropriate settings is a difficult task. The influence of these parameters on the efficiency of the search performed by an evolutionary algorithm can be very high, but there is still a lack of theoretically justified guidelines to help the practitioner find good values for them. One such parameter is the offspring population size. Using a simplified but still realistic evolutionary algorithm, a thorough analysis of the effects of the offspring population size is presented. The result is a much better understanding of the role of the offspring population size in an EA, together with a simple way to dynamically adapt this parameter when necessary.
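A simplified algorithm of the kind the abstract describes is the (1+λ) EA, where the offspring population size λ is the parameter under study. The sketch below is a minimal illustration under assumed parameter values and fitness function, not the paper's exact model:

```python
import random

def one_plus_lambda_ea(f, n, lam, max_gens=10_000):
    """(1+λ) EA sketch: one parent produces lam offspring per generation
    by standard bit mutation (rate 1/n); the best offspring replaces the
    parent if it is at least as good."""
    x = [random.randint(0, 1) for _ in range(n)]
    for g in range(max_gens):
        offspring = [
            [1 - b if random.random() < 1 / n else b for b in x]
            for _ in range(lam)
        ]
        best = max(offspring, key=f)
        if f(best) >= f(x):
            x = best
        if f(x) == n:          # OneMax optimum reached
            return g + 1       # generations used
    return None

gens = one_plus_lambda_ea(sum, n=40, lam=8)
```

Larger λ reduces the number of generations but multiplies the fitness evaluations per generation, which is why the choice of λ trades off wall-clock parallelism against total work, and why adapting it dynamically can pay off.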
Evolutionary Computation (2004) 12 (4): 405–434.
Published: 01 December 2004
Abstract
Coevolutionary algorithms are variants of traditional evolutionary algorithms and are often considered more suitable for certain kinds of complex tasks than non-coevolutionary methods. One example is a general cooperative coevolutionary framework for function optimization. This paper presents a thorough and rigorous introductory analysis of the optimization potential of cooperative coevolution. Using the cooperative coevolutionary framework as a starting point, the CC (1+1) EA is defined and investigated from the perspective of the expected optimization time. The research concentrates on separability, a key property of objective functions. We show that separability alone is not sufficient to yield any advantage of the CC (1+1) EA over its traditional, non-coevolutionary counterpart. Such an advantage is demonstrated to have its basis in the increased explorative possibilities of the cooperative coevolutionary algorithm. For inseparable functions, the cooperative coevolutionary set-up can be harmful. We prove that for some objective functions the CC (1+1) EA fails to locate a global optimum with overwhelming probability, even in infinite time; however, inseparability alone is not sufficient for an objective function to cause difficulties. It is demonstrated that the CC (1+1) EA may perform on par with its traditional counterpart, and may even outperform it on certain inseparable functions.
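The cooperative coevolutionary idea can be sketched as splitting the bit string into components that are optimized in a round-robin fashion, each evaluated in the context of the current values of the others. The block layout, mutation rate, and fitness function below are assumptions for the illustration, not the paper's exact definition of the CC (1+1) EA:

```python
import random

def cc_one_plus_one_ea(f, n, k, max_iters=20_000):
    """Cooperative-coevolution sketch: the n-bit string is split into k
    equal blocks (k is assumed to divide n). Each iteration mutates one
    block in round-robin order, flipping its bits with rate 1/blocksize
    (one flip in expectation), and accepts the change if the complete
    string is not worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    size = n // k
    for t in range(max_iters):
        i = t % k                      # which component's turn it is
        lo, hi = i * size, (i + 1) * size
        y = x[:]
        for j in range(lo, hi):
            if random.random() < 1 / size:
                y[j] = 1 - y[j]
        if f(y) >= f(x):               # evaluate the full collaboration
            x = y
        if f(x) == n:                  # OneMax optimum reached
            return t + 1
    return None

iters = cc_one_plus_one_ea(sum, n=40, k=4)
```

Because each step only explores one block, the scheme can search the blocks more aggressively in parallel on separable problems, but on inseparable ones a block-local view of fitness can mislead the search, matching the dichotomy the abstract describes.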
Evolutionary Computation (1998) 6 (2): 185–196.
Published: 01 June 1998
Abstract
Evolutionary algorithms (EAs) are heuristic randomized algorithms which, in many impressive experiments, have been shown to behave quite well for optimization problems of various kinds. In this paper a rigorous theoretical complexity analysis of the (1+1) evolutionary algorithm for separable functions with Boolean inputs is given. Different mutation rates are compared, and the use of the crossover operator is investigated. The main contribution is not the result that the expected run time of the (1+1) evolutionary algorithm is Θ(n ln n) for separable functions with n variables, but rather the methods by which this result can be proven rigorously.
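A separable function in this sense decomposes into independent subfunctions over disjoint blocks of bits. A small concrete example may make the class tangible; the block size and the subfunction here are arbitrary choices for the illustration:

```python
def separable(x):
    """Example separable pseudo-Boolean function on an even number of
    bits: the sum of an identical subfunction applied independently to
    disjoint 2-bit blocks."""
    def block(a, b):
        # any fixed function of the two bits works; this one is arbitrary
        return 2 * a + a * b
    return sum(block(x[i], x[i + 1]) for i in range(0, len(x), 2))

val = separable([1, 0, 1, 1, 0, 0])  # 2 + 3 + 0 = 5
```

Because the blocks do not interact, progress on one block is never undone by mutations in another, which is the structural property the Θ(n ln n) analysis exploits.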