Benjamin Doerr
Journal Articles (1-8 of 8)
Evolutionary Computation (2023) 31 (4): 337–373.
Published: 01 December 2023
Abstract
Multiobjective evolutionary algorithms are successfully applied in many real-world multiobjective optimization problems. As for many other AI methods, the theoretical understanding of these algorithms is lagging far behind their success in practice. In particular, previous theory work considers mostly easy problems that are composed of unimodal objectives. As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OneJumpZeroJump problem, a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark. We prove that the simple evolutionary multiobjective optimizer (SEMO) with probability one does not compute the full Pareto front, regardless of the runtime. In contrast, for all problem sizes n and all jump sizes k ∈ [4..n/2 - 1], the global SEMO (GSEMO) covers the Pareto front in an expected number of Θ((n - 2k) n^k) iterations. For k = o(n), we also show the tighter bound (3/2) e n^(k+1) ± o(n^(k+1)), which might be the first runtime bound for an MOEA that is tight apart from lower-order terms. We also combine the GSEMO with two approaches that showed advantages in single-objective multimodal problems. When using the GSEMO with a heavy-tailed mutation operator, the expected runtime improves by a factor of at least k^Ω(k). When adapting the recent stagnation-detection strategy of Rajabi and Witt (2022) to the GSEMO, the expected runtime also improves by a factor of at least k^Ω(k) and surpasses the heavy-tailed GSEMO by a small polynomial factor in k. Via an experimental analysis, we show that these asymptotic differences are visible already for small problem sizes: a factor-5 speed-up from heavy-tailed mutation and a factor-10 speed-up from stagnation detection can be observed already for jump size 4 and problem sizes between 10 and 50. Overall, our results show that the ideas recently developed to aid single-objective evolutionary algorithms to cope with local optima can be effectively employed also in multiobjective optimization.
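For readers who do not have these definitions at hand, the following minimal Python sketch shows the two jump-based objectives and the GSEMO loop, assuming the usual formulation (each objective is the classic jump function applied to the number of ones, respectively zeros, of the bit string); the names, the tie-handling details, and the stopping criterion are illustrative and not taken from the paper.

```python
import random

def ojzj(x, k):
    """Bi-objective OneJumpZeroJump value of a bit string x (a minimal
    sketch of the usual jump-based definition). f1 rewards many ones but
    has a fitness valley of width k - 1 just before the all-ones string;
    f2 is the symmetric counterpart for zeros."""
    n, ones = len(x), sum(x)
    zeros = n - ones
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return f1, f2

def dominates(a, b):
    """Weak Pareto dominance for maximization: a is at least as good as b
    in every objective."""
    return all(ai >= bi for ai, bi in zip(a, b))

def gsemo(n, k, max_iters=100_000):
    """Global SEMO sketch: keep a set of mutually non-dominated solutions,
    mutate a uniformly chosen one with standard bit mutation (rate 1/n),
    and insert the offspring if no stored solution weakly dominates it."""
    pop = {}  # maps objective vector -> solution
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop[ojzj(x, k)] = x
    for _ in range(max_iters):
        parent = random.choice(list(pop.values()))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        fc = ojzj(child, k)
        if not any(dominates(f, fc) for f in pop):
            pop = {f: s for f, s in pop.items() if not dominates(fc, f)}
            pop[fc] = child
    return pop
```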
Evolutionary Computation (2021) 29 (4): 543–563.
Published: 01 December 2021
Abstract
In their recent work, Lehre and Nguyen (2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by the choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, the UMDA optimizes the DLB problem with high probability using at most λ(n/2 + 2e ln n) fitness evaluations. Since an offspring population size λ of order n log n can prevent genetic drift, the UMDA can solve the DLB problem with O(n^2 log n) fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than O(n^3) is known (which we prove to be tight for the (1+1) EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than many classic evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
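As a point of reference for the algorithm being analyzed, here is a minimal Python sketch of the UMDA with the standard frequency borders 1/n and 1 - 1/n; the fitness function, parameter values, and naming are placeholders rather than the exact setup of the paper.

```python
import random

def umda(fitness, n, lam, mu, generations):
    """Univariate marginal distribution algorithm (a minimal sketch).

    Keeps one frequency p[i] per bit position, samples lam offspring,
    selects the mu best, and sets each p[i] to the empirical frequency of
    ones among the selected individuals, capped to [1/n, 1 - 1/n] so that
    no frequency fixates at 0 or 1."""
    p = [0.5] * n
    for _ in range(generations):
        offspring = [
            tuple(int(random.random() < p[i]) for i in range(n))
            for _ in range(lam)
        ]
        offspring.sort(key=fitness, reverse=True)
        selected = offspring[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return p

# Illustrative call on OneMax (not the DLB function from the paper):
# umda(fitness=sum, n=50, lam=200, mu=100, generations=300)
```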
Evolutionary Computation (2021) 29 (2): 305–329.
Published: 01 June 2021
Abstract
A decent number of lower bounds for non-elitist population-based evolutionary algorithms have been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems, that is, general results which translate an expected movement away from the target into a high hitting time. We propose a simple negative drift theorem for multiplicative drift scenarios and show that it can simplify existing analyses. We discuss in more detail Lehre's (2010) negative drift in populations method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces. Together with other arguments, we obtain an alternative and simpler proof of this result, which also strengthens and simplifies this method. In particular, now only three of the five technical conditions of the previous result have to be verified. The lower bounds we obtain are explicit instead of only asymptotic. This allows us to compute concrete lower bounds for concrete algorithms, but also enables us to show that super-polynomial runtimes appear already when the reproduction rate is only a (1 - ω(n^(-1/2))) factor below the threshold. For the special case of algorithms using standard bit mutation with a random mutation rate (called uniform mixing in the language of hyper-heuristics), we prove the result stated by Dang and Lehre (2016b) and extend it to mutation rates other than Θ(1/n), which includes the heavy-tailed mutation operator proposed by Doerr et al. (2017). We finally use our method and a novel domination argument to show an exponential lower bound for the runtime of the mutation-only simple genetic algorithm on OneMax for arbitrary population size.
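One operator covered by this extension, the heavy-tailed mutation of Doerr et al. (2017), is commonly described as standard bit mutation whose rate α/n is drawn anew in each application from a power law on {1, ..., n/2}. The sketch below follows that common description (with an assumed exponent β = 1.5) and is not code from the paper.

```python
import random

def power_law_alpha(n, beta=1.5):
    """Draw a mutation strength alpha from a power law with exponent beta
    on {1, ..., n // 2} (a common parameterization; a sketch, not the
    paper's exact setup)."""
    support = range(1, n // 2 + 1)
    weights = [i ** (-beta) for i in support]
    return random.choices(support, weights=weights)[0]

def heavy_tailed_mutation(x, beta=1.5):
    """Standard bit mutation with random rate alpha/n, where alpha is
    heavy-tailed: flipping many bits at once happens with polynomially
    small, rather than exponentially small, probability."""
    n = len(x)
    alpha = power_law_alpha(n, beta)
    return tuple(b ^ (random.random() < alpha / n) for b in x)
```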
Evolutionary Computation (2016) 24 (4): 719–744.
Published: 01 December 2016
Abstract
We analyze the unrestricted black-box complexity of the Jump function classes for different jump sizes. For upper bounds, we present three algorithms for small, medium, and extreme jump sizes. We prove a matrix lower bound theorem which is capable of giving better lower bounds than the classic information-theoretic approach. Using this theorem, we prove lower bounds that almost match the upper bounds. For the case of extreme jump functions, which apart from the optimum reveal only the middle fitness value(s), we use an additional lower bound argument to show that no black-box algorithm gains significant insight about the problem instance from the first fitness evaluations. This, together with our upper bound, shows that the black-box complexity of extreme jump functions is .
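For concreteness, here is a minimal sketch of the classic jump benchmark that the Jump function classes build on, following the usual textbook definition; the exact variants analyzed in the paper may differ in details.

```python
def jump(x, k):
    """Classic jump benchmark (a minimal sketch of the usual definition).

    The fitness follows OneMax shifted by k while the number of ones is
    at most n - k, the optimum is the all-ones string, and the k - 1
    levels in between form a fitness valley (the "jump")."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```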
Evolutionary Computation (2015) 23 (4): 641–670.
Published: 01 December 2015
Abstract
We analyze the unbiased black-box complexities of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is , that is, when only a small constant fraction of the fitness values is visible, then the unbiased black-box complexities for arities 3 and higher are of the same order as those for the simple OneMax function. Even for the extreme jump function, in which all but the two fitness values and n are blanked out, polynomial time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points but also by taking into account the (empirical) expected fitnesses of their offspring.
Evolutionary Computation (2013) 21 (1): 1–27.
Published: 01 March 2013
Abstract
Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotonic. These functions have the property that whenever only 0-bits are changed to 1, then the objective value strictly increases. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in iterations. For c = 1, we can still prove an upper bound of O(n^(3/2)). However, for , we present a strictly monotonic function such that the (1+1) EA with overwhelming probability needs iterations to find the optimum. This is the first time that we observe that a constant factor change of the mutation probability changes the runtime by more than a constant factor.
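To make the role of the constant c tangible, the following is a minimal Python sketch of the (1+1) EA with mutation probability p(n) = c/n on a pseudo-Boolean maximization problem; the stopping criterion and names are illustrative, not taken from the paper.

```python
import random

def one_plus_one_ea(fitness, n, c=1.0, max_iters=100_000):
    """(1+1) EA sketch: flip each bit independently with probability c/n
    and keep the offspring if it is at least as good as the parent."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    fx = fitness(x)
    for _ in range(max_iters):
        y = tuple(b ^ (random.random() < c / n) for b in x)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx
```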
Evolutionary Computation (2011) 19 (4): 673–691.
Published: 01 December 2011
Abstract
We conduct a rigorous analysis of the (1+1) evolutionary algorithm for the single source shortest path problem proposed by Scharnow, Tinnefeld, and Wegener (The analyses of evolutionary algorithms on sorting and shortest paths problems, 2004, Journal of Mathematical Modelling and Algorithms, 3(4):349–366). We prove that with high probability, the optimization time is O(n^2 max{ℓ, log n}), where ℓ is the smallest integer such that any vertex can be reached from the source via a shortest path having at most ℓ edges. This bound is tight. For all values of n and ℓ we provide a graph with edge weights such that, with high probability, the optimization time is of order Ω(n^2 max{ℓ, log n}). To obtain such sharp bounds, we develop a new technique that overcomes the coupon collector behavior of previously used arguments. Also, we exhibit a simple Chernoff-type inequality for sums of independent geometrically distributed random variables, and one for sequences of random variables that are not independent, but show a desired behavior independent of the outcomes of the previous random variables. We are optimistic that these tools find further applications in the analysis of evolutionary algorithms.
Evolutionary Computation (2007) 15 (4): 401–410.
Published: 01 December 2007
Abstract
Successful applications of evolutionary algorithms show that certain variation operators can lead to good solutions much faster than others. We examine this behavior observed in practice from a theoretical point of view and investigate the effect of an asymmetric mutation operator in evolutionary algorithms with respect to the runtime behavior. Considering the Eulerian cycle problem, we present runtime bounds for evolutionary algorithms using an asymmetric operator that are much smaller than the best upper bounds known for a more general operator. Our analysis reveals that a plateau, which both algorithms have to cope with, changes its structure in a way that allows the algorithm with the asymmetric operator to obtain an improvement much faster. In addition, we present a lower bound for the general case which shows that the asymmetric operator speeds up computation by at least a linear factor.