1–8 of 8 results for: Nikolaus Hansen
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2022) 30 (2): 165–193.
Published: 01 June 2022
Abstract
Several test function suites are being used for numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, such as well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably underrepresented in real-world problems, such as separability, optima located exactly at the boundary constraints, and the existence of variables that solely control the distance between a solution and the Pareto front. Via the alternative construction of combining existing single-objective problems from the literature, we describe the bbob-biobj test suite with 55 bi-objective functions in the continuous domain, and its extended version with 92 bi-objective functions (bbob-biobj-ext). Both test suites have been implemented in the COCO platform for black-box optimization benchmarking, and various visualizations of the test functions are shown to reveal their properties. Besides providing details on the construction of these problems and presenting their (known) properties, this article also gives the rationale behind our approach in terms of groups of functions with similar properties, objective space normalization, and problem instances. The latter allows us to easily compare the performance of deterministic and stochastic solvers, an often overlooked issue in benchmarking.
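The construction principle described in this abstract, pairing two single-objective functions so that each contributes one objective, with problem instances obtained by relocating the optima, can be sketched as follows. This is only an illustration of the idea, not the actual bbob-biobj definitions; the sphere function and the instance mechanism here are stand-ins.

```python
import numpy as np

def sphere(x, x_opt):
    """Single-objective sphere function with optimum at x_opt."""
    return float(np.sum((np.asarray(x, dtype=float) - x_opt) ** 2))

def make_biobjective(f1, f2, x_opt1, x_opt2):
    """Pair two single-objective functions into one bi-objective function."""
    return lambda x: (f1(x, x_opt1), f2(x, x_opt2))

# Two "instances" of the same problem differ only in where the optima sit.
rng = np.random.default_rng(1)
dim = 5
opt1, opt2 = rng.normal(size=dim), rng.normal(size=dim)
problem = make_biobjective(sphere, sphere, opt1, opt2)
print(problem(opt1))  # first objective is exactly 0 at its own optimum
```

Because the two optima generally differ, no single point minimizes both objectives at once, which is what creates a non-trivial Pareto set.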
Evolutionary Computation (2015) 23 (4): 611–640.
Published: 01 December 2015
Abstract
This paper analyzes a (1,λ)-Evolution Strategy, a randomized comparison-based adaptive search algorithm optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first, the case where the step-size is constant, and second, the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behavior of the algorithm. Stability of the chain implies, by applying a law of large numbers, either convergence or divergence of the algorithm. Divergence is the desired behavior. In the constant step-size case, we show stability of the Markov chain and prove the divergence of the algorithm. In the cumulative step-size adaptation case, we prove stability of the Markov chain in the simplified case where the cumulation parameter equals 1, and discuss steps to obtain similar results for the full (default) algorithm where the cumulation parameter is smaller than 1. The stability of the Markov chain allows us to deduce geometric divergence or convergence, depending on the dimension, constraint angle, population size, and damping parameter, at a rate that we estimate. Our results complement previous studies where stability was assumed.
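The resampling-based constraint handling in the constant step-size case can be sketched roughly as follows. This is an illustrative toy version, not the paper's formal setup: here a (1,λ)-ES maximizes the linear function f(x) = x[0] under the constraint x[1] ≤ 0, and infeasible offspring are simply redrawn.

```python
import numpy as np

def resample_feasible(rng, parent, sigma, c=0.0, max_tries=1000):
    """Draw offspring until the linear constraint x[1] <= c is satisfied."""
    for _ in range(max_tries):
        child = parent + sigma * rng.normal(size=parent.size)
        if child[1] <= c:
            return child
    raise RuntimeError("no feasible offspring found")

def es_step(rng, parent, sigma, lam):
    """One (1,lam)-ES iteration: best feasible offspring on f(x) = x[0]."""
    offspring = [resample_feasible(rng, parent, sigma) for _ in range(lam)]
    return max(offspring, key=lambda x: x[0])

rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(200):
    x = es_step(rng, x, sigma=1.0, lam=10)
print(x[0])  # keeps growing: on a linear function, divergence is the desired behavior
```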
Evolutionary Computation (2012) 20 (4): 481.
Published: 01 December 2012
Evolutionary Computation (2007) 15 (1): 1–28.
Published: 01 March 2007
Abstract
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMA-ES (MO-CMA-ES), a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding distance or the contributing hypervolume as the second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are shown experimentally.
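The primary selection criterion described here, non-dominated sorting, can be illustrated with a minimal routine that partitions objective vectors into successive Pareto fronts. This is a sketch of the concept, not the MO-CMA-ES implementation; the crowding-distance or hypervolume tie-breaker is omitted.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Partition objective vectors into successive non-dominated fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
print(nondominated_fronts(pts))  # [[0, 1, 2], [3], [4]]
```

The first front contains the mutually non-dominated points; within a front, a secondary criterion such as the crowding distance decides which individuals survive.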
Evolutionary Computation (2006) 14 (3): 255–275.
Published: 01 September 2006
Abstract
This paper investigates σ-self-adaptation for real-valued evolutionary algorithms on linear fitness functions. We identify the step-size logarithm log σ as a key quantity to understand strategy behavior. Knowing the bias of mutation, recombination, and selection on log σ is sufficient to explain σ-dynamics and strategy behavior in many cases, even for previously reported results on non-linear and/or noisy fitness functions. On a linear fitness function, if intermediate multi-recombination is applied to the object parameters, the i-th best and the i-th worst individual have the same σ-distribution. Consequently, the correlation between fitness and step-size σ is zero. Assuming additionally that σ-changes due to mutation and recombination are unbiased, σ-self-adaptation enlarges σ if and only if μ < λ/2, given (μ, λ)-truncation selection. Experiments show the relevance of the given assumptions.
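The μ < λ/2 condition can be checked empirically with a toy (μ/μ, λ) run with lognormal σ-self-adaptation on a linear function. The constants (tau, dimension, number of steps) are arbitrary choices for illustration, not taken from the paper; the mutation and recombination of σ are unbiased in log σ, matching the stated assumption.

```python
import numpy as np

def sa_step(rng, x, sigma, mu, lam, tau=0.3):
    """One (mu/mu, lam) step with sigma-self-adaptation on f(x) = sum(x)."""
    sigmas = sigma * np.exp(tau * rng.normal(size=lam))   # unbiased in log sigma
    xs = x + sigmas[:, None] * rng.normal(size=(lam, x.size))
    sel = np.argsort(xs.sum(axis=1))[:mu]                 # minimize f
    # intermediate recombination of object parameters, geometric mean for sigma
    return xs[sel].mean(axis=0), float(np.exp(np.log(sigmas[sel]).mean()))

def mean_log_sigma_change(rng, mu, lam, steps=1000, dim=10):
    """Average per-step change of log sigma over a long run."""
    x, sigma, total = np.zeros(dim), 1.0, 0.0
    for _ in range(steps):
        x, new_sigma = sa_step(rng, x, sigma, mu, lam)
        total += np.log(new_sigma / sigma)
        sigma = new_sigma
    return total / steps

print(mean_log_sigma_change(np.random.default_rng(3), mu=2, lam=10))  # mu < lam/2: sigma grows
print(mean_log_sigma_change(np.random.default_rng(4), mu=8, lam=10))  # mu > lam/2: sigma shrinks
```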
Evolutionary Computation (2003) 11 (1): 1–18.
Published: 01 March 2003
Abstract
This paper presents a novel evolutionary optimization strategy based on the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). This new approach is intended to reduce the number of generations required for convergence to the optimum. Reducing the number of generations, i.e., the time complexity of the algorithm, is important if a large population size is desired: (1) to reduce the effect of noise; (2) to improve global search properties; and (3) to implement the algorithm on (highly) parallel machines. Our method results in a highly parallel algorithm which scales favorably with large numbers of processors. This is accomplished by efficiently incorporating the available information from a large population, thus significantly reducing the number of generations needed to adapt the covariance matrix. The original version of the CMA-ES was designed to reliably adapt the covariance matrix in small populations but it cannot exploit large populations efficiently. Our modifications scale up the efficiency to population sizes of up to 10n, where n is the problem dimension. This method has been applied to a large number of test problems, demonstrating that in many cases the CMA-ES can be advanced from quadratic to linear time complexity.
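The core mechanism, exploiting all μ selected steps for the covariance matrix update rather than a single one, can be sketched as a rank-μ-style update. The weights and learning rate below are illustrative placeholders, not the paper's parameter settings.

```python
import numpy as np

def rank_mu_update(C, mean, selected, sigma, c_mu=0.2):
    """Blend C with the weighted empirical covariance of the mu selected steps."""
    steps = (selected - mean) / sigma           # shape (mu, n), normalized steps
    mu = steps.shape[0]
    weights = np.full(mu, 1.0 / mu)             # equal weights, for simplicity
    rank_mu = sum(w * np.outer(s, s) for w, s in zip(weights, steps))
    return (1 - c_mu) * C + c_mu * rank_mu      # exponential smoothing of C

n = 3
rng = np.random.default_rng(0)
mean, C = np.zeros(n), np.eye(n)
# selected steps spread widest along the first coordinate
selected = mean + rng.normal(size=(10, n)) * np.array([3.0, 1.0, 0.3])
C = rank_mu_update(C, mean, selected, sigma=1.0)
print(np.diag(C))  # variance grows most along the first coordinate
```

Because the update averages μ outer products per generation instead of one, the covariance matrix can be learned in correspondingly fewer generations when the population is large.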
Evolutionary Computation (2001) 9 (2): 159–195.
Published: 01 June 2001
Abstract
This paper puts forward two useful methods for the self-adaptation of the mutation distribution: the concepts of derandomization and cumulation. Principal shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation, utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions, a speed-up factor of several orders of magnitude is usually observed. On moderately mis-scaled functions, a speed-up factor of three to ten can be expected.
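Cumulation, replacing single search steps by an exponentially fading evolution path, can be sketched as follows; the constant c_p is illustrative. The normalization factor is chosen so that a path fed with independent unit-variance steps stays at unit scale, while consistently aligned steps make the path long.

```python
import numpy as np

def update_path(p, step, c_p=0.2):
    """Evolution path: exponentially fading sum of selected steps."""
    # sqrt(c_p * (2 - c_p)) keeps the path at unit variance for random steps
    return (1.0 - c_p) * p + np.sqrt(c_p * (2.0 - c_p)) * step

rng = np.random.default_rng(0)
p_corr = np.zeros(2)   # fed with identical, perfectly correlated steps
p_rand = np.zeros(2)   # fed with independent random steps
for _ in range(100):
    p_corr = update_path(p_corr, np.array([1.0, 0.0]))
    p_rand = update_path(p_rand, rng.normal(size=2))
print(np.linalg.norm(p_corr))  # approaches sqrt(0.36)/0.2 = 3.0
print(np.linalg.norm(p_rand))  # stays at unit scale, typically much shorter
```

A long path signals that successive selected steps point in a consistent direction, information a single step cannot carry.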
Evolutionary Computation (1994) 2 (4): 369–380.
Published: 01 December 1994
Abstract
Comparable to other optimization techniques, the performance of evolution strategies (ESs) depends on a suitable choice of internal strategy control parameters. Apart from a fixed setting, ESs facilitate an adjustment of such parameters within a self-adaptation process. For step-size control in particular, various adaptation concepts have been evolved early in the development of ESs. These algorithms mostly work very efficiently as long as the scaling of the parameters to be optimized is known. If the scaling is not known, the strategy has to adapt individual step-sizes for all the parameters. In general, the number of necessary step-sizes (variances) equals the dimension of the problem. In this case, step-size adaptation proves to be difficult, and the algorithms known are not satisfactory. The algorithm presented in this paper is based on the well-known concept of mutative step-size control. Our investigations indicate that the adaptation by this concept declines due to an interaction of the random elements involved. We show that this weak point of mutative step-size control can be avoided by relatively small changes in the algorithm. The modifications may be summarized by the word “derandomization.” The derandomized scheme of mutative step-size control facilitates a reliable self-adaptation of individual step-sizes.
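A minimal sketch of the derandomization idea (not the 1994 algorithm itself; the constants and the two-point factor scheme are illustrative): each offspring tries a larger or a smaller step-size, and instead of the parent inheriting a freshly randomized σ, σ is updated deterministically, with damping, from the factor that the selected offspring actually realized.

```python
import numpy as np

def derandomized_step(rng, x, sigma, f, lam=10, alpha=1.4, damp=3.0):
    """(1,lam)-ES step with derandomized two-point step-size control."""
    factors = rng.choice([alpha, 1.0 / alpha], size=lam)  # try larger/smaller sigma
    zs = rng.normal(size=(lam, x.size))
    ys = x + (factors * sigma)[:, None] * zs
    best = min(range(lam), key=lambda i: f(ys[i]))
    # damped, deterministic update from the *selected* realized factor
    return ys[best], sigma * factors[best] ** (1.0 / damp)

f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
x, sigma = np.full(5, 10.0), 1.0
for _ in range(300):
    x, sigma = derandomized_step(rng, x, sigma, f)
print(f(x), sigma)  # both become small on the sphere function
```

Coupling the σ-update to the selected step removes the independent random σ-mutation whose interaction with selection noise weakens plain mutative control.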