Search results for H. Trautmann (1–4 of 4)
Evolutionary Computation (2019) 27 (4): 577–609.
Published: 01 December 2019
Abstract
We continue recent work on the definition of multimodality in multiobjective optimization (MO) and the introduction of a test bed for multimodal MO problems. This goes beyond well-known diversity maintenance approaches by focusing instead on the landscape topology induced by the objective functions. More general multimodal MO problems are considered by allowing ellipsoid contours for the single-objective subproblems. An experimental analysis compares two MO algorithms on a selection of generated example problems: one that explicitly relies on hypervolume gradient approximation and one that is based on local search. We focus not on performance but on the interaction between problems and algorithms, which can be described by means of specific characteristics explicitly designed for the multimodal MO setting. Furthermore, we widen the scope of our analysis by additionally applying visualization techniques in the decision space. This strengthens and extends the foundations for Exploratory Landscape Analysis (ELA) in MO.
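The abstract describes test problems whose single-objective subproblems are multimodal and have ellipsoid contours. The following is a minimal illustrative sketch of such a construction, not the paper's actual problem generator; all centers, scalings, and the two-basin structure are invented for illustration.

```python
import numpy as np

def ellipsoid_basin(x, center, scales):
    """Quadratic basin with ellipsoid contours around `center`."""
    d = (x - center) / scales
    return np.dot(d, d)

def f1(x):
    # Two competing basins make the first objective multimodal.
    return min(ellipsoid_basin(x, np.array([0.2, 0.3]), np.array([1.0, 0.5])),
               0.5 + ellipsoid_basin(x, np.array([0.8, 0.7]), np.array([0.4, 1.0])))

def f2(x):
    # Unimodal second objective with ellipsoid contours.
    return ellipsoid_basin(x, np.array([0.6, 0.2]), np.array([0.7, 0.7]))

def evaluate(x):
    """Objective vector of the generated bi-objective problem."""
    return np.array([f1(x), f2(x)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(5, 2))       # a few random decision vectors
    print(np.array([evaluate(x) for x in X]))    # corresponding objective vectors
```

How the basins of the two objectives overlap in decision space determines whether locally efficient sets appear, which is the kind of landscape topology the paper analyzes.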
Journal Articles
Evolutionary Computation (2015) 23 (1): 161–185.
Published: 01 March 2015
Abstract
We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the “best” one? The second is: which algorithm should I use for my real-world problem? The two questions are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus ranking, the theoretical background of such aggregation, and the common pitfalls to avoid. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
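The abstract mentions aggregating per-problem rankings into a consensus. A minimal sketch of one familiar consensus method, the Borda count, is shown below; the paper discusses consensus rankings and their pitfalls in general, and this specific method, the algorithm names, and the example ranks are used here purely for illustration.

```python
import numpy as np

# rankings[p][i] = rank of algorithm i on problem p (1 = best)
rankings = np.array([
    [1, 2, 3],   # problem A
    [2, 1, 3],   # problem B
    [1, 3, 2],   # problem C
])
algorithms = ["AlgoX", "AlgoY", "AlgoZ"]   # hypothetical names

n_algos = rankings.shape[1]
# Borda score: an algorithm ranked r-th on a problem receives (n_algos - r) points.
borda = (n_algos - rankings).sum(axis=0)

consensus = sorted(zip(algorithms, borda), key=lambda t: -t[1])
for name, score in consensus:
    print(f"{name}: {score} points")
```

One pitfall of this kind of aggregation is that the consensus can hide strong per-problem disagreement, which is why the paper also groups problems by similarity of their optimizer rankings.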
Journal Articles
Evolutionary Computation (2012) 20 (2): 249–275.
Published: 01 June 2012
Abstract
Meta-modeling has become a crucial tool in solving expensive optimization problems. Much of the work in the past has focused on finding a good regression method to model the fitness function. Examples include classical linear regression, splines, neural networks, Kriging and support vector regression. This paper specifically draws attention to the fact that assessing model accuracy is a crucial aspect in the meta-modeling framework. Resampling strategies such as cross-validation, subsampling, bootstrapping, and nested resampling are prominent methods for model validation and are systematically discussed with respect to possible pitfalls, shortcomings, and specific features. A survey of meta-modeling techniques within evolutionary optimization is provided. In addition, practical examples illustrating some of the pitfalls associated with model selection and performance assessment are presented. Finally, recommendations are given for choosing a model validation technique for a particular setting.
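As a concrete illustration of the model validation step the abstract discusses, here is a minimal sketch of assessing a Kriging-style surrogate with k-fold cross-validation, assuming scikit-learn is available. The stand-in fitness function, sample budget, and fold count are placeholders, not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import KFold, cross_val_score

def expensive_fitness(X):
    # Stand-in for an expensive objective (here: a cheap analytic function).
    return np.sum(X ** 2, axis=1) + 0.1 * np.sin(5.0 * X[:, 0])

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(40, 2))    # small design of experiments
y = expensive_fitness(X)

model = GaussianProcessRegressor()           # Kriging surrogate
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Negative MSE per fold; the spread across folds gives a feel for the variance
# of the accuracy estimate, one of the pitfalls discussed in the paper.
scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
print("per-fold MSE:", -scores)
print("mean MSE:", -scores.mean())
```

If the same resampled data were also used to select among several surrogates, the resulting accuracy estimate would be optimistically biased; nested resampling, as mentioned in the abstract, separates model selection from performance assessment.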
Journal Articles
Evolutionary Computation (2009) 17 (4): 493–509.
Published: 01 December 2009
Abstract
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
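To make the online idea concrete, here is a minimal sketch of the variance criterion only: a set-based performance indicator is monitored over a sliding window of generations, and the run stops once the window variance drops below a threshold. The indicator, window length, threshold, and the dummy per-generation update are placeholders and not the paper's actual online convergence detection procedure, which additionally tests for stagnation of the overall trend.

```python
import numpy as np

def run_one_generation(gen):
    # Stand-in for one MOEA generation returning an indicator value (e.g. a
    # hypervolume-like quantity) that improves quickly and then stagnates.
    noise = 0.001 * np.random.default_rng(gen).standard_normal()
    return 1.0 - np.exp(-0.2 * gen) + noise

def detect_convergence(max_generations=200, window=10, var_threshold=1e-5):
    history = []
    for gen in range(1, max_generations + 1):
        history.append(run_one_generation(gen))
        if len(history) >= window:
            recent = np.array(history[-window:])
            if recent.var() < var_threshold:     # variance criterion
                return gen, recent.var()
    return max_generations, np.array(history[-window:]).var()

if __name__ == "__main__":
    gen, var = detect_convergence()
    print(f"stopped at generation {gen} (window variance {var:.2e})")
```

In a real setting the indicator would be computed from the current approximation set (for example, dominated hypervolume against a reference point), and the stopping decision would combine the variance test with a trend test as described in the abstract.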