Carsten Witt
1–8 of 8 results
Evolutionary Computation 1–24.
Published: 17 October 2024
Abstract
Chance-constrained optimization problems allow us to model problems where constraints involving stochastic components should be violated only with a small probability. Evolutionary algorithms have been applied to this scenario and shown to achieve high-quality results. With this paper, we contribute to the theoretical understanding of evolutionary algorithms for chance-constrained optimization. We study the scenario of stochastic components that are independent and normally distributed. Considering the simple single-objective (1 + 1) EA, we show that imposing an additional uniform constraint already leads to local optima for very restricted scenarios and an exponential optimization time. We therefore introduce a multiobjective formulation of the problem which trades off the expected cost and its variance. We show that multiobjective evolutionary algorithms are highly effective when using this formulation and obtain a set of solutions that contains an optimal solution for any possible confidence level imposed on the constraint. Furthermore, we prove that this approach can also be used to compute a set of optimal solutions for the chance-constrained minimum spanning tree problem. In order to deal with potentially exponentially many trade-offs in the multiobjective formulation, we propose and analyze improved convex multiobjective approaches. Experimental investigations on instances of the NP-hard stochastic minimum weight dominating set problem confirm the benefit of the multiobjective and the improved convex multiobjective approach in practice.
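As a concrete illustration of the bi-objective formulation described above (a minimal sketch under my own assumptions, not the paper's algorithm): each solution is scored by the expectation and the variance of its cost, assuming independent normally distributed components, and for any confidence level the chance-constrained cost is the expectation plus the corresponding standard-normal quantile times the standard deviation. The instance values and the quantile below are made up for illustration.

```python
import math

# Hypothetical instance: item i has a normally distributed cost with
# known mean mu[i] and variance var[i]; a solution is a 0/1 vector.
mu  = [3.0, 1.5, 4.2, 2.7, 0.9]
var = [0.5, 1.2, 0.3, 0.8, 0.4]

def objectives(x):
    """Bi-objective value (expected cost, variance of cost) of solution x."""
    expected = sum(m for m, bit in zip(mu, x) if bit)
    variance = sum(v for v, bit in zip(var, x) if bit)
    return expected, variance

def chance_constrained_cost(x, k_alpha):
    """Smallest cost bound exceeded only with probability 1 - alpha, where k_alpha
    is the alpha-quantile of the standard normal (independent normal costs)."""
    expected, variance = objectives(x)
    return expected + k_alpha * math.sqrt(variance)

# A set of solutions covering the (expectation, variance) trade-off front
# contains an optimum for every choice of k_alpha, i.e., every confidence level.
x = [1, 0, 1, 1, 0]
print(objectives(x), chance_constrained_cost(x, k_alpha=1.645))  # ~95% level
```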
Evolutionary Computation (2023) 31 (1): 1–29.
Published: 01 March 2023
Abstract
Recently a mechanism called stagnation detection was proposed that automatically adjusts the mutation rate of evolutionary algorithms when they encounter local optima. The so-called SD-(1 + 1) EA introduced by Rajabi and Witt (2022) adds stagnation detection to the classical (1 + 1) EA with standard bit mutation. This algorithm flips each bit independently with some mutation rate, and stagnation detection raises the rate when the algorithm is likely to have encountered a local optimum. In this article, we investigate stagnation detection in the context of the k-bit flip operator of randomized local search, which flips k bits chosen uniformly at random, and let stagnation detection adjust the parameter k. We obtain improved runtime results compared with the SD-(1 + 1) EA, amounting to a speedup of at least $(1-o(1))\sqrt{2\pi m}$, where m is the so-called gap size, that is, the distance to the next improvement. Moreover, we propose additional schemes that prevent infinite optimization times even if the algorithm misses a working choice of k due to unlucky events. Finally, we present an example where standard bit mutation still outperforms the k-bit flip operator with stagnation detection.
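A rough sketch of the mechanism summarized above, with the threshold and acceptance rule chosen by me for illustration rather than taken from the paper: local search flips exactly k bits chosen uniformly at random, and a stagnation-detection counter raises k once improvements at the current strength have become unlikely.

```python
import math
import random

def onemax(x):
    return sum(x)

def sd_rls(n, f=onemax, max_iters=200_000):
    """Randomized local search with a stagnation-detection-style control of the
    strength k; the threshold below is illustrative, not the paper's exact choice."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    k, failures = 1, 0
    for _ in range(max_iters):
        y = x[:]
        for i in random.sample(range(n), k):      # flip exactly k distinct bits
            y[i] ^= 1
        fy = f(y)
        if fy > fx:                               # strict improvement: accept and reset
            x, fx, k, failures = y, fy, 1, 0
        else:
            failures += 1
            if failures > math.comb(n, k) * math.log(n):
                k, failures = min(k + 1, n), 0    # raise the strength
        if fx == n:                               # assumes the optimum has value n
            break
    return x, fx

print(sd_rls(20)[1])
```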
Evolutionary Computation (2010) 18 (4): 617–633.
Published: 01 December 2010
Abstract
The main aim of randomized search heuristics is to produce good approximations of optimal solutions within a small amount of time. In contrast to numerous experimental results, there are only a few theoretical explorations of this subject. We consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multi-objective models for such problems. For the VertexCover problem, we point out situations where the multi-objective model leads to a fast construction of optimal solutions, while in the single-objective case no good approximation can be achieved in expected polynomial time. Examining the more general SetCover problem, we show that optimal solutions can be approximated within a logarithmic factor of the size of the ground set using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
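For concreteness, one common way to state a multi-objective model of SetCover of the kind sketched above (the encoding and the toy instance are my own, not necessarily the exact model from the paper): a bit string selects sets, and the two objectives, both minimized, are the number of still-uncovered elements and the number of selected sets.

```python
# Toy SetCover instance: universe {0,...,5}; the candidate sets are illustrative.
universe = set(range(6))
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}]

def multi_objective_value(x):
    """Bi-objective fitness (uncovered elements, selected sets), both minimized.
    Any x whose first component is 0 is a feasible cover."""
    covered = set()
    for s, bit in zip(sets, x):
        if bit:
            covered |= s
    return len(universe - covered), sum(x)

# A multi-objective EA keeps mutually non-dominated solutions, so it can push
# towards full coverage and small covers at the same time.
print(multi_objective_value([1, 0, 1, 0, 0]))  # (0, 2): a cover of size 2
```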
Evolutionary Computation (2009) 17 (4): 455–476.
Published: 01 December 2009
Abstract
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity-preserving mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity-preserving mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently. Our theoretical results are accompanied by additional experiments for different population sizes.
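To illustrate two of the mechanisms compared above (parameters and the sharing shape are my own simplification, not the paper's exact setup): under deterministic crowding an offspring competes only against its own parent, while fitness sharing divides an individual's fitness by a niche count that grows when many similar individuals are present.

```python
def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def deterministic_crowding_step(parent, offspring, f):
    """The offspring replaces only its own parent, and only if it is at least as fit."""
    return offspring if f(offspring) >= f(parent) else parent

def shared_fitness(pop, f, sigma=3.0):
    """Fitness sharing with a triangular sharing function: individuals within
    Hamming distance sigma of each other reduce one another's fitness."""
    result = []
    for x in pop:
        niche = sum(max(0.0, 1.0 - hamming(x, y) / sigma) for y in pop)
        result.append(f(x) / niche)
    return result

pop = [[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]]
print(shared_fitness(pop, f=sum))
```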
Evolutionary Computation (2009) 17 (1): 1–2.
Published: 01 March 2009
Evolutionary Computation (2009) 17 (1): 3–19.
Published: 01 March 2009
Abstract
Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, there is little theoretical understanding of the interplay of the different optimization methods involved. In this paper, we take a first step towards a rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem, for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.
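A minimal sketch of the kind of hybrid discussed above, under my own assumptions (penalty-based fitness, toy graph): a classical 2-approximation for vertex cover supplies the starting point, and a simple (1+1) EA then tries to shrink the cover while keeping all edges covered.

```python
import random

def matching_2_approximation(n, edges):
    """Classic 2-approximation: take both endpoints of a maximal matching."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            cover |= {u, v}
            matched |= {u, v}
    return [1 if v in cover else 0 for v in range(n)]

def fitness(x, edges):
    """Minimize cover size plus a heavy penalty for every uncovered edge."""
    uncovered = sum(1 for u, v in edges if not (x[u] or x[v]))
    return sum(x) + len(x) * uncovered

def one_plus_one_ea(x, edges, iters=10_000):
    n = len(x)
    for _ in range(iters):
        y = [b ^ (random.random() < 1 / n) for b in x]   # standard bit mutation
        if fitness(y, edges) <= fitness(x, edges):
            x = y
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
start = matching_2_approximation(5, edges)
print(sum(start), sum(one_plus_one_ea(start, edges)))    # cover size before/after
```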
Evolutionary Computation (2007) 15 (4): 435–443.
Published: 01 December 2007
Abstract
Various methods have been defined to measure the hardness of a fitness function for evolutionary algorithms and other black-box heuristics. Examples include fitness landscape analysis, epistasis, fitness-distance correlation, etc., all of which are relatively easy to describe. However, they do not always correctly specify the hardness of the function. Some measures are easy to implement, others are more intuitive and hard to formalize. This paper rigorously defines difficulty measures in black-box optimization and proposes a classification. Different types of realizations of such measures are studied, namely exact and approximate ones. For both types of realizations, it is proven that predictive versions running in polynomial time do not exist in general unless certain complexity-theoretic assumptions are wrong.
Evolutionary Computation (2006) 14 (1): 65–86.
Published: 01 March 2006
Abstract
Although Evolutionary Algorithms (EAs) have been successfully applied to optimization in discrete search spaces, theoretical developments remain weak, in particular for population-based EAs. This paper presents a first rigorous analysis of the (μ + 1) EA on pseudo-Boolean functions. Using three well-known example functions from the analysis of the (1 + 1) EA, we derive bounds on the expected runtime and success probability. For two of these functions, upper and lower bounds on the expected runtime are tight, and on all three functions, the (μ + 1) EA is never more efficient than the (1 + 1) EA. Moreover, all lower bounds grow with μ. On a more complicated function, however, a small increase of μ provably decreases the expected runtime drastically. This paper develops a new proof technique that bounds the runtime of the (μ + 1) EA. It investigates the stochastic process for creating family trees of individuals; the depth of these trees is bounded. Thereby, the progress of the population towards the optimum is captured. This new technique is general enough to be applied to other population-based EAs.
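A compact sketch of a (μ+1) EA of the kind analyzed above (selection and tie-breaking details are simplified assumptions of mine): the population holds μ bit strings, in each generation one parent is chosen uniformly at random and mutated by flipping every bit independently with probability 1/n, and a worst individual is then removed to restore the population size.

```python
import random

def mu_plus_one_ea(f, n, mu=5, max_iters=50_000):
    """(mu+1) EA on a pseudo-Boolean function f with standard bit mutation rate 1/n."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_iters):
        parent = random.choice(pop)                        # uniform parent selection
        child = [b ^ (random.random() < 1 / n) for b in parent]
        pop.append(child)
        pop.remove(min(pop, key=f))                        # drop a worst individual
        if f(max(pop, key=f)) == n:                        # assumes optimum value n (e.g., OneMax)
            break
    return max(pop, key=f)

print(sum(mu_plus_one_ea(f=sum, n=30)))                    # best OneMax value found
```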