1-7 of 7
Tobias Friedrich
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2016) 24 (2): 237–254.
Published: 01 June 2016
Abstract
Recently, ant colony optimization (ACO) algorithms have proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses have focused on combinatorial problems such as path finding. We rigorously analyze an ACO algorithm optimizing linear pseudo-Boolean functions under additive posterior noise. We study noise distributions whose tails decay exponentially fast, including the classical case of additive Gaussian noise. Without noise, the classical (μ+1) EA outperforms any ACO algorithm, with smaller μ being better; however, in the case of large noise, the (μ+1) EA fails, even for high values of μ (which are known to help against small noise). In this article, we show that ACO is able to deal with arbitrarily large noise in a graceful manner; that is, as long as the evaporation factor ρ is small enough, dependent on the variance of the noise and the dimension n of the search space, optimization will be successful. We also briefly consider the case of prior noise and prove that ACO can also efficiently optimize linear functions under this noise model.
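The scheme this abstract refers to can be pictured with a short sketch: a pheromone vector over the n bits is evaporated with factor ρ and reinforced towards a kept solution, while every fitness evaluation is perturbed by Gaussian noise. The comparison and update rules below (re-evaluating the incumbent in every iteration and reinforcing whichever solution appears better) are simplifications of my own, not the exact algorithm analyzed in the paper.

```python
import random

def noisy_linear(x, weights, sigma):
    """Linear fitness sum(w_i * x_i) plus additive Gaussian posterior noise."""
    return sum(w * b for w, b in zip(weights, x)) + random.gauss(0.0, sigma)

def construct(tau):
    """Sample a bit string: bit i is set with probability tau[i]."""
    return [1 if random.random() < t else 0 for t in tau]

def aco(weights, rho=0.001, sigma=5.0, iterations=100_000):
    n = len(weights)
    lo, hi = 1.0 / n, 1.0 - 1.0 / n                     # pheromone borders
    tau = [0.5] * n                                     # initial pheromone values
    incumbent = construct(tau)
    for _ in range(iterations):
        challenger = construct(tau)
        # both solutions are (re-)evaluated under noise; the apparent winner is kept
        if noisy_linear(challenger, weights, sigma) >= noisy_linear(incumbent, weights, sigma):
            incumbent = challenger
        # evaporation plus reinforcement of the kept solution; a small rho means the
        # pheromone values average over many noisy comparisons
        tau = [min(hi, max(lo, (1 - rho) * t + rho * b)) for t, b in zip(tau, incumbent)]
    return incumbent

if __name__ == "__main__":
    random.seed(1)
    print(sum(aco(weights=[1.0] * 50)), "ones found on noisy OneMax")
```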
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2015) 23 (4): 543–558.
Published: 01 December 2015
Abstract
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1+1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 − 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1+1) EA achieves a (1/(k + ε))-approximation in expected polynomial time for any constant ε > 0. Turning to nonmonotone symmetric submodular functions with matroid intersection constraints, we show that the GSEMO achieves a -approximation in expected time .
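GSEMO itself is a simple algorithm: keep an archive of mutually non-dominated solutions, pick one uniformly at random, apply standard bit mutation, and insert the offspring if it is not dominated. The sketch below applies it to a toy monotone submodular function (set coverage) under a cardinality constraint; the bi-objective formulation (maximize f for feasible solutions, minimize the number of selected elements) and the penalty for infeasible solutions are illustrative assumptions, not necessarily the exact setup of the paper.

```python
import random

def gsemo(f, n, k, iterations=50_000):
    """Hedged sketch of the GSEMO for maximizing f(x) subject to sum(x) <= k."""
    def objectives(x):
        size = sum(x)
        value = f(x) if size <= k else float("-inf")    # infeasible sets are dominated away
        return (value, -size)                           # maximize both components

    def dominates(a, b):
        return all(ai >= bi for ai, bi in zip(a, b)) and a != b

    empty = tuple([0] * n)
    population = {empty: objectives(list(empty))}       # archive of non-dominated solutions
    for _ in range(iterations):
        parent = list(random.choice(list(population)))
        child = [bit ^ (random.random() < 1.0 / n) for bit in parent]   # standard bit mutation
        child_obj = objectives(child)
        if any(dominates(obj, child_obj) for obj in population.values()):
            continue                                    # child is dominated: discard it
        population = {x: obj for x, obj in population.items() if not dominates(child_obj, obj)}
        population[tuple(child)] = child_obj
    return max(population.items(), key=lambda item: item[1][0])

if __name__ == "__main__":
    # toy monotone submodular objective: number of ground-set elements covered
    random.seed(0)
    subsets = [set(random.sample(range(30), 8)) for _ in range(20)]

    def coverage(x):
        covered = set()
        for s, bit in zip(subsets, x):
            if bit:
                covered |= s
        return len(covered)

    best, (value, neg_size) = gsemo(coverage, n=len(subsets), k=5)
    print("covered elements:", value, "sets used:", -neg_size)
```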
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2015) 23 (1): 131–159.
Published: 01 March 2015
Abstract
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
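To make the two quality measures concrete, the following sketch computes, for a handful of points on an illustrative linear bi-objective front, both the dominated hypervolume with respect to a reference point and the multiplicative approximation ratio achieved by the point set. The front, the reference point, and the sampling density are assumptions made purely for illustration.

```python
def hypervolume_2d(points, ref):
    """Dominated hypervolume of a non-dominated 2-D maximization point set."""
    pts = sorted(points, reverse=True)          # f1 descending, hence f2 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (f1 - ref[0]) * (f2 - prev_f2)    # strip added by this point
        prev_f2 = f2
    return hv

def multiplicative_ratio(points, front):
    """Smallest alpha such that every front point (y1, y2) is covered by some chosen
    point (x1, x2) with x1 >= y1 / alpha and x2 >= y2 / alpha."""
    return max(min(max(y1 / x1, y2 / x2) for x1, x2 in points) for y1, y2 in front)

if __name__ == "__main__":
    # illustrative linear front f1 + f2 = 3 with f1 in [1, 2]; all values are >= 1
    # so that multiplicative ratios are well defined
    front = [(1 + t / 1000, 2 - t / 1000) for t in range(1001)]
    chosen = [front[i] for i in (0, 250, 500, 750, 1000)]   # includes both extreme points
    print("hypervolume:", hypervolume_2d(chosen, ref=(0.0, 0.0)))
    print("approximation ratio:", multiplicative_ratio(chosen, front))
```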
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2010) 18 (4): 617–633.
Published: 01 December 2010
Abstract
The main aim of randomized search heuristics is to produce good approximations of optimal solutions within a small amount of time. In contrast to numerous experimental results, there are only a few theoretical explorations on this subject. We consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multi-objective models for such problems. For the VertexCover problem, we point out situations where the multi-objective model leads to a fast construction of optimal solutions, while in the single-objective case no good approximation can be achieved in expected polynomial time. Examining the more general SetCover problem, we show that optimal solutions can be approximated within a logarithmic factor of the size of the ground set using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
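To illustrate the difference between the two models for SetCover, here is a small sketch of both fitness formulations for a bit string x that selects sets from a collection. The penalty constant in the single-objective variant is an assumption for illustration, not necessarily the one used in the paper.

```python
def uncovered(x, subsets, universe):
    """Number of ground-set elements not covered by the selected sets."""
    covered = set()
    for s, bit in zip(subsets, x):
        if bit:
            covered |= s
    return len(universe - covered)

def single_objective(x, subsets, universe):
    """Scalar fitness to minimize: number of chosen sets, plus a large penalty per
    uncovered element so that feasibility dominates."""
    return sum(x) + (len(subsets) + 1) * uncovered(x, subsets, universe)

def multi_objective(x, subsets, universe):
    """Vector fitness to minimize component-wise: (uncovered elements, chosen sets).
    A GSEMO-style algorithm keeps one best solution for every trade-off."""
    return (uncovered(x, subsets, universe), sum(x))

if __name__ == "__main__":
    universe = set(range(6))
    subsets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
    x = [1, 0, 1, 0]                     # choose the first and third set
    print(single_objective(x, subsets, universe), multi_objective(x, subsets, universe))
```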
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2010) 18 (3): 383–402.
Published: 01 September 2010
Abstract
The hypervolume indicator serves as a sorting criterion in many recent multi-objective evolutionary algorithms (MOEAs). Typical algorithms remove the solution with the smallest loss with respect to the dominated hypervolume from the population. We present a new algorithm which determines, for a population of size n with d objectives, a solution with minimal hypervolume contribution in time O(n^{d/2} log n) for d > 2. This improves all previously published algorithms by a factor of n for all d > 3 and by a factor of √n for d = 3. We also analyze hypervolume indicator based optimization algorithms which remove λ > 1 solutions from a population of size n = μ + λ. We show that there are populations such that the hypervolume contribution of iteratively chosen λ solutions is much larger than the hypervolume contribution of an optimal set of λ solutions. Selecting the optimal set of λ solutions implies calculating (n choose λ) conventional hypervolume contributions, which is considered to be computationally too expensive. We present the first hypervolume algorithm which directly calculates the contribution of every set of λ solutions. This gives an additive term of n^λ in the runtime of the calculation instead of a multiplicative factor of n^λ. More precisely, for a population of size n with d objectives, our algorithm can calculate a set of λ solutions with minimal hypervolume contribution in time O(n^{d/2} log n + n^λ) for d > 2. This improves all previously published algorithms by a factor of n^{min{λ, d/2}} for d > 3 and by a factor of n for d = 3.
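For the simple planar case d = 2 (not covered by the bounds above, which concern d > 2), the exclusive hypervolume contribution of each point is just a rectangle, which makes the quantity these algorithms compute easy to illustrate. The reference point and the population in the sketch below are arbitrary choices.

```python
def contributions_2d(points, ref=(0.0, 0.0)):
    """Exclusive hypervolume contribution of each point of a non-dominated
    2-D maximization population with respect to reference point `ref`."""
    pts = sorted(points, reverse=True)                # f1 descending, hence f2 ascending
    out = []
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        prev_f2 = pts[i - 1][1] if i > 0 else ref[1]
        out.append(((f1, f2), (f1 - next_f1) * (f2 - prev_f2)))   # rectangle lost on removal
    return out

if __name__ == "__main__":
    population = [(3.0, 1.0), (2.5, 2.0), (1.0, 3.0)]
    contribs = contributions_2d(population)
    least = min(contribs, key=lambda pc: pc[1])       # candidate for removal in an MOEA
    print(contribs, "-> remove", least[0])
```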
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2009) 17 (4): 455–476.
Published: 01 December 2009
Abstract
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity-preserving mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity-preserving mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently. Our theoretical results are accompanied by additional experiments for different population sizes.
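As one concrete instance of such a mechanism, the sketch below runs a mutation-only deterministic-crowding scheme, in which an offspring competes only against its own parent, on the bimodal TwoMax function. The test function, population size, and acceptance rule are assumptions for illustration and need not match the exact setup of the paper.

```python
import random

def twomax(x):
    """Bimodal test function with optima at the all-zeros and all-ones strings."""
    ones = sum(x)
    return max(ones, len(x) - ones)

def deterministic_crowding(n=50, mu=20, generations=20_000):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(generations):
        i = random.randrange(mu)
        parent = pop[i]
        child = [bit ^ (random.random() < 1.0 / n) for bit in parent]   # standard bit mutation
        # the offspring competes only against its own parent (the crowding step),
        # so the two branches of TwoMax can be explored independently
        if twomax(child) >= twomax(parent):
            pop[i] = child
    return pop

if __name__ == "__main__":
    final = deterministic_crowding()
    found = {"all-ones" if sum(x) == len(x) else "all-zeros" if not any(x) else "neither"
             for x in final}
    print(found)
```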
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2009) 17 (1): 3–19.
Published: 01 March 2009
Abstract
Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, the theoretical understanding of the interplay of different optimization methods is rare. In this paper, we make a first step into the rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.
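As a hedged illustration of the kind of hybrid studied here, the sketch below seeds a simple (1+1) EA for VertexCover with the classical maximal-matching 2-approximation and lets the EA try to shrink the cover. The graph, the fitness function, and the penalty are my own illustrative choices, not the specific instances analyzed in the paper.

```python
import random

def matching_cover(edges):
    """Classical 2-approximation: both endpoints of a greedily built maximal matching."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

def fitness(x, edges, n):
    """Cover size plus a large penalty per uncovered edge (to be minimized)."""
    cover = {v for v, bit in enumerate(x) if bit}
    uncovered = sum(1 for u, v in edges if u not in cover and v not in cover)
    return len(cover) + (n + 1) * uncovered

def hybrid_ea(edges, n, steps=20_000):
    seed = matching_cover(edges)                        # start from the 2-approximation
    x = [1 if v in seed else 0 for v in range(n)]
    fx = fitness(x, edges, n)
    for _ in range(steps):
        y = [bit ^ (random.random() < 1.0 / n) for bit in x]   # standard bit mutation
        fy = fitness(y, edges, n)
        if fy <= fx:                                    # (1+1) EA acceptance
            x, fx = y, fy
    return {v for v, bit in enumerate(x) if bit}

if __name__ == "__main__":
    # star graph: the seeded 2-approximation picks both endpoints of one edge,
    # and the EA quickly reduces the cover to the single center vertex
    n = 20
    edges = [(0, v) for v in range(1, n)]
    print(sorted(hybrid_ea(edges, n)))
```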