Andrew M. Sutton
1–6 of 6 results
Journal Articles
Evolutionary Computation (2023) 31 (3): 309–335.
Published: 01 September 2023
Abstract
Recently, Rowe and Aishwaryaprajna (2019) introduced a simple majority vote technique that efficiently solves Jump with large gaps, OneMax with large noise, and any monotone function with a polynomial-size image. In this paper, we identify a pathological condition for this algorithm: the presence of spin-flip symmetry in the problem instance. Spin-flip symmetry is the invariance of a pseudo-Boolean function under complementation. Many important combinatorial optimization problems admit objective functions that exhibit this pathology, such as graph problems, Ising models, and variants of propositional satisfiability. We prove that no population size exists that allows the majority vote technique to solve spin-flip symmetric functions of unitation with reasonable probability. To remedy this, we introduce a symmetry-breaking technique that allows the majority vote algorithm to overcome this issue for many landscapes. The technique requires only a minor modification to the original majority vote algorithm, forcing it to sample strings in {0,1}^n from a hyperplane of dimension n-1. We prove a sufficient condition on a spin-flip symmetric function for the symmetry-breaking voting algorithm to succeed, and prove its efficiency on generalized TwoMax, a spin-flip symmetric variant of Jump, and families of constructed 3-NAE-SAT and 2-XOR-SAT formulas. We also prove that the algorithm fails on the one-dimensional Ising model and suggest different techniques for overcoming this. Finally, we present empirical results that explore the tightness of the runtime bounds and the performance of the technique on randomized satisfiability variants.
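The following minimal sketch illustrates the voting idea, following one common presentation of the Rowe and Aishwaryaprajna (2019) technique: sample pairs of uniform random strings, keep the fitter string of each pair, and output the bitwise majority of the winners. An optional switch clamps the first bit of every sample, one simple way to restrict sampling to an (n-1)-dimensional hyperplane. The pair-sampling loop, the clamped bit, and the TwoMax example are assumptions of this sketch, not the paper's exact algorithm.

```python
import random

def voting(f, n, N, fix_first_bit=None):
    """Majority-vote search sketch in the spirit of Rowe and Aishwaryaprajna (2019).

    Samples N pairs of uniform random strings, keeps the fitter string of
    each pair, and returns the bitwise majority of the winners.  If
    fix_first_bit is not None, every sample has its first bit clamped,
    restricting sampling to an (n-1)-dimensional hyperplane (a guess at
    the symmetry-breaking modification described in the abstract).
    """
    def sample():
        x = [random.randint(0, 1) for _ in range(n)]
        if fix_first_bit is not None:
            x[0] = fix_first_bit
        return x

    counts = [0] * n                              # per-position tally of ones among winners
    for _ in range(N):
        x, y = sample(), sample()
        winner = x if f(x) >= f(y) else y
        for i, bit in enumerate(winner):
            counts[i] += bit
    return [1 if 2 * c > N else 0 for c in counts]

# TwoMax is a spin-flip symmetric function of unitation.
twomax = lambda x: max(sum(x), len(x) - sum(x))
print(sum(voting(twomax, n=51, N=2000)))                   # plain voting: per-bit votes are unbiased
print(sum(voting(twomax, n=51, N=2000, fix_first_bit=1)))  # symmetry-broken variant
```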
Journal Articles
Evolutionary Computation (2016) 24 (2): 237–254.
Published: 01 June 2016
Abstract
Recently, ant colony optimization (ACO) algorithms have proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses have focused on combinatorial problems such as path finding. We rigorously analyze an ACO algorithm optimizing linear pseudo-Boolean functions under additive posterior noise. We study noise distributions whose tails decay exponentially fast, including the classical case of additive Gaussian noise. Without noise, the classical (μ+1) EA outperforms any ACO algorithm, with smaller μ being better; however, in the case of large noise, the (μ+1) EA fails, even for high values of μ (which are known to help against small noise). In this article, we show that ACO is able to deal with arbitrarily large noise in a graceful manner; that is, as long as the evaporation factor ρ is small enough, dependent on the variance of the noise and the dimension n of the search space, optimization will be successful. We also briefly consider the case of prior noise and prove that ACO can also efficiently optimize linear functions under this noise model.
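The sketch below mirrors the kind of algorithm the abstract describes: an MMAS-style pheromone vector is sampled each iteration and then updated toward a reference solution with a small evaporation factor ρ, while fitness evaluations are disturbed by additive Gaussian posterior noise. The pheromone borders, the best-so-far reference solution, and the noisy comparison rule are assumptions of this sketch rather than the exact algorithm analyzed in the paper.

```python
import random

def aco_linear_noisy(weights, rho, sigma, iterations):
    """MMAS-style sketch of ACO on a linear pseudo-Boolean function
    f(x) = sum_i w_i * x_i under additive Gaussian posterior noise.

    The abstract's statement is that a small enough evaporation factor
    (relative to the noise variance and the dimension n) suffices; the
    remaining details here are illustrative assumptions.
    """
    n = len(weights)
    tau = [0.5] * n                                  # pheromone values, kept in [1/n, 1 - 1/n]
    lo, hi = 1.0 / n, 1.0 - 1.0 / n

    def noisy_f(x):
        return sum(w * b for w, b in zip(weights, x)) + random.gauss(0.0, sigma)

    best = [random.randint(0, 1) for _ in range(n)]
    for _ in range(iterations):
        x = [1 if random.random() < t else 0 for t in tau]   # construct a solution from pheromones
        if noisy_f(x) >= noisy_f(best):                      # noisy comparison with the reference
            best = x
        # evaporate and deposit pheromone toward the reference solution
        tau = [min(hi, max(lo, (1 - rho) * t + rho * b)) for t, b in zip(tau, best)]
    return best

# Example: OneMax-like linear function with large noise and a small evaporation factor.
print(sum(aco_linear_noisy([1.0] * 100, rho=0.001, sigma=10.0, iterations=20000)))
```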
Journal Articles
Evolutionary Computation (2015) 23 (4): 509–511.
Published: 01 December 2015
Journal Articles
Evolutionary Computation (2015) 23 (2): 217–248.
Published: 01 June 2015
Abstract
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax) and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
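As a concrete special case of the result described here, the exact offspring-fitness distribution for Onemax can be written as a convolution of two binomials, each term a polynomial in p; the paper treats the general case via Krawtchouk polynomials. The helper below is an illustrative sketch, not code from the paper.

```python
from math import comb

def onemax_offspring_distribution(n, k, p):
    """Exact distribution of the Onemax value of an offspring obtained by
    flipping each bit of a parent with k ones independently with
    probability p (elementary convolution-of-binomials view).
    """
    dist = [0.0] * (n + 1)
    for kept in range(k + 1):                 # ones that survive mutation ~ Binomial(k, 1 - p)
        p_kept = comb(k, kept) * (1 - p) ** kept * p ** (k - kept)
        for gained in range(n - k + 1):       # zeros flipped to one ~ Binomial(n - k, p)
            p_gain = comb(n - k, gained) * p ** gained * (1 - p) ** (n - k - gained)
            dist[kept + gained] += p_kept * p_gain
    return dist                               # dist[j] = Pr[offspring has exactly j ones]

# Example: parent with 60 ones out of n = 100 under the standard mutation rate p = 1/n.
d = onemax_offspring_distribution(100, 60, 0.01)
print(max(range(101), key=lambda j: d[j]), round(sum(d), 6))   # mode of the distribution, total mass
```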
Journal Articles
Evolutionary Computation (2014) 22 (4): 595–628.
Published: 01 December 2014
Abstract
Parameterized runtime analysis seeks to understand the influence of problem structure on algorithmic runtime. In this paper, we contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We investigate the structural properties in TSP instances that influence the optimization process of evolutionary algorithms and use this information to bound their runtime. We analyze the runtime as a function of the number of inner points k. In the first part of the paper, we study a EA in a strictly black box setting and show that it can solve the Euclidean TSP in expected time where A is a function of the minimum angle between any three points. Based on insights provided by the analysis, we improve this upper bound by introducing a mixed mutation strategy that incorporates both 2-opt moves and permutation jumps. This strategy improves the upper bound to . In the second part of the paper, we use the information gained in the analysis to incorporate domain knowledge to design two fixed-parameter tractable (FPT) evolutionary algorithms for the planar Euclidean TSP. We first develop a EA based on an analysis by M. Theile, 2009, "Exact solutions to the traveling salesperson problem by a population-based evolutionary algorithm," Lecture Notes in Computer Science, Vol. 5482 (pp. 145–155), that solves the TSP with k inner points in generations with probability . We then design a EA that incorporates a dynamic programming step into the fitness evaluation. We prove that a variant of this evolutionary algorithm using 2-opt mutation solves the problem after steps in expectation with a cost of for each fitness evaluation.
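The mixed mutation strategy mentioned above can be sketched as a (1+1)-style EA whose mutation step applies either a 2-opt move (segment reversal) or a jump (relocating a single city). The 50/50 mix, a single move per step, and the acceptance rule are simplifying assumptions of this sketch, not the exact operators analyzed in the paper.

```python
import math, random

def tour_length(points, tour):
    """Total Euclidean length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mixed_mutation_ea(points, steps):
    """(1+1)-style EA sketch for the Euclidean TSP with a mixed mutation operator."""
    n = len(points)
    tour = list(range(n))
    random.shuffle(tour)
    best = tour_length(points, tour)
    for _ in range(steps):
        cand = tour[:]
        i, j = sorted(random.sample(range(n), 2))
        if random.random() < 0.5:
            cand[i:j + 1] = reversed(cand[i:j + 1])   # 2-opt move: reverse a segment
        else:
            cand.insert(j, cand.pop(i))               # jump: relocate one city
        length = tour_length(points, cand)
        if length <= best:                            # accept if no worse
            tour, best = cand, length
    return tour, best

# Example on random points in the unit square.
pts = [(random.random(), random.random()) for _ in range(30)]
print(mixed_mutation_ea(pts, steps=20000)[1])
```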
Journal Articles
Evolutionary Computation (2013) 21 (4): 561–590.
Published: 01 November 2013
Abstract
The frequency distribution of a fitness function over regions of its domain is an important quantity for understanding the behavior of algorithms that employ randomized sampling to search the function. In general, exactly characterizing this distribution is at least as hard as the search problem, since the solutions typically live in the tails of the distribution. However, in some cases it is possible to efficiently retrieve a collection of quantities (called moments) that describe the distribution. In this paper, we consider functions of bounded epistasis that are defined over length-n strings from a finite alphabet of cardinality q. Many problems in combinatorial optimization can be specified as search problems over functions of this type. Employing Fourier analysis of functions over finite groups, we derive an efficient method for computing the exact moments of the frequency distribution of fitness functions over Hamming regions of the q-ary hypercube. We then use this approach to derive equations that describe the expected fitness of the offspring of any point undergoing uniform mutation. The results we present provide insight into the statistical structure of the fitness function for a number of combinatorial problems. For the graph coloring problem, we apply our results to efficiently compute the average number of constraint violations that lie within a certain number of steps of any coloring. We derive an expression for the mutation rate that maximizes the expected fitness of an offspring at each fitness level. We also apply the results to the slightly more complex frequency assignment problem, a relevant application in the telecommunications industry. As with the graph coloring problem, we provide formulas for the average value of the fitness function in Hamming regions around a solution and the expectation-optimal mutation rate.
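To illustrate the kind of quantity described here, the sketch below computes in closed form the expected number of monochromatic edges of a graph coloring after uniform mutation and checks the value against Monte Carlo sampling. The mutation model (each vertex re-colored uniformly at random among the q colors with probability p) and the small random graph are assumptions of this illustration; the paper derives such expectations in general via Fourier analysis over finite groups.

```python
import itertools, random

def expected_violations_after_mutation(edges, coloring, q, p):
    """Closed-form expected number of monochromatic (violated) edges after
    mutating a q-coloring, where each vertex is independently re-colored
    uniformly at random with probability p (an assumed mutation model).
    """
    total = 0.0
    for u, v in edges:
        same_now = 1.0 if coloring[u] == coloring[v] else 0.0
        total += ((1 - p) ** 2 * same_now        # neither endpoint re-colored
                  + 2 * p * (1 - p) / q          # exactly one endpoint re-colored
                  + p ** 2 / q)                  # both endpoints re-colored
    return total

# Sanity check against Monte Carlo sampling on a small random graph.
random.seed(0)
edges = [(i, j) for i, j in itertools.combinations(range(8), 2) if random.random() < 0.4]
coloring, q, p = [random.randrange(3) for _ in range(8)], 3, 0.2

def mutate(c):
    return [random.randrange(q) if random.random() < p else ci for ci in c]

mc = sum(sum(c[u] == c[v] for u, v in edges)
         for c in (mutate(coloring) for _ in range(20000))) / 20000
print(round(expected_violations_after_mutation(edges, coloring, q, p), 3), round(mc, 3))
```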