Darrell Whitley: 1-15 of 15 results
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2022) 30 (3): 409–446.
Published: 01 September 2022
Abstract
An optimal recombination operator for two-parent solutions provides the best solution among those that take the value for each variable from one of the parents (gene transmission property). If the solutions are bit strings, the offspring of an optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions. Exploring this hyperplane is computationally costly, in general, requiring exponential time in the worst case. However, when the variable interaction graph of the objective function is sparse, exploration can be done in polynomial time. In this article, we present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis combinatorial problems. We compare this operator, both theoretically and experimentally, with traditional crossover operators, like uniform crossover and network crossover, and with two recently defined efficient recombination operators: partition crossover and articulation points partition crossover. The empirical comparison uses NKQ Landscapes and MAX-SAT instances. DPX outperforms the other crossover operators in terms of quality of the offspring and provides better results when included in a trajectory-based and a population-based metaheuristic, but it requires more time and memory to compute the offspring.
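The gene transmission property described in this abstract can be illustrated with a brute-force sketch (not DPX itself): enumerate every offspring that takes each differing bit from one of the two parents and keep the best, which makes the exponential 2^k cost of exploring the shared hyperplane explicit. The `optimal_recombination` name and the Onemax objective are illustrative assumptions, not details from the article.

```python
from itertools import product

def optimal_recombination(p1, p2, f):
    """Best offspring that takes every bit from one of the two parents
    (gene transmission).  Brute force: 2^k candidates for k differing
    bits, which is the cost DPX avoids on sparse interaction graphs."""
    diff = [i for i in range(len(p1)) if p1[i] != p2[i]]
    best, best_fit = list(p1), f(p1)
    for choice in product([0, 1], repeat=len(diff)):
        child = list(p1)
        for take_p2, i in zip(choice, diff):
            if take_p2:
                child[i] = p2[i]
        fit = f(child)
        if fit > best_fit:
            best, best_fit = child, fit
    return best

# Onemax example: common bits are inherited; each differing bit is
# set to whichever parent's value is better.
print(optimal_recombination([0, 1, 0, 1, 1], [1, 1, 1, 0, 1], sum))
# -> [1, 1, 1, 1, 1]
```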
Evolutionary Computation (2020) 28 (2): 255–288.
Published: 01 June 2020
Abstract
Generalized Partition Crossover (GPX) is a deterministic recombination operator developed for the Traveling Salesman Problem. Partition crossover operators return the best of 2^k reachable offspring, where k is the number of recombining components. This article introduces a new GPX2 operator, which finds more recombining components than GPX or Iterative Partial Transcription (IPT). We also show that GPX2 has O(n) runtime complexity, while also introducing new enhancements to reduce the execution time of GPX2. Finally, we experimentally demonstrate the efficiency of GPX2 when it is used to improve solutions found by the multitrial Lin-Kernighan-Helsgaun (LKH) algorithm. Significant improvements in performance are documented on large (n > 5,000) and very large (n = 100,000) instances of the Traveling Salesman Problem.
Evolutionary Computation (2016) 24 (3): 491–519.
Published: 01 September 2016
Abstract
This article investigates Gray Box Optimization for pseudo-Boolean optimization problems composed of M subfunctions, where each subfunction accepts at most k variables. We will refer to these as Mk Landscapes. In Gray Box Optimization, the optimizer is given access to the set of M subfunctions. We prove that Gray Box Optimization can efficiently compute hyperplane averages to solve non-deceptive problems in polynomial time. Bounded separable problems are also solved in polynomial time. As a result, Gray Box Optimization is able to solve many commonly used problems from the evolutionary computation literature in a polynomial number of evaluations. We also introduce a more general class of Mk Landscapes that can be solved using dynamic programming and discuss properties of these functions. For certain types of problems, Gray Box Optimization makes it possible to enumerate all local optima faster than brute force methods. We also provide evidence that randomly generated test problems are far less structured than those found in real-world problems.
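The hyperplane-average idea can be sketched on a toy Mk Landscape: because f is a sum of M subfunctions of at most k variables each, the average of f over a schema is the sum of each subfunction's average over its own free variables, costing O(M * 2^k) rather than enumerating 2^n strings. The representation below (variable tuples plus lookup tables) and the `hyperplane_average` name are illustrative assumptions, not the article's notation.

```python
from itertools import product

def hyperplane_average(subfunctions, schema):
    """Average of f = sum of subfunctions over a hyperplane.
    schema maps fixed variables to bits; all other variables are free.
    Each subfunction is (variables, table).  Cost O(M * 2^k), not 2^n:
    the average of a sum is the sum of the subfunction averages."""
    total = 0.0
    for variables, table in subfunctions:
        free = [v for v in variables if v not in schema]
        acc = 0.0
        for bits in product([0, 1], repeat=len(free)):
            assign = dict(schema)
            assign.update(zip(free, bits))
            acc += table[tuple(assign[v] for v in variables)]
        total += acc / (2 ** len(free))
    return total

# f(x0,x1,x2) = XOR(x0,x1) + 2*XNOR(x1,x2); average over schema 1**
subfs = [((0, 1), {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}),
         ((1, 2), {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 2})]
print(hyperplane_average(subfs, {0: 1}))  # -> 1.5
```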
Evolutionary Computation (2015) 23 (2): 217–248.
Published: 01 June 2015
Abstract
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
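For the Onemax case mentioned in this abstract, the distribution can be written down directly: a parent with j ones yields an offspring of fitness j - a + b when a ones and b zeros flip, so each probability is a product of binomial terms and hence a polynomial in p. A minimal sketch (the function name is illustrative, not from the paper):

```python
from math import comb

def onemax_mutation_distribution(n, j, p):
    """Exact fitness distribution of the offspring when a parent with
    j ones out of n bits undergoes bit-flip mutation with per-bit
    probability p.  Every probability is a polynomial in p."""
    dist = {}
    for a in range(j + 1):                # ones flipped off
        pa = comb(j, a) * p**a * (1 - p)**(j - a)
        for b in range(n - j + 1):        # zeros flipped on
            pb = comb(n - j, b) * p**b * (1 - p)**(n - j - b)
            dist[j - a + b] = dist.get(j - a + b, 0.0) + pa * pb
    return dist

# Sanity check: with p = 0.5 the offspring is uniform on {0,1}^n,
# so fitness k has probability C(n, k) / 2^n.
print(onemax_mutation_distribution(4, 2, 0.5))
```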
Evolutionary Computation (2013) 21 (4): 561–590.
Published: 01 November 2013
Abstract
The frequency distribution of a fitness function over regions of its domain is an important quantity for understanding the behavior of algorithms that employ randomized sampling to search the function. In general, exactly characterizing this distribution is at least as hard as the search problem, since the solutions typically live in the tails of the distribution. However, in some cases it is possible to efficiently retrieve a collection of quantities (called moments) that describe the distribution. In this paper, we consider functions of bounded epistasis that are defined over length-n strings from a finite alphabet of cardinality q. Many problems in combinatorial optimization can be specified as search problems over functions of this type. Employing Fourier analysis of functions over finite groups, we derive an efficient method for computing the exact moments of the frequency distribution of fitness functions over Hamming regions of the q-ary hypercube. We then use this approach to derive equations that describe the expected fitness of the offspring of any point undergoing uniform mutation. The results we present provide insight into the statistical structure of the fitness function for a number of combinatorial problems. For the graph coloring problem, we apply our results to efficiently compute the average number of constraint violations that lie within a certain number of steps of any coloring. We derive an expression for the mutation rate that maximizes the expected fitness of an offspring at each fitness level. We also apply the results to the slightly more complex frequency assignment problem, a relevant application in the domain of the telecommunications industry. As with the graph coloring problem, we provide formulas for the average value of the fitness function in Hamming regions around a solution and the expectation-optimal mutation rate.
Evolutionary Computation (2011) 19 (4): 597–637.
Published: 01 December 2011
Abstract
A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
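The eigenfunction condition in this abstract can be checked numerically on a small example. Under the one-bit-flip neighborhood, Onemax satisfies the elementary-landscape identity: f minus its mean is an eigenfunction of the graph Laplacian with eigenvalue -2 at every point. The sketch below verifies this by brute force; Onemax is an illustrative choice of ours, not one of the problems decomposed in the article.

```python
def laplacian(f, n, x):
    """Graph Laplacian of f at x for the one-bit-flip neighborhood on
    length-n bit strings: sum over neighbors y of f(y) - f(x)."""
    return sum(f(x ^ (1 << i)) - f(x) for i in range(n))

n = 4
onemax = lambda x: bin(x).count("1")
mean = sum(onemax(x) for x in range(2 ** n)) / 2 ** n

# Onemax is elementary under this neighborhood: f - mean is an
# eigenfunction of the Laplacian with eigenvalue -2 at every point.
assert all(laplacian(onemax, n, x) == -2 * (onemax(x) - mean)
           for x in range(2 ** n))
print("Onemax is an elementary landscape under bit-flip moves")
```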
Evolutionary Computation (2004) 12 (1): 47–76.
Published: 01 January 2004
Abstract
Representations are formalized as encodings that map the search space to the vertex set of a graph. We define the notion of bit equivalent encodings and show that for such encodings the corresponding Walsh coefficients are also conserved. We focus on Gray codes as particular types of encoding and present a review of properties related to the use of Gray codes. Gray codes are widely used in conjunction with genetic algorithms and bit-climbing algorithms for parameter optimization problems. We present new convergence proofs for a special class of unimodal functions; the proofs show that a steepest ascent bit climber using any reflected Gray code representation reaches the global optimum in a number of steps that is linear with respect to the encoding size. There are in fact many different Gray codes. Shifting is defined as a mechanism for dynamically switching from one Gray code representation to another in order to escape local optima. Theoretical results that substantially improve our understanding of the Gray codes and the shifting mechanism are presented. New proofs also shed light on the number of unique Gray code neighborhoods accessible via shifting and on how neighborhood structure changes during shifting. We show that shifting can improve the performance of both a local search algorithm as well as one of the best genetic algorithms currently available.
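The binary-reflected Gray code discussed in this abstract is compactly computable: encoding is b XOR (b >> 1), and decoding folds a prefix XOR back down. A small sketch of this standard construction (not code from the article):

```python
def binary_to_gray(b: int) -> int:
    """Binary-reflected Gray code: consecutive integers differ in one bit."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping: fold the prefix XOR back down."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Consecutive integers map to codewords at Hamming distance 1, which is
# what gives a steepest ascent bit climber a path along the number line.
print([format(binary_to_gray(i), "03b") for i in range(8)])
# -> ['000', '001', '011', '010', '110', '111', '101', '100']
```

Shifting, as the article uses the term, amounts to Gray-encoding (x + s) mod 2^L for some shift s, which changes the neighborhood structure without changing the function being optimized.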
Evolutionary Computation (2002) 10 (4): iii.
Published: 01 December 2002
Evolutionary Computation (2002) 10 (1): iii.
Published: 01 March 2002
Evolutionary Computation (2001) 9 (1): iii.
Published: 01 March 2001
Evolutionary Computation (2000) 8 (1): iii–iv.
Published: 01 March 2000
Evolutionary Computation (1999) 7 (1): 69–101.
Published: 01 March 1999
Abstract
Classically, epistasis is either computed exactly by Walsh coefficients or estimated by sampling. Exact computation is usually of theoretical interest since the computation typically grows exponentially with the number of bits in the domain. Given an evaluation function, epistasis can also be estimated by sampling. However, this approach gives us little insight into the origin of the epistasis and is prone to sampling error. This paper presents theorems establishing the bounds of epistasis for problems that can be stated as mathematical expressions. This leads to substantial computational savings for bounding the difficulty of a problem. Furthermore, working with these theorems in a mathematical context, one can gain insight into the mathematical origins of epistasis and how a problem's epistasis might be reduced. We present several new measures for epistasis and give empirical evidence and examples to demonstrate the application of the theorems. In particular, we show that some functions display “parity” such that by picking a well-defined representation, all Walsh coefficients of either odd or even index become zero, thereby reducing the nonlinearity of the function.
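For small domains, the exact Walsh-coefficient route mentioned in this abstract is the fast Walsh-Hadamard transform: coefficients whose index has two or more set bits are precisely the nonlinear (epistatic) terms. A sketch under the sign convention f(x) = sum_j w[j] * (-1)^popcount(j & x); the function name is illustrative.

```python
def walsh_coefficients(values):
    """Walsh coefficients of a pseudo-Boolean function given as its full
    table of 2^n values (index read as a bit string).  In-place fast
    Walsh-Hadamard transform, O(n * 2^n)."""
    w = list(values)
    size = len(w)
    assert size & (size - 1) == 0, "table length must be a power of two"
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                a, b = w[j], w[j + h]
                w[j], w[j + h] = a + b, a - b
        h *= 2
    return [x / size for x in w]

# A linear function such as Onemax has no epistasis: every coefficient
# whose index has two or more set bits is zero.
w = walsh_coefficients([bin(x).count("1") for x in range(8)])
print(w)  # entries at indices 3, 5, 6, 7 (order >= 2) are all 0.0
```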
Evolutionary Computation (1996) 4 (3): iv–viii.
Published: 01 September 1996
Evolutionary Computation (1994) 2 (3): 249–278.
Published: 01 September 1994
Abstract
Delta coding is an iterative genetic search strategy that dynamically changes the representation of the search space in an attempt to exploit different problem representations. Delta coding sustains search by reinitializing the population at each iteration of search. This helps to avoid the asymptotic performance typically observed in genetic search as the population becomes more homogeneous. Here, the optimization ability of delta coding is empirically compared against CHC, ESGA, GENITOR, and random mutation hill-climbing (RMHC) on a suite of well-known test functions with and without Gray coding. Issues concerning the effects of Gray coding on these test functions are addressed.
Evolutionary Computation (1993) 1 (3): 213–233.
Published: 01 September 1993
Abstract
A grammar tree is used to encode a cellular developmental process that can generate whole families of Boolean neural networks for computing parity and symmetry. The development process resembles biological cell division. A genetic algorithm is used to find a grammar tree that yields both architecture and weights specifying a particular neural network for solving specific Boolean functions. The current study particularly focuses on the addition of learning to the development process and the evolution of grammar trees. Three ways of adding learning to the development process are explored. Two of these exploit the Baldwin effect by changing the fitness landscape without using Lamarckian evolution. The third strategy is Lamarckian in nature. Results for these three modes of combining learning with genetic search are compared against genetic search without learning. Our results suggest that merely using learning to change the fitness landscape can be as effective as Lamarckian strategies at improving search.