Search results for David E. Goldberg: 1-17 of 17 journal articles.
Evolutionary Computation (2009) 17 (4): 595–626.
Published: 01 December 2009
Abstract
In many different fields, researchers are often confronted by problems arising from complex systems. Simple heuristics or even enumeration works quite well on small and easy problems; however, to efficiently solve large and difficult problems, proper decomposition is the key. In this paper, investigating and analyzing the interactions between the components of complex systems sheds some light on problem decomposition. By recognizing three bare-bones interactions (modularity, hierarchy, and overlap), facet-wise models are developed to dissect and inspect problem decomposition in the context of genetic algorithms. The proposed genetic algorithm design utilizes a matrix representation of an interaction graph to analyze and explicitly decompose the problem. The results of this paper should benefit research both technically and scientifically. Technically, this paper develops an automated dependency structure matrix clustering technique and utilizes it to design a model-building genetic algorithm that learns and delivers the problem structure. Scientifically, the explicit interaction model describes the problem structure very well and helps researchers gain important insights through the explicitness of the procedure.
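A minimal stand-in can make the pipeline concrete: detect pairwise interactions by perturbation, record them in a dependency structure matrix (DSM), and cluster the matrix into linkage groups. The sketch below is my simplification, not the paper's algorithm; the perturbation test and the connected-components clustering are placeholders for the automated DSM clustering technique the paper develops.

```python
import numpy as np

def interacts(f, x, i, j, eps=1e-9):
    """Crude nonlinearity check: loci i and j interact if flipping them
    together changes fitness differently than the sum of single flips."""
    def flip(x, *idx):
        y = x.copy()
        for k in idx:
            y[k] ^= 1
        return y
    d_i = f(flip(x, i)) - f(x)
    d_j = f(flip(x, j)) - f(x)
    d_ij = f(flip(x, i, j)) - f(x)
    return abs(d_ij - (d_i + d_j)) > eps

def build_dsm(f, n, trials=20, rng=np.random.default_rng(0)):
    """DSM: dsm[i, j] = 1 if any sampled string reveals an interaction."""
    dsm = np.zeros((n, n), dtype=int)
    for _ in range(trials):
        x = rng.integers(0, 2, n)
        for i in range(n):
            for j in range(i + 1, n):
                if interacts(f, x, i, j):
                    dsm[i, j] = dsm[j, i] = 1
    return dsm

def cluster_dsm(dsm):
    """Stand-in clustering: connected components of the interaction graph."""
    n = len(dsm)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i, j]:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Concatenated 3-bit blocks: loci interact only within a block.
f = lambda x: sum(x[k:k + 3].prod() for k in range(0, len(x), 3))
print(cluster_dsm(build_dsm(f, 9)))  # expect [[0,1,2],[3,4,5],[6,7,8]]
```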
Evolutionary Computation (2008) 16 (3): 315–354.
Published: 01 September 2008
Abstract
A wide range of niching techniques have been investigated in evolutionary and genetic algorithms. In this article, we focus on niching using crowding techniques in the context of what we call local tournament algorithms. In addition to deterministic and probabilistic crowding, the family of local tournament algorithms includes the Metropolis algorithm, simulated annealing, restricted tournament selection, and parallel recombinative simulated annealing. We describe an algorithmic and analytical framework which is applicable to a wide range of crowding algorithms. As an example of utilizing this framework, we present and analyze the probabilistic crowding niching algorithm. Like the closely related deterministic crowding approach, probabilistic crowding is fast, simple, and requires no parameters beyond those of classical genetic algorithms. In probabilistic crowding, subpopulations are maintained reliably, and we show that it is possible to analyze and predict how this maintenance takes place. We also provide novel results for deterministic crowding, show how different crowding replacement rules can be combined in portfolios, and discuss population sizing. Our analysis is backed up by experiments that further increase the understanding of probabilistic crowding.
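The replacement rule described above is compact enough to write out. A minimal sketch, assuming binary genomes and non-negative fitness; the pairing of each offspring with the more similar parent follows the deterministic-crowding convention the paper builds on, and all names are illustrative.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def crowding_replace(parents, offspring, fitness, rng=random.Random(1)):
    """Pair each offspring with the more similar parent, then let it take
    the parent's slot with probability f(o) / (f(o) + f(p))
    (probabilistic crowding).  Deterministic crowding uses the same
    pairing but replaces only when f(o) > f(p)."""
    p1, p2 = parents
    o1, o2 = offspring
    # Choose the pairing that minimizes total parent-offspring distance.
    if hamming(p1, o1) + hamming(p2, o2) > hamming(p1, o2) + hamming(p2, o1):
        o1, o2 = o2, o1
    survivors = []
    for p, o in ((p1, o1), (p2, o2)):
        fo, fp = fitness(o), fitness(p)
        wins = fo + fp > 0 and rng.random() < fo / (fo + fp)
        survivors.append(o if wins else p)
    return survivors
```

Because the win probability is proportional to fitness rather than a hard comparison, niches with lower peaks keep a predictable share of the population, which is the maintenance behavior the paper analyzes.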
Evolutionary Computation (2007) 15 (2): 133–168.
Published: 01 June 2007
Abstract
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
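The contrast between the Widrow-Hoff update and its least-squares replacements is easiest to see as update rules. Below is a hedged numpy sketch using the x0-augmented inputs XCSF works with; learning rates and initializations are illustrative choices of mine, not the paper's.

```python
import numpy as np

def widrow_hoff_update(w, x, y, eta=0.01):
    """LMS / Widrow-Hoff: one gradient step on the squared error.
    Convergence slows when the autocorrelation matrix E[x x^T] of the
    classifier's inputs has a large eigenvalue spread."""
    return w + eta * (y - w @ x) * x

def rls_update(w, P, x, y, lam=1.0):
    """Recursive least squares: tracks the exact least-squares solution
    incrementally, so convergence no longer depends on the eigenvalue
    spread.  P approximates the inverse input correlation matrix."""
    k = P @ x / (lam + x @ P @ x)        # gain vector
    w = w + k * (y - w @ x)              # posterior weight estimate
    P = (P - np.outer(k, x @ P)) / lam   # covariance update
    return w, P

# Linear target on an x0-augmented input, as in XCSF's linear classifiers.
rng = np.random.default_rng(0)
w_lms = np.zeros(2)
w_rls, P = np.zeros(2), np.eye(2) * 1000.0
for _ in range(200):
    x = np.array([1.0, rng.uniform(0.0, 10.0)])   # [x0, input]
    y = 3.0 * x[1] - 5.0                          # target weights (-5, 3)
    w_lms = widrow_hoff_update(w_lms, x, y)
    w_rls, P = rls_update(w_rls, P, x, y)
print(w_lms, w_rls)  # RLS lands near (-5, 3); plain LMS typically lags
```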
Evolutionary Computation (2006) 14 (3): 345–380.
Published: 01 September 2006
Abstract
Learning Classifier Systems (LCSs), such as the accuracy-based XCS, evolve distributed problem solutions represented by a population of rules. During evolution, features are specialized, propagated, and recombined to provide increasingly accurate subsolutions. Recently, it was shown that, as in conventional genetic algorithms (GAs), some problems require efficient processing of subsets of features to find problem solutions efficiently. In such problems, standard variation operators of genetic and evolutionary algorithms used in LCSs suffer from potential disruption of groups of interacting features, resulting in poor performance. This paper introduces efficient crossover operators to XCS by incorporating techniques derived from competent GAs: the extended compact GA (ECGA) and the Bayesian optimization algorithm (BOA). Instead of simple crossover operators such as uniform crossover or one-point crossover, ECGA or BOA-derived mechanisms are used to build a probabilistic model of the global population and to generate offspring classifiers locally using the model. Several offspring generation variations are introduced and evaluated. The results show that it is possible to achieve performance similar to runs with an informed crossover operator that is specifically designed to yield ideal problem-dependent exploration, exploiting provided problem structure information. Thus, we create the first competent LCSs, XCS/ECGA and XCS/BOA, that detect dependency structures online and propagate corresponding lower-level dependency structures effectively without any information about these structures given in advance.
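The model-building step (estimate a probabilistic model of the selected population, then sample offspring from it) can be sketched for the ECGA-style case. The sketch assumes the linkage groups are already known; the real ECGA learns its marginal product model with an MDL criterion, which is omitted here.

```python
import random
from collections import Counter

def sample_mpm(parents, groups, n_offspring, rng=random.Random(0)):
    """Marginal-product-model sampling: estimate the joint frequency of
    each linkage group over the selected parents, then build offspring
    by drawing each group's bits as one intact unit.  Dependencies
    inside a group are preserved; groups are treated as independent."""
    length = len(parents[0])
    tables = []
    for g in groups:
        counts = Counter(tuple(p[i] for i in g) for p in parents)
        settings, weights = zip(*counts.items())
        tables.append((g, settings, weights))
    offspring = []
    for _ in range(n_offspring):
        child = [0] * length
        for g, settings, weights in tables:
            chosen = rng.choices(settings, weights=weights)[0]
            for locus, bit in zip(g, chosen):
                child[locus] = bit
        offspring.append(child)
    return offspring

# Example: two 3-bit groups; offspring mix whole building blocks only.
parents = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1, 1]]
print(sample_mpm(parents, [[0, 1, 2], [3, 4, 5]], 4))
```

Sampling whole groups is what avoids the disruption that uniform or one-point crossover inflicts on interacting features.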
Evolutionary Computation (2005) 13 (3): 279–302.
Published: 01 September 2005
Abstract
This paper identifies the sequential behavior of the linkage learning genetic algorithm, introduces the tightness time model for a single building block, and develops the connection between the sequential behavior and the tightness time model. By integrating the first-building-block model based on the sequential behavior, the tightness time model, and the connection between these two models, a convergence time model is constructed and empirically verified. The proposed convergence time model explains the exponentially growing time required by the linkage learning genetic algorithm when solving uniformly scaled problems.
Evolutionary Computation (2005) 13 (3): 353–385.
Published: 01 September 2005
Abstract
In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these types of applications, both the cost and the accuracy depend on the discretization errors introduced when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. There are three ingredients for discretization scheduling: population sizing, the estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved by using discretization scheduling.
Evolutionary Computation (2003) 11 (4): 381–415.
Published: 01 December 2003
Abstract
This paper discusses how the use of redundant representations influences the performance of genetic and evolutionary algorithms. Representations are redundant if the number of genotypes exceeds the number of phenotypes. A distinction is made between synonymously and non-synonymously redundant representations. Representations are synonymously redundant if the genotypes that represent the same phenotype are very similar to each other. Non-synonymously redundant representations do not allow genetic operators to work properly and result in a lower performance of evolutionary search. When using synonymously redundant representations, the performance of selectorecombinative genetic algorithms (GAs) depends on the modification of the initial supply. We have developed theoretical models for synonymously redundant representations showing that the necessary population size to solve a problem and the number of generations both go with O(2^{k_r}/r), where k_r is the order of redundancy and r is the number of genotypic building blocks (BBs) that represent the optimal phenotypic BB. As a result, uniformly redundant representations do not change the behavior of GAs. Only by increasing r, which means overrepresenting the optimal solution, does GA performance increase. Therefore, non-uniformly redundant representations can only be used advantageously if a priori information exists regarding the optimal solution. The validity of the proposed theoretical concepts is illustrated for the binary trivial voting mapping and the real-valued link-biased encoding. Our empirical investigations show that the developed population sizing and time to convergence models allow an accurate prediction of the empirical results.
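One step is worth spelling out: why uniform redundancy leaves GA behavior unchanged. Reading the abstract's quantities as 2^{k_r} genotypic BBs shared among 2^k phenotypic BBs (an assumption of mine about the setup), uniform redundancy gives each phenotypic BB an equal share:

```latex
\[
r_{\text{uniform}} \;=\; \frac{2^{k_r}}{2^{k}}
\quad\Longrightarrow\quad
\frac{2^{k_r}}{r_{\text{uniform}}} \;=\; 2^{k},
\]
% the same supply factor as a non-redundant encoding, so GA behavior is
% unchanged; only r > 2^{k_r}/2^k, i.e. overrepresenting the optimum,
% reduces the required population size.
```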
Evolutionary Computation (2003) 11 (3): 279–298.
Published: 01 September 2003
Abstract
This paper analyzes the impact of using noisy data sets in Pittsburgh-style learning classifier systems. This study was done using a particular kind of learning classifier system based on multiobjective selection. Our goal was to characterize the behavior of this kind of algorithm when dealing with noisy domains. For this reason, we developed a theoretical model for predicting the minimal achievable error in noisy domains. Combining this theoretical model for crisp learners with graphical representations of the evolved hypotheses through multiobjective techniques, we are able to bound the behavior of a learning classifier system. This kind of modeling lets us identify relevant characteristics of the evolved hypotheses, such as overfitting conditions that lead to hypotheses that generalize the concept to be learned poorly.
Evolutionary Computation (2003) 11 (3): 239–277.
Published: 01 September 2003
Abstract
The evolutionary learning mechanism in XCS strongly depends on its accuracy-based fitness approach. The approach is meant to result in an evolutionary drive from classifiers of low accuracy to those of high accuracy. Since, given inaccuracy, lower specificity often corresponds to lower accuracy, fitness pressure most often also results in a pressure towards higher specificity. Moreover, fitness pressure should cause the evolutionary process to be innovative in that it combines low-order building blocks of less accurate classifiers into higher-order building blocks with higher accuracy. This paper investigates how, when, and where accuracy-based fitness results in successful rule evolution in XCS. Along the way, a weakness in the current proportionate selection method in XCS is identified. Several problem bounds are derived that need to be obeyed to enable proper evolutionary pressure. Moreover, a fitness dilemma is identified that causes accuracy-based fitness to be misleading. Improvements are introduced to XCS to make fitness pressure more robust and overcome the fitness dilemma. Specifically, (1) tournament selection results in a much better fitness-bias exploitation, and (2) bilateral accuracy prevents the fitness dilemma. While the improvements stand for themselves, we believe they also contribute to the ultimate goal of an evolutionary learning system that is able to solve decomposable machine-learning problems quickly, accurately, and reliably. The paper also contributes to the further understanding of XCS in general and the fitness approach in XCS in particular.
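The identified weakness of proportionate selection can be illustrated generically (this is standard selection-scheme behavior, not code from the paper): when fitness differences are small, proportionate selection's bias toward the better classifier nearly vanishes, while tournament selection's bias depends only on the fitness ordering.

```python
import random

rng = random.Random(0)
fits = [1.00, 1.01]  # nearly equal fitnesses; index 1 is better

def proportionate(fits):
    """Roulette wheel: selection probability proportional to fitness."""
    return rng.choices(range(len(fits)), weights=fits)[0]

def tournament(fits, size=2):
    """Draw `size` candidates with replacement; the fittest wins."""
    return max(rng.choices(range(len(fits)), k=size), key=lambda i: fits[i])

for name, select in (("proportionate", proportionate), ("tournament", tournament)):
    wins = sum(select(fits) == 1 for _ in range(100_000))
    print(f"{name}: better classifier chosen {wins / 1000:.1f}% of the time")
# proportionate: ~50.2% (pressure vanishes); tournament: ~75% (order-based)
```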
Evolutionary Computation (2002) 10 (4): 317–344.
Published: 01 December 2002
Abstract
In the context of optimization by evolutionary algorithms (EAs), epistasis, deception, and scaling are well-known examples of problem difficulty characteristics. The presence of one such characteristic in the representation of a search problem indicates a certain type of difficulty the EA is to encounter during its search for globally optimal configurations. In this paper, we claim that the occurrence of symmetry in the representation is another problem difficulty characteristic and discuss one particular form, spin-flip symmetry, characterized by fitness-invariant permutations on the alphabet. Its usual effect on unspecialized EAs, premature convergence due to synchronization problems, is discussed in detail. We discuss five different ways to specialize EAs to cope with the symmetry: adapting the genetic operators, changing the fitness function, using a niching technique, using a distributed EA, and attaching a highly redundant genotype-phenotype mapping.
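Spin-flip symmetry is concrete enough to demonstrate in a few lines. The fitness function below is my own illustration, not the paper's: it is invariant under complementing every bit, so subpopulations can converge toward both complementary optima at once, and recombining them yields poor offspring (the synchronization problem).

```python
import random

def fitness(x):
    """Ising-like two-max: maximized by all-zeros AND all-ones, and
    invariant under the spin-flip permutation x -> complement(x)."""
    ones = sum(x)
    return max(ones, len(x) - ones)

x = [1, 0, 1, 1, 0, 1]
flipped = [1 - b for b in x]
assert fitness(x) == fitness(flipped)  # a fitness-invariant permutation

# Uniform crossover of the two complementary optima: offspring tend to
# land in the low-fitness mixed region, illustrating why unspecialized
# EAs converge prematurely on symmetric problems.
a, b = [1] * 6, [0] * 6
child = [random.choice(pair) for pair in zip(a, b)]
print(child, fitness(child))  # typically well below the optimum of 6
```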
Evolutionary Computation (2002) 10 (1): 75–97.
Published: 01 March 2002
Abstract
When using genetic and evolutionary algorithms for network design, choosing a good representation scheme for the construction of the genotype is important for algorithm performance. One of the most common representation schemes for networks is the characteristic vector representation. However, when encoding trees this way, crossover and mutation produce invalid individuals that are either under- or overspecified. When constructing offspring or repairing invalid individuals that do not represent a tree, it is impossible to distinguish the relative importance of the links that should be used. These problems can be overcome by transferring the concept of random keys from scheduling and ordering problems to the encoding of trees. This paper investigates the performance of a simple genetic algorithm (SGA) using network random keys (NetKeys) for the one-max tree problem and a real-world problem. The comparison between network random keys and the characteristic vector encoding shows that despite the effects of stealth mutation, which favors the characteristic vector representation, selectorecombinative SGAs with NetKeys have some advantages for small and easy optimization problems. With more complex problems, SGAs with network random keys significantly outperform SGAs using characteristic vectors. This paper shows that random keys can be used for the encoding of trees, and that genetic algorithms using network random keys are able to solve complex tree problems much faster than when using the characteristic vector. Users should therefore be encouraged to use network random keys for the representation of trees.
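The decoding idea behind NetKeys can be sketched directly: every candidate link carries a continuous key, and only the keys' relative order decides which links enter the tree, so every key vector decodes to a valid spanning tree, and crossover or mutation on the keys can never produce an invalid individual. Below is a minimal Kruskal-style decoder; the ordering and tie-handling conventions are my simplifications.

```python
import random
from itertools import combinations

def decode_netkeys(n_nodes, keys):
    """keys: one float per candidate edge, in the order produced by
    combinations(range(n_nodes), 2).  Higher key = more important link.
    Insert edges by descending key, skipping those that close a cycle,
    until a spanning tree (n_nodes - 1 edges) is built."""
    edges = list(combinations(range(n_nodes), 2))
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for _, (u, v) in sorted(zip(keys, edges), reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:          # no cycle: accept the link
            parent[ru] = rv
            tree.append((u, v))
            if len(tree) == n_nodes - 1:
                break
    return tree

rng = random.Random(0)
keys = [rng.random() for _ in range(5 * 4 // 2)]  # 5 nodes, 10 candidate edges
print(decode_netkeys(5, keys))  # always decodes to a valid spanning tree
```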
Evolutionary Computation (2000) 8 (3): 311–340.
Published: 01 September 2000
Abstract
This paper proposes an algorithm that uses an estimation of the joint distribution of promising solutions in order to generate new candidate solutions. The algorithm is situated in the context of genetic and evolutionary computation and the algorithms based on the estimation of distributions. The proposed algorithm is called the Bayesian Optimization Algorithm (BOA). To estimate the distribution of promising solutions, techniques for modeling multivariate data by Bayesian networks are used. The BOA identifies, reproduces, and mixes building blocks up to a specified order. It is independent of the ordering of the variables in strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, but it is not essential. First experiments were done with additively decomposable problems with both nonoverlapping and overlapping building blocks. The proposed algorithm is able to solve all but one of the tested problems in linear or close to linear time with respect to the problem size. Except for the maximal order of interactions to be covered, the algorithm does not use any prior knowledge about the problem. The BOA represents a step toward alleviating the problem of identifying and mixing building blocks correctly to obtain good solutions for problems with very limited domain information.
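BOA's outer loop is compact; the hard part is learning the Bayesian network, which does not fit a short sketch. The skeleton below keeps the select/model/sample/replace structure but substitutes a univariate marginal model as an explicit placeholder where BOA would learn and sample a Bayesian network, so as written it behaves like a simpler estimation-of-distribution algorithm, not BOA proper.

```python
import random

rng = random.Random(0)

def boa_skeleton(fitness, length, pop_size=60, gens=30, trunc=0.5):
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        # 1. Select promising solutions (truncation selection here).
        pop.sort(key=fitness, reverse=True)
        promising = pop[: int(trunc * pop_size)]
        # 2. Model the promising set.  PLACEHOLDER: independent per-bit
        #    marginals.  BOA instead learns a Bayesian network, which is
        #    what lets it capture building blocks of higher order.
        p1 = [sum(ind[i] for ind in promising) / len(promising)
              for i in range(length)]
        # 3. Sample offspring from the model.
        offspring = [[int(rng.random() < p1[i]) for i in range(length)]
                     for _ in range(pop_size - len(promising))]
        # 4. Replace the non-selected part of the population.
        pop = promising + offspring
    return max(pop, key=fitness)

print(boa_skeleton(sum, 20))  # OneMax: expect the all-ones string
```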
Evolutionary Computation (1999) 7 (4): 429–449.
Published: 01 December 1999
Abstract
This paper examines the scalability of several types of parallel genetic algorithms (GAs). The objective is to determine the optimal number of processors that can be used by each type to minimize the execution time. The first part of the paper considers algorithms with a single population. The investigation focuses on an implementation where the population is distributed to several processors, but the results are applicable to more common master-slave implementations, where the population is entirely stored in a master processor and multiple slaves are used to evaluate the fitness. The second part of the paper deals with parallel GAs with multiple populations. It first considers a bounding case where the connectivity, the migration rate, and the frequency of migrations are set to their maximal values. Then, arbitrary regular topologies with lower migration rates are considered and the frequency of migrations is set to its lowest value. The investigation is mainly theoretical, but experimental evidence with an additively decomposable function is included to illustrate the accuracy of the theory. In all cases, the calculations show that the optimal number of processors that minimizes the execution time is directly proportional to the square root of the population size and the fitness evaluation time. Since these two factors usually increase as the domain becomes more difficult, the results of the paper suggest that parallel GAs can integrate large numbers of processors and significantly reduce the execution time of many practical applications.
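The square-root result in the closing sentence follows from a one-line cost model. A hedged reconstruction with my own notation (the paper's analysis covers more cases): with a population of n spread over P processors, the per-generation time is evaluation plus communication,

```latex
\[
T(P) \;=\; \frac{n\,T_f}{P} \;+\; P\,T_c ,
\]
% where T_f is the time of one fitness evaluation and T_c the
% per-processor communication cost.  Setting dT/dP = 0:
\[
-\frac{n\,T_f}{P^{2}} + T_c = 0
\quad\Longrightarrow\quad
P^{*} \;=\; \sqrt{\frac{n\,T_f}{T_c}} ,
\]
% i.e. the optimal processor count grows with the square root of both
% the population size and the fitness evaluation time.
```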
Evolutionary Computation (1999) 7 (4): 377–398.
Published: 01 December 1999
Abstract
This paper presents the linkage identification by non-monotonicity detection (LIMD) procedure and its extension to overlapping functions through the tightness detection (TD) procedure. The LIMD identifies linkage groups directly by performing order-2 simultaneous perturbations on a pair of loci to detect monotonicity or non-monotonicity of fitness changes. The LIMD can identify linkage groups of order at most k when it is applied to O(2^k) strings. The TD procedure calculates the tightness of linkage between a pair of loci based on the linkage groups obtained by the LIMD. By removing loci with weak tightness from linkage groups, correct linkage groups are obtained for overlapping functions, which were considered difficult for linkage identification procedures.
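The order-2 perturbation test at LIMD's core can be paraphrased in code. Treat the predicate below as my simplified reading of the non-monotonicity condition; the exact condition and the follow-up tightness detection are in the paper.

```python
def nonmonotonic(f, s, i, j):
    """Order-2 simultaneous perturbation at loci i and j of string s.
    If both single flips improve fitness, a monotone (separable) pair
    should improve even more when flipped together; likewise for
    deteriorations.  A violation signals linkage between i and j."""
    def flip(s, *idx):
        t = list(s)
        for k in idx:
            t[k] = 1 - t[k]
        return t
    d_i  = f(flip(s, i))    - f(s)
    d_j  = f(flip(s, j))    - f(s)
    d_ij = f(flip(s, i, j)) - f(s)
    if d_i > 0 and d_j > 0:
        return not (d_ij > d_i and d_ij > d_j)
    if d_i < 0 and d_j < 0:
        return not (d_ij < d_i and d_ij < d_j)
    return False

# XOR-like pair: each flip alone helps, both together do not help more.
f = lambda s: (s[0] ^ s[1]) + sum(s[2:])
print(nonmonotonic(f, [0, 0, 1], 0, 1))  # True: loci 0 and 1 are linked
```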
Evolutionary Computation (1999) 7 (3): 231–253.
Published: 01 September 1999
Abstract
This paper presents a model to predict the convergence quality of genetic algorithms based on the size of the population. The model is based on an analogy between selection in GAs and one-dimensional random walks. Using the solution to a classic random walk problem—the gambler's ruin—the model naturally incorporates previous knowledge about the initial supply of building blocks (BBs) and correct selection of the best BB over its competitors. The result is an equation that relates the size of the population with the desired quality of the solution, as well as the problem size and difficulty. The accuracy of the model is verified with experiments using additively decomposable functions of varying difficulty. The paper demonstrates how to adjust the model to account for noise present in the fitness evaluation and for different tournament sizes.
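The gambler's-ruin step can be reconstructed in outline (a sketch under textbook assumptions; the paper's full model also accounts for noise and drift). Take the number of copies of the best BB as the gambler's stake: it starts at x_0 = n/2^k copies in a population of size n, and each BB competition is won with probability p > 1/2 and lost with q = 1 - p.

```latex
% Classic gambler's ruin with absorbing barriers at 0 and n:
\[
P_{\text{success}}
  = \frac{1 - (q/p)^{x_0}}{1 - (q/p)^{n}}
  \;\approx\; 1 - \left(\frac{q}{p}\right)^{x_0},
\qquad x_0 = \frac{n}{2^{k}} .
\]
% Requiring failure probability at most alpha and solving for n:
\[
\left(\frac{q}{p}\right)^{n/2^{k}} \le \alpha
\quad\Longrightarrow\quad
n \;\ge\; \frac{2^{k}\,\ln \alpha}{\ln (q/p)} ,
\]
% tying the population size to the desired solution quality (alpha),
% the BB order k (initial supply), and the difficulty (through p).
```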
Evolutionary Computation (1996) 4 (2): 113–131.
Published: 01 June 1996
Abstract
This paper analyzes the effect of noise on different selection mechanisms for genetic algorithms (GAs). Models for several selection schemes are developed that successfully predict the convergence characteristics of GAs within noisy environments. The selection schemes modeled in this paper include proportionate selection, tournament selection, (μ, λ) selection, and linear ranking selection. An allele-wise model for convergence in the presence of noise is developed for the OneMax domain, and then extended to more complex domains where the building blocks are uniformly scaled. These models are shown to accurately predict the convergence rate of GAs for a wide range of noise levels.
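The setting is easy to reproduce in simulation. The toy run below is my construction, not the paper's experiments: binary tournament selection on OneMax with additive Gaussian evaluation noise, where raising the noise level visibly slows convergence, the regime the paper's models quantify.

```python
import random

rng = random.Random(0)

def run(noise_sd, n=100, length=50, gens=60):
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    noisy = lambda x: sum(x) + rng.gauss(0.0, noise_sd)  # noisy OneMax
    for _ in range(gens):
        nxt = []
        for _ in range(n):
            a, b = rng.choice(pop), rng.choice(pop)
            w = list(a if noisy(a) > noisy(b) else b)  # binary tournament
            w[rng.randrange(length)] = rng.randint(0, 1)  # light mutation
            nxt.append(w)
        pop = nxt
    return sum(map(sum, pop)) / (n * length)  # mean proportion of ones

for sd in (0.0, 2.0, 8.0):
    print(f"noise sd {sd}: final ones ratio {run(sd):.2f}")
# higher noise -> lower final ratio, i.e. slower convergence
```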
Evolutionary Computation (1994) 2 (1): 37–66.
Published: 01 March 1994
Abstract
We approach the difficult task of analyzing the complex behavior of even the simplest learning classifier system (LCS) by isolating one crucial subfunction in the LCS learning algorithm: covering through niching. The LCS must maintain a population of diverse rules that together solve a problem (e.g., classify examples). To maintain a diverse population while applying the GA's selection operator, the LCS must incorporate some kind of niching mechanism. The natural way to accomplish niching in an LCS is to force competing rules to share resources (i.e., rewards). This implicit LCS fitness sharing is similar to the explicit fitness sharing used in many niched GAs. Indeed, the LCS implicit sharing algorithm can be mapped onto explicit fitness sharing with a one-to-one correspondence between algorithm components. This mapping is important because several studies of explicit fitness sharing, and of niching in GAs generally, have produced key insights and analytical tools for understanding the interaction of the niching and selection forces. We can now bring those results to bear in understanding the fundamental type of cooperation (a.k.a. weak cooperation) that an LCS must promote.
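The explicit fitness sharing that implicit LCS sharing maps onto has a standard form, shown below following the common sharing-function formulation (triangular sharing kernel; the parameters are illustrative). The one-to-one mapping between the two algorithms is the paper's contribution and is not reproduced here.

```python
def shared_fitness(pop, fitness, distance, sigma_share=3.0, alpha=1.0):
    """Explicit fitness sharing: each individual's fitness is divided by
    its niche count, the summed similarity to everything in its niche.
    Rules (or individuals) crowding the same niche thus split that
    niche's reward, which is what maintains diverse subpopulations."""
    def sh(d):
        return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0
    return [fitness(x) / sum(sh(distance(x, y)) for y in pop) for x in pop]

# Hamming-distance niches on short bitstrings:
pop = [[1, 1, 1, 1], [1, 1, 1, 0], [0, 0, 0, 0]]
ham = lambda a, b: sum(u != v for u, v in zip(a, b))
print(shared_fitness(pop, sum, ham))  # crowded niche members share payoff
```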