1-20 of 39 Articles
Evolutionary Computation (2009) 17 (3): 275–306.
Published: 01 September 2009
Abstract
Learning with imbalanced data is one of the recent challenges in machine learning. Various solutions have been proposed to address this problem, such as modifying the learning methods or applying a preprocessing stage. Within preprocessing focused on balancing data, two tendencies exist: reducing the set of examples (undersampling) or replicating minority class examples (oversampling). Undersampling with imbalanced datasets can be considered a prototype selection procedure whose purpose is to balance the dataset in order to achieve a high classification rate, avoiding the bias toward majority class examples. Evolutionary algorithms have been used for classical prototype selection with good results, where the fitness function is associated with the classification and reduction rates. In this paper, we propose a set of methods called evolutionary undersampling that take into consideration the nature of the problem and use different fitness functions to obtain a good trade-off between the balance of the class distribution and performance. The study includes a taxonomy of the approaches and an overall comparison among our models and state-of-the-art undersampling methods. The results have been contrasted using nonparametric statistical procedures and show that evolutionary undersampling outperforms the nonevolutionary models as the degree of imbalance increases.
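As an illustration of the kind of trade-off such a fitness function encodes, the sketch below scores a binary chromosome that selects majority-class examples, combining a geometric-mean classification measure with a penalty for remaining imbalance. The helper classify (any leave-one-out style evaluator) and the penalty weight are assumptions for illustration, not the fitness functions proposed in the paper.

    import numpy as np

    def gmean(y_true, y_pred):
        """Geometric mean of per-class recall."""
        classes = np.unique(y_true)
        recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
        return float(np.prod(recalls) ** (1.0 / len(recalls)))

    def undersampling_fitness(mask, X_maj, y_maj, X_min, y_min, classify, penalty=0.2):
        """Score a 0/1 numpy mask over majority-class examples: classification
        quality on the reduced set minus a term that grows with residual imbalance.
        classify(X, y) is a placeholder returning leave-one-out predictions."""
        X = np.vstack([X_maj[mask == 1], X_min])
        y = np.concatenate([y_maj[mask == 1], y_min])
        balance_gap = abs(1.0 - mask.sum() / len(X_min))  # 0 when classes are balanced
        return gmean(y, classify(X, y)) - penalty * balance_gap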
Evolutionary Computation (2009) 17 (2): 135–166.
Published: 01 June 2009
Abstract
Many-objective problems represent a major challenge in the field of evolutionary multiobjective optimization—in terms of search efficiency, computational cost, decision making, visualization, and so on. This leads to various research questions, in particular whether certain objectives can be omitted in order to overcome or at least diminish the difficulties that arise when many, that is, more than three, objective functions are involved. This study addresses this question from different perspectives. First, we investigate how adding or omitting objectives affects the problem characteristics and propose a general notion of conflict between objective sets as a theoretical foundation for objective reduction. Second, we present both exact and heuristic algorithms to systematically reduce the number of objectives, while preserving as much as possible of the dominance structure of the underlying optimization problem. Third, we demonstrate the usefulness of the proposed objective reduction method in the context of both decision making and search for a radar waveform application as well as for well-known test functions.
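To make the notion of preserving the dominance structure concrete, here is a brute-force sketch (not the paper's exact algorithms) that checks, on a finite sample of objective vectors, whether an objective subset induces the same weak-dominance relation as the full set, and exhaustively searches for a smallest such subset; minimization is assumed, and function names are illustrative.

    import itertools
    import numpy as np

    def weakly_dominates(a, b):
        return bool(np.all(a <= b))   # minimization assumed

    def preserves_dominance(F, subset):
        """True if the objectives in `subset` induce the same pairwise weak-dominance
        relation as the full objective set on the sample F (rows = solutions)."""
        for i, j in itertools.permutations(range(len(F)), 2):
            if weakly_dominates(F[i], F[j]) != weakly_dominates(F[i][subset], F[j][subset]):
                return False
        return True

    def smallest_preserving_subset(F):
        """Exhaustive search for a minimum objective subset (exponential in the
        number of objectives; heuristics are needed for larger cases)."""
        k = F.shape[1]
        for size in range(1, k + 1):
            for subset in itertools.combinations(range(k), size):
                if preserves_dominance(F, list(subset)):
                    return list(subset)
        return list(range(k))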
Evolutionary Computation (2009) 17 (1): 3–19.
Published: 01 March 2009
Abstract
Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, the theoretical understanding of the interplay of different optimization methods is rare. In this paper, we make a first step into the rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.
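A minimal sketch of the kind of hybrid analyzed: the classic maximal-matching 2-approximation for vertex cover seeds a simple (1+1) evolutionary algorithm, which then tries to shrink the cover by standard bit-flip mutation. The function names and step budget are illustrative assumptions, not the paper's setup.

    import random

    def matching_cover(edges):
        """Classic 2-approximation: both endpoints of a greedily built maximal matching."""
        cover, matched = set(), set()
        for u, v in edges:
            if u not in matched and v not in matched:
                cover |= {u, v}
                matched |= {u, v}
        return cover

    def is_cover(selected, edges):
        return all(u in selected or v in selected for u, v in edges)

    def hybrid_one_plus_one_ea(nodes, edges, steps=10000):
        """Seed a (1+1) EA with the approximation and let bit-flip mutation
        try to shrink the cover while keeping it feasible."""
        current = matching_cover(edges)
        n = len(nodes)
        for _ in range(steps):
            offspring = {v for v in nodes
                         if (v in current) != (random.random() < 1.0 / n)}
            if is_cover(offspring, edges) and len(offspring) <= len(current):
                current = offspring
        return current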
Evolutionary Computation (2008) 16 (3): 289–313.
Published: 01 September 2008
Abstract
We present a statistical model of empirical optimization that admits the creation of algorithms with explicit and intuitively defined desiderata. Because No Free Lunch theorems dictate that no optimization algorithm can be considered more efficient than any other when considering all possible functions, the desired function class plays a prominent role in the model. In particular, this provides a direct way to answer the traditionally difficult question of what algorithm is best matched to a particular class of functions. Among the benefits of the model are the ability to specify the function class in a straightforward manner, a natural way to specify noisy or dynamic functions, and a new source of insight into No Free Lunch theorems for optimization.
Evolutionary Computation (2008) 16 (1): 1–30.
Published: 01 March 2008
Abstract
The dynamic optimization problem concerns finding an optimum in a changing environment. In the field of evolutionary algorithms, this implies dealing with a time-changing fitness landscape. In this paper we compare different techniques for integrating motion information into an evolutionary algorithm, in the case where it has to follow a time-changing optimum, under the assumption that the changes follow a nonrandom law. Such a law can be estimated in order to improve the optimum tracking capabilities of the algorithm. In particular, we will focus on first order dynamical laws to track moving objects. A vision-based tracking robotic application is used as a testbed for experimental comparison.
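One simple way to integrate motion information, sketched below under a first-order (constant-velocity) assumption: estimate the optimum's velocity from the two most recent best solutions and translate the population accordingly before the next generation. The class and method names are illustrative, not the paper's.

    import numpy as np

    class FirstOrderPredictor:
        """Constant-velocity estimate of a moving optimum, used to shift the population."""
        def __init__(self):
            self.prev_best = None
            self.velocity = None

        def observe(self, best):
            best = np.asarray(best, dtype=float)
            if self.prev_best is not None:
                self.velocity = best - self.prev_best   # first-order dynamical law
            self.prev_best = best

        def shift(self, population):
            if self.velocity is None:
                return population
            return population + self.velocity           # translate all individuals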
Evolutionary Computation (2007) 15 (3): 253–289.
Published: 01 September 2007
Abstract
Evolutionary algorithms rarely deal with ontogenetic, non-inherited alteration of genetic information because they are based on a direct genotype-phenotype mapping. In contrast, several processes have been discovered in nature which alter genetic information encoded in DNA before it is translated into amino-acid chains. Ontogenetically altered genetic information is not inherited but extensively used in regulation and development of phenotypes, giving organisms the ability to, in a sense, re-program their genotypes according to environmental cues. An example of post-transcriptional alteration of gene-encoding sequences is the process of RNA editing. Here we introduce a novel agent-based model of genotype editing and a computational study of its evolutionary performance in static and dynamic environments. This model builds on our previous Genetic Algorithm with Editing, but presents a fundamentally novel architecture in which coding and non-coding genetic components are allowed to co-evolve. Our goals are: (1) to study the role of RNA editing regulation in the evolutionary process, (2) to understand how genotype editing leads to a different, and novel, evolutionary search algorithm, and (3) to identify the conditions under which genotype editing improves the optimization performance of traditional evolutionary algorithms. We show that genotype editing allows evolving agents to perform better in several classes of fitness functions, both in static and dynamic environments. We also present evidence that the indirect genotype-phenotype mapping resulting from genotype editing leads to a better exploration/exploitation compromise in the search process. Therefore, we show that our biologically-inspired model of genotype editing can be used both to facilitate understanding of the evolutionary role of RNA regulation based on genotype editing in biology, and to advance the current state of research in evolutionary computation.
Evolutionary Computation (2007) 15 (2): 133–168.
Published: 01 June 2007
Abstract
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
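For concreteness, a sketch of the two weight-update rules at issue: the Widrow-Hoff (delta) rule used for classifier weights in XCSF, and a recursive least squares alternative of the kind proposed. Variable names, the learning rate, and the initialization constant are illustrative assumptions.

    import numpy as np

    def widrow_hoff(w, x, y, eta=0.2):
        """Delta-rule step on a classifier's weight vector w for (bias-extended)
        input x and target payoff y; convergence slows when the eigenvalues of
        the input autocorrelation matrix are widely spread."""
        return w + eta * (y - w @ x) * x

    class RecursiveLeastSquares:
        """Recursive least squares estimate of the weights (no forgetting factor)."""
        def __init__(self, dim, delta=1000.0):
            self.w = np.zeros(dim)
            self.P = np.eye(dim) * delta   # large initial value => weak prior

        def update(self, x, y):
            Px = self.P @ x
            gain = Px / (1.0 + x @ Px)
            self.w = self.w + gain * (y - self.w @ x)
            self.P = self.P - np.outer(gain, Px)
            return self.w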
Evolutionary Computation (2007) 15 (1): 1–28.
Published: 01 March 2007
Abstract
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMA-ES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.
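A sketch of the selection ingredients named above, with crowding distance as the secondary sorting criterion (the contributing-hypervolume variant is not shown). This is an illustrative re-implementation under a minimization assumption, not the paper's code.

    import numpy as np

    def dominates(a, b):
        """Pareto dominance for minimization."""
        return np.all(a <= b) and np.any(a < b)

    def nondominated_fronts(F):
        """Partition objective vectors F (one row per individual) into successive fronts."""
        remaining = list(range(len(F)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    def crowding_distance(F, front):
        dist = {i: 0.0 for i in front}
        for m in range(F.shape[1]):
            order = sorted(front, key=lambda i: F[i, m])
            dist[order[0]] = dist[order[-1]] = np.inf       # keep boundary points
            span = float(F[order[-1], m] - F[order[0], m]) or 1.0
            for k in range(1, len(order) - 1):
                dist[order[k]] += float(F[order[k + 1], m] - F[order[k - 1], m]) / span
        return dist

    def environmental_selection(F, mu):
        """Keep mu individuals, ranked by front and then by descending crowding distance."""
        chosen = []
        for front in nondominated_fronts(F):
            dist = crowding_distance(F, front)
            for i in sorted(front, key=lambda i: -dist[i]):
                if len(chosen) < mu:
                    chosen.append(i)
        return chosen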
Evolutionary Computation (2006) 14 (4): 383–409.
Published: 01 December 2006
Abstract
Genetic Algorithms perform crossovers effectively when linkage sets — sets of variables tightly linked to form building blocks — are identified. Several methods have been proposed to detect the linkage sets. Perturbation methods (PMs) investigate fitness differences caused by perturbations of gene values, and estimation of distribution algorithms (EDAs) estimate the distribution of promising strings. In this paper, we propose a novel approach combining both of them, which detects dependencies of variables by estimating the distribution of strings clustered according to fitness differences. The proposed algorithm, called the Dependency Detection for Distribution Derived from fitness Differences (D5), can detect dependencies of a class of functions that are difficult for EDAs, and requires less computational cost than PMs.
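A rough sketch of the idea under simplifying assumptions (binary strings, clusters formed by grouping equal fitness differences, an arbitrarily chosen entropy threshold): perturb one locus across the population, cluster strings by the resulting fitness difference, and flag loci whose values are nearly constant within a cluster as linked. The real algorithm's clustering and thresholds differ.

    import numpy as np

    def entropy(values):
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def detect_linkage(population, fitness, locus, threshold=0.3):
        """Loci linked to `locus`, judged by low entropy inside clusters of strings
        that show the same fitness difference when `locus` is flipped."""
        pop = np.array(population)          # binary strings as rows of 0/1 integers
        diffs = []
        for x in pop:
            y = x.copy()
            y[locus] ^= 1                   # perturb one gene
            diffs.append(fitness(y) - fitness(x))
        diffs = np.round(diffs, 6)
        linked = set()
        for value in np.unique(diffs):      # crude clustering: equal fitness differences
            cluster = pop[diffs == value]
            if len(cluster) < 2:
                continue
            for j in range(pop.shape[1]):
                if j != locus and entropy(cluster[:, j]) < threshold:
                    linked.add(j)
        return linked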
Evolutionary Computation (2006) 14 (3): 255–275.
Published: 01 September 2006
Abstract
This paper investigates σ-self-adaptation for real valued evolutionary algorithms on linear fitness functions. We identify the step-size logarithm log σ as a key quantity to understand strategy behavior. Knowing the bias of mutation, recombination, and selection on log σ is sufficient to explain σ-dynamics and strategy behavior in many cases, even from previously reported results on non-linear and/or noisy fitness functions. On a linear fitness function, if intermediate multi-recombination is applied on the object parameters, the i-th best and the i-th worst individual have the same σ-distribution. Consequently, the correlation between fitness and step-size σ is zero. Assuming additionally that σ-changes due to mutation and recombination are unbiased, then σ-self-adaptation enlarges σ if and only if μ < λ/2, given (μ, λ)-truncation selection. Experiments show the relevance of the given assumptions.
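A small simulation sketch of that setting, written under the stated assumptions (isotropic Gaussian mutation, global intermediate recombination of the object parameters, log-normal step-size mutation, geometric-mean recombination of σ so that changes are unbiased on log σ). Parameter values are arbitrary; the printed drifts are only expected to agree qualitatively with the μ < λ/2 condition.

    import numpy as np

    def sigma_drift(mu, lam, dim=10, tau=0.3, generations=2000, seed=0):
        """Average per-generation change of log(sigma) for a multirecombinant
        (mu, lam)-ES with sigma-self-adaptation on the linear function f(x) = x_1."""
        rng = np.random.default_rng(seed)
        x, log_sigma = np.zeros(dim), 0.0
        drift = 0.0
        for _ in range(generations):
            log_sigmas = log_sigma + tau * rng.standard_normal(lam)   # unbiased on log sigma
            offspring = x + np.exp(log_sigmas)[:, None] * rng.standard_normal((lam, dim))
            best = np.argsort(offspring[:, 0])[-mu:]                  # maximize f(x) = x_1
            new_log_sigma = log_sigmas[best].mean()                   # geometric-mean recombination of sigma
            drift += new_log_sigma - log_sigma
            x = offspring[best].mean(axis=0)                          # intermediate recombination of x
            log_sigma = new_log_sigma
        return drift / generations

    # Qualitative expectation from the result above: positive drift for mu < lam/2,
    # drift near zero for mu = lam/2.
    print(sigma_drift(mu=3, lam=12), sigma_drift(mu=6, lam=12))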
Evolutionary Computation (2006) 14 (2): 129–156.
Published: 01 June 2006
Abstract
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Evolutionary Computation (1999) 7 (4): 331–352.
Published: 01 December 1999
Abstract
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in gaining clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at why straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
Evolutionary Computation (1999) 7 (3): 205–230.
Published: 01 September 1999
Abstract
In this paper, we study the problem features that may cause a multi-objective genetic algorithm (GA) difficulty in converging to the true Pareto-optimal front. Identification of such features helps us develop difficult test problems for multi-objective optimization. Multi-objective test problems are constructed from single-objective optimization problems, thereby allowing known difficult features of single-objective problems (such as multi-modality, isolation, or deception) to be directly transferred to the corresponding multi-objective problem. In addition, test problems having features specific to multi-objective optimization are also constructed. More importantly, these difficult test problems will enable researchers to test their algorithms for specific aspects of multi-objective optimization.
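The constructive principle can be illustrated with a two-objective skeleton of the form f1(x1), f2 = g(x2, ..., xn) * h(f1, g): placing a difficult single-objective function in g transfers that difficulty to the multi-objective problem, while h shapes the Pareto front. The concrete f1, g, and h below are illustrative choices, not the paper's specific test suite.

    import numpy as np

    def f1(x):                    # controls the position along the Pareto front
        return float(x[0])

    def g(x):                     # put the single-objective difficulty here
        return 1.0 + 9.0 * float(np.mean(x[1:]))

    def h(f1_value, g_value):     # shapes the front (convex in this choice)
        return 1.0 - np.sqrt(f1_value / g_value)

    def objectives(x):
        """Two-objective test problem assembled as (f1, g * h), x in [0, 1]^n."""
        x = np.asarray(x, dtype=float)
        return f1(x), g(x) * h(f1(x), g(x))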
Evolutionary Computation (1999) 7 (2): 109–124.
Published: 01 June 1999
Abstract
In the light of a recently derived evolution equation for genetic algorithms we consider the schema theorem and the building block hypothesis. We derive a schema theorem based on the concept of effective fitness showing that schemata of higher than average effective fitness receive an exponentially increasing number of trials over time. The equation makes manifest the content of the building block hypothesis showing how fit schemata are constructed from fit sub-schemata. However, we show that, generically, there is no preference for short, low-order schemata. In the case where schema reconstruction is favored over schema destruction, large schemata tend to be favored. As a corollary of the evolution equation we prove Geiringer's theorem.
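As a hedged sketch of the notation behind the claim (one common way to formalize effective fitness, not necessarily the paper's exact derivation): effective fitness is defined as the quantity that turns the usual schema inequality into an equality, so persistently above-average effective fitness implies geometric growth in the number of trials a schema receives.

    \mathbb{E}\big[m(H,\,t+1)\big] \;=\; m(H,\,t)\,\frac{f_{\mathrm{eff}}(H,\,t)}{\bar{f}(t)},
    \qquad
    \frac{f_{\mathrm{eff}}(H,\,t)}{\bar{f}(t)} \,\ge\, 1+\varepsilon \ \text{ for all } t
    \;\Longrightarrow\;
    \mathbb{E}\big[m(H,\,t)\big] \,\ge\, m(H,\,0)\,(1+\varepsilon)^{t}.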
Evolutionary Computation (1999) 7 (1): 1–17.
Published: 01 March 1999
Abstract
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable runtime costs.
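For readers unfamiliar with decoding in job shop GAs, the following sketch shows a standard (and simpler than the paper's) decoding step: the chromosome is a permutation in which each job number appears once per operation, and every operation is started as early as machine and job availability allow. The data layout is an assumption for illustration.

    from collections import defaultdict

    def decode(chromosome, jobs):
        """jobs[j] is the ordered list of (machine, processing_time) operations of job j;
        the chromosome lists each job number once per operation."""
        next_op = defaultdict(int)
        machine_free = defaultdict(float)
        job_free = defaultdict(float)
        operations = []
        for j in chromosome:
            machine, duration = jobs[j][next_op[j]]
            start = max(machine_free[machine], job_free[j])
            operations.append((j, next_op[j], machine, start))
            machine_free[machine] = job_free[j] = start + duration
            next_op[j] += 1
        return operations, max(machine_free.values())   # schedule and makespan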
Evolutionary Computation (1998) 6 (4): 293–309.
Published: 01 December 1998
Abstract
Parsimony pressure, the explicit penalization of larger programs, has been increasingly used as a means of controlling code growth in genetic programming. However, in many cases parsimony pressure degrades the performance of the genetic program. In this paper we show that poor average results with parsimony pressure are a result of “failed” populations that overshadow the results of populations that incorporate parsimony pressure successfully. Additionally, we show that the effect of parsimony pressure can be measured by calculating the relationship between program size and performance within the population. This measure can be used as a partial indicator of success or failure for individual populations.
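The measure described can be sketched as a simple population statistic, the correlation between program size and performance within one population. The function below is an illustrative reading of that measure; how its sign and magnitude map to "success" or "failure" is left to the paper.

    import numpy as np

    def size_performance_correlation(sizes, fitnesses):
        """Pearson correlation between program size and fitness within one population;
        returns 0.0 when either quantity has no variance."""
        sizes = np.asarray(sizes, dtype=float)
        fitnesses = np.asarray(fitnesses, dtype=float)
        if sizes.std() == 0.0 or fitnesses.std() == 0.0:
            return 0.0
        return float(np.corrcoef(sizes, fitnesses)[0, 1])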
Evolutionary Computation (1998) 6 (3): 201–229.
Published: 01 September 1998
Abstract
L. M. Adleman launched the field of DNA computing with a demonstration in 1994 that strands of DNA could be used to solve the Hamiltonian path problem for a simple graph. He also identified three broad categories of open questions for the field. First, is DNA capable of universal computation? Second, what kinds of algorithms can DNA implement? Third, can the error rates in the manipulations of the DNA be controlled enough to allow for useful computation? In the two years that have followed, theoretical work has shown that DNA is in fact capable of universal computation. Furthermore, algorithms for solving interesting questions, like breaking the Data Encryption Standard, have been described using currently available technology and methods. Finally, a few algorithms have been proposed to handle some of the apparently crippling error rates in a few of the common processes used to manipulate DNA. It is thus unlikely that DNA computation is doomed to be only a passing curiosity. However, much work remains to be done on the containment and correction of errors. It is far from clear if the problems in the error rates can be solved sufficiently to ever allow for general-purpose computation that will challenge the more popular substrates for computation. Unfortunately, biological demonstrations of the theoretical results have been sadly lacking. To date, only the simplest of computations have been carried out in DNA. To make significant progress, the field will require both the assessment of the practicality of the different manipulations of DNA and the implementation of algorithms for realistic problems. Theoreticians, in collaboration with experimentalists, can contribute to this research program by settling on a small set of practical and efficient models for DNA computation.
Evolutionary Computation (1998) 6 (2): 109–127.
Published: 01 June 1998
Abstract
The paper is in three parts. First, we use simple adversary arguments to redevelop and explore some of the no-free-lunch (NFL) theorems and perhaps extend them a little. Second, we clarify the relationship of NFL theorems to algorithm theory and complexity classes such as NP. We claim that NFL is weaker in the sense that the constraints implied by the conjectures of traditional algorithm theory on what an evolutionary algorithm may be expected to accomplish are far more severe than those implied by NFL. Third, we take a brief look at how natural evolution relates to computation and optimization. We suggest that the evolution of complex systems exhibiting high degrees of orderliness is not equivalent in difficulty to optimizing hard (in the complexity sense) problems, and that the optimism in genetic algorithms (GAs) as universal optimizers is not justified by natural evolution. This is an informal tutorial paper—most of the information presented is not formally proven, and is either “common knowledge” or formally proven elsewhere. Some of the claims are intuitions based on experience with algorithms, and in a more formal setting should be classified as conjectures.
Evolutionary Computation (1998) 6 (1): 1–24.
Published: 01 March 1998
Abstract
Scheduling of a bus transit system must be formulated as an optimization problem, if the level of service to passengers is to be maximized within the available resources. In this paper, we present a formulation of a transit system scheduling problem with the objective of minimizing the overall waiting time of transferring and nontransferring passengers while satisfying a number of resource- and service-related constraints. It is observed that the number of variables and constraints for even a simple transit system (a single bus station with three routes) is too large to tackle using classical mixed-integer optimization techniques. The paper shows that genetic algorithms (GAs) are ideal for these problems, mainly because they (i) naturally handle binary variables, thereby taking care of transfer decision variables, which constitute the majority of the decision variables in the transit scheduling problem; and (ii) allow procedure-based declarations, thereby allowing complex algorithmic approaches (involving if-then-else conditions) to be handled easily. The paper also shows how easily the same GA procedure with minimal modifications can handle a number of other more pragmatic extensions to the simple transit scheduling problem: buses with limited capacity, buses that do not arrive exactly as per scheduled times, and a multiple-station transit system having common routes among bus stations. Simulation results show the success of GAs in all these problems and suggest the application of GAs in more complex scheduling problems.
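A toy illustration of the "procedure-based declaration" point: transfer waiting time computed with explicit conditional logic inside a fitness procedure, something awkward to express in a mixed-integer formulation. The rule and parameter names are invented for illustration, not taken from the paper.

    def transfer_wait(arrival, departures, cycle_length):
        """Waiting time of a transferring passenger arriving at `arrival`,
        given the connecting route's departure times within one planning cycle."""
        upcoming = [d for d in departures if d >= arrival]
        if upcoming:                                   # catch the next scheduled bus
            return min(upcoming) - arrival
        # otherwise wait for the first departure of the next cycle
        return (min(departures) + cycle_length) - arrival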
Evolutionary Computation (1997) 5 (4): 373–399.
Published: 01 December 1997
Abstract
This article demonstrates the advantages of a cooperative, coevolutionary search in difficult control problems. The symbiotic adaptive neuroevolution (SANE) system coevolves a population of neurons that cooperate to form a functioning neural network. In this process, neurons assume different but overlapping roles, resulting in a robust encoding of control behavior. SANE is shown to be more efficient and more adaptive and to maintain higher levels of diversity than the more common network-based population approaches. Further empirical studies illustrate the emergent neuron specializations and the different roles the neurons assume in the population.
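The cooperative-coevolution loop can be sketched as follows: networks are assembled from randomly sampled neurons, evaluated as complete controllers, and each participating neuron is credited with the networks' scores. SANE's actual credit assignment differs in detail, and build_network and evaluate are assumed helpers supplied by the application.

    import random
    from collections import defaultdict

    def evaluate_neurons(neurons, build_network, evaluate, net_size=8, trials=200):
        """Assemble random networks from the neuron population, evaluate each network
        on the task, and credit every participating neuron with the network's score."""
        scores = defaultdict(list)
        for _ in range(trials):
            team = random.sample(range(len(neurons)), net_size)
            score = evaluate(build_network([neurons[i] for i in team]))
            for i in team:
                scores[i].append(score)
        # here a neuron's fitness is the mean score of the networks it joined
        return [sum(scores[i]) / len(scores[i]) if scores[i] else 0.0
                for i in range(len(neurons))]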