Dirk Thierens
1–6 of 6 results
Journal Articles
Publisher: Journals Gateway
Evolutionary Computation (2024) 32 (4): 371–397.
Published: 02 December 2024
Parameterless Gene-Pool Optimal Mixing Evolutionary Algorithms
Abstract
When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, that is, dependencies between variables, can be key. In this paper, we present the latest version of, and propose substantial enhancements to, the gene-pool optimal mixing evolutionary algorithm (GOMEA): an EA explicitly designed to estimate and exploit linkage information. We begin by performing a large-scale search over several GOMEA design choices to understand what matters most and obtain a generally best-performing version of the algorithm. Next, we introduce a novel version of GOMEA, called CGOMEA, where linkage-based variation is further improved by filtering solution mating based on conditional dependencies. We compare our latest version of GOMEA, the newly introduced CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive experimental evaluation, involving a benchmark set of nine black-box problems that can be solved efficiently only if their inherent dependency structure is unveiled and exploited. Finally, in an attempt to make EAs more usable and resilient to parameter choices, we investigate the performance of different automatic population management schemes for GOMEA and CGOMEA, de facto making the EAs parameterless. Our results show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems, setting a new state of the art for the field.
Includes: Supplementary data
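The optimal mixing variation at the core of GOMEA can be sketched in a few lines. A minimal Python sketch, assuming a maximization `fitness` callable and a precomputed list `linkage_sets` of variable-index groups; the real algorithm learns these sets from the population, typically as a linkage tree:

```python
import random

def gene_pool_optimal_mixing(solution, population, linkage_sets, fitness):
    """One gene-pool optimal mixing step (sketch): for each linkage set,
    copy the corresponding genes from a random donor and keep the change
    only if fitness does not get worse (maximization assumed)."""
    current = list(solution)
    current_fitness = fitness(current)
    for subset in linkage_sets:
        donor = random.choice(population)
        candidate = list(current)
        for i in subset:
            candidate[i] = donor[i]
        if candidate != current:
            f = fitness(candidate)
            if f >= current_fitness:  # accept equal-or-better changes
                current, current_fitness = candidate, f
    return current, current_fitness
```

Because every gene exchange is immediately tested against the fitness function, accepted changes can only preserve or improve quality, which is what makes the variation step greedy yet safe.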
Evolutionary Computation (2018) 26 (1): 117–143.
Published: 01 March 2018
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems
Abstract
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete and the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
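The abstract does not spell out the parameterless scheme, but such schemes typically build on the Harik–Lobo parameter-less GA: interleave runs with doubling population sizes so that no size must be chosen up front. A sketch of that interleaving schedule, with the `base=4` ratio being Harik and Lobo's original choice, assumed here:

```python
def interleaved_schedule(ticks, base=4):
    """Return, per tick, the population indices advanced by one generation.
    Index i stands for a run with population size proportional to 2**i;
    run i is advanced once for every `base` generations of run i-1, so
    small populations get most of the budget until they prove too small."""
    out = []
    for t in range(1, ticks + 1):
        advanced = [0]           # the smallest population runs every tick
        i = 1
        while t % (base ** i) == 0:
            advanced.append(i)   # larger populations run exponentially less often
            i += 1
        out.append(advanced)
    return out
```

A full parameterless EA would additionally terminate runs whose larger sibling has overtaken them in fitness; only the scheduling core is shown here.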
Evolutionary Computation (2013) 21 (3): 445–469.
Published: 01 September 2013
Benchmarking Parameter-Free AMaLGaM on Functions With and Without Noise
Abstract
We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black-box optimization benchmarking (BBOB) framework and compared to a variant with incremental model building (iAMaLGaM). We study the implications of factorizing the covariance matrix in the Gaussian distribution, to use only a few or no covariances. Further, AMaLGaM and iAMaLGaM are also evaluated on the noisy BBOB problems and we assess how well multiple evaluations per solution can average out noise. Experimental evidence suggests that parameter-free AMaLGaM can solve a wide range of problems efficiently with perceived polynomial scalability, including multimodal problems, obtaining the best or near-best results among all algorithms tested in 2009 on functions such as the step-ellipsoid and Katsuuras, but failing to locate the optimum within the time limit on skew Rastrigin-Bueche separable and Lunacek bi-Rastrigin in higher dimensions. AMaLGaM is found to be more robust to noise than iAMaLGaM due to the larger required population size. Using few or no covariances hinders the EDA from dealing with rotations of the search space. Finally, the use of noise averaging is found to be less efficient than the direct application of the EDA unless the noise is uniformly distributed. AMaLGaM was among the best performing algorithms submitted to the BBOB workshop in 2009.
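In the fully factorized setting the abstract mentions (few or no covariances), one iteration of a maximum-likelihood Gaussian EDA reduces to fitting an independent normal per variable. A sketch under that assumption; AMaLGaM's actual distribution adaptations (variance scaling, anticipated mean shift) are omitted:

```python
import random
import statistics

def univariate_gaussian_eda_step(population, fitness, tau=0.35):
    """One iteration of a maximum-likelihood Gaussian EDA with a fully
    factorized model: select the best fraction `tau` (minimization),
    fit an independent Gaussian per variable by maximum likelihood,
    and sample a fresh population from the fitted model."""
    population = sorted(population, key=fitness)
    selected = population[:max(2, int(tau * len(population)))]
    dims = len(selected[0])
    means = [statistics.fmean(x[d] for x in selected) for d in range(dims)]
    stdevs = [statistics.pstdev([x[d] for x in selected], means[d])
              for d in range(dims)]
    return [[random.gauss(means[d], stdevs[d]) for d in range(dims)]
            for _ in range(len(population))]
```

Without the omitted adaptations, the maximum-likelihood fit shrinks the variance every generation, which is exactly the premature-convergence risk AMaLGaM's adaptive mechanisms counteract.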
Evolutionary Computation (2010) 18 (2): 157–198.
Published: 01 June 2010
Geometrical Recombination Operators for Real-Coded Evolutionary MCMCs
Abstract
Markov chain Monte Carlo (MCMC) algorithms are sampling methods for intractable distributions. In this paper, we propose and investigate algorithms that improve the sampling process from multi-dimensional real-coded spaces. We present MCMC algorithms that run a population of samples and apply recombination operators in order to exchange useful information and preserve commonalities in highly probable individual states. We call this class of algorithms Evolutionary MCMCs (EMCMCs). We introduce and analyze various recombination operators which generate new samples by use of linear transformations, for instance, by translation or rotation. These recombination methods discover specific structures in the search space and adapt the population samples to the proposal distribution. We investigate how to integrate recombination in the MCMC framework to sample from a desired distribution. The recombination operators generate individuals with a computational effort that scales linearly in the number of dimensions and the number of parents. We present results from experiments conducted on a mixture of multivariate normal distributions. These results show that the recombinative EMCMCs outperform the standard MCMCs for target distributions that have a nontrivial structural relationship between the dimensions.
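A translation recombination move of the kind described can be made a valid MCMC proposal. The sketch below uses the differential-evolution-MCMC form of such a move as a hypothetical stand-in for the paper's own operators, chosen because its symmetry makes the plain Metropolis acceptance rule sufficient:

```python
import math
import random

def recombinative_mcmc_sweep(chains, log_target, gamma=0.8, eps=0.01):
    """One sweep of a population MCMC with a translation recombination
    proposal: chain i proposes x_i + gamma * (x_a - x_b) using two other
    chains a, b as parents, plus tiny jitter. Swapping a and b negates the
    translation, so the proposal is symmetric and no Hastings correction
    is needed for the target to stay invariant."""
    n, dims = len(chains), len(chains[0])
    for i in range(n):
        a, b = random.sample([j for j in range(n) if j != i], 2)
        proposal = [chains[i][k] + gamma * (chains[a][k] - chains[b][k])
                    + random.gauss(0.0, eps) for k in range(dims)]
        # Metropolis accept/reject in log space
        if math.log(random.random()) < log_target(proposal) - log_target(chains[i]):
            chains[i] = proposal
    return chains
```

The cost per proposal is linear in the number of dimensions and parents, matching the scaling property the abstract highlights for its recombination operators.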
Evolutionary Computation (2004) 12 (2): 243–267.
Published: 01 June 2004
On the Design and Analysis of Competent Selecto-recombinative GAs
Abstract
In this paper, we study two recent theoretical models—a population-sizing model and a convergence model—and examine their assumptions to gain insights into the conditions under which selecto-recombinative GAs work well. We use these insights to formulate several design rules to develop competent GAs for practical problems. To test the usefulness of the design rules, we consider as a case study the map-labeling problem, an NP-hard problem from cartography. We compare the predictions of the theoretical models with the actual performance of the GA for the map-labeling problem. Experiments show that the predictions match the observed scale-up behavior of the GA, thereby strengthening our claim that the design rules can guide the design of competent selecto-recombinative GAs for realistic problems.
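One well-known population-sizing model of the kind referred to is the gambler's-ruin model of Harik et al.; whether it is the exact model studied here is an assumption on our part. Its sizing formula is easy to state in code:

```python
import math

def gambler_ruin_population_size(k, m, sigma_bb, d, alpha=0.01):
    """Gambler's-ruin population-sizing estimate (Harik et al.):
        n = -2**(k-1) * ln(alpha) * sigma_bb * sqrt(pi * (m - 1)) / d
    k: building-block size; m: number of building blocks;
    sigma_bb: building-block fitness noise; d: fitness signal between the
    best and second-best block; alpha: acceptable failure probability."""
    m_prime = m - 1  # competing partitions
    return (-(2 ** (k - 1)) * math.log(alpha)
            * sigma_bb * math.sqrt(math.pi * m_prime) / d)
```

The model's qualitative predictions are what the design rules rest on: the required population grows with the number and order of building blocks and with noise, and shrinks as the fitness signal between competing blocks grows.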
Evolutionary Computation (1999) 7 (4): 331–352.
Published: 01 December 1999
Scalability Problems of Simple Genetic Algorithms
Abstract
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
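Building-block mixing is usually illustrated with concatenated deceptive trap functions, where the coding structure alone decides whether building blocks are tightly or loosely linked. A small sketch; the trap-of-order-5 choice is illustrative:

```python
def trap(u, k=5):
    """Deceptive trap of order k: maximal at u == k ones, but otherwise
    rewarding fewer ones, which misleads bitwise search toward all zeros."""
    return k if u == k else k - 1 - u

def concatenated_traps(bits, k=5, tight=True):
    """Sum of order-k traps over m = len(bits) // k blocks. With tight
    linkage each block owns k contiguous bits; with loose linkage block j
    owns bits j, j+m, j+2m, ..., spreading it across the whole string."""
    m = len(bits) // k
    total = 0
    for j in range(m):
        idx = range(j * k, (j + 1) * k) if tight else range(j, len(bits), m)
        total += trap(sum(bits[i] for i in idx), k)
    return total
```

On the loosely coded variant, one-point crossover almost always splits a building block between the parents, which is precisely the mixing failure that shrinks the reliable-convergence region discussed above.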