Heinz Mühlenbein: results 1–7 of 7
Evolutionary Computation (2005) 13 (1): 1–27.
Published: 01 March 2005
Abstract
Estimation of Distribution Algorithms (EDA) have been proposed as an extension of genetic algorithms. In this paper we explain the relationship of EDA to algorithms developed in statistics, artificial intelligence, and statistical physics. The major design issues are discussed within a general interdisciplinary framework. It is shown that maximum entropy approximations play a crucial role. All proposed algorithms try to minimize the Kullback-Leibler divergence KLD between the unknown distribution p(x) and a class q(x) of approximations. However, the Kullback-Leibler divergence is not symmetric. Approximations which suppose that the function to be optimized is additively decomposed (ADF) minimize KLD(q||p); the methods which learn the approximate model from data minimize KLD(p||q). This minimization is identical to maximizing the log-likelihood. In the paper three classes of algorithms are discussed. FDA uses the ADF to compute an approximate factorization of the unknown distribution. The factors are marginal distributions, whose values are computed from samples. The second class is represented by the Bethe-Kikuchi approach, which has recently been rediscovered in statistical physics. Here the values of the marginals are computed from a difficult constrained minimization problem. The third class learns the factorization from the data. We analyze our learning algorithm LFDA in detail. It is shown that learning is faced with two problems: first, to detect the important dependencies between the variables, and second, to create an acyclic Bayesian network of bounded clique size.
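For reference, a minimal sketch of the two divergence directions the abstract contrasts, using the standard definition of the Kullback-Leibler divergence; the notation is conventional, not quoted from the paper:

\[
\mathrm{KLD}(p\,\|\,q) = \sum_x p(x)\ln\frac{p(x)}{q(x)},
\qquad
\mathrm{KLD}(q\,\|\,p) = \sum_x q(x)\ln\frac{q(x)}{p(x)}.
\]

Given samples $x^1,\dots,x^N$ drawn from $p$, minimizing $\mathrm{KLD}(p\,\|\,q_\theta)$ over a model class $q_\theta$ is the same as maximizing the log-likelihood, since

\[
\mathrm{KLD}(p\,\|\,q_\theta)
= -H(p) - \mathbb{E}_p\!\left[\ln q_\theta(x)\right]
\approx -H(p) - \frac{1}{N}\sum_{i=1}^{N}\ln q_\theta(x^i),
\]

and the entropy $H(p)$ does not depend on $\theta$.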
Evolutionary Computation (1999) 7 (4): 353–376.
Published: 01 December 1999
Abstract
The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters; it is therefore too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations, as is shown for a circle and a grid structure. By using results from Bayesian networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA.
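A minimal Python sketch of the estimate-and-sample loop that FDA-type algorithms share. For brevity the factorization used here is the simplest one, the product of univariate marginals (the UMDA special case); FDA proper factors the distribution into conditional and marginal distributions derived from the ADF structure. Names and parameters are illustrative:

# Sketch of a factorized-distribution loop with univariate marginals.
import numpy as np

def onemax(pop):
    return pop.sum(axis=1)  # a toy additively decomposed fitness

def eda(fitness, n=50, pop_size=200, truncation=0.3, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    marginals = np.full(n, 0.5)  # start from the uniform distribution
    for _ in range(generations):
        pop = (rng.random((pop_size, n)) < marginals).astype(int)  # sample q(x)
        f = fitness(pop)
        top = pop[np.argsort(f)[-int(truncation * pop_size):]]     # truncation selection
        marginals = top.mean(axis=0)                   # re-estimate from selected points
        marginals = marginals.clip(1.0 / n, 1.0 - 1.0 / n)  # keep every bit reachable
        if f.max() == n:
            break
    return marginals, f.max()

marginals, best = eda(onemax)
print(best)

On a separable function such as OneMax the univariate factorization is exact; for the chain, circle, and grid structures mentioned above, the factors would have to cover the interacting variables.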
Evolutionary Computation (1997) 5 (3): 303–346.
Published: 01 September 1997
Abstract
The Breeder Genetic Algorithm (BGA) was designed according to the theories and methods used in the science of livestock breeding. The prediction of a breeding experiment is based on the response to selection (RS) equation. This equation relates the change in a population's fitness to the standard deviation of its fitness, as well as to the parameters selection intensity and realized heritability. In this paper the exact RS equation is derived for proportionate selection given an infinite population in linkage equilibrium. In linkage equilibrium the genotype frequencies are the product of the univariate marginal frequencies. The equation contains Fisher's fundamental theorem of natural selection as an approximation. The theorem shows that the response is approximately equal to the quotient of a quantity called the additive genetic variance, V_A, and the average fitness. We compare Mendelian two-parent recombination with gene-pool recombination, which belongs to a special class of genetic algorithms that we call univariate marginal distribution (UMD) algorithms. UMD algorithms keep the genotypes in linkage equilibrium. For UMD algorithms, an exact RS equation is proven that can be used for long-term prediction. Empirical and theoretical evidence is provided that indicates that Mendelian two-parent recombination also mainly exploits the additive genetic variance. We compute an exact RS equation for binary tournament selection. It shows that the two classical methods for estimating realized heritability, the regression heritability and the heritability in the narrow sense, may give poor estimates. Furthermore, realized heritability for binary tournament selection can be very different from that of proportionate selection. The paper ends with a short survey of methods that extend standard genetic algorithms and UMD algorithms by detecting interacting variables in nonlinear fitness functions and using this information to sample new points.
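For orientation, the response to selection equation in its standard quantitative-genetics form, together with the Fisher approximation stated in the abstract; the symbols are conventional rather than quoted from the paper:

\[
R(t) = b(t)\, I\, \sigma_F(t),
\]

where R(t) is the change in the population's average fitness from one generation to the next, I the selection intensity, \sigma_F(t) the standard deviation of the fitness, and b(t) the realized heritability. Fisher's fundamental theorem then appears as the approximation

\[
R(t) \approx \frac{V_A(t)}{\bar{F}(t)},
\]

with V_A(t) the additive genetic variance and \bar{F}(t) the average fitness.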
Evolutionary Computation (1997) 5 (2): 213–236.
Published: 01 June 1997
Abstract
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest.
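A minimal sketch of what a neural tree representation might look like, assuming an inner node holds one weight per child and applies a sigmoid to the weighted sum of its subtree outputs, while leaves read input variables. This illustrates the idea only, not the paper's implementation:

# Sketch of a neural tree: weights and structure evolve together.
import math
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NeuralTree:
    var: Optional[int] = None                  # leaf: index of an input variable
    weights: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def evaluate(self, x):
        if self.var is not None:               # leaf node: read the input
            return x[self.var]
        s = sum(w * c.evaluate(x) for w, c in zip(self.weights, self.children))
        return 1.0 / (1.0 + math.exp(-s))      # sigmoid transfer function

    def size(self):                            # structural complexity, usable as an MDL penalty
        return 1 + sum(c.size() for c in self.children)

net = NeuralTree(weights=[1.5, -2.0],
                 children=[NeuralTree(var=0), NeuralTree(var=1)])
print(net.evaluate([0.3, 0.8]), net.size())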
Evolutionary Computation (1995) 3 (1): 17–38.
Published: 01 March 1995
Abstract
Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this article we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks.
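A sketch of how the Bayesian model-comparison framework yields a fitness with error and complexity terms; the derivation is the standard one and the symbols are illustrative:

\[
-\ln P(M \mid D) = -\ln P(D \mid M) - \ln P(M) + \text{const},
\]

so identifying the negative log-likelihood with a training error E and the negative log-prior with a structural complexity C gives a fitness of the form

\[
F(M) = E(D \mid M) + \alpha\, C(M),
\]

where \alpha is the model-complexity factor that the adaptive method described above balances during the run.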
Evolutionary Computation (1993) 1 (4): 335–360.
Published: 01 December 1993
Abstract
The breeder genetic algorithm (BGA) models artificial selection as performed by human breeders. The science of breeding is based on advanced statistical methods. In this paper a connection between genetic algorithm theory and the science of breeding is made. We show how the response to selection equation and the concept of heritability can be applied to predict the behavior of the BGA. Selection, recombination, and mutation are analyzed within this framework. It is shown that recombination and mutation are complementary search operators. The theoretical results are obtained under the assumption of additive gene effects. For general fitness landscapes, regression techniques for estimating the heritability are used to analyze and control the BGA. The method of decomposing the genetic variance into an additive and a nonadditive part connects the case of additive fitness functions with the general case.
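A minimal Python sketch, on toy data, of the regression estimate of heritability the abstract mentions: regress offspring fitness on midparent fitness and use the slope b to predict the response R = b·S from the selection differential S. Data and constants are illustrative:

# Regression heritability on synthetic parent/offspring fitness data.
import numpy as np

rng = np.random.default_rng(1)
midparent = rng.normal(10.0, 2.0, 500)                    # midparent fitness values
offspring = 0.6 * midparent + rng.normal(4.0, 1.5, 500)   # toy additive inheritance

b = np.polyfit(midparent, offspring, 1)[0]                # regression heritability (slope)
selected = midparent > np.quantile(midparent, 0.7)        # truncation selection
S = midparent[selected].mean() - midparent.mean()         # selection differential
print(f"b = {b:.2f}, predicted response R = b*S = {b * S:.2f}")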
Evolutionary Computation (1993) 1 (1): 25–49.
Published: 01 March 1993
Abstract
In this paper a new genetic algorithm called the Breeder Genetic Algorithm (BGA) is introduced. The BGA is based on artificial selection similar to that used by human breeders. A predictive model for the BGA is presented that is derived from quantitative genetics. The model is used to predict the behavior of the BGA for simple test functions. Different mutation schemes are compared by computing the expected progress to the solution. The numerical performance of the BGA is demonstrated on a test suite of multimodal functions. The number of function evaluations needed to locate the optimum scales only as n ln(n), where n is the number of parameters. Results up to n = 1000 are reported.
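A sketch of a BGA-style mutation operator for continuous parameters, following the commonly cited scheme in which each of the n variables is mutated with probability 1/n by a step drawn from a discrete distribution that strongly favors small changes; the constants are illustrative:

# Sketch of a BGA-style mutation for a vector of continuous parameters.
import numpy as np

def bga_mutate(x, mutation_range, k=16, seed=None):
    rng = np.random.default_rng(seed)
    x = x.copy()
    n = len(x)
    for i in range(n):
        if rng.random() < 1.0 / n:               # mutate each parameter with probability 1/n
            alpha = rng.random(k) < 1.0 / k      # each bit set with probability 1/k
            delta = (alpha * 2.0 ** -np.arange(k)).sum()  # step size in [0, ~2), biased small
            sign = 1.0 if rng.random() < 0.5 else -1.0
            x[i] += sign * mutation_range * delta
    return x

print(bga_mutate(np.zeros(10), mutation_range=0.1, seed=0))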