Marcus Gallagher
Journal Articles
Evolutionary Computation (2019) 27 (1): 75–98.
Published: 01 March 2019
Abstract
Exploratory Landscape Analysis provides sample-based methods for calculating quantitative, measurable features of black-box optimization problems. Many problem features have been proposed in the literature, both to provide insight into the structure of problem landscapes and to support the selection of an effective algorithm for a given optimization problem. While there has been some success, evaluating the utility of problem features in practice presents significant challenges. Machine learning models have been employed as part of the evaluation process, but they may require additional information about the problems and introduce their own hyperparameters, biases, and experimental variability. As a result, extra layers of uncertainty and complexity are added to the experimental evaluation process, making it difficult to clearly assess the effect of the problem features. In this article, we propose a novel method for evaluating problem features that can be applied directly to individual features or groups of features and does not require additional machine learning techniques or confounding experimental factors. The method is based on a feature's ability to detect a prior ranking of similarity in a set of problems. Analysis of Variance (ANOVA) significance tests are used to determine whether the feature successfully distinguishes the successive problems in the set. Based on the ANOVA test results, a percentage score is assigned to each feature for different landscape characteristics. Experimental results for twelve different features on four problem transformations demonstrate the method and provide quantitative evidence about the ability of different problem features to detect specific properties of problem landscapes.
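The scoring procedure described in the abstract lends itself to a compact implementation. Below is a minimal sketch, assuming a one-way ANOVA between each pair of successive problems in the ordered set and a score equal to the percentage of pairs distinguished at significance level alpha; the names `feature_fn` and `make_problem`, the uniform sampling of the search space, and the noisy-sphere example are illustrative assumptions, not the authors' exact experimental setup.

```python
# A minimal sketch of ANOVA-based scoring of a landscape feature, assuming
# successive pairwise tests over a set of problems ordered by a
# transformation parameter. Not the authors' exact procedure.
import numpy as np
from scipy.stats import f_oneway

def feature_score(feature_fn, make_problem, levels,
                  n_reps=30, n_samples=500, alpha=0.05, dim=2, seed=0):
    """Score a feature by how often it distinguishes successive problems
    in a set ordered by a transformation parameter (percentage of pairs)."""
    rng = np.random.default_rng(seed)
    samples = []
    for level in levels:
        f = make_problem(level)                  # black-box objective at this level
        reps = []
        for _ in range(n_reps):
            X = rng.uniform(-5, 5, size=(n_samples, dim))   # random search-space sample
            reps.append(feature_fn(X, f(X)))     # one feature value per replicate
        samples.append(np.asarray(reps))
    # One-way ANOVA between each pair of successive levels; count significant results.
    hits = sum(f_oneway(a, b).pvalue < alpha for a, b in zip(samples, samples[1:]))
    return 100.0 * hits / (len(levels) - 1)

# Hypothetical example: score a simple "mean fitness" feature on a sphere
# function under increasing additive noise.
def sphere_with_noise(s):
    return lambda X: (X**2).sum(axis=1) + s * np.random.standard_normal(len(X))

score = feature_score(feature_fn=lambda X, y: float(np.mean(y)),
                      make_problem=sphere_with_noise,
                      levels=[0.0, 0.5, 1.0, 2.0])
```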
Journal Articles
Evolutionary Computation (2005) 13 (1): 29–42.
Published: 01 March 2005
Abstract
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
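To make the idea concrete, here is a minimal sketch of optimization as stochastic gradient descent on a KL divergence between a fixed-covariance Gaussian model and the objective treated as an unnormalized density. The exponential fitness weighting, the fixed covariance, and the learning rate are simplifying assumptions for illustration; they are not the paper's derived update rule.

```python
# A minimal sketch of population-based search as stochastic gradient descent
# on KL(p || q), where q is a fixed-covariance Gaussian model and p is the
# (minimization) objective treated as an unnormalized density. The weighting
# scheme and step size are illustrative assumptions, not the paper's rule.
import numpy as np

def kl_gradient_search(f, dim=2, pop_size=50, lr=0.1, sigma=0.5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-5, 5, size=dim)        # mean of the Gaussian search model
    for _ in range(iters):
        X = mu + sigma * rng.standard_normal((pop_size, dim))  # sample the model
        y = f(X)
        # Lower objective values get higher weight, playing the role of
        # p(x)/q(x) in a stochastic estimate of the KL gradient.
        w = np.exp(-(y - y.min()))
        w /= w.sum()
        # For a fixed-covariance Gaussian, moving the mean toward the
        # fitness-weighted sample mean is a descent direction on KL(p || q).
        mu += lr * (w @ X - mu)
    return mu

# Example: minimize a sphere function.
best = kl_gradient_search(lambda X: (X**2).sum(axis=1))
```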