Heike Trautmann
1–6 of 6 results
Journal Articles
Evolutionary Computation (2024) 32 (3): 211–216.
Published: 03 September 2024
Abstract
The herein proposed Python package pflacco provides a set of numerical features to characterize single-objective continuous and constrained optimization problems. Thereby, pflacco addresses two major challenges in the area of optimization. Firstly, it provides the means to develop an understanding of a given problem instance, which is crucial for designing, selecting, or configuring optimization algorithms in general. Secondly, these numerical features can be utilized in the research streams of automated algorithm selection and configuration. While the majority of these landscape features are already available in the R package flacco, our Python implementation offers these tools to an even wider audience and thereby promotes research interests and novel avenues in the area of optimization.
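A minimal usage sketch, assuming the pflacco 2.x module layout and function names as given in the package documentation (exact signatures may differ from the released version):

```python
# Sketch: compute classical ELA features for a toy function with pflacco.
# Module paths and function names assume the pflacco 2.x layout; consult the
# package documentation for the exact, current API.
from pflacco.sampling import create_initial_sample
from pflacco.classical_ela_features import calculate_ela_distribution, calculate_ela_meta

def sphere(x):
    # Simple single-objective test function.
    return sum(xi ** 2 for xi in x)

dim = 2
# Latin hypercube sample of the decision space (returned as a pandas DataFrame).
X = create_initial_sample(dim, sample_type="lhs")
y = X.apply(lambda row: sphere(row), axis=1)

# Each calculate_* call returns a dictionary of named landscape features.
features = {}
features.update(calculate_ela_distribution(X, y))
features.update(calculate_ela_meta(X, y))
print(features)
```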
Journal Articles
Evolutionary Computation (2019) 27 (1): 1–2.
Published: 01 March 2019
Journal Articles
Evolutionary Computation (2019) 27 (1): 99–127.
Published: 01 March 2019
Abstract
In this article, we build upon previous work on designing informative and efficient Exploratory Landscape Analysis features for characterizing problems' landscapes, and show their effectiveness in automatically constructing algorithm selection models for continuous black-box optimization problems. Focusing on several years of algorithm performance results from the COCO platform, we construct a representative set of high-performing complementary solvers and present an algorithm selection model that, compared to the portfolio's single best solver, on average requires less than half of the resources for solving a given problem. This yields a substantial gain in efficiency over classical ensemble methods, combined with increased insight into problem characteristics and algorithm properties through the use of informative features. The model acts on the assumption that the function set of the Black-Box Optimization Benchmark is representative enough for practical applications. It allows the best-suited optimization algorithm within the considered set to be selected for unseen problems, prior to the optimization itself, based on a small sample of function evaluations. Note that such a sample can even be reused for the initial population of an evolutionary (optimization) algorithm, so that even the feature costs become negligible.
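The workflow described in the abstract can be sketched roughly as follows; the feature computation, the solver portfolio, and the pre-trained model `selector` are placeholders for illustration, not the exact setup used in the article:

```python
# Hypothetical sketch of feature-based per-instance algorithm selection:
# 1) draw a small sample, 2) compute landscape features, 3) let a trained
# model pick a solver, 4) reuse the sample as the initial population.
import numpy as np

def select_and_run(problem, dim, portfolio, selector, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)

    # Small initial design; its evaluations are the only extra feature cost.
    sample = rng.uniform(-5, 5, size=(50 * dim, dim))
    values = np.apply_along_axis(problem, 1, sample)

    # Placeholder feature vector (in practice: ELA features, e.g. via pflacco).
    features = np.array([values.mean(), values.std(), values.min(), values.max()])

    # 'selector' is assumed to be a pre-trained model mapping the feature
    # vector to the index of the solver expected to perform best here.
    chosen = portfolio[int(selector.predict(features.reshape(1, -1))[0])]

    # The sample can seed the chosen (evolutionary) optimizer's initial
    # population, so the feature cost is effectively amortized.
    return chosen(problem, initial_population=sample)
```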
Journal Articles
Evolutionary Computation (2019) 27 (1): 3–45.
Published: 01 March 2019
Abstract
It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is to select, for each problem instance to be solved, the algorithm from a given set that is expected to perform best. The task of automatically selecting an algorithm from a given set is known as the per-instance algorithm selection problem and has been intensely studied over the past 15 years, leading to major improvements in the state of the art in solving a growing number of discrete combinatorial problems, including propositional satisfiability and AI planning. Per-instance algorithm selection also shows much promise for boosting performance in solving continuous and mixed discrete/continuous optimization problems. This survey provides an overview of research in automated algorithm selection, ranging from early and seminal works to recent and promising application areas. Different from earlier work, it covers applications to discrete and continuous problems, and discusses algorithm selection in the context of conceptually related approaches, such as algorithm configuration, scheduling, or portfolio selection. Since informative and cheaply computable problem instance features provide the basis for effective per-instance algorithm selection systems, we also provide an overview of such features for discrete and continuous problems. Finally, we provide perspectives on future work in the area and discuss a number of open research challenges.
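Schematically, per-instance algorithm selection reduces to mapping instance features to the algorithm with the lowest predicted cost; a minimal illustration (all names hypothetical, one cost model per algorithm):

```python
# Schematic per-instance algorithm selection: a cost model per algorithm,
# selection by minimum predicted cost (e.g. expected runtime).
from typing import Callable, Dict, Sequence

def per_instance_selection(
    instance_features: Sequence[float],
    cost_models: Dict[str, Callable[[Sequence[float]], float]],
) -> str:
    """Return the name of the algorithm with the lowest predicted cost."""
    predictions = {name: model(instance_features) for name, model in cost_models.items()}
    return min(predictions, key=predictions.get)
```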
Journal Articles
Evolutionary Computation (2018) 26 (4): 597–620.
Published: 01 December 2018
Abstract
The Travelling Salesperson Problem (TSP) is one of the best-studied NP-hard problems. Over the years, many different solution approaches and solvers have been developed. For the first time, we directly compare five state-of-the-art inexact solvers—namely, LKH, EAX, restart variants of those, and MAOS—on a large set of well-known benchmark instances and demonstrate complementary performance, in that different instances may be solved most effectively by different algorithms. We leverage this complementarity to build an algorithm selector, which selects the best TSP solver on a per-instance basis and thus achieves significantly improved performance compared to the single best solver, representing an advance in the state of the art in solving the Euclidean TSP. Our in-depth analysis of the selectors provides insight into what drives this performance improvement.
Journal Articles
Evolutionary Computation (2015) 23 (3): 369–395.
Published: 01 September 2015
Abstract
In multiobjective optimization, set-based performance indicators are commonly used to assess the quality of a Pareto front approximation. Based on the scalarization obtained by these indicators, a performance comparison of multiobjective optimization algorithms becomes possible. The R2 and the hypervolume (HV) indicators represent two recommended approaches which have shown correlated behavior in recent empirical studies. Whereas the HV indicator has been comprehensively analyzed in recent years, almost no studies on the R2 indicator exist. In this extended version of our previous conference paper, we thus perform a comprehensive investigation of the properties of the R2 indicator in a theoretical and empirical way. The influence of the number and distribution of the weight vectors on the optimal distribution of solutions is analyzed. Based on a comparative analysis, specific characteristics and differences of the R2 and HV indicators are presented. Furthermore, the R2 indicator is integrated into an indicator-based steady-state evolutionary multiobjective optimization algorithm (EMOA). It is shown that the so-called R2-EMOA can accurately approximate the optimal distribution of solutions regarding the R2 indicator.
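For reference, the unary R2 indicator averages, over a set of weight vectors, the best weighted Tchebycheff utility achieved by any point of the approximation set relative to a utopian point. A small sketch of this standard definition for minimization (function and variable names chosen here for illustration):

```python
# Unary R2 indicator (minimization, weighted Tchebycheff utility):
# R2(A) = (1/|W|) * sum_{w in W} min_{a in A} max_i w_i * (a_i - z*_i)
# Smaller values indicate a better Pareto-front approximation.
import numpy as np

def r2_indicator(A, weights, ideal):
    """A: (n, m) objective vectors; weights: (k, m) weight vectors; ideal: (m,) utopian point."""
    A = np.asarray(A, dtype=float)
    W = np.asarray(weights, dtype=float)
    z = np.asarray(ideal, dtype=float)
    # Tchebycheff utility of every point under every weight vector: shape (k, n).
    tcheby = np.max(W[:, None, :] * (A[None, :, :] - z), axis=2)
    # Best (smallest) utility per weight vector, averaged over all weight vectors.
    return float(np.mean(np.min(tcheby, axis=1)))

# Example: two nondominated points, uniformly spread weights in two objectives.
front = [[0.2, 0.8], [0.7, 0.3]]
W = [[w, 1 - w] for w in np.linspace(0, 1, 11)]
print(r2_indicator(front, W, ideal=[0.0, 0.0]))
```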