Maximizing Drift Is Not Optimal for Solving OneMax
Nathan Buskulic
Evolutionary Computation (2021) 29 (4): 521–541.
Published: 01 December 2021
Abstract
It seems very intuitive that for the maximization of the OneMax problem $\mathrm{Om}(x) := \sum_{i=1}^{n} x_i$ the best that an elitist unary unbiased search algorithm can do is to store a best-so-far solution, and to modify it with the operator that yields the best possible expected progress in function value. This assumption has been implicitly used in several empirical works. In Doerr et al. (2020), it was formally proven that this approach is indeed almost optimal. In this work, we prove that drift maximization is not optimal. More precisely, we show that for most fitness levels between n/2 and 2n/3 the optimal mutation strengths are larger than the drift-maximizing ones. This implies that the optimal RLS is more risk-affine than the variant maximizing the stepwise expected progress. We show similar results for the mutation rates of the classic (1+1) Evolutionary Algorithm (EA) and its resampling variant, the (1+1) EA$_{>0}$. As a result of independent interest, we show that the optimal mutation strengths, unlike the drift-maximizing ones, can be even.
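For context, the following is a minimal Python sketch of the standard objects named in the abstract: the OneMax fitness function, an elitist unary unbiased RLS step with mutation strength k, and an elitist (1+1) EA step with mutation rate p. The function names (onemax, rls_step, ea_step) and the parameter choices in the demo are illustrative assumptions; the sketch does not implement the drift-maximizing or optimal parameter schedules analyzed in the paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the bit string x."""
    return sum(x)

def rls_step(x, k):
    """One elitist RLS step: flip exactly k distinct bits of x,
    and keep the offspring only if it is at least as good."""
    y = x[:]
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y if onemax(y) >= onemax(x) else x

def ea_step(x, p):
    """One elitist (1+1) EA step: flip each bit independently with
    probability p, and keep the offspring only if it is at least as good."""
    y = [1 - bit if random.random() < p else bit for bit in x]
    return y if onemax(y) >= onemax(x) else x

if __name__ == "__main__":
    n = 100
    x = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    while onemax(x) < n:
        x = rls_step(x, k=1)  # classic RLS uses mutation strength k = 1
        steps += 1
    print(f"RLS with k=1 optimized OneMax on n={n} bits in {steps} iterations")
```

In this framing, the question studied in the paper is which mutation strength k (respectively mutation rate p) to use at each fitness level; the abstract's result is that the choice minimizing the expected remaining optimization time is, for most fitness levels between n/2 and 2n/3, larger than the choice that maximizes the expected one-step progress.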