We establish global convergence of the (1 + 1) evolution strategy, that is, convergence to a critical point independent of the initial state. More precisely, we show the existence of a critical limit point, using a suitable extension of the notion of a critical point to measurable functions. At its core, the analysis is based on a novel progress guarantee for elitist, rank-based evolutionary algorithms. By applying it to the (1 + 1) evolution strategy we are able to provide an accurate characterization of whether global convergence is guaranteed with full probability, or whether premature convergence is possible. We illustrate our results on a number of example applications ranging from smooth (non-convex) cases over different types of saddle points and ridge functions to discontinuous and extremely rugged problems.

Global convergence of an optimization algorithm refers to convergence of the iterates to a critical point independent of the initial state—in contrast to local convergence, which guarantees this property only for initial iterates in the vicinity of a critical point.1 For example, many first order methods enjoy this property (Gilbert and Nocedal, 1992), while Newton's method does not. In the realm of direct search algorithms, mesh adaptive search algorithms are known to be globally convergent (Torczon, 1997).

Evolution strategies (ES) are a class of randomized search heuristics for direct search in Rd. The (1 + 1)-ES is the maybe simplest such method, originally developed by Rechenberg (1973). A particularly simple variant thereof, which was first defined by Kern et al. (2004), is given in Algorithm 1. Its state consists of a single parent individual mRd and a step size σ>0. It samples a single offspring xRd per generation from the isotropic multivariate normal distribution N(m,σ2I) and applies (1 + 1)-selection; that is, it keeps the better of the two points. Here, IRd×d denotes the identity matrix. The standard deviation σ>0 of the sampling distribution, also called global step size, is adapted online. The mechanism maintains a fixed success rate usually chosen as 1/5, in accordance with Rechenberg's original approach. It is discussed in more detail in Section 3. In effect, step size control enables linear convergence on convex quadratic functions (Jägersküpper, 2006a), and therefore locally linear convergence on twice differentiable functions. In contrast, algorithms without step size adaptation can converge as slowly as pure random search (Hansen et al., 2015). Furthermore, being rank-based methods, ESs are invariant to strictly monotonic transformations of objective values. ESs tend to be robust and suitable for solving difficult problems (rugged and multimodal fitness landscapes), a capacity that is often attributed to invariance properties.

graphic

Although the (1 + 1)-ES is the oldest evolution strategy in existence, we do not yet fully understand how generally it is applicable. In this article, we cast this open problem into the question on which functions the algorithm will succeed to locate a local optimum, and on which functions it may converge prematurely, and hence fail. We aim at an as complete as possible characterization of these different cases.

By modern standards, the (1 + 1)-ES cannot be considered a competitive optimization method. The covariance matrix adaptation evolution strategy (CMA-ES) by Hansen and Ostermeier (2001) and its many variants mark the state of the art. The algorithm goes beyond the simple (1 + 1)-ES in many ways: it uses nonelitist selection with a population, it adapts the full covariance matrix of its sampling distribution (effectively resembling second order methods), and it performs temporal integration of direction information in the form of evolution paths for step size and covariance matrix adaptation. Still, its convergence order on many relevant functions is linear, and that is thanks to the same mechanism as in the (1 + 1)-ES, namely step size adaptation.

To date, convergence guarantees for ESs are scarce. Some results exist for convex quadratic problems, which essentially implies local convergence on twice continuously differentiable functions. In this situation it is natural to start with the simplest ES, which is arguably the (1 + 1)-ES. The variant defined by Kern et al. (2004) is given in Algorithm 1; it is discussed in detail in Section 3.

Jägersküpper (2003, 2005, 2006a,b) analyzed the (1 + 1)-ES2 on the sphere function as well as on general convex quadratic functions. His analysis ensures linear convergence with overwhelming probability, that is, with a probability of 1-expΩ(dɛ) for some ɛ>0, where d is the problem dimension. In other words, the analysis is asymptotic in the sense d, and for fixed (finite) dimension dN, no concrete value or bound is attributed to this probability. A dimension-dependent convergence rate of Θ(1/d) is obtained.

A related and more modern approach relying explicitly on drift analysis was presented by Akimoto et al. (2018), showing linear convergence of the algorithm on the sphere function, and providing an explicit, non-asymptotic runtime bound for the first hitting time of a level set.

The analysis by Auger (2005) is based on the stability of the Markov chain defined by the normalized state m/σ, for a (1,λ)-ES on the sphere function. Since the chain is shown to converge to a stationary distribution and the problem is scale-invariant, linear convergence or divergence is obtained, with full probability. There exists sufficient empirical evidence for convergence; however, this is not covered by the result.

A different approach to proving global convergence is to modify the algorithm under consideration in a way that allows for an analysis with well established techniques. This route was explored by Diouane et al. (2015), where step size adaptation is subject to a forcing function in order to guarantee a sufficient decrease condition, akin to, for example, the Wolfe conditions for inexact line search (Wolfe, 1969). This is a powerful approach since the resulting analysis is general in terms of the algorithms (the same step size forcing mechanism can be added to virtually all ES) and the objective functions (the function must be bounded from below and Lipschitz near the limit point) at the same time. The price is that the analysis does not apply to algorithms regularly applied within the EC community, and that we do not obtain new insights about the mechanisms of these algorithms. Furthermore, the forcing function decays slowly, forcing a linearly convergent algorithm into sublinear convergence (but still much faster than random search). From a more technical point of view the Lipschitz condition is unfortunate since it is not preserved under monotonic transformations of fitness values. We improve on this approach by providing sufficient decrease of a transformed objective function, which holds for all randomized elitist, rank-based algorithms, and hence does not require a forcing function or any other algorithmic changes.

The global convergence guarantee by Akimoto et al. (2010) is closest to the present article. Also, that analysis is extremely general in the sense that it covers a broad range of problems and algorithms. The objective function is assumed to be continuously differentiable, and the only requirement for the algorithm is that it successfully diverges on a linear function. This includes all state-of-the-art evolution strategies and many more algorithms. Since continuously differentiable functions are locally arbitrarily well approximated by linear functions (first order Taylor polynomial), it is concluded that any limit point must be stationary, since there the linear term vanishes and higher order terms take over. This is an elegant and powerful result. Its main restriction is that it applies only to continuously differentiable functions. This is a huge class, but it can still be considered a relevant limitation because on continuously differentiable problems ESs are in direct competition with gradient-based methods, which are usually more efficient if gradients are available.

For this reason, solving smooth and otherwise easy problems cannot be the focus of evolution strategies. Therefore, in this article we seek to explore the most general class of problems that can be solved with an evolution strategy. In other words, we aim to push the limits beyond the well-understood cases, towards really difficult ones. Our goal is to establish the largest possible class of problems that can be solved reliably by an ES, and we also want to understand its limitations, i.e., which problems cannot be solved, and why. For this purpose, we focus on the simplest such algorithm, namely the (1 + 1)-ES defined in Algorithm 1. It turns out that the limitations of the algorithm are closely tied to its success-based step size adaptation mechanism. To capture this effect we introduce a novel regularity condition ensuring proper function of success-based step-size control. The new condition is arguably much weaker than continuous differentiability, in a sense that will become clear as we discuss examples and counter-examples.

From a bird's eye's perspective, our contributions are as follows:

  1. we provide a general progress or decrease guarantee for rank-based elitist algorithms,

  2. we show how general the (1 + 1)-ES is applicable, that is, on which problems it will find a local optimum.

The article and the proofs are organized as follows. In the next section we establish a progress guarantee for rank-based elitist algorithms. This result is extremely general, and it is in no way tied to continuous search spaces and the (1 + 1)-ES. Therefore, it is stated in general terms, in the expectation that it will prove useful for the analysis of algorithms other than the (1 + 1)-ES. Its role in the global convergence proof is to ensure a sufficient rate of optimization progress as long as the step size is well adapted and the progress rate is bounded away from zero. In Section 3, we discuss properties of the (1 + 1)-ES and introduce the regularity condition. Based on this condition we show that the step size returns infinitely often to a range where non-trivial progress can be concluded from the decrease theorem. Based on these achievements we establish a global convergence theorem in Section 4, essentially stating that there exists a subsequence of iterates converging to a critical point, the exact notion of which is defined in Section 3. We also establish a negative result, showing that a nonoptimal critical point results in premature convergence with positive probability, which excludes global convergence. In Section 5, we apply the analysis to a variety of settings and demonstrate their implications. We close with conclusions and open questions.

In this section, we establish a general theorem ensuring a certain rate of optimization progress for randomized rank-based elitist algorithms. We consider a general search space X. This space is equipped with a σ-algebra and a reference measure denoted Λ. The usual choice of the reference measure is the counting measure for discrete spaces and the Lebesgue measure for continuous spaces. The objective function f:XR, to be minimized, is assumed to be measurable. The parent selection and variation operations of the search algorithm are also assumed to be measurable; indeed we assume that these operators give rise to a distribution from which the offspring is sampled, and this distribution has a density with respect to Λ.

A rank-based optimization algorithm ignores the numerical fitness scores (f-values), and instead relies solely on pairwise comparisons, resulting in exactly one of the relations f(x)<f(x'), f(x)=f(x'), or f(x)>f(x'). This property renders it invariant to strictly monotonically increasing (rank preserving) transformations of the objective values. Therefore it “perceives” the objective function only in terms of its level sets, not in terms of the actual function values. For f:XR let
denote the level set of f, and the sub-level sets strictly below and including level yR. For mX we define the short notations Lf(m):=Lff(m), Sf<(m):=Sf<f(m) and Sf(m):=Sff(m).

Due to the assumption that the offspring generation distribution is Λ-measurable, with full probability, the algorithm is invariant to the values of the objective function restricted to zero sets (sets Z of measure zero, fulfilling Λ(Z)=0). The following definition captures these properties. It encodes the “essential” level set structure of an objective function.

Definition 1:
We call two measurable functions f,g:XR equivalent and write
if there exists a zero set ZX and a strictly monotonically increasing function φ:f(X)g(X) such that g(x)=φf(x) for all xXZ. Here f(X) and g(X) denote the images of f and g, respectively. We denote the corresponding equivalence class in the set of measurable functions by [f]:=g:XR|g^f.

It follows immediately from the definition that the sublevel sets of equivalent objective functions f^g coincide outside a zero set.

In the next step we construct a canonical representative for each equivalence class, which we can think of as a normal form of an objective function.

Definition 2:
For f:XR we define the spatial suboptimality functions
computing the volume of the success domain, that is, the set of improving points. If f^Λ< and f^Λ coincide then we drop the upper index and simply denote the spatial suboptimality function by f^Λ.

The definition is illustrated with two examples in Figures 1 and 2. In the following, mX will denote the elite (or parent) point, and m(t) is the elite point in iteration tN of an iterative algorithm, that is, an evolutionary algorithm with elitist selection. For two very different reasons, namely 1) to avoid divergence of the algorithm in the case of unbounded search spaces, and 2) for simplicity of the technical arguments in the proofs, we restrict ourselves to the case that the sublevel set Sfm(0) of the initial iterate m(0) is bounded and has finite spatial suboptimality. For most reasonable reference measures, boundedness implies finite spatial suboptimality. For X=Rd equipped with the Lebesgue measure this is equivalent to the topological closure Sfm(0)¯ being compact. The assumptions immediately imply that Sf<(y) and Sf(y) are bounded for all yfm(0), and that restricted to Sf(m(0)) the functions f^Λ< and f^Λ take values in the bounded range 0,f^Λm(0). Since an elitist algorithm never accepts points outside Sf(m(0)), we will from here on ignore the issue of infinite f^Λ-values.3

In the continuous case, a plateau is a level set of positive Lebesgue measure. When defining a local optimum as the best point within an open neighborhood, then an interior point of a plateau is a local optimum, which may not always be intended. Anyway, when analyzing the (1 + 1)-ES we will not handle plateaus and instead assume that level sets of f are zero sets. This also implies that f^Λ and f^Λ< agree. For now the only slightly weaker statement of the following lemma is sufficient, which does allow for plateaus.

Lemma 1:

Let f:XR be measurable. If f^Λ(x) is finite for all xX, then it holds f^Λ^f^f^Λ<.

The proof is found in the appendix. We use f^Λ and f^Λ< (or simply f^Λ if possible) as a canonical representative of its equivalence class (if the function values are finite, but see the discussion above). These functions have the property
that is, f^Λ encodes the Lebesgue measure of its own sublevel sets. We will measure optimization progress in terms of f^Λ-values. Decreasing the spatial suboptimality f^Λ by δ>0 amounts to reducing the volume of better points by δ.

Due to the rank-based nature of the algorithms under study we cannot expect to fulfill a sufficient decrease condition based on f-values. This is because a functional gain Δ:=f(x)-f(x')>0 achieved by moving from x to x' can be reduced to an arbitrarily small or large gain φ(f(x))-φ(f(x')), where ϕ is strictly monotonically increasing, and the class of transformations does not allow to bound the difference uniformly, neither additively nor multiplicatively. Instead, the following theorem establishes a progress or decrease guarantee measured in terms of the spatial suboptimality function f^Λ. It gets around the problem of inconclusive values in objective space (which, in case of single-objective optimization, is just the real line) by considering a quantity in search space, namely the reference measure of the sublevel set.

The algorithm is randomized; hence the decrease follows a distribution. The following definition captures properties of this distribution.

Definition 3:
Let P denote a probability distribution on X with a bounded density with respect to Λ and let f:XR be a measurable objective function. The quantity
is an upper bound on the density. Consider a sample xP. Define the functions
of probabilities of strict and weak improvements. Furthermore, we define s:[0,1]R as a measurable inverse function fulfilling r<s(q)qrs(q) for all q[0,1]. We collect the discontinuities of r< and r in the set Z:=zR|r<(z)<r(z) and define the sum
of squared improvement jumps.

Note that u, r<, r, s, Z, and ζ implicitly depend on Λ, P, and f. This is not indicated explicitly in order to avoid excessive clutter in the notation.

If the function f is continuous with continuous domain X and without plateaus, then r< and r coincide, we have ζ=0, and s maps each probability q[0,1] to the corresponding unique quantile of the distribution of f(x) under P. However, if there exists a plateau within the support of P (a level set of positive P-measure, that is, if X is discrete), then ζ is positive and on Z the function s takes values anywhere between the lower quantile P(f(x)<z) and the upper quantile P(f(x)z). The exact value does not matter, since the only use of s-values is as arguments to one of the r-functions. Indeed, r<(s(q)) and r(s(q)) “round” the probability q down or up, respectively, to the closest value that is attainable as the probability of sampling a sublevel set. The freedom in the choice of s can also be understood in the context of Figure 1: if the point z in the definitions of r< and r is located on the plateau, then s(q) can be the anywhere between the probability mass of the sub-level set excluding and including the plateau.

With these definitions in place, the following theorem controls the expected value as well as the quantiles of the decrease distribution.

Theorem 1:
Let P denote a probability distribution on X with a bounded density with respect to Λ and let f:XR be a measurable objective function. We use the notation of the above definition. Fix a reference point mX and let p:=r<f(m) denote the probability of strict improvement of a sample xP over m. Then for each q[0,p], the q-quantile of the f^Λ<-decrease is bounded from below by p-r<s(q)u and the q-quantile of the f^Λ-decrease is bounded by p-rs(q)u+ΛLf(m), i.e.,
The expected f^Λ<-decrease is bounded from below by
and the expected f^Λ-decrease is bounded from below by
Proof:

We start with the first two claims, which provide lower bounds on the q-quantiles of probabilities of improvement by some margin δ0. The argument here is elementary: an f^Λ-improvement of δ from m to x means that the f^Λ-sublevel set of x is smaller than that of m by Λ-mass δ (due to the offspring x improving upon its parent m). This corresponds to a difference in P-mass of the same f^Λ-sublevel sets of at most u·δ, which will correspond to q in the following. Note that the probabilities (Pr()-notation) correspond to the same distribution P from which x is sampled, and that f^Λ-values and s-values directly correspond to Λ-mass. The situation is illustrated in Figure 3.

To make the above argument precise we fix q and define the f-level
For q=0 the first two statements are trivial. For q>0 the infimum is attained and it thus holds PSf(yq)q. We define three disjoint sets: A:=Sf<(yq), B:=Lf(yq), and C:=Sf<(m)Sf(yq). The nested sublevel sets Sf<(yq)=A, Sf(yq)=AB, and Sf<(m)=ABC are unions of these sets. By the definitions of p and q the probability of the set C is upper bounded by P(C)=P(Sf<(m))-P(Sf(yq))p-q, and the probability of AB is lower bounded by P(AB)q.
We will show that the event of interest for the first claim, namely f^Λ<(m)-f^Λ<(x)p-r<(s(q))u, implies xSf(yq)=AB. To this end we define the f^Λ<-level zq<:=f^Λ<(m)-p-r<(s(q))u and the set Δq<:=Sf^Λ<<(m)Sf^Λ<(zq<). We have
and hence P(Δq<)p-r<(s(q)) by the definition of u. Together with Lemma 1, this implies Δq<BC, and hence ASf^Λ<(zq<). However, due to the definition of S (in contrast to S<), the sublevel set A being a subset of Sf^Λ<(zq<) implies that also the level set B is contained in Sf^Λ<(zq<). This shows the first claim.

For the second claim we define the f^Λ-level zq:=f^Λ(m)-p-r(s(q))u-ΛLf(m) and the set Δq:=Sf^Λ<(m)Sf^Λ(zq), and we note that it holds f^Λ(m)-ΛLf(m)=f^Λ<(m). Then, with an analogous argument as above we obtain P(Δq<)p-r(s(q)). In this case we immediately arrive at ΔqC and hence at ABSf^Λ(zq), which shows the second claim.

Let Q denote the quantile function (the generalized inverse of the cdf) of the f^Λ<-improvement max0,f^Λ<(m)-f^Λ<(x). Then the expectation is lower bounded by
The proof of the expected f^Λ improvement is analogous. The additional term ΛLf(m) again comes from f^Λ(m)=f^Λ<(m)+ΛLf(m).
Figure 1:

Objective function f:RR with plateau and jump (left). Corresponding spatial suboptimality f^Λ< (dotted) and f^Λ (solid) (right).

Figure 1:

Objective function f:RR with plateau and jump (left). Corresponding spatial suboptimality f^Λ< (dotted) and f^Λ (solid) (right).

Close modal
Figure 2:

All relevant properties of the sphere function f:R2R for rank-based optimization are specified by its circular level sets, illustrated in blue on the domain (ground plane). The spatial suboptimality of the point x is the Lebesgue measure of the gray area, which coincides with the function value f^Λ(x) indicated by the bold red vertical arrow. In this example it holds f^Λ(x)=π·x2, irrespective of the rank-preserving (and hence level-set preserving) transformation applied to f.

Figure 2:

All relevant properties of the sphere function f:R2R for rank-based optimization are specified by its circular level sets, illustrated in blue on the domain (ground plane). The spatial suboptimality of the point x is the Lebesgue measure of the gray area, which coincides with the function value f^Λ(x) indicated by the bold red vertical arrow. In this example it holds f^Λ(x)=π·x2, irrespective of the rank-preserving (and hence level-set preserving) transformation applied to f.

Close modal
Figure 3:

Illustration of the quantile decrease, here in the continuous case. The optimum is marked with a flag. In this example, the level lines of the objective function f are star-shaped. The circle with the dashed shading on the right indicates the sampling distribution, which has ball-shaped support in this case. The probability of the area AB is the value p=P(AB), and q=P(A) is the probability of the event of interest, corresponding to a significant improvement. The area Λ(B) is a lower bound on the improvement in terms of f^Λ. It is lower bounded by P(B)u=p-qu. The (bold) level line separating A and B belongs to A, and not to B. Therefore, if this set has positive measure, then we can only guarantee qP(A) (in contrast to equality), and the lower bound becomes p-r<(s(q))uΛ(B).

Figure 3:

Illustration of the quantile decrease, here in the continuous case. The optimum is marked with a flag. In this example, the level lines of the objective function f are star-shaped. The circle with the dashed shading on the right indicates the sampling distribution, which has ball-shaped support in this case. The probability of the area AB is the value p=P(AB), and q=P(A) is the probability of the event of interest, corresponding to a significant improvement. The area Λ(B) is a lower bound on the improvement in terms of f^Λ. It is lower bounded by P(B)u=p-qu. The (bold) level line separating A and B belongs to A, and not to B. Therefore, if this set has positive measure, then we can only guarantee qP(A) (in contrast to equality), and the lower bound becomes p-r<(s(q))uΛ(B).

Close modal

In our application of the above theorem to the (1 + 1)-ES x corresponds to the offspring point sampled from a Gaussian centered on m.

Due to the term ΛLf(m) in the decrease of f^Λ, the theorem covers the fitness-level method (Droste et al., 2002; Wegener, 2003). However, in particular for search distributions spreading their probability mass over many level sets, the theorem is considerably stronger.

In the continuous case, in the absence of plateaus, the statement can be simplified considerably:

Corollary 1:
Under the assumptions and with the notation of Definition 3 and Theorem 1 we assume in addition that all level sets of f have measure zero. Then for each q[0,p], the q-quantile of the f^Λ-decrease is bounded from below by
and the expected f^Λ-decrease is bounded from below by
The following corollary is a broken down version for Gaussian search distributions N(m,C) with mean m and covariance matrix C, which has the density
Corollary 2:
Consider the search space Rd and the Lebesgue measure Λ. Let f:RdR denote a measurable objective function with level sets of measure zero. Consider a normally distributed sample xN(m,C). Under the assumptions and with the notation of Definition 3 and Theorem 1, for each q[0,p], the q-quantile of the f^Λ-decrease is bounded from below by
and the expected f^Λ-decrease is bounded from below by

An isotropic distribution with component-wise standard deviation (step size) σ>0 has covariance matrix C=σ2I, where IRd×d is the identity matrix; hence we have det(C)=σd. In the context of continuous search spaces, Jägersküpper (2003) refers to f^Λ-progress as “spatial gain.” He analyzes in detail the gain distribution of an isotropic search distribution on the sphere model. This result is much less general than the previous corollary, since we can deal with arbitrary objective functions, which are characterized (locally) only by a single number, the success probability. For the special case of a Gaussian mutation and the sphere function, Jägersküpper's computation of the spatial gain is more exact, since it is tightly tailored to the geometry of the case, in contrast to being based on a general bound. We lose only a multiplicative factor of the gain, which does not impact our analysis significantly. However, it should be noted that in the problem analyzed by Jägersküpper, the factor grows with the problem dimension d. The spatial gain is closely connected to the notion of a progress rate (Rechenberg, 1973), in particular if the gain is lower bounded by a fixed fraction of the suboptimality. For a fixed objective function like the sphere model f(x)=x2 it is easy to relate functional suboptimality f(x)-f* to spatial suboptimality f^Λ(x).

In this section, we discuss properties of the (1 + 1)-ES algorithm and provide an analysis of its success-based step size adaptation rule that will allow us to derive global convergence theorems. To this end we introduce a nonstandard regularity property.

From here on, we consider the search space Rd, equipped with the standard Borel σ-algebra, and Λ denotes the Lebesgue measure. Of course, all results from the previous section apply, with X=Rd.

In each iteration tN, the state of the (1 + 1)-ES is given by (m(t),σ(t))Rd×R+. It samples one candidate offspring from the isotropic normal distribution x(t)N(m(t),σ(t))2I. The parent is replaced by successful offspring, meaning that the offspring must perform at least as well as the parent.

The goal of success-based step size adaptation is to maintain a stable distribution of the success rate, for example, concentrated around 1/5. This can be achieved with a number of different mechanisms. Here we consider the maybe simplest such mechanism, namely immediate adaptation based on “success” or “failure” of each sample. Pseudocode for the full algorithm is provided in Algorithm 1.

Constants c-<0 and c+>0 in Algorithm 1 control the change of log(σ) in case of failure and success, respectively. They are parameters of the method. For c++4·c-=0 we obtain an implementation of Rechenberg's classic 1/5-rule (Rechenberg, 1973). We call τ=c-c--c+ the target success probability of the algorithm, which is always assumed to be strictly less than 1/2. This is equivalent to c+>-c-. A reasonable parameter setting is c-,c+Ω1d.

Two properties of the algorithm are central for our analysis: it is rank-based and it performs elitist selection, ensuring that the best-so-far solution is never lost and the sequence f(m(t)) is monotonically decreasing.

Since step-size control depends crucially on the concept of a fixed rate of successful offspring, we define the success probability of the algorithm, which is the probability of a sampled point outperforming the parent in the search distribution center.

Definition 4:
For a measurable function f:RdR, we define the success probability functions

The function pf computes the probability of sampling a point at least as good as m, while pf< computes the probability of sampling a strictly better point. If pf< and pf coincide (i.e., if there are no plateaus), then we write pf. A nice property of the success probability is that it does not drop too quickly when increasing the step size:

Lemma 2:
For all mRd, σ>0 and a1 it holds

The proof is found in the appendix; this is the case for a number of technical lemmas in this section. The next step is to define a plausible range for the step size.

Definition 5:
For p[0,1] and a measurable function f:RdR, we define upper and lower bounds
on the step size guaranteeing lower and upper bounds on the probability of improvement.

We think of ξpf(m) with p>τ as a “too small” step size at m. Similarly, for p<τ, ηpf(m) is a “too large” step size at m. Assume that the two values of p are chosen so that a sufficiently wide range of “well-adapted” step sizes exists in between the “too small” and “too large” ones. We aim to establish that if the step size is outside this range, then step size adaptation will push it back into the range. The main complication is that the range for σ depends on the point m.

The following lemma establishes a gap between lower and upper step size bound, that is, a lower bound on the size of the step size range.

Lemma 3:

For 0pHpT1 it holds pHd·ξpTf(x)pTd·ηpHf(x) for all xRd.

The following definition is central. It captures the ability of the (1 + 1)-ES to recover from a state with a far too small step size. This property is needed to avoid premature convergence.

Definition 6:

For p>0, a function f:RdR is called p-improvable in xRd if ξpf(x) is positive. The function is called p-improvable on YRd if ξpf|Y (the function ξpf restricted to Y) is lower bounded by a positive, lower semi-continuous function ξ˜pf:Y(0,1]. A point xRd is called p-critical if it is not p-improvable for any p>0.

The property of p-improvability is a nonstandard regularity condition. The concept applies to measurable functions; hence we do not need to restrict ourselves to smooth or continuous objectives. On the one hand side, the property excludes many measurable and even some smooth functions. On the other hand, it is far less restrictive than continuity and smoothness, in the sense that it allows the objective function to jump and the level sets to have kinks. Intuitively, in the two-dimensional case illustrated in Figure 4, if for each point the sublevel set opens up in an angle of more than 2πp, then the function is p-improvable. This is the case for many discontinuous functions, however, not for all smooth ones. The degree three polynomial f(x1,x2)=x13+x22 can serve as a counter example, since every point of the form (x1,0) is p-critical. All of its contour lines form cuspidal cubics; see Figure 6 in Section 5.3. Local optima are always p-critical, but many critical points of smooth functions are not (see below). The above example demonstrates that some saddle points share this property; however, if x is p-critical but not locally optimal, then pf<(x,σ)>0 for all σ>0. This means that such a point can be improved with positive probability for each choice of the step size, but in the limit σ0 the probability of improvement tends to zero.

Figure 4:

Illustration of a contour line with a kink opening up in an angle indicated by the dashed lines. The circles are iso-density lines on the isotropic Gaussian search distribution centered on the kink.

Figure 4:

Illustration of a contour line with a kink opening up in an angle indicated by the dashed lines. The circles are iso-density lines on the isotropic Gaussian search distribution centered on the kink.

Close modal

We should stress the difference between point-wise p-improvability, which simply demands that ξpf is positive, and set-wise p-improvability, which in addition demands that ξpf is lower bounded by a lower semicontinuous positive function. The latter property ensures the existence of a positive lower bound for ξpf on a compact set. In this sense, set-wise p-improvability is uniform on compact sets. In Sections 5.5 and 5.6, we will see examples where this makes a decisive difference.

Intuitively, the value of p of a p-improvable function is critical: if it is below τ, then the algorithm may be endangered to systematically decrease its step size while it should better do the contrary.

The next lemma establishes that smooth functions are p-improvable in all regular points, and also in most saddle points.

Lemma 4:

Let f:RdR be continuously differentiable.

  1. For a regular point xRd, f is p-improvable in x for all p<12.

  2. Let Y denote the set of all regular points of f, then f is p-improvable on Y, for all p<12.

  3. Let xRd denote a critical point of f, let f be twice continuously differentiable in a neighborhood of x, and let H=2f(x) denote the Hessian matrix. If H has at least one negative eigen value, then x is not p-critical.

Similarly, we need to ensure that the step size does not diverge to . This is easy, since the spatial suboptimality is finite:

Lemma 5:
Consider the state (m(t),σ(t)) of the (1 + 1)-ES. For each p(0,1), if
then pf<(m(t),σ(t))p.

In other words, a too large step size is very likely to produce unsuccessful offspring. The probability of success decays quickly with growing step size, since the step size bound grows slowly in the form Θ(p-1/d) as the success probability p decays to zero. Applying the above inequality to p<τ implies that for large enough step size σ(t), the expected change E[log(σ(t+1))-log(σ(t))] in the (1 + 1)-ES (Algorithm 1) is negative.

The following lemma is elementary. It is used multiple times in proofs, with the interpretation of the event “1” meaning that a statement holds true. It has a similar role as drift theorems in an analysis of the expected or high-probability behavior (Lehre and Witt, 2013; Lengler and Steger, 2016; Akimoto et al., 2018); however, here we aim for almost sure results.

Lemma 6:

Let X(t){0,1} denote a sequence of independent binary random variables. If there exists a uniform lower bound Pr(X(t)=1)p>0, then almost surely there exists an infinite subsequence (tk)kN so that X(tk)=1 for all kN.

In applications of the lemma, the events of interest are not necessarily independent; however, they can be “made independent” by considering a sequence of independent events that imply the events of interest. In our applications, this is the case if the events of actual interest hold with probability of at least p; then an i.i.d. sequence of Bernoulli events implying corresponding sub-events with probability of exactly p does the job. In other words, we will have a sequence X˜(t) of independent events, where X˜(t)=1 implies X(t)=1. The above lemma is then applied to X˜(t), which trivially yields the same statement for X(t). We imply this construction in all applications of the lemma.

The following lemma establishes, under a number of technical conditions, that the step size control rule succeeds in keeping the step size stable. If the prerequisites are fulfilled, then the result yields an impossible fact, namely that the overall reduction of the spatial suboptimality is unbounded. So the lemma is designed with proofs by contradiction in mind.

Lemma 7:
Let m(t),σ(t) denote the sequence of states of the (1 + 1)-ES on a measurable objective function f:RdR. Let pT,pH(0,1) denote probabilities fulfilling pH<τ<pT and pHpTed·c-, and assume the existence of constants 0<bT<bH such that
for all tN. Then, with full probability, there exists an infinite subsequence (tk)kN of iterations fulfilling
(1)
for all kN.

Equation (1) is a rather weak condition demanding that step-size adaptation works as desired. However, the requirement of a uniform lower bound bT on the step size together with Theorem 1 implies that the (1 + 1)-ES would make infinite f^Λ-progress in expectation. This is of course impossible if f^Λ(m(0)) is finite, since f^Λ is by definition non-negative. Therefore the lemma does not describe a typical situation observed when running the (1 + 1)-ES, but quite in contrast, an impossible situation that needs to be excluded in the proof of the main result in the next section.

In this section, we establish our main result. The theorem ensures the existence of a limit point of the sequence m(t) in a subset of desirable locations. In many cases this amounts to convergence of the algorithm to a (local) optimum.

Theorem 2:

Consider a measurable objective function f:RdR with level sets of measure zero. Assume that K0:=Sfm(0)¯ is compact, and let K1K0 denote a closed subset. If f is p-improvable on K0K1 for some p>τ, then the sequence m(t)tN has a limit point in K1.

Proof:
Lemma 5 ensures the existence of 0<pH<e-d·c-·τ and
such that it holds ηpHf(x)bH uniformly for all xK0. In particular, bH is a uniform upper bound on ηpHf.
Let B(x,r) denote the open ball of radius r>0 around xRd and define the compact set
It holds K(r)K0K1 and r>0K(r)=K0K1; hence K(r) is a compact exhaustion of K0K1.
Fix r>0, and assume for the sake of contradiction that all points m(t), t>t0, are contained in K(r). We set pT:=p. Let ξ˜pTf denote the positive lower semicontinuous lower bound on ξpTf, which is guaranteed to exist due to the p-improvability of f. We define
and apply Lemma 7 to obtain an infinite subsequence of states with step size lower bounded by σ(t)bT>0. According to Lemma 2, the success probability is lower bounded by pfm(t),σ(t)pI:=(bT/bH)d·pT>0 for all mK(r) and σ[bT,bH].

Corollary 2 ensures that in each such state the probability to decrease the f^Λ-value by at least (2π)d/2·bTd·pI/2 is lower bounded by pI/2>0. We apply Lemma 6 with the following construction. For each state (m,σ) we pick a set E(m,σ)Rd of probability mass pI/2 improving on f^Λ(m) by at least (2π)d/2·bTd·pI/2. Then we model the sampling procedure of the (1 + 1)-ES in iteration t as a two-stage process: first we draw a binary variable X˜(t){0,1} with Pr(X˜(t)=1)=pI/2, and then we draw x(t) from a Gaussian restricted to E(m(t-1),σ(t-1)) if X˜(t)=1, and restricted to the complement otherwise. The variables X˜(t) are independent, by construction.

Then Lemma 6 implies that the overall f^Λ-decrease is almost surely infinite, which contradicts the fact that f^Λ(m(0)) is finite and f^Λ is lower bounded by zero. Hence, the sequence m(t) leaves K(r) after finitely many steps, almost surely. For r=1/n, let tn denote an iteration fulfilling m(tn)K(r). The sequence m(tn)nN does not have a limit point in K0K1 (since that point would be contained in K(r) for some r>0), however, due to the Bolzano-Weierstraß theorem it has at least one limit point in K0, which must therefore be located in K1.

The above theorem is of primary interest if K1 is the set of (local) minima of f, or at least the set of critical or p-critical points. Due to the prerequisites of the theorem we always have
that is, p-critical points are candidate limit points.

In accordance with Akimoto et al. (2010), the following corollary establishes convergence to a critical point for continuously differentiable functions.

Corollary 3:

Let f:RdR be a continuously differentiable function with level sets of measure zero. Assume that K0=Sfm(0)¯ is compact. Then the sequence m(t)tN has a critical limit point.

Proof:

Define K1:={xK0|f(x)=0} as the set of critical points. This set is compact. Lemma 4 ensures that f is p-improvable on K0K1 for all p<1/2. Then the claim follows immediately from Theorem 2.

Technically the above statements do not apply to problems with unbounded sublevel sets. However, due to the fast decay of the tails of Gaussian search distributions we can often approximate these problems by changing the function “very far away” from the initial search distribution, in order to make the sublevel sets bounded. We may then even apply the theorem with empty K1, which implies that after a while the approximation becomes insufficient since the algorithm diverges. In this sense we can conclude divergence, for example, on a linear function. We will use this argument several times in the next section, mainly to avoid unnecessary technical complications when defining saddle points and ridge functions.

We may ask whether p-improvability for p>τ is not only a sufficient but also a necessary condition for global convergence. This turns out to be wrong. The quadratic saddle point case discussed in Section 5.2 is a counter example, where the algorithm diverges reliably even if the success probability is far smaller than τ. In contrast, the ridge of p-critical saddle points analyzed in Section 5.3 results in premature convergence, despite the fact that the critical points form a zero set, and this can even happen for a ridge of p-improvable points with p<τ; see Section 5.4. Drift analysis is a promising tool for handling all of these cases. Here we provide a rather simple result, which still suffices for many interesting cases. A related analysis for a nonelitist ES was carried out by Beyer and Meyer-Nieberg (2006).

Theorem 3:
Consider a measurable objective function f:RdR with level sets of measure zero. Let mRd be a p-critical point. If the success probability decays sufficiently quickly, that is, if
then for each given p<1 there exists an initial condition such that the (1 + 1)-ES converges to m with probability of at least p.
Proof:

Define the zero sequence SK:=k=Kpf(m,ek·c-). For given p<1, there exists a K0 such that SK0<1-p. By definition, the probability of never sampling a successful offspring when starting the algorithm in the initial state m(0)=m, σ(0)=eK0·c- is given by SK0; in this case we have m(t)=m for all tN.

The above theorem precludes global convergence to a (local) optimum with full probability in the presence of a suitable nonoptimal p-critical point.

In this section, we analyze various example problems with very different characteristics, by applying the above convergence analysis. We characterize the optimization behavior of the (1 + 1)-ES, giving either positive or negative results in terms of global convergence. We start with smooth functions and then turn to less regular cases of nonsmooth and discontinuous functions. On the one hand side, we show that the theorem is applicable to interesting and nontrivial cases; on the other hand we explore its limits.

5.1  The 2-D Rosenbrock Function

The two-dimensional Rosenbrock function is given by
This is a degree four polynomial. The function is unimodal (has a single local minimum), but not convex. Moreover, it does not have critical points other than the global optimum x*=(1,1). The function is illustrated in Figure 5.
Figure 5:

The 2-D Rosenbrock function in the range [-2,2]×[-1,3].

Figure 5:

The 2-D Rosenbrock function in the range [-2,2]×[-1,3].

Close modal

The Rosenbrock function is a popular test problem because it requires a diverse set of optimization behaviors: the algorithm must descend into a parabolic valley, follow the valley while adapting to its curved shape, and finally converge into the global optimum, which is a smooth optimum with nontrivial (but still moderate) conditioning.

Corollary 3 immediately implies convergence of the (1 + 1)-ES into the global optimum. It does not say anything about the speed of convergence; however, Jägersküpper (2006a) established linear convergence in the last phase with overwhelming probability (however, using a different step size adaptation rule).

Taken together, these results give a rather complete picture of the optimization process: irrespective of the initial state we know that the algorithm manages to locate the global optimum without getting stuck on the way. Once the objective function starts to look quadratic in good enough approximation, Jägersküpper's result indicates that linear convergence can be expected. The same analysis applies to all twice continuously differentiable unimodal functions without critical points other than the optimum.

5.2  Saddle Points—The p-Improvable Case

We consider the quadratic objective function
with parameter a>0. The origin is a saddle point. It is p-improvable for all p<2cot-1(a)/π (see the appendix for details). For small enough a, the success probability is larger than τ and Corollary 3 applies, while for large values of a the success probability decays to zero and we lose all guarantees.

Simulations show that the ES overcomes the zero level set containing the saddle point without a problem, also for large values of a. It seems that p-improvable saddle points do not result in premature convergence of the algorithm, irrespective of the value of p>0. However, this statement is based on an empirical observation, not on a rigorous proof.

5.3  Saddle Points—The p-Critical Case

The cubic polynomial
has p-critical saddle points on the line R×{0}R2 forming a ridge; see Figure 6. Without loss of generality we consider m=0R2 in the following. A successful offspring xR2 fulfills x13+x220. For small enough σ and hence for small enough x1, xΘ(σ), this implies -x1|x2| and hence -x1Θ(σ) and |x2|o(σ). Plugging this into the above inequality we obtain |x2|O(-x1·σ). Therefore, for small σ we have pf(0,σ)O(σ). This implies that the cumulative success probability
is finite, and Theorem 3 yields (premature) convergence with arbitrarily high probability.
Figure 6:

Level lines of the function f(x1,x2)=x13+x22 in the range [-1,1]2. The inset shows a zoom of factor 10.

Figure 6:

Level lines of the function f(x1,x2)=x13+x22 in the range [-1,1]2. The inset shows a zoom of factor 10.

Close modal

5.4  Linear Ridge

Consider the linear ridge objective
with parameter a>0. The function is continuous, and its level sets contain a kink. Again, the line R×{0} is critical; this is where the function is nondifferentiable. The function is p-improvable for p<cot-1(a)/π<1/2 (see the appendix). For a the success probability decays to zero.

As long as cot-1(a)/π>τ we can conclude divergence of the algorithm (the intended behavior) from Theorem 2. Otherwise we lose this property, and it is well known and easy to check with simulations that for large enough a the algorithm indeed converges prematurely.

5.5  Sphere with Jump

Our next example is an “essentially discontinuous” problem in the sense that in general no function in the equivalence class [f] is continuous. We consider objective functions of the form
where 11S denotes the indicator function of a measurable set SRd. If S has a sufficiently simple shape then this problem is similar to a constrained problem where S is the infeasible region (Arnold and Brauer, 2008), at least for small enough σ. As long as m(t)S the (1 + 1)-ES essentially optimizes the sphere function, and as soon as m(t)S the (soft) constraint comes into play.

If S is the complement of a star-shaped open neighborhood of the origin then it is easy to see that the function is unimodal and p-improvable for all p<1/2. Theorem 2 applied with K1:={0} yields the existence of a subsequence converging to the origin, which implies convergence of the whole sequence due to monotonicity of fm(t). The results of Jägersküpper (2005) and Akimoto et al. (2018) imply linear convergence.

Other shapes of S give different results. For example, for d2, if S is a ball not containing the origin then the function is still unimodal. For example, define S as the open ball of radius 1/2 around the first unit vector e1=(1,0,,0)Rd. Then at m:=3/2·e1 we have ξpf(m)=0 for all p>0, and according to Theorem 3 the algorithm can converge prematurely if the step size is small. Alternatively, if S is the closed ball, then all points except the origin are p-improvable for all p<1/2; however, there does not exist a positive lower semicontinuous lower bound on ξpf in any neighborhood of m=3/2·e1, and again the algorithm can converge to this point, irrespective of the target success probability τ.

Now consider the strip S:=(a,)×(0,1)R2 with parameter a>0. An elementary calculation of the success rate at m:=(a+ɛ,1) for σ0 shows that the (1 + 1)-ES is guaranteed to converge to the optimum irrespective of the initial conditions if tan-1(a)/(2π)<τ (details are found in the appendix), that is, if a is large enough; otherwise the algorithm can converge prematurely to a point on the edge (a,)×{1} of S.

5.6  Extremely Rugged Barrier

Let us drive the above discontinuous problem to the extreme. Consider the one-dimensional problem
where S[-1,0] is a Smith-Volterra-Cantor set, also known as a fat Cantor set. S is closed, has positive measure (usually chosen as Λ(S)=1/2), but is nowhere dense. Counterintuitively, the function is unimodal in the sense that no point is optimal restricted to an open neighborhood (which is what commonly defines a local optimum). Still, intuitively, S should act as a barrier blocking optimization progress with high probability.

The function is point-wise p-improvable everywhere. However, similar to the closed ball case in the previous section, there is no positive, lower semicontinuous lower bound on ξpf. Therefore Theorem 2 does not apply. Indeed, unsurprisingly, simulations4 show that the algorithm gets stuck with positive probability when initialized with 0<x(0)1 and σ1. When removing 0 from S, then analogous to Section 5.3 we obtain pf(m,σ)O(σ) for m=0 and small σ, and hence Theorem 3 applies.

In contrast, if S is a Cantor set of measure zero then the algorithm diverges successfully, since it ignores zero sets with full probability.

We have established global convergence of the (1 + 1)-ES for an extremely wide range of problems. Importantly, with the exception of a few proof details, the analysis captures the actual dynamics of the algorithm and hence consolidates our understanding of its working principles.

Our analysis rests on two pillars. The first one is a progress guarantee for rank-based evolutionary algorithms with elitist selection. In its simplest form, it bounds the progress on problems without plateaus from below. It seems to be quite generally applicable, for example, to runtime analysis and hence to the analysis of convergence speed.

The second ingredient is an analysis of success-based step size control. The current method barely suffices to show global convergence. It is not suitable for deducing stronger statements such as linear convergence on scale invariant problems. Control of the step size on general problems therefore needs further work.

Many natural questions remain open, the most significant are listed in the following. These open points are left for future work.

  • The approach does not directly yield results on the speed of convergence. However, the progress guarantee of Theorem 1 is a powerful tool for such an analysis. It can provide us with drift conditions and hence yield bounds on the expected runtime and on the tails of the runtime distribution. But for that to be effective we need better tools for bounding the tails of the step size distribution. Here, again, drift is a promising tool.

  • The current results are limited to step-size adaptive algorithms and do not include covariance matrix adaptation. One could hope to extend the proceeding to the (1 + 1)-CMA-ES algorithm (Igel et al., 2007), or to (1 + 1)-xNES (Glasmachers et al., 2010). Controlling the stability of the covariance matrix is expected to be challenging. It is not clear whether additional assumptions will be required. As an added benefit, it may be possible to relax the condition p>τ for p-improvability, by requiring it only after successful adaptation of the covariance matrix.

  • Plateaus are currently not handled. Theorem 1 shows how they distort the distribution of the decrease. Worse, they affect step size adaptation, and they make it virtually impossible to obtain a lower bound on the one-step probability of a strict improvement. Therefore, proper handling of plateaus requires additional arguments.

  • In the interest of generality, our convergence theorem only guarantees the existence of a limit point, not convergence of the sequence as a whole. We believe that convergence actually holds in most cases of interest (at least as long as there are no plateaus; see above). This is nearly trivial if the limit point is an isolated local optimum; however, it is unclear for a spatially extended optimum, for example, a low-dimensional variety or a Cantor set.

  • Our current result requires a saddle point to be p-improvable for some p>τ, otherwise the theorem does not exclude convergence of the ES to the saddle point. We know from simulations that the (1 + 1)-ES overcomes p-improvable saddle points reliably, also for pτ. A proper analysis guaranteeing this behavior would allow the establishment of statements analogous to work on gradient-based algorithms that overcome saddle points quickly and reliably; see for example, Dauphin et al. (2014). However, this is clearly beyond the scope of the present article.

  • We provide only a minimal negative result stating that the algorithm may indeed converge prematurely with positive probability if there exists a p-critical point for which the cumulative success probability does not sum to infinity. In Section 5.5, it becomes apparent that this notion is rather weak, since the statement is not formally applicable to the case of a closed ball, which however differs from the open ball scenario only on a zero set. This makes clear that there is still a gap between positive results (global convergence) and negative results (premature convergence). Theorem 3 can certainly be strengthened, but the exact conditions remain to be explored. A single p-improvable point with p<τ is apparently insufficient. A p-critical point may be sufficient, but it is not necessary.

I would like to thank Anne Auger for helpful discussions, and I gratefully acknowledge support by Dagstuhl seminar 17191 “Theory of Randomized Search Heuristics.”

1

Some authors refer to global convergence as convergence to a global optimum. We do not use the term in this sense.

2

Jägersküpper analyzed a different step size adaptation rule. However, it exhibits essentially the same dynamics as Algorithm 1.

3

An alternative approach to avoiding infinite values is to apply a bounded reference measure with full support, for example, a Gaussian on Rd. In the absence of a uniform distribution on X, the price to pay for a bounded and everywhere positive reference measure is a nonuniform measure, which does not allow for a uniform, positive lower bound. The resulting technical complications seem to outweigh the slightly increased generality of the results.

4

Special care must be taken when simulating this problem with floating point arithmetic. Our simulation is necessarily inexact; however, not beyond the usual limitations of floating point numbers. It does reflect the actual dynamics well. The fitness function is designed such that the most critical point for the simulation is zero, which is where standard IEEE floating point numbers have maximal precision.

Akimoto
,
Y.
,
Auger
,
A.
, and
Glasmachers
,
T
. (
2018
). Drift theory in continuous search spaces: Expected hitting time of the (1 + 1)-es with 1/5 success rule. In
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
, pp.
1922
1925
.
Akimoto
,
Y.
,
Nagata
,
Y.
,
Ono
,
I.
, and
Kobayashi
,
S
. (
2010
). Theoretical analysis of evolutionary computation on continuously differentiable functions. In
Genetic and Evolutionary Computation Conference
, pp.
1401
1408
.
Arnold
,
D.
, and
Brauer
,
D
. (
2008
). On the behaviour of the (1 + 1)-es for a simple constrained problem. In
Parallel Problem Solving from Nature
, pp.
1
10
.
Auger
,
A
. (
2005
).
Convergence results for the ( 1 , λ ) -SA-ES using the theory of ϕ -irreducible Markov chains
.
Theoretical Computer Science
,
334
(
1–3
):
35
69
.
Beyer
,
H.-G.
, and
Meyer-Nieberg
,
S
. (
2006
). Self-adaptation on the ridge function class: First results for the sharp ridge. In
Parallel Problem Solving from Nature
, pp.
72
81
.
Dauphin
,
Y. N.
,
Pascanu
,
R.
,
Gulcehre
,
C.
,
Cho
,
K.
,
Ganguli
,
S.
, and
Bengio
,
Y.
(
2014
). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In
Z.
Ghahramani
,
M.
Welling
,
C.
Cortes
,
N. D.
Lawrence
, and
K. Q.
Weinberger
(Eds.),
Advances in neural information processing systems
27
, pp.
2933
2941
.
Red Hook, NY
:
Curran Associates, Inc
.
Diouane
,
Y.
,
Gratton
,
S.
, and
Vicente
,
L
. (
2015
).
Globally convergent evolution strategies
.
Mathematical Programming
,
152
(
1–2
):
467
490
.
Droste
,
S.
,
Jansen
,
T.
, and
Wegener
,
I
. (
2002
).
On the analysis of the (1+ 1) evolutionary algorithm
.
Theoretical Computer Science
,
276
(
1–2
):
51
81
.
Gilbert
,
J.
, and
Nocedal
,
J
. (
1992
).
Global convergence properties of conjugate gradient methods for optimization
.
SIAM Journal on Optimization
,
2
(
1
):
21
42
.
Glasmachers
,
T.
,
Schaul
,
T.
, and
Schmidhuber
,
J
. (
2010
). A natural evolution strategy for multi-objective optimization. In
Parallel Problem Solving from Nature
, pp.
627
636
.
Hansen
,
N.
,
Arnold
,
D. V.
, and
Auger
,
A.
(
2015
). Evolution strategies. In
J.
Kacprzyk
and
W.
Pedrycz
(Eds.),
Handbook of computational intelligence
, pp.
871
898
.
Berlin
:
Springer
.
Hansen
,
N.
, and
Ostermeier
,
A
. (
2001
).
Completely derandomized self-adaptation in evolution strategies
.
Evolutionary Computation
,
9
(
2
):
159
195
.
Igel
,
C.
,
Hansen
,
N.
, and
Roth
,
S
. (
2007
).
Covariance matrix adaptation for multi-objective optimization
.
Evolutionary Computation
,
15
(
1
):
1
28
.
Jägersküpper
,
J.
(
2003
).
Analysis of a simple evolutionary algorithm for minimization in Euclidean spaces
.
Automata, Languages and Programming
, p.
188
.
Jägersküpper
,
J
. (
2005
). Rigorous runtime analysis of the (1 + 1) ES: 1/5-rule and ellipsoidal fitness landscapes. In
International Workshop on Foundations of Genetic Algorithms
, pp.
260
281
.
Jägersküpper
,
J
. (
2006a
).
How the (1 + 1) ES using isotropic mutations minimizes positive definite quadratic forms
.
Theoretical Computer Science
,
361
(
1
):
38
56
.
Jägersküpper
,
J
. (
2006b
). Probabilistic runtime analysis of ( 1 + , λ ), ES using isotropic mutations. In
Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO)
, pp.
461
468
.
Kern
,
S.
,
Müller
,
S. D.
,
Hansen
,
N.
,
Büche
,
D.
,
Ocenasek
,
J.
, and
Koumoutsakos
,
P
. (
2004
).
Learning probability distributions in continuous evolutionary algorithms—A comparative review
.
Natural Computing
,
3
(
1
):
77
112
.
Lehre
,
P. K.
, and
Witt
,
C.
(
2013
).
General drift analysis with tail bounds
.
Technical Report. Retrieved from arXiv:1307.2559
.
Lengler
,
J.
, and
Steger
,
A.
(
2016
).
Drift analysis and evolutionary algorithms revisited
.
Technical Report. Retrieved from arXiv:1608.03226
.
Rechenberg
,
I
. (
1973
).
Evolutionsstrategie: Optimierung technisher systeme nach prinzipien der biologischen evolution
.
Stuttgart
:
Frommann-Holzboog
.
Torczon
,
V
. (
1997
).
On the convergence of pattern search algorithms
.
SIAM Journal on Optimization
,
7
(
1
):
1
25
.
Wegener
,
I.
(
2003
). Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In
Evolutionary optimization
, pp.
349
369
.
International Series in Operations Research and Management
, Vol.
48
.
Boston
:
Springer
.
Wolfe
,
P
. (
1969
).
Convergence conditions for ascent methods
.
SIAM Review
,
11
(
2
):
226
235
.

Here we provide the proofs of technical lemmas that were omitted from the main text in the interest of readability.

Proof of Lemma 1:

We have to show that the level sets of all three functions agree outside a set of measure zero. It is immediately clear from definition 1 that the level sets of f are a refinement of the level sets of f^Λ and f^Λ<, i.e., f(x)=f(x') implies f^Λ(x)=f^Λ(x') and f^Λ<(x)=f^Λ<(x'), and f^Λ(x)<f^Λ(x') and f^Λ<(x)<f^Λ<(x') both imply f(x)<f(x').

It remains to be shown that f^Λ and f^Λ< do not join f-level sets of positive measure. Let yR denote a level so that Y=f^Λ<-1(y) has positive measure Λ(Y)>0. We have to show that this measure (not necessarily the whole set, only up to a zero set) is covered by a single f-level set. Assume the contrary, for the sake of contradiction. Then we find ourselves in one of the following situations:

  1. There exist x,x'Y fulfilling a:=f(x)<f(x')=:a' and it holds Λf-1(a)>0 and Λf-1(a')>0. So the mass of Y is split into at least two chunks of positive measure. This implies f^Λ<(x')-f^Λ<(x)Λf-1(a)>0, which contradicts the assumption that x and x' belong to the same f^Λ<-level.

  2. There exist x,x'Y fulfilling a=f(x)<f(x')=a' and it holds Λf-1(I)>0 for the open interval I=(a,a'). So Y consists of a continuum of level sets of measure zero. Again, this implies f^Λ<(x')-f^Λ<(x)Λf-1(I)>0, leading to the same contradiction as in the first case.

The argument for f^Λ is exactly analogous.

Proof of Lemma 2:
It holds
The computation for pf is analogous.
Proof of Lemma 3:
Fix x and define ξ:=ξpTf(x). The cases pH=0 and ξ=0 are trivial, so in the following we treat the case that both are positive. For a1 it holds
In other words, the success probability for step size a·ξ is at least pT/ad. Hence, in order to push the success probability below pT/ad, the step size must be at least ξ·a, which therefore bounds ηpT/adf(x) from below. Applying the above argument with a=pT/pHd completes the proof.
Proof of Lemma 4:
In a small enough neighborhood of a regular point x the function f can be approximated arbitrarily well by a linear function (its first order Taylor polynomial). In particular, the level set of f is arbitrarily well approximated by a hyperplane, for which the probability of strict improvement is exactly 1/2. Hence we have
which immediately implies the first statement.
We have already seen that the second statement holds point-wise. It remains to be shown that ξpf|Y is lower bounded by a positive, lower semicontinuous function. To this end we show that ξpf itself is lower-semicontinuous, and we note that ξpf|Y takes positive values. Consider a convergent sequence (at)tNxRd and define ξa:=liminftξpf(at) and ξx:=ξpf(x). We have to show that it holds ξxξa for all choices of x and (at)tN. We define
which allows us to write ξa=inf(Sa) and ξx=inf(Sx). Fix σSa and a corresponding subsequence (tk)kN so that it holds pf<(atk,σ)pkN. From the continuity of f it follows that the success probability function pf< is lower semicontinuous (and even continuous in its second argument, the step size). From limkatk=x and lower semicontinuity of pf< it follows σSx. We conclude SaSx and therefore ξxξa.
To show the last statement we construct a cone of improving steps centered at x. This cone makes up a fixed fraction of each ball centered on x, which shows that x is p-improvable, where p is any number smaller than the volume of the intersection of ball and cone divided by the volume of the ball, which is well-defined and positive in the limit when the radius tends to zero. Let v denote an eigen vector of H fulfilling vTHv<0. For σ0, the objective function is well approximated by the quadratic Taylor expansion
The sublevel set Sf<(x) is locally well approximated by Sg<(x), which is a cone centered on x. Whether a ray x+R·z belongs to Sg<(x) or not depends on whether zTHz<0 or not. Now, the eigen vector v has this property, and due to continuity of g, the same holds for an open neighborhood N of v. The cone x+R·N is contained in Sg<(x) and has the same positive probability sg<(x,σ)=p>0 under N(x,σ2I) for all σ>0. We conclude
which completes the proof.
Proof of Lemma 5:
We use the short notation m:=m(t) and σ=σ(t). Let S=Sf<(m) denote the region of improvement, with Lebesgue measure f^Λ(m). The probability of sampling from this region is bounded by
where the last inequality is equivalent to the assumption.
Proof of Lemma 6:
Assume the contrary, for the sake of contradiction. Then
Fix NN. Hoeffding's inequality applied with ɛ=p/2 and n2Np yields
Hence, for n, with full probability the infinite sum exceeds N. Since N was arbitrary, we arrive at a contradiction.
Proof of Lemma 7:
In each iteration, the step size σ is multiplied by either ec- or ec+. According to Lemma 3, the condition pHpTed·c- yields
An unsuccessful step of the (1 + 1)-ES in iteration t results in a reduction of the step size by the factor σ(t+1)σ(t)=ec-<1 and leaves m(t+1)=m(t) unchanged. We conclude that no such step can overjump the interval ξpTfm(t),ηpHfm(t), in the sense of σ(t)ηpHfm(t) and σ(t+1)ξpTfm(t). The above property also implies bHbTe-c-.

The central proof argument works as follows. First, we exclude that the step size remains outside [bT,bH] for too long. The same argument does not work for the target interval defined in Equation (1) because of its time dependency—we could overjump the moving target. Instead we show that the only way for the step size to avoid the target interval for an infinite time is to overjump, that is, to find itself above and below the interval infinitely often. Finally, an argument exploiting the properties of unsuccessful steps allows us to consider a static target, which cannot be overjumped by the property already shown above.

First, we show that there exists an infinite subsequence of iterations t fulfilling σ(t)[bT,bH]. This statement is strictly weaker than the assertion to be shown. It is still helpful in the following because then we know that the step sizes return to a fixed, t-independent interval for an infinite number of times. Assume for the sake of contradiction that there exists t0 such that σ(t)bT for all tt0. The logarithmic step size change δ(t):=log(σ(t+1))-log(σ(t)) takes the values c+>0 with probability at least pT>τ and c-<0 with probability at most 1-pT<1-τ, hence
For t1>t0 we consider the random variable log(σ(t1))=log(σ(t0))+t=t0t1-1δ(t). The variables δ(t) are not independent. We create independent variables as follows. For each candidate state (m,σ) fulfilling σ<bT we fix a set I(m,σ)Sf<(m) of improving steps with probability mass exactly pT under the distribution N(m,σ2I). Let δ˜(t) denote the step size change corresponding to δ(t) for which the step size is increased only if the iterate m(t+1) is contained in I(m,σ). Note that these hypothetical step size changes do not influence the actual sequence of algorithm states. Therefore, the sequence is i.i.d., and it holds δ˜(t)δ(t). From Hoeffding's inequality applied with ɛ=Δ/2 to t=t0t1-1δ˜(t)t=t0t1-1δ(t) we obtain
that is, the probability that the log step size grows by less than Δ/2 per iteration on average is exponentially small in t1-t0. For t1t0+2/Δ·log(bT)-log(σ(t0)) the probability becomes minuscule, and for t1 it vanishes completely. Hence, with full probability, we arrive at a contradiction. The same logic contradicts the assumption that σ(t)bH for all tt0. Hence, with full probability, subepisodes of very small and very large step size are of finite length, and according to Lemma 6 the sequence of step sizes returns infinitely often to the interval [bT,bH].
Next we show that there exists an infinite subsequence of iterations fulfilling Equation (1). Again, assume the contrary. We know already that σ(t) does not stay below bT or above bH for an infinite time. Hence, there must exist an infinite subsequence fulfilling either
(2)
or
(3)
Assume an infinite subsequence fulfilling Equation (2). For each of these iterations, the success probability is lower bounded by pT. Consider the case of consecutive successes. Until the event
(4)
the probability of success remains lower bounded by pT>0. The condition is fulfilled after at most n+:=log(bH)-log(bT)/c+ successes in a row, hence the probability of such an episode occurring is lower bounded by pTn+>0. Lemma 6 ensures the existence of an infinite subsequence of iterations with this property. Each such episode contains a point fulfilling either Equation (1) or Equation (4). By assumption, the former happens only finitely often, which implies that the latter happens infinitely often.
Hence, this case as well as the alternative assumption of an infinite sequence fulfilling Equation (3), handled with an analogous argument, result in an infinite subsequence with the property
Following the same line of arguments as above, as long as σ(t)ηpHfm(t), the probability of an unsuccessful step is lower bounded by 1-pH>0. After at most n-:=log(bT)-log(bH)+c+/c- unsuccessful steps in a row, called an episode in the following, the step size must have dropped below bTηpHfm(t), hence the probability of such an episode occurring is lower bounded by (1-pH)n->0. According to Lemma 6, an infinite number of such episodes occurs.

By construction, these episodes consist entirely of unsuccessful steps, and therefore m(t) remains unchanged for the duration of an episode. This comes in handy, since this means that also the target interval ξpTfm(t),ηpHfm(t) remains fixed, and this again means that at least one iteration of the episode falls into this interval. We have thus constructed an infinite subsequence of iteration within the above interval, in contradiction to the assumption.

Finally, we provide details on the computations of success rates in the examples. In Section 5.2, the set where the function f(x1,x2):=a·x12-x22 takes the value zero consists of two lines through the origin in directions (1,a) and (-1,a). The cone is bounded by these lines in the success domain. The angle between their directions divided by π corresponds to the success rate. It is two times the angle between (1,a) and (1,0), and hence 2cot-1(a). Dividing by π yields the result.

The threshold p<cot-1(a)/π in Section 5.4 follows the exact same logic, with the difference that the square root vanishes in the direction vectors, and we lose a factor of two, since the success domain is only one half of the cone.

In Section 5.5, the circular level line in the corner point (a,1) is tangent to the vector (-1,a). The angle tan-1(a) between (-1,a) and (-1,0), divided by 2π, is a lower bound on the success rate at m=(a+ɛ,1) with σɛ. The bound is precise for ɛ0.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, which permits copying and redistributing the material in any medium or format for noncommercial purposes only. For a full description of the license, please visit https://creativecommons.org/licenses/by-nc/4.0/legalcode.