## Abstract

We establish global convergence of the (1 + 1) evolution strategy, that is, convergence to a critical point independent of the initial state. More precisely, we show the existence of a critical limit point, using a suitable extension of the notion of a critical point to measurable functions. At its core, the analysis is based on a novel progress guarantee for elitist, rank-based evolutionary algorithms. By applying it to the (1 + 1) evolution strategy we are able to provide an accurate characterization of whether global convergence is guaranteed with full probability, or whether premature convergence is possible. We illustrate our results on a number of example applications ranging from smooth (non-convex) cases over different types of saddle points and ridge functions to discontinuous and extremely rugged problems.

## 1 Introduction

Global convergence of an optimization algorithm refers to convergence of the iterates to a
critical point independent of the initial state—in contrast to local convergence, which
guarantees this property only for initial iterates in the vicinity of a critical point.^{1} For example, many first order methods enjoy
this property (Gilbert and Nocedal, 1992), while
Newton's method does not. In the realm of direct search algorithms, mesh adaptive search
algorithms are known to be globally convergent (Torczon, 1997).

Evolution strategies (ES) are a class of randomized search heuristics for direct search in $\mathbb{R}^d$. The (1 + 1)-ES is perhaps the simplest such method, originally
developed by Rechenberg (1973). A particularly simple
variant thereof, which was first defined by Kern et al. (2004), is given in Algorithm 1. Its state consists of a single parent individual $m \in \mathbb{R}^d$ and a step size $\sigma > 0$. It samples a single offspring $x \in \mathbb{R}^d$ per generation from the isotropic multivariate normal
distribution $\mathcal{N}(m, \sigma^2 I)$ and applies (1 + 1)-selection; that is, it keeps the better
of the two points. Here, $I \in \mathbb{R}^{d \times d}$ denotes the identity matrix. The standard deviation $\sigma > 0$ of the sampling distribution, also called *global step
size*, is adapted online. The mechanism maintains a fixed success rate usually
chosen as $1/5$, in accordance with Rechenberg's original approach. It is
discussed in more detail in Section 3. In effect, step
size control enables linear convergence on convex quadratic functions (Jägersküpper, 2006a), and therefore locally linear convergence on twice
differentiable functions. In contrast, algorithms without step size adaptation can converge
as slowly as pure random search (Hansen et al., 2015). Furthermore, being rank-based methods, ESs are invariant to strictly
monotonic transformations of objective values. ESs tend to be robust and suitable for
solving difficult problems (rugged and multimodal fitness landscapes), a capacity that is
often attributed to invariance properties.

Although the (1 + 1)-ES is the oldest evolution strategy in existence, we do not yet fully understand how generally it is applicable. In this article, we cast this open problem as the question of on which functions the algorithm succeeds in locating a local optimum, and on which functions it may converge prematurely and hence fail. We aim at a characterization of these different cases that is as complete as possible.

By modern standards, the (1 + 1)-ES cannot be considered a competitive optimization method. The covariance matrix adaptation evolution strategy (CMA-ES) by Hansen and Ostermeier (2001) and its many variants mark the state of the art. CMA-ES goes beyond the simple (1 + 1)-ES in many ways: it uses nonelitist selection with a population, it adapts the full covariance matrix of its sampling distribution (effectively resembling second order methods), and it performs temporal integration of direction information in the form of evolution paths for step size and covariance matrix adaptation. Still, its convergence order on many relevant functions is linear, and that is thanks to the same mechanism as in the (1 + 1)-ES, namely step size adaptation.

To date, convergence guarantees for ESs are scarce. Some results exist for convex quadratic problems, which essentially implies local convergence on twice continuously differentiable functions. In this situation it is natural to start with the simplest ES, which is arguably the (1 + 1)-ES. The variant defined by Kern et al. (2004) is given in Algorithm 1; it is discussed in detail in Section 3.

Jägersküpper (2003, 2005, 2006a,b) analyzed the (1 + 1)-ES^{2} on the
sphere function as well as on general convex quadratic functions. His analysis ensures
linear convergence with overwhelming probability, that is, with a probability of $1 - \exp(-\Omega(d^{\varepsilon}))$ for some $\varepsilon > 0$, where $d$ is the problem dimension. In other words, the analysis is
asymptotic in the sense $d \to \infty$, and for fixed (finite) dimension $d \in \mathbb{N}$, no concrete value or bound is attributed to this
probability. A dimension-dependent convergence rate of $\Theta(1/d)$ is obtained.

A related and more modern approach relying explicitly on drift analysis was presented by Akimoto et al. (2018), showing linear convergence of the algorithm on the sphere function, and providing an explicit, non-asymptotic runtime bound for the first hitting time of a level set.

The analysis by Auger (2005) is based on the stability of the Markov chain defined by the normalized state $m/\sigma $, for a $(1,\lambda )$-ES on the sphere function. Since the chain is shown to converge to a stationary distribution and the problem is scale-invariant, linear convergence or divergence is obtained, with full probability. There exists sufficient empirical evidence for convergence; however, this is not covered by the result.

A different approach to proving global convergence is to modify the algorithm under consideration in a way that allows for an analysis with well established techniques. This route was explored by Diouane et al. (2015), where step size adaptation is subject to a forcing function in order to guarantee a sufficient decrease condition, akin to, for example, the Wolfe conditions for inexact line search (Wolfe, 1969). This is a powerful approach since the resulting analysis is general in terms of the algorithms (the same step size forcing mechanism can be added to virtually all ES) and the objective functions (the function must be bounded from below and Lipschitz near the limit point) at the same time. The price is that the analysis does not apply to algorithms regularly applied within the EC community, and that we do not obtain new insights about the mechanisms of these algorithms. Furthermore, the forcing function decays slowly, forcing a linearly convergent algorithm into sublinear convergence (but still much faster than random search). From a more technical point of view the Lipschitz condition is unfortunate since it is not preserved under monotonic transformations of fitness values. We improve on this approach by providing sufficient decrease of a transformed objective function, which holds for all randomized elitist, rank-based algorithms, and hence does not require a forcing function or any other algorithmic changes.

The global convergence guarantee by Akimoto et al. (2010) is closest to the present article. Also, that analysis is extremely general in the sense that it covers a broad range of problems and algorithms. The objective function is assumed to be continuously differentiable, and the only requirement for the algorithm is that it successfully diverges on a linear function. This includes all state-of-the-art evolution strategies and many more algorithms. Since continuously differentiable functions are locally arbitrarily well approximated by linear functions (first order Taylor polynomial), it is concluded that any limit point must be stationary, since there the linear term vanishes and higher order terms take over. This is an elegant and powerful result. Its main restriction is that it applies only to continuously differentiable functions. This is a huge class, but it can still be considered a relevant limitation because on continuously differentiable problems ESs are in direct competition with gradient-based methods, which are usually more efficient if gradients are available.

For this reason, solving smooth and otherwise easy problems cannot be the focus of evolution strategies. Therefore, in this article we seek to explore the most general class of problems that can be solved with an evolution strategy. In other words, we aim to push the limits beyond the well-understood cases, towards really difficult ones. Our goal is to establish the largest possible class of problems that can be solved reliably by an ES, and we also want to understand its limitations, i.e., which problems cannot be solved, and why. For this purpose, we focus on the simplest such algorithm, namely the (1 + 1)-ES defined in Algorithm 1. It turns out that the limitations of the algorithm are closely tied to its success-based step size adaptation mechanism. To capture this effect we introduce a novel regularity condition ensuring proper function of success-based step-size control. The new condition is arguably much weaker than continuous differentiability, in a sense that will become clear as we discuss examples and counter-examples.

From a bird's eye perspective, our contributions are as follows:

- we provide a general progress or decrease guarantee for rank-based elitist algorithms,
- we show how generally the (1 + 1)-ES is applicable, that is, on which problems it will find a local optimum.

The article and the proofs are organized as follows. In the next section we establish a progress guarantee for rank-based elitist algorithms. This result is extremely general, and it is in no way tied to continuous search spaces and the (1 + 1)-ES. Therefore, it is stated in general terms, in the expectation that it will prove useful for the analysis of algorithms other than the (1 + 1)-ES. Its role in the global convergence proof is to ensure a sufficient rate of optimization progress as long as the step size is well adapted and the progress rate is bounded away from zero. In Section 3, we discuss properties of the (1 + 1)-ES and introduce the regularity condition. Based on this condition we show that the step size returns infinitely often to a range where non-trivial progress can be concluded from the decrease theorem. Based on these achievements we establish a global convergence theorem in Section 4, essentially stating that there exists a subsequence of iterates converging to a critical point, the exact notion of which is defined in Section 3. We also establish a negative result, showing that a nonoptimal critical point results in premature convergence with positive probability, which excludes global convergence. In Section 5, we apply the analysis to a variety of settings and demonstrate their implications. We close with conclusions and open questions.

## 2 Optimization Progress of Rank-Based Elitist Algorithms

In this section, we establish a general theorem ensuring a certain rate of optimization progress for randomized rank-based elitist algorithms. We consider a general search space $X$. This space is equipped with a $\sigma $-algebra and a reference measure denoted $\Lambda $. The usual choice of the reference measure is the counting measure for discrete spaces and the Lebesgue measure for continuous spaces. The objective function $f:X\u2192R$, to be minimized, is assumed to be measurable. The parent selection and variation operations of the search algorithm are also assumed to be measurable; indeed we assume that these operators give rise to a distribution from which the offspring is sampled, and this distribution has a density with respect to $\Lambda $.

Because the offspring generation distribution has a density with respect to $\Lambda$, the algorithm is, with full probability, invariant to the values of the objective function restricted to zero sets (sets $Z$ of measure zero, fulfilling $\Lambda(Z) = 0$). The following definition captures these properties. It encodes the “essential” level set structure of an objective function.

It follows immediately from the definition that the sublevel sets of equivalent objective functions $f \sim g$ coincide outside a zero set.

In the next step we construct a canonical representative for each equivalence class, which
we can think of as a *normal form* of an objective function.
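To make the construction tangible, the following sketch (our own illustration, not code from the article) numerically approximates the spatial suboptimality $f^{\Lambda}(x) = \Lambda(\{y : f(y) < f(x)\})$ for the one-dimensional function $f(x) = x^2$ on a bounded interval; the helper name is ours. The resulting function $f^{\Lambda}(x) = 2|x|$ is a monotone transformation of $f$ with the same sublevel sets, illustrating the idea of a normal form.

```python
import numpy as np

# Numerical sketch (ours): the spatial suboptimality f^Lambda(x) is the
# Lebesgue measure of the strict sublevel set {y : f(y) < f(x)}.
# We estimate it on a fine grid for f(x) = x^2 restricted to [-2, 2].

def spatial_suboptimality(f, x, lo=-2.0, hi=2.0, n=400001):
    grid = np.linspace(lo, hi, n)
    cell = (hi - lo) / (n - 1)
    # count grid cells belonging to the strict sublevel set of x
    return float(np.sum(f(grid) < f(x)) * cell)

f = lambda x: x ** 2
# For f(x) = x^2 the sublevel set {y : y^2 < x^2} is (-|x|, |x|),
# so f^Lambda(x) = 2 * |x|.
print(spatial_suboptimality(f, 0.5))   # close to 1.0
print(spatial_suboptimality(f, 1.0))   # close to 2.0
```

Note that $f^{\Lambda}$ depends only on the measure of sublevel sets, so any strictly monotone transformation of $f$ yields the same values.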

The definition is illustrated with two examples in Figures 1 and 2. In the following, $m \in X$ will denote the elite (or parent) point, and $m^{(t)}$ is the elite point in iteration $t \in \mathbb{N}$ of an iterative algorithm, that is, an evolutionary algorithm
with elitist selection. For two very different reasons, namely 1) to avoid divergence of the
algorithm in the case of unbounded search spaces, and 2) for simplicity of the technical
arguments in the proofs, we restrict ourselves to the case that the sublevel set $S_f^{\leq}(m^{(0)})$ of the initial iterate $m^{(0)}$ is bounded and has finite spatial suboptimality. For most
reasonable reference measures, boundedness implies finite spatial suboptimality. For $X = \mathbb{R}^d$ equipped with the Lebesgue measure this is equivalent to the
topological closure $\overline{S_f^{\leq}(m^{(0)})}$ being compact. The assumptions immediately imply that $S_f^{<}(y)$ and $S_f^{\leq}(y)$ are bounded for all $y \leq f(m^{(0)})$, and that restricted to $S_f^{\leq}(m^{(0)})$ the functions $f^{\Lambda<}$ and $f^{\Lambda\leq}$ take values in the bounded range $[0, f^{\Lambda}(m^{(0)})]$. Since an elitist algorithm never accepts points outside $S_f^{\leq}(m^{(0)})$, we will from here on ignore the issue of infinite $f^{\Lambda}$-values.^{3}

In the continuous case, a plateau is a level set of positive Lebesgue measure. When defining a local optimum as the best point within an open neighborhood, an interior point of a plateau is a local optimum, which may not always be intended. In any case, when analyzing the (1 + 1)-ES we will not handle plateaus and instead assume that level sets of $f$ are zero sets. This also implies that $f^{\Lambda\leq}$ and $f^{\Lambda<}$ agree. For now the slightly weaker statement of the following lemma, which does allow for plateaus, is sufficient.

Let $f : X \to \mathbb{R}$ be measurable. If $f^{\Lambda\leq}(x)$ is finite for all $x \in X$, then it holds that $f^{\Lambda\leq} \sim f \sim f^{\Lambda<}$.

Due to the rank-based nature of the algorithms under study we cannot expect to fulfill a
sufficient decrease condition based on $f$-values. This is because a functional gain $\Delta := f(x) - f(x') > 0$ achieved by moving from $x$ to $x'$ can be turned into an arbitrarily small or large gain $\varphi(f(x)) - \varphi(f(x'))$, where $\varphi$ is strictly monotonically increasing, and the class of
transformations does not allow us to bound the difference uniformly, neither additively nor
multiplicatively. Instead, the following theorem establishes a progress or decrease
guarantee measured in terms of the spatial suboptimality function $f^\Lambda $. It gets around the problem of inconclusive values in
objective space (which, in case of single-objective optimization, is just the real line) by
considering a quantity in *search space*, namely the reference measure of the
sublevel set.

The algorithm is randomized; hence the decrease follows a distribution. The following definition captures properties of this distribution.

Note that $u$, $r_{<}$, $r_{\leq}$, $s$, $Z$, and $\zeta$ implicitly depend on $\Lambda$, $P$, and $f$. This is not indicated explicitly in order to avoid excessive clutter in the notation.

If the function $f$ is continuous with continuous domain $X$ and without plateaus, then $r_{<}$ and $r_{\leq}$ coincide, we have $\zeta = 0$, and $s$ maps each probability $q \in [0, 1]$ to the corresponding unique quantile of the distribution of $f(x)$ under $P$. However, if there exists a plateau within the support of $P$ (a level set of positive $P$-measure, for example, if $X$ is discrete), then $\zeta$ is positive and on $Z$ the function $s$ takes values anywhere between the lower quantile $P(f(x) < z)$ and the upper quantile $P(f(x) \leq z)$. The exact value does not matter, since the only use of $s$-values is as arguments to one of the $r$-functions. Indeed, $r_{<}(s(q))$ and $r_{\leq}(s(q))$ “round” the probability $q$ down or up, respectively, to the closest value that is attainable as the probability of sampling a sublevel set. The freedom in the choice of $s$ can also be understood in the context of Figure 1: if the point $z$ in the definitions of $r_{<}$ and $r_{\leq}$ is located on the plateau, then $s(q)$ can be anywhere between the probability mass of the sublevel set excluding and including the plateau.

With these definitions in place, the following theorem controls the expected value as well as the quantiles of the decrease distribution.

We start with the first two claims, which provide lower bounds on the $q$-quantiles of probabilities of improvement by some margin $\delta \geq 0$. The argument here is elementary: an $f^{\Lambda}$-improvement of $\delta$ from $m$ to $x$ means that the $f^{\Lambda}$-sublevel set of $x$ is smaller than that of $m$ by $\Lambda$-mass $\delta$ (due to the offspring $x$ improving upon its parent $m$). This corresponds to a difference in $P$-mass of the same $f^{\Lambda}$-sublevel sets of at most $u \cdot \delta$, which will correspond to $q$ in the following. Note that the probabilities ($\Pr(\cdots)$-notation) correspond to the same distribution $P$ from which $x$ is sampled, and that $f^{\Lambda}$-values and $s$-values directly correspond to $\Lambda$-mass. The situation is illustrated in Figure 3.

For the second claim we define the $f^{\Lambda\leq}$-level $z_q^{\leq} := f^{\Lambda\leq}(m) - \frac{p - r_{\leq}(s(q))}{u} - \Lambda(L_f(m))$ and the set $\Delta_q^{\leq} := S_{f^{\Lambda\leq}}^{<}(m) \setminus S_{f^{\Lambda\leq}}^{\leq}(z_q^{\leq})$, and we note that it holds $f^{\Lambda\leq}(m) - \Lambda(L_f(m)) = f^{\Lambda<}(m)$. Then, with an analogous argument as above we obtain $P(\Delta_q^{\leq}) \leq p - r_{\leq}(s(q))$. In this case we immediately arrive at $\Delta_q^{\leq} \subset C$ and hence at $A \cup B \subset S_{f^{\Lambda\leq}}^{\leq}(z_q^{\leq})$, which shows the second claim.

In our application of the above theorem to the (1 + 1)-ES, $x$ corresponds to the offspring point sampled from a Gaussian distribution centered on $m$.

Due to the term $\Lambda(L_f(m))$ in the decrease of $f^{\Lambda\leq}$, the theorem covers the fitness-level method (Droste et al., 2002; Wegener, 2003). However, in particular for search distributions spreading their probability mass over many level sets, the theorem is considerably stronger.

In the continuous case, in the absence of plateaus, the statement can be simplified considerably:

An isotropic distribution with component-wise standard deviation (step size) $\sigma > 0$ has covariance matrix $C = \sigma^2 I$, where $I \in \mathbb{R}^{d \times d}$ is the identity matrix; hence we have $\det(C) = \sigma^{2d}$. In the context of continuous search spaces, Jägersküpper
(2003) refers to $f^\Lambda $-progress as “spatial gain.” He analyzes in detail the gain
distribution of an isotropic search distribution on the sphere model. This result is much
less general than the previous corollary, since we can deal with *arbitrary* objective functions, which are characterized (locally) only by a single number, the success
probability. For the special case of a Gaussian mutation and the sphere function,
Jägersküpper's computation of the spatial gain is more exact, since it is tightly tailored
to the geometry of the case, in contrast to being based on a general bound. We lose only a
multiplicative factor of the gain, which does not impact our analysis significantly.
However, it should be noted that in the problem analyzed by Jägersküpper, the factor grows
with the problem dimension $d$. The spatial gain is closely connected to the notion of a
progress rate (Rechenberg, 1973), in particular if
the gain is lower bounded by a fixed fraction of the suboptimality. For a fixed objective
function like the sphere model $f(x) = \|x\|^2$, it is easy to relate the functional suboptimality $f(x) - f^*$ to the spatial suboptimality $f^{\Lambda}(x)$.
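On the sphere model this relation is explicit: the strict sublevel set of $x$ is the open ball of radius $\|x\|$, so $f^{\Lambda}(x) = V_d \cdot \|x\|^d$, where $V_d$ denotes the volume of the $d$-dimensional unit ball. The following sketch (our own illustration) checks this in closed form; the function names are ours.

```python
import math

# For the sphere function f(x) = ||x||^2 with optimum f* = 0, the strict
# sublevel set of x is the open ball of radius ||x||, hence the spatial
# suboptimality is f^Lambda(x) = V_d * ||x||^d = V_d * (f(x) - f*)^(d/2),
# so functional and spatial suboptimality determine each other.

def unit_ball_volume(d):
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def spatial_suboptimality_sphere(x):
    d = len(x)
    r = math.sqrt(sum(xi ** 2 for xi in x))
    return unit_ball_volume(d) * r ** d

x = (0.6, 0.8)                           # ||x|| = 1 in dimension d = 2
print(spatial_suboptimality_sphere(x))   # pi, since V_2 = pi
```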

## 3 Success-Based Step Size Control in the (1 + 1)-ES

In this section, we discuss properties of the (1 + 1)-ES algorithm and provide an analysis of its success-based step size adaptation rule that will allow us to derive global convergence theorems. To this end we introduce a nonstandard regularity property.

From here on, we consider the search space $\mathbb{R}^d$, equipped with the standard Borel $\sigma$-algebra, and $\Lambda$ denotes the Lebesgue measure. Of course, all results from the previous section apply, with $X = \mathbb{R}^d$.

In each iteration $t \in \mathbb{N}$, the state of the (1 + 1)-ES is given by $(m^{(t)}, \sigma^{(t)}) \in \mathbb{R}^d \times \mathbb{R}^+$. It samples one candidate offspring from the isotropic normal distribution, $x^{(t)} \sim \mathcal{N}(m^{(t)}, (\sigma^{(t)})^2 I)$. The parent is replaced by a successful offspring, meaning that the offspring must perform at least as well as the parent.

The goal of success-based step size adaptation is to maintain a stable distribution of the success rate, for example, concentrated around $1/5$. This can be achieved with a number of different mechanisms. Here we consider perhaps the simplest such mechanism, namely immediate adaptation based on “success” or “failure” of each sample. Pseudocode for the full algorithm is provided in Algorithm 1.

Constants $c^- < 0$ and $c^+ > 0$ in Algorithm 1 control the change of $\log(\sigma)$ in case of failure and success, respectively. They are parameters of the method. For $c^+ + 4 \cdot c^- = 0$ we obtain an implementation of Rechenberg's classic $1/5$-rule (Rechenberg, 1973). We call $\tau = \frac{c^-}{c^- - c^+}$ the target success probability of the algorithm, which is always assumed to be strictly less than $1/2$. This is equivalent to $c^+ > -c^-$. A reasonable parameter setting is $c^-, c^+ \in \Omega(1/d)$.
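The mechanism fits in a few lines of code. The following is a minimal sketch (ours, not the article's reference implementation) of the algorithm as described; the function name and the concrete constants $c^+ = 1/d$, $c^- = -1/(4d)$, which satisfy $c^+ + 4 \cdot c^- = 0$ and hence $\tau = 1/5$, are our illustrative choices.

```python
import numpy as np

def one_plus_one_es(f, m, sigma, iterations, rng=None):
    """Minimal sketch of the (1+1)-ES with success-based step size
    adaptation. The constants satisfy c+ + 4*c- = 0, i.e., Rechenberg's
    1/5-rule; the concrete values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    m = np.asarray(m, dtype=float)
    d = len(m)
    c_plus, c_minus = 1.0 / d, -0.25 / d     # target success probability 1/5
    fm = f(m)
    for _ in range(iterations):
        x = m + sigma * rng.standard_normal(d)   # offspring ~ N(m, sigma^2 I)
        fx = f(x)
        if fx <= fm:                             # elitist (1+1)-selection
            m, fm = x, fx
            sigma *= np.exp(c_plus)              # success: enlarge step size
        else:
            sigma *= np.exp(c_minus)             # failure: shrink step size
    return m, sigma

# On the sphere function the algorithm converges linearly towards the optimum.
rng = np.random.default_rng(0)
m, sigma = one_plus_one_es(lambda x: float(np.dot(x, x)), np.ones(5), 1.0, 4000, rng)
print(float(np.dot(m, m)))   # dramatically smaller than f(m^(0)) = 5
```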

Two properties of the algorithm are central for our analysis: it is rank-based and it performs elitist selection, ensuring that the best-so-far solution is never lost and the sequence $f(m(t))$ is monotonically decreasing.

Since step-size control depends crucially on the concept of a fixed rate of successful offspring, we define the success probability of the algorithm, that is, the probability that a sampled point outperforms the parent at the center of the search distribution.

*Definition (success probability functions).*

The function $p_f^{\leq}$ computes the probability of sampling a point at least as good as $m$, while $p_f^{<}$ computes the probability of sampling a strictly better point. If $p_f^{<}$ and $p_f^{\leq}$ coincide (i.e., if there are no plateaus), then we write $p_f$. A nice property of the success probability is that it does not drop too quickly when increasing the step size:
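Concretely, these probabilities can be estimated by simple Monte Carlo; the following sketch (ours, with our own function and variable names) does so for the sphere function, illustrating the two regimes of small and large step sizes.

```python
import numpy as np

# Monte Carlo sketch (ours) of the success probability: the probability
# that an offspring drawn from N(m, sigma^2 I) is at least as good as
# the parent m.

def success_probability(f, m, sigma, samples=100000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    m = np.asarray(m, dtype=float)
    offspring = m + sigma * rng.standard_normal((samples, len(m)))
    fm = f(m)
    return float(np.mean([f(x) <= fm for x in offspring]))

sphere = lambda x: float(np.dot(x, x))
rng = np.random.default_rng(1)
# On the sphere at m != 0, a tiny step size yields a success probability
# close to 1/2, while a huge step size drives it towards zero.
print(success_probability(sphere, [1.0, 0.0], 0.01, rng=rng))
print(success_probability(sphere, [1.0, 0.0], 10.0, rng=rng))
```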

The proof is found in the appendix; this is the case for a number of technical lemmas in this section. The next step is to define a plausible range for the step size.

We think of $\xi_p^f(m)$ with $p > \tau$ as a “too small” step size at $m$. Similarly, for $p < \tau$, $\eta_p^f(m)$ is a “too large” step size at $m$. Assume that the two values of $p$ are chosen so that a sufficiently wide range of “well-adapted” step sizes exists in between the “too small” and “too large” ones. We aim to establish that if the step size is outside this range, then step size adaptation will push it back into the range. The main complication is that the range for $\sigma$ depends on the point $m$.

The following lemma establishes a gap between lower and upper step size bound, that is, a lower bound on the size of the step size range.

For $0 \leq p_H \leq p_T \leq 1$ it holds that $\sqrt[d]{p_H} \cdot \xi_{p_T}^f(x) \leq \sqrt[d]{p_T} \cdot \eta_{p_H}^f(x)$ for all $x \in \mathbb{R}^d$.

The following definition is central. It captures the ability of the (1 + 1)-ES to recover from a state with a far too small step size. This property is needed to avoid premature convergence.

For $p > 0$, a function $f : \mathbb{R}^d \to \mathbb{R}$ is called $p$-improvable in $x \in \mathbb{R}^d$ if $\xi_p^f(x)$ is positive. The function is called $p$-improvable on $Y \subset \mathbb{R}^d$ if $\xi_p^f|_Y$ (the function $\xi_p^f$ restricted to $Y$) is lower bounded by a positive, lower semi-continuous function $\tilde{\xi}_p^f : Y \to (0, 1]$. A point $x \in \mathbb{R}^d$ is called $p$-critical if it is not $p$-improvable for any $p > 0$.

The property of $p$-improvability is a nonstandard regularity condition. The concept applies to measurable functions; hence we do not need to restrict ourselves to smooth or continuous objectives. On the one hand, the property excludes many measurable and even some smooth functions. On the other hand, it is far less restrictive than continuity and smoothness, in the sense that it allows the objective function to jump and the level sets to have kinks. Intuitively, in the two-dimensional case illustrated in Figure 4, if for each point the sublevel set opens up in an angle of more than $2\pi p$, then the function is $p$-improvable. This is the case for many discontinuous functions, however, not for all smooth ones. The degree three polynomial $f(x_1, x_2) = x_1^3 + x_2^2$ can serve as a counter example, since every point of the form $(x_1, 0)$ is $p$-critical. All of its contour lines form cuspidal cubics; see Figure 6 in Section 5.3. Local optima are always $p$-critical, but many critical points of smooth functions are not (see below). The above example demonstrates that some saddle points share this property; however, if $x$ is $p$-critical but not locally optimal, then $p_f^{<}(x, \sigma) > 0$ for all $\sigma > 0$. This means that such a point can be improved with positive probability for each choice of the step size, but in the limit $\sigma \to 0$ the probability of improvement tends to zero.
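This behavior at the origin of the counter example can be observed numerically. The following sketch (ours) estimates the probability of sampling a strictly better offspring from $\mathcal{N}(0, \sigma^2 I)$ for shrinking $\sigma$; the probability stays positive but vanishes in the limit, as described above.

```python
import numpy as np

# Numerical sketch (ours) for the example f(x1, x2) = x1^3 + x2^2: at the
# origin, an offspring drawn from N(0, sigma^2 I) improves with positive
# probability for every sigma > 0, but this probability vanishes as
# sigma -> 0, the hallmark of a p-critical point that is not a local optimum.

def improvement_probability(sigma, samples=200000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((samples, 2))
    x1, x2 = sigma * z[:, 0], sigma * z[:, 1]
    return float(np.mean(x1 ** 3 + x2 ** 2 < 0.0))   # f(m) = 0 at the origin

rng = np.random.default_rng(2)
probs = [improvement_probability(s, rng=rng) for s in (1.0, 0.1, 0.01, 0.001)]
print(probs)   # all positive, shrinking with sigma
```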

We should stress the difference between point-wise $p$-improvability, which simply demands that $\xi_p^f$ is positive, and set-wise $p$-improvability, which in addition demands that $\xi_p^f$ is lower bounded by a lower semi-continuous positive function. The latter property ensures the existence of a positive lower bound for $\xi_p^f$ on a compact set. In this sense, set-wise $p$-improvability is uniform on compact sets. In Sections 5.5 and 5.6, we will see examples where this makes a decisive difference.

Intuitively, the value of $p$ for which a function is $p$-improvable is crucial: if it is below $\tau$, then the algorithm may be in danger of systematically decreasing its step size when it should rather do the contrary.

The next lemma establishes that smooth functions are $p$-improvable in all regular points, and also in most saddle points.

Let $f : \mathbb{R}^d \to \mathbb{R}$ be continuously differentiable.

1. For a regular point $x \in \mathbb{R}^d$, $f$ is $p$-improvable in $x$ for all $p < \frac{1}{2}$.

2. Let $Y$ denote the set of all regular points of $f$; then $f$ is $p$-improvable on $Y$ for all $p < \frac{1}{2}$.

3. Let $x \in \mathbb{R}^d$ denote a critical point of $f$, let $f$ be twice continuously differentiable in a neighborhood of $x$, and let $H = \nabla^2 f(x)$ denote the Hessian matrix. If $H$ has at least one negative eigenvalue, then $x$ is not $p$-critical.

Similarly, we need to ensure that the step size does not diverge to $\infty$. This is easy, since the spatial suboptimality is finite:

In other words, a too large step size is very likely to produce unsuccessful offspring. The probability of success decays quickly with growing step size, since the step size bound grows slowly, in the form $\Theta(p^{-1/d})$, as the success probability $p$ decays to zero. Applying the above inequality to $p < \tau$ implies that for large enough step size $\sigma^{(t)}$, the expected change $\mathbb{E}[\log(\sigma^{(t+1)}) - \log(\sigma^{(t)})]$ in the (1 + 1)-ES (Algorithm 1) is negative.

The following lemma is elementary. It is used multiple times in proofs, with the interpretation of the event “1” meaning that a statement holds true. It has a similar role as drift theorems in an analysis of the expected or high-probability behavior (Lehre and Witt, 2013; Lengler and Steger, 2016; Akimoto et al., 2018); however, here we aim for almost sure results.

Let $X^{(t)} \in \{0, 1\}$ denote a sequence of independent binary random variables. If there exists a uniform lower bound $\Pr(X^{(t)} = 1) \geq p > 0$, then almost surely there exists an infinite subsequence $(t_k)_{k \in \mathbb{N}}$ so that $X^{(t_k)} = 1$ for all $k \in \mathbb{N}$.

In applications of the lemma, the events of interest are not necessarily independent; however, they can be “made independent” by considering a sequence of independent events that imply the events of interest. In our applications, this is the case if the events of actual interest hold with probability at least $p$; then an i.i.d. sequence of Bernoulli events implying corresponding sub-events with probability of exactly $p$ does the job. In other words, we will have a sequence $\tilde{X}^{(t)}$ of independent events, where $\tilde{X}^{(t)} = 1$ implies $X^{(t)} = 1$. The above lemma is then applied to $\tilde{X}^{(t)}$, which trivially yields the same statement for $X^{(t)}$. We use this construction implicitly in all applications of the lemma.
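The thinning construction can be sketched in code (our own illustration, with our own names): an event of probability $q \geq p$ is thinned to a sub-event of probability exactly $p$ by an independent coin flip with bias $p/q$.

```python
import numpy as np

# Sketch (ours) of the coupling construction: events X(t) = 1 that hold
# with probability at least p are thinned to sub-events Xtilde(t) of
# probability exactly p, such that Xtilde(t) = 1 implies X(t) = 1.

def coupled_subevent(happened, prob_event, p, rng):
    """Return 1 with probability exactly p, and only if `happened` is True."""
    return 1 if (happened and rng.random() < p / prob_event) else 0

rng = np.random.default_rng(4)
p, q = 0.3, 0.5                     # actual event probability q >= target p
events = rng.random(100000) < q     # the events X(t) of actual interest
sub = np.array([coupled_subevent(e, q, p, rng) for e in events])
print(sub.mean())                            # close to p = 0.3
print(bool(np.any((sub == 1) & ~events)))    # False: Xtilde = 1 implies X = 1
```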

The following lemma establishes, under a number of technical conditions, that the step size control rule succeeds in keeping the step size stable. If the prerequisites are fulfilled, then the result yields an impossible fact, namely that the overall reduction of the spatial suboptimality is unbounded. So the lemma is designed with proofs by contradiction in mind.

Equation (1) is a rather weak condition demanding that step-size adaptation works as desired. However, the requirement of a uniform lower bound $b_T$ on the step size together with Theorem 1 implies that the (1 + 1)-ES would make infinite $f^{\Lambda}$-progress in expectation. This is of course impossible if $f^{\Lambda}(m^{(0)})$ is finite, since $f^{\Lambda}$ is by definition non-negative. Therefore the lemma does not describe a typical situation observed when running the (1 + 1)-ES, but quite in contrast, an impossible situation that needs to be excluded in the proof of the main result in the next section.

## 4 Global Convergence

In this section, we establish our main result. The theorem ensures the existence of a limit point of the sequence $m^{(t)}$ in a subset of desirable locations. In many cases this amounts to convergence of the algorithm to a (local) optimum.

Consider a measurable objective function $f : \mathbb{R}^d \to \mathbb{R}$ with level sets of measure zero. Assume that $K_0 := \overline{S_f^{\leq}(m^{(0)})}$ is compact, and let $K_1 \subset K_0$ denote a closed subset. If $f$ is $p$-improvable on $K_0 \setminus K_1$ for some $p > \tau$, then the sequence $(m^{(t)})_{t \in \mathbb{N}}$ has a limit point in $K_1$.

Corollary 2 ensures that in each such state the probability to decrease the $f^{\Lambda}$-value by at least $(2\pi)^{d/2} \cdot b_T^d \cdot p_I / 2$ is lower bounded by $p_I / 2 > 0$. We apply Lemma 6 with the following construction. For each state $(m, \sigma)$ we pick a set $E(m, \sigma) \subset \mathbb{R}^d$ of probability mass $p_I / 2$ improving on $f^{\Lambda}(m)$ by at least $(2\pi)^{d/2} \cdot b_T^d \cdot p_I / 2$. Then we model the sampling procedure of the (1 + 1)-ES in iteration $t$ as a two-stage process: first we draw a binary variable $\tilde{X}^{(t)} \in \{0, 1\}$ with $\Pr(\tilde{X}^{(t)} = 1) = p_I / 2$, and then we draw $x^{(t)}$ from a Gaussian restricted to $E(m^{(t-1)}, \sigma^{(t-1)})$ if $\tilde{X}^{(t)} = 1$, and restricted to the complement otherwise. The variables $\tilde{X}^{(t)}$ are independent by construction.

Then Lemma 6 implies that the overall $f^{\Lambda}$-decrease is almost surely infinite, which contradicts the fact that $f^{\Lambda}(m^{(0)})$ is finite and $f^{\Lambda}$ is lower bounded by zero. Hence, the sequence $m^{(t)}$ leaves $K(r)$ after finitely many steps, almost surely. For $r = 1/n$, let $t_n$ denote an iteration fulfilling $m^{(t_n)} \notin K(r)$. The sequence $(m^{(t_n)})_{n \in \mathbb{N}}$ does not have a limit point in $K_0 \setminus K_1$ (since such a point would be contained in $K(r)$ for some $r > 0$); however, due to the Bolzano–Weierstraß theorem it has at least one limit point in $K_0$, which must therefore be located in $K_1$. $\square$

In accordance with Akimoto et al. (2010), the following corollary establishes convergence to a critical point for continuously differentiable functions.

**Corollary 3.** Let $f:\mathbb{R}^d \to \mathbb{R}$ be a continuously differentiable function with level sets of measure zero. Assume that $K_0 = \overline{S_f^{\leq m^{(0)}}}$ is compact. Then the sequence $(m^{(t)})_{t \in \mathbb{N}}$ has a critical limit point.

Technically the above statements do not apply to problems with unbounded sublevel sets. However, due to the fast decay of the tails of Gaussian search distributions we can often approximate such problems by changing the function "very far away" from the initial search distribution, in order to make the sublevel sets bounded. We may then even apply the theorem with empty $K_1$: since no limit point can exist in an empty set, the algorithm must eventually leave the region where the approximation is faithful, that is, it diverges. In this sense we can conclude divergence, for example, on a linear function. We will use this argument several times in the next section, mainly to avoid unnecessary technical complications when defining saddle points and ridge functions.

We may ask whether $p$-improvability for $p > \tau$ is not only a sufficient but also a necessary condition for global convergence. This turns out not to be the case. The quadratic saddle point discussed in Section 5.2 is a counterexample, where the algorithm diverges reliably even if the success probability is far smaller than $\tau$. In contrast, the ridge of $p$-critical saddle points analyzed in Section 5.3 results in premature convergence, despite the fact that the critical points form a zero set, and this can even happen for a ridge of $p$-improvable points with $p < \tau$; see Section 5.4. Drift analysis is a promising tool for handling all of these cases. Here we provide a rather simple result, which still suffices for many interesting cases. A related analysis for a nonelitist ES was carried out by Beyer and Meyer-Nieberg (2006).

Define the zero sequence $S_K := \sum_{k=K}^{\infty} p_f^{\leq}(m, e^{k \cdot c^-})$, the tails of a convergent series. For given $p < 1$, there hence exists a $K_0$ such that $S_{K_0} < 1 - p$. By a union bound, the probability of never sampling a successful offspring when starting the algorithm in the initial state $m^{(0)} = m$, $\sigma^{(0)} = e^{K_0 \cdot c^-}$ is lower bounded by $1 - S_{K_0} > p$; in this case we have $m^{(t)} = m$ for all $t \in \mathbb{N}$. $\square$

The above theorem precludes global convergence to a (local) optimum with full probability in the presence of a suitable nonoptimal $p$-critical point.
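The counting in the proof above can be made concrete under the additional, hypothetical assumption that the success probability decays linearly in the step size, $p_f^{\leq}(m, \sigma) \leq C\sigma$, as in the $O(\sigma)$ examples of Sections 5.3 and 5.6. The tail sums $S_K$ then form a geometric series, and a valid $K_0$ can be computed directly; the constants $C$, $c^-$, and $p$ below are arbitrary illustrative choices.

```python
import math

def tail_sum(C, c_minus, K):
    """Tail sum S_K = sum_{k >= K} C * e^{k * c_minus} of the assumed
    success-probability bound; a geometric series with ratio e^{c_minus}."""
    r = math.exp(c_minus)        # c_minus < 0, hence 0 < r < 1
    return C * r ** K / (1.0 - r)

def find_K0(C, c_minus, p):
    """Smallest K with S_K < 1 - p: starting the ES with step size
    sigma = e^{K * c_minus}, it never succeeds with probability > p."""
    K = 0
    while tail_sum(C, c_minus, K) >= 1.0 - p:
        K += 1
    return K

C, c_minus, p = 1.0, -0.25, 0.9  # hypothetical constants
K0 = find_K0(C, c_minus, p)
assert tail_sum(C, c_minus, K0) < 1.0 - p
```

For these constants the smallest admissible index is $K_0 = 16$; larger $p$ (a stronger guarantee of getting stuck) merely pushes $K_0$ further out, that is, requires a smaller initial step size.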

## 5 Case Studies

In this section, we analyze various example problems with very different characteristics by applying the above convergence analysis. We characterize the optimization behavior of the (1 + 1)-ES, giving either positive or negative results in terms of global convergence. We start with smooth functions and then turn to less regular cases of nonsmooth and discontinuous functions. On the one hand, we show that the theorem is applicable to interesting and nontrivial cases; on the other hand, we explore its limits.

### 5.1 The 2-D Rosenbrock Function

The Rosenbrock function is a popular test problem because it requires a diverse set of optimization behaviors: the algorithm must descend into a parabolic valley, follow the valley while adapting to its curved shape, and finally converge to the global optimum, which is a smooth optimum with nontrivial (but still moderate) conditioning.

Corollary 3 immediately implies convergence of the (1 + 1)-ES to the global optimum. It does not say anything about the speed of convergence; however, Jägersküpper (2006a) established linear convergence in the last phase with overwhelming probability (albeit for a different step size adaptation rule).

Taken together, these results give a rather complete picture of the optimization process: irrespective of the initial state we know that the algorithm manages to locate the global optimum without getting stuck on the way. Once the objective function starts to look quadratic in good enough approximation, Jägersküpper's result indicates that linear convergence can be expected. The same analysis applies to all twice continuously differentiable unimodal functions without critical points other than the optimum.
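For illustration, this optimization process can be reproduced with a few lines of code. The sketch below implements a minimal (1 + 1)-ES in the spirit of Algorithm 1 with a multiplicative 1/5-success rule; the concrete update factors ($1.5$ on success, $1.5^{-1/4}$ on failure) are common textbook choices, not necessarily the constants of the analyzed algorithm.

```python
import random

def rosenbrock(x):
    """2-D Rosenbrock function with global optimum at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def one_plus_one_es(f, m, sigma, iterations, rng):
    """Minimal (1+1)-ES: sample one offspring from N(m, sigma^2 I),
    keep the better point, adapt sigma multiplicatively (assumed
    factors 1.5 and 1.5**-0.25, equilibrium success rate 1/5)."""
    fm = f(m)
    for _ in range(iterations):
        x = [mi + sigma * rng.gauss(0.0, 1.0) for mi in m]
        fx = f(x)
        if fx <= fm:                 # elitist (1+1)-selection
            m, fm = x, fx
            sigma *= 1.5             # success: enlarge the step size
        else:
            sigma *= 1.5 ** -0.25    # failure: shrink the step size
    return m, fm

rng = random.Random(7)
m, fm = one_plus_one_es(rosenbrock, [-1.0, 1.0], 1.0, 200000, rng)
```

A run of this kind descends into the valley, follows its curved shape, and ends up close to the optimum $(1, 1)$; the final accuracy depends on the iteration budget and the moderate conditioning near the optimum.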

### 5.2 Saddle Points—The $p$-Improvable Case

Simulations show that the ES overcomes the zero level set containing the saddle point without a problem, even for large values of $a$. It seems that $p$-improvable saddle points do not result in premature convergence of the algorithm, irrespective of the value of $p > 0$. However, this statement is based on an empirical observation, not on a rigorous proof.
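The observation is easy to reproduce. The following sketch runs a minimal (1 + 1)-ES with a multiplicative 1/5-success rule (the update factors are assumed constants) on the quadratic saddle $f(x_1, x_2) = a \cdot x_1^2 - x_2^2$, started on the positive side of the zero level set; with the moderate value $a = 4$ the success rate at the zero level set exceeds $1/5$, and the run crosses it quickly and then diverges.

```python
import random

def saddle(x, a=4.0):
    """Quadratic saddle function f(x1, x2) = a*x1^2 - x2^2 (Section 5.2)."""
    return a * x[0] ** 2 - x[1] ** 2

def run_es(f, m, sigma, max_iters, rng):
    """Minimal (1+1)-ES with a multiplicative 1/5-success rule;
    the update factors 1.5 and 1.5**-0.25 are assumed constants."""
    fm = f(m)
    for _ in range(max_iters):
        x = [mi + sigma * rng.gauss(0.0, 1.0) for mi in m]
        fx = f(x)
        if fx <= fm:                 # elitist selection
            m, fm = x, fx
            sigma *= 1.5
        else:
            sigma *= 1.5 ** -0.25
        if fm < -1000.0:             # crossed the zero level set, diverging
            break
    return m, fm

rng = random.Random(3)
m0 = [1.0, 0.1]                      # f(m0) = 3.99 > 0
m, fm = run_es(saddle, m0, 0.1, 20000, rng)
```

For large $a$ the success rate at the zero level set drops far below $\tau$ and the crossing takes longer, but, in line with the empirical claim above, runs still pass the saddle; the sketch uses an early stop once $f$ is clearly negative to keep the divergent phase short.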

### 5.3 Saddle Points—The $p$-Critical Case

### 5.4 Linear Ridge

As long as $\cot^{-1}(a)/\pi > \tau$ we can conclude divergence of the algorithm (the intended behavior) from Theorem 2. Otherwise we lose this property, and it is well known and easy to check with simulations that for large enough $a$ the algorithm indeed converges prematurely.

### 5.5 Sphere with Jump

If $S$ is the complement of a star-shaped open neighborhood of the origin then it is easy to see that the function is unimodal and $p$-improvable for all $p < 1/2$. Theorem 2 applied with $K_1 := \{0\}$ yields the existence of a subsequence converging to the origin, which implies convergence of the whole sequence due to monotonicity of $f(m^{(t)})$. The results of Jägersküpper (2005) and Akimoto et al. (2018) imply linear convergence.

Other shapes of $S$ give different results. For $d \geq 2$, if $S$ is a ball not containing the origin then the function is still unimodal. For example, define $S$ as the open ball of radius $1/2$ around the first unit vector $e_1 = (1, 0, \dots, 0) \in \mathbb{R}^d$. Then at $m := 3/2 \cdot e_1$ we have $\xi_p^f(m) = 0$ for all $p > 0$, and according to Theorem 3 the algorithm can converge prematurely if the step size is small. Alternatively, if $S$ is the closed ball, then all points except the origin are $p$-improvable for all $p < 1/2$; however, there does not exist a positive lower semicontinuous lower bound on $\xi_p^f$ in any neighborhood of $m = 3/2 \cdot e_1$, and again the algorithm can converge to this point, irrespective of the target success probability $\tau$.

Now consider the strip $S := (a, \infty) \times (0, 1) \subset \mathbb{R}^2$ with parameter $a > 0$. An elementary calculation of the success rate at $m := (a + \epsilon, 1)$ for $\sigma \to 0$ shows that the (1 + 1)-ES is guaranteed to converge to the optimum irrespective of the initial conditions if $\tan^{-1}(a)/(2\pi) > \tau$ (details are found in the appendix), that is, if $a$ is large enough; otherwise the algorithm can converge prematurely to a point on the edge $(a, \infty) \times \{1\}$ of $S$.
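The success rate at the critical edge can be checked by Monte Carlo simulation. The sketch below uses a hypothetical jump height of $10$ as the barrier on $S$ (any value exceeding the local variation of the sphere term works) and estimates the fraction of improving offspring at $m = (a + \epsilon, 1)$ for $\sigma \ll \epsilon$.

```python
import math
import random

def f_jump(y, a):
    """Sphere function with an added jump on the strip S = (a, inf) x (0, 1);
    the jump height 10 is a hypothetical choice."""
    penalty = 10.0 if (y[0] > a and 0.0 < y[1] < 1.0) else 0.0
    return y[0] ** 2 + y[1] ** 2 + penalty

def success_rate(a, eps, sigma, n, rng):
    """Fraction of offspring improving on m = (a + eps, 1) for sigma << eps."""
    m = [a + eps, 1.0]
    fm = f_jump(m, a)
    hits = sum(
        1 for _ in range(n)
        if f_jump([m[0] + sigma * rng.gauss(0, 1),
                   m[1] + sigma * rng.gauss(0, 1)], a) < fm
    )
    return hits / n

rng = random.Random(0)
a = 1.0
est = success_rate(a, 1e-3, 1e-5, 100000, rng)
bound = math.atan(a) / (2.0 * math.pi)   # = 0.125 for a = 1
```

For $a = 1$ the estimate agrees with the analytic value $\tan^{-1}(1)/(2\pi) = 1/8$ from the appendix up to Monte Carlo noise.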

### 5.6 Extremely Rugged Barrier

The function is point-wise $p$-improvable everywhere. However, similar to the closed-ball case in the previous section, there is no positive, lower semicontinuous lower bound on $\xi_p^f$. Therefore Theorem 2 does not apply. Indeed, unsurprisingly, simulations^{4} show that the algorithm gets stuck with positive probability when initialized with $0 < x^{(0)} \ll 1$ and $\sigma \ll 1$. When removing $0$ from $S$, then analogously to Section 5.3 we obtain $p_f^{\leq}(m, \sigma) \in O(\sigma)$ for $m = 0$ and small $\sigma$, and hence Theorem 3 applies.

In contrast, if $S$ is a Cantor set of measure zero then the algorithm diverges successfully, since it ignores zero sets with full probability.

## 6 Conclusions and Future Work

We have established global convergence of the (1 + 1)-ES for an extremely wide range of problems. Importantly, with the exception of a few proof details, the analysis captures the actual dynamics of the algorithm and hence consolidates our understanding of its working principles.

Our analysis rests on two pillars. The first one is a progress guarantee for rank-based evolutionary algorithms with elitist selection. In its simplest form, it bounds the progress on problems without plateaus from below. It seems to be quite generally applicable, for example, to runtime analysis and hence to the analysis of convergence speed.

The second ingredient is an analysis of success-based step size control. The current method barely suffices to show global convergence. It is not suitable for deducing stronger statements such as linear convergence on scale invariant problems. Control of the step size on general problems therefore needs further work.

Many natural questions remain open; the most significant ones are listed in the following and are left for future work.

The approach does not directly yield results on the speed of convergence. However, the progress guarantee of Theorem 1 is a powerful tool for such an analysis. It can provide us with drift conditions and hence yield bounds on the expected runtime and on the tails of the runtime distribution. But for that to be effective we need better tools for bounding the tails of the step size distribution. Here, again, drift is a promising tool.

The current results are limited to step-size adaptive algorithms and do not include covariance matrix adaptation. One could hope to extend the approach to the (1 + 1)-CMA-ES algorithm (Igel et al., 2007) or to (1 + 1)-xNES (Glasmachers et al., 2010). Controlling the stability of the covariance matrix is expected to be challenging. It is not clear whether additional assumptions will be required. As an added benefit, it may be possible to relax the condition $p > \tau$ for $p$-improvability, by requiring it only after successful adaptation of the covariance matrix.

Plateaus are currently not handled. Theorem 1 shows how they distort the distribution of the decrease. Worse, they affect step size adaptation, and they make it virtually impossible to obtain a lower bound on the one-step probability of a strict improvement. Therefore, proper handling of plateaus requires additional arguments.

In the interest of generality, our convergence theorem only guarantees the existence of a limit point, not convergence of the sequence as a whole. We believe that convergence actually holds in most cases of interest (at least as long as there are no plateaus; see above). This is nearly trivial if the limit point is an isolated local optimum; however, it is unclear for a spatially extended optimum, for example, a low-dimensional variety or a Cantor set.

Our current result requires a saddle point to be $p$-improvable for some $p > \tau$; otherwise the theorem does not exclude convergence of the ES to the saddle point. We know from simulations that the (1 + 1)-ES overcomes $p$-improvable saddle points reliably, even for $p \ll \tau$. A proper analysis guaranteeing this behavior would allow us to establish statements analogous to work on gradient-based algorithms that overcome saddle points quickly and reliably; see for example, Dauphin et al. (2014). However, this is clearly beyond the scope of the present article.

We provide only a minimal negative result stating that the algorithm may indeed converge prematurely with positive probability if there exists a $p$-critical point for which the cumulative success probability does not sum to infinity. In Section 5.5, it becomes apparent that this notion is rather weak, since the statement is not formally applicable to the case of a closed ball, which however differs from the open ball scenario only on a zero set. This makes clear that there is still a gap between positive results (global convergence) and negative results (premature convergence). Theorem 3 can certainly be strengthened, but the exact conditions remain to be explored. A single $p$-improvable point with $p<\tau $ is apparently insufficient. A $p$-critical point may be sufficient, but it is not necessary.

## Acknowledgments

I would like to thank Anne Auger for helpful discussions, and I gratefully acknowledge support by Dagstuhl seminar 17191 “Theory of Randomized Search Heuristics.”

## Notes

^{1}

Some authors refer to global convergence as convergence to a global optimum. We do not use the term in this sense.

^{2}

Jägersküpper analyzed a different step size adaptation rule. However, it exhibits essentially the same dynamics as Algorithm 1.

^{3}

An alternative approach to avoiding infinite values is to apply a bounded reference measure with full support, for example, a Gaussian on $Rd$. In the absence of a uniform distribution on $X$, the price to pay for a bounded and everywhere positive reference measure is a nonuniform measure, which does not allow for a uniform, positive lower bound. The resulting technical complications seem to outweigh the slightly increased generality of the results.

^{4}

Special care must be taken when simulating this problem with floating point arithmetic. Our simulation is necessarily inexact; however, not beyond the usual limitations of floating point numbers. It does reflect the actual dynamics well. The fitness function is designed such that the most critical point for the simulation is zero, which is where standard IEEE floating point numbers have maximal precision.

## References

## Appendix

Here we provide the proofs of technical lemmas that were omitted from the main text in the interest of readability.

We have to show that the level sets of all three functions agree outside a set of measure zero. It is immediately clear from Definition 1 that the level sets of $f$ are a refinement of the level sets of $f^{\Lambda\leq}$ and $f^{\Lambda<}$; that is, $f(x) = f(x')$ implies $f^{\Lambda\leq}(x) = f^{\Lambda\leq}(x')$ and $f^{\Lambda<}(x) = f^{\Lambda<}(x')$, and $f^{\Lambda\leq}(x) < f^{\Lambda\leq}(x')$ and $f^{\Lambda<}(x) < f^{\Lambda<}(x')$ both imply $f(x) < f(x')$.

It remains to be shown that $f^{\Lambda\leq}$ and $f^{\Lambda<}$ do not join $f$-level sets of positive measure. Let $y \in \mathbb{R}$ denote a level such that $Y = (f^{\Lambda<})^{-1}(y)$ has positive measure $\Lambda(Y) > 0$. We have to show that this measure (not necessarily the whole set, only up to a zero set) is covered by a single $f$-level set. Assume the contrary, for the sake of contradiction. Then we find ourselves in one of the following situations:

There exist $x, x' \in Y$ fulfilling $a := f(x) < f(x') =: a'$, and it holds that $\Lambda(f^{-1}(a)) > 0$ and $\Lambda(f^{-1}(a')) > 0$. So the mass of $Y$ is split into at least two chunks of positive measure. This implies $f^{\Lambda<}(x') - f^{\Lambda<}(x) \geq \Lambda(f^{-1}(a)) > 0$, which contradicts the assumption that $x$ and $x'$ belong to the same $f^{\Lambda<}$-level set.

There exist $x, x' \in Y$ fulfilling $a = f(x) < f(x') = a'$, and it holds that $\Lambda(f^{-1}(I)) > 0$ for the open interval $I = (a, a')$. So $Y$ consists of a continuum of level sets of measure zero. Again, this implies $f^{\Lambda<}(x') - f^{\Lambda<}(x) \geq \Lambda(f^{-1}(I)) > 0$, leading to the same contradiction as in the first case.

The argument for $f^{\Lambda\leq}$ is exactly analogous. $\square$

The central proof argument works as follows. First, we exclude that the step size remains outside $[b_T, b_H]$ for too long. The same argument does not work for the target interval defined in Equation (1) because of its time dependency—we could overjump the moving target. Instead we show that the only way for the step size to avoid the target interval for an infinite time is to overjump, that is, to find itself above and below the interval infinitely often. Finally, an argument exploiting the properties of unsuccessful steps allows us to consider a static target, which cannot be overjumped by the property already shown above.

By construction, these episodes consist entirely of unsuccessful steps, and therefore $m^{(t)}$ remains unchanged for the duration of an episode. This comes in handy, since it means that the target interval $[\xi_{p_T}^f(m^{(t)}), \eta_{p_H}^f(m^{(t)})]$ also remains fixed, which in turn means that at least one iteration of the episode falls into this interval. We have thus constructed an infinite subsequence of iterations within the above interval, in contradiction to the assumption. $\square$

Finally, we provide details on the computations of success rates in the examples. In Section 5.2, the set where the function $f(x_1, x_2) := a \cdot x_1^2 - x_2^2$ takes the value zero consists of two lines through the origin in directions $(1, \sqrt{a})$ and $(-1, \sqrt{a})$. The success domain is the cone bounded by these lines. The angle between their directions, divided by $\pi$, corresponds to the success rate. It is two times the angle between $(1, \sqrt{a})$ and the axis direction $(0, 1)$, and hence $2\cot^{-1}(\sqrt{a})$. Dividing by $\pi$ yields the result.
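This value can be cross-checked numerically at the saddle point itself: for an isotropic Gaussian centered at the origin, the ratio $x_2/x_1$ of the two coordinates follows a standard Cauchy distribution, so the probability of sampling $f < 0$ is available in closed form and can also be estimated by simulation.

```python
import math
import random

def success_rate_at_origin(a, n, rng):
    """Monte Carlo estimate of Pr[f(x) < 0] for x ~ N(0, I) and
    f(x1, x2) = a*x1^2 - x2^2, i.e., the fraction of the plane covered
    by the double cone |x2| > sqrt(a) * |x1|."""
    hits = sum(
        1 for _ in range(n)
        if a * rng.gauss(0, 1) ** 2 - rng.gauss(0, 1) ** 2 < 0
    )
    return hits / n

a = 3.0
rng = random.Random(1)
est = success_rate_at_origin(a, 100000, rng)
# Closed form: x2/x1 is standard Cauchy, so
# Pr[|x2/x1| > sqrt(a)] = 2 * (pi/2 - atan(sqrt(a))) / pi
#                       = 2 * acot(sqrt(a)) / pi.
exact = 2.0 * (math.pi / 2.0 - math.atan(math.sqrt(a))) / math.pi
```

For $a = 3$ the closed form evaluates to exactly $1/3$, matching the geometric derivation $2\cot^{-1}(\sqrt{3})/\pi = 2 \cdot (\pi/6)/\pi$.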

The threshold $p < \cot^{-1}(a)/\pi$ in Section 5.4 follows the exact same logic, with the difference that the square root vanishes from the direction vectors, and we lose a factor of two, since the success domain is only one half of the cone.

In Section 5.5, the circular level line through the corner point $(a, 1)$ is tangent to the vector $(-1, a)$. The angle $\tan^{-1}(a)$ between $(-1, a)$ and $(-1, 0)$, divided by $2\pi$, is a lower bound on the success rate at $m = (a + \epsilon, 1)$ with $\sigma \ll \epsilon$. The bound is precise for $\epsilon \to 0$.