Abstract
We establish global convergence of the (1 + 1) evolution strategy, that is, convergence to a critical point independent of the initial state. More precisely, we show the existence of a critical limit point, using a suitable extension of the notion of a critical point to measurable functions. At its core, the analysis is based on a novel progress guarantee for elitist, rank-based evolutionary algorithms. By applying it to the (1 + 1) evolution strategy we are able to provide an accurate characterization of whether global convergence is guaranteed with full probability, or whether premature convergence is possible. We illustrate our results on a number of example applications ranging from smooth (non-convex) cases over different types of saddle points and ridge functions to discontinuous and extremely rugged problems.
1 Introduction
Global convergence of an optimization algorithm refers to convergence of the iterates to a critical point independent of the initial state—in contrast to local convergence, which guarantees this property only for initial iterates in the vicinity of a critical point.1 For example, many first order methods enjoy this property (Gilbert and Nocedal, 1992), while Newton's method does not. In the realm of direct search algorithms, mesh adaptive search algorithms are known to be globally convergent (Torczon, 1997).
Evolution strategies (ES) are a class of randomized search heuristics for direct search in $\mathbb{R}^d$. The (1 + 1)-ES is perhaps the simplest such method, originally developed by Rechenberg (1973). A particularly simple variant thereof, which was first defined by Kern et al. (2004), is given in Algorithm 1. Its state consists of a single parent individual $m \in \mathbb{R}^d$ and a step size $\sigma > 0$. It samples a single offspring per generation from the isotropic multivariate normal distribution $\mathcal{N}(m, \sigma^2 I)$ and applies (1 + 1)-selection; that is, it keeps the better of the two points. Here, $I$ denotes the identity matrix. The standard deviation $\sigma$ of the sampling distribution, also called the global step size, is adapted online. The mechanism maintains a fixed success rate, usually chosen as $1/5$, in accordance with Rechenberg's original approach. It is discussed in more detail in Section 3. In effect, step size control enables linear convergence on convex quadratic functions (Jägersküpper, 2006a), and therefore locally linear convergence on twice differentiable functions. In contrast, algorithms without step size adaptation can converge as slowly as pure random search (Hansen et al., 2015). Furthermore, being rank-based methods, ESs are invariant to strictly monotonic transformations of objective values. ESs tend to be robust and suitable for solving difficult problems (rugged and multimodal fitness landscapes), a capacity that is often attributed to their invariance properties.
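To make the procedure concrete, the following Python sketch implements the loop just described. The function name and the constants are illustrative: the update factors $2$ and $2^{-1/4}$ are one common choice implementing the 1/5-rule, not necessarily the exact constants of Algorithm 1.

```python
import numpy as np

def one_plus_one_es(f, m, sigma, iterations=10_000,
                    alpha_up=2.0, alpha_down=2.0 ** -0.25):
    """Minimal (1+1)-ES with success-based step size adaptation.

    The pair (alpha_up, alpha_down) with alpha_up = alpha_down**-4
    implements Rechenberg's 1/5-rule (target success rate 1/5).
    """
    fm = f(m)
    for _ in range(iterations):
        x = m + sigma * np.random.randn(len(m))  # offspring ~ N(m, sigma^2 I)
        fx = f(x)
        if fx <= fm:               # success: elitist (1+1)-selection
            m, fm = x, fx
            sigma *= alpha_up      # increase the step size on success
        else:
            sigma *= alpha_down    # decrease the step size on failure
    return m, sigma

# Example: minimize the sphere function f(x) = ||x||^2 in dimension 10.
m, sigma = one_plus_one_es(lambda x: float(np.dot(x, x)), np.ones(10), 1.0)
```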
Although the (1 + 1)-ES is the oldest evolution strategy in existence, we do not yet fully understand how generally it is applicable. In this article, we cast this open problem as the question on which functions the algorithm succeeds in locating a local optimum, and on which functions it may converge prematurely and hence fail. We aim at a characterization of these different cases that is as complete as possible.
By modern standards, the (1 + 1)-ES cannot be considered a competitive optimization method. The covariance matrix adaptation evolution strategy (CMA-ES) by Hansen and Ostermeier (2001) and its many variants mark the state of the art. The algorithm goes beyond the simple (1 + 1)-ES in many ways: it uses nonelitist selection with a population, it adapts the full covariance matrix of its sampling distribution (effectively resembling second order methods), and it performs temporal integration of direction information in the form of evolution paths for step size and covariance matrix adaptation. Still, its convergence order on many relevant functions is linear, and that is thanks to the same mechanism as in the (1 + 1)-ES, namely step size adaptation.
To date, convergence guarantees for ESs are scarce. Some results exist for convex quadratic problems, which essentially imply local convergence on twice continuously differentiable functions. In this situation it is natural to start with the simplest ES, which is arguably the (1 + 1)-ES. The variant defined by Kern et al. (2004) is given in Algorithm 1; it is discussed in detail in Section 3.
Jägersküpper (2003, 2005, 2006a,b) analyzed the (1 + 1)-ES2 on the sphere function as well as on general convex quadratic functions. His analysis ensures linear convergence with overwhelming probability, that is, with a probability of $1 - e^{-\Omega(d^{\varepsilon})}$ for some $\varepsilon > 0$, where $d$ is the problem dimension. In other words, the analysis is asymptotic in the sense $d \to \infty$, and for fixed (finite) dimension $d$, no concrete value or bound is attributed to this probability. A dimension-dependent convergence rate of the order $1 - \Theta(1/d)$ per iteration is obtained.
A related and more modern approach relying explicitly on drift analysis was presented by Akimoto et al. (2018), showing linear convergence of the algorithm on the sphere function, and providing an explicit, non-asymptotic runtime bound for the first hitting time of a level set.
The analysis by Auger (2005) is based on the stability of the Markov chain defined by the normalized state $m_t / \sigma_t$, for a $(1, \lambda)$-ES on the sphere function. Since the chain is shown to converge to a stationary distribution and the problem is scale-invariant, either linear convergence or divergence is obtained, with full probability. There exists sufficient empirical evidence for convergence; however, deciding between the two cases is not covered by the result.
A different approach to proving global convergence is to modify the algorithm under consideration in a way that allows for an analysis with well-established techniques. This route was explored by Diouane et al. (2015), where step size adaptation is subject to a forcing function in order to guarantee a sufficient decrease condition, akin to, for example, the Wolfe conditions for inexact line search (Wolfe, 1969). This is a powerful approach since the resulting analysis is general in terms of the algorithms (the same step size forcing mechanism can be added to virtually all ESs) and the objective functions (the function must only be bounded from below and Lipschitz near the limit point) at the same time. The price is that the analysis does not apply to the algorithms actually used within the EC community, and that we do not obtain new insights about the mechanisms of these algorithms. Furthermore, the forcing function decays slowly, forcing a linearly convergent algorithm into sublinear convergence (still much faster than random search, though). From a more technical point of view, the Lipschitz condition is unfortunate since it is not preserved under monotonic transformations of fitness values. We improve on this approach by providing sufficient decrease of a transformed objective function, a guarantee that holds for all randomized elitist, rank-based algorithms and hence does not require a forcing function or any other algorithmic changes.
The global convergence guarantee by Akimoto et al. (2010) is closest to the present article. That analysis, too, is extremely general in the sense that it covers a broad range of problems and algorithms. The objective function is assumed to be continuously differentiable, and the only requirement on the algorithm is that it successfully diverges on a linear function. This includes all state-of-the-art evolution strategies and many more algorithms. Since continuously differentiable functions are locally arbitrarily well approximated by linear functions (first order Taylor polynomial), it is concluded that any limit point must be stationary, since only there the linear term vanishes and higher order terms take over. This is an elegant and powerful result. Its main restriction is that it applies only to continuously differentiable functions. This is a huge class, but the restriction can still be considered a relevant limitation, because on continuously differentiable problems ESs are in direct competition with gradient-based methods, which are usually more efficient if gradients are available.
For this reason, solving smooth and otherwise easy problems cannot be the focus of evolution strategies. Therefore, in this article we seek to explore the most general class of problems that can be solved with an evolution strategy. In other words, we aim to push the limits beyond the well-understood cases, towards really difficult ones. Our goal is to establish the largest possible class of problems that can be solved reliably by an ES, and we also want to understand its limitations, that is, which problems cannot be solved, and why. For this purpose, we focus on the simplest such algorithm, namely the (1 + 1)-ES defined in Algorithm 1. It turns out that the limitations of the algorithm are closely tied to its success-based step size adaptation mechanism. To capture this effect we introduce a novel regularity condition ensuring the proper functioning of success-based step size control. The new condition is arguably much weaker than continuous differentiability, in a sense that will become clear as we discuss examples and counter-examples.
From a bird's-eye perspective, our contributions are as follows:
we provide a general progress or decrease guarantee for rank-based elitist algorithms,
we show how generally the (1 + 1)-ES is applicable, that is, on which problems it will find a local optimum.
The article and the proofs are organized as follows. In the next section we establish a progress guarantee for rank-based elitist algorithms. This result is extremely general, and it is in no way tied to continuous search spaces and the (1 + 1)-ES. Therefore, it is stated in general terms, in the expectation that it will prove useful for the analysis of algorithms other than the (1 + 1)-ES. Its role in the global convergence proof is to ensure a sufficient rate of optimization progress as long as the step size is well adapted and the progress rate is bounded away from zero. In Section 3, we discuss properties of the (1 + 1)-ES and introduce the regularity condition. Based on this condition we show that the step size returns infinitely often to a range where non-trivial progress can be concluded from the decrease theorem. Based on these achievements we establish a global convergence theorem in Section 4, essentially stating that there exists a subsequence of iterates converging to a critical point, the exact notion of which is defined in Section 3. We also establish a negative result, showing that a nonoptimal critical point can result in premature convergence with positive probability, which excludes global convergence. In Section 5, we apply the analysis to a variety of settings and demonstrate its implications. We close with conclusions and open questions.
2 Optimization Progress of Rank-Based Elitist Algorithms
In this section, we establish a general theorem ensuring a certain rate of optimization progress for randomized rank-based elitist algorithms. We consider a general search space $X$. This space is equipped with a $\sigma$-algebra and a reference measure denoted by $\mu$. The usual choice of the reference measure is the counting measure for discrete spaces and the Lebesgue measure for continuous spaces. The objective function $f : X \to \mathbb{R}$, to be minimized, is assumed to be measurable. The parent selection and variation operations of the search algorithm are also assumed to be measurable; indeed, we assume that these operators give rise to a distribution from which the offspring is sampled, and that this distribution has a density with respect to $\mu$.
Because the offspring generation distribution has a density with respect to $\mu$, the algorithm is, with full probability, invariant to the values of the objective function on zero sets (sets $N$ of measure zero, fulfilling $\mu(N) = 0$). The following definition captures these properties. It encodes the "essential" level set structure of an objective function.
It follows immediately from the definition that the sublevel sets of equivalent objective functions coincide outside a zero set.
In the next step we construct a canonical representative for each equivalence class, which we can think of as a normal form of an objective function. To this end, we measure sublevel sets: for measurable $f$, define $f^-(x) = \mu(\{y \in X : f(y) < f(x)\})$ and $f^+(x) = \mu(\{y \in X : f(y) \le f(x)\})$, the measures of the strict and of the non-strict sublevel set of $x$.
The definition is illustrated with two examples in Figures 1 and 2. In the following, $m$ will denote the elite (or parent) point, and $m_t$ is the elite point in iteration $t$ of an iterative algorithm, that is, an evolutionary algorithm with elitist selection. For two very different reasons, namely 1) to avoid divergence of the algorithm in the case of unbounded search spaces, and 2) for simplicity of the technical arguments in the proofs, we restrict ourselves to the case that the sublevel set $S_0 = \{x \in X : f(x) \le f(m_0)\}$ of the initial iterate is bounded and has finite spatial suboptimality $f^+(m_0)$. For most reasonable reference measures, boundedness implies finite spatial suboptimality. For $X = \mathbb{R}^d$ equipped with the Lebesgue measure, boundedness is equivalent to the topological closure $\overline{S_0}$ being compact. The assumptions immediately imply that the sublevel sets of all iterates are bounded and that $f^+(m_t)$ is finite for all $t$, and that restricted to $S_0$ the functions $f^-$ and $f^+$ take values in the bounded range $[0, \mu(S_0)]$. Since an elitist algorithm never accepts points outside $S_0$, we will from here on ignore the issue of infinite $f^+$-values.3
In the continuous case, a plateau is a level set of positive Lebesgue measure. When defining a local optimum as the best point within an open neighborhood, an interior point of a plateau is a local optimum, which may not always be intended. Anyway, when analyzing the (1 + 1)-ES we will not handle plateaus and instead assume that level sets of $f$ are zero sets. This also implies that $f^-$ and $f^+$ agree. For now the slightly weaker statement of the following lemma, which does allow for plateaus, is sufficient.
Let $f$ be measurable. If $f^+(x)$ is finite for all $x \in X$, then $f$, $f^-$, and $f^+$ are equivalent.
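For illustration, the following two computations (in the sublevel-set notation introduced above) show a plateau-free case, where both normal forms agree, and a plateau case, where they differ by the plateau mass:

```latex
\begin{align*}
  &\text{No plateaus: } f(x) = x^2 \text{ on } X = [-1, 1] \text{ with Lebesgue measure:}\\
  &\qquad f^-(x) = \mu(\{y : y^2 < x^2\}) = 2|x| = f^+(x).\\[1ex]
  &\text{Plateaus: } f(x) = \lfloor x \rfloor \text{ on } X = [0, 3):\\
  &\qquad f^-(x) = \lfloor x \rfloor, \qquad f^+(x) = \lfloor x \rfloor + 1,\\
  &\qquad\text{so } f^+(x) - f^-(x) = 1 \text{ is the measure of the plateau containing } x.
\end{align*}
```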
Due to the rank-based nature of the algorithms under study we cannot expect to fulfill a sufficient decrease condition based on $f$-values. This is because a functional gain $f(m) - f(x)$ achieved by moving from $m$ to $x$ can be turned into an arbitrarily small or large gain $g(f(m)) - g(f(x))$ by a strictly monotonically increasing transformation $g$, and this class of transformations does not allow us to bound the difference uniformly, neither additively nor multiplicatively. Instead, the following theorem establishes a progress or decrease guarantee measured in terms of the spatial suboptimality function $f^+$. It gets around the problem of inconclusive values in objective space (which, in the case of single-objective optimization, is just the real line) by considering a quantity in search space, namely the reference measure of the sublevel set.
The algorithm is randomized; hence the decrease follows a distribution. The following definition captures properties of this distribution.
Note that all quantities just defined implicitly depend on the objective function, the sampling distribution, and the reference measure. This is not indicated explicitly in order to avoid excessive clutter in the notation.
If the function is continuous with continuous domain and without plateaus, then the lower and the upper quantile function coincide, and they map each probability to the corresponding unique quantile of the distribution of objective values under the sampling distribution. However, if there exists a plateau within the support of the sampling distribution (a level set of positive measure, which is always the case if $\mu$ is discrete), then the gap between the two quantile functions is positive, and on the plateau the value can lie anywhere between the lower and the upper quantile. The exact value does not matter, since such values are used only as arguments to one of the rounding functions, which "round" the probability down or up, respectively, to the closest value that is attainable as the probability of sampling a sublevel set. The freedom of choice can also be understood in the context of Figure 1: if the reference point in the definitions is located on the plateau, then the value can lie anywhere between the probability mass of the sublevel set excluding and including the plateau.
With these definitions in place, the following theorem controls the expected value as well as the quantiles of the decrease distribution.
We start with the first two claims, which provide lower bounds on the quantiles of the probability of improving by some margin. The argument here is elementary: an improvement of the offspring $x$ over its parent $m$ by some margin in terms of $f^+$ means that the sublevel set of $x$ is smaller than that of $m$ by the corresponding amount of $\mu$-mass. This difference in $\mu$-mass of sublevel sets translates directly into a difference in sampling probability. Note that the probabilities refer to the same distribution from which the offspring $x$ is sampled, and that $f^-$- and $f^+$-values directly correspond to $\mu$-mass. The situation is illustrated in Figure 3.
For the second claim we define a suitable level and the corresponding sublevel set, and we note that this set carries sufficient probability mass under the sampling distribution. Then, with an argument analogous to the one above, we obtain the required decrease of $f^+$ for every offspring sampled within this set. This immediately yields the bound on the corresponding quantile, which shows the second claim.
In our application of the above theorem to the (1 + 1)-ES, the point $x$ corresponds to the offspring, sampled from a Gaussian centered on the parent $m$.
As a special case, the theorem covers the fitness-level method (Droste et al., 2002; Wegener, 2003). However, in particular for search distributions spreading their probability mass over many level sets, the theorem is considerably stronger.
In the continuous case, in the absence of plateaus, the statement can be simplified considerably:
An isotropic normal distribution with component-wise standard deviation (step size) $\sigma$ has covariance matrix $\sigma^2 I$, where $I$ is the identity matrix; hence the offspring is distributed as $x \sim \mathcal{N}(m, \sigma^2 I)$. In the context of continuous search spaces, Jägersküpper (2003) refers to progress measured this way as "spatial gain." He analyzes in detail the gain distribution of an isotropic search distribution on the sphere model. This result is much less general than the previous corollary, since we can deal with arbitrary objective functions, which are characterized (locally) only by a single number, the success probability. For the special case of a Gaussian mutation and the sphere function, Jägersküpper's computation of the spatial gain is more exact, since it is tightly tailored to the geometry of the case, in contrast to being based on a general bound. We lose only a multiplicative factor in the gain, which does not impact our analysis significantly. However, it should be noted that in the problem analyzed by Jägersküpper, the factor grows with the problem dimension $d$. The spatial gain is closely connected to the notion of a progress rate (Rechenberg, 1973), in particular if the gain is lower bounded by a fixed fraction of the suboptimality. For a fixed objective function like the sphere model it is easy to relate functional suboptimality to spatial suboptimality $f^+$.
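For instance, on the sphere function the two notions of suboptimality are related by an explicit formula, where $V_d$ denotes the Lebesgue measure of the $d$-dimensional unit ball:

```latex
f(x) = \|x\|^2
\quad\Longrightarrow\quad
f^+(m) = \mu\bigl(\{x : \|x\| \le \|m\|\}\bigr) = V_d \, \|m\|^d = V_d \, f(m)^{d/2}.
```

Hence a spatial gain by a fixed factor per iteration, say $f^+(m_{t+1}) \le (1 - c) \cdot f^+(m_t)$, translates into linear convergence of the distance $\|m_t\|$ to the optimum at the rate $(1 - c)^{1/d}$ per iteration.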
3 Success-Based Step Size Control in the (1 + 1)-ES
In this section, we discuss properties of the (1 + 1)-ES algorithm and provide an analysis of its success-based step size adaptation rule that will allow us to derive global convergence theorems. To this end we introduce a nonstandard regularity property.
From here on, we consider the search space $X = \mathbb{R}^d$, equipped with the standard Borel $\sigma$-algebra, and $\mu$ denotes the Lebesgue measure. Of course, all results from the previous section apply in this setting.
In each iteration $t$, the state of the (1 + 1)-ES is given by the pair $(m_t, \sigma_t)$. It samples one candidate offspring $x_t$ from the isotropic normal distribution $\mathcal{N}(m_t, \sigma_t^2 I)$. The parent is replaced by successful offspring, meaning that the offspring must perform at least as well as the parent: $f(x_t) \le f(m_t)$.
The goal of success-based step size adaptation is to maintain a stable distribution of the success rate, for example, concentrated around $1/5$. This can be achieved with a number of different mechanisms. Here we consider perhaps the simplest such mechanism, namely immediate adaptation based on "success" or "failure" of each sample. Pseudocode for the full algorithm is provided in Algorithm 1.
Constants $\alpha^- \in (0, 1)$ and $\alpha^+ > 1$ in Algorithm 1 control the change of $\sigma$ in case of failure and success, respectively. They are parameters of the method. For $\alpha^+ = (\alpha^-)^{-4}$ we obtain an implementation of Rechenberg's classic 1/5-rule (Rechenberg, 1973). We call $p^* = \frac{\log(1/\alpha^-)}{\log(\alpha^+ / \alpha^-)}$ the target success probability of the algorithm, which is always assumed to be strictly less than $1/2$. This is equivalent to $\alpha^+ \cdot \alpha^- > 1$. A reasonable parameter setting is $\alpha^+ = 2$ and $\alpha^- = 2^{-1/4}$, which yields $p^* = 1/5$.
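The role of $p^*$ becomes transparent by computing the expected change of the log step size in an iteration with success probability $p$:

```latex
\mathbb{E}\bigl[\log(\sigma_{t+1}) - \log(\sigma_t)\bigr]
  = p \cdot \log(\alpha^+) + (1 - p) \cdot \log(\alpha^-),
```

which vanishes exactly for $p = p^*$, is positive for $p > p^*$ (the step size grows), and is negative for $p < p^*$ (the step size shrinks). For $\alpha^+ = 2$ and $\alpha^- = 2^{-1/4}$ we obtain $p^* = \frac{\frac{1}{4}\log(2)}{\frac{5}{4}\log(2)} = \frac{1}{5}$.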
Two properties of the algorithm are central for our analysis: it is rank-based and it performs elitist selection, ensuring that the best-so-far solution is never lost and that the sequence $(f(m_t))_{t \in \mathbb{N}}$ is monotonically decreasing.
Since step size control depends crucially on the concept of a fixed rate of successful offspring, we define the success probability of the algorithm: the probability that a sampled point outperforms the parent at the center of the search distribution.
The function $p^+(m, \sigma)$ computes the probability of sampling a point at least as good as $m$, while $p^-(m, \sigma)$ computes the probability of sampling a strictly better point. If $p^+$ and $p^-$ coincide (i.e., if there are no plateaus), then we write $p(m, \sigma)$. A nice property of the success probability is that it does not drop too quickly when increasing the step size:
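The success probabilities are rarely available in closed form, but they are straightforward to estimate by Monte Carlo sampling. The following sketch (the function name is ours) estimates $p^+(m, \sigma)$ and $p^-(m, \sigma)$:

```python
import numpy as np

def success_probabilities(f, m, sigma, n_samples=100_000, seed=None):
    """Monte Carlo estimates of p^+ (offspring at least as good as the
    parent) and p^- (offspring strictly better than the parent)."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m, dtype=float)
    fm = f(m)
    x = m + sigma * rng.standard_normal((n_samples, m.size))
    fx = np.apply_along_axis(f, 1, x)
    return float(np.mean(fx <= fm)), float(np.mean(fx < fm))

# On the sphere function the success probability tends to 1/2 as
# sigma -> 0 at any point m != 0, and to 0 as sigma grows large.
p_plus, p_minus = success_probabilities(lambda z: np.dot(z, z), [1.0, 0.0], 0.01)
```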
The proof is found in the appendix; this is the case for a number of technical lemmas in this section. The next step is to define a plausible range for the step size.
We think of a step size $\sigma$ whose success probability $p(x, \sigma)$ exceeds an upper threshold as "too small" at $x$. Similarly, if the success probability falls below a lower threshold, then $\sigma$ is a "too large" step size at $x$. Assume that the two thresholds are chosen so that a sufficiently wide range of "well-adapted" step sizes exists in between the "too small" and "too large" ones. We aim to establish that if the step size is outside this range, then step size adaptation will push it back into the range. The main complication is that the range for $\sigma$ depends on the point $x$.
The following lemma establishes a gap between lower and upper step size bound, that is, a lower bound on the size of the step size range.
For sufficiently separated success probability thresholds, the ratio between the upper and the lower step size bound is lower bounded by a constant greater than one, uniformly for all $x$.
The following definition is central. It captures the ability of the (1 + 1)-ES to recover from a state with a far too small step size. This property is needed to avoid premature convergence.
For $p \in (0, 1/2]$, a function $f$ is called $p$-improvable in $x$ if $\sigma_p(x) := \sup\{\bar{\sigma} > 0 : p^-(x, \sigma) \ge p \text{ for all } \sigma \in (0, \bar{\sigma})\}$ is positive. The function is called $p$-improvable on a set $A$ if $\sigma_p|_A$ (the function restricted to $A$) is lower bounded by a positive, lower semi-continuous function. A point is called $0$-critical if it is not $p$-improvable for any $p > 0$.
The property of $p$-improvability is a nonstandard regularity condition. The concept applies to measurable functions; hence we do not need to restrict ourselves to smooth or continuous objectives. On the one hand, the property excludes many measurable and even some smooth functions. On the other hand, it is far less restrictive than continuity and smoothness, in the sense that it allows the objective function to jump and the level sets to have kinks. Intuitively, in the two-dimensional case illustrated in Figure 4, if for each point the sublevel set opens up in an angle of more than $2\pi p$, then the function is $p$-improvable. This is the case for many discontinuous functions, however, not for all smooth ones. A polynomial of degree three can serve as a counter-example, in which every point of a one-parameter family is $0$-critical; all of its contour lines form cuspidal cubics (see Figure 6 in Section 5.3). Local optima are always $0$-critical, but many critical points of smooth functions are not (see below). The above example demonstrates that some saddle points share this property; however, if $x$ is $0$-critical but not locally optimal, then $p^-(x, \sigma) > 0$ for all $\sigma > 0$. This means that such a point can be improved with positive probability for each choice of the step size, but in the limit $\sigma \to 0$ the probability of improvement tends to zero.
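As an illustration of behavior near a $0$-critical saddle point (using our own example $f(x, y) = x^2 + y^3$, which is not necessarily the polynomial referenced above), the improvement region $\{x^2 + y^3 < 0\}$ narrows to a cusp at the origin, so the success probability at the origin vanishes as $\sigma \to 0$, even though every point on the negative $y$-axis improves upon it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative saddle: the origin is 0-critical but not a local optimum.
for sigma in [1.0, 0.1, 0.01, 0.001]:
    z = sigma * rng.standard_normal((200_000, 2))
    p = np.mean(z[:, 0] ** 2 + z[:, 1] ** 3 < 0.0)  # success probability at m = 0
    print(f"sigma = {sigma:7.3f}   p^-(0, sigma) ~ {p:.4f}")

# The estimates decay to zero (roughly like sqrt(sigma)); along a
# geometrically shrinking step size sequence they are summable, which
# is exactly the situation exploited by Theorem 3 in Section 4.
```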
We should stress the difference between point-wise $p$-improvability, which simply demands that $\sigma_p(x)$ is positive, and set-wise $p$-improvability, which in addition demands that $\sigma_p$ is lower bounded by a lower semicontinuous positive function. The latter property ensures the existence of a positive lower bound for $\sigma_p$ on a compact set. In this sense, set-wise $p$-improvability is uniform on compact sets. In Sections 5.5 and 5.6, we will see examples where this makes a decisive difference.
Intuitively, the value of $p$ of a $p$-improvable function is decisive: if it falls below the target success probability $p^*$, then the algorithm is in danger of systematically decreasing its step size when it should instead increase it.
The next lemma establishes that smooth functions are $p$-improvable in all regular points, and also in most saddle points.
Let $f : \mathbb{R}^d \to \mathbb{R}$ be continuously differentiable.
For a regular point $x$ (i.e., $\nabla f(x) \neq 0$), $f$ is $p$-improvable in $x$ for all $p < 1/2$.
Let $R$ denote the set of all regular points of $f$; then $f$ is $p$-improvable on $R$, for all $p < 1/2$.
Let $x^*$ denote a critical point of $f$, let $f$ be twice continuously differentiable in a neighborhood of $x^*$, and let $H$ denote the Hessian matrix of $f$ at $x^*$. If $H$ has at least one negative eigenvalue, then $x^*$ is not $0$-critical.
Similarly, we need to ensure that the step size does not diverge to $\infty$. This is easy, since the spatial suboptimality $f^+(m_0)$ is finite:
In other words, a too large step size is very likely to produce unsuccessful offspring. The probability of success decays quickly with growing step size; equivalently, the step size bound grows only slowly, in the form $\sigma \propto p^{-1/d}$, as the success probability $p$ decays to zero. Applying the above inequality to $p = p^*$ implies that for large enough step size $\sigma_t$, the expected change of $\log(\sigma_t)$ in the (1 + 1)-ES (Algorithm 1) is negative.
The following lemma is elementary. It is used multiple times in proofs, with the interpretation of the event "1" meaning that a statement of interest holds true. It plays a role similar to drift theorems in analyses of the expected or high-probability behavior (Lehre and Witt, 2013; Lengler and Steger, 2016; Akimoto et al., 2018); however, here we aim for almost sure results.
Let $(B_t)_{t \in \mathbb{N}}$ denote a sequence of independent binary random variables. If there exists a uniform lower bound $\Pr(B_t = 1) \ge p_0 > 0$, then almost surely there exists an infinite subsequence $(t_k)_{k \in \mathbb{N}}$ so that $B_{t_k} = 1$ for all $k \in \mathbb{N}$.
In applications of the lemma, the events of interest are not necessarily independent; however, they can be "made independent" by considering a sequence of independent events that imply the events of interest. In our applications, this is the case if the events of actual interest hold with probability at least $p_0$; then an i.i.d. sequence of Bernoulli events implying corresponding sub-events with probability exactly $p_0$ does the job. In other words, we will have a sequence of independent events $B_t$, where $B_t = 1$ implies the event of interest $A_t$. The above lemma is then applied to $(B_t)_{t \in \mathbb{N}}$, which trivially yields the same statement for $(A_t)_{t \in \mathbb{N}}$. We use this construction implicitly in all applications of the lemma.
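The lemma itself is a direct consequence of the second Borel–Cantelli lemma: for independent events with

```latex
\sum_{t=0}^{\infty} \Pr(B_t = 1) \ge \sum_{t=0}^{\infty} p_0 = \infty,
```

the event $B_t = 1$ occurs for infinitely many $t$ with probability one.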
The following lemma establishes, under a number of technical conditions, that the step size control rule succeeds in keeping the step size stable. If the prerequisites are fulfilled, then the result yields an impossible conclusion, namely that the overall reduction of the spatial suboptimality is unbounded. So the lemma is designed with proofs by contradiction in mind.
Equation (1) is a rather weak condition demanding that step size adaptation works as desired. However, the requirement of a uniform lower bound on the step size together with Theorem 1 implies that the (1 + 1)-ES would make infinite $f^+$-progress in expectation. This is of course impossible if $f^+(m_0)$ is finite, since $f^+$ is by definition non-negative. Therefore the lemma does not describe a typical situation observed when running the (1 + 1)-ES but, quite in contrast, an impossible situation that needs to be excluded in the proof of the main result in the next section.
4 Global Convergence
In this section, we establish our main result. The theorem ensures the existence of a limit point of the sequence in a subset of desirable locations. In many cases this amounts to convergence of the algorithm to a (local) optimum.
Consider a measurable objective function $f$ with level sets of measure zero. Assume that the closure $\overline{S_0}$ of the initial sublevel set is compact, and let $K \subset \overline{S_0}$ denote a closed subset. If $f$ is $p$-improvable on $\overline{S_0} \setminus K$ for some $p > p^*$, then the sequence $(m_t)_{t \in \mathbb{N}}$ has a limit point in $K$.
Corollary 2 ensures that in each such state the probability of decreasing the $f^+$-value by at least some fixed amount $\Delta > 0$ is lower bounded by a constant $q > 0$. We apply Lemma 6 with the following construction. For each state we pick a set $E_t$ of probability mass exactly $q$, each point of which improves on $f^+(m_t)$ by at least $\Delta$. Then we model the sampling procedure of the (1 + 1)-ES in iteration $t$ as a two-stage process: first we draw a binary variable $B_t$ with $\Pr(B_t = 1) = q$, and then we draw the offspring $x_t$ from a Gaussian restricted to $E_t$ if $B_t = 1$, and restricted to the complement otherwise. The variables $B_t$ are independent, by construction.
Then Lemma 6 implies that the overall $f^+$-decrease is almost surely infinite, which contradicts the fact that $f^+(m_0)$ is finite and $f^+$ is lower bounded by zero. Hence, the sequence leaves the set $A_\varepsilon = \{x \in \overline{S_0} : \operatorname{dist}(x, K) \ge \varepsilon\}$ after finitely many steps, almost surely. For $\varepsilon = 1/n$, let $t_n$ denote an iteration fulfilling $m_t \notin A_{1/n}$ for all $t \ge t_n$. The sequence $(m_t)_{t \in \mathbb{N}}$ does not have a limit point in $\overline{S_0} \setminus K$ (since such a point would be contained in $A_{1/n}$ for some $n$); however, due to the Bolzano–Weierstraß theorem it has at least one limit point in $\overline{S_0}$, which must therefore be located in $K$.
In accordance with Akimoto et al. (2010), the following corollary establishes convergence to a critical point for continuously differentiable functions.
Let $f : \mathbb{R}^d \to \mathbb{R}$ be a continuously differentiable function with level sets of measure zero. Assume that the closure of the initial sublevel set is compact. Then the sequence $(m_t)_{t \in \mathbb{N}}$ has a critical limit point.
Technically, the above statements do not apply to problems with unbounded sublevel sets. However, due to the fast decay of the tails of Gaussian search distributions we can often approximate these problems by changing the function "very far away" from the initial search distribution, in order to make the sublevel sets bounded. We may then even apply the theorem with empty set $K$, which implies that after a while the approximation becomes insufficient, since the algorithm diverges. In this sense we can conclude divergence, for example, on a linear function. We will use this argument several times in the next section, mainly to avoid unnecessary technical complications when defining saddle points and ridge functions.
We may ask whether $p$-improvability for $p > p^*$ is not only a sufficient but also a necessary condition for global convergence. This turns out to be wrong. The quadratic saddle point case discussed in Section 5.2 is a counter-example, where the algorithm diverges reliably even if the success probability at the saddle is far smaller than $p^*$. In contrast, the ridge of $0$-critical saddle points analyzed in Section 5.3 results in premature convergence, despite the fact that the critical points form a zero set, and this can even happen for a ridge of $p$-improvable points with $p < p^*$; see Section 5.4. Drift analysis is a promising tool for handling all of these cases. Here we provide a rather simple result, which still suffices for many interesting cases. A related analysis for a nonelitist ES was carried out by Beyer and Meyer-Nieberg (2006).
Define the zero sequence $\sigma_t = (\alpha^-)^t \cdot \sigma_0$. For given $m_0$, there exists a $\sigma_0 > 0$ such that the success probabilities $p^+(m_0, \sigma_t)$ have a finite sum. By definition, the probability of never sampling a successful offspring when starting the algorithm in the initial state $(m_0, \sigma_0)$ is given by $\prod_{t=0}^{\infty} \bigl(1 - p^+(m_0, \sigma_t)\bigr)$, which is then positive; in this case we have $m_t = m_0$ and $\sigma_t = (\alpha^-)^t \cdot \sigma_0$ for all $t \in \mathbb{N}$.
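The positivity of the above probability rests on a standard fact about infinite products: for $p_t \in [0, 1)$ it holds

```latex
\prod_{t=0}^{\infty} (1 - p_t) > 0
\quad\Longleftrightarrow\quad
\sum_{t=0}^{\infty} p_t < \infty.
```

This is why the prerequisite of the theorem is phrased in terms of the cumulative success probability along the geometrically shrinking sequence of step sizes.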
The above theorem precludes global convergence to a (local) optimum with full probability in the presence of a suitable nonoptimal $0$-critical point.
5 Case Studies
In this section, we analyze various example problems with very different characteristics by applying the above convergence analysis. We characterize the optimization behavior of the (1 + 1)-ES, giving either positive or negative results in terms of global convergence. We start with smooth functions and then turn to less regular cases of nonsmooth and discontinuous functions. On the one hand, we show that the theorem is applicable to interesting and nontrivial cases; on the other hand, we explore its limits.
5.1 The 2-D Rosenbrock Function
The Rosenbrock function is a popular test problem because it requires a diverse set of optimization behaviors: the algorithm must descend into a parabolic valley, follow the valley while adapting to its curved shape, and finally converge into the global optimum, which is a smooth optimum with nontrivial (but still moderate) conditioning.
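In its standard two-dimensional form, which we consider here, the function reads

```latex
f(x, y) = (1 - x)^2 + 100 \, (y - x^2)^2,
```

with the parabolic valley along $y = x^2$ and the unique critical point and global optimum at $(x, y) = (1, 1)$.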
Corollary 3 immediately implies convergence of the (1 + 1)-ES into the global optimum. It does not say anything about the speed of convergence; however, Jägersküpper (2006a) established linear convergence in the last phase with overwhelming probability (albeit using a different step size adaptation rule).
Taken together, these results give a rather complete picture of the optimization process: irrespective of the initial state we know that the algorithm manages to locate the global optimum without getting stuck on the way. Once the objective function starts to look quadratic in good enough approximation, Jägersküpper's result indicates that linear convergence can be expected. The same analysis applies to all twice continuously differentiable unimodal functions without critical points other than the optimum.
5.2 Saddle Points—The $p$-Improvable Case
Simulations show that the ES overcomes the zero level set containing the saddle point without a problem, also for large values of the target success probability $p^*$. It seems that $p$-improvable saddle points do not result in premature convergence of the algorithm, irrespective of the value of $p^*$. However, this statement is based on an empirical observation, not on a rigorous proof.
5.3 Saddle Points—The $0$-Critical Case
5.4 Linear Ridge
As long as the success probability on the ridge exceeds $p^*$, we can conclude divergence of the algorithm (the intended behavior) from Theorem 2. Otherwise we lose this property, and it is well known and easy to check with simulations that for a sufficiently sharp ridge the algorithm indeed converges prematurely.
5.5 Sphere with Jump
If $D$ is the complement of a star-shaped open neighborhood of the origin, then it is easy to see that the function is unimodal and $p$-improvable for suitable $p > p^*$. Theorem 2 applied with $K = \{0\}$ yields the existence of a subsequence converging to the origin, which implies convergence of the whole sequence due to monotonicity of $(f(m_t))_{t \in \mathbb{N}}$. The results of Jägersküpper (2005) and Akimoto et al. (2018) imply linear convergence.
Other shapes of $D$ give different results. For example, if $D$ is a ball not containing the origin, then the function is still unimodal. To be specific, define $D$ as an open ball around the first unit vector $e_1$ that does not contain the origin. Then at the boundary point of $D$ farthest from the origin, the success probability decays to zero fast enough as $\sigma \to 0$, and according to Theorem 3 the algorithm can converge prematurely if the step size is small. Alternatively, if $D$ is the closed ball, then all points except the origin are $p$-improvable for all $p < 1/2$; however, there does not exist a positive lower semicontinuous lower bound on $\sigma_p$ in any neighborhood of this tangent point, and again the algorithm can converge to this point, irrespective of the target success probability $p^*$.
Now consider the case where $D$ is a strip of finite width, not containing the origin. An elementary calculation of the success rate at the critical edge point of $D$ shows that the (1 + 1)-ES is guaranteed to converge to the optimum irrespective of the initial conditions if this success rate exceeds $p^*$ (details are found in the appendix), which is the case if the strip parameter is large enough; otherwise the algorithm can converge prematurely to a point on the edge of $D$.
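The different behaviors in this section are easy to reproduce with the (1 + 1)-ES sketch from Section 1; the set shapes below are hypothetical stand-ins for the sets discussed above:

```python
import numpy as np

def sphere_with_jump(in_D, c=1.0):
    """Sphere function with a constant penalty c added on the set D."""
    return lambda x: float(np.dot(x, x)) + (c if in_D(x) else 0.0)

# Benign case: D is the complement of a (star-shaped) ball around 0.
f_easy = sphere_with_jump(lambda x: np.linalg.norm(x) > 0.5)

# Problematic case: D is an open ball that does not contain the origin;
# the boundary point farthest from the origin is a candidate for
# premature convergence (cf. Theorem 3).
e1 = np.array([1.0, 0.0])
f_hard = sphere_with_jump(lambda x: np.linalg.norm(x - e1) < 0.5)

# Repeated runs of the (1+1)-ES on f_hard, started near the critical
# point with a small step size, can get stuck there, while f_easy is
# solved reliably from any initialization.
```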
5.6 Extremely Rugged Barrier
The function is point-wise $p$-improvable everywhere. However, similar to the closed ball case in the previous section, there is no positive, lower semicontinuous lower bound on $\sigma_p$. Therefore Theorem 2 does not apply. Indeed, unsurprisingly, simulations4 show that the algorithm gets stuck with positive probability when initialized close to the barrier with a small step size. When removing the point $0$ from the barrier set, then analogously to Section 5.3 we obtain summable success probabilities for small, geometrically decaying step sizes, and hence Theorem 3 applies.
In contrast, if the barrier set is a Cantor set of measure zero, then the algorithm diverges successfully, since it ignores zero sets with full probability.
6 Conclusions and Future Work
We have established global convergence of the (1 + 1)-ES for an extremely wide range of problems. Importantly, with the exception of a few proof details, the analysis captures the actual dynamics of the algorithm and hence consolidates our understanding of its working principles.
Our analysis rests on two pillars. The first one is a progress guarantee for rank-based evolutionary algorithms with elitist selection. In its simplest form, it bounds the progress on problems without plateaus from below. It seems to be quite generally applicable, for example, to runtime analysis and hence to the analysis of convergence speed.
The second ingredient is an analysis of success-based step size control. The current method barely suffices to show global convergence. It is not suitable for deducing stronger statements such as linear convergence on scale invariant problems. Control of the step size on general problems therefore needs further work.
Many natural questions remain open; the most significant ones are listed in the following. These open points are left for future work.
The approach does not directly yield results on the speed of convergence. However, the progress guarantee of Theorem 1 is a powerful tool for such an analysis. It can provide us with drift conditions and hence yield bounds on the expected runtime and on the tails of the runtime distribution. But for that to be effective we need better tools for bounding the tails of the step size distribution. Here, again, drift is a promising tool.
The current results are limited to step size adaptive algorithms and do not include covariance matrix adaptation. One could hope to extend the approach to the (1 + 1)-CMA-ES algorithm (Igel et al., 2007), or to (1 + 1)-xNES (Glasmachers et al., 2010). Controlling the stability of the covariance matrix is expected to be challenging, and it is not clear whether additional assumptions will be required. As an added benefit, it may be possible to relax the condition of $p$-improvability with $p > p^*$ by requiring it only after successful adaptation of the covariance matrix.
Plateaus are currently not handled. Theorem 1 shows how they distort the distribution of the decrease. Worse, they affect step size adaptation, and they make it virtually impossible to obtain a lower bound on the one-step probability of a strict improvement. Therefore, proper handling of plateaus requires additional arguments.
In the interest of generality, our convergence theorem only guarantees the existence of a limit point, not convergence of the sequence as a whole. We believe that convergence actually holds in most cases of interest (at least as long as there are no plateaus; see above). This is nearly trivial if the limit point is an isolated local optimum; however, it is unclear for a spatially extended optimum, for example, a low-dimensional variety or a Cantor set.
Our current result requires a saddle point to be $p$-improvable for some $p > p^*$; otherwise the theorem does not exclude convergence of the ES to the saddle point. We know from simulations that the (1 + 1)-ES overcomes $p$-improvable saddle points reliably, also for $p < p^*$. A proper analysis guaranteeing this behavior would allow establishing statements analogous to work on gradient-based algorithms that overcome saddle points quickly and reliably; see, for example, Dauphin et al. (2014). However, this is clearly beyond the scope of the present article.
We provide only a minimal negative result stating that the algorithm may indeed converge prematurely with positive probability if there exists a $0$-critical point for which the cumulative success probability does not sum to infinity. In Section 5.5, it becomes apparent that this notion is rather weak, since the statement is not formally applicable to the case of a closed ball, which however differs from the open ball scenario only on a zero set. This makes clear that there is still a gap between positive results (global convergence) and negative results (premature convergence). Theorem 3 can certainly be strengthened, but the exact conditions remain to be explored. A single $p$-improvable point with $p < p^*$ is apparently insufficient. A $0$-critical point may be sufficient, but it is not necessary.
Acknowledgments
I would like to thank Anne Auger for helpful discussions, and I gratefully acknowledge support by Dagstuhl seminar 17191 “Theory of Randomized Search Heuristics.”
Notes
Some authors refer to global convergence as convergence to a global optimum. We do not use the term in this sense.
Jägersküpper analyzed a different step size adaptation rule. However, it exhibits essentially the same dynamics as Algorithm 1.
An alternative approach to avoiding infinite values is to apply a bounded reference measure with full support, for example, a Gaussian measure on $\mathbb{R}^d$. In the absence of a uniform distribution on $\mathbb{R}^d$, the price to pay for a bounded and everywhere positive reference measure is a nonuniform measure, which does not allow for a uniform, positive lower bound. The resulting technical complications seem to outweigh the slightly increased generality of the results.
Special care must be taken when simulating this problem with floating point arithmetic. Our simulation is necessarily inexact; however, not beyond the usual limitations of floating point numbers. It does reflect the actual dynamics well. The fitness function is designed such that the most critical point for the simulation is zero, which is where standard IEEE floating point numbers have maximal precision.
References
Appendix
Here we provide the proofs of technical lemmas that were omitted from the main text in the interest of readability.
We have to show that the level sets of all three functions agree outside a set of measure zero. It is immediately clear from the definition that the level sets of $f$ are a refinement of the level sets of $f^-$ and $f^+$; i.e., $f(x) = f(y)$ implies $f^-(x) = f^-(y)$ and $f^+(x) = f^+(y)$, while $f^-(x) < f^-(y)$ and $f^+(x) < f^+(y)$ both imply $f(x) < f(y)$.
It remains to be shown that $f^-$ and $f^+$ do not join $f$-level sets of positive measure. Let $c$ denote a level so that $L = (f^-)^{-1}(c)$ has positive measure $\mu(L) > 0$. We have to show that this measure (not necessarily the whole set, only up to a zero set) is covered by a single $f$-level set. Assume the contrary, for the sake of contradiction. Then we find ourselves in one of the following situations:
There exist $x, y \in L$ fulfilling $f(x) < f(y)$, and it holds $\mu(L \cap f^{-1}(f(x))) > 0$ and $\mu(L \cap f^{-1}(f(y))) > 0$. So the mass of $L$ is split into at least two chunks of positive measure. This implies $f^-(x) < f^-(y)$, which contradicts the assumption that $x$ and $y$ belong to the same $f^-$-level.
There exist $x, y \in L$ fulfilling $f(x) < f(y)$, and it holds $\mu(L \cap f^{-1}(I)) > 0$ for the open interval $I = (f(x), f(y))$. So a positive share of the mass of $L$ consists of a continuum of $f$-level sets of measure zero. Again, this implies $f^-(x) < f^-(y)$, leading to the same contradiction as in the first case.
The argument for $f^+$ is exactly analogous.
The central proof argument works as follows. First, we exclude that the step size remains outside a fixed (static) interval for too long. The same argument does not work for the target interval defined in Equation (1) because of its time dependence—we could overjump the moving target. Instead we show that the only way for the step size to avoid the target interval for an infinite time is to overjump it, that is, to find itself above and below the interval infinitely often. Finally, an argument exploiting the properties of unsuccessful steps allows us to consider a static target, which cannot be overjumped by the property already shown above.
By construction, these episodes consist entirely of unsuccessful steps, and therefore $m_t$ remains unchanged for the duration of an episode. This comes in handy, since it means that the target interval also remains fixed, and this in turn means that at least one iteration of the episode falls into this interval. We have thus constructed an infinite subsequence of iterations within the above interval, in contradiction to the assumption.
Finally, we provide details on the computations of success rates in the examples. In Section 5.2, the set where the function takes the value zero consists of two lines through the origin. These lines bound the cone forming the success domain. The total opening angle of this domain, divided by $2\pi$, corresponds to the success rate; it equals two times the angle between the two bounding directions. Dividing by $2\pi$ yields the result.
The threshold in Section 5.4 follows the exact same logic, with the difference that the square root vanishes from the direction vectors, and we lose a factor of two, since the success domain consists of only one half of the cone.
In Section 5.5, the circular level line through the corner point is tangent to a fixed direction vector. The angle between this tangent direction and the edge of the strip, divided by $2\pi$, is a lower bound on the success rate at the corner point for small step sizes. The bound becomes precise in the limit $\sigma \to 0$.