Abstract

GLOBAL is a multi-start stochastic method for bound-constrained global optimization problems. Its goal is to find the best local minima, which are potentially global. To this end, it combines sampling, clustering, and local search. The role of clustering is to reduce the number of local searches by forming groups of points around the local minimizers from a uniformly sampled domain and to start only a few local searches in each of those groups. We evaluate the performance of the GLOBAL algorithm on the BBOB 2009 noiseless testbed, which contains problems reflecting the typical difficulties arising in real-world applications. The obtained results are also compared with those of a simple multi-start procedure in order to analyze the effect of the applied clustering rule. An improved parameterization of the GLOBAL method is then introduced, and the performance of the new procedure is compared with that of the MATLAB GlobalSearch solver using the BBOB 2010 test environment.

1  Introduction

In this paper, global optimization problems subject to variable bound constraints are considered:
\[
\min_{x \in X} f(x), \qquad X = \{\, x \in \mathbb{R}^D : a_i \le x_i \le b_i,\ i = 1, \dots, D \,\}, \tag{1}
\]
where f(x) is the objective function, X ⊂ ℝ^D is the feasible set, a rectangular domain defined by lower and upper bounds on the variables, and D is the dimension of the search space. In general, we assume that the objective function is twice continuously differentiable, although this is not necessary for the global optimization framework itself; with a proper local search algorithm, nondifferentiable problems can also be solved.

Several stochastic strategies have been developed in the recent past to solve the problem in Equation (1). Usually they consist of two phases: a global one and a local one. During the global phase, random points are drawn from the search space X according to a certain, often uniform, distribution, and the objective function is evaluated at these points. During the local phase, the sample points are refined by means of local search to yield a candidate global minimum. We assume that a proper local search method LS is available: started from an arbitrary point x0, it generates a sequence of points in X that converges to a local minimizer x* = LS(x0) associated with the starting point x0.

These methods are also called multi-start techniques, because they apply local searches to each point in a random sample drawn from the feasible region (Boender et al., 1982b; Rinnooy Kan and Timmer, 1987a, 1987b). However, the multi-start method is inefficient when it performs local searches starting from all sample points, since in that case some local minimizer points are found several times. As local search is the most time-consuming part of the method, it should ideally be invoked no more than once in every region of attraction.
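
For concreteness, the naive scheme can be sketched as follows (illustrative Python, not the MSTART implementation benchmarked later; local_search is an assumed routine returning a minimizer and its function value):

```python
import numpy as np

def multi_start(f, local_search, lower, upper, n_samples, seed=0):
    """Naive multi-start: one local search from every sample point."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_samples):
        x0 = rng.uniform(lower, upper)        # uniform sample from the box X
        x_star, f_star = local_search(f, x0)  # local search from every point
        if f_star < best_f:                   # keep the best minimizer so far
            best_x, best_f = x_star, f_star
    return best_x, best_f
```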

Various improvements have been proposed by different authors in order to reduce the number of local searches (see, e.g., Törn, 1978; Rinnooy Kan and Timmer, 1987a; Guss et al., 1995). The two most important approaches aimed at reducing the number of local searches performed are the clustering methods and the multi level single linkage (MLSL) algorithms.

The basic idea behind clustering methods is to form groups (clusters) around the local minimizers from a uniformly sampled domain and to start as few local searches as possible in each of those groups. In other words, the procedure tries to identify the regions of attraction of the given function.

MLSL methods have been derived from clustering methods (Rinnooy Kan and Timmer, 1987b). In this algorithm, the local search procedure is applied to every sample point, except if there is another sample point within some critical distance which has a lower function value.
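
A sketch of this acceptance rule (with hypothetical names; points and values hold the sample and its function values):

```python
import numpy as np

def mlsl_start_indices(points, values, r_k):
    """Indices of sample points from which MLSL starts a local search:
    a point is skipped if another sample point within the critical
    distance r_k has a lower function value."""
    starts = []
    for i, (x, fx) in enumerate(zip(points, values)):
        dominated = any(
            j != i and fy < fx and np.linalg.norm(x - y) <= r_k
            for j, (y, fy) in enumerate(zip(points, values))
        )
        if not dominated:
            starts.append(i)
    return starts
```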

Random linkage (RL) multi-start algorithms, introduced by Locatelli and Schoen (1999), retain the good convergence properties of MLSL. Uniformly distributed points are generated one by one, and LS is started from each point with a probability given by a nondecreasing function φ(d), where d is the distance from the current sample point to the closest of the previous sample points with a better function value.

The multi-start clustering global optimization method called GLOBAL (Csendes, 1988) was introduced in the 1980s for bound-constrained global optimization problems with black box type objective functions. The algorithm is based on Boender's algorithm (Boender et al., 1982b), and its goal is to find the best local minimizer points that are potentially global. The local search procedure used by GLOBAL was originally either a quasi-Newton procedure with the Davidon–Fletcher–Powell (DFP) update formula (Davidon, 1959) or a random walk type direct search method called UNIRANDI (for details, see Järvi, 1973). The main idea behind quasi-Newton methods is the construction of a sequence of matrices providing improved approximations of the Hessian matrix (or of its inverse) by applying rank-one (or rank-two) update formulas, in order to avoid direct and costly Hessian calculations. The DFP formula was the earliest scheme for constructing the inverse Hessian, and it has theoretical properties that guarantee a superlinear (fast) convergence rate and global convergence under certain conditions. GLOBAL was originally coded in Fortran and C. In several recent comparative studies (e.g., Mongeau et al., 2000; Moles et al., 2003), this method performed quite well in terms of both efficiency and robustness, obtaining the best results in many cases.

Based on the old GLOBAL method, we introduced a new version (Csendes et al., 2008) coded in MATLAB. The algorithm was carefully analyzed and modified in several places to achieve better reliability and efficiency while allowing higher dimensional problems to be solved. In the new version, we use the quasi-Newton local search method with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update instead of the earlier DFP. Numerical experiments (Powell, 1986) showed that the performance of the BFGS formula is superior to that of the DFP formula. All three versions of the algorithm (Fortran, C, and MATLAB) are freely available for academic and nonprofit purposes.1

The aim of the present work is to benchmark the GLOBAL algorithm and to compare its performance with a simple multi-start procedure and with the MATLAB GlobalSearch solver on a testbed which reflects the typical difficulties arising in real-world applications. In the first comparison, the main goal is to examine the benefits of the clustering procedure over the simple multi-start method, while in the second case our target is the comparison of the global phase of the two methods. The remainder of the paper is organized as follows: the GLOBAL method is presented in Section 2 and the test environment in Section 3. The benchmarking on the BBOB 2009 noiseless testbed (Finck et al., 2009a; Hansen et al., 2009a) is done in Section 4; it is based on the unpublished report of Pál et al. (2009). In that section, we also describe the parameters and settings used in the test. The CPU timing experiment is presented in Section 4.2, while the discussion of the results is given in Section 4.3. The comparison of GLOBAL with the simple multi-start procedure is described in Section 4.4.

In Section 5, we compare GLOBAL with the MATLAB GlobalSearch method using the BBOB 2010 test environment. The new parameter settings of the GLOBAL method are presented in Section 5.1, while the GlobalSearch method is overviewed in Section 5.2. Section 5.3 contains the comparison results.

2  Presentation of the GLOBAL Algorithm

The GLOBAL method has a global phase and a local phase. The global phase consists of sampling and clustering, while the local phase is based on local searches. The local minimizer points are found by means of a local search procedure, starting from appropriately chosen points of the sample drawn uniformly within the set of feasibility. In an effort to identify the regions of attraction of the local minimizers, the procedure invokes a clustering algorithm. The role of clustering is to reduce the number of local searches by forming groups of points around the local minimizers from a uniformly sampled domain and to start local searches as few times as possible in each of those groups. Clusters are formed stepwise, starting from a seed point, which may be an unclustered point with the lowest function value or the local minimum found by applying local search to this point. New points are attached to the cluster according to clustering rules.

GLOBAL uses the single linkage (SL) clustering rule (Boender et al., 1982b; Rinnooy Kan and Timmer, 1987a), which is constructed in such a way that the probability that a local search will not be applied to a point that would lead to an undiscovered local minimizer tends to zero as the sample size grows. In this method, the clusters are formed sequentially, and each of them is initiated by a seed point. The distance between two points x and y in the neighborhood of the seed point xs is defined as
\[
d(x, y) = \left[ (x - y)^T H(x_s) (x - y) \right]^{1/2},
\]
where H(xs) is the Hessian of the objective function at the seed point. If xs is a local minimizer, then a good approximation of H(xs) can be obtained by the BFGS method; otherwise, H(xs) can be replaced by the identity matrix. Let C(xs) denote the cluster initiated by the seed point xs. After a cluster C(xs) is initiated, we find an unclustered sample point x such that d(x, y) is minimal, where y ∈ C(xs). This point is then added to C(xs), and the procedure is repeated until this minimal distance exceeds some critical value rk. The critical distance applied in our algorithm is based on the one used in Boender et al. (1982a), which is
\[
r_k = \pi^{-1/2} \left[ \Gamma\!\left(1 + \tfrac{D}{2}\right) |H(x_s)|^{1/2} \, m(X) \left( 1 - \alpha^{1/(kN - 1)} \right) \right]^{1/D},
\]
where Γ is the gamma function, |H(xs)| denotes the determinant of H(xs), m(X) is the Lebesgue measure of the set X, N is the number of points sampled per iteration, k is the number of sampling iterations performed so far, and 0 < α < 1 is a parameter of the clustering procedure.
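
Both formulas translate directly into code. The following sketch is one possible implementation (the names are ours; as noted above, the identity matrix replaces the Hessian when no estimate is available):

```python
from math import lgamma, log, log1p, sqrt, exp, pi
import numpy as np

def seed_distance(x, y, H):
    """d(x, y) = [(x - y)^T H(x_s) (x - y)]^(1/2) near a seed point."""
    diff = x - y
    return np.sqrt(diff @ H @ diff)

def critical_distance(k, N, D, m_X, alpha, H=None):
    """Critical distance r_k of the single linkage clustering rule.

    k: sampling iterations so far, N: sample size per iteration,
    D: dimension, m_X: Lebesgue measure of the box X,
    alpha: clustering parameter in (0, 1),
    H: Hessian approximation at the seed (identity if None)."""
    H = np.eye(D) if H is None else H
    # work in log space for numerical stability
    log_bracket = (lgamma(1 + D / 2.0)
                   + 0.5 * log(np.linalg.det(H))
                   + log(m_X)
                   + log1p(-alpha ** (1.0 / (k * N - 1))))
    return exp(log_bracket / D) / sqrt(pi)
```

The main steps of GLOBAL are summarized in Algorithm 1.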
Algorithm 1: The GLOBAL algorithm.

 1: procedure GLOBAL(f, X)
 2:     X* ← ∅, X(1) ← ∅
 3:     k ← 0
 4:     repeat
 5:         k ← k + 1; generate N points with uniform distribution on X
 6:         construct the reduced sample from the best points of the accumulated sample
 7:         apply the clustering procedure to the reduced sample, using first the elements of X*, then those of X(1), as seed points
 8:         while the reduced sample contains unclustered points do
 9:             let x be the unclustered point with the lowest function value
10:             x* ← LS(x)
11:             add x to the cluster C(x*)
12:             if x* ∉ X* then
13:                 X* ← X* ∪ {x*}
14:                 xs ← x*
15:             else
16:                 X(1) ← X(1) ∪ {x}
17:                 xs ← x
18:             end if
19:             apply the clustering procedure to the unclustered reduced sample points with seed point xs
20:         end while
21:     until no new local minimizer was found in the current iteration
22:     return the smallest local minimum value found

In line 2 of Algorithm 1, the X* and X(1) sets are initialized, where X* is a set containing the local minimizer points found so far, while X(1) is a set containing sample points to which the local search procedure was applied unsuccessfully in the sense that an already known local minimizer was found again. Moreover, the set X(1) further reduces the number of local searches, since its elements are used as seed points in the clustering. The number of new drawings is denoted by k and is set initially to 0. The algorithm contains a main iteration loop in the steps from line 4 to line 20 that is repeated until some global stopping rule is satisfied. In line 5, N points are generated uniformly on X. In line 6, a reduced sample is constructed by taking those points of the accumulated sample that have the lowest function values; the accumulated sample contains all points sampled during the iterations. A clustering procedure is then applied to the reduced sample (line 7). The elements of X* are first chosen as seed points, followed by the elements of X(1). For a seed point xs, we add to the cluster initiated by xs all unclustered reduced sample points that are within the critical distance rk. In the first iteration, X* and X(1) are empty, and thus no clustering takes place.

Between lines 8 and 20, we iterate over the unclustered points of the reduced sample and apply the local search procedure to them to find a local minimizer point x*. The starting point is then added to the cluster C(x*) (line 11). If x* is a new local minimizer, then we add it to X* (line 13) and choose it as the next seed point (line 14); otherwise, we add the starting point to X(1) (line 16) and choose that point as the next seed point (line 17). In line 19, we again apply the clustering procedure to the unclustered reduced sample points, attaching those within the critical distance to the cluster initiated by the seed point xs. In line 22, the smallest local minimum value found is returned.
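
The following Python sketch mirrors the structure of Algorithm 1; the helper interfaces (sample_fn, cluster_fn, LS) and the size of the reduced sample are simplifying assumptions, not the actual GLOBAL implementation:

```python
import numpy as np

def global_sketch(f, LS, sample_fn, cluster_fn, N=300, max_iter=100):
    """Structural sketch of Algorithm 1.

    LS(x) -> local minimizer of f; sample_fn(N) -> N uniform points in X;
    cluster_fn(points, seeds) -> points that remain unclustered."""
    X_star, X_1 = [], []     # line 2: known minimizers / unsuccessful start points
    sample = []
    for k in range(1, max_iter + 1):             # lines 4-20: main loop
        sample += sample_fn(N)                   # line 5: extend accumulated sample
        reduced = sorted(sample, key=f)[:2 * k]  # line 6: best points (size assumed)
        unclustered = cluster_fn(reduced, X_star + X_1)  # line 7
        found_new = False
        while unclustered:                       # lines 8-20
            x = min(unclustered, key=f)          # best unclustered point
            x_star = LS(x)                       # line 10: local search
            unclustered = [p for p in unclustered if p is not x]  # line 11
            if not any(np.allclose(x_star, m) for m in X_star):
                X_star.append(x_star)            # lines 13-14: new minimizer,
                seed, found_new = x_star, True   # used as the next seed
            else:
                X_1.append(x)                    # lines 16-17: known minimizer
                seed = x                         # found again, x becomes the seed
            unclustered = cluster_fn(unclustered, [seed])  # line 19
        if not found_new:                        # line 21: principal stopping rule
            break
    return min(X_star, key=f)                    # line 22: best minimizer found
```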

One of the key questions in applying a stochastic method is when to stop it. Several approaches have been proposed to design a proper stopping rule, based on different assumptions about the properties of possible objective functions f and on stochastic techniques.

A Bayesian stopping rule for the multi-start algorithm was introduced by Zieliński (1981) and further developed later (Boender and Zieliński, 1982; Boender and Rinnooy Kan, 1987, 1991; Betrò and Schoen, 1992, among others).

Most Bayesian stopping rules for multi-start techniques are based on the knowledge collected about the size of the sample and the number of local minimizers detected. In our GLOBAL algorithm, we stop the search (line 21) when no new local minimizer point was found in the current iteration step. Apart from this principal stopping criterion, GLOBAL contains further stopping rules in order to terminate the optimization process when it takes too long: the first stops the algorithm when an upper limit on the number of iterations is exceeded, while the second stops the search when the number of local minimizer points found exceeds a prescribed value.

3  The Test Environment Description

In this paper, the numerical experiments are conducted on a testbed composed of 24 noiseless test functions (Finck et al., 2009a; Hansen et al., 2009a). These functions have been constructed to reflect real-world application difficulties and are split into several groups: separable functions (f1–f5), functions with low or moderate conditioning (f6–f9), unimodal functions with high conditioning (f10–f14), multimodal functions with adequate global structure (f15–f19), and multimodal functions with weak global structure (f20–f24). All functions are scalable with the dimension; in our tests we used 2, 3, 5, 10, and 20 as dimensions. All functions are defined over ℝ^D, while the actual search domain is [−5, 5]^D. Every function has an artificially chosen optimal function value; consequently, different instances can be generated for each function. Each function is tested over five different instances, and the experiments are repeated three times for each instance; the performance of the algorithm is evaluated over all 15 trials. The success criterion of a run is to reach the target value ft = fopt + Δf, where fopt is the (pre-known) optimal function value and Δf is the precision to reach.

In order to quantify the search cost of an algorithm, a performance measure should be provided. The main performance measure adopted in this paper (Hansen, Auger, et al., 2009; Price, 1997) is the expected runtime (ERT). The ERT depends on a given target function value ft and is computed as the number of function evaluations spent while the best function value had not yet reached ft, summed over all relevant trials and divided by the number of trials that actually reached ft. Formally,
\[
\mathrm{ERT} = \mathrm{RT}_S + \frac{1 - p_S}{p_S}\, \mathrm{RT}_{US},
\]
where pS is the probability of success, that is, the ratio of the number of successful runs to the total number of runs, and RTS and RTUS denote the average number of function evaluations in successful and unsuccessful trials, respectively.
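
Computed from raw run data, the formula can be implemented as follows (a minimal sketch with our own variable names):

```python
def expected_runtime(evals, success):
    """ERT = RT_S + ((1 - p_S) / p_S) * RT_US.

    evals:   function evaluations per trial (counted until f_t was reached,
             or until termination for unsuccessful trials)
    success: boolean per trial, True if the trial reached f_t"""
    n_succ = sum(success)
    if n_succ == 0:
        return float('inf')          # no successful trial: ERT is undefined
    p_s = n_succ / len(success)
    rt_s = sum(e for e, s in zip(evals, success) if s) / n_succ
    n_us = len(success) - n_succ
    rt_us = sum(e for e, s in zip(evals, success) if not s) / n_us if n_us else 0.0
    return rt_s + (1 - p_s) / p_s * rt_us
```

This form is algebraically identical to summing the evaluations over all trials and dividing by the number of successful ones.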

The results are also presented using the empirical cumulative distribution function (ECDF) of the distribution of ERT divided by D to reach a given target function value. This shows the empirical cumulated probability of success on the problems considered depending on the allocated budget. For a more detailed environment and experimental description, see Hansen, Auger, et al. (2009) and Hansen, Auger, Finck, et al. (2010).
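
As an illustration, the empirical distribution can be obtained as follows (a sketch; runtimes holds the ERT/D value of each problem, with infinity for unsolved problems):

```python
import numpy as np

def ecdf(runtimes, budgets):
    """Fraction of problems solved within each budget (x axis of the ECDF)."""
    rt = np.asarray(runtimes, dtype=float)
    return [(rt <= b).mean() for b in budgets]

# e.g., proportions over budgets 10^0 .. 10^6 function evaluations per D:
# ecdf([120.0, 3.4e3, np.inf], np.logspace(0, 6, 50))
```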

4  Benchmarking GLOBAL on the BBOB 2009 Noiseless Testbed

4.1  Parameter Tuning and Setup

GLOBAL has six parameters to set: the number of sample points to be generated within an iteration step, the number of best points to be selected for the reduced sample, the stopping criterion parameter for the local search, the maximum number of function evaluations allowed for local search, the maximum number of local minima to be found, and the type of local method to be used. All these parameters have a default value and usually it is enough to change only the first three of them.

In all dimensions and for all functions, we used 300 sample points, and the two best points were kept for the reduced sample. In 2, 3, and 5 dimensions, we used the Nelder–Mead simplex method (Nelder and Mead, 1965), as implemented by Kelley (1999), as the local search, with 10^-8 as the termination tolerance and 5,000 as the maximum number of function evaluations. In 10 and 20 dimensions, for the f3, f4, f7, f16, and f23 functions, we used the previous settings with a local search tolerance of 10^-9. Finally, for the remaining functions, we used MATLAB's fminunc function as the local search method, with the BFGS update formula, 10,000 as the maximum number of function evaluations, and 10^-9 as the termination tolerance.

As can be observed, we used two different settings during parameter tuning. In lower dimensions we used the Nelder–Mead method, while in higher dimensions the BFGS local search was applied to all functions except for five of them. Although this kind of a priori parameter setting is not recommended in general, the two most important parameters of GLOBAL (the number of sample points and the number of best points selected) were identical over the entire testbed. The different settings can be characterized by the crafting effort (Price, 1997; Hoos and Stützle, 1998), computed for each dimensionality in the following way:
\[
\mathrm{CrE} = -\sum_{k=1}^{K} \frac{n_k}{n} \ln\frac{n_k}{n},
\]
where n = Σ_{k=1}^{K} n_k is the number of functions in the testbed, n_k is the number of functions on which the parameter setting with index k was used, and K is the number of different parameter settings. The crafting effort is CrE = 0 for dimensions 2, 3, and 5, while for D = 10 and 20 it can be calculated as CrE = −(5/24 ln(5/24) + 19/24 ln(19/24)) ≈ 0.51.
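
The value is easy to verify numerically (the counts 5 and 19 correspond to the simplex and the BFGS settings over the n = 24 functions):

```python
from math import log

def crafting_effort(counts):
    """CrE = -sum_k (n_k/n) ln(n_k/n) = sum_k (n_k/n) ln(n/n_k)."""
    n = sum(counts)
    return sum(n_k / n * log(n / n_k) for n_k in counts)

print(crafting_effort([5, 19]))  # two settings (D = 10, 20): approx. 0.51
print(crafting_effort([24]))     # a single setting: CrE = 0.0
```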

4.2  CPU Timing Experiment

For the timing experiment, the GLOBAL algorithm was run on the test function f8 and restarted until at least 30 s had passed (according to Figure 2 in Hansen, Auger, et al., 2009). These experiments were conducted on a computer with an Intel Core 2 Duo 2.00 GHz processor under Windows XP, using MATLAB version 7.6.0.324. We completed two experiments, using the BFGS and the simplex local search methods; the other algorithm parameters were identical. In the first case (BFGS) the results were s, while in the second case (Nelder–Mead simplex) they were s per function evaluation in dimensions 2, 3, 5, 10, 20, and 40, respectively. The CPU time of a function evaluation with the BFGS search grows sublinearly with the dimension. The slow increase in the CPU time is due to the initialization process: in lower dimensions there are more restarts (before reaching 30 s), and hence more initializations. We expect that the CPU time per function evaluation would increase once the dimensionality is large enough. For the Nelder–Mead simplex method, the CPU time increases linearly with the dimension up to the 20-dimensional problems, while for the 40-dimensional functions a rapid increase can be observed.

4.3  Results and Discussion

Results from experiments according to Hansen, Auger, et al. (2009) on the benchmark functions (Finck et al., 2009a; Hansen et al., 2009a) are presented in Figure 1 and Tables 2 and 3.

Figure 1:

Expected running time (ERT) divided by dimension versus dimension, in log-log presentation. Shown are ERT values for different target precisions Δf = 10^k, where the exponent k is given in the legend of f1 and f24. Plus symbols (+) show the median number of f evaluations for the best target value reached. Crosses (×) indicate the total number of f evaluations (#FEs) divided by the number of trials. Numbers above the ERT symbols indicate the number of successful trials. The y axis annotations are decimal logarithms.


Table 1:
CPU time per function evaluation in 10^-4 s and the corresponding number of restarts (in parentheses).

Algorithm      2D         3D         5D        10D       20D       40D      80D
GLOBAL         6.8 (215)  6.1 (147)  5.3 (87)  4.3 (43)  3.8 (17)  3.5 (5)  3.6 (2)
GlobalSearch   9.8 (7)    9.5 (8)    7.9 (7)   5.8 (5)   4.6 (2)   3.8 (1)  4.1 (1)
Table 2:
ERT and half-interquantile range (90% – 10%), divided by the best ERT measured during BBOB 2009, for different Δf values, for functions f1–f24 in 5D.
Δf      1e+1    1e+0    1e-1    1e-3    1e-5    1e-7    #succ
f1 11 12 12 12 12 12 15/15 
GLOBAL 6.8 26 28 32 35 39 13/15 
MSTART 2.3 4.7 6.9 11 15 19 15/15 
f2 83 87 88 90 92 94 15/15 
GLOBAL 6.3 6.9 7.3 7.8 8.2 8.5 15/15 
MSTART 6.6 8.3 8.7 9.2 10 10 15/15 
f3 716 1,622 1,637 1,646 1,650 1,654 15/15 
GLOBAL 3.3     2,600 0/15 
MSTART 4.6 278    1.0e5 0/15 
f4 809 1,633 1,688 1,817 1,886 1,903 15/15 
GLOBAL 8.3     3,200 0/15 
MSTART 11 419    1.0e5 0/15 
f5 10 10 10 10 10 10 15/15 
GLOBAL 32 33 34 34 34 34 15/15 
MSTART 3.0 4.1 4.2 4.2 4.2 4.2 15/15 
f6 114 214 281 580 1,038 1,332 15/15 
GLOBAL 2.9 2.1 2.0 2.2 3.6 35 1/15 
MSTART 2.4 2.7 3.1 4.3 5.7 27 6/15 
f7 24 324 1,171 1,572 1,572 1,597 15/15 
GLOBAL 12 5.7 10   1,900 0/15 
MSTART 14 10 27 155 155 152 5/15 
f8 73 273 336 391 410 422 15/15 
GLOBAL 5.0 2.1 2.1 2.1 2.1 2.2 15/15 
MSTART 1.7 2.4 2.3 2.3 2.3 2.3 15/15 
f9 35 127 214 300 335 369 15/15 
GLOBAL 11 4.6 3.2 2.8 2.7 2.7 13/15 
MSTART 3.2 3.9 3.0 2.5 2.4 2.3 15/15 
f10 349 500 574 626 829 880 15/15 
GLOBAL 1.9 1.6 1.8 2.0 1.7 1.7 15/15 
MSTART 1.3 1.3 1.3 1.3 1.1 1.1 15/15 
f11 143 202 763 1,177 1,467 1,673 15/15 
GLOBAL 4.0 5.5 3.5 5.0 5.0 8.5 8/15 
MSTART 5.3 6.2 2.9 2.7 2.3 2.1 15/15 
f12 108 268 371 461 1,303 1,494 15/15 
GLOBAL 4.6 2.7 2.4 5.0 3.1 3.4 6/15 
MSTART 4.2 3.4 3.2 6.2 5.2 7.4 11/15 
f13 132 195 250 1,310 1,752 2,255 15/15 
GLOBAL 4.2 6.1 11   1,300 0/15 
MSTART 2.0 2.8 3.3 75  1.0e5 0/15 
f14 10 41 58 139 251 476 15/15 
GLOBAL 2.2 7.7 5.9 3.3 3.6 1,300 0/15 
MSTART 1.9 1.3 1.6 1.5 1.6 60 1/15 
f15 511 9,310 19,369 20,073 20,769 21,359 14/15 
GLOBAL 6.0     2,700 0/15 
MSTART 12 47 73 71 68 67 1/15 
f16 120 612 2,663 10,449 11,644 12,095 15/15 
GLOBAL 1.4 1 1 3.5 6.8 6.6 0/15 
MSTART 0.78 2.5 7.5 19 57 1.0e5 0/15 
f17 5.2 215 899 3,669 6,351 7,934 15/15 
GLOBAL 3.5 5.0    3,100 0/15 
MSTART 53 98 734   1.0e5 0/15 
f18 103 378 3,968 9,280 10,905 12,469 15/15 
GLOBAL 3.9 15 14   2,600 0/15 
MSTART 41 162    1.0e5 0/15 
f19 242 1.20e5 1.21e5 1.22e5 15/15 
GLOBAL 46 7,329    4,300 0/15 
MSTART 19 1,651 179   1.0e5 0/15 
f20 16 851 38,111 54,470 54,861 55,313 14/15 
GLOBAL 17 18    2,300 0/15 
MSTART 1.8 5.6    1.0e5 0/15 
f21 41 1,157 1,674 1,705 1,729 1,757 14/15 
GLOBAL 2.3 1.1 1 1 1 1 14/15 
MSTART 4.6 2.3 2.8 2.8 2.7 2.7 15/15 
f22 71 386 938 1,008 1,040 1,068 14/15 
GLOBAL 3.6 1.3 1 1 1 1 14/15 
MSTART 5.3 5.9 3.1 2.9 2.9 2.9 15/15 
f23 3.0 518 14,249 31,654 33,030 34,256 15/15 
GLOBAL 1.6 1.0 4.8   4,900 0/15 
MSTART 2.1 0.57 2.6   1.0e5 0/15 
f24 1,622 2.16e5 6.36e6 9.62e6 1.28e7 1.28e7 3/15 
GLOBAL 4.2     6,400 0/15 
MSTART 11     1.0e5 0/15 

For notation, see text.

Table 3:
ERT and half-interquantile range (90% – 10%), divided by the best ERT measured during BBOB 2009, for different Δf values, for functions f1–f24 in 20D.
Δf      1e+1    1e+0    1e-1    1e-3    1e-5    1e-7    #succ
f1 43 43 43 43 43 43 15/15 
GLOBAL 8.0 8.0 8.0 8.0 8.0 8.0 15/15 
MSTART 15/15 
f2 385 386 387 390 391 393 15/15 
GLOBAL 18 23 26 33 51 63 13/15 
MSTART 19 31 38 69 113 123 12/15 
f3 5,066 7,626 7,635 7,643 7,646 7,651 15/15 
GLOBAL      5.0e4 0/15 
MSTART      1.0e5 0/15 
f4 4,722 7,628 7,666 7,700 7,758 1.41e5 9/15 
GLOBAL      7.8e4 0/15 
MSTART      1.0e5 0/15 
f5 41 41 41 41 41 41 15/15 
GLOBAL 10 11 11 11 11 11 15/15 
MSTART 2.2 2.9 3.1 3.1 3.1 3.1 15/15 
f6 1,296 2,343 3,413 5,220 6,728 8,409 15/15 
GLOBAL 3.6 3.6 6.1   4.1e4 0/15 
MSTART 3.4 4.5 6.3   1.0e5 0/15 
f7 1,351 4,274 9,503 16,524 16,524 16,969 15/15 
GLOBAL      1.4e4 0/15 
MSTART      1.0e5 0/15 
f8 2,039 3,871 4,040 4,219 4,371 4,484 15/15 
GLOBAL 1.6 1.2 1.2 1.2 1.2 1.2 15/15 
MSTART 1.9 1.5 1.5 1.5 1.4 1.4 15/15 
f9 1,716 3,102 3,277 3,455 3,594 3,727 15/15 
GLOBAL 1.7 1.7 1.6 1.6 1.6 1.5 15/15 
MSTART 2.1 1.5 1.5 1.5 1.4 1.4 15/15 
f10 7,413 8,661 10,735 14,920 17,073 17,476 15/15 
GLOBAL 1 1.1 1.1 2.0 5.9 4.1e4 0/15 
MSTART 0.98 1.2 1.5 6.3 27 1.1e5 0/15 
f11 1,002 2,228 6,278 9,762 12,285 14,831 15/15 
GLOBAL 1.2 1.0 1   2.7e4 0/15 
MSTART 0.87 0.78 1.6 72  1.0e5 0/15 
f12 1,042 1,938 2,740 4,140 12,407 13,827 15/15 
GLOBAL 1 1 1 1 1.1 3.4 0/15 
MSTART 1.4 1.7 1.5