Abstract

The migration interval is one of the fundamental parameters governing the dynamic behaviour of island models. Yet there is little understanding of how this parameter affects performance, and of how to set it optimally for the problem at hand. We propose schemes for adapting the migration interval according to whether fitness improvements have been found. As long as no improvement is found, the migration interval is increased to minimise communication. Once the best fitness has improved, the migration interval is decreased to spread new best solutions more quickly. We provide a method for obtaining upper bounds on the expected running time and the communication effort, defined as the expected number of migrants sent. Example applications of this method to common example functions show that our adaptive schemes are able to compete with, or even outperform, the optimal fixed choice of the migration interval, with regard to running time and communication effort.

1  Introduction

Evolutionary algorithms (EAs) have given rise to many parallel variants (Luque and Alba, 2011; Tomassini, 2005), fuelled by the rapidly increasing number of CPU cores and the ready availability of computation power through GPUs and cloud computing. Parallelization provides a cost-effective approach to solving problems in real time and to tackling large-scale problems.

There are many variants of parallel evolutionary algorithms, from parallelising function evaluations on multiple processors to fine-grained models such as cellular evolutionary algorithms and coarse-grained models such as island models (Tomassini, 2005; Luque and Alba, 2011). In the latter approach, multiple populations evolve independently for a certain period of time. At regular intervals, determined by a fixed parameter called the migration interval, individuals migrate between these islands to coordinate the searches on different islands. Communication takes place according to a spatial structure, a topology connecting populations. Common topologies include rings, two-dimensional grids or toroids, hypercubes, and complete graphs with all possible connections.

Island models are popular optimisers for several reasons:

  • Multiple communicating populations can make the same progress as a single population in a fraction of the time, speeding up computation.

  • Smaller populations can be simulated faster than large populations, effectively reducing the execution time on each processor (Alba, 2002).

  • Periodic communication only requires small bandwidth if the migration interval is not very small, leading to low communication costs.

  • Solution quality is improved as different populations can explore different regions of the search space.

The usefulness of parallel populations has been demonstrated in thousands of successful applications ranging from language tagging, circuit design, scheduling and planning to bioinformatics (Luque and Alba, 2011; Alba et al., 2013).

However, designing an effective parallel evolutionary algorithm can be challenging, as the method and amount of communication need to be tuned carefully. Too frequent communication leads to high communication costs, and it can compromise exploration. Too little communication means that the populations become too isolated and unable to coordinate their searches effectively. There is agreement that even the effect of the most fundamental parameters on performance is not well understood (Luque and Alba, 2011; Alba et al., 2013).

We make a contribution toward finding good values for the migration interval, the parameter describing the frequency of migration. We propose adaptive schemes that adjust the migration interval depending on whether islands have managed to find improvements during the last migration interval or not. The goal is to reduce communication while not compromising the exploitation of good solutions. The main idea of our schemes is that if an island has managed to improve its current best fitness, migration should be intensified to spread this solution to other islands. Otherwise, islands decrease the frequency of migration to avoid large communication costs.

Two different adaptive schemes are proposed, inspired by previous work (Lässig and Sudholt, 2011a). In both of them islands have individual migration intervals that are adapted throughout the run. In Scheme A, if an island has not improved its current best fitness during the last migration interval, its migration interval is doubled. Once an improvement is found, the migration interval is set to 1 to communicate this new solution quickly. In Scheme B, an island likewise doubles its migration interval when no improvement was found, whereas when an improvement is found, the migration interval is halved at the end of the current migration period.
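For concreteness, the two update rules can be sketched in Python. This is our own illustrative sketch, not code from the paper: the function and parameter names are ours, and the end-of-period timing of Scheme B's halving is left to the caller.

```python
def update_interval(tau, improved, scheme="A"):
    """One adaptation step for an island's migration interval tau.

    Scheme A: reset tau to 1 after an improvement, double it otherwise.
    Scheme B: halve tau after an improvement (the caller applies this at
    the end of the current migration period), double it otherwise.
    """
    if improved:
        return 1 if scheme == "A" else max(1, tau // 2)
    return 2 * tau
```

For example, an island with interval 8 that finds an improvement moves to interval 1 under Scheme A and to interval 4 under Scheme B; with no improvement, both schemes move to interval 16.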

We show that doubling the migration interval guarantees for elitist EAs that the number of migrations from an island is logarithmic in the time this island spends on a certain fitness level, for any value of the current best fitness.
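This logarithmic bound is easy to check numerically: an island that doubles its interval after every unsuccessful migration period performs at most log2(t) + 1 migrations within t improvement-free generations. The snippet below is a sketch under our own naming.

```python
def migrations_within(t):
    """Number of migrations performed by an island that starts with
    interval 1, doubles the interval after every migration, and finds
    no improvement for t generations."""
    count, tau, elapsed = 0, 1, 0
    while elapsed + tau <= t:  # next migration still falls within t generations
        elapsed += tau
        count += 1
        tau *= 2
    return count
```

For t = 1000 this gives 9 migrations, below the bound floor(log2(1000)) + 1 = 10.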

We contribute a rigorous analytical framework that yields upper bounds on the expected optimisation time and the expected communication effort, defined as the total number of migrants sent. This is described for fixed migration intervals in Section 3, Scheme A in Section 4, and Scheme B in Section 5. This framework can be applied to a range of evolutionary algorithms; we demonstrate its application for a simple island model called parallel (1+1) EA  (Lässig and Sudholt, 2014b). Our adaptive schemes are then compared in Section 6 against the best fixed values of the migration interval for classical test problems. The results reveal that our adaptive schemes are able to match or even outperform the best fixed migration intervals with regard to upper bounds on the expected parallel time and the expected communication effort.

Note that our methodology only provides upper bounds on the expected parallel times and expected communication efforts. Hence performance comparisons between migration schemes or topologies are based on comparisons of upper bounds. A better upper bound translates to a better expectation in case the respective upper bounds are tight in an asymptotic sense. For the application to the parallel (1+1) EA and two of our test problems, OneMax and LO, we know from general lower bounds for all mutation-based evolutionary algorithms (Sudholt, 2013) that all bounds for expected parallel times are tight. For fixed migration intervals, the stated bounds on the expected communication effort are also tight (see Theorem 1 in Section 3). For other problems or problem classes, we can only compare upper bounds.

This paper is based on an extended abstract published at GECCO 2014 (Mambrini and Sudholt, 2014), where some proofs were omitted. The present manuscript contains all proofs and several extensions, most notably a discussion in Section 7 about the balance between exploration and exploitation in the light of our adaptive schemes.

1.1  Related Work

This paper is in line with recent theoretical research on the running time of parallel EAs. Lässig and Sudholt (2010) presented a method for analysing speedups in island models, with applications to a range of combinatorial problems (Lässig and Sudholt, 2011b, 2014a). Neumann et al. (2011) considered the benefit of using crossover during migration for artificial problems and instances of the vertex cover problem. Mambrini et al. (2012) studied the running time and communication effort of homogeneous and heterogeneous island models for finding good solutions for the NP-hard set cover problem.

Different migration policies were compared by Araujo and Merelo Guervós (2011). Bravo et al. (2012) studied the effect of the migration interval when tackling dynamic optimization problems.

Skolicki and De Jong (2005) investigated the impact of the migration interval and the number of migrants on performance. They found that the dynamic behaviour of the algorithm is not just the result of the number of exchanged individuals but of several interacting phenomena. For frequent migrations, the effect of varying the migration interval is much stronger than that of varying the number of migrants. Performance degrades when the number of migrants approaches the population size of the islands, and it may also degrade for large migration intervals if the algorithm stops prematurely.

Alba and Luque (2005) analysed growth curves and takeover times based on migration intervals and migration topologies, showing how quickly good solutions spread in an island model. Theoretical analyses of this takeover time were presented by Rudolph (2006).

Lässig and Sudholt (2013) presented a theoretical analysis and a problem where island models excel over both panmictic populations as well as independent runs. This requires a delicate choice of the migration interval, and performance degrades drastically when suboptimal parameter values are being used. This again emphasises the importance of this parameter for the performance of island models.

Hong et al. (2007) and Lin et al. (2012) presented a fitness-based adaptive migration scheme. Each island compares its increase of its best fitness over the last migration interval with the same quantity from the migration interval before. If the new fitness increase is larger than the old one, the migration interval is increased by some constant value. Otherwise, it is decreased by the same constant value. A preliminary experimental study on random 0/1 knapsack problem instances showed that the adaptive scheme can lead to a better final fitness (Hong et al., 2007). Our scheme differs in that we adapt the migration interval in the opposite direction: we decrease it after good fitness gains and increase it otherwise. Furthermore, our multiplicative changes to the migration interval are more drastic than their additive ones.

Osorio et al. (2011; 2013) presented adaptive schemes for the migration interval, which aim for convergence at the end of the run (for runs of fixed length). The migration interval is set according to growth curves of good individuals and the remaining number of generations; migration intensifies toward the end of a run. They obtained competitive performance results, compared to optimal fixed parameters, for MAX-SAT instances (Osorio et al., 2013). Our perspective is different, as we do not optimise for fixed-length runs.

Finally, Lässig and Sudholt (2011a) presented schemes for adapting the number of islands during a run of an island model. The same schemes also apply to offspring populations in a (1+λ) EA as a special case. Scheme A doubles the number of islands in case no improvement has been found in one generation; otherwise, the number of islands is reduced to one. Scheme B also doubles the number of islands when no improvement is found, and halves it otherwise. Both schemes achieve optimal or near-optimal parallel running times while not increasing the total number of function evaluations by more than a constant factor. Our schemes for adapting the migration interval are inspired by this work.

2  Preliminaries

We define the parallel EAs considered in this work, which contain our adaptive schemes. Our analytical framework is applicable to all elitist EAs: EAs that do not lose their current best solution. We define our schemes for maximisation problems.

Scheme A (see Algorithm 1) maintains an individual migration interval for each island. As soon as the current best fitness on an island has improved through evolution, the island communicates this solution to its neighbouring islands. In this case, or when the best fitness increases after immigration, the migration interval of that island drops to 1. This implies that copies of a high-fitness immigrant are propagated to all neighbouring islands in the next generation. If no improvement of the current best fitness is found by the end of the current migration period, the migration interval doubles.

formula

For the purpose of a theoretical analysis, we assume that all islands run in synchronicity: the tth generation is executed on all islands at the same time. However, this is not a restriction of our adaptive scheme, as it can be applied in asynchronous parallel architectures using message passing for implementing migration.

formula

Inspired by Lässig and Sudholt (2011a), we also consider a Scheme B (see Algorithm 2), where the migration interval is halved (instead of being set to 1) once an improvement has been detected. In contrast to Scheme A, this change is not implemented immediately but only after the current migration period has ended. A flag is used to indicate whether a success on island i has occurred in the current migration period. The advantage of Scheme B is that it uses less communication than Scheme A, and if there is a good region in the parameter space of the migration interval, our hope is that it will maintain a good parameter value in that region over time.

We provide general methods for analysing the expected parallel time and the expected communication effort for arbitrary elitist EAs that migrate copies of selected individuals (the original individuals remain on their island). The parallel time is defined as the number of generations until a global optimum is found. The communication effort is defined as the total number of individuals migrated until a global optimum is found. For simplicity and ease of presentation, we assume that each migration only transfers one individual; if each migration transfers several individuals, the communication effort has to be multiplied by that number.

To illustrate this approach, we consider one simple algorithm in more detail: following Lässig and Sudholt (2011a), the parallel (1+1) EA is a special case where each island runs a (1+1) EA.
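To make the two performance measures concrete, here is a minimal, self-contained simulation of such an island model: a parallel (1+1) EA on OneMax with a fixed migration interval, written under our own simplifications (synchronous generations, one migrant per directed edge). It returns both the parallel time and the communication effort.

```python
import random

def parallel_one_plus_one_ea(n, islands, tau, topology, max_gens=10**6):
    """Parallel (1+1) EA on OneMax. `topology` maps each island index to
    its list of neighbour indices; every tau generations each island
    sends a copy of its individual along each outgoing edge. Returns
    (parallel time, communication effort)."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    fit = [sum(x) for x in pop]
    comm = 0
    for gen in range(1, max_gens + 1):
        for i in range(islands):  # one (1+1) EA step per island
            child = [b ^ (random.random() < 1 / n) for b in pop[i]]
            if sum(child) >= fit[i]:
                pop[i], fit[i] = child, sum(child)
        if gen % tau == 0:  # periodic migration of current best individuals
            snap = [(pop[i][:], fit[i]) for i in range(islands)]
            for i in range(islands):
                for j in topology[i]:
                    comm += 1
                    if snap[i][1] > fit[j]:
                        pop[j], fit[j] = snap[i][0][:], snap[i][1]
        if max(fit) == n:
            return gen, comm
    return max_gens, comm
```

On a unidirectional ring, the returned communication effort equals the number of directed edges times the number of completed migration periods, matching the counting argument behind Theorem 1 below.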

In terms of communication topologies, for Scheme A we consider general graphs as well as the following common special cases. A unidirectional ring is a ring with all edges directed the same way. A grid graph contains undirected edges between vertices arranged on a two-dimensional grid. A torus can be regarded as a grid whose edges wrap around horizontally and vertically. A hypercube graph of dimension d contains 2^d vertices; each vertex has a d-bit label, and two vertices are neighboured if and only if their labels differ in exactly one bit. The complete graph contains all possible edges. For Scheme B we consider only complete topologies. Notice that in this situation the individual migration intervals of the islands will all be equal.

The diameter of a graph is defined as the largest number of edges on a shortest path between any two vertices. Among the above topologies, the unidirectional ring has the largest diameter: the number of vertices minus one. The diameter of a square grid or torus is of the order of its side length. The diameter of a d-dimensional hypercube is d, and that of a complete topology is 1.
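These diameters can be verified with a short breadth-first search; the helper below is our own illustration.

```python
from collections import deque

def diameter(adj):
    """Largest number of edges on a shortest path between any two
    vertices of a (possibly directed) graph {vertex: [neighbours]}."""
    def eccentricity(s):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# Unidirectional ring on 8 vertices and the 3-dimensional hypercube
ring = {i: [(i + 1) % 8] for i in range(8)}
cube = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
```

Here diameter(ring) is 7 (the number of vertices minus one) and diameter(cube) is 3 (the dimension), as stated above.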

3  Fixed Migration Intervals

In order to compare our adaptive schemes against fixed migration intervals, we first need to investigate the latter. For fixed migration intervals, every full migration period leads to exactly one migration. This simple argument shows that the parallel time and the communication effort are related as follows.

Theorem 1:
Consider an island model with an arbitrary communication topology and a fixed migration interval. Then the communication effort is related to the parallel optimisation time as follows:
formula

In order to bound the (expected) communication effort from above or below, it is therefore sufficient to bound the (expected) parallel time.

Lässig and Sudholt (2010; 2013) presented general upper bounds for the parallel optimisation time of island models with different topologies. Their method is based on the so-called fitness level method, also known as fitness-based partitions (Wegener, 2002; Lehre, 2011).

Throughout this work, we use a special case of this method. Without loss of generality, consider a problem with fitness values 1, …, m. Consider fitness level sets A1, …, Am such that Ai contains all points with fitness i. In particular, Am contains all global optima. We further assume that if the current best individual of a population is in Ai, there is a lower bound si on the probability of finding a higher fitness level in one generation through the variation operators used in the algorithm (e.g., mutation, recombination). It is easy to show that the sum of the values 1/si over all nonoptimal fitness levels is then an upper bound for the expected running time of an elitist EA.
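As a concrete instance, this fitness-level bound, the sum of the reciprocals 1/si over all nonoptimal levels, can be evaluated numerically. The sketch below uses the standard (1+1) EA success probabilities for OneMax, si = (n − i) · 1/n · (1 − 1/n)^(n−1); it is our own worked example, not taken from the paper.

```python
import math

def onemax_fitness_level_bound(n):
    """Fitness-level upper bound sum(1 / s_i) on the expected running
    time of the (1+1) EA on OneMax, using the success probabilities
    s_i = (n - i) * (1/n) * (1 - 1/n)**(n - 1)."""
    q = (1 / n) * (1 - 1 / n) ** (n - 1)  # prob. of flipping one specific bit only
    return sum(1 / ((n - i) * q) for i in range(n))
```

Since (1 − 1/n)^(n−1) ≥ 1/e, the bound is at most e·n·H_n = O(n log n), with H_n the n-th harmonic number, recovering the classical result.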

Lässig and Sudholt (2010; 2011b) showed how upper bounds on the parallel optimisation time can be derived from the success probabilities si. They considered migration in every generation (Lässig and Sudholt, 2011b) as well as probabilistic migration, where every island independently decides for each neighbouring island whether migration occurs, and the probability of a migration is a fixed parameter p (Lässig and Sudholt, 2013). The following theorem is an adaptation of the latter, valid for periodic migration with a fixed migration interval. The results for the expected communication effort on a topology with edge set E follow from multiplying the expected parallel time by the number of edges divided by the migration interval, as this term reflects the average number of migrated individuals across the topology in one generation. The upper bounds on the expected parallel time can be derived as in Lässig and Sudholt (2013).

Theorem 2:
Consider an island model where each island runs an elitist EA. At the end of every migration period, each island sends a copy of its best individual to all neighbouring islands, and each island incorporates the best out of its own individuals and its immigrants. For fitness level sets as above, Ai containing all points of the ith fitness value, let si be a lower bound on the probability that in one generation an island in Ai finds a point in a higher fitness level. Then the expected parallel optimisation time and the expected communication effort are at most
formula
for every unidirectional ring,
formula
for an undirected square grid or torus,
formula
for the d-dimensional hypercube, and
formula
for the complete topology.
Proof:

The statements on the communication effort follow from the upper bounds on the parallel time, using the second inequality from Theorem 1. The number of (directed) edges in a topology on λ vertices is λ for a unidirectional ring, at most 4λ for the stated grids and tori, λ log2 λ for the hypercube, and λ(λ − 1) for the complete topology. So the upper bounds on the expected communication effort follow from multiplying the upper bounds on the expected parallel time by the number of edges divided by the migration interval. In the following, we only show the statements on the parallel time.

Consider the time for leaving fitness level i. For the ring we note that after sufficiently many generations (growing linearly in k), for any integer k, at least k islands will be on fitness level i, and then the probability of finding a better fitness level in one generation is at least
formula
where we have used in both inequalities. So, for every , the expected number of generations on level i is at most
formula
If , we use the trivial upper bound
formula
Otherwise, if , we pick such that (since is an integer) and hence , which yields the upper bound
formula
If we set and get
formula
The maximum over all these cases is at most
formula
and summing over all yields the claimed bound for the ring.
Likewise, for torus and grid graphs, within a suitable number of generations, for any integer k, at least k^2 islands will be on the current best fitness level, as this time is sufficient to cover a rectangle with side lengths k × k. As before, we get a time bound of
formula
If , we again use the trivial upper bound
formula
Otherwise, if , we pick such that (as is an integer) and hence , which yields the upper bound
formula
Finally, for , we get for ,
formula
The maximum over all these cases is at most
formula
and summing over all yields the claimed bound for the torus.
For the hypercube, after generations, for any integer , islands will be on fitness level i. So the expected time on fitness level i is at most
formula
If , we get
formula
If , we set and get
formula
Otherwise, if , putting (implying , since is an integer) and using gives an upper bound of
formula
The maximum over all these cases is at most
formula
and summing over all yields the claimed bound for the hypercube.
Finally, for the complete graph, if , we use the upper bound
formula
and otherwise after generations all islands are on fitness level i, yielding the upper bound
formula
Summing over all yields the claimed bound for the complete graph.

The upper bounds from Theorem 2 match the ones from Lässig and Sudholt (2014b) for the case of probabilistic migration if we compare against a migration probability equal to the reciprocal of the migration interval; the constant factors here are even better. The constants for probabilistic migration are higher to account for the variation in the spread of information. Periodic migration is more reliable in this respect, since information is guaranteed to be spread in every migration period.

4  Adaptive Scheme A

In this section we analyse Scheme A on different topologies, including those from Theorem 2. Note that whenever an island improves its current best solution, a copy of this solution is spread to all neighbouring islands immediately. Thus, good fitness levels spread in the same way as migrating in every generation would do, i.e., as with a global migration interval of 1. This means that the upper bounds from Theorem 2 apply with the migration interval set to 1.

Theorem 3:

For Scheme A on topologies from Theorem 2, the expected parallel optimisation time is bounded from above as in Theorem 2 with the migration interval set to 1.

Note that the bounds on the expected parallel time from Theorem 2 are minimised for a migration interval of 1. This implies that we get upper bounds on the expected parallel time equal to the best upper bounds for any fixed choice of the migration interval. In case these bounds are asymptotically tight, this means that our adaptive Scheme A never increases the expected parallel running time asymptotically.

The intended benefit of Scheme A is a reduced communication effort, as all islands decrease communication while no improvement is encountered through either variation or immigration. The expected communication effort is bounded from above in the following theorem. The main observation is that, for each fitness level, the number of migrations from an island is logarithmic in the time it remains on that fitness level. For an upper bound we consider the expected worst-case time spent on a fitness level Ai, where the worst case is taken over all populations with their best individual in Ai.

Theorem 4:
Consider Scheme A on an arbitrary communication topology. Suppose that for each fitness level i we are given (an upper bound on) the worst-case expected number of generations during which the current best search point in the island model is on fitness level i. Then the expected communication effort is at most
formula
Proof:

Initially, and after improving its current best fitness, an island will double its migration interval until its current best fitness improves again. If the current best fitness does not improve for t generations, the island will perform at most log2(t) + 1 migrations, as the migration intervals 1, 2, 4, … double with each migration.

Consider an island v after reaching fitness level i for the first time, either through variation or immigration. If no other island has found a better fitness level, the random parallel time until some island finds such an improvement is given (or bounded) as above. Then this solution (or another solution of fitness better than i) will be propagated through the topology, advancing to neighbouring islands in every generation. Hence, some solution on a better fitness level than i will reach v within a further number of generations bounded by the diameter of the topology. The latter also holds if some island has already found a better fitness level than i at the time v reaches fitness level i. In any case, the total time v will spend on fitness level i is at most the time until some island finds an improvement plus the diameter of the topology.

During each migration, an island v sends one individual along each of its outgoing edges, i.e., a number of individuals equal to its outdegree. Hence, the total number of solutions migrated by v in t generations on fitness level i is at most
formula
The expected communication effort, therefore, is at most
formula
where the inequality follows from Jensen's inequality and the fact that the logarithm is a concave function.

The communication effort is proportional to the logarithm of the expected time spent on each fitness level. For functions that the island model optimises in an expected polynomial number of generations, and for a polynomial number of islands (note that the diameter is at most the number of islands), this logarithm is always O(log n). Then Theorem 4 gives the following upper bound.

Corollary 5:
Consider a fitness function f with m fitness values, such that f is being optimised by the island model in an expected polynomial number of generations, for every initial population. If the number of islands is also polynomial in n, we have
formula

The expected parallel time on a fitness level can be smaller than a polynomial. If sufficiently many islands are being used, and the topology spreads information quickly enough, it can be logarithmic or even constant. For specific topologies we get the following results by combining Theorem 4 with the bounds on the parallel time from Theorem 2 for a migration interval of 1, as established by Theorem 3.

Theorem 6:

Given success probabilities as in Theorem 2, the expected communication effort for Scheme A is bounded from above for the following topologies:

  • for a unidirectional ring

  • for every undirected grid or torus with side lengths

  • for the -dimensional hypercube

  • for the complete graph

We demonstrate the application of this theorem in Section 6.

5  Adaptive Scheme B

Scheme A resets the migration interval of an island to 1 every time an improvement is found. We propose Scheme B, which halves this value instead. This may be advantageous if there is a Goldilocks region of good values for the migration interval across several subsequent fitness levels. In contrast to Scheme A, Scheme B should be able to maintain a value in that region over several fitness levels.

When improvements are being found with probability p, good parameter values are close to 1/p, as then we find one improvement in expectation between any two migrations. If the current migration interval is much smaller, chances of finding an improvement within a migration period are small, and the migration interval is likely to increase. Likewise, if it is much larger, the island will find an improvement with high probability and halve its migration interval. Thus, the migration interval will reach an equilibrium state close to 1/p.
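This equilibrium argument can be observed in a toy model where improvements arrive independently with probability p per generation; everything below (the names, the choice of p, averaging over the second half of the run) is our own illustrative setup, not an experiment from the paper.

```python
import random

def scheme_b_equilibrium(p, periods=2000, seed=0):
    """Average migration interval of one island under Scheme B, in a toy
    model where each generation is a success with probability p: tau is
    halved after a successful migration period and doubled otherwise."""
    rng = random.Random(seed)
    tau, history = 1, []
    for _ in range(periods):
        success = any(rng.random() < p for _ in range(tau))
        tau = max(1, tau // 2) if success else 2 * tau
        history.append(tau)
    tail = history[len(history) // 2:]  # discard the initial ramp-up phase
    return sum(tail) / len(tail)
```

For p = 0.01 the average settles within a small constant factor of 1/p = 100, as the informal argument predicts.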

Scheme B might have a smaller communication effort as it does not reset the migration interval to 1, which would trigger a chain of frequent migrations. This, however, only holds if Scheme B does not lead to an increase in the parallel time. In fact, for sparse topologies, such as the unidirectional ring, there is a risk that improvements may take a very long time to be communicated. If, say, all islands had the same large migration interval, and one island found a new best solution, it might take a number of generations up to the migration interval times the diameter of the topology for this information to arrive at the last island. Scheme B therefore makes more sense for dense topologies such as the complete graph, whose diameter is 1 and where decreases of the migration interval quickly propagate to all islands.

We follow the analysis of Scheme B for adapting the number of islands in Lässig and Sudholt (2011a) and note the following similarities. In both approaches, over some time span, a resource is doubled and halved depending on whether an improvement of the best fitness is found. In Lässig/Sudholt, this resource is the number of islands and hence the number of function evaluations executed in one generation. Here it is the number of generations within one migration period. The Lässig/Sudholt time span is just one generation, leading to the parallel time as performance measure. In our case the time span is the current migration interval, leading to the communication effort as performance measure.

Under this correspondence, the parallel time in our work plays the role of the sequential time in Lässig/Sudholt, and the communication effort in our work plays the role of the parallel time in their paper. However, a difference emerges for more than one island, as in our scenario every generation provides one trial per island to find an improvement, so the resources for finding improvements are higher by a factor equal to the number of islands compared to Lässig/Sudholt.

We adapt the analysis from Lässig and Sudholt to accommodate this difference. The following lemma is a straightforward adaptation of parts of their Lemma 1. It estimates the expected communication effort for finding a single improvement, based on a given initial migration interval.

Lemma 7:

Assume an island model with a complete topology starts with some migration interval, and that in each generation each island finds an improvement over the current best individual with probability at least p. Consider the random number of individuals migrated between islands until an improvement is found on any island, and the migration interval at this time (before it is halved). Then for every

  1. .

Proof:
The proof closely follows the proof of Lässig and Sudholt (2011a), Lemma 1. The condition
formula
requires that no island finds an improvement before the migration interval exceeds the stated threshold. The latter is equivalent to
formula
The number of generations between migrations is then at least
formula
In order for the migration interval to exceed the threshold, we must not have a success on any island during these generations. The number of trials for obtaining a success on any island is larger than the number of generations by a factor equal to the number of islands. Hence
formula
To bound the expectation, we observe that the first statement yields a bound on the tail probabilities, and that the number of migrants sent in one migration is determined by the complete topology. Since the number of migrants is non-negative, we have
formula
splitting the sum at a suitable threshold and bounding the probabilities for small t by 1,
formula
as the last sum is less than 1.

The expected number of migrations is expressed as the difference between the logarithms of the equilibrium value 1/p and the initial value of the migration interval. If the initial migration interval is larger than 1/p, the expected number of migrations is just 2.

This fact is reflected in the following theorem. The upper bound on the communication effort only contains contributions from fitness levels where the migration interval needs to be increased. For the special case where fitness levels get progressively harder, i.e., the success probabilities si are non-increasing, the bound simplifies significantly.

Theorem 8:
Given success probabilities as in Theorem 2, for Scheme B we have
formula
For non-increasing success probabilities the latter simplifies to
formula

The upper bound for the expected parallel time is only by a constant factor larger than the upper bound for Scheme A. Hence, both upper bounds are asymptotically equal. In other words, the reduced communication in Scheme B does not worsen the asymptotic running time, for problems where the upper bounds for Schemes A and B are asymptotically tight.

We give an informal argument to convey the intuition for this result. Assume that Scheme B has raised the migration interval from 1 to some large value before an improvement is found. Since the intervals 1, 2, 4, … double each period, Scheme B has spent around twice the final interval in generations leading up to this value. In the worst case, the new migration interval is too large: improvements are found easily, and then the algorithm has to idle for nearly a full migration period before being able to communicate the current best fitness. The total time spent is still only a constant multiple of the final migration interval, whereas Scheme A in the same situation spends generations proportional to the same quantity. So, in this setting Scheme B needs at most a constant factor more generations than Scheme A.

A formal argument was given in Lässig and Sudholt (2011a) to derive an upper bound for the parallel time of Scheme B that exceeds the bound for Scheme A by at most a constant factor. The bound on the expected communication effort follows from similar arguments and from applying our refined Lemma 7. We refrain from giving a formal proof here as it can be obtained with straightforward modifications from the proof of Theorem 3 in Lässig and Sudholt (2011a).

6  Performance on Common Example Functions

The analytical frameworks for analysing fixed migration intervals and our two adaptive schemes can be applied by simply using lower bounds on success probabilities for improving the current fitness. We demonstrate this approach by analysing the parallel time and the communication effort on common test problems.

In the following we provide an analysis for the maximisation of the same pseudo-Boolean test functions investigated in Lässig and Sudholt (2014b). For a search point x in {0, 1}^n, we define OneMax(x) as the number of 1s in x, and LO(x) as the number of leading 1s in x. We also consider the class of unimodal functions taking d fitness values. A function is called unimodal if every nonoptimal search point has a Hamming neighbour with strictly larger fitness. Finally, for a parameter k, we consider
formula
The name comes from the fact that typically at the some point in the evolution a mutation flipping k specific bits (a “jump”) has to be made. The parameter k tunes the difficulty of this function.
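For concreteness, the three concrete test functions can be implemented directly from the definitions above. A minimal sketch, using the standard Jump_k definition from the literature (the paper's exact constants may differ):

```python
def onemax(x):
    """OneMax: number of 1s in the bit string x."""
    return sum(x)

def leading_ones(x):
    """LO: number of leading 1s in x."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def jump(x, k):
    """Jump_k: OneMax shifted by k, with a fitness gap of size k-1
    just below the all-ones optimum (standard textbook definition)."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

assert onemax([1, 0, 1, 1]) == 3
assert leading_ones([1, 1, 0, 1]) == 2
assert jump([1] * 8, k=3) == 11            # optimum: fitness k + n
assert jump([1] * 5 + [0] * 3, k=3) == 8   # just before the gap
assert jump([1] * 6 + [0] * 2, k=3) == 2   # inside the gap: low fitness
```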

6.1  Fitness Partition and Success Probabilities

In order to apply Theorems 2, 6, and 8, it suffices to define the probability si of moving from the fitness level Ai to a better one. Recall that si is a lower bound on the probability of one island finding a search point of strictly higher fitness, given that the population has current best fitness in Ai.

For the simple (1+1) EA, these values are easy to derive:

  • For OneMax, a search point with i 1s has n − i zeros. The probability of a mutation flipping exactly one of these zeros and no other bit is (n − i) · (1/n) · (1 − 1/n)^(n−1) ≥ (n − i)/(en).

  • For LO, it is sufficient to flip the first 0-bit and no other bit, which has probability (1/n) · (1 − 1/n)^(n−1) ≥ 1/(en).

  • For unimodal functions, the success probability on each fitness level is at least 1/(en), as for any nonoptimal point there is always a better Hamming neighbour.

  • For Jump_k with k ≥ 2, it is possible to find an improvement for an individual having up to n − k 1-bits by just increasing the number of 1s, thus the si for these levels are equal to the ones for OneMax. A similar argument applies to the levels in the fitness gap, where the fitness improves by flipping a single bit. Once we have obtained an individual with n − k 1-bits, an improvement is found by generating as offspring a specific bit string at Hamming distance k from the parent, which has probability n^(−k) · (1 − 1/n)^(n−k) ≥ 1/(e n^k).
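The bounds for OneMax and LO are easy to check numerically against the exact probability of flipping precisely the required bits under standard bit mutation with rate 1/n. A minimal sketch:

```python
import math

def s_onemax(i, n):
    """Lower bound (n-i)/(en) on improving from a point with i ones."""
    return (n - i) / (math.e * n)

def s_leadingones(n):
    """Lower bound 1/(en): flip the first 0-bit, keep all other bits."""
    return 1 / (math.e * n)

n = 100
for i in range(n):
    # Exact probability of flipping exactly one of the n-i zeros and nothing else:
    exact = (n - i) * (1 / n) * (1 - 1 / n) ** (n - 1)
    # The lower bound follows from (1 - 1/n)^(n-1) >= 1/e.
    assert exact >= s_onemax(i, n)

assert (1 / n) * (1 - 1 / n) ** (n - 1) >= s_leadingones(n)
```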

6.2  Fixed Scheme

Given Theorem 2, it is possible to bound the parallel time and the communication effort for fixed migration intervals similarly to Lässig and Sudholt (2014b). For example, for OneMax and the complete topology, we get an expected parallel time of O(τn + (n log n)/λ) and an expected communication effort of O(λ²n + (λn log n)/τ).

For fixed λ, the value of τ yields a trade-off between the upper bound for the parallel time and that for the communication effort. In our example, for, say, τ = log n, we get an expected parallel time of O(n log n) and an expected communication effort of O(λ²n). We can notice how a large τ always minimises the bound for the communication effort, while a small one (i.e., τ = 1) minimises the bound for the parallel time.

Given a fixed number of islands λ, we define the best τ as the largest τ that does not asymptotically increase the bound for the parallel time (compared to τ = 1). This assures good scalability while minimising the communication effort. For the example proposed, the best τ is τ = (log n)/λ. This leads to an expected parallel time of O((n log n)/λ) and an expected communication effort of O(λ²n). The results for other topologies and problems are summarised in Table 1. Notice that fixing τ to its best value is only possible provided that the number of islands is small enough. In particular, for the example proposed the number of islands must be λ = O(log n) in order for the best τ to be defined (τ ≥ 1).
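The "best τ" rule can be illustrated numerically. Assuming, as a hypothetical instance of the kind of bounds discussed above, a parallel-time bound of the shape T(τ) = τn + (n log n)/λ and a communication-effort bound C(τ) = λ² · T(τ)/τ (λ² migrants per migration on the complete topology, at most T/τ migrations), the largest τ that keeps T(τ) within a constant factor of T(1) also shrinks C substantially:

```python
import math

def T(tau, n, lam):
    # Hypothetical parallel-time bound of the shape discussed in the text.
    return tau * n + n * math.log(n) / lam

def C(tau, n, lam):
    # Communication effort: lambda^2 migrants per migration, T/tau migrations.
    return lam ** 2 * T(tau, n, lam) / tau

n, lam = 10 ** 6, 4
best_tau = max(1, int(math.log(n) / lam))  # largest tau not hurting T asymptotically

assert T(best_tau, n, lam) <= 2 * T(1, n, lam)  # parallel time: constant factor
assert C(best_tau, n, lam) < C(1, n, lam)       # strictly less communication
```

The choice best_tau = (log n)/λ is exactly the point where the two terms of T(τ) balance, which is why larger τ would start to dominate the parallel-time bound.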

Table 1:
Expected parallel optimisation times and expected communication efforts for λ islands running a (1+1) EA. The table shows restrictions on λ to yield linear speedups and fixed values for τ, leading to the best upper bounds for the communication effort while not increasing the parallel running time. For both parallel time and communication effort, we show bounds for general λ in the realm of linear speedups, and the best parallel time achieved by using the largest such λ, along with the communication effort for the same λ.
Topology and Scheme
OneMax Complete, Scheme A    
 Complete, Scheme B    
 Complete,     
 Ring, Scheme A    
 Ring,     
 Grid, Scheme A    
 Grid,     
 Hypercube, Scheme A    
 Hypercube,     
LO Complete, Scheme A    
 Complete, Scheme B    
 Complete,     
 Ring, Scheme A    
 Ring,     
 Grid, Scheme A    
 Grid,     
 Hypercube, Scheme A    
 Hypercube,     
Unimodal, Complete, Scheme A    
df-values Complete, Scheme B    
 Complete,     
 Ring, Scheme A    
 Ring,     
 Grid, Scheme A    
 Grid,     
 Hypercube, Scheme A    
 Hypercube,     
Jump Complete, Scheme A    
 Complete, Scheme B    
 Complete,     
 Ring, Scheme A    
 Ring,     
 Grid, Scheme A    
 Grid,     
 Hypercube, Scheme A    
 Hypercube,     

For OneMax and LO, lower bounds on the expected parallel times in Table 1 follow from general lower bounds on the class of mutation-based evolutionary algorithms (Sudholt, 2013). Every mutation-based algorithm needs at least Ω(n log n) function evaluations on OneMax and at least Ω(n²) function evaluations on LO. The parallel (1+1) EA with λ islands makes λ evaluations in one generation; hence the bounds translate to Ω((n log n)/λ) for OneMax and Ω(n²/λ) for LO. Lower bounds on the communication effort for fixed migration intervals come from the lower bound in Theorem 1. The floor function can be ignored in the asymptotic notation, as in all cases of Table 1 the expected parallel time is at least 2τ, implying that the floor decreases the bound by at most a factor of 2.

6.3  Adaptive Scheme

In order to calculate the parallel time for the adaptive Scheme A, we can refer to the results for the fixed scheme with τ = 1, as shown in Theorem 3. For example, our Scheme A running on a complete topology solves OneMax in expected parallel time O(n + (n log n)/λ). Scheme B has asymptotically the same parallel time as Scheme A, as Theorem 8 shows.

We only consider values of λ that lead to a linear speedup as defined in Lässig and Sudholt (2013): the expected parallel time for λ islands is smaller by a factor of Θ(λ) than the expected optimisation time of a single island, in an asymptotic sense. In this setting an island model thus makes the same expected number of function evaluations as a single island. In the example proposed, a linear speedup is achieved for a number of islands up to λ = O(log n); in fact, for a larger number of islands the upper bound on the parallel time would remain Θ(n) regardless of λ. The bounds on λ limit the best upper bound on the parallel time achievable with our analytical framework. Table 1 shows the bound on λ for different problems and topologies and the best achievable parallel time bound. For OneMax and LO, lower bounds on the expected parallel times in Table 1 follow from general lower bounds on the class of mutation-based evolutionary algorithms (Sudholt, 2013).

In order to calculate the communication effort, we use Theorems 6 and 8 for Schemes A and B, respectively. We first derive a general bound for every λ within the linear speedup range, and then we evaluate it for the maximum such λ, thus providing the communication effort for the value of λ leading to the best achievable parallel time. Table 1 shows all results derived in this manner. In the following we provide an example of this calculation for Scheme A on LO.

6.3.1  Example: Communication Effort of Scheme A for LO

We provide details on how to calculate bounds on the expected communication effort for Scheme A using Theorem 6, choosing LO as an example. Calculations for other test functions are similar. The purpose is to illustrate how we derived the results in Table 1.

In the following, λ is restricted to the cases leading to a linear speedup, as stated below and in Table 1. The calculations often use that 1/si ≤ en for all fitness levels of LO.

  • For the complete topology (diam = 1),
    formula
    If we set ,
    formula
  • For the ring (diam = ⌊λ/2⌋),
    formula
    If we set , we obtain
    formula
  • For the grid (diam = Θ(√λ)), we get
    formula
    If we set ,
    formula
  • For the hypercube (diam = log λ),
    formula
    If we set , we get
    formula

6.4  Evaluation of Results

Recall that Table 1 only shows results for linear speedups; hence all (upper bounds on) parallel times are equal, but the range of λ values varies between topologies.

Table 2 compares the upper bounds from Table 1 on the communication efforts for the best fixed value of τ against our adaptive schemes. For OneMax on all topologies, the upper bound on the communication effort is larger by only a small term for the adaptive schemes compared to the best fixed τ. The latter varies according to the topology, taking different values for the ring, the grid, the hypercube, and the complete graph. So, the small additional factor is a modest price to pay for the convenience of adapting τ automatically.

Table 2:
Comparison of upper bounds on the communication efforts. The table shows the asymptotic ratio of upper bounds on the communication effort from the rightmost column of Table 1 for the best fixed choice of τ and the best adaptive scheme based on bounds from Table 1. A value less than 1 indicates that the best fixed τ leads to better bounds, and a value larger than 1 indicates a decrease of the upper bound on the communication effort by the stated factor. In all cases λ was chosen as the largest possible value that guarantees a linear speedup according to the upper bounds.
 OneMax  LeadingOnes  Unimodal  Jump
Complete 
Ring     
Grid/Torus     
Hypercube     

For LO, Scheme A on the ring and on the grid guarantees a smaller communication effort than the best fixed τ. Since the bounds for fixed τ are tight, this constitutes a polynomial improvement in both cases. These significant improvements show that adapting the migration interval is an effective strategy for lowering the communication costs without harming the parallel running time. For the hypercube the communication effort is also lower, whereas for the complete graph no differences are visible in the upper bounds.

For the general bounds for unimodal functions, Scheme A on the ring and on the grid likewise guarantees a lower upper bound for the communication effort than the best fixed τ. For the hypercube the upper bound on the communication effort is lower as well, whereas for the complete graph no differences are visible in the upper bounds.

For Jump, with regard to comparing upper bounds, there are no differences for the complete graph, while on the hypercube Scheme A is worse than the best fixed value by a small factor. For rings and grids the adaptive scheme is better; the performance gap even grows with k, and hence with the difficulty of the function.

Comparing Schemes A and B in Table 1, both achieve similar results. For LO we see an advantage of Scheme B over Scheme A: the general bound for the communication effort of Scheme A is larger than that for Scheme B by a factor of order log(n/λ). This makes sense, as the probability of finding an improvement in one generation is of order λ/n for the considered λ, and the ideal value for the migration interval is in the region of n/λ. Scheme A needs to increase the migration interval around log(n/λ) times to get into this range, which is precisely the performance difference visible in our upper bounds. The difference disappears for λ = Θ(n).

The same argument also applies to the more general function class of unimodal functions.
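The count of doubling steps behind this reasoning is easy to make concrete: starting from τ = 1, a doubling scheme needs about log₂ of the target value many steps before the migration interval reaches the region where improvements become likely. A minimal sketch (the target n/λ is taken from the argument above):

```python
import math

def doublings_to_reach(target):
    """Number of doublings needed to grow tau from 1 to at least target."""
    steps, tau = 0, 1
    while tau < target:
        tau *= 2
        steps += 1
    return steps

n, lam = 2 ** 20, 2 ** 8
steps = doublings_to_reach(n // lam)          # target region: tau ~ n/lambda
assert steps == math.ceil(math.log2(n / lam)) # = 12 for these values
```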

7  Discussion

The adaptive schemes presented here were designed to reduce the communication effort without compromising exploitation. The results from Section 6.3 have demonstrated that this goal is achieved for functions that require a high degree of exploitation and little or no exploration. In settings where more exploration and less exploitation is needed, our schemes may not be able to find optimal or near-optimal migration intervals.

As mentioned in Section 1.1, the function class LOLZ from Lässig and Sudholt (2013) was the first constructed example where island models excel over panmictic populations and independent runs. Its structure is similar to the LO problem in the sense that bits have to be fixed to their correct values from left to right. In addition, LOLZ contains a number of traps that islands may fall into. For any island that has not gotten stuck in a trap, the probability of finding a fitness improvement is always at least 1/(en), as for LO.

Solving LOLZ efficiently requires a delicate choice of the migration interval. For a suitable choice of τ, the parallel (1+1) EA finds global optima on LOLZ efficiently with overwhelming probability, given appropriate parameter settings for the island model (number of islands and communication topology) (Lässig and Sudholt, 2013, Theorem 3). This value of τ is large enough to allow islands to explore different regions of the search space independently. But it is also small enough to propagate solutions of islands that are on target to finding the optimum, which then take over islands that have gotten stuck in local optima.

Let us consider the performance of our schemes with a complete topology, using nonrigorous arguments that could likely be turned into a rigorous (but lengthy) analysis. As argued at the beginning of Section 5, Scheme B will be drawn toward an equilibrium state with τ close to 1/(λs), where s = Θ(1/n) as for LO. This holds as long as not all islands have gotten stuck in a trap. So for most of the time we will have τ = O(n/λ), as deviations to larger values are very unlikely (see the first statement of Lemma 7). The same also holds for Scheme A.
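The equilibrium effect can be illustrated with a toy simulation: if each migration period of length τ finds an improvement with probability 1 − (1 − p)^τ, and τ is doubled after an unsuccessful period and halved after a successful one (our simplified reading of Scheme B's adaptation rule), then τ hovers around 1/p. The per-generation success probability p and the halve-on-success rule are modelling assumptions for illustration only:

```python
import random

def simulate_tau(p, periods=20000, seed=1):
    """Track the migration interval under double-on-failure / halve-on-success."""
    rng = random.Random(seed)
    tau, visited = 1, []
    for _ in range(periods):
        success = rng.random() < 1 - (1 - p) ** tau
        tau = max(1, tau // 2) if success else tau * 2
        visited.append(tau)
    return visited

p = 1 / 200
taus = simulate_tau(p)
typical = sorted(taus)[len(taus) // 2]   # median tau over the run
assert 1 / (8 * p) <= typical <= 8 / p   # hovers within a constant factor of 1/p
```

At τ well below 1/p a period rarely succeeds, so τ tends to double; at τ well above 1/p a period almost surely succeeds, so τ tends to halve. The interval therefore oscillates near the equilibrium.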

However, as the diameter of the complete topology is 1, we are then in a realm where the migration interval is too small, and the island model is known to be inefficient (Lässig and Sudholt, 2013, Theorem 1). The reason is that migration happens too frequently, and the island model behaves similarly to a panmictic population, which is prone to getting stuck in a trap. Here our schemes, for complete topologies, would focus too much on exploitation and not give the necessary degree of exploration needed to optimise LOLZ.

Lässig and Sudholt (2014a, sec. 6.1) also studied an instance of the Eulerian cycle problem, where a large migration interval is beneficial, as it leads to a better exploration. The same arguments as before lead to the same conclusion: our schemes typically will not give the necessary degree of exploration, leading to suboptimal performance for complete topologies.

For sparse topologies like rings, however, exploration is increased and the situation may improve. It is still unlikely that the recommended choice of τ for LOLZ is reached, as migration intervals are likely to remain in the range O(n/λ) for most of the time (by similar arguments as for the first statement of Lemma 7). But we conjecture that for sufficiently large and sparse topologies, a sufficient degree of exploration can be achieved. A detailed, formal analysis is beyond the scope of this paper and is left as an open problem for future work.

8  Conclusions and Future Work

We have presented two adaptive schemes for adapting the migration interval in island models. Both schemes have been accompanied by analytical frameworks that yield upper bounds on the expected parallel time and the expected communication effort, based on probabilities of fitness improvements in single islands running elitist EAs. The results show that our schemes are able to decrease the upper bound on the communication effort without significantly compromising exploitation. For arbitrary topologies, we obtained upper bounds on the expected parallel time that are asymptotically no larger than those for maximum exploitation, that is, migration in every generation.

Example applications to the parallel (1+1) EA on common example functions revealed that, in the realm of linear speedups and compared against the best fixed choice of the migration interval, the upper bound on the expected communication effort was larger by only a small term for OneMax, and similarly for the hypercube on Jump, but significantly lower for a general analysis of unimodal functions and for rings and grids on Jump. For LO, the adaptive Scheme A on grid and ring topologies can even guarantee an upper bound on the communication effort that is polynomially lower than the lower bound for the best fixed choice of the migration interval.

One avenue for future work is to evaluate our adaptive schemes empirically on parallel hardware and for real-world problems, to assess the practicality of our approach outside the scope of examples covered here. Another avenue is formalising the ideas from Section 7, leading to a rigorous analysis of our schemes with different topologies on examples that require a certain degree of exploration. This might also address the challenging question of how to choose the right communication topology for a given problem. And it might be possible to further refine our schemes to allow an explicit tuning of the balance between exploitation and exploration.

Finally, this work presents mostly upper bounds for the expected running time and only an upper bound for the expected communication effort of the adaptive schemes. Obtaining corresponding lower bounds would help to identify what performance can be achieved, and assist in the search for provably optimal communication strategies. A promising direction is using black box complexity (Droste et al., 2006; Lehre and Witt, 2012), which describes universal lower bounds on the expected (worst-case) running time of every black box algorithm on a given class of functions. Recent advances toward a black box complexity for parallel and distributed black box algorithms have been made (Badkobeh et al., 2014, 2015), which include island models using mutation for variation.

Acknowledgments

The authors would like to thank Joseph Kempka and the participants of Dagstuhl seminar 13271 “Theory of Evolutionary Algorithms” for fruitful discussions, and the anonymous reviewers for their constructive comments. The work of Andrea Mambrini was partly funded by EPSRC (Grant No. EP/I010297/1). The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 618091 (SAGE).

References

Alba, E. (2002). Parallel evolutionary algorithms can achieve super-linear performance. Information Processing Letters, 82(1):7–13.

Alba, E., and Luque, G. (2005). Theoretical models of selection pressure for dEAs: Topology influence. In Proceedings of the IEEE Congress on Evolutionary Computation, pp. 214–221.

Alba, E., Luque, G., and Nesmachnow, S. (2013). Parallel metaheuristics: Recent advances and new trends. International Transactions in Operational Research, 20(1):1–48.

Araujo, L., and Merelo Guervós, J. J. (2011). Diversity through multiculturality: Assessing migrant choice policies in an island model. IEEE Transactions on Evolutionary Computation, 15(4):456–469.

Badkobeh, G., Lehre, P. K., and Sudholt, D. (2014). Unbiased black-box complexity of parallel search. In Parallel Problem Solving from Nature, pp. 892–901. Lecture Notes in Computer Science, vol. 8672.

Badkobeh, G., Lehre, P. K., and Sudholt, D. (2015). Black-box complexity of parallel search with distributed populations. In Proceedings of the Conference on Foundations of Genetic Algorithms, pp. 3–15.

Bravo, Y., Luque, G., and Alba, E. (2012). Influence of the migration period in parallel distributed GAs for dynamic optimization. In Proceedings of the Conference on Learning and Intelligent Optimization, pp. 343–348.

Droste, S., Jansen, T., and Wegener, I. (2006). Upper and lower bounds for randomized search heuristics in black-box optimization. Theory of Computing Systems, 39(4):525–544.

Hong, T.-P., Lin, W.-Y., Liu, S.-M., and Lin, J.-H. (2007). Experimental analysis of dynamic migration intervals on 0/1 knapsack problems. In Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1163–1167.

Lässig, J., and Sudholt, D. (2010). General scheme for analyzing running times of parallel evolutionary algorithms. In Parallel Problem Solving from Nature, pp. 234–243. Lecture Notes in Computer Science, vol. 6238.

Lässig, J., and Sudholt, D. (2011a). Adaptive population models for offspring populations and parallel evolutionary algorithms. In Proceedings of the Conference on Foundations of Genetic Algorithms, pp. 181–192.

Lässig, J., and Sudholt, D. (2011b). Analysis of speedups in parallel evolutionary algorithms for combinatorial optimization. In Proceedings of the International Symposium on Algorithms and Computation, pp. 405–414. Lecture Notes in Computer Science, vol. 7074.

Lässig, J., and Sudholt, D. (2013). Design and analysis of migration in parallel evolutionary algorithms. Soft Computing, 17(7):1121–1144.

Lässig, J., and Sudholt, D. (2014a). Analysis of speedups in parallel evolutionary algorithms and (1+λ) EAs for combinatorial optimization. Theoretical Computer Science, 551:66–83.

Lässig, J., and Sudholt, D. (2014b). General upper bounds on the running time of parallel evolutionary algorithms. Evolutionary Computation, 22(3):405–437.

Lehre, P. K. (2011). Fitness-levels for non-elitist populations. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 2075–2082.

Lehre, P. K., and Witt, C. (2012). Black-box search by unbiased variation. Algorithmica, 64(4):623–642.

Lin, W.-Y., Hong, T.-P., Liu, S.-M., and Lin, J.-H. (2012). Revisiting the design of adaptive migration schemes for multipopulation genetic algorithms. In Proceedings of the Conference on Technologies and Applications of Artificial Intelligence, pp. 338–343.

Luque, G., and Alba, E. (2011). Parallel genetic algorithms: Theory and real world applications. Studies in Computational Intelligence, vol. 367. Berlin: Springer.

Mambrini, A., and Sudholt, D. (2014). Design and analysis of adaptive migration intervals in parallel evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1047–1054.

Mambrini, A., Sudholt, D., and Yao, X. (2012). Homogeneous and heterogeneous island models for the set cover problem. In Parallel Problem Solving from Nature, pp. 11–20. Lecture Notes in Computer Science, vol. 7491.

Neumann, F., Oliveto, P. S., Rudolph, G., and Sudholt, D. (2011). On the effectiveness of crossover for migration in parallel evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1587–1594.

Osorio, K., Alba, E., and Luque, G. (2013). Using theory to self-tune migration periods in distributed genetic algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, pp. 2595–2601.

Osorio, K., Luque, G., and Alba, E. (2011). Distributed evolutionary algorithms with adaptive migration period. In Proceedings of the International Conference on Intelligent Systems Design and Applications, pp. 259–264.

Rudolph, G. (2006). Takeover time in parallel populations with migration. In Proceedings of the International Conference on Bioinspired Optimization Methods and Their Applications, pp. 63–72.

Skolicki, Z., and De Jong, K. (2005). The influence of migration sizes and intervals on island models. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1295–1302.

Sudholt, D. (2013). A new method for lower bounds on the running time of evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 17(3):418–435.

Tomassini, M. (2005). Spatially structured evolutionary algorithms: Artificial evolution in space and time. Berlin: Springer.

Wegener, I. (2002). Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In R. Sarker, X. Yao, and M. Mohammadian (Eds.), Evolutionary optimization, pp. 349–369. New York: Springer.

Note

1

The class of mutation-based evolutionary algorithms describes all algorithms starting with a population of individuals picked uniformly at random, and afterwards only using standard bit mutation as variation operator. The parallel (1+1) EA considered in Section 6 fits in this framework, regardless of the topology and migration policy used, as standard bit mutation is the only variation operator.