## Abstract

The parameter-less population pyramid (P3) is a recently introduced method for performing evolutionary optimization without requiring any user-specified parameters. P3’s primary innovation is to replace the generational model with a pyramid of multiple populations that are iteratively created and expanded. In combination with local search and advanced crossover, P3 scales to problem difficulty, exploiting previously learned information before adding more diversity. Across seven problems, each tested using on average 18 problem sizes, P3 outperformed all five advanced comparison algorithms. This improvement includes requiring fewer evaluations to find the global optimum and better fitness when using the same number of evaluations. Using both algorithm analysis and comparison, we find P3’s effectiveness is due to its ability to properly maintain, add, and exploit diversity. Unlike the best comparison algorithms, P3 was able to achieve this quality without any problem-specific tuning. Thus, unlike previous parameter-less methods, P3 does not sacrifice quality for applicability. Therefore we conclude that P3 is an efficient, general, parameter-less approach to black box optimization which is more effective than existing state-of-the-art techniques.

## 1 Introduction

A primary purpose of evolutionary optimization is to efficiently find good solutions to challenging real-world problems with minimal prior knowledge about the problem itself. This driving goal has created search algorithms that can escape user bias to create truly novel results, sometimes publishable or patentable in their own right (Kannappan et al., 2015). While it is not possible for any algorithm to do better than random search across all possible problems (Wolpert and Macready, 1997), effectiveness can be achieved by assuming the search landscape has structure and then biasing the algorithm toward exploiting that structure.

In evolutionary optimization, and genetic algorithms (GAs) in particular, search is often biased through parameters. This can be beneficial because it allows practitioners to inject their knowledge about the shape of the search landscape into the algorithm. However, the quality of solutions found, and the speed at which they are found, is strongly tied to setting these parameters correctly (Goldberg et al., 1991). As such, either expert knowledge or exceedingly expensive parameter tuning (Grefenstette, 1986) is required to leverage this feature to its fullest potential. Furthermore, parameters such as population size, mutation rate, crossover rate, tournament size, and so on, usually have no clear relationship to the problem being solved, meaning even domain experts may not understand how the parameters will interact with the problem or with each other. To further complicate matters, there is mounting evidence that parameter values should change during search (Goldman and Tauritz, 2011; LaPorte et al., 2014).

There have been periodic efforts to reduce or remove the need for parameter tuning. Rechenberg (1973) introduced self-adaptive parameters, in which parameter values were included in each solution’s genome and themselves underwent evolution. This allowed search to optimize some of its own parameters, resulting in a reduced need for expert tuning. Harik and Lobo (1999) were able to design an entirely parameter-less GA by leveraging schema theory and parallel populations. Unfortunately, these methods were provably less efficient than directly setting the parameters to optimal values (Pelikan and Lobo, 2000).

One area that has been very effective at reducing the number of algorithm parameters is model-based search. The hierarchical Bayesian optimization algorithm (hBOA) (Pelikan and Goldberg, 2006) and the linkage tree genetic algorithm (LTGA) (Thierens, 2010) both require only a single parameter: population size. Pošík and Vaníček (2011) leveraged model building to create a fully parameter-less algorithm, but it is restricted to only order *k*, fully decomposable, noiseless problems.

Most recently Goldman and Punch (2014) introduced the parameter-less population pyramid (P3). This method uses a pyramid structure of populations to combine model-based search with local search to achieve parameter-less optimization. Initial results suggest that unlike previous parameter-less methods, P3 is actually more efficient than current state-of-the-art parameterized search algorithms. In this paper, we extend these results to cover more comparison algorithms; compare both efficiency in reaching the global optimum and intermediate fitnesses; analyze algorithm complexity; and provide more in-depth analysis of P3 itself.

## 2 Comparison Optimizers

In order to fully understand the effectiveness of P3, we compare it with five advanced algorithms that share features with P3. The random restart hill climber, defined by Goldman and Punch (2014), was chosen as an efficient form of repeated local search. As P3 combines this hill climber with crossover, a comparison with local search alone shows the advantages of P3’s overall approach. The 1 + (λ, λ) algorithm (Doerr et al., 2013) is the current best theory-supported simple genetic algorithm, and its method of crossover is in some sense a macromutation, just as in P3. hBOA and parameter-less hBOA are advanced model-building search techniques that are very effective at learning complex problem structures, are designed to achieve similar goals as P3’s linkage learning, but use very different methods. Finally, LTGA represents the current state-of-the-art in black box search and is the origin of P3’s linkage learning and crossover methods.

Only hBOA and LTGA require any parameters, each requiring only population size. This makes knowing the optimal behavior of these algorithms much more tractable. All the algorithms are also gene order independent, fitness scale invariant, and unbiased. This means, for any problem, the order in which problem variables appear in the genome can be changed without changing the behavior of the search. The fitness can also be manipulated in any fashion as long as the rank ordering of solutions is unchanged. These algorithms are also unaffected by the meaning assigned to each bit, such that inverting a predetermined random subset of genes before evaluation will not impact search efficiency.

Our implementations of all these algorithms as well as all the population size information, raw results, and processing scripts are available from our website.^{1}

### 2.1 Random Restart Hill Climber

Perhaps the simplest black box search heuristic is stochastic local search, or hill climbing. This optimization technique focuses on improving a single solution until it reaches a local optimum. Here we use the first-improvement hill climber defined by Goldman and Punch (2014) (see Algorithm 1), which works by flipping each bit in a random order and keeping modifications when fitness is improved until single bit flips cannot result in further fitness improvements.

The hill climber requires an amortized cost of O(1) operations per evaluation. In order to terminate, at least one evaluation must be performed for each of the *N* bits in the solution. As such, any operation that happens only once per search can be amortized over at least *N* evaluations, covering the initialization performed on line 2 of the algorithm. Line 6, which prevents wasted evaluations, can be called at most twice per evaluation: once to mark a bit as worth retrying and once to prevent it from being unnecessarily evaluated again. The only way three or more calls could happen is if no fitness improvement was made for the entire previous iteration, which contradicts the loop invariant.

Because of its nature, this hill climber cannot escape basins of attraction. Once a solution is reached such that none of the single bit neighbors are fitness improvements, the search stops. Thus this algorithm requires a restart mechanism to solve multimodal problems. We have chosen here to naïvely restart search from a random solution whenever a local optimum is found. This ensures that on all landscapes there is always a non-zero probability of search finding the global optimum.
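For concreteness, the procedure can be sketched in Python. This is an illustrative implementation, not the paper’s exact pseudocode: it rescans all bits after each improving pass rather than using the amortized bookkeeping described above, and the function names are our own.

```python
import random

def hill_climb(solution, fitness, rng):
    """First-improvement hill climbing: flip bits in random order,
    keep strict improvements, stop when no single flip helps."""
    current = fitness(solution)
    improved = True
    while improved:
        improved = False
        order = list(range(len(solution)))
        rng.shuffle(order)
        for i in order:
            solution[i] ^= 1                 # flip bit i
            candidate = fitness(solution)
            if candidate > current:
                current = candidate          # keep the improvement
                improved = True
            else:
                solution[i] ^= 1             # revert the flip
    return solution, current

def restarting_hill_climber(n, fitness, restarts, rng):
    """Naive restart wrapper: each restart begins from a random solution."""
    best, best_fit = None, float("-inf")
    for _ in range(restarts):
        start = [rng.randrange(2) for _ in range(n)]
        sol, fit = hill_climb(start, fitness, rng)
        if fit > best_fit:
            best, best_fit = sol[:], fit
    return best, best_fit
```

On a unimodal landscape such as One Max (fitness = number of ones), a single climb suffices; the restarts only matter on multimodal problems.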

### 2.2 1 + (λ, λ) Algorithm

Doerr et al. (2013) presented the first genetic algorithm to provably show the advantages of performing crossover on One Max. This comparatively simple algorithm, the 1 + (λ, λ) algorithm, maintains only a single individual and a self-controlled parameter λ.

Each iteration the number of bits to flip, *b*, is chosen from the binomial distribution Bin(*N*, λ/*N*), where *N* is the number of bits in the genome. Next, λ offspring are produced by flipping *b* bits in copies of the parent. The best mutant then produces λ offspring via uniform crossover with the original parent, such that each gene comes from the mutant with probability 1/λ. In the original algorithm the best offspring produced by crossover then replaces the original parent if its fitness is no worse. The λ parameter, which is initialized to 1, is decreased if the offspring replaced its parent and increased otherwise.

The original formulation was designed specifically for unimodal landscapes and as such was not directly suitable for multimodal problems. Goldman and Punch (2014) extended the algorithm to include random restarts. As search stagnates, the λ parameter increases in value. Eventually this results in λ = *N*, causing mutation to always flip all bits of the individual. As this prevents any future improvement, whenever λ = *N*, search is restarted from a random solution, with λ reset to 1.

A few other efficiency modifications were also made. If there is a tie in crossover offspring fitness, whichever has a larger Hamming distance from the parent is retained. This encourages drifting across plateaus. The “mod” control strategy proposed by Doerr et al. (2013) was not used because it conflicted with the random restart strategy. If a crossover individual is identical to either of its parents, it is not evaluated. If mutation produces an offspring that is better than the best crossover offspring, it is used to compare against the original parent.
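The mutation, crossover, and adaptation phases can be sketched as follows. This is a simplified illustration: the adaptation factor `F = 1.5` is an assumed value (not taken from the paper), and the evaluation-skipping and tie-breaking refinements described above are omitted. The best mutant is kept as a fallback competitor against the crossover offspring, as noted in the text.

```python
import random

def one_plus_lambda_lambda(fitness, n, rng, budget=100_000):
    """Sketch of the 1 + (lambda, lambda) GA with the restart rule
    described above. F is an assumed adaptation factor."""
    F = 1.5
    parent = [rng.randrange(2) for _ in range(n)]
    p_fit = fitness(parent)
    lam, evals = 1.0, 1
    while evals < budget and p_fit < n:
        k = max(1, round(lam))
        # Mutation phase: b ~ Bin(N, lambda/N), flip b bits in k mutants.
        b = sum(rng.random() < lam / n for _ in range(n))
        mutants = []
        for _ in range(k):
            child = parent[:]
            for i in rng.sample(range(n), b):
                child[i] ^= 1
            mutants.append((fitness(child), child))
            evals += 1
        best_m = max(mutants, key=lambda t: t[0])
        # Crossover phase: each gene comes from the mutant w.p. 1/lambda.
        best_c = best_m                      # mutant competes with offspring
        for _ in range(k):
            child = [m if rng.random() < 1.0 / lam else p
                     for m, p in zip(best_m[1], parent)]
            best_c = max(best_c, (fitness(child), child), key=lambda t: t[0])
            evals += 1
        if best_c[0] >= p_fit:               # no worse: replace parent
            parent, p_fit = best_c[1], best_c[0]
            lam = max(1.0, lam / F)          # success: decrease lambda
        else:
            lam = min(n, lam * F ** 0.25)    # failure: increase lambda
        if lam >= n:                          # restart rule from the text
            parent = [rng.randrange(2) for _ in range(n)]
            p_fit = fitness(parent)
            lam = 1.0
    return parent, p_fit
```

On One Max this sketch reliably reaches the optimum well within the evaluation budget, since the landscape is unimodal and λ stays small.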

### 2.3 Hierarchical Bayesian Optimization Algorithm

Pelikan and Goldberg (2006) used statistical principles in combination with a decision tree structure to create the hierarchical Bayesian optimization algorithm (hBOA). This method creates a model of epistatic relationships between genes, which is then used to stochastically generate new solutions. Each generation a binary tournament with replacement is used to select solutions from the population. These solutions are then used to build the model, which in turn is used to generate new solutions. The new solutions are then integrated into the population using restricted tournament replacement.

Conceptually, the model built by hBOA is trying to infer rules of the form “Given that this subset of genes is set to these values, how frequently is gene *x _{i}* set to value *v*?” This can be represented using a directed acyclic decision forest, with each tree in the forest representing one gene in the solution. In the decision tree *T _{i}*, which is used to set the value of gene *x _{i}*, each internal node represents a previous decision on how to set some other gene *x _{j}*, with the children of that node representing how that decision was made. The leaves of each tree give the probability that *x _{i}* should be set to each possible value.

The forest is constructed iteratively, with each tree initially containing a single leaf and with each leaf storing a pointer to each selected solution. Each iteration the algorithm considers all possible ways of splitting an existing leaf using another gene *x _{j}*, such that solutions in the leaf are moved to the newly created leaves based on their value for *x _{j}*. The general goal is to separate the solutions such that all solutions with *x _{i}* = 0 move to one leaf while solutions with *x _{i}* = 1 move to the other.

_{j}*N*. However, through algebraic manipulation (discussed in the Appendix) we derived a simplified form, shown in Equation (1). Here

*l*is a leaf in tree

*i*, with and the results of splitting

*l*. is the number of solutions that reach

*l*and is the number of solutions that reach

*l*, with the given value for

*x*. If no proposed split satisfies the inequality, iteration stops. If multiple splits do, whichever maximizes the right side is chosen.

Initially there are *N*(*N* − 1) possible ways to split existing leaves, as each of the *N* single-node trees can be split by any of the other genes. Each iteration a new edge is added to the decision forest, meaning some of the previously tested splits can no longer be used. For instance, if *T _{i}*, which is used to decide the value of *x _{i}*, is split using the value of *x _{j}*, then *T _{j}* can no longer be split using *x _{i}*. As a split creates two new leaves, new potential splits must also be tested. Equation (1) parses all solutions that reach a leaf to count gene frequencies, requiring time linear in the number of solutions. The number of total leaves created depends heavily on the problem and the population size μ. However, assuming no splits are accepted, or that the cost of testing all future splits is less than the initial *N*(*N* − 1) tests, constructing the model requires O(*N*^{2}μ) time. Each model is used to generate μ solutions, leading to a cost per evaluation of O(*N*^{2}).

To generate a solution, the value of each gene *x _{i}* is set using its corresponding decision tree *T _{i}*. Because the forest is directed acyclic, there must be an ordering of the trees such that before *T _{i}* is executed, all *x _{j}* it uses to make decisions have already been set. As such, previous decisions made by other trees are used to follow each *T _{i}* until a leaf is reached. The value of *x _{i}* is then set based on the probability that other solutions reached that leaf with each value of *x _{i}*.

To perform replacement, hBOA uses restricted tournament replacement. After each new solution is generated and evaluated, a set of *w* solutions is chosen at random from the population, where *w* = min(*N*, μ/20). From this set the solution that is most genetically similar to the offspring is chosen. If the offspring is at least as fit as the chosen solution, it replaces the chosen solution in the population. Otherwise the offspring is discarded. This method is designed to preserve genetic diversity, because only genetically similar solutions must compete on fitness.
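Restricted tournament replacement can be sketched as follows, assuming Hamming distance as the genetic similarity measure (the function name and in-place update style are our own):

```python
import random

def restricted_tournament_replacement(population, fitnesses, offspring,
                                      off_fit, w, rng):
    """Sketch of RTR: the offspring competes only against the most
    genetically similar of w randomly chosen population members."""
    window = rng.sample(range(len(population)), w)
    # Most similar = smallest Hamming distance to the offspring.
    nearest = min(window,
                  key=lambda i: sum(a != b for a, b in
                                    zip(population[i], offspring)))
    if off_fit >= fitnesses[nearest]:
        population[nearest] = offspring      # replace the similar loser
        fitnesses[nearest] = off_fit
    # otherwise the offspring is discarded
```

Because only genetically similar solutions compete, a mediocre offspring can still displace a mediocre neighbor in its own niche without threatening fitter solutions elsewhere in the population.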

hBOA is designed to work with large population sizes, resulting in a large number of evaluations per generation. As hBOA utilizes explicit diversity maintenance, standard methods for determining convergence are not considered very accurate. Therefore the authors suggest that an hBOA run should be terminated after performing a number of generations equal to *N*.

Like other model-based techniques, hBOA has very few parameters. There is no mutation or crossover, and modeling does not rely on any explicit parameters. Solution selection, generation, and replacement are all derived from the population size, which must be set by the user.

### 2.4 Parameter-less hBOA

Using the methods first introduced by Harik and Lobo (1999) for the parameter-less GA, Pelikan and Lin (2004) created parameter-less hBOA, which automatically scales its population size to fit the problem. This is done by maintaining a list of concurrent populations using exponentially scaled population sizes.

A run of parameter-less hBOA starts with a single population *P*_{0} of some small initial size μ_{0}. After *P*_{0} performs two generations, a new population *P*_{1} of size 2μ_{0} is created and performs a generation. Evolution then continues, with each population *P _{i}* performing two generations for each one performed by *P*_{i+1}. Each time a population *P _{i}* completes its second generation, the next population *P*_{i+1} is created, which performs generations half as often as *P _{i}*. In this way an infinite number of parallel populations can be simulated and, because population sizes double while generation frequency halves, each population receives the same number of total evaluations.

In all other respects each population is identical to an hBOA population using a fixed population size. No search information is shared among populations, and each population’s search is independently terminated. As such, parameter-less hBOA cannot perform better than hBOA using the optimal population size for a given instance, because it must also spend evaluations on populations of other sizes. This inefficiency is bounded by a logarithmic multiple of the total number of evaluations (Pelikan and Lobo, 2000).

### 2.5 Linkage Tree Genetic Algorithm

Thierens (2010) introduced the linkage tree genetic algorithm (LTGA), which automatically detects and exploits problem epistasis by examining pairwise gene entropy. Because of its enhanced ability to preserve high fitness gene subsets, LTGA was able to outperform state-of-the-art GAs across many benchmarks. Since its introduction, many variants of LTGA have been proposed (Thierens and Bosman, 2011; Goldman and Tauritz, 2012), so for clarity we have chosen the version presented by Thierens and Bosman (2013) as our model.

LTGA’s effectiveness comes from its method of performing crossover. Instead of blindly mixing genes between parents, LTGA attempts to preserve important interrelationships between genes. Before performing any crossovers in a generation, LTGA first builds a set of hierarchical gene clusters, which are then used to dictate how genes are mixed during crossover.

Throughout this process, the algorithm tracks the set of all gene clusters that should be preserved for use by crossover. This set begins with all genes in separate clusters, and each time a new cluster is created, it is added to the set. However, not all clusters are necessarily worth keeping. For instance, in all versions of LTGA the cluster containing all genes is removed from the set, because preserving all genes during crossover can only create clones. Thierens and Bosman (2013) extended this removal to include any unsupported subsets. If the pairwise distance between two clusters is zero, there are no individuals in the population that disrupt the relationships between the two clusters; therefore during crossover there is no reason to believe a fitness improvement can be achieved by breaking the stored pattern. As such, a cluster is only kept if its direct superset has a nonzero distance. As a final step, line 10 reorders the set such that clusters appear in the reverse of the order in which they were added. Thus the most tightly linked clusters and those containing single genes appear at the end of the returned list.

Thierens and Bosman’s 2013 version of LTGA does not use the entire population when determining pairwise entropy. Instead, binary tournament is used to select half of the population. This is done to ensure the model is built using only high-quality solutions, even during the first generation.

In order to efficiently perform clustering, a pairwise gene frequency table is constructed from the selected solutions. To calculate Equation (2), Equation (3) is called for each gene *x _{i}* and pair of genes (*x _{i}*, *x _{j}*). Extracting this information requires O(μ*N*^{2}) time, where μ is the population size and *N* is the genome size. The process of converting this pairwise frequency information into clusters can be achieved in O(*N*^{2}) using the bookkeeping methods presented by Gronau and Moran (2007). This cost is paid only once per generation and is then used to perform approximately 2*N*μ crossover evaluations. As a result, the amortized cost of LTGA’s model building is O(*N*) per evaluation.

Algorithm 3 describes how the identified clusters are used by crossover to preserve gene linkage while still exploring the search space. Unlike more traditional crossover methods, LTGA crosses each individual with the entire population. Also, to produce a single offspring, multiple evaluations of the fitness function are performed.

Each generation, each individual in the population undergoes crossover. In a single crossover event, each cluster of genes *C _{i}* in the cluster list is applied as a crossover mask. A random donor *d* is chosen from the entire population (not just the model-selected population), and *d*’s genes for *C _{i}* are copied into the working solution. If a modification is made, an evaluation is then performed. If the crossover resulted in no worse fitness, the changes are kept. This allows for neutral drift across plateaus. The resulting solution, which must be at least as fit as its parent, is then copied into the next generation.

In total each individual can cause up to 2*N* − 2 evaluations. If all clusters were kept, even those deemed unhelpful, and all donations were evaluated, even those which did not change any genes, then Cluster-Usage would perform exactly 2*N* − 2 evaluations for each of the μ solutions in the population. This provides the amortizing evaluations required to make clustering only O(*N*) operations per evaluation. However, with skipping of some evaluations, it is possible that clustering’s amortized cost may be superlinear.
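The cluster-driven crossover just described can be sketched as follows. The function name `cluster_usage` and the flat list-of-gene-indices representation of clusters are our simplifications of Algorithm 3:

```python
import random

def cluster_usage(solution, sol_fit, clusters, population, fitness, rng):
    """Sketch of LTGA-style crossover: apply each gene cluster as a
    donation mask from a random donor, keeping non-worsening changes."""
    current, cur_fit = solution[:], sol_fit
    for cluster in clusters:                  # in the model's chosen order
        donor = rng.choice(population)
        if all(current[g] == donor[g] for g in cluster):
            continue                          # no change: skip the evaluation
        trial = current[:]
        for g in cluster:
            trial[g] = donor[g]               # copy the donor's cluster genes
        t_fit = fitness(trial)
        if t_fit >= cur_fit:                  # no worse: keep (allows drift)
            current, cur_fit = trial, t_fit
    return current, cur_fit
```

Note the `>=` acceptance rule: equal-fitness donations are kept, which is what permits neutral drift across plateaus while still guaranteeing the returned solution is at least as fit as its parent.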

LTGA has no explicit form of diversity control and no method for introducing new genetic information once the population has converged. Therefore an LTGA run is considered converged once two consecutive populations contain the same unique solutions.

By design, LTGA only has a single parameter: population size. LTGA uses no mutation, and crossover is defined in terms of the clustering algorithm. Selection between generations is fully elitist and embedded in the crossover, with selection of model-building solutions fixed to a binary tournament. Neither Cluster-Creation nor Cluster-Usage relies on parameter values. LTGA does not provide any method for controlling or setting the population size, relying instead on a fixed user-specified size.

## 3 Parameter-less Population Pyramid

Goldman and Punch (2014) introduced the parameter-less population pyramid (P3) as a method for performing optimization that does not require the user to provide any parameters. This is achieved by combining efficient local search with the model-building methods of LTGA using an iteratively constructed hierarchy of populations.

The high-level algorithm of P3 is presented as Algorithm 4. P3 maintains an ordered set of populations (the pyramid) along with a hash set of every solution stored in any population. Unlike more traditional GAs, P3 does not follow a generational model. Instead, it maintains an iteratively expanding pyramid of populations. Each iteration a new random solution is generated. This solution is brought to a local optimum using the hill climbing algorithm shown as Algorithm 1. If that local optimum has not yet been added to any level of the pyramid, the solution is added to the lowest population *P*_{0}.

Next, the solution is iteratively improved by applying LTGA’s crossover algorithm (Algorithm 3) with each population *P _{i}* in the pyramid. If this process results in a strict fitness improvement and has created a solution not yet stored in the pyramid, the solution is added to the next highest pyramid level *P*_{i+1}. If *P*_{i+1} does not yet exist, it is created. In this way populations in the pyramid expand over time, and the number of populations stored increases over time. Initially the pyramid contains no solutions or populations, meaning the user does not need to specify a population size.
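A heavily simplified, self-contained sketch of one P3 iteration follows. To stay short it substitutes degenerate one-gene “clusters” for the real linkage model that P3 rebuilds per level, so it illustrates only the control flow (hill climb, insert at the bottom, cross with each level, promote on strict improvement), not P3’s linkage learning:

```python
import random

def p3_iteration(pyramid, seen, n, fitness, rng):
    """One simplified P3 iteration. pyramid is a list of populations,
    seen a hash set of all stored solutions (as tuples)."""
    # 1. Random solution brought to a local optimum (first-improvement).
    sol = [rng.randrange(2) for _ in range(n)]
    fit = fitness(sol)
    improved = True
    while improved:
        improved = False
        for i in rng.sample(range(n), n):
            sol[i] ^= 1
            f = fitness(sol)
            if f > fit:
                fit, improved = f, True
            else:
                sol[i] ^= 1                  # revert non-improving flip
    # 2. Add to the bottom level if never seen before.
    if tuple(sol) not in seen:
        if not pyramid:
            pyramid.append([])
        pyramid[0].append(sol[:])
        seen.add(tuple(sol))
    # 3. Cross with every level; climb on strict improvement.
    level = 0
    while level < len(pyramid):
        prev_fit = fit
        for donor in rng.sample(pyramid[level], len(pyramid[level])):
            for g in range(n):               # stand-in for real clusters
                if sol[g] != donor[g]:
                    sol[g] = donor[g]
                    f = fitness(sol)
                    if f >= fit:
                        fit = f              # no worse: keep the donation
                    else:
                        sol[g] ^= 1
        if fit > prev_fit and tuple(sol) not in seen:
            if level + 1 == len(pyramid):
                pyramid.append([])           # create the next level
            pyramid[level + 1].append(sol[:])
            seen.add(tuple(sol))
        level += 1
    return fit
```

Note that the pyramid and the hash set start empty: population sizes and the number of levels emerge from the run itself, which is what makes the method parameter-less.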

To accommodate P3’s unique population structure, some of LTGA’s clustering procedures were modified. In LTGA, clusters are identified at the start of each generation and are used to create all offspring in that generation. Because P3 does not perform serial generations, P3 instead rebuilds the model each time a solution is added to a population. Furthermore, unlike our chosen variant of LTGA, all solutions in the population are used to generate the model, not just the winners of a binary tournament. We can do this because even the worst solutions in the pyramid are already of high quality owing to local search. Using local search in LTGA was examined by Bosman and Thierens (2011) and found to provide no significant improvement. A likely cause was that their study applied local search to every solution, not just the initial population, resulting in significant overhead.

Beyond the changes in population structuring, P3 modifies LTGA’s version of Cluster-Creation and Cluster-Usage. P3 changes line 10 in Algorithm 2 from last merged first ordering to smallest first ordering. This method applies gene clusters during crossover based on how many genes are in each cluster,^{2} and not on how tightly linked those genes are. Goldman and Tauritz (2012) found that this alternative was better at preserving diversity and therefore required smaller populations.

P3 also modified line 3 in Algorithm 3. Instead of choosing a single genetic donor for each cluster, P3 iterates over the population in a random order until a solution in the population is found that has at least one gene different for that cluster of genes from the improving solution. This process increases the likelihood of an evaluation being performed for every cluster, and helps test rare gene patterns in the population.

In LTGA the cost of rebuilding the model is O(μ*N*^{2}), as it must collect pairwise gene frequency information for all solutions in the population. P3 does not store a single population, and it does not have a fixed size for any population. However, each time a solution is added to a population, it requires O(*N*^{2}) time to update that population’s table of pairwise frequencies and another O(*N*^{2}) time to rebuild the linkage model. The model is then used immediately to perform up to one evaluation for each of the up to 2*N* − 2 clusters. Just as in LTGA, if no evaluation shortcuts were made, P3 has an amortized model-building cost of O(*N*) per evaluation. While P3 does rebuild the model more frequently per solution in the population, it also performs a number of local search evaluations that are quite efficient, meaning theoretical comparisons of their speed are difficult to perform. As a final note, P3’s repeated attempts to find a useful donation make it less likely than LTGA to skip evaluations, but there is an added cost to find these donations. Repeated donation attempts could require as many as O(μ) tries per evaluation, but experimental evidence suggests that this operation actually saves more overhead than it costs by increasing the number of evaluations per model rebuild.

Each of the pieces of the P3 algorithm were selected not just for their stand-alone efficacy but for the ways in which they interact. By using the hill climber to optimize randomly generated solutions, the underlying pairwise relationships in the problem are exposed. As a result, detecting clusters for use by crossover becomes much more effective. The crossover operator is extremely elitist, as each gene donation must result in no fitness loss, and a solution must strictly improve to be added to the next level of the pyramid. This is balanced by continual integration of new random solutions. Furthermore, each random restart decreases the probability of spurious linkages caused by shared ancestry. This diversity is further preserved by applying gene clusters in smallest first order during crossover, since this reduces the probability of genetic hitchhikers.

Other algorithms have been proposed that use multiple concurrent populations. Hornby (2006) had a hierarchy of populations with solutions periodically advancing upward. This allows for continuous integration of diversity as the lowest population is reseeded with random solutions. However, this method resulted in increased parameterization because not only was a population size required but also new parameters for how frequently generations advanced between levels and how many total levels to have. Harik and Lobo (1999) used multiple independent populations of different sizes as a method for removing the population size parameter, but this was provably less efficient than using an optimal population size, since no information is shared between the populations.

## 4 Problem Descriptions

### 4.1 Single-Instance Problems

Understanding how a stochastic search algorithm will behave on arbitrary and complex search landscapes can be exceedingly difficult. Therefore a common practice for algorithm understanding is to perform search on well-defined, well-understood landscapes. To be of interest these landscapes need to represent interesting and important aspects of real-world problems.

The first single-instance problem is the deceptive trap function, which breaks the genome into *k* bit nonoverlapping subproblems referred to as traps. Each subproblem is scored using Equation (4), where *t* is the number of bits in the trap set to 1. The global optimum in each trap is a string of all 1s, while all other solutions lead to a local optimum of all 0s. This problem tests an algorithm’s ability to overcome *k*-sized deception and is commonly used to determine how effective crossover is at preserving building blocks. Any crossover event that mixes bits from different parents in the same trap will likely result in that trap being optimized to the local optimum. For our experiments we chose *k* = 7 to ensure highly deceptive traps.
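Equation (4) is not reproduced here, but the standard deceptive trap scoring it refers to can be sketched as follows (assuming the common formulation in which only the all-ones trap receives the full score and all other settings slope toward all zeros):

```python
def trap(t, k=7):
    """Deceptive trap score for one k-bit trap with t ones: the
    all-ones setting is optimal; every other setting rewards zeros."""
    return k if t == k else k - 1 - t

def deceptive_trap_fitness(genome, k=7):
    """Sum of trap scores over consecutive nonoverlapping k-bit traps."""
    assert len(genome) % k == 0
    return sum(trap(sum(genome[i:i + k]), k)
               for i in range(0, len(genome), k))
```

The deception is visible in the scores: a trap with six of seven bits set scores 0, while the all-zeros trap scores 6, so hill climbing within a trap is pulled away from the global optimum.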

The deceptive step trap modifies the deceptive trap by adding fitness plateaus of width *s*, introducing an exponential number of local optima into each trap. With *k* = 7 and *s* = 2, as used in our experiments, all traps with 0, 1, 3, 5, and 7 bits set are local optima. This means that half of all ways to set the trap are 1 bit local optima. More generally, the number of local optima grows exponentially, at roughly 2^{*k* − 1}. As a result, the deceptive step trap is much more challenging for linkage learning techniques, while still being highly deceptive.

Another challenging aspect of landscapes can be higher-order relationships. The hierarchical if and only if (HIFF) problem (Watson et al., 1998) is designed to capture the difficulties of this class of problem. In HIFF the genome is broken up into a complete binary tree, such that each gene appears in exactly one leaf and each internal node covers the subset of genes contained in its children. If all genes represented in a node of the tree are set to the same value, the node scores equal to the size of that set. In this way small subsets lead toward solutions to larger subsets. However, a node only scores if its genes are either all 1s or all 0s, meaning that to solve higher-order subproblems it is necessary to perform crossovers that preserve lower-order solutions. This problem is a natural fit for LTGA because the linkage tree can perfectly duplicate the problem’s true relationships (Thierens and Bosman, 2013).
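A sketch of HIFF scoring under these rules, assuming the genome length is a power of two. Sibling pairs with equal values propagate that value upward; a mixed pair (`None`) scores nothing at all higher nodes containing it:

```python
def hiff(genome):
    """Each tree node whose covered genes are all equal scores the
    node's size; a mixed node (None) blocks its ancestors."""
    n = len(genome)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    total, level, size = 0, list(genome), 1
    while True:
        total += sum(size for v in level if v is not None)
        if len(level) == 1:
            return total
        # Merge sibling pairs: equal children propagate their value up.
        level = [a if a == b else None
                 for a, b in zip(level[::2], level[1::2])]
        size *= 2
```

Both all-ones and all-zeros are global optima (for length 8 each scores 8 + 8 + 8 + 8 = 32), which is why preserving whole converged blocks during crossover, rather than mixing them, is essential on HIFF.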

The final single-instance problem is the Rastrigin function, a highly multimodal real-valued landscape. Each *x _{i}* in Equation (6) is encoded using a 10 bit gray code.

_{i}### 4.2 Randomly Generated Problem Classes

While well-defined landscapes can provide specific insights into how an algorithm works, their static nature can be misleading. Specifically, algorithm quality might be very fragile such that it is only effective at searching well-behaved landscapes. A more realistic test of an algorithm’s black box effectiveness is to work with randomly generated instances drawn from a problem class. With testing over a sufficiently large sample it is possible to draw more general conclusions about the algorithm’s effectiveness. The challenge with these landscapes is determining the global optimum to gauge if an algorithm was successful.

Perhaps the most common model for generating random rugged landscapes is the NK model. An NK landscape determines the fitness contribution of each gene based on epistatic relationships with *K* other genes in the genome. This fitness is specified using a randomly generated table of fitness values, where each possible combination of the related genes is mapped to a floating point value in [0, 1). In unrestricted NK landscapes the relationships between genes are also randomly chosen, and as a result finding the global optimum is NP-hard for *K* ≥ 2. However, if epistasis is set such that each gene depends on the *K* genes directly following it in the genome, the global optimum can be found in polynomial time (Wright et al., 2000). These nearest neighbor NK landscapes are therefore ideal for search algorithm testing. For all of our experiments we fixed *K* = 5 to ensure highly rugged landscapes.
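Evaluation of a nearest neighbor NK landscape can be sketched as follows, assuming table values drawn uniformly from [0, 1) and circular wrap-around for the last genes (both common conventions; function names are our own):

```python
import random

def make_nearest_neighbor_nk(n, k, rng):
    """Random fitness table: for each gene, one value per setting of
    the gene and its k circular successors (2**(k+1) combinations)."""
    return [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

def nk_fitness(genome, table, k):
    """Each gene's contribution is looked up from its own value plus
    the k genes directly following it (wrapping around the genome)."""
    n = len(genome)
    total = 0.0
    for i in range(n):
        index = 0
        for j in range(k + 1):
            # Pack the (k+1)-bit neighborhood into a table index.
            index = (index << 1) | genome[(i + j) % n]
        total += table[i][index]
    return total
```

Because each gene interacts only with its immediate successors, dynamic programming over the chain can recover the global optimum in polynomial time, which is what makes this restricted class usable for success-rate testing.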

Our second randomly generated problem class is the Ising spin glass, whose fitness is given by

$$f(x) = \sum_{(i,j) \in E} e_{ij} x_i x_j,$$

where *E* is the set of all edges, *e _{ij}* is the edge weight connecting vertex *i* to vertex *j*, and *x _{i}* and *x _{j}* are the gene values for vertices *i* and *j*. Optimal fitness is when this sum is minimized. Similar to NK landscapes, the general class is NP-hard to optimize, but the subset of Ising spin glass instances used here can be polynomially solved. In this subset the graph is restricted to be a two-dimensional torus, edge weights are randomly set to either −1 or 1, and vertex values must be −1 or 1.
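A sketch of this evaluation, mapping {0, 1} genes to {−1, +1} spins; the torus construction and weight choice follow the restrictions just described, while the function names are illustrative:

```python
def spin_glass_fitness(genome, edges):
    """Sum of e_ij * x_i * x_j over all edges; lower is better.
    Genes in {0, 1} are mapped to spins in {-1, +1}."""
    spin = [2 * g - 1 for g in genome]
    return sum(w * spin[i] * spin[j] for i, j, w in edges)

def torus_edges(side, weights):
    """Edges of a side x side two-dimensional torus; each vertex
    connects to its right and down neighbors (with wraparound)."""
    edges, w = [], iter(weights)
    for r in range(side):
        for c in range(side):
            v = r * side + c
            edges.append((v, r * side + (c + 1) % side, next(w)))    # right
            edges.append((v, ((r + 1) % side) * side + c, next(w)))  # down
    return edges
```

With all weights −1 and all spins equal, every edge contributes −1, which is the minimum.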

As our final class of randomly generated problems we chose the maximum satisfiability (MAX-SAT) problem. Related to the more common 3-SAT problem, a MAX-SAT instance is defined by a set of three-term clauses, with fitness equal to the number of satisfied clauses. Each term is a randomly chosen variable, which may also be negated, and a clause is satisfied if and only if at least one of its terms evaluates to true. In order to make MAX-SAT instances with a known global optimum, Goldman and Punch (2014) proposed constructing clauses around a fixed solution. In this way the signs of the terms are set to ensure the target solution satisfies every clause. To ensure that each problem would be challenging, we chose a clause-to-variable ratio known to produce hard instances (Selman et al., 1996).
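A sketch of this construction, forcing one randomly chosen term per clause to agree with a hidden target solution; the clause count and seeding here are illustrative, not the instance generator used in the experiments:

```python
import random

def make_maxsat(n_vars, n_clauses, seed=0):
    """Generate 3-term clauses that a hidden target solution satisfies
    (after Goldman and Punch, 2014). Returns (clauses, target).
    A term (v, s) is true when the genome's value for v equals s."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_vars)]
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(n_vars), 3)
        signs = [rng.randint(0, 1) for _ in vs]
        keep = rng.randrange(3)
        signs[keep] = target[vs[keep]]  # force one term to match the target
        clauses.append(list(zip(vs, signs)))
    return clauses, target

def maxsat_fitness(genome, clauses):
    """Number of satisfied clauses."""
    return sum(any(genome[v] == s for v, s in clause) for clause in clauses)
```

By construction, the target solution satisfies all clauses and is therefore a global optimum.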

## 5 Comparison Algorithm Parameter Tuning

While four of the six algorithms in our experiments do not require any user-specified parameters, hBOA and LTGA both use a population size parameter. To ensure these techniques are not unfairly handicapped, we extensively tuned each using the bisection method (Sastry, 2001) to determine the optimal population size for each problem size. As extended by Goldman and Tauritz (2012), this method iteratively doubles the population size until some success criterion is met and then performs bisection between the lowest successful and highest unsuccessful sizes. In this way the minimum population size that meets the success criterion is found. Goldman and Punch (2014) proposed a success criterion of performing *r* successful runs in a row, which bounds the expected failure rate from above (Jovanovic and Levy, 1997). As P3 and the other three algorithms do not prematurely converge, we chose *r* large enough to ensure that hBOA and LTGA would almost never do so. As bisection can produce arbitrarily large population sizes, any run that had not found the global optimum after 100 million evaluations or 128 computing hours was considered unsuccessful.
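The doubling-then-bisecting procedure can be summarized as follows, assuming `success` is a monotone predicate (larger populations never hurt); in the actual experiments the predicate wraps *r* consecutive successful runs:

```python
def bisection(success, start=1):
    """Minimal population-sizing bisection (after Sastry, 2001):
    double until success(pop_size) holds, then bisect between the
    largest known failure and the smallest known success."""
    lo, hi = start, start
    while not success(hi):
        lo, hi = hi, hi * 2
    # invariant from here on: success(hi) holds, success(lo) does not
    # (unless the starting size already succeeded, in which case lo == hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if success(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For example, `bisection(lambda n: n >= 37)` doubles up to 64, then narrows to return 37.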

Figure 1 shows the results of performing bisection. In general, hBOA required population sizes at least an order of magnitude larger than LTGA's. Because of runtime and memory overhead, finding the optimal value for hBOA was much less tractable than for LTGA at moderate to large problem sizes. LTGA's population size also grew significantly more slowly than hBOA's as problem size increased, especially on the two trap problems and the Rastrigin problem. Both algorithms were very ineffective on MAX-SAT; neither could be tuned for problem sizes over 60 bits. This is likely because some randomly generated MAX-SAT landscapes are very flat and highly deceptive (Rana and Whitley, 1998).

While hill climbing is not currently treated as a parameter, we also performed preliminary tests of integrating hill climbing into LTGA and hBOA. To match P3, we applied first-improvement hill climbing to each algorithm’s initial population. We then performed bisection on the modified algorithms for the largest problem sizes where hBOA without hill climbing was effective. We found that in general both methods performed worse when combined with hill climbing, in some cases up to an order of magnitude worse. There were three exceptions: both improved on MAX-SAT, and hBOA improved on Rastrigin. In all cases the inclusion of hill climbing did not result in either algorithm outperforming P3 in terms of evaluations required to reach the global optimum. As such, all further experiments use the unmodified published versions.

## 6 Finding the Global Optimum

Figure 2 shows the median number of evaluations required by each of the six algorithms to find the global optimum for multiple sizes of each problem. Each data point in Figure 2 represents the median of 100 runs, where unsuccessful runs are treated as requiring more evaluations than any successful run. If the median run was not successful, no point is shown. Medians are used because the data are not normally distributed, and because they allow for more meaningful comparison between techniques with different success rates. The maximum problem size used for each problem was set to the largest size for which we could feasibly determine LTGA's optimal population size. For many larger problems, results are not shown for hBOA because of the extreme computational cost required to optimally set its population size.

### 6.1 Quantitative Comparison

Of the 130 tested configurations, P3 found the global optimum using the fewest median evaluations on 114. The largest problem size on which P3 was not the most efficient has 49 bits, with P3 achieving the best results on all 92 larger configurations. hBOA, LTGA, and parameter-less hBOA only outperform P3 on the smallest five, four, and one deceptive step trap instances, respectively. The random restart hill climber outperforms P3 on the smallest three nearest neighbor NK instances and the smallest Ising spin glass. The 1+(λ,λ) GA has the most success outperforming P3, doing so on the smallest five deceptive trap, smallest three deceptive step trap, and smallest two Rastrigin instances. The likelihood that P3 would achieve these pairwise results, assuming its median result is actually worse, is vanishingly small according to the binomial test. Pairwise comparison of LTGA and P3 on the largest problem size using the Mann-Whitney U test yields statistically significant differences on all problems.

### 6.2 Local Search

The random restart hill climber and the 1+(λ,λ) GA are both relatively effective on small problem sizes. This is especially true for the three randomly generated problem classes, which may contain relatively few local optima or may simply be exceptionally difficult for the model-based algorithms. On deceptive trap and deceptive step trap using four or fewer traps, the 1+(λ,λ) GA performs significantly better than any other algorithm. We believe this is because it is able to overcome deception by probabilistically flipping entire traps. This ability also leads the 1+(λ,λ) GA to outperform the random restart hill climber on all problems except nearest neighbor NK.

On larger problem sizes, the ability of local search to reach the global optimum quickly diminishes. Only on MAX-SAT are these optimizers competitive at the larger tested problem sizes; however, we believe this is because the largest tested MAX-SAT was an order of magnitude smaller than the largest size tested for most other problems. As the problem size increases, the number of local optima increases exponentially, which explains why the random restart hill climber was unable to scale. For larger problems it also becomes increasingly unlikely for the 1+(λ,λ) GA to make the right combination of changes required to reach the global optimum. This behavior causes high variance in success rate, as evidenced by its occasional successes on large deceptive trap problems.

### 6.3 Model Building

Only techniques that explicitly built models of gene epistasis were able to solve the largest problem instances. On the single-instance problems LTGA was more effective than hBOA, while hBOA outperformed LTGA on nearest neighbor NK and Ising spin glass. This may be due to differences in modeling method: unlike the single-instance problems, gene epistasis in the randomly generated problem classes cannot be perfectly represented by a linkage tree.

Considering how different hBOA and LTGA are in performing optimization, it is somewhat surprising how similar their results are on HIFF. However, both techniques rely on populations large enough to support the diversity required to reach the global optimum and to model epistasis. Both techniques also only rebuild models once per generation. As the subproblems of HIFF are nested, it is unlikely that either technique can accurately model higher-order epistasis before solving lower-order subproblems. Therefore both methods require one generation per subproblem order.

### 6.4 P3

Unlike the other model-based methods, P3 generally outperforms the random restart hill climber and the 1+(λ,λ) GA even on small problem sizes. Unlike the local search methods, P3 also outperforms LTGA and hBOA on large problem sizes. This implies that P3 gains the benefits of each, leveraging local search to solve easy problems and model building to solve harder ones.

Furthermore, the interaction between these two optimization tools explains some of the reason P3 outperforms each method alone. On deceptive trap, P3’s use of hill climbing ensures all traps are immediately optimized, allowing for perfect linkage detection and high-quality donation. On HIFF, local search solves all pairwise subproblems, saving P3 a generation over LTGA and hBOA. By comparison P3 is only a slight improvement on deceptive step trap, which is less amenable to local search.

## 7 Fitness over Time

For some applications, finding the global optimum is less important than finding good solutions quickly. Therefore we examine this behavior in Figure 3. At regular intervals during optimization Figure 3 shows the median of the best fitnesses found at that point of search across 100 runs. For each problem we show the largest problem size for which we were able to successfully gather results for all six algorithms, but the trends shown are representative of all larger problem sizes. The maximum reporting interval is set to include the slowest P3 run to reach the global optimum.

### 7.1 Quantitative Comparison

Of 181 sample points, P3 had the highest median fitness on 121. In pairwise competition, the 1+(λ,λ) GA was the most likely to outperform P3, doing so on 50 sample points. LTGA, hBOA, and parameter-less hBOA were the next best, outperforming P3 on 27, 20, and 18 sample points, respectively. The random restart hill climber almost never outperformed P3, doing so only nine times. The likelihood that P3 would achieve these pairwise results, assuming its median result is actually worse, is vanishingly small according to the binomial test.

### 7.2 Local Search

Perhaps the most striking result is the quality of the 1+(λ,λ) GA. Until quite far into search, this method performs better than both LTGA and hBOA. Given sufficient evaluations it also outperforms the random restart hill climber on all seven problems. For brief periods in the middle of search it performs the best of all techniques on the deceptive trap, deceptive step trap, HIFF, Ising spin glass, and MAX-SAT problems. The 1+(λ,λ) GA's ability to efficiently incorporate gene modifications larger than one bit allows it to overcome the deception and plateaus in deceptive trap and deceptive step trap, solve medium-sized subproblems in HIFF, flip the signs of multiple adjacent bits in Ising spin glass, and cross plateaus in MAX-SAT. However, this method is slow to reach the global optimum on many of these problems, which causes it to eventually be overtaken by the model-building techniques.

### 7.3 Model Building

Both hBOA and LTGA are marked by periods of little improvement followed by rapid improvement. In hBOA this is taken to the extreme, with all fitness improvement made at the very end of search. In both cases this is caused by model building. Before the model is accurate, little improvement is made. Once it is accurate, fitness improves dramatically.

At 58% of the recording intervals, hBOA has the worst fitness of any solver. Most of the exceptions occur when hBOA is still evaluating its initial population, allowing this random search to temporarily surpass the local search methods. After *N* evaluations, however, hBOA and LTGA both fall behind until their models begin to improve. Parameter-less hBOA reaches intermediate fitnesses faster than hBOA, doing so on 62% of intervals, as its models begin to optimize earlier than hBOA. However, this trend is reversed after a sufficient number of evaluations, most clearly on deceptive step trap and Ising spin glass, as hBOA’s tuned population overtakes parameter-less hBOA’s parallel populations.

On every problem LTGA has four distinct periods: fitness plateau, near instantaneous improvement, fitness plateau, and improvement to the global optimum. The first plateau corresponds to initialization of the population, with the first fitness gain achieved immediately upon completing the first generation. When using an inaccurate model, LTGA's mixing strategy performs a sort of less effective local search, so subsequent generations make only minor fitness improvements. Once the model becomes accurate and the probability of a crossover using high-quality genetic material increases sufficiently, LTGA enters a second period of rapid improvement.

### 7.4 P3

The integration of hill climbing into P3 makes it strictly better than using hill climbing alone. Early in optimization P3 and the random restart hill climber have effectively identical quality. This is because P3 performs the same evaluations as the hill climber for the first two restarts. Once P3 begins performing crossover, it immediately improves over the hill climber. In 95% of intervals, P3 had a fitness at least as high as hill climbing. As such P3 is better than a simple hill climber regardless of how long each technique is run and irrespective of how high-quality the solution found has to be.

Unlike the model-based methods, which struggle until model accuracy improves, P3’s iterative solution integration allows it to improve much more quickly. This behavior exists in most problems, but is easiest to understand on deceptive trap. On this problem, P3 immediately brings all traps to local optima, equaled only by the random restart hill climber in quality. By comparison LTGA must evaluate the entire population and perform multiple generations to reach similar quality. P3 is able to immediately integrate optimal versions of each trap into a single individual as they are found by local search, resulting in smoother fitness improvement than LTGA.

## 8 Computational Expenses

While it is common in evolutionary computation to assume the evaluation function will dominate algorithm complexity, in some domains this will not be true. Model-based methods are especially likely to violate this norm. Therefore, in order to assess P3’s quality in solving problems with efficient fitness functions, we provide data on both its algorithmic complexity and wall clock time.

### 8.1 Operation Counting

When discussing the asymptotic complexity of P3 in Section 3, two aspects eluded precise analysis: how expensive model rebuilding is, and how many gene donations are made. Figure 4 provides some insight into how often these two aspects of the algorithm are utilized.

Figure 4a reports, in algorithmic terms, how expensive model rebuilding is during search. To calculate this value we recorded how many times search rebuilt the model during each run. Figure 4a then shows the estimated ratio of model rebuilding cost (*N*^{2} per rebuild) to evaluation cost (*N* per evaluation). If the cost of model building scaled linearly with evaluations, the relationship plotted for each problem would be asymptotically constant. For nearest neighbor NK, Ising spin glass, and Rastrigin this is the case. For both trap problems and HIFF there is slow growth in the ratio. The problem sizes used for MAX-SAT were not sufficient to accurately gauge the asymptotic behavior. Together this suggests that while the cost of building the model is almost linear per evaluation, it can grow slowly. However, even in the worst case (HIFF) this growth was no more than twice the algorithmic cost of an evaluation, even at 2,048 bits.

When applying a crossover subset, P3 tries random donors from the population until one is found with at least one bit different from the improving solution inside that subset. In theory this can require a number of attempts proportional to the population size. Figure 4b examines the observed average number of donation attempts per evaluation performed. Ising spin glass, HIFF, and Rastrigin all achieve effectively constant behavior here, implying repeated donation does not impact the asymptotic runtime of P3. Both trap functions and nearest neighbor NK increase in number of donations as problem size increases, potentially increasing algorithmic costs. An important note is that each donation may range in size from a single bit up to nearly the entire genome; however, repeated donation attempts are far more likely to happen with smaller clusters. As such this may cause some superlinear growth in P3's runtime, but it is unlikely to be very high.
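The donor-retry loop can be sketched as follows. This shows only the retry behavior; it omits the fitness comparison P3 applies before keeping the donated genes, and the function name is illustrative:

```python
import random

def donate(solution, cluster, population, rng=random):
    """Try random donors until one differs from `solution` inside
    `cluster`; copy the donor's genes across the cluster.
    Returns the modified copy, or None if every donor matches."""
    donors = list(population)
    rng.shuffle(donors)  # random order without repeating a donor
    for donor in donors:
        if any(donor[i] != solution[i] for i in cluster):
            trial = list(solution)
            for i in cluster:
                trial[i] = donor[i]
            return trial
    return None  # no donor can change the solution in this cluster
```

In the worst case every member of the population is tried, which is the source of the potential superlinear cost noted above.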

### 8.2 Wall Clock Performance

#### 8.2.1 Model Building

hBOA and parameter-less hBOA perform much worse when using wall clock time as the unit of comparison than when using evaluations. This makes sense because hBOA's model building requires quadratic time per evaluation while, under reasonable assumptions, P3 and LTGA require only linear time per evaluation. This penalty is most clear on Ising spin glass, where hBOA goes from being slightly more efficient than LTGA in terms of evaluations to three orders of magnitude worse in terms of seconds. As P3 and LTGA have per-evaluation complexity similar to the hill climber and the 1+(λ,λ) GA, no similar change in ordering occurs.

#### 8.2.2 P3

When LTGA is optimally tuned to a single-instance problem with an efficient evaluation function, it can find the global optimum faster than P3 in terms of wall clock time. However, on randomly generated problem classes, P3’s efficient use of evaluations is enough to overtake LTGA.

On the four single-instance problems, LTGA not only finds the global optimum using less wall clock time, but the factor of speedup increases with problem length. Naïvely, this suggests that LTGA is achieving a lower order of complexity. However, for these experiments LTGA's measured time per evaluation grows sublinearly, which cannot be asymptotically stable because of (at minimum) the time required to perform an evaluation. We suspect that the true cause is that *N* is small enough for the measurements to be overshadowed by lower-order polynomial terms. For example, LTGA rebuilds the linkage model from the frequency table once per generation, a cost amortized over the entire population. For small *N*, runtime may be dominated by model building instead of pairwise frequency extraction.

When applied to randomly generated problem classes, the differences in P3 and LTGA’s evaluation complexity dominate runtime complexity. As with Figure 2, the amount of speedup P3 achieves over LTGA increases with problem size on nearest neighbor NK, Ising spin glass, and MAX-SAT.

Across both types of problems we find that P3's time per evaluation grows approximately linearly. Thus we conclude that P3 requires an amount of time per evaluation asymptotically similar to the other efficient techniques.

## 9 Population Sizing

A major advantage to P3 is that it does not require the user to set a population size parameter. Beyond making P3 easier to apply, this also conveys two additional advantages: diversity scaled to initialization and no need to sacrifice intermediate fitness for eventual optimality.

Figure 6a shows how the total number of solutions stored in the pyramid changes as problem size increases, similar to Figure 1 for hBOA's and LTGA's tuned population sizes. As expected, the number of concurrently stored solutions increases as problem difficulty increases, with the exact behavior dependent on the problem landscape. Figure 6b shows how the number of stored solutions is distributed at the largest problem sizes. Here we see that the behavior depends on the type of problem. On single-instance problems, the number of solutions P3 stores has relatively low variance and is generally higher than optimally tuned LTGA's population size. On randomly generated problem classes, P3 has much higher variance in stored solutions but in general requires fewer than LTGA.

### 9.1 Problem Instance versus Problem Class

Our procedure for tuning LTGA and hBOA (see Section 5) involved finding the optimal population size for each class of problem. For real-world black box optimization this is realistically the best that either algorithm could hope for, because tuning to a problem instance or population initialization involves repeatedly solving the problem being tuned. This limitation does not exist in parameter-less methods, which scale their diversity based on the problem instance without needing to solve that instance repeatedly.

To achieve high success rates on randomly generated problem classes, LTGA and hBOA must use a population size large enough to solve the hardest instances in that class. Therefore these methods will have population sizes larger than necessary to solve the easiest instances in the class. Even on single-instance problems, both methods will require population sizes large enough to ensure that the worst random initialization is diverse enough to solve the problem, which may be much larger than the best random initialization.

Figure 7 highlights how this can affect the required number of evaluations to reach the global optimum, showing the distribution of results when solving the largest size of each problem. On each problem LTGA has a much smaller difference between its best and worst runs. This makes sense because LTGA uses the same population size regardless of instance and progresses search generationally. By contrast, P3's distribution is much wider, with many runs finishing very quickly. On all problems except deceptive step trap, P3's upper quartile is lower than LTGA's lower quartile. Furthermore, on deceptive trap, HIFF, and Ising spin glass, P3's worst run is better than LTGA's best run. For nearest neighbor NK, most of P3's runs finish much faster than the fastest LTGA runs, though some of P3's outliers take approximately as long as LTGA's tuned performance. This supports the hypothesis that P3 is able to scale its diversity not just to the problem class but to the problem instance or even the initial population, something wholly infeasible for tuned population sizing.

This tuning distinction is also apparent when comparing parameter-less hBOA with hBOA (see Figure 2). While the former generally performs worse than hBOA, the difference between the two algorithms is smallest on randomly generated problem classes. On MAX-SAT, parameter-less hBOA actually outperformed both hBOA and LTGA, likely because of its ability to scale diversity to the problem instance instead of the entire problem class.

### 9.2 Fast versus Optimal

In Section 7 we examined intermediate fitness qualities of LTGA and hBOA when using population sizes tuned to reach the global optimum. Both were exceptionally ineffective at quickly reaching high-quality solutions. This is because unlike P3, these methods have an explicit trade-off between optimal performance and intermediate performance caused by their population size parameter.

Figure 8 examines the effect of population size on LTGA's intermediate fitness by reducing LTGA's population size to one-tenth of the tuned value. The two problems shown are representative of the behavior of using a smaller population size on the other five problems. Reducing the population size caused LTGA to improve earlier but plateau at lower fitnesses. This caused LTGA's success rate to drop from 100% to 0% on deceptive step trap and from 98% to 68% on nearest neighbor NK. Even against this reduced population size, P3 still achieved a fitness at least as high as LTGA's at 80% of intervals. The likelihood that P3 would achieve these pairwise results, assuming its median result is actually worse, is vanishingly small according to the binomial test.

## 10 Inner Workings

While analysis of optimization speed is useful from a practitioner's standpoint, it provides little insight into algorithm behavior. To better understand how P3 works in detail, we examine here some internal features specific to P3.

### 10.1 Crossover

Figure 9a shows the proportion of evaluations P3 spends on crossover as opposed to hill climbing, and Figure 9b shows what percentage of crossover evaluations resulted in a fitness improvement. Together these figures provide some insight into the role of crossover within P3. The behavior for each is clearly problem dependent and generally asymptotically stable as problem size increases.

When solving problems where epistasis can be effectively detected and represented by a linkage tree, P3 tends to spend fewer evaluations performing crossover, and each crossover is more likely to be successful. Deceptive trap and Rastrigin are the problems whose epistasis is easiest to model, with local search quickly reducing pairwise entropy in each. These are also the problems where P3 uses the fewest evaluations on crossover and has the highest crossover success rates. At the other extreme are nearest neighbor NK and Ising spin glass, which both have overlapping linkage that a linkage tree cannot represent. These problems have the highest crossover usage and lowest crossover success of any problem except deceptive step trap. While deceptive step trap's epistasis can be accurately modeled by a linkage tree, its exponential number of plateaus makes detecting gene linkage very challenging.

When compared with LTGA, P3’s crossover success rates are lower but similar in problem ordering. Counterintuitively, the use of hill climbing on the initial population reduces P3’s crossover success, not because it reduces model quality or the donation pool, but because it is much more challenging to improve locally optimal solutions than randomly generated ones. LTGA’s crossover benefits from application to unoptimized solutions, which makes its aggregate crossover success incomparable to P3’s.

Even when crossover success rates are quite low, such as nearest neighbor NK’s 0.007% success, the results discussed in Sections 6 and 7 show it is still critical to optimization. Without crossover, P3’s performance would be identical to the random restart hill climber, which was unable to solve even moderate sized problems and quickly fell behind P3 in intermediate fitness quality. Therefore even infrequently successful crossover donations are critical to success. This does, however, suggest a potential avenue for future improvement by using more successful modeling and donation algorithms.

### 10.2 Pyramid

Another feature unique to P3 is the shape and size of the population pyramid constructed for each problem. Figure 10a shows the number of solutions stored at each level of the pyramid for the largest tested problem sizes. Each point is the median size across 100 runs. If a run did not store any solutions at a level, it is treated as zero. No point is drawn if the median run had zero solutions stored at that level. While pyramid size is affected by problem size, the overall shape is not. As such the behavior shown in Figure 10a is representative of that for all other tested problem sizes.

With the exception of the dip in deceptive step trap, all the pyramids show a monotonic reduction in size as the level increases. This is because a solution must be a strict fitness improvement over its previous version to be added to a higher level, which becomes less likely each time the solution improves. Lobo (2011) found theoretical evidence, and Goldman and Tauritz (2011) found empirical evidence, that the optimal population size decreases with each generation of traditional evolutionary search. By decreasing in size, P3 implicitly stores more diversity in low levels and focuses search on high-quality solutions at higher levels. By comparison, LTGA and hBOA suboptimally use a fixed population size at each generation.

Figure 10b examines how crossover success changes at different levels of the pyramid. At low levels, success gets progressively lower as solution quality increases faster than the model's ability to improve solutions. At higher levels, modeling becomes more accurate and donations contain higher frequencies of high-quality building blocks, resulting in increased crossover success. The highest level of most problems has a low crossover success rate, as solutions crossing with that level have already been improved by previous operations to the point where the only remaining improvement is to create the global optimum, which can only happen once.

#### 10.2.1 Deceptive Step Trap

The number of solutions stored at the fourth level of deceptive step trap is significantly lower than that of the third or fifth levels, breaking the decreasing trend of the other six problems. Figure 10b has a similar aberration, with crossover success dropping to 0.0004% before rebounding and following the more common trajectory. This behavior exists in all other problem sizes tested, with the dips occurring at exactly the same level.

This behavior is rooted in the peculiar nature of this landscape. After local search, every trap in every solution has a total number of 1 bits equal to 0, 1, 3, 5, or 7, as these correspond to the local optima when using traps of size *k* = 7 and step size *s* = 2. Crossover is very likely to overcome the two-bit plateaus, and as a result solutions in the second level generally do not contain the lowest-fitness local optimum (5 bits set) and the third level has very few traps set to the next worst local optimum (3 bits set). Consequently, solutions that reach the third level can only be improved by replacing the deceptive local optima (0 and 1 bits set) with the global optimum (7 bits set). The global optimum is very rare in the population, and with eight ways to represent the deceptive local optima, linkage learning is inaccurate. Therefore it is very unlikely for crossover to be successful, meaning few solutions are added to the fourth level. Solutions that do improve must, by definition, have a higher frequency of optimal trap settings, meaning level four's model will be more accurate and its donations more likely to contain optimal trap values. Thus level size and crossover success rates increase again after contracting around level four.
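The claimed set of local optima can be checked in a few lines. This sketch assumes trap size k = 7, step size s = 2 (inferred from the bit counts above), and the conventional step-trap offset of k mod s:

```python
def step_trap(ones, k=7, s=2):
    """Deceptive step trap score for one trap with `ones` bits set."""
    trap = k if ones == k else k - 1 - ones  # underlying deceptive trap
    return ((k % s) + trap) // s             # offset fuses pairs into plateaus

# Scores for each possible number of 1 bits in a trap.
scores = [step_trap(u) for u in range(8)]  # [3, 3, 2, 2, 1, 1, 0, 4]

# A bit count is a local optimum if no single bit flip strictly improves it.
local_optima = [u for u in range(8)
                if all(scores[v] <= scores[u]
                       for v in (u - 1, u + 1) if 0 <= v < 8)]
```

This recovers exactly the bit counts 0, 1, 3, 5, and 7 named in the text, with the deceptive optima (0 and 1 bits) scoring 3, the intermediate optima (3 and 5 bits) scoring 2 and 1, and the global optimum (7 bits) scoring 4.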

## 11 Conclusions and Future Work

The parameter-less population pyramid (P3) is a method for performing black box optimization. P3’s primary innovation is the replacement of the generational model with a pyramid of populations. This pyramid is constructed iteratively, with both the number of levels and the number of solutions stored at each level growing as search progresses. P3 uses a model-based crossover method that learns a linkage tree from gene epistasis. Combined with a simple hill climber, P3’s design contains many synergistic features.

Across a large number of problems and problem sizes, P3 required fewer evaluations to reach the global optimum than optimally tuned state-of-the-art competitors. On single-instance problems P3’s improvement was by a constant factor, while for the three randomly generated problem classes P3’s improvement increased with problem size. This quality extends to intermediate points during evolution, with P3 generally reaching at least as high fitness as the competitive techniques when using the same number of evaluations. While P3 does require modeling overhead, the expense of this overhead is approximately linear with respect to genome size. There is some evidence that even when compared on wall clock time, P3 performs on par with the best comparison techniques. All these achievements are made without any problem-specific parameter tuning, making P3 easier to apply to new domains than its two closest competitors in quality.

P3’s quality is due to a number of desirable traits. First, mixing local search with model-based crossover lets search focus on properly mixing high-quality solutions. Second, by adding diversity only as necessary P3 tends to use the minimal amount of random initialization, unlike other techniques that must overcompensate with larger population sizes on single-instance problems and consider the worst instance when solving problem classes. Third, by heavily exploiting existing diversity before adding more, P3 is able to reach high-quality intermediate fitnesses quickly without prematurely converging. Fourth, the very nature of the pyramid’s shape allows search to preserve a desirable proportion of diversity at each fitness level, similar to a generational model using a decreasing population size.

There are a number of meaningful avenues for future P3 experimentation. Perhaps the most pressing for practitioner acceptance is to apply P3 to real-world problems and compare its results with other black box or even problem-specific heuristics. While parameter-less, P3 is currently limited to discrete, fixed-length genomes evaluated using single-objective fitness. These limitations can be relaxed with future work to make P3 more widely applicable. While asymptotically linear in problem size, P3’s modeling techniques and local search methods are likely to be prohibitively expensive for genome sizes in the hundreds of thousands or millions of genes, and the inability of the model to capture overlapping linkage may be hindering search efficiency. Overcoming these limitations by using a new modeling technique may allow the pyramid model even greater flexibility. Similarly, while P3 is able to overcome low-order deception via linkage learning, the iterative improvement method by which crossovers are made may mislead search on landscapes with higher-order deception.

However, even without these improvements our results show P3 is highly efficient at finding global optima on black box problems without any problem-specific tuning.

## Acknowledgments

This material is based in part upon work supported by the National Science Foundation under Cooperative Agreement No. DBI-0939454. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

## Appendix: hBOA Simplification

The outermost product of Equation (8) iterates over all trees in the forest. However, each split can modify only one of the trees, and therefore the contribution of all others can be canceled. The middle product is across all leaves in the tree. Again, since only one leaf can be changed, all other terms can be canceled. By convention hBOA uses uninformed Bayesian priors of *m*′(*x _{i}*, *l*) = 1 and *m*′(*l*) = 2 for binary alphabets. As Γ(1) = Γ(2) = 1, this means the top term in the middle product and the bottom term in the third product reduce to 1. The only remaining terms are then *m*(*l*) and *m*(*x _{i}*, *l*), which represent the number of solutions which reached leaf *l* and the number of solutions which reached leaf *l* with a specific value for *x _{i}*, respectively.
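The cancellation can be checked numerically. The sketch below uses hypothetical counts and assumes the standard Bayesian–Dirichlet leaf term with the uninformed priors above; it is illustrative only.

```python
from math import gamma, factorial

# Uninformed binary priors (standard hBOA convention): m'(l) = 2, m'(x_i, l) = 1.
m_l = 7        # hypothetical count of solutions reaching leaf l
m_x = [3, 4]   # counts for each value of x_i at leaf l (they sum to m_l)

# Full Bayesian-Dirichlet leaf term...
full = (gamma(2) / gamma(2 + m_l)) * \
       (gamma(1 + m_x[0]) / gamma(1)) * (gamma(1 + m_x[1]) / gamma(1))

# ...and the simplified form: with gamma(1) = gamma(2) = 1, only factorials
# of the counts m(x_i, l) and m(l) remain.
simplified = factorial(m_x[0]) * factorial(m_x[1]) / factorial(m_l + 1)

assert abs(full - simplified) < 1e-12
```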

Equation (9) can also be simplified when doing comparisons. If model *B*′ has exactly one more leaf than model *B*, then the ratio of their priors simplifies to 2^{−0.5 log_{2}(*N*)} = 1/√*N* regardless of total model size.

The resulting simplifications create Equation 1, where *B*′ differs from *B* by exactly one split, such that *l* was split to create *l*_{0} and *l*_{1}. The best split is whichever maximizes its improvement over *B*, which is equal to the right side of the inequality. Note that these factorials can still be exceedingly large, and therefore it is imperative that implementations avoid rounding errors and overflows.
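One way to honor that warning is to compare candidate splits entirely in log space with `lgamma`, so no factorial is ever formed explicitly. The helper names and the acceptance threshold below (the 1/√*N* per-leaf penalty from the model prior) are a hedged sketch under the simplifications above, not the paper’s code.

```python
from math import lgamma, log

def log_leaf_term(counts):
    """Log of a leaf's contribution under uninformed binary priors:
    prod_i m(x_i, l)! / (m(l) + 1)!, computed via lgamma to avoid overflow."""
    m_l = sum(counts)
    return sum(lgamma(c + 1) for c in counts) - lgamma(m_l + 2)

def split_improves(counts_l0, counts_l1, N):
    """Hypothetical helper: accept splitting l into l0 and l1 when the
    likelihood gain exceeds the log of the sqrt(N) complexity penalty."""
    parent = [a + b for a, b in zip(counts_l0, counts_l1)]
    gain = (log_leaf_term(counts_l0) + log_leaf_term(counts_l1)
            - log_leaf_term(parent))
    return gain > 0.5 * log(N)  # log(sqrt(N)) penalty for one extra leaf

# An informative split is kept; an uninformative one is rejected.
print(split_improves([5, 0], [0, 5], N=10))   # True
print(split_improves([3, 3], [2, 2], N=10))   # False
```

Working in log space keeps every intermediate value small even when the counts reach the millions, where the corresponding factorials would overflow any fixed-width type.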

## Notes

2. Ties are broken randomly.