## Abstract

This article presents an exploratory landscape analysis of three NP-hard combinatorial optimisation problems: the number partitioning problem, the binary knapsack problem, and the quadratic binary knapsack problem. We empirically examine a number of fitness landscape properties of randomly generated instances of these problems. We believe that the studied properties give insight into the structure of the problem landscape and can be representative of the problem difficulty, in particular with respect to local search algorithms. Our work focuses on studying how these properties vary with different values of the problem parameters. We also compare these properties across various landscapes that were induced by different penalty functions and different neighbourhood operators. Unlike existing studies of these problems, we study instances generated at random from various distributions. We found a general trend in all three problems: several landscape features vary with the distribution from which the instance weights are drawn. We captured this variation by a single, easy-to-calculate parameter and showed that it has a potentially useful application in guiding the choice of the neighbourhood operator of some local search heuristics.

## 1 Introduction

In the field of evolutionary computation and meta-heuristics, the most common practice for evaluating the performance of a newly proposed algorithm is through what is known as “competitive testing,” where an extensive number of experiments is performed on benchmark problems and the results are then compared against the performance of other existing algorithms (Burke et al., 2009). Hooker (1995) warned against this approach more than 20 years ago, arguing that it is harmful for research in the field as it gives little or no insight into why or how the algorithm under test is better or worse than the others. Over the past few years, this line of research has been increasingly criticised for not advancing our knowledge and understanding of algorithm behaviour (Swan et al., 2015; Sörensen, 2015). The need for a deep understanding of problem features and how they relate to algorithm performance has consequently been recognised and encouraged by an ongoing initiative to guide the research in the field towards this direction (Swan et al., 2015; Sörensen, 2015). Indeed, understanding problem features and exploring algorithms' weaknesses and strengths has been a popular area of research, both as independent exploratory landscape analysis of problems (Tayarani-N. and Prügel-Bennett, 2015b,c,a; Prügel-Bennett and Tayarani-N., 2012; Daolio et al., 2015) and as part of automatic algorithm selection and configuration techniques (Smith-Miles and Lopes, 2012; Consoli et al., 2014).

Fitness landscape analysis emerged as an analytical framework to address the need to understand the relation between problem features and algorithm performance. Indeed, an extensive amount of research has been carried out over the past two decades in which various predictive measures were proposed to characterise problem difficulty. However, it was soon realised that a single general measure that accurately predicts the difficulty of all problems and that can be computed in polynomial time cannot exist unless P = NP, as rigorously proven by He et al. (2007). These measures should therefore be viewed instead as part of a toolbox of techniques to broadly characterise problems. Studying more than one measure or feature can help in getting a broader perspective and increases the chances of capturing various aspects of the problem difficulty.

Three NP-hard binary packing problems are studied in this article, each of which is an abstraction of many real-world problems. The problems are the number partitioning problem, the binary knapsack problem, and the quadratic binary knapsack problem. We provide a systematic landscape analysis studying a set of properties that we believe to be representative of the problems difficulties and that give insights into the structure of the landscape particularly with respect to local search. We provide a comparative analysis where we compare and contrast the landscape properties across different landscapes that were induced by different neighbourhood operators, penalty functions and problem parameters. To the best of our knowledge, this research is the first to conduct such extensive and comparative landscape analysis of these problems. The analysis examines landscape properties, neighbourhood operators and differently distributed random instances that have not been studied before in the literature. As a result of the landscape analysis, the study also proposes a good heuristic for determining which local search operator to choose for this class of binary packing problems.

The article is structured as follows. Section 2 presents preliminaries and definitions. Section 3 formally introduces the three problems under study and describes the random generation of their instances. Sections 4 to 6 discuss the properties of the different landscapes of all three problems and examine the similarities and dissimilarities between them. Section 7 examines the performance of local search and relates it to the corresponding landscape properties. The article is concluded in Section 8 with remarks on future work.

## 2 Preliminaries and Definitions

**Search Space.** The search space *X* is the finite set of all candidate solutions. The fitness functions of all the problems studied in this article are pseudo-Boolean functions, hence the search space size is $2^n$.

**Neighbourhood.** A neighbourhood is a mapping $N:X\to\mathcal{P}(X)$ that associates each solution with a set of candidate solutions, called neighbours, which can be reached by applying the neighbourhood operator once. The set of neighbours of *x* is denoted $N(x)$, and $x\notin N(x)$. We consider two different neighbourhood operators: the Hamming 1 operator (*H1*) and the 1+2 Hamming operator (*H1+2*). The neighbourhood of the *H1* operator is the set of points reached by a 1-bit flip mutation of the current solution *x*, hence the neighbourhood size is $|N(x)|=n$. The neighbourhood of the *H1+2* operator includes the Hamming one neighbours in addition to the Hamming two neighbours of the current solution *x*, which are reached by 2-bit flip mutations. The neighbourhood size for this operator is $|N(x)|=(n^2+n)/2$. Suppose we lay out the search space in circles around a configuration *x*, so that *x* is placed in the centre and the configurations that are *h* Hamming distance away from it lie on the circle of radius *h* (see Figure 1 for an illustrative example when $n=10$). For a configuration in the *h*-th circle, its *H1* neighbours will be spread out as follows: *h* of them will reside in the $h-1$ circle, and the rest ($n-h$) will reside in the $h+1$ circle. Its *H1+2* neighbours will be spread out over the $h-2$, $h-1$, $h$, $h+1$, $h+2$ circles as follows: $h(h-1)/2$, $h$, $h(n-h)$, $n-h$, $(n-h)(n-h-1)/2$, respectively.

Figure 1: An illustrative example of the search space laid out in Hamming circles around a configuration ($n=10$).

**Fitness Landscape.** The fitness landscape of a combinatorial optimisation problem is a triple $(X,N,f)$, where *f* is the objective function $f:X\to\mathbb{R}$, *X* is the search space and *N* is the neighbourhood operator function (Stadler and Stephens, 2002).

**Search Position Type.** For a given point $x\in X$ in the landscape, according to the topology and fitness values of its direct neighbourhood, it can belong to one of seven different types of search positions (Hoos and Stützle, 2005). The types are:

Strict local minimum (SLMIN): $\forall y\in N(x), f(y)>f(x)$.

Non-strict local minimum (NSLMIN): $\forall y\in N(x), f(y)\geq f(x)$, and $\exists u,z\in N(x)$ such that $f(u)=f(x)$ and $f(z)>f(x)$.

Interior plateau (IPLAT): $\forall y\in N(x), f(y)=f(x)$.

Ledge (LEDGE): $\exists u,y,z\in N(x)$ such that $f(u)=f(x)$, $f(y)>f(x)$, and $f(z)<f(x)$.

Slope (SLOPE): $\forall y\in N(x), f(y)\neq f(x)$, and $\exists u,z\in N(x)$ such that $f(u)<f(x)$ and $f(z)>f(x)$.

Non-strict local maximum (NSLMAX): $\forall y\in N(x), f(y)\leq f(x)$, and $\exists u,z\in N(x)$ such that $f(u)=f(x)$ and $f(z)<f(x)$.

Strict local maximum (SLMAX): $\forall y\in N(x), f(y)<f(x)$.
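The two neighbourhood operators and the seven-way classification above can be implemented directly. The following Python sketch is ours, not code from the article; the function names are illustrative.

```python
from itertools import combinations

def h1_neighbours(x):
    """H1 neighbourhood: every single-bit flip of x, generated left to right."""
    n = len(x)
    return [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(n)]

def h12_neighbours(x):
    """H1+2 neighbourhood: all single- and double-bit flips of x."""
    nbrs = h1_neighbours(x)
    for i, j in combinations(range(len(x)), 2):
        nbrs.append(tuple(b ^ (k in (i, j)) for k, b in enumerate(x)))
    return nbrs

def position_type(x, f, neighbours):
    """Classify x into one of the seven search position types (maximising f)."""
    fx = f(x)
    fs = [f(y) for y in neighbours(x)]
    gt = any(fy > fx for fy in fs)   # some neighbour is strictly better
    lt = any(fy < fx for fy in fs)   # some neighbour is strictly worse
    eq = any(fy == fx for fy in fs)  # some neighbour has equal fitness
    if gt and not lt:
        return "NSLMIN" if eq else "SLMIN"
    if lt and not gt:
        return "NSLMAX" if eq else "SLMAX"
    if gt and lt:
        return "LEDGE" if eq else "SLOPE"
    return "IPLAT"
```

For instance, under *H1* with $f(x)=\sum_i x_i$, the all-zeros string is a strict local minimum and the all-ones string a strict local maximum, and the neighbourhood sizes are $n$ and $(n^2+n)/2$ for the two operators, matching the counts given above.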

For the purpose of this article, we use the term *local optimum* to refer to both strict and non-strict local optima.

**Global Optima.** Assuming maximisation, a point $x\in X$ is a *strict global maximum* if it is a strict local maximum and $\forall y\in X, f(x)\geq f(y)$, and a point $x\in X$ is a *non-strict global maximum* if it is a non-strict local maximum and $\forall y\in X, f(x)\geq f(y)$.

**Plateaux.** A plateau is a set of connected non-strict local maxima, with or without interior plateau points. An exit is a neighbour of one or more configurations in the plateau that shares the same fitness value as the plateau but has an improving move. An exit can be a non-strict local minimum (maximum when minimising) or a ledge. We call a plateau open when it has at least one exit; otherwise we call it closed. We call a plateau of non-strict global maxima a global plateau. Obviously, all global plateaux are closed. We illustrate these definitions in Figure 2. Collecting information about the different plateau types gives an insight into the different plateau regions of the problems and can help inform the algorithm design and the choice of search operators. For example, a problem with more open than closed plateaux motivates the use of plateau moves.

Figure 2: An illustration of the plateau definitions.

**Local Search.** The local search algorithm under study in this research is steepest ascent (descent when minimising) with no plateau moves and with random restarts.

**Basin of Attraction.** The attraction basin $B(x^*)$ of an optimum $x^*\in X$ is the set of points that lead to it after applying local search, $B(x^*)=\{x\in X\mid \text{localsearch}(x)=x^*\}$. The basin of a plateau is the union of the basins of its configurations. The neighbours of a point *x* are evaluated in order from left to right with respect to bit flips; if more than one neighbour offers the best improving move, the first one is always selected. Of course, this deterministic way of choosing the improving move could introduce some bias into the size of the basin. However, only a small subset of such configurations appeared in the landscapes of the instances we studied. Thus, we speculate that the bias, if any, is quite small. A possible way to break the tie and avoid the bias is to choose randomly between the best improving configurations. This method, however, would render the landscape structure stochastic. To study the attraction basin shape, we use the return probability concept from Prügel-Bennett and Tayarani-N. (2012). The return probability to an optimum starting from a Hamming sphere of radius *h* around it is given by $p_r(h)$. We randomly sample *s* configurations that are *h* Hamming distance away from an optimum, apply local search to them, and calculate the fraction that led back to the starting optimum. The sample size *s* is obtained from $s=\frac{s_0 S}{s_0+(S-1)}$ and $s_0=\frac{z_{\alpha/2}^2\,\hat{p}(1-\hat{p})}{e^2}$, where *S* is the number of configurations in the Hamming sphere of radius $h$, $\hat{p}=0.5$ is the initial estimated proportion, $e=0.005$ is the error margin and $z_{\alpha/2}$ is the z-score, which is set to 1.645 (corresponding to a 90% confidence level; Alyahya and Rowe, 2016).
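A minimal sketch of the sampling scheme just described (our own code, with the paper's parameter values as defaults): steepest ascent with deterministic left-to-right tie-breaking, the finite-population sample size formula, and the resulting estimator of $p_r(h)$.

```python
import math
import random

def sample_size(S, p_hat=0.5, e=0.005, z=1.645):
    """Sample size for a Hamming sphere containing S configurations."""
    s0 = z ** 2 * p_hat * (1 - p_hat) / e ** 2
    return math.ceil(s0 * S / (s0 + (S - 1)))

def h1_neighbours(x):
    """H1 neighbours of x, generated left to right."""
    return [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(len(x))]

def steepest_ascent(x, f, neighbours=h1_neighbours):
    """Steepest ascent; max() keeps the first of several equally good moves,
    which gives the deterministic left-to-right tie-breaking described above."""
    while True:
        best = max(neighbours(x), key=f)
        if f(best) <= f(x):
            return x
        x = best

def return_probability(opt, h, f, s, rng):
    """Estimate p_r(h): the fraction of s sampled points at Hamming distance
    h from the optimum whose local search trajectory leads back to it."""
    n = len(opt)
    hits = 0
    for _ in range(s):
        flips = set(rng.sample(range(n), h))
        x = tuple(b ^ (i in flips) for i, b in enumerate(opt))
        hits += steepest_ascent(x, f) == opt
    return hits / s
```

With $\hat{p}=0.5$, $e=0.005$ and $z=1.645$, the base sample size is $s_0\approx27{,}060$, so spheres smaller than that are sampled almost completely.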

## 3 Problems

We study three NP-hard problems: the number partitioning problem (NPP), the binary knapsack problem (0-1KP), and the quadratic binary knapsack problem (0-1QKP). The three problems are similar in nature, in the sense that all of them fall into a class of NP-hard binary packing problems related to the 0-1 knapsack problem. A special case of the 0-1KP, known as the subset sum problem, is a generalisation of the NPP. The 0-1QKP is a variant of the 0-1KP in which the profit of an item also depends on the other selected items. The 0-1KP and the 0-1QKP are constrained optimisation problems and their search space is partitioned into a feasible and an infeasible region.

### 3.1 Number Partitioning Problem

Given a set $W=\{w_1,\dots,w_n\}$ of *n* *m*-bit positive integers (weights) drawn at random from the set $\{1,2,\dots,M\}$ with $M=2^m$, the goal is to partition *W* into two disjoint subsets $S,S'$ such that the discrepancy between them, $\left|\sum_{w_i\in S}w_i-\sum_{w_i\in S'}w_i\right|$, is minimised. A partition is called perfect if the discrepancy between the two subsets is 0 when the sum of the original set is even, or 1 when the sum is odd. Equivalently, the problem can be viewed as minimising $\max\left(\sum_{w_i\in S}w_i,\sum_{w_i\in S'}w_i\right)$, the maximum sum over the two subsets. Let $x\in\{0,1\}^n$, with $x_i=1$ if $w_i\in S$ and $x_i=0$ otherwise; the objective function to be minimised can be defined as:

$$f(x)=\left|\sum_{i=1}^{n}w_i(2x_i-1)\right|.$$

The binary representation of the NPP creates a symmetry in the search space, in the sense that a solution and its bitwise complement have the same fitness value. Thus, the number of unique solutions is $\leq 2^{n-1}$. The NPP is NP-hard in the weak sense (Garey and Johnson, 1979); that is, there exists an algorithm that can solve it in pseudo-polynomial time through dynamic programming. The complexity of such an algorithm, $O\!\left(n\,2^{\log_2\sum_{i=1}^{n}w_i}\right)$, is polynomial in the number of weights and the sum of the weights but exponential in the number of bits required to represent the sum. As Garey and Johnson (1979) note, such an algorithm displays exponential behaviour only when extremely large input numbers are allowed; its running time thus grows exponentially as *M* grows large.
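To make the definitions concrete, here is a small sketch of ours (not the article's code) of the discrepancy objective together with the pseudo-polynomial dynamic programme over reachable subset sums:

```python
def discrepancy(x, w):
    """NPP objective: absolute difference between the two subset sums."""
    s = sum(wi for wi, xi in zip(w, x) if xi)
    return abs(sum(w) - 2 * s)

def min_discrepancy(w):
    """Dynamic programme over the set of reachable subset sums; the state
    space grows with sum(w), hence pseudo-polynomial running time."""
    reachable = {0}
    for wi in w:
        reachable |= {r + wi for r in reachable}
    total = sum(w)
    return min(abs(total - 2 * r) for r in reachable)
```

Note the symmetry mentioned above: `discrepancy(x, w)` is unchanged when `x` is replaced by its bitwise complement.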

The NPP undergoes a sudden phase transition from solvability (a perfect partition exists) to insolvability (a perfect partition does not exist), determined by the control parameter $k=\log_2(M)/n$, which corresponds to the number of bits required to encode the numbers in the set divided by the size of the set. For $\log_2(M)$ and *n* tending to infinity, the transition occurs at the critical value $k_c=1$, such that for $k<1$ there are many perfect partitions with probability tending to 1, whereas for $k>1$ the number of perfect partitions drops to zero with probability tending to 1 (Borgs et al., 2001). A more detailed parameterisation of the critical value of the control parameter is given by the following^{1} (Mertens, 2001): $k_c=1-\frac{\ln\left(\frac{\pi}{6}n\right)}{2n\ln(2)}$.
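As a quick numerical check, the finite-size formula above can be transcribed directly (our code; a plain transcription, nothing more):

```python
import math

def k_critical(n):
    """Finite-size estimate of the critical control parameter (Mertens, 2001)."""
    return 1 - math.log(math.pi * n / 6) / (2 * n * math.log(2))
```

For example, $k_c(20)\approx0.915$, and $k_c(n)\to1$ as $n\to\infty$, consistent with the asymptotic critical value.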

The transition between the two phases appears in the size of the problem backbone. The backbone notion is borrowed from statistical physics and has been generalised to optimisation problems by Slaney and Walsh (2001), where it is defined as the set of decisions with fixed outcomes in all optimal solutions. In NPP, the pairs of weights that are placed in the same subset or in opposite subsets in all optimal solutions of an instance form the backbone of that instance. There is a very sharp increase in the backbone size of the optimal solutions in the NPP as one approaches the phase transition boundary, after which the backbone tends to be complete giving a unique optimal solution (Slaney and Walsh, 2001). In the literature, the effect of this phase transition has been shown in the computational complexity of some exact solvers such as the complete Karmarkar-Karp differencing algorithm (Mertens, 2001). Instances with $k<kc$ were “easy-to-solve” and the ones with $k>kc$ were “hard-to-solve.”

### 3.2 0-1 Knapsack Problem

Given a knapsack of capacity *C* and a set of *n* items, each with an associated weight $w_i$ and profit $p_i$, the aim is to find a subset of items that maximises the total profit without exceeding the capacity:

$$\max\ \sum_{i=1}^{n}p_ix_i \quad \text{subject to} \quad \sum_{i=1}^{n}w_ix_i\leq C,$$

where $x_i=1$ if item *i* belongs to the subset and $x_i=0$ otherwise. We study instances where $p_i$ and $w_i$ are positive integers drawn from the set $\{1,2,\dots,M\}$. Like the NPP, the 0-1KP is NP-hard in the weak sense (Garey and Johnson, 1979).

Note that the 0-1KP search space, $X=\{0,1\}^n$, is partitioned into a feasible region $F=\left\{x\in X\mid \sum_{i=1}^{n}x_iw_i\leq C\right\}$ and an infeasible region $INF=X\setminus F$, where the capacity is set as a fraction of the total weight, $C=\lambda\sum_{i=1}^{n}w_i$. For $\lambda=1$ there are no infeasible solutions, and as the value of $\lambda$ decreases the size of the infeasible region grows until, at $\lambda=0$, only the empty knapsack remains feasible. We define the boundary between the feasible and infeasible regions as the set of feasible configurations that have at least one infeasible neighbour, $B=\{x\in X\mid x\in F\wedge\exists y:(y\in N(x)\wedge y\in INF)\}$. We mostly study instances with a moderate constraint ($\lambda=0.5$); such instances have the largest boundary sizes and thus the largest number of optima.
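For a toy instance, the partition into $F$, $INF$ and the boundary $B$ can be enumerated directly. The sketch below is ours (names are illustrative) and assumes the capacity is a fraction $\lambda$ of the total weight, with *H1* neighbours as defined in Section 2:

```python
from itertools import product

def h1_neighbours(x):
    """H1 neighbours of x: all single-bit flips."""
    return [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(len(x))]

def partition_search_space(w, C):
    """Return (F, INF, B) for the 0-1KP with weights w and capacity C."""
    X = list(product((0, 1), repeat=len(w)))
    F = {x for x in X if sum(wi * xi for wi, xi in zip(w, x)) <= C}
    INF = set(X) - F
    B = {x for x in F if any(y in INF for y in h1_neighbours(x))}
    return F, INF, B
```

Tightening the constraint (smaller $C$, i.e. smaller $\lambda$) grows $INF$ and moves the boundary towards the empty knapsack.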

Randomly generated instances of the 0-1KP can be classified into different types based on the relation between an item's profit and weight. We study 11 types, which have been the focus of several studies in the literature, each with different properties that could influence the performance of problem solvers (Pisinger, 1999, 2005; Martello et al., 1999; Caccetta and Kulanoot, 2001). The instance types are: uncorrelated, weakly correlated, strongly correlated, inverse strongly correlated, subset sum, uncorrelated spanner, weakly correlated spanner, strongly correlated spanner, multiple strongly correlated, profit ceiling, and circle. The definitions of the instance types are taken from Pisinger (2005).^{2}

#### 3.2.1 Constraint Handling

We use a penalty function as a constraint handling method. An infeasible solution *x* that violates the given constraint is penalised by a value $Pen(x)>0$, while $Pen(x)=0$ for a feasible solution *x*. The objective function after adding the penalty term is $f(x)=\sum_{i=1}^{n}x_ip_i-Pen(x)$. The choice of an appropriate penalty function is critical for inducing smoother landscapes and guiding the search process to good feasible regions. Careful design of appropriate penalty functions is particularly important for this class of problems, since the local optima in all covering and packing problems lie on the boundary of the feasible region (Gottlieb, 2001). Assigning a lower fitness value to infeasible solutions than to all feasible solutions is hence of critical importance for successful penalty-based search. Some penalty-based algorithms for the 0-1KP suffer from the feasibility problem, that is, they often terminate with completely infeasible solutions due to an inappropriate choice of the penalty function (Olsen, 1994).
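To illustrate the point about penalty design, consider the following toy penalty of ours (not the function used in the article): penalising the weight excess with a coefficient larger than the total profit guarantees that every infeasible solution scores below every feasible one, avoiding the feasibility problem.

```python
def penalised_profit(x, p, w, C, alpha):
    """0-1KP objective f(x) = profit - Pen(x), with a linear penalty
    alpha * (weight excess). This penalty form is illustrative only."""
    profit = sum(pi * xi for pi, xi in zip(p, x))
    excess = max(0, sum(wi * xi for wi, xi in zip(w, x)) - C)
    return profit - alpha * excess

# With integer weights, excess >= 1 for any infeasible solution, so choosing
# alpha = 1 + sum(p) makes f(infeasible) < 0 <= f(feasible).
```

A smaller `alpha` would yield a smoother landscape but risks ranking some infeasible solutions above feasible ones, which is exactly the failure mode noted by Olsen (1994).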

### 3.3 Quadratic 0-1 Knapsack Problem

Given a knapsack of capacity *C* and a set of *n* items, each with an associated weight $w_i$, the profits are given by an $n\times n$ matrix $P=(p_{ij})$, where $p_{jj}$ is the profit achieved if item *j* is selected and $p_{ij}+p_{ji}$ is the profit achieved if both items *i* and *j* are selected (for $i<j$) (Gallo et al., 1980). The aim of the 0-1QKP is to find a subset of items that maximises the profit without exceeding the knapsack capacity. The density of the profit matrix, that is, the percentage of non-zero elements, is given by $\Delta$. The quadratic objective function to be maximised is as follows:

$$f(x)=\sum_{i=1}^{n}\sum_{j=i}^{n}p_{ij}x_ix_j \quad \text{subject to} \quad \sum_{i=1}^{n}w_ix_i\leq C.$$
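A small sketch of the quadratic objective and of the density $\Delta$ (our code; counting non-zero entries over the upper-triangular cells is one reasonable reading of "percentage of non-zero elements"):

```python
def qkp_profit(x, P):
    """Quadratic knapsack profit: sum of P[i][j] over selected pairs i <= j."""
    n = len(x)
    return sum(P[i][j] for i in range(n) for j in range(i, n)
               if x[i] and x[j])

def matrix_density(P):
    """Delta: fraction of non-zero entries among the upper-triangular cells."""
    n = len(P)
    cells = [(i, j) for i in range(n) for j in range(i, n)]
    return sum(P[i][j] != 0 for i, j in cells) / len(cells)
```

A sparse matrix (small $\Delta$) makes many pairs contribute nothing, so many solutions share similar objective values, which is relevant to the plateau observations in Sections 4 and 5.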

#### 3.3.1 Constraint Handling

### 3.4 Random Instance Generation

Most of the existing studies of these problems only consider instances whose weights are drawn at random from a uniform distribution. We are interested in studying if and how the landscape properties of instances generated randomly from different distributions vary. Thus, we generate instances with weights drawn randomly from five different discrete probability distributions: uniform, normal, negatively skewed, positively skewed, and bimodal with peaks at both ends. We used arbitrary-precision arithmetic for large ranges that exceed the fixed-precision range. We investigate setting the phase transition control parameter *k* to 0.4 and 1 in all the studied problems, and additionally to 1.2 in the NPP. For the 0-1QKP, we explore the effect of varying the profit matrix density on the landscape by studying instances with $\Delta=0.1,0.25,0.5,0.75,0.95$, and 1. We study instances of size $n=14,16,18,20,22,30,100$. For each parameter combination, landscape properties are collected from the exhaustively enumerated search spaces of 600 instances for $n<30$ and estimated by sampling from 500 instances for $n\geq30$.
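A generator along these lines can be sketched as follows. This is our own illustration: the shape parameters of the non-uniform distributions below are assumptions for the sake of a runnable example, not necessarily the parameterisations used in the article.

```python
import random

def draw_weights(n, m, dist, rng):
    """Draw n integer weights from {1, ..., M} with M = 2**m. The shape
    parameters of the non-uniform cases are illustrative assumptions."""
    M = 2 ** m
    clamp = lambda v: min(max(int(round(v)), 1), M)
    if dist == "uniform":
        return [rng.randint(1, M) for _ in range(n)]
    if dist == "normal":
        return [clamp(rng.gauss(M / 2, M / 6)) for _ in range(n)]
    if dist == "negatively_skewed":      # bulk of the mass near M
        return [clamp(M * rng.betavariate(5, 2)) for _ in range(n)]
    if dist == "positively_skewed":      # bulk of the mass near 1
        return [clamp(M * rng.betavariate(2, 5)) for _ in range(n)]
    if dist == "bimodal":                # peaks at both ends
        return [clamp(M * rng.betavariate(0.3, 0.3)) for _ in range(n)]
    raise ValueError(f"unknown distribution: {dist}")
```

For sizes beyond fixed-precision ranges, Python integers are already arbitrary precision, so only the continuous draws would need care.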

## 4 Search Position Types

The common finding across all three problems is that no configuration of type IPLAT was found in either the *H1* or the *H1+2* landscape across the different distributions of the weights, the different values of *k*, and for all $\Delta>0.25$ in the 0-1QKP. For a sparse 0-1QKP profit matrix ($\Delta\leq0.25$), very few configurations of type IPLAT were found, which is not surprising, since the very low density of the profit matrix results in many solutions sharing similar objective values. The absence of IPLATs and NSLMAXs in the NPP is an implication of its elementary landscape. The NPP has an elementary landscape under the *H1* operator when the objective function is the square of the discrepancy (Grover, 1992). This has implications for the types of plateaux and search positions in its landscapes (Whitley et al., 2008). The first implication is that configurations of type IPLAT can only exist when the entire landscape is flat. The second is that exits of open plateaux can only be LEDGEs, not NSLMAXs. On both landscapes of the NPP, there are always two configurations of type SLMAX: the all-zeros solution $x=(0,\dots,0)$ and its bitwise complement.

In the infeasible region of the 0-1KP, all the found configurations were of the following types: LEDGE, SLOPE, SLMIN, and NSLMIN. No strict optima or plateaux were found in the infeasible region. In the *H1* landscape of the infeasible region, the configurations were mainly of type LEDGE for both values of *k*. In the *H1+2* landscape of the infeasible region, the configurations were mainly of types LEDGE and SLOPE when $k=0.4$, and mainly LEDGE when $k=1$. In the feasible region of the *H1+2* landscape, when $k=0.4$, the found configurations were mainly of types LEDGE, SLOPE, NSLMAX, and SLMAX in all the problem types, apart from the uncorrelated and uncorrelated spanner instances, where no configurations of type NSLMAX or SLOPE were found. When $k=1$, the SLOPE and NSLMAX types disappear, aside from very few configurations in normal and negatively skewed instances of the profit ceiling and circle problem types. The feasible region of the *H1* landscape is similar to its infeasible region, in that no apparent changes were observed between the values of *k*. In general, the configurations in this region were of types SLMAX and LEDGE.

The configurations in the infeasible region of the 0-1QKP were of types SLMIN, LEDGE, and SLOPE. The SLOPE configurations seem to disappear in the infeasible region of the *H1+2* landscape when *k* goes from 0.4 to 1. Apart from that, not much difference is found between the two values of *k* across all the different values of $\Delta$, for both landscapes and regions.

## 5 Properties of Optima and Plateaux

### 5.1 Number of Optima and Plateaux

In the NPP, the number of global optima is highest when $k=0.4$; it starts to decrease as we approach the critical phase transition point and keeps decreasing as we cross it, until only two optimal solutions remain. Similar behaviour has been observed for the number of non-strict local optima, which starts to decrease as we approach the phase transition until it reaches zero in the phase beyond the critical point. That is true for both the *H1* and *H1+2* landscapes and for all the different distributions. The results can be found in our paper (Alyahya and Rowe, 2014). In the 0-1QKP and in all problem types of the 0-1KP apart from subset sum, the number of global optima is not affected by the different values of *k*. In general there was only one global optimum, apart from a few exceptions with a slightly higher number in instances with a very sparse 0-1QKP profit matrix ($\Delta=0.1$) and in strongly correlated, multiple strongly correlated, and profit ceiling problems. In subset sum, similar to the NPP, the number of optimal solutions is highest (around $10^3$ for $n=20$) when $k=0.4$, and it drops to only one when $k=1$.

The number of strict local optima does not change very much between the different values of *k*, regardless of the distribution from which the weights are chosen and regardless of the operator used. This is true for all three problems. In the 0-1KP and in the 0-1QKP, the variation in the number of local optima between the different problem types and between the different values of $\Delta$ was found to be very small (with the exception of a slightly higher number in instances with a very sparse profit matrix, $\Delta=0.1$). The number of strict local optima was found, however, to vary largely between distributions in the *H1* landscape in all problems. In particular, the negatively skewed and the normal distributions have the largest number of strict local optima (between 8-15% of the search space) while the positively skewed and the two peaks distributions have the fewest (around 1% of the search space). The number of strict local optima drops in the *H1+2* landscape compared to the *H1*, which is expected; however, the difference between the two landscapes is large: two orders of magnitude in the negatively skewed and the normal distributions and one order of magnitude for the rest of the distributions. In the 0-1QKP and in the uncorrelated, weakly correlated, uncorrelated spanner and weakly correlated spanner instances of the 0-1KP, the difference is around four orders of magnitude. The number of local optima in the 0-1QKP and the uncorrelated 0-1KP is very low in this landscape, with medians $<10$ for $n=20$.

We believe that the underlying relation between the number of strict local optima in the *H1* landscape and the different distributions can be explained by the variability of the weights. To capture this with a single parameter, one that does not require knowledge of the underlying distribution of the weights, we propose using the coefficient of variation (*CV*) of the weights, which provides a measure of relative variability or dispersion. Roughly speaking (see Figure 4), normal and negatively skewed instances map to the region $\leq0.3$, uniform instances map to the region between 0.4 and 0.7, and positively skewed and two peaks instances map to the region $>0.8$. The density of the region $\leq0.3$ is higher because both the normal and the negatively skewed instances map to this small region. The *CV* seems to capture most of the variation in the number of strict local optima in the *H1* landscape, as the two correlate strongly and negatively across the different values of *n* and *k* we studied. To explain the intuition behind this strong correlation, consider two extreme cases of the NPP: the first is when all the weights are the same (small *CV*) and the second is when one weight is equal to the sum of the rest of the weights (large *CV*). In the first case, with $2n$ equal weights, there are $\binom{2n}{n}\sim\frac{4^n}{\sqrt{\pi n}}$ possible ways to split the weights into two balanced subsets, while in the second case there are only two.
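The *CV* is cheap to compute. The sketch below (ours) also reproduces the two extreme NPP cases discussed above:

```python
import statistics

def cv(w):
    """Coefficient of variation of the weights: sample std dev / mean."""
    return statistics.stdev(w) / statistics.mean(w)
```

Equal weights give $CV=0$; one dominant weight pushes the *CV* above 0.8; and weights drawn uniformly from $\{1,\dots,M\}$ give $CV\approx2/\sqrt{12}\approx0.577$, inside the 0.4-0.7 band noted above.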

In the phase below the phase transition point, where plateaux exist in the NPP, the majority of the plateaux in the *H1* landscape were open plateaux, with very few global ones. The number of configurations in each plateau is very small; for example, when $n=20$ the size of the global and closed plateaux was found to be only 2 to 3 non-strict local optima (recall that all the plateaux are composed of NSLMIN only). All the open plateaux we found are composed of only one non-strict local optimum, and the majority of them have only one exit. There are more global plateaux in the *H1+2* landscape, but fewer open and closed plateaux. The number of exits from open plateaux is larger in this landscape, as expected. The size of all the plateaux in this landscape is also larger than in the *H1* landscape; it can reach up to 300 for global plateaux and around 50 for open and closed ones. In the 0-1KP, plateaux were only found in the *H1+2* landscape and mainly when $k=0.4$. For most of the problem types, the majority of the found plateaux were closed plateaux. For the subset sum instances, the majority of plateaux were global and closed plateaux. Most of the plateaux found in the profit ceiling instances were open and closed plateaux. In all the problem types, most of the found plateaux have very small sizes, around two or three configurations. The largest found plateaux were smaller than 10 configurations in all $n=20$ instances, apart from the profit ceiling and circle instances, where the largest found plateaux were composed of around 30 configurations. However, these large plateaux were rarely found. The number of exits in open plateaux was also found to be quite small in all the problem types (mainly between 1-3 exits). In the 0-1QKP, similar results have been found, in that the closed and open plateaux were of very small size, with the exception of a relatively large number of exits in open plateaux in instances with a very sparse profit matrix.

As the density ($\Delta$) of the profit matrix increases, the plateaux decrease until they disappear in very dense profit matrix instances. The observation that the *H1+2* landscape sometimes has more plateaux than the *H1* may seem counter-intuitive at first, since applying a larger neighbourhood operator, as opposed to a smaller one, usually has the positive effect of reducing the number of optima and plateaux. However, it can also have the effect of introducing new, larger plateaux. A schematic illustration of this mechanism is shown in Figure 3. The figure shows how, after applying the larger neighbourhood operator, the same-fitness strict optima become connected, forming a closed plateau. If the green triangle-shaped optimum had a better fitness value than the red diamond-shaped optima, then the figure shows how two open plateaux, sharing the same exit and each of size one, can be formed.

### 5.2 Average Number of Strict Local Optima

The average proportion of strict local optima in the *H1* landscape when the NPP weights are drawn from a uniform distribution follows the formula $\sqrt{\frac{24}{\pi}}\,n^{-3/2}$, derived using statistical mechanics (Ferreira and Fontanari, 1998). Based on the strong negative correlation between the *CV* and the number of strict local optima in all three problems, we propose a generalised formula for estimating the average proportion of strict local optima in the *H1* landscape, which depends only on the *CV* of the weights and the size of the problem: $a\,e^{-b\,CV}$, in which the values of the coefficients *a* and *b* depend on the problem and on *n*. Figure 4 shows the estimation of the fraction of the strict local optima using this formula. The values of *a* and *b* were determined by least-squares regressions. The proposed formula seems to be a good approximate fit of the average number of strict optima in all the problem sizes we studied, in particular for the NPP. In the 0-1KP and 0-1QKP, it seems to be noisier around small *CV* values ($\leq0.3$).
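A minimal least-squares fit of the proposed form can be obtained via the usual log-linearisation; the sketch below is our stand-in and the article's exact regression procedure may differ.

```python
import math

def fit_a_b(cvs, proportions):
    """Fit a * exp(-b * CV) by ordinary least squares on the linearised
    model ln(y) = ln(a) - b * CV, returning (a, b)."""
    ys = [math.log(p) for p in proportions]
    n = len(cvs)
    mx = sum(cvs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cvs, ys))
             / sum((x - mx) ** 2 for x in cvs))
    return math.exp(my - slope * mx), -slope
```

Fitting in log space weights the small proportions more heavily than a direct nonlinear fit would; on noiseless data of the form $a\,e^{-b\,CV}$ the coefficients are recovered exactly.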

In the 0-1KP, the correlation between the *CV* and the number of strict local optima in the *H1* landscape is strong and negative, apart from highly constrained instances ($\lambda\leq0.2$) and weakly constrained instances ($\lambda>0.8$), where the correlation is strong and positive. In small-*CV* instances, most of the solutions are infeasible when highly constrained and feasible when weakly constrained. This consequently decreases the boundary size and thus the number of strict local optima. On the other hand, the larger weights in large-*CV* instances prevent many solutions from becoming feasible when weakly constrained and allow many solutions to be feasible when highly constrained. This makes the boundary size in large-*CV* instances relatively larger in both cases, causing the number of optima to be higher than in small-*CV* instances. In fact, when highly constrained, the mechanism of solving large-*CV* instances becomes similar to that of solving small-*CV* instances, in that the problem becomes one of fitting the small, similar weights into the knapsack.

To study the growth behaviour of the number of strict local optima as the problem size increases, we grouped the instances based on their *CV* values into three intervals: (0,0.3], (0.3,1), and [1,2). Figure 5 shows the decay of the proportion of strict local optima as *n* increases. The results for $n=30,100$ are the SRS (Alyahya and Rowe, 2016) estimates obtained with the sample sizes $s=10^5$ and $5\times10^5$, respectively. All the proportions seem to decrease polynomially with *n* in the form $an^{-b}$. The largest decay happens in the *H1+2* landscape and the smallest in the *H1* landscape of the interval (0,0.3]. The proportion of strict local optima appears to decay faster in the *H1+2* landscape compared to the *H1* across all the intervals. The proportion in the *H1+2* landscape of the 0-1QKP and of the following 0-1KP types: uncorrelated, weakly correlated, uncorrelated spanner, weakly correlated spanner, strongly correlated spanner, and multiple strongly correlated, seems to be smaller than what the SRS with the above sample sizes can detect. This was evident from the negative lower bound of the 95% $CI_{AC}$ of the obtained estimates, indicating that the point estimates greatly overestimate the real proportions. Therefore, we did not include these estimates in the figure. We also did not fit the decay of the proportions with the form $an^{-b}$, since we are left with only four close data points, and we are unable to comment on the growth of the number of strict local optima in the *H1+2* landscapes of these instances. However, the growth of the number of strict local optima in the *H1+2* landscape of all the remaining problems, and in all the *H1* landscapes, seems to be exponential in *n*.

### 5.3 Quality of Optima and Plateaux

The difference in the number of optima between the two landscapes is no doubt an important feature, but another equally important one is the difference in the quality of the optima between the two. Obviously, the quality of the optima in the *H1+2* landscape is at least as good as that in the *H1*, as every optimum in the *H1+2* landscape is also an optimum in the *H1*. However, we want to examine how the difference in quality between the two landscapes changes across the *CV* values. To obtain a measure of quality that is independent of the problem instance and that does not require knowledge of the optimal solution, we measure the quality of an optimum *x* in a given instance as $f(x)/\sum_{i=1}^{n} w_i$ in the NPP, $f(x)/\sum_{i=1}^{n} p_i$ in the 0-1KP, and $f(x)/\sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}$ in the 0-1QKP. Figure 6 shows selected results for the three problems.^{3} Note that the distributions in the figure cannot be used directly to infer the quality of the local optima relative to the global since the distributions are calculated over multiple instances. In the NPP, the number of optima with relatively poor quality in the *H1* landscape of instances with very small *CV* is very high. The number starts to decrease as the *CV* value increases. This behaviour is consistent across all *k* values and all *n*. In the 0-1KP, there is a clear difference among the various problem types both in the overall quality of the optima and in the difference between the quality of the optima in the two landscapes. For example, in the uncorrelated and the uncorrelated spanner instances there is a large difference in the quality of the optima between the two landscapes, and that difference does not seem to change much across the *CV* intervals. On the other hand, the quality of the optima in the two landscapes seems to be similar in the circle instances, and that similarity seems to increase as the *CV* value increases.
The case in the subset sum instances is perhaps the most similar to that in the NPP, in that the quality of the optima in the *H1* landscape gets better as the *CV* value increases, which in turn decreases the difference in quality between the two landscapes. In general, and apart from the uncorrelated and the uncorrelated spanner instances, the difference in the quality of optima between the two landscapes appears to decrease as the *CV* value increases. The 0-1QKP is similar to the uncorrelated problem types in the 0-1KP: across all values of $\Delta$, the quality of the optima in the *H1+2* landscape is better than that in the *H1*, and this difference in quality does not seem to change much across the *CV* values.
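
The quality measure above is straightforward to compute. A minimal sketch for the 0-1KP case (achieved profit normalised by the total profit of all items) might look as follows; the helper names are illustrative assumptions rather than the article's code.

```python
def kp_profit(x, profits):
    """0-1KP objective: total profit of the selected items."""
    return sum(p for xi, p in zip(x, profits) if xi)

def optimum_quality(x, profits):
    """Instance-independent quality of an optimum x in the 0-1KP:
    f(x) normalised by the total profit of all items."""
    return kp_profit(x, profits) / sum(profits)
```

The NPP and 0-1QKP variants differ only in the normalising sum used.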

## 6 Basins of Attraction

### 6.1 Basin Size

In the NPP, as with the number of strict optima, the basin sizes do not seem to change much across different values of *k* for all the different distributions and for both landscapes. In all the problems, the size of the basins in the *H1+2* landscape is, as expected, larger than that in the *H1*. For both landscapes, the distribution of the basin sizes in all the instances we studied was found to be highly skewed to the right, with many small basins and only a few large ones. Similar skewness in the distribution of basin sizes has been reported in other combinatorial problems, for example in the flow-shop scheduling problem, where the log-normal distribution was found to be a plausible model of the basin size (Reeves and Eremeev, 2004). The basin sizes in the *H1* landscape of all the problems increase with the *CV* until they become similar to those of the *H1+2* landscape. In the *H1* landscape of the NPP, 20% of the basins cover around 60–70% of the search space in instances with $CV \le 0.3$, while they cover around 35–50% of the search space in instances with $CV > 1$. This discrepancy between the two *CV* intervals seems to persist in the *H1+2* landscape, though to a lower degree. In the 0-1KP, the largest basins in the uncorrelated problem type quickly cover a large area of the search space in the *H1* landscape, where around 60% to 90% of the search space is covered by only 20% of the basins. For the *H1+2* landscape, the same percentage of the basins covers around 40% to 70% of the search space in the weakly correlated problem type, and around 40% to 60% in the subset sum problem type. The results of the uncorrelated spanner are similar to those of the uncorrelated problem type, and the results of the weakly correlated spanner are similar to those of the weakly correlated problem type. The rest of the problem types have results similar to those of subset sum, which are in turn similar to the results of the NPP.
In the *H1* landscape of the 0-1QKP, as in the uncorrelated 0-1KP, the largest basins quickly cover a large part of the search space. In instances from the large *CV* interval [1,2), a large portion of the search space is covered by very few basins, where only 10% of the basins cover between 70% and 90% of the search space. The same percentage of the basins covers around 50% to 70% of the search space in instances with $CV \in (0.3,1)$, and around 50% of the search space in instances with $CV \in (0,0.3]$. In all the problems, the search space is covered by fewer basins in the *H1* landscape than in the *H1+2* one (in the NPP, in instances with $CV \le 0.3$ in particular). For instance, around 90% of the search space is covered by half of the basins in the *H1* landscape, while it takes around 70% of the basins to cover the same amount of the search space in the *H1+2* landscape.
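
The coverage figures quoted above (e.g., 20% of the basins covering 60–70% of the search space) can be computed directly from a list of basin sizes; a minimal sketch, with an illustrative name:

```python
def coverage_by_largest(basin_sizes, q):
    """Fraction of the search space covered by the largest fraction q
    (0 < q <= 1) of the basins; basin_sizes are configuration counts."""
    sizes = sorted(basin_sizes, reverse=True)
    k = max(1, round(q * len(sizes)))  # number of largest basins to take
    return sum(sizes[:k]) / sum(sizes)
```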

### 6.2 Basin Size and Fitness

Another important aspect of the fitness landscape is the correlation between the basin size and the fitness of the optimum. Previous studies have shown that, in general, fitter optima have larger basins (Stadler, 2002; Tayarani-N. and Prügel-Bennett, 2014), and landscapes with this feature usually tend to be easier to search. Figure 7 shows the correlation between basin size and fitness for selected results of the three problems. We measured the correlation between the two quantities using Spearman's correlation coefficient instead of the traditional Pearson's correlation coefficient. The reason is that Pearson's method assumes that both variables are drawn from a normal distribution, while Spearman's method is non-parametric; as we have seen in the previous section, the distributions of the fitness values and the basin sizes are highly skewed and far from normal. In the NPP, the correlation between the objective and the basin size in both landscapes was found in general to be moderately negative (magnitude 0.4–0.6) to strongly negative (magnitude $>0.6$); remember we are minimising here. If we exclude the *H1* landscape with $CV \le 0.3$, the correlation appears to remain more or less the same across the values of *k* and *CV*, apart from a few cases in the *H1+2* landscape. The correlation also appears to get slightly stronger with larger *n*. For the *H1* landscape of instances with $CV \le 0.3$, the correlation seems to be slightly stronger when $k < k_c$, and it seems to get weaker with larger *n* in the hard phase. In the 0-1KP, the correlation between the basin size and fitness was found to vary from weakly to strongly positive in both landscapes and seems not to change much across the values of *k* or *CV*. The few exceptions, and the most surprising ones, are instances of the inverse strongly correlated type with $CV > 0.3$ and a few other cases, where the correlation was found to be negative.
This negative correlation is in fact unusual among the combinatorial optimisation problems studied in the literature, where in general fitter optima were found to have larger basins (Stadler, 2002; Tayarani-N. and Prügel-Bennett, 2014). Similar to the uncorrelated 0-1KP problem types, the correlation between the attraction basin size and the optimum fitness in both landscapes of the 0-1QKP is very strong and positive, indicating that in this problem the fitter optima do tend to have larger basins. This does not seem to change much across the *CV* values for all the values of $\Delta$. However, as $\Delta$ gets larger, the correlation in the *H1+2* landscapes of some of the instances seems to get weaker and sometimes even very strongly negative (this is more noticeable in the small *CV* interval (0,0.3]). This indicates that, in such instances, fitter optima have smaller basins. Usually this feature means that these landscapes are more difficult to search, particularly for local search, as the fitter optima have a lower probability of being found with a hill climber.
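
A pure-Python sketch of Spearman's coefficient (Pearson's correlation applied to average ranks, so that ties are handled) is shown below; it is an illustrative implementation, not the article's code, and in practice a library routine would be used.

```python
def _ranks(values):
    """Average 1-based ranks, assigning tied values their mean rank,
    as Spearman's method requires."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank-transformed data,
    a non-parametric choice suited to skewed basin-size distributions."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```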

### 6.3 Global Basin

Figure 8 shows the proportion of the global basins out of the search space against the *CV*, for both landscapes and for some interesting instances of all three problems. Clearly, this proportion translates to the probability of finding the optimal solution. In general, and across all problems, the probability of finding the optimal solution is always higher in the *H1+2* landscape than in the *H1* one. In the NPP and 0-1KP subset sum, the probability of finding the global optima in the *H1+2* landscape drops from around $\sim 1$ when $k < k_c$ to around $\sim 10^{-2}$ when $k > k_c$. In the *H1* landscape, the probability of finding the global decreases from between $\sim 0.1$ in small *CV* instances and $\sim 0.8$ in large *CV* instances when $k < k_c$, to between $\sim 10^{-4}$ and $\sim 15\times 10^{-3}$ when $k > k_c$. The probability of finding the global when $k > k_c$, in both landscapes, decreases as the problem size grows. This is not surprising since the number of local optima in both problems grows exponentially with *n*. Like the previously studied features, the probability does not seem to change much across the *CV* values in the *H1+2* landscape, unlike the *H1* landscape, where the probability of finding the global increases with the *CV* in both phases. This increase can be attributed to the decrease in the number of local optima in this landscape as the *CV* increases, together with the strong correlation between the basin size and fitness. In the inverse strongly correlated instances the opposite happens: the probability decreases as the *CV* increases, despite the fact that the number of local optima decreases as the *CV* increases. This reflects the results of the correlation between the basin size and the fitness in this problem type, where the correlation was found to be moderately to strongly positive in the small *CV* interval but starts to decrease, sometimes reaching strongly negative values, as the *CV* increases.
We continue to see the effect of this feature on the performance of local search to find the global in this problem type in the next section. The probability of finding the global is very high in the *H1+2* landscape of the 0-1QKP and the uncorrelated 0-1KP. In the 0-1QKP, the probability of finding the global in the *H1+2* landscape seems to decrease slightly as $\Delta $ increases. This again is a reflection of the correlation between basin size and fitness that we have seen previously. The correlation was found to be strongly negative in some of these instances, which explains the decrease in the global basin size.

In an attempt to study the shape of the global basin, we plot in Figure 9 the proportion of the configurations that are part of its basin in every Hamming sphere of radius *h* around it. In the NPP, the plot shows the result for one of the two global optima found, as the same result applies to the other due to the search space symmetry. The results are shown for three instances of size $n=22$. Note that the global basin is not the largest in all of these instances. For example, in the instance with the smallest *CV* value, the global basin proportion in the *H1* landscape is $1.98\times 10^{-5}$ while the largest basin proportion is $1.86\times 10^{-4}$. Similarly, in the *H1+2* landscape the global basin proportion is 0.001 while the largest basin proportion is 0.009. We can see that in both landscapes the configurations in the global basin are concentrated in the immediate Hamming spheres around it. This is true for both landscapes of all the problems apart from the *H1+2* landscape of the 0-1QKP and of the uncorrelated, weakly correlated, and uncorrelated spanner types of the 0-1KP, where the probability of returning to the global sometimes persists until the last sphere. Interestingly, we see that the return probability in some of the *H1+2* landscapes does not decrease monotonically but oscillates as *h* increases. This can be attributed to the very small number of optima in such landscapes and to the nature of the *H1+2* neighbourhood, as the neighbours of a configuration in a given sphere *h* are spread over five spheres under this neighbourhood operator, compared to only two spheres under the *H1* operator. Figure 10 presents an example illustrating how this oscillating behaviour in the *H1+2* landscape can occur.
In the 0-1QKP, we can see that the return probability to the global in the *H1+2* landscape of instances with small *CV* values approaches zero faster for larger values of $\Delta$, where the return probability is almost zero for configurations that differ from the global in half or more of the dimensions. This agrees with the results obtained previously about the proportion of the global basins and the correlation between the fitness and the basin size.
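
For small *n*, the per-sphere proportions plotted in Figure 9 could be computed by exhaustive enumeration, as in the sketch below. The basin-membership predicate `in_basin` (e.g., steepest descent from a configuration terminating at the global optimum) is an illustrative abstraction of our own, not the article's code.

```python
from itertools import product
from math import comb

def sphere_basin_proportions(x_star, in_basin):
    """For each Hamming radius h around the optimum x_star, the share of
    the C(n, h) configurations at that radius that lie in its basin.
    Exhaustive over all 2^n configurations, so small n only."""
    n = len(x_star)
    hits = [0] * (n + 1)
    for x in product((0, 1), repeat=n):
        h = sum(a != b for a, b in zip(x, x_star))  # Hamming distance
        if in_basin(x):
            hits[h] += 1
    return [hits[h] / comb(n, h) for h in range(n + 1)]
```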

## 7 Local Search Performance

### 7.1 Cost of Finding the Global and Quality of Optima Obtained with a Fixed Budget

We ran local search with random restarts until the optimal solution was found, 30 times per instance for $n \le 22$. The cost of finding the global optimum is then calculated using the number of fitness evaluations used. Note that we treat the objective function as a black box here; hence the number of times the objective function is queried for each step taken equals the neighbourhood size of the employed operator. In the NPP and the 0-1KP subset sum, the average number of fitness evaluations used to find the global increases as we approach the phase transition point and keeps increasing as we cross it, for all the different distributions and for both landscapes. This is expected due to the drastic decrease in the number of global optima in the phase with $k > k_c$. As we have seen before, the probability of finding the global is higher in instances with $k < k_c$; the algorithm quickly finds one of the many global optima, while it struggles to find the single global optimum (two in the NPP due to symmetry) when $k > k_c$. This indeed makes the phase transition in both of these problems a transition from "easy" to "hard" to solve for local search algorithms. In both problems, the cost of finding the global optimum grows exponentially with *n* in the hard phase, as Figure 11 depicts. The growth in the easy phase seems to be much slower, but we are unable to comment on its growth type as the trend is not very clear from the data. The cost of finding the global in the rest of the problems is the same across all values of *k*. In almost all the problems, apart from the 0-1KP inverse strongly correlated and the 0-1QKP with $\Delta < 0.5$, the *H1* operator has the lowest mean number of fitness evaluations used to find the global in instances with $CV \in [1,2)$. The uncorrelated and uncorrelated spanner instances have the lowest cost of finding the global in both landscapes.
This is a reflection of the very strong positive correlation between the basin size and fitness, and of the higher probability of returning to the global in these instances. Instances of the inverse strongly correlated type have the highest mean cost of finding the global, apart from the cost in the $CV \in (0,0.3]$ interval. The increase in the cost in the instances with $CV > 0.3$ can be attributed to the strong negative correlation between the basin size and fitness, which results in a lower probability of finding the global in these instances. In fact, we can see that the cost of finding the global using the *H1* operator in this problem type is lowest in the *CV* interval (0,0.3]. The cost is low despite the fact that the number of local optima is highest in this interval, which in all the other problem types translates to having the highest cost of locating the global out of all the *CV* intervals. This again emphasises the importance of the correlation between the basin size and fitness (remember the correlation in the inverse strongly correlated type was found to be moderately to strongly positive in $CV \in (0,0.3]$ but starts to decrease, sometimes reaching strongly negative values, as the *CV* increases). The straight lines in almost all the observations in each problem type of the 0-1KP and the 0-1QKP indicate that the cost of finding the global seems to grow exponentially with *n*. In the 0-1QKP, the cost of the *H1+2* operator increases slightly as $\Delta$ increases. In instances with $CV \in (0,0.3]$, the increase is almost one order of magnitude. This can be attributed to the negative correlation between basin size and fitness and to the decrease in the global basin proportion as $\Delta$ increases in these instances. The cost using the *H1+2* operator, in particular in instances with $CV \in (0,0.3]$, is lower than that when *H1* is used.
This might be surprising considering the quadratic (in *n*) cost associated with every *H1+2* step compared to the linear one in the *H1* case. It can, however, be explained by the very large difference in the number of local optima between the two landscapes, which suggests that the algorithm had to perform far fewer restarts when using the *H1+2* operator. For the rest of the *CV* intervals, the *H1* operator seems to have a lower cost even though its landscape still has more local optima than the *H1+2* one. However, the difference between the two is smaller here by one order of magnitude, which is enough for the *H1+2* quadratic per-step cost to offset the advantage of having fewer local optima.
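
A minimal sketch of random-restart steepest ascent with the two neighbourhood operators and the black-box evaluation accounting described above might look as follows. Maximisation and the test objective in the usage note are illustrative assumptions, not the article's exact setup.

```python
import random
from itertools import combinations

def h1(x):
    """H1 neighbourhood: all single-bit flips (size n)."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield tuple(y)

def h1p2(x):
    """H1+2 neighbourhood: all single- and double-bit flips (size n + C(n,2))."""
    yield from h1(x)
    for i, j in combinations(range(len(x)), 2):
        y = list(x)
        y[i] = 1 - y[i]
        y[j] = 1 - y[j]
        yield tuple(y)

def restarts_until_global(f, n, neighbours, f_star, rng, max_evals=10**6):
    """Steepest-ascent hill climbing with random restarts, counting every
    black-box query of f (one per neighbour per step); returns the number
    of evaluations used before the global value f_star is first reached."""
    evals = 0
    while evals < max_evals:
        x = tuple(rng.randint(0, 1) for _ in range(n))
        fx = f(x)
        evals += 1
        while True:
            if fx == f_star:
                return evals
            best, best_f = None, fx
            for y in neighbours(x):
                fy = f(y)
                evals += 1
                if fy > best_f:
                    best, best_f = y, fy
            if best is None:
                break  # strict local optimum reached: restart
            x, fx = best, best_f
    return evals
```

For example, `restarts_until_global(lambda x: sum(x), 5, h1, 5, random.Random(1))` climbs a OneMax-style objective and reports the evaluations used; swapping `h1` for `h1p2` shows the quadratic per-step cost discussed above.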

To examine further the relation between the number of fitness evaluations needed to explore the neighbourhood and the difference in the number of local optima between the two landscapes, we compare the performance of the two operators in Figure 12, where we determine the statistical significance between the two performances using the Wilcoxon rank-sum test at the 5% level. In the NPP and subset sum, we can see that in the hard phase the *H1* operator performs better when the *CV* is large ($\ge 1$). In the easy phase, the *H1* operator performs better across all the *CV* intervals, despite the fact that the *H1+2* landscape is always smoother and has a higher probability of finding the global across all the values of the *CV*. This can be explained by the presence of many global optima in the easy phase, which mitigates the ruggedness of the *H1* landscape, even in the very rugged landscape of the (0,0.3] *CV* interval. However, this behaviour seems to fade away as *n* grows within the sizes we studied ($n=14$–$22$), where the *H1+2* operator starts to win more instances. This can be attributed to the growth of the number of local optima in the *H1* landscape in the (0,0.3] interval, which is the fastest growth rate out of all the *CV* intervals, while the same interval has the lowest growth rate in the *H1+2* landscape. The trend of the *H1+2* operator performing better in the (0,0.3] *CV* interval and the *H1* operator performing better in the [1,2) *CV* interval continues to show in the larger problem sizes of $n=30,100$, as shown in Figure 12; for these sizes, the results show the quality of optima obtained with a fixed, arbitrarily selected budget of fitness evaluations. As we have seen before, in the NPP and subset sum the difference in quality between the two landscapes decreases as the *CV* grows, which, alongside the smaller difference in the number of local optima between the two landscapes in this interval, explains the cases won by the *H1* operator.
Note that these results are specific to the budget we selected; whether the same trends will continue to occur with other budget values remains an open question. In the 0-1KP, apart from the subset sum, multiple strongly correlated, and profit ceiling types, the identity of the winning operator was found to change between searching for the global and measuring the quality of optima obtained with a fixed-budget search. It was also found to change with *n*. In some of the problem types, the number of tie cases seems to decrease as *n* increases, and a clear winner emerges. In terms of the quality of the optima obtained with a fixed-budget search, the *H1* operator performs better in the strongly correlated, inverse strongly correlated, and circle problem types, across all the *CV* intervals. The *H1+2* operator performs better in the 0-1QKP and in the 0-1KP uncorrelated, weakly correlated, and their spanner equivalents, across all *CV* values. These results can be explained by the large difference in the quality of the optima between the two landscapes that we have seen in Section 5.3, and by the small number of optima and the faster decay of their proportions in the *H1+2* landscape.
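
The significance test used above can be sketched via the normal approximation of the Wilcoxon rank-sum statistic. The implementation below assumes no tied values for brevity and is illustrative; in practice a library routine (such as SciPy's rank-sum test) would be used.

```python
from math import erf, sqrt

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    assuming no tied values; suitable for comparing two samples of local
    search run costs at a chosen significance level (e.g., 5%)."""
    pooled = list(a) + list(b)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0] * len(pooled)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = abs(w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
```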

### 7.2 Time to Local Optima

The time it takes steepest descent (ascent) starting from a random configuration to reach a local optimum can reveal interesting aspects of the underlying structure of the landscape, particularly the shape of the attraction basins. For each problem, we ran 1000 steepest descents per instance and collected the number of steps over 600 instances for each of $n=14,16,18,20,22$, and 500 instances for $n=30,100$. In the NPP and most problem types of the 0-1KP, the number of steps was found to be very small in both landscapes and to grow very slowly with *n* across all the *CV* values. The median number of steps was between 1 and 3 when $n=14$ (with the number of steps increasing with the *CV*) and only around 3 when $n=100$, with outliers reaching up to 10 in $n=14$ and up to 20 in $n=100$. This can be partially attributed to the fact that the attraction basin sizes in both landscapes were found to be mainly small. In the *H1* landscape, we believe that this is additionally due to the large number of local optima in this landscape. In the *H1+2* landscape, even though the number of optima is much smaller than in the *H1*, its neighbourhood is larger and its nature allows it to take fewer steps by hopping over spheres to reach the local optimum, which explains the small number of steps in this landscape. The very slow growth of the number of steps with *n* can be attributed to the exponential growth of the number of local optima in both landscapes and across all the *CV* values. The steps in the *H1* landscape of the 0-1QKP and of the 0-1KP uncorrelated and uncorrelated spanner instances were found to be similar to the previously described results but slightly higher, with outliers around 30 in $n=100$, which we believe is due to the larger basin sizes in these instances.
However, the number of steps in the *H1+2* landscape of these problems and of the weakly correlated, weakly correlated spanner, strongly correlated spanner, and multiple strongly correlated problem types seems to grow faster with *n*, evident in the median number of steps being between 10 and 25 when $n=100$. This supports our observation that the decay of the proportion of local optima in the *H1+2* landscape of these problem types is faster than in the rest of the problems. Note that although the local search algorithm under study here considers only the best improving move, that does not necessarily guarantee that the path from a random configuration *x* to a local optimum $x^*$ will be the shortest path (i.e., that the number of steps taken from *x* until $x^*$ is reached is at most equal to the Hamming distance between *x* and $x^*$). From the perspective of the Hamming spheres around an optimum $x^*$, this can occur when the best improving move of a configuration $x_i$ in the path, which resides in the Hamming sphere $h_i$, lies in a Hamming sphere $h_{i+1} \ge h_i$. The number of steps taken in the *H1* landscape was found to be always equal to the Hamming distance between the initial random configuration and the found local optimum. In the *H1+2* landscape this was found to be almost always smaller than or equal to the Hamming distance, with the exception of extremely few cases where it was found to be one or five steps larger than the Hamming distance (in $n=22$).
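
The comparison between the number of steepest-ascent steps and the Hamming distance to the found optimum can be instrumented as in the sketch below; the function names and objective are illustrative, with *H1* shown as the neighbourhood.

```python
def h1(x):
    """H1 neighbourhood: all single-bit flips of x."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield tuple(y)

def steepest_ascent_path(f, x, neighbours):
    """Best-improvement ascent from x; returns the local optimum found
    and the number of steps taken to reach it."""
    steps = 0
    while True:
        best = max(neighbours(x), key=f)
        if f(best) <= f(x):
            return x, steps  # no strictly improving neighbour: local optimum
        x = best
        steps += 1

def hamming(a, b):
    """Hamming distance between two bit tuples."""
    return sum(u != v for u, v in zip(a, b))
```

Recording `steps` alongside `hamming(start, x_opt)` over many runs reproduces the kind of comparison described in the paragraph above.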

## 8 Conclusions

This article presents an extensive landscape analysis of three NP-hard binary packing problems: the number partitioning problem (NPP), the binary knapsack problem (0-1KP), and the quadratic binary knapsack problem (0-1QKP). We performed a comparative study of the landscapes induced by two neighbourhood operators: the *H1* operator, whose neighbourhood grows linearly with the problem size, and the larger *H1+2* operator, whose neighbourhood grows quadratically. The set of properties that we studied includes types of search positions, number of local and global optima and plateaux, quality of optima and plateaux, basin size and its correlation with fitness, time to local optima, cost of finding the global solution, and quality of optima obtained with a fixed-budget search. Our work focuses on studying how these properties vary with different values of the problem parameters. Most of the existing studies of these problems only consider instances where the weights are drawn at random from a uniform distribution; we studied instances generated by drawing the weights at random from various distributions. Across the three problems, we found that the number of strict local optima and the cost of the local search to find the global vary greatly (most noticeably in the *H1* landscape) between some of the distributions. We proposed and demonstrated that the *CV* of the weights, a single parameter that is easy to calculate and does not require knowledge of the underlying distribution of the weights, captures most of this variability. In all the investigated problems, there is a very strong negative correlation between the *CV* and the number of local optima in the *H1* landscape. We continued to see this trend in all the problem sizes we studied. The average number of local optima in this landscape seemed to be well approximated by the formula $a e^{bCV} 2^n$, where *a* and *b* depend on the problem and *n*.
We believe that this phenomenon is particular to the binary packing problems related to the 0-1 knapsack problem. Interestingly, we did not find any significant difference between the landscapes of strongly (0-1QKP) and weakly (0-1KP) NP-hard problems. This raises an important question concerning the difference between strongly and weakly NP-hard problems from the point of view of meta-heuristics.
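
For completeness, the *CV* of the weights is simply the standard deviation divided by the mean; a minimal sketch, assuming the sample standard deviation is used:

```python
def cv(weights):
    """Coefficient of variation of the weights: the sample standard
    deviation divided by the mean. No knowledge of the underlying
    distribution of the weights is required."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / (n - 1)
    return var ** 0.5 / mean
```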

Only the NPP and the subset sum problem, which is a generalisation of the NPP and a special case of the 0-1KP, have an identified phase transition. The only two properties that were found to change when the problem crosses the phase transition in these two problems are the number of global optima (and consequently the probability of finding the global) and the number of plateaux. The rest of the properties remained oblivious to the phase transition. This result is in agreement with the results obtained by Stadler et al. (2003), who found that the features of the uniform NPP landscape that have been mapped into barrier trees are insensitive to the phase transition. However, recently Ochoa et al. (2017) noted that the global funnel structure of the uniform NPP landscape does change with the phase transition. The performance of local search algorithms was found to be affected by the phase transition in both the NPP and subset sum, where a considerable increase in the cost of locating the global solution occurs in instances with $k > k_c$, confirming that it is indeed a transition from "easy-to-solve" to "hard-to-solve" for local search algorithms.

## Acknowledgments

This work was supported by King Saud University, Riyadh, Saudi Arabia, and partially supported by the Engineering and Physical Sciences Research Council, grant number EP/N017846/1.

## Notes

^{1}

A more rigorous derivation of the transition point can be found in Borgs et al. (2001).

^{2}

See the supplementary material for a detailed description of our process of generating the instances.

^{3}

See supplementary material for the full results, available at https://www.mitpressjournals.org/doi/suppl./10.1162/evco_a_00237.