Dynastic Potential Crossover Operator

Abstract An optimal recombination operator for two-parent solutions provides the best solution among those that take the value for each variable from one of the parents (gene transmission property). If the solutions are bit strings, the offspring of an optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions. Exploring this hyperplane is computationally costly in general, requiring exponential time in the worst case. However, when the variable interaction graph of the objective function is sparse, exploration can be done in polynomial time. In this article, we present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis combinatorial problems. We compare this operator, both theoretically and experimentally, with traditional crossover operators, like uniform crossover and network crossover, and with two recently defined efficient recombination operators: partition crossover and articulation points partition crossover. The empirical comparison uses NKQ Landscapes and MAX-SAT instances. DPX outperforms the other crossover operators in terms of quality of the offspring and provides better results when included in a trajectory-based and a population-based metaheuristic, but it requires more time and memory to compute the offspring.


Introduction
Gene transmission (Radcliffe, 1994) is a popular property commonly fulfilled by many recombination operators for genetic algorithms. When the solutions are represented by a set of variables taking values from finite alphabets (possibly different from each other) with no constraints among the variables, this property implies that any variable in any offspring will take the value of the same variable in one of the parents. In particular, the variables having the same value in both parents will have the same value in all the offspring (i.e., the respect property (Radcliffe, 1994) is obeyed). The other (differing) variables will take one of the values coming from a parent solution. The gene transmission property is a formalization of the idea that taking (good) features from the parents should produce better offspring. This probably explains why most recombination operators try to fulfill this property or some variant of it. The set of all the solutions that can be generated by a recombination operator from two parents is called the dynastic potential. If we denote with $h(x,y)$ the Hamming distance (number of differing variables) between two solutions $x$ and $y$, the cardinality of the largest dynastic potential of a recombination operator fulfilling the gene transmission property is $2^{h(x,y)}$. The dynastic potential of uniform crossover has this size. The dynastic potential of single-point crossover has size $2h(x,y)$, because it takes from 1 to $h(x,y)$ consecutive differing bits from any of the two parents. In two-point crossover, two cut points are chosen. If the first one is before the first differing bit or the second one is after the last differing bit, the operator behaves like single-point crossover. Otherwise, the two cut points are chosen among the $h(x,y)-1$ positions between differing bits, and for each cut the bits can be taken from any of the two parents, providing $2\binom{h(x,y)-1}{2}$ additional solutions to the single-point crossover dynastic potential. Thus, the size of the two-point crossover dynastic potential is $2h(x,y) + 2\binom{h(x,y)-1}{2} = h(x,y)^2 - h(x,y) + 2$. In general, $z$-point crossover has a dynastic potential of size $\Theta(h(x,y)^z)$ when $z$ is small compared to the number of decision variables, $n$.
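These counts can be checked numerically; the sketch below (illustrative Python, not from the paper) computes the three dynastic potential sizes for a given Hamming distance $h$:

```python
from math import comb

def dynastic_potential_sizes(h):
    """Dynastic potential sizes for two parents at Hamming distance h."""
    uniform = 2 ** h                         # every differing bit chosen freely
    single_point = 2 * h                     # 1..h consecutive differing bits from either parent
    # single-point offspring plus 2*C(h-1, 2) genuinely two-point offspring
    two_point = 2 * h + 2 * comb(h - 1, 2)   # = h**2 - h + 2
    return uniform, single_point, two_point
```

For example, for $h=5$ the sizes are 32, 10 and 22, and $22 = 5^2 - 5 + 2$ as stated above.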
An optimal recombination operator (Eremeev and Kovalenko, 2013) obtains a best offspring from the largest dynastic potential of a recombination operator fulfilling the gene transmission property, which has size $2^{h(x,y)}$. In the worst case, such a recombination operator is computationally expensive, because finding a best offspring in the largest dynastic potential of an NP-hard problem is itself an NP-hard problem. A proof of the NP-hardness is given by Eremeev and Kovalenko (2013), but it can also be easily concluded from the fact that applying an optimal recombination operator to two complementary solutions (e.g., all zeroes and all ones) is equivalent to solving the original problem, so the NP-hardness follows from the NP-hardness of the original problem.
We propose a recombination operator, named Dynastic Potential Crossover (DPX), that finds a best offspring of the largest dynastic potential if the objective function $f$ has low epistasis, that is, if the number of nonlinear interactions among variables is small, typically $\Theta(n)$. In particular, we assume that the objective function $f$ is defined on $n$ binary variables and has $k$-bounded epistasis. This means that $f$ can be written as a sum of $m$ subfunctions $f_\ell$, each one depending on at most $k$ variables:

$$f(x) = \sum_{\ell=1}^{m} f_\ell(x_{i(\ell,1)}, x_{i(\ell,2)}, \ldots, x_{i(\ell,k_\ell)}), \qquad (1)$$

where $i(\ell,j)$ is the index of the $j$-th variable in subfunction $f_\ell$ and $k_\ell \le k$. These functions have been named Mk Landscapes by Whitley et al. (2016). DPX has a non-negative integer parameter, $\beta$, bounding the exploration of the dynastic potential. The precise condition to certify that the complete dynastic potential has been explored depends on $\beta$ and will be presented in Section 3.3. The worst-case time complexity of DPX is $O(4^\beta (n+m) + n^2)$ for Mk Landscapes.
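As a minimal illustration of Equation (1) (an assumption for this sketch, not the authors' code), an Mk Landscape can be stored as a list of variable index tuples and lookup tables, one per subfunction:

```python
# A minimal Mk Landscape: f(x) is the sum of m subfunctions, each depending on
# at most k variables. The masks and tables below are illustrative assumptions.
def make_mk_landscape(masks, tables):
    def f(x):
        total = 0
        for mask, table in zip(masks, tables):
            idx = 0
            for var in mask:               # pack the sub-solution into a table index
                idx = (idx << 1) | x[var]
            total += table[idx]
        return total
    return f

masks = [(0, 1), (1, 2), (2, 3)]           # n = 4, m = 3, k = 2
tables = [[0, 1, 1, 0], [0, 2, 2, 0], [1, 0, 0, 1]]
f = make_mk_landscape(masks, tables)
```

Here `f([0, 1, 0, 1])` sums `tables[0][0b01] + tables[1][0b10] + tables[2][0b01]`.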
DPX uses the variable interaction graph of the objective function to simplify the evaluation of the $2^{h(x,y)}$ solutions in the dynastic potential by using dynamic programming. The ideas for this efficient evaluation date back to the basic algorithm by Hammer et al. (1963) for variable elimination and are also commonly used in operations over Bayesian networks (Bodlaender, 2005). DPX requires more than just the fitness values of the parent solutions to do its job and, thus, it is framed in so-called gray box optimization (Whitley et al., 2016).
Recently defined crossover operators similar to DPX are partition crossover (PX) (Tinós et al., 2015) and articulation points partition crossover (APX) (Chicano et al., 2018). Although they were proposed to work with pseudo-Boolean functions, they can also be applied to the more general representation of variables defined over a finite alphabet. We will follow the same approach here: in the rest of the paper we will focus on a binary representation, where each variable takes values in the set $\mathbb{B} = \{0, 1\}$, but most of the results, the claims and the operator itself can be applied to a more general solution representation, where variables take values from finite alphabets. PX and APX also use the variable interaction graph of the objective function. The main contributions of this work are the following:

• We provided new experiments to compare the performance of DPX with that of other crossover operators when they are applied to random solutions. The other crossover operators are uniform crossover, network crossover, partition crossover and articulation points partition crossover.
• We included DPX in an evolutionary algorithm to check its behaviour within this kind of algorithm.
• We compared the five crossover operators inside a trajectory-based and a population-based metaheuristic used to solve two NP-hard pseudo-Boolean optimization problems: NKQ Landscapes and MAX-SAT.
• We modified the code of DPX to improve its performance and updated the source code repository with the new code of the evolutionary algorithm, the newly implemented crossover operators used in the comparison and the algorithms to process the results.
• We added a local optima network analysis (Ochoa and Veerapen, 2018) of DPX included within an iterated local search framework to better understand its working principles.
The paper is organized as follows. Section 2 presents the required background to understand how DPX works. The proposed recombination operator is presented in Section 3. Section 4 describes the experiments and presents the results and, finally, Section 5 concludes the paper.

Background
In our gray box optimization setting, the optimizer can independently evaluate the set of $m$ subfunctions in Equation (1), i.e., it is able to evaluate $f_\ell$ when the values of $x_{i(\ell,1)}, \ldots, x_{i(\ell,k_\ell)}$ are given; and it also knows the variables each $f_\ell$ depends on. This contrasts with black box optimization, where the optimizer can only evaluate full solutions $(x_0, \ldots, x_{n-1})$ and get their fitness value $f(x)$. We assume that we do not know the internal details of $f_\ell$ (we can only evaluate it), and this is why we call it gray box and not white box.

Variable interaction graph
The variable interaction graph (VIG) (Whitley et al., 2016) is a useful tool that can be constructed under gray box optimization. It is a graph $VIG = (V, E)$, where $V$ is the set of variables and $E$ is the set of edges representing all pairs of variables $(x_i, x_j)$ having nonlinear interactions. The set $\mathbb{B}^n$ contains all the binary strings of length $n$. Given a set of variables $Y$, the notation $1^n_Y$ is used to represent a binary string of length $n$ with 1 in the positions of the variables in $Y$ and zero in the rest; for example, $1^3_{\{0,2\}} = 101$. If the set $Y$ has only one element, we will omit the curly brackets in the subscript, e.g., $1^4_3 = 1^4_{\{3\}} = 0001$. Observe that the first index of the binary strings is 0. When the length is clear from the context we omit $n$. The operator $\oplus$ is a bitwise exclusive OR.
We say that variables $x_i$ and $x_j$ have a nonlinear interaction when the expression $\Delta_i f(x) = f(x \oplus 1_i) - f(x)$ depends on $x_j$. Checking the dependency of $\Delta_i f(x)$ on $x_j$ can be computationally expensive because it requires evaluating the expression on all the strings in $\mathbb{B}^{n-1}$. There are other approaches to find the nonlinear interactions among variables. The first is to assume that every pair of variables appearing together in a subfunction has a nonlinear interaction. It is not necessarily true that there is a nonlinear interaction among variables appearing as arguments in the same subfunction, but adding extra edges to $E$ does not break the correctness of the operators based on the VIG, and this approach requires only a very simple check that is computationally cheap. The graph obtained this way is usually called the co-occurrence graph. A second, and precise, approach to determine the nonlinear interactions is to apply the Fourier transform (Terras, 1999), and then look at every pair of variables to determine if there is a nonzero Fourier coefficient associated to a term with the two variables. This second method is exact and can be done in $\Theta(m 4^k)$ time if we know the variables appearing in each subfunction $f_\ell$ and we can evaluate each $f_\ell$ independently (our gray box setting). The interested reader can see the work of Whitley et al. (2016) and Rana et al. (1998) for a more detailed description of this second approach.
The first approach is especially useful when $k$ is relatively large, because it requires $\Theta(m k^2)$ time, which is polynomial in $k$, in contrast to the time of the second approach, which is exponential in $k$. In some problems, like MAX-SAT, we know that the co-occurrence graph (obtained by the first approach) is exactly the variable interaction graph (obtained by the second approach). In some other problems, like NK Landscapes, both are the same with high probability. In these two cases, it makes sense to use the first approach and work with the co-occurrence graph, even when $k$ is small enough to compute the Fourier transform in a reasonable time. In the rest of the paper, wherever we use the term variable interaction graph it could be replaced by co-occurrence graph.
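For instance, the co-occurrence graph can be built from the subfunction masks with a simple double loop, matching the $\Theta(mk^2)$ bound; a minimal sketch with illustrative names:

```python
from itertools import combinations

def co_occurrence_graph(masks):
    """Edge (i, j) whenever variables i and j appear together in a subfunction."""
    edges = set()
    for mask in masks:                       # m subfunctions ...
        for i, j in combinations(sorted(mask), 2):   # ... at most C(k, 2) pairs each
            edges.add((i, j))
    return edges

edges = co_occurrence_graph([(0, 1), (1, 2), (2, 3)])
```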

Recombination Graph
Let us assume that we have two solutions to recombine. We call these two solutions the red and blue parents. All the variables with the same value in both parents will also share that value in the offspring, and the solutions in the dynastic potential will lie in the hyperplane determined by those shared variable assignments. In our running example, this hyperplane is $H = {*}{*}{*}{*}0{*}0{*}{*}{*}0{*}{*}{*}0{*}{*}0$. We use it to decompose the VIG in order to produce a recombination graph: we remove all the variables (vertices) fixed by the shared variable assignments and also remove all edges that are incident on the vertices corresponding to these variables. This produces the recombination graph shown in Figure 2. The recombination graph also defines a reduced evaluation function. This new evaluation function is linearly separable, and decomposes into $q \le m$ subfunctions defined over the connected components of the recombination graph. In our example:

$$g(x') = a + g_1(x_9, x_{11}, x_{16}) + g_2(x_0, x_1, x_2, x_5) + g_3(x_3, x_7, x_8, x_{12}, x_{13}, x_{15}),$$

where $g(x') = f|_H(x')$ and the solutions $x'$ are restricted to the hyperplane $H$, which contains the parent strings as well as the full dynastic potential. The constant $a = f(x') - \sum_{i=1}^{q} g_i(x')$ collects the contribution of the subfunctions that depend only on the shared variables.

Partition crossover (PX), defined by Tinós et al. (2015), generates an offspring, when recombining two parents, based on the recombination graph: all of the variables in the same recombining component of the recombination graph are inherited together from one of the two parents. Partition crossover selects the decision variables from one or the other parent yielding the best partial evaluation for each subfunction $g_i(x')$. This way, PX obtains a best offspring among $2^q$ solutions in $O(q)$ time, which is a remarkable result. The efficiency of PX depends on the number of connected components $q$ in the recombination graph: larger values of $q$ provide better performance. One may wonder whether this number can be large in pseudo-Boolean problems of practical interest. We provide a positive answer with the VIG and recombination graph in Figure 3, which shows a sample recombination graph with 1 087 connected components for a real SAT instance of the 2014 SAT Competition (http://www.satcompetition.org/2014/). The graph was generated by Chen and Whitley (2017).
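A hedged sketch of PX, assuming subfunctions are represented by variable index tuples (masks) with lookup tables and that we maximize; all names are illustrative, not the authors' implementation:

```python
def partition_crossover(red, blue, masks, tables):
    """PX sketch: inherit each connected component of the recombination
    graph from the parent with the better partial evaluation (maximizing)."""
    n = len(red)
    diff = [i for i in range(n) if red[i] != blue[i]]
    diff_set = set(diff)
    # recombination graph: co-occurrence edges restricted to differing variables
    adj = {i: set() for i in diff}
    for mask in masks:
        d = [v for v in mask if v in diff_set]
        for a in d:
            adj[a].update(b for b in d if b != a)
    # connected components via depth-first search
    components, seen = [], set()
    for s in diff:
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u] - seen:
                seen.add(v)
                stack.append(v)
        components.append(comp)

    def partial(x, comp_set):
        # sum the subfunctions touching this component, evaluated on x
        total = 0
        for mask, table in zip(masks, tables):
            if comp_set.intersection(mask):
                idx = 0
                for var in mask:
                    idx = (idx << 1) | x[var]
                total += table[idx]
        return total

    child = list(red)
    for comp in components:
        cs = set(comp)
        if partial(blue, cs) > partial(red, cs):
            for v in comp:
                child[v] = blue[v]
    return child
```

Because the components partition the differing variables, each subfunction is scored with consistent values: any mask touching a component has all of its differing variables inside that component, and the shared variables agree in both parents.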
Articulation points partition crossover (APX) (Chicano et al., 2018) goes further and finds the articulation points of the recombination graph. Articulation points are variables whose removal increases the number of connected components. A bi-connected component of a graph is a maximal subgraph with the property that there are two vertex-disjoint paths between any two vertices. Articulation points join several bi-connected components. Variables $x_1$, $x_2$ and $x_3$ are articulation points in our example (see Figure 2), and the subgraphs induced by the vertex sets $\{x_5, x_2\}$ and $\{x_3, x_7, x_{12}, x_{13}, x_{15}\}$ are examples of bi-connected components. APX efficiently simulates what happens when the articulation points are removed, one at a time, from the recombination graph by flipping the variable in each of the parent solutions before applying PX, and the best solution found is returned as offspring. In our example, APX would work as follows. First, it applies PX to the red solution and a copy of the blue solution where variable $x_1$ is flipped, and stores the best child. Then, it applies PX to the blue solution and a copy of the red solution with variable $x_1$ flipped. The same process is repeated with flips in variables $x_2$ and $x_3$ (the other articulation points). Finally, it applies PX to the original red and blue solutions. APX returns the best solution of all the applications of PX. The key ingredient of APX is that all these computations do not require repeated applications of PX: with the appropriate data structures, all the computations can be done in $O(n^2 + m)$, the same complexity as PX, for any choice of parents in Mk Landscapes.

Dynastic potential exploration
The proposed dynastic potential crossover operator (DPX) takes the ideas of PX and APX even further. DPX starts from the recombination graph, like the one in Figure 2. Then, DPX tries to exhaustively explore all the possible combinations of the parent values in the variables of each connected component to find the optimal recombination within the hyperplane $H$. This exploration is not done by brute force, but using dynamic programming. Following our example, in order to compute the best combination for the variables $x_9$, $x_{11}$ and $x_{16}$, we need to enumerate the 8 ways of taking each variable from each parent, and this is no better than brute force. However, the component containing variables $x_0$, $x_1$, $x_2$ and $x_5$ forms a path. In this case, we can store in a table what is the best option for variable $x_0$ when either of the two possible values for variable $x_1$ is selected. Then, we can store in the same table the value of the sum of the subfunctions depending only on $x_0$ and $x_1$ (and possibly common variables eliminated in the recombination graph). After this step, we can consider that variable $x_0$ has been removed from the problem, and we can proceed in the same way with the rest of the variables in order: $x_1$, $x_2$ and $x_5$. Finally, 12 evaluations are necessary, instead of the 16 required by brute force. In general, for a path of length $\ell$, dynamic programming requires $4(\ell-1)$ evaluations while brute force requires $2^\ell$ evaluations.
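The path elimination described above can be sketched in a few lines of Python; the pairwise tables below are illustrative, not taken from the paper's example:

```python
# Dynamic programming along a path x_0 - x_1 - ... - x_{l-1}: for each value of
# the next variable, store the best achievable sum over the variables already
# eliminated. Each g[i][a][b] scores the pair (x_i = a, x_{i+1} = b).
def best_on_path(g):
    best = [0, 0]                         # best[b] = best sum so far with x_i = b
    for gi in g:                          # eliminate variables left to right
        best = [max(best[a] + gi[a][b] for a in (0, 1)) for b in (0, 1)]
    return max(best)

g = [[[1, 0], [0, 2]], [[0, 3], [1, 0]]]  # 3 variables, 2 pairwise tables
```

Each elimination step evaluates the 4 value pairs of one edge, giving $4(\ell-1)$ evaluations for $\ell$ variables, versus $2^\ell$ for brute force.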
The idea of variable elimination using dynamic programming dates back to the 1960s and the basic algorithm by Hammer et al. (1963). The problem of variable elimination has also been studied in other contexts, like Gaussian elimination (Tarjan and Yannakakis, 1984) and Bayesian networks (Bodlaender, 2005). In fact, we utilize the ideas for computing the junction tree in Bayesian networks. First, a chordal graph is obtained from the recombination graph using maximum cardinality search and a fill-in procedure to add the missing edges. Then the clique tree (or junction tree) is computed, which will fix the order in which the variables are eliminated using dynamic programming. After assigning the subfunctions to the cliques in the clique tree, dynamic programming is applied to find a best offspring, which is later reconstructed using the information computed in tables during dynamic programming. The runtime of the variable elimination method depends, among other factors, on the number of missing edges added by the fill-in procedure. Unfortunately, finding the minimum fill-in is an NP-hard problem (Bodlaender, 2005). Thus, we do not try to eliminate the variables in the most efficient way, but we apply algorithms that are efficient in finding a variable elimination order. Our proposal, DPX, is, to the best of our knowledge, the first to apply these well-known ideas to design a recombination operator. There is also a difference between our approach and the variable elimination methods in the literature: we introduce a parameter $\beta$ to limit the exploration of the variables (see Section 3.3). The high-level pseudocode of DPX is outlined in Algorithm 1. In the next subsections we will detail each of these steps.

Chordal graphs
In Algorithm 1, after finding the recombination graph (Line 1), each connected component is transformed into a chordal graph (Lines 2 and 3), if it is not already one. A chordal graph is a graph where all the cycles of length four or more have a chord (an edge joining two nodes not adjacent in the cycle). All the connected components in Figure 2 are chordal graphs. Tarjan and Yannakakis (1984) provided algorithms to test if a graph is chordal and to add new edges to make it chordal if it is not. Their algorithms run in time $O(n+e)$, where $n$ is the number of nodes in the graph and $e$ is the number of edges. In the worst case the complexity is $O(n^2)$.

Algorithm 1: Pseudocode of DPX
Data: two parents x and y
Result: one offspring z
1 Compute the recombination graph of x and y as in PX (Tinós et al., 2015);
2 Apply maximum cardinality search (Tarjan and Yannakakis, 1984);
3 Apply the fill-in procedure to make the graph chordal (Tarjan and Yannakakis, 1984);
4 Apply the clique tree construction procedure (Galinier et al., 1995);
5 Assign subfunctions to cliques in the clique tree;
6 Apply dynamic programming to find the offspring (see Algorithm 2);
7 Build z using the tables filled by dynamic programming;

The first step to check chordality is to number the nodes using maximum cardinality search (MCS). This algorithm numbers the nodes in descending order, choosing in each step the unnumbered node with the most numbered neighbors and breaking ties arbitrarily. The number associated to node $u$ is denoted with $\gamma(u)$. Figure 4 (left) shows the result of applying MCS to the third connected component of Figure 2, where we started numbering at node 12. If the graph is chordal, then MCS will provide a numbering $\gamma$ of the nodes such that for each triple of nodes $u$, $v$ and $w$ with $(u,v), (u,w) \in E$ and $\gamma(u) < \min\{\gamma(v), \gamma(w)\}$, it holds that $(v,w) \in E$. If this is not the case, the graph is not chordal. A fill-in algorithm tests this condition and adds the required edges to make the graph chordal. This algorithm runs in $O(n+e')$ time, where $e'$ is the number of edges in the final chordal graph. Again, in the worst case, the complexity is $O(n^2)$. These two steps, MCS and fill-in, can be applied to each connected component separately or to the complete recombination graph with the same result (Tarjan and Yannakakis, 1984).
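A compact MCS sketch over an adjacency dictionary (illustrative, not the Tarjan and Yannakakis implementation):

```python
def maximum_cardinality_search(adj):
    """Number nodes in descending order, picking the unnumbered node with the
    most already-numbered neighbours; ties are broken arbitrarily."""
    unnumbered = set(adj)
    numbered_neighbours = {u: 0 for u in adj}
    gamma = {}
    number = len(adj)
    while unnumbered:
        u = max(unnumbered, key=lambda v: numbered_neighbours[v])
        gamma[u] = number                  # assign the next number, descending
        number -= 1
        unnumbered.remove(u)
        for v in adj[u]:
            if v in unnumbered:
                numbered_neighbours[v] += 1
    return gamma

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}   # a small chordal graph
gamma = maximum_cardinality_search(adj)
```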

Clique Tree
Dynamic programming is used to exhaustively explore all the variables in each clique of the chordal graph (we use the term clique to refer to a maximal complete subgraph, as the cited literature does, although the term is sometimes used to refer to any complete subgraph, not necessarily maximal). The maximum size of a clique in the chordal graph is an upper bound of its treewidth, and determines the complexity of applying dynamic programming to find the optimal solution. A clique tree of a chordal graph is a tree where the nodes are cliques and, for any variable appearing in two such cliques, the path between the two cliques in the tree is composed of cliques containing the variable (the junction tree property). We can also identify a clique tree with a tree-decomposition of the chordal graph (Bodlaender, 2005). This clique tree will determine the order in which the variables can be eliminated.
Starting from the chordal graph provided in the previous steps, we apply an algorithm by Galinier et al. (1995) to find the clique tree $T$ (Line 4 in Algorithm 1). This algorithm runs in $O(n+e')$ time and finds all the $O(n)$ cliques of the chordal graph. In a chordal graph, the number of cliques cannot exceed the number of nodes $n$ of the graph (Galinier et al., 1995). Each clique will be identified with the set of variables it contains, $C_i$, where $i$ is an integer index that increases as cliques are discovered by the algorithm. An edge joining two cliques in the clique tree can be labeled with a separator, which is the intersection of the variables in both cliques. A clique $C_i$ is the parent of a clique $C_j$ if they are joined by an edge and $i < j$. The set of child cliques of a clique $i$ is denoted with $ch(i)$. Although separators are associated to edges, in each clique $C_i$ we highlight a particular separator, the separator with its parent clique, and we will use $S_i$ to denote it. If a clique $C_i$ has no parent, then $S_i = \emptyset$. The residue, $R_i$, of a clique $C_i$ is the set of variables of $C_i$ that are not in the separator with its parent, $S_i$. In each clique $C_i$, the residue $R_i$ and the separator with the parent $S_i$ form a partition of the variables in $C_i$. Due to the junction tree property, for each variable $x_i$, the cliques that contain it form a connected subtree of the clique tree $T$. The variable is in the set $S_j$ of all the cliques $j$ in this subtree except the ancestor of all of them (the one with the lowest index $j$), where $x_i$ is a member of its residue $R_j$. Thus, each variable is in the residue of exactly one clique. Figure 4 (right) shows the residues $R_i$ and separators $S_i$ for all the cliques of the third connected component of Figure 2.
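Given the cliques of the third connected component and their parent pointers (Figure 4, right), the separators and residues follow directly from their definitions; the parent-array representation below is an illustrative assumption:

```python
# cliques[i] lists the variables of clique C_{i+1} in the paper's numbering;
# parent[i] is the index of its parent clique (None for the root).
def separators_and_residues(cliques, parent):
    S, R = [], []
    for i, C in enumerate(cliques):
        p = parent[i]
        Si = C & cliques[p] if p is not None else set()   # separator with parent
        S.append(Si)
        R.append(C - Si)                                  # residue = rest of clique
    return S, R

cliques = [{7, 12, 13, 15}, {3, 7, 13}, {3, 8}]   # C_1 (root), C_2, C_3
parent = [None, 0, 1]
S, R = separators_and_residues(cliques, parent)
```

For the root, $S_1 = \emptyset$ and $R_1 = C_1 = \{x_7, x_{12}, x_{13}, x_{15}\}$.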
After computing the clique tree, every subfunction $f_\ell$ depending on a nonempty set $V^d_\ell$ of differing variables must be assigned to one (and only one) clique $i$ with $V^d_\ell \subseteq C_i$ (Line 5 in Algorithm 1). These subfunctions will be evaluated when the clique is processed. There can be more than one clique where a subfunction can be assigned; all of them are valid for a correct evaluation, but the clique with the fewest variables is preferred to reduce the runtime. We denote with $F_i$ the set of subfunctions assigned to clique $C_i$.
An optimal offspring is found in Algorithm 2 by exhaustively exploring all the variable combinations in each clique $C_i$ and storing the best ones. Before describing the algorithm we need to introduce some additional notation. The operator $\wedge$ is a bitwise AND and the expression $\mathbb{B}^n \wedge 1_Y$ denotes the set of binary strings of length $n$ with zero in all the variables not in $Y$.
Algorithm 2: Optimal offspring computation
Data: Clique tree T with all cliques C_i, parent solution x
Result: Arrays variable and value

For each combination of the variables in the separator $S_i$ (Line 2 in Algorithm 2), all the combinations of the variables in the residue $R_i$ are considered (Line 4 in Algorithm 2) and evaluated over the subfunctions assigned to the clique (Lines 6-7) and their child cliques (Lines 8-9). Then, the best combination for the residue $R_i$ is stored in the variable[i] array (Line 11) and its value in the value[i] array (Line 12). (What is really stored in Algorithm 2 is the change $v$ of the variables in $R_i$ over the parent solution $x$, but this can be considered an implementation detail.) The evaluation in post-order makes it possible to have the value[j] arrays of the child cliques filled when they are needed in Line 9. At this point, the variables in the residue $R_i$ can be obviated (eliminated) in the rest of the computation. When the separator $S_i = \emptyset$, $w$ only takes one value, the string of $n$ zeroes. This happens at the root of the clique tree, and its effect is to iterate over all the variable combinations for $R_i = C_i$ to find the best value. The variable array will be used in the reconstruction of the offspring solution (Line 7 in Algorithm 1).

Following our previous example in Figure 4, clique $C_3$ is evaluated first. For each value of $x_3$ (the variable in $S_3$) and each value of $x_8$ (the variable in $R_3$), the subfunctions in $F_3$ are evaluated. All these subfunctions depend only on $x_3$, $x_8$ and variables with common values in both parents. For each value $x_3 = 0, 1$, the array value[3] is filled with the maximum value of the sum of the subfunctions in $F_3$ over the two values $x_8 = 0, 1$. The array variable[3] will store the value of $x_8$ for which the maximum is obtained in each case. After $C_3$ has been evaluated, variable $x_8$ is eliminated. Now clique $C_2$ is evaluated, and value[2] and variable[2] are filled for each combination of $x_7$ and $x_{13}$. In this case the variable to eliminate is $x_3$, and the evaluation also includes the values in value[3] in addition to the functions in $F_2$, because $C_3$ is a child clique of $C_2$. Finally, the root clique $C_1$ is evaluated and all the $2^4$ possible combinations of variables $x_7$, $x_{12}$, $x_{13}$ and $x_{15}$ are evaluated using the subfunctions in $F_1$ and the array value[2] to find the objective value of an optimal offspring. The offspring itself is built using the variable arrays. In particular, variable[1] will store the combination of variables $x_7$, $x_{12}$, $x_{13}$ and $x_{15}$ that produces the best offspring. Array variable[2] will provide the value of $x_3$ given those of $x_7$ and $x_{13}$, which were provided by variable[1]. Array variable[3] will provide the value of $x_8$ given the value of $x_3$ provided by variable[2].
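The post-order evaluation of the value and variable arrays can be sketched as follows (a hedged Python illustration of Algorithm 2, not the paper's implementation; children, separators and residues are supplied explicitly, and eval_clique plays the role of summing the subfunctions in $F_i$):

```python
from itertools import product

def clique_tree_dp(children, S, R, eval_clique):
    """value[(i, w)] is the best achievable sum in the subtree rooted at
    clique i, given assignment w to the separator variables sorted(S[i])."""
    order = []
    def visit(i):                          # post-order: children before parents
        for j in children[i]:
            visit(j)
        order.append(i)
    visit(0)                               # clique 0 is assumed to be the root
    value, choice = {}, {}
    for i in order:
        Si, Ri = sorted(S[i]), sorted(R[i])
        for w in product((0, 1), repeat=len(Si)):
            best, best_v = None, None
            for v in product((0, 1), repeat=len(Ri)):
                assign = dict(zip(Si, w)) | dict(zip(Ri, v))
                total = eval_clique(i, assign)
                for j in children[i]:      # add the best of each child subtree
                    key = tuple(assign[s] for s in sorted(S[j]))
                    total += value[(j, key)]
                if best is None or total > best:
                    best, best_v = total, v
            value[(i, w)] = best
            choice[(i, w)] = best_v        # best residue values, for rebuilding z
    return value[(0, ())], choice

children = [[1], []]                       # clique 1 is a child of clique 0
S = [set(), {1}]                           # separators with the parent clique
R = [{0, 1}, {2}]                          # residues

def eval_clique(i, assign):                # toy subfunction sums, illustrative
    if i == 0:
        return assign[0] + assign[1]
    return 2 if assign[1] == 0 and assign[2] == 1 else 0

best, choice = clique_tree_dp(children, S, R, eval_clique)
```

The offspring is then rebuilt top-down from the choice tables, as Line 7 of Algorithm 1 does with the variable arrays.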
Theorem 1. Given two parent solutions $x$ and $y$ with differing set of variables $d(x,y)$ that produces clique tree $T$, Algorithm 2 computes a best offspring $z$ in the largest dynastic potential of $x$ and $y$. That is:

$$f(z) = \max_{v \in \mathbb{B}^n \wedge 1_{d(x,y)}} f(x \oplus v). \qquad (3)$$

Proof. We will prove the theorem by structural induction over the clique tree. We will denote with $T_i$ the subtree of $T$ with $C_i$ in the root. We also introduce $C(T_i)$ as the union of the $C_j$ sets for all the cliques $C_j \in T_i$, and will use the convenient notation $R(T_i) = \bigcup_{C_j \in T_i} R_j$. Regarding the subfunctions, we introduce the notation $F_{T_i}$ to refer to the set of subfunctions associated with a clique in $T_i$: $F_{T_i} = \bigcup_{C_j \in T_i} F_j$. We only need to consider the subfunctions in $F_T$, because the remaining ones are constant in the dynastic potential of $x$ and $y$. Thus, Eq. (3) is equivalent (up to a constant) to computing:

$$\max_{v \in \mathbb{B}^n \wedge 1_{d(x,y)}} \sum_{f_\ell \in F_T} f_\ell(x \oplus v). \qquad (4)$$

The claim to be proven is that for each clique $C_i$, after its evaluation using Lines 2 to 12 of Algorithm 2, the array value[i] holds, for every $w \in \mathbb{B}^n \wedge 1_{S_i}$:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R(T_i)}} \sum_{f_\ell \in F_{T_i}} f_\ell(x \oplus w \oplus v). \qquad (5)$$

Eq. (5) reduces to Eq. (4) when the clique $C_i$ is the root of $T$ and $T_i = T$. Thus, we only need to prove Eq. (5) using structural induction. Let us start with the base case: a leaf clique. In this case, $R(T_i) = R_i$ and $C(T_i) = C_i$, and there is no child clique to iterate over in the for loop of Lines 8 to 9. Eq. (5) reduces to:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R_i}} \sum_{f_\ell \in F_i} f_\ell(x \oplus w \oplus v), \qquad (6)$$

and Lines 2 to 12 fill the value[i] array using exactly the expression in Eq. (6). Now, we use the induction hypothesis to prove that Eq. (5) holds for any other node in the tree. In this case, the values computed and stored in the value[i] array for all $w \in \mathbb{B}^n \wedge 1_{S_i}$ are:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R_i}} \left( \sum_{f_\ell \in F_i} f_\ell(x \oplus w \oplus v) + \sum_{j \in ch(i)} \mathrm{value}[j][(w \oplus v) \wedge 1_{S_j}] \right),$$

and using the induction hypothesis we can replace $\mathrm{value}[j][(w \oplus v) \wedge 1_{S_j}]$ by the right-hand side of Eq. (5) to write:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R_i}} \left( \sum_{f_\ell \in F_i} f_\ell(x \oplus w \oplus v) + \sum_{j \in ch(i)} \max_{v' \in \mathbb{B}^n \wedge 1_{R(T_j)}} \sum_{f_\ell \in F_{T_j}} f_\ell(x \oplus w \oplus v \oplus v') \right),$$

where we replaced $(w \oplus v) \wedge 1_{S_j}$ by $w \oplus v$ in the inner sum because the subfunctions in $F_{T_j}$ do not depend on any variable in $C_i - S_j$, which are the ones that differ in both expressions. The sets $F_{T_j}$ are disjoint for all $j \in ch(i)$, as well as the sets $R(T_j)$. Thus, we can swap the maximum and the sum to write:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R_i}} \left( \sum_{f_\ell \in F_i} f_\ell(x \oplus w \oplus v) + \max_{v' \in \mathbb{B}^n \wedge 1_{R(T_i) - R_i}} \sum_{f_\ell \in F_{T_i} - F_i} f_\ell(x \oplus w \oplus v \oplus v') \right),$$

where we used the identities $\bigcup_{j \in ch(i)} R(T_j) = R(T_i) - R_i$ and $\bigcup_{j \in ch(i)} F_{T_j} = F_{T_i} - F_i$ to simplify the expression. Finally, we can introduce the first sum in the maximum and notice that $v'$ is zero in the variables of $R_i$ to write:

$$\mathrm{value}[i][w] = \max_{v \in \mathbb{B}^n \wedge 1_{R(T_i)}} \sum_{f_\ell \in F_{T_i}} f_\ell(x \oplus w \oplus v),$$

which is Eq. (5) written in a different way.
The operator described is an optimal recombination operator: it finds a best offspring from the largest dynastic potential. The time required to evaluate one clique in Algorithm 2 is $O((|F_i| + |ch(i)|)\, 2^{|C_i|})$. The number of children is bounded by $n$ and the number of subfunctions $m$ is bounded by $O(n^k)$ due to the $k$-bounded epistasis of $f$. However, the exponential factor is a threat to the efficiency of the algorithm: in the worst case $C_i$ can contain all the variables and the factor would be $2^n$.

Limiting the Complexity
In order to avoid the exponential runtime of DPX, we propose to limit the exploration in Lines 2 and 4 of Algorithm 2. Instead of iterating over all the possible combinations of all the variables in the separators $S_i$ and the residues $R_i$, we fix a bound $\beta$ on the number of variables that will be exhaustively explored. The remaining variables will jointly take only two values, each one coming from one of the parents. In a separator $S_i$ or residue $R_i$ with more than $\beta$ variables, we still have to decide which variables are exhaustively explored and which ones will be taken from one parent. One important constraint in this decision is that once we decide that two variables $x_i$ and $x_j$ will be taken from the same parent, this should also happen in the other cliques where the two variables appear. We use a disjoint-set data structure (Tarjan, 1975) to keep track of the variables that must be taken from the same parent. In each clique, the variables that are articulation points in the VIG are added first to the list of variables to be fully explored. The motivation for this can be found in Section 3.4. The other variables are added in an arbitrary order (implementation dependent) to the list of variables to fully explore. We defer to future work a more in-depth analysis of the strategies to decide which variables are fully explored in each clique.
Let us illustrate this with our example of Figures 2 and 4, where we set $\beta = 2$. This does not affect the evaluation of $C_2$ or $C_3$, because in both cases the sets $S_i$ and $R_i$ have cardinality less than or equal to $\beta = 2$, so all the variables in $R_2$, $S_2$, $R_3$ and $S_3$ will be fully explored. However, once cliques $C_3$ and $C_2$ (in that order) have been evaluated, we need to evaluate $C_1$ and, in this case, $|R_1| = 4 > 2 = \beta$. For the evaluation of $C_1$, two variables, say $x_7$ and $x_{12}$, are fully enumerated (the four combinations of values for them are considered) and the other two variables, $x_{13}$ and $x_{15}$, are taken from the same parent, so only two combinations are considered for them: 00 (red parent) and 11 (blue parent). In total, only $2^3 = 8$ combinations of values for the variables in this clique are explored, instead of the $2^4 = 16$ possible combinations.
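The enumeration just described ($x_7$ and $x_{12}$ free, $x_{13}$ and $x_{15}$ grouped) can be sketched as follows; the representation of parents as dicts is an illustrative assumption:

```python
from itertools import product

def limited_combinations(variables, red, blue, beta):
    """Enumerate assignments with at most beta freely varying variables;
    the remaining variables are inherited as one block from either parent."""
    explored, grouped = variables[:beta], variables[beta:]
    for bits in product((0, 1), repeat=len(explored)):
        for parent in ((red, blue) if grouped else (red,)):
            assign = dict(zip(explored, bits))
            for v in grouped:
                assign[v] = parent[v]      # grouped variables move together
            yield assign

red = {13: 0, 15: 0}                       # parent values for the grouped variables
blue = {13: 1, 15: 1}
combos = list(limited_combinations([7, 12, 13, 15], red, blue, 2))
```

With $\beta = 2$ this yields $2^2 \cdot 2 = 8$ assignments instead of the $2^4 = 16$ of the full enumeration.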
This reduces the exponential part of the complexity of Algorithm 2 to 2^(2(β+1)). Since β is a predefined constant parameter decided by the user, the exponential factor turns into a constant. The operator is no longer an optimal recombination operator. In the cases where β + 1 ≥ max{|R_i|, |S_i|} for all the cliques, DPX will still return the optimal offspring.

Theorem 2. Given a function in the form of Equation (1) with m subfunctions, the complexity of DPX with a constant bound β for the number of exhaustively explored variables is O(4^β (n + m) + n^2).
Proof. We have seen in Section 3.1 that the complexity of maximum cardinality search, the fill-in procedure and the clique tree construction is O(n^2). The assignment of subfunctions to cliques can be done in O(n + m) time, using the variable ordering found by MCS to assign the subfunctions that depend on each visited variable to the only clique where the variable is a residue. The complexity of the dynamic programming computation is O(4^β (n + m)): each clique evaluates at most 2^(β+1) combinations of values for its separator variables and 2^(β+1) combinations for its residue variables, and the work for each combination is proportional to the number of subfunctions assigned to the clique plus the number of children of the clique in the clique tree; here we used the fact that the sum of the numbers of children over all the cliques is the number of edges in the clique tree, which is the number of cliques minus one, and the number of cliques is O(n). The reconstruction of the offspring solution requires reading all the variable arrays to build the solution. The complexity of this procedure is O(n).
In many cases, the number of subfunctions m is O(n) or O(n^2). In these cases, the complexity of DPX reduces to O(4^β n^2). The complexity can even drop to O(n) in some cases. In particular, when all the connected components of the recombination graph are paths or have a number of variables bounded by a constant, the number of edges in the original and the chordal graph is O(n) and the complexity of DPX inherits this linear behaviour. This is the case for the recombination graph shown in Figure 3 for a real SAT instance, so this linear time complexity is not unusual, even in real and hard instances.

Theoretical comparison with PX and APX
DPX is not worse than PX, since, in the worst case, it will pick the variables for each connected component of the recombination graph from one of the parent solutions (which is what PX does). In other words, if β = 0 and there is only one clique in each connected component of the recombination graph (the worst case), DPX and PX behave the same and produce offspring with the same quality. We wonder, however, whether the same holds for APX. If β + 1 ≥ max{|R_i|, |S_i|} for all the cliques C_i in the chordal graph derived from the recombination graph, DPX cannot be worse than any recombination operator with the gene transmission property and, in particular, it cannot be worse than APX. Otherwise, if the limit in the exploration explained in Section 3.3 is applied, it could happen that articulation points are not explored as they are in APX. One possible threat to the articulation points exploration in DPX is that they could disappear after making the graph chordal. Fortunately, to make a graph chordal, the fill-in procedure only adds edges joining vertices in a cycle and, thus, it keeps the articulation points. We provide formal proofs in the following.
Lemma 1. The fill-in procedure adds edges joining vertices in a cycle of the original graph.
Proof. Let us assume that edge (v, w) is added to the graph by the fill-in procedure. The values γ(v) and γ(w) are the numbers assigned by maximum cardinality search to nodes v and w. Let us assume without loss of generality that γ(v) < γ(w). The definition of fill-in (Tarjan and Yannakakis, 1984) implies that there is a path between v and w where all the intermediate nodes have a γ value lower than γ(v). On the other hand, during the application of maximum cardinality search the set of numbered nodes forms a connected component of the graph. This implies that at the moment in which v was numbered there existed a path between v and w with γ values higher than γ(v). As a consequence, two non-overlapping paths exist between v and w in the original graph, and they form a cycle.
Theorem 3. Articulation points of a graph are kept after the fill-in procedure.
Proof. According to Lemma 1, all the edges added by the fill-in procedure join vertices in a cycle of the original graph. This means that the edges are added inside bi-connected components of the graph, and never join vertices in two different bi-connected components. Adding edges to a bi-connected component never removes articulation points, and the result follows.
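Theorem 3 can be checked empirically on small graphs. The sketch below (all names ours) computes a fill-in by the elimination game along the reversed MCS order, which is a valid chordalization although not necessarily the minimal fill-in of Tarjan and Yannakakis, and verifies that the articulation points are unchanged.

```python
def mcs_order(adj):
    # Maximum cardinality search: repeatedly number the unnumbered vertex
    # with the most already numbered neighbors (ties broken by vertex id).
    weight = {v: 0 for v in adj}
    unnumbered, order = set(adj), []
    while unnumbered:
        v = max(unnumbered, key=lambda u: (weight[u], -u))
        order.append(v)
        unnumbered.remove(v)
        for w in adj[v]:
            if w in unnumbered:
                weight[w] += 1
    return order

def fill_in(adj, order):
    # Elimination game along the reversed MCS order: eliminating a vertex
    # pairwise connects its remaining neighbors.  The returned edges make
    # the graph chordal (a fill-in, not necessarily a minimal one).
    work = {v: set(ns) for v, ns in adj.items()}
    added = set()
    for v in reversed(order):
        ns = sorted(work[v])
        for i in range(len(ns)):
            for j in range(i + 1, len(ns)):
                a, b = ns[i], ns[j]
                if b not in work[a]:
                    work[a].add(b); work[b].add(a)
                    added.add(frozenset((a, b)))
        for w in work[v]:
            work[w].discard(v)
        work[v] = set()
    return added

def articulation_points(adj):
    # Classic lowpoint computation via depth-first search.
    disc, low, aps, t = {}, {}, set(), [0]
    def dfs(v, parent):
        disc[v] = low[v] = t[0]; t[0] += 1
        children = 0
        for w in adj[v]:
            if w == parent:
                continue
            if w in disc:
                low[v] = min(low[v], disc[w])
            else:
                children += 1
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if parent is not None and low[w] >= disc[v]:
                    aps.add(v)
        if parent is None and children > 1:
            aps.add(v)
    for v in adj:
        if v not in disc:
            dfs(v, None)
    return aps

# A 4-cycle 1-2-3-4 with a pendant vertex 5 hanging from the cut vertex 4.
G = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4}}
added = fill_in(G, mcs_order(G))
H = {v: set(ns) for v, ns in G.items()}
for e in added:
    a, b = tuple(e)
    H[a].add(b); H[b].add(a)
assert all(e <= {1, 2, 3, 4} for e in added)   # chords stay inside the cycle
assert articulation_points(H) == articulation_points(G) == {4}
```

As the theorem predicts, the added chords stay inside the bi-connected component formed by the cycle, and the cut vertex survives the chordalization.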
The previous theorem implies that articulation points of the original recombination graph are also articulation points of the chordal graph. Articulation points of a chordal graph are minimal separators of cardinality one (Galinier et al., 1995) and they will appear in the sets S_i of some cliques C_i. They are, thus, identified during the clique tree construction. This inspires a mechanism to reduce the probability that a solution explored in APX is not explored in DPX. In each clique C_i, when β variables are chosen to be exhaustively explored (Lines 2 and 4 of Algorithm 2), we choose the articulation points first. This way, articulation points are exhaustively explored with higher probability. The only thing that can prevent articulation points from being explored is that many of them appear in one single clique. This situation is illustrated in Figure 5. For β = 0, all the cliques are evaluated only in the two parent solutions, as PX does, while APX explores 20 different combinations of variables according to Eq. (6) of Chicano et al. (2018). For β = 1, the cliques C_2, C_3 and C_4 are fully explored, but in the clique of articulation points, C_1, only variable x_4 is fully enumerated; variables x_5 and x_6 are taken from the same parent. The total number of solutions explored is 32, which is more than the number analyzed by APX (20), but the articulation points x_5 and x_6 are not explored in the same way as in APX and the sets of solutions explored by DPX and APX differ. Thus, APX could find an offspring with higher fitness than the one obtained by DPX with β = 1.

Generalization of DPX
Although this paper focuses on Mk Landscapes, defined over binary variables, DPX can also be applied as is when the variables take their values in a finite set different from the binary set. In this case, one can imagine that 0 represents the value of a differing variable in the red parent, and 1 the value of the same variable in the blue parent. All the results, including runtime guarantees, are the same. The only difference is that the offspring of DPX is not optimal, in general, in the smallest hyperplane containing the parent solutions, because an optimal solution could have values not present in the parents. Even in this case, DPX can be modified to keep the same runtime and provide an optimal solution in the mentioned hyperplane, at the only cost of violating the gene transmission property.
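This reduction to the binary case can be made concrete with a small decoding step (the function below is ours, for illustration only): DPX works on the 0/1 mask over the differing variables, and the mask is mapped back to parent values of any finite alphabet.

```python
def decode(mask, red, blue):
    # Bit m_i = 0 takes variable i from the red parent, m_i = 1 from the
    # blue parent; this works for any finite alphabet, not only bits.
    return [b if m else r for m, r, b in zip(mask, red, blue)]

# Parents over a four-letter alphabet; the mask is a DPX-style offspring.
assert decode([0, 1, 1, 0], list("abcd"), list("wxyz")) == ["a", "x", "y", "d"]
```

Because every offspring value comes from one of the two parents, the gene transmission property is preserved under this encoding.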

Experiments
This section focuses on the experimental evaluation of DPX in comparison with other previous crossover operators. In particular, we want to answer the following two research questions:

• RQ1: How does DPX perform compared to other crossover operators in terms of runtime and quality of offspring?
• RQ2: How does DPX perform when included in a search algorithm for solving NP-hard pseudo-Boolean optimization problems?
Regarding the other operators in the comparison, we include PX and APX because they are gray box crossover operators using the VIG, and we want to check the claims made in Section 3.4 that relate DPX to these two operators. We also include in this comparison two other operators whose dynastic potential has the same size as that of DPX (2^h(x,y)): uniform crossover and network crossover. In uniform crossover (UX), each variable is taken from one of the two parents with probability 0.5. Network crossover (NX) (Hauschild and Pelikan, 2010) uses the learned linkages among the variables to select groups of variables from one parent such that variables from the same connected component in the linkage graph are selected first. In our case, we have complete knowledge of the linkage graph: the variable interaction graph. The VIG is used in our implementation of network crossover, and variables are selected using randomized breadth-first search in the VIG, starting from a random variable, until half of the variables are selected. Then, the group of selected variables is taken from one of the parents and inserted into the other to form the offspring.
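Our reading of the NX procedure can be sketched as follows. This is a simplified version of ours: tie breaking, the start vertex and the handling of disconnected VIGs are implementation dependent.

```python
import random
from collections import deque

def network_crossover(vig, parent_a, parent_b, rng=random):
    # Randomized breadth-first search on the variable interaction graph:
    # starting from a random variable, collect vertices until half of the
    # variables are selected; those are copied from parent_a into parent_b.
    n = len(parent_a)
    start = rng.randrange(n)
    selected, queue = {start}, deque([start])
    while queue and len(selected) < n // 2:
        v = queue.popleft()
        neighbors = list(vig[v])
        rng.shuffle(neighbors)          # randomized visiting order
        for w in neighbors:
            if w not in selected and len(selected) < n // 2:
                selected.add(w)
                queue.append(w)
    offspring = list(parent_b)
    for v in selected:
        offspring[v] = parent_a[v]
    return offspring

# A ring-shaped VIG with 10 variables; parents are all-ones and all-zeros.
vig = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
child = network_crossover(vig, [1] * 10, [0] * 10, random.Random(7))
assert len(child) == 10 and sum(child) == 5   # half of the variables copied
```

Selecting connected groups of variables keeps interacting variables together, which is the linkage-preserving intent of NX; a disconnected VIG would need a restart from a fresh unselected vertex.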
Two different kinds of NP-hard problems are used in the experiments: NKQ Landscapes, an academic benchmark which allows us to parameterize the density of edges in the VIG by changing K, and MAX-SAT instances from the MAX-SAT Evaluation 2017. Random NKQ ("Quantized" NK) landscapes (Newman and Engelhardt, 1998) can be seen as Mk Landscapes with one subfunction per variable (m = n). Each subfunction f_ℓ depends on variable x_ℓ and K other random variables, and the codomain of each subfunction is the set {0, 1, ..., Q − 1}, where Q is a positive integer. Thus, each subfunction depends on exactly k = K + 1 variables. The values of the subfunctions are randomly generated, that is, for each subfunction and each combination of values of its variables, an integer in the interval [0, Q − 1] is selected uniformly at random. Random NKQ landscapes are NP-hard when K ≥ 2. The parameter K determines the highest order of the nonzero Walsh coefficients in the Walsh decomposition, which is a measure of the "ruggedness" of the landscape (Hordijk and Stadler, 1998). Regarding MAX-SAT, we used the same instances as Chicano et al. (2018) to allow the comparison with APX. They are 160 unweighted and 132 weighted instances.
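A random NKQ instance as just described can be generated in a few lines. The sketch below uses names of our own choosing and is seeded for reproducibility.

```python
import random

def random_nkq(n, K, Q, seed=0):
    rng = random.Random(seed)
    # Subfunction f_l depends on x_l plus K other distinct random variables.
    masks = [(l, *rng.sample([i for i in range(n) if i != l], K))
             for l in range(n)]
    # One uniformly random value in {0, ..., Q-1} per subfunction and input.
    tables = [[rng.randrange(Q) for _ in range(2 ** (K + 1))] for _ in range(n)]

    def f(x):
        total = 0
        for mask, table in zip(masks, tables):
            index = 0
            for v in mask:              # pack the K+1 bits into a table index
                index = (index << 1) | x[v]
            total += table[index]
        return total
    return f

f = random_nkq(n=20, K=3, Q=64, seed=42)
x = [0, 1] * 10
assert 0 <= f(x) <= 20 * 63                       # m = n subfunctions in [0, Q-1]
assert f(x) == random_nkq(n=20, K=3, Q=64, seed=42)(x)   # seeded => reproducible
```

The VIG of such an instance connects every pair of variables that co-occur in a subfunction mask, which is how K controls the density of edges mentioned above.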
The computer used for the experiments is a multicore machine with four Intel Xeon CPUs (E5-2680 v3) at 2.5 GHz, for a total of 48 cores, 192 GB of memory and Ubuntu 16.04 LTS. The source code of all the algorithms and operators used in the experiments can be found on GitHub, including a link to a Docker image to ease the reproduction of the experimental results.
Section 4.1 answers RQ1 and Section 4.2 answers RQ2. In Section 4.3 we include a local optima network analysis of the best overall algorithm identified in Section 4.2 to better understand its behaviour.

Crossover comparison
This section presents the experiments to answer RQ1: how does DPX perform compared to APX, PX, NX and UX in terms of runtime and quality of offspring? In the case of DPX we use values of β from 0 to 5. The optimization problem used is random NKQ Landscapes with n = 10 000 variables, K = 2, 3, 4, 5 and Q = 64. For each value of K we generated ten different instances, for a total of 40 NKQ Landscapes instances. In each of them we randomly generated 6 000 pairs of solutions with different Hamming distances between them and applied all the crossover operators. Six different values of Hamming distance h were used, generating 1 000 pairs of random solutions for each Hamming distance. Expressed as percentages of differing variables, the values of h are 1%, 2%, 4%, 8%, 16% and 32%. Two metrics were collected in each application of all the crossover operators: runtime and quality improvement over the parents. The crossover runtime was measured with nanosecond precision (expressed in the tables in an appropriate multiple) and the quality of the offspring is expressed with a relative measure of quality improvement. If x and y are the parent solutions and z is the offspring, we define the quality improvement ratio (QIR) in a maximization problem as

QIR(x, y, z) = (f(z) − max{f(x), f(y)}) / max{f(x), f(y)},

that is, the fraction of improvement of the offspring compared to the best parent. All the experiments were run with a memory limit of 5 GB of RAM. In the case of PX, APX and DPX we also collected the number of implicitly explored solutions, expressed by its logarithm (the offspring is a best solution in the set of implicitly explored solutions), and the fraction of runs in which the crossover behaves like an optimal recombination operator (returns the best solution in the largest dynastic potential). Tables 1 to 4 present the runtime, quality improvement, logarithm of explored solutions and percentage of crossover runs where an optimal offspring is returned. The figures are averages over 10 000 samples (1 000 crossover
operations in each of the ten instances for each value of K). Regarding the runtime, we observe some clear trends, which we comment on in the following. Uniform crossover is the fastest operator (less than 200 µs in all the cases). It randomly selects one parent for each differing bit, and this can be done very fast. The other operators are based on the VIG and they require more time to explore it and compute the offspring. Their runtime is best measured in milliseconds. NX, PX and APX have runtimes between less than one millisecond and 20 ms. DPX is clearly the slowest crossover operator when the parent solutions differ in 32% of the bits (3 200 variables), reaching 1 second of computation for instances with K = 5. For lower values of h, APX is sometimes slower than DPX. We also observe an increase in runtime with h, which can be explained by the larger recombination graph to explore. It is interesting to note that no exponential increase is observed in the runtime when β increases linearly. To explain this we have to look at Tables 3 and 4, where we can see that DPX is able to completely explore the dynastic potential when h is low, and the logarithm of the number of solutions explored increases very slowly with β because it is near the maximum possible. High runtime is one of the drawbacks of DPX; the other one is memory consumption. For the instances with n = 10 000 variables, 5 GB of memory is enough when K ≤ 5, but we ran some experiments with n = 100 000 in which DPX ended with an "Out of Memory" error. In these cases, increasing the memory could help, but the amount of memory required is higher than that required by PX and APX, and much higher than the memory required by UX and NX.
We can observe that the quality improvement ratio (Table 2) is always positive for PX, APX and DPX. These three operators, by design, cannot provide a solution that is worse than the best parent. We also observe that the quality improvement ratio is always the highest for DPX. APX and PX are the second and third operators regarding this metric, respectively. The worst operators are UX and NX. They always show a negative quality improvement ratio. We can explain this with the following intuition: the expected fitness of the offspring is similar to that of a random solution, and the probability of improving both parents is 1/4 because the probability of improving each one is 1/2, which means that in most of the cases (3/4 of the cases on average) the offspring will be worse than the best parent. We can support this with some theory. Parents x and y are random solutions and their fitness values, f(x) and f(y), are random variables with the same (unknown) distribution. In UX and NX, the child z is a random solution in the dynastic potential, and the expectation of the random variable f(z) should not differ much from that of f(x) and f(y), because the dynastic potential is large (at least 2^100 in our experiments) and NKQ Landscapes are composed of randomly generated subfunctions. We can use the following inequalities for the random variables:

max{f(x), f(y)} ≥ f(x),    max{f(x), f(y)} ≥ f(y),

where we can take the expectation at both sides and add them to get

2 E[max{f(x), f(y)}] ≥ E[f(x)] + E[f(y)].

Finally, we use the fact that E[f(x)] = E[f(y)] and the assumption that E[f(z)] = E[f(x)] to conclude that E[max{f(x), f(y)}] ≥ E[f(z)]: in expectation, the offspring of UX and NX does not improve the best parent. In Table 3 we can observe how DPX explores more solutions than PX and APX when β > 0.
We also observe how APX can outperform DPX in the number of explored solutions when β = 0, as we illustrated with an example in Section 3.4. The latter happens when h ≤ 800 for K = 2, when h ≤ 400 for K = 3, and when h ≤ 200 for K = 4 and K = 5. DPX always explores more solutions than PX independently of the value of β, as the theory predicts. As h grows, the dynastic potential increases and so does the logarithm of the number of solutions explored by DPX. In fact, in many cases this logarithm reaches in DPX the maximum possible value, h (the Hamming distance between the parents). This is also reflected in Table 4, where we show the percentage of runs where the full dynastic potential is explored by the operators. (In general, f(x) and f(y) are not independent because the Hamming distance between the parents is fixed, but the marginal distributions must be the same.)
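The expectation argument for UX and NX can be sanity-checked with a quick Monte Carlo simulation. Gaussian fitness values are our stand-in for the unknown fitness distribution (an assumption, not the paper's model); the point is only that the mean QIR of a random dynastic-potential child is negative.

```python
import random

def qir(fx, fy, fz):
    # Quality improvement ratio for maximization: fraction of improvement
    # of the offspring z over the best parent.
    best = max(fx, fy)
    return (fz - best) / best

rng = random.Random(0)
# Parents and a "random dynastic-potential child" drawn from the same
# fitness distribution, mimicking the UX/NX argument on NKQ Landscapes.
samples = [qir(rng.gauss(1000, 10), rng.gauss(1000, 10), rng.gauss(1000, 10))
           for _ in range(20000)]
mean_qir = sum(samples) / len(samples)
assert mean_qir < 0   # E[f(z)] = E[f(x)] <= E[max{f(x), f(y)}]
assert sum(s > 0 for s in samples) / len(samples) < 0.5   # improvement is rare
```

The simulation matches the sign of the QIR values reported for UX and NX in Table 2, while PX, APX and DPX avoid this effect by construction.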

Evolutionary Computation Volume x, Number x
Table 2: Average quality improvement ratio of crossover operators for random NKQ Landscapes with n = 10 000 variables. The numbers are in parts per thousand (‰). The Hamming distance between parents, h, is expressed as a percentage of variables.

Equivalently, Table 4 shows the percentage of runs in which the logarithm of the number of explored solutions equals h. In the case of PX and APX, an increase in h does not always imply an increase in the number of explored solutions: there is a value of h for which the logarithm of explored solutions reaches a maximum and then decreases with h. The number of explored solutions in these two operators is proportional to the number of connected components in the recombination graph. Starting from an empty recombination graph, the number of connected components increases as new random variables are added, and this explains why the logarithm of explored solutions in PX and APX increases with h for low values of h. At some critical value of h, the number of connected components starts to decrease because the new variables in the recombination graph join existing connected components instead of generating new ones. The exact value of h at which this happens is approximately n/(K + 1) for adjacent NKQ Landscapes (Chicano et al., 2017).
It is difficult to compute this value for the random NKQ Landscapes that we use here, but the critical value must be a decreasing function of K. This dependence of the critical value on K can also be observed in Table 3: the value of h at which the number of explored solutions is maximum decreases from h = 1 600 to h = 400 when K increases from 2 to 5.
Regarding the fraction of runs in which full dynastic potential exploration is achieved, PX and DPX with β = 0 behave the same and achieve full exploration for some pairs of parents only when K ≤ 3. APX is slightly better, and DPX is the best when β ≥ 1, behaving like an optimal recombination operator in most of the runs. The time limit of the search algorithms (60 seconds) was chosen based on our previous experience with these algorithms, including gray box optimization. In previous works and experiments, where we stopped the experiments after 300 seconds, we found that the results are rather stable after 60 seconds, with few exceptions. Using this short time also allows us to perform a larger set of experiments in the same time, obtaining more insight into how the algorithms work.
At the end of the tuning phase, irace provides several configurations that are considered equivalent in performance (irace does not find statistically significant differences among them). In all cases we took the first of those configurations, shown in Tables 6 and 7 for each combination of algorithm, crossover operator and set of instances. These are the parameters used in the experiments of Sections 4.2.2 and 4.2.3. We did not check the convergence speed of irace and we do not know how far the parameters in Tables 6 and 7 are from the best possible configurations. Thus, we do not draw any conclusions from the parameter values computed by irace. Our goal is just to compare all the algorithms and operators using good configurations, avoiding bias due to parameter setting.
Table 6: Configuration proposed by irace during the tuning phase of the algorithms for NKQ instances with n = 10 000 variables and K = 2, 5.

In order to reduce the bias due to the stochastic nature of the algorithms, they were run ten times for each instance, and average results are presented in the next sections. The Mann-Whitney test was run with the samples from the ten independent runs to check if the observed difference in the performance of the algorithms for each instance is statistically significant at the 0.05

Results for NKQ Landscapes
In this section we analyze the results obtained for NKQ Landscapes. We do not know the fitness value of the global optimum in the instances. For this reason, for each instance (there are ten instances per value of K) we computed the fitness of the best solution found in any run by any algorithm+crossover combination, f*. We used f* to normalize the fitness of the best solutions found by the algorithms in the instance. Thus, if x is the best solution found by an algorithm in an instance, we define the quality of solution x as q(x) = f(x)/f*. The benefit of this is that the quality of a solution x is a real number between 0 and 1 that measures how near the fitness of x is to the best known fitness (in this set of experiments). This also allows us to aggregate quality values from different instances because they are normalized to the same range. Higher values of quality are better.
The main results of the section are shown in Table 8, where the quality column is the average over ten runs and ten instances (100 samples in total) of the quality of the best solution found by the algorithm+crossover combination in the row. The statistical difference column shows the result of a Mann-Whitney test (with significance level 0.05) and a median comparison to check whether the differences observed in quality between the algorithm+crossover in the row and algorithm+DPX are statistically significant. In each row and for each instance, the results of the ten runs are used as input to the Mann-Whitney test, which determines equivalence or dissimilarity of the samples. The direction of the inequality is determined by the median comparison. In the table, the numbers followed by a black triangle (▲), white triangle (▽) and equal sign (=) are the numbers of instances in which the algorithm+crossover combination in the row is statistically better than, worse than or similar to algorithm+DPX. In Figures 6 and 8 we show the average quality of the best found solution at any time during the search using the different algorithms and crossover operators. We group the curves in the figures by algorithm (DRILS and EA) and ruggedness of the instances (K = 2 and K = 5).
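The statistical protocol can be sketched as follows: a Mann-Whitney rank-sum test on the two samples of ten runs, with the direction of a significant difference decided by comparing medians. The implementation below uses a normal approximation without tie correction, which is our simplification; the experiments may rely on an exact test.

```python
from math import sqrt

def midranks(values):
    # Average ranks for tied values (midranks).
    s = sorted(values)
    rank, i = {}, 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        rank[s[i]] = (i + 1 + j) / 2    # average of ranks i+1 .. j
        i = j
    return rank

def mann_whitney_significant(a, b, z_crit=1.96):
    # Rank-sum test with a normal approximation; adequate as a sketch for
    # two samples of ten runs.  Returns True if the difference is
    # significant at the 0.05 level (two-sided).
    n1, n2 = len(a), len(b)
    rank = midranks(a + b)
    u1 = sum(rank[v] for v in a) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return abs((u1 - mu) / sigma) > z_crit

def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

# Two hypothetical samples of ten normalized qualities each.
runs_dpx = [0.998, 0.997, 0.999, 0.996, 0.998, 0.997, 0.999, 0.998, 0.997, 0.998]
runs_ux = [0.989, 0.990, 0.988, 0.991, 0.987, 0.990, 0.989, 0.988, 0.991, 0.990]
assert mann_whitney_significant(runs_dpx, runs_ux)       # clearly different
assert median(runs_dpx) > median(runs_ux)                # direction: DPX better
assert not mann_whitney_significant(runs_dpx, runs_dpx)  # identical samples
```

This is the per-instance decision behind the ▲/▽/= counts: significance from the rank-sum test, direction from the medians.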
Table 8: Performance of the five recombination operators used in DRILS and EA when solving NKQ Landscapes instances with n = 10 000 variables. The symbols ▲, ▽ and = are used to indicate that the use of the crossover operator in the row yields statistically better, worse or similar results than the use of DPX in each algorithm.

The first important conclusion we obtain from the results in Table 8 is that DRILS performs
better with DPX than with any other crossover operator. There is no single NKQ Landscapes instance in our experimental setting where another crossover operator outperforms DPX inside DRILS. There are only a few instances (8 in total) where APX and/or PX show a similar performance. We can observe in Figure 6 (a) that DRILS+DPX obtains the best average quality at any time during the search when K = 2, followed by PX and APX. UX and NX provide the worst average quality in this set of instances. We observe in the figure signs of convergence for all the crossover operators. However, after a careful analysis of the time of last improvement, whose distribution is presented in Figure 7 (a), we notice that DRILS with DPX, APX and PX provides improvements to the best solution after 50 seconds in around 50% of the runs, while DRILS with UX and NX seems to get stuck 30 to 40 seconds after the start of the search, and much earlier in some cases. We wonder if this time could be biased by the different runtimes of the crossover operators. Perhaps the algorithm produces its last improvement near the end of the execution for DPX, APX and PX, but the previous one was in the middle of the run, earlier than the last improvement of NX and UX. To investigate this, we show in Figure 7 (b) the distribution of the average time between improvements for the last three improvements. This time is far below one second in most of the cases for all the crossover operators, which means that they produce better solutions several times per second on average until the time of last improvement, and there is no bias related to the different crossover runtimes.
In the more rugged instances (K = 5), shown in Figure 6 (b), DRILS+DPX is the best performing algorithm after 20 seconds of computation. Before that time, DRILS+UX provides the best performance. We can explain this with the help of Table 1. UX is the fastest crossover operator, and helps to advance in the search at the beginning. DPX (as well as PX and APX) are slower operators and, even if they provide better quality offspring, they slow down the search, requiring more time to be effective. We have to recall here that DRILS includes a hill climber, which explains why, using a random black box operator like UX, the quality of the best solution still improves quickly at the beginning of the search. If we analyze the performance of the crossover operators in EA, we observe that DPX is also the best crossover operator for the instances with K = 2. However, when K = 5, DPX is outperformed by PX and, in general, shows a performance similar to APX, NX and UX. The best crossover operator in EA when K = 5 is PX. Taking a look at Figure 8 (a), we observe that EA+PX and EA+APX provide the highest average quality of the best solution during the first 35 to 40 seconds; then they are surpassed by DPX. The reason for this slow behaviour of EA+DPX is again the high runtime of DPX, which in the case of EA is higher than in the case of DRILS due to the fact that the solutions in the initial population are random and, thus, differ in around h = 0.5n = 5 000 bits. This slow runtime is especially critical when K = 5 (Figure 8 (b)), where EA+DPX is not able to reach the average quality of EA+PX in 60 seconds. In Figure 9 we plot the time required by DPX during the search when it is included in DRILS and EA. The runtime of DPX in DRILS is a few milliseconds because the parent solutions differ in around αn = 462 bits (α = 0.0462 according to Table 6), while the runtime of DPX in EA starts at six seconds and goes down to one second at the end of the search. This behaviour suggests that in an EA a hybrid approach combining PX at the beginning of the search and
DPX later during the search could be a better strategy to reach better quality solutions in a short time. The random crossover operators, UX and NX, show poorer performance in EA compared to DRILS, probably because there is no local search in EA. Finally, although it is not our goal to compare search algorithms (only crossover operators), we would like to highlight some observations regarding the average quality of the best found solutions in DRILS and EA. We conclude that DRILS is always better than EA. For K = 2, the highest quality in EA is obtained when DPX is used, and it is only slightly higher than that of DRILS with NX and UX (the worst performing crossover operators in DRILS). For K = 5 the difference is even larger: EA+PX reaches an average quality of 0.9085 (the highest for EA), which is far below any average quality of DRILS, all above 0.9934 (that of DRILS+NX). We think the reason could be the presence of a local search operator in DRILS, while EA is mainly guided by selection and crossover (when PX, APX or DPX is used).

Results for MAX-SAT
In this section we analyze the results obtained for MAX-SAT. We also use the quality of the solutions defined in Section 4.2.2 as a normalized measure of quality. Table 9 presents the main results of the section. The meaning of the columns is the same as in Table 8. In the case of DRILS, all the instances are used in the statistical tests and the computation of the average quality (160 instances in the unweighted category and 132 instances in the weighted category). In the case of EA, we observed that it failed to complete the execution in some runs for some instances when it was combined with DPX. The reason was an out of memory problem, caused by the large number of differing variables among the solutions in the initial generations. In this case, we only computed the average quality for instances in which at least 90% of the runs were successful (nine of the ten runs), and we manually counted the instances with less than 90% successful runs as significantly worse for EA+DPX than for the remaining EA+crossover combinations in Table 9, without performing any statistical test. Nine unweighted instances and five weighted instances had less than 90% successful EA+DPX runs. Three unweighted instances and no weighted instance had exactly 90% successful runs, and in the remaining instances EA+DPX ended successfully in all the runs. From the results in Table 9 we conclude that both algorithms (DRILS and EA) perform better, in general, using DPX as the crossover operator. Only in very few cases does any other crossover operator improve the final result of DRILS. When EA is used, the difference is not so clear, but it is still significant.
Once again, we also observe that the performance of DRILS is better than that of EA. The maximum average quality in EA is 0.9649 (EA+APX in the weighted instances), while the average quality of DRILS is always above 0.9915 for all the crossover operators and categories of instances.
We do not expect DRILS or EA to be competitive with state-of-the-art incomplete MAX-SAT solvers like SATLike-c (Cai and Lei, 2020), because they are general optimization algorithms. However, DPX could be useful to improve the performance of some incomplete MAX-SAT solvers, as PX did in recent work (Chen et al., 2018).

DPX alone as search algorithm
In this section we analyze the results of DPX used alone to solve pseudo-Boolean optimization problems. We apply DPX to a random solution and its complement, with the goal of finding the global optimum. Due to technical limitations of our current implementation of DPX, we cannot set β = ∞, but we use the maximum value of β allowed by the implementation, which is 28. This also means that if the whole search space is not explored for a concrete instance, the result could depend on the initial (random) solution. For this reason, we run DPX ten times per instance on different random solutions. We set a runtime limit of 12 hours.
After applying DPX to the 20 instances of NKQ Landscapes (ten instances for each value of K), we found that all the runs were prematurely terminated due to an out of memory error (the memory limit was set to 5 GB, as in the previous experiments). In the case of the MAX-SAT instances, the runs finished in less than 12 hours for eight unweighted instances and two weighted instances. In 152 unweighted instances and 128 weighted ones, DPX ran out of memory. In two weighted instances, we stopped DPX after 12 hours of computation. Table 10 shows the MAX-SAT instances that finished without error and, for each one, the average and maximum number of satisfied clauses in the ten independent runs of DPX, the average runtime (in seconds) and the base-2 logarithm of the number of implicitly explored solutions (it is the same in all runs). The minimum number of satisfied clauses found in all the runs of DRILS+DPX in less than 60 seconds is shown for comparison in the last column. In the logarithm of implicitly explored solutions, we mark with an asterisk the runs that fully explored the search space (thus, DPX was able to certify the global optimum).
The main conclusion is that DPX alone is not good compared to its combination with a search algorithm. It requires a lot of memory and time to compute the result. Even when DPX finishes, its results are always outperformed by DRILS+DPX (see the last column of Table 10).
In the previous experiment, we forced DPX to use a lot of memory and runtime because the value of β was high. One of the advantages of DPX compared to an optimal recombination operator is that we can limit the time and space complexity of DPX using β. We wonder if DPX alone with a low value of β could outperform DRILS or EA. In order to answer this question, we designed a new experiment in which we run a new algorithm, called iDPX (iterated DPX), which consists of the iterated application of DPX to a random solution and its complement. The algorithm stops when a predefined time limit is reached (one minute in our case). We used irace to tune iDPX. The only parameter to tune is β, and we used the same configuration of irace that we used to tune DRILS and EA. According to irace, the best value of β is 5 for both NKQ Landscapes and MAX-SAT. We ran iDPX ten times per instance and compared it with DRILS and EA using the five crossover operators.
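The iDPX loop is simple enough to sketch. The `dpx` argument stands for the crossover routine; the `better_parent` placeholder below is ours, used only to make the sketch runnable, and a real run would plug in the actual operator.

```python
import random
import time

def idpx(n, beta, time_limit, dpx, rng=random):
    # iDPX sketch: repeatedly recombine the incumbent solution with its
    # bitwise complement (so all n variables differ) until time runs out.
    x = [rng.randrange(2) for _ in range(n)]
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        y = [1 - b for b in x]        # complement of the incumbent
        x = dpx(x, y, beta)           # offspring replaces the incumbent
    return x

# Placeholder crossover (ours): return the better parent under a toy
# onemax objective.  It only illustrates the control flow of iDPX.
better_parent = lambda x, y, beta: max((x, y), key=sum)
best = idpx(n=16, beta=5, time_limit=0.05, dpx=better_parent,
            rng=random.Random(3))
assert len(best) == 16 and sum(best) >= 8
```

Because the two parents are complementary, every variable differs, which is exactly the worst case for DPX's memory usage and explains the out-of-memory behaviour reported below.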
In NKQ Landscapes, iDPX was statistically significantly worse than all the remaining ten algorithms (DRILS and EA with the five crossover operators) for all the instances. The average quality of the solutions obtained by iDPX was 0.7543 for K = 2 and 0.6640 for K = 5, which are very low values compared to the ones obtained by the other algorithms (see Table 8).
In MAX-SAT, iDPX finished prematurely with out-of-memory errors in 59 unweighted instances and 25 weighted ones. In these cases, we ran iDPX with β = 0 to alleviate the memory requirements, but the algorithm ran out of memory again. When iDPX does not have a result for an instance due to an error, we mark it as statistically significantly worse than the other algorithms that finished with a result and equal to the algorithms that did not finish (EA+DPX does not finish in some instances). In Table 11 we compare iDPX with the other ten algorithms, providing the same metrics as in Section 4.2.3. The average quality values change with respect to the ones in Table 9 because the instances used for the average computation in Table 11 are a subset of those used in Table 9, due to the memory errors of iDPX. We observe in Table 11 that iDPX is statistically worse than DRILS+DPX, DRILS+APX and DRILS+PX in all the instances. iDPX does not outperform EA+DPX in any case (it is equivalent in 16 instances). Furthermore, iDPX is worse than the algorithms not using DPX (e.g., EA+UX) in most of the instances. Thus, the main conclusion of this section is that DPX alone is not competitive with any search algorithm and should only be used as a crossover operator inside a search algorithm. DPX can provide the global optimum only when the number of variables is low or the VIG has few edges, which is hardly the case in NP-hard problems (it only happened in three MAX-SAT instances of our benchmark).

Local optima network analysis of DRILS+DPX
The best algorithm+crossover combination identified in Section 4.2 is DRILS+DPX, for both NKQ Landscapes and MAX-SAT. In this section we conduct a local optima network (LON) analysis (Ochoa et al., 2008, 2015) to better understand the search dynamics of DRILS+DPX. A LON is a graph where nodes are local optima and edges represent search transitions among them with a given operator. DRILS has two transition operators, crossover (DPX) and perturbation (each followed by local search to produce a local optimum), which are modelled as two types of edges in the LON: DPX + hill climber and perturbation + hill climber. An edge is improving if its end node has better fitness than its start node, equal if the start and end nodes have the same fitness, and worsening if the end node has worse fitness than the start node. Our analysis reports the LONs extracted from ten runs of DRILS+DPX solving NKQ Landscapes instances with n = 10 000 and the two extreme values of K used in our experiments (2 and 5). The parameters used are the ones in Table 5 for this algorithm and value of K. The process of extracting the LON data is as follows. For each of the ten runs per instance, we record every unique local optimum and edge encountered, from the start until the end of the run. As in previous work (Ochoa and Veerapen, 2018), we combined the search trajectories produced by the ten runs as a sampling process to construct a single LON; our analysis revealed that there are no overlapping nodes and edges across the runs, so the combined LONs contain ten connected components, one characterizing each run. Table 12 reports some basic LON statistics, specifically the number of nodes (local optima) and the number of edges of each type, crossover (DPX) and perturbation, grouped as improving, equal and worsening edges. They are statistics over the ten different instances (and LONs). There cannot be a worsening crossover edge, by design of DPX, but we include the column for the sake of completeness. The number of nodes and edges sampled is similar for the different values of K. Although DPX is slower for K = 5, its perturbation factor α = 0.0462 is lower than in the case of K = 2. With the parameters provided by irace, DRILS+DPX visits a similar number of unique local optima during the search for both values of K. It is interesting to note that most of the crossover edges are improving, while most of the perturbation edges are worsening. This suggests that the role of perturbation within DRILS is to provide diversity as raw material for DPX, which then incorporates newly found variable assignments that improve fitness.
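The edge bookkeeping behind these statistics can be sketched as follows. This is an illustrative sketch only: the `(start_fitness, end_fitness, kind)` edge encoding and the function names are assumptions made for the example, not the data format used in our experiments.

```python
from collections import Counter

def classify_edge(start_fitness, end_fitness, maximizing=True):
    """Classify a LON edge by comparing the fitness of its endpoints:
    improving, equal, or worsening, as defined in the text."""
    if end_fitness == start_fitness:
        return "equal"
    improved = (end_fitness > start_fitness) if maximizing \
        else (end_fitness < start_fitness)
    return "improving" if improved else "worsening"

def lon_statistics(edges):
    """edges: iterable of (start_fitness, end_fitness, kind) tuples, where
    kind is 'crossover' or 'perturbation' (hypothetical encoding).
    Returns counts per (kind, classification) pair."""
    return Counter((kind, classify_edge(fs, fe)) for fs, fe, kind in edges)
```

Counting edges this way, grouped by transition operator, yields exactly the six categories tabulated per LON (with the crossover/worsening cell empty by design of DPX).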
In order to have a closer look at the search trajectories, Figure 10 visualizes the LON of a run obtaining the best solution for two selected instances with n = 10 000 and the two values of K. The plots use a tree-based graph layout where the root node is the local optimum at the beginning of the run, highlighted in cyan. The best local optima found are colored yellow. Red edges are crossover edges, where each parent adds an edge pointing to the offspring. Blue edges with a triangular marker represent perturbation edges. Solid, dashed and dotted line styles are used for improving, worsening and equal edges, respectively. Since the complete LONs are large (more than 2 000 nodes in each run), we only show the neighbourhood around the starting and best nodes. The search process is very similar in structure for the two instances and reflects the working of the algorithm. However, we also observe some remarkable differences for the different values of K: there are more improving perturbation edges when K = 5 than when K = 2.
In fact, only one improving perturbation edge appears for K = 2, which means that it is difficult to improve a solution provided by DPX within the neighborhood reached by the perturbation operator followed by local search. We also highlight that improving perturbation edges appear only at the beginning of the search in both cases (K = 2 and K = 5); after a few steps, only DPX provides improvements over the best found solution. We also see that for K = 2 many different best local optima, joined by equal crossover edges, are found, while only one appears in the instance with K = 5. DPX shows here an ability to jump from one best local optimum to another. No equal crossover edges appear near the end node when K = 5: once the best local optimum is found, a worsening perturbation edge makes the search escape from it.

Conclusions
This paper proposes a new gray-box crossover operator, dynastic potential crossover (DPX), with the ability to obtain a best offspring out of the full dynastic potential when the density of interactions among the variables is low, in which case it behaves like an optimal recombination operator. We have provided theoretical results proving that DPX, when recombining two parents, generates an offspring no worse than partition crossover (PX) and usually no worse than articulation points partition crossover (APX). We also performed a thorough experimental study to compare DPX with four crossover operators, including PX and APX, using two different algorithms: DRILS and EA. We used NKQ Landscapes and MAX-SAT instances in the comparison. We conclude that DPX offers great exploration ability, providing offspring that are much better than their parents compared to the other crossover operators. However, this ability comes at a cost in runtime and memory consumption, the main drawback of the operator. Included in DRILS, DPX outperforms all the other crossover operators on NKQ Landscapes and MAX-SAT instances. In the case of EA, it also outperforms the other crossover operators except for NKQ Landscapes with K = 5, where PX is the best performing operator. Thus, we suggest that a combined use of PX and DPX in EA could be optimal for high-epistasis functions. An interesting future line of research is to analyze the shape of the connected components of the recombination graph to design pre-computed clique trees that could speed up the operator. The code of DPX can also be much optimized when we focus on particular problems, like MAX-SAT; we used in this work a general implementation that can be specialized for particular problems.

Acknowledgments
This work was partially funded by the National Council for Scientific and Technological Development (CNPq), under grant 305755/2018-8. Special thanks to J.A. Lozano, Sebastian Herrmann and Hansang Yun for providing pointers to algorithms for the clique tree construction, and to the anonymous reviewers for their thorough review and constructive suggestions.

Figure 4: Maximum cardinality search applied to the third connected component of Figure 2 (left) and clique tree with the sets S_i and R_i (right).

Figure 5: Connected component in a recombination graph (left) and its clique tree (right). DPX with β = 1 explores the articulation points in a different way than APX does.

Figure 6: Average quality of the best found solution at any time for DRILS using different crossover operators solving NKQ Landscapes.

Figure 8: Average quality of the best found solution at any time for EA using different crossover operators solving NKQ Landscapes.

Figure 9: Runtime (in seconds) of DPX as the search progresses in DRILS and EA solving NKQ Landscapes with K = 5.
SATLike-c was the winner of the unweighted incomplete track of the MAX-SAT Evaluation 2021 and took third place in the weighted incomplete track.

Table 1: Average runtime of crossover operators for random NKQ Landscapes with n = 10 000 variables. Time is in microseconds (µs) for UX and in milliseconds (ms) for the rest. The Hamming distance between parents, h, is expressed as a percentage of variables.

Table 3: Average base-2 logarithm of the number of solutions implicitly explored by PX, APX and DPX for random NKQ Landscapes with n = 10 000 variables. The Hamming distance between parents, h, is expressed as a percentage of variables.

Table 5: Parameters tuned by irace and their domains.

Table 7: Configuration proposed by irace during the tuning phase of the algorithms for unweighted and weighted MAX-SAT instances.

Table 9: Performance of the five recombination operators used in DRILS and EA when solving MAX-SAT instances. The symbols ▲, ▽ and = indicate that the crossover operator in the row yields statistically better, worse or similar results than DPX.

Table 10: Results of DPX with β = 28 for the MAX-SAT instances that finished in less than 12 hours without memory error. The minimum number of satisfied clauses found in all the runs of DRILS+DPX in less than 60 seconds is shown for comparison in the last column. The asterisk means that the whole search space was explored and the number of satisfied clauses is the global maximum.

Table 11: Performance comparison of iDPX, DRILS and EA when solving MAX-SAT instances. The symbols ▲, ▽ and = indicate that the algorithm in the row yields statistically better, worse or similar results than iDPX.

Table 12: LON statistics for NKQ Landscapes with n = 10 000 variables. Average (avg) and standard deviation (std) over the ten instances for each value of K are shown.