Abstract
An optimal recombination operator for two-parent solutions provides the best solution among those that take the value for each variable from one of the parents (gene transmission property). If the solutions are bit strings, the offspring of an optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions. Exploring this hyperplane is computationally costly, in general, requiring exponential time in the worst case. However, when the variable interaction graph of the objective function is sparse, exploration can be done in polynomial time. In this article, we present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis combinatorial problems. We compare this operator, both theoretically and experimentally, with traditional crossover operators, like uniform crossover and network crossover, and with two recently defined efficient recombination operators: partition crossover and articulation points partition crossover. The empirical comparison uses NKQ Landscapes and MAX-SAT instances. DPX outperforms the other crossover operators in terms of quality of the offspring and provides better results when included in a trajectory-based and a population-based metaheuristic, but it requires more time and memory to compute the offspring.
1 Introduction
Gene transmission (Radcliffe, 1994) is a popular property commonly fulfilled by many recombination operators for genetic algorithms. When the solutions are represented by a set of variables taking values from finite alphabets (possibly different from each other) with no constraints among the variables, this property implies that any variable in any offspring will take the value of the same variable in one of the parents. In particular, the variables having the same value for both parents will have the same value in all the offspring (i.e., the respect property [Radcliffe, 1994] is obeyed). The other (differing) variables will take one of the values coming from a parent solution. The gene transmission property is a formalization of the idea that taking (good) features from the parents should produce better offspring. This probably explains why most of the recombination operators try to fulfill this property or some variant of it. The set of all the solutions that can be generated by a recombination operator from two parents is called dynastic potential. If we denote with $d(x,y)$ the Hamming distance (number of differing variables) between two solutions $x$ and $y$, the cardinality of the largest dynastic potential of a recombination operator fulfilling the gene transmission property is $2^{d(x,y)}$. The dynastic potential of uniform crossover has this size. The dynastic potential of single-point crossover has size $2\,d(x,y)$ because it takes from 1 to $d(x,y)$ consecutive differing bits from any of the two parents. In two-point crossover, two cut points are chosen. If the first one is before the first differing bit or the second one is after the last differing bit, the operator behaves like the single-point crossover. Otherwise, the two cut points are chosen among the positions between differing bits and, for each choice of cut points, the bits in between can be taken from any of the two parents, providing $(d(x,y)-1)(d(x,y)-2)$ additional solutions to the single-point crossover dynastic potential. Thus, the size of the two-point crossover dynastic potential is $2\,d(x,y) + (d(x,y)-1)(d(x,y)-2)$. In general, $j$-point crossover has a dynastic potential of size $O(d(x,y)^j)$ when $j$ is small compared with the number of decision variables, $n$.
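As a quick sanity check of these counts, the following Python sketch enumerates, by brute force, the offspring of uniform, single-point, and two-point crossover for a small, made-up pair of parents and compares the sizes of the resulting dynastic potentials with the formulas above (the parent strings and helper names are illustrative only).

```python
from itertools import product

def uniform_offspring(x, y):
    # every offspring takes each bit from one of the two parents
    return {tuple(xi if c == 0 else yi for xi, yi, c in zip(x, y, choice))
            for choice in product((0, 1), repeat=len(x))}

def one_point_offspring(x, y):
    # cut point p: prefix from one parent, suffix from the other
    return {x[:p] + y[p:] for p in range(len(x) + 1)} | \
           {y[:p] + x[p:] for p in range(len(x) + 1)}

def two_point_offspring(x, y):
    # cut points p <= q: the middle segment comes from the other parent
    n = len(x)
    return {x[:p] + y[p:q] + x[q:] for p in range(n + 1) for q in range(p, n + 1)} | \
           {y[:p] + x[p:q] + y[q:] for p in range(n + 1) for q in range(p, n + 1)}

x = (0, 0, 0, 0, 0, 0)
y = (1, 0, 1, 1, 0, 1)                        # Hamming distance d = 4
d = sum(a != b for a, b in zip(x, y))
print(len(uniform_offspring(x, y)), 2 ** d)                       # 16 16
print(len(one_point_offspring(x, y)), 2 * d)                      # 8 8
print(len(two_point_offspring(x, y)), 2 * d + (d - 1) * (d - 2))  # 14 14
```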
An optimal recombination operator (Eremeev and Kovalenko, 2013) obtains a best offspring from the largest dynastic potential of a recombination operator fulfilling the gene transmission property, which has size $2^{d(x,y)}$. In the worst case, such a recombination operator is computationally expensive because finding a best offspring in the largest dynastic potential of an NP-hard problem is itself an NP-hard problem. A proof of the NP-hardness is given by Eremeev and Kovalenko (2013), but it can also be easily concluded from the fact that applying an optimal recombination operator to two complementary solutions (e.g., all zeroes and all ones) is equivalent to solving the original problem, so the NP-hardness is inherited from that of the original problem.
DPX has a non-negative integer parameter, $\beta$, bounding the exploration of the dynastic potential. The precise condition to certify that the complete dynastic potential has been explored depends on $\beta$ and will be presented in Section 3.3. The worst-case time complexity of DPX is $O(4^{\beta}(n+m) + n^2)$ for Mk Landscapes, where $n$ is the number of variables and $m$ the number of subfunctions.
DPX uses the variable interaction graph of the objective function to simplify the evaluation of the solutions in the dynastic potential by using dynamic programming. The ideas for this efficient evaluation date back to the basic algorithm by Hammer et al. (1963) for variable elimination and are also commonly used in operations over Bayesian networks (Bodlaender, 2005). DPX requires more than just the fitness values of the parent solutions to do the job and, thus, it is framed in the so-called gray box optimization (Whitley et al., 2016).
Recently defined crossover operators similar to DPX are partition crossover (PX) (Tinós et al., 2015) and articulation points partition crossover (APX) (Chicano et al., 2018). Although they were proposed to work with pseudo-Boolean functions, they can also be applied to the more general representation of variables defined over a finite alphabet. We will follow the same approach here. In the rest of the article, we will focus on a binary representation, where each variable takes its value in the set $\{0, 1\}$. But most of the results, claims, and the operator itself can be applied to a more general solution representation, where variables take values from finite alphabets. PX and APX also use the variable interaction graph of the objective function to efficiently compute a good offspring among a large number of them. PX and APX have shown excellent performance in different problems (Chicano et al., 2017, 2018; Chen et al., 2018; Tinós et al., 2018). When combined with other gray-box optimization operators, PX and APX were capable of optimizing instances with one million variables in seconds. We compare DPX with these two operators from a theoretical and experimental point of view. The new recombination operator is also compared with uniform crossover and with network crossover (Hauschild and Pelikan, 2010), which also uses the variable interaction graph.
This work extends a conference paper (Chicano et al., 2019) where DPX was first presented. The new contributions of this journal version are as follows:
We added a theorem to prove the correctness of the dynamic programming algorithm.
We revised the text to clarify some terms, add extended explanations of some details of the operator and provide more examples.
We provided new experiments to compare the performance of DPX with that of other crossover operators when they are applied to random solutions. The other crossover operators are uniform crossover, network crossover, partition crossover, and articulation points partition crossover.
We included DPX in an evolutionary algorithm to check its behavior in this kind of algorithm.
We compared the five crossover operators inside a trajectory-based and a population-based metaheuristic used to solve two NP-hard pseudo-Boolean optimization problems: NKQ Landscapes and MAX-SAT.
We modified the code of DPX to improve its performance and updated the source code repository with the new code of the evolutionary algorithm, the newly implemented crossover operators used in the comparison, and the algorithms to process the results.
We added a local optima network analysis (Ochoa and Veerapen, 2018) of DPX included within an iterated local search framework to better understand its working principles.
2 Background
In our gray-box optimization setting, the optimizer can independently evaluate the subfunctions $f_i$ in Equation (1); that is, it is able to evaluate $f_i$ when the values of its arguments are given, and it also knows the variables each $f_i$ depends on. This contrasts with black-box optimization, where the optimizer can only evaluate full solutions and get their fitness value $f(x)$. We assume that we do not know the internal details of the $f_i$ (we can only evaluate them), and this is why we call it gray box and not white box.
2.1 Variable Interaction Graph
The variable interaction graph (VIG) (Whitley et al., 2016) is a useful tool that can be constructed under gray-box optimization. It is a graph $G = (V, E)$, where $V$ is the set of variables and $E$ is the set of edges representing all pairs of variables having nonlinear interactions. The set $\mathbb{B}^n$ contains all the binary strings of length $n$. Given a set of variables $W \subseteq V$, the notation $1_W$ is used to represent a binary string of length $n$ with 1 in the positions of the variables in $W$ and zero in the rest; for example, $1_{\{0,2\}}$ has ones exactly in the positions of variables $x_0$ and $x_2$. If the set has only one element, we will omit the curly brackets in the subscript, e.g., $1_i$ instead of $1_{\{i\}}$. Observe that the first index of the binary strings is 0. When the length $n$ is clear from the context we omit it. The operator $\oplus$ is a bitwise exclusive OR.
We say that variables $x_i$ and $x_j$ have a nonlinear interaction when the expression $f(x \oplus 1_i) - f(x)$ does depend on $x_j$. Checking this dependency can be computationally expensive because it requires an evaluation of the expression on all the strings in $\mathbb{B}^n$. There are other approaches to find the nonlinear interactions among variables. The first one assumes that every pair of variables appearing together in a subfunction has a nonlinear interaction. It is not necessarily true that there is a nonlinear interaction among variables appearing as arguments in the same subfunction, but adding extra edges to $E$ does not break the correctness of the operators based on the VIG and requires only a very simple check that is computationally cheap. The graph obtained this way is usually called a co-occurrence graph. A second, and precise, approach to determine the nonlinear interactions is to apply the Fourier transform (Terras, 1999), and then look at every pair of variables to determine if there is a nonzero Fourier coefficient associated with a term containing the two variables. This second method can be done in time that is exponential in $k$ but polynomial in $n$ if we know the variables appearing in each subfunction and we can evaluate each subfunction independently (our gray-box setting). The interested reader can see the work of Whitley et al. (2016) and Rana et al. (1998) for a more detailed description of this second approach.
The first approach is especially useful when $k$ is relatively large because it requires $O(m k^2)$ time, which is polynomial in $k$, in contrast with the exponential time in $k$ of the second approach. In some problems, like MAX-SAT, we know that the co-occurrence graph (obtained by the first approach) is exactly the variable interaction graph (obtained by the second approach). In some other problems, like NK Landscapes, both are the same with high probability. In these two cases, it makes sense to use the first approach and work with the co-occurrence graph even in the case that $k$ is small enough to compute the Fourier transform in a reasonable time. In the rest of the article, when we use the term ``variable interaction graph'' it could be replaced by ``co-occurrence graph.''
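A minimal sketch of the first approach, assuming the gray-box information is just the list of variables of each subfunction (the toy instance below is made up): building the co-occurrence graph only requires connecting every pair of variables that co-occur in some subfunction, which costs $O(m k^2)$ time.

```python
from itertools import combinations

def co_occurrence_graph(n, subfunction_vars):
    """One node per variable; an edge for every pair of variables that appear
    together as arguments of the same subfunction."""
    adjacency = {i: set() for i in range(n)}
    for variables in subfunction_vars:
        for u, v in combinations(variables, 2):
            adjacency[u].add(v)
            adjacency[v].add(u)
    return adjacency

# toy instance with 5 variables and 3 subfunctions (only their argument lists matter here)
graph = co_occurrence_graph(5, [[0, 1, 2], [2, 3], [3, 4]])
print(graph[2])   # {0, 1, 3}
```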
2.2 Recombination Graph
Articulation points partition crossover (APX) (Chicano et al., 2018) goes further and finds the articulation points of the recombination graph. Articulation points are variables whose removal increases the number of connected components. A bi-connected component of a graph is a maximal subgraph with the property that there are two vertex-disjoint paths between any two vertices. Articulation points join several bi-connected components. In our running example (see Figure 2), three variables are articulation points, each of them joining bi-connected components of the recombination graph. Then, APX efficiently simulates what happens when the articulation points are removed, one at a time, from the recombination graph by flipping the corresponding variable in each of the parent solutions before applying PX, and the best solution is returned as offspring. In our example, APX would work as follows. First, it applies PX to the red solution and a copy of the blue solution where the first articulation point is flipped, and stores the best offspring. Then, it applies PX to the blue solution and a copy of the red solution with the same variable flipped. The same process is repeated with flips in the other two articulation points. Finally, it applies PX to the original red and blue solutions. APX returns the best solution of all the applications of PX. The key ingredient of APX is that all these computations do not require repeated applications of PX. With the appropriate data structures, all the computations can be done in $O(n + m)$ time, the same complexity as PX, for any choice of parents in Mk Landscapes.
3 Dynastic Potential Exploration
The proposed dynastic potential crossover operator (DPX) takes the ideas of PX and APX even further. DPX starts from the recombination graph, like the one in Figure 2. Then, DPX tries to exhaustively explore all the possible combinations of the parent values in the variables of each connected component to find the optimal recombination in the smallest hyperplane containing the two parents. This exploration is not done by brute force, but using dynamic programming. Following with our example, in order to compute the best combination for the three variables of one of the components, we need to enumerate the $2^3 = 8$ ways of taking each variable from one of the parents, and this is not better than brute force. However, another component, containing four differing variables, forms a path. In this case, we can store in a table what is the best option for the first variable of the path when each of the two possible values of its neighbor is selected. Then, we can store in the same table the value of the sum of the subfunctions depending only on these two variables (and possibly common variables eliminated in the recombination graph). After this step, we can consider that the first variable has been removed from the problem, and we can proceed in the same way with the rest of the variables of the path, in order. Finally, 12 evaluations are necessary, instead of the 16 required by brute force. In general, for a path of length $\ell$, dynamic programming requires $4(\ell - 1)$ evaluations while brute force requires $2^{\ell}$ evaluations.
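The following sketch illustrates the path case under stated assumptions: the component is a path of binary variables (0 means taking the value from the red parent, 1 from the blue parent) and each subfunction depends on two consecutive variables of the path; the subfunction values are made up. Eliminating the variables from one end of the path needs $4(\ell - 1)$ partial evaluations, while brute force enumerates $2^{\ell}$ assignments.

```python
from itertools import product

def best_on_path(subfunctions):
    """Maximize sum_i f_i(v_i, v_{i+1}) over binary v_0 .. v_L by eliminating
    the variables from left to right (dynamic programming on a path)."""
    table = {0: 0.0, 1: 0.0}   # table[b]: best value of the eliminated prefix when
    considered = 0             #           the current frontier variable equals b
    for f in subfunctions:
        new_table = {}
        for nxt in (0, 1):                       # value of the next variable on the path
            best = float("-inf")
            for cur in (0, 1):                   # value of the variable being eliminated
                best = max(best, table[cur] + f(cur, nxt))
                considered += 1
            new_table[nxt] = best
        table = new_table
    return max(table.values()), considered

def best_brute_force(subfunctions):
    length = len(subfunctions) + 1               # number of variables on the path
    best, considered = float("-inf"), 0
    for assignment in product((0, 1), repeat=length):
        considered += 1
        value = sum(f(assignment[i], assignment[i + 1])
                    for i, f in enumerate(subfunctions))
        best = max(best, value)
    return best, considered

# a path with 4 variables and 3 pairwise subfunctions (made-up values)
fs = [lambda a, b: [[1, 0], [2, 3]][a][b],
      lambda a, b: [[0, 2], [1, 0]][a][b],
      lambda a, b: [[1, 1], [0, 4]][a][b]]
print(best_on_path(fs))        # (8.0, 12): 12 = 4 * (4 - 1) partial evaluations
print(best_brute_force(fs))    # (8, 16):   16 = 2 ** 4 full assignments
```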
The idea of variable elimination using dynamic programming dates back to the 1960s and the basic algorithm by Hammer et al. (1963). The problem of variable elimination has also been studied in other contexts, like Gaussian elimination (Tarjan and Yannakakis, 1984) and Bayesian networks (Bodlaender, 2005). In fact, we utilize the ideas for computing the junction tree in Bayesian networks. First, a chordal graph is obtained from the recombination graph using maximum cardinality search (MCS) and a fill-in procedure to add the missing edges. Then the clique tree (or junction tree) is computed, which will fix the order in which the variables are eliminated using dynamic programming. After assigning the subfunctions to the cliques in the clique tree, dynamic programming is applied to find a best offspring, which is later reconstructed using the information computed in tables during dynamic programming. The runtime of the variable elimination method depends, among others, on the number of missing edges added by the fill-in procedure. Unfortunately, finding the minimum fill-in is an NP-hard problem (Bodlaender, 2005). Thus, we do not try to eliminate the variables in the most efficient way, but we apply algorithms that are efficient in finding a variable elimination order. Our proposal, DPX, is, to the best of our knowledge, the first to apply these well-known ideas to the design of a recombination operator. There is also a difference between our approach and the variable elimination methods in the literature: we introduce a parameter, $\beta$, to limit the exploration of the variables (see Section 3.3). The high-level pseudocode of DPX is outlined in Algorithm 1. In the next subsections we will detail each of these steps.
3.1 Chordal Graphs
If the graph is chordal, then MCS will provide a perfect elimination ordering of the nodes: eliminating the nodes in the reverse of the MCS numbering, the neighbors of each eliminated node among the not-yet-eliminated nodes form a clique. If this is not the case, the graph is not chordal. A fill-in algorithm tests this condition and adds the required edges to make the graph chordal. This algorithm runs in $O(n + e')$ time, where $e'$ is the number of edges in the final chordal graph. Again, in the worst case, the complexity is $O(n^2)$. These two steps, MCS and fill-in, can be applied to each connected component separately or to the complete recombination graph with the same result (Tarjan and Yannakakis, 1984).
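A compact illustration of these two steps, under simplifying assumptions: the sketch below runs maximum cardinality search and then plays the quadratic-time "elimination game" along the reverse MCS order instead of the $O(n + e')$ fill-in algorithm of Tarjan and Yannakakis (1984); the result is still a chordal supergraph, and zero edges are added when the input is already chordal. The example graph is made up.

```python
def maximum_cardinality_search(adjacency):
    """Order the vertices by repeatedly picking the unnumbered vertex with the
    largest number of already-numbered neighbors (ties broken arbitrarily)."""
    remaining = set(adjacency)
    weight = {v: 0 for v in adjacency}
    order = []
    while remaining:
        v = max(remaining, key=lambda u: weight[u])
        order.append(v)
        remaining.remove(v)
        for u in adjacency[v]:
            if u in remaining:
                weight[u] += 1
    return order

def fill_in(adjacency, order):
    """Make the graph chordal by eliminating vertices in reverse MCS order and
    turning the not-yet-eliminated neighborhood of each vertex into a clique."""
    chordal = {v: set(neigh) for v, neigh in adjacency.items()}
    position = {v: i for i, v in enumerate(order)}
    for v in reversed(order):
        earlier = [u for u in chordal[v] if position[u] < position[v]]
        for i, a in enumerate(earlier):
            for b in earlier[i + 1:]:
                chordal[a].add(b)
                chordal[b].add(a)
    return chordal

# a 4-cycle is not chordal: exactly one chord has to be added
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
order = maximum_cardinality_search(cycle)
chordal = fill_in(cycle, order)
print(sum(len(n) for n in chordal.values()) // 2)   # 5 edges (4 original + 1 fill edge)
```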
3.2 Clique Tree
Dynamic programming is used to exhaustively explore all the variables in each clique of the chordal graph. The maximum size of a clique in the chordal graph is an upper bound of its treewidth, and determines the complexity of applying dynamic programming to find the optimal solution. A clique tree of a chordal graph is a tree where the nodes are cliques and, for any variable appearing in two such cliques, the path between the two cliques in the tree is composed of cliques containing the variable (junction tree property). We can also identify a clique tree with a tree-decomposition of the chordal graph (Bodlaender, 2005). This clique tree will determine the order in which the variables can be eliminated.
Starting from the chordal graph provided in the previous steps, we apply an algorithm by Galinier et al. (1995) to find the clique tree (Line 4 in Algorithm 1). This algorithm runs in $O(n + e)$ time and finds all the cliques of the chordal graph. In a chordal graph, the number of cliques cannot exceed the number of nodes of the graph (Galinier et al., 1995). The cliques will be identified with the sets of variables they contain, $C_i$, where $i$ is an integer index that increases as the cliques are discovered by the algorithm. An edge joining two cliques in the clique tree can be labeled with a separator, which is the intersection of the variables in both cliques. A clique $C_i$ is the parent of a clique $C_j$ if they are joined by an edge and $i < j$; the children of a clique are the cliques of which it is the parent. Although separators are associated to edges, in each clique we highlight a particular separator: the separator with its parent clique. If a clique has no parent, this separator is the empty set. The residue of a clique is the set of variables of the clique that are not in the separator with its parent. In each clique, the residue and the separator with the parent form a partition of the variables of the clique. Due to the junction tree property, for each variable, the cliques that contain it form a connected subtree of the clique tree. The variable is in the separator of all the cliques of that subtree except the ancestor of all of them (the one with the lowest index), where it is a member of the residue. Thus, each variable is only in the residue of one clique. In Figure 4 (right), the residues and the separators with the parent for all the cliques of the third connected component of Figure 2 are shown.
After computing the clique tree, all the subfunctions depending on a nonempty set of differing variables must be assigned to one (and only one) clique containing all those differing variables (Line 5 in Algorithm 1). They will be evaluated when that clique is processed. There can be more than one clique to which a subfunction can be assigned. All of them are valid for a correct evaluation, but the clique with fewer variables is preferred to reduce the runtime. We keep track of the set of subfunctions assigned to each clique.
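A small sketch of this assignment step, assuming each subfunction is represented only by the set of differing variables it depends on and each clique by its set of variables (the cliques and subfunctions below are hypothetical): every subfunction goes to the smallest clique that contains all of its differing variables.

```python
def assign_subfunctions(cliques, subfunction_diff_vars):
    """Assign every subfunction to exactly one clique containing all the differing
    variables it depends on, preferring the clique with fewer variables."""
    assigned = {i: [] for i in range(len(cliques))}
    for f_index, diff_vars in enumerate(subfunction_diff_vars):
        if not diff_vars:
            continue                               # no differing variables: nothing to assign
        candidates = [i for i, c in enumerate(cliques) if diff_vars <= c]
        best = min(candidates, key=lambda i: len(cliques[i]))
        assigned[best].append(f_index)
    return assigned

cliques = [{0, 1, 2, 3}, {3, 4}, {4, 5}]           # hypothetical cliques of a clique tree
diff = [{0, 1}, {3}, {3, 4}, {5}, set()]           # differing variables of five subfunctions
print(assign_subfunctions(cliques, diff))          # {0: [0], 1: [1, 2], 2: [3]}
```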
An optimal offspring is found in Algorithm 2 by exhaustively exploring all the variable combinations in each clique and storing the best ones. Before describing the algorithm, we need to introduce some additional notation. The operator $\wedge$ is a bitwise AND and, for a given set of variables, we will also use the set of binary strings of length $n$ that have a zero in all the positions of the variables outside that set.
For each combination of the variables in the separator of a clique (Line 2 in Algorithm 2), all the combinations of the variables in the residue are considered (Line 4 in Algorithm 2) and evaluated over the subfunctions assigned to the clique (Lines 6–7) and over the values stored for its child cliques (Lines 8–9). Then, the best combination for the residue is stored in the variable[] array (Line 11) and its value in the value[] array (Line 12). The evaluation in post-order makes it possible to have the value[] arrays of the child cliques filled when they are needed in Line 9. At this point, the variables in the residue can be obviated (eliminated) in the rest of the computation. When the separator is empty, it takes only one value: the string of all zeroes. This happens in the root of the clique tree, and its effect is to iterate over all the combinations of the variables in the residue of the root to find the best value. The variable[] array will be used in the reconstruction of the offspring solution (Line 7 in Algorithm 1).
Following with our previous example in Figure 4, clique $C_3$ is evaluated first. For each value of the variable in its separator and each value of the variable in its residue, the subfunctions assigned to $C_3$ are evaluated. All these subfunctions depend only on those two variables and on variables with common values in both parents. For each value of the separator variable, the array value[3] is filled with the maximum value of the sum of the subfunctions assigned to $C_3$ over the two values of the residue variable. The array variable[3] stores the value of the residue variable for which the maximum is obtained in each case. After $C_3$ has been evaluated, its residue variable is eliminated. Now clique $C_2$ is evaluated, and value[2] and variable[2] are filled for each combination of the two variables in its separator. In this case the variable to eliminate is the one in the residue of $C_2$, and the evaluation also includes the values stored in value[3] in addition to the subfunctions assigned to $C_2$, because $C_3$ is a child clique of $C_2$. Finally, the root clique $C_1$ is evaluated: all the possible combinations of its four variables are evaluated using the subfunctions assigned to $C_1$ and the array value[2] to find the objective value of an optimal offspring. The offspring itself is built using the variable[] arrays. In particular, variable[1] will store the combination of the four variables of $C_1$ that produces the best offspring. Array variable[2] will provide the value of the residue variable of $C_2$ given the values of the two separator variables provided by variable[1], and array variable[3] will provide the value of the residue variable of $C_3$ given the value provided by variable[2].
The operator described so far is an optimal recombination operator: it finds a best offspring in the largest dynastic potential. The time required to evaluate one clique $C_i$ in Algorithm 2 is proportional to $2^{|C_i|}$ times the number of its child cliques plus the number of subfunctions assigned to it. Summed over all the cliques, the number of children is bounded by $n$ and the number of subfunctions is bounded by $m$, thanks to the $k$-bounded epistasis of $f$. However, the exponential factor $2^{|C_i|}$ is a threat to the efficiency of the algorithm. In the worst case, a clique $C_i$ can contain all the variables and the factor would be $2^n$.
3.3 Limiting the Complexity
In order to avoid the exponential runtime of DPX, we propose to limit the exploration in Lines 2 and 4 of Algorithm 2. Instead of iterating over all the possible combinations for all the variables in the separators and the residues, we fix a bound, $\beta$, on the number of variables that will be exhaustively explored. The remaining variables will jointly take only two values, each one coming from one of the parents. In a separator or residue with more than $\beta$ variables, we still have to decide which variables are exhaustively explored and which ones will be taken from one parent. One important constraint in this decision is that, once we decide that two variables will be taken from the same parent, this should also happen in the other cliques where the two variables appear. We use a disjoint-set data structure (Tarjan, 1975) to keep track of the variables that must be taken from the same parent. In each clique, the variables that are articulation points in the VIG are added first to the list of variables to be fully explored. The motivation for this can be found in Section 3.4. The other variables are added in an arbitrary order (implementation dependent) to the list of variables to fully explore. We defer to future work a more in-depth analysis of the strategies to decide which variables are fully explored in each clique.
Let us illustrate this with our example of Figures 2 and 4, where we set $\beta = 2$. This does not affect the evaluation of cliques $C_3$ and $C_2$, because in both cases the separators and the residues have cardinality less than or equal to $\beta$, so all their variables will be fully explored. However, once cliques $C_3$ and $C_2$ (in that order) have been evaluated, we need to evaluate the root clique $C_1$, which contains four variables. For the evaluation of $C_1$, two of its variables are fully enumerated (the four combinations of values for them are considered) and the other two variables are taken from the same parent, so only two combinations are considered for them: 00 (red parent) and 11 (blue parent). In total, only $4 \times 2 = 8$ combinations of values for the variables in this clique are explored, instead of the $2^4 = 16$ possible combinations.
This reduces the exponential part of the complexity of Algorithm 2 to $O(4^{\beta})$. Since $\beta$ is a predefined constant parameter decided by the user, the exponential factor turns into a constant. The operator is no longer an optimal recombination operator, in general. In the cases where all the separators and residues contain at most $\beta$ variables, DPX will still return an optimal offspring.
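A sketch of the limited enumeration for a single separator or residue (ignoring the cross-clique bookkeeping done with the disjoint-set structure and the preference for articulation points): at most $\beta$ variables are enumerated exhaustively and the remaining ones are taken together from a single parent, reproducing the 8-instead-of-16 count of the example above. The variable names are placeholders.

```python
from itertools import product

def limited_combinations(variables, beta):
    """Assignments explored for a group of differing variables: the first `beta`
    variables are enumerated exhaustively, the rest are taken jointly from one
    parent (0 = red parent, 1 = blue parent)."""
    explored, grouped = variables[:beta], variables[beta:]
    for values in product((0, 1), repeat=len(explored)):
        assignment = dict(zip(explored, values))
        if grouped:
            for parent in (0, 1):
                yield {**assignment, **{v: parent for v in grouped}}
        else:
            yield assignment

combos = list(limited_combinations(["v1", "v2", "v3", "v4"], beta=2))
print(len(combos))    # 8 combinations instead of 2 ** 4 = 16
```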
Given a function in the form of Equation (1) with $m$ subfunctions, the complexity of DPX with a constant bound $\beta$ for the number of exhaustively explored variables is $O(4^{\beta}(n+m) + n^2)$.
In many cases, the number of subfunctions is $n$ or $O(n)$. In these cases, the complexity of DPX reduces to $O(n^2)$. But the complexity can even reduce to $O(n)$ in some cases. In particular, when all the connected components of the recombination graph are paths or have a number of variables bounded by a constant, the number of edges in the original and the chordal graph is $O(n)$ and the complexity of DPX inherits this linear behavior. This is the case for the recombination graph shown in Figure 3 for a real SAT instance, so this linear time complexity is not unusual, even in real and hard instances.
3.4 Theoretical Comparison with PX and APX
DPX is not worse than PX since, in the worst case, it will pick the variables for each connected component of the recombination graph from one of the parent solutions (which is what PX does). In other words, if $\beta = 0$ and there is only one clique in each connected component of the recombination graph (worst case), DPX and PX behave the same and produce offspring with the same quality. We wonder, however, if this also happens with APX. If all the cliques in the chordal graph derived from the recombination graph contain at most $\beta$ variables, DPX cannot be worse than any recombination operator with the property of gene transmission and, in particular, it cannot be worse than APX. Otherwise, if the limit in the exploration explained in Section 3.3 is applied, it could happen that articulation points are not explored as they are in APX. One possible threat to the exploration of articulation points in DPX is that they could disappear after making the graph chordal. Fortunately, to make a graph chordal, the fill-in procedure only adds edges joining vertices in a cycle and, thus, it keeps the articulation points. We provide formal proofs in the following.
Lemma 1: The fill-in procedure adds edges joining vertices in a cycle of the original graph.
Proof: Let us assume that edge $\{u, v\}$ is added to the graph by the fill-in procedure, and let $\alpha(u)$ and $\alpha(v)$ be the numbers assigned by maximum cardinality search to nodes $u$ and $v$ (recall that MCS assigns the numbers in decreasing order, so nodes numbered earlier have higher numbers). Let us assume without loss of generality that $\alpha(u) < \alpha(v)$. The definition of fill-in (Tarjan and Yannakakis, 1984) implies that there is a path between $u$ and $v$ where all the intermediate nodes have a number lower than $\alpha(u)$. On the other hand, during the application of maximum cardinality search the set of numbered nodes forms a connected subgraph. This implies that at the moment in which $u$ was numbered there existed a path between $u$ and $v$ whose intermediate nodes all have numbers higher than $\alpha(u)$. As a consequence, two non-overlapping paths exist between $u$ and $v$ in the original graph and they form a cycle.
Articulation points of a graph are kept after the fill-in procedure.
According to Lemma 1, all the edges added by the fill-in procedure join vertices in a cycle of the original graph. This means that the edges are added to bi-connected components of the graph, and never join vertices in two different bi-connected components. Adding edges to a bi-connected component never removes articulation points and the result follows.
3.5 Generalization of DPX
Although this article focuses on Mk Landscapes, defined over binary variables, DPX can also be applied as is when the variables take their values in a finite set different from the binary set. In this case, one can imagine that 0 represents the value of a differing variable in the red parent, and 1 the value of the same variable in the blue parent. All the results, including runtime guarantees, are the same. The only difference is that the offspring of DPX is not optimal, in general, in the smallest hyperplane containing the parent solutions, because an optimal solution could have values not present in the parents. Even in this case, DPX can be modified to keep the same runtime and provide an optimal solution in the mentioned hyperplane, at the only cost of violating the gene transmission property.
4 Experiments
This section will focus on the experimental evaluation of DPX in comparison with other previous crossover operators. In particular, we want to answer the following two research questions:
RQ1: How does DPX perform compared with other crossover operators in terms of runtime and quality of offspring?
RQ2: How does DPX perform when included in a search algorithm for solving NP-hard pseudo-Boolean optimization problems?
Regarding the other operators in the comparison, we include PX and APX because they are gray-box crossover operators using the VIG and we want to check the claims made in Section 3.4 that relate DPX with these two operators. We also include in this comparison two other operators whose dynastic potential has the same size as that of DPX ($2^{d(x,y)}$): uniform crossover and network crossover. In uniform crossover (UX), each variable is taken from one of the two parents with probability 0.5. Network crossover (NX) (Hauschild and Pelikan, 2010) uses the learned linkages among the variables to select groups of variables from one parent such that variables from the same connected component in the linkage graph are selected first. In our case, we have complete knowledge of the linkage graph: the variable interaction graph. The VIG is used in our implementation of network crossover: variables are selected using randomized breadth-first search in the VIG, starting from a random variable, until half of the variables are selected. Then, the group of selected variables is taken from one of the parents and inserted into the other to form the offspring.
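A possible implementation of NX following the description above (a sketch, not the authors' code): the selection uses a randomized breadth-first search over the VIG starting from a random variable and, in case the VIG is disconnected, restarts from a random unvisited variable until half of the variables are selected. The exact stopping rule and the handling of disconnected graphs are assumptions.

```python
import random
from collections import deque

def network_crossover(parent_a, parent_b, adjacency, rng=random):
    """Select about half of the variables with a randomized BFS over the variable
    interaction graph; take them from parent_a and the rest from parent_b."""
    n = len(parent_a)
    target = n // 2
    selected, seen = set(), set()
    queue = deque()
    while len(selected) < target:
        if not queue:                      # start (or restart) from a random unvisited variable
            remaining = [v for v in range(n) if v not in seen]
            start = rng.choice(remaining)
            seen.add(start)
            queue.append(start)
        v = queue.popleft()
        selected.add(v)
        neighbors = [u for u in adjacency[v] if u not in seen]
        rng.shuffle(neighbors)             # randomized visiting order
        for u in neighbors:
            seen.add(u)
            queue.append(u)
    return [parent_a[i] if i in selected else parent_b[i] for i in range(n)]

vig = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}      # toy VIG with four variables
print(network_crossover([0, 0, 0, 0], [1, 1, 1, 1], vig))
```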
Two different kinds of NP-hard problems are used in the experiments: NKQ Landscapes, an academic benchmark that allows us to parameterize the density of edges in the VIG by changing $K$, and MAX-SAT instances from the MAX-SAT Evaluation 2017. Random NKQ ("Quantized" NK) landscapes (Newman and Engelhardt, 1998) can be seen as Mk Landscapes with one subfunction per variable ($m = n$). Each subfunction $f_i$ depends on variable $x_i$ and $K$ other random variables, and the codomain of each subfunction is the set $\{0, 1, \ldots, Q-1\}$, where $Q$ is a positive integer. Thus, each subfunction depends on exactly $K+1$ variables. The values of the subfunctions are randomly generated; that is, for each subfunction and each combination of values of the variables in the subfunction, an integer value in the interval $[0, Q-1]$ is randomly selected following a uniform distribution. Random NKQ landscapes are NP-hard when $K \geq 2$. The parameter $K$ determines the highest order of the nonzero Walsh coefficients in its Walsh decomposition, which is a measure of the "ruggedness" of the landscape (Hordijk and Stadler, 1998). Regarding MAX-SAT, we used the same instances as Chicano et al. (2018) to allow the comparison with APX. They are 160 unweighted and 132 weighted instances.
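A minimal generator and evaluator for this benchmark, under the definition above (one subfunction per variable, each depending on the variable itself plus $K$ other variables chosen uniformly at random, with values drawn uniformly from $\{0, \ldots, Q-1\}$); the parameter values in the usage lines are arbitrary.

```python
import random

def random_nkq(n, k, q, rng=random):
    """Random NKQ Landscape: subfunction i depends on variable i and k other
    random variables; its values are integers drawn uniformly from [0, q-1]."""
    masks, tables = [], []
    for i in range(n):
        others = rng.sample([j for j in range(n) if j != i], k)
        masks.append([i] + others)
        tables.append([rng.randrange(q) for _ in range(2 ** (k + 1))])
    return masks, tables

def evaluate(masks, tables, x):
    total = 0
    for variables, table in zip(masks, tables):
        index = 0
        for v in variables:          # pack the k+1 argument bits into a table index
            index = (index << 1) | x[v]
        total += table[index]
    return total

masks, tables = random_nkq(n=20, k=3, q=64)
x = [random.randint(0, 1) for _ in range(20)]
print(evaluate(masks, tables, x))
```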
The computer used for the experiments is a multicore machine with four Intel Xeon CPUs (E5-2680 v3) at 2.5 GHz, for a total of 48 cores, 192 GB of memory, and Ubuntu 16.04 LTS. The source code of all the algorithms and operators used in the experiments can be found on GitHub, including a link to a Docker image to ease the reproduction of the experimental results.
4.1 Crossover Comparison
(Rows are grouped in four blocks of six, one block per value of $K$ from 2 to 5; the first column is the Hamming distance between the parents as a percentage of the number of variables.)
% diff. | UX (μs) | NX (ms) | PX (ms) | APX (ms) | DPX β=0 (ms) | DPX β=1 (ms) | DPX β=2 (ms) | DPX β=3 (ms) | DPX β=4 (ms) | DPX β=5 (ms)
---|---|---|---|---|---|---|---|---|---|---
1 | 73 | 1.2 | 0.5 | 1.0 | 0.8 | 0.9 | 0.8 | 0.8 | 0.8 | 0.9 |
2 | 95 | 2.3 | 0.9 | 2.5 | 2.1 | 2.3 | 2.4 | 2.0 | 2.1 | 1.9 |
4 | 93 | 2.3 | 1.4 | 4.5 | 2.9 | 2.8 | 2.9 | 2.5 | 2.5 | 2.4 |
8 | 120 | 2.3 | 2.2 | 7.2 | 6.3 | 6.9 | 6.3 | 5.8 | 5.8 | 5.7 |
16 | 113 | 1.2 | 2.8 | 7.1 | 5.5 | 5.9 | 5.8 | 5.8 | 5.4 | 5.3 |
32 | 154 | 1.7 | 9.3 | 12.7 | 22.1 | 22.8 | 23.5 | 23.3 | 24.6 | 23.3 |
1 | 92 | 1.7 | 0.6 | 1.5 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9 | 1.0 |
2 | 87 | 2.4 | 1.2 | 3.5 | 1.8 | 1.6 | 2.0 | 1.7 | 1.6 | 1.7 |
4 | 97 | 2.8 | 1.9 | 6.3 | 3.1 | 3.1 | 3.1 | 2.7 | 2.4 | 2.7 |
8 | 125 | 2.8 | 3.0 | 8.3 | 4.7 | 4.9 | 5.7 | 5.6 | 4.5 | 4.9 |
16 | 120 | 1.9 | 5.1 | 9.1 | 9.7 | 10.5 | 10.1 | 10.9 | 11.0 | 11.3 |
32 | 143 | 2.2 | 5.9 | 16.0 | 251.4 | 257.5 | 256.4 | 273.1 | 267.5 | 263.8 |
1 | 79 | 2.8 | 0.9 | 1.9 | 1.1 | 1.1 | 1.3 | 1.4 | 1.0 | 1.2 |
2 | 96 | 3.8 | 1.7 | 4.4 | 1.7 | 1.9 | 2.1 | 1.8 | 1.5 | 1.6 |
4 | 93 | 3.4 | 2.2 | 7.1 | 3.2 | 3.3 | 3.5 | 3.6 | 3.2 | 3.5 |
8 | 99 | 3.5 | 4.8 | 11.6 | 5.6 | 5.7 | 5.4 | 6.0 | 6.8 | 7.0 |
16 | 116 | 2.7 | 3.7 | 11.2 | 31.7 | 31.9 | 32.8 | 33.3 | 34.0 | 36.2 |
32 | 143 | 2.8 | 5.2 | 18.0 | 596.7 | 601.9 | 587.8 | 598.9 | 683.0 | 692.4 |
1 | 68 | 3.2 | 0.9 | 2.6 | 1.4 | 1.5 | 1.5 | 1.4 | 1.4 | 1.3 |
2 | 82 | 3.7 | 1.8 | 5.2 | 2.1 | 2.3 | 2.3 | 2.1 | 2.2 | 2.0 |
4 | 85 | 4.2 | 3.5 | 8.7 | 3.6 | 3.9 | 3.8 | 3.9 | 4.0 | 4.1 |
8 | 119 | 4.3 | 5.4 | 13.3 | 8.0 | 8.1 | 8.2 | 9.5 | 10.9 | 9.9 |
16 | 113 | 3.0 | 4.1 | 12.8 | 90.7 | 83.0 | 103.0 | 92.2 | 101.3 | 107.5 |
32 | 139 | 3.7 | 5.8 | 19.4 | 1 000.5 | 1 034.0 | 1 041.1 | 1 020.3 | 1 089.9 | 1 021.7 |
(Same row layout as Table 1.)
% diff. | UX (‰) | NX (‰) | PX (‰) | APX (‰) | DPX β=0 (‰) | DPX β=1 (‰) | DPX β=2 (‰) | DPX β=3 (‰) | DPX β=4 (‰) | DPX β=5 (‰)
---|---|---|---|---|---|---|---|---|---|---
1 | 0.58 | 0.55 | 4.92 | 4.93 | 4.92 | 5.04 | 5.04 | 5.04 | 5.04 | 5.04 |
2 | 0.79 | 0.81 | 9.89 | 9.99 | 9.95 | 10.38 | 10.39 | 10.39 | 10.39 | 10.39 |
4 | 1.13 | 1.11 | 19.28 | 19.96 | 19.70 | 21.21 | 21.23 | 21.23 | 21.23 | 21.23 |
8 | 1.56 | 1.54 | 35.04 | 39.19 | 38.15 | 42.80 | 42.92 | 42.92 | 42.92 | 42.92 |
16 | 2.08 | 2.07 | 53.43 | 70.87 | 75.03 | 85.72 | 86.21 | 86.21 | 86.21 | 86.21 |
32 | 2.72 | 2.71 | 34.41 | 42.09 | 108.86 | 123.98 | 134.38 | 137.29 | 138.78 | 139.76 |
1 | 0.64 | 0.65 | 5.57 | 5.62 | 5.60 | 5.84 | 5.84 | 5.84 | 5.84 | 5.84 |
2 | 0.92 | 0.91 | 10.93 | 11.33 | 11.18 | 12.02 | 12.03 | 12.03 | 12.03 | 12.03 |
4 | 1.29 | 1.26 | 20.10 | 22.50 | 21.95 | 24.50 | 24.57 | 24.57 | 24.57 | 24.57 |
8 | 1.72 | 1.77 | 30.80 | 40.40 | 43.67 | 49.37 | 49.66 | 49.66 | 49.66 | 49.66 |
16 | 2.39 | 2.37 | 21.30 | 25.45 | 63.04 | 70.96 | 77.15 | 79.14 | 80.21 | 80.96 |
32 | 2.85 | 2.85 | 6.55 | 7.38 | 59.15 | 63.68 | 76.87 | 83.34 | 86.36 | 88.23 |
1 | 0.74 | 0.74 | 6.02 | 6.18 | 6.12 | 6.51 | 6.52 | 6.52 | 6.52 | 6.52 |
2 | 1.04 | 1.04 | 11.47 | 12.48 | 12.20 | 13.46 | 13.48 | 13.48 | 13.48 | 13.48 |
4 | 1.42 | 1.40 | 18.98 | 23.81 | 24.25 | 27.38 | 27.50 | 27.50 | 27.50 | 27.50 |
8 | 1.92 | 1.95 | 17.30 | 21.78 | 41.39 | 46.48 | 49.29 | 50.46 | 51.32 | 52.04 |
16 | 2.47 | 2.55 | 6.92 | 7.93 | 41.63 | 45.38 | 53.90 | 57.24 | 58.91 | 59.98 |
32 | 3.15 | 3.13 | 1.35 | 1.95 | 40.98 | 42.14 | 46.88 | 53.52 | 58.04 | 60.35 |
1 | 0.79 | 0.78 | 6.38 | 6.72 | 6.61 | 7.18 | 7.18 | 7.18 | 7.18 | 7.18 |
2 | 1.10 | 1.10 | 11.46 | 13.40 | 13.17 | 14.77 | 14.81 | 14.81 | 14.81 | 14.81 |
4 | 1.53 | 1.56 | 15.06 | 20.38 | 26.44 | 29.58 | 30.06 | 30.14 | 30.16 | 30.17 |
8 | 2.07 | 2.06 | 8.07 | 9.56 | 31.18 | 34.54 | 39.26 | 41.02 | 41.98 | 42.67 |
16 | 2.68 | 2.66 | 2.19 | 2.90 | 30.14 | 31.61 | 37.08 | 41.51 | 43.48 | 44.83 |
32 | 3.15 | 3.13 | 0.28 | 0.77 | 32.42 | 32.82 | 34.18 | 36.64 | 40.31 | 44.05 |
(Values are the binary logarithm of the number of explored solutions; same row layout as Table 1.)
% diff. | PX | APX | DPX β=0 | DPX β=1 | DPX β=2 | DPX β=3 | DPX β=4 | DPX β=5
---|---|---|---|---|---|---|---|---
1 | 97.1 | 97.3 | 97.2 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 188.1 | 190.3 | 189.3 | 199.9 | 200.0 | 200.0 | 200.0 | 200.0 |
4 | 352.9 | 368.1 | 362.0 | 399.5 | 400.0 | 400.0 | 400.0 | 400.0 |
8 | 613.5 | 703.3 | 679.7 | 796.8 | 800.0 | 800.0 | 800.0 | 800.0 |
16 | 873.6 | 1 220.6 | 1 311.2 | 1 586.5 | 1 600.0 | 1 600.0 | 1 600.0 | 1 600.0 |
32 | 660.7 | 828.3 | 2 055.6 | 2 399.2 | 2 586.9 | 2 636.5 | 2 661.3 | 2 677.4 |
1 | 94.1 | 95.2 | 94.7 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 176.5 | 184.3 | 181.2 | 199.8 | 200.0 | 200.0 | 200.0 | 200.0 |
4 | 306.6 | 352.3 | 341.2 | 398.4 | 400.0 | 400.0 | 400.0 | 400.0 |
8 | 437.3 | 602.5 | 663.0 | 793.2 | 799.9 | 800.0 | 800.0 | 800.0 |
16 | 351.5 | 426.4 | 1 019.5 | 1 174.7 | 1 271.0 | 1 300.4 | 1 316.0 | 1 326.9 |
32 | 141.5 | 155.1 | 1 099.0 | 1 179.1 | 1 395.2 | 1 499.5 | 1 547.6 | 1 576.8 |
1 | 90.2 | 93.0 | 91.9 | 99.9 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 161.2 | 179.0 | 173.7 | 199.5 | 200.0 | 200.0 | 200.0 | 200.0 |
4 | 247.0 | 324.7 | 332.8 | 397.4 | 400.0 | 400.0 | 400.0 | 400.0 |
8 | 238.2 | 305.8 | 580.8 | 674.0 | 713.9 | 729.6 | 740.9 | 750.4 |
16 | 119.7 | 134.1 | 651.3 | 710.3 | 831.5 | 878.1 | 901.3 | 915.9 |
32 | 31.1 | 39.5 | 719.8 | 737.4 | 812.0 | 914.8 | 983.3 | 1 018.2 |
1 | 85.4 | 91.1 | 89.1 | 99.9 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 142.0 | 172.4 | 168.5 | 199.2 | 200.0 | 200.0 | 200.0 | 200.0 |
4 | 175.4 | 246.1 | 332.2 | 390.6 | 398.2 | 399.4 | 399.8 | 399.9 |
8 | 113.2 | 132.7 | 420.5 | 470.8 | 530.8 | 552.6 | 564.4 | 572.8 |
16 | 38.9 | 47.6 | 449.0 | 469.0 | 542.8 | 601.9 | 627.7 | 645.3 |
32 | 7.5 | 13.7 | 534.0 | 539.3 | 559.3 | 595.7 | 649.7 | 703.5 |
(Values are the percentage of runs in which the full dynastic potential is explored; same row layout as Table 1.)
% diff. | PX (%) | APX (%) | DPX β=0 (%) | DPX β=1 (%) | DPX β=2 (%) | DPX β=3 (%) | DPX β=4 (%) | DPX β=5 (%)
---|---|---|---|---|---|---|---|---
1 | 5.61 | 6.45 | 5.61 | 99.07 | 100.00 | 100.00 | 100.00 | 100.00 |
2 | 0.00 | 0.00 | 0.00 | 93.17 | 100.00 | 100.00 | 100.00 | 100.00 |
4 | 0.00 | 0.00 | 0.00 | 60.73 | 100.00 | 100.00 | 100.00 | 100.00 |
8 | 0.00 | 0.00 | 0.00 | 4.11 | 99.98 | 100.00 | 100.00 | 100.00 |
16 | 0.00 | 0.00 | 0.00 | 0.00 | 99.67 | 99.98 | 100.00 | 100.00 |
32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
1 | 0.27 | 0.47 | 0.27 | 96.46 | 99.99 | 100.00 | 100.00 | 100.00 |
2 | 0.00 | 0.00 | 0.00 | 77.88 | 99.94 | 100.00 | 100.00 | 100.00 |
4 | 0.00 | 0.00 | 0.00 | 20.97 | 98.39 | 100.00 | 100.00 | 100.00 |
8 | 0.00 | 0.00 | 0.00 | 0.22 | 88.87 | 99.93 | 99.99 | 100.00 |
16 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
1 | 0.00 | 0.02 | 0.00 | 92.11 | 99.95 | 100.00 | 100.00 | 100.00 |
2 | 0.00 | 0.00 | 0.00 | 59.18 | 99.54 | 100.00 | 100.00 | 100.00 |
4 | 0.00 | 0.00 | 0.00 | 8.21 | 95.38 | 99.99 | 100.00 | 100.00 |
8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.27 | 1.00 | 2.20 |
16 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
1 | 0.00 | 0.00 | 0.00 | 86.88 | 99.90 | 99.99 | 100.00 | 100.00 |
2 | 0.00 | 0.00 | 0.00 | 43.67 | 98.89 | 99.98 | 100.00 | 100.00 |
4 | 0.00 | 0.00 | 0.00 | 2.96 | 72.71 | 90.86 | 96.30 | 98.58 |
8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
16 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Regarding the runtime, we observe some clear trends that we comment on in the following. Uniform crossover is the fastest operator (less than 200 μs in all the cases). It randomly selects one parent for each differing bit and this can be done very fast. The other operators are based on the VIG and they require more time to explore it and compute the offspring. Their runtime is best measured in milliseconds. NX, PX, and APX have runtimes between less than one millisecond and 20 ms. DPX is clearly the slowest crossover operator when the parent solutions differ in 32% of the bits (3,200 variables), reaching 1 second of computation for the instances with the largest $K$. For lower values of the Hamming distance, APX is sometimes slower than DPX. We also observe an increase in runtime with the Hamming distance between the parents, which can be explained because the recombination graph to explore is larger. It is interesting to note that no exponential increase is observed in the runtime when $\beta$ increases. To explain this we have to observe Tables 3 and 4, where we can see that DPX is able to completely explore the dynastic potential when the Hamming distance is low, and the logarithm of the number of solutions explored increases very slowly with $\beta$ because it is already near the maximum possible. High runtime is one of the drawbacks of DPX, and the other one is memory consumption. For the instances with 10,000 variables, 5 GB of memory was enough in most configurations, but we ran some experiments in which DPX ended with an "Out of Memory" error. In these cases, increasing the memory could help, but the amount of memory required is higher than that required by PX and APX, and much higher than the memory required by UX and NX.
In Table 3, we can observe how DPX explores more solutions than PX and APX when $\beta \geq 1$. We also observe how APX can outperform DPX in the number of explored solutions when $\beta = 0$, as we illustrated with an example in Section 3.4. The latter happens when the parents differ in at most 8% of the bits for $K = 2$, in at most 4% for $K = 3$, and in at most 2% for $K = 4$ and $K = 5$. DPX always explores more solutions than PX independently of the value of $\beta$, as the theory predicts. As the Hamming distance between the parents grows, the dynastic potential increases and so does the logarithm of the number of solutions explored by DPX. In fact, in many cases we observe that this logarithm reaches in DPX the maximum possible value, the Hamming distance between the parents. This is also reflected in Table 4, where we show the percentage of runs where the full dynastic potential is explored by the operators or, equivalently, the percentage of runs in which the logarithm of the number of explored solutions equals the Hamming distance. In the case of PX and APX, an increase in the Hamming distance does not always imply an increase in the number of explored solutions: there is a value of the distance for which the logarithm of explored solutions reaches a maximum and then decreases as the distance keeps growing. The logarithm of the number of explored solutions in these two operators is proportional to the number of connected components in the recombination graph. Starting from an empty recombination graph, the number of connected components increases as new random variables are added, and this explains why the logarithm of explored solutions in PX and APX increases with the Hamming distance when it is low. At some critical value of the distance, the number of connected components starts to decrease because the new variables in the recombination graph join existing connected components, instead of generating new ones. The exact value of the distance at which this happens has been estimated for the adjacent NKQ Landscapes (Chicano et al., 2017). It is difficult to compute this value for the random NKQ Landscapes that we use here, but the critical value must be a decreasing function of $K$. This dependence of the critical value on $K$ can also be observed in Table 3: the value of the distance at which the number of explored solutions is maximum decreases from 16% to 4% when $K$ increases from 2 to 5.
Regarding the fraction of runs in which a full dynastic potential exploration is achieved, PX and DPX with $\beta = 0$ behave the same and achieve full exploration for some pairs of parents only when the parents differ in 1% of the bits. APX is slightly better, and DPX with $\beta \geq 1$ is the best, behaving like an optimal recombination operator in most of the cases for low values of $K$ and low Hamming distances between the parents. The fraction of runs with full dynastic potential exploration decreases when $K$ and the Hamming distance increase, due to the higher number of edges in the variable interaction graph, which induces larger cliques.
4.2 Crossover in Search Algorithms
In this section, we want to answer RQ2: how does DPX perform when it is included in a search algorithm compared with APX, PX, NX, and UX? From Section 4.1, we conclude that DPX is better than PX, APX, UX, and NX in terms of quality of the solution due to its higher exploration capability. This feature is not necessarily good when the operator is included in a search algorithm, because it can produce premature convergence. DPX is also slower, in general, than the other crossover operators in our experimental setting, and this could slow down the search. In order to answer the research question, we included the crossover operators in two metaheuristic algorithms, each of them from a different family: deterministic recombination and iterated local search (DRILS), a trajectory-based metaheuristic, and an evolutionary algorithm (EA), a population-based metaheuristic.
DRILS (Chicano et al., 2017) uses a first-improving-move hill climber to reach a local optimum. The neighborhood in the hill climber is the set of solutions at Hamming distance one from the current solution. Then, it perturbs the solution by randomly flipping $\alpha n$ bits, where $\alpha$ is the so-called perturbation factor, the only parameter of DRILS to be tuned. It then applies local search to the new solution to reach another local optimum and applies crossover to the last two local optima, generating a new solution that is improved further with the hill climber. This process is repeated until the stop condition is satisfied. The pseudocode of DRILS can be found in Algorithm 3.
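A compact sketch of the DRILS loop as described above; the hill climber, the crossover, and the fitness function are passed in as parameters, and the acceptance rule used here (continue from the fitter of the child and the new local optimum) is a simplification of the one in Algorithm 3. The toy usage with ONE-MAX, an identity "hill climber", and uniform crossover is only to make the sketch runnable.

```python
import random
import time

def drils(n, evaluate, hill_climber, crossover, alpha, time_budget, rng=random):
    """Hill climb to a local optimum, perturb alpha*n random bits, hill climb again,
    recombine the last two local optima and improve the offspring."""
    deadline = time.time() + time_budget
    current = hill_climber([rng.randint(0, 1) for _ in range(n)])
    best = current
    while time.time() < deadline:
        perturbed = current[:]
        for i in rng.sample(range(n), max(1, int(alpha * n))):
            perturbed[i] = 1 - perturbed[i]                  # perturbation step
        other = hill_climber(perturbed)
        child = hill_climber(crossover(current, other))
        current = max(child, other, key=evaluate)            # simplified acceptance
        best = max(best, current, key=evaluate)
    return best

# toy usage: ONE-MAX, identity "hill climber", uniform crossover
ux = lambda a, b: [random.choice(pair) for pair in zip(a, b)]
solution = drils(n=50, evaluate=sum, hill_climber=lambda s: s, crossover=ux,
                 alpha=0.2, time_budget=0.1)
print(sum(solution))
```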
The evolutionary algorithm (EA) that we use in our empirical evaluation is a steady-state genetic algorithm where two parents are selected and recombined to produce an offspring that is then mutated. The mutated solution replaces the worst solution in the population only if its fitness is strictly higher. The mutation operator is bit flip, where each bit is independently flipped with a given mutation probability. Three selection operators are considered: binary tournament, rank selection, and roulette wheel selection. The pseudocode of the EA is presented in Algorithm 4.
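A sketch of this steady-state EA under the same caveats (only binary tournament selection is shown; rank and roulette wheel selection would replace the `tournament` helper). The ONE-MAX usage at the end is illustrative.

```python
import random
import time

def steady_state_ea(n, evaluate, crossover, pop_size, mutation_prob, time_budget,
                    rng=random):
    """Select two parents, recombine, bit-flip mutate, and replace the worst
    individual only if the mutated offspring is strictly better."""
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    fitness = [evaluate(ind) for ind in population]
    deadline = time.time() + time_budget

    def tournament():
        a, b = rng.randrange(pop_size), rng.randrange(pop_size)
        return population[a] if fitness[a] >= fitness[b] else population[b]

    while time.time() < deadline:
        child = crossover(tournament(), tournament())
        child = [1 - bit if rng.random() < mutation_prob else bit for bit in child]
        child_fitness = evaluate(child)
        worst = min(range(pop_size), key=lambda i: fitness[i])
        if child_fitness > fitness[worst]:               # strict improvement required
            population[worst], fitness[worst] = child, child_fitness
    return max(population, key=evaluate)

# toy usage: ONE-MAX with uniform crossover
ux = lambda a, b: [random.choice(pair) for pair in zip(a, b)]
print(sum(steady_state_ea(50, sum, ux, pop_size=20, mutation_prob=0.02, time_budget=0.1)))
```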
We can also consider applying DPX alone to solve the pseudo-Boolean optimization problems. That is, if we use two parent solutions, $x$ and $y$, that are complementary ($y_i = 1 - x_i$ for all $i$), and we remove the bound on the exploration (or set $\beta$ large enough), DPX will return the global optimum. This approach should be useful in families of Mk Landscapes where the treewidth of the VIG is bounded by a small constant like, for example, the adjacent model of NK Landscapes, which can be solved in linear time (Wright et al., 2000). In these cases, the cliques found in the clique tree can be fully explored due to the low number of variables they contain. However, we do not expect it to work well for NP-hard problems, like the random model of NKQ Landscapes that we use here, or the general MAX-SAT instances, which we also use in this section. The reason is that, in these cases, the cardinality of the cliques usually grows with the number of variables, and the complexity of DPX (without the bound $\beta$) can be exponential in $n$. This approach was used by Ochoa and Chicano (2019) to find globally optimal solutions in randomly generated MAX-SAT instances. It was found that DPX required a long time to exhaustively explore the search space, and the authors decomposed the search into different parallel non-overlapping searches by fixing the values of some variables in the parent solutions before applying DPX. For the sake of completeness, however, we analyze the execution of DPX alone in Section 4.2.4.
For the experiments in this section, we use NKQ Landscapes with two different parameter configurations and MAX-SAT instances (unweighted and weighted). With this diverse setting for the experiments (two metaheuristics belonging to different families and two pseudo-Boolean problems with two categories of instances each), we want to test DPX in different scenarios to identify when DPX is useful for pseudo-Boolean optimization and when it is not.
4.2.1 Parameter Settings
We used an automatic tuning tool, irace (López-Ibáñez et al., 2016), to help us find a good configuration for each algorithm in the different scenarios. The parameters to be tuned are $\beta$ in DPX, $\alpha$ in DRILS, and the mutation probability, the selection operator, and the population size in the EA. The ranges of values for each of them are presented in Table 5. The version of irace used is 3.4.1 and the budget is 1,000 runs of the tuned algorithm. The rest of the parameters of irace were set to their default values. We applied irace independently for each combination of algorithm, crossover operator, and family of instances used. This way we can compare the algorithms using a good configuration in each scenario.
Parameter (DPX) | Domain | Parameter (DRILS) | Domain | Parameter (EA) | Domain
---|---|---|---|---|---
β | [0-5] | α | [0, 0.5] | mutation prob. | [0, 0.5]
 | | | | popsize | [10, 100]
 | | | | selection | (tournament, rank, roulette)
The stop condition in the search algorithms was set to a time limit of 60 seconds (one minute) during the irace tuning and also in the experiments of the following sections. We use runtime as the stop condition because the number of fitness function evaluations is not a good metric for the computational budget in our case, where the gray-box operators take profit from the VIG and the subfunction evaluations. This can be easily observed by comparing the huge difference in runtime of UX and DPX in Table 1. The concrete value for the time limit (60 seconds) was chosen based on our previous experience with these algorithms including gray-box optimization. In previous works and experiments, where we stopped the experiments after 300 seconds, we found that the results are rather stable after 60 seconds, with few exceptions. Using this short time also allows us to perform a larger set of experiments in the same time, obtaining more insight into how the algorithms work.
At the end of the tuning phase, irace provides several configurations that are considered equivalent in performance (irace does not find statistically significant differences among them). In all cases, we took the first of those configurations, shown in Tables 6 and 7 for each combination of algorithm, crossover operator, and set of instances. These are the parameters used in the experiments of Sections 4.2.2 and 4.2.3. We did not check the convergence speed of irace and we do not know how far the parameters in Tables 6 and 7 are from the best possible configurations. Thus, we do not try to draw conclusions from the parameter values computed by irace; our goal is just to compare all the algorithms and operators using good configurations, avoiding bias due to parameter settings.
Table 6: Configurations provided by irace for DRILS and EA (with each crossover operator) in NKQ Landscapes.

| DRILS | K = 2 | | K = 5 | |
|---|---|---|---|---|
| | exploration limit | perturbation factor | exploration limit | perturbation factor |
| DPX | 1 | 0.2219 | 3 | 0.0462 |
| APX | | 0.1873 | | 0.0231 |
| PX | | 0.1240 | | 0.0191 |
| NX | | 0.0154 | | 0.0268 |
| UX | | 0.0159 | | 0.0238 |

| EA | K = 2 | | | | K = 5 | | | |
|---|---|---|---|---|---|---|---|---|
| | exploration limit | mutation prob. | selection | popsize | exploration limit | mutation prob. | selection | popsize |
| DPX | 3 | 0.0044 | roulette | 61 | 2 | 0.0080 | rank | 15 |
| APX | | 0.0172 | roulette | 72 | | 0.0002 | roulette | 27 |
| PX | | 0.0084 | rank | 47 | | 0.0034 | rank | 70 |
| NX | | 0.0007 | roulette | 37 | | 0.0008 | rank | 54 |
| UX | | 0.0003 | rank | 41 | | 0.0006 | roulette | 14 |
Table 7: Configurations provided by irace for DRILS and EA (with each crossover operator) in MAX-SAT.

| DRILS | Unweighted | | Weighted | |
|---|---|---|---|---|
| | exploration limit | perturbation factor | exploration limit | perturbation factor |
| DPX | 4 | 0.0582 | 2 | 0.1832 |
| APX | | 0.0941 | | 0.1870 |
| PX | | 0.0482 | | 0.0996 |
| NX | | 0.0299 | | 0.0241 |
| UX | | 0.0571 | | 0.0214 |

| EA | Unweighted | | | | Weighted | | | |
|---|---|---|---|---|---|---|---|---|
| | exploration limit | mutation prob. | selection | popsize | exploration limit | mutation prob. | selection | popsize |
| DPX | 5 | 0.0038 | rank | 18 | 2 | 0.0018 | rank | 52 |
| APX | | 0.0096 | tournament | 19 | | 0.0069 | tournament | 24 |
| PX | | 0.0051 | tournament | 27 | | 0.0086 | rank | 20 |
| NX | | 0.0047 | rank | 18 | | 0.0020 | rank | 27 |
| UX | | 0.0019 | rank | 18 | | 0.0012 | tournament | 78 |
In order to reduce the bias due to the stochastic nature of the algorithms, each was run ten times per instance, and average results are presented in the next sections. The Mann--Whitney test was run on the samples from the ten independent runs to check whether the observed difference in performance between the algorithms on each instance is statistically significant at the 0.05 significance level.
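A minimal sketch of this per-instance test, using SciPy (the arrays of ten quality values per algorithm are assumed to be available), is:

```python
from scipy.stats import mannwhitneyu

def significantly_different(qualities_a, qualities_b, alpha=0.05):
    """Two-sided Mann-Whitney U test on the qualities of ten runs per algorithm."""
    _, p_value = mannwhitneyu(qualities_a, qualities_b, alternative="two-sided")
    return p_value < alpha
```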
4.2.2 Results for NKQ Landscapes
In this section, we analyze the results obtained for NKQ Landscapes. We do not know the fitness value of the global optimal solutions of the instances. For this reason, for each instance (there are ten instances per value of K), we computed the fitness of the best solution found in any run by any algorithm+crossover combination, and we use it to normalize the fitness of the best solutions found by the algorithms in that instance. Thus, the quality of the best solution found by an algorithm in an instance is its fitness divided by the best known fitness for that instance. The benefit of this is that the quality of a solution is a real number between 0 and 1 that measures how near its fitness is to the best known fitness (in this set of experiments). This also allows us to aggregate quality values from different instances because they are normalized to the same range. Higher values of quality are better.
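A minimal sketch of this normalization, assuming maximization and non-negative fitness values as described above, is:

```python
def quality(fitness, best_known_fitness):
    """Normalized quality in [0, 1]: ratio of a solution's fitness to the
    best fitness found for the instance by any algorithm+crossover run."""
    return fitness / best_known_fitness

# Arbitrary illustrative values (not taken from the experiments):
print(quality(9_910.0, 10_000.0))  # 0.991
```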
Table 8: Average quality of the best solutions found by DRILS and EA with each crossover operator in NKQ Landscapes.

| | K = 2 | | K = 5 | |
|---|---|---|---|---|
| | Statistical difference | Quality | Statistical difference | Quality |
| DRILS | | | | |
| DPX | | 0.9997 | | 0.9972 |
| APX | | 0.9995 | | 0.9947 |
| PX | | 0.9990 | | 0.9949 |
| NX | | 0.9786 | | 0.9934 |
| UX | | 0.9790 | | 0.9935 |
| EA | | | | |
| DPX | | 0.9795 | | 0.8132 |
| APX | | 0.9568 | | 0.8890 |
| PX | | 0.9445 | | 0.9085 |
| NX | | 0.8803 | | 0.7811 |
| UX | | 0.9313 | | 0.8407 |
The first important conclusion we obtain from the results in Table 8 is that DRILS performs better with DPX than with any other crossover operator. There is no single NKQ Landscapes instance in our experimental setting where another crossover operator outperforms DPX inside DRILS. There are only a few instances (8 in total) where APX and/or PX show a similar performance. We can observe in Figure 6a that DRILS+DPX obtains the best average quality at any time during the search when K = 2, followed by PX and APX. UX and NX provide the worst average quality in this set of instances. We observe in the figure signs of convergence for all the crossover operators. However, after a careful analysis of the time of last improvement, whose distribution is presented in Figure 7a, we notice that DRILS with DPX, APX, and PX still improves the best solution after 50 seconds in around 50% of the runs, while DRILS with UX and NX seems to stagnate 30 to 40 seconds after the start of the search, and much earlier in some cases. One may wonder whether this time is biased by the different runtimes of the crossover operators. Perhaps the algorithm produces the last improvement near the end of the execution for DPX, APX, and PX, but the previous one was in the middle of the run, earlier than the last improvement of NX and UX. To investigate this, we show in Figure 7b the distribution of the average time between improvements for the last three improvements. This time is far below one second in most of the cases for all the crossover operators, which means that better solutions are produced several times per second on average until the time of last improvement, so there is no bias related to the different crossover runtimes.
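The following sketch shows one way these two statistics could be computed from a run's improvement log (a sorted list of timestamps, in seconds, at which the best solution improved); the interpretation of "the last three improvements" as the gaps between the last three logged improvements is ours.

```python
def last_improvement_stats(improvement_times):
    """Return the time of last improvement and the average gap between
    the last three improvements of a run (requires >= 3 improvements)."""
    last = improvement_times[-1]
    tail = improvement_times[-3:]
    gaps = [b - a for a, b in zip(tail, tail[1:])]
    return last, sum(gaps) / len(gaps)

# Arbitrary illustrative log (seconds):
print(last_improvement_stats([0.4, 1.1, 7.9, 33.0, 52.8, 57.2]))
```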
In the more rugged instances (K = 5), shown in Figure 6b, DRILS+DPX is the best performing algorithm after 20 seconds of computation. Before that time, DRILS+UX provides the best performance. We can explain this with the help of Table 1. UX is the fastest crossover operator and helps to advance the search at the beginning. DPX (as well as PX and APX) is slower and, even if it provides better-quality offspring, it slows down the search, requiring more time to be effective. We have to recall here that DRILS includes a hill climber, which explains why the quality of the best solution is still high (above 0.93) even when a black-box operator like UX is used.
Finally, although it is not our goal to compare search algorithms (only crossover operators), we would like to highlight some observations regarding the average quality of the best solutions found by DRILS and EA. We conclude that DRILS is always better than EA. For K = 2, the highest quality in EA is obtained when DPX is used, and it is only slightly higher than that of DRILS with NX and UX (the worst performing crossover operators in DRILS). For K = 5, the difference is even larger: EA+PX reaches an average quality of 0.9085 (the highest for EA), which is far below any average quality of DRILS, all above 0.9934 (the quality of DRILS+NX). We think that the reason could be the presence of a local search operator in DRILS, while EA is mainly guided by selection and crossover (when PX, APX, or DPX is used).
4.2.3 Results for MAX-SAT
In this section, we analyze the results obtained for MAX-SAT. We also use the quality of the solutions defined in Section 4.2.2 as a normalized measure of quality. Table 9 presents the main results of the section. The meaning of the columns is the same as in Table 8. In the case of DRILS, all the instances are used in the statistical tests and in the computation of the average quality (160 instances in the unweighted category and 132 instances in the weighted category). In the case of EA, we observed that it failed to complete the execution in some runs for some instances when it was combined with DPX. The reason was an out-of-memory error, caused by the large number of differing variables among the solutions in the initial generations. In this case, we computed the average quality only for the instances in which at least 90% of the runs were successful (nine of the ten runs), and for the instances with less than 90% successful runs we counted EA+DPX as significantly worse than the remaining EA+crossover combinations in Table 9 without performing any statistical test. Nine unweighted instances and five weighted instances had less than 90% successful EA+DPX runs. Three unweighted instances and no weighted instance had exactly 90% successful runs, and in the remaining instances EA+DPX finished successfully in all the runs.
Table 9: Average quality of the best solutions found by DRILS and EA with each crossover operator in MAX-SAT.

| | Unweighted | | Weighted | |
|---|---|---|---|---|
| | Statistical difference | Quality | Statistical difference | Quality |
| DRILS | | | | |
| DPX | | 0.9984 | | 0.9996 |
| APX | | 0.9973 | | 0.9984 |
| PX | | 0.9968 | | 0.9982 |
| NX | | 0.9946 | | 0.9915 |
| UX | | 0.9953 | | 0.9930 |
| EA | | | | |
| DPX | | 0.9644 | | 0.9583 |
| APX | | 0.9604 | | 0.9649 |
| PX | | 0.9095 | | 0.9057 |
| NX | | 0.8980 | | 0.8786 |
| UX | | 0.9134 | | 0.8989 |
From the results in Table 9 we conclude that both algorithms (DRILS and EA) perform better, in general, when using DPX as the crossover operator. Only in very few cases does any other crossover operator improve the final result of DRILS. When EA is used, the difference is less clear, but it is still significant.
Once again, we also observe that the performance of DRILS is better than that of EA. The maximum average quality in EA is 0.9649 (EA+APX in the weighted instances), while the average quality of DRILS is always above 0.9915 for all the crossover operators and categories of instances.
We do not expect DRILS or EA to be competitive with state-of-the-art incomplete MAX-SAT solvers like SATLike-c (Cai and Lei, 2020), because they are general optimization algorithms. However, DPX could be useful to improve the performance of some incomplete MAX-SAT solvers, as PX did in recent work (Chen et al., 2018).
4.2.4 DPX Alone as Search Algorithm
In this section, we analyze the results of DPX used alone to solve pseudo-Boolean optimization problems. We apply DPX to a random solution and its complement, with the goal of finding the global optimum. Due to technical limitations of our current implementation of DPX, we cannot remove the limit on the exhaustive exploration; instead, we use the maximum value allowed by the implementation, which is 28. This also means that, if the whole search space is not explored for a concrete instance, the result could depend on the initial (random) solution. For this reason, we run DPX ten times per instance, starting from different random solutions. We set a runtime limit of 12 hours.
After applying DPX to the 20 instances of NKQ Landscapes (ten instances for each value of K), we found that all the runs terminated prematurely due to an out-of-memory error (the memory limit was set to 5 GB, as in the previous experiments). In the case of the MAX-SAT instances, the runs finished in less than 12 hours for eight unweighted instances and two weighted instances. In 152 unweighted instances and 128 weighted ones, DPX ran out of memory, and in two weighted instances we stopped DPX after 12 hours of computation. Table 10 shows the MAX-SAT instances that finished without error and, for each one, it shows the average and maximum number of satisfied clauses over the ten independent runs of DPX, the average runtime (in seconds), and the base-2 logarithm of the number of implicitly explored solutions (which is the same in all runs). The minimum number of satisfied clauses found in all the runs of DRILS+DPX in less than 60 seconds is shown for comparison in the last column. We mark with an asterisk, in the column of implicitly explored solutions, the instances for which the search space was fully explored (thus, DPX was able to certify the global optimum).
Table 10: MAX-SAT instances for which DPX alone finished without error.

| Instance | Satisfied clauses: avg | Satisfied clauses: max | Time (s) | Explored (log2) | DRILS+DPX |
|---|---|---|---|---|---|
| Unweighted instances | | | | | |
| bcp-msp-normalized-ii16c1 | 15 690 | 15 848 | 9 401 | 49 | 20 266 |
| maxcut-brock200_1.clq | 889 | 892 | 8 253 | 32 | 894 |
| maxcut-brock400_2.clq | 929 | 933 | 6 387 | 32 | 936 |
| maxcut-brock800_2.clq | 809 | 816 | 8 572 | 31 | 819 |
| maxcut-p_hat1000-2.clq | 600 | 600 | 375 | *40 | 600 |
| maxcut-p_hat700-3.clq | 986 | 989 | 7 617 | 32 | 993 |
| maxcut-san400_0.5_1.clq | 644 | 644 | 990 | *40 | 644 |
| maxcut-san400_0.9_1.clq | 1047 | 1050 | 8 589 | 30 | 1052 |
| Weighted instances | | | | | |
| maxcut-johnson8-2-4.clq | 2 048 | 2 048 | 144 | *28 | 2 048 |
| maxcut-sanr200_0.9.clq | 5 994 | 6 027 | 10 680 | 31 | 6 036 |
The main conclusion is that DPX alone does not perform well compared with DPX combined with a search algorithm. It requires a large amount of memory and time to compute the result and, even when it finishes, its results are always outperformed by DRILS+DPX (see the last column of Table 10).
In the previous experiment, we forced DPX to use a lot of memory and runtime because the exploration limit was set to a high value. One of the advantages of DPX compared with an optimal recombination operator is that we can bound its time and space complexity using this limit. We wonder whether DPX alone with a low value of the exploration limit could outperform DRILS or EA. In order to answer this question, we designed a new experiment in which we run a new algorithm, called iDPX (iterated DPX), which consists of the iterated application of DPX to a random solution and its complement. The algorithm stops when a predefined time limit is reached (one minute in our case). We used irace to tune iDPX; the only parameter to tune is the exploration limit, and we used the same irace configuration that we used to tune DRILS and EA. According to irace, the best value for this parameter is 5 for both NKQ Landscapes and MAX-SAT. We ran iDPX ten times per instance and compared it with DRILS and EA using the five crossover operators.
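A minimal sketch of an iDPX-like loop follows; the `dpx` and `fitness` callables are placeholders, and the exact way iDPX builds the parents in each iteration may differ from this simplified version.

```python
import random
import time

def idpx(dpx, fitness, n, exploration_limit, time_limit_s=60.0, seed=None):
    """Repeatedly cross a random solution with its complement and keep the
    best offspring found until the time limit is reached."""
    rng = random.Random(seed)
    best, best_fitness = None, float("-inf")
    start = time.time()
    while time.time() - start < time_limit_s:
        x = [rng.randint(0, 1) for _ in range(n)]
        child = dpx(x, [1 - xi for xi in x], exploration_limit)
        child_fitness = fitness(child)
        if child_fitness > best_fitness:
            best, best_fitness = child, child_fitness
    return best, best_fitness
```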
In NKQ Landscapes, iDPX was statistically significantly worse than all the remaining ten algorithms (DRILS and EA with the five crossover operators) in all the instances. The average quality of the solutions obtained by iDPX was 0.7543 for K = 2 and 0.6640 for K = 5, which is very low compared with the values obtained by the other algorithms (see Table 8).
In MAX-SAT, iDPX finished prematurely with out-of-memory errors in 59 unweighted instances and 25 weighted ones. In these cases, we ran iDPX with a lower value of the exploration limit to reduce the memory requirements, but the algorithm ran out of memory again. When iDPX does not have a result for an instance due to an error, we mark it as statistically significantly worse than the other algorithms that finished with a result and as equivalent to the algorithms that did not finish (EA+DPX does not finish in some instances). In Table 11 we compare iDPX with the other ten algorithms, providing the same metrics as in Section 4.2.3. The average quality values change with respect to the ones in Table 9 because the instances used for the averages in Table 11 are a subset of the instances used in Table 9, due to the memory errors of iDPX.
Table 11: Comparison of iDPX with DRILS and EA (with each crossover operator) in MAX-SAT.

| | Unweighted | | Weighted | |
|---|---|---|---|---|
| | Statistical difference | Quality | Statistical difference | Quality |
| iDPX | | 0.8772 | | 0.8516 |
| DRILS | | | | |
| DPX | | 0.9992 | | 0.9997 |
| APX | | 0.9979 | | 0.9984 |
| PX | | 0.9981 | | 0.9982 |
| NX | | 0.9966 | | 0.9907 |
| UX | | 0.9967 | | 0.9925 |
| EA | | | | |
| DPX | | 0.9869 | | 0.9700 |
| APX | | 0.9675 | | 0.9676 |
| PX | | 0.9344 | | 0.9155 |
| NX | | 0.9156 | | 0.8836 |
| UX | | 0.9334 | | 0.9063 |
We observe in Table 11 that iDPX is statistically worse than DRILS+DPX, DRILS+APX, and DRILS+PX in all the instances. iDPX does not outperform EA+DPX in any case (it is equivalent in 16 instances). Furthermore, iDPX is worse than the algorithms not using DPX (e.g., EA+UX) in most of the instances. Thus, our main conclusion of this section is that DPX alone is not competitive with any of the search algorithms and should only be used as a crossover operator inside a search algorithm. DPX can provide the global optimum only when the number of variables is low or the VIG has a low number of edges, which is hardly the case in NP-hard problems (it happened in only three MAX-SAT instances of our benchmark).
4.3 Local Optima Network Analysis of DRILS+DPX
The best algorithm+crossover combination identified in Section 4.2 is DRILS+DPX, for both NKQ Landscapes and MAX-SAT. In this section we conduct a local optima network (LON) analysis (Ochoa et al., 2008, 2015) in order to better understand the search dynamics of DRILS+DPX. A LON is a graph where nodes are local optima and edges represent search transitions among them with a given operator. DRILS has two transition operators, crossover (DPX) and perturbation (each followed by local search to produce a local optimum), which are modeled as two types of edges in the LON: DPX + hill climber and perturbation + hill climber. An edge is improving if its end node has better fitness than its start node, equal if both nodes have the same fitness, and worsening if the end node has worse fitness than the start node. Our analysis reports the LONs extracted from ten runs of DRILS+DPX solving NKQ Landscapes instances with the two extreme values of K used in our experiments (2 and 5). The parameters used are the ones provided by irace for this algorithm and value of K (see Table 6).

The process of extracting the LON data is as follows. For each of the ten runs per instance, we record every unique local optimum and edge encountered, from the start until the end of the run. As in previous work (Ochoa and Veerapen, 2018), we combined the search trajectories produced by the ten runs as a sampling process to construct a single LON; our analysis revealed that there are no overlapping nodes or edges across runs, so the combined LONs contain ten connected components, one characterizing each run. Table 12 reports some basic LON statistics, specifically, the number of nodes (local optima) and the number of edges of each type, crossover (DPX) and perturbation, grouped into improving, equal, and worsening edges. They are statistics over the ten different instances (and LONs). There cannot be a worsening crossover edge, by design of DPX, but we include the column for the sake of completeness. The number of sampled nodes and edges is similar for the two values of K: although DPX is slower for K = 5 (the exploration limit is higher), the perturbation factor is lower than in the case of K = 2, so, with the parameters provided by irace, DRILS+DPX visits a similar number of unique local optima during the search for both values of K. It is interesting to note that most of the crossover edges are improving, while most of the perturbation edges are worsening. This suggests that the role of perturbation within DRILS is to provide diversity as raw material for DPX, which then incorporates newly found variable assignments that improve fitness.
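As an illustration, a simplified sketch of how a LON could be built and its edges classified during a run is shown below (using networkx; the node keys, the fitness lookup, and the edge kinds are illustrative of the procedure described above, not our actual tooling).

```python
import networkx as nx

def add_lon_edge(lon, src, dst, fitness, kind):
    """Record a transition between two local optima in the LON.

    kind is 'crossover' (DPX + hill climber) or 'perturbation'
    (perturbation + hill climber); the edge is classified by comparing
    the fitness of its end points.
    """
    if fitness[dst] > fitness[src]:
        outcome = "improving"
    elif fitness[dst] == fitness[src]:
        outcome = "equal"
    else:
        outcome = "worsening"
    lon.add_edge(src, dst, kind=kind, outcome=outcome)

# A multigraph allows several edges (transitions) between the same pair of optima.
lon = nx.MultiDiGraph()
```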
Table 12: Basic statistics of the LONs extracted from ten runs of DRILS+DPX per instance.

| | | | Crossover Edges | | | Perturbation Edges | | |
|---|---|---|---|---|---|---|---|---|
| K | | Nodes | improv. | equal | worse. | improv. | equal | worse. |
| 2 | avg | 24 361.7 | 20 490.7 | 3 676.9 | 0.0 | 10.8 | 0.0 | 12 263.4 |
| | std | 2 387.7 | 1 970.6 | 1 422.5 | 0.0 | 0.6 | 0.0 | 1 194.9 |
| 5 | avg | 23 371.8 | 22 697.6 | 123.0 | 0.0 | 53.2 | 0.0 | 11 898.8 |
| | std | 3 137.5 | 3 033.4 | 15.0 | 0.0 | 5.4 | 0.0 | 1 612.8 |
5 Conclusions
This article proposes a new gray-box crossover operator, dynastic potential crossover (DPX), that obtains a best offspring out of the full dynastic potential when the density of interactions among the variables is low; in that case it behaves like an optimal recombination operator. We have provided theoretical results proving that DPX, when recombining two parents, generates an offspring no worse than that of partition crossover (PX) and usually no worse than that of articulation points partition crossover (APX). We also performed a thorough experimental study to compare DPX with four crossover operators, including PX and APX, using two different algorithms, DRILS and EA, on NKQ Landscapes and MAX-SAT instances. We conclude that DPX offers a great exploration ability, providing offspring that are much better than their parents compared with the other crossover operators. However, this ability comes at a cost in runtime and memory consumption, which is the main drawback of the operator. Included in DRILS, DPX outperforms all the other crossover operators on NKQ Landscapes and MAX-SAT instances. In the case of EA, it also outperforms the other crossover operators except on NKQ Landscapes with K = 5, where PX is the best performing operator. Thus, we suggest that a combined use of PX and DPX in EA could be a good option for high-epistasis functions.
An interesting future line of research is to analyze the shape of the connected components of the recombination graph to design precomputed clique trees that could speed up the operator. The code of DPX can also be further optimized for particular problems, like MAX-SAT; in this work we used a general implementation. A problem-specialized version of DPX could be combined with state-of-the-art incomplete MAX-SAT solvers to improve their performance.
Acknowledgments
This research is partially funded by the Universidad de Málaga, Consejería de Economía y Conocimiento de la Junta de Andalucía and FEDER under grant number UMA18-FEDERJA-003 (PRECOG); under grant PID2020-116727RB-I00 (HUmove) funded by MCIN/AEI/10.13039/501100011033; and the TAILOR ICT-48 Network (No 952215) funded by the EU Horizon 2020 research and innovation program. The work is also partially supported in Brazil by the São Paulo Research Foundation (FAPESP), under grants 2021/09720-2 and 2019/07665-4, and the National Council for Scientific and Technological Development (CNPq), under grant 305755/2018-8.
Special thanks to J.A. Lozano, Sebastian Herrmann, and Hansang Yun for providing pointers to algorithms for the clique tree construction, and to the anonymous reviewers for their thorough review and constructive suggestions.
Notes
Available at https://github.com/jfrchicanog/EfficientHillClimbers
We will use the term clique to refer to a maximal complete subgraph, as the cited literature does. However, the term clique is sometimes used to refer to any complete subgraph (not necessarily maximal).
What is really stored in Algorithm 2 is the change of the variables with respect to one parent solution, but this can be considered an implementation detail.
The list of instances is available together with the source code of DPX on GitHub.
In general, they are not independent because the Hamming distance among them is fixed. But the marginal distributions must be the same.
SATLike-c was the winner in the unweighted incomplete track of the MAX-SAT Evaluation 2021 and got third position in the weighted incomplete track.