Abstract

There can be a complicated mapping relation between decision variables and objective functions in multi-objective optimization problems (MOPs). It is uncommon for the decision variables to influence the objective functions equally; decision variables act differently in different objective functions. Hence, the mapping relation is often unbalanced, which causes some redundancy during the search in the decision space. In response to this scenario, we propose a novel memetic (multi-objective) optimization strategy based on dimension reduction in decision space (DRMOS). DRMOS first analyzes the mapping relation between decision variables and objective functions. Then, it reduces the dimension of the search space by dividing the decision space into several subspaces according to the obtained relation. Finally, it improves the population by memetic local search strategies in these decision subspaces separately. Further, DRMOS has good portability to other multi-objective evolutionary algorithms (MOEAs); that is, it is easily compatible with existing MOEAs. In order to evaluate its performance, we embed DRMOS in several state-of-the-art MOEAs in our experiments. The results show that DRMOS has advantages in terms of convergence speed, diversity maintenance, and portability when solving MOPs with an unbalanced mapping relation between decision variables and objective functions.

1  Introduction

Many excellent MOEAs have been developed recently, and some of them have been summarized in the literature (Coello, 1999; Zitzler and Thiele, 1998; Zitzler et al., 2000; Khare et al., 2003). Among them, NSGA-II (Deb et al., 2002a) and SPEA2 (Zitzler et al., 2001) are the most popular. NSGA-II is well known for its fast non-dominated sorting for dominance ranking and its crowding distance for diversity maintenance, and SPEA2 is well known for its environmental selection on the basis of both dominance ranking and diversity maintenance. Recently, some MOEAs with new selection techniques have been proposed. For example, ε-MOEA (Deb et al., 2005) adopts ε-dominance to improve its performance; MOEA/D (Zhang et al., 2008b) applies the decomposition idea as a new selection pressure; hypervolume-based MOEA (Bader and Zitzler, 2011) performs well on many-objective optimization problems; TDEA (Karahan and Koksalan, 2010) uses the territory defining idea in its diversity maintenance mechanism; and memetic-based MOEAs (Goh et al., 2009; Knowles and Corne, 2000) use local search to solve MOPs.

As the mapping relation between the decision variables and the objective functions of MOPs is considerably more complicated than that of single-objective optimization problems, MOEAs face new challenges. It is quite uncommon for all the decision variables of an MOP to influence a given objective function value to the same extent. An unbalanced mapping relation, in which different decision variables affect any given objective value differently, often appears. However, existing MOEAs treat all the decision variables equally. Because of this, they waste search resources on decision variables that only slightly affect a given objective function. Thus, the unbalanced mapping relation causes redundancy in the form of unnecessary search. MOEAs should focus their search resources on the decision variables that significantly affect each objective function, which is a type of dimension reduction. The mapping relation is a natural characteristic of an MOP, which can be regarded as prior knowledge. MOEAs that use prior knowledge for local search can be seen as memetic algorithms (MAs). In order to reduce the difficulty of MOPs, a considerable amount of work on dimension reduction (Brockhoff and Zitzler, 2009; Saxena and Deb, 2007; López Jaimes et al., 2009; Corne and Knowles, 2007) and prior knowledge-based MAs (Meuth et al., 2009; Ong et al., 2010) has already been carried out. A brief introduction to these two types of techniques is presented below.

Dimension Reduction

Dimension reduction has been widely applied in the fields of data mining and statistics, as it approximates the original problem as closely as possible while keeping the important features and discarding the rest. Thus, the difficulty of the original problem can be reduced. In the field of many-objective optimization (MOPs with more than three objectives), dominance relations (Kukkonen and Lampinen, 2007; Sato et al., 2007), preference-based methods (Thiele et al., 2009), visualization (Pryke et al., 2007), and dimension reduction (Brockhoff and Zitzler, 2009) are the four main topics. The study of dimension reduction focuses mainly on the objective space. As the Pareto dominance relation hardly contributes to selection in many-objective optimization problems, it is necessary to reduce the dimension of the objective space. The dimension can be reduced when objectives are correlated (Ishibuchi et al., 2011). The mainstream methods can be classified into three types: dimension reduction preserving the dominance relation (Brockhoff and Zitzler, 2009), dimension reduction based on a feature selection technique (López Jaimes et al., 2008), and dimension reduction by PCA to remove the less important objectives (Jolliffe, 2002). In the decision space of an MOP, the linkage between variables also increases its difficulty (Deb et al., 2006). Some researchers have attempted to reduce the dimension of the decision space; for example, RM-MEDA (Zhang et al., 2008a) approximates a solution set by several lines.

MA

MA, which emphasizes heuristic local search, is attracting increasing attention. Unlike a purely global-search evolutionary algorithm (EA), an MA combines heuristic local search with global search (Meuth et al., 2009; Ong et al., 2010). In an MA, a meme represents a kind of local search. Lamarckian learning (Le et al., 2009; Ong and Keane, 2004; Liang et al., 2000a, 2000b), the multi-meme MA (Krasnogor and Smith, 2005), and Baldwinian learning (Gong et al., 2010) are different types of MAs. Recently, the adaptive MA, which adaptively selects suitable memes for different problems, has become popular in the field (Ong et al., 2006). MAs are powerful for real-world problems where prior knowledge is available, since memes can be designed according to that knowledge. MAs have been applied to specific problems, such as traveling salesman problems (TSPs; Lim et al., 2008), job-shop scheduling problems (Hasan et al., 2009), filter design (Tirronen et al., 2008), HIV multidrug therapy design (Neri et al., 2007), and PMSM drive control design (Caponio et al., 2007). It is worth noting that MAs have already been successfully applied to MOPs (Goh et al., 2009; Knowles and Corne, 2000).

Taking the related work mentioned above as a foundation, we propose a memetic optimization strategy based on dimension reduction in decision space (DRMOS) in this paper. This strategy improves individuals by local search in the decision subspaces after dimension reduction. Given below are the contributions of this paper.

Analysis of the Relation Between Decision Variables and Objective Functions

An unbalanced mapping relation between decision variables and objective functions is common in MOPs; DRMOS treats this relation as the prior knowledge for its memetic local search strategies. DRMOS obtains the relation between decision variables and objective functions from samples by a statistical analysis approach.

Memetic Local Search Strategy

With the above heuristic information about the relation between decision variables and objective functions, DRMOS divides the decision space into several subspaces for memetic local search strategies, which decreases the dimension of the decision space.

Portability to Other MOEAs

DRMOS acts like a patch for existing MOEAs: it can be plugged into them to improve their performance on MOPs with an unbalanced mapping relation.

The rest of this paper is organized as follows. The related definitions are introduced in Section 2. Section 3 describes the details of DRMOS, such as its basic idea, a relation analysis approach, memetic local search strategies, and its portability to MOEAs. Comparative experiments with the applications of DRMOS to other MOEAs are presented in Section 4. Finally, Section 5 concludes the discussion.

2  Related Definitions

2.1  Definitions of MOP

An m-objective optimization problem can be represented as Equation (1),

$$\min_{x \in X} \; F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big)^T \tag{1}$$

where X ⊆ R^n is its feasible space and x = (x1, x2, ..., xn) is the decision variable vector. F : X → R^m is the objective function in an R^m space.

If x^a and x^b are two decision vectors, then their corresponding objective values are F(x^a) and F(x^b). If fi(x^a) ≤ fi(x^b) for all i ∈ {1, ..., m} and fj(x^a) < fj(x^b) for at least one j, then x^a is said to dominate x^b, denoted as x^a ≺ x^b. A solution x* is Pareto optimal only if there is no x ∈ X such that x ≺ x*. The set of all Pareto optimal solutions in X is called the Pareto Set (PS). Further, the set of all the Pareto optimal objective values is called the Pareto Front (PF; Miettinen, 1999).
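As a concrete illustration, the dominance test can be written in a few lines of Python (a hypothetical helper, not part of the original paper):

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

# (1, 2) dominates (1, 3); (1, 3) and (2, 2) are mutually non-dominated.
assert dominates((1, 2), (1, 3))
assert not dominates((1, 3), (2, 2)) and not dominates((2, 2), (1, 3))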

2.2  Decision Subspace

Decision variables influence objective function values differently in most MOPs, forming a type of unbalanced mapping relation. Equation (2) is an example used for explaining the unbalanced mapping relation. In Equation (2), decision variable x3 is unrelated to objective f1. The search for x3 is unnecessary for f1. However, the case in Equation (2) is an extreme situation of the unbalanced mapping relation. Mostly, the unbalanced mapping relation appears in MOPs with some decision variables that have little influence on some objective functions. In summary, redundancy does exist in the search space of some objectives. The decision space of MOPs with the unbalanced mapping relation can be divided into several subspaces for dimension reduction.
formula
2
Ideally, for an MOP with m objectives, its decision space can be divided into m + 1 decision subspaces. Subspace Si (1 ≤ i ≤ m) is the subspace spanned by the decision variables related to objective fi only. Equation (3) is its definition, where Ji is the index set of the decision variables spanning Si; Si is the subspace spanned by the decision variables xj with j ∈ Ji. It is noteworthy that the partial derivatives in Equation (3) simply represent whether the decision variables and the objective functions are related (including both linear and nonlinear correlation); there are no requirements for the differentiability of the objective functions. The (m + 1)th decision subspace Sothers is the orthogonal complement of the union of the former m subspaces, as shown in Equation (4). Subspace Sothers is spanned by the decision variables related to multiple objectives. As these m + 1 subspaces are disjoint, their direct sum is the entire decision space, as shown in Equation (5), where "+" means sum and "⊕" means direct sum. The distance d in a subspace S can be calculated using Equation (6), in which J is the index set of S's orthogonal basis and dj is the projection of the distance on xj.

$$S_i = \mathrm{span}\{\, x_j \mid \partial f_i/\partial x_j \not\equiv 0 \ \text{and}\ \partial f_k/\partial x_j \equiv 0 \ \text{for all}\ k \neq i \,\}, \qquad J_i = \{\, j \mid x_j \in S_i \,\} \tag{3}$$

$$S_{\mathrm{others}} = \Big(\bigcup_{i=1}^{m} S_i\Big)^{\perp} \tag{4}$$

$$S_1 + S_2 + \cdots + S_m + S_{\mathrm{others}} = S_1 \oplus S_2 \oplus \cdots \oplus S_m \oplus S_{\mathrm{others}} = \mathbb{R}^n \tag{5}$$

$$d = \sqrt{\sum_{j \in J} d_j^{\,2}} \tag{6}$$

It is clear that objective fi can be optimized in subspace Si independently, with no influence on the other objectives, and that subspace Si is smaller than the entire search space. The dimension of the decision space is thus reduced through the decision subspace division. For an individual P, objective fi can be optimized independently by simply searching in subspace Si; the obtained individual cannot be worse than P. In other words, the subspaces Si (1 ≤ i ≤ m) only affect convergence. The decision variables in subspace Sothers, by contrast, are related to multiple objectives, so Sothers is related to both convergence and diversity. If every objective function depends on all the decision variables of an MOP, the MOP cannot be decomposed into such subspaces under the above strong assumption. Yet it is still an unbalanced case if the decision variables act differently on different objective functions, and searching along decision variables that influence an objective function only slightly yields little progress. This concept is extended to common MOPs in Section 2.3.
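Operationally, once it is known which variables affect which objectives, the index sets of the subspaces follow directly. The Python sketch below (the boolean relation-matrix representation is our illustrative assumption, not notation from the paper) partitions the variable indices into J1, ..., Jm and the index set of Sothers:

def split_subspaces(relation):
    """relation[j][k] is True if decision variable x_j affects objective f_k.
    Returns (J, J_others): J[i] lists the variables spanning S_i (related to
    f_i only); J_others lists the variables related to several objectives."""
    m = len(relation[0])
    J = [[] for _ in range(m)]
    J_others = []
    for j, row in enumerate(relation):
        related = [k for k in range(m) if row[k]]
        if len(related) == 1:
            J[related[0]].append(j)   # x_j spans part of S_i
        else:
            J_others.append(j)        # x_j belongs to S_others
    return J, J_others

# x0 affects f0 only, x1 affects f1 only, x2 affects both objectives.
print(split_subspaces([[True, False], [False, True], [True, True]]))
# -> ([[0], [1]], [2])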

2.3  MOPs Can Be Dimension-Reduced in Decision Space

2.3.1  MOPs Can Be Strictly Dimension-Reduced in Decision Space

If an MOP can be decomposed into the above decision subspaces, the MOP can be dimension-reduced in its decision space. In other words, there are some decision variables that are related only to some objectives. The condition for such a dimension reduction is shown in Equation (7), where n is the dimension of the entire decision space. If an MOP strictly satisfies this definition, it is referred to as strictly dimension-reduced in the decision space.

$$\dim(S_{\mathrm{others}}) < n, \quad \text{i.e.,} \quad \sum_{i=1}^{m} \dim(S_i) \ge 1 \tag{7}$$

2.3.2  MOPs Can Be Weakly Dimension-Reduced in Decision Space

The MOPs that can be strictly dimension-reduced in the decision space are very special because the requirements of the decision subspace division are very strict. Without loss of generality, the requirements of strict dimension reduction should be relaxed for weak dimension reduction. Subspace Si (1 ≤ i ≤ m) is redefined as the subspace spanned by the decision variables that considerably impact objective fi only. If an MOP satisfies Equation (7) with these redefined subspaces, it has an unbalanced mapping relation between decision variables and objective function values and can be weakly dimension-reduced in the decision space. Furthermore, the relation between the decision variables and the objective functions can be expressed in terms of some statistical features; our specific approach is discussed in Section 3.2.

When one decision variable is varied while the others are held fixed, the variances of the resulting objective values reveal its influence: if the variance of one objective is large, the decision variable has a significant influence on that objective function; otherwise, its influence on that objective function is not significant.

Provided that an MOP can be weakly dimension-reduced in its decision space, the median of the objectives' variances can be set as the threshold for measuring the influence of one decision variable on the objectives. If one objective's variance is larger than the threshold, the decision variable is regarded as having considerable influence on that objective.

2.3.3  Reduction Rate

The mapping relation between decision variables and objective functions differs from MOP to MOP. The reduction rate is defined in Equation (8) to measure the degree of dimension reduction in the decision space. The larger the reduction rate, the more dimensions can be reduced and the more computational cost can be saved; a reduction rate of 0 means that the MOP cannot be dimension-reduced in the decision space. Taking an MOP with three objectives and 12 decision variables as an example, we illustrate the definition of the reduction rate in Table 1. In subspace Sothers, multiple objectives have to be considered; consequently, the weights of its variables in the reduction rate are larger than those of the variables in the subspaces Si (1 ≤ i ≤ m).
formula
8
Table 1:
Different reduction rates of different 3-objective problems.

Case number | Situation | Reduction rate
1 |  |
2 |  | 0.667
3 |  | 0.889
4 |  | 0.667

3  Memetic Optimization Strategy Based on Dimension Reduction in Decision Space

3.1  Basic Idea

In DRMOS, all MOPs are treated as problems that can be weakly dimension-reduced. DRMOS aims to improve the search ability of existing MOEAs on such MOPs: it obtains the mapping relation to reduce the dimension of the decision space and applies memetic local search strategies in the divided decision subspaces. The flowchart of DRMOS is shown in Figure 1.

Figure 1:

Flowchart of DRMOS.

As shown in Figure 1, DRMOS consists of two major procedures, namely, relation analysis and memetic local search. The former gathers information: the mapping relation is learned via sampling. The decision space is then divided into several subspaces according to this heuristic information. Finally, memetic local search strategies are applied to optimize each objective in its corresponding decision subspace. The relation analysis result is updated dynamically, and the memetic local search strategies are adjusted according to the updated information.

3.2  Relation Analysis Approach

The mapping relation plays an important role in DRMOS, and some work on the prediction of such relations has been done. For example, an artificial neural network (Adra et al., 2009; Gaspar-Cunha and Vieira, 2004) has been used for mapping an objective space locally back to the decision space; the estimation of distribution algorithm (EDA) in Larranaga and Lozano (2002) builds a probability distribution model of the variables on the basis of statistical information; and a Bayesian network (Laumanns and Ocenasek, 2002; Khan et al., 2002) adopts a probabilistic model of the variables. This work inspired our relation analysis approach.

As mentioned above, our approach relies on a simple statistical characteristic of samples: it uses the variances of the samples' corresponding objective values to predict the level of their influence. If the variance of one objective is small, the decision variable impacts that objective slightly; otherwise, it impacts the objective significantly. The median of all the objectives' variances is set as a threshold for this measurement. The prediction of one decision variable's influence on the objectives can be obtained from such samples locally with a credibility C. Multiple samplings and predictions are adopted in our approach; their consistency increases C, while their inconsistency decreases C. Our approach is described in detail in Table 2 in MATLAB notation. In Table 2, C^i_{j,k} is the credibility of the prediction between decision variable xj and objective fk after the ith sampling. When it is sufficiently high, the prediction can be used by the later memetic local search strategies. Because the mapping relations of xj to all the objectives in an MOP are independent, the total credibility Cj can be calculated using Equation (9). Further, Cj is used as a kind of probability to control the sampling on xj; that is, when Cj is large, many further samplings on xj are unnecessary.

$$C_j = \prod_{k=1}^{m} C^i_{j,k} \tag{9}$$
Table 2:
Relation analysis approach.

As indicated above, the obtained mapping relation is a prediction, which means that it may not be correct. When the credibility for decision variable xj is larger than a threshold T for every j (1 ≤ j ≤ n), the relation prediction can be used for the decision subspace division. The process of relation analysis is very important for DRMOS: without the mapping relation, the memetic local search strategies cannot be applied. T plays a very important role in DRMOS. If T is small, the prediction may wrongly guide the memetic local search strategies in the divided subspaces; if T is large, it incurs a significant computational cost for sampling. The experimental analysis of the influence of T on the entire DRMOS is discussed in Section 4.2.
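To make the procedure concrete, the following Python sketch reproduces the variance-and-median test described above for a single decision variable. The sample sizes and the ±0.1 credibility update are illustrative assumptions, not the exact settings of Table 2:

import random
import statistics

def predict_influence(f, bounds, j, base, samples=20):
    """Vary x_j alone around a base point and record the objective vectors.
    An objective whose variance reaches the median of all objective
    variances is predicted to be considerably influenced by x_j."""
    values = []
    for _ in range(samples):
        x = list(base)
        x[j] = random.uniform(*bounds[j])
        values.append(f(x))
    variances = [statistics.pvariance([v[k] for v in values])
                 for k in range(len(values[0]))]
    threshold = statistics.median(variances)
    return tuple(var >= threshold for var in variances)

def credibility(f, bounds, j, random_base, rounds=10):
    """Repeat the prediction from random base points; agreement between
    successive predictions raises C_j, disagreement lowers it."""
    pred, c = None, 0.0
    for _ in range(rounds):
        new = predict_influence(f, bounds, j, random_base())
        c = min(1.0, c + 0.1) if new == pred else max(0.0, c - 0.1)
        pred = new
    return pred, c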

3.3  Memetic Local Search Strategy

According to the mapping relation obtained by the relation analysis approach, the entire decision space can be divided into several disjoint subspaces for separate optimization. That is, objective fi can be optimized independently through a local search in subspace Si, where the decision variables with little influence on the other objectives can be ignored. The memetic local search strategies in DRMOS aim at improving individuals through the search in subspace Si (1 ≤ i ≤ m) to optimize objective fi, as shown in Table 3. Every objective is optimized in its corresponding subspace, which is relatively easy to solve even with a classical genetic algorithm (GA). In the experiments reported in this paper, a classical GA is used for the local search, and the stopping criterion is 2,000 function evaluations.

Table 3:
Local search method in Si (1 ≤ i ≤ m).
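The body of Table 3 is an algorithm listing; as a compressed stand-in, the Python sketch below replaces the classical GA named above with a simple (1+1) hill climber. The essential point is the same: only the variables indexed by J_i are perturbed, so the other objectives are untouched and the result can never be worse than the starting point on f_i. All parameter values are illustrative.

import random

def local_search_in_Si(f_i, bounds, J_i, x, evals=2000, step=0.1):
    """Minimize objective f_i by perturbing only the variables spanning S_i."""
    best, best_val = list(x), f_i(x)
    for _ in range(evals):
        cand = list(best)
        j = random.choice(J_i)
        lo, hi = bounds[j]
        cand[j] = min(hi, max(lo, cand[j] + random.gauss(0, step * (hi - lo))))
        val = f_i(cand)
        if val < best_val:                 # accept improvements only
            best, best_val = cand, val
    return best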

3.3.1  Two Memetic Local Search Strategies

Convergence and diversity are both important for MOEAs, so two memetic local search strategies are designed in DRMOS to improve them. Both improve individuals in subspace Si (1 ≤ i ≤ m) as shown in Table 3; however, they improve different individuals. Strategy 1 improves individuals of the current population in order to improve convergence. Strategy 2 aims at diversity: unlike Strategy 1, it does not choose individuals from the current population but adds artificial individuals in the less-explored areas of subspace Sothers, identified by the harmonic distance measurement (Wang et al., 2010) as the areas where the individuals of the current population are not crowded. As Table 4 shows, the center of the neighborhood of the individual with the largest harmonic distance is used as the artificial individual in Strategy 2.

Table 4:
Local search method in Sothers.
Parameter: m: the number of objectives
1 Calculate the m-neighbor harmonic distances in Sothers of all the solutions.
2 Find the individual P with the largest harmonic distance.
3 Artificial individual Pa is the center of P's m nearest neighbors.
4 Local search Pa in Si (1 ≤ i ≤ m) as in Table 3.
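A Python sketch of steps 1-3 follows. The harmonic distance here is taken as the harmonic mean of the distances to the m nearest neighbours, which is our reading of Wang et al. (2010), and the coordinates are assumed to be the projections onto Sothers:

import math

def harmonic_distance(p, others, m):
    """Harmonic mean of the distances from p to its m nearest neighbours;
    a large value marks a point lying in a sparsely populated area."""
    dists = [d for d in sorted(math.dist(p, q) for q in others) if d > 0][:m]
    return len(dists) / sum(1.0 / d for d in dists)

def artificial_individual(points, m):
    """Strategy 2 seed: the centre of the m nearest neighbours of the
    least crowded point (the one with the largest harmonic distance)."""
    rest = lambda p: [q for q in points if q is not p]
    p = max(points, key=lambda p: harmonic_distance(p, rest(p), m))
    neighbours = sorted(rest(p), key=lambda q: math.dist(p, q))[:m]
    return [sum(c) / len(neighbours) for c in zip(*neighbours)]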

A memetic local search strategy requires extra function evaluations. Therefore, the memetic local search strategies in DRMOS must be used judiciously; otherwise they waste function evaluations. For all MAs, the balance between global search and local search is an important research problem (Ishibuchi and Murata, 1998; Ishibuchi et al., 2003; Jaszkiewicz, 2002). In DRMOS, the memetic local search strategies are executed only when the results of the relation analysis approach are reliable (all the credibility values are larger than T). In order to reduce the number of function evaluations, DRMOS avoids improving similar individuals in the memetic local search strategies. DRMOS maintains a set, record, to store the improved individuals: after an individual is improved by a memetic local search, its decision variables are copied into record as a reference for the similarity measurement of the other individuals. In other words, before applying the memetic local search strategies to a selected individual P, a similarity comparison with all the individuals in record is carried out, where the similarity is measured by the Euclidean distance in Sothers. If the distance is smaller than the diagonal of the feasible area in Sothers, DRMOS drops this individual and chooses another one from the current population until the entire population has been compared.

3.3.2  Interaction between Two Strategies

Convergence and diversity are two important topics in MOPs. However, when the computational time is limited, convergence must be considered first. In DRMOS, Strategy 1 is employed first to improve convergence. When all the individuals in the current population are similar to the members in record, Strategy 2 is applied in order to add diversity. The details are shown in Table 5. On one hand, the use of Strategy 1 can enable the population to evolve toward the true PF; on the other hand, the use of Strategy 2 can effectively maintain the diversity of the population.

Table 5:
Memetic local search strategy in DRMOS.
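The body of Table 5 is an algorithm listing; in outline, the dispatch logic reads as the Python sketch below, where improve stands for the subspace local search of Table 3, seed_new for the artificial individual of Table 4, and the similarity threshold diag is the diagonal of the feasible area in Sothers as stated in Section 3.3.1 (all names are placeholders):

import math

def memetic_step(population, record, others_idx, diag, improve, seed_new):
    """Prefer Strategy 1 on a member that is not yet similar to anything in
    record; if every member is similar, fall back to Strategy 2."""
    proj = lambda x: [x[j] for j in others_idx]   # coordinates in S_others
    for p in population:
        if all(math.dist(proj(p), proj(r)) >= diag for r in record):
            record.append(p)
            return improve(p)              # Strategy 1: convergence
    return improve(seed_new(population))   # Strategy 2: diversity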

Compared with a mutation strategy, Strategy 2 has clear advantages. Mutation uses almost no prior knowledge: although it also generates individuals in the less-explored areas, the fitness of these individuals may not be good enough to survive the selection that follows. Strategy 2, in contrast, generates individuals in the less-explored areas and improves them at the same time.

3.4  Portability

DRMOS is designed to improve existing MOEAs for the MOPs that can be weakly dimension-reduced in the decision space. DRMOS can be applied as an offspring generation method; that is, DRMOS can be embedded in MOEAs by adding the individuals obtained through DRMOS to the current population, as shown in Figure 2.

Figure 2:

DRMOS embedding in MOEAs.

In Figure 2, the solid line represents the general flow of MOEAs, and the dotted line represents DRMOS. DRMOS does not affect the flow of MOEAs. Hence, DRMOS can be easily introduced into MOEAs. In this paper, we refer to MOEA XXX with DRMOS as DR_XXX. For example, NSGA-II with DRMOS is called DR_NSGA-II.
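In pseudocode terms, the embedding amounts to one extra line in the generation loop. A generic Python sketch follows; all function names are placeholders for the host MOEA's operators:

def moea_with_drmos(init, vary, select, drmos, generations):
    """Generic MOEA loop with DRMOS as an additional offspring generator,
    the dotted path in Figure 2: individuals produced by DRMOS are simply
    added to the pool before environmental selection."""
    population = init()
    for _ in range(generations):
        offspring = vary(population)
        offspring += drmos(population)  # empty until the relation is credible
        population = select(population + offspring)
    return population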

Table 6:
Reference points for calculation of Hypervolume.

UF1 (2.47319, 3.33898) | UF2 (1.61623, 1.34417) | UF3 (1.53571, 4.64725) | UF4 (1.14112, 1.15319)
UF5 (6.28565, 4.51234) | UF6 (2.96617, 2.47571) | UF7 (3.02704, 3.21249) | UF8 (2.56648, 16.0847, 6.02378)
UF9 (18.8221, 16.5887, 2.8220) | UF10 (16.5010, 29.7562, 25.3066) | ZDT1 (1.00000, 1.17915) | ZDT2 (1.00000, 1.00000)
ZDT3 (1.00000, 1.00000) | ZDT4 (1.00000, 19.8939)

4  Simulation Results

In order to evaluate the performance of DRMOS, DRMOS is embedded into several popular MOEAs. The experiment includes three parts: parameter analysis, an experiment on the two memetic local search strategies, and a comparative experiment on benchmark problems.

4.1  Metrics

Many metrics can be used for evaluating the performance of MOEAs. Since every metric has its own disadvantages, multiple metrics are employed in our experiments.

4.1.1  Hypervolume

Hypervolume (Zitzler and Thiele, 1999) measures the size of the region in the objective space that is covered by the non-dominated solutions with respect to a reference point. It can reflect both convergence and maximum spread. In this study, the reference points are set as the maximum values obtained in the results of all the comparative algorithms, as given in Table 6.
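For the 2-objective case, the Hypervolume reduces to a sum of rectangle areas. A minimal Python sketch (minimization; the front is assumed non-dominated):

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective non-dominated front, bounded by ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):      # ascending f1 implies descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# One point at the origin with reference (1, 1) covers the unit square.
assert hypervolume_2d([(0.0, 0.0)], (1.0, 1.0)) == 1.0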

4.1.2  Purity

Purity (Bandyopadhyay et al., 2004) is used for comparing the convergence ability of the compared algorithms. Q non-dominated solution sets from Q algorithms are included in the comparison, written as R1, ..., RQ. R is the union of R1, ..., RQ, and R* is the non-dominated solution set of R. R*_i is defined as R_i ∩ R*. The purity of the ith algorithm is given by Equation (10), where ri is the number of non-dominated solutions in Ri and |R*_i| is the number of non-dominated solutions in R*_i. The larger its value, the better the convergence of the algorithm relative to the other compared algorithms.

$$P_i = \frac{|R^{*}_i|}{r_i} \tag{10}$$
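A direct Python translation, reusing the dominates helper from Section 2.1 (sets are given as lists of objective tuples):

def purity(sets):
    """Purity of each of the Q algorithms: the share of its solutions that
    remain non-dominated in the union R of all the compared sets."""
    union = [p for s in sets for p in s]
    r_star = [p for p in union if not any(dominates(q, p) for q in union)]
    return [sum(1 for p in s if p in r_star) / len(s) for s in sets]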

4.1.3  Minimal Spacing

Minimal Spacing (Bandyopadhyay et al., 2004) is a modified version of the uniformity metric SP (Van Veldhuizen and Lamont, 2000), shown in Equation (11). In Equation (11), di is not duplicated, unlike in SP. Taking solutions j and k as an example, suppose they are each other's nearest neighbors; the distance from j to k is used as both dj and dk in SP, whereas in Minimal Spacing this distance can be used for only one solution. That is, when Minimal Spacing is calculated, all the used distances are marked, and di is the nearest distance among the unmarked distances from solution i.

$$\mathit{MS} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \big(\bar{d} - d_i\big)^2} \tag{11}$$

where d-bar is the mean of all di and n is the number of solutions.
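One plausible reading of the marking rule in Python (math.dist requires Python 3.8+; the fallback for a solution whose distances are all marked is our assumption):

import math
import statistics

def minimal_spacing(front):
    """Spacing where each pairwise distance may serve as the nearest-neighbour
    distance of only one solution; used distances are 'marked'."""
    used, d = set(), []
    for i, p in enumerate(front):
        candidates = sorted((math.dist(p, q), min(i, j), max(i, j))
                            for j, q in enumerate(front) if j != i)
        for dist, a, b in candidates:
            if (a, b) not in used:
                used.add((a, b))
                d.append(dist)
                break
        else:                      # every distance from i already marked
            d.append(candidates[0][0])
    mean = statistics.mean(d)
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (len(d) - 1))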

4.2  Parameter Analysis

Parameter T is unique to DRMOS compared with other MOEAs. Therefore, in this section, we analyze the effect of T on the behavior of the relation analysis approach. NSGA-II with DRMOS is adopted for this experiment. The original crowding distance has some disadvantages with respect to diversity (Yang et al., 2010); to avoid this drawback, the diversity maintenance of Yang et al. (2010) is used, and the algorithm is written as DR_NSGA-II_KN. As the range of T is [0,1], T is sampled uniformly at intervals of 0.1 for the 2-objective problem UF4 and the 3-objective problem UF8, with 300,000 function evaluations. On the one hand, we analyze the behavior of the relation analysis approach by the number of function evaluations and the accuracy rate of the divided subspaces, as shown in Figure 3. On the other hand, we analyze the influence on the final result using Hypervolume and Minimal Spacing, as shown in Figure 4.

Figure 3:

Number of function evaluations and accuracy rate of relation analysis for different values of T.

Figure 4:

Hypervolume and Minimal Spacing for different values of parameter T.

In Figure 3, the number of function evaluations increases with T; that is, the computational cost of our relation analysis approach grows as T increases. When T is larger than 0.5, the accuracy rate reaches 1, and the extra function evaluations are wasted. The same situation is reflected in the final results shown in Figure 4. When T is small, the performance of DRMOS is poor because the prediction result has little credibility; the subspaces are then divided wrongly according to the incorrect information, and the local search turns out to be ineffective. When T is larger than 0.5, the performance of DRMOS drops gently: the memetic local search strategy is applied in the correctly divided subspaces, but the large T incurs a relatively high sampling cost. From the above, we can see that the selection of T is a trade-off between obtaining the right mapping relation and spending the fewest function evaluations.

The principle of choosing T is “using the fewest function evaluations for the correct mapping relation.” From the result in Figures 3 and 4, 0.5 is the best option for the 2-objective and 3-objective problems.

4.3  Experiment on Two Memetic Local Search Strategies

Strategy 1 and Strategy 2 are two memetic local search strategies for MOPs; in particular, Strategy 2 brings a new idea to multi-objective MAs. Therefore, its behavior is analyzed in this section by comparing DR_NSGA-II_KN with and without Strategy 2 on UF4 and UF8. All the parameter settings are the same as those in Section 4.2. The results of Purity, Minimal Spacing, and Hypervolume are shown in Table 7. We find that DR_NSGA-II_KN with Strategy 2 performs better on both convergence and diversity than DR_NSGA-II_KN without Strategy 2, especially on the 3-objective problem UF8. Because Strategy 2 adds diversity to the population, it slightly improves the performance of MOEAs.

Table 7:
Metrics Purity, Minimal Spacing, and Hypervolume of DR_NSGA-II_KN with and without Strategy 2 (S2) on UF4 and UF8; each cell gives the average with the standard deviation in parentheses.

 | Purity, with S2 | Purity, without S2 | Minimal Spacing, with S2 | Minimal Spacing, without S2 | Hypervolume, with S2 | Hypervolume, without S2
UF4 | 0.5053 (0.0224) | 0.4947 (0.0224) | 0.0095 (0.0009) | 0.0095 (0.0007) | 0.3820 (0.0004) | 0.3818 (0.0004)
UF8 | 0.5020 (0.0182) | 0.4980 (0.0182) | 0.0686 (0.0134) | 0.0730 (0.0122) | 0.8941 (0.0225) | 0.8918 (0.0197)

In order to show the interaction between Strategy 1 (for convergence) and Strategy 2 (for diversity), the average call numbers of the two strategies in DR_NSGA-II_KN on UF4 and UF8 are recorded in Table 8. For the 2-objective problem UF4, Strategy 1 is called 75 times, whereas Strategy 2 is called five times. For the 3-objective problem UF8, Strategy 1 is called 34 times, whereas Strategy 2 is called nine times. Strategy 2 is called more often on the 3-objective problem because the diversity of 3-objective problems is harder to maintain than that of 2-objective problems. Therefore, the interaction between Strategy 1 and Strategy 2 adapts robustly to different problems.

Table 8:
Average call numbers of Strategy 1 and Strategy 2 of DR_NSGA-II_KN on UF4 and UF8.

 | Strategy 1 | Strategy 2
UF4 | 75 | 5
UF8 | 34 | 9

4.4  Benchmark Problems

As DRMOS aims to solve the MOPs that can be weakly dimension-reduced in the decision space, we only employ such problems in the experiment. We choose the problems whose reduction rates are not zero, such as the UF (Zhang et al., 2008b) and ZDT (Zitzler et al., 2000) problems, rather than the DTLZ problems (Deb et al., 2002b). Their reduction rates are shown in Table 9.

Table 9:
Reduction rate of existing test problems.

UF1-UF7 | UF8-UF10 | ZDT1-ZDT4 | DTLZ1-DTLZ4
71.6% | 82.2% | 48.3% | 0%

In order to present the portability of DRMOS, NSGA-II (Deb et al., 2002a), NSGA-II_KN, SPEA2 (Zitzler et al., 2001), and TDEA (Karahan and Koksalan, 2010) are chosen to embed DRMOS in the comparative experiment. The corresponding algorithms with DRMOS are called DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, and DR_TDEA. In addition, MOEA/D (Zhang et al., 2008b) is well known for its ability to solve complicated MOPs such as the UF problems; hence, it is also included in the comparative algorithms. Finally, the results with respect to the median of the Hypervolume in our experiments are presented below. The metrics Purity, Hypervolume, and Minimal Spacing are selected to evaluate results. The experiment parameters are set as shown in Table 10.

Table 10:
Parameter settings for comparative experiments.

Population size n | Number of function evaluations | No. of independent runs | Crossover probability | Mutation probability | Threshold T | Territory parameter for TDEA
110 (2-objective problems for MOEA/D), 120 (3-objective problems for MOEA/D), 100 (other algorithms) | 300,000, including function evaluations of DRMOS | 29 |  | 0.1 | 0.5 | 0.01 (2-objective problems), 0.1 (3-objective problems)

4.4.1  Results

4.4.1.1 UF1

UF1 is a complicated problem with 30 decision variables, and its PS is a complicated curve restricted by x1. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 5. In that figure, DRMOS improves the convergence and diversity of the corresponding MOEAs, among which DRMOS improves TDEA’s diversity the most and SPEA2’s convergence the most. The performance of MOEAs with DRMOS is better than that of MOEA/D. Comparing the results of DR_NSGA-II and DR_NSGA-II_KN, we can see that the uniformity in NSGA-II is improved by the diversity maintenance in Yang et al. (2010).

Figure 5:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF1.

4.4.1.2 UF2

UF2 is a complicated problem with 30 decision variables, and its PS is a complicated curve restricted by x1, having a tail with a more complicated curvature than its front. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 6. In that figure, DRMOS slightly improves the diversity of the PFs of these four MOEAs in the region where MOEA/D has poor diversity.

Figure 6:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF2.

4.4.1.3 UF3

UF3 is a complicated problem with 30 decision variables, and its PS is a complicated curve restricted by x1. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 7. In that figure, DRMOS significantly improves the convergence of all MOEAs, particularly TDEA. DRMOS also improves the diversity of NSGA-II and NSGA-II_KN. For SPEA2, DRMOS slightly improves both its convergence and diversity. MOEA/D maintains good convergence and diversity.

Figure 7:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF3.

4.4.1.4 UF4

UF4 is a complicated problem with 30 decision variables, and its PF is concave; further, there are many local optima in its landscape. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 8. In that figure, DRMOS improves the MOEAs' convergence: all the results of the MOEAs with DRMOS converge to the true PF, while the results of the original MOEAs and MOEA/D all get trapped in local optima. Additionally, the diversity of the MOEAs with DRMOS is maintained well, except in the case of DR_SPEA2.

Figure 8:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF4.

4.4.1.5 UF5

UF5 is a complicated problem with 30 decision variables, and its PF consists of 21 discrete points. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 9. In that figure, DRMOS improves the convergence and diversity of the MOEAs so significantly that most of the PF points are obtained, outperforming MOEA/D.

Figure 9:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF5.

4.4.1.6 UF6

UF6 is a complicated problem with 30 decision variables, and its PF is discontinuous. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 10. In that figure, DRMOS cannot improve the convergence of the MOEAs significantly; although it improves their diversity, there is still room for improvement. MOEA/D, by contrast, does well on this problem.

Figure 10:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF6.

4.4.1.7 UF7

UF7 is a complicated problem with 30 decision variables. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 11. In that figure, the convergence of the original MOEAs is already very good, so DRMOS improves it only slightly; however, DRMOS does improve their diversity considerably, particularly that of TDEA.

Figure 11:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF7.

4.4.1.8 UF8

UF8 is a 3-objective problem with 30 decision variables. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 12. In that figure, DRMOS improves the convergence of these four MOEAs so well that their results converge to the true PF, which is better than those of MOEA/D. Further, the diversity of MOEAs with DRMOS is improved. However, the diversity of DR_NSGA-II is the worst among the MOEAs with DRMOS, which is caused by the disadvantages of crowding distance.

Figure 12:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF8.

4.4.1.9 UF9

UF9 is a 3-objective problem with 30 decision variables, and its PF is discontinuous. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 13. In that figure, the convergence and diversity of the MOEAs are improved by DRMOS, but not satisfactorily; for example, the convergence and diversity of DR_TDEA are a little worse than those of the others.

Figure 13:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF9.

4.4.1.10 UF10

UF10 is a complicated 3-objective optimization problem with 30 decision variables. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 14. In that figure, DRMOS improves the convergence of four MOEAs so well that their results converge to the true PF. Their diversity is improved less by DRMOS. The diversity of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, and DR_TDEA is not satisfactory. Further, neither the convergence nor the diversity of MOEA/D is good in the case of UF10.

Figure 14:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on UF10.

4.4.1.11 ZDT1

ZDT1 is a problem with 30 decision variables. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 15. In that figure, all the algorithms achieve good convergence and diversity; the results of DR_NSGA-II and NSGA-II have relatively poor uniformity. Hence, DRMOS neither improves nor worsens the MOEAs much on ZDT1.

Figure 15:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on ZDT1.

4.4.1.12 ZDT2

ZDT2 is a problem with 30 decision variables, and its PF is concave. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 16. In that figure, DRMOS does not make MOEAs perform worse on ZDT2.

Figure 16:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on ZDT2.

4.4.1.13 ZDT3

ZDT3 is a problem with 30 decision variables, and its PF is discontinuous. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 17. In that figure, DRMOS does not improve MOEAs on ZDT3.

Figure 17:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on ZDT3.

4.4.1.14 ZDT4

ZDT4 is a problem with 30 decision variables. The resulting PFs of DR_NSGA-II, DR_NSGA-II_KN, DR_SPEA2, DR_TDEA, NSGA-II, NSGA-II_KN, SPEA2, TDEA, and MOEA/D are shown in Figure 18. In that figure, DRMOS improves MOEAs slightly on ZDT4.

Figure 18:

Results with respect to the median of the Hypervolume in 29 runs of nine algorithms on ZDT4.

In order to quantitatively explain the experiment results, the performance measures Purity, Minimal Spacing, and Hypervolume are shown in Figures 19, 20, and 21 in boxplots as a means of statistical analysis, and in Tables 11, 12, and 13, where the winner of the comparison between an MOEA and the MOEA with DRMOS is in bold. As we do not embed DRMOS into MOEA/D, DR_MOEA/D is not applicable in Tables 12 and 13.

Table 11:
Values of Purity of all test problems.
            NSGA-II          NSGA-II_KN       SPEA2            TDEA
            Average  Std     Average  Std     Average  Std     Average  Std
UF1 DR_XXX 0.5338 0.0367 0.5501 0.0413 0.5448 0.0348 0.6788 0.0560 
 XXX 0.2719 0.0448 0.3557 0.0367 0.3790 0.0356 0.1697 0.0389 
 MOEA/D 0.1944 0.0510 0.0943 0.0325 0.0762 0.0244 0.1515 0.0446 
UF2 DR_XXX 0.4715 0.0211 0.4968 0.0198 0.4974 0.0169 0.5911 0.0263 
 XXX 0.2854 0.0259 0.3608 0.0169 0.4264 0.0178 0.2391 0.0159 
 MOEA/D 0.2430 0.0311 0.1423 0.0220 0.0762 0.0214 0.1698 0.0268 
UF3 DR_XXX 0.6852 0.1609 0.5780 0.1413 0.4613 0.1629 0.4255 0.2334 
 XXX 0.0869 0.0723 0.2757 0.1076 0.2683 0.0870 0.0206 0.0161 
 MOEA/D 0.2280 0.1944 0.1464 0.1644 0.2704 0.1747 0.5540 0.2434 
UF4 DR_XXX 1.0000 0.0000 1.0000 0.0000 1.0000 0.0000 1.0000 0.0000 
 XXX 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 
 MOEA/D 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 
UF5 DR_XXX 0.8697 0.0891 0.8554 0.0885 0.8409 0.0800 0.6077 0.1834 
 XXX 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 
 MOEA/D 0.1303 0.0891 0.1446 0.0885 0.1591 0.0800 0.3923 0.1834 
UF6 DR_XXX 0.0884 0.1051 0.0719 0.0926 0.0661 0.0981 0.0268 0.0212 
 XXX 0.1579 0.1646 0.1916 0.2057 0.1756 0.1494 0.0065 0.0083 
 MOEA/D 0.7536 0.1690 0.7365 0.1876 0.7582 0.1855 0.9667 0.0227 
UF7 DR_XXX 0.4116 0.0200 0.4281 0.0294 0.3596 0.0286 0.3924 0.0536 
 XXX 0.2734 0.0384 0.3448 0.0520 0.4251 0.0430 0.1866 0.0981 
 MOEA/D 0.3151 0.0351 0.2271 0.0379 0.2153 0.0525 0.4210 0.0549 
UF8 DR_XXX 0.5115 0.0616 0.5041 0.0549 0.5162 0.0798 0.4887 0.0548 
 XXX 0.1673 0.0770 0.2219 0.0611 0.2299 0.0905 0.2128 0.0503 
 MOEA/D 0.3213 0.0769 0.2740 0.0632 0.2539 0.0703 0.2985 0.0728 
UF9 DR_XXX 0.6307 0.0814 0.6833 0.1151 0.6401 0.0832 0.3800 0.0796 
 XXX 0.0401 0.0296 0.0223 0.0369 0.0268 0.0313 0.1328 0.1142 
 MOEA/D 0.3292 0.0854 0.2943 0.1045 0.3332 0.1007 0.4872 0.1377 
UF10 DR_XXX 0.7581 0.1440 0.7777 0.1429 0.7828 0.1455 0.6454 0.1952 
 XXX 0.0025 0.0081 0.0018 0.0068 0.0132 0.0410 0.0005 0.0026 
 MOEA/D 0.2394 0.1428 0.2205 0.1410 0.2040 0.1375 0.3541 0.1955 
ZDT1 DR_XXX 0.3386 0.0039 0.3395 0.0024 0.3384 0.0041 0.3271 0.0061 
 XXX 0.3390 0.0035 0.3397 0.0024 0.3392 0.0039 0.3316 0.0066 
 MOEA/D 0.3224 0.0073 0.3208 0.0047 0.3224 0.0076 0.3413 0.0076 
ZDT2 DR_XXX 0.3373 0.0019 0.3377 0.0025 0.3372 0.0020 0.3259 0.0065 
 XXX 0.3374 0.0018 0.3378 0.0023 0.3372 0.0023 0.3309 0.0069 
 MOEA/D 0.3253 0.0036 0.3245 0.0048 0.3256 0.0043 0.3432 0.0045 
ZDT3 DR_XXX 0.3476 0.0120 0.3490 0.0101 0.3449 0.0110 0.3620 0.0214 
 XXX 0.3544 0.0131 0.3530 0.0109 0.3483 0.0111 0.3814 0.0207 
 MOEA/D 0.2980 0.0220 0.2980 0.0205 0.3067 0.0215 0.2566 0.0216 
ZDT4 DR_XXX 0.4691 0.0201 0.4827 0.0208 0.4834 0.0211 0.4711 0.0280 
 XXX 0.4703 0.0211 0.4827 0.0208 0.4834 0.0211 0.4769 0.0274 
 MOEA/D 0.0606 0.0412 0.0346 0.0417 0.0333 0.0422 0.0519 0.0538 
Table 12:
Values of Minimal Spacing of all test problems.
            NSGA-II         NSGA-II_KN      SPEA2           TDEA            MOEA/D
            Average  Std    Average  Std    Average  Std    Average  Std    Average  Std
UF1 DR_XXX 0.0095 0.0008 0.0030 0.0002 0.0106 0.0021 0.0101 0.0023 NA NA 
 XXX 0.0257 0.0058 0.0327 0.0099 0.0352 0.0094 0.0585 0.0134 0.0039 0.0004 
UF2 DR_XXX 0.0101 0.0013 0.0030 0.0002 0.0047 0.0008 0.0045 0.0010 NA NA 
 XXX 0.0108 0.0011 0.0051 0.0017 0.0089 0.0033 0.0142 0.0047 0.0037 0.0005 
UF3 DR_XXX 0.0122 0.0023 0.0082 0.0037 0.0228 0.0024 0.0257 0.0049 NA NA 
 XXX 0.0265 0.0131 0.0251 0.0063 0.0255 0.0067 0.0364 0.0199 0.0106 0.0043 
UF4 DR_XXX 0.0114 0.0006 0.0093 0.0007 0.0096 0.0010 0.0070 0.0008 NA NA 
 XXX 0.0134 0.0015 0.0092 0.0027 0.0099 0.0020 0.0119 0.0034 0.0094 0.0008 
UF5 DR_XXX 0.0337 0.0045 0.0322 0.0035 0.0364 0.0053 0.0378 0.0068 NA NA 
 XXX 0.0723 0.0305 0.0858 0.0385 0.0917 0.0419 0.1557 0.1276 0.0331 0.0037 
UF6 DR_XXX 0.0520 0.0082 0.0535 0.0074 0.0565 0.0080 0.0977 0.0142 NA NA 
 XXX 0.0480 0.0416 0.0409 0.0374 0.0545 0.0492 0.1064 0.0927 0.0536 0.0067 
UF7 DR_XXX 0.0100 0.0006 0.0028 0.0002 0.0155 0.0040 0.0168 0.0036 NA NA 
 XXX 0.0220 0.0028 0.0224 0.0051 0.0252 0.0042 0.0335 0.0117 0.0058 0.0007 
UF8 DR_XXX 0.1047 0.0133 0.0709 0.0135 0.0716 0.0113 0.0764 0.0122 NA NA 
 XXX 0.2325 0.0734 0.1734 0.0681 0.1603 0.0519 0.1731 0.0739 0.0750 0.012 
UF9 DR_XXX 0.0906 0.0240 0.0818 0.0216 0.0907 0.0220 0.1143 0.0325 NA NA 
 XXX 0.4581 0.1874 0.4522 0.2242 0.5046 0.2216 0.2163 0.0692 0.1009 0.0200 
UF10 DR_XXX 0.1230 0.0208 0.1095 0.0156 0.1165 0.0223 0.1322 0.0288 NA NA 
 XXX 0.2417 0.1069 0.2302 0.1612 0.1585 0.0541 0.2230 0.0731 0.1110 0.0193 
ZDT1 DR_XXX 0.0110 0.0012 0.0033 0.0001 0.0038 0.0003 0.0042 0.0027 NA NA 
 XXX 0.0102 0.0010 0.0029 0.0002 0.0034 0.0002 0.0036 0.0002 0.0039 0.0005 
ZDT2 DR_XXX 0.0112 0.0015 0.0033 0.0001 0.0038 0.0004 0.0037 0.0003 NA NA 
 XXX 0.0103 0.0008 0.0030 0.0002 0.0034 0.0002 0.0037 0.0003 0.0039 0.0006 
ZDT3 DR_XXX 0.0290 0.0007 0.0262 0.0000 0.0277 0.0032 0.0257 0.0028 NA NA 
 XXX 0.0287 0.0021 0.0260 0.0008 0.0268 0.0013 0.0245 0.0022 0.0280 0.0018 
ZDT4 DR_XXX 0.0108 0.0011 0.0033 0.0001 0.0072 0.0101 0.0039 0.0003 NA NA 
 XXX 0.0100 0.0008 0.0029 0.0004 0.0035 0.0004 0.0696 0.3607 0.0041 0.0007 
Table 13:
Values of Hypervolume of all test problems.
            NSGA-II         NSGA-II_KN      SPEA2           TDEA            MOEA/D
            Average  Std    Average  Std    Average  Std    Average  Std    Average  Std
UF1 DR_XXX 7.9131e+0 1.2946e-3 7.9167e+0 1.1900e-3 7.9163e+0 1.5431e-3 7.9159e+0 3.0492e-3 NA NA 
 XXX 7.8543e+0 8.3469e-2 7.8478e+0 9.1612e-2 7.8139e+0 8.5622e-2 7.7337e+0 1.6247e-1 7.8803e+0 1.6993e-2 
UF2 DR_XXX 1.8284e+0 6.4263e-4 1.8322e+0 5.9349e-4 1.8338e+0 3.1677e-4 1.8336e+0 2.7505e-4 NA NA 
 XXX 1.8234e+0 2.1762e-3 1.8281e+0 1.5463e-3 1.8270e+0 2.9476e-3 1.7798e+0 8.1234e-3 1.8025e+0 8.2006e-3 
UF3 DR_XXX 6.7854e+0 4.5086e-3 6.7886e+0 3.2226e-3 6.7731e+0 2.0519e-3 6.7695e+0 3.5410e-3 NA NA 
 XXX 6.5128e+0 5.2076e-1 6.5884e+0 6.5639e-2 6.5886e+0 6.4911e-2 6.0922e+0 8.4377e-1 6.6602e+0 2.4319e-1 
UF4 DR_XXX 6.2901e-1 4.8576e-4 6.3014e-1 7.0708e-4 6.2988e-1 6.7927e-4 6.2982e-1 6.5975e-4 NA NA 
 XXX 5.3176e-1 8.3824e-3 5.3696e-1 8.3331e-3 5.3871e-1 7.8182e-3 5.2743e-1 8.4661e-3 4.9125e-1 1.5925e-2 
UF5 DR_XXX 2.7699e+1 1.0814e-1 2.7716e+1 8.9045e-2 2.7648e+1 1.5418e-1 2.7665e+1 1.2687e-1 NA NA 
 XXX 2.2902e+1 1.1229e+0 2.2729e+1 1.2970e+0 2.2406e+1 1.1283e+0 2.1620e+1 1.0950e+0 2.6391e+1 1.2971e+0 
UF6 DR_XXX 6.5450e+0 7.3190e-2 6.5442e+0 7.3465e-2 6.5529e+0 6.5465e-2 6.5429e+0 6.5820e-2 NA NA 
 XXX 5.4665e+0 7.8403e-1 5.2870e+0 6.2662e-1 5.4305e+0 6.8668e-1 5.1661e+0 5.6455e-1 6.7192e+0 1.2797e-1 
UF7 DR_XXX 9.2127e+0 5.2528e-4 9.2166e+0 6.9432e-4 9.2061e+0 3.7191e-3 9.2052e+0 4.5628e-3 NA NA 
 XXX 9.1915e+0 2.5094e-2 9.1933e+0 2.7202e-2 9.0740e+0 4.6109e-1 8.7229e+0 8.9918e-1 9.0086e+0 6.0469e-1 
UF8 DR_XXX 2.4781e+2 1.8651e-1 2.4718e+2 2.8136e+0 2.4686e+2 4.6795e+0 2.4783e+2 2.5708e-1 NA NA 
 XXX 2.4445e+2 4.9390e+0 2.4780e+2 8.2368e-2 2.4623e+2 3.8505e+0 2.4075e+2 5.0347e+0 2.3686e+2 7.1091e+0 
UF9 DR_XXX 8.8032e+2 4.6104e-1 8.8059e+2 3.9150e-1 8.8077e+2 8.4689e-2 8.8028e+2 8.9085e-1 NA NA 
 XXX 8.6613e+2 5.8196e+0 8.7017e+2 4.8483e+0 8.6879e+2 3.8653e+0 8.6705e+2 3.5323e+0 8.6965e+2 3.4293e+0 
UF10 DR_XXX 1.2419e+4 2.3643e+1 1.2404e+4 8.3769e+1 1.2419e+4 2.1024e+1 1.2422e+4 1.8994e+0 NA NA 
 XXX 1.1700e+4 2.0852e+2 1.1686e+4 1.8232e+2 1.1697e+4 2.1475e+2 1.1598e+4 1.1099e+2 1.1647e+4 1.3973e+2 
ZDT1 DR_XXX 8.3872e-1 5.2571e-4 8.4095e-1 3.9413e-5 8.4085e-1 5.4778e-5 8.4041e-1 1.3518e-4 NA NA 
 XXX 8.3904e-1 4.8790e-4 8.4100e-1 3.4338e-5 8.4095e-1 5.5007e-5 8.4050e-1 1.5482e-4 8.3945e-1 1.0314e-4 
ZDT2 DR_XXX 3.2637e-1 6.0622e-4 3.2851e-1 3.3327e-5 3.2847e-1 9.2658e-5 3.2803e-1 1.4160e-4 NA NA 
 XXX 3.2664e-1 3.2795e-4 3.2856e-1 3.7613e-5 3.2851e-1 4.9436e-5 3.2809e-1 1.2634e-4 3.2801e-1 3.3747e-5 
ZDT3 DR_XXX 8.8754e-1 2.6749e-4 8.8842e-1 5.0012e-5 8.8673e-1 1.4302e-3 8.8544e-1 1.1316e-2 NA NA 
 XXX 8.8550e-1 1.1279e-2 8.8823e-1 1.1977e-3 8.8646e-1 1.8861e-3 8.8731e-1 2.3451e-3 8.8381e-1 2.1779e-4 
ZDT4 DR_XXX 1.9553e+1 5.4229e-4 1.9556e+1 2.7921e-5 1.9552e+1 1.2566e-2 1.9555e+1 1.3772e-4 NA NA 
 XXX 1.9554e+1 3.3280e-4 1.9549e+1 2.7062e-2 1.9556e+1 3.8367e-4 1.9555e+1 1.3688e-4 1.9544e+1 3.9139e-3 
Figure 19: Boxplots corresponding to Purity for all test problems, where 1 denotes DR_NSGA-II; 2 NSGA-II; 3 MOEA/D; 4 DR_NSGA-II_KN; 5 NSGA-II_KN; 6 MOEA/D; 7 DR_SPEA2; 8 SPEA2; 9 MOEA/D; 10 DR_TDEA; 11 TDEA; and 12 MOEA/D.

Figure 20: Boxplots corresponding to Minimal Spacing for all test problems, where 1 denotes DR_NSGA-II; 2 NSGA-II; 3 DR_NSGA-II_KN; 4 NSGA-II_KN; 5 DR_SPEA2; 6 SPEA2; 7 DR_TDEA; 8 TDEA; and 9 MOEA/D.

Figure 21: Boxplots corresponding to Hypervolume for all test problems, where 1 denotes DR_NSGA-II; 2 NSGA-II; 3 DR_NSGA-II_KN; 4 NSGA-II_KN; 5 DR_SPEA2; 6 SPEA2; 7 DR_TDEA; 8 TDEA; and 9 MOEA/D.

Purity measures convergence. Since the aim of our comparative experiments is to assess the improvement that DRMOS brings when embedded in classic MOEAs, each comparison is performed between an MOEA and the same MOEA with DRMOS; Purity is therefore calculated from the results of three algorithms at a time: an MOEA, the MOEA with DRMOS, and MOEA/D. The results are shown in four groups in Table 11. The boxplots in Figure 19 make clear that DRMOS increases the convergence ability of the MOEAs on the UF problems: the MOEAs with DRMOS obtain better Purity values than both their corresponding MOEAs and MOEA/D, and their advantage over the corresponding MOEAs is particularly pronounced on UF4 and UF5. On the ZDT problems, DRMOS yields no improvement in convergence; the Purity of the MOEAs with DRMOS is sometimes slightly worse, but not significantly so.
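
To make the metric concrete, the following minimal Python sketch (our illustration; the function names and array layout are assumptions, not code from this study) computes Purity in its usual form (Bandyopadhyay et al., 2004): the final fronts of the compared algorithms are merged, the non-dominated set of the union is extracted, and each algorithm is scored by the fraction of its solutions that survive, so that higher values indicate better convergence.

    import numpy as np

    def nondominated_mask(F):
        # F: (n, m) array of objective vectors (minimization).
        # mask[i] is True if no other row of F dominates row i.
        mask = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            mask[i] = not dominates_i.any()
        return mask

    def purity(fronts):
        # fronts: list of (n_k, m) arrays, one final front per algorithm.
        union = np.vstack(fronts)
        mask = nondominated_mask(union)
        values, start = [], 0
        for F in fronts:
            values.append(mask[start:start + len(F)].mean())
            start += len(F)
        return values  # one Purity value per algorithm, in [0, 1]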

Minimal Spacing is employed to evaluate the uniformity of the obtained solutions. According to the boxplots in Figure 20 and the data in Table 12, DRMOS improves the uniformity of the corresponding MOEAs on the UF problems, although the MOEAs themselves differ in how well they maintain uniformity. On the ZDT problems, the uniformity of the MOEAs with DRMOS decreases slightly, but their obtained PFs are not significantly affected.
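
Minimal Spacing refines the classic Spacing measure of Schott; as the exact refinement is defined in Bandyopadhyay et al. (2004), the sketch below (ours, shown only to make the notion of uniformity concrete) computes the classic Spacing value, where d_i is the L1 distance from solution i to its nearest neighbor on the front and lower values indicate a more uniform distribution.

    import numpy as np

    def spacing(F):
        # F: (n, m) array holding the objective vectors of one front, n > 1.
        n = len(F)
        d = np.empty(n)
        for i in range(n):
            dist = np.abs(F - F[i]).sum(axis=1)  # L1 distances to all points
            dist[i] = np.inf                     # exclude the point itself
            d[i] = dist.min()                    # nearest-neighbor distance
        return np.sqrt(((d - d.mean()) ** 2).sum() / (n - 1))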

Hypervolume evaluates both convergence and maximum spread, and can therefore be used as a general performance measure. According to the boxplots in Figure 21 and the data in Table 13, DRMOS improves the MOEAs on the UF problems, where the Hypervolume of the MOEAs with DRMOS is larger than that of the corresponding MOEAs. On the ZDT problems, however, the Hypervolume values of the MOEAs with and without DRMOS are similar, so DRMOS does not improve performance there.
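
For the bi-objective problems (UF1-UF7 and the ZDT family) the Hypervolume of a non-dominated set can be computed exactly by a simple sweep. The sketch below (our illustration; the reference point ref is an assumption, since the reference points used in the experiments are given with the experimental setup) sums the rectangular slab that each point contributes under minimization; the three-objective problems UF8-UF10 require a dimension-sweep algorithm or an estimator such as HypE (Bader and Zitzler, 2011).

    import numpy as np

    def hypervolume_2d(F, ref):
        # F: (n, 2) array of mutually non-dominated points (minimization),
        # each of which dominates the reference point ref = (r1, r2).
        P = F[np.argsort(F[:, 0])]  # sort by f1 ascending; f2 then descends
        hv = 0.0
        for i, (x, y) in enumerate(P):
            next_x = P[i + 1, 0] if i + 1 < len(P) else ref[0]
            hv += (next_x - x) * (ref[1] - y)  # slab between x and next_x
        return hv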

4.4.2  Discussion

On the UF problems, DRMOS reduces the dimension of the decision space by sampling and adopts two memetic local search strategies in the divided decision subspaces. It increases both convergence speed and diversity, and its performance is even better than that of MOEA/D. On UF6, however, DRMOS cannot improve the performance of the MOEAs because the local search strategies are unsuitable there. For the relatively simpler ZDT problems, the advantages of DRMOS are not obvious: their reduction rates are small, so the dimension cannot be reduced significantly, and DRMOS improves the MOEAs only slightly. Moreover, the relation analysis approach consumes some function evaluations, which can lead to slightly worse performance than that of the original MOEAs.
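
The actual relation analysis procedure of DRMOS is defined earlier in the paper; purely to illustrate the sampling idea mentioned above, the following schematic sketch (ours; the sampling scheme, sample count, and tolerance are all assumptions) perturbs one decision variable at a time and records which objectives respond.

    import numpy as np

    def relation_matrix(f, x0, lb, ub, n_samples=20, tol=1e-8):
        # f: maps a decision vector to an array of m objective values.
        # Returns a boolean (n_var, m) matrix whose entry (i, j) is True
        # if resampling variable i alone ever changed objective j.
        base = np.asarray(f(x0), dtype=float)
        rel = np.zeros((len(x0), len(base)), dtype=bool)
        rng = np.random.default_rng(0)
        for i in range(len(x0)):
            for _ in range(n_samples):
                x = np.array(x0, dtype=float)
                x[i] = rng.uniform(lb[i], ub[i])  # vary variable i only
                rel[i] |= np.abs(np.asarray(f(x), dtype=float) - base) > tol
        return rel

Variables with identical rows in this matrix influence the same objectives and can therefore be grouped into one decision subspace, on which a local search can then concentrate.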

In general, DRMOS significantly improves the convergence, diversity, and uniformity of different MOEAs. Although the MOEAs have different strengths and weaknesses, DRMOS improves their search ability on the MOPs that can be weakly dimension-reduced in the decision space. Moreover, DRMOS can be easily embedded in existing MOEAs to improve their performance. On the MOPs that cannot be weakly dimension-reduced in the decision space, or that have small dimension reduction rates, the MOEAs with DRMOS degenerate to their original algorithms.

5  Conclusion

A memetic optimization strategy based on dimension reduction in decision space has been proposed in this paper. The strategy reduces the dimension of the decision space through a relation analysis approach and then improves the population with two memetic local search strategies in the resulting decision subspaces. DRMOS also has good portability to existing MOEAs. The work discussed in this paper makes three major contributions, as follows.

  • Relation Analysis. DRMOS applies a simple sampling model to obtain the mapping relation between decision variables and objective functions, which provides the knowledge needed for dimension reduction in the decision space.

  • Memetic Local Search Strategy. The memetic local search strategies of DRMOS are employed in the divided decision subspaces, which decomposes a complicated high-dimensional problem into several simpler low-dimensional ones. The two strategies aim at convergence and diversity, respectively.

  • Portability. DRMOS can be easily embedded in existing MOEAs, bringing its performance gains to the MOPs that can be weakly dimension-reduced in the decision space while leaving the original performance of the MOEAs on other types of problems unaffected.

DRMOS takes advantage of information gathered during the optimization process to guide the search. Although its superior performance has been shown experimentally, DRMOS still has some disadvantages. (1) It is only effective on the MOPs that can be weakly dimension-reduced; on the MOPs that cannot, it degenerates to the original algorithm, and for problems whose decision space cannot be divided into subspaces, the memetic local search strategies are inapplicable. (2) Its performance is not satisfactory on problems with discontinuous PFs, because DRMOS does not self-adaptively assign its computational cost according to the different situations of such PFs; self-adaptive assignment of the computational cost is a topic of our future research. (3) The results show that DRMOS is not effective on problems with small reduction rates, because the relation analysis approach consumes many function evaluations that could otherwise be spent on the search. Furthermore, the current relation analysis approach is based on a relatively simple model; more advanced techniques, such as statistical learning methods and relation mining, should be introduced into DRMOS.

Acknowledgments

This work was partially supported by the National Basic Research Program (973 Program) of China, under Grant 2013CB329402, an EU FP7 IRSES, under Grant 247619, the National Natural Science Foundation of China, under Grants 61371201 and 61272279, the National Research Foundation for the Doctoral Program of Higher Education of China, under Grant 20100203120008, the Fund for Foreign Scholars in University Research and Teaching Programs, under Grant B07048, and the Program for Cheung Kong Scholars and Innovative Research Team in University under Grant IRT1170. The authors are grateful for Xin Yao’s comments on the paper.

References

Adra, S., Dodd, T., Griffin, I., and Fleming, P. (2009). Convergence acceleration operator for multiobjective optimization. IEEE Transactions on Evolutionary Computation, 13(4):825–847.
Bader, J., and Zitzler, E. (2011). HypE: An algorithm for fast hypervolume-based many-objective optimization. Evolutionary Computation, 19(1):45–76.
Bandyopadhyay, S., Pal, S., and Aruna, B. (2004). Multiobjective GAs, quantitative indices, and pattern classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34(5):2088–2099.
Brockhoff, D., and Zitzler, E. (2009). Objective reduction in evolutionary multiobjective optimization: Theory and applications. Evolutionary Computation, 17(2):135–166.
Caponio, A., Cascella, G., Neri, F., Salvatore, N., and Sumner, M. (2007). A fast adaptive memetic algorithm for online and offline control design of PMSM drives. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(1):28–41.
Coello, C. (1999). A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems, 1(3):129–156.
Corne, D., and Knowles, J. (2007). Techniques for highly multiobjective optimisation: Some nondominated points are better than others. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, pp. 773–780. ACM.
Deb, K., Mohan, M., and Mishra, S. (2005). Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evolutionary Computation, 13(4):501–525.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002a). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197.
Deb, K., Sinha, A., and Kukkonen, S. (2006). Multi-objective test problems, linkages, and evolutionary methodologies. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 1141–1148.
Deb, K., Thiele, L., Laumanns, M., and Zitzler, E. (2002b). Scalable multi-objective optimization test problems. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2002, Vol. 1, pp. 825–830.
Gaspar-Cunha, A., and Vieira, A. (2004). A hybrid multi-objective evolutionary algorithm using an inverse neural network. In Proceedings of the Hybrid Metaheuristics Workshop, HM 2004, pp. 25–30.
Goh, C., Ong, Y., and Tan, K. (2009). Multi-objective memetic algorithms, Vol. 171. Berlin: Springer-Verlag.
Gong, M., Jiao, L., and Zhang, L. (2010). Baldwinian learning in clonal selection algorithm for optimization. Information Sciences, 180(8):1218–1236.
Hasan, S., Sarker, R., Essam, D., and Cornforth, D. (2009). Memetic algorithms for solving job-shop scheduling problems. Memetic Computing, 1(1):69–83.
Ishibuchi, H., Hitotsuyanagi, Y., Ohyanagi, H., and Nojima, Y. (2011). Effects of the existence of highly correlated objectives on the behavior of MOEA/D. In Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 6576 (pp. 166–181). Berlin: Springer-Verlag.
Ishibuchi, H., and Murata, T. (1998). A multi-objective genetic local search algorithm and its application to flowshop scheduling. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 28(3):392–403.
Ishibuchi, H., Yoshida, T., and Murata, T. (2003). Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Transactions on Evolutionary Computation, 7(2):204–223.
Jaszkiewicz, A. (2002). On the performance of multiple-objective genetic local search on the 0/1 knapsack problem: A comparative experiment. IEEE Transactions on Evolutionary Computation, 6(4):402–412.
Jolliffe, I. (2002). Principal component analysis, 2nd ed. Berlin: Springer.
Karahan, I., and Koksalan, M. (2010). A territory defining multiobjective evolutionary algorithm and preference incorporation. IEEE Transactions on Evolutionary Computation, 14(4):636–664.
Khan, N., Goldberg, D., and Pelikan, M. (2002). Multi-objective Bayesian optimization algorithm. Paper presented at GECCO 2002 (p. 684). San Mateo, CA: Morgan Kaufmann.
Khare, V., Yao, X., and Deb, K. (2003). Performance scaling of multi-objective evolutionary algorithms. In C. Fonseca, P. Fleming, E. Zitzler, L. Thiele, and K. Deb (Eds.), Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 2632 (pp. 376–390). Berlin: Springer-Verlag.
Knowles, J., and Corne, D. (2000). M-PAES: A memetic algorithm for multiobjective optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2000, pp. 325–332.
Krasnogor, N., and Smith, J. (2005). A tutorial for competent memetic algorithms: Model, taxonomy, and design issues. IEEE Transactions on Evolutionary Computation, 9(5):474–488.
Kukkonen, S., and Lampinen, J. (2007). Ranking-dominance and many-objective optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 3983–3990.
Larranaga, P., and Lozano, J. (2002). Estimation of distribution algorithms: A new tool for evolutionary computation, Vol. 2. Berlin: Springer-Verlag.
Laumanns, M., and Ocenasek, J. (2002). Bayesian optimization algorithms for multi-objective optimization. In Parallel problem solving from nature, PPSN VII. Lecture notes in computer science, Vol. 2439 (pp. 298–307). Berlin: Springer-Verlag.
Le, M., Ong, Y., Jin, Y., and Sendhoff, B. (2009). Lamarckian memetic algorithms: Local optimum and connectivity structure analysis. Memetic Computing, 1(3):175–190.
Liang, K., Yao, X., and Newton, C. (2000a). Evolutionary search of approximated n-dimensional landscapes. International Journal of Knowledge Based Intelligent Engineering Systems, 4(3):172–183.
Liang, K., Yao, X., and Newton, C. (2000b). Lamarckian evolution in global optimization. In Proceedings of the 26th Annual Conference of the IEEE Industrial Electronics Society, IECON 2000, Vol. 4, pp. 2975–2980.
Lim, K., Ong, Y., Lim, M., Chen, X., and Agarwal, A. (2008). Hybrid ant colony algorithms for path planning in sparse graphs. Soft Computing—A Fusion of Foundations, Methodologies and Applications, 12(10):981–994.
López Jaimes, A., Coello, C., and Urías Barrientos, J. (2009). Online objective reduction to deal with many-objective problems. In Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 5467 (pp. 423–437). Berlin: Springer-Verlag.
López Jaimes, A., Coello Coello, C., and Chakraborty, D. (2008). Objective reduction using a feature selection technique. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pp. 673–680.
Meuth, R., Lim, M., Ong, Y., and Wunsch, D. (2009). A proposition on memes and meta-memes in computing for higher-order learning. Memetic Computing, 1(2):85–100.
Miettinen, K. (1999). Nonlinear multiobjective optimization, Vol. 12. Berlin: Springer-Verlag.
Neri, F., Toivanen, J., Cascella, G., and Ong, Y. (2007). An adaptive multimeme algorithm for designing HIV multidrug therapies. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 4(2):264–278.
Ong, Y., and Keane, A. (2004). Meta-Lamarckian learning in memetic algorithms. IEEE Transactions on Evolutionary Computation, 8(2):99–110.
Ong, Y., Lim, M., and Chen, X. (2010). Memetic computation: Past, present and future [research frontier]. IEEE Computational Intelligence Magazine, 5(2):24–31.
Ong, Y., Lim, M., Zhu, N., and Wong, K. (2006). Classification of adaptive memetic algorithms: A comparative study. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36(1):141–152.
Pryke, A., Mostaghim, S., and Nazemi, A. (2007). Heatmap visualization of population based multi objective algorithms. In Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 4403 (pp. 361–375). Berlin: Springer-Verlag.
Sato, H., Aguirre, H., and Tanaka, K. (2007). Controlling dominance area of solutions and its impact on the performance of MOEAs. In Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 4403 (pp. 5–20). Berlin: Springer-Verlag.
Saxena, D., and Deb, K. (2007). Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: Employing correntropy and a novel maximum variance unfolding. In Evolutionary multi-criterion optimization. Lecture notes in computer science, Vol. 4403 (pp. 772–787). Berlin: Springer-Verlag.
Thiele, L., Miettinen, K., Korhonen, P., and Molina, J. (2009). A preference-based evolutionary algorithm for multi-objective optimization. Evolutionary Computation, 17(3):411–436.
Tirronen, V., Neri, F., Kärkkäinen, T., Majava, K., and Rossi, T. (2008). An enhanced memetic differential evolution in filter design for defect detection in paper production. Evolutionary Computation, 16(4):529–555.
Van Veldhuizen, D., and Lamont, G. (2000). On measuring multiobjective evolutionary algorithm performance. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2000, pp. 204–211.
Wang, Z., Tang, K., and Yao, X. (2010). Multi-objective approaches to optimal testing resource allocation in modular software systems. IEEE Transactions on Reliability, 59(3):563–575.
Yang, D., Jiao, L., Gong, M., and Feng, J. (2010). Adaptive ranks clone and k-nearest neighbor list-based immune multi-objective optimization. Computational Intelligence, 26(4):359–385.
Zhang, Q., Zhou, A., and Jin, Y. (2008a). RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm. IEEE Transactions on Evolutionary Computation, 12(1):41–63.
Zhang, Q., Zhou, A., Zhao, S., Suganthan, P., Liu, W., and Tiwari, S. (2008b). Multiobjective optimization test instances for the CEC 2009 special session and competition. Working Report CES-887, School of Computer Science and Electrical Engineering, University of Essex, Colchester, UK.
Zitzler, E., Deb, K., and Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2):173–195.
Zitzler, E., Laumanns, M., and Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. In Proceedings of EUROGEN 2001, Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, pp. 1–21.
Zitzler, E., and Thiele, L. (1998). Multiobjective optimization using evolutionary algorithms: A comparative case study. In Parallel problem solving from nature, PPSN V. Lecture notes in computer science (pp. 292–301). Berlin: Springer-Verlag.
Zitzler, E., and Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4):257–271.