## Abstract

Decomposition-based evolutionary algorithms have been quite successful in dealing with multiobjective optimization problems. Recently, an increasing number of researchers have attempted to apply the decomposition approach to many-objective optimization problems. In this article, a many-objective evolutionary algorithm based on decomposition with a correlative selection mechanism (MOEA/D-CSM) is proposed to solve many-objective optimization problems. Since MOEA/D-CSM is based on a decomposition approach that adopts penalty boundary intersection (PBI), a set of reference points must be generated in advance. Thus, a new concept related to the set of reference points is introduced first, namely, the correlation between an individual and a reference point. Thereafter, a new selection mechanism based on this correlation is designed, called the correlative selection mechanism. The correlative selection mechanism finds the correlative individuals of each reference point as soon as possible so that the diversity among population members is maintained. When a reference point has two or more correlative individuals, the worse correlative individuals are removed from the population so that the solutions are driven toward the Pareto-optimal front. In a comprehensive experimental study, we apply MOEA/D-CSM to a number of many-objective test problems with 3 to 15 objectives and make a comparison with three state-of-the-art many-objective evolutionary algorithms, namely, NSGA-III, MOEA/D, and RVEA. Experimental results show that the proposed MOEA/D-CSM can produce competitive results on most of the problems considered in this study.

## 1  Introduction

Many researchers have paid considerable attention to multiobjective optimization problems since the beginning of the 1990s, and up to now, they have proposed many excellent multiobjective evolutionary algorithms (MOEAs) to deal with such problems. PAES (Knowles and Corne, 1999), SPEA2 (Zitzler et al., 2001), and NSGA-II (Deb et al., 2002) are representative of traditional Pareto-based MOEAs, and their performance was better than that of other MOEAs proposed at that time. However, in the real world, multiobjective optimization problems often involve more than two or three conflicting objectives, sometimes even 10 to 15 objectives (Chikumbo et al., 2012; Coello and Lamont, 2004). Generally, we consider optimization problems with more than three objectives to be many-objective problems. Nevertheless, if current MOEAs are directly used to solve many-objective problems, their performance degrades sharply, and some of them cannot deal with this kind of problem at all. Many-objective problems have attracted increasing attention in recent years, and consequently, a number of many-objective evolutionary algorithms have been developed. Several good reviews on this topic are also available (Ishibuchi et al., 2008; Li, Li et al., 2015; von Lücken et al., 2014).

Recent studies show that decomposition approaches are helpful for solving many-objective problems. Therefore, this article also proposes a new algorithm based on decomposition, called MOEA/D-CSM. MOEA/D-CSM introduces a novel selection mechanism that selects an individual according to the individual's correlative condition rather than a fitness value based on the hypervolume indicator (Bader and Zitzler, 2011) or on the crowding distance (Deb et al., 2002). Since MOEA/D-CSM also adopts the decomposition approach of MOEA/D (Wang et al., 2017; Zhang and Li, 2007; Wang et al., 2016), a set of reference points must be generated in advance. MOEA/D-CSM proposes two important concepts related to the reference points: 1) the correlation between an individual and a reference point, and 2) the neighboring reference points of each reference point. Generally, in the decomposition approach, a reference point together with an ideal point defines a reference line. If an individual is closer to a given reference line than to any other reference line, the individual and the corresponding reference point are considered to be correlative. In that case, the individual is called a correlative individual of the reference point, and the reference point is called a correlative reference point of the individual. According to this definition of correlation, an individual must have a correlative reference point, while a reference point may have any number (zero, one, or more) of correlative individuals. The proposed algorithm aims to find the correlative individuals of each reference point so that the diversity among population members can be maintained.
When two solutions have the same correlative reference point, the one with the lower scalarization function value under the penalty boundary intersection (PBI) approach (Zhang and Li, 2007) is considered better (assuming minimization problems in which all objectives are to be minimized). If a reference point has two or more correlative individuals, we keep the better correlative individuals so that the solutions move toward the Pareto-optimal front. Since the proposed algorithm does not adopt nondominated sorting, its computational cost is reduced to some degree.

In the remainder of the article, we first review a number of existing many-objective evolutionary algorithms in Section 2. Thereafter, in Section 3, we describe the proposed MOEA/D-CSM algorithm in detail. Section 4 presents experimental results of MOEA/D-CSM and compares it with three other elitist many-objective evolutionary algorithms, that is, NSGA-III (Deb and Jain, 2014), MOEA/D (Zhang and Li, 2007), and RVEA (Cheng et al., 2016). Section 5 gives a comparison between MOEA/D-CSM and MOEA/D-STM (Li et al., 2014). Finally, conclusions are drawn in Section 6.

## 2  Related Work

As the number of objectives in optimization problems increases, the main difficulties to solve such problems using MOEAs are the following:

1. With the increase in the number of objectives, the number of nondominated individuals expands sharply under the Pareto-based dominance relation adopted by most MOEAs. Therefore, when these MOEAs are used to deal with many-objective problems, the probability of generating new individuals that are better than their parents at each generation is quite small. This phenomenon slows down the execution of these algorithms, and some of them are simply unable to obtain satisfactory results.

2. It is difficult to maintain the diversity of the population. To determine the extent of crowding of solutions in a population, the identification of neighbors becomes computationally expensive when an optimization problem has many objectives (Deb and Jain, 2014).

3. The representation of the Pareto-optimal front becomes difficult in high-dimensional spaces. In order to properly represent the Pareto-optimal front, as a problem has more objectives, a larger population size is required. Nevertheless, the increase in the population size not only considerably increases the running time of the algorithm but also brings difficulties to make a proper decision for a decision maker (Ishibuchi et al., 2008; Deb and Jain, 2014).

To address the three difficulties mentioned above, recent research directions can be roughly divided into three categories, besides preference-based methods and dimensionality-reduction methods. First, the hypervolume indicator has been adopted to assign a fitness value to each individual so that the quality of each solution can be evaluated. A representative of hypervolume-based MOEAs is HypE (Bader and Zitzler, 2011). Second, GrEA (Yang et al., 2013) was proposed as a grid-based MOEA that adopts a relaxed form of Pareto dominance called $ɛ$-dominance (Laumanns et al., 2002). Third, a new decomposition-based (or reference-point-based) algorithm, NSGA-III, was proposed by Deb and Jain (2014), and many researchers have paid attention to this algorithm and developed similar ones; for example, MOEA/DD (Li, Deb et al., 2015) and RVEA (Cheng et al., 2016). The main idea of NSGA-III is to combine the decomposition strategy of MOEA/D (Zhang and Li, 2007) with the nondominated sorting approach of NSGA-II (Deb et al., 2002). NSGA-III can successfully generate well-converged and well-diversified sets of solutions to many-objective optimization problems.

These Pareto-based MOEAs work well for low-dimensional objective optimization problems, but for many-objective problems, achieving a proper balance between convergence and diversity becomes very difficult for them. Therefore, some scholars have turned to the hypervolume indicator, which is strictly monotonic with respect to Pareto dominance and can act as an alternative selection mechanism. Bader and Zitzler (2011) put forward a fast search algorithm that uses Monte Carlo simulation to approximate exact hypervolume values. Jiang et al. (2015) proposed a simple and fast method to update the exact hypervolume contributions of different solutions. Menchaca-Méndez et al. (2018) developed an adaptive control strategy to reduce the number of hypervolume contributions per iteration. Zapotecas-Martínez et al. (2019) put forward a Lebesgue indicator-based evolutionary algorithm to solve continuous and box-constrained multiobjective optimization problems. However, the hypervolume indicator has a high computational cost, and as the number of objectives increases, hypervolume-based algorithms become impractical. Because of this, some methods to approximate the hypervolume contribution have been proposed in order to reduce its computational cost; for example, HypE uses Monte Carlo simulations (Everson et al., 2002; Bader et al., 2010). However, experimental results indicate that regardless of the approximation method adopted, hypervolume-based MOEAs that incorporate such methods have poor performance. Therefore, the use of hypervolume-based MOEAs for solving many-objective optimization problems is still an open research area.

$ɛ$-dominance (Laumanns et al., 2002) is a relaxed form of Pareto dominance that was proposed as an archiving technique, but it can also be adopted to alleviate the loss of selection pressure that arises in many-objective optimization problems. This relation not only reduces the number of nondominated solutions in a population to some degree but also plays a crucial role in diversity maintenance. These properties of $ɛ$-dominance inspired Yang et al. (2013) to propose a new evolutionary algorithm called GrEA, which is very competitive with respect to some state-of-the-art MOEAs such as HypE, MOEA/D, and $ɛ$-MOEA (Deb, Mohan et al., 2005). Nevertheless, the performance of GrEA is affected by its many parameters; for the better use of this algorithm, it is necessary to understand the specific effect of these parameters on its performance.

It is well known by researchers in the multiobjective optimization field that MOEA/D plays a very important role in solving multiobjective optimization problems. Early decomposition algorithms (Jin et al., 2001) applied dynamic aggregation-based methods to archive the Pareto solutions of each generation. Decomposition-based MOEAs require a set of weight vectors or reference points. In essence, these vectors or points play the similar role of decomposing the objective space into a number of subspaces to ensure diversity. Since Zhang and Li proposed MOEA/D in 2007, a large number of decomposition-based MOEAs have been developed, such as MOEA/D-STM (Li et al., 2014), MOEA/D-DE (Li and Zhang, 2009), MOEA/D-M2M (Liu et al., 2014), MOEA/DD (Li, Deb et al., 2015), IM-MOEA (Cheng et al., 2015), RVEA (Cheng et al., 2016), and NSGA-III (Deb and Jain, 2014). Here, we give a brief comparison between the proposed algorithm and other decomposition-based MOEAs that adopt a similar algorithmic framework.

The algorithm proposed here is a decomposition-based MOEA designed for solving many-objective optimization problems. Regarding decomposition-based MOEAs for multiobjective optimization problems, there are four important aspects that need to be considered by their designers (Trivedi et al., 2016): 1) the way of generating the weight vectors; 2) the decomposition method; 3) the reproduction operators; and 4) the mating selection and the replacement strategy.

For the four aspects just mentioned, we first explain the proposed algorithm item by item, and then we compare it with similar decomposition-based methods such as MOEA/D, NSGA-III, MOEA/D-STM, MOEA/DD, and RVEA.

1. When the number of objectives is more than 8, the proposed algorithm adopts the two-layered generation method (Deb and Jain, 2014) to generate the weight vectors, which is also used in NSGA-III, MOEA/DD, and RVEA. MOEA/D-STM adopts the simplex lattice design method, as does the original version of MOEA/D.

2. As we know, there are three commonly used decomposition methods in the evolutionary multiobjective optimization community, that is, the weighted sum (WS), the weighted Tchebycheff (TCH), and PBI. MOEA/D-STM utilizes the TCH method. NSGA-III, MOEA/DD, MOEA/D, and RVEA essentially use the PBI method, although their authors indicated in their original references that they employed a set of reference vectors that spread over the objective space to divide the objective space into multiple subspaces. The proposed MOEA/D-CSM also adopts the PBI method.

3. In the original MOEA/D, the simulated binary crossover (SBX) and polynomial-based mutation (PM) operators are incorporated as the genetic operators. Up to now, many reproduction operators have been adopted in different decomposition-based MOEAs, such as differential evolution (DE), particle swarm optimization, and ant colony optimization. In the proposed algorithm (MOEA/D-CSM), we also use SBX and PM, the same as NSGA-III, MOEA/DD, MOEA/D, and RVEA. MOEA/D-STM adopts DE and SBX as its reproduction operators.

4. As we know, the mating selection and the replacement strategy play important roles in decomposition-based MOEAs. NSGA-III is an extension of the NSGA-II framework; it utilizes a set of reference points that spread over the objective space and decompose it into multiple small subspaces. The hybrid population is classified into different nondominated levels, and solutions in the first level have the highest priority to be selected. Solutions in the last acceptable level are selected based on a niche-preservation operator, in which a solution associated with a less crowded reference line has a higher probability of being selected. MOEA/DD combines dominance- and decomposition-based approaches for many-objective optimization, and the update of the population is done in a hierarchical manner, adopting Pareto dominance, local density estimation, and scalarization functions in a sequential manner. In MOEA/D-STM, a stable matching model coordinates the selection process of MOEA/D to select the most promising solutions for each subproblem: a subproblem prefers solutions that lower its aggregation function value, while a solution prefers subproblems whose direction vectors are close to it. As a more recently proposed decomposition-based MOEA, RVEA uses reference vectors to decompose the objective space into multiple small subspaces, and it inherits an elitism strategy similar to that of NSGA-II, where the parent and offspring populations are combined at every generation to undergo an elitist selection. A new angle-penalized distance (APD) is used to select the solution from each subpopulation that enters the next generation. Experimental results show that RVEA is highly competitive with MOEA/DD and NSGA-III in terms of the hypervolume indicator. In our algorithm, a new correlation between the reference points and the solutions is proposed.
The mating selection mechanism based on correlation relies on three entities of a solution and aims to find at least one correlative solution for every reference point, according to the second entity of each solution, so that population diversity can be maintained. By computing the scalarization function values and the distance between the objective vector of a solution and the reference line determined by its correlative reference point, the proposed algorithm balances diversity and convergence. Moreover, the replacement strategy is executed on the combination of the offspring and parent populations based on the three entities of the reference points. It mainly ensures that a reference point with the smallest number of correlative solutions keeps those solutions with better scalarization function values. NSGA-III, MOEA/DD, and RVEA were developed to solve many-objective problems, and MOEA/D was found to perform well on many-objective problems in the original NSGA-III paper, while MOEA/D-STM was proposed to solve complex multiobjective problems such as the UF test suite. RVEA was proposed in 2016, and in its original paper (Cheng et al., 2016) it was found to be very competitive with respect to some outstanding algorithms such as MOEA/DD. This is why we first compare MOEA/D, NSGA-III, RVEA, and the proposed algorithm, which is also designed for many-objective optimization problems, at the beginning of the experimental section, and then make a separate comparison between MOEA/D-STM and MOEA/D-CSM in a later section.
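For reference, the three scalarizing functions mentioned in item 2 above can be written as follows (following Zhang and Li, 2007), where $\lambda$ is a weight vector, $z^*$ is the ideal point, and $d_1$ and $d_2$ are the projection and perpendicular distances defined in Section 3.3:

$$g^{ws}(x \mid \lambda) = \sum_{i=1}^{M} \lambda_i f_i(x)$$

$$g^{tch}(x \mid \lambda, z^*) = \max_{1 \le i \le M} \lambda_i \left| f_i(x) - z_i^* \right|$$

$$g^{pbi}(x \mid \lambda, z^*) = d_1 + \theta\, d_2$$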

In the following section, the proposed algorithm, that is, a decomposition-based evolutionary algorithm with a correlative selection mechanism, is described, investigated, and discussed in detail.

## 3  The Proposed Algorithm: MOEA/D-CSM

### 3.1  Definition

Generally, a minimization multiobjective optimization problem can be defined as follows (Jain and Deb, 2014):
$$
\begin{aligned}
\text{Minimize} \quad & F(x) = \big(f_1(x), f_2(x), f_3(x), \ldots, f_M(x)\big), \quad x \in \Omega,\; F \in Z \\
\text{subject to} \quad & g_j(x) \ge 0, \quad j = 1, 2, \ldots, p, \\
& h_k(x) = 0, \quad k = 1, 2, \ldots, q, \\
& x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \ldots, n.
\end{aligned}
$$
(1)
where $f_1(x), f_2(x), f_3(x), \ldots, f_M(x)$ are the $M$ objective functions, $p$ is the number of inequality constraints, and $q$ is the number of equality constraints. In this article, $p$ and $q$ are set to zero, and $Z$ is the feasible region of $F$. Therefore, the algorithm proposed in this article handles problems with box constraints only.

In order to clarify the proposed algorithm, in this section, we first describe the decomposition approach used in MOEA/D-CSM, and then we introduce a new type of relationship between an individual and a reference point.

### 3.2  Generating Reference Vectors

The proposed MOEA/D-CSM adopts the PBI decomposition approach. For the PBI decomposition approach, the most important issue is how to generate the reference vectors, that is to say, how to uniformly partition the objective space. In earlier MOEA/Ds, Das and Dennis's (1998) systematic approach was a commonly used way to generate the reference vectors. In fact, there are also many other ways to generate them. Zapotecas-Martínez et al. (2015) introduced a new methodology based on low-discrepancy sequences to generate the weight vectors. He et al. (2016) proposed a reference point sampling approach that takes the specific shape of the Pareto-optimal front into account for measuring the performance of MOEAs. Jiang and Yang (2017) presented a k-layer reference direction generation approach. In this article, we use Das and Dennis's systematic approach. The total number of reference points ($H$) is controlled by a parameter $D$, which represents the number of divisions along each objective. Let $r_1, r_2, \ldots, r_H$ denote all reference points, where each component of $r_i\,(i=1,\ldots,H)$ takes a value from $\{0/D, 1/D, \ldots, D/D\}$ and the components of each $r_i$ sum to 1. Therefore, the total number of reference points $H$ and the parameter $D$ must satisfy the following equation:
$$H = \binom{D+M-1}{M-1}$$
(2)
Here, $M$ is the number of objectives. For example, in a three-objective problem ($M=3$), the reference points are placed on a triangle with vertices at (1, 0, 0), (0, 1, 0), and (0, 0, 1). If the parameter $D$ is set to 3, the total number of reference points $H$ is equal to 10. The distribution of these 10 reference points is shown in Figure 1.
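As an illustration, Das and Dennis's systematic approach can be sketched in a few lines of Python. The recursive enumeration below is our own assumption about the implementation (the actual procedure is given later in Algorithms 1 and 3), but it produces exactly the $H = \binom{D+M-1}{M-1}$ points described above:

```python
from math import comb

def das_dennis(M, D):
    """Enumerate all reference points whose M components are multiples
    of 1/D and sum to 1 (Das and Dennis's systematic approach)."""
    points = []

    def recurse(prefix, remaining):
        # Fix the first M-1 components; the last one takes the remainder.
        if len(prefix) == M - 1:
            points.append([k / D for k in prefix + [remaining]])
            return
        for k in range(remaining + 1):
            recurse(prefix + [k], remaining - k)

    recurse([], D)
    return points

refs = das_dennis(M=3, D=3)
print(len(refs))  # 10, matching H = C(D+M-1, M-1) = C(5, 2) in Eq. (2)
```

For $M=3$ and $D=3$ this reproduces the 10 reference points of Figure 1.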
Figure 1:

An example of 10 reference points distributed on a normalized hyper-plane by using the one-layered method when $D=3$, $M=3$.


According to Eq. (2), for an eight-objective problem with $D=8$, the total number of reference points is $H=\binom{15}{7}=6{,}435$, which is a large and impractical value, since in the proposed algorithm the population size is set to the number of reference points or the number of reference points plus 1. Moreover, if the parameter $D$ or the number of objectives $M$ increases even slightly, the total number of reference points $H$ increases dramatically. Therefore, it is necessary to apply some advanced experimental design methods (Zhang and Leung, 1999; Santner et al., 2013) to generate the reference points. In this article, when the number of objectives is large, two layers of reference points with small values of $D$ are used (Deb and Jain, 2014). For example, for an eight-objective problem, on the first layer, if $D_1=3$, 120 reference points are created according to Eq. (2); on the second layer, if $D_2=2$, 36 reference points are created. So 156 ($H=120+36=156$) reference points are generated, which is much fewer than the number created by the one-layered method. Figure 2 gives an example of two-layered reference points generated for a three-objective problem. In this article, when $M<8$, the one-layered method is used to generate the reference points, and when $M\ge8$, the two-layered method is used in order to reduce the computational complexity.
Figure 2:

An example of 9 reference points generated by using the two-layered reference points. There are six reference points on the first layer ($D1=2$) and three reference points on the second layer ($D2=1$).


As shown in Figure 2, 9 reference points are generated by using the two-layered method when $D_1=2$ and $D_2=1$. To give a more general example, Figure 3 shows a distribution of reference points generated by using the two-layered method when $D_1$ and $D_2$ are set to 7 and 4, respectively. In fact, the two-layered method used here is the same as in NSGA-III, whose authors did not present the detailed procedure of the method. Here, in order to make the two-layered method easily reproducible, the procedures of the one-layered and two-layered methods are presented in Algorithms 1, 2, and 3.

Figure 3:

A common example for reference points generated by using the two-layered reference points when $M=3$, $D1=7$, and $D2=4$. The black points are the reference points in the first layer. The blue points represent reference points in the second layer.


In Algorithm 1, the set $R_0$ is used to store the reference points. The algorithm generates one layer of reference points by calling the sub-function presented in Algorithm 3. In Algorithm 3, the parameter $K$ represents the number of times the function $Recursion$ calls itself, and the set $R$ stores the reference points produced during the recursion. By using $r'/D$, reference points are added to the set $R$ until the termination conditions are satisfied.

In Algorithm 2, $R_1$ and $R_2$ are used to store the reference points of the first and second layers, respectively. The algorithm calls the function $Recursion$ to generate the reference points of each layer. Since these reference points are generated separately, the reference points in $R_2$ must be integrated into $R_1$ based on $q' = (D_2/M + q \cdot D_2)/(2 D_2)$, that is, $q' = (q + 1/M)/2$ for each coordinate $q$. Thus, $R_1$ stores all reference points.
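The shrink-and-shift integration step can be sketched as follows. The enumeration helper is a hypothetical stand-in for the $Recursion$ sub-function of Algorithm 3, but the mapping $q' = (q + 1/M)/2$ is exactly the one above and keeps each second-layer point on the unit simplex:

```python
def simplex_points(M, D):
    """All points with components k/D (k a nonnegative integer) summing
    to 1; a stand-in for the Recursion sub-function of Algorithm 3."""
    points = []
    def recurse(prefix, remaining):
        if len(prefix) == M - 1:
            points.append([k / D for k in prefix + [remaining]])
            return
        for k in range(remaining + 1):
            recurse(prefix + [k], remaining - k)
    recurse([], D)
    return points

M, D1, D2 = 3, 2, 1
layer1 = simplex_points(M, D1)                 # 6 first-layer points
layer2 = simplex_points(M, D2)                 # 3 points before shrinking
# Shrink each second-layer point toward the centre (1/M, ..., 1/M):
# q' = (q + 1/M) / 2 preserves the unit coordinate sum.
inner = [[(q + 1.0 / M) / 2.0 for q in p] for p in layer2]
refs = layer1 + inner
print(len(refs))  # 9, matching Figure 2
```

With $D_1=2$ and $D_2=1$, this reproduces the 6 + 3 = 9 reference points of Figure 2.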

Figure 4:

Illustration of relation between a solution and a reference point.


### 3.3  Correlation between a Solution and a Reference Point

A reference line can be obtained by linking the ideal point with a reference point. So, when all reference points are generated, all reference lines can also be obtained, and these reference lines are uniformly distributed in the objective space. In this article, the purpose of the decomposition-based algorithm is to conduct the search toward the best solutions that are closest to these reference lines. To explain the idea more clearly, we first propose a new concept of correlation between an individual and a reference point.

In Figure 4, $z^*$ and $r$ are an ideal point and a reference point, respectively. Since the exact ideal objective vector is usually unknown a priori, it is generally set as the best value of each objective in the current population; that is, $z_i^*=\min\{f_i(x) \mid x\in S\}$, $i\in\{1,\ldots,M\}$, where $S$ denotes the set of solutions in the current generation. In this article, the ideal point is always the origin, considering the minimization problem in Eq. (1). $\lambda=r-z^*$ defines a reference line passing through the reference point $r$. The distance $d_1$ is the length of the projection of the segment joining $z^*$ and $F(x)$ onto the line $\lambda$. The distance $d_2$ denotes the perpendicular distance between $F(x)$ and the reference line $\lambda$. $d_1$ and $d_2$ are computed based on the PBI function as follows:
$$d_1=\frac{\lVert (F(x)-z^*)^{T}\lambda \rVert}{\lVert \lambda \rVert}$$
(3)
$$d_2=\left\lVert F(x)-\left(z^*+d_1\frac{\lambda}{\lVert \lambda \rVert}\right)\right\rVert$$
(4)

If the distance $d_2$ between $F(x)$ and the reference line $\lambda$ is not larger than the distance between $F(x)$ and any other reference line, the solution $x$ and the reference point $r$ are correlative. In this case, $x$ is called a correlative solution of the reference point $r$, and $r$ is called the correlative reference point of the solution $x$. Therefore, a solution has exactly one correlative reference point, while a reference point can have zero, one, or more correlative solutions.
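The computation of $d_1$ and $d_2$ in Eqs. (3) and (4) and the assignment of a solution to its correlative reference point can be sketched as follows; this is a minimal NumPy sketch, and the two-objective reference points used at the end are hypothetical:

```python
import numpy as np

def pbi_distances(fx, z_star, r):
    """d1: projection length of F(x) - z* onto the reference line
    lambda = r - z* (Eq. 3); d2: perpendicular distance from F(x)
    to that line (Eq. 4)."""
    lam = np.asarray(r, float) - np.asarray(z_star, float)
    diff = np.asarray(fx, float) - np.asarray(z_star, float)
    d1 = abs(diff @ lam) / np.linalg.norm(lam)
    d2 = np.linalg.norm(diff - d1 * lam / np.linalg.norm(lam))
    return d1, d2

def correlative_reference_point(fx, z_star, refs):
    """Index of the reference point whose reference line is closest
    to F(x), i.e., the one with the smallest d2."""
    return int(np.argmin([pbi_distances(fx, z_star, r)[1] for r in refs]))

refs = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
idx = correlative_reference_point([0.9, 0.1], [0.0, 0.0], refs)
print(idx)  # 0: the solution is closest to the line through (1, 0)
```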

### 3.4  Mating Selection Based on Correlation

A novel selection mechanism is proposed in MOEA/D-CSM. It includes two parts: mating selection and environmental selection, both based on correlation. Most traditional MOEAs select solutions from the parent population according to fitness or nondomination rank, while MOEA/D-CSM chooses solutions and updates the parent population according to the related entities of the reference points.

Before we introduce the mating selection based on correlation, three entities of each parent solution and each reference point should be given.

For each solution $x$, we calculate three entities: 1) its correlative reference point $r_x$; 2) $d_1^x$, the distance between the objective vector $F(x)$ and the reference line determined by the correlative reference point $r_x$, that is to say, $d_1^x=d_2$ as shown in Figure 4; and 3) $d_2^x$, the penalized distance between the objective vector $F(x)$ and the reference line determined by $r_x$, that is to say, $d_2^x=d_1+\theta d_2$, where $d_1$ and $d_2$ are also shown in Figure 4 and $\theta=5$ is used in this article, as suggested in Zhang and Li (2007). So the three entities of a solution can be written as ($r_x$, $d_1^x$, $d_2^x$).

For each reference point $r$, we also calculate three entities: 1) $U_r$, the set of solutions in the parent population that are correlative to the reference point $r$; 2) $n_r$, the number of such correlative solutions; and 3) $V_r$, the set of the $T$ reference points closest to $r$ ($T=8$ is adopted in this article). Therefore, the three entities of the reference point $r$ can be written as ($U_r$, $n_r$, $V_r$).
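Putting the two definitions together, the entities can be computed as follows. This is a NumPy sketch under the definitions of Section 3.3; the small two-objective data at the end are hypothetical:

```python
import numpy as np

def solution_entities(fx, z_star, refs, theta=5.0):
    """Return (r_x, d1_x, d2_x) for one solution: the index of its
    correlative reference point, the perpendicular distance d2 to that
    reference line, and the PBI value d1 + theta * d2 (theta = 5)."""
    fx = np.asarray(fx, float)
    z = np.asarray(z_star, float)
    best = None
    for i, r in enumerate(refs):
        lam = np.asarray(r, float) - z
        diff = fx - z
        d1 = abs(diff @ lam) / np.linalg.norm(lam)
        d2 = np.linalg.norm(diff - d1 * lam / np.linalg.norm(lam))
        if best is None or d2 < best[1]:
            best = (i, d2, d1 + theta * d2)
    return best

def reference_entities(sol_ents, refs, T=8):
    """Return (U_r, n_r, V_r) for every reference point: its correlative
    solution indices, their count, and its T closest reference points."""
    U = [[] for _ in refs]
    for s, (ri, _, _) in enumerate(sol_ents):
        U[ri].append(s)
    R = np.asarray(refs, float)
    out = []
    for i in range(len(refs)):
        order = np.argsort(np.linalg.norm(R - R[i], axis=1))
        V = [int(j) for j in order if j != i][:T]
        out.append((U[i], len(U[i]), V))
    return out

refs = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
pop_F = [[0.9, 0.1], [0.4, 0.6], [0.1, 0.9]]
sol_ents = [solution_entities(f, [0.0, 0.0], refs) for f in pop_F]
ref_ents = reference_entities(sol_ents, refs, T=2)
print([e[0] for e in sol_ents])  # [0, 1, 2]: one correlative solution each
```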

In fact, the mating selection mechanism based on correlation mainly aims to find one correlative solution for every reference point, at least according to the second entity of the solutions, so that population diversity can be maintained. As mentioned previously, whether a solution is correlative to a reference point is determined by the second entity of the solution. A solution may be equidistant from two or more reference lines; in that case, we simply select one of the corresponding reference points as its correlative reference point. When a reference point has more than one correlative solution in the search process, the solution with better convergence according to its third entity is kept. From the definition of the third entity of a solution, it is easy to see that it reflects the convergence of the solution. In other words, the second entity of a solution is used to maintain population diversity, and the third entity is used to promote population convergence.

Before we execute the mating selection operators based on correlation, the entities of each solution and each reference point must be calculated in advance. We point out that these entities are calculated according to the current parent population $P_t$ in Algorithm 4. Next, we explain the mating selection mechanism based on correlation in detail.

First, two solutions $x_1$ and $x_2$ are randomly selected from the parent population. $r_1$ and $r_2$ are their correlative reference points, respectively. If $n_1 \le n_2$ ($n_1$ and $n_2$ denote the numbers of correlative solutions of the reference points $r_1$ and $r_2$), the solution $x_1$ is kept; otherwise, the solution $x_2$ is kept. In the following explanation, we suppose that $n_1 \le n_2$, that is, the solution $x_1$ is kept.

Next, we randomly choose a reference point $r_3$ from the set $V_{r_1}$, which contains the $T$ reference points closest to $r_1$. Then, a solution $x_3$ is randomly selected from the set $U_{r_3}$, which contains the correlative solutions of $r_3$.

Finally, $x_1$ and $x_3$ are taken as the two parent solutions. Two offspring solutions are generated from them by crossover and mutation. In this article, polynomial-based mutation (PM) (Deb et al., 2002) and simulated binary crossover (SBX) (Deb and Agrawal, 1994) are adopted within the mating selection mechanism.
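These three steps can be sketched as follows. This is a simplified sketch, not the exact Algorithm 4: in particular, the fallback used when the chosen neighbouring reference point has no correlative solutions is our own assumption, and the toy data at the end are hypothetical. The entity tuples follow the conventions of the preceding paragraphs.

```python
import random

def mating_selection(pop, sol_ents, ref_ents, rng=random):
    """Select two parents by correlation: keep the randomly drawn candidate
    whose correlative reference point is less crowded, then draw the mate
    from the correlative solutions of a neighbouring reference point."""
    i1, i2 = rng.sample(range(len(pop)), 2)
    r1, r2 = sol_ents[i1][0], sol_ents[i2][0]
    # Keep the solution whose reference point has fewer correlative solutions.
    first = i1 if ref_ents[r1][1] <= ref_ents[r2][1] else i2
    r_first = sol_ents[first][0]
    # Pick a neighbouring reference point that has correlative solutions
    # (falling back to r_first itself if none of the neighbours has any).
    candidates = [r for r in ref_ents[r_first][2] if ref_ents[r][1] > 0]
    r3 = rng.choice(candidates) if candidates else r_first
    mate = rng.choice(ref_ents[r3][0])
    return pop[first], pop[mate]

# toy demo: three solutions, each correlative to its own reference point
pop = ["x1", "x2", "x3"]
sol_ents = [(0, 0.10, 0.60), (1, 0.20, 0.90), (2, 0.10, 0.70)]  # (r_x, d1_x, d2_x)
ref_ents = [([0], 1, [1, 2]), ([1], 1, [0, 2]), ([2], 1, [0, 1])]  # (U_r, n_r, V_r)
p1, p2 = mating_selection(pop, sol_ents, ref_ents, rng=random.Random(0))
```

The selected pair would then be handed to SBX and PM to produce two offspring.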

The population size must be even in this article, because offspring solutions are always created in pairs by the mating selection based on correlation. Moreover, in decomposition-based evolutionary algorithms such as NSGA-III, the population size generally equals the number of reference points. Hence, in this article, if the number of reference points $H$ is even, the population size $popsize$ is equal to $H$; otherwise, $popsize$ is equal to $H+1$.

### 3.5  Environmental Selection Based on Correlation

Suppose the offspring population obtained by performing the mating selection operation on the parent population $P_t$ is $Q_t$. Then, the population $P_{t+1}$ is obtained from $Q_t$ and $P_t$ by the proposed environmental selection based on correlation. Hence, the environmental selection operator can also be called a population update operator, which is given in Algorithm 5.

Suppose that a solution $x_k$ is the $k$th solution of the offspring population $Q_t$, and that the three entities of the solution $x_k$ are calculated and denoted as ($r_k$, $d_1^k$, $d_2^k$). Then the three entities of the reference point $r_k$ can also be obtained and denoted as ($U_{r_k}$, $n_{r_k}$, $V_{r_k}$).

The offspring individual $x_k$ is then used to update the current parent population $P_t$. According to its second entity, it finds its correlative reference point; then, by comparing the values of $n_{r_i}$ ($i=1,\cdots,H$), the reference point with the largest number $n_{r_{max}}$ of correlative solutions among all reference points (denoted as $r_{max}$) is found. Among all correlative solutions of $r_{max}$, the solution $x_{max}$ with the largest third entity is identified and removed from the set $U_{r_{max}}$. Finally, the three entities of $r_{max}$ are updated.
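One update step of this environmental selection can be sketched as follows (a simplified sketch of Algorithm 5; solutions are represented by integer ids, and the toy state is hypothetical, with `d2x` holding each solution's third entity):

```python
def environmental_update(U, d2x, new_sol, new_ref):
    """Insert the offspring `new_sol` into the correlative set of its
    reference point `new_ref`, then evict the solution with the largest
    third entity (d1 + theta * d2) from the most crowded reference point,
    keeping the population size constant."""
    U[new_ref].append(new_sol)
    r_max = max(range(len(U)), key=lambda r: len(U[r]))   # most crowded point
    worst = max(U[r_max], key=lambda s: d2x[s])           # largest third entity
    U[r_max].remove(worst)
    return worst

# toy state: three reference points, one correlative solution each
U = [[0], [1], [2]]
d2x = {0: 1.2, 1: 0.7, 2: 0.9, 3: 0.4}   # third entities of solutions 0..3
removed = environmental_update(U, d2x, new_sol=3, new_ref=0)
print(U, removed)  # [[3], [1], [2]] 0: the worse solution of r_max is evicted
```

Repeating this step for every offspring in $Q_t$ yields the next parent population $P_{t+1}$.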

Furthermore, to explain the environmental selection process more clearly, Figures 5 and 6 show the selection process in detail for a multiobjective optimization problem in which all objectives are minimized; they show how the current parent population is updated with the current offspring population in different situations. Here, we assume that the objective space has two dimensions and that the population consists of six solutions, as shown in Figures 5 and 6.

As shown in Figure 5a, in the current parent population, each reference vector has one correlative solution. After the current offspring population is obtained, the current parent population is updated by the proposed environmental selection. For example, when a solution $c_1$ in the current offspring population is produced, we first find its correlative reference vector ($r_1$ in Figure 5b) according to its second entity ($d_{1c_1}$), so that the reference vector $r_1$ now has two correlative solutions. The number of correlative solutions of $r_1$ is then the largest, and its worst solution ($p_1$), which has the largest value of $d_{2p_1}$ in $U_{r_1}$, is discarded as shown in Figure 5c. The other solutions in the current offspring population successively update the population until all of them have participated in the updating process.

Figure 5:

(a) Current parent population (six red solid dots) and their correlative reference vectors (dotted lines); (b) Distribution of the current population when an offspring $c_1$ is produced, before the environmental selection; (c) Distribution of the current population after the environmental selection.


As shown in Figure 6a, in the current parent population, two reference vectors ($r_2$ and $r_3$) have no correlative solution, and one reference vector ($r_1$) has three correlative solutions ($p_1$, $p_2$, and $p_3$). In the selection process, after the three entities of solution $c_1$ are computed, $c_1$ is added to $U_{r_2}$, the set of solutions correlative to $r_2$. Then the worst solution ($p_1$) is removed. At this point, the current parent population has been updated by the solution $c_1$.

Here, we need to emphasize three points for the environmental selection based on correlation in the proposed algorithm:

1. We try to find at least one correlative solution for each reference point as soon as possible at the start of the evolutionary process.

2. Once a reference point has found a correlative solution, it retains at least one correlative solution in every subsequent iteration.

3. If a reference point has only one correlative solution, this solution is removed only when it is replaced by another correlative solution with a better third entity.

According to the above explanations, diversity among population members in MOEA/D-CSM is maintained by finding at least one correlative solution for each reference point as far as possible. Nevertheless, the sum of the numbers of correlative solutions over all reference points must equal the population size. Hence, during the search for correlative solutions, if a reference point has many correlative solutions, some of the worst of them should be removed. Here, the worst solution is the one whose third entity is larger than those of the other correlative solutions of the same reference point. Removing these worst solutions guides the population closer to the Pareto-optimal front.
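A minimal sketch of one update step of this environmental selection, under bookkeeping structures of our own choosing (a map from each reference point to the indices of its correlative solutions, and the third entity $d_2$ stored per solution); the function and variable names are ours:

```python
def update_population(corr, d2, k_new, r_new, d2_new):
    """One environmental-selection step: attach offspring k_new to its
    correlative reference point r_new, then evict the worst solution of the
    most crowded reference point so the total count stays at popsize.

    corr : dict mapping reference-point index -> list of solution indices
    d2   : dict mapping solution index -> its third entity (perpendicular distance)
    Returns the index of the evicted solution.
    """
    corr.setdefault(r_new, []).append(k_new)        # insert the offspring
    d2[k_new] = d2_new
    r_max = max(corr, key=lambda r: len(corr[r]))   # most crowded reference point
    worst = max(corr[r_max], key=lambda s: d2[s])   # largest third entity
    corr[r_max].remove(worst)
    del d2[worst]
    return worst
```

In the situation of Figure 5, inserting an offspring with a small $d_2$ into an already occupied reference point evicts that point's previous, more distant, solution.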

Figure 6:

(a) Current parent population (six red solid dots) and the reference vectors (dotted lines); (b) Distribution of the current population when an offspring $c_1$ is produced, before the environmental selection; (c) Distribution of the current population after the environmental selection.


### 3.6  Main Loop

Algorithm 6 provides the main procedure of MOEA/D-CSM. The computational complexity of one generation of MOEA/D-CSM is analyzed here. In Algorithm 4, because there is nothing but a repetition of line 15, reproducing the offspring population with the mating selection operator requires $O(popsize/2)$ computations. In Algorithm 5, first, determining the three entities of an offspring solution (line 2) requires $O(H \times M)$ computations; second, finding the reference point with the largest second entity among all reference points (line 7) requires $O(H)$ computations; finally, determining the solution whose third entity is the largest among all solutions sharing the same correlative reference point (line 8) requires $O(popsize)$ computations in the worst case. Because $popsize \geq H$ holds in all our simulations, the complexity of the environmental selection operator is $O(popsize \times M)$ per offspring. Taking all of the above into account, the overall worst-case complexity of one generation of MOEA/D-CSM is $O(popsize^2 \times M)$, which is not worse than that of NSGA-III.

## 4  Simulation Results

### 4.1  Experimental Setup

Since MOEA/D-CSM, NSGA-III, RVEA, and MOEA/D are based on the same decomposition approach, we used the 3- to 15-objective DTLZ1, DTLZ2, DTLZ3, and DTLZ4 problems (Deb, Thiele et al., 2005; Wang et al., 2019) and UF8, UF9, and UF10 (Zhang et al., 2009; Wang et al., 2019) to assess the performance of the four algorithms. The implementation of NSGA-III adopted in this article was taken from http://web.ntnu.edu.tw/∼tcchiang/publications/nsga3cpp/nsga3cpp.htm; the code of MOEA/D is from http://dces.essex.ac.uk/staff/zhang/webofmoead.htm; and the code of RVEA is from http://www.surrey.ac.uk/cs/people/yaochu_jin/. The number of variables is $(M+k-1)$, where $k=10$ for DTLZ2, DTLZ3, and DTLZ4. For UF8-UF10, the number of variables is 30.

Table 1 shows the number of reference points ($H$) for problems with different numbers of objectives. The population sizes and the numbers of reference points in NSGA-III and MOEA/D follow the suggestions in Deb and Jain (2014) and Zhang and Li (2007), respectively, and the population size of RVEA is set as in MOEA/D. In MOEA/D-CSM, the population size is slightly different from that of the other three algorithms. The reason is that the population size of MOEA/D-CSM ($popsize$) must be even for the mating selection, as explained in Subsection 3.4, so $popsize$ is equal to $H$ or $H+1$ in MOEA/D-CSM. As shown in Table 1, the difference between the population size of the proposed algorithm and that of the other algorithms is at most 2.
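The counts in Table 1 follow from the Das-Dennis simplex-lattice construction used by NSGA-III and related decomposition methods, together with the parity rule above. The helper below is an illustrative sketch (function names are ours); the two-layered variant sums a boundary layer and an inside layer:

```python
from math import comb

def n_ref_points(M, divisions):
    """Number of Das-Dennis reference points on the M-objective simplex.
    `divisions` is an int for a single layer, or a (boundary, inside) tuple
    for the two-layered scheme used when M >= 8."""
    if isinstance(divisions, int):
        return comb(M + divisions - 1, divisions)
    outer, inner = divisions
    return comb(M + outer - 1, outer) + comb(M + inner - 1, inner)

def popsize_csm(H):
    """MOEA/D-CSM needs an even population: H if H is even, else H + 1."""
    return H if H % 2 == 0 else H + 1
```

For instance, `n_ref_points(3, 12)` gives 91 and `n_ref_points(8, (3, 2))` gives 156, matching Table 1.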

Table 1:

The number of reference points and population sizes used in MOEA/D-CSM, NSGA-III, MOEA/D, and RVEA.

| Number of objectives ($M$) | Divisions along each objective ($D$) | Number of reference points ($H$) | MOEA/D-CSM population size ($popsize$) | NSGA-III population size | MOEA/D and RVEA population size |
|---|---|---|---|---|---|
| 3 | 12 | 91 | 92 | 92 | 91 |
| 5 | 6 | 210 | 210 | 212 | 210 |
| 8 | (3, 2) | 156 | 156 | 156 | 156 |
| 10 | (3, 2) | 275 | 276 | 276 | 275 |
| 15 | (2, 1) | 135 | 136 | 136 | 135 |

Table 2 presents the other parameters of the four algorithms used in this study. The neighborhood size $T$ and the penalty parameter are set to 8 and 5, respectively, for both MOEA/D-CSM and MOEA/D. The remaining special parameters of NSGA-III, MOEA/D, and RVEA are as recommended by the authors of the respective articles. The common parameters in Table 2 are those suggested in Deb and Jain (2014).

Table 2:

Parameter values used in MOEA/D-CSM, NSGA-III, MOEA/D, and RVEA ($n$ is the number of variables).

| Parameters | MOEA/D-CSM | NSGA-III | MOEA/D | RVEA |
|---|---|---|---|---|
| Crossover probability $p_c$ | | | | |
| Mutation probability $p_m$ | $1/n$ | $1/n$ | $1/n$ | $1/n$ |
| $η_c$ | 30 | 30 | 30 | 30 |
| $η_m$ | 20 | 20 | 20 | 20 |

### 4.2  Performance Measures

The inverted generational distance (IGD) (Coello and Cortés, 2005) is used to compare the performance of the four algorithms; it provides combined information about both the convergence and the diversity of the obtained solutions. First, since the true Pareto-optimal surface of every test problem used in this study is known, we can generate a set of targeted Pareto-optimal points distributed uniformly, or approximately uniformly, over that surface. Here, the method for generating these Pareto-optimal points is the same as the one used in NSGA-III. We call this set of targeted Pareto-optimal points $A$. In addition, the final nondominated points in the objective space obtained by any of the four algorithms form the set $B$. Now, IGD is computed as follows:
$$\mathrm{IGD}(A,B)=\frac{1}{|A|}\sum_{i=1}^{|A|}\min_{j=1}^{|B|} d(a_i,b_j)$$
(5)
where $d(a_i,b_j)=\|a_i-b_j\|$ is the Euclidean distance between $a_i$ and $b_j$. In our study, $|A|$ is set to the number of reference points $H$. Clearly, the smaller the IGD value, the better the performance of the algorithm. For each test problem, 20 runs from different initial populations are performed, and the mean and standard deviation of the IGD values are reported.
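Equation (5) translates directly into a few lines of vectorized code. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def igd(A, B):
    """Inverted generational distance of Eq. (5): the mean, over each targeted
    Pareto-optimal point a_i in A, of its distance to the closest b_j in B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    # pairwise Euclidean distances, shape |A| x |B|
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

A quick check: with $A=\{(0,0),(1,1)\}$ and $B=\{(0,0),(1,0)\}$, the two nearest distances are 0 and 1, so the IGD is 0.5.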

### 4.3  Experimental Results on DTLZ and UF8-UF10 Problems and Discussion

In this section, we present the comparative results of MOEA/D-CSM, NSGA-III, MOEA/D, and RVEA on the DTLZ1-4 problems having 3 to 15 objectives and on UF8-UF10 with 3 objectives. Table 3 gives the mean IGD and its standard deviation over 20 independent runs.

Table 3:

Mean IGD and its standard deviation obtained by the four algorithms on the test problems, where the best result for each problem is shown in boldface.

| Problem | $M$ | MaxGen | MOEA/D-CSM | NSGA-III | MOEA/D | RVEA | $p$-value |
|---|---|---|---|---|---|---|---|
| DTLZ1 | 3 | 400 | 1.075E-3(4.748E-4) | 1.826E-3(1.086E-3) | 1.990E-3(1.208E-3) | **5.199E-4(1.250E-6)** | 0.0003 |
| | 5 | 600 | **2.780E-4(1.113E-4)** | 8.923E-4(3.420E-4) | 8.328E-4(3.225E-4) | 1.225E-3(2.101E-6) | |
| | 8 | 750 | **4.153E-3(7.305E-4)** | 4.960E-3(4.683E-3) | 7.204E-3(7.363E-4) | 4.534E-3(2.987E-6) | |
| | 10 | 1000 | 5.709E-3(6.735E-4) | **3.667E-3(7.816E-4)** | 6.907E-3(5.084E-4) | 6.569E-3(1.220E-5) | |
| | 15 | 1500 | 6.068E-2(2.456E-2) | **4.931E-3(2.113E-3)** | 5.491E-2(3.271E-3) | 6.811E-3(2.867E-5) | 0.3703 |
| DTLZ2 | 3 | 250 | **6.553E-4(9.193E-5)** | 1.250E-3(1.643E-4) | 7.371E-4(6.458E-5) | 1.253E-3(1.201E-6) | 0.004 |
| | 5 | 350 | **7.259E-4(1.478E-4)** | 4.507E-3(3.920E-4) | 1.652E-3(1.371E-4) | 5.393E-3(3.340E-6) | |
| | 8 | 500 | 5.932E-3(8.043E-4) | 1.615E-2(2.269E-3) | **4.475E-3(6.474E-4)** | 1.101E-2(7.440E-6) | 0.0001 |
| | 10 | 750 | 1.167E-2(7.958E-4) | 1.585E-2(1.048E-3) | **4.763E-3(5.174E-4)** | 1.701E-2(1.324E-6) | |
| | 15 | 1000 | 2.945E-2(2.579E-2) | **2.036E-2(2.438E-3)** | 5.168E-2(1.166E-2) | 3.892E-2(2.551E-3) | 0.3507 |
| DTLZ3 | 3 | 1000 | **3.175E-3(2.866E-3)** | 3.855E-3(1.709E-3) | 3.988E-3(2.757E-3) | 5.197E-3(4.853E-6) | 0.156 |
| | 5 | 1000 | **1.059E-3(6.656E-4)** | 4.998E-3(2.713E-3) | 2.043E-3(6.600E-4) | 9.022E-3(1.692E-5) | 0.0012 |
| | 8 | 1000 | **1.003E-3(1.695E-3)** | 6.530E-2(1.419E-1) | 6.945E-2(2.280E-1) | 1.871E-2(1.432E-5) | |
| | 10 | 1500 | **1.375E-2(1.686E-3)** | 1.508E-2(3.669E-3) | 5.614E-2(2.071E-1) | 1.912E-2(4.091E-6) | 0.2322 |
| | 15 | 2000 | **1.369E-2(1.286E-2)** | 4.404E-2(3.375E-2) | 6.338E-1(5.919E-1) | 5.127E-2(2.627E-4) | |
| DTLZ4 | 3 | 600 | **8.604E-5(1.079E-5)** | 1.065E-1(2.177E-1) | 3.228E-1(3.931E-1) | 4.642E-4(1.601E-6) | |
| | 5 | 1000 | **8.130E-5(1.373E-5)** | 5.911E-4(1.313E-4) | 2.080E-1(2.790E-1) | 2.645E-3(4.901E-6) | |
| | 8 | 1250 | 4.233E-3(7.382E-4) | **4.132E-3(6.067E-4)** | 3.837E-1(1.759E-1) | 7.048E-3(4.540E-6) | 0.8519 |
| | 10 | 2000 | 2.454E-2(1.317E-3) | **4.380E-3(4.407E-4)** | 2.279E-1(1.641E-1) | 1.086E-2(0.950E-6) | |
| | 15 | 3000 | 2.527E-2(3.067E-2) | **7.215E-3(1.122E-3)** | 4.425E-1(1.273E-1) | 2.700E-2(2.049E-3) | |
| UF8 | 3 | 300 | 1.704E-1(7.102E-5) | 1.240E-1(9.771E-5) | **3.310E-2(4.432E-5)** | 1.924E+0(3.777E-3) | |
| UF9 | 3 | 300 | 2.133E-1(6.225E-4) | 1.187E-1(7.918E-4) | **2.867E-2(2.713E-5)** | 2.261E+0(2.926E-3) | |
| UF10 | 3 | 300 | 1.291E-1(4.123E-5) | 7.497E-2(2.971E-6) | **7.042E-2(4.254E-4)** | 9.045E+0(2.535E-4) | |

From Table 3, we can see that the proposed MOEA/D-CSM performs better than NSGA-III, RVEA, and MOEA/D on DTLZ3 with 3 to 15 objectives. Moreover, because DTLZ3 is used to investigate an algorithm's ability to converge to the global Pareto-optimal front, it is clear that the convergence of MOEA/D-CSM is better than that of the three other algorithms on DTLZ3. The IGD results obtained by the proposed algorithm are much better than those of the other algorithms on DTLZ4 with 3 and 5 objectives. However, when DTLZ4 has more than 5 objectives, the IGD results obtained by the proposed algorithm become gradually worse than those of NSGA-III, although they are still better than those of MOEA/D and RVEA when the number of objectives is 8 or 15. Therefore, when DTLZ4 has fewer than 8 objectives, MOEA/D-CSM performs much better than the other three algorithms. For DTLZ2 having 3 to 10 objectives, the IGD results obtained by MOEA/D-CSM are better than those of NSGA-III and RVEA but worse than those of MOEA/D when the number of objectives is 8 or 10; for DTLZ2 having 15 objectives, the IGD results of MOEA/D-CSM and MOEA/D are similar. For DTLZ1 having 5 and 8 objectives, the IGD results obtained by MOEA/D-CSM are better than those obtained by the other algorithms. RVEA is much better than the three other algorithms on DTLZ1 with 3 objectives, and for DTLZ1 with 10 and 15 objectives, NSGA-III performs better than the three other algorithms. For the three UF problems, the performance of MOEA/D is the best.

Out of 23 cases, there are 14 in which MOEA/D-CSM performs better than NSGA-III, 17 in which it performs better than MOEA/D, and 20 in which it performs better than RVEA. Moreover, MOEA/D-CSM performs much better than NSGA-III in 5 cases, including DTLZ2 and DTLZ4 with 3 or 5 objectives and DTLZ3 with 8 objectives; in these cases, MOEA/D-CSM achieves a marked improvement, in some cases of an order of magnitude or more. MOEA/D-CSM also performs much better than MOEA/D in 8 cases, including five-objective DTLZ2, DTLZ3 with 8 or 15 objectives, and DTLZ4 with 3 to 15 objectives. Except for DTLZ1 with 3 and 15 objectives and DTLZ4 with 10 objectives, MOEA/D-CSM is better than RVEA on all the other test problems. So, we can conclude that the MOEA/D-CSM proposed in this article can be used to deal with many-objective optimization problems and that it outperforms NSGA-III, MOEA/D, and RVEA in most cases. For the three UF problems with 3 objectives, MOEA/D is the best among the four algorithms.

In order to perform a statistical analysis of the results, we adopted the Wilcoxon signed-rank test (Hollander and Wolfe, 1999; Sheskin, 2004) to compare the IGD results of the 20 runs obtained by the two top-ranked algorithms (rank $=$ 1 and rank $=$ 2) at the significance level $α=0.05$; all $p$-values are given in the last column of Table 3. If the $p$-value is $≤0.05$, we reject the null hypothesis $H_0$ at the significance level $α=0.05$, which means that there is a significant difference between the best-performing algorithm (rank $=$ 1) and the second-best-performing algorithm (rank $=$ 2). Otherwise, if the $p$-value is $>0.05$, we accept $H_0$ at the significance level $α=0.05$, and there is no significant difference between the two algorithms. For example, for DTLZ1 (5), the $p$-value is 0, as shown in Table 3; since it is below 0.05, MOEA/D-CSM (rank $=$ 1) is significantly better than MOEA/D (rank $=$ 2). For DTLZ3 (3), the $p$-value is 0.1560 $>$ 0.05, so although MOEA/D-CSM (rank $=$ 1) is better than NSGA-III (rank $=$ 2), the difference is not significant.
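The test is a paired comparison of the 20 IGD values of the two top-ranked algorithms. A sketch using SciPy's `wilcoxon`, which implements the signed-rank test; the data here are synthetic and purely illustrative, not the reported results:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Illustrative IGD values of the rank-1 and rank-2 algorithms over 20 runs
igd_rank1 = rng.normal(1.0e-3, 2.0e-4, 20)
igd_rank2 = igd_rank1 + rng.normal(8.0e-4, 1.0e-4, 20)  # consistently worse

stat, p = wilcoxon(igd_rank1, igd_rank2)  # paired signed-rank test
significant = p <= 0.05  # reject H0 (no difference) at alpha = 0.05
```

Because every paired difference here favors the rank-1 algorithm, the resulting $p$-value is far below 0.05, mirroring cases such as DTLZ1 (5) in Table 3.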

In summary, according to the obtained $p$-values, when its rank is 1 the proposed MOEA/D-CSM is significantly better than the rank-2 algorithm on 9 test problems, and better but not significantly so on 2 test problems, out of the 23 test problems. As for NSGA-III, when its rank is 1, there are only 3 test problems on which NSGA-III is significantly better than the rank-2 algorithm. Similarly, MOEA/D is significantly better than the second-ranked algorithm on DTLZ2 (8), DTLZ2 (10), and UF8-UF10. RVEA performs significantly better than the second-ranked algorithm only on DTLZ1 (3). The statistical results also show the superiority of the proposed algorithm.

Since Table 3 shows only the means and standard deviations of the IGD values, we cannot investigate the stability of these algorithms from it. Figures 7-11 therefore give boxplots of the IGD results obtained by the four algorithms on DTLZ1-4 and UF8-UF10 to show their stability. From Figure 7, we can see that MOEA/D-CSM is more stable than the three other algorithms when DTLZ1 has five objectives, but for DTLZ1 having 15 objectives, the stability of MOEA/D-CSM is worse. From Figure 8, we can also find that the stability of MOEA/D-CSM is better than that of NSGA-III for DTLZ2 having three to ten objectives. From Figure 9, it can be seen that the stability of MOEA/D-CSM is better than that of the other algorithms when DTLZ3 has more objectives. From Figure 10, we can find that the stability of MOEA/D-CSM is good when DTLZ4 has 8 to 15 objectives. For the UF test problems with 3 objectives, whose boxplots are presented in Figure 11, MOEA/D performs best.
Figure 7:

Boxplots of IGD results obtained by the four algorithms for DTLZ1 having 3 to 15 objectives in 20 independent runs.


Figure 8:

Boxplots of IGD results obtained by the four algorithms for DTLZ2 having 3 to 15 objectives in 20 independent runs.


Figure 9:

Boxplots of IGD results obtained by the four algorithms for DTLZ3 having 3 to 15 objectives in 20 independent runs.


Figure 10:

Boxplots of IGD results obtained by the four algorithms for DTLZ4 having 3 to 15 objectives in 20 independent runs.


Figure 11:

Boxplots of IGD results obtained by the four algorithms for UF8, UF9, UF10 in 20 independent runs.


### 4.4  Discussion on the Setting of the Ideal Point

In all of the above experiments, we set the origin as the ideal point. One reason is that this reduces the computational cost, since the ideal point does not need to be updated during the evolutionary process; the other reason is that doing so magnifies the selection pressure toward the optimal solutions when solving a minimization problem, which can speed up convergence to the optimal PF.

Here, we compare two cases. In case 1, the origin is set as the ideal point; in case 2, the ideal point is set to the best value of each objective in the current population, that is, $z_i^*=\min\{f_i(x)\mid x\in S\}$, where $S$ denotes the set of solutions in the current generation. We provide the mean IGD values and the running times of 20 independent runs of MOEA/D-CSM in Table 4.
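The two settings differ only in how the ideal point is obtained from the population's objective matrix; a sketch (the function name and flag are ours):

```python
import numpy as np

def ideal_point(F, use_origin=True):
    """Case 1: fix the ideal point at the origin.
    Case 2: take the per-objective minimum of the current population's
    objective matrix F (shape popsize x M), i.e. z_i* = min_x f_i(x)."""
    F = np.asarray(F, float)
    return np.zeros(F.shape[1]) if use_origin else F.min(axis=0)
```

Case 2 must be recomputed whenever the population changes, which is one source of the extra running time reported in Table 4.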

Table 4:

Mean IGD results and mean running times obtained in the two cases on some UF and DTLZ test problems with three objectives, where the best results are shown in boldface.

Case 1 sets the origin as the ideal point; case 2 sets the best objective function value of each objective as the ideal point.

| Test problem | $M$ | Case 1 IGD | Case 1 running time (s) | Case 2 IGD | Case 2 running time (s) |
|---|---|---|---|---|---|
| DTLZ1 | 3 | **1.075E-3(4.748E-4)** | 12.79 | 1.479E-3(5.980E-4) | 15.63 |
| DTLZ2 | 3 | 6.553E-4(9.193E-5) | 9.83 | **6.210E-4(7.143E-5)** | 17.9 |
| DTLZ3 | 3 | **3.175E-3(2.866E-3)** | 21.91 | 4.205E-3(3.596E-3) | 35.46 |
| DTLZ4 | 3 | **8.604E-5(1.079E-5)** | 27.53 | 9.724E-5(2.184E-4) | 42.85 |
| UF8 | 3 | 1.704E-1(7.102E-5) | 7.62 | **1.316E-1(5.152E-4)** | 16.75 |
| UF9 | 3 | **2.133E-1(6.225E-4)** | 8.94 | 3.826E-1(5.380E-4) | 17.82 |
| UF10 | 3 | 1.291E-1(4.123E-5) | 10.05 | **1.096E-1(3.905E-4)** | 22.96 |

Table 4 clearly shows that the IGD values obtained in the two cases are very similar. As expected, the algorithm converges to the optimal PF in a shorter time when the origin is set as the ideal point.

## 5  Comparison of MOEA/D-CSM with MOEA/D-STM

In this section, we first introduce the stable matching-based selection in evolutionary multiobjective optimization (MOEA/D-STM) proposed by Li et al. (2014). Then, the differences between MOEA/D-CSM and MOEA/D-STM are presented. Finally, MOEA/D-CSM and MOEA/D-STM are compared through many experiments.

### 5.1  Introduction of MOEA/D-STM

MOEA/D-STM mainly proposes a new selection mechanism, called the STM model, within MOEA/D. This section explains the STM model in detail. As we know, MOEA/D decomposes a multiobjective optimization problem into a set of scalar optimization subproblems and optimizes them in a collaborative manner, so two sets naturally exist in MOEA/D: the subproblems and the solutions. In some versions of MOEA/D, each subproblem independently chooses its best solution according to a scalar function at each generation. Therefore, a solution may be chosen by more than one subproblem, which can harm the diversity of the current population. The STM model of MOEA/D-STM instead performs a bidirectional selection between subproblems and solutions. Specific details are shown in Figure 12. First, according to the scalar function, a subproblem $P_0$ looks for its best solution, and two cases can arise: either this solution has not yet been chosen by any other subproblem, or it has already been chosen as the best solution of another subproblem $P_1$. In the latter case, the solution itself gets to choose between the subproblems. Let $d_0$ denote the distance between the solution and subproblem $P_0$, and $d_1$ the distance between the solution and subproblem $P_1$. If $d_0$ is smaller than $d_1$, the solution is assigned to subproblem $P_0$, and subproblem $P_1$ gives the solution up and looks for its second-best solution according to the scalar function. This second-best solution may again fall into either of the two cases above, which are handled in the same way.
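The proposal-and-rejection process described above is essentially deferred acceptance. Below is a compact sketch for the square case (equal numbers of subproblems and solutions), assuming the matrices `g` (scalar function value of solution `s` for subproblem `p`) and `d` (distance between solution `s` and subproblem `p`) are precomputed; the function name and data layout are ours:

```python
def stm_match(g, d):
    """Deferred-acceptance sketch of the STM model.  Subproblem p prefers
    solutions with smaller g[p][s]; solution s prefers subproblems with
    smaller d[s][p].  Returns a dict: subproblem -> matched solution."""
    n = len(g)
    # each subproblem's preference list over solutions (best first)
    pref = [sorted(range(n), key=lambda s: g[p][s]) for p in range(n)]
    nxt = [0] * n        # next solution each subproblem will propose to
    match = {}           # solution -> subproblem currently holding it
    free = list(range(n))
    while free:
        p = free.pop()
        s = pref[p][nxt[p]]          # best solution p has not yet proposed to
        nxt[p] += 1
        if s not in match:
            match[s] = p             # solution was unclaimed
        elif d[s][p] < d[s][match[s]]:
            free.append(match[s])    # s prefers the closer subproblem p;
            match[s] = p             # the previous holder becomes free again
        else:
            free.append(p)           # p is rejected and tries its next choice
    return {p: s for s, p in match.items()}
```

When two subproblems both want the same solution, the solution keeps the closer subproblem and the other moves on to its second choice, exactly the conflict-resolution step in the text.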
Figure 12:

Simple example of STM, where $g(x|λ,z^*)$ denotes the scalar function and $d$ denotes the distance between a solution and a subproblem. $\{S_2,S_3,S_4,S_1,S_5\}$ denotes the ascending order of the scalar function $g(x|λ,z^*)$; that is, $S_2$ is the best solution of the subproblem $P_1$, and so on. $\{P_2,P_4,P_3,P_1,P_5\}$ denotes the ascending order of the distances between the solution $S_1$ and all subproblems.


Although the two algorithms are both essentially decomposition-based evolutionary multiobjective optimization algorithms, they differ in the mechanism for generating reference points, the decomposition method, the solution selection mechanism, and the solution updating strategy, as discussed next:

1. When the number of objectives is 8 or more, the proposed algorithm adopts the two-layered method to generate the reference vectors, while MOEA/D-STM adopts the simplex-lattice design method.

2. The decomposition method in MOEA/D-STM is the weighted TCH approach while MOEA/D-CSM uses PBI.

3. The selection of MOEA/D-CSM is based on the correlation between the reference points and solutions, and MOEA/D-STM is based on the relationship between the subproblems and solutions.

4. The genetic operators in MOEA/D-STM are the differential evolution (DE) operator and polynomial-based mutation (PM), whereas MOEA/D-CSM uses PM and SBX to update the population. Moreover, in MOEA/D-CSM the parents that undergo the genetic operators are selected by the proposed correlation-based mating selection, while MOEA/D-STM selects parent solutions randomly to carry out DE and PM.

5. In the proposed algorithm, after the offspring population is obtained, a set of solutions is selected by the environmental selection operator to form the population of the next generation. In MOEA/D-STM, by contrast, for each subproblem some solutions are first selected from the parent and offspring populations according to nondominated sorting, and then the subproblem and its corresponding solutions are updated by the STM model.
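For reference, the two decomposition functions contrasted in item 2 can be sketched in their generic textbook forms (these are not the authors' code; $θ=5$ matches the penalty parameter used earlier in this article):

```python
import numpy as np

def pbi(f, w, z, theta=5.0):
    """Penalty boundary intersection value of objective vector f for
    reference vector w, ideal point z, and penalty parameter theta."""
    g = np.asarray(f) - np.asarray(z)
    w = np.asarray(w) / np.linalg.norm(w)
    d1 = g @ w                        # distance along the reference direction
    d2 = np.linalg.norm(g - d1 * w)   # distance perpendicular to it
    return d1 + theta * d2

def weighted_tch(f, w, z):
    """Weighted Tchebycheff value, the scalarization used by MOEA/D-STM."""
    return float(np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z))))
```

PBI rewards both progress along a reference direction and closeness to it, while the weighted Tchebycheff function minimizes the worst weighted deviation from the ideal point.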

### 5.2  Comparison of Performance of MOEA/D-STM Using Two Different Crossover Operators

MOEA/D-STM introduced the STM mechanism into the MOEA/D-DRA (Zhang et al., 2009) framework, and MOEA/D-DRA uses the differential evolution (DE) operator, which was proposed for continuous multiobjective optimization test instances with arbitrary prescribed PS shapes. MOEA/D-STM, which also uses the DE operator, may therefore be appropriate for the UF (Zhang et al., 2008) test problems but not for the DTLZ test problems. In this subsection, to demonstrate that the DE operator is suitable for the UF test problems while the SBX operator is suitable for the DTLZ test problems, we examine the performance of MOEA/D-STM when the SBX operator is incorporated into it.

All parameter settings of MOEA/D-STM when using the DE operator are the same as in the original paper (Li et al., 2014). However, the original paper neither applied the SBX operator in MOEA/D-STM nor tested it on the DTLZ problems. Hence, when the SBX operator is used in MOEA/D-STM, the crossover probability is $p_c=1.0$ and its distribution index is set to 20. When MOEA/D-STM is applied to the DTLZ test problems, the population size, the number of reference points, and the maximum number of generations are as given in Section 4. Table 5 shows the mean IGD values and their standard deviations over 20 independent runs on the UF and DTLZ test problems.

Table 5:

Mean IGD values and their standard deviations obtained by MOEA/D-STM + SBX and MOEA/D-STM + DE on the UF and DTLZ test problems, where the best result for each problem is shown in boldface.

| Problem | $M$ | MOEA/D-STM + SBX | MOEA/D-STM + DE | $p$-value |
|---|---|---|---|---|
| UF1 | 2 | **4.288E-02(2.446E-04)** | 4.686E-02(9.617E-03) | 0.0571 |
| UF2 | 2 | 1.945E-02(1.426E-05) | **3.321E-03(1.369E-07)** | |
| UF3 | 2 | 7.793E-02(1.405E-04) | **4.887E-02(9.445E-03)** | 0.0014 |
| UF4 | 2 | **3.713E-02(2.437E-07)** | 4.532E-02(5.474E-04) | 0.2536 |
| UF5 | 2 | 2.533E-01(2.729E-06) | **2.331E-01(1.475E-03)** | |
| UF6 | 2 | 9.758E-02(8.681E-04) | **8.706E-02(1.384E-03)** | 0.0545 |
| UF7 | 2 | **1.847E-02(8.603E-06)** | 2.870E-02(3.884E-03) | 0.0326 |
| UF8 | 3 | 7.656E-02(1.423E-05) | **7.641E-02(1.629E-05)** | 0.2878 |
| UF9 | 3 | 8.753E-02(3.915E-05) | **6.661E-02(6.823E-05)** | |
| UF10 | 3 | 2.246E+00(1.140E-02) | **2.035E+00(1.228E-02)** | |
| DTLZ1 | 3 | **3.799E-03(5.926E-06)** | 1.860E-02(9.141E-05) | |
| DTLZ2 | 3 | **1.346E-04(8.323E-05)** | 5.124E-02(1.476E-04) | |
| DTLZ3 | 3 | 1.477E+00(4.082E+00) | **5.220E-02(3.714E-06)** | |
| DTLZ4 | 3 | **1.346E-02(1.161E-05)** | 5.127E-02(1.150E-06) | 0.0025 |

From Table 5, we can see that on the ten UF test problems MOEA/D-STM with the DE operator is better than with the SBX operator, except on UF1, UF4, and UF7. For the four DTLZ problems with 3 objectives, MOEA/D-STM with the SBX operator outperforms the DE variant in three cases: DTLZ1, DTLZ2, and DTLZ4. From these comparisons, we conclude that MOEA/D-STM with the DE operator is more appropriate for the UF problems than for the DTLZ problems.

Furthermore, in order to analyze the results statistically, we adopt the Wilcoxon signed-rank test on the IGD results of the 20 runs obtained by the two top-ranked algorithms (rank $=$ 1 and rank $=$ 2) at the significance level $α=0.05$; all $p$-values are given in the last column of Table 5.

It is easy to see that MOEA/D-STM + DE (when its rank $=$ 1) is significantly better than MOEA/D-STM + SBX on 5 of the 10 UF test problems, while MOEA/D-STM + SBX performs significantly better than MOEA/D-STM + DE on 3 of the 4 DTLZ problems. From these statistical results, we conclude that MOEA/D-STM + SBX is more suitable for solving the DTLZ problems, as we expected.
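The paired comparison above can be reproduced with a few lines of code. The following sketch implements the two-sided Wilcoxon signed-rank test using the standard normal approximation (the IGD values shown are illustrative placeholders, not results from this article):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired two-sided Wilcoxon signed-rank test (normal approximation)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank absolute differences, averaging the ranks of ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    # W+: sum of ranks of the positive differences.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return min(w_plus, n * (n + 1) / 2 - w_plus), p

# 20 paired IGD values per algorithm (hypothetical numbers).
igd_a = [0.040 + 0.0001 * i for i in range(20)]  # e.g., MOEA/D-STM + DE
igd_b = [0.045 + 0.0001 * i for i in range(20)]  # e.g., MOEA/D-STM + SBX
stat, p = wilcoxon_signed_rank(igd_a, igd_b)
# Here every paired difference favors algorithm A, so p is well below 0.05.
```

With only 20 runs per algorithm an exact-distribution variant of the test is also common; the normal approximation above is the simpler sketch.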

Since most of the parameters used in MOEA/D-STM are taken from its original paper, in which the ideal point is not the origin, MOEA/D-CSM applies the same method of constructing the ideal point as MOEA/D-STM in this section in order to make a fair comparison. MOEA/D-STM uses the minimum value of each objective in the current population as the ideal point; that is, $z_i^* = \min\{f_i(x) \mid x \in P\}$ for all $i \in \{1, \ldots, M\}$, where $P$ is the set of solutions created so far. The population size, the number of reference points, and the maximum number of generations used in all experiments are given in Tables 1 and 3.
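Constructing this ideal point amounts to a component-wise minimum over the population's objective matrix. A minimal sketch with toy objective values (not data from the paper):

```python
# Objective matrix of the current population P:
# one row per solution, one column per objective (toy values).
F = [
    [0.2, 0.9, 0.5],
    [0.6, 0.1, 0.7],
    [0.4, 0.5, 0.3],
]

# z_i* = min{ f_i(x) | x in P }: component-wise minimum over the population.
z_star = [min(column) for column in zip(*F)]
print(z_star)  # [0.2, 0.1, 0.3]
```

In practice the minimum is maintained incrementally, updating each $z_i^*$ whenever a newly created solution improves it.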

To compare the performance of the proposed algorithm with MOEA/D-STM + SBX, we applied both to the DTLZ problems with 3, 5, and 8 objectives. Table 6 shows the mean IGD values and their standard deviations over 20 independent runs. Ranks 1 and 2, assigned according to the average IGD values, are analyzed using the Wilcoxon signed-rank test with the significance level $\alpha$ set to 0.05.

Table 6:

Mean IGD values and standard deviations obtained by MOEA/D-CSM and MOEA/D-STM + SBX on the DTLZ test problems, where the best result for each problem is shown in boldface.

| Problem | M | MOEA/D-CSM | MOEA/D-STM + SBX | p-value |
| --- | --- | --- | --- | --- |
| DTLZ1 | 3 | **1.074E-03 (2.141E-03)** | 3.799E-03 (5.926E-06) | |
| | 5 | **2.779E-04 (1.177E-05)** | 1.349E-02 (1.182E-06) | |
| | 8 | **4.153E-03 (5.070E-07)** | 4.684E-03 (2.105E-06) | 0.0793 |
| DTLZ2 | 3 | 6.553E-04 (9.866E-08) | **1.346E-04 (8.323E-05)** | |
| | 5 | **7.259E-04 (2.074E-04)** | 1.355E-03 (2.984E-06) | |
| | 8 | **5.932E-03 (6.145E-05)** | 1.452E-02 (8.937E-06) | 0.0026 |
| DTLZ3 | 3 | **3.175E-03 (2.866E-03)** | 1.477E+00 (4.082E+00) | |
| | 5 | **1.059E-03 (6.656E-04)** | 1.355E-03 (5.873E-05) | 0.0017 |
| | 8 | **1.003E-03 (1.695E-03)** | 4.606E-03 (9.523E-06) | |
| DTLZ4 | 3 | **8.604E-05 (1.079E-05)** | 1.346E+00 (1.161E-05) | |
| | 5 | **8.130E-05 (1.373E-05)** | 1.527E-01 (5.008E-05) | |
| | 8 | **4.233E-03 (7.382E-04)** | 2.649E-01 (1.132E-05) | 0.0031 |

From Table 6, we can see that MOEA/D-CSM performs better than MOEA/D-STM in eleven of the twelve cases on the DTLZ1-DTLZ4 problems. In particular, on seven test instances (DTLZ1 with 5 objectives, DTLZ2 with 5 and 8 objectives, DTLZ3 with 3 objectives, and DTLZ4 with 3, 5, and 8 objectives), MOEA/D-CSM largely outperforms MOEA/D-STM, and on five test instances MOEA/D-CSM also obtains a lower standard deviation. This does not mean that MOEA/D-STM performs poorly; as mentioned in its original paper, it is more suitable for the UF test problems. That is, MOEA/D-STM is appropriate for multiobjective optimization problems with arbitrary prescribed PS shapes (i.e., the UF test problems), but it may not be appropriate for many-objective optimization problems with simple PS shapes.

Furthermore, to perform a statistical analysis of the results, we adopt the Wilcoxon signed-rank test on the IGD values of the 20 runs obtained by the two algorithms (rank $=$ 1 and rank $=$ 2) under a significance level $\alpha = 0.05$; all $p$-values are given in the last column of Table 6.

In summary, according to the obtained $p$-values, our proposed MOEA/D-CSM is significantly better than MOEA/D-STM + SBX on 10 of the 12 test instances, and there is only one test instance on which MOEA/D-STM + SBX (when its rank $=$ 1) is significantly better than MOEA/D-CSM. The statistical results thus also show the superiority of the proposed MOEA/D-CSM.

Furthermore, to make a visual comparison between MOEA/D-CSM, MOEA/D-STM + SBX, and MOEA/D-STM + DE, we show the final PFs obtained by the three algorithms on all the 3-objective test problems [UF8-UF10, DTLZ1 (3), DTLZ2 (3), DTLZ3 (3), DTLZ4 (3)] in Figures 13 to 19.
Figure 13:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on UF8.

Figure 14:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on UF9.

Figure 15:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on UF10.

Figure 16:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on DTLZ1(3).

Figure 17:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on DTLZ2(3).

Figure 18:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on DTLZ3(3).

Figure 19:

The final PFs obtained by (a) MOEA/D-CSM, (b) MOEA/D-STM + SBX, (c) MOEA/D-STM + DE on DTLZ4(3).

Figures 13 to 19 show that the proposed MOEA/D-CSM has good convergence performance on four test problems [UF10, DTLZ1(3), DTLZ2(3), and DTLZ4(3)], and that it can obtain a set of reasonable, uniformly distributed solutions very close to the ideal PF when solving the DTLZ test problems, whereas MOEA/D-STM + DE is more suitable for the UF problems.

The proposed correlative selection mechanism is based on PBI decomposition. Therefore, this mechanism could be embedded into any PBI-based MOEA/D variant. For example, MOEA/D-RBF (Zapotecas-Martínez and Coello, 2013), which uses the PBI decomposition mechanism, can adopt the proposed correlative selection mechanism directly, while for MOEA/D-EGO (Zhang et al., 2010), the proposed correlative selection mechanism may need to be adjusted, since that algorithm adopts the Tchebycheff decomposition mechanism.
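The PBI scalarizing function that such variants share (Zhang and Li, 2007) combines a convergence distance $d_1$ along a reference direction with a perpendicular diversity distance $d_2$. A minimal sketch follows; the penalty value theta = 5.0 is a commonly used default, not a setting prescribed by this article:

```python
import math

def pbi(f, w, z_star, theta=5.0):
    """PBI scalarizing value of objective vector f for reference
    direction w, ideal point z_star, and penalty parameter theta."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    w_norm = math.sqrt(sum(wi * wi for wi in w))
    # d1: length of the projection of (f - z*) onto the direction w.
    d1 = sum(di * wi for di, wi in zip(diff, w)) / w_norm
    # d2: perpendicular distance from (f - z*) to the line through
    # z* in the direction of w.
    d2 = math.sqrt(sum((di - d1 * wi / w_norm) ** 2
                       for di, wi in zip(diff, w)))
    return d1 + theta * d2

# A point lying exactly on the reference direction has d2 = 0,
# so its PBI value reduces to the convergence distance d1 alone.
g = pbi([0.5, 0.5], [1.0, 1.0], [0.0, 0.0])
```

A larger theta pushes solutions toward their reference directions (favoring diversity), while a smaller theta emphasizes convergence.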

## 6  Conclusion

This article proposes a decomposition-based evolutionary algorithm with a correlative selection mechanism (MOEA/D-CSM) to solve many-objective optimization problems. The new selection mechanism comprises mating selection and environmental selection, both based on correlation. To support this mechanism, the correlation between a reference point and a solution, together with three entities of a reference point and three entities of a solution, is introduced. Based on these entities, mating selection and environmental selection are used to evolve and update the population. We have carried out systematic experiments comparing our proposed algorithm with three other elitist many-objective evolutionary algorithms (NSGA-III, MOEA/D, and RVEA). Results show that the proposed MOEA/D-CSM produces satisfactory results on most problems considered in this study.

However, all the test problems adopted in this article have an identical range of values for each objective. According to the distribution of the reference points, we found that the decomposition-based approach used here is well suited to such problems. As part of our future work, we will improve our algorithm so that it can properly solve many-objective optimization problems with a wider range of characteristics.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61876141 and 61373111), the Provincial Natural Science Foundation of Shaanxi of China (No. 2019JZ-26), and the Opening Project of Science and Technology on Reliability Physics and Application Technology of Electronic Component Laboratory (No. 614280620190403-1).

## References

Bader, J., Deb, K., and Zitzler, E. (2010). Faster hypervolume-based search using Monte Carlo sampling. In M. Ehrgott, B. Naujoks, T. Stewart, and J. Wallenius (Eds.), Multiple criteria decision making for sustainable energy and transportation systems, pp. 313-326. Berlin: Springer.

Bader, J., and Zitzler, E. (2011). HypE: An algorithm for fast hypervolume-based many-objective optimization. Evolutionary Computation, 19(1):45-76.

Cheng, R., Jin, Y., Narukawa, K., and Sendhoff, B. (2015). A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling. IEEE Transactions on Evolutionary Computation, 19(6):838-856.

Cheng, R., Jin, Y., Olhofer, M., and Sendhoff, B. (2016). A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 20(5):773-791.

Chikumbo, O., Goodman, E., and Deb, K. (2012). Approximating a multi-dimensional Pareto front for a land use management problem: A modified MOEA with an epigenetic silencing metaphor. In 2012 IEEE Congress on Evolutionary Computation, pp. 1-9.

Coello, C. A. C., and Lamont, G. B. (2004). Applications of multi-objective evolutionary algorithms, Vol. 1. Singapore: World Scientific.

Coello, C. A. C., and Cruz Cortés, N. (2005). Solving multiobjective optimization problems using an artificial immune system. Genetic Programming and Evolvable Machines, 6:163-190.

Das, I., and Dennis, J. E. (1998). Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3):631-657.

Deb, K., and Agrawal, R. B. (1994). Simulated binary crossover for continuous search space. Complex Systems, 9(3):1-15.

Deb, K., and Jain, H. (2014). An evolutionary many-objective optimization algorithm using reference point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577-601.

Deb, K., Mohan, M., and Mishra, S. (2005a). Evaluating the $\epsilon$-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evolutionary Computation, 13(4):501-525.

Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197.

Deb, K., Thiele, L., Laumanns, M., and Zitzler, E. (2005b). Scalable test problems for evolutionary multiobjective optimization. Berlin: Springer.

Everson, R. M., Fieldsend, J. E., and Singh, S. (2002). Full elite sets for multi-objective optimisation. In I. C. Parmee (Ed.), Adaptive computing in design and manufacture V, pp. 343-354. London: Springer.

He, C., Pan, L., Xu, H., Tian, Y., and Zhang, X. (2016). An improved reference point sampling method on Pareto optimal front. In 2016 IEEE Congress on Evolutionary Computation, pp. 5230-5237.

Hollander, M., and Wolfe, D. A. (1999). Nonparametric statistical methods. Hoboken, NJ: John Wiley & Sons.

Ishibuchi, H., Tsukamoto, N., and Nojima, Y. (2008). Evolutionary many-objective optimization: A short review. In IEEE Congress on Evolutionary Computation, pp. 2419-2426.

Jain, H., and Deb, K. (2014). An evolutionary many-objective optimization algorithm using reference point-based nondominated sorting approach, Part II: Handling constraints and extending to an adaptive approach. IEEE Transactions on Evolutionary Computation, 18(4):602-622.

Jiang, S., and Yang, S. (2017). A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization. IEEE Transactions on Evolutionary Computation, 21(3):329-346.

Jiang, S., Zhang, J., Ong, Y.-S., Zhang, A. N., and Tan, P. S. (2015). A simple and fast hypervolume indicator-based multiobjective evolutionary algorithm. IEEE Transactions on Cybernetics, 45(10):2202-2213.

Jin, Y., Olhofer, M., and Sendhoff, B. (2001). Dynamic weighted aggregation for evolutionary multi-objective optimization: Why does it work and how? In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1042-1049.

Knowles, J., and Corne, D. (1999). The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the 1999 Congress on Evolutionary Computation, Vol. 1, pp. 98-105.

Laumanns, M., Thiele, L., Deb, K., and Zitzler, E. (2002). Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary Computation, 10(3):263-282.

Li, B., Li, J., Tang, K., and Yao, X. (2015). Many-objective evolutionary algorithms. ACM Computing Surveys, 48(1):13.

Li, H., and Zhang, Q. (2009). Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Transactions on Evolutionary Computation, 13(2):284-302.

Li, K., Deb, K., Zhang, Q., and Kwong, S. (2015). An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Transactions on Evolutionary Computation, 19(5):694-716.

Li, K., Zhang, Q., Kwong, S., Li, M., and Wang, R. (2014). Stable matching-based selection in evolutionary multiobjective optimization. IEEE Transactions on Evolutionary Computation, 18(6):909-923.

Liu, H.-L., Gu, F., and Zhang, Q. (2014). Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Transactions on Evolutionary Computation, 18(3):450-455.

Menchaca-Méndez, A., Montero, E., and Zapotecas-Martínez, S. (2018). An improved S-metric selection evolutionary multi-objective algorithm with adaptive resource allocation. IEEE Access, 6:63382-63401.

Santner, T. J., Williams, B. J., and Notz, W. I. (2013). The design and analysis of computer experiments. Berlin: Springer.

Sheskin, D. J. (2004). Handbook of parametric and nonparametric statistical procedures. Chapman & Hall/CRC.

Trivedi, A., Srinivasan, D., Sanyal, K., and Ghosh, A. (2016). A survey of multiobjective evolutionary algorithms based on decomposition. IEEE Transactions on Evolutionary Computation, 21(3):440-462.

von Lücken, C., Barán, B., and Brizuela, C. (2014). A survey on multi-objective evolutionary algorithms for many-objective problems. Computational Optimization and Applications, 58(3):707-756.

Wang, Z., Ong, Y.-S., and Ishibuchi, H. (2019). On scalable multiobjective test problems with hardly dominated boundaries. IEEE Transactions on Evolutionary Computation, 23(2):217-231.

Wang, Z., Zhang, Q., Li, H., Ishibuchi, H., and Jiao, L. (2017). On the use of two reference points in decomposition based multiobjective evolutionary algorithms. Swarm and Evolutionary Computation, 34:89-102.

Wang, Z., Zhang, Q., Zhou, A., Gong, M., and Jiao, L. (2016). Adaptive replacement strategies for MOEA/D. IEEE Transactions on Cybernetics, 46(2):474-486.

Yang, S., Li, M., Liu, X., and Zheng, J. (2013). A grid-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 17(5):721-736.

Zapotecas-Martínez, S., Aguirre, H. E., Tanaka, K., and Coello, C. A. C. (2015). On the low-discrepancy sequences and their use in MOEA/D for high-dimensional objective spaces. In 2015 IEEE Congress on Evolutionary Computation, pp. 2835-2842.

Zapotecas-Martínez, S., and Coello, C. A. C. (2013). MOEA/D assisted by RBF networks for expensive multi-objective optimization problems. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 1405-1412.

Zapotecas-Martínez, S., López-Jaimes, A., and García-Nájera, A. (2019). LIBEA: A Lebesgue indicator-based evolutionary algorithm for multi-objective optimization. Swarm and Evolutionary Computation, pp. 404-419.

Zhang, Q., and Leung, Y.-W. (1999). An orthogonal genetic algorithm for multimedia multicast routing. IEEE Transactions on Evolutionary Computation, 3(1):53-62.

Zhang, Q., and Li, H. (2007). MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6):712-731.

Zhang, Q., Liu, W., and Li, H. (2009). The performance of a new version of MOEA/D on CEC '09 unconstrained MOP test instances. In IEEE Congress on Evolutionary Computation, Vol. 1, pp. 203-208.

Zhang, Q., Liu, W., Tsang, E., and Virginas, B. (2010). Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Transactions on Evolutionary Computation, 14(3):456-474.

Zhang, Q., Zhou, A., Zhao, S., Suganthan, P. N., Liu, W., and Tiwari, S. (2008). Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical Report CEC-264. University of Essex, Colchester, UK, and Nanyang Technological University, Singapore. Special session on performance assessment of multi-objective optimization algorithms.

Zitzler, E., Laumanns, M., Thiele, L., et al. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. Eurogen, 103:95-100.