Abstract

For a many-objective optimization problem with redundant objectives, we propose two novel objective reduction algorithms, called LHA and NLHA, for linearly and nonlinearly degenerate Pareto fronts, respectively. The main idea of the proposed algorithms is to use a hyperplane with non-negative sparse coefficients to roughly approximate the structure of the PF. This approach is quite different from previous objective reduction algorithms based on correlation or dominance structure. In particular, in NLHA, in order to reduce the approximation error, we transform a nonlinearly degenerate Pareto front into a nearly linearly degenerate Pareto front via a power transformation. In addition, an objective reduction framework integrating a magnitude adjustment mechanism and a performance metric σ* are also proposed. Finally, to demonstrate the performance of the proposed algorithms, comparative experiments are conducted with two correlation-based algorithms, LPCA and NLMVUPCA, and with two dominance-structure-based algorithms, PCSEA and greedy δ-MOSS, on three benchmark problems: DTLZ5(I,M), MAOP(I,M), and WFG3(I,M). Experimental results show that the proposed algorithms are more effective.

1  Introduction

There are many practical optimization problems that exhibit multiple conflicting objectives (Liu, Gu, Cheung et al., 2014; Zhu et al., 2016), which are known as multi-objective optimization problems (MOPs). The solution to such a MOP is a Pareto set (PS); that is, the solutions in the PS represent optimal tradeoffs among all the objectives. The set of points corresponding to the PS in the objective space is called a Pareto front (PF). Because the solution to a MOP is a set rather than a single optimal solution, evolutionary multi-objective algorithms (EMOAs) have become an effective and popular way to solve such problems, given that an EMOA maintains a population (set) of solutions throughout its execution. As a result, many EMOAs have been proposed, which can be classified into four main categories: Pareto-domination-based algorithms such as NSGA-II (Deb et al., 2002), SPEA2 (Zitzler et al., 2001), and PESA (Corne et al., 2000); decomposition-based algorithms such as MOEA/D (Zhang and Li, 2007) and MOGLS (Jaszkiewicz, 2002); grid-based algorithms (Deb et al., 2005; Hernández-Díaz et al., 2007); and indicator-based algorithms (Bader and Zitzler, 2011; Beume et al., 2007; Zitzler and Künzli, 2004). Unfortunately, although these EMOAs have strong search ability for MOPs with two or three objectives, most of them deteriorate severely for MOPs with more than three objectives (Ishibuchi et al., 2008), which are often called many-objective optimization problems (MaOPs).

The curse of dimensionality in MaOPs has brought great challenges to EMOAs, resulting in poor convergence and diversity on the PF, in the sense that the sampled set of non-dominated points produced includes many points that are not actually on the Pareto front, and the points are far from being evenly spread over that front. As the number of objectives increases, Pareto dominance-based algorithms become increasingly ineffective because of insufficient selection pressure toward the PF and the heavy computational cost of maintaining population diversity (Purshouse and Fleming, 2003; Singh et al., 2008a). For decomposition-based algorithms, how to deal with the dilemma between limited computational resources and the dramatically increasing number of weight vectors is still an open question (He and Yen, 2016; Deb and Jain, 2014). As for grid-based and indicator-based algorithms, the data storage and computational time required increase exponentially with the number of objectives (Yang et al., 2013). In addition, the difficulty of visualizing the PF also frustrates the development of EMOAs (Ishibuchi et al., 2008). In order to improve the performance of the above EMOAs, a number of efforts have been made, including methods that modify the concept of Pareto dominance (Batista et al., 2011; Ikeda et al., 2001; Laumanns et al., 2002), improved grid-based methods (Yang et al., 2013), diversity-based methods (Singh et al., 2008a; Deb and Jain, 2014), and a path-control strategy based on decomposition methods (Roy et al., 2014). Even though these methods have brought considerable improvements, they are not yet capable of handling many MaOPs efficiently (Singh et al., 2011).

However, given that a MaOP has many objectives, there may exist some correlated objectives (Cheung and Gu, 2014). As we know, correlated objectives are more likely to reflect duplicative information, which means that some objectives may be redundant, while others are essential. Furthermore, if a MaOP is analyzed with redundant objectives, its PF may not be well distributed over a wide range of the objective space (Cheung et al., 2016); that is, its PF is degenerate. Thus, it is clearly desirable to eliminate redundant objectives before dealing with the problem (Gal and Hanne, 1999). In recent years, many effective objective reduction algorithms have been proposed, which can be classified into two main categories: correlation-based and dominance-structure-based objective reduction algorithms.

The correlation-based objective reduction algorithms have turned out to be popular and effective. Their core idea is to extract the essential objectives based on the correlations among the objectives. For example, Deb and Saxena (2005) proposed a correlation-based objective reduction algorithm based on principal component analysis (PCA), in which PCA is performed on the final non-dominated points obtained by NSGA-II after many generations. The first few eigenvalues, in descending order, are chosen if their cumulative sum is above a prespecified threshold, and the corresponding eigenvectors are then used to remove redundant objectives. Building on this work, the linear and nonlinear objective reduction algorithms L-PCA (LPCA) and NL-MVU-PCA (NLMVUPCA), respectively, are presented in Saxena et al. (2013). Compared with earlier work (Deb and Saxena, 2005), their distinctive contributions include higher generality in terms of the number of objectives and higher robustness in dealing with noisy data. Jaimes et al. (2009) developed an objective reduction technique based on feature selection. This approach first defines a metric to evaluate the degree of conflict between any two objectives. These objectives are then divided into homogeneous neighborhoods based on a similarity measure, with each neighborhood containing several objectives. Thereafter, the most compact neighborhoods are chosen; the center objective of each chosen neighborhood is preserved while the other objectives are dropped. Two papers (Cheung and Gu, 2014; Cheung et al., 2016) consider that the more negative the correlation between two objectives, the more conflict exists between them. They therefore formulate the essential objectives as non-negative linear combinations of the original objectives, with the combination weights determined by the correlations between each pair of essential objectives. In so doing, the authors claim the advantage that the Pareto solutions of the reduced problem coincide with those of the original problem.

Dominance-structure-based objective reduction algorithms are another widely used approach. They focus mainly on studying the effect of the objective set on the dominance relations among the non-dominated solutions. As a consequence, if one objective set preserves more non-dominated solutions than another, it is the preferred choice. Brockhoff and Zitzler (2006; 2009) investigate how adding and omitting an objective affect the problem characteristics in two proposed objective reduction methods called δ-MOSS and k-EMOSS. δ-MOSS aims to find a minimal objective subset for a given error δ, while k-EMOSS tries to find k objectives with minimal error, where k is a pre-specified number. In a similar way, a dominance-structure-based objective reduction algorithm called the Pareto corner search evolutionary algorithm (PCSEA) is proposed in Singh et al. (2011). It first finds the corner points of the PF to capture the overall features of the PF, and then determines whether an objective is essential or redundant by observing the change in the proportion of Pareto solutions after deleting that objective. Extensive experiments have shown that PCSEA has a strong ability to identify the essential objectives. Based on the work in Singh et al. (2011), Guo et al. (2015) proposed a new non-redundant objective set generation algorithm. It differs from PCSEA in two significant aspects. First, it applies a decomposition method to generate a small number of representative non-dominated points instead of corner points. Second, it uses conflicting objective pairs instead of single objectives to identify the essential objectives.

Although many experiments have demonstrated the strong abilities of the above objective reduction algorithms to deal with redundant MaOPs, their limitations have also been exposed. For the correlation-based algorithms, although they can efficiently extract objectives in an acceptable time, some of them, for example, LPCA and NLMVUPCA, require a series of complicated extraction processes. Additionally, as pointed out in Brockhoff and Zitzler (2009), they are not able to guarantee the Pareto-dominance relationships; it has not been shown that the correlation coefficient relationship is equivalent to conflict among objectives. Besides, even though the correlation coefficient can reflect a linear relationship between two objectives, it is difficult to describe the conflict relationship among all of the objectives involved using pairwise correlation coefficients. The dominance-structure-based objective reduction algorithms consider conflicting objectives from the viewpoint of the Pareto-dominance relationship, and as a result, they have the advantage of investigating the conflict among all objectives involved. But their computational requirements are commonly quite high and severely limit their application in practice.

In order to consider the conflict among all objectives involved and make the objective reduction more effective and easier, novel objective reduction algorithms designed for the linearly and nonlinearly degenerate PF are proposed, namely LHA and NLHA. They apply a hyperplane involving sparse, non-negative coefficients to approximate the degenerate PF. Since the degenerate PF may be located within either a lower-dimensional linear hyperplane or a nonlinear hyper-surface, we address the two cases with different algorithms, LHA or NLHA, respectively. The intrinsic difference between LHA and NLHA is that NLHA requires an accompanying power transformation, which roughly transforms the hyper-surface into a hyperplane. The main contributions of this article relate to the following:

  • Ease of implementation and parameterization: compared with the objective reduction algorithms LPCA and NLMVUPCA, the proposed algorithm is not only easy to implement, but also involves fewer parameters, which reduces the difficulty of parameterization. Moreover, it also converts the objective reduction optimization problem into a quadratic optimization problem with simple constraints, which makes it more effective.

  • Integrating the magnitude adjustment mechanism into the objective reduction framework: in general, when using an EMOA, it is easy to ignore the effect of the magnitudes of the objectives. But doing so does have a great deal of influence on the performance of an objective reduction algorithm, especially when the magnitudes of the objectives are very different.

  • Providing a performance metric σ* for evaluating the performance of objective reduction algorithms: in order to compare the performance of different objective reduction algorithms, motivated by the work in Cheung et al. (2016), we propose a new performance metric, namely σ*. The performance metric σ* evaluates two aspects of the extraction results, that is, the correct extraction percentages for both essential objectives and redundant objectives.

The remainder of this article is organized as follows. In Section 2, some basic concepts about objective reduction for MaOPs are briefly introduced. Section 3 first gives a detailed description of the proposed objective reduction approach and then proposes an objective reduction framework integrating magnitude adjustment. Section 4 presents experimental results for the proposed algorithms on the test benchmarks DTLZ5(I,M), MAOP(I,M), and WFG3(I,M). A further study of the proposed algorithm NLHA is conducted in Section 5. Finally, conclusions are drawn in Section 6.

2  The Basic Concept of the Objective Reduction Algorithm

A MOP can be described as follows:
$\min_{x \in \Omega} F(x) = (f_1(x), \ldots, f_M(x)),$
(1)
where $\Omega = \prod_{j=1}^{n} [a_j, b_j] \subseteq \mathbb{R}^n$ is the problem decision space, which for all $j = 1, 2, \ldots, n$ satisfies $-\infty < a_j < b_j < +\infty$, and $F(x)$ is an objective vector composed of $M$ objectives. Let $x_1, x_2 \in \Omega$; $x_1$ is said to dominate $x_2$ if and only if $f_j(x_1) \le f_j(x_2)$ for each $j \in \{1, 2, \ldots, M\}$ and $f_{j_1}(x_1) < f_{j_1}(x_2)$ for at least one $j_1 \in \{1, 2, \ldots, M\}$. We say a point $x^*$ is a Pareto-optimal point if there exists no point in $\Omega$ that dominates $x^*$ in the objective space. The set of all Pareto-optimal points is called the Pareto set (PS), while its corresponding set of objective vectors is called the Pareto front (PF).

Consider two minimization problems of the form of Eq. (1); denote their objective sets as $F = \{f_1(x), \ldots, f_M(x)\}$ and $F' = \{f_{k_1}(x), \ldots, f_{k_m}(x)\}$, $k_i \in \{1, 2, \ldots, M\}$, and their Pareto sets as $PS(F)$ and $PS(F')$, respectively. In the special case that $PS(F) \cap PS(F') = PS(F)$, the PS of the original objective set $F$ can be generated by the objective subset $F'$, and in this case the objective subset $F'$ is regarded as an essential objective set. Furthermore, if there exists no objective set $F''$ that simultaneously satisfies the conditions $PS(F) \cap PS(F'') = PS(F)$ and $|F''| < |F'|$, where $|\cdot|$ counts the number of elements in a set, $F'$ is regarded as a minimum essential objective set. It should be noted that a minimum essential objective set is not necessarily unique. For example, the minimum essential objective set of test function DTLZ5(2,5) is not unique; both $\{f_4, f_5\}$ and $\{f_1, f_5\}$ are minimum essential objective sets.
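To make the dominance relation and the notion of an essential objective set concrete, the following minimal Python sketch (not part of the proposed algorithms; the function names are illustrative) checks whether a candidate subset of objective columns preserves every solution that is non-dominated under the full objective set.

```python
import numpy as np

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in at least one
    return np.all(a <= b) and np.any(a < b)

def nondominated_indices(F):
    # Indices of non-dominated rows of an (N x M) matrix of objective values
    return {i for i in range(F.shape[0])
            if not any(dominates(F[j], F[i]) for j in range(F.shape[0]) if j != i)}

def preserves_pareto_set(F, columns):
    # True if PS(F) is contained in PS(F'), i.e., PS(F) ∩ PS(F') = PS(F)
    return nondominated_indices(F) <= nondominated_indices(F[:, list(columns)])
```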

3  An Objective Reduction Framework for Linearly and Nonlinearly Degenerate PFs

3.1  The Objective Reduction Model for a Linearly Degenerate PF

Under certain smoothness assumptions, a MaOP with M objectives has a PF with M-1 dimensions (Li and Zhang, 2009), which also means that there exist M-1 essential objectives to the PF. But for a redundant MaOP, its PF may not span the entire objective space, so that the dimension of the PF may be much smaller than the dimension of the objective space.

For convenience, this article first analyzes one simple situation in which the PF is linearly degenerate. A linearly degenerate PF for an $M$-objective MaOP means that the PF is contained within a hyperplane of dimension lower than $M$. As shown in Figure 1a, three linearly degenerate PFs are constructed, all of which satisfy the equation $f_1(x) + f_2(x) + 0 \cdot f_3(x) = 1$. Because of the differences in objective $f_3$, which are given as $f_3 = 2f_1 + 0.5f_2$, $f_3 = f_1^3 + 2f_2^3$, and $f_3 = 2\log(f_1+1) + 2\log(f_2+1)$, the PFs have different shapes, namely a line and two curves. However, all of them are located within a two-dimensional hyperplane determined by objectives $f_1$ and $f_2$. That is to say, optimizing only objectives $f_1$ and $f_2$ yields a complete PF in all of these cases. As a result, $f_1$ and $f_2$ are essential objectives, while objective $f_3$ is redundant. Since all three of these different PFs lie in the hyperplane determined by the equation $f_1(x) + f_2(x) + 0 \cdot f_3(x) = 1$, this motivates us to use a hyperplane to approximate the conflict structure of the PF, and then to regard the objectives with non-zero coefficients as the essential objectives. Generally, a hyperplane can be described as Eq. (2):
$w_1 f_1 + w_2 f_2 + \cdots + w_M f_M = 1.$
(2)
Figure 1: Two typical degenerate PFs.

Therefore, an objective reduction model can be built as follows:
$\min_{w} \|w\|_0 \quad \text{s.t.} \quad \min_{w} \sum_{i=1}^{N} \Big(1 - \sum_{j=1}^{M} w_j f_j(x_i)\Big)^2, \quad \|w\|_0 \ge 2, \quad w_j \ge 0,\ j \in \{1, 2, \ldots, M\},$
(3)
where $w = (w_1, w_2, \ldots, w_M)^T$, the $\ell_0$ norm $\|\cdot\|_0$ counts the number of nonzero values in a vector, and $f_j(x_i)$ represents the value of solution $x_i$ on objective $f_j$. Eq. (3) implies that a hyperplane having both a smaller approximation error and fewer non-zero objective coefficients is the best approximation of the PF. The reason why any objective with a non-zero corresponding coefficient is an essential objective is illustrated via Theorem 1.
Theorem 1:

If there exists a subset $F' \subseteq F$ with coefficients $w_j > 0$ for $j \in F'$ such that every nondominated solution $x$ with respect to $F$ satisfies $\sum_{j \in F'} w_j f_j(x) = 1$, then we have $PS(F) \subseteq PS(F')$; that is to say, $PS(F) \cap PS(F') = PS(F)$.

Proof:

To prove $PS(F) \subseteq PS(F')$, we just need to prove that every $x \in PS(F)$ must belong to $PS(F')$. Assume that there exists a solution $x \in PS(F)$ that does not belong to $PS(F')$; then there must exist a solution $\tilde{x} \in PS(F')$ that dominates $x$ with respect to $F'$, that is to say, $f_i(\tilde{x}) \le f_i(x)$ for every objective $f_i \in F'$, and $f_j(\tilde{x}) < f_j(x)$ for at least one objective $f_j \in F'$. Now $\tilde{x}$ must either belong to $PS(F)$ or not. If $\tilde{x}$ belongs to $PS(F)$, it follows that $\sum_{j \in F'} w_j f_j(\tilde{x}) < \sum_{j \in F'} w_j f_j(x)$, that is to say, $1 < 1$, which is a contradiction. Therefore, $\tilde{x}$ does not belong to $PS(F)$, and there must exist a solution $\tilde{\tilde{x}} \in PS(F)$ that dominates $\tilde{x}$ with respect to $F$, so that in particular $f_j(\tilde{\tilde{x}}) \le f_j(\tilde{x})$ for every $f_j \in F'$. Combining this with the relation between $\tilde{x}$ and $x$, $\tilde{\tilde{x}}$ dominates $x$ with respect to $F'$; applying the same argument as above to $\tilde{\tilde{x}} \in PS(F)$ again yields $1 < 1$, a contradiction. Therefore, the original assumption cannot be true, and thus $x \in PS(F')$.

Unfortunately, Eq. (3) is impossible to solve (Luo et al., 2015). In order to relax the severe constraints of Eq. (3) and make the model tractable, we propose replacing Eq. (3) with Eq. (4):
$\min_{w} \sum_{i=1}^{N} \Big(1 - \sum_{j=1}^{M} w_j f_j(x_i)\Big)^2 + \lambda \|w\|_1 \quad \text{s.t.} \quad \|w\|_0 \ge 2, \quad w_j \ge 0,\ j \in \{1, 2, \ldots, M\},$
(4)
where $\lambda$ is a control constant. The meaning of Eq. (4) is to use a hyperplane to approximate the structure of the PF. It is worth noting that Eq. (4) can be regarded as a machine learning model; that is, it minimizes the error while regularizing the parameters (Tibshirani, 1996). The parameter $\lambda$ balances the approximation error against the $\ell_1$ norm. In addition, Eq. (4) is just a simple convex quadratic programming problem with simple constraints, and there are many mature theoretical results and software tools for solving it. In this article, a method provided in the widely used software MATLAB 2014a is used to solve it, namely the active-set method within the optimization function "quadprog" (Coleman and Li, 1996; Gould and Toint, 2004).
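The article solves Eq. (4) with MATLAB's quadprog; as a language-neutral illustration, the following minimal Python/NumPy-SciPy sketch (an assumption of this rewrite, not the authors' implementation) solves the same problem. Because $w \ge 0$, the $\ell_1$ norm reduces to the sum of the coefficients, so the problem is a smooth bound-constrained quadratic program; the $\|w\|_0 \ge 2$ constraint is handled afterwards by the framework of Section 3.3.

```python
import numpy as np
from scipy.optimize import minimize

def fit_hyperplane_weights(F, lam=0.1):
    """Sketch of Eq. (4): min_w ||1 - F w||^2 + lam * sum(w), subject to w >= 0.
    F is an (N x M) matrix of (normalized) objective values of non-dominated points."""
    N, M = F.shape
    ones = np.ones(N)

    def obj(w):
        r = ones - F @ w
        return r @ r + lam * w.sum()

    def grad(w):
        return -2.0 * F.T @ (ones - F @ w) + lam

    w0 = np.full(M, 1.0 / M)                     # simple non-negative starting point
    res = minimize(obj, w0, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, None)] * M)
    return res.x
```

Objectives whose fitted coefficient exceeds the threshold later used in Algorithm 2 (0.1) would then be kept as candidate essential objectives.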

3.2  The Objective Reduction Model for a Nonlinearly Degenerate PF

As stated above, it is very common that a degenerate PF is located within a lower-dimensional hyper-surface rather than on a linear hyperplane, which means that the PF is nonlinearly degenerate. As shown in Figure 1b, three nonlinearly degenerate PFs are constructed, all of which satisfy the equation $f_1^2(x) + f_2^{1/4}(x) + 0 \cdot f_3 = 1$. Thus, by an analysis similar to that for the linearly degenerate PFs above, objectives $f_1$ and $f_2$ are the essential objectives and have a nonlinear conflict relationship. Generally, a nonlinearly degenerate PF can be roughly described as follows:
$w_1 f_1^{q_1} + w_2 f_2^{q_2} + \cdots + w_M f_M^{q_M} = 1,$
(5)
where $q_j$ $(j = 1, 2, \ldots, M)$ are constants determined by the shape of the real PF (Giagkiozis et al., 2014).

However, if Eq. (4) were applied to identify essential objectives for a nonlinearly degenerate PF, it would increase the risk of making mistakes. In fact, Eq. (4) is derived for the situation in which the PF is linearly degenerate. In such a situation, the approximation error represented by the first item in Eq. (4) is expected to be zero or very small, and then the second item, the regularizing parameter w1, helps to identify the essential objectives. But when it is applied to a nonlinearly degenerate PF, the approximation error may play a more important role than the regularizing parameter. In order to decrease the approximation error, this article adopts a method called power transformation, as proposed in the literature (Liu et al., 2012). The main purpose of using a power transformation is to transform a nonlinearly degenerate PF into a nearly linear PF, minimizing the impact of the nonlinearity. In this sense, it has an effect similar to the MVU (Saxena et al., 2013), but it is more easily realized than the MVU.

In the power transformation method, each objective is transformed by a power function as follows:
$g_j(f_j(x)) = f_j^{q}(x).$
(6)
Notably, because the power transformation is a strictly monotonic transformation, it does not change the dominance relationships among solutions. Finding a suitable parameter q that minimizes Eq. (7) allows transformation of the original PF into a closer-to-linear PF as much as possible, as illustrated in Figure 2.
$\min_{q} \sum_{i=1}^{N} \Big(\sum_{j=1}^{M} f_j^{q}(x_i) - 1\Big)^2.$
(7)
Figure 2: Plots of non-dominated solutions post-transform and pre-transform.

Therefore, an objective reduction model for a nonlinearly degenerate PF is proposed as follows:
$\min_{w} \sum_{i=1}^{N} \Big(1 - \sum_{j=1}^{M} w_j f_j^{q}(x_i)\Big)^2 + \lambda \|w\|_1 \quad \text{s.t.} \quad \|w\|_0 \ge 2, \quad w_j \ge 0,\ j \in \{1, 2, \ldots, M\}.$
(8)

In this article, a $q$ that minimizes the first term in Eq. (8) is chosen from among a discrete set of $q$ values, where the first (summation) term is the approximation error and the second term provides regularization. Compared with the hyperplane approximation for a linearly degenerate PF, the approximation error for a nonlinearly degenerate PF is much larger, which makes accurate identification of the essential objectives more difficult. Thus, in this sense, a $q$ that minimizes the first term in Eq. (8) should be more suitable. The process for identifying the most suitable $q_{best}$ is given in Algorithm 1.

[Algorithm 1]

Specifically, in line 4 of Algorithm 1, the approximation error is defined as the least-squares error minus the $\ell_1$ norm $\|w\|_1$. This is because the optimized value of Eq. (8) contains the $\ell_1$ norm $\|w\|_1$; that is to say, the minimum value of Eq. (8) reflects a balance between the least-squares error and $\|w\|_1$. Defining the approximation error as the least-squares error minus $\|w\|_1$ is therefore more reasonable.
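Under the same assumptions as the sketch in Section 3.1 (the helper fit_hyperplane_weights and the grid of q values are illustrative choices, the grid here mirroring Table 3), the scan over q described by Algorithm 1 could look as follows.

```python
import numpy as np

def search_q_best(F, lam=0.1, q_grid=None):
    """Sketch of Algorithm 1: for each candidate exponent q, apply the power
    transformation, fit Eq. (8), and keep the q whose approximation error
    (least-squares error minus ||w||_1) is smallest."""
    if q_grid is None:
        q_grid = [round(0.1 * k, 1) for k in range(1, 10)] + list(range(1, 11))
    best = None
    for q in q_grid:
        Fq = F ** q                             # power transformation, Eq. (6)
        w = fit_hyperplane_weights(Fq, lam)     # solve Eq. (8) for this q
        lse = np.sum((1.0 - Fq @ w) ** 2)       # least-squares part of Eq. (8)
        err = lse - np.sum(np.abs(w))           # approximation error, line 4 of Algorithm 1
        if best is None or err < best[0]:
            best = (err, q, w)
    return best[1], best[2]                     # q_best and its coefficient vector
```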

3.3  An Online Objective Reduction Framework with Magnitude Adjustment

In this subsection, an objective reduction framework is given, which integrates the magnitude adjustment mechanism to decrease the risk that the evolutionary populations converge into a proper subspace of the PF.

Similarly to the objective reduction frameworks in Deb and Saxena (2005) and Saxena et al. (2013), a non-dominated set X, obtained by running a state-of-the-art EMOA for a large number of iterations, is regarded as a broad sample of the real PF. This sample is then passed to the objective reduction algorithm to extract the essential objectives. Only the extracted essential objectives are used in the next stage of the evolutionary optimization process. At each stage, the objective reduction procedure terminates if the stopping criterion is met. In this article, the stopping criterion is that the current set of extracted objectives is the same as that of the previous stage. The details are given in Algorithm 2.

[Algorithm 2]

In lines 14 to 19 of Algorithm 2, the reason we select only objectives whose corresponding coefficient exceeds 0.1 is to reduce the effects of the poor convergence and diversity of the sampled PF. After all, the EMOAs invoked above are generally not particularly effective when applied to MaOPs. Additionally, we wish to avoid the situation in which the number of essential objectives identified by Eq. (4) is only one. In that case, a simple adjustment is made: the objective that is most negatively correlated with the identified essential objective is selected as the second essential objective.
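A minimal sketch of this selection step (the function name and the use of the sample correlation matrix are assumptions of this illustration) follows.

```python
import numpy as np

def select_essential(F, w, threshold=0.1):
    """Keep objectives whose coefficient exceeds `threshold` (lines 14-19 of
    Algorithm 2); if only one survives, add the objective most negatively
    correlated with it, using the non-dominated sample F (N x M)."""
    selected = [j for j in range(len(w)) if w[j] > threshold]
    if len(selected) == 1:
        k = selected[0]
        corr = np.corrcoef(F, rowvar=False)     # M x M pairwise objective correlations
        corr[k, k] = np.inf                     # exclude self-correlation
        selected.append(int(np.argmin(corr[k])))
    return sorted(selected)
```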

In Algorithm 2, a magnitude adjustment mechanism is integrated via the operations in lines 5 to 7. When applying an EMOA to a MOP or MaOP, it is very important not to ignore the effect of the magnitudes of the objectives, since they can have a great influence on the performance of an objective reduction algorithm: objectives with large magnitudes usually play more important roles than objectives with small magnitudes during evolutionary selection. In order to reduce the effect of magnitude differences among the objectives, a magnitude adjustment mechanism is adopted here. Instead of searching for the extreme point on each objective axis using an achievement scalarizing function (ASF) and then computing the intercept of each objective axis, as described in Deb and Jain (2014), a simpler method to compute the intercept $B_j$ with the $j$-th objective axis is given as Eq. (9).
$B_j = \max_{x \in X} f_j(x).$
(9)
It should be noted that X must be the non-dominated solutions in the current solution set. Then the objective functions can be normalized as follows:
$f_j^{*} = \dfrac{f_j(x) - z_j^{\min}}{B_j - z_j^{\min}},$
(10)
where $z_j^{\min}$ is the ideal (minimum) value of the $j$-th objective.
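A minimal sketch of this normalization, assuming the objective values of the current non-dominated solutions are stored row-wise in a matrix, is given below.

```python
import numpy as np

def normalize_objectives(F):
    """Magnitude adjustment of Eqs. (9)-(10): F is an (N x M) matrix of the
    objective values of the current non-dominated solutions X."""
    z_min = F.min(axis=0)                          # ideal point, per objective
    B = F.max(axis=0)                              # intercept B_j, Eq. (9)
    span = np.where(B > z_min, B - z_min, 1.0)     # guard against a zero denominator
    return (F - z_min) / span                      # Eq. (10)
```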

4  Experimental Design and Results

4.1  The Introduction of an Improved Performance Metric σ*

A performance metric plays an important role in characterizing an algorithm's performance, but few performance metrics exist for evaluating objective reduction algorithms. In Saxena et al. (2013), the success rate in identifying the essential objectives is used as the performance metric, but this metric makes no distinction in quality among algorithms when none of them identifies the essential objectives perfectly.

Cheung et al. (2016) develop an interesting performance metric σ to measure the quality of a given objective set F', which is defined as
$\sigma = \dfrac{|PS(F') \cap PS(F_{ess})|}{|PS(F_{ess})|},$
(11)
where $F_{ess}$ represents the essential objective set of the original objective set $F$ and $|PS(A)|$ is the cardinality of the set $PS(A)$. Obviously, $\sigma \in [0, 1]$. When $\sigma = 1$, the PS generated by $F'$ is the same as that of the original problem $F$. This metric provides a new way to measure the performance of objective reduction, and it allows the researcher to gradually tune the acceptable $\sigma$ for the PS. Nevertheless, it also has one shortcoming: it cannot distinguish the quality of two objective sets that generate the same PS but have different numbers of objectives. For example, if objective set $F' = \{f_1(x), f_2(x)\}$ and objective set $F'' = \{f_1(x), f_2(x), f_3(x)\}$ have the same PS, it is easy to see that $F'$ is minimal while the other is not; yet even though the cardinalities of the sets are different, they yield the same $\sigma$. Motivated by this observation, we propose an improved performance metric $\sigma^*$ for an arbitrary set of objectives $F'$, defined as:
$\sigma^* = \max_{F_{ess} \in F_E} \left( \sigma \times \Big(1 - \dfrac{|F' \setminus F_{ess}|}{|F \setminus F_{ess}|}\Big) \right),$
(12)
where $F_E$ is the set of all essential objective sets $F_{ess}$ with respect to $F$. The set $F_E$ must be used because the essential objective set may not be unique: for some optimization problems there exists more than one essential objective set, and among the candidates the one that preserves more of the domination relationships among the solutions is the more reasonable choice, so taking the maximum over $F_E$ yields a unique result. The second term in Eq. (12) acts as a penalty when the extracted set of objectives contains redundant objectives: the larger the number of redundant objectives that remain in the extracted set $F'$, the smaller the performance metric $\sigma^*$. Note that $\sigma^*$ remains in $[0, 1]$.
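A minimal sketch of $\sigma^*$ under the reconstruction of Eq. (12) used here (objective sets are represented as Python sets of indices, and the penalty counts the redundant objectives retained in $F'$ relative to all redundant objectives; these representational choices are assumptions) is given below.

```python
def sigma_star(PS_reduced, candidates, F_all, F_reduced):
    """Sketch of Eq. (12).

    PS_reduced : set of Pareto solutions generated by the reduced objective set F'
    candidates : list of (F_ess, PS_ess) pairs, one per essential objective set in F_E
    F_all      : the full objective set F (set of objective indices)
    F_reduced  : the extracted objective set F' (set of objective indices)
    """
    best = 0.0
    for F_ess, PS_ess in candidates:
        sigma = len(PS_reduced & PS_ess) / len(PS_ess)          # Eq. (11)
        redundant_total = len(F_all - F_ess)
        penalty = 1.0 - (len(F_reduced - F_ess) / redundant_total
                         if redundant_total else 0.0)
        best = max(best, sigma * penalty)
    return best
```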

4.2  Experimental Settings

In order to demonstrate the performance of the proposed algorithms, comparative experiments are conducted with several correlation-based and dominance-structure-based algorithms. We select LPCA and NLMVUPCA (Saxena et al., 2013) as representatives of the correlation-based algorithms, while PCSEA (Singh et al., 2011) and δ-MOSS based on a greedy method (greedy δ-MOSS) (Brockhoff and Zitzler, 2006; 2009) are chosen as representatives of the dominance-structure-based algorithms. All of these are widely used and recognized algorithms. The parameter settings for these algorithms are listed in Table 1 and are the same as those used by the authors in their original experiments.

Table 1:
Parameter settings for the objective reduction algorithms.
Algorithm Name | Parameter Item | Setting Value
PCSEA | | 0.8 (used in Singh et al., 2011)
greedy δ-MOSS | δ | 0 (used in Saxena et al., 2013)
NLMVUPCA | | M-1 (used in Saxena et al., 2013)
LPCA | | M-1 (used in Saxena et al., 2013)
LHA | λ | 
NLHA | λ | 

Clearly, the selection of benchmark functions usually plays an important role in evaluating the performance of objective reduction algorithms. To make the experiments more comprehensive, three test benchmarks with different characteristics have been selected, namely DTLZ5(I,M) (Ishibuchi et al., 2016), MAOP(I,M) (Cheung et al., 2016), and WFG3(I,M) (Huband et al., 2006), where I represents the number of essential objectives and M is the total number of objectives. Note that, unlike the experimental setting for WFG3 in the literature (Saxena et al., 2013; Guo et al., 2015), this article expands the number of its essential objectives by setting the degeneracy constants $A_{1:M-1}$ ($A_i$ is defined in the literature (Huband et al., 2006), and for each $A_i = 0$ the dimensionality of the PF is reduced by one). The detailed settings for WFG3(I,M) are listed in Table 2. For DTLZ5(I,M) and MAOP(I,M), we adopt the same experimental settings as used in the literature (Saxena et al., 2013; Cheung et al., 2016).

Table 2:
Parameter setting for test benchmark WFG3(I,M).
Test Name | m | M | k | l
WFG3(2, 15) 15 14 20 
WFG3(5, 15) 15 14 20 
WFG3(7, 15) 15 14 20 
WFG3(2, 20) 20 19 20 
WFG3(5, 20) 20 19 20 
WFG3(7, 20) 20 19 20 
WFG3(2, 25) 25 24 20 
WFG3(5, 25) 25 24 20 
WFG3(7, 25) 25 24 20 

It is worth noting that the PF of DTLZ5(I,M) degenerates into a nonlinear hyper-surface whose essential objectives satisfy the equation $f_k^2 + \sum_{j=M-I+2}^{M} f_j^2 = 1$, where $k \in \{1, 2, \ldots, M-I+1\}$. In contrast, the PF of WFG3(I,M) degenerates into a linear hyperplane whose essential objectives satisfy the equation $f_k + \sum_{j=M-I+2}^{M} f_j = 1$, where $k \in \{1, 2, \ldots, M-I+1\}$. The test benchmark MAOP(I,M) is a version developed from the P* problems (Köppen and Yoshida, 2006; Singh et al., 2008b). As seen in Figure 3, the problem is as follows: given a set of $I$ vertices $(P_1, P_2, \ldots, P_I)$ and $M-I$ interior points $(P_{I+1}, P_{I+2}, \ldots, P_M)$ spanned by these $I$ vertices in a Euclidean space, the objective value $f_j$ at a point $x$ is defined as $f_j(x) = d(x, P_j)$, $j = 1, 2, \ldots, M$, where $d(A, B)$ denotes the Euclidean distance between points $A$ and $B$. The authors prove that the $I$ objectives defined by the vertices are essential, while the $M-I$ objectives defined by the interior points are redundant.
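For concreteness, a minimal sketch of this construction (the function name is illustrative) computes the M objective values of a point as its distances to the vertices and interior points.

```python
import numpy as np

def maop_objectives(x, vertices, interior_points):
    """MAOP(I,M)-style objectives: distances from x to the I vertices (essential
    objectives) followed by distances to the M-I interior points (redundant
    objectives). `vertices` and `interior_points` are arrays of points in the
    same Euclidean space as x."""
    P = np.vstack([vertices, interior_points])     # (M x d) matrix of anchor points
    return np.linalg.norm(P - x, axis=1)           # f_j(x) = d(x, P_j), j = 1, ..., M
```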

Figure 3: A diagrammatic figure for test benchmark MAOP(I,M).

In short, their essential objective sets are, respectively,

  • DTLZ5(I,M): $F_\tau = \{f_k, f_{M-I+2}, f_{M-I+3}, \ldots, f_M\}$, $k \in \{1, 2, \ldots, M-I+1\}$.

  • MAOP(I,M): $F_\tau = \{f_1, f_2, \ldots, f_I\}$.

  • WFG3(I,M): $F_\tau = \{f_k, f_{M-m+2}, f_{M-m+3}, \ldots, f_M\}$, $k \in \{1, 2, \ldots, M-I+1\}$.

Finally, in this article, we integrate the objective reduction method into an effective EMOA called MOEA/D-M2M (Liu, Gu, and Zhang, 2014), which is a decomposition-based algorithm and has been widely used in many practical problems (Liu, Gu, Cheung et al., 2014; Li, Deb et al., 2015; Li, Kwong et al., 2015). The reason we choose a decomposition-based EMOA, rather than a Pareto-domination-based EMOA such as NSGA-II, is that the decomposition-based EMOA has comparatively strong convergence and more easily obtains a good sample of the true PF. The parameters of MOEA/D-M2M are set as follows:

  • The population size N is 200.

  • The maximum number of evolution generations G is set as 300.

  • The number of independent experiments is set to 20 for all of the compared objective reduction algorithms.

  • The control parameters of crossover and mutation operators are used as in Liu, Gu, and Zhang (2014).

Furthermore, we want to investigate the performance of the proposed algorithms in two scenarios: the first is when the algorithm is presented with only points that lie exactly on the Pareto front (noiseless), and the second is when the points are randomly generated solutions, so they do not lie on the Pareto front (noisy).

4.3  Experimental Results and Discussion

4.3.1  The Process of Generating the Data for the Dimensionality Reduction Technique

In this section, the process of generating the data for the dimensionality reduction technique is first introduced. Here, the proposed algorithm NLHA is chosen as a representative and its performance is demonstrated for the DTLZ5(2,5) problem corresponding to noiseless input data X.

The detailed implementation of NLHA follows:

Step 1: For the noiseless input data X, apply Eqs. (9) and (10) to normalize it and obtain the newly constructed data X*.

Step 2: Based on X*, apply Algorithm 1 to search for the most suitable qbest. Table 3 lists the corresponding results for every q value in Algorithm 1.

Table 3:
Corresponding result for every q value in Algorithm 1.
q | w1 | w2 | w3 | w4 | w5 | Approximation Error | Essential Objective Set
0.1 | 0.00 | 0.00 | 0.49 | 0.00 | 0.57 | −0.3894 | {f3, f5}
0.2 | 0.00 | 0.00 | 0.51 | 0.00 | 0.61 | −0.0891 | {f3, f5}
0.3 | 0.00 | 0.00 | 0.53 | 0.00 | 0.64 | 0.2089 | {f3, f5}
0.4 | 0.00 | 0.00 | 0.56 | 0.00 | 0.66 | 0.4313 | {f3, f5}
0.5 | 0.00 | 0.00 | 0.59 | 0.00 | 0.68 | 0.5572 | {f4, f5}
0.6 | 0.00 | 0.00 | 0.62 | 0.00 | 0.71 | 0.5882 | {f4, f5}
0.7 | 0.00 | 0.00 | 0.65 | 0.00 | 0.73 | 0.5348 | {f4, f5}
0.8 | 0.00 | 0.00 | 0.68 | 0.00 | 0.75 | 0.4114 | {f4, f5}
0.9 | 0.00 | 0.00 | 0.71 | 0.00 | 0.77 | 0.2327 | {f4, f5}
1 | 0.00 | 0.00 | 0.73 | 0.00 | 0.80 | 0.0134 | {f4, f5}
2 | 0.00 | 0.00 | 0.99 | 0.00 | 1.00 | −1.9607 | {f3, f5}
3 | 0.00 | 0.21 | 0.00 | 1.00 | 1.00 | 2.3858 | {f2, f4, f5}
4 | 0.33 | 0.00 | 0.00 | 1.00 | 1.00 | 12.2916 | {f1, f2, f5}
5 | 0.38 | 0.00 | 0.00 | 1.00 | 1.00 | 23.6437 | {f1, f2, f5}
6 | 1.00 | 0.00 | 0.00 | 0.41 | 1.00 | 34.5576 | {f1, f2, f5}
7 | 1.00 | 0.00 | 0.00 | 0.43 | 1.00 | 44.4205 | {f1, f2, f5}
8 | 0.44 | 0.00 | 0.00 | 1.00 | 1.00 | 53.1497 | {f1, f2, f5}
9 | 1.00 | 0.00 | 0.00 | 0.45 | 1.00 | 60.8466 | {f1, f2, f5}
10 | 0.45 | 0.00 | 0.00 | 1.00 | 1.00 | 67.6575 | {f1, f2, f5}

Step 3: Among all q values, select the q with the minimum approximation error; its corresponding identified objective set is regarded as an essential objective set. In this case, q = 2 is selected, whose approximation error is −1.9607, and the corresponding identified essential set is {f3, f5}, with corresponding objective coefficients w = [0, 0, 0.99, 0, 1.00].

4.3.2  Experimental Results for Benchmark DTLZ5(I,M)

In this section, a redundant test benchmark DTLZ5(I,M), in which the redundant objectives are completely correlated, is tested with the proposed algorithm and the comparison algorithms. Table 4 lists the mean values of success rate and performance metric σ* of all objective reduction algorithms in 20 independent experiments for DTLZ5(I,M) with various numbers of objectives and essential objectives. The results in bold italics are the best obtained using these algorithms for each test instance.

Table 4:
Mean values of success rate and performance metric σ* of all objective reduction algorithms in 20 independent experiments for DTLZ5(I,M) with various numbers of objectives and essential objectives.
Success rate
Magnitude Adjustment: Yes (first six algorithm columns) / No (last six algorithm columns)
Input data | I | M | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA
Noiseless 20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 50 0.950 1.000 0.000 1.000 0.000 0.200 0.850 1.000 0.000 1.000 0.000 0.200 
 20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.400 1.000 0.000 0.500 
 1.000 1.000 1.000 1.000 1.000 0.950 1.000 1.000 1.000 1.000 1.000 1.000 
 10 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 20 1.000 1.000 1.000 1.000 0.950 1.000 0.950 1.000 0.650 1.000 0.000 0.050 
 10 0.000 1.000 1.000 0.900 1.000 1.000 0.000 1.000 1.000 0.900 1.000 1.000 
 20 0.000 1.000 1.000 0.900 1.000 1.000 0.000 1.000 1.000 0.900 0.100 0.000 
Summary of noiseless input data 0.772 1.000 0.889 0.978 0.883 0.906 0.756 1.000 0.783 0.978 0.567 0.639 
Noisy 20 1.000 0.800 1.000 1.000 1.000 1.000 1.000 0.400 1.000 1.000 1.000 1.000 
 1.000 0.600 1.000 1.000 1.000 1.000 1.000 0.200 1.000 1.000 1.000 1.000 
 50 1.000 0.750 0.000 1.000 0.000 0.350 0.550 0.500 0.000 1.000 0.100 0.250 
 20 1.000 0.950 1.000 1.000 1.000 1.000 1.000 0.700 0.650 1.000 0.500 0.950 
 1.000 0.900 1.000 1.000 1.000 1.000 1.000 0.750 1.000 1.000 1.000 0.950 
 10 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.950 1.000 
 20 1.000 1.000 1.000 1.000 0.950 0.950 0.900 0.950 0.700 1.000 0.850 0.950 
 10 0.000 1.000 1.000 0.750 0.950 1.000 0.000 1.000 1.000 0.750 1.000 1.000 
 20 0.000 1.000 1.000 0.650 0.900 0.950 0.000 1.000 0.900 0.800 0.600 0.700 
Summary of noisy input data 0.778 0.889 0.889 0.933 0.867 0.917 0.717 0.722 0.806 0.950 0.778 0.867 
Summary of all experiments 0.775 0.944 0.889 0.956 0.875 0.911 0.736 0.861 0.794 0.964 0.672 0.753 
Performance metric σ*
Noiseless 20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 50 0.999 1.000 0.000 1.000 0.956 0.961 0.850 1.000 0.000 1.000 0.000 0.200 
 20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.400 1.000 0.000 0.500 
 1.000 1.000 1.000 1.000 1.000 0.975 1.000 1.000 1.000 1.000 1.000 1.000 
 10 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
 20 1.000 1.000 1.000 1.000 0.950 1.000 0.950 1.000 0.650 1.000 0.000 0.050 
 10 0.000 1.000 1.000 0.900 1.000 1.000 0.000 1.000 1.000 0.900 1.000 1.000 
 20 0.000 1.000 1.000 0.900 1.000 1.000 0.000 1.000 1.000 0.900 0.100 0.000 
Summary of noiseless input data 0.778 1.000 0.889 0.978 0.990 0.993 0.756 1.000 0.783 0.978 0.567 0.639 
Noisy 20 1.000 0.989 1.000 1.000 1.000 1.000 1.000 0.964 1.000 1.000 1.000 1.000 
 1.000 0.850 1.000 1.000 1.000 1.000 1.000 0.700 1.000 1.000 1.000 1.000 
 50 1.000 0.994 0.000 1.000 0.955 0.972 0.550 0.989 0.000 1.000 0.100 0.391 
 20 1.000 0.997 1.000 1.000 1.000 1.000 1.000 0.982 0.650 1.000 0.500 0.950 
 1.000 0.950 1.000 1.000 1.000 1.000 1.000 0.875 1.000 1.000 1.000 0.975 
 10 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.990 1.000 
 20 1.000 1.000 1.000 1.000 0.997 0.997 0.900 0.997 0.700 1.000 0.897 0.997 
 10 0.000 1.000 1.000 0.750 0.950 1.000 0.000 1.000 1.000 0.750 1.000 1.000 
 20 0.000 1.000 1.000 0.650 0.900 0.996 0.000 1.000 0.900 0.800 0.692 0.838 
Summary of noisy input data 0.778 0.976 0.889 0.933 0.978 0.996 0.717 0.945 0.806 0.950 0.798 0.906 
Summary of all experiments 0.778 0.988 0.889 0.956 0.984 0.995 0.736 0.973 0.794 0.964 0.682 0.772 

Before the performance of all tested algorithms is compared, the effect of magnitude adjustment is first discussed. As seen in Figure 4, whether the input data is noiseless or not, most of the tested algorithms performed better with the magnitude adjustment mechanism than without it, in terms of both success rate and performance metric σ*, especially PCSEA, LPCA, LHA, and NLHA. However, it can also be seen that the magnitude adjustment mechanism does not help NLMVUPCA, and, worse still, the performance of NLMVUPCA seems to deteriorate when magnitude adjustment is added. This diametrically opposite effect can be explained through Figure 5.

Figure 4: Comparison of results obtained with magnitude adjustment for DTLZ5(I,M) (numbers 1 to 6 represent PCSEA, greedy δ-MOSS, LPCA, NLMVUPCA, LHA, and NLHA, respectively).

Figure 5: Parallel coordinate plots for the final solutions obtained by objective reduction algorithms LPCA, NLMVUPCA, and NLHA on DTLZ5(5,20).

In Figure 5, the final solutions obtained by LPCA, NLMVUPCA, and NLHA on test instances of DTLZ5(5,20) are shown, and the true PF of test instance DTLZ5(5,20) is shown in Figure 6. Obviously, for LPCA, the solutions obtained by integrating the magnitude adjustment mechanism have better convergence and diversity than those obtained without magnitude adjustment, even though their first extracted results are the same. This is mainly because the magnitude of f3 is far from the magnitudes of the other extracted objectives, which results in f3 having no significant influence on the selection process during evolution. Consequently, the population easily converges into a proper subspace of the PF and delivers incomplete information to the objective reduction algorithms. NLMVUPCA, in contrast, tends to select objectives with large magnitudes because of the MVU mechanism, and as a result the selected objectives show no significant magnitude differences. Therefore, the magnitude adjustment mechanism does not assist NLMVUPCA.

Figure 6: True PF of DTLZ5(5,20).

Based on the above analysis, the experimental results obtained using the objective reduction algorithms with magnitude adjustment included are presented for further comparison. First, in terms of overall success rate in the summary results, NLMVUPCA performs better than the other objective reduction algorithms, but it should be noted that the gaps among NLMVUPCA, the δ-MOSS-based greedy method and NLHA are very small, which means that these three objective reduction algorithms have similar abilities to identify essential objectives. Furthermore, compared with the performance of LHA, NLHA performs better in almost all test instances of DTLZ5(I,M), which means that the power transformation operations really do help to identify essential objectives in this method. Second, in terms of the proposed performance metric σ* for all experimental results, the NLHA performs better than all of the other objective reduction algorithms, with an accuracy reaching nearly 0.995. Furthermore, if we compare the results shown by success rate and performance metric σ* for NLHA, there are significant differences in the results for test instances of DTLZ5(2, 50). It can be seen that the success rate is very low but performance metric σ* is very high for this case, mainly because the number of objectives identified by NLHA is usually more than the true number of essential objectives when the total number of objectives in the problem is very high. Therefore, the performance metric σ* is more informative than success rate in this sense.

4.3.3  Experimental Results for MAOP(I,M)

As introduced earlier, the PFs of MAOP(I,M) problems are nonlinearly degenerate. Benchmarks MAOP(I,M) are tested with all objective reduction algorithms in order to compare their performance in identifying the essential objectives in a problem with nonlinearly degenerate PFs. Table 5 lists the mean values of success rate and performance metric σ* for all of the objective reduction algorithms in 20 independent experiments on MAOP(I,M) with various numbers of objectives and essential objectives. The results in bold italics are the best obtained using these algorithms on each test instance.

Table 5:
Mean values of success rate and performance metric σ* of all objective reduction algorithms in 20 independent experiments for MAOP(I,M) with various numbers of objectives and essential objectives.
Success rate
Magnitude Adjustment: Yes (first six algorithm columns) / No (last six algorithm columns)
Input data | Test Name | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA
Noiseless MAOP1(3, 5) 1.000 1.000 0.000 1.000 1.000 1.000 1.000 1.000 0.000 1.000 1.000 1.000 
 MAOP2(3, 5) 1.000 1.000 0.000 0.000 1.000 1.000 1.000 1.000 0.000 0.000 1.000 1.000 
 MAOP3(4, 10) 0.900 1.000 0.000 0.350 1.000 1.000 0.900 1.000 0.000 0.350 1.000 1.000 
 MAOP4(4, 10) 0.000 1.000 0.000 0.000 0.150 0.150 0.000 1.000 0.000 0.000 0.200 0.200 
 MAOP5(6, 10) 0.000 1.000 0.000 0.000 0.750 1.000 0.000 1.000 0.000 0.000 0.750 1.000 
 MAOP6(6, 10) 0.000 0.950 0.000 0.000 0.250 0.750 0.000 0.950 0.000 0.000 0.250 0.750 
 MAOP7(5, 15) 0.000 1.000 0.000 0.000 1.000 1.000 0.000 1.000 0.000 0.000 1.000 1.000 
 MAOP8(5, 15) 0.000 0.950 0.000 0.000 0.100 0.700 0.000 0.950 0.000 0.000 0.100 0.700 
 MAOP9(7, 15) 0.000 0.950 0.000 0.000 0.150 1.000 0.000 0.950 0.000 0.000 0.150 1.000 
 MAOP10(7, 15) 0.000 0.100 0.000 0.000 0.000 0.500 0.000 0.100 0.000 0.000 0.000 0.450 
Summary of noiseless input data 0.290 0.895 0.000 0.135 0.540 0.810 0.290 0.895 0.000 0.135 0.545 0.810 
Noisy MAOP1(3, 5) 1.000 0.850 0.000 1.000 1.000 1.000 1.000 0.850 0.000 1.000 1.000 1.000 
 MAOP2(3, 5) 1.000 0.450 0.000 0.000 1.000 1.000 1.000 0.450 0.000 0.000 1.000 1.000 
 MAOP3(4, 10) 0.950 0.950 0.000 0.350 1.000 1.000 0.950 0.950 0.000 0.350 1.000 1.000 
 MAOP4(4, 10) 0.000 0.850 0.000 0.000 0.100 0.100 0.000 0.850 0.000 0.000 0.100 0.100 
 MAOP5(6, 10) 0.000 0.900 0.000 0.000 0.500 1.000 0.000 0.900 0.000 0.000 0.500 1.000 
 MAOP6(6, 10) 0.000 0.900 0.000 0.000 0.000 0.300 0.000 0.900 0.000 0.000 0.000 0.300 
 MAOP7(5, 15) 0.000 0.900 0.000 0.000 0.900 1.000 0.000 0.900 0.000 0.000 0.900 1.000 
 MAOP8(5, 15) 0.000 0.950 0.000 0.000 0.000 0.700 0.000 0.950 0.000 0.000 0.000 0.700 
 MAOP9(7, 15) 0.000 0.800 0.000 0.000 0.000 0.900 0.000 0.800 0.000 0.000 0.000 0.900 
 MAOP10(7, 15) 0.000 0.050 0.000 0.000 0.000 0.050 0.000 0.050 0.000 0.000 0.000 0.050 
Summary of noisy input data 0.295 0.760 0.000 0.135 0.450 0.705 0.295 0.760 0.000 0.135 0.450 0.705 
Summary of all experiments 0.293 0.828 0.000 0.135 0.495 0.758 0.293 0.828 0.000 0.135 0.498 0.758 
Performance metric σ*
Noiseless MAOP1(3, 5) 1.000 1.000 0.000 1.000 1.000 1.000 1.000 1.000 0.000 1.000 1.000 1.000 
 MAOP2(3, 5) 1.000 1.000 0.500 0.000 1.000 1.000 1.000 1.000 0.500 0.000 1.000 1.000 
 MAOP3(4, 10) 0.950 1.000 0.531 0.650 1.000 1.000 0.950 1.000 0.520 0.650 1.000 1.000 
 MAOP4(4, 10) 0.933 1.000 0.000 0.000 0.943 0.943 0.933 1.000 0.000 0.000 0.946 0.946 
 MAOP5(6, 10) 0.500 1.000 0.358 0.025 0.950 1.000 0.500 1.000 0.357 0.025 0.950 1.000 
 MAOP6(6, 10) 0.452 0.996 0.735 0.105 0.969 0.993 0.452 0.996 0.735 0.094 0.969 0.993 
 MAOP7(5, 15) 0.447 1.000 0.131 0.069 1.000 1.000 0.447 1.000 0.126 0.069 1.000 1.000 
 MAOP8(5, 15) 0.658 0.995 0.543 0.565 0.900 0.969 0.658 0.995 0.543 0.499 0.900 0.969 
 MAOP9(7, 15) 0.436 0.995 0.333 0.247 0.866 1.000 0.436 0.995 0.329 0.240 0.866 1.000 
 MAOP10(7, 15) 0.453 0.952 0.495 0.061 0.811 0.983 0.453 0.948 0.490 0.061 0.821 0.981 
Summary of noiseless input data 0.683 0.994 0.363 0.272 0.944 0.989 0.683 0.993 0.360 0.264 0.945 0.989 
Noisy MAOP1(3, 5) 1.000 0.925 0.000 1.000 1.000 1.000 1.000 0.900 0.000 1.000 1.000 1.000 
 MAOP2(3, 5) 1.000 0.725 0.500 0.000 1.000 1.000 1.000 0.825 0.500 0.000 1.000 1.000 
 MAOP3(4, 10) 0.975 0.992 0.529 0.650 1.000 1.000 0.975 0.992 0.515 0.650 1.000 1.000 
 MAOP4(4, 10) 0.933 0.975 0.015 0.000 0.940 0.940 0.933 0.983 0.000 0.000 0.936 0.936 
 MAOP5(6, 10) 0.500 0.975 0.307 0.025 0.867 1.000 0.500 0.975 0.311 0.025 0.883 1.000 
 MAOP6(6, 10) 0.463 0.984 0.715 0.096 0.956 0.974 0.463 0.975 0.725 0.094 0.957 0.975 
 MAOP7(5, 15) 0.447 0.990 0.152 0.111 0.972 1.000 0.447 0.995 0.158 0.111 0.972 1.000 
 MAOP8(5, 15) 0.658 0.995 0.510 0.460 0.877 0.969 0.658 0.980 0.517 0.496 0.877 0.975 
 MAOP9(7, 15) 0.436 0.972 0.341 0.283 0.774 0.989 0.436 0.966 0.328 0.283 0.774 0.978 
 MAOP10(7, 15) 0.412 0.915 0.377 0.044 0.778 0.888 0.439 0.921 0.342 0.071 0.788 0.891 
Summary of noisy input data 0.682 0.945 0.345 0.267 0.916 0.976 0.685 0.951 0.340 0.273 0.919 0.976 
Summary of all experiments 0.683 0.969 0.354 0.269 0.930 0.982 0.684 0.972 0.350 0.268 0.932 0.982 

Following the same analysis as in the previous section, Figure 7 shows the results in terms of success rate and performance metric σ*, grouped by whether or not the objective reduction algorithms are integrated with the magnitude adjustment mechanism. Unlike the situation with DTLZ5(I,M), the magnitude adjustment mechanism makes hardly any difference for any of the tested objective reduction algorithms under any of the tested conditions. This result makes sense because the objective magnitudes of the test benchmark MAOP(I,M) are very similar; that is, the magnitude differences among the MAOP(I,M) objectives are very small. In summary, the magnitude adjustment mechanism has strong positive effects only when there are large magnitude differences among the objectives, and it has little influence when the objective magnitudes are similar.

Figure 7:

Comparison of results using magnitude adjustment on MAOP(I,M) (the numbers 1 to 6 represent PCSEA, greedy δ-MOSS, LPCA, NLMVUPCA, LHA, and NLHA, respectively).

Next, the experimental results of all tested objective reduction algorithms integrated with magnitude adjustment are compared. In terms of success rate, although greedy δ-MOSS performs best among all tested objective reduction algorithms, the proposed algorithm NLHA comes closest to it. In particular, NLHA outperforms greedy δ-MOSS on test instances MAOP1, MAOP3, MAOP5, MAOP7, and MAOP9, whose essential objectives are uniformly distributed on a unit circle. In terms of performance metric σ*, the proposed algorithm NLHA performs better than all other objective reduction algorithms. The reason NLHA performs better than greedy δ-MOSS in terms of σ* but worse in terms of success rate is that NLHA lost one or two unimportant essential objectives on test instances MAOP2, MAOP4, MAOP6, MAOP8, and MAOP10, and the success rate metric is unable to reflect this. In this sense, the results also show that the proposed performance metric σ* is more informative than success rate.

4.3.4  Experimental Results for WFG3(I,M)

This section presents results using the benchmark WFG3(I,M). Unlike test instances DTLZ5(I,M) and MAOP(I,M), WFG3(I,M) has a very simple PF shape, a linear hyperplane, but it is difficult for evolutionary algorithms to reach the PF because of the transformation functions t1 to t3. Table 6 lists the mean values of success rate and performance metric σ* of all tested objective reduction algorithms over 20 independent experiments for WFG3(I,M) with various numbers of objectives and essential objectives. The results in bold italics are the best obtained by these algorithms on each test instance.

Table 6:
Mean values of success rate and performance metric σ* of all objective reduction algorithms in 20 independent experiments for WFG3(I,M) with various numbers of objectives and essential objectives.
Success rate
Magnitude Adjustment | Yes | No
Input data | m | M | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA
Noiseless 15 1.000 1.000 1.000 1.000 0.950 0.950 0.950 1.000 1.000 1.000 1.000 1.000 
 15 1.000 1.000 1.000 0.600 1.000 1.000 1.000 1.000 0.050 0.850 0.000 0.000 
 15 0.000 0.750 1.000 0.050 1.000 1.000 0.000 0.400 0.100 0.100 0.050 0.050 
 20 1.000 1.000 1.000 1.000 0.400 0.450 0.950 1.000 1.000 1.000 0.950 0.950 
 20 1.000 1.000 1.000 0.700 0.800 0.800 0.000 1.000 0.250 0.800 0.000 0.000 
 20 0.000 0.850 1.000 0.000 0.950 0.950 0.000 0.400 0.050 0.100 0.000 0.050 
 25 1.000 1.000 0.000 1.000 0.600 0.600 1.000 1.000 0.000 1.000 1.000 1.000 
 25 1.000 1.000 1.000 0.700 0.850 0.900 0.000 1.000 0.150 0.750 0.000 0.000 
 25 0.000 0.950 1.000 0.000 0.900 0.900 0.000 0.650 0.050 0.100 0.000 0.000 
Summary of noiseless input data 0.667 0.950 0.889 0.561 0.828 0.839 0.433 0.828 0.294 0.633 0.333 0.339 
Noisy 15 0.000 0.000 0.000 1.000 0.000 0.150 0.000 0.000 0.000 1.000 0.000 0.000 
 15 0.000 0.000 0.000 0.000 0.350 0.650 0.000 0.000 0.000 0.000 0.000 0.000 
 15 0.000 0.000 0.000 0.000 0.750 0.550 0.000 0.000 0.000 0.000 0.000 0.000 
 20 0.000 0.000 0.000 1.000 0.000 0.150 0.000 0.000 0.000 0.900 0.000 0.050 
 20 0.000 0.000 0.000 0.000 0.400 0.600 0.000 0.000 0.000 0.000 0.000 0.000 
 20 0.000 0.000 0.000 0.000 0.550 0.250 0.000 0.000 0.000 0.000 0.000 0.050 
 25 0.000 0.000 0.000 1.000 0.000 0.250 0.000 0.000 0.000 0.950 0.000 0.000 
 25 0.000 0.000 0.000 0.000 0.250 0.450 0.000 0.000 0.000 0.000 0.000 0.000 
 25 0.000 0.050 0.000 0.000 0.600 0.100 0.000 0.000 0.000 0.000 0.000 0.000 
Summary of noisy input data 0.000 0.006 0.000 0.333 0.322 0.350 0.000 0.000 0.000 0.317 0.000 0.011 
Summary of all results 0.333 0.478 0.444 0.447 0.575 0.594 0.217 0.414 0.147 0.475 0.167 0.175 
Performance metric σ*
Noiseless 15 1.000 1.000 1.000 1.000 0.996 0.996 0.950 1.000 1.000 1.000 1.000 1.000 
 15 1.000 1.000 1.000 0.600 1.000 1.000 1.000 1.000 0.050 0.850 0.000 0.000 
 15 0.000 0.750 1.000 0.050 1.000 1.000 0.000 0.400 0.100 0.100 0.050 0.050 
 20 1.000 1.000 1.000 1.000 0.967 0.969 0.950 1.000 1.000 1.000 0.547 0.497 
 20 1.000 1.000 1.000 0.700 0.987 0.987 0.000 1.000 0.250 0.800 0.000 0.000 
 20 0.000 0.850 1.000 0.000 0.996 0.996 0.000 0.400 0.050 0.100 0.000 0.050 
 25 1.000 1.000 0.000 1.000 0.980 0.980 1.000 1.000 0.000 1.000 0.600 0.550 
 25 1.000 1.000 1.000 0.700 0.993 0.995 0.000 1.000 0.150 0.750 0.000 0.000 
 25 0.000 0.950 1.000 0.000 0.994 0.994 0.000 0.650 0.050 0.100 0.000 0.000 
Summary of noiseless input data 0.667 0.950 0.889 0.561 0.990 0.991 0.433 0.828 0.294 0.633 0.244 0.239 
Noisy 15 0.912 0.738 0.642 1.000 0.765 0.888 0.912 0.638 0.296 1.000 0.792 0.896 
 15 0.000 0.855 0.885 0.000 0.920 0.910 0.000 0.795 0.375 0.000 0.720 0.805 
 15 0.000 0.219 0.869 0.000 0.963 0.806 0.000 0.131 0.481 0.000 0.794 0.763 
 20 0.942 0.808 0.656 1.000 0.819 0.886 0.933 0.764 0.475 0.994 0.794 0.822 
 20 0.000 0.913 0.897 0.000 0.957 0.823 0.000 0.763 0.593 0.000 0.720 0.580 
 20 0.000 0.046 0.877 0.000 0.823 0.477 0.000 0.000 0.685 0.000 0.865 0.662 
 25 0.952 0.852 0.696 1.000 0.857 0.928 0.948 0.822 0.659 0.998 0.678 0.641 
 25 0.000 0.933 0.893 0.000 0.910 0.683 0.000 0.850 0.743 0.000 0.405 0.325 
 25 0.000 0.144 0.861 0.000 0.883 0.242 0.000 0.044 0.817 0.000 0.586 0.500 
Summary of noisy input data 0.312 0.612 0.808 0.333 0.877 0.738 0.310 0.534 0.569 0.332 0.706 0.666 
Summary of all experiments 0.489 0.781 0.849 0.447 0.934 0.865 0.372 0.681 0.432 0.483 0.475 0.452 

First, as with test benchmark DTLZ5(I,M), the magnitudes of the objectives in WFG3(I,M) differ greatly, so the objective reduction algorithms integrated with magnitude adjustment perform better than those without it. Second, because the shapes of the PFs are quite simple, almost all tested objective reduction algorithms performed nearly equally well when the input data used to initialize the populations was noiseless. However, when the population initialization was noisy, the performance of all objective reduction algorithms degraded seriously owing to poor convergence. Because it is very difficult for a population to reach the PF, almost none of the objective reduction algorithms succeeded in identifying the essential objectives according to the success rate metric. As a consequence, success rate is not useful for distinguishing the performance of the objective reduction algorithms; in this case, performance metric σ* helps us understand the properties of the tested algorithms. In terms of performance metric σ*, NLHA performs better than greedy δ-MOSS and NLMVUPCA, but a little worse than LPCA and LHA. The main reasons are that the true PF is a linear hyperplane and the population is a poor sample of that PF, which interferes with the power transformation in NLHA.

4.4  Summary of Experimental Results

Through the numerous experiments and analyses, it can be observed that:

  • In terms of success rate, the proposed algorithm NLHA performs similarly on test benchmark DTLZ5(I,M) to the best-performing algorithm on that problem, NLMVUPCA, and on test benchmark MAOP(I,M) to the best-performing algorithm on that problem, greedy δ-MOSS. Furthermore, the proposed algorithm performs better than NLMVUPCA and greedy δ-MOSS on test benchmark WFG3(I,M), which indicates that the proposed algorithm is robust.

  • In terms of performance metric σ*, the proposed algorithm NLHA performs better than all other tested objective reduction algorithms on test benchmarks DTLZ5(I,M) and MAOP(I,M), which shows that the proposed algorithm can handle nonlinearly degenerate PFs well. As for linearly degenerate PFs, the experimental results show that NLHA performs similarly to LHA and LPCA.

  • Numerous experimental results show that magnitude adjustment makes a large difference in the performance of the objective reduction algorithms, especially when the objective magnitudes of the test instance differ markedly.

5  Further Study of the Proposed Algorithm NLHA

To further explore the performance of NLHA, additional comparative experiments are performed on the benchmarks DTLZ2(M) and DTLZ5(I,M), each with a different purpose. In this section, to verify that the gains come from the proposed objective reduction rather than from population evolution alone, a comparison with the original MOEA/D-M2M without dimensionality reduction is given first. Then, the sensitivity to the parameter λ used in NLHA is discussed in order to study the robustness of the proposed algorithm. Finally, to obtain a fuller picture of the effectiveness of NLHA, experiments in which the input set comes from a non-degenerate problem are also conducted.

5.1  Study of the Effect of Dimensionality Reduction

In this section, comparative experiments between the original MOEA/D-M2M and the proposed algorithm NLHA are performed on the test benchmark DTLZ5(I,M). In order to distinguish the comparative results more clearly, we use the Inverted Generational Distance (IGD) metric (Bosman and Thierens, 2003), which measures how well the obtained solutions approximate the PF; the smaller the IGD value, the better the result. IGD is computed as in Eq. (13),
\mathrm{IGD}(P^{*},P)=\frac{\sum_{v\in P^{*}} d(v,P)}{|P^{*}|},
(13)
where P* is a set of uniformly distributed points along the PF, P is a solution set approximating the PF, and d(v,P) is the minimum Euclidean distance between v and the points in P. Here, the reference point set P* of the benchmark DTLZ5(I,M) is the same as that given in Cheung et al. (2016); that is, the cardinalities of P* for these problems with the number of essential objectives I={2,3,5} are set as |P*|={1000,1891,10636}, respectively. Table 7 lists the minimum (best), mean, and maximum (worst) values of the IGD metric for the final solutions obtained by the original MOEA/D-M2M without dimensionality reduction and with dimensionality reduction (the proposed algorithm NLHA). From Table 7, it can be seen that the IGD values obtained when the proposed algorithm NLHA is employed are smaller than those obtained by the original MOEA/D-M2M, which shows that the proposed algorithm is clearly helpful for approaching the PF. Furthermore, with respect to the number of essential objectives of test benchmark DTLZ5(I,M), the fewer the essential objectives, the larger the improvement in the IGD metric.
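
As a minimal sketch of how Eq. (13) can be evaluated (assuming a NumPy-style implementation; the names igd, P_star, and P are illustrative and not from the original experiments):

```python
import numpy as np

def igd(P_star, P):
    """Inverted Generational Distance of Eq. (13): the mean Euclidean
    distance from each reference point in P_star to its nearest point in P."""
    P_star = np.asarray(P_star, dtype=float)
    P = np.asarray(P, dtype=float)
    # pairwise distances, shape (|P*|, |P|)
    dists = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return dists.min(axis=1).mean()

# Example: a perfect approximation of the reference set gives IGD = 0.
# igd([[0.0, 1.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]])
```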
Table 7:
Mean, minimum (best), and maximum (worst) of IGD metric values for the final solutions obtained by the original MOEA/D-M2M and the proposed NLHA algorithm in 20 independent experiments.
Original MOEA/D-M2M | Proposed NLHA Algorithm
Test Benchmark | Mean | Best | Worst | Mean | Best | Worst
DTLZ5(2,20) 0.023354367 0.018374874 0.033013541 0.002042712 0.001978744 0.002175738 
DTLZ5(2,5) 0.006393938 0.005145366 0.009111397 0.002183058 0.002085831 0.002446823 
DTLZ5(2,50) 0.048456913 0.019717847 0.085993684 0.002104547 0.002014196 0.002785216 
DTLZ5(3,20) 0.163108453 0.091270249 0.233428751 0.050002569 0.047100464 0.052833914 
DTLZ5(3,5) 0.058595949 0.051740464 0.071565775 0.051152352 0.046503836 0.055450573 
DTLZ5(5,10) 0.238346648 0.214811308 0.291353097 0.19762407 0.189200212 0.209553757 
DTLZ5(5,20) 0.324153214 0.285784871 0.361283163 0.19408083 0.189264966 0.201021485 

5.2  Study of the Sensitivity to λ

To study the robustness of the proposed algorithm, the sensitivity to the parameter λ used in NLHA is discussed here. A number of experiments were performed for different values of λ, ranging from 0 to 5 with a step size of 0.5. Table 8 lists the mean values of success rate and performance metric σ* for the different λ over 20 independent experiments for DTLZ5(I,M). As seen in Figure 8, in terms of the summary of all results for success rate and performance metric σ*, the proposed NLHA algorithm identifies the essential objectives with a success rate of roughly 90% or higher when λ ranges from 0.5 to 4, implying that the algorithm is not highly sensitive to the choice of λ.
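
The sweep protocol above can be summarized by a small driver loop. The sketch below is only illustrative: run_nlha_once is a hypothetical stand-in for a full NLHA run on a DTLZ5(I,M) instance (stubbed here with a random outcome so the harness executes), and the problem string is an assumed label.

```python
import numpy as np

def run_nlha_once(problem, lam, seed):
    # Hypothetical stand-in for one NLHA run with power-transformation
    # parameter lam; replace with the real algorithm. It should return True
    # when the essential objective set is identified correctly.
    rng = np.random.default_rng(seed)
    return bool(rng.random() < 0.9)  # placeholder outcome only

def lambda_sweep(problem="DTLZ5(3, 20)", runs=20):
    """Mean success rate over `runs` independent runs for each lambda in 0..5 (step 0.5)."""
    lambdas = np.arange(0.0, 5.0 + 1e-9, 0.5)
    return {round(float(lam), 2): np.mean([run_nlha_once(problem, lam, s) for s in range(runs)])
            for lam in lambdas}
```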

Figure 8:

Summary of all results of the success rate and performance metric σ* for different λ in 20 independent experiments for DTLZ5(I,M).

Table 8:
Mean values of success rate and performance metric σ* for different λ in 20 independent experiments for DTLZ5(I,M) with various numbers of objectives and essential objectives.
Success rate
λ 0.00 0.50 1.00 1.50 2.00 2.50 3.00 3.50 4.00 4.50 5.00
DTLZ5(2,20) 0.950 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(2,5) 0.850 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(2,50) 1.000 0.600 0.350 0.450 0.700 0.500 0.500 0.600 0.600 0.700 0.650 
DTLZ5(3,20) 0.650 0.950 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(3,5) 0.700 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(5,10) 0.650 0.950 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.950 0.900 
DTLZ5(5,20) 0.450 0.850 0.950 0.900 0.900 1.000 1.000 1.000 1.000 0.950 0.850 
DTLZ5(7,10) 0.750 0.950 1.000 1.000 1.000 1.000 0.900 0.950 0.900 0.600 0.300 
DTLZ5(7,20) 0.900 0.950 0.950 1.000 1.000 0.950 0.950 0.900 0.550 0.150 0.000 
Summary of all results 0.767 0.917 0.917 0.928 0.956 0.939 0.928 0.939 0.894 0.817 0.744 
Performance metric σ* 
λ 0.00 0.50 1.00 1.50 2.00 2.50 3.00 3.50 4.00 4.50 5.00 
DTLZ5(2,20) 0.997 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(2,5) 0.950 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(2,50) 1.000 0.986 0.972 0.971 0.980 0.974 0.973 0.980 0.976 0.983 0.982 
DTLZ5(3,20) 0.976 0.997 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(3,5) 0.850 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ5(5,10) 0.930 0.990 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.950 0.900 
DTLZ5(5,20) 0.963 0.990 0.997 0.993 0.993 1.000 1.000 1.000 1.000 0.950 0.850 
DTLZ5(7,10) 0.917 0.983 1.000 1.000 1.000 1.000 0.900 0.950 0.900 0.600 0.300 
DTLZ5(7,20) 0.992 0.996 0.996 1.000 1.000 0.950 0.950 0.900 0.550 0.150 0.000 
Summary of all results 0.953 0.994 0.996 0.996 0.997 0.992 0.980 0.981 0.936 0.848 0.781 

5.3  Study on the Non-Degenerate Problems DTLZ2(M)

The proposed algorithm is mainly designed for detecting and dealing with degenerate problems such as the test benchmarks DTLZ5(I,M), MAOP(I,M), and WFG3(I,M). However, its performance on non-degenerate problems should also be tested. To gain a fuller picture of the effectiveness of NLHA, simulation experiments in which the input set belongs to a non-degenerate problem are also conducted, using the test function DTLZ2(M) (Ishibuchi et al., 2017). In this experiment, because the true PF of DTLZ2(M) is known, a set of solution points evenly spread on the PF is generated by the method presented in Li, Deb et al. (2015) to obtain P* in Eq. (13). The cardinalities of P* for these problems, with the number of objectives M={3,5,8,10}, were set as |P*|={91,210,156,275}. Table 9 lists the mean values of success rate and IGD-metric values over 20 independent experiments for the non-degenerate problems DTLZ2(M).
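
As a rough illustration of how such a reference set can be built, the sketch below generates simplex-lattice weight vectors (in the style of Das and Dennis) and projects them onto the unit sphere, which is the true PF of DTLZ2; the lattice resolution H is an assumed parameter, and the exact construction of Li, Deb et al. (2015), including its two-layer variant for larger M, may differ in detail.

```python
import itertools
import numpy as np

def simplex_lattice(M, H):
    """All weight vectors with M non-negative components in steps of 1/H summing to 1."""
    points = []
    for bars in itertools.combinations(range(H + M - 1), M - 1):
        counts = np.diff(np.array([-1, *bars, H + M - 1])) - 1  # stars-and-bars gaps
        points.append(counts / H)
    return np.array(points)

def dtlz2_reference_set(M, H):
    """Project the lattice weights onto the unit sphere (first orthant), the PF of DTLZ2."""
    W = simplex_lattice(M, H)
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# e.g., M = 3 with H = 12 yields 91 points, matching the |P*| = 91 quoted above.
```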

Table 9:
Mean values of success rate and IGD-metric values in 20 independent experiments for non-degenerate problems DTLZ2(M).
Success rate
Test Name | PCSEA | greedy δ-MOSS | LPCA | NLMVUPCA | LHA | NLHA
DTLZ2(3) 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ2(5) 1.000 1.000 1.000 1.000 1.000 1.000 
DTLZ2(8) 0.000 0.850 1.000 0.100 1.000 1.000 
DTLZ2(10) 0.000 0.050 1.000 0.000 0.700 0.700 
IGD 
DTLZ2(3) 0.0568 0.0568 0.0568 0.0568 0.0568 0.0568 
DTLZ2(5) 0.2448 0.2448 0.2448 0.2448 0.2448 0.2448 
DTLZ2(8) 1.1862 0.5615 0.4461 1.1494 0.4461 0.4461 
DTLZ2(10) 1.2525 1.2110 0.5545 1.2292 0.7573 0.7624 

Table 9 shows that all objective reduction algorithms can correctly identify the essential objectives of DTLZ2(M) when the number of objectives M is three or five. However, when M reaches 8, only the proposed algorithms LHA and NLHA, together with LPCA, obtain the true essential objective set, and the other objective reduction algorithms deteriorate severely. With M=10, the population's approximation of the true PF becomes poor, as can be seen by comparing the IGD-metric values of each objective reduction algorithm as the number of objectives increases; the performance of the proposed algorithm also gets worse, but its success rate is still 0.7.

6  Conclusion

In this article, depending on whether the PF of a redundant MaOP is linearly or nonlinearly degenerate, we have proposed novel objective reduction algorithms called LHA and NLHA; NLHA can be regarded as an improved version of LHA for dealing with nonlinear degeneracies. It transforms a nonlinearly degenerate PF into a nearly linearly degenerate one and uses a hyperplane with non-negative sparse coefficients to roughly approximate the conflicting structure of the PF. We then propose a many-objective reduction framework integrating magnitude adjustment. To demonstrate the performance of the proposed algorithms, we conduct extensive simulation experiments against the correlation-based algorithms LPCA and NLMVUPCA and the dominance-structure-based algorithms PCSEA and greedy δ-MOSS on three benchmarks: DTLZ5(I,M), MAOP(I,M), and WFG3(I,M). The experimental results show that the proposed algorithms perform better than the other algorithms in terms of success rate and the newly introduced performance metric σ*. Finally, to examine the performance of the proposed algorithm NLHA more deeply, three further studies are carried out: a comparison with the original MOEA/D-M2M without dimensionality reduction, an analysis of the sensitivity to the parameter λ used in NLHA, and experiments in which the input set belongs to a non-degenerate problem.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61673121, and in part by the Projects of Science and Technology of Guangzhou under Grant 201804010352.

Note

1
Additionally, before applying Eq. (3), in order to reduce the effect brought on by differences in objective magnitudes, each objective is normalized as follows:
f_j(x_i)=\frac{f_j(x_i)-\min(f_j)}{\max(f_j)-\min(f_j)}.
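
As a minimal sketch, this column-wise min-max normalization can be applied to an objective matrix F (one row per solution, one column per objective) as follows; the guard against constant objectives is an added assumption to avoid division by zero.

```python
import numpy as np

def normalize_objectives(F):
    """Min-max normalize each objective (column) of F to [0, 1]."""
    F = np.asarray(F, dtype=float)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # avoid division by zero
    return (F - f_min) / span
```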

References

Bader, J., and Zitzler, E. (2011). HypE: An algorithm for fast hypervolume-based many-objective optimization. Evolutionary Computation, 19(1):45-76.
Batista, L. S., Campelo, F., Guimarães, F. G., and Ramírez, J. A. (2011). Pareto cone ε-dominance: Improving convergence and diversity in multiobjective evolutionary algorithms. In Proceedings of Evolutionary Multi-Criterion Optimization, pp. 76-90.
Beume, N., Naujoks, B., and Emmerich, M. (2007). SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3):1653-1669.
Bosman, P. A., and Thierens, D. (2003). The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 7(2):174-188.
Brockhoff, D., and Zitzler, E. (2006). Are all objectives necessary? On dimensionality reduction in evolutionary multiobjective optimization. In International Conference on Parallel Problem Solving from Nature, pp. 533-542.
Brockhoff, D., and Zitzler, E. (2009). Objective reduction in evolutionary multiobjective optimization: Theory and applications. Evolutionary Computation, 17(2):135-166.
Cheung, Y. M., and Gu, F. (2014). Online objective reduction for many-objective optimization problems. In Proceedings of IEEE Congress on Evolutionary Computation, pp. 1165-1171.
Cheung, Y.-M., Gu, F., and Liu, H.-L. (2016). Objective extraction for many-objective optimization problems: Algorithm and test problems. IEEE Transactions on Evolutionary Computation, 20(5):755-772.
Coleman, T. F., and Li, Y. (1996). A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables. SIAM Journal on Optimization, 6(4):1040-1058.
Corne, D. W., Knowles, J. D., and Oates, M. J. (2000). The Pareto envelope-based selection algorithm for multiobjective optimization. In International Conference on Parallel Problem Solving from Nature, pp. 839-848.
Deb, K., and Jain, H. (2014). An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577-601.
Deb, K., Mohan, M., and Mishra, S. (2005). Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evolutionary Computation, 13(4):501-525.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197.
Deb, K., and Saxena, D. K. (2005). On finding Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. Technical Report, Indian Institute of Technology, Kanpur.
Gal, T., and Hanne, T. (1999). Consequences of dropping nonessential objectives for the application of MCDM methods. European Journal of Operational Research, 119(2):373-378.
Giagkiozis, I., Purshouse, R. C., and Fleming, P. J. (2014). Generalized decomposition and cross entropy methods for many-objective optimization. Information Sciences, 282:363-387.
Gould, N., and Toint, P. L. (2004). Preprocessing for quadratic programming. New York: Springer.
Guo, X., Wang, Y., Wang, X., and Wei, J. (2015). A new non-redundant objective set generation algorithm in many-objective optimization problems. In Proceedings of IEEE Congress on Evolutionary Computation, pp. 2851-2858.
He, Z., and Yen, G. G. (2016). Many-objective evolutionary algorithm: Objective space reduction + diversity improvement. IEEE Transactions on Evolutionary Computation, 20(1):145-160.
Hernández-Díaz, A. G., Santana-Quintero, L. V., Coello, C. A. C., and Molina, J. (2007). Pareto-adaptive ε-dominance. Evolutionary Computation, 15(4):493-517.
Huband, S., Hingston, P., Barone, L., and While, L. (2006). A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477-506.
Ikeda, K., Kita, H., and Kobayashi, S. (2001). Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal? In Proceedings of the 2001 Congress on Evolutionary Computation, pp. 957-962.
Ishibuchi, H., Masuda, H., and Nojima, Y. (2016). Pareto fronts of many-objective degenerate test problems. IEEE Transactions on Evolutionary Computation, 20(5):807-813.
Ishibuchi, H., Tsukamoto, N., and Nojima, Y. (2008). Evolutionary many-objective optimization: A short review. In Proceedings of IEEE Congress on Evolutionary Computation, pp. 2419-2426.
Ishibuchi, H., Yu, S., Masuda, H., and Nojima, Y. (2017). Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes. IEEE Transactions on Evolutionary Computation, 21(2):169-190.
Jaimes, A. L., Coello, C. A. C., and Barrientos, J. E. U. (2009). Online objective reduction to deal with many-objective problems. In Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization, pp. 423-437.
Jaszkiewicz, A. (2002). On the performance of multiple-objective genetic local search on the 0/1 knapsack problem - A comparative experiment. IEEE Transactions on Evolutionary Computation, 6(4):402-412.
Köppen, M., and Yoshida, K. (2006). Substitute distance assignments in NSGA-II for handling many-objective optimization problems. In Proceedings of Evolutionary Multi-Criterion Optimization, 4th International Conference, pp. 727-741.
Laumanns, M., Thiele, L., Deb, K., and Zitzler, E. (2002). Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary Computation, 10(3):263-282.
Li, H., and Zhang, Q. (2009). Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Transactions on Evolutionary Computation, 13(2):284-302.
Li, K., Deb, K., Zhang, Q., and Kwong, S. (2015). An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Transactions on Evolutionary Computation, 19(5):694-716.
Li, K., Kwong, S., Zhang, Q., and Deb, K. (2015). Interrelationship-based selection for decomposition multiobjective optimization. IEEE Transactions on Cybernetics, 45(10):2076-2088.
Liu, H., Gu, F., and Cheung, Y. (2012). A weight design method based on power transformation for multi-objective evolutionary algorithm MOEA/D. Journal of Computer Research and Development, 49(6):1264-1271.
Liu, H.-L., Gu, F., Cheung, Y.-M., Xie, S., and Zhang, J. (2014). On solving WCDMA network planning using iterative power control scheme and evolutionary multiobjective algorithm. IEEE Computational Intelligence Magazine, 9(1):44-52.
Liu, H. L., Gu, F., and Zhang, Q. (2014). Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Transactions on Evolutionary Computation, 18(3):450-455.
Luo, J., Jiao, L., and Lozano, J. (2015). A sparse spectral clustering framework via multi-objective evolutionary algorithm. IEEE Transactions on Evolutionary Computation, PP(99):1.
Purshouse, R. C., and Fleming, P. J. (2003). Evolutionary many-objective optimisation: An exploratory analysis. In Proceedings of 2003 Congress on Evolutionary Computation, pp. 2066-2073.
Roy, P. C., Islam, M. M., Murase, K., and Xin, Y. (2014). Evolutionary path control strategy for solving many-objective optimization problem. IEEE Transactions on Cybernetics, 45(4):702-715.
Saxena, D. K., Duro, J. A., Tiwari, A., Deb, K., and Zhang, Q. (2013). Objective reduction in many-objective optimization: Linear and nonlinear algorithms. IEEE Transactions on Evolutionary Computation, 17(1):77-99.
Singh, H. K., Isaacs, A., and Ray, T. (2011). A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Transactions on Evolutionary Computation, 15(4):539-556.
Singh, H. K., Isaacs, A., Ray, T., and Smith, W. (2008a). A study on the performance of substitute distance based approaches for evolutionary many objective optimization. In Proceedings of Simulated Evolution and Learning, pp. 401-410.
Singh, H. K., Isaacs, A., Ray, T., and Smith, W. (2008b). A study on the performance of substitute distance based approaches for evolutionary many objective optimization. In International Conference on Simulated Evolution and Learning, pp. 401-410.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 58(1):267-288.
Yang, S., Li, M., Liu, X., and Zheng, J. (2013). A grid-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 17(5):721-736.
Zhang, Q., and Li, H. (2007). MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6):712-731.
Zhu, C., Xu, L., and Goodman, E. D. (2016). Generalization of Pareto-optimality for many-objective evolutionary optimization. IEEE Transactions on Evolutionary Computation, 20(2):299-315.
Zitzler, E., and Künzli, S. (2004). Indicator-based selection in multiobjective search. Lecture Notes in Computer Science, 3242:832-842.
Zitzler, E., Laumanns, M., and Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. In Proceedings of Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, pp. 95-100.