Abstract

An objective normalization strategy is essential in any evolutionary multiobjective or many-objective optimization (EMO or EMaO) algorithm, due to the distance calculations between objective vectors required to compute diversity and convergence of population members. For the decomposition-based EMO/EMaO algorithms involving the Penalty Boundary Intersection (PBI) metric, normalization is an important matter due to the computation of two distance metrics. In this article, we present a theoretical analysis of the effect of instabilities in the normalization process on the performance of PBI-based MOEA/D and a proposed PBI-based NSGA-III procedure. Although the effect is well recognized in the literature, few theoretical studies have been done so far to understand its true nature and the choice of a suitable penalty parameter value for an arbitrary problem. The developed theoretical results have been corroborated with extensive experimental results on three- to 15-objective convex and non-convex instances of DTLZ and WFG problems. The article makes important theoretical conclusions on PBI-based decomposition algorithms derived from the study.

1 Introduction

In solving optimization problems having two or more objectives, most evolutionary multiobjective and many-objective optimization (EMO/EMaO) algorithms use a diversity measure by computing relative distances of population members in the objective space. In NSGA-II (Deb et al., 2002), the crowding distance operator computes a form of Manhattan distance between two neighboring non-dominated points. In MOEA/D (Zhang et al., 2010) with the Penalty Boundary Intersection (PBI)-based approach, two distances of every population member, one along a supplied decomposition vector and one orthogonal to it, need to be computed for the fitness-based selection operator. In NSGA-III (Deb and Jain, 2014), only the distance perpendicular to the decomposition vector is needed. If all objectives are such that all function values have a similar scale or range of values, the so-called uniformly scaled problems, like the DTLZ problems (Deb et al., 2005), no normalization operator may be needed. On the other hand, an EMO algorithm should not be developed purely based on its performance on uniformly scaled problems alone, as most practical problems involve different functionalities, such as cost, efficiency, quality, etc., which are likely to have non-uniformly scaled objective values. Thus, any distance computation between two population members is relevant and useful only if the objectives are normalized properly so that the distance along each objective is given almost equal importance.

It is well understood that as soon as a many-objective optimization problem is formulated, its respective theoretical efficient front (which we loosely call the Pareto-optimal front (PF) here) gets decided (Schutze et al., 2010). While in certain convex and other simplistic problem structures the theoretical PF can be obtained mathematically, in most problems, including discrete variable space problems, this is not possible, and thus a computational optimization method is usually employed to find a few well-distributed points on the PF to provide decision makers with a clear idea of the PF for subsequently choosing a single preferred solution. Every PF has an ideal point ($\mathbf{z}^{\mathrm{ideal}}$), constructed with the individual objective-wise minimum values (assuming all objectives are to be minimized), and a nadir point ($\mathbf{z}^{\mathrm{nadir}}$), constructed with the worst value of each objective function over the theoretical Pareto-optimal set (PS). If the objective functions are normalized with the ideal and nadir points, as follows:
$$f_i^n(\mathbf{x}) = \frac{f_i(\mathbf{x}) - z_i^{\mathrm{ideal}}}{z_i^{\mathrm{nadir}} - z_i^{\mathrm{ideal}}}, \qquad i = 1, 2, \ldots, M,$$
(1)
the normalized objective values $f_i^n(\mathbf{x})$ of each population member $\mathbf{x}$ will lie in [0,1], and these normalized objective vectors can be compared with the supplied decomposition vectors meaningfully, as they are also defined to lie in [0,1]. Another way to compensate for the differences in objective magnitudes is to adapt the decomposition vectors (Cheng et al., 2016). While the ideal point is relatively easy to compute by performing independent minimization of the M objectives, the computation of the nadir point is not an easy matter (Isermann and Steuer, 1988; Deb et al., 2010) and, in principle, requires the knowledge of all Pareto-optimal (PO) points. In all EMO algorithms, the knowledge of the exact ideal and nadir points is not assumed; rather, they are dynamically estimated during an optimization run (Seada et al., 2018). Thus, if at any generation the estimated ideal and nadir points are $\mathbf{z}^0$ and $\mathbf{z}^1$, the normalization of the i-th objective function can be performed as follows:
$$f_i^n(\mathbf{x}) = \frac{f_i(\mathbf{x}) - z_i^0}{z_i^1 - z_i^0}.$$
(2)
Therefore, it becomes an important matter to investigate the effect of instability in estimating the ideal and nadir points on the performance of decomposition-based EMO methods. Due to the inaccuracy in ideal and nadir point estimation and their changing values from generation to generation, population members are not properly associated with their right decomposition vectors, and any distance computation along or orthogonal to the decomposition vectors (needed in PBI-based methods) becomes erroneous, introducing noise into the selection operation of an EMO algorithm. This phenomenon was noticed by some researchers, and experimental studies have been conducted to examine the effect of objective normalization on the performance of MOEA/D (Ishibuchi et al., 2017).

In this article, we estimate the sensitivity of the association and distances of population members to specific decomposition vectors due to the variation of the estimated ideal and nadir vectors ($\mathbf{z}^0$ and $\mathbf{z}^1$) with generations. We define a sensitivity ratio comparing the sensitivity of the MOEA/D-PBI algorithm with that of NSGA-III and compute its value theoretically as a function of PBI's penalty parameter θ. We also find a theoretical connection between the lower bound of θ and the tangent of the PF. Theoretically, these two aspects, sensitivity due to instability in normalization and the geometric bound from the PF, dictate the most preferred value of θ. We then apply two PBI-based EMO algorithms to a series of DTLZ and WFG problems and demonstrate the validity of our theoretical results.

In the remainder of the article, we first provide a brief overview of the preliminaries of a multiobjective optimization problem. In Section 2, we describe the basic principles of decomposition-based EMO algorithms, including the PBI-metric-based selection approach. Section 3 presents a theoretical analysis of a sensitivity ratio comparing the PBI-metric-based and the orthogonal-distance-metric-based fitness selection approaches under instabilities of the normalization procedure. Section 4 presents extensive simulations of PBI-based NSGA-III and MOEA/D algorithms on DTLZ and WFG problems having 3 to 15 objectives to validate the theoretical results obtained in the previous section. The experimental results on convex and non-convex PFs are presented and analyzed. Finally, conclusions are drawn in Section 5.

2 Decomposition-Based EMO Algorithms

Decomposition-based EMO algorithms are becoming more popular for dealing with multiobjective optimization problems (MOPs), especially MOPs with more than three objectives (Deb et al., 2007; Khare et al., 2003). Decomposition-based EMO algorithms need a set of reference vectors to guide the population search, and those vectors are used either for objective aggregation (Ishibuchi and Murata, 1998; Zhang et al., 2010; Li and Zhang, 2009) or for diversity and convergence enhancement (Liu et al., 2013, 2018; Deb and Jain, 2014; Li et al., 2015; Yuan et al., 2016; Cheng et al., 2016). For example, the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) (Zhang and Li, 2007; Zhang et al., 2010; Li and Zhang, 2009) decomposes an MaOP into a number of scalar optimization subproblems using reference vectors. MOEA/D-M2M (Liu et al., 2013) is a variant of MOEA/D based on population decomposition; it decomposes an MaOP into a number of many-objective optimization subproblems using direction vectors. Deb and Jain (2014) proposed the third-generation non-dominated sorting genetic algorithm (NSGA-III), which uses reference directions to enhance convergence and maintain diversity. Reference points are also used for decomposition in MOEA/DD (Li et al., 2015) and θ-DEA (Yuan et al., 2016). Cheng et al. (2016) proposed the Reference Vector-guided Evolutionary Algorithm (RVEA). It is worth noting that we collectively call these vectors decomposition vectors in this article, as they are essentially used to decompose the overall problem into several interacting subproblems.

2.1 Penalty Boundary Intersection (PBI) Fitness (Zhang and Li, 2007)

Suppose that $\mathbf{v}=(v_1,\ldots,v_M)$ is a unit vector (if not, we can replace it by $\mathbf{v}/\|\mathbf{v}\|$) in the first hyper-octant of the objective space, that is, $v_i \geq 0$ for all $i=1,\ldots,M$ and $\sum_{i=1}^{M} v_i^2 = 1$. In order to simplify the expression, the PBI method is defined in the normalized objective space. For the given unit decomposition vector $\mathbf{v}=(v_1,\ldots,v_M)$, the PBI fitness is defined as follows:
$$\mathrm{PBI}(\mathbf{x}\,|\,\mathbf{v}) = d_1(F'(\mathbf{x})) + \theta\, d_2(F'(\mathbf{x})),$$
(3)
where $F'(\mathbf{x})$ is the normalized version of $F(\mathbf{x})$, $d_1(F'(\mathbf{x})) = F'(\mathbf{x})^T \mathbf{v} \geq 0$ is the projection distance of $F'(\mathbf{x})$ along the decomposition vector $\mathbf{v}$, and $d_2(F'(\mathbf{x})) = \|F'(\mathbf{x}) - d_1(F'(\mathbf{x}))\,\mathbf{v}\| \geq 0$ is the orthogonal distance of $F'(\mathbf{x})$ from the decomposition vector $\mathbf{v}$. The penalty parameter $\theta > 0$ is used to strike a balance between $d_1(F'(\mathbf{x}))$ and $d_2(F'(\mathbf{x}))$.
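To make the metric concrete, the following is a minimal sketch (our own illustration, not code from the original studies) of the PBI computation for a normalized objective vector and a decomposition vector:

```python
import numpy as np

def pbi_fitness(f_norm, v, theta=5.0):
    """PBI fitness (Eq. 3) of a normalized objective vector f_norm with
    respect to a decomposition vector v: d1 + theta * d2."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)                      # make sure v is a unit vector
    f_norm = np.asarray(f_norm, dtype=float)
    d1 = float(f_norm @ v)                         # projection distance along v
    d2 = float(np.linalg.norm(f_norm - d1 * v))    # orthogonal distance from v
    return d1 + theta * d2, d1, d2

# Example: a normalized point against the equi-angled direction in three objectives
pbi, d1, d2 = pbi_fitness([0.4, 0.5, 0.6], [1.0, 1.0, 1.0], theta=5.0)
```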

2.2 NSGA-III's Niching Fitness

There are two steps in NSGA-III's selection. The first step is non-dominated sorting, where the parents and offspring are combined and sorted into different non-domination levels. Starting from the first non-domination level, solutions are kept until the number of selected solutions exceeds the initial population size. The second step is a niching selection, in which the remaining solutions are selected from the last accepted level by a niche-preservation operator. The solutions resulting from the first step are associated with the evenly distributed decomposition vectors (called reference points in the original NSGA-III study) according to $d_2(F'(\mathbf{x}))$. During the niching selection, the solutions in all but the last accepted level are first selected. After that, solutions in the last accepted level that are associated with less crowded reference lines are preferred, and this procedure is called d2-based selection in this study.
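To make the association step concrete, here is a minimal sketch (our own illustration, not code from the NSGA-III study) of d2-based association, assuming a matrix F_norm of already-normalized objective vectors and a matrix V of decomposition vectors:

```python
import numpy as np

def associate_by_d2(F_norm, V):
    """Associate each (normalized) solution with the decomposition vector
    to which its orthogonal distance d2 is smallest."""
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit reference vectors (H x M)
    d1 = F_norm @ V.T                                  # (N, H) projection distances
    proj = d1[:, :, None] * V[None, :, :]              # (N, H, M) projected points
    d2 = np.linalg.norm(F_norm[:, None, :] - proj, axis=2)  # (N, H) orthogonal distances
    assoc = d2.argmin(axis=1)                          # index of the closest reference line
    return assoc, d2[np.arange(len(F_norm)), assoc]
```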

3 Sensitivity of Fitness Assignment Due to Normalization Instability

In this section, we investigate the sensitivity of the two above-mentioned fitness assignment strategies with respect to the instabilities involved in the objective normalization procedure. Equation 2 suggests that there are two sources of inaccuracy in the normalization procedure: (i) the estimation of the ideal point by $\mathbf{z}^0$ and (ii) the estimation of the nadir point by $\mathbf{z}^1$. By defining the range of the i-th objective as $\alpha_i = z_i^1 - z_i^0$, we rewrite the equation as follows:
$$f_i^n(\mathbf{x}) = \frac{f_i(\mathbf{x}) - z_i^0}{\alpha_i}.$$
(4)
Thus, the sensitivity of the computation of $f_i^n$ from one generation to another depends on the variation of two vectors: the ideal point vector $\mathbf{z}^0$ and the range vector $\boldsymbol{\alpha}$.
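As an illustration of Equation 4, the following sketch (our own; using population-wide minima and maxima as the estimates and adding a small epsilon guard are simplifying assumptions) normalizes a population of objective vectors with a generation-wise estimated ideal point and range:

```python
import numpy as np

def normalize(F, z0=None, alpha=None, eps=1e-12):
    """Normalize objective vectors F (N x M) as in Eq. (4):
    f_i^n = (f_i - z_i^0) / alpha_i, with z0 and alpha estimated
    from the current population when they are not supplied."""
    F = np.asarray(F, dtype=float)
    if z0 is None:
        z0 = F.min(axis=0)                 # estimated ideal point
    if alpha is None:
        alpha = F.max(axis=0) - z0         # estimated objective range
    return (F - z0) / np.maximum(alpha, eps)
```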

To have an idea of the relative importance of these two vectors, we first observe how the estimated ideal point $\mathbf{z}^0$ varies over generations in a standard EMO run of NSGA-III (Deb and Jain, 2014) on test problems. Figure 1 shows how the norm of the observed ideal point ($\mathbf{z}^0$) varies with the generation counter for three- to 15-objective instances of DTLZ1. The norm value is expected to be zero for these problems.

The median of 15 runs is plotted. It is clear that the ideal point estimation settles very early during a simulation (although not necessarily at the true ideal point). A similar observation is also made for the other problems of this study and has been made in other studies in the past (Seada et al., 2018; Bechikh et al., 2010). For brevity, those plots are not shown here. These plots show that, except for a few early generations, the ideal point $\mathbf{z}^0$ does not contribute much to the instability of the normalization procedure in most generations.

Next, we shall investigate the convergence behavior of the objective range α. Figure 2 illustrates how the norm of the observed objective range α varies with the generation counter for three- to 15-objective instances of DTLZ1.
Figure 1:

Variation of the norm of the ideal vector (z0) over generations for 3-, 5-, 8-, 10-, and 15-objective instances of DTLZ1.


Figure 2:

Variation of the norm α over generations for 3-, 5-, 8-, 10-, and 15-objective instances of DTLZ1.


It is clear from Equation 4 that any variation in the estimation of α from one generation to another will cause a variation in the computation of the normalized objective vector, fn. This will then cause a variation in the calculation of distance metrics d1 and d2 for the purpose of PBI or NSGA-III fitness assignment procedure. Although we keep Das and Dennis's (1998) vectors invariant in an EMO simulation, the generation-wise change in α destabilizes the effective normalized objective values.

3.1 Sensitivity Ratio

The distance along a unit decomposition vector $\mathbf{v}=(v_1, v_2, \ldots, v_M)^T$ of a point in the normalized objective space can be written as $d_1 = \sum_{i=1}^{M} v_i f_i$. Due to changes in the estimated ideal vector and objective range, the effective vector used in this computation will change from one generation to another. By considering that the ideal point variation is negligible, as evident from Figure 1, the above equation can be rewritten as $d_1 = \sum_{i=1}^{M} w_i f_i$, where $w_i = v_i/\alpha_i$ is an auxiliary variable. The orthogonal distance $d_2$ can then be written as
$$d_2 = \left\| \mathbf{f} - \frac{(\mathbf{w}^T \mathbf{f})\,\mathbf{w}}{\|\mathbf{w}\|^2} \right\|.$$
(5)
In other words, the variation in both d1 and d2 can be viewed as coming from the variation in w, which occurs mainly due to variation in the objective range vector α. Thus, the decomposition vector adaptation method utilized in RVEA (Cheng et al., 2016) is essentially the same as the objective normalization discussed in this study.

Now we are ready to define a sensitivity ratio between the PBI-based and orthogonal distance-based fitness values due to the perturbation of the revised decomposition vector $\mathbf{v}$. We define the sensitivity ratio as the absolute ratio of the relative perturbation of the PBI-metric to that of the orthogonal distance metric, obtained by an infinitesimal analysis, as follows:

Definition 1 (Sensitivity Ratio):

For any given point in the objective space, the sensitivity ratio between the PBI-based and orthogonal distance-based fitness values is $\rho = \left| \dfrac{\Delta \mathrm{PBI}/\mathrm{PBI}}{\Delta \mathrm{Ortho}/\mathrm{Ortho}} \right|$.

Knowing that the PBI-metric is defined as PBI=d1+θd2 and orthogonal distance metric is defined as Ortho=d2, we can further have
$$\rho = \left|\frac{\Delta d_1 + \theta\,\Delta d_2}{\Delta d_2}\right| \frac{d_2}{d_1 + \theta d_2}
= \left|\theta + \frac{\Delta d_1}{\Delta d_2}\right| \frac{d_2}{d_1 + \theta d_2}
= |\rho_1\,\rho_2|,$$
(6)
where $\Delta d_1$ and $\Delta d_2$ are the perturbations in the distance computations due to changes in $\boldsymbol{\alpha}$. The term within the absolute value is $\rho_1$ and the remaining factor is $\rho_2$.
Theorem 1:

For a multiobjective optimization problem, the PBI-based fitness has a smaller percentage sensitivity than the d2-based fitness for $\theta \geq (d_2^2 - d_1^2)/(2 d_1 d_2)$.

Proof:
For a multiobjective optimization problem with M objectives, we have $d_1 = v_1 f_1 + v_2 f_2 + \cdots + v_M f_M$ and $v_1^2 + v_2^2 + \cdots + v_M^2 = 1$. Then,
$$\begin{aligned}
d_2 &= \left( f_1^2 + f_2^2 + \cdots + f_M^2 - d_1^2 \right)^{1/2} \\
&= \Big[ (v_2^2 + v_3^2 + \cdots + v_M^2) f_1^2 + (v_1^2 + v_3^2 + \cdots + v_M^2) f_2^2 + \cdots + (v_1^2 + v_2^2 + \cdots + v_{M-1}^2) f_M^2 \\
&\qquad - 2 v_1 v_2 f_1 f_2 - 2 v_1 v_3 f_1 f_3 - \cdots - 2 v_{M-1} v_M f_{M-1} f_M \Big]^{1/2} \\
&= \left[ (v_1 f_2 - v_2 f_1)^2 + (v_1 f_3 - v_3 f_1)^2 + \cdots + (v_{M-1} f_M - v_M f_{M-1})^2 \right]^{1/2}.
\end{aligned}$$
(7)
Since $v_1^2 + v_2^2 + \cdots + v_M^2 = 1$, we get $v_1 \Delta v_1 + v_2 \Delta v_2 + \cdots + v_M \Delta v_M = 0$, and furthermore $\Delta v_1 = -\frac{v_2}{v_1}\Delta v_2 - \frac{v_3}{v_1}\Delta v_3 - \cdots - \frac{v_M}{v_1}\Delta v_M$. By substituting this expression into $\Delta d_1$ and $\Delta d_2$, we have the following:
$$\begin{aligned}
\Delta d_1 &= \frac{\partial d_1}{\partial v_1}\Delta v_1 + \frac{\partial d_1}{\partial v_2}\Delta v_2 + \cdots + \frac{\partial d_1}{\partial v_M}\Delta v_M
= f_1 \Delta v_1 + f_2 \Delta v_2 + \cdots + f_M \Delta v_M \\
&= \left(f_2 - \frac{v_2}{v_1} f_1\right)\Delta v_2 + \left(f_3 - \frac{v_3}{v_1} f_1\right)\Delta v_3 + \cdots + \left(f_M - \frac{v_M}{v_1} f_1\right)\Delta v_M, \\
\Delta d_2 &= \frac{\partial d_2}{\partial v_1}\Delta v_1 + \frac{\partial d_2}{\partial v_2}\Delta v_2 + \cdots + \frac{\partial d_2}{\partial v_M}\Delta v_M \\
&= \left(\frac{\partial d_2}{\partial v_2} - \frac{v_2}{v_1}\frac{\partial d_2}{\partial v_1}\right)\Delta v_2 + \left(\frac{\partial d_2}{\partial v_3} - \frac{v_3}{v_1}\frac{\partial d_2}{\partial v_1}\right)\Delta v_3 + \cdots + \left(\frac{\partial d_2}{\partial v_M} - \frac{v_M}{v_1}\frac{\partial d_2}{\partial v_1}\right)\Delta v_M.
\end{aligned}$$
(8)
Also,
$$\frac{\partial d_2}{\partial v_1} = \frac{2 f_2 (v_1 f_2 - v_2 f_1) + 2 f_3 (v_1 f_3 - v_3 f_1) + \cdots + 2 f_M (v_1 f_M - v_M f_1)}{2 d_2}.$$
(9)
For $i = 2, 3, \ldots, M$, we have
$$\frac{\partial d_2}{\partial v_i} = \frac{2 f_1 (v_i f_1 - v_1 f_i) + 2 f_2 (v_i f_2 - v_2 f_i) + \cdots + 2 f_M (v_i f_M - v_M f_i)}{2 d_2}.$$
(10)
Furthermore,
$$\frac{\partial d_2}{\partial v_i} - \frac{v_i}{v_1}\frac{\partial d_2}{\partial v_1} = \left(f_i - \frac{v_i}{v_1} f_1\right) \times \frac{-\left(v_1 f_1 + v_2 f_2 + \cdots + v_M f_M\right)}{d_2}.$$
(11)
By substituting Equation 11 into Equation 8, we obtain
$$\Delta d_2 = \frac{-\left(v_1 f_1 + v_2 f_2 + \cdots + v_M f_M\right)}{d_2}\,\Delta d_1.$$
(12)
Therefore,
$$\rho_1 = \theta + \frac{\Delta d_1}{\Delta d_2} = \theta - \frac{d_2}{v_1 f_1 + v_2 f_2 + \cdots + v_M f_M} = \theta - \frac{d_2}{d_1},
\quad \text{or} \quad
\rho = \left|\theta - \frac{d_2}{d_1}\right| \frac{d_2}{d_1 + \theta d_2} = \frac{|\theta - \beta|\,\beta}{1 + \theta\beta},$$
(13)
where $\beta = d_2/d_1$. A little calculation reveals that $\rho$ is less than one for $\theta \geq (\beta^2 - 1)/(2\beta) = (d_2^2 - d_1^2)/(2 d_1 d_2)$.
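The closed form in Equation 13 can be checked numerically. The following sketch (our own illustration) perturbs a unit decomposition vector slightly, recomputes d1 and d2, and compares the finite-difference ratio of relative changes with |θ − β|β/(1 + θβ), where β = d2/d1:

```python
import numpy as np

rng = np.random.default_rng(1)
M, theta, eps = 5, 5.0, 1e-7

f = rng.random(M) + 0.1                    # a point in the normalized objective space
v = rng.random(M); v /= np.linalg.norm(v)  # a unit decomposition vector

def d1d2(v):
    d1 = f @ v
    return d1, np.linalg.norm(f - d1 * v)

# small perturbation of v that keeps it (to first order) a unit vector
delta = rng.standard_normal(M)
delta -= (delta @ v) * v                   # project onto the tangent of the unit sphere
v_new = v + eps * delta
v_new /= np.linalg.norm(v_new)

d1, d2 = d1d2(v)
d1n, d2n = d1d2(v_new)

num = abs((d1n + theta * d2n) - (d1 + theta * d2)) / (d1 + theta * d2)  # |ΔPBI/PBI|
den = abs(d2n - d2) / d2                                                # |Δd2/d2|
beta = d2 / d1
print(num / den, abs(theta - beta) * beta / (1 + theta * beta))         # nearly equal
```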
Figure 3:

Relationship between the projection distance d1, orthogonal distance d2, and angle γ.


Thus, in such cases, the percentage sensitivity of the PBI distance metric is smaller than that of the d2 distance metric. Also, ρ is small for a small θ value, meaning that PBI-based approaches are less sensitive than the d2-based approach for a small θ. It also implies that if NSGA-III's orthogonal distance metric (d2) is replaced with the PBI-metric (d1 + θd2, with the introduction of a parameter θ > 0), the instabilities in the normalization process should have a smaller effect on the selection operator.

We now investigate the validity of the condition $\theta \geq (\beta^2 - 1)/(2\beta)$ by noting that the ratio $\beta = d_2/d_1$ is related to the specified decomposition (or reference) vectors. Figure 3 shows the geometric meaning of the two distances in a two-objective case.

A little thought will reveal that the maximum value of the ratio d2/d1 is related to the angle (γ) made by two neighboring reference vectors in NSGA-III. For MOEA/D, since a neighborhood of T=20 reference lines is used, the ratio is expected to be larger than that for NSGA-III, which uses a tighter association procedure. In fact, for any associated point, the maximum d2/d1 ratio is equal to β = tan(γ/2). By substituting this condition in the above theorem, the following condition must hold for PBI-based methods to have a smaller sensitivity than the d2-based metric:
$$\theta \geq -\cot\gamma.$$
(14)
Since the angle (γ) between two neighboring reference lines is expected to be acute, cot γ is expected to be positive. Hence, for all practical purposes, the above condition is always satisfied for any number of reference lines used in an EMO algorithm. To have a better understanding of the actual γ values in standard EMO runs, we compute tan(γ/2) for each reference line by noting the minimum angle of that reference line with any other reference line, for the 3- to 15-objective problem setups of this study. The maximum value of β for each line is then taken as tan(γ/2). Figure 4 shows box plots of the variation of β for different objective dimensions. Interestingly, although β increases with the number of objectives, the respective β values are much smaller than the usual values (θ = 5) used in PBI-based MOEA/D studies (Zhang et al., 2010). For example, for M = 5, the median β = 0.15, and Equation 13 gives ρ = 0.416 for θ = 5, meaning that the percentage sensitivity of the PBI-metric with θ = 5 is only 41.6% of that of the orthogonal distance metric.
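The β values of Figure 4 can be estimated along the following lines; the sketch below (our own, with an illustrative single-layer Das and Dennis generator rather than the layered settings of Table 2) normalizes the reference vectors to unit length, finds the smallest angle γ of each line to any other line, and reports β = tan(γ/2):

```python
import numpy as np

def das_dennis(M, H):
    """All M-dimensional points with non-negative components summing to 1
    on a grid of spacing 1/H (Das and Dennis, 1998)."""
    def rec(left, dims):
        if dims == 1:
            return [[left]]
        return [[h] + rest for h in range(left + 1) for rest in rec(left - h, dims - 1)]
    return np.array(rec(H, M), dtype=float) / H

def beta_values(M, H):
    V = das_dennis(M, H)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # treat each point as a reference line
    cosines = np.clip(V @ V.T, -1.0, 1.0)
    np.fill_diagonal(cosines, -1.0)                    # ignore the angle of a line with itself
    gamma = np.arccos(cosines.max(axis=1))             # smallest angle to any other line
    return np.tan(gamma / 2.0)                         # beta = tan(gamma/2) per reference line

betas = beta_values(3, 12)                             # three-objective setting of Table 2
print(betas.min(), np.median(betas), betas.max())
```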
Figure 4:

Maximum β for Das and Dennis's (1998) reference direction settings for different objective problems of this study.


Table 1:
Average number of generations needed for termination in 30 runs for each instance of DTLZ2.
θ                     0.1       1         5         10        100
Reference Direction: (1,1,…,1)^T/M
DTLZ2-3               338.33    345.00    340.50    344.07    393.93
DTLZ2-5               301.27    305.37    324.50    331.23    351.80
Reference Direction: (a,a,…,a,0.9)^T, a = 0.19/(M-1)
DTLZ2-3               338.80    327.43    333.77    355.00    384.50
DTLZ2-5               329.20    326.13    324.57    332.17    356.40

3.2 Validation

To validate the effect of reducing θ on the performance of a PBI-metric-based EMO algorithm, we perform controlled simulations of the PBI-based NSGA-III on instances of DTLZ2 having three and five objectives. First, we eliminate the non-dominated sorting from the NSGA-III procedure to study the effect of PBI-metric-based selection alone, and we use a single reference direction vector, initializing a population around this reference direction but away from the corresponding Pareto-optimal solution. We use the PBI-metric with a prespecified θ to determine the survival of half the population from the merged population of parents and offspring. A population of size 20 is used. The resulting NSGA-III is terminated when any solution within a Euclidean objective distance of 0.01 from the targeted Pareto-optimal point is obtained. Table 1 presents the average number of generations over 30 independent runs needed for θ = 0.1, 1, 5, 10, and 100.
Figure 5:

Working of PBI-metric with minimal θ on non-convex PF problems.


When an equi-angled reference direction $(1,1,\ldots,1)^T/M$, making an equal angle with each objective axis, is used, the controlled NSGA-III performs the best on both the three- and five-objective instances of DTLZ2 with the smallest θ used in the study. The noise coming from the normalization of objectives has a smaller effect when θ is reduced, and the resulting algorithm works better. Please note that the Wilcoxon test has shown that the best result is significantly better than the others.

However, the latter part of this table indicates that when a skewed reference direction (closer to the $f_M$ objective axis: $(a,a,\ldots,a,0.9)^T$ with $a = 0.19/(M-1)$) is used, there exists an optimal θ value which does not correspond to the smallest θ considered. That is, the performance gets better with a smaller θ, but beyond a certain θ, any further reduction causes the performance to deteriorate. We explain this behavior of the controlled NSGA-III in the following paragraphs.

Consider Figure 5 for a two-objective minimization problem having a non-convex PF.

For a particular reference line (w(1)), the targeted Pareto-optimal point is A. Say, B and C are the two closest points in the population for this direction. For point B, the two PBI distances (d1 and d2) are marked. All the points on the line joining B and A have the same PBI-metric value (PBI(B) = d1+θd2) for a particular θ, leading to the following:
$$\mathrm{PBI}(B) = \mathrm{PBI}(A), \qquad OP + \theta\, BP = OA, \qquad \theta = \frac{OA - OP}{BP} = \frac{AP}{BP}, \qquad \text{that is,} \quad \theta = \cot\alpha.$$
(15)
Thus, for a given reference line and a point B making an angle α, there exists a minimum PBI penalty parameter θ = cot α below which the point A (the targeted Pareto-optimal point) is judged to be worse than the non-targeted point B. As the point B gets closer to the targeted point A, the angle α can be computed as the angle between the gradient of the PF at A and the reference direction w(1). For comparison, a contour of the PBI-metric for a higher θ (= 2 in this example) is shown as a red dashed line passing through B. For this higher θ, point A is judged to be better than B by the respective PBI-metric.

An interesting phenomenon happens for the point C, for which any positive θ would make point A better than point C. Thus, if the reference direction passes through the intermediate part of the PF, it is expected to have points around the reference direction (such as B and C); although points like B will not allow point A to be chosen for a small θ, points like C will make it possible to converge to A for any θ. This is why, for non-convex PF scenarios (such as DTLZ2), an equi-angled reference direction is able to find a near-A point with a very small θ, such as 0.1 (Table 1). Our sensitivity analysis predicted that a smaller θ handles the normalization sensitivity better. Thus, based on the sensitivity analysis performed and the above geometric argument in favor of a small θ with intermediate reference lines, NSGA-III with the PBI-metric having a very small θ produced the best result.

However, as the reference lines (w(2)) closer to the axis directions are considered, the gradient concept still determines the angle α, such as α' in Figure 5, leading to a minimum $\theta'_{\min} = \cot\alpha'$ for targeted Pareto-optimal points like D to be judged better than point E. Of course, compared with points like F, point D will be judged better for any positive θ, but since points like F lie on the extreme side of the reference line, there may not be many such points in the population. Therefore, an EMO algorithm will require a minimum θ ($= \theta'_{\min}$ in this case) to select a point close to D. This is why, in Table 1, we observed a relatively larger θ to work the best for the skewed reference direction.

The above discussions make the following aspects clear:

  1. Under normalization uncertainty (due to instability in the ideal and nadir point estimation), a smaller θ value in the PBI-metric introduces a smaller sensitivity. From this perspective, the use of the smallest θ (>0) is the best strategy.

  2. However, for non-convex PFs, the angle made by the tangent of the PF at the Pareto-optimal point and the corresponding reference direction determines a minimum required θ for a PBI-metric-based EMO algorithm to converge near the true Pareto-optimal point from the commonly available neighboring points. For intermediate reference directions, commonly available points usually exist all around the reference line, hence there is no lower bound on θ for points to find a way to reach the targeted Pareto-optimal point. But for near-boundary reference directions, points are not always available all around, and a finite minimal θ (= cot α) is needed for finding the targeted Pareto-optimal point.

We shall discuss the effect of the geometric lower bound on convex PFs later, but before we do that, we present results of NSGA-III and MOEA/D with PBI-metric on standard DTLZ and WFG problems, which have non-convex PFs, in support of our above arguments.

4 Experimental Validation

From our theoretical results on normalization sensitivity, it is expected that with a decrease in the θ value, the performance of both algorithms on non-convex problems should get better, but a lower bound on θ is expected for obtaining the extreme and boundary solutions. In this section, extensive simulations are conducted to validate this theory and identify the best values of θ when the PBI-metric is introduced into NSGA-III and MOEA/D.

A number of scalable MaOP test problems from the DTLZ family (Deb et al., 2005) and the WFG family (Huband et al., 2006) are used for this purpose. Problems DTLZ1-DTLZ4 and WFG5-WFG8 with the number of objectives M = 3, 5, 8, 10, and 15 are tested in this study. The number of decision variables is set as n = M+4 for DTLZ1, n = M+9 for DTLZ2-4, and n = M+19 for WFG5-8. The IGD metric (Bosman and Thierens, 2003) is used to measure the performance of the two algorithms. The intersection points of the decomposition vectors with the PF are used as the reference points for the IGD calculation (Li et al., 2015), and thus the number of reference points is the same as the number of decomposition vectors.
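For reference, a minimal sketch (our own implementation) of the IGD computation assumed here, where Z is the set of reference points on the PF and A is the obtained non-dominated objective set:

```python
import numpy as np

def igd(A, Z):
    """Inverted generational distance: the average, over reference points in Z,
    of the Euclidean distance to the nearest obtained objective vector in A."""
    A = np.asarray(A, dtype=float)
    Z = np.asarray(Z, dtype=float)
    dists = np.linalg.norm(Z[:, None, :] - A[None, :, :], axis=2)  # (|Z|, |A|)
    return dists.min(axis=1).mean()
```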

Table 2:
Population size and number of divisions in each objective axis for generating decomposition vectors.
Obj.    Div.    Reduction factor    PopSize
3       12      (1,1)               91
5       6       (1,1)               210
8       3,2     (1,0.5)             156
10      3,2     (1,0.5)             275
15      2,1     (1,0.5)             135
Table 3:
Number of generations for DTLZ1-4 and WFG5-8 problems.
Obj.    DTLZ1    DTLZ2    DTLZ3    DTLZ4    WFG5-8
3       400      250      1000     600      400
5       600      350      1000     1000     750
8       750      500      1000     1250     1500
10      1000     750      1500     2000     2000
15      1500     1000     2000     3000     3000

The general settings of NSGA-III, MOEA/D and their modifications are as follows:

  • Initialization of the reference points is kept the same as in the original NSGA-III paper (Deb and Jain, 2014). The population size N, the number of divisions H, and the respective reduction factor for the layer-wise Das and Dennis (1998) reference vectors are shown in Table 2.

  • The number of generations for DTLZ1-4, and WFG5-8 are shown in Table 3.

  • The SBX operator with pc=1 and ηc=30, and polynomial mutation with pm=1/n and ηm=20, are used in all experiments (a sketch of these two operators is given after this list).

  • For MOEA/D, T=20, δ=0.9, and nr=2 are used, following the suggestions of Zhang et al. (2010).
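As referenced in the variation-operator item above, the following is a compact sketch (our own, with bound handling simplified to clipping, which is one common choice) of the basic SBX crossover and polynomial mutation used with the listed parameters:

```python
import numpy as np

rng = np.random.default_rng()

def sbx(p1, p2, eta_c=30.0, pc=1.0, xl=0.0, xu=1.0):
    """Simulated binary crossover (SBX) on real-valued parent vectors."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    c1, c2 = p1.copy(), p2.copy()
    if rng.random() <= pc:
        u = rng.random(p1.shape)
        beta = np.where(u <= 0.5,
                        (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
        c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
        c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, xl, xu), np.clip(c2, xl, xu)

def polynomial_mutation(x, eta_m=20.0, pm=None, xl=0.0, xu=1.0):
    """Polynomial mutation with per-variable probability pm (default 1/n)."""
    x = np.asarray(x, float).copy()
    n = len(x)
    pm = 1.0 / n if pm is None else pm
    for i in range(n):
        if rng.random() <= pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            x[i] = x[i] + delta * (xu - xl)
    return np.clip(x, xl, xu)
```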

4.1 Experimental Studies on NSGA-III

First, we modify the selection procedure of the original NSGA-III by replacing the d2-based selection with the PBI-metric. Note that the original NSGA-III can be considered to have used the PBI-metric with θ=∞. For Modified-1 of NSGA-III, we use θ=100; for Modified-2, θ=5; for Modified-3, θ=1; and for Modified-4, θ=0.1. For points associated with the M axis directions, we use a large θ=10^6 so that the orthogonal distance to the axis line is effectively used as the fitness measure. We have observed that this stabilizes the overall normalization procedure; without this modification, the PBI-based NSGA-IIIs reported in Table 4 do not converge well.

Table 4 shows that a very small θ (= 0.1) does not produce the best results for most DTLZ problems. For the DTLZ1 problem, θ=5 performs the best, and for the other DTLZ problems, θ=1 performs the best. Clearly, a very high θ is good from the geometric consideration, but it is not a good choice from the sensitivity point of view. The good θ values of around 1 to 5 documented in the literature agree well with our controlled experimental study in Table 1.

Next, we apply the PBI-based NSGA-IIIs to the WFG family; the results are shown in Table 5. Here, θ=1 works best for most problems.

Both tables reveal an interesting result, which the original NSGA-III study (Deb and Jain, 2014) did not show. If the original NSGA-III's orthogonal distance metric is replaced with the PBI-metric having θ ≥ 1, the performance is better. The MOEA/DD (Li et al., 2015) and θ-DEA (Yuan et al., 2016) studies have reported similar observations. This remains an important result for NSGA-III researchers and users.

4.2 Experimental Studies on MOEA/D

Here, we provide results on the original MOEA/D and MOEA/D modifications on the DTLZ and WFG problems. In the original MOEA/D, PBI selection with θ=5 was suggested. In this study, we include three modifications: Modified-1 with θ=1, Modified-2 with θ=10, and Modified-3 with θ=100. Tables 6 and 7 present the IGD metric values on DTLZ and WFG problems, respectively.

Clearly, θ=5 works the best for both families of problems. Again, larger θ values do not produce good results on the WFG problems, and a too small θ (θ=1, which worked very well for the NSGA-III procedure) does not work well on either family of problems with MOEA/D. Similar results were reported in another study (Ishibuchi et al., 2016a). We describe the reason why MOEA/D requires a larger θ compared with the PBI-based NSGA-III in the following paragraph.

Consider Figure 5 again. In NSGA-III, there is no neighborhood concept, and a point is associated with the reference line closest to it. For a given θ, a non-dominated point closer to a reference line will be chosen by the PBI-metric. On the other hand, in MOEA/D, points within a prespecified neighborhood (usually of size T=20) are used for the PBI-metric computation. Thus, in MOEA/D, points are expected to be relatively far away from the reference line used for their PBI-metric computations, compared with the same in NSGA-III. For the reference line w(1), if point B is associated with it in NSGA-III, for MOEA/D it can be point K, which is in the neighborhood of the reference line. In comparison with point B, which is closer to the targeted Pareto-optimal point A, point K will be judged better than B for a large α (or a small θ). A relatively large θ (about θ=2, meaning α = cot⁻¹(2) ≈ 26.57 degrees) is needed to establish that point B is better than point K. However, if points like K are not allowed to be associated with the reference line w(1), the above situation does not arise, and a small θ, found adequate in our sensitivity analysis, would produce better normalization accuracy. While other researchers (Ishibuchi et al., 2015, 2016b) have made similar arguments for a relatively large θ requirement with MOEA/D, our argument comes from two separate, opposing requirements: a smaller θ is better for reducing sensitivity due to normalization uncertainty, and a larger θ is needed for preferring points closer to the reference line.

Table 4:
The best, mean, median, and worst IGD-metric values of the original and modified NSGA-IIIs in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of DTLZ1-4. The best performance is highlighted in boldface, and performances statistically similar to the best (95% confidence, Wilcoxon rank-sum test) are marked in italics.
Problem    Stat.    Original   Mod.-1   Mod.-2   Mod.-3   Mod.-4      Problem    Original   Mod.-1   Mod.-2   Mod.-3   Mod.-4
                    θ=∞        θ=100    θ=5      θ=1      θ=0.1                  θ=∞        θ=100    θ=5      θ=1      θ=0.1
DTLZ1-3 Best 0.000734 0.000756 0.000471 0.006653 0.026794 DTLZ2-3 0.001585 0.001209 0.001188 0.001034 0.001691 
 Mean 0.003571 0.005882 0.002884 0.008847 0.029435  0.002890 0.002235 0.002379 0.001722 0.002490 
 Median 0.001694 0.002816 0.001953 0.007895 0.029897  0.002295 0.002140 0.001817 0.001582 0.001849 
 Worst 0.019658 0.021075 0.019090 0.015798 0.030401  0.005744 0.004363 0.005317 0.002842 0.006739 
DTLZ1-5 Best 0.000660 0.000739 0.000534 0.046282 0.081161 DTLZ2-5 0.004654 0.003640 0.002631 0.002490 0.244405 
 Mean 0.004007 0.002480 0.001086 0.048948 0.085159  0.006143 0.005608 0.004329 0.003294 0.249664 
 Median 0.001624 0.000959 0.000855 0.048546 0.082673  0.006367 0.005503 0.003787 0.003076 0.249836 
 Worst 0.031360 0.022896 0.003019 0.053315 0.114747  0.007277 0.007403 0.013922 0.005087 0.255702 
DTLZ1-8 Best 0.002422 0.002908 0.001862 0.114866 0.128819 DTLZ2-8 0.015160 0.012683 0.007214 0.006679 0.404300 
 Mean 0.007591 0.004064 0.004363 0.117540 0.132351  0.017717 0.014630 0.009075 0.009046 0.427947 
 Median 0.004574 0.003530 0.002835 0.117258 0.132163  0.017164 0.014553 0.009331 0.009192 0.430195 
 Worst 0.024133 0.007894 0.011184 0.125629 0.135879  0.020411 0.016477 0.010548 0.009958 0.444921 
DTLZ1-10 Best 0.002494 0.002268 0.002221 0.133002 0.143090 DTLZ2-10 0.015725 0.012952 0.007758 0.007873 0.454973 
 Mean 0.004382 0.004253 0.002708 0.138898 0.144640  0.016915 0.014616 0.009386 0.008852 0.469464 
 Median 0.003995 0.003310 0.002561 0.139152 0.144179  0.016776 0.014268 0.009167 0.008749 0.469991 
 Worst 0.006952 0.014363 0.003609 0.140878 0.150614  0.019258 0.018098 0.011223 0.010164 0.479207 
DTLZ1-15 Best 0.003909 0.002664 0.002557 0.248012 0.230312 DTLZ2-15 0.016626 0.012934 0.010570 0.009778 0.838016 
 Mean 0.006412 0.005008 0.004495 0.275086 0.274735  0.019394 0.015575 0.012302 0.011944 0.894165 
 Median 0.005406 0.004495 0.003574 0.272007 0.278855  0.019415 0.015695 0.012420 0.011647 0.887618 
 Worst 0.012014 0.010818 0.018819 0.303847 0.297285  0.023279 0.018497 0.015053 0.013622 0.944622 
DTLZ3-3 Best 0.001107 0.001673 0.001640 0.000568 0.000766 DTLZ4-3 0.000193 0.000190 0.000187 0.000196 0.000319 
 Mean 0.003720 0.007151 0.004915 0.002676 0.002593  0.000485 0.000504 0.000478 0.000367 0.063949 
 Median 0.003478 0.006728 0.003343 0.001802 0.002764  0.000336 0.000271 0.000308 0.000275 0.000492 
 Worst 0.006688 0.025029 0.011614 0.009888 0.005039  0.001303 0.001821 0.001715 0.000863 0.950334 
DTLZ3-5 Best 0.003174 0.001428 0.001787 0.001833 0.232564 DTLZ4-5 0.000355 0.000351 0.000324 0.000347 0.244238 
 Mean 0.010607 0.022190 0.005920 0.003408 0.252875  0.000632 0.000426 0.000403 0.000461 0.254399 
 Median 0.008207 0.008718 0.004307 0.003263 0.254534  0.000594 0.000390 0.000406 0.000433 0.256961 
 Worst 0.033873 0.204320 0.029554 0.005683 0.257728  0.000934 0.000612 0.000482 0.000687 0.259750 
DTLZ3-8 Best 0.020622 0.014013 0.010982 0.007072 0.411993 DTLZ4-8 0.003301 0.003184 0.002604 0.002435 0.447962 
 Mean 0.034544 0.032622 0.018080 0.016265 0.430882  0.004128 0.003788 0.003195 0.003350 0.455380 
 Median 0.030376 0.023475 0.015726 0.015575 0.429256  0.004031 0.003728 0.003037 0.003448 0.456836 
 Worst 0.055018 0.079593 0.030327 0.025870 0.475743  0.006171 0.004920 0.004536 0.004193 0.460993 
DTLZ3-10 Best 0.009954 0.007245 0.006661 0.005799 0.455437 DTLZ4-10 0.003750 0.003515 0.003088 0.003009 0.481555 
 Mean 0.017469 0.011962 0.008510 0.008746 0.470103  0.004491 0.004192 0.003466 0.003407 0.490259 
 Median 0.015052 0.011979 0.007851 0.007257 0.471480  0.004503 0.004256 0.003467 0.003455 0.490497 
 Worst 0.042242 0.016739 0.013779 0.021624 0.482391  0.005261 0.004928 0.003866 0.003779 0.495206 
DTLZ3-15 Best 0.017209 0.014163 0.011105 0.010687 0.539536 DTLZ4-15 0.005185 0.005814 0.004957 0.004419 0.616883 
 Mean 0.033811 0.027357 0.019034 0.095422 0.637558  0.006616 0.007252 0.006049 0.005553 0.620320 
 Median 0.029908 0.025238 0.017343 0.015982 0.635302  0.006418 0.006947 0.005957 0.005579 0.620762 
 Worst 0.088101 0.055753 0.037096 0.633880 0.788643  0.008431 0.009042 0.008128 0.006791 0.620817 
Table 5:
The best, mean, median, and worst IGD-metric values of the original and modified NSGA-IIIs in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of WFG5-8. The best performance is highlighted in boldface, and performances statistically similar to the best (95% confidence, Wilcoxon rank-sum test) are marked in italics.
Problem    Stat.    Original   Mod.-1   Mod.-2   Mod.-3   Mod.-4      Problem    Original   Mod.-1   Mod.-2   Mod.-3   Mod.-4
                    θ=∞        θ=100    θ=5      θ=1      θ=0.1                  θ=∞        θ=100    θ=5      θ=1      θ=0.1
WFG5-3 Best 0.029529 0.029400 0.029673 0.029609 0.029224 WFG6-3 0.024446 0.025722 0.027581 0.024734 0.023945 
 Mean 0.030250 0.030273 0.030256 0.030127 0.029494  0.031930 0.031572 0.031606 0.030001 0.028295 
 Median 0.030140 0.030302 0.030159 0.030051 0.029440  0.032103 0.032828 0.031803 0.029944 0.028056 
 Worst 0.030942 0.030885 0.031826 0.031215 0.029963  0.039725 0.036509 0.036104 0.035634 0.032418 
WFG5-5 Best 0.031668 0.031195 0.031085 0.031014 0.240909 WFG6-5 0.027978 0.029140 0.026478 0.025098 0.251058 
 Mean 0.032158 0.032182 0.031858 0.031734 0.248648  0.034956 0.033575 0.032453 0.030111 0.255444 
 Median 0.032099 0.032284 0.031788 0.031670 0.248194  0.035416 0.033452 0.032374 0.030280 0.256176 
 Worst 0.032894 0.032754 0.033013 0.032606 0.257383  0.041662 0.037879 0.036056 0.035305 0.259884 
WFG5-8 Best 0.031288 0.031239 0.031036 0.031076 0.391804 WFG6-8 0.027032 0.025368 0.023572 0.021864 0.420389 
 Mean 0.031812 0.031665 0.031592 0.031415 0.437451  0.031845 0.033541 0.030302 0.029402 0.434091 
 Median 0.031686 0.031660 0.031576 0.031316 0.436726  0.030489 0.034343 0.029681 0.027965 0.434548 
 Worst 0.032661 0.032470 0.032545 0.032146 0.452130  0.039457 0.038831 0.035358 0.038290 0.442667 
WFG5-10 Best 0.041319 0.031603 0.031567 0.031531 0.460240 WFG6-10 0.025299 0.025715 0.023929 0.024122 0.460777 
 Mean 0.042758 0.032029 0.031917 0.031858 0.477337  0.030770 0.030449 0.030660 0.029187 0.475042 
 Median 0.042510 0.032128 0.031946 0.031829 0.480024  0.031564 0.029652 0.030927 0.028530 0.475120 
 Worst 0.045012 0.032573 0.032281 0.032552 0.500939  0.036398 0.037461 0.035442 0.035803 0.482776 
WFG5-15 Best 0.038796 0.032430 0.031934 0.031735 0.616944 WFG6-15 0.030314 0.034490 0.028455 0.021915 0.913095 
 Mean 0.040314 0.032955 0.032591 0.032419 0.690253  0.040221 0.039419 0.038068 0.033352 0.944621 
 Median 0.040450 0.032785 0.032578 0.032426 0.692224  0.041333 0.038229 0.038004 0.033560 0.934805 
 Worst 0.041108 0.033820 0.033805 0.032956 0.742976  0.046922 0.044174 0.045214 0.041015 1.070096 
WFG7-3 Best 0.005949 0.006704 0.005661 0.002783 0.001670 WFG8-3 0.052454 0.051143 0.051351 0.050267 0.071721 
 Mean 0.007682 0.007751 0.006557 0.003585 0.001947  0.057769 0.057056 0.055285 0.053923 0.074531 
 Median 0.007755 0.007789 0.006573 0.003509 0.001987  0.057582 0.056222 0.054996 0.052994 0.074732 
 Worst 0.010080 0.008752 0.007788 0.004441 0.002413  0.067473 0.064808 0.059258 0.057896 0.077962 
WFG7-5 Best 0.007348 0.006930 0.006517 0.003040 0.241858 WFG8-5 0.087143 0.087212 0.086968 0.086623 0.223901 
 Mean 0.008860 0.008984 0.007452 0.003480 0.252648  0.089531 0.089524 0.089359 0.089677 0.229461 
 Median 0.008633 0.009204 0.007180 0.003429 0.253021  0.089409 0.089287 0.088923 0.089492 0.229278 
 Worst 0.010378 0.010889 0.008766 0.004193 0.257304  0.094001 0.092952 0.093617 0.092791 0.236103 
WFG7-8 Best 0.006343 0.006749 0.004908 0.002818 0.419294 WFG8-8 0.152547 0.151489 0.150115 0.159755 0.401655 
 Mean 0.008456 0.008992 0.006630 0.003747 0.433592  0.155987 0.155832 0.155609 0.162495 0.422773 
 Median 0.007908 0.009479 0.005947 0.003387 0.434240  0.156411 0.156395 0.155759 0.161491 0.420867 
 Worst 0.011405 0.012219 0.010715 0.007866 0.445217  0.159299 0.160778 0.159517 0.168571 0.455499 
WFG7-10 Best 0.007980 0.006930 0.005977 0.004153 0.453290 WFG8-10 0.197249 0.196512 0.199314 0.214325 0.454045 
 Mean 0.008633 0.008548 0.007249 0.004466 0.474685  0.198996 0.199609 0.201493 0.216943 0.474604 
 Median 0.008347 0.008274 0.007147 0.004454 0.473806  0.199168 0.200145 0.201573 0.217071 0.473519 
 Worst 0.009577 0.011092 0.009663 0.005071 0.486307  0.200557 0.202426 0.203408 0.218581 0.504967 
WFG7-15 Best 0.007680 0.007540 0.009822 0.009053 0.783148 WFG8-15 0.211817 0.186587 0.129592 0.264042 0.921889 
 Mean 0.011139 0.010832 0.011402 0.011178 0.879032  0.244624 0.237459 0.249798 0.340657 1.019098 
 Median 0.010820 0.011151 0.011073 0.011261 0.874181  0.247754 0.245924 0.257058 0.296750 1.020007 
 Worst 0.013924 0.012930 0.014133 0.014361 0.964103  0.261895 0.253243 0.277220 0.512158 1.116144 
Table 6:
The best, mean, median, and worst IGD-metric values of the original and modified MOEA/Ds in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of DTLZ1-4. The best performance is highlighted in boldface, and performances statistically similar to the best (95% confidence, Wilcoxon rank-sum test) are marked in italics.
Problem    Stat.    Modified-1   Original   Modified-2   Modified-3      Problem    Modified-1   Original   Modified-2   Modified-3
                    θ=1          θ=5        θ=10         θ=100                      θ=1          θ=5        θ=10         θ=100
DTLZ1-3 Best 0.032317 0.000773 0.000701 0.001680 DTLZ2-3 0.044246 0.000961 0.000954 0.002791 
 Mean 0.118949 0.003749 0.006976 0.009008  0.226264 0.002264 0.002480 0.006310 
 Median 0.130941 0.002560 0.003639 0.005804  0.046261 0.001961 0.002014 0.006566 
 Worst 0.173449 0.014123 0.038214 0.039429  0.724020 0.004880 0.005007 0.009299 
DTLZ1-5 Best 0.098663 0.000217 0.000366 0.000330 DTLZ2-5 0.540920 0.001577 0.004477 0.005293 
 Mean 0.145714 0.001681 0.005267 0.003738  0.768006 0.003258 0.005657 0.007774 
 Median 0.139377 0.000773 0.001057 0.001881  0.745034 0.002606 0.005051 0.007141 
 Worst 0.215779 0.007717 0.047719 0.023780  0.997619 0.007489 0.008528 0.015630 
DTLZ1-8 Best 0.225628 0.002385 0.001206 0.001291 DTLZ2-8 0.947509 0.003655 0.003921 0.006897 
 Mean 0.263245 0.005090 0.005722 0.005117  1.050276 0.006290 0.006285 0.008610 
 Median 0.263946 0.003392 0.002295 0.002434  1.063836 0.005953 0.006007 0.008332 
 Worst 0.309882 0.013687 0.038669 0.036270  1.171537 0.009212 0.013900 0.012077 
DTLZ1-10 Best 0.208833 0.001353 0.000775 0.000710 DTLZ2-10 0.949512 0.001500 0.003287 0.005146 
 Mean 0.244095 0.002895 0.002119 0.003310  1.040312 0.003480 0.004537 0.007263 
 Median 0.244483 0.002059 0.001032 0.001040  1.037126 0.002377 0.004337 0.006670 
 Worst 0.264490 0.011703 0.014472 0.032996  1.097058 0.009007 0.006672 0.012776 
DTLZ1-15 Best 0.296512 0.119989 0.005399 0.002292 DTLZ2-15 1.164959 0.012674 0.010121 0.008700 
 Mean 0.338193 0.169557 0.030588 0.004505  1.208323 0.127634 0.015971 0.012940 
 Median 0.340922 0.165516 0.017494 0.003782  1.208998 0.016643 0.014761 0.013934 
 Worst 0.367629 0.226773 0.114660 0.015573  1.234267 0.458311 0.036413 0.017599 
DTLZ3-3 Best 0.035985 0.001302 0.001937 0.007935 DTLZ4-3 0.045197 0.000111 0.000095 0.000133 
 Mean 0.078660 0.006652 0.009230 0.044883  0.284181 0.037973 0.098882 0.071382 
 Median 0.071399 0.005173 0.005237 0.026366  0.046572 0.000152 0.000183 0.000737 
 Worst 0.140144 0.020688 0.034629 0.111482  0.662430 0.530575 0.950334 0.530575 
DTLZ3-5 Best 0.392666 0.000645 0.001988 0.001670 DTLZ4-5 0.464082 0.000096 0.000090 0.000113 
 Mean 0.704282 0.012377 0.019656 0.026446  0.544112 0.065689 0.023019 0.000173 
 Median 0.703783 0.003493 0.004865 0.014986  0.532719 0.000129 0.000134 0.000182 
 Worst 1.099575 0.084039 0.098929 0.130457  0.707449 0.620124 0.343247 0.000248 
DTLZ3-8 Best 0.908195 0.004010 0.004748 0.005186 DTLZ4-8 0.612986 0.000910 0.000938 0.001192 
 Mean 1.008760 0.017464 0.012747 0.022646  0.684458 0.171258 0.189551 0.116438 
 Median 0.995763 0.005631 0.008103 0.013141  0.669895 0.221417 0.221087 0.002289 
 Worst 1.064035 0.085676 0.046172 0.099292  0.830480 0.407835 0.580249 0.407791 
DTLZ3-10 Best 0.916240 0.001621 0.001906 0.001678 DTLZ4-10 0.649495 0.000660 0.000702 0.000974 
 Mean 1.034194 0.005139 0.010946 0.009009  0.691298 0.048643 0.048686 0.037002 
 Median 1.033373 0.002241 0.002465 0.004983  0.696729 0.001127 0.001098 0.001273 
 Worst 1.128520 0.028837 0.091690 0.039975  0.732495 0.179694 0.180242 0.180111 
DTLZ3-15 Best 1.125717 0.008235 0.007569 0.006334 DTLZ4-15 0.653794 0.005859 0.009518 0.008665 
 Mean 1.158706 0.013389 0.009382 0.008923  0.767025 0.177320 0.168551 0.231965 
 Median 1.157900 0.010716 0.009356 0.007405  0.775593 0.208535 0.118535 0.207945 
 Worst 1.211416 0.026114 0.012116 0.027146  0.900289 0.308762 0.402313 0.593047 
Table 7:
The best, mean, median, and worst IGD-metric values of the original and modified MOEA/Ds in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of WFG5-8. The best performance is highlighted in boldface and clearly occurs for θ=5. Performances that are statistically similar to the best-performing θ (Wilcoxon rank-sum test, 95% confidence) are marked in italics.
Problem  Stat.  Modified-1 (θ=1)  Original (θ=5)  Modified-2 (θ=10)  Modified-3 (θ=100)    Problem  Modified-1 (θ=1)  Original (θ=5)  Modified-2 (θ=10)  Modified-3 (θ=100)
WFG5-3 Best 0.058012 0.035019 0.039847 0.057417 WFG6-3 0.059963 0.038593 0.049642 0.072146 
 Mean 0.063229 0.038251 0.045543 0.068114  0.066125 0.046696 0.061066 0.088153 
 Median 0.066617 0.036876 0.044645 0.067214  0.066113 0.046628 0.059189 0.086152 
 Worst 0.067994 0.046541 0.056928 0.077213  0.073754 0.055645 0.078238 0.099428 
WFG5-5 Best 0.426187 0.031833 0.032654 0.033200 WFG6-5 0.436428 0.026400 0.033990 0.030833 
 Mean 0.959232 0.032476 0.033691 0.034546  0.707170 0.035699 0.038671 0.039950 
 Median 1.086249 0.032369 0.033633 0.034675  0.548173 0.036108 0.038552 0.040259 
 Worst 1.087762 0.033200 0.035587 0.036056  1.065790 0.042880 0.044163 0.047554 
WFG5-8 Best 1.200961 0.032300 0.039130 0.038942 WFG6-8 1.217370 0.031013 0.038567 0.040274 
 Mean 1.206425 0.037229 0.040681 0.046567  1.219570 0.038217 0.044717 0.053827 
 Median 1.206899 0.037902 0.039877 0.044023  1.219983 0.037139 0.046626 0.052299 
 Worst 1.207291 0.039393 0.045732 0.062536  1.221085 0.048355 0.053724 0.069147 
WFG5-10 Best 1.245058 0.032556 0.033239 0.038626 WFG6-10 1.045015 0.028313 0.032725 0.040688 
 Mean 1.246439 0.033400 0.036772 0.162299  1.224909 0.033715 0.037000 0.049201 
 Median 1.246582 0.033184 0.036524 0.044699  1.255957 0.034477 0.037481 0.048038 
 Worst 1.246906 0.034595 0.044603 0.931273  1.258195 0.041322 0.042418 0.061626 
WFG5-15 Best 1.276872 0.032389 0.032507 0.032747 WFG6-15 1.318290 0.020075 0.024078 0.021070 
 Mean 1.309315 0.032691 0.033271 0.034292  1.318996 0.030735 0.032210 0.032619 
 Median 1.313868 0.032694 0.032895 0.033956  1.319045 0.031259 0.033109 0.035831 
 Worst 1.313895 0.032887 0.036098 0.036820  1.319863 0.037763 0.047023 0.041301 
WFG7-3 Best 0.034726 0.024495 0.041458 0.060620 WFG8-3 0.112627 0.056685 0.062796 0.083755 
 Mean 0.095201 0.032216 0.050113 0.076284  0.123671 0.062120 0.072063 0.091958 
 Median 0.035853 0.030759 0.051123 0.075015  0.121110 0.062350 0.071377 0.092076 
 Worst 0.871333 0.042331 0.062867 0.095124  0.136199 0.069957 0.082827 0.105585 
WFG7-5 Best 0.642057 0.007569 0.008858 0.011388 WFG8-5 0.498848 0.062842 0.065036 0.066105 
 Mean 0.900268 0.009141 0.011266 0.013133  0.909316 0.064966 0.066433 0.067494 
 Median 0.869645 0.008923 0.011029 0.013279  0.880768 0.064804 0.066131 0.067509 
 Worst 1.120592 0.010945 0.014423 0.016112  1.120437 0.067035 0.068652 0.070306 
WFG7-8 Best 1.193506 0.007310 0.009988 0.023971 WFG8-8 1.085678 0.100018 0.102191 0.107071 
 Mean 1.224362 0.013771 0.023376 0.035535  1.196606 0.103376 0.106547 0.110409 
 Median 1.227327 0.013223 0.023101 0.035279  1.224486 0.102855 0.106960 0.110550 
 Worst 1.228107 0.021194 0.039043 0.047604  1.227472 0.106997 0.110635 0.113671 
WFG7-10 Best 1.253165 0.005315 0.007624 0.014213 WFG8-10 1.041080 0.119690 0.120717 0.121705 
 Mean 1.262560 0.007348 0.010710 0.021339  1.230719 0.335790 0.121766 0.122861 
 Median 1.263507 0.007107 0.010150 0.021109  1.256075 0.120873 0.121492 0.122425 
 Worst 1.263726 0.008834 0.015068 0.029303  1.263566 1.257932 0.123412 0.126153 
WFG7-15 Best 1.322570 0.005069 0.005301 0.007104 WFG8-15 1.234265 0.190013 0.166814 0.166870 
 Mean 1.322674 0.006502 0.007844 0.009706  1.294629 0.633678 0.492696 0.471995 
 Median 1.322679 0.006397 0.007665 0.009983  1.319208 0.668265 0.626492 0.599626 
 Worst 1.322746 0.008161 0.009994 0.013068  1.320879 0.691573 0.689794 0.678283 
Figure 6:

Median IGD values of the original and PBI-based NSGA-IIIs with different θ values from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CDTLZ2 indicate that θ=∞ (the original NSGA-III) works the best. The best-performing θ and those statistically similar to it are shown in gold color.

Figure 7:

Median IGD values of the original and PBI-based NSGA-IIIs with different θ values from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of WFG2 indicate that θ=∞ (the original NSGA-III) works the best.

4.3 Problems with a Convex PF

Next, we investigate whether the above conclusions hold for problems having a convex PF. For this purpose, we consider two problems – Convex DTLZ2 (CDTLZ2) (Deb and Jain, 2014) and WFG2. All parameters are the same as before, except that for CDTLZ2 we use 250, 750, 2,000, 4,000, and 4,500 generations for M=3, 5, 8, 10, and 15 objectives, respectively. For WFG2, the same number of generations as for the other WFG problems is used.

Figures 6 and 7 show the median IGD values of the PBI-based NSGA-III procedure with different θ values on the CDTLZ2 and WFG2 problems, respectively. The best-performing method is shown in gold color, and the best and worst IGD values over 15 runs are indicated by red bounds. It can be observed that θ=∞ (that is, the original NSGA-III procedure) performs the best for both problems from 3 up to 15 objectives.

We now present the results of MOEA/D-PBI on the above two convex problems in Figures 8 and 9. We again observe that a large θ works the best for both convex problems.
Figure 8:

Median IGD values of MOEA/Ds with different θ values from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CDTLZ2 indicate that θ=100 works the best.

Figure 9:

Median IGD values of MOEA/Ds with different θ values from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of WFG2 indicate that θ=100 works the best.

It is quite clear from the above four figures on convex problems that both PBI-based NSGA-III and MOEA/D perform better with a large θ (∞ or 100), which is contrary to the best θ values (1 to 5) observed for the non-convex DTLZ and WFG5-8 problems. To understand the reason for this behavior, let us consider Figure 10. While for points close to intermediate reference lines, such as w(1) in the figure, the argument for a small θ is still valid for a comparison with points like C, it is the boundary reference lines that demand a larger θ value. Consider an associated point G which is close to the f2-axis. The angle (αe) made by the line joining point G with the targeted point Y and the axis line is very small. This demands a large θ, since for a small αe, cot αe is large. In the limit (when G gets close to Y), the angle made by the tangent line at the Pareto-optimal point Y dictates the minimum required αe. For the convex DTLZ2 problem, this angle αe is zero at the boundary; hence a large θ is needed to obtain the boundary points of the convex DTLZ2 problem. For many-objective problems, there are increasingly more such boundary reference lines, and a large θ is needed to ensure finding the corresponding boundary points.
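To make the geometric requirement concrete, the following minimal Python sketch (our own illustration, not code from any of the compared algorithms) evaluates the lower bound θ ≥ cot(αe) for a few angles between the PF tangent and a reference line. As αe shrinks toward zero, which is exactly what happens near the boundary of a convex PF, the required penalty grows without bound.

```python
import numpy as np

# Minimum PBI penalty parameter implied by the geometric argument above:
# for an angle alpha between the PF tangent and a reference line,
# at least theta = cot(alpha) is needed to prefer the targeted point.
for alpha_deg in [45.0, 20.0, 10.0, 5.0, 1.0, 0.1]:
    alpha = np.radians(alpha_deg)
    theta_min = 1.0 / np.tan(alpha)  # cot(alpha)
    print(f"alpha = {alpha_deg:5.1f} deg  ->  minimum theta = {theta_min:8.2f}")
```

For αe = 45 degrees a penalty of about 1 suffices, but for αe = 1 degree the bound already exceeds 57, illustrating why θ = 100 (or larger) is needed for the boundary reference lines of convex problems.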
Figure 10:

Working of PBI-metric with minimal θ on convex PF problems.

Figure 11:

Solutions obtained by NSGA-III with θ=5 and θ=100 for the 3-objective instance of CDTLZ2.

To support our argument, we show the points obtained by PBI-based NSGA-III on the three-objective CDTLZ2 problem with two different θ values (θ=5 and 100) in Figures 11a and 11b, respectively. It is clear that although intermediate points are always found, the boundary points are not found for the small θ value, whereas for θ=100 all boundary points are closely found. A similar observation is made with the MOEA/D-PBI method with the same two θ values in Figures 12a and 12b, respectively. MOEA/D with θ=100 is able to find points close to the boundary, but due to the use of a large neighborhood size in MOEA/D, the search cannot precisely focus on the targeted points.
Figure 12:

Solutions obtained by MOEA/D with θ=5 and θ=100 for the 3-objective instance of CDTLZ2.

Figure 13:

Median IGD values of the original and PBI-based NSGA-IIIs with boundary reference lines removed from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CDTLZ2 indicate that θ=5 works the best.

To further support our argument for the higher θ requirement for boundary points, we remove all boundary Das and Dennis reference points, except the M extreme points, and apply both PBI-based NSGA-III and MOEA/D to 3- to 15-objective convex DTLZ2 problems, as sketched below. Figures 13 and 14 present the median IGD values from 15 runs. The boundary Pareto-optimal points are also removed from the IGD computation. It is clear from both figures that a relatively small θ (5 or 10) now performs the best, as the boundary points, which require a large θ, are no longer required to be found, thereby supporting our argument.
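For completeness, a minimal sketch of this setup is given below; the function names are our own, and the construction assumes the standard Das and Dennis (1998) simplex-lattice points with p divisions, from which all boundary points (those with at least one zero component) are dropped except the M axis points.

```python
import itertools
import numpy as np

def das_dennis_points(M, p):
    """Das and Dennis (1998) structured reference points on the unit simplex
    with p divisions along each objective axis."""
    points = []
    for cuts in itertools.combinations(range(p + M - 1), M - 1):
        # Stars and bars: gaps between the cut positions form a composition of p.
        coords = np.diff([-1] + list(cuts) + [p + M - 1]) - 1
        points.append(coords / p)
    return np.array(points)

def remove_boundary_lines(points):
    """Keep only interior reference points (all components strictly positive)
    plus the M extreme (axis) points, as in the boundary-removal experiment."""
    M = points.shape[1]
    interior = points[np.all(points > 1e-12, axis=1)]
    return np.vstack([interior, np.eye(M)])

# Example: 3 objectives with 12 divisions give 91 points; 55 interior points
# plus the 3 extreme points are retained.
ref = das_dennis_points(3, 12)
print(ref.shape, remove_boundary_lines(ref).shape)
```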

4.4 Problems with a Mixed PF

To verify the above theoretical conclusion in a more general case, we create a new problem called CCDTLZ2, based on DTLZ2, with a mixed PF (i.e., part of the PF is convex and part is concave). If the last objective value fM of DTLZ2 is more than 0.3827(1+g) (here, g is the same as in the original DTLZ2 problem; Deb et al. (2005)), we apply the following mapping:
fi ← fi^4 (i = 1, 2, …, M−1),    fM ← fM^2.
Figure 15 illustrates the mixed PF in the three-dimensional objective space. We investigate the performance of the PBI-based NSGA-III procedure with different θ values on the created CCDTLZ2 problem with M=3, 5, 8, 10, and 15 objectives; all parameters are kept the same as for CDTLZ2.
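A minimal sketch of this objective mapping is shown below; it is our own illustration of the construction just described, assuming f holds the M DTLZ2 objective values of a solution and g its DTLZ2 distance-function value.

```python
import numpy as np

def ccdtlz2_from_dtlz2(f, g):
    """Apply the CCDTLZ2 mapping to a DTLZ2 objective vector f: the convex
    (CDTLZ2-style) transformation is used only where the last objective
    exceeds 0.3827*(1+g); elsewhere the concave DTLZ2 front is kept."""
    f = np.asarray(f, dtype=float).copy()
    if f[-1] > 0.3827 * (1.0 + g):
        f[:-1] = f[:-1] ** 4   # f_i <- f_i^4 for i = 1, ..., M-1
        f[-1] = f[-1] ** 2     # f_M <- f_M^2
    return f
```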
Figure 14:

Median IGD values of MOEA/D with boundary reference lines removed from 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CDTLZ2 indicate that θ=5 and 10 work better than other θ values.

Figure 15:

Illustration of the mixed PF of the created CCDTLZ2 problem in three-dimensional objective space.

Table 8 presents the mean and median IGD-metric values of NSGA-III with different θ values in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CCDTLZ2. It can be seen that NSGA-III with θ=∞ has the best performance on all the test problems. Although the concave part of the mixed PF requires a relatively small θ value, an extremely large θ value is still needed for the convex part. Thus, NSGA-III with θ=∞ is still the best optimizer for these test problems with mixed PFs. The simulation results further verify our theory-based conclusion.

Table 8:
The mean and median IGD-metric values of NSGA-III with different θ values in 15 independent runs for 3-, 5-, 8-, 10-, and 15-objective instances of CCDTLZ2. The best performance is highlighted in boldface.
Obj.          θ=0.1      θ=1        θ=5        θ=10       θ=∞
3   Mean      0.058516   0.057847   0.041657   0.037129   0.031482
    Median    0.058604   0.057822   0.041527   0.036852   0.031933
5   Mean      0.216665   0.095054   0.079638   0.054185   0.046100
    Median    0.218181   0.094940   0.079743   0.055063   0.046051
8   Mean      0.376227   0.069905   0.066995   0.062922   0.045747
    Median    0.377278   0.069720   0.066886   0.063449   0.045744
10  Mean      0.418004   0.057185   0.055125   0.054785   0.038789
    Median    0.418619   0.057119   0.054967   0.054673   0.038825
15  Mean      0.643232   0.560948   0.569747   0.052623   0.021879
    Median    0.614609   0.574227   0.574475   0.055552   0.021697

The above extensive study reveals one important aspect of PBI-metric-based EMO methods: the choice of θ must be problem dependent. For problems with a purely concave PF, the smallest angle (α) between the tangent plane of the PF and a reference line is usually large (by geometric properties); hence a small θ (around 1 to 5) is the best choice from the point of view of sensitivity to normalization instability. For problems having a convex PF, the smallest angle (α) can be small, especially for boundary reference lines, requiring a relatively large θ. Although a large θ is highly sensitive to normalization instability, the geometry of the front does not allow a small θ to work well for finding the boundary points. Since the convexity (or non-convexity) of the PF is not usually known a priori, these conclusions suggest a more efficient PBI-based approach in which a different θ can be used for different reference lines: a large θ for boundary reference lines and a small θ for intermediate lines. We are currently pursuing such a dynamic θ-update for PBI-metric-based EMO algorithms. A modified PBI with a curved contour line (such as PBI = d1 + θ·d2^k with k > 1) can be another remedy for this issue.
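The following minimal Python sketch (ours, not the official code of either algorithm) shows how these two ideas could be prototyped: a generic PBI scalarization that also accepts the curved-contour exponent k, and a simple, hypothetical rule that assigns a large θ to boundary reference lines and a small θ to intermediate ones.

```python
import numpy as np

def pbi(f_norm, w, theta, k=1.0):
    """PBI scalarization of a normalized objective vector f_norm for a
    reference direction w: d1 is the distance along w, d2 the distance
    perpendicular to it; k > 1 gives the curved-contour variant
    d1 + theta * d2**k discussed above."""
    f_norm = np.asarray(f_norm, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)
    d1 = float(np.dot(f_norm, w))
    d2 = float(np.linalg.norm(f_norm - d1 * w))
    return d1 + theta * d2 ** k

def line_dependent_theta(w, theta_interior=5.0, theta_boundary=100.0, eps=1e-12):
    """Hypothetical line-dependent penalty: boundary reference lines (at least
    one component near zero) get a large theta, intermediate lines a small one."""
    return theta_boundary if np.min(w) < eps else theta_interior

# A boundary line receives theta = 100, an intermediate line theta = 5.
for w in (np.array([1.0, 0.0, 0.0]), np.array([0.4, 0.3, 0.3])):
    theta = line_dependent_theta(w)
    print(w, theta, round(pbi([0.6, 0.5, 0.4], w, theta), 4))
```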

5 Conclusions

In this article, we have carried out a theoretical sensitivity analysis of PBI-metric-based selection operators with respect to normalization inaccuracy. It has been found that the smaller the penalty parameter θ, the lower the theoretical sensitivity to normalization. Although this result motivates the use of a very small but positive θ, the real lower bound on θ comes from another consideration related to the geometry and shape of the PF. We have identified that the minimum acute angle α made by the tangent hyper-plane of the PF with the reference direction dictates this lower bound; at least θ = cot α is needed to find the targeted Pareto-optimal point for a reference line. If a single θ value is to be used for all reference lines, then the smallest α among all reference lines dictates the choice of a suitable θ.

In a number of non-convex DTLZ and WFG problems varying from 3 to 15 objectives, we have observed that both NSGA-III and MOEA/D perform better with the PBI-metric and work the best for θ=1 to 5. For problems with a convex PF, our extensive results on convex DTLZ and WFG2 problems have revealed that a large θ (100 or ∞) performs the best and that the performance deteriorates with decreasing θ. If the boundary reference directions, where the worst convergence occurs, are eliminated from consideration, the performance of both NSGA-III and MOEA/D again improves with a small θ (around 5). The above clearly indicates that the choice of a suitable θ for the PBI-metric requires knowledge of the shape of the efficient front. Our extensive experiments reveal, justify, and support the theory-based arguments made here on the working of the PBI-metric-based selection operator for different shapes of the PF. The theoretical understanding also takes us a step closer to developing an adaptive θ-update strategy for problems whose efficient-front shape is not known beforehand, the results of which will be communicated in a later study.

Acknowledgments

This material is based on work supported in part by the National Science Foundation under Cooperative Agreement No. DBI-0939454, in part by the Natural Science Foundation of Guangdong Province No. 2020A1515011500, in part by the Science and Technology Program of Guangdong Province No. 2020A0505100056, in part by the National Natural Science Foundation of China under Grant 61876163, and in part by the ANR/RGC Joint Research Scheme sponsored by the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. A-CityU101/16).

References

Bechikh, S., Ben Said, L., and Ghédira, K. (2010). Searching for knee regions in multi-objective optimization using mobile reference points. In Proceedings of the 2010 ACM Symposium on Applied Computing, pp. 1118–1125.

Bosman, P. A., and Thierens, D. (2003). The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 7:174–188.

Cheng, R., Jin, Y., Olhofer, M., and Sendhoff, B. (2016). A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 20:773–791.

Das, I., and Dennis, J. (1998). Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal of Optimization, 8:631–657.

Deb, K., Agrawal, S., Pratap, A., and Meyarivan, T. (2002). A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6:182–197.

Deb, K., and Jain, H. (2014). An evolutionary many-objective optimization algorithm using reference-point based non-dominated sorting approach, Part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18:577–601.

Deb, K., Miettinen, K., and Chaudhuri, S. (2010). Toward an estimation of nadir objective vector using a hybrid of evolutionary and local search approaches. IEEE Transactions on Evolutionary Computation, 14:821–841.

Deb, K., Padmanabhan, D., Gupta, S., and Mall, A. K. (2007). Reliability-based multi-objective optimization using evolutionary algorithms. In Evolutionary Multi-Criterion Optimization, pp. 66–80. Lecture Notes in Computer Science, Vol. 4403.

Deb, K., Thiele, L., Laumanns, M., and Zitzler, E. (2005). Scalable test problems for evolutionary multi-objective optimization. In A. Abraham, L. Jain, and R. Goldberg (Eds.), Evolutionary multiobjective optimization, pp. 105–145. London: Springer-Verlag.

Huband, S., Hingston, P., Barone, L., and While, L. (2006). A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10:477–506.

Isermann, H., and Steuer, R. E. (1988). Computational experience concerning payoff tables and minimum criterion values over the efficient set. European Journal of Operational Research, 33:91–97.

Ishibuchi, H., Akedo, N., and Nojima, Y. (2015). Behavior of multiobjective evolutionary algorithms on many-objective knapsack problems. IEEE Transactions on Evolutionary Computation, 19:264–283.

Ishibuchi, H., Doi, K., and Nojima, Y. (2016a). Characteristics of many-objective test problems and penalty parameter specification in MOEA/D. In 2016 IEEE Congress on Evolutionary Computation, pp. 1115–1122.

Ishibuchi, H., Doi, K., and Nojima, Y. (2016b). Use of piecewise linear and nonlinear scalarizing functions in MOEA/D. In International Conference on Parallel Problem Solving from Nature, pp. 503–513.

Ishibuchi, H., Doi, K., and Nojima, Y. (2017). On the effect of normalization in MOEA/D for multi-objective and many-objective optimization. Complex & Intelligent Systems, 3:279–294.

Ishibuchi, H., and Murata, T. (1998). A multi-objective genetic local search algorithm and its application to flowshop scheduling. IEEE Transactions on Systems, Man and Cybernetics—Part C: Applications and Reviews, 28:392–403.

Khare, V., Yao, X., and Deb, K. (2003). Performance scaling of multi-objective evolutionary algorithms. In International Conference on Evolutionary Multi-Criterion Optimization, pp. 376–390.

Li, H., and Zhang, Q. (2009). Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Transactions on Evolutionary Computation, 13:284–302.

Li, K., Deb, K., Zhang, Q., and Kwong, S. (2015). An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Transactions on Evolutionary Computation, 19:694–716.

Liu, H.-L., Chen, L., Zhang, Q., and Deb, K. (2018). Adaptively allocating search effort in challenging many-objective optimization problems. IEEE Transactions on Evolutionary Computation, 22:433–448.

Liu, H.-L., Gu, F., and Zhang, Q. (2013). Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Transactions on Evolutionary Computation, 18:773–791.

Schutze, O., Laumanns, M., Tantar, E., Coello, C. A. C., and Talbi, E.-G. (2010). Computing gap-free Pareto front approximations with stochastic search algorithms. Evolutionary Computation, 18:65–96.

Seada, H., Abouhawwash, M., and Deb, K. (2018). Multi-phase balance of diversity and convergence in multiobjective optimization. IEEE Transactions on Evolutionary Computation, 23:503–513.

Yuan, Y., Xu, H., Wang, B., and Yao, X. (2016). A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 20:16–37.

Zhang, Q., and Li, H. (2007). MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11:712–731.

Zhang, Q., Li, H., Maringer, D., and Tsang, E. (2010). MOEA/D with NBI-style Tchebycheff approach for portfolio management. In IEEE Congress on Evolutionary Computation, pp. 1–8.