Abstract

Given a nondominated point set $X \subset \mathbb{R}^d$ of size $n$ and a suitable reference point $r \in \mathbb{R}^d$, the Hypervolume Subset Selection Problem (HSSP) consists of finding a subset of size $k \le n$ that maximizes the hypervolume indicator. It arises in connection with multiobjective selection and archiving strategies, as well as Pareto-front approximation postprocessing for visualization and/or interaction with a decision maker. Efficient algorithms to solve the HSSP are available only for the 2-dimensional case, achieving a time complexity of $O(n(k+\log n))$. In contrast, the best upper bound available for $d > 2$ is $O(n^{d/2}\log n + n^{\,n-k})$. Since the hypervolume indicator is a monotone submodular function, the HSSP can be approximated to a factor of $1-1/e$ using a greedy strategy. In this article, greedy $O(n(k+\log n))$-time algorithms for the HSSP in 2 and 3 dimensions are proposed, matching the complexity of current exact algorithms for the 2-dimensional case, and considerably improving upon recent complexity results for this approximation problem.

1  Introduction

Multiobjective optimization consists of finding solutions that minimize, without loss of generality, a vector of $d$ objective functions. Each solution is an element of a decision space, and is mapped onto a point in a $d$-dimensional objective space. Due to conflicting objectives, there is usually no ideal solution to a multiobjective optimization problem but multiple Pareto-optimal, or efficient, solutions (Ehrgott, 2005). In this article, only the points in objective space are considered.

Let $p$ and $q$ be two points in objective space. Point $p$ is said to weakly dominate point $q$, denoted by $p \leq q$, if $p_i \leq q_i$ for all $i \in \{1, \ldots, d\}$, and to dominate $q$ if, in addition, $p \neq q$. If neither $p \leq q$ nor $q \leq p$, then $p$ and $q$ are said to be incomparable, or mutually nondominated. A point is said to be nondominated if no other point dominates it, and the set of all nondominated points is called the Pareto front (Ehrgott, 2005; Deb, 2001).
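For illustration, the dominance relations above translate directly into code. The following is a minimal Python sketch (not taken from the article), assuming minimization and points represented as sequences of coordinates:

```python
from typing import Sequence

def weakly_dominates(p: Sequence[float], q: Sequence[float]) -> bool:
    """p weakly dominates q: p is no worse than q in every objective (minimization)."""
    return all(pi <= qi for pi, qi in zip(p, q))

def dominates(p: Sequence[float], q: Sequence[float]) -> bool:
    """p dominates q: p weakly dominates q and the two points differ."""
    return weakly_dominates(p, q) and tuple(p) != tuple(q)

def incomparable(p: Sequence[float], q: Sequence[float]) -> bool:
    """p and q are incomparable (mutually nondominated)."""
    return not weakly_dominates(p, q) and not weakly_dominates(q, p)
```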

Quality indicators map a set of (mutually nondominated) points onto a real value, and are used to evaluate the quality of discrete Pareto-front approximations. Due to its seemingly unique properties (Auger et al., 2009), the hypervolume indicator (Zitzler and Thiele, 1998; Knowles et al., 2003) is used extensively, both in performance studies and in multiobjective selection and archiving strategies. Unfortunately, the hypervolume indicator becomes computationally demanding as the number of objective dimensions grows. Its time complexity is $\Theta(n \log n)$ in up to 3 dimensions (Beume et al., 2009), but, for general $d$, the best upper bound known to date is $O(n^{d/3}\,\mathrm{polylog}\,n)$ (Chan, 2013). So far, the fastest algorithms in practice (HV4D, Guerreiro et al., 2012, for $d=4$, and WFG, While et al., 2012, for $d>4$) are asymptotically slower.

The problem of selecting $k$ out of $n$ points that maximize the hypervolume indicator commonly arises in multiobjective selection and archiving. This is known as the Hypervolume Subset Selection Problem (HSSP) (Bader and Zitzler, 2011), and algorithms to solve it exactly are available only for 2 dimensions, with $O(n(k+\log n))$ time complexity (Bringmann et al., 2014; Kuhn et al., 2016). For $d \geq 3$, the equivalent problem of determining a subset of $n-k$ points that contribute the least hypervolume to the original set can be solved in $O(n^{d/2}\log n + n^{\,n-k})$ time (Bringmann and Friedrich, 2010).

Since the hypervolume indicator is a monotone submodular function (Ulrich and Thiele, 2012), and the HSSP consists of maximizing it subject to a cardinality constraint, a $(1-1/e)$-approximation to the hypervolume of an optimal subset may be obtained with an (incremental) greedy strategy (Nemhauser et al., 1978). Based on this result, Friedrich and Neumann (2014) show that a particular EMO algorithm (GSEMO) obtains such an approximation within an expected polynomial number of steps, for any number of dimensions. However, their analysis does not take into account the time required to compute the hypervolume of the subsets evaluated in the process.

Greedy heuristics for the HSSP were also considered by Bradstreet et al. (2007), who studied both incremental and decremental strategies. In the incremental case, points are greedily selected one at a time so as to maximize the increase in hypervolume at each step. In contrast, in the decremental case, points are removed one at a time from the original set of points so as to minimize the loss of hypervolume in each step. The decremental case was also considered by Bader and Zitzler (2011), and was analyzed by Bringmann and Friedrich (2010), who showed that the decremental greedy solution may be very far from optimal, considering the volume left out by the points discarded. In other words, the ratio between the volumes left out in the greedy solution and in the optimal solution may be very large.

As noted also by Bradstreet et al. (2007), both incremental and decremental greedy solutions for the HSSP can be easily computed using existing algorithms to compute hypervolume contributions (e.g., Emmerich and Fonseca, 2011; Bringmann and Friedrich, 2010) or even just the hypervolume indicator (e.g., Guerreiro et al., 2012; While et al., 2012). For example, for $d=3$, a decremental greedy algorithm is obtained using the EF algorithm (Emmerich and Fonseca, 2011) to recompute all contributions in $O(n \log n)$ time at each of the $n-k$ removal steps. For the incremental greedy approach, a corresponding algorithm is obtained by iterating over the procedure used in the HV4D (Guerreiro et al., 2012) algorithm to compute single 3-dimensional contributions in linear time.

Directly solving the HSSP, even if only approximately, provides an interesting means of implementing selection in Evolutionary Multiobjective Optimization Algorithms (EMOAs). Indeed, EMOAs frequently work as a combination of generation and archiving strategies, where $\mu$ solutions are maintained in the parental population (the archive), and a number, $\lambda$, of offspring are generated from the parental population at each generation. Then, the next parental population is obtained by selecting a new set of $\mu$ individuals from the $\mu+\lambda$ parent and offspring individuals available, often using the hypervolume indicator as the selection criterion (Knowles et al., 2003; Beume et al., 2007; Bader and Zitzler, 2011). Furthermore, since discarded solutions cannot be recovered unless they are generated again by the genetic operators, assessing the approximation quality of the intermediate and/or final populations with respect to the quality of the best subset that may be selected from all solutions evaluated up to the corresponding generation becomes of interest (Bringmann et al., 2011). The incremental greedy approximation to the HSSP allows lower bounds on the quality of such optimal subsets to be determined, and may also be used to select a (possibly) better set of solutions than the final population for further consideration by a Decision Maker.

In this article, incremental greedy algorithms for the HSSP in 2 and 3 dimensions are proposed, providing a $(1-1/e)$-approximation to the optimal subset. Rather than simply iterating over existing algorithms to compute hypervolume contributions (or the hypervolume indicator) in order to determine the greatest hypervolume contributor at each step, the algorithms proposed here exploit the incremental nature of the greedy approach, and efficiently update only those contributions that are changed by the selection of a new point at each iteration. In particular, in 3 dimensions, partially overlapping regions are specifically considered, and computing the volume of the same common subregion more than once is avoided. Careful analysis shows that both algorithms proposed have $O(n(k+\log n))$ time complexity, which improves upon the bound previously reported for 3 dimensions (Guerreiro et al., 2015), and reveals the suitability of that algorithm for postprocessing analysis, where $n$ may be very large, but $k$ is typically constant and relatively small.

The next section introduces relevant definitions, and reviews an existing approach to the computation of 3-dimensional hypervolume contributions in linear time. In Section 3, the general strategy is outlined first, and the simpler 2-dimensional case is considered before the main algorithm proposed is described in detail and illustrative examples are given. Experimental results are presented in Section 4, and are followed by some concluding remarks.

2  Preliminaries

The following definitions are useful in stating the problem of interest and explaining the algorithms proposed. Illustrative examples are given in Figures 1a–d.

Figure 1:

Two-dimensional examples: (a) hypervolume indicator (dark gray region), (b) hypervolume contribution (light gray region), (c) joint hypervolume contribution (mid gray region), and (d) delimiters of ( and ), as well as dominated point , represented by a filled circle, and nondominated points and , elements of , represented by hollow circles.

Definition 1
(Hypervolume Indicator) Given a point set $X \subset \mathbb{R}^d$ and a reference point $r \in \mathbb{R}^d$, the hypervolume indicator of $X$ is the measure of the region weakly dominated by $X$ and bounded above by $r$, that is,
$$H(X) = \Lambda\left(\left\{ q \in \mathbb{R}^d \;\middle|\; \exists\, p \in X : p \leq q \,\wedge\, q \leq r \right\}\right)$$
where $\Lambda(\cdot)$ denotes the Lebesgue measure.

A fixed reference point $r \in \mathbb{R}^d$ is assumed throughout the article.
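In 2 dimensions, the region in Definition 1 is a staircase-shaped union of rectangles, and its area can be computed with a simple sweep. The following Python sketch is an illustration only (not the article's code); it assumes minimization and a fixed reference point r:

```python
def hv2d(points, r):
    """2-D hypervolume indicator: area weakly dominated by the points and
    bounded above by the reference point r (minimization)."""
    area, prev_y = 0.0, r[1]
    for x, y in sorted(points):          # ascending x; dominated points are skipped below
        if x < r[0] and y < prev_y:
            area += (r[0] - x) * (prev_y - y)
            prev_y = y
    return area

# Example: hv2d([(1, 7), (3, 4), (6, 2)], (10, 10)) == 27 + 21 + 8 == 56
```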

Definition 2
(Hypervolume Contribution) Given a point $p \in \mathbb{R}^d$, a point set $X \subset \mathbb{R}^d$, and a reference point $r$, the (hypervolume) contribution of $p$ to $X$ is:
$$H(p, X) = H(X \cup \{p\}) - H(X)$$

In some cases, such as when determining the decrease in the contribution of a given point to a set $X$ due to the addition of another point to $X$, it is also useful to consider the contribution dominated simultaneously and exclusively by two points:

Definition 3
(Joint Hypervolume Contribution) The joint hypervolume contribution of two points $p$ and $q$ to a point set $X$ is:
$$H(p, q, X) = H(p \vee q, X)$$
where $p \vee q$ denotes the join, or component-wise maximum, of the two points.
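In code, Definitions 2 and 3 follow directly from any hypervolume routine. In the sketch below, hv is a stand-in for such a routine (for instance, the hv2d sketch above) with the reference point fixed inside it, and the function names are illustrative rather than the article's:

```python
def join(p, q):
    """Component-wise maximum (the join) of two points."""
    return tuple(max(a, b) for a, b in zip(p, q))

def contribution(hv, p, X):
    """Hypervolume contribution of p to X (Definition 2)."""
    return hv(list(X) + [p]) - hv(list(X))

def joint_contribution(hv, p, q, X):
    """Joint hypervolume contribution of p and q to X (Definition 3):
    the contribution of their join to X."""
    return contribution(hv, join(p, q), X)
```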

Moreover, the contribution of a point $p$ to a set $X$ is bounded above by certain points that shall be referred to as delimiters, and are defined as follows:

Definition 4 (Delimiter)

Given a point set and a point , let . Then, is called a (weak) delimiter of the contribution of to iff . If, in addition, , then is also a strong delimiter of the contribution of to .

Here, the set of nondominated points of a point set consists of those of its points that are not dominated by any other point in the set. Note that the set of delimiters is the smallest set of points weakly dominated by $p$ that delimits its contribution to $X$. For example, in Figure 1d, the contribution of the point considered is delimited by only two points, both of which are strong delimiters. Non-strong delimiters can exist only when there are points with repeated coordinates. In the same example, if there were a fifth point sharing a coordinate with one of the delimiters, then both would be weak delimiters but not strong delimiters. This means that, in practice, only one delimiter in such a group is needed to bound the contribution of $p$: if one of them is deleted, then the contribution of $p$ is kept unchanged, whereas it increases if all of them are deleted. Finally, the HSSP (Bader and Zitzler, 2011) is formally defined next.

Problem 1 (Hypervolume Subset Selection Problem (HSSP))

Given a point set $X \subset \mathbb{R}^d$ and an integer $k > 0$, find a subset $S \subseteq X$ such that $|S| \leq k$ and:
$$S \in \operatorname*{arg\,max}_{A \subseteq X,\ |A| \leq k} H(A)$$
In the example of Figure 1a, a particular two-point subset is the (single) optimal solution of the HSSP with $k=2$: any other subset with at most two points has a lower hypervolume indicator value.
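For very small instances, Problem 1 can also be stated operationally by exhaustive enumeration, which serves as a correctness check for the greedy algorithms discussed later. The sketch below is illustrative only; hv is again a generic hypervolume routine with a fixed reference point:

```python
from itertools import combinations

def hssp_exhaustive(hv, X, k):
    """Optimal HSSP solution by enumerating all subsets of size k
    (exponential in k; intended only for tiny instances)."""
    return max(combinations(X, k), key=lambda subset: hv(list(subset)))
```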

2.1  Computing Hypervolume Contributions

The efficient computation of hypervolume contributions in 3 dimensions is an important aspect of the greedy algorithm for the HSSP developed in this work. In HV4D (Guerreiro et al., 2012), an algorithm for hypervolume computation in 4 dimensions, the 3-dimensional contribution of each new point visited in the main loop is computed in linear time. This is achieved using a dimension-sweep approach based on another algorithm by Emmerich and Fonseca (2011) for the problem of computing all contributions in 3 dimensions. A simplified version of that approach, which does not explicitly use a box-division of the contribution, is described next using Figure 2 for illustration. It will be referred to as IHV3D. Moreover, let $x(p)$, $y(p)$, and $z(p)$ denote the $x$-, $y$-, and $z$-coordinate of a point $p$ in an $(x,y,z)$-space, respectively.

Figure 2:

Example of the computation of the contribution , where . It is assumed that .

Given a point $p$ and a set $X$ of points, the contribution $H(p, X)$ is computed in IHV3D by sweeping the points of $X$ with a higher $z$-coordinate than $p$ in ascending order of the $z$-coordinate, and partitioning the 3-dimensional contribution into horizontal slices. The contribution of $p$ is the sum of the volumes of all slices. The volume of a slice is the area of the base of that slice multiplied by its height. The height of a slice is the absolute difference between the $z$-coordinates of the two consecutive points defining that slice. The base is delimited by the projection onto the $(x,y)$-plane of the first point defining that slice and of the points below it in $X$. Thus, $X$ is split into two sets, containing the points with a lower and with a higher $z$-coordinate than $p$, respectively. In addition, a set of points whose projections on the $(x,y)$-plane delimit the area exclusively dominated by $p$ in each iteration is maintained. This set of mutually nondominated points is initialized with the corresponding points of the lower set to represent the base of the first slice.

Both the splitting of $X$ and the initialization of this delimiter set are performed in linear time; in the example of Figure 2a, the two subsets and the initial delimiter set are as shown. Note that at most two of these points are not dominated by $p$ on the $(x,y)$-plane, one above and to the left, and another below and to the right.

The area of the base of the first slice is computed by adding up the areas of the non-overlapping rectangles into which the base is partitioned (see Figure 2b) as the points in are visited in ascending order of . Then, the points in are visited in ascending order of . For each new point, the volume of the current slice is computed and the area of its base is updated to obtain the base area of the next slice. In the example, the first point visited is . Therefore, the area of the base of the bottom slice is multiplied by . Then, the base area is updated by subtracting the area of the region that is now dominated also by (see Figure 2c). This area is computed by visiting the points in that are dominated by on the -plane. In the example, these are points and , which are subsequently replaced in by . Hence, becomes . The procedure for is similar (see Figure 2d). Visited points that do not dominate part of the region dominated by are skipped (e.g., ).

The algorithm continues until a point in $X$ that dominates $p$ on the $(x,y)$-plane is found. The volume of the last slice is computed by multiplying the current base area by the corresponding height. In IHV3D, all sets are implemented as sorted lists, and sentinels are used to ensure that the required limiting points always exist. IHV3D has an amortized time complexity of $O(n)$ because each point in $X$ is visited once when it is added to the delimiter set and at most once more when it is removed from it, and all operations on that set are performed in constant time.
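A much simplified version of this computation is sketched below for reference. It follows the same slicing idea as IHV3D, but recomputes the dominated base area from scratch for every slice instead of maintaining it incrementally, so it runs in quadratic rather than linear time. It assumes minimization and that p is not dominated by any point of X; all names are illustrative:

```python
def dominated_area_2d(points, lo, hi):
    """Area inside the box [lo, hi] dominated by the given 2-D points (minimization)."""
    clipped = sorted((max(x, lo[0]), max(y, lo[1]))
                     for x, y in points if x < hi[0] and y < hi[1])
    area, prev_y = 0.0, hi[1]
    for x, y in clipped:
        if y < prev_y:
            area += (hi[0] - x) * (prev_y - y)
            prev_y = y
    return area

def contribution_3d(p, X, r):
    """Exclusive 3-D contribution of p to X, computed slice by slice."""
    box_area = (r[0] - p[0]) * (r[1] - p[1])
    levels = sorted({q[2] for q in X if p[2] < q[2] < r[2]} | {r[2]})
    volume, z_prev = 0.0, p[2]
    for z in levels:
        # projections of the points lying at or below the bottom of the current slice
        active = [(q[0], q[1]) for q in X if q[2] <= z_prev]
        slice_area = box_area - dominated_area_2d(active, (p[0], p[1]), (r[0], r[1]))
        volume += slice_area * (z - z_prev)
        z_prev = z
    return volume
```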

3  Greedy HSS Algorithm

In this section, a general greedy algorithm (Bradstreet et al., 2007) for the HSSP is explained. Subsequently, specialized algorithms for 2 and 3 dimensions are proposed.

3.1  General Case

Given a nondominated point set $X$, where $|X| = n$, the greedy algorithm for the HSSP (Problem 1) chooses $k$ points from $X$ and stores them in a set $S$, one at a time, always selecting the point that contributes the most hypervolume to the set of points already chosen. For that reason, in gHSS (Algorithm 1), the contribution of each point to the initially empty set $S$ (line 3) is computed first and stored. Afterwards, the point in $X \setminus S$ with maximal contribution is picked (line 5). Then, the contributions of the remaining points in $X \setminus S$ are updated to account for the newly picked point (line 8); that is, the portion of the contribution of each remaining point that is dominated by the newly picked point is removed. Finally, the picked point is moved from $X \setminus S$ to $S$ (lines 6 and 9). Lines 5 to 9 are repeated until $S$ contains $k$ points.

(Algorithm 1: gHSS pseudocode.)
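The scheme of Algorithm 1 can be sketched as follows, with a generic hypervolume routine hv standing in for the contribution computations. This is an illustration only: the specialized gHSS2D and gHSS3D algorithms described next replace the update step with much cheaper incremental updates. The line references in the comments refer to Algorithm 1 as described above:

```python
def ghss(hv, X, k):
    """Incremental greedy subset selection (gHSS scheme). hv(points) must return
    the hypervolume of a list of points w.r.t. a fixed reference point, with hv([]) == 0."""
    remaining = [tuple(p) for p in X]
    contrib = {p: hv([p]) for p in remaining}              # contributions to the empty set (line 3)
    S = []
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda p: contrib[p])    # greatest contributor to S (line 5)
        remaining.remove(best)
        for q in remaining:                                # update remaining contributions (line 8)
            j = tuple(max(a, b) for a, b in zip(q, best))
            contrib[q] -= hv(S + [j]) - hv(S)              # joint contribution of q and best to S
        S.append(best)                                     # move best into S (lines 6 and 9)
    return S
```

With the hv2d sketch given earlier, partially applied to a fixed reference point, this reproduces the kind of greedy selection illustrated in Figure 3, up to tie-breaking.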

Note that ties may occur; that is, at some point, more than one point may have the (same) highest contribution. In such cases, it is correct to choose any of the tied points. However, ties that are resolved differently may result in different subsequent intermediate and final greedy solutions, both with respect to the set of points selected and, thus, to the corresponding hypervolume indicator value.

It is clear from Definition 3 that, in the absence of an efficient algorithm to compute the joint hypervolume contribution (line 8), an algorithm for the hypervolume indicator can be used to determine this quantity. Furthermore, the whole contribution could simply be recomputed in line 8. Updating the contributions of all points in a set to that set under single-point changes can already be performed efficiently in 2 dimensions (Hupkens and Emmerich, 2013). However, in the incremental greedy algorithm, the contributions of the points not yet selected have to be updated with respect to the set of selected points, and not with respect to the set containing them. Thus, the algorithm proposed by Hupkens and Emmerich (2013) cannot be used directly by the greedy algorithm. The same is true for algorithms that compute the contributions of all points in a set to that same set (Emmerich and Fonseca, 2011; Bringmann and Friedrich, 2010). In contrast, the procedure explained in Subsection 2.1 to compute a single 3-dimensional contribution in linear time could be adapted to this case, although the resulting algorithm would be asymptotically slower than the ones proposed in this article.

3.1.1  Example

Figure 3 shows an example of how the greedy algorithm can be applied to a set of nondominated points in two-dimensional space for $k$ up to 7. Figure 3 illustrates which points are selected by the greedy algorithm and in which order. Figures 3a, 3b, 3c, and 3d show the evolution of the selected set $S$ and of the remaining points in the first 4 iterations, and depict the corresponding contributions of the remaining points to $S$. The table in Figure 3e shows the values of these contributions for every iteration of the algorithm. Each column corresponds to an iteration of the algorithm, and is labeled with the size of the set of selected points, $|S|$. Each row shows the contribution of a point as set $S$ grows. Consequently, each column shows the contribution to $S$ of each point not yet selected at the corresponding iteration. The values in bold indicate which point is selected in each iteration. Note that each column shows both the points in the greedy solution for the corresponding value of $k$ (the dashed cells) and an intermediate solution for larger $k$. Thus, any particular case where $k < 7$ corresponds to performing just the first $k$ steps of the example.

Figure 3:

Example of the incremental greedy algorithm (gHSS2D) for the HSSP in 2 dimensions.

The first step of the algorithm (line 3 in Algorithm 1) is to compute the contribution of every point to , which corresponds to the first column in the table. Then, in iteration 1 of the for loop in line 4, since contributes the most to , it is selected and moved from to (lines 6 and 9). The contributions of the points remaining in are updated (line 8) to account for the addition of to . For example, the contribution of is updated by subtracting the area of its contribution that becomes dominated by (the area between and ). Formally, . The same calculations are performed for . After the contribution update step, the contributions of every point still in to are known as shown in the second column of the table. In iteration 2, is the point that contributes the most to , and so it is selected and moved from to . Note that, in such a case, the contributions of and remain the same with the addition of to , and so only those of , and have to be updated. The algorithm repeats the select and update steps until points are selected. In Figure 3, the greedy solution for is formed by the first points of the sequence: , . For a given , the greedy solution contains the points that, in the column , correspond to dashed cells. For example, for , the greedy solution can be seen in the fourth column, which corresponds to , and is .

The following subsections show how this update step can be performed efficiently in the 2- and 3-dimensional cases.

3.2  2-Dimensional Case

The algorithm proposed in this subsection (gHSS2D) deals with the particular case of gHSS in 2 dimensions. In gHSS2D, a simple procedure can be used to update the contributions of the unchosen points in line 8 of Algorithm 1. Assume that the points are kept sorted in ascending order of the $x$-coordinate (and, consequently, in descending order of the $y$-coordinate). Every point keeps a reference to the next and to the previous point in this order, which is determined once by sorting the input in a preprocessing step.

Because a contribution is represented by a rectangle, every unchosen point stores its contribution and also the upper-bound corner of that rectangle. All upper bounds are initially set to the reference point. Consider the closest previously chosen points to the left and to the right of the newly chosen point. Then, for each point chosen, the unchosen points to its left are visited, in order, until the closest chosen point to the left is reached; similarly, the points to its right are visited until the closest chosen point to the right is reached. Every point between the closest chosen point to the left and the newly chosen point previously had its upper bound set beyond the newly chosen point in the $x$-direction; the resulting reduction in width, multiplied by the height of its contribution rectangle, is subtracted from its contribution, and its upper-bound $x$-coordinate is set to that of the newly chosen point. Points between the newly chosen point and the closest chosen point to the right are updated analogously: the reduction in height, multiplied by the width of their contribution rectangles, is subtracted, and their upper-bound $y$-coordinate is set to that of the newly chosen point.
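The update just described can be sketched as follows; array indices replace the linked-list pointers of the actual implementation, and the variable names are illustrative:

```python
def ghss2d(points, r, k):
    """Incremental greedy subset selection in 2-D (gHSS2D idea): each unselected
    point keeps the upper-bound corner of its contribution rectangle, and selecting
    a point only updates the unselected points up to the nearest selected neighbor
    on each side. Assumes a mutually nondominated input set (minimization)."""
    P = sorted(points)                                   # ascending x, hence descending y
    n = len(P)
    ub = [[r[0], r[1]] for _ in range(n)]                # upper-bound corner of each rectangle
    contrib = [(r[0] - x) * (r[1] - y) for x, y in P]
    selected = [False] * n
    chosen = []
    for _ in range(min(k, n)):
        i = max((j for j in range(n) if not selected[j]), key=lambda j: contrib[j])
        selected[i] = True
        chosen.append(P[i])
        j = i - 1                                        # walk left up to the nearest selected point
        while j >= 0 and not selected[j]:
            contrib[j] -= (ub[j][0] - P[i][0]) * (ub[j][1] - P[j][1])
            ub[j][0] = P[i][0]                           # the new point caps their x upper bound
            j -= 1
        j = i + 1                                        # walk right up to the nearest selected point
        while j < n and not selected[j]:
            contrib[j] -= (ub[j][0] - P[j][0]) * (ub[j][1] - P[i][1])
            ub[j][1] = P[i][1]                           # the new point caps their y upper bound
            j += 1
    return chosen
```

After the initial sort, each iteration performs at most a linear number of constant-time updates, so this sketch is consistent with the complexity discussed next.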

The time complexity of gHSS2D is $O(n(k + \log n))$ because the initial sorting costs $O(n \log n)$ time and, for each of the $k$ points chosen, up to $n$ contributions have to be updated, each in constant time. A worst-case point set exists for which each newly chosen point forces a linear number of such updates. Note that gHSS2D and the exact algorithms (Bringmann et al., 2014; Kuhn et al., 2016) have similar time complexities. However, the greedy version should be easier to implement and, because it relies only on very simple data structures, it is very fast in practice.

3.3  3-Dimensional Case

The algorithm that deals with the particular case of gHSS in 3 dimensions (gHSS3D) is presented in this subsection. This subsection starts by defining the data structures, some notation, and procedures used by the algorithm. Then, the algorithm itself and, in particular, the update of the contribution of the points in in line 8 of Algorithm 1 are detailed. This subsection finishes with a discussion of the time complexity of gHSS3D.

The main aspect of gHSS3D is how the contributions of the points not yet selected are updated. For an easier understanding of how these updates are performed in gHSS3D, the problem depicted in Figure 4 will be used as an example. The figure shows an intermediate iteration of Algorithm 1, where some points have already been moved from the set of remaining points to the selected set $S$. Note that Figure 4 is not intended to illustrate an actual choice of points by gHSS3D, and that the update procedure presented here is independent of the choice of points moved to $S$. The only assumptions are that the two sets are disjoint and that their union is a nondominated point set. In Figure 4a, the contribution to $S$ of every remaining point is shown. The transparent (yellow) volume in Figure 4b shows more clearly the volume that the newly selected point adds to the hypervolume of $S$. Any remaining point whose contribution to $S$ lies partially in that yellow region has to have that volume removed from its contribution, as it becomes dominated. 2D projections such as the one in Figure 4c will be used further in this article.

Figure 4:

Example where , and is the last point removed from and to be added to . (a) shows , , and ; (b) shows how much volume will add to (transparent region). (c) shows a cut at and the 2-dimensional projection of all points in (a). In (c), points in are represented with circles, and points in are represented with squares. The squares and circles painted gray have a higher -coordinate than and the remaining ones have a lower -coordinate than . As can be seen in (a), when is sorted in ascending order of the -coordinate, the following sequence is obtained: .

(Algorithm 2: gHSS3D pseudocode.)

3.3.1  Data Structures and Procedures

In gHSS3D, doubly linked lists are used to maintain the sets of points sorted, and sentinels ensure that there is always a point in the limiting conditions. Algorithm 2 keeps both the selected and the remaining point sets sorted in ascending order of all coordinates. Each point keeps some information associated with it, such as an area, a volume, a height, and a contribution. The first three are temporary values that are used to compute the volume which will be subtracted from the point's contribution. The height indicates the value of the third coordinate up to which the volume has been updated, and the area keeps the area dominated at that height. Projections of points and point sets onto the $(x,y)$-plane will also be referred to where needed.

Given a point set represented by sorted lists, and one or two query points, the following procedures are available:

  • The point following in with respect to coordinate , for .

  • The point with the least .

  • The point with the least such that .

  • Add point to where becomes .

  • Return the points that are not dominated on the -plane, that is, .

  • Given , split in two sets, and such that and and return and .

  • The quantity .

  • Given , for each point , update its volume, save it in and update , that is, compute and set .

  • For each point , compute and save it in .

  • For each compute .

Procedures , , , , and are also available for coordinates and and, apart from and , they all run in constant time. has a cost of because it is the cost of finding the break point by sweeping points in descending order of coordinate . The remaining procedures have linear cost with respect to the total size of the input sets, that is, either , , or . Both and will be explained in more detail in SubSection 3.3.3. All procedures that modify or return a subset of a given sorted set guarantee that the returned sets are also sorted according to the coordinate used for sweeping the points. These procedures may also guarantee that those points are sorted according to other coordinates, if needed. More information about those procedures will be given in the next subsections. Note that, if a set of nondominated points on the -plane is sorted in ascending order of one coordinate, then it is also sorted according to the other coordinate, but in descending order.

3.3.2  Main Loop of gHSS3D

gHSS3D follows the same working principle as gHSS but, instead of updating the contributions of the remaining points one by one (lines 7 and 8 in Algorithm 1), the objective space is divided into 8 octants with a common vertex at the newly selected point, and the contributions of the points in each pair of opposite octants are updated at the same time (lines 9 to 17 in Algorithm 2).

The two octants corresponding to the region that dominates the newly selected point and the region dominated by it are ignored because they do not contain any points. The remaining three pairs of octants are all updated in the same way, except that a different coordinate order is considered for each pair. The order is set in such a way that a different dimension is used as the $z$-coordinate in each case (lines 10 to 17 of Algorithm 2).

Given a coordinate order, the two octants considered when that order is selected are those that contain two sets of points, defined as follows. One set contains the points that are dominated by the newly selected point in the first two dimensions of the order, but are equal to or better (i.e., lower) than it in the third dimension. The other set contains the points that dominate the newly selected point in the first two dimensions of the order, but are equal to or worse (i.e., higher) than it in the third dimension. Figure 5 shows how the set of remaining points from the example in Figure 4 is split into octants, and shows the two sets for each objective order considered. In Figure 5a, points are assigned colors, each associated with a single octant. Figures 5b, 5c, and 5d show the corresponding sets for each of the three objective orders, respectively. Note that considering a different objective order is equivalent to rotating the space.
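The split just described can be sketched as follows; the names dominated_12 and dominating_12 are placeholders for the two octant sets named in the article, and the handling of points on octant boundaries (repeated coordinates) is omitted:

```python
def octant_pair(points, selected_point, order):
    """Points relevant to one pair of opposite octants, for a coordinate order
    given as a permutation of (0, 1, 2); minimization is assumed."""
    i, j, m = order
    p = selected_point
    dominated_12 = [q for q in points
                    if q[i] >= p[i] and q[j] >= p[j] and q[m] <= p[m]]
    dominating_12 = [q for q in points
                     if q[i] <= p[i] and q[j] <= p[j] and q[m] >= p[m]]
    return dominated_12, dominating_12
```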

Figure 5:

The subproblems and for the three objective orders considered, .

Lines 9 and 17 of Algorithm 2 guarantee that no point is updated more than once in case there are points with repeated coordinates, that is, points on the boundary between two octants. The computation of the corresponding updates for each of the two octant sets is detailed in Algorithms 4 and 3, respectively.

3.3.3  Updating Contributions of Points in the Two Octants

Consider the case where the first coordinate order is selected. In the example given, the two octant sets are as depicted in Figure 5b. The update procedure for the remaining coordinate orders is similar (Figures 5c and 5d). Figure 6 shows (in red) the joint contributions of the newly selected point with each point in one octant set (Figures 6a to 6c) and with each point in the other (Figures 6d and 6e), which have to be computed and removed. Note that one of the points has no joint contribution with the newly selected point. Furthermore, note that some of the joint contributions (partially) overlap.

Figure 6:

Individual contributions to (dark gray plus red region) of the points in and in . The region of each contribution that is dominated by is in red.

The update procedure will be explained first for and then for . In both cases, the algorithms are built upon IHV3D (explained in Section 2). Therefore, Algorithm 2 splits into and . In this case only, function takes -time as it has to guarantee that and are sorted according to all dimensions. Set is also maintained along the execution of both Algorithms 4 and 3, and is initialized as the subset of that delimits the area dominated by . In the example, we have that , and (initial set).

In the case of (), all points dominate on the -plane, and have a higher -coordinate. Therefore, if is sorted in ascending order of , then, given any and its previous point in , the joint contribution of with is equal to the joint contribution of with above the value of coordinate . Moreover, the contribution of when it is is equal to the contribution of above the value of coordinate . Figure 7a shows the joint contributions of with each point in . Note that the joint contribution of with is equal to the joint contribution of with for . The joint contribution of with is equal to the contribution of for . Thus, if the volume between and is associated only to , then the joint contribution of every can be calculated by accumulating the volume associated to each point while visiting in descending order of . Hence, in Algorithm 3, the previous point of each is stored in .

Figure 7:

The (partially overlapping) joint contributions of every point in with and their division into slices.

(Algorithms 3 and 4: pseudocode of the contribution-update procedures for the two octant sets.)

Algorithm 3 behaves just like IHV3D, that is, in the algorithm, is used to maintain the points delimiting the area of at height , and points in are visited in ascending order of in order to update the area of and to compute the slice volume (lines 8 to 15). Additionally, is swept along with in ascending order of the -coordinate by merging the two lists (line 4). The last point visited from is recorded as (except for first initialization of , which is a sentinel), and its volume (line 16) and area (line 17) are updated when the area of () is updated. Whenever the current point belongs to , the volume simultaneously dominated by and bounded below by becomes bounded above by (line 19) and stays associated to (and only to) . The contribution of above becomes associated to (lines 20 and 21) and is set as the new (line 23). In that way, each part of the contribution (a consecutive set of slices) is associated to a single point of . When the first point from that dominates on the -plane is found (), the volume of the last point from is updated (line 25). Figures 7b and  7c show the two slices (whose volume is) associated to , and Figure 7d shows the only slice associated to . Lastly, are visited again but in descending order of the -coordinate, and their volumes are accumulated and added to the next visited points (lines 27 to 30). In the example, the volume associated to is stored in , and corresponds to its joint contribution with . Then the volume associated to is added to , corresponding now to the joint contribution of and .

In the case of points , not every point has its contribution reduced with the addition of to , as is the case of in Figure 6a. In Algorithm 4, a set is used to keep the points of whose joint contributions with are still being computed. is initialized in linear time with the points in that need to be updated, that is, those points such that is not dominated by . Figure 8a shows the joint contributions of with every point in the initial set , which partially overlap. Figures 8b to 8d show how these joint contributions are split into slices by the algorithm. The computation of for each is performed in two main steps, the computation of the base area of the volume (line 3) and the area updates (lines 13 and 21). Figures 9 and 10 will be used as examples of these steps, respectively. Note that the base areas of the volumes correspond to the base areas in the red volume in Figure 8b, while the base areas of the red volumes of Figures 8c and  8b correspond to the subsequently updated areas.

Figure 8:

The (partially overlapping) joint contributions of every point in with and their division into slices.

Figure 9:

Example of how the base area of each point in is computed in .

Figure 10:

Example of how points from are used in Algorithm 4 to update the areas of points in .

Figure 9a corresponds to Figure 8b in the cut plane at and shows, also in red, the base areas of and to be computed, which overlap partially. The base areas are initialized in linear time by computing , for each . The areas associated to points in are initialized with , which was previously computed in linear time in line 2 of Algorithm 3. The value is computed in two sweeps. Points are first visited in ascending order of and the area dominated by between and each is computed in a cumulative way. When a point from is visited, the area accumulated so far is subtracted from the area associated with that point. In Figures 9b and  9c, the striped area is the accumulated area that is subtracted from and , respectively. The remaining area to be subtracted, that is, the area between and to the right of , is computed in an analogous way by sweeping points in in ascending order of the -coordinate. Note that, in the second sweep, it is also necessary to add to the area bounded below by and above by , for each , as it was subtracted twice.

Figure 10 shows the update of the areas of points in (lines 13 and 21). The volumes are updated in lines 12 and 20. This corresponds to the cases where a point in cuts above the area dominated by (lines 7 to 14 and in Figures 10a and  10b) or to the right (line 16 to 22 and in Figures 10c and  10d). Points are visited in ascending order of the -coordinate, as in IHV3D. Looking at the case where , let be the set of points in with a higher -coordinate than (). In Algorithm 4, set is first removed from (), in -time, through function . Thus, becomes ). is kept sorted according to the