Abstract

Ordinal classification refers to classification problems in which the classes have a natural order imposed on them because of the nature of the concept studied. Some ordinal classification approaches perform a projection from the input space to a one-dimensional (latent) space that is partitioned into a sequence of intervals (one for each class). Class identity of a novel input pattern is then decided based on the interval its projection falls into. This projection is trained only indirectly as part of the overall model fitting. As with any other latent model fitting, direct construction hints one may have about the desired form of the latent model can prove very useful for obtaining high-quality models. The key idea of this letter is to construct such a projection model directly, using insights about the class distribution obtained from pairwise distance calculations. The proposed approach is extensively evaluated with 8 nominal and ordinal classification methods, 10 real-world ordinal classification data sets, and 4 different performance measures. The new methodology obtained the best results in average ranking when considering three of the performance metrics, although significant differences are found for only some of the methods. Also, after observing the internal behavior of the other methods in the latent space, we conclude that their internal projections do not fully reflect the intraclass behavior of the patterns. Our method is intrinsically simple, intuitive, and easily understandable, yet highly competitive with state-of-the-art approaches to ordinal classification.

1.  Introduction

Ordinal classification or ordinal regression is a supervised learning problem of predicting categories that have an ordered arrangement. When a problem exhibits an ordinal nature, it is expected that this order is also present in the data input space (Hühn & Hüllermeier, 2008). The samples are labeled by a set of ranks with an ordering among the categories. In contrast to nominal classification, there is an ordinal relationship among the categories, and it differs from regression in that the number of ranks is finite and the exact amounts of difference between ranks are not defined. In this way, ordinal classification lies somewhere between nominal classification and regression.

Ordinal regression should not be confused with sorting or ranking. Sorting refers to arranging all samples in the test set according to a total order, whereas ranking assigns a relative order to the samples using a limited number of ranks. Of course, ordinal regression can be used to rank samples, but its objective is to obtain good accuracy and, at the same time, good ranking.

Ordinal classification problems are important, since they are common in our everyday life where many problems require classification of items into naturally ordered classes. Examples of these problems are the teaching assistant evaluation (Lim, Loh, & Shih, 2000), car insurance risk rating (Kibler, Aha, & Albert, 1989), pasture production (Barker, 1995), preference learning (Arens, 2010), breast cancer conservative treatment (Cardoso, Pinto da Costa, & Cardoso, 2005), wind forecasting (Gutiérrez et al., 2013), and credit rating (Kim & Ahn, 2012).

A variety of approaches have been proposed for ordinal classification. For example, Raykar, Duraiswami, and Krishnapuram (2008) learn ranking functions in the context of ordinal regression and collaborative filtering data sets. Kramer, Widmer, Pfahringer, and de Groeve (2010) map the ordinal scale by assigning numerical values and then apply a regression tree model. The main problem with this simple approach is the assignment of a numerical value corresponding to each class, without a principled way of deciding the true metric distances between the ordinal scales. Also, representing all patterns in a class by the same value may not reflect the relationships among the patterns in a natural way. In this letter, we propose that the numerical values associated with different patterns may differ (even within the same class), and, most important, the value for each individual pattern is decided based on its relative localization in the input space.
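As a concrete illustration of the simple regression-on-ranks strategy just described (and of its main weakness, the arbitrary class-to-number assignment), the following sketch maps the classes to the integers 1, ..., Q, fits a regression tree, and rounds the prediction back to the nearest rank. It is not the method proposed in this letter; the scikit-learn regressor and its depth parameter are illustrative stand-ins for the regression tree model of Kramer et al.

```python
# Sketch of the naive "regression on ranks" baseline: classes mapped to the
# integers 1..Q, a regression tree fitted on them, and predictions rounded
# back to the nearest rank. The 1..Q assignment is the arbitrary step
# criticized above; DecisionTreeRegressor and max_depth are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_rank_regression(X, y_ranks, max_depth=5):
    """X: (N, K) feature array; y_ranks: integer ranks in {1, ..., Q}."""
    return DecisionTreeRegressor(max_depth=max_depth).fit(X, y_ranks)

def predict_ranks(model, X, Q):
    z = model.predict(X)                          # continuous output
    return np.clip(np.rint(z), 1, Q).astype(int)  # round to the nearest valid rank
```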

Other simple alternatives that have appeared in the literature try to impose the ordinal structure through the use of cost-sensitive classification, where standard (nominal) classifiers are made aware of ordinal information by penalizing the misclassification error, commonly selecting a cost equal to the absolute deviation between the actual and the predicted ranks (Kotsiantis & Pintelas, 2004). This is suitable when the knowledge about the problem is sufficient to completely define a cost matrix. However, when this is not possible, this approach makes an important assumption about the distances between adjacent labels, namely that all of them are equal, which may not be appropriate.

The third direct alternative suggested in the literature is to transform the ordinal classification problem into a nested binary classification one (Frank & Hall, 2001; Waegeman & Boullart, 2009), and then to combine the resulting classifier predictions to obtain the final decision. It is clear that ordinal information allows ranks to be compared. For a given rank k, an associated question could be, "Is the rank of pattern x greater than k?" This question is exactly a binary classification problem, and ordinal classification can be solved by approaching each binary classification problem independently and combining the binary outputs to a rank (Frank & Hall, 2001). Another alternative (Waegeman & Boullart, 2009) imposes explicit weights over the patterns of each binary system in such a way that errors on training objects are penalized proportionally to the absolute difference between their rank and k. Binarization of ordinal regression problems can also be tackled from an augmented binary classification perspective, that is, the binary problems are not solved independently, but a single binary classifier is constructed for all the subproblems. For example, Cardoso and Pinto da Costa (2007) add more dimensions and replicate the data points through what is known as the data replication method. This augmented space is used to construct a binary classifier, and the projection onto the original one results in an ordinal classifier. A very interesting framework in this direction is that proposed by Li and Lin (2007) and Lin and Li (2012): the reduction from cost-sensitive ordinal ranking to weighted binary classification (RED), which reformulates the problem as a binary one by using a matrix for extension of the original samples, a weighting scheme, and a V-shaped cost matrix. An attractive feature of this framework is that it unifies many existing ordinal ranking algorithms, such as perceptron ranking (Crammer & Singer, 2005) and support vector ordinal regression (Chu & Keerthi, 2007). Recently, Fouad and Tiňo (2012) adapted learning vector quantization (LVQ) to the ordinal case in the context of prototype-based learning. In that work, the order information is utilized to select the class prototypes to be adapted and to improve the prototype updating process.
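The nested binary decomposition of Frank and Hall (2001) described above can be sketched as follows: Q−1 binary classifiers estimate P(rank > k | x), and class probabilities are recovered by differencing. Logistic regression is used here purely as an illustrative base classifier (the original work used C4.5), and the encoding of classes as integer ranks 1, ..., Q is an assumption of the sketch.

```python
# Sketch of the Frank & Hall (2001) nested binary decomposition: Q-1 binary
# classifiers estimate P(rank > k | x), and class probabilities are obtained
# by differencing the cumulative estimates.
import numpy as np
from sklearn.linear_model import LogisticRegression

class FrankHallOrdinal:
    def __init__(self, Q):
        self.Q = Q
        self.models = []

    def fit(self, X, y_ranks):
        self.models = []
        for k in range(1, self.Q):
            target = (y_ranks > k).astype(int)   # "is the rank greater than k?"
            self.models.append(LogisticRegression(max_iter=1000).fit(X, target))
        return self

    def predict(self, X):
        # P(y > k | x) for k = 1..Q-1, one column per threshold question.
        p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models])
        probs = np.zeros((X.shape[0], self.Q))
        probs[:, 0] = 1.0 - p_gt[:, 0]
        for k in range(1, self.Q - 1):
            probs[:, k] = p_gt[:, k - 1] - p_gt[:, k]
        probs[:, self.Q - 1] = p_gt[:, self.Q - 2]
        return probs.argmax(axis=1) + 1          # ranks are 1..Q
```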

The vast majority of proposals addressing ordinal classification can be grouped under the umbrella of threshold methods (Verwaeren, Waegeman, & De Baets, 2012). These methods assume that ordinal response is a coarsely measured latent continuous variable and model it as real intervals in one dimension. Based on this assumption, the algorithms seek a direction onto which the samples are projected and a set of thresholds that partition the direction into consecutive intervals representing ordinal categories (McCullagh, 1980; Verwaeren et al., 2012; Herbrich, Graepel, & Obermayer, 2000; Crammer & Singer, 2001; Chu & Keerthi, 2005). Proportional odds model (POM) (McCullagh, 1980) is a standard statistical approach in this direction, where the latent variable is modeled by using a linear combination of the inputs and a probabilistic distribution is assumed for the patterns projected by this function. Crammer and Singer (2001) generalized the online perceptron algorithm with multiple thresholds to perform ordinal ranking. Support vector machines (SVMs) (Cortes & Vapnik, 1995; Vapnik, 1999) were also adapted for ordinal regression, first by the large-margin algorithm of Herbrich et al. (2000). The main drawback of this first proposal was that the problem size was a quadratic function of the training data size. A related, more efficient approach was presented by Shashua and Levin (2002), who excluded the inequality constraints on the thresholds. However, this can result in nondesirable solutions because the absence of constraints can lead to difficulties in imposing order on thresholds. Chu and Keerthi (2005) explicitly and implicitly included the constraints in the model formulation (support vector for ordinal regression, SVOR), deriving the associated dual problem and the optimality conditions. From another perspective, discriminant learning has been adapted to the ordinal setup by (apart from maximizing between-class distance and minimizing within-class distance) trying to minimize distance separation between projected patterns of consecutive classes (kernel discriminant learning for ordinal regression, KDLOR) (Sun, Li, Wu, Zhang, & Li, 2010). Finally, threshold models have also been estimated by using a Bayesian framework (gaussian processes for ordinal regression, GPOR) (Chu & Ghahramani, 2005), where the latent function is modeled using gaussian processes and then all the parameters are estimated by maximum likelihood optimization.

While threshold approaches offer an interesting perspective on the problem of ordinal classification, they learn the projection from the input space onto the one-dimensional latent space only indirectly, as part of the overall model fitting. As with any other latent model fitting, direct construction hints one may have about the desired form of the latent model can prove very useful for obtaining high-quality models. The key idea of this letter is to construct such a projection model directly, using insights about class distribution obtained from pairwise distance calculations. Indeed, our motivation stems from the fact that the order information should also be present in the data input space, and it could be interesting to take advantage of it to construct a useful variable for ordering the patterns using the ordinal scale. Additionally, regression is clearly the most natural way to approximate this continuous variable. As a result, we propose to construct the ordinal classifier in two stages: the input data are first projected into a one-dimensional variable by considering the relative position of the patterns in the input space, and then a standard regression algorithm is applied to learn a function to predict new values of this derived variable.

The main contribution of this work is the projection onto a one-dimensional variable, which is done by a guided projection process. This process exploits the ordinal space distribution of patterns in the input space. A measure of how well a pattern is located within its corresponding class region is defined by considering the distances between patterns of the adjacent classes in the ordinal scale. Then a projection interval is defined for each class, and the centers of those intervals (for nonboundary classes) are associated with the best located patterns for the corresponding classes (quantified by the measure mentioned above). For the boundary classes (first and last in the class order), the extreme end points of their projection intervals are associated with the most separated patterns of those classes. All the other patterns are assigned proportional positions in their corresponding class intervals, again according to their goodness values, expressing how well a pattern is located within its class. We refer to this projection as pairwise class distances (PCD) based. The behavior of this projection is evaluated over synthetic data sets, showing an intuitive response and good ability to separate adjacent classes even in nonlinear settings.

Once the mapping is done, our framework allows the design of effective ordinal ranking algorithms based on well-tuned regression approaches. The final classifier constructed by combining PCD and a regressor is called the pairwise class distances ordinal classifier (PCDOC). In this contribution, PCDOC is implemented using ε-support vector regression (ε-SVR) (Schölkopf & Smola, 2001; Vapnik, 1999) as the base regressor, although any other properly handled regression method could be used.

We carry out an extensive set of experiments on 10 real-world ordinal regression data sets, comparing our approach with 8 state-of-the-art methods. Our method, though simple, holds up very well: of the four complementary performance metrics considered, it obtained the best mean ranking for three.

The rest of the letter is organized as follows. Section 2 introduces the ordinal classification problem and performance metrics we use to evaluate the ordinal classifiers. Section 3 explains the proposed data projection method and the classification algorithm. It also evaluates the behavior of the projection using two synthetic data sets and the performance of the classification algorithm under situations that may hamper classification. Section 4 presents the experimental design, data sets, and alternative ordinal classification methods that will be compared with our approach and discusses the experimental results. Finally, the last section sums up key conclusions and points to future work.

2.  Ordinal Classification

This section briefly introduces the mathematical notation and the ordinal classification performance metrics, including the threshold model formulation.

2.1.  Problem Formulation.

In an ordinal classification problem, the purpose is to learn a mapping from an input space X to a finite set C = {C_1, C_2, ..., C_Q} containing Q labels, where the label set has an order relation C_1 ≺ C_2 ≺ ... ≺ C_Q imposed on it. The symbol ≺ denotes the ordering between different ranks. A rank for the ordinal label can be defined as O(C_q) = q. Each pattern is represented by a K-dimensional feature vector x ∈ X ⊆ R^K and a class label y ∈ C. The training data set T is composed of N patterns, T = {(x_i, y_i)}, with x_i ∈ X, y_i ∈ C, and i = 1, ..., N.

Given these definitions, an ordinal classifier should be constructed taking into account two goals. First, the nature of the problem implies that the class order is somehow related to the distribution of patterns in the space of attributes and also to the topological distribution of the classes. Therefore, the classifier must exploit this a priori knowledge about the input space (Hühn & Hüllermeier, 2008). Second, when evaluating an ordinal classifier, the performance metrics must consider the order of the classes, so that misclassifications between adjacent classes are considered less important than those between nonadjacent classes, more separated in the class order. For example, given an ordinal weather prediction data set with a natural order between classes ranging from Very Cold to Hot, it is straightforward to see that predicting class Hot when the real class is Cold represents a more severe error than that associated with a Very Cold prediction. Thus, specialized measures are needed for evaluating the performance of ordinal classifiers (Pinto da Costa, Alonso, & Cardoso, 2008; Cruz-Ramírez, Hervás-Martínez, Sánchez-Monedero, & Gutiérrez, 2011).

2.2.  Ordinal Classification Performance Metrics.

In this work, we utilize four evaluation metrics quantifying the accuracy of the N predicted ordinal labels {ŷ_1, ..., ŷ_N} for a given data set, with respect to the true targets {y_1, ..., y_N}:

  1. Acc: the accuracy (Acc), also known as the correct classification rate, is the rate of correctly classified patterns:
    Acc = \frac{1}{N} \sum_{i=1}^{N} I(\hat{y}_i = y_i),
    where y_i is the true rank, \hat{y}_i is the predicted rank, and I(c) is the indicator function, being equal to 1 if c is true and 0 otherwise. Acc values range from 0 to 1, and they represent a global performance on the classification task. Although Acc is widely used in classification tasks, it is not suitable for some types of problems, such as imbalanced data sets (Sánchez-Monedero, Gutiérrez, Fernández-Navarro, & Hervás-Martínez, 2011) (very different numbers of patterns for each class) or ordinal data sets (Baccianella, Esuli, & Sebastiani, 2009).
  2. MAE: the mean absolute error (MAE) is the average absolute deviation of the predicted ranks from the true ranks (Baccianella et al., 2009),
    MAE = \frac{1}{N} \sum_{i=1}^{N} e_i,
    where e_i = |O(y_i) - O(\hat{y}_i)| is the absolute distance between the true and predicted ranks. The MAE values range from 0 to Q−1. Since Acc does not reflect the category order, MAE is typically used in the ordinal classification literature together with Acc (Pinto da Costa et al., 2008; Agresti, 1984; Waegeman & De Baets, 2011; Chu & Keerthi, 2007; Chu & Ghahramani, 2005; Li & Lin, 2007). However, neither Acc nor MAE is suitable for problems with imbalanced classes. This is rectified in the average MAE (AMAE) (Baccianella et al., 2009), which measures the mean performance of the classifier across all classes.
  3. AMAE: this measure evaluates the mean of the MAEs across classes (Baccianella et al., 2009). It has been proposed as a more robust alternative to MAE for imbalanced data sets, a common situation in ordinal classification, where extreme classes (associated with rare situations) tend to be less populated:
    AMAE = \frac{1}{Q} \sum_{j=1}^{Q} MAE_j = \frac{1}{Q} \sum_{j=1}^{Q} \frac{1}{n_j} \sum_{y_i \in C_j} e_i,
    where AMAE values range from 0 to Q−1 and n_j is the number of patterns in class j.
  4. τ_b: Kendall's τ_b is a statistic used to measure the association between two measured quantities. Specifically, it is a measure of rank correlation (Kendall, 1962),
    \tau_b = \frac{\sum_{i,j} c_{ij} \hat{c}_{ij}}{\sqrt{\sum_{i,j} c_{ij}^2 \sum_{i,j} \hat{c}_{ij}^2}},
    where \hat{c}_{ij} is +1 if \hat{y}_i is greater than \hat{y}_j (in the ordinal scale), 0 if \hat{y}_i and \hat{y}_j are the same, and −1 if \hat{y}_i is lower than \hat{y}_j, and the same for c_{ij} (using y_i and y_j). τ_b values range from −1 (maximum disagreement between the prediction and the true label), through 0 (no correlation between them), to 1 (maximum agreement). τ_b has been advocated as a better measure for ordinal variables because it is independent of the values used to represent classes (Cardoso & Sousa, 2011), since it works directly on the set of pairs corresponding to different observations. One may argue that shifting the predictions by one class would keep the same τ_b value, whereas the quality of the ordinal classification is lower. However, since there is a finite number of classes, shifting all predictions by one class will have a detrimental effect on the boundary classes and so would substantially decrease the performance, even as measured by τ_b. As a consequence, τ_b is an interesting measure for ordinal classification but should be used in conjunction with other ones. A short code sketch of these four metrics is given after this list.
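The following minimal sketch computes the four metrics for labels encoded as integer ranks 1, ..., Q. It assumes every class is present in the true labels; SciPy's kendalltau implements the τ_b variant with tie correction.

```python
# Sketch of the four evaluation metrics for ordinal labels encoded as integer
# ranks 1..Q (y_true and y_pred are integer NumPy arrays).
import numpy as np
from scipy.stats import kendalltau

def acc(y_true, y_pred):
    return np.mean(y_true == y_pred)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def amae(y_true, y_pred, Q):
    # Average of the per-class MAEs; assumes every class appears in y_true.
    errors = np.abs(y_true - y_pred)
    return np.mean([errors[y_true == q].mean() for q in range(1, Q + 1)])

def tau_b(y_true, y_pred):
    # Kendall's tau_b rank correlation; SciPy applies the tie correction.
    return kendalltau(y_true, y_pred)[0]
```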

2.3.  Latent Variable Modeling for Ordinal Classification.

Latent variable models or threshold models are probably the most important type of ordinal regression models. These models consider the ordinal scale as the result of coarse measurements of a continuous variable, called the latent variable. It is typically assumed that the latent variable is difficult to measure or cannot be observed itself (Verwaeren et al., 2012). The threshold model can be represented with the following general expression,
C(x) = C_q \iff g(x) \in [\theta_{q-1}, \theta_q), \quad q = 1, \ldots, Q,
2.1
where g: X → R is the function that projects the data space onto the one-dimensional latent space, and θ_0 = −∞ ≤ θ_1 ≤ ... ≤ θ_{Q−1} ≤ θ_Q = +∞ are the thresholds that divide the latent space into ordered intervals corresponding to the classes.

In our proposal, it is assumed that a model g(x) = z can be found that links data items x with their latent space representation z. We place our proposal in the context of latent variable models for ordinal classification because of its similarity to these models. In contrast to other models employing a one-dimensional latent space, such as POM (McCullagh, 1980), we do not consider variable thresholds but impose fixed values for θ. However, suitable dimensionality reduction is given due attention: first, by trying to exploit the ordinal structure of the input space X, and second, by explicitly putting external pressure on the margins between the classes in the latent space (see section 3.2).
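A small sketch of the generic threshold decision rule of equation 2.1 is given below. The fixed uniform thresholds on [0, 1] correspond to the scheme assumed by our proposal (see section 3.1); threshold models such as POM would instead learn θ from the data.

```python
# Generic threshold-model decision rule (equation 2.1): a latent value g(x)
# is mapped to the class whose interval [theta_{q-1}, theta_q) contains it.
# The uniform thresholds below are the fixed scheme used by the proposal;
# learned thresholds could be plugged in instead.
import numpy as np

def uniform_thresholds(Q):
    # theta_1, ..., theta_{Q-1} dividing [0, 1] into Q equal intervals.
    return np.arange(1, Q) / Q

def threshold_decision(z, thresholds):
    # searchsorted counts how many thresholds lie at or below z, which is
    # the 0-based class index; ranks are returned 1-based.
    return np.searchsorted(thresholds, z, side='right') + 1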

3.  Proposed Method

Our approach is different from the previous ones in that it does not implicitly learn latent representations of the training inputs. Instead, we impose how training inputs x_i are going to be represented through their latent values z_i. Then this representation is generalized to the whole input space by training a regressor on the (x_i, z_i) pairs, resulting in a projection function g: X → R. To ease the presentation, we sometimes write training input patterns x as x^{(q)} to explicitly reflect their class label rank q (i.e., the class label of x is C_q).

3.1.  Pairwise Class Distance Projection.

To describe the pairwise class distance (PCD) projection, first, we define a measure of how well a pattern x^{(q)} is placed within the other instances of class C_q, by considering its Euclidean distances to the patterns in adjacent classes. This is done on the assumption of an ordinal pattern distribution in the input space X. For calculating this measure, the minimum distances of a pattern x^{(q)}_i to patterns in the previous and next classes, C_{q−1} and C_{q+1}, respectively, are used. The minimum distance to the previous/next class is
d(x_i^{(q)}, C_{q \pm 1}) = \min_{x_j \in C_{q \pm 1}} d(x_i^{(q)}, x_j),
3.1
where d(x_i, x_j) is the Euclidean distance between x_i and x_j. Then
w(x_i^{(q)}) = \frac{d(x_i^{(q)}, C_{q-1}) + d(x_i^{(q)}, C_{q+1})}{\max_{x_j \in C_q} \left[ d(x_j, C_{q-1}) + d(x_j, C_{q+1}) \right]},
3.2
where the sum of the minimum distances of a pattern with respect to the adjacent classes is normalized across all patterns of the class, so that w has a maximum value of 1. For the boundary classes C_1 and C_Q, which have a single adjacent class, only the corresponding distance is used.

Figure 1 shows the idea of minimum distances for each pattern with respect to the patterns of the adjacent classes. In this figure, patterns of the second class are considered. The example illustrates how the w value is obtained for the pattern x^{(2)} marked with a circle. For distances between x^{(2)} and class 1 patterns, the item x^{(1)} has the minimum distance, so d(x^{(2)}, C_1) is calculated by using this pattern. For distances between x^{(2)} and class 3 patterns, d(x^{(2)}, C_3) is the minimum distance between x^{(2)} and x^{(3)}.

Figure 1:

Illustration of the idea of minimum pairwise class distances. All the minimum distances of patterns of class C_2 with respect to patterns of the adjacent classes are drawn with lines. x^{(2)} is the point for which we want to calculate the associated w value.

By using w, we can derive a latent variable value z_i for each training pattern. Before continuing, thresholds must be defined in order to establish the intervals on z that correspond to each class, so that the calculated values for z_i are positioned in the proper interval. Also, predicted values for unseen data will be assigned to the different classes according to these thresholds (see section 3.3), in a similar way to any other threshold model. For simplicity, z is defined between 0 and 1, and the thresholds are positioned in a uniform manner:
\theta_q = \frac{q}{Q}, \quad q = 0, 1, \ldots, Q,
3.3
so that θ_0 = 0, θ_Q = 1, and class C_q is associated with the interval [θ_{q−1}, θ_q). Considering θ, the centers for the z values belonging to class C_q are set to c_1 = 0, c_Q = 1, and
c_q = \frac{\theta_{q-1} + \theta_q}{2} = \frac{2q - 1}{2Q}, \quad 1 < q < Q.
3.4
We now construct z_i values for training inputs x^{(q)}_i by considering the following criteria. If x^{(q)}_i has similar minimum distances d(x^{(q)}_i, C_{q−1}) and d(x^{(q)}_i, C_{q+1}) (and consequently a high value of w(x^{(q)}_i)), the resulting z_i value should be closer to c_q, so that intuitively we consider this pattern as well located within its class. If d(x^{(q)}_i, C_{q−1}) and d(x^{(q)}_i, C_{q+1}) are very different (and consequently a low value of w(x^{(q)}_i) is obtained), the pattern x^{(q)}_i is closer to one of these classes, and so the corresponding z_i value should be closer to the interval of z values of the closest adjacent class, q−1 or q+1. This idea is formalized in the following expression:
z_i = c_q + \mathrm{sign}\left( d(x_i^{(q)}, C_{q-1}) - d(x_i^{(q)}, C_{q+1}) \right) \frac{1 - w(x_i^{(q)})}{2Q},
3.5
where w is defined in equation 3.2, c_q is the center of the class interval corresponding to C_q (see equation 3.4), Q is the number of classes, and the distance to a nonexistent adjacent class (for the boundary classes C_1 and C_Q) is taken as +∞. Equation 3.5 guarantees that all z values lie in the correct class interval. This methodology for data projection is called pairwise class distances (PCD).
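A sketch of the PCD projection for a training set is shown below. It follows equations 3.1 to 3.5 as reconstructed above (uniform thresholds on [0, 1], class centers at the interval midpoints, and an offset of at most half an interval scaled by 1 − w); the exact constants should be read as illustrative rather than as the only possible choice.

```python
# Sketch of the PCD projection for training data, following section 3.1.
# Ranks are integers 1..Q and all classes are assumed to be present; the
# offset of z around the class center mirrors the reconstruction of
# equations 3.3-3.5 above.
import numpy as np
from scipy.spatial.distance import cdist

def pcd_projection(X, y, Q):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    z = np.zeros(len(y))
    centers = np.zeros(Q + 1)                       # 1-based indexing of class centers
    centers[1], centers[Q] = 0.0, 1.0
    for q in range(2, Q):
        centers[q] = (2 * q - 1) / (2.0 * Q)        # midpoint of [theta_{q-1}, theta_q)

    for q in range(1, Q + 1):
        idx = np.where(y == q)[0]
        # Equation 3.1: minimum Euclidean distance to the adjacent classes
        # (taken as +inf for the nonexistent neighbours of C_1 and C_Q).
        d_prev = cdist(X[idx], X[y == q - 1]).min(axis=1) if q > 1 else np.full(len(idx), np.inf)
        d_next = cdist(X[idx], X[y == q + 1]).min(axis=1) if q < Q else np.full(len(idx), np.inf)
        # Equation 3.2: sum of the (finite) adjacent-class distances, normalized
        # so that the best located pattern of the class has w = 1.
        s = np.where(np.isfinite(d_prev), d_prev, 0.0) + np.where(np.isfinite(d_next), d_next, 0.0)
        w = s / s.max()
        # Equation 3.5: shift z from the class center toward the closer adjacent
        # class, without ever crossing a threshold.
        direction = np.sign(d_prev - d_next)
        z[idx] = centers[q] + direction * (1.0 - w) / (2.0 * Q)
    return z
```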

3.2.  Analysis of the Proposed Projection in Synthetic Data Sets.

For illustration purposes, we generated synthetic ordinal classification data sets in R^2 with four classes (Q=4). Figure 2 shows the patterns of a synthetic data set, SyntheticLinearOrder, with a linear order between classes, and Figure 3 shows the SyntheticNonLinearOrder data set, with a nonlinear ordinal relationship between classes. Points in SyntheticLinearOrder were generated by adding uniform noise to points of a line. Points in SyntheticNonLinearOrder were generated by adding gaussian noise to points on a spiral. In both figures, points belonging to different classes are marked with different colors and symbols. Besides the points, the figures also illustrate basic concepts of the proposed method on example points (surrounded by gray circles). For these points, the minimum distances are illustrated with lines of the corresponding class color. The minimum distances of a point to the previous and next class patterns are marked with dashed and solid lines, respectively. For selected points, we show the value of the PCD projection z_i (calculated using equation 3.5).

Figure 2:

Example of the generated zi values on a synthetic data set with a linear order relationship.

Figure 3:

Example of the generated z_i values on the synthetic data set with a nonlinear class order structure. The figure on the right shows a zoomed view of the upper left area at the center of the data set shown on the left.

In Figure 2, the z value increases for patterns of the higher classes, and this value varies depending on the position of the pattern x^{(q)} in the space with respect to the patterns x^{(q−1)} and x^{(q+1)} of the adjacent classes. The extreme values, z=0.0 and z=1.0, correspond to the patterns of classes 1 and Q that are most distant from their respective adjacent classes (and thus have a maximum w value). SyntheticNonLinearOrder in Figure 3 is designed to demonstrate that the PCD projection is suitable for more complex ordinal topologies of the data. That is, for any topology in an ordinal data set, it is expected that patterns of classes q−1 and q+1 are always the closest ones to the patterns of class q, and PCD will take advantage of this situation to decide the relative order of the pattern within its class, even when this order is produced in a nonlinear manner.

Figures 4a and 4b show histograms of the PCD projections for the synthetic data sets in Figures 2 and 3, respectively. The thresholds θ that divide the z values of the different classes are also included. Observe that the z values of the different classes are clearly separated and that they are compacted within a range that is always smaller than the range initially indicated by the thresholds. This is due to the scaling of the values in equation 3.2, where w cannot be exactly zero, so a pattern can never be located on the boundary separating the intervals of adjacent classes.

Figure 4:

Histograms of the PCD projection of the synthetic data sets.

3.3.  Algorithm for Ordinal Classification.

Once the PCD projections have been obtained for all training inputs, we construct a new training set T' = {(x_i, z_i)}, i = 1, ..., N. Any generic regression tool can be trained on T' to obtain the projection function g: X → R. In this respect, our method is quite general, allowing the user to choose his or her favorite regression method or any other improved regression tool introduced in the future. The resulting algorithm, pairwise class distances for ordinal classification (PCDOC), is described in two steps in Figures 5 and 6, and a code sketch is given below.
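The following compact sketch covers the two PCDOC steps, reusing the pcd_projection function from the sketch in section 3.1. The ε-SVR and its hyperparameter values are placeholders; in the experiments they are tuned by nested cross-validation on MAE (see section 4.2).

```python
# Sketch of the two PCDOC stages: (1) compute the PCD latent values for the
# training set, (2) fit a regressor on (x_i, z_i) and classify unseen data by
# locating the predicted latent value in the fixed uniform intervals.
import numpy as np
from sklearn.svm import SVR

def pcdoc_train(X, y, Q, C=10.0, gamma=0.1, epsilon=0.01):
    z = pcd_projection(X, y, Q)                    # step 1: PCD latent values
    return SVR(kernel='rbf', C=C, gamma=gamma, epsilon=epsilon).fit(X, z)

def pcdoc_predict(regressor, X, Q):
    z_hat = regressor.predict(X)                   # step 2: predict the latent value
    thresholds = np.arange(1, Q) / Q               # fixed uniform thresholds on [0, 1]
    return np.searchsorted(thresholds, z_hat, side='right') + 1
```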

Figure 5:

PCDOC regression training algorithm pseudocode.

Figure 6:

PCDOC classification algorithm for unseen data.

It is expected that formulating the problem as a regression problem would help the model to capture the ordinal structure of the input and output spaces and their relationship. In addition, due to the nature of the regression problem, it is expected that the performance of the classification task will be improved regarding metrics that consider the difference between the predicted and actual classes within the linear class order, such as MAE or AMAE, or the correlation between the target and predicted values, such as τ_b. Experimental results confirm this hypothesis in section 4.3.

3.4.  PCDOC Performance Analysis in Some Controlled Experiments.

3.4.1.  Analysis of the Influence of Dimensionality and Class Overlapping.

This section analyzes the performance of the PCDOC algorithm under situations that may hamper classification: class overlapping and large dimensionality of the data. For this purpose, different synthetic data sets have been generated by sampling random points from Q gaussian distributions, where Q is the number of classes, so that each class is a random sample of the corresponding gaussian distribution. In order to easily control the overlap of the classes, the variance (σ²) is kept constant independent of the number of dimensions (K). In addition, the Q centers (means μ_q) are set up in order to keep a distance of 1 between two adjacent class means independent of K. Under this setting, each coordinate of two adjacent class means is separated by 1/√K, so that the Euclidean distance between adjacent means is always 1.

Several values of the number of features (input space dimensionality K) and of the gaussian width σ were tested, so that 18 data sets were generated. The number of patterns for each class from 1 to 4 was 10, 100, 100, and 5. Figure 7 shows two of these data sets, generated with different variance values for K=2. A sketch of the generation procedure is given below.
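The gaussian generator described above can be sketched as follows. The per-class sample sizes (10, 100, 100, 5) are taken from the text; the specific K and σ grids used for the 18 data sets are not reproduced, so the function takes them as arguments.

```python
# Sketch of the synthetic gaussian generator: Q class means placed so that
# adjacent means are at Euclidean distance 1 regardless of K (each coordinate
# shifted by 1/sqrt(K)), isotropic variance sigma^2, imbalanced class sizes.
import numpy as np

def gaussian_ordinal_dataset(K, sigma, sizes=(10, 100, 100, 5), seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for q, n in enumerate(sizes, start=1):
        mu = np.full(K, (q - 1) / np.sqrt(K))    # ||mu_{q+1} - mu_q|| = 1
        X.append(rng.normal(mu, sigma, size=(n, K)))
        y.append(np.full(n, q))
    return np.vstack(X), np.concatenate(y)
```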

Figure 7:

Synthetic gaussian data set example for two dimensions.

For these experiments, our approach uses the support vector regression (SVR) algorithm as the model for the z variable (the method will be referred to as SVR-PCDOC). We have also included three methods as baseline methods: the C-support vector classification (SVC) (Cortes & Vapnik, 1995; Vapnik, 1999), the support vector ordinal regression with explicit constraints (SVOREX) (Chu & Keerthi, 2005, 2007), and the kernel discriminant learning for ordinal regression (KDLOR) (Sun et al., 2010). As in the next experimental section (section 4), the experimental design includes 30 stratified random splits (with 75% of patterns for training and the remaining for generalization). The mean MAE and AMAE generalization results are used for comparison purposes in Figure 8 (for further details about experimental procedure, methods description, and hyperparameter optimization, refer to section 4.2).

Figure 8:

MAE and AMAE performance for the synthetic gaussian data sets with distribution N(μ_q, σ²·I_K), where I_K is the identity matrix.

From the results depicted in Figure 8, we can generally conclude that the three methods other than KDLOR show a similar MAE performance degradation with the increase of class overlapping and dimensionality. Figure 8a shows that SVR-PCDOC has a slightly worse performance than SVC and SVOREX. However, in the experiments with higher K (see Figures 8c and 8e), the performance of the three ordinal methods varies in a similar way. In particular, in Figure 8e we can observe that SVC performance decreases with high overlapping and high dimensionality, whereas the ordinal methods have similar performance here. From the analysis of the AMAE performance, we can conclude that KDLOR outperforms the rest of the methods in cases of low class overlapping. Regarding our method, we can conclude that, compared with the other methods, its AMAE performance is worse in the case of low class overlap. However, in general, our method seems more robust when the class overlap increases.

3.4.2.  Analysis of the Influence of Data Multimodality.

This section extends the experiments to the case of multimodal data. The data sets are generated with K=2 and a fixed variance σ², and the number of modes per class is varied. Figure 9a presents the unimodal case. The data sets with more modes per class are generated in the following way. A gaussian distribution is set up as in the previous section, with center μ_q. For each class, each additional gaussian distribution is centered at a random location within the hypersphere with center μ_q and radius 0.75. Then patterns are sampled from each distribution. For each class, we considered different numbers of modes, from one mode to four modes. The number of patterns generated for each mode was 36, 90, 90, and 24 for classes 1, 2, 3, and 4, respectively, using the same number for all modes of a class. An example of the bimodal case (two gaussian distributions per class) is shown in Figure 9b, having 72, 180, 180, and 48 patterns for classes 1, 2, 3, and 4, respectively. A sketch of this generator is shown below.
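The multimodal variant can be sketched as below for K = 2: each class keeps a base gaussian mode and adds further modes centered at random locations within a disc of radius 0.75 around the class mean, as described above. Sampling uniformly within a disc is one simple way to realize "a random location within the hypersphere".

```python
# Sketch of the multimodal class generator for K = 2: one base mode at mu
# plus (n_modes - 1) extra modes centered uniformly within a disc of
# radius 0.75 around mu.
import numpy as np

def multimodal_class(mu, sigma, n_per_mode, n_modes, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    samples = [rng.normal(mu, sigma, size=(n_per_mode, 2))]
    for _ in range(n_modes - 1):
        r = 0.75 * np.sqrt(rng.uniform())            # uniform radius within the disc
        angle = rng.uniform(0.0, 2.0 * np.pi)
        center = np.asarray(mu) + r * np.array([np.cos(angle), np.sin(angle)])
        samples.append(rng.normal(center, sigma, size=(n_per_mode, 2)))
    return np.vstack(samples)
```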

Figure 9:

Illustration of the unimodal and bimodal cases of the synthetic gaussian data set example for K=2.

Experiments were carried out as in the previous section, and the MAE and AMAE generalization results are depicted in Figure 10. Regarding MAE, Figure 10a reveals that the four methods perform similarly on the data sets with one and four modes, but they differ in performance for those with two and three modes. Considering only MAE, SVR-PCDOC has the worst performance in the two- and three-mode cases. Nevertheless, considering the AMAE results in Figure 10b, SVR-PCDOC and KDLOR achieve the best results. The different behavior of the methods depending on the performance measure can be explained by observing the nature of the bimodal data set (see Figure 9b), where the majority of the patterns are from classes 2 and 3. In this context, the optimization done by SVOREX and SVC can move the decision thresholds to better classify patterns of these two classes at the expense of misclassifying class 1 and 4 patterns, especially those placed on the class boundaries (see Figure 9b).

Figure 10:

MAE and AMAE performance for the synthetic gaussian data sets with distribution N(μ_q, σ²·I_K), where I_K is the identity matrix (with K=2 for all the synthetic data sets).

4.  Experiments

In this section we report on extensive experiments that were performed to check the competitiveness of the proposed methodology. The source code of the proposed method, synthetic data sets analysis code, and real ordinal data sets partitions used for the experiments are available at a public website (http://www.uco.es/grupos/ayrna/neco-pairwisedistances).

4.1.  Ordinal Classification Data Sets and Experimental Design.

To the best of our knowledge, there are no public specific data set repositories for real ordinal classification problems. The ordinal regression benchmark data sets repository provided by Chu and Ghahramani (2005) is the most widely used repository in the literature. However, these data sets are not real ordinal classification data sets but regression ones. To turn regression into ordinal classification, the target variable was discretized into Q different bins (representing classes) with equal frequency or equal width. However, there are potential problems with this approach. If equal frequency labeling is considered, the data sets do not exhibit some characteristics of typical complex classification tasks, such as class imbalance. On the other hand, severe class imbalance can be introduced by using the same binning width. Finally, as the actual target regression variable exists with observed values, the classification problem can be simpler than on those data sets where the variable z is really unobservable and has to be modeled.

We have therefore decided to use a set of real ordinal classification data sets publicly available at the UCI (Asuncion & Newman, 2007) and mldata.org repositories (Sonnenburg, 2011) (see Table 1 for data description). All of them are ordinal classification problems, although one can find literature where the ordering information is discarded. The nature of the target variable is now analyzed for two example data sets. The bondrate data set is a classification problem where the purpose is to assign the right ordered category to bonds, with the category labels C1 = AAA, C2 = AA, C3 = A, C4 = BBB, and C5 = BB. These labels represent the quality of a bond and are assigned by credit rating agencies, AAA being the highest quality and BB the worst. In this case, classes AAA, AA, and A are more similar to each other than classes BBB and BB, so that no assumptions should be made about the distance between classes in either the input or the latent space. The other example is the eucalyptus data set; in this case, the problem is to predict which eucalyptus seedlots are best for soil conservation in a seasonally dry hill country. The classes are C1 = none, C2 = low, C3 = average, C4 = good, and C5 = best; it cannot be assumed that there is an equal width for each class in the latent space.

Table 1:
Data Sets Used for the Experiments.

Data Set          N     K   Q  Ordered Class Distribution
automobile        205   71  6  (3,22,67,54,32,27)
bondrate          57    37  5  (6,33,12,5,1)
contact-lenses    24    –   3  (15,5,4)
eucalyptus        736   91  5  (180,107,130,214,105)
newthyroid        215   –   3  (30,150,35)
pasture           36    25  3  (12,12,12)
squash-stored     52    51  3  (23,21,8)
squash-unstored   52    52  3  (24,24,4)
tae               151   54  3  (49,50,52)
winequality-red   1599  11  6  (10,53,681,638,199,18)

Note: N is the number of patterns, K is the number of attributes, and Q is the number of classes; – denotes an entry that is not available.

Regarding the experimental setup, 30 random splits of the data sets have been considered, with 75% and 25% of the instances in the training and test sets, respectively. The partitions were the same for all compared methods, and since all of them are deterministic, one model was obtained and evaluated (on the test (generalization) set) for each split. All nominal attributes were transformed into as many binary attributes as the number of categories. All the data sets were properly standardized. A sketch of this preprocessing protocol is given below.
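The following sketch illustrates this preprocessing and splitting protocol using common scikit-learn utilities; the DataFrame layout and column handling are illustrative assumptions, not the exact scripts used in the letter.

```python
# Sketch of the experimental protocol: nominal attributes one-hot encoded,
# features standardized with training statistics only, and 30 stratified
# 75/25 holdout splits.
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler

def make_splits(df, label_col, n_splits=30, test_size=0.25, seed=0):
    X = pd.get_dummies(df.drop(columns=[label_col]))   # binary-expand nominal attributes
    y = df[label_col].to_numpy()
    sss = StratifiedShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    for train_idx, test_idx in sss.split(X, y):
        scaler = StandardScaler().fit(X.iloc[train_idx])
        yield (scaler.transform(X.iloc[train_idx]), y[train_idx],
               scaler.transform(X.iloc[test_idx]), y[test_idx])
```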

4.2.  Existing Methods Used for Comparisons.

For comparison purposes, different state-of-the-art methods have been included in the experimentation:

  • Gaussian processes for ordinal regression (GPOR) (Chu & Ghahramani, 2005) presents a probabilistic kernel approach to ordinal regression based on gaussian processes, where a threshold model that generalizes the probit function is used as the likelihood function for ordinal variables. In addition, Chu and Ghahramani apply the automatic relevance determination (ARD) method proposed by Mackay (1994) and Neal (1996) to the GPOR model. When using GPOR with ARD feature selection, we will refer to the algorithm as GPOR-ARD.

  • Support vector ordinal regression (SVOR) (Chu & Keerthi, 2005, 2007) proposes two new support vector approaches for ordinal regression. Here, multiple thresholds are optimized in order to define parallel discriminant hyperplanes for the ordinal scales. The first approach, with explicit inequality constraints on the thresholds, derives the optimal conditions for the dual problem and adapts the SMO algorithm for the solution; we will refer to it as SVOREX. In the second approach, the samples in all the categories are allowed to contribute errors for each threshold; therefore, there is no need to include the inequality constraints in the problem. This approach is named an SVOR with implicit constraints (SVORIM).

  • RED-SVM (Li & Lin, 2007) applies the reduction from cost-sensitive ordinal ranking to weighted binary classification (RED) framework to SVM. The RED method can be summarized in three steps. First, transform all training samples into extended samples by using a coding matrix and weighting these samples with a cost matrix. Second, all the extended examples are jointly learned by a binary classifier with confidence outputs, aiming at a low weighted 0/1 loss. Finally, convert the binary outputs to a rank. In this letter, the coding matrix considered is the identity, and the cost matrix is the absolute value matrix, applied to the standard binary soft-margin SVM.

  • A simple approach to ordinal regression (ASAOR) (Frank & Hall, 2001) is a general method that enables standard classification algorithms to make use of the order information in ordinal attributes. For the training process, the method transforms the Q-class ordinal problem into Q−1 binary class problems. Any ordinal attribute with ordered values is converted into Q−1 binary attributes. The class of new instances is predicted by estimating the probability of belonging to each of the Q classes with the Q−1 models. In the current work, the C4.5 method available in Weka (Hall et al., 2009) is used as the underlying classification algorithm, since this is the one initially employed by the authors of ASAOR. In this way, the algorithm is identified as ASAOR(C4.5).

  • The proportional odds model (POM) is one of the first models specifically designed for ordinal regression (McCullagh, 1980). The model is based on the assumption of stochastic ordering of the space X. Stochastic ordering is satisfied by a monotonic function (the model) that defines a probability density function over the class labels for a given feature vector x. Due to the thresholds that divide the monotonic function values corresponding to the different classes, this method was the first one to be named a threshold model. The main problem associated with this model is that the projection is done by considering a linear combination of the inputs (linear projection), which hinders its performance. For the POM model, the implementation available in the Matlab software has been used.

  • Kernel discriminant learning for ordinal regression (KDLOR) (Sun et al., 2010) extends the kernel discriminant analysis (KDA) using a rank constraint. The method looks for the optimal projection that maximizes the separation between the projection of the different classes and minimizes the intraclass distance as in traditional discriminant analysis for nominal classes. Crucially, the order of the classes in the resulting projection is also considered. The authors claim that compared with the SVM-based methods, the KDA approach takes advantage of the global information of the data and the distribution of the classes and also reduces the computational complexity of the problem.

  • Support vector machine (SVM) (Cortes & Vapnik, 1995; Vapnik, 1999) nominal classifier is included in the experiments in order to establish a baseline nominal performance. C-support vector classification (SVC) available in libSVM 3.0 (Chang & Lin, 2011) is used as the SVM classifier implementation. In order to deal with the multiclass case, a 1-versus-1 approach has been considered, following the recommendations of Hsu and Lin (2002).

In our approach, the support vector regression (SVR) algorithm is used as the model for the z variable. The method will be referred to by the acronym SVR-PCDOC. The ε-SVR implementation available in libSVM is used. The authors of GPOR, SVOREX, SVORIM, and RED-SVM provide publicly available software implementations of their methods. In the case of KDLOR, the method has been implemented by the authors using Matlab software (Perez-Ortiz et al., 2011).

Model selection is an important issue and involves selecting the best hyperparameter combination for all the methods compared. All the methods were configured to use the gaussian kernel. For the support vector algorithms (SVC, RED-SVM, SVOREX, SVORIM, and ε-SVR), the corresponding hyperparameters (the regularization parameter C and the width of the gaussian functions, σ) were adjusted using a grid search over each of the 30 training sets by a fivefold nested cross-validation. For ε-SVR, the additional parameter ε also has to be adjusted and was included in the grid search. For KDLOR, the width of the gaussian kernel and the regularization parameter u (used for avoiding the singularity problem) were adjusted in the same way. The POM and ASAOR(C4.5) methods do not have hyperparameters. Finally, GPOR-ARD has no hyperparameters to fix since the method optimizes the associated parameters itself.

For all the methods, the MAE measure is used as the performance metric for guiding the grid search, to be consistent with the authors of the different state-of-the-art methods. The grid search procedure of SVC in libSVM has been modified in order to use MAE as the criterion for hyperparameter selection. A sketch of this selection procedure is given below.
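The following sketch illustrates the kind of grid search used for the support vector methods, with MAE over the integer ranks as the model selection criterion. The grid values are placeholders rather than the exact ranges used in the experiments, and SVC stands in for any of the kernel methods.

```python
# Sketch of hyperparameter selection by cross-validated grid search using MAE
# over the integer ranks as the criterion (lower is better). Labels are
# assumed to be the integer ranks 1..Q; grid values are placeholders.
import numpy as np
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

mae_scorer = make_scorer(mean_absolute_error, greater_is_better=False)

param_grid = {'C': 10.0 ** np.arange(-3, 4),      # placeholder ranges
              'gamma': 10.0 ** np.arange(-3, 4)}

def tune_svc(X_train, y_train):
    search = GridSearchCV(SVC(kernel='rbf'), param_grid,
                          scoring=mae_scorer, cv=5)  # fivefold CV on the training split
    return search.fit(X_train, y_train).best_estimator_
```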

4.3.  Performance Results.

Table 2 outlines the results through the mean and standard deviation (SD) of AccG, MAEG, AMAEG, and τG across the 30 holdout splits, where the subindex G indicates that results were obtained on the (holdout) generalization fold. As a summary, Table 3 shows for each performance metric the mean values of the metric across all the data sets and the mean ranking values when comparing the different methods (R=1 for the best-performing method and R=9 for the worst one). To enhance readability, in Tables 2 and 3, the best and second-best results are in boldface and italics, respectively.

Table 2:
Comparison of the Proposed Method to Other Ordinal Classification Methods and SVC.

Data sets (columns, left to right): automobile, bondrate, contact-lenses, eucalyptus, newthyroid, pasture, squash-stored, squash-unstored, tae, winequality-red.

Acc (Mean±SD)
   ASAOR(C4.5)  0.696±0.059  0.533±0.074  –            0.639±0.036  0.917±0.039  0.752±0.145  0.603±0.118  –            0.395±0.058  0.603±0.021
   GPOR         0.611±0.073  0.578±0.032  0.606±0.093  0.686±0.034  0.966±0.024  0.522±0.178  0.451±0.101  0.644±0.162  0.328±0.041  0.606±0.015
   KDLOR        0.722±0.058  0.542±0.087  0.589±0.174  0.611±0.028  –            –            0.703±0.112  0.828±0.104  0.555±0.052  0.603±0.017
   POM          0.467±0.194  0.344±0.161  0.622±0.138  0.159±0.036  –            0.496±0.154  0.382±0.152  0.349±0.143  0.512±0.089  0.594±0.017
   SVC          –            –            0.794±0.129  –            0.967±0.025  0.633±0.134  0.656±0.127  0.700±0.082  0.539±0.062  0.636±0.021
   RED-SVM      0.684±0.055  0.553±0.073  0.700±0.111  0.651±0.024  0.969±0.022  0.648±0.134  0.664±0.104  0.749±0.086  0.522±0.074  0.618±0.022
   SVOREX       0.665±0.068  0.553±0.096  0.650±0.127  0.647±0.029  0.967±0.022  0.630±0.125  0.628±0.133  0.718±0.128  0.581±0.060  0.629±0.022
   SVORIM       0.639±0.076  0.547±0.092  0.633±0.127  0.639±0.028  0.969±0.021  0.667±0.120  0.639±0.118  0.764±0.103  0.590±0.066  0.630±0.022
   SVR-PCDOC    0.678±0.060  0.540±0.101  0.689±0.095  0.648±0.029  0.973±0.020  0.656±0.103  –            0.695±0.084  –            –

MAE (Mean±SD)
   ASAOR(C4.5)  0.401±0.095  0.624±0.079  –            0.384±0.042  0.083±0.039  0.248±0.145  0.444±0.140  –            0.686±0.146  0.441±0.023
   GPOR         0.594±0.131  0.624±0.062  0.511±0.175  0.331±0.038  0.034±0.024  0.489±0.190  0.626±0.148  0.356±0.162  0.861±0.155  0.425±0.017
   KDLOR        0.334±0.076  0.587±0.107  0.539±0.208  0.424±0.032  –            –            0.308±0.128  0.172±0.104  0.473±0.069  0.443±0.019
   POM          0.953±0.687  0.947±0.321  0.533±0.241  2.029±0.070  –            0.585±0.204  0.813±0.248  0.826±0.230  0.626±0.126  0.439±0.019
   SVC          0.446±0.095  0.624±0.090  0.311±0.222  0.394±0.042  0.033±0.025  0.367±0.134  0.377±0.160  0.308±0.090  0.578±0.083  0.408±0.020
   RED-SVM      –            0.598±0.088  0.378±0.169  –            0.032±0.022  0.359±0.142  0.346±0.110  0.251±0.086  0.515±0.087  0.419±0.021
   SVOREX       0.408±0.089  –            0.489±0.185  0.392±0.031  0.033±0.022  0.370±0.125  0.382±0.139  0.282±0.128  0.485±0.078  0.408±0.023
   SVORIM       0.424±0.090  0.591±0.102  0.506±0.167  0.395±0.035  0.031±0.021  0.333±0.120  0.372±0.126  –            –            –
   SVR-PCDOC    0.397±0.093  0.568±0.126  –            0.392±0.038  0.027±0.020  0.348±0.104  –            0.305±0.084  0.457±0.071  0.400±0.023

AMAE (Mean±SD)
   ASAOR(C4.5)  0.511±0.104  1.226±0.175  –            0.428±0.045  0.115±0.056  0.248±0.145  0.502±0.192  0.256±0.149  0.689±0.151  –
   GPOR         0.792±0.200  1.360±0.122  0.651±0.286  0.362±0.040  0.062±0.049  0.489±0.190  0.797±0.234  0.443±0.226  0.863±0.164  1.065±0.065
   KDLOR        0.345±0.104  –            0.519±0.280  0.426±0.038  0.059±0.040  –            0.349±0.156  –            0.471±0.070  1.258±0.069
   POM          1.026±0.800  1.103±0.403  0.535±0.275  1.990±0.048  –            0.585±0.204  0.815±0.251  0.791±0.332  0.627±0.128  1.085±0.037
   SVC          0.486±0.125  1.265±0.183  0.307±0.277  0.433±0.048  0.060±0.051  0.367±0.134  0.446±0.189  0.444±0.163  0.576±0.083  1.119±0.069
   RED-SVM      0.468±0.096  1.184±0.225  0.385±0.198  0.414±0.030  0.057±0.049  0.359±0.142  0.391±0.149  0.348±0.159  0.513±0.086  1.068±0.069
   SVOREX       0.518±0.096  1.072±0.217  0.517±0.303  0.411±0.034  0.054±0.042  0.370±0.125  0.433±0.172  0.426±0.157  0.484±0.079  1.095±0.067
   SVORIM       0.523±0.105  1.114±0.233  0.589±0.259  0.420±0.043  0.055±0.042  0.333±0.120  0.427±0.148  0.367±0.140  –            1.093±0.072
   SVR-PCDOC    –            0.969±0.224  0.420±0.098  –            0.045±0.040  0.348±0.104  –            0.396±0.158  0.455±0.071  1.040±0.096

τ_b (Mean±SD)
   ASAOR(C4.5)  0.741±0.069  0.143±0.159  –            –            0.853±0.067  0.778±0.132  0.415±0.245  –            0.243±0.177  0.496±0.036
   GPOR         0.557±0.118  0.000±0.000  0.348±0.304  0.830±0.020  0.938±0.045  0.461±0.314  0.075±0.211  0.420±0.331  −0.018±0.108  0.523±0.026
   KDLOR        0.793±0.056  0.356±0.257  0.448±0.273  0.786±0.017  0.948±0.034  –            0.646±0.160  0.764±0.161  0.477±0.114  0.460±0.028
   POM          0.495±0.283  0.290±0.302  0.458±0.309  0.008±0.038  –            0.463±0.237  0.169±0.304  0.109±0.305  0.317±0.129  0.497±0.025
   SVC          0.695±0.077  0.121±0.177  0.601±0.300  0.783±0.025  0.939±0.045  0.698±0.133  0.541±0.240  0.599±0.140  0.375±0.110  0.516±0.027
   RED-SVM      –            0.254±0.247  0.577±0.242  0.800±0.017  0.943±0.041  0.707±0.129  0.601±0.148  0.662±0.108  0.417±0.120  0.525±0.030
   SVOREX       0.749±0.062  –            0.425±0.304  0.794±0.019  0.941±0.040  0.691±0.115  0.534±0.207  0.592±0.212  0.445±0.110  0.531±0.028
   SVORIM       0.748±0.065  0.299±0.230  0.382±0.269  0.792±0.020  0.944±0.038  0.710±0.114  0.542±0.167  0.656±0.187  –            –
   SVR-PCDOC    0.744±0.076  0.455±0.218  0.620±0.217  0.795±0.024  0.952±0.037  0.712±0.102  –            0.602±0.133  0.493±0.101  0.542±0.033

Notes: The mean and standard deviation (SD) of the generalization results are reported for each data set. The best statistical result is in boldface and the second-best result in italics; – denotes an entry that is not available.

Table 3:
Mean Results of Accuracy (AccG), MAE (MAEG), AMAE (AMAEG), and τ_b (τG), and Mean Rankings (RAcc, RMAE, RAMAE, and Rτ) for the Generalization Sets.

Method        AccG   RAcc   MAEG   RMAE   AMAEG  RAMAE  τG     Rτ
GPOR          0.666  5.40   0.392  5.25   0.534  4.90   0.577  5.20
ASAOR(C4.5)   0.600  6.50   0.485  7.00   0.688  7.00   0.413  7.50
KDLOR         0.680  4.20   0.363  4.00   0.509  3.80   0.640  3.60
POM           0.490  7.90   0.778  7.80   0.861  7.00   0.375  6.90
SVC           0.683  3.60   0.385  5.55   0.550  6.20   0.587  6.20
RED-SVM       0.676  4.15   0.367  4.00   0.519  4.10   0.624  4.00
SVOREX        0.667  5.05   0.382  4.75   0.538  4.90   0.607  5.00
SVORIM        0.672  4.40   0.376  4.05   0.538  4.80   0.609  4.20
SVR-PCDOC     0.678  3.80   0.359  2.60   0.487  2.30   0.653  2.40

Regarding Table 2, it can be seen that the majority of methods are very competitive. The best-performing method depends on the considered performance metric, as can be seen from the mean rankings. This is also true when separately considering each of the data sets; the performance for some data sets varies noticeably if AMAEG is considered instead of MAEG (see bondrate, contact-lenses, eucalyptus, squash-unstored, and winequality-red). In the case of winequality-red, it happens that the second-worst method in MAEG, ASAOR(C4.5), is the second-best one for AMAEG. It is worth mentioning that for the pasture data set, the mean MAEG and AMAEG are the same, which is due to the fact that pasture is a perfectly balanced data set (see section 2.2). In the case of tae, MAEG and AMAEG are very similar, since the pattern distribution across classes is very similar. Regarding τG, it is interesting to highlight that a value close to zero of this metric reveals that the classifier predictions are not related to the real values; that is, the classifier performance is similar to the performance of a trivial classifier. This happens for the GPOR method in the bondrate, squash-stored, and tae data sets and for POM in the eucalyptus data set.

From Table 3, it can be observed that the best mean value across the different data sets does not always translate into the best mean ranking (compare the mean and mean ranking columns). We now analyze the results in greater detail, highlighting the best and second-best performances. When considering AccG, SVC is clearly the best method, in both average performance and ranking. KDLOR and SVR-PCDOC are the second-best methods in average value and ranking, respectively. However, results are very different for all the other measures, where the order is included in the evaluation. The best method in average MAEG and in ranking of MAEG is SVR-PCDOC, and the second-best ranks are for KDLOR and RED-SVM, which have similar mean MAEG. AMAE is a better alternative than MAE when the distribution of patterns is not balanced, and this is clearly the case for several data sets (see Table 1). The best values for mean AMAEG and mean ranking are obtained by SVR-PCDOC, and the second-best ones are those reported by KDLOR. Finally, the τG metric reveals the clearest differences. When this metric is used, the best mean values and ranks are reported by SVR-PCDOC, followed by KDLOR.

4.4.  Statistical Comparisons Between Methods.

To quantify whether statistical differences exist among these algorithms, a procedure for comparing multiple classifiers over multiple data sets is employed (Demšar, 2006). First, the Friedman nonparametric test (Friedman, 1940) with a significance level of α = 0.05 has been carried out on the mean ranks of Table 3 for each measure. The test rejected the null hypothesis of equal mean rankings, so the differences in the mean rankings of AccG, MAEG, AMAEG, and τb obtained by the different algorithms are statistically significant. Specifically, the corresponding F-values for each metric were 3.257, 4.821, 4.184, and 5.099, respectively, all of them lying outside the nonrejection confidence interval C0 determined by this number of data sets and algorithms.
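As an illustration of how this comparison is typically computed, the following is a minimal sketch (not the authors' code). It assumes the Iman-Davenport F correction of the Friedman statistic, which is consistent with the F-values reported above; the rank matrix used in the example is synthetic.

```python
import numpy as np
from scipy import stats

def friedman_iman_davenport(ranks):
    """ranks: (N data sets) x (k methods) matrix of ranks, 1 = best."""
    n, k = ranks.shape
    mean_ranks = ranks.mean(axis=0)
    # Friedman chi-square statistic computed from the mean ranks
    chi2 = 12.0 * n / (k * (k + 1)) * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4.0)
    # Iman-Davenport correction, F-distributed with (k - 1, (k - 1)(n - 1)) d.f.
    f_stat = (n - 1) * chi2 / (n * (k - 1) - chi2)
    p_value = stats.f.sf(f_stat, k - 1, (k - 1) * (n - 1))
    return f_stat, p_value

# Synthetic example with N = 10 data sets and k = 9 methods (illustrative only).
rng = np.random.default_rng(0)
ranks = np.argsort(rng.random((10, 9)), axis=1) + 1.0
f_stat, p_value = friedman_iman_davenport(ranks)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```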

On the basis of this rejection, the Nemenyi post hoc test is used to compare all classifiers to one another (Demšar, 2006). This test considers the performance of any two classifiers to be significantly different if their mean ranks differ by at least the critical difference (CD), which depends on the number of data sets and methods. A 5% significance level (α = 0.05) was used to obtain this CD, and the results can be observed in Figure 11, which shows CD diagrams as proposed in Demšar (2006). Each method is represented as a point on a ranking scale, corresponding to its mean ranking performance. CD segments indicate the separation needed between methods in order to establish statistical differences. The horizontal lines in the figures define sets of algorithms with no statistical differences in mean ranking performance. Table 3 should also be considered when interpreting this graph.
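For reference, the CD used in these diagrams can be computed as in the short sketch below, under the assumption that the tabulated Nemenyi critical value for nine classifiers at α = 0.05 is q = 3.102 (Demšar, 2006); k = 9 methods and N = 10 data sets as in Table 3.

```python
import math

def nemenyi_cd(q_alpha, n_methods, n_datasets):
    """Critical difference for the Nemenyi test (Demšar, 2006)."""
    return q_alpha * math.sqrt(n_methods * (n_methods + 1) / (6.0 * n_datasets))

# Assumed tabulated value q_0.05 = 3.102 for 9 classifiers.
print(f"CD = {nemenyi_cd(3.102, 9, 10):.3f}")  # roughly 3.80
```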

Figure 11:

Ranking test diagrams for the mean generalization Acc, MAE, AMAE, and τb.

Figure 11a shows that SVC, the nominal classifier, has the best performance in Acc, which does not consider the order of the label prediction errors, and SVR-PCDOC has the second-best one. RED-SVM, KDLOR, and SVORIM have similar performance here. In Figure 11b, the best mean ranking is for SVR-PCDOC, and SVORIM, KDLOR, and RED-SVM have similar performances. However, when considering AMAE, it can be seen in Figure 11c that the distance in mean ranking between SVR-PCDOC and the other methods increases, specifically with respect to RED-SVM and SVORIM. Finally, Figure 11d shows the mean rank CD diagram for τb, where SVR-PCDOC still has the best mean performance.

The Nemenyi approach of comparing all classifiers to each other in a post hoc test is not as sensitive as comparing all classifiers to a given classifier, a control method (Demšar, 2006). The Bonferroni-Dunn test allows this latter type of comparison, and in our case it is done using the proposed method as the control method for the four metrics. The results of the Bonferroni-Dunn test (α = 0.05) are in Table 4, where the corresponding critical value is included. From the results of this test, it can be concluded that SVR-PCDOC does not report a statistically significant difference with respect to the SVM ordinal regression methods, KDLOR, or ASAOR(C4.5), but it does when compared to POM for all the metrics and to GPOR for the ordinal metrics. Moreover, there are significant differences with respect to SVC when considering AMAE and τb.
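The control-method comparison can be sketched as follows (an illustration, not the authors' code). It assumes the tabulated two-tailed Bonferroni-Dunn critical value q' = 2.724 for nine classifiers (Demšar, 2006), which reproduces the CD of 3.336 quoted in the note to Table 4; the rank differences used are those in the τb row of Table 4.

```python
import math

k, n = 9, 10                                    # methods and data sets
q_bd = 2.724                                    # assumed two-tailed Bonferroni-Dunn value, 9 classifiers
cd = q_bd * math.sqrt(k * (k + 1) / (6.0 * n))  # about 3.336, as in the note to Table 4

# Differences in mean tau_b rankings with respect to the control method (Table 4).
tau_b_diffs = {"ASAOR(C4.5)": 2.80, "GPOR": 5.10, "KDLOR": 1.20, "POM": 4.50,
               "SVC": 3.80, "RED-SVM": 1.60, "SVOREX": 2.60, "SVORIM": 1.80}
for method, diff in tau_b_diffs.items():
    verdict = "significant" if diff > cd else "not significant"
    print(f"{method:12s} diff = {diff:.2f} -> {verdict} (CD = {cd:.3f})")
```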

Table 4:
Differences in Mean Rankings and Critical Difference (CD) Value for the Bonferroni-Dunn Test, Using SVR-PCDOC as the Control Method.

Metric  ASAOR(C4.5)  GPOR   KDLOR  POM    SVC    RED-SVM  SVOREX  SVORIM
Acc     1.60         2.70   0.40   4.10a  0.20   0.35     1.25    0.60
MAE     2.65         4.40a  1.40   5.20a  2.95   1.40     2.15    1.45
AMAE    2.60         4.70a  1.50   4.70a  3.90a  1.80     2.60    2.50
τb      2.80         5.10a  1.20   4.50a  3.80a  1.60     2.60    1.80

Note: Critical difference of the Bonferroni-Dunn test: CD = 3.336.

a: Statistically significant difference at α = 0.05.

From the experiments, we can conclude that the reference (baseline) nominal classifier, SVC, is improved upon with statistically significant differences when ordinal classification measures are considered. Regarding ASAOR(C4.5), SVOREX, SVORIM, KDLOR, and RED-SVM, although the general performance of SVR-PCDOC is slightly better, there are no statistically significant differences favoring any of the methods.

Two important conclusions can be drawn about the performance measures. When unbalanced data sets are considered, AccG clearly omits important aspects of ordinal classification, and so does MAEG. If comparative performance is taken into account, KDLOR and SVR-PCDOC appear to be very good classifiers when the objective is to improve AMAEG and τb. The best mean ranking performance is obtained by the method we propose in this letter.

4.5.  Latent Space Representations of the Ordinal Classes.

In the previous section, we showed that our simple and intuitive methodology can compete on an equal footing with established, more complex or less direct methods for ordinal classification. In this section, we complement this performance-based comparison with a deeper analysis of the main ingredient shared by our approach and related ordinal classification methods: the projection onto the one-dimensional (latent) space that naturally represents the ordinal organization of the classes. In particular, we study how the nonlinear latent variable models SVR-PCDOC, KDLOR, SVOREX, and SVORIM organize their one-dimensional latent space data projections. For comparison purposes, the latent variable values of the training and generalization data of the first fold of the tae data set are shown (see Figure 12). Both the histograms and the individual latent values are plotted so that the behavior of the models can be analyzed. In the case of PCDOC, the PCD projection is also included to see whether the regressor model is close to the PCDOC projection. The histograms represent the relative frequency of the projections. SVORIM histograms and latent variable values are not presented since they are similar to the SVOREX ones on the selected data set.
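For readers who wish to reproduce this kind of visualization on their own models, the following is a minimal plotting sketch (not the authors' code; the latent values used in the example are synthetic) of per-class histograms and individual latent values with equal-width thresholds.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_latent(z, y, n_classes, title):
    """z: one-dimensional latent values in [0, 1]; y: labels in {1, ..., n_classes}."""
    fig, (ax_hist, ax_vals) = plt.subplots(2, 1, sharex=True)
    for q in range(1, n_classes + 1):
        mask = (y == q)
        ax_hist.hist(z[mask], bins=20, range=(0, 1), alpha=0.5, density=True, label=f"C{q}")
        ax_vals.scatter(z[mask], np.full(mask.sum(), q), s=10)
    for t in np.linspace(0, 1, n_classes + 1)[1:-1]:   # equal-width thresholds
        ax_hist.axvline(t, linestyle="--", color="gray")
        ax_vals.axvline(t, linestyle="--", color="gray")
    ax_hist.set_title(title)
    ax_hist.legend()
    ax_vals.set_xlabel("latent value")
    ax_vals.set_ylabel("class")
    plt.show()

# Synthetic latent values for a three-class problem such as tae (illustration only).
rng = np.random.default_rng(0)
y = rng.integers(1, 4, size=150)
z = np.clip((y - 0.5) / 3.0 + 0.08 * rng.standard_normal(150), 0.0, 1.0)
plot_latent(z, y, 3, "Synthetic latent projection")
```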

Figure 12:

PCD projection and SVR-PCDOC histograms and predictions corresponding to the latent variable of the tae data set: train PCD, train predicted by SVR-PCDOC, generalization PCD, and generalization predicted by SVR-PCDOC. Generalization results are Acc = 0.582, MAE = 0.457, and AMAE = 0.455.

We first analyze the SVR-PCDOC method. From the PCD projections in Figure 12a, we deduce that classes C1 and C2 contain patterns that are very close in the input space: the projections of some patterns from C2 lie near the threshold that divides the values of the two classes. An analogous comment applies to classes C2 and C3. The regressor seems to have learned the imposed projection reasonably well, since the predicted latent values have a histogram similar to the training PCD projection histogram. The generalization PCD projections (see Figure 12c) have characteristics similar to the training ones.5 Note the concentration of predicted values within the C2 interval for the generalization set. This concentration is due to the incorrect prediction of class C1 and C3 patterns, which were both assigned to C2. This behavior can be better seen in Figures 12e and 12f, where the modeled latent value for each pattern is shown together with its class label. Indeed, during training, some C1 and C2 patterns were mapped to positions near the thresholds. This is probably caused by noise or overlapping class distributions in the input space.

Figure 13 presents latent variable values of KDLOR. The KDLOR method projects the data onto the latent space by minimizing the intraclass distance while maximizing the interclass distance of the projections. As a result, the latent representations of the data are quite compact for each class (see the training projection histogram in Figure 13a). While this philosophy often leads to superior classification results, the projections do not reflect the structure of patterns within a single class, that is, the ordinal nature of the data is not fully captured by the model. In addition, KDLOR projections occur in the incorrect bins more often than in the case of SVR-PCDOC (see the generalization projections in Figure 13d).

Figure 13:

Prediction of train and generalization values corresponding to KDLOR for the tae data set. Generalization results are Acc = 0.555, MAE = 0.473, and AMAE = 0.471.

Finally, Figure 14 presents the latent representations of patterns by the SVOREX model. As in the KDLOR case, the training latent representations are highly compact within each class (except for a few patterns). Again, the relative structure of patterns within their classes is lost in the projections.

Figure 14:

Prediction of train and generalization values corresponding to SVOREX for the tae data set. Generalization results are Acc = 0.581, MAE = 0.485, and AMAE = 0.484.

In both models, KDLOR and SVOREX, there is pressure in the model construction phase to find one-dimensional projections of the data that result in compact classes while maximizing the interclass separation. In the case of KDLOR, this is explicitly formulated in the objective function. On the other hand, the key idea behind SVM-based approaches is margin maximization. Data projections that maximize interclass margins implicitly make the projected classes compact. We hypothesize that the pressure for compact within-class latent projections can lead to poorer generalization performance, as illustrated in Figure 14d. In the case of overlapping classes, the drive for compact class projections can result in locally highly nonlinear projections of the overlapping regions, over which we do not have direct control (unlike in the case of PCDOC, where the nonlinear projection is guided by the relative positions of points with respect to the other classes). Having such highly expanding projections can result in test points being projected to incorrect classes in an arbitrary manner. Although we provide detailed analysis for one data set and one fold only, the observed tendencies were quite general across the data sets and holdout folds.

5.  Conclusion

This letter addresses ordinal classification by proposing a projection of the input data into a one-dimensional variable, based on the relative position of each pattern with respect to the patterns of the adjacent classes. Our approach is based on a simple and intuitive idea: instead of implicitly inducing a one-dimensional data projection into a series of class intervals (as done in threshold-based methods), construct such projections explicitly and in a controlled manner. Threshold methods crucially depend on such projections, and we propose that it might be advantageous to have direct control over how the projection is done rather than having to rely on its indirect induction through a one-stage ordinal classification learning process.

Applying this one-dimensional projection to the training set yields data on which a generalized projection can be trained using any standard regression method. The generalized projection can in turn be applied to new instances, which are then classified based on the interval into which their projection falls.

We construct the projection by imposing that the best-separated pattern of each class (i.e., the pattern most distant from the adjacent classes) should be mapped to the center of the interval representing that class (or to the interval extreme for the first and last classes). All the other patterns are proportionally positioned in their corresponding class intervals around the centers mentioned above. We designed a projection method having these desirable properties and empirically verified its appropriateness on data sets with linear and nonlinear class ordering topologies.
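To make the two-phase procedure concrete, the following is a deliberately simplified sketch. It is not the exact PCD projection of equations 3.1 to 3.3: the placement rule, the interval handling, and the SVR hyperparameters are simplifying assumptions chosen only to illustrate the idea of anchoring the best-separated pattern of each class and shifting the remaining patterns toward the nearer adjacent class, followed by a regression-plus-interval decision.

```python
import numpy as np
from sklearn.svm import SVR

def min_dist(x, X_other):
    """Euclidean distance from pattern x to the closest pattern in X_other."""
    return np.min(np.linalg.norm(X_other - x, axis=1))

def toy_projection(X, y, Q):
    """Map each training pattern to a value in [0, 1]; labels y take values 1..Q."""
    z = np.zeros(len(y), dtype=float)
    width = 1.0 / Q
    for q in range(1, Q + 1):
        idx = np.where(y == q)[0]
        lo, hi = (q - 1) * width, q * width                        # interval of class q
        # distance of each pattern to the previous / next class (infinite if absent)
        d_prev = np.array([min_dist(X[i], X[y == q - 1]) if q > 1 else np.inf for i in idx])
        d_next = np.array([min_dist(X[i], X[y == q + 1]) if q < Q else np.inf for i in idx])
        d = np.minimum(d_prev, d_next)                             # separation from adjacent classes
        # anchor: interval center, or the outer extreme for the first and last classes
        anchor = lo if q == 1 else (hi if q == Q else (lo + hi) / 2.0)
        max_shift = (hi - lo) if q in (1, Q) else (hi - lo) / 2.0
        shift = (1.0 - d / d.max()) * max_shift                    # best-separated pattern stays put
        direction = np.where(d_next < d_prev, 1.0, -1.0)           # move toward the closer class
        if q == 1:
            direction = np.ones_like(direction)                    # first class can only move up
        elif q == Q:
            direction = -np.ones_like(direction)                   # last class can only move down
        z[idx] = np.clip(anchor + direction * shift, lo, hi)
    return z

def fit_predict(X_train, y_train, X_test, Q):
    """Phase 2: learn the projection with a regressor and classify by interval."""
    z_train = toy_projection(X_train, y_train, Q)
    reg = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_train, z_train)
    z_test = np.clip(reg.predict(X_test), 0.0, 1.0 - 1e-9)
    return np.floor(z_test * Q).astype(int) + 1                    # interval index -> class label

# Usage with synthetic ordered three-class data (illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=q, scale=0.6, size=(40, 2)) for q in (1, 2, 3)])
y = np.repeat([1, 2, 3], 40)
print(fit_predict(X, y, X[::7], Q=3))
```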

We extensively evaluated our method on 10 real-world data sets, using 4 performance metrics and tests of statistical significance, and compared it with 8 alternative methods, including the most recent proposals for ordinal regression and a baseline nominal classifier. In spite of the intrinsic simplicity and straightforward intuition behind our proposal, the results are competitive and consistent with the state of the art in the literature. The mean ranking performance of our method was particularly impressive when robust ordinal performance metrics were considered, such as the average mean absolute error or the rank correlation coefficient. Moreover, we studied in detail the latent space organization of the projection-based methods considered in this letter. We suggest that while the pressure for compact within-class latent projections keeps training sample projections well within their classes, it can lead to poorer generalization performance overall.

We also identify some interesting discussion points. First, the latent-space thresholds are fixed by the projection, with equal width for each class interval. This may be interpreted as an assumption of equal class widths, which does not hold for all problems. This would indeed be a problem if we used a linear regressor from the data space to the projection space. However, we employ nonlinear projections, and the adjustment for unequal widths of the different classes can be naturally achieved within such a nonlinear mapping from the data to the projection space. From the model-fitting standpoint, having fixed-width class regions in the projection space is desirable. Allowing for variable widths would increase the number of free parameters and would make them dependent in a potentially complicated manner (flexibility of projections versus class widths in the projection space). This could have a harmful effect on model fitting, especially if the data set is of limited size. Having fewer free parameters is also advantageous from the point of view of computational complexity.

The second discussion point is the possible undesirable influence of outliers on the PCD projection. One possible solution is to place each pattern in the projection considering more classes than just the adjacent ones. However, this should be done carefully in order not to diminish the role of ordinal information in the projection. A direct alternative is to use a k-NN-like scheme in equation 3.1, where instead of taking the minimum distance to a point, the average distance to the k closest points of the adjacent class is used. This would be a generalization of the current scheme, which calculates distances with k = 1. Nevertheless, the inclusion of k would add a new free parameter to the training process.
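A possible form of this k-NN-like distance is sketched below; the function name and the choice of Euclidean distance are illustrative assumptions, and k = 1 recovers the current scheme.

```python
import numpy as np

def knn_avg_dist(x, X_other, k=1):
    """Average Euclidean distance from x to its k closest patterns in X_other."""
    d = np.sort(np.linalg.norm(X_other - x, axis=1))
    return d[:min(k, len(d))].mean()
```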

In conclusion, the results indicate that our two-phase approach to ordinal classification is a viable and simple-to-understand alternative to the state of the art. The projection constructed in the first phase consistently extracts useful information for ordinal classification. As such, it can be used not only as the basis for classifier construction but also as a starting point for devising measures able to detect and quantify a possible ordering of classes in any data set. This is a matter for our future research.

Acknowledgments

This work has been partially subsidized by the TIN2011-22794 project of the Spanish Ministerial Commission of Science and Technology (MICYT), FEDER funds, and the P11-TIC-7508 project of the Junta de Andalucía (Spain). The work of P.T. was supported by BBSRC grant BB/H012508/1.

References

Agresti, A. (1984). Analysis of ordinal categorical data. New York: Wiley.

Arens, R. (2010). Learning SVM ranking functions from user feedback using document metadata and active learning in the biomedical domain. In J. Fürnkranz & E. Hüllermeier (Eds.), Preference learning (pp. 363–383). New York: Springer-Verlag.

Asuncion, A., & Newman, D. (2007). UCI machine learning repository. Irvine, CA: University of California Irvine.

Baccianella, S., Esuli, A., & Sebastiani, F. (2009). Evaluation measures for ordinal regression. In Proceedings of the Ninth International Conference on Intelligent Systems Design and Applications (ISDA'09) (pp. 283–287). San Mateo, CA: IEEE Computer Society.

Barker, D. (1995). Pasture production data set. Available online at http://www.cs.waikato.ac.nz/~ml/weka/datasets.html

Cardoso, J. S., & Pinto da Costa, J. F. (2007). Learning to classify ordinal data: The data replication method. Journal of Machine Learning Research, 8, 1393–1429.

Cardoso, J., Pinto da Costa, J., & Cardoso, M. (2005). Modelling ordinal relations with SVMs: An application to objective aesthetic evaluation of breast cancer conservative treatment. Neural Networks, 18(5–6), 808–817.

Cardoso, J. S., & Sousa, R. (2011). Measuring the performance of ordinal classification. International Journal of Pattern Recognition and Artificial Intelligence, 25(8), 1173–1195.

Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2, 27:1–27:27.

Chu, W., & Ghahramani, Z. (2005). Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6, 1019–1041.

Chu, W., & Keerthi, S. S. (2005). New approaches to support vector ordinal regression. In ICML'05: Proceedings of the 22nd International Conference on Machine Learning (pp. 145–152). New York: ACM.

Chu, W., & Keerthi, S. S. (2007). Support vector ordinal regression. Neural Computation, 19(3), 792–815.

Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297.

Crammer, K., & Singer, Y. (2001). Pranking with ranking. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in neural information processing systems, 14 (pp. 641–647). Cambridge, MA: MIT Press.

Crammer, K., & Singer, Y. (2005). Online ranking by projecting. Neural Computation, 17(1), 145–175.

Cruz-Ramírez, M., Hervás-Martínez, C., Sánchez-Monedero, J., & Gutiérrez, P. A. (2011). A preliminary study of ordinal metrics to guide a multi-objective evolutionary algorithm. In Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA 2011) (pp. 1176–1181). San Mateo, CA: IEEE Computer Society.

Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.

Fouad, S., & Tiňo, P. (2012). Adaptive metric learning vector quantization for ordinal classification. Neural Computation, 24(11), 2825–2851.

Frank, E., & Hall, M. (2001). A simple approach to ordinal classification. In Proceedings of the 12th European Conference on Machine Learning (pp. 145–156). New York: Springer-Verlag.

Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics, 11(1), 86–92.

Gutiérrez, P., Salcedo-Sanz, S., Hervás-Martínez, C., Carro-Calvo, L., Sánchez-Monedero, J., & Prieto, L. (2013). Ordinal and nominal classification of wind speed from synoptic pressure patterns. Engineering Applications of Artificial Intelligence, 26(3), 1008–1015.

Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. H. (2009). The WEKA data mining software: An update. ACM SIGKDD Explorations Newsletter, 11, 10–18.

Herbrich, R., Graepel, T., & Obermayer, K. (2000). Large margin rank boundaries for ordinal regression. In A. Smola, P. Bartlett, B. Schölkopf, & D. Schuurmans (Eds.), Advances in large margin classifiers (pp. 115–132). Cambridge, MA: MIT Press.

Hsu, C.-W., & Lin, C.-J. (2002). A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13(2), 415–425.

Hühn, J. C., & Hüllermeier, E. (2008). Is an ordinal class structure useful in classifier learning? International Journal of Data Mining, Modelling and Management, 1(1), 45–67.

Kendall, M. G. (1962). Rank correlation methods (3rd ed.). New York: Hafner Press.

Kibler, D. F., Aha, D. W., & Albert, M. K. (1989). Instance-based prediction of real-valued attributes. Computational Intelligence, 5, 51.

Kim, K.-J., & Ahn, H. (2012). A corporate credit rating model using multi-class support vector machines with an ordinal pairwise partitioning approach. Computers and Operations Research, 39(8), 1800–1811.

Kotsiantis, S. B., & Pintelas, P. E. (2004). A cost sensitive technique for ordinal classification problems. In G. Vouros & T. Panayiotopoulos (Eds.), Methods and applications of artificial intelligence (pp. 220–229). Berlin: Springer-Verlag.

Kramer, S., Widmer, G., Pfahringer, B., & de Groeve, M. (2010). Prediction of ordinal classes using regression trees. In Z. Ras & S. Ohsuga (Eds.), Foundations of intelligent systems (pp. 665–674). Berlin: Springer-Verlag.

Li, L., & Lin, H.-T. (2007). Ordinal regression by extended binary classification. In B. Schölkopf, J. Platt, & T. Hofmann (Eds.), Advances in neural information processing systems, 19 (pp. 865–872). Cambridge, MA: MIT Press.

Lim, T.-S., Loh, W.-Y., & Shih, Y.-S. (2000). A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 40, 203–228.

Lin, H.-T., & Li, L. (2012). Reduction from cost-sensitive ordinal ranking to weighted binary classification. Neural Computation, 24(5), 1329–1367.

MacKay, D. J. C. (1994). Bayesian methods for backpropagation networks. In E. Domany, J. L. van Hemmen, & K. Schulten (Eds.), Models of neural networks III (pp. 211–254). New York: Springer-Verlag.

McCullagh, P. (1980). Regression models for ordinal data. Journal of the Royal Statistical Society, Series B (Methodological), 42(2), 109–142.

Neal, R. M. (1996). Bayesian learning for neural networks. New York: Springer-Verlag.

Perez-Ortiz, M., Gutierrez, P. A., Garcia-Alonso, C., Salvador-Carulla, L., Salinas-Perez, J. A., & Hervas-Martinez, C. (2011). Ordinal classification of depression spatial hot-spots of prevalence. In Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA) (pp. 1170–1175). San Mateo, CA: IEEE Computer Society.

Pinto da Costa, J. F., Alonso, H., & Cardoso, J. S. (2008). The unimodal model for the classification of ordinal data. Neural Networks, 21, 78–91.

Raykar, V. C., Duraiswami, R., & Krishnapuram, B. (2008). A fast algorithm for learning a ranking function from large-scale data sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30, 1158–1170.

Sánchez-Monedero, J., Gutiérrez, P. A., Fernández-Navarro, F., & Hervás-Martínez, C. (2011). Weighting efficient accuracy and minimum sensitivity for evolving multiclass classifiers. Neural Processing Letters, 34(2), 101–116.

Schölkopf, B., & Smola, A. J. (2001). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press.

Shashua, A., & Levin, A. (2002). Ranking with large margin principle: Two approaches. In S. Becker, S. Thrun, & K. Obermayer (Eds.), Advances in neural information processing systems, 15 (pp. 937–944). Cambridge, MA: MIT Press.

Sonnenburg, D. S. (2011). Machine learning data set repository. http://midata.org

Sun, B.-Y., Li, J., Wu, D. D., Zhang, X.-M., & Li, W.-B. (2010). Kernel discriminant learning for ordinal regression. IEEE Transactions on Knowledge and Data Engineering, 22(6), 906–910.

Vapnik, V. (1999). An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5), 988–999.

Verwaeren, J., Waegeman, W., & De Baets, B. (2012). Learning partial ordinal class memberships with kernel-based proportional odds models. Computational Statistics and Data Analysis, 56(4), 928–942.

Waegeman, W., & Boullart, L. (2009). An ensemble of weighted support vector machines for ordinal regression. International Journal of Computer Systems Science and Engineering, 3(1), 47–51.

Waegeman, W., & De Baets, B. (2011). A survey on ROC-based ordinal regression. In J. Fürnkranz & E. Hüllermeier (Eds.), Preference learning (pp. 127–154). Berlin: Springer-Verlag.

Notes

1. Acc is referred to as mean zero-one error when expressed as an error.

2. This does not in any way hamper generality, as our regressors defining g will be smooth nonlinear functions.

3. Recall that the threshold set θ delimiting class intervals is defined in equation 3.3.

5. There are many fewer patterns in the holdout set than in the training set, making direct comparison of the two histograms problematic.