Ordinal classification refers to classification problems in which the classes have a natural order imposed on them by the nature of the concept studied. Some ordinal classification approaches perform a projection from the input space to a one-dimensional (latent) space that is partitioned into a sequence of intervals, one for each class. The class of a novel input pattern is then decided by the interval into which its projection falls. This projection is trained only indirectly, as part of the overall model fitting. As with any other latent model fitting, direct construction hints about the desired form of the latent model can prove very useful for obtaining high-quality models. The key idea of this letter is to construct such a projection model directly, using insights about the class distribution obtained from pairwise distance calculations. The proposed approach is extensively evaluated using 8 nominal and ordinal classification methods, 10 real-world ordinal classification data sets, and 4 different performance measures. The new methodology obtained the best average ranking for three of the performance metrics, although significant differences were found for only some of the methods. Moreover, after examining the internal behavior of the other methods in the latent space, we conclude that their internal projections do not fully reflect the intraclass behavior of the patterns. Our method is intrinsically simple, intuitive, and easily understandable, yet highly competitive with state-of-the-art approaches to ordinal classification.
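
The following is a minimal illustrative sketch, not the letter's actual construction: it only shows the generic decision rule shared by projection-based ordinal classifiers, in which a one-dimensional latent value is mapped to a class via a sequence of interval boundaries. The `project` callable and the `thresholds` array are hypothetical placeholders; how the letter derives them from pairwise distance calculations is not reproduced here.

```python
import numpy as np

def predict_ordinal(X, project, thresholds):
    """Assign each pattern to the class whose latent interval contains its projection.

    X          : (n_samples, n_features) input patterns
    project    : callable mapping X to one latent value per pattern (hypothetical)
    thresholds : sorted array of K-1 boundaries splitting the latent axis into K intervals
    """
    z = project(X)                          # one-dimensional latent representation
    return np.searchsorted(thresholds, z)   # interval index = predicted ordinal class

# Toy usage with a linear projection and two boundaries (three ordered classes):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    w = rng.normal(size=3)
    labels = predict_ordinal(X, project=lambda A: A @ w,
                             thresholds=np.array([-0.5, 0.5]))
    print(labels)  # class indices in {0, 1, 2}
```

The point of the sketch is that, once the projection and the interval boundaries are fixed, prediction reduces to locating each latent value within the ordered partition; the contribution of the letter lies in how that projection is constructed directly rather than fitted indirectly.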
