Abstract
Efficient learning of a data analysis task strongly depends on the data representation. Most methods rely on (symmetric) similarity or dissimilarity representations by means of metric inner products or distances, providing easy access to powerful mathematical formalisms like kernel or branch-and-bound approaches. Similarities and dissimilarities are, however, often naturally obtained by nonmetric proximity measures that cannot easily be handled by classical learning algorithms. Major efforts have been undertaken to provide approaches that either can be used directly on such data or make standard methods available for these types of data. We provide a comprehensive survey for the field of learning with nonmetric proximities. First, we introduce the formalism used in nonmetric spaces and motivate specific treatments for nonmetric proximity data. Second, we provide a systematization of the various approaches. For each category of approaches, we provide a comparative discussion of the individual algorithms and address complexity issues and generalization properties. In a summarizing section, we provide a larger experimental study for the majority of the algorithms on standard data sets. We also address the problem of large-scale proximity learning, which is often overlooked in this context and of major importance to make the methods relevant in practice. The algorithms we discuss are in general applicable for proximity-based clustering, one-class classification, classification, regression, and embedding approaches. In the experimental part, we focus on classification tasks.
1 Introduction
The notion of pairwise proximities plays a key role in most machine learning algorithms.
The comparison of objects by a metric, often Euclidean, distance measure is a standard
element in basically every data analysis algorithm. This is mainly due to the easy access to
powerful mathematical models in metric spaces. Based on work of Schölkopf and Smola (2002) and others, the use of similarities by means of
metric inner products or kernel matrices has led to the great success of similarity-based
learning algorithms. The data are represented by metric pairwise similarities only. We can
distinguish similarities, indicating how close or similar two items are to each other, and
dissimilarities as measures of the unrelatedness of two items. Given a set of $N$ data items, their pairwise proximity (similarity or dissimilarity) measures can be conveniently summarized in an $N \times N$ proximity matrix. In the following we refer to similarity and dissimilarity type proximity matrices as $S$ and $D$, respectively. For some methods, symmetry of the proximity measures is not strictly required, while some other methods add additional constraints, such as the nonnegativity of the proximity matrix. These notions enter into models by means of similarity or dissimilarity functions $f(x, y)$, where $x$ and $y$ are the compared objects. The objects may exist in a $d$-dimensional vector space, so that $x, y \in \mathbb{R}^d$, but they can also be given without an explicit vectorial representation (e.g., biological sequences; see Figure 1). However, as Pekalska and Duin (2005) pointed out, proximities often turn out to be nonmetric, and their use in standard algorithms leads to invalid model formulations.
Figure 1: (Left) Illustration of a proximity (in this case dissimilarity) measure between pairs of documents—the compression distance (Cilibrasi & Vitányi, 2005). It is based on the difference between the total information-theoretic complexity of two documents considered in isolation and the complexity of the joint document obtained by concatenation of the two documents. In its standard form, it violates the triangle inequality. (Right) A simplified illustration of the blast sequence alignment providing symmetric but nonmetric similarity scores in comparing pairs of biological sequences.
The function $f$ may violate the metric properties to different
degrees. Symmetry is in general assumed to be valid because a large number of algorithms
become meaningless for asymmetric data. However, especially in the field of graph analysis,
asymmetric weightings have already been considered. Asymmetric weightings have also been
used in the fields of clustering and data embedding (Strickert, Bunte, Schleif, &
Huellermeier, 2014; Olszewski & Ster, 2014). Examples of algorithms capable of processing
asymmetric proximity data in supervised learning are exemplar-based methods (Nebel, Hammer,
& Villmann, 2014). A recent article focusing on
this topic is available in Calana et al. (2013). More
frequently, proximities are symmetric, but the triangle inequality is violated, proximities
are negative, or self-dissimilarities are not zero. Such violations can be attributed to
different sources. While some authors attribute them to noise (Luss & d’Aspremont, 2009), for some proximities and proximity functions f the violations are caused deliberately by the design of the measure itself. If noise is the
source, often a simple eigenvalue correction (Y. Chen, Garcia, Gupta, Rahimi, &
Cazzanti, 2009) can be used, although this can become
costly for large data sets. A recent analysis of the possible sources of negative
eigenvalues is provided in Xu, Wilson, and Hancock (2011). Such analysis can be potentially helpful in, for example, selecting the
appropriate eigenvalue correction method applied to the proximity matrix. Prominent examples
for genuine nonmetric proximity measures can be found in the field of bioinformatics, where
classical sequence alignment algorithms (e.g., Smith-Waterman score; Gusfield, 1997) produce nonmetric proximity values. For such data,
some authors argue that the nonmetric part of the data contains valuable information and
should not be removed (Pekalska, Duin, Günter, & Bunke, 2004).
For nonmetric inputs, the support vector machine formulation (Vapnik, 2000) no longer leads to a convex optimization problem. Prominent solvers such as sequential minimal optimization (SMO) will converge to a local optimum (Platt, 1999; Tien Lin & Lin, 2003), and other kernel algorithms may not converge at all. Accordingly, dedicated strategies for nonmetric data are very desirable.
A previous review on nonmetric learning was given by Y. Chen, Garcia, Gupta, Rahimi, and Cazzanti (2009), with a strong focus on support vector classification and eigenspectrum corrections for similarity data, evaluated on multiple small real-world data sets. While we include and update these topics, our focus is on the broader context of general supervised learning. Most approaches can be transferred to the unsupervised setting in a straightforward manner.
Besides eigenspectrum corrections making the similarity matrix positive semidefinite (psd), we also consider generic novel proxy approaches (which learn a psd matrix from a non-psd representation), different novel embedding approaches, and, crucially, natural indefinite learning algorithms, which are not restricted to psd matrices. We also address the issue of out-of-sample extension and the widely ignored topic of larger-scale data processing (given the quadratic complexity in sample size).
The review is organized as follows. In section 2 we outline the basic notation and some mathematical formalism related to machine learning with nonmetric proximities. Section 3 discusses different views and sources of indefinite proximities and addresses the respective challenges in more detail. A taxonomy of the various approaches is proposed in section 4, followed by sections 5 and 6, which detail the two families of methods. In section 7 we discuss some techniques to improve the scalability of the methods for larger data sets. Section 8 provides experimental results comparing the different approaches for various classification tasks, and section 9 concludes.
2 Notation and Basic Concepts
We briefly review some concepts typically used in proximity-based learning.
2.1 Kernels and Kernel Functions
Let $\mathcal{X}$ be a collection of $N$ objects $x_i$, $i \in \{1, \ldots, N\}$, in some input space. Further, let $\phi: \mathcal{X} \to \mathcal{H}$ be a mapping of patterns from $\mathcal{X}$ to a high-dimensional or infinite-dimensional Hilbert space $\mathcal{H}$ equipped with the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$. The transformation $\phi$ is in general a nonlinear mapping to a high-dimensional space $\mathcal{H}$ and in general may not be given in an explicit form. Instead, a kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is given that encodes the inner product in $\mathcal{H}$. The kernel $k$ is a positive (semi-)definite function such that $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$ for any $x, x' \in \mathcal{X}$. The matrix $K = \Phi^{\top} \Phi$ is an $N \times N$ kernel matrix derived from the training data, where $\Phi$ is a matrix of images (column vectors) of the training data in $\mathcal{H}$. The motivation for such an embedding comes with the hope that the nonlinear transformation of the input data into the higher-dimensional space $\mathcal{H}$ allows for using linear techniques in $\mathcal{H}$. Kernelized methods process the embedded data points in a feature space using only the inner products (kernel trick) (Shawe-Taylor & Cristianini, 2004), without the need to calculate $\phi$ explicitly. The specific kernel function can be very generic. Most prominent are the linear kernel, $k(x, x') = \langle x, x' \rangle$, where $\langle \cdot, \cdot \rangle$ is the Euclidean inner product, and the rbf kernel, $k(x, x') = \exp(-\|x - x'\|^2 / (2\sigma^2))$, with $\sigma$ as a free parameter. Thereby, it is assumed that the kernel function $k$ is positive semidefinite (psd).
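To see the psd requirement concretely, the following numpy sketch (our own illustration, with arbitrary toy data and parameters) builds an rbf kernel matrix and an indefinite kernel of the kind discussed in the next subsection, $k(x, x') = -\|x - x'\|_1$, and inspects the smallest eigenvalue of each.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                       # arbitrary toy data

def rbf_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))          # psd by construction

def manhattan_kernel(X):
    # negative L1 distance: symmetric, but indefinite in general
    return -np.abs(X[:, None, :] - X[None, :, :]).sum(-1)

for name, K in [("rbf", rbf_kernel(X)), ("manhattan", manhattan_kernel(X))]:
    lam = np.linalg.eigvalsh(K)
    print(name, "smallest eigenvalue:", lam.min())
```

The rbf kernel's smallest eigenvalue is nonnegative up to numerical precision, whereas the second matrix exhibits clearly negative eigenvalues.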
2.2 Krein and Pseudo-Euclidean Spaces
A Krein space is an indefinite inner product space endowed with a Hilbertian topology. Let $\mathcal{K}$ be a real vector space. An inner product space with an indefinite inner product $\langle \cdot, \cdot \rangle_{\mathcal{K}}$ on $\mathcal{K}$ is a bilinear form where all $f, g, h \in \mathcal{K}$ and $\alpha \in \mathbb{R}$ obey the following conditions. Symmetry: $\langle f, g \rangle_{\mathcal{K}} = \langle g, f \rangle_{\mathcal{K}}$; linearity: $\langle \alpha f + g, h \rangle_{\mathcal{K}} = \alpha \langle f, h \rangle_{\mathcal{K}} + \langle g, h \rangle_{\mathcal{K}}$; and $\langle f, g \rangle_{\mathcal{K}} = 0$ for all $g \in \mathcal{K}$ implies $f = 0$. An inner product is positive definite if $\langle f, f \rangle_{\mathcal{K}} \geq 0$ for all $f \in \mathcal{K}$, and negative definite if $\langle f, f \rangle_{\mathcal{K}} \leq 0$ for all $f \in \mathcal{K}$; otherwise, it is indefinite. A vector space $\mathcal{K}$ with inner product $\langle \cdot, \cdot \rangle_{\mathcal{K}}$ is called an inner product space.

An inner product space $(\mathcal{K}, \langle \cdot, \cdot \rangle_{\mathcal{K}})$ is a Krein space if we have two Hilbert spaces $\mathcal{H}_+$ and $\mathcal{H}_-$ spanning $\mathcal{K}$ such that for all $f \in \mathcal{K}$ we have $f = f_+ + f_-$ with $f_+ \in \mathcal{H}_+$ and $f_- \in \mathcal{H}_-$, and for all $f, g \in \mathcal{K}$, $\langle f, g \rangle_{\mathcal{K}} = \langle f_+, g_+ \rangle_{\mathcal{H}_+} - \langle f_-, g_- \rangle_{\mathcal{H}_-}$.

Indefinite kernels are typically observed by means of domain-specific nonmetric similarity functions (such as alignment functions used in biology; Smith & Waterman, 1981), by specific kernel functions—for example, the Manhattan kernel $k(x, y) = -\|x - y\|_1$, the tangent distance kernel (Haasdonk & Keysers, 2002), or divergence measures plugged into standard kernel functions (Cichocki & Amari, 2010). Other sources of non-psd kernels are noise artifacts on standard kernel functions (Haasdonk, 2005). A finite-dimensional Krein space is a so-called pseudo-Euclidean space.
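The pseudo-Euclidean view can be made concrete with a short sketch (our own illustration of the standard construction, not code from the article): eigenvectors scaled by $\sqrt{|\lambda_i|}$ provide coordinates, and the signs of the eigenvalues give the signature $(p, q)$ of the space, so that the indefinite inner product of two embedded points recovers the original similarity.

```python
import numpy as np

def pseudo_euclidean_embedding(S, tol=1e-10):
    """Embed a symmetric (possibly indefinite) similarity matrix S into a
    pseudo-Euclidean space: coordinates X and signs such that
    S ~= X @ diag(signs) @ X.T, with signs in {+1, -1}."""
    lam, U = np.linalg.eigh(S)
    order = np.argsort(-np.abs(lam))            # sort eigenvalues by magnitude
    lam, U = lam[order], U[:, order]
    keep = np.abs(lam) > tol                    # drop numerically zero directions
    lam, U = lam[keep], U[:, keep]
    X = U * np.sqrt(np.abs(lam))                # coordinates in the pE space
    signs = np.sign(lam)                        # +1 block (H+) and -1 block (H-)
    p, q = int((signs > 0).sum()), int((signs < 0).sum())
    return X, signs, (p, q)

# S is reconstructed (up to the dropped near-zero eigenvalues) as
# X @ np.diag(signs) @ X.T, i.e., an ordinary inner product on the H+ part
# minus an ordinary inner product on the H- part.
```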
3 Indefinite Proximities
Proximity functions can be very generic but are often restricted to fulfilling metric properties to simplify the mathematical modeling and especially the parameter optimization. Deza and Deza (2009) reviewed a large variety of such measures; basically most published methods make use of metric properties. While this appears to be a reliable strategy, researchers in the fields of psychology (Hodgetts & Hahn, 2012; Hodgetts, Hahn, & Chater, 2009), vision (Kinsman, Fairchild, & Pelz, 2012; Xu et al., 2011; Van der Maaten & Hinton, 2012; Scheirer, Wilber, Eckmann, & Boult, 2014), and machine learning (Pekalska et al., 2004; Duin & Pekalska, 2010) have criticized this restriction as inappropriate in multiple cases. In fact, Duin and Pekalska (2010) show with multiple examples that many real-life problems are better addressed by proximity measures that are not restricted to be metric.
The triangle inequality is most often violated if we consider object comparisons in daily life problems like the comparison of text documents, biological sequence data, spectral data, or graphs (Y. Chen et al., 2009; Kohonen & Somervuo, 2002; Neuhaus & Bunke, 2006). These data are inherently compositional, and a feature representation leads to information loss. As an alternative, tailored dissimilarity measures such as pairwise alignment functions, kernels for structures, or other domain-specific similarity and dissimilarity functions can be used as the interface to the data (Gärtner, Lloyd, & Flach, 2004; Poleksic, 2011). But also for vectorial data, nonmetric proximity measures are common in some disciplines. An example of this type is the use of divergence measures (Cichocki & Amari, 2010; Zhang, Ooi, Parthasarathy, & Tung, 2009; Schnitzer, Flexer, & Widmer, 2012), which are very popular for spectral data analysis in chemistry, geo-, and medical sciences (Mwebaze et al., 2010; Nguyen, Abbey, & Insana, 2013; Tian, Cui, & Reinartz, 2013; van der Meer, 2006; Bunte, Haase, Biehl, & Villmann, 2012), and are not metric in general. Also the popular dynamic time warping (DTW) (Sakoe & Chiba, 1978) algorithm provides a nonmetric alignment score that is often used as a proximity measure between two one-dimensional functions of different length. In image processing and shape retrieval, indefinite proximities are often obtained by means of the inner distance. It specifies the dissimilarity between two objects that are solely represented by their shape. Thereby a number of landmark points are used, and the shortest paths within the shape are calculated, in contrast to the Euclidean distance between the landmarks. Further examples can be found in physics, where problems of the special relativity theory naturally lead to indefinite spaces.
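As one concrete example, the dynamic time warping score mentioned above can be computed with a few lines of dynamic programming; the sketch below (unit step costs, no window constraint, our own minimal version) already yields a symmetric, zero-on-the-diagonal score that nevertheless violates the triangle inequality for suitably chosen series.

```python
import numpy as np

def dtw(a, b):
    """Plain dynamic time warping distance between two 1D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# dtw is symmetric and dtw(x, x) == 0, but the triangle inequality can fail,
# so a matrix of pairwise DTW values is in general not a metric distance matrix.
```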
Examples of indefinite measures can be easily found in many domains; two of them are illustrated in Figure 2. A list of nonmetric proximity measures is given in Table 1. Most of these measures are very popular but often violate the symmetry or the triangle inequality condition or both. Hence many standard proximity-based machine learning methods like kernel methods are not easily accessible for these data.
Figure 2: Visualization of two frequently used nonmetric distance measures. (Left) Dynamic time warping (DTW)—a frequently used measure to align one-dimensional time series (Sakoe & Chiba, 1978). (Right) Inner distance, a common measure in shape retrieval (Ling & Jacobs, 2005).
Table 1: Examples of nonmetric proximity measures and typical application fields.

| Measure | Application field |
|---|---|
| Dynamic time warping (DTW) (Sakoe & Chiba, 1978) | Time series or spectral alignment |
| Inner distance (Ling & Jacobs, 2005) | Shape retrieval (e.g., in robotics) |
| Compression distance (Cilibrasi & Vitányi, 2005) | Generic, used also for text analysis |
| Smith-Waterman alignment (Gusfield, 1997) | Bioinformatics |
| Divergence measures (Cichocki & Amari, 2010) | Spectroscopy and audio processing |
| Generalized Lp norm (Lee & Verleysen, 2005) | Time series analysis |
| Nonmetric modified Hausdorff (Dubuisson & Jain, 1994) | Template matching |
| (Domain-specific) alignment score (Maier, Klebel, Renner, & Kostrzewa, 2006) | Mass spectrometry |
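To illustrate how easily such a nonmetric measure arises in practice, here is a small sketch of the compression distance from Table 1 in its normalized form, using zlib as the compressor (the choice of compressor and the example documents are ours); in this plain form the measure generally violates the triangle inequality, and self-dissimilarities need not be exactly zero.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance based on zlib-compressed lengths."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))          # complexity of the concatenated document
    return (cxy - min(cx, cy)) / max(cx, cy)

doc_a = b"the cat sat on the mat " * 20
doc_b = b"the dog sat on the log " * 20
print(ncd(doc_a, doc_b), ncd(doc_a, doc_a))  # ncd(a, a) is close to, but not exactly, 0
```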
3.1 Why Is a Nonmetric Proximity Function a Problem?
A large number of algorithmic approaches assume that the data are given in a metric vector space, typically a Euclidean vector space, motivated by the strong mathematical framework that is available for metric Euclidean data. But with the advent of new measurement technologies and many nonstandard data, this strong constraint is often violated in practical applications, and nonmetric proximity matrices are more and more common.
This is often a severe problem for standard optimization frameworks as used, for example, for the support vector machine (SVM), where psd matrices, or more specifically Mercer kernels, are expected (Vapnik, 2000). The naive use of non-psd matrices in such a context invalidates the guarantees of the original approach (e.g., the ensured convergence to the global optimum of a convex problem or to a stationary point, and the expected generalization accuracy for new points).
Haasdonk (2005) showed that the SVM no longer optimizes a global convex function but minimizes the distance between reduced convex hulls in a pseudo-Euclidean space, leading to a local optimum. Laub (2004) and Filippone (2009) analyzed different cost functions for clustering and pointed out that the spectrum shift operation was found to be robust with respect to the optimization function used.
Currently the vast majority of approaches encode such comparisons by enforcing metric properties into these measures or by using alternative, and in general less expressive, measures that do obey metric properties. With the continuous increase of nonstandard and nonvectorial data sets, nonmetric measures and algorithms in Krein or pseudo-Euclidean spaces are becoming more popular and have attracted wide interest from the research community (Gnecco, 2013; Yang & Fan, 2013; Liwicki, Zafeiriou, & Pantic, 2013; Kanzawa, 2012; Gu & Guo, 2012; Zafeiriou, 2012; Miranda, Chávez, Piccoli, & Reyes, 2013; Epifanio, 2013; Kar & Jain, 2012). In the following, we survey the major research directions in the field of nonmetric proximity learning, where data are given by pairwise proximities only.
4 A Systematization of Nonmetric Proximity Learning
The problem of nonmetric proximity learning has been addressed by several research groups, and multiple approaches have been proposed. A schematic view summarizing the major research directions is shown in Figure 3 and in Table 2.
Table 2: Major research directions in nonmetric proximity learning.

Turn nonmetric proximities into metric ones (section 5):
- A1: Eigenvalue corrections (like clipping, flipping, shifting) are applied to the eigenspectrum of the data (Muñoz & De Diego, 2006; Roth, Laub, Buhmann, & Müller, 2002; Y. Chen, Garcia, Gupta, Rahimi, & Cazzanti, 2009; Filippone, 2009). This can also be effectively done for dissimilarities by a specific preprocessing (Schleif & Gisbrecht, 2013).
- A2: Embedding approaches like variants of MDS (Cox & Cox, 2000; Choo, Bohn, Nakamura, White, & Park, 2012), t-SNE (Van der Maaten & Hinton, 2012), or NeRV (Venna, Peltonen, Nybo, Aidos, & Kaski, 2010) can be used to obtain a Euclidean embedding in a lower-dimensional space, but also the (dis-)similarity (proximity) space is a kind of embedding leading to a vectorial representation (Pekalska & Duin, 2008a, 2008b, 2002; Pekalska, Paclík, & Duin, 2001; Pekalska, Duin, & Paclík, 2006; Kar & Jain, 2011; Duin et al., 2014), as well as nonmetric locality-sensitive hashing (Mu & Yan, 2010) and local embedding or triangle correction techniques (L. Chen & Lian, 2008).
- A3: Learning of a proxy function is frequently used to obtain an alternative psd proximity matrix that has maximum alignment with the original non-psd matrix (J. Chen & Ye, 2008; Luss & d’Aspremont, 2009; Y. Chen, Gupta, & Recht, 2009; Gu & Guo, 2012; Lu, Keles, Wright, & Wahba, 2005; Brickell, Dhillon, Sra, & Tropp, 2008; Li, Yeung, & Ko, 2015).

Algorithms for learning on nonmetric data (section 6):
- B1: Algorithms with a decision function that can be based on nonmetric proximities (Kar & Jain, 2012; Tipping, 2001a; Chen, Tino, & Yao, 2009, 2014; Graepel, Herbrich, Bollmann-Sdorra, & Obermayer, 1998).
- B2: Algorithms that define their models in the pseudo-Euclidean space (Haasdonk & Pekalska, 2008; Pekalska & Haasdonk, 2009; Liwicki, Zafeiriou, & Pantic, 2013; Liwicki, Zafeiriou, Tzimiropoulos, & Pantic, 2012; Zafeiriou, 2012; Kowalski, Szafranski, & Ralaivola, 2009; Xue & Chen, 2014; J. Yang & Fan, 2013; Pekalska et al., 2001).

Theoretical work for indefinite data analysis and related overviews:
- Focusing on SVM (Haasdonk, 2005; Mierswa & Morik, 2008; Tien Lin & Lin, 2003; Ying, Campbell, & Girolami, 2009); indefinite kernels and pseudo-Euclidean spaces (Balcan, Blum, & Srebro, 2008; Wang, Sugiyama, Yang, Hatano, & Feng, 2009; Brickell et al., 2008; Schleif & Gisbrecht, 2013; Schleif, 2014; Pekalska & Duin, 2005; Pekalska et al., 2004, 2001; Ong et al., 2004; Laub, Roth, Buhmann, & Müller, 2006; D. Chen, Wang, & Tsang, 2008; Duin & Pekalska, 2010; Gnecco, 2013; Xu et al., 2011; Higham, 1988; Goldfarb, 1984; Graepel & Obermayer, 1999; Zhou & Wang, 2011; Alpay, 1991; Haasdonk & Keysers, 2002); indexing, retrieval, and metric modification techniques (Zhang et al., 2009; Skopal & Lokoč, 2008; Bustos & Skopal, 2011; Vojt & Eckhardt, 2009; Jensen, Mungure, Pedersen, Srensen, & Delige, 2010); overview papers and cross-discipline studies (Y. Chen et al., 2009; Muñoz & De Diego, 2006; Duin, 2010; Kinsman et al., 2012; Laub, 2004; Hodgetts & Hahn, 2012; Hodgetts et al., 2009; Kanzawa, 2012).
Note: The table provides a brief summary and the most relevant references.
Basically, there exist two main directions:
Transform the nonmetric proximities to become metric.
Stay in the nonmetric space by providing a method that is insensitive to metric violations or can naturally deal with nonmetric data.
The first direction can be divided into substrategies:
Applying direct eigenvalue corrections. The original data are decomposed by an eigenvalue decomposition and the eigenspectrum is corrected in different ways to obtain a corrected psd matrix.
Embedding of the data in a metric space. Here, the input data are embedded into a (in general Euclidean) vector space. A very simple strategy is to use multidimensional scaling (MDS) to get a two-dimensional representation of the distance relations encoded in the original input matrix.
Learning of a proxy function to the proximities. These approaches learn an alternative (proxy) psd representation with maximum alignment to the non-psd input data.
The second branch is less diverse, but there are at least two substrategies:
Model definition based on the nonmetric proximity function. Recent theoretical work on generic dissimilarity and similarity functions is used to define models that can directly employ the given proximity function with only very moderate assumptions.
Krein space model definition. The Krein space is the natural representation for non-psd data. Some approaches have been formulated within this much less restrictive, but hence more complicated, mathematical space.
In the following, we detail the different strategies and their advantages and disadvantages. As a general comment, the approaches covered in B stay closer to the original input data, whereas for strategy A, the input data are in part substantially modified, which can lead to reduced interpretability and limits a valid out-of-sample extension in many cases.
5 Make the Input Space Metric
5.1 Eigenspectrum Approaches (A1)
The metric violations cause negative eigenvalues in the eigenspectrum of $S$, leading to non-psd proximity matrices. Many
learning algorithms are based on kernels yielding symmetric and psd similarity (kernel)
matrices. The mathematical meaning of a kernel is the inner product in some Hilbert space
(Shawe-Taylor & Cristianini, 2004). However,
it is often loosely considered simply as a pairwise similarity measure between data items.
If a particular learning algorithm requires the use of Mercer kernels and the similarity
measure does not fulfill the kernel conditions, steps must be taken to ensure a valid
model.
Let the symmetric similarity matrix $S$ be given by its eigendecomposition
$$S = U \Lambda U^{\top}, \qquad (5.1)$$
with $U$ the matrix of orthonormal eigenvectors and $\Lambda$ the diagonal matrix of eigenvalues. The following corrections operate on $\Lambda$ and reconstruct a psd matrix from the modified spectrum.
5.1.1 Clip Eigenvalue Correction
The clip correction sets all negative eigenvalues to zero, $\lambda_i \mapsto \max(\lambda_i, 0)$, so that $S_{\mathrm{clip}} = U \max(\Lambda, 0) U^{\top}$. This gives the psd matrix closest to $S$ in the Frobenius norm, but the information coded in the negative part of the spectrum is discarded.
5.1.2 Flip Eigenvalue Correction
The flip correction replaces each eigenvalue by its absolute value, $\lambda_i \mapsto |\lambda_i|$, that is, $S_{\mathrm{flip}} = U |\Lambda| U^{\top}$, so that the contributions of the negative eigenvalues are retained, with reversed sign.
5.1.3 Shift Eigenvalue Correction
The shift correction adds the magnitude of the smallest (most negative) eigenvalue to the whole spectrum, $\lambda_i \mapsto \lambda_i + |\lambda_{\min}|$, which leaves the eigenvectors unchanged while raising all eigenvalues to be nonnegative.
5.1.4 Square and Bending Eigenvalue Correction
The square correction uses the squared eigenvalues, $\lambda_i \mapsto \lambda_i^2$, which corresponds to working with $S S^{\top}$ and emphasizes eigenvalues of large magnitude; the related bending strategy modifies the spectrum more gradually until a psd matrix is obtained.
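For concreteness, these corrections can be written in a few lines of numpy; the sketch below is illustrative only (the function name and the toy matrix are ours) and follows the eigenvalue mappings above.

```python
import numpy as np

def correct_eigenspectrum(S, mode="clip"):
    """Apply a standard eigenvalue correction to a symmetric similarity matrix S."""
    S = 0.5 * (S + S.T)                      # enforce symmetry
    lam, U = np.linalg.eigh(S)               # S = U diag(lam) U^T
    if mode == "clip":
        lam_new = np.maximum(lam, 0.0)       # remove negative eigenvalues
    elif mode == "flip":
        lam_new = np.abs(lam)                # keep magnitude, reverse sign
    elif mode == "shift":
        lam_new = lam - min(lam.min(), 0.0)  # add |lambda_min| to the whole spectrum
    elif mode == "square":
        lam_new = lam ** 2                   # equivalent to working with S S^T
    else:
        raise ValueError(mode)
    return (U * lam_new) @ U.T

# toy indefinite similarity matrix
S = np.array([[1.0, 0.9, -0.4],
              [0.9, 1.0, 0.3],
              [-0.4, 0.3, 1.0]])
for mode in ("clip", "flip", "shift", "square"):
    S_psd = correct_eigenspectrum(S, mode)
    print(mode, np.linalg.eigvalsh(S_psd).min() >= -1e-12)
```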
5.1.5 Complexity
All of these approaches are applicable to similarity (as opposed to dissimilarity) data and require an eigenvalue decomposition of the full matrix. The eigendecomposition (EVD) in equation 5.1 has a complexity of $O(N^3)$ using standard approaches. Gisbrecht and Schleif (2014) proposed a linear-time EVD based on the Nyström approximation; it can also be used for indefinite low-rank matrices.
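A rough sketch of such a Nyström-based approximate EVD is given below (our own simplification, not the exact algorithm of Gisbrecht and Schleif, 2014): only the $N \times m$ block of columns belonging to $m$ landmark points is used, so the run time grows linearly with $N$.

```python
import numpy as np

def nystroem_evd(K, landmarks):
    """Approximate eigenvalues/eigenvectors of a symmetric (possibly indefinite)
    matrix K from the columns indexed by `landmarks` only."""
    C = K[:, landmarks]                       # N x m column sample
    W = C[landmarks, :]                       # m x m landmark block
    lw, Uw = np.linalg.eigh(W)
    keep = np.abs(lw) > 1e-10                 # pseudo-inverse: drop (near-)zero eigenvalues
    V = C @ (Uw[:, keep] / lw[keep])          # approximate eigenvectors of K
    return lw[keep], V                        # K is approximated by V @ diag(lw) @ V.T

# hypothetical usage with 50 randomly chosen landmark columns:
# landmarks = np.random.choice(len(K), size=50, replace=False)
# lam, V = nystroem_evd(K, landmarks)
```

The approximation is exact for low-rank matrices whose column space is captured by the chosen landmarks.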
5.1.6 Out-of-Sample Extension to New Test Points
Because the corrections act linearly on the eigenspectrum of $S$, they can be expressed as a linear transformation of the training similarities: writing the corrected matrix as $\hat{S} = U m(\Lambda) U^{\top} = P S$ with $P = U m(\Lambda) \Lambda^{-1} U^{\top}$, the vector $s$ of similarities of a new test point to the training points can be corrected consistently as $\hat{s} = P s$ (Y. Chen et al., 2009). Once $P$ has been computed on the training data, the correction of test similarities requires only a matrix-vector multiplication.
5.2 Learning of Alternative Metric Representations (A3)
Many algorithmic optimization approaches become invalid for nonmetric data. Early work therefore used an optimization framework to compensate for the violated assumptions in the input data. A prominent way is to optimize not on the original proximity matrix but on a proxy matrix that is ensured to be psd and is aligned to the original non-psd proximity matrix.
5.2.1 Proxy Matrix for Noisy Kernels
The proxy matrix learning problem for indefinite kernel matrices is addressed in Luss and d’Aspremont (2009) for support vector classification (SVC), regression (SVR), and one-class classification. The authors attribute the indefiniteness to noise affecting the original kernel and propose to learn a psd proxy matrix. The SVC or SVR problem is reformulated to be based on the proxy kernel, with additional constraints to keep the proxy kernel psd and aligned to the original non-psd kernel. A conceptually related proxy learning algorithm for indefinite kernel regression was recently proposed in Li, Yeung, and Ko (2015). The specific modification is done as an update on the cone of psd matrices, which effectively removes the negative eigenvalues of the input kernel matrix.
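To convey the flavor of these formulations, here is a deliberately simplified sketch (our own construction, not the algorithms of Luss and d’Aspremont, 2009, or J. Chen and Ye, 2008): a gradient step trades off fidelity to the original indefinite kernel against alignment with the class labels, followed by a projection onto the psd cone via eigenvalue clipping.

```python
import numpy as np

def learn_proxy_kernel(K0, y, rho=1.0, lr=0.1, n_iter=200):
    """Toy proxy-kernel learner: pull same-class similarities up and different-class
    similarities down, while staying close to K0 and remaining psd.
    y: class labels in {-1, +1}."""
    Y = np.outer(y, y).astype(float)         # +1 for same-class pairs, -1 otherwise
    K = K0.copy()
    for _ in range(n_iter):
        grad = rho * (K - K0) - Y            # fidelity to K0 vs. class alignment
        K = K - lr * grad
        K = 0.5 * (K + K.T)                  # keep K symmetric
        lam, U = np.linalg.eigh(K)           # project back onto the psd cone (clip)
        K = (U * np.maximum(lam, 0.0)) @ U.T
    return K
```

Larger values of the penalty parameter keep the proxy closer to the original kernel; this is exactly the trade-off discussed for the published formulations below.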
A similar but more generic approach was proposed for dissimilarities in Lu et al.
(2005). Thereby the input can be a noisy,
incomplete, and inconsistent dissimilarity matrix. A convex optimization problem is
established, estimating a regularized psd kernel from the given dissimilarity
information. Also Brickell et al. (2008) consider
potentially asymmetric but nonnegative dissimilarity data. Thereby a proxy matrix is
searched for such that the triangle violations for triple points sets of the data are
minimized or removed. This is achieved by specifying a convex optimization problem on
the cone of metric dissimilarity matrices constrained to obey all triangle inequality
relations for the data. Various triangle inequality fixing algorithms are proposed to
solve the optimization problem at reasonable costs for moderate data sets. The benefit
of Brickell et al. (2008) is that as few distances
as possible are modified to obtain a metric solution. Another approach is to learn a
metric representation based only on given conditions on the data point relations, such
as linked or unlinked. In Davis, Kulis, Jain, Sra, and Dhillon (2007), a Mahalanobis-type metric is learned such that the user-given constraints are fulfilled by optimizing the parameter matrix of the metric.
5.2.2 Proxy Matrix Guided by Eigenspectrum Correction
In J. Chen and Ye (2008), the work of Luss and d’Aspremont (2009) was adapted to a semi-infinite quadratically constrained linear program with an extra pruning strategy to handle the large number of constraints. Further approaches following this line of research were recently reviewed in Muñoz and De Diego (2006).
A similar strategy coupling the SVM optimization with a modified kernel PCA was proposed recently in Gu and Guo (2012). Here the basic idea is to modify the eigenspectrum of the non-psd input matrix as discussed in Y. Chen et al. (2009), but based on a kernel PCA for indefinite kernels. The whole problem was formalized in a multiclass SVM learning scheme.
For all those methods, the common idea is to convert the non-psd proximity matrix into a psd similarity matrix by using a numerical optimization framework. The approach of Lu et al. (2005) learns the psd matrix independent of the algorithm, which subsequently uses the matrix. The other approaches jointly solve the matrix conversion and the model-specific optimization problem.
5.2.3 Complexity
While the approaches of Luss and d’Aspremont (2009) and J. Chen and Ye (2008) appear to be quite resource demanding, the approaches of Gu and Guo (2012) and Y. Chen et al. (2009) are more tractable because they constrain the matrix conversion to a few possible strategies and provide a simple out-of-sample strategy for mapping new data points. The approach of Luss and d’Aspremont (2009) uses a full eigenvalue decomposition in the first step, with $O(N^3)$ run-time complexity. Further, the full kernel matrix is approximated by a psd proxy matrix with $O(N^2)$ memory complexity. The approach of J. Chen and Ye (2008) has similar conditions. The approach in Brickell et al. (2008) has a comparably high run-time complexity. All of these approaches have a rather high computational complexity and do not scale to larger data sets.
5.2.4 Out-of-Sample Extension to New Test Points
For the proxy approaches of Luss and d’Aspremont (2009) and J. Chen and Ye (2008), a direct out-of-sample extension is not obvious, because the learned proxy kernel is defined only on the training data and the similarities of new test points are not automatically corrected in a consistent way.
In Gu and Guo (2012) the extension is directly available by use of a projection function within a multiclass optimization framework.
5.3 Experimental Evaluation
The approaches noted thus far are all similar to each other, but from the published experiments it is not clear how they compare. Subsequently we briefly compare the approaches of Luss and d’Aspremont (2009) and J. Chen and Ye (2008). We consider different non-psd standard data sets processed by the two methods, systematically varying the penalization parameter on a logarithmic grid. The various kernel matrices form a manifold in the cone of the psd matrices. We compared these kernel matrices pairwise using the Frobenius norm. The obtained distance matrix is embedded into two dimensions using the t-SNE algorithm (van der Maaten & Hinton, 2008) and a manually adapted penalty term. As anchor points, we also included the clip, flip, shift, square, and the original kernel solution.
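The comparison step itself is straightforward; a minimal sketch (assuming scikit-learn for t-SNE, with placeholder parameters) looks as follows.

```python
import numpy as np
from sklearn.manifold import TSNE

def frobenius_distance_matrix(kernels):
    """Pairwise Frobenius distances between a list of kernel matrices."""
    n = len(kernels)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.linalg.norm(kernels[i] - kernels[j], ord="fro")
    return D

# kernels = [...]  # proxy kernels for a grid of penalty values plus clip/flip/shift/square
# D = frobenius_distance_matrix(kernels)
# xy = TSNE(metric="precomputed", init="random", perplexity=5).fit_transform(D)
```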
The considered data are the Amazon47 data (204 points, two classes), the Aural Sonar data (100 points, two classes), and the Protein data (213 points, two classes). The similarity matrices are shown in Figure 4 with indices sorted according to the class labels. For all data sets, the labeling has been changed to a two-class scheme by combining odd or even class labels, respectively. All data sets are then quite simple classification problems leading to an empirical error of close to 0 in the SVM model trained on the obtained proxy kernels. However they are also strongly non-psd, as can be seen from the eigenspectra plots in Figure 5.
Figure 4: Visualization of the proxy kernel matrices: Amazon, Aural Sonar, and Protein (resp. left to right).
Figure 5: Eigenspectra of the proxy kernel matrices: Amazon, Aural Sonar, and Protein (resp. left to right).
An exemplary embedding is shown in Figure 6 with arbitrary units (so we omit the axis labeling). There are basically two trajectories of kernel matrices (each matrix represented by a circle), where the penalty parameter value is indicated by red or blue shading. We also see some separate clusters caused by the embedding procedure. The figure shows the kernel matrices for the protein data set: on the left, we have the trajectory of the approach provided by Chen, and on the right the one obtained by the method of Luss. We see that the clip solution is close to the crossing point of the two trajectories. The square, shift, and flip solutions are near to the original kernel matrix (light green circle). We can find the squared solution quite close to the original kernel matrix, but also some points of the Luss trajectory are close to this matrix. Similar observations can be made for the other data sets. We note again that both algorithms are not only optimizing with respect to the Frobenius norm but also in the line of the SVM optimization.
Figure 6: Embedding of adapted proxy kernel matrices for the protein data as obtained by Luss (blue shaded) and Chen (red shaded). One sees typical proximity matrix trajectories for the approaches of Y. Chen et al. (2009) and Luss and d’Aspremont (2009), both using the clip strategy. The embedding was obtained by t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008), where the Frobenius norm was used as a similarity measure between two matrices. Although the algorithms start from different initialization points of the proximity matrices, the trajectories roughly end in the clip solution as the penalization parameter increases.
From the plots, we can conclude that both methods calculate psd kernel matrices along a smooth trajectory with respect to the penalty parameter, finally leading to the clip solution. The square, shift, and original kernel solution appear to be very similar and are close to but in general not crossing the trajectory of Luss or Chen. The flip solution is typically less similar to the other kernel matrices.
5.4 A Geometric View of Eigenspectrum and Proxy Approaches
The surrogate or proxy kernel is not learned from scratch but is often restricted to lie in a set of valid psd kernels originating from some standard spectrum modification approaches (such as flip or clip) applied to K. The approach in Luss and d’Aspremont (2009) is formulated primarily with respect to an increase of the class separation by the proxy kernel and, as the second objective, to ensure that the obtained kernel matrix is still psd. This can be easily seen in equation 5.8: if a pair of data items is from the same class, the corresponding similarities in the kernel matrix are emphasized (increased); otherwise they are decreased. If by doing this the kernel becomes indefinite, it is clipped back to the boundary of the space of psd kernel matrices. This approach can also be considered as a type of kernel matrix learning (Lanckriet et al., 2004).
In Y. Chen et al. (2009) the proxy matrix is restricted to be a combination of clip or flip operations on the eigenspectrum of the matrix $S$. We denote the cone of positive semidefinite matrices by $\mathcal{S}_+$ (see Figure 7). Further, we define the kernel matrix obtained by the approach of equation 5.8 as $K_L$ and of equation 5.9 as $K_C$. The approaches of equations 5.8 and 5.9 can be interpreted as a smooth path in $\mathcal{S}_+$. Given the balancing parameter $\rho$, the optimization problems in equations 5.8 and 5.9 have unique solutions $K_L(\rho)$ and $K_C(\rho)$, respectively. In the interior of $\mathcal{S}_+$, a small perturbation of $\rho$ will lead to small perturbations in $K_L$ and $K_C$, meaning that the optimization problems in equations 5.8 and 5.9 define continuous paths $\rho \mapsto K_L(\rho)$ and $\rho \mapsto K_C(\rho)$, respectively. It has been shown that as $\rho$ grows, $K_L(\rho)$ approaches the clip solution (Y. Chen et al., 2009). Note that for this approach, the vector a (see equation 5.9) defines the limiting behavior of the path $K_C(\rho)$. This can be easily seen by defining the eigenvalue scaling values $a_i$ as follows: if $\lambda_i \geq 0$, then $a_i = 1$. Otherwise,
Clip: $a_i = 0$.
Flip: $a_i = -1$.
Square: $a_i = \lambda_i$.
Figure 7: Schematic visualization of the eigenspectrum and proxy matrix approaches with respect to the cone of psd matrices. The cone interior covers the full-rank psd matrices, and the cone boundary contains the psd matrices having at least one zero eigenvalue. At the origin, we have the matrix with all eigenvalues zero. Outside the cone are the non-psd matrices. Both strategies project the matrices to the cone of psd matrices. The parameter $\rho$ controls how strongly the matrices are regularized toward a clipping solution with a matrix update A. Depending on the penalizer and the rank of S, the matrices follow various trajectories (an exemplary one is shown by the curved line in the cone). If $\rho \to \infty$, the path reaches the clipping solution at the boundary of the cone.
Depending on the setting of the vector a, $K_C(\rho)$ converges to the clip, the flip, or the square solution.
Following the idea of eigendecomposition by Y. Chen et al. (2006), we suggest a unified intuitive interpretation of proximity matrix psd corrections. Applying an eigendecomposition to the kernel, $K_0 = U \Lambda U^{\top}$, we can view $K_0$ as a weighted mixture of $N$ rank-1 expert proximity suggestions $K_i$: $K_0 = \sum_{i=1}^{N} w_i K_i$, where $K_i = u_i u_i^{\top}$ is built from the $i$th eigenvector $u_i$.
Different proximity matrix psd corrections result in different weights $w_i$ of the experts $K_i$:
No correction: $w_i = \lambda_i$.
Clip: $w_i = \max(\lambda_i, 0)$.
Flip: $w_i = |\lambda_i|$.
Square: $w_i = \lambda_i^2$.
Shift: $w_i = \lambda_i + |\lambda_{\min}|$.
Each expert $K_i$ provides an opinion about the similarity of an object pair, weighted by $w_i$. Note that in some cases the similarities suggested by $K_i$ and the weights $w_i$ can be negative. If both terms are positive or both are negative, the contribution of the $i$th expert increases the overall similarity; otherwise, it is decreased. If we consider a classification task, we can now analyze the misclassifications in more detail by inspecting the similarities of misclassified entries for individual experts. Depending on the used eigenvalue correction, one gets information about whether similarities are increased or decreased. In the experiments given in section 8, we see that clipping is in general worse than flipping or square. Clipping removes some of the experts' opinions. Consider a negative similarity value suggested by the $i$th expert. A negative eigenvalue $\lambda_i$ of $K_0$ causes the contribution from expert $i$ to increase the overall similarity between the two items. Flipping corrects this by enforcing the contribution from expert $i$ to decrease the similarity. Square in addition enhances the weighting of all experts with $|\lambda_i| > 1$ and suppresses those with $|\lambda_i| < 1$. On the other hand, shift consistently raises the importance of unimportant experts (weights in $K_0$ close to 0), explaining the (in general) bad results for shift in Table 7.
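The expert view can be made concrete with a few lines of numpy (an illustrative sketch of our own, not code from the article): the similarity of a chosen pair is decomposed into its rank-1 expert contributions, and the reweighting under the different corrections is reported.

```python
import numpy as np

def expert_contributions(K0, a, b):
    """Decompose the similarity K0[a, b] into rank-1 'expert' contributions
    w_i * u_i[a] * u_i[b] and show how clip/flip/square/shift reweight them."""
    lam, U = np.linalg.eigh(K0)
    base = U[a, :] * U[b, :]                 # K_i(a, b) = u_i[a] * u_i[b]
    weights = {
        "none":   lam,
        "clip":   np.maximum(lam, 0.0),
        "flip":   np.abs(lam),
        "square": lam ** 2,
        "shift":  lam - min(lam.min(), 0.0),
    }
    return {name: w * base for name, w in weights.items()}

# contributions = expert_contributions(K0, a=3, b=17)
# contributions["none"].sum() reproduces K0[3, 17]; the other entries show how
# each correction changes the experts' votes for this particular pair.
```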
An exemplary visualization of the proximity matrix trajectories for the approaches of
Y. Chen et al. (2009) and Luss and d’Aspremont
(2009) is shown in Figure 6. Basic eigenspectrum approaches project the input matrix K0 on the boundary of the cone if the matrix has low rank or project it in the
interior of the cone
when the transformed matrix still has full
rank. Hence, the clip and shift approaches always give a matrix on the boundary and are
quite restricted. The other approaches can lead to projections in the cone and may still
permit additional modifications of the matrix (e.g., to enhance inner-class similarities).
However, the additional modifications may lead to low-rank matrices such that they are
projected back to the boundary of the cone.
Having a look at the protein data (see Figure 5), we see that the eigenspectrum of $K_0$ shows strong negative components. We know that the proximities of the protein data are generated by a nonmetric alignment algorithm; errors in the triangle inequalities are therefore most likely caused by the algorithm and not by numerical errors (noise). For simplicity, we reduce the protein data to a two-class problem by focusing on the two largest classes. We obtain a proximity matrix with an eigenspectrum very similar to the one of the original protein data, including a clearly negative smallest eigenvalue. Now we identify those points that show a stronger alignment to the eigenvector of the dominant negative eigenvalue: points with high absolute values in the corresponding coordinates of the eigenvector. We collected the top 61 of such points in a set $M$. Training an SVM on the two-class problem without eigenvalue correction leads to a 57% training error, and a considerable fraction of the data items from $M$ were misclassified. By applying an eigenvalue correction, we still have misclassifications (for flip as well as for clip), but for flip, none of the misclassified items are in $M$, whereas for clip part of them still are. This shows again that the negative eigenvalues can contain relevant information for the decision process.
5.5 Embedding and Mapping Strategies (A2)
5.5.1 Global Proximity Embeddings
An alternative approach is to consider different types of embeddings or local representation models to effectively deal with non-psd matrices. After the embedding into a (in general low-dimensional) Euclidean space, standard data analysis algorithms (e.g., to define classification functions) can be used. While many embedding approaches are applicable to nonmetric matrix data, the embedding can lead to a substantial information loss (Wilson & Hancock, 2010). Some embedding algorithms like Laplacian eigenmaps (Belkin & Niyogi, 2003) cannot be calculated based on nonmetric input data, and preprocessing, as mentioned before, is needed to make the data psd.
In a comparison of distance distributions of high-dimensional Euclidean data points and low-dimensional points, it turns out that the former is shifted to higher average distances with relatively low standard deviation. This phenomenon is referred to as concentration of the norm (Lee & Verleysen, 2007).
MDS takes a symmetric dissimilarity matrix $D$ as input and calculates a $d$-dimensional vector space representation such that the distances between the new $N$ points $\vec{x}_i \in \mathbb{R}^d$ are close to the original dissimilarities with respect to some stress function. In classical MDS (cMDS), this stress function is based on the Euclidean distance. During this procedure (for details, see Kruskal, 1964), negative eigenvalues are clipped, and a psd kernel can be obtained as $K = -\frac{1}{2} J D^{(2)} J$, where $D^{(2)}$ denotes the matrix of squared dissimilarities and $J = I - \frac{1}{N}\mathbf{1}\mathbf{1}^{\top}$ is the centering matrix. The approach is exact if the input data can be embedded into a Euclidean space without any extra loss, which is not always possible (Wilson & Hancock, 2010).
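A compact sketch of this double-centering construction (our own code; D is assumed to hold pairwise dissimilarities):

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical MDS: double-center the squared dissimilarities and clip
    negative eigenvalues to obtain a d-dimensional Euclidean embedding."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N          # centering matrix
    K = -0.5 * J @ (D ** 2) @ J                  # Gram matrix (psd only if D is Euclidean)
    lam, U = np.linalg.eigh(K)
    order = np.argsort(lam)[::-1][:d]            # take the d largest eigenvalues
    lam_top = np.maximum(lam[order], 0.0)        # clip negative eigenvalues
    return U[:, order] * np.sqrt(lam_top)        # embedded coordinates (N x d)
```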
5.5.2 Local Embeddings
L. Chen and Lian (2008) consider an unsupervised retrieval problem where the used distance function is nonmetric. A model is defined such that the data can be divided into disjoint groups and the triangle inequality holds within each group by constant shifting. Similar approaches were discussed in Bustos and Skopal (2011), who proposed a specific distance modification approach in Skopal and Lokoč (2008). Local concepts in the line of nonmetric proximities were also recently analyzed for the visualization of nonmetric proximity data by Van der Maaten and Hinton (2012), where different (local) maps are defined to get different views of the data. Another interesting approach was proposed in Goldfarb (1984), where the nonmetric proximities are mapped into a pseudo-Euclidean space.
5.5.3 Proximity Feature Space
Finally, the so-called similarity or dissimilarity space representation (Graepel et al., 1998; Pekalska & Duin, 2008a, 2005) has found wide usage. Graepel et al. (1998) proposed an SVM in pseudo-Euclidean space, and Pekalska and Duin (2005, 2008a) proposed a generalized nearest mean classifier and Fisher linear discriminant classifier, also using the feature space representation.
The proximity matrix is considered to be a feature matrix with rows as the data points (cases) and columns as the features. Accordingly each point is represented in an N-dimensional feature space where the features are the proximities of the considered point to all other points. This view on proximity learning is also conceptually related to a more advanced theory proposed in Balcan et al. (2008).
The approaches either transform the given proximities by a local strategy or completely change the data space representation, as in the last case. The approach by Pekalska and Duin (2005) is cheap, but a feature selection problem is raised because, in general, it is not acceptable to use all N features to represent a point, neither during training nor for out-of-sample extensions in the test phase (Pekalska, Duin, & Paclík, 2006). Further, this type of representation radically changes the original data representation.
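As a small illustration of the proximity-space view (a sketch of our own, not the exact classifiers of Pekalska and Duin), the rows of the proximity matrix can be fed directly into any vectorial classifier; here, a nearest-class-mean rule on the proximity features:

```python
import numpy as np

class NearestMeanOnProximities:
    """Treat each row of the proximity matrix as a feature vector and classify
    by the nearest class mean in this proximity space."""
    def fit(self, P, y):                      # P: N x N proximities, y: numpy array of labels
        self.classes_ = np.unique(y)
        self.means_ = np.stack([P[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, P_new):                 # P_new: proximities of new points to the N training points
        d = ((P_new[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]
```

No condition on the proximity matrix is required here; the price is that every test point needs its proximities to all training points.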
The embedding suggested in Goldfarb (1984) is rather costly because it involves an eigenvalue decomposition (EVD) of the proximity matrix, which can be done effectively only by using some novel strategies for low-rank approximations, which also provide an out-of-sample approach (Schleif & Gisbrecht, 2013).
Balcan et al. (2008) also provided a theoretical analysis for using similarities as features, with similar findings for dissimilarities in Wang et al. (2009). Balcan et al. (2008) provide criteria for a good similarity function to be used in a discrimination function. Roughly, they say that a similarity is good if the expected intraclass similarity is sufficiently large compared to the expected interclass similarity (this is more specific in theorem 4 of Balcan et al., 2008). Given N training points and a good similarity function, there exists a linear separator on the similarities as features that has a specifiable maximum error at a margin that depends on N (Balcan et al., 2008).
Wang et al. (2009) show that under slightly less restrictive assumptions on the similarity function, there exists with high probability a convex combination of simple classifiers on the similarities as features that has a maximum specifiable error.
5.5.4 Complexity
The classical MDS has a complexity of $O(N^3)$, but by using Landmark MDS (de Silva & Tenenbaum, 2002; Platt, 2005) (L-MDS), the complexity can be reduced to be roughly linear in $N$, with $m \ll N$ as the number of landmarks. L-MDS, however, double-centers the input data on the small landmark matrix only and applies a clipping of the eigenvalues obtained on the $m \times m$ landmark similarity matrix. It therefore has two sources of inaccuracy: in the double centering and in the eigenvalue estimation step (the eigenfunctions are estimated only on the landmark matrix). Further, the clipping may remove relevant information, as pointed out before. Gisbrecht and Schleif (2014) propose a generalization of L-MDS that is more accurate and flexible in these two points.
The local approaches already noted cannot directly be used in, say, a classification or
standard clustering context but are method specific for a retrieval or inspection task.
The proximity feature space approach basically has no extra cost (given the proximity
matrix is fully available) but defines a finite-dimensional space of size d, with d determined by the number of (in this
context) prototypes or reference points. So often d is simply chosen as N, which can lead to a high-dimensional
vectorial data representation and costly distance calculations.
5.5.5 Out-of-Sample Extension to New Test Points
To obtain the correct similarities for MDS for a new test point, one can in principle recompute the double centering, including the dissimilarities of the new point. If this operation is too costly,
approximative approaches, as suggested in Gisbrecht, Lueks, Mokbel, and Hammer (2012), Gisbrecht, Schulz, and Hammer (2015), and Vladymyrov and Carreira-Perpiñán (2013), can also be used. The local embedding approaches
typically generate a model that has to be regenerated from scratch to be completely
valid, or specific insertion concepts can be used, as shown in Skopal and Lokoč (2008). The proximity space representation is
directly extended to new samples by providing the proximity scores to the corresponding
prototypes, which can be costly for a large number of prototypes.
6 Natural Nonmetric Learning Approaches
An alternative to correcting the non-psd matrix is to use the additional information coded in the negative eigenspectrum within the optimization framework. This is in agreement with research done by Pekalska et al. (2004). The simplest strategy is to use a nearest-neighbor classifier (NNC) as discussed in Duin et al. (2014). The NNC is asymptotically optimal for $N \to \infty$, but it is very costly because for a new item, all potential neighbors have to be evaluated in the worst case. The organization into a tree structure can resolve this issue for the average case using, for example, the NM-tree, as proposed in Skopal and Lokoč (2008), but it is complicated to maintain for lifelong learning and suffers from the shortcomings of NN for a finite N.
There are models that functionally resemble kernel machines, such as the SVM, but do not require Mercer kernels for their model formulation and fitting—for example, the relevance vector machine (RVM; Tipping, 2001a), radial basis function (RBF) networks (Buhmann, 2003, with kernels positioned on top of each training point), or the probabilistic classification vector machine (PCVM; Chen et al., 2009). In such approaches, kernels expressing similarity between data pairs are treated as nonlinear basis functions, transforming an input $x$ into its nonlinear image $\phi(x) = (k(x, x_1), \ldots, k(x, x_N))^{\top}$ and making the out-of-sample extension straightforward, while not requiring any additional conditions on $K$. The main part of the models is formed by the projection of the data image $\phi(x)$ onto the parameter weight vector $w$: $f(x) = w^{\top} \phi(x)$. We next detail some of these methods.
6.1 Approaches Using the Indefinite Krein or Pseudo-Euclidean Space (B2)
Some approaches are formulated directly in the Krein space and avoid costly transformations of the given indefinite similarity matrices. Pioneering work about learning with indefinite or nonpositive kernels can be found in Ong et al. (2004) and Haasdonk (2005). Ong et al. (2004) noticed that if the kernels are indefinite, one can no longer minimize the loss of standard kernel algorithms but instead must stabilize the loss on average. They showed that for every kernel, there is an associated Krein space, and for every reproducing kernel Krein space (RKKS) (Alpay, 1991), there is a unique kernel. Ong et al. (2004) provided a list of indefinite kernels, like the linear combination of gaussians with negative combination coefficients, and proposed initial work for learning algorithms in RKKS, complemented by Rademacher bounds. Haasdonk (2005) provided a geometric interpretation of SVMs with indefinite kernel functions and showed that indefinite SVMs are optimal hyperplane classifiers not by margin maximization but by minimization of distances between convex hulls in pseudo-Euclidean spaces. The approach is solely defined on distances and convex hulls, which can be fully defined in the pseudo-Euclidean space. This approach is very appealing; it shows that SVMs can be learned for indefinite kernels, although not as a convex problem. However, Haasdonk also mentioned that the approach is inappropriate for proximity data with a large number of negative eigenvalues. Based on this theory, multiple kernel approaches have been extended to be applicable for indefinite kernels.
6.1.1 Indefinite Fisher and Kernel Quadratic Discriminant
Haasdonk and Pekalska (2008) and Pekalska and Haasdonk (2009) proposed indefinite kernel Fisher discriminant analysis (iKFDA) and indefinite kernel quadratic discriminant analysis (iKQDA), focusing on classification problems; these were recently extended by a weighting scheme in J. Yang and Fan (2013).
The initial idea is to embed the training data into a Krein space and apply a modified kernel Fisher discriminant analysis (KFDA) or kernel quadratic discriminant analysis (KQDA) for indefinite kernels.
Zafeiriou (2012) and Liwicki et al. (2012) proposed and integrated an indefinite kernel
PCA in the Fisher discriminant framework to get a low-dimensional feature extraction for
indefinite kernels. The basic idea is to define an optimization problem similar to the
psd kernel PCA but using the squared indefinite kernel, which has no effect on the
eigenvectors but only on the eigenvalues. In the corresponding derivation of the principal components, the eigenvalues are considered only by their magnitude, such that the principal components corresponding to the largest absolute eigenvalues are found. Later, this approach was
applied in the context of slow-feature analysis for indefinite kernels (Liwicki et al., 2013). A multiple indefinite kernel learning
approach was proposed in Kowalski et al. (2009),
and a recent work about indefinite kernel machines was proposed in Xue and Chen (2014). Also the kernelized version of locality-
sensitive hashing has been extended to indefinite kernels (Mu & Yan, 2010) by combining kernelized hash functions on the associated
Hilbert spaces of the decomposed pseudo-Euclidean space.
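A rough sketch in the spirit of this indefinite kernel PCA (our own simplification: components are selected by eigenvalue magnitude, which matches the effect of using the squared kernel):

```python
import numpy as np

def indefinite_kernel_pca(K, n_components=2):
    """Kernel-PCA-style projection for an indefinite symmetric kernel: center K,
    keep the eigenvectors with the largest absolute eigenvalues, and scale by
    sqrt(|eigenvalue|)."""
    N = K.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N
    Kc = J @ K @ J                                    # double centering
    lam, U = np.linalg.eigh(Kc)
    order = np.argsort(np.abs(lam))[::-1][:n_components]
    return U[:, order] * np.sqrt(np.abs(lam[order]))  # projected training data
```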
6.1.2 Complexity
All of these methods have a run-time complexity of $O(N^3)$ and do not directly scale to large data sets. The test-phase complexity is linear in the number of points used to represent the model. Accordingly, sparsity concepts as suggested in Tipping (2000) can be employed to further reduce the complexity for test cases.
6.1.3 Out-of-Sample Extension to New Test Points
The models of iKFD, iKPCA, and iKQDA allow a direct and easy out-of-sample extension by
calculating the (indefinite) similarities of a new test point to the corresponding
training points used in the linear combination that defines the model.
6.2 Learning of Decision Functions Using Indefinite Proximities (B1)
Balcan et al. (2008) proposed a theory for learning with similarity functions, with extensions for dissimilarity data in Wang et al. (2009). Balcan et al. (2008) discussed necessary properties of proximity functions to ensure good generalization capabilities for learning tasks. This theory motivates generic learning approaches based purely on symmetric, potentially nonmetric proximity functions minimizing the hinge loss relative to the margin. They show that such a similarity function can be used in a two-stage algorithm. First, the data are represented by creating an empirical similarity map: a subset of data points is selected as landmarks, and each data point is represented by its similarities to those landmarks. Subsequently, standard methods can be employed to find a large-margin linear separator in this new space. Indeed, in recent years multiple approaches have been proposed that could be covered by these theoretical frameworks, although most often they are not explicitly considered in this way.
6.2.1 Probabilistic Classification Vector Machine
H. Chen et al. (2009; 2014) propose the probabilistic classification vector machine
(PCVM), which can deal also with asymmetric indefinite proximity matrices.8 Within a Bayesian approach, a linear
classifier function is learned such that each point can be represented by a sparse
weighted linear combination of the original similarities. Similar former approaches like
the relevance vector machine (RVM; Tipping, 2001b) were found to be unstable without early stopping during learning. In
order to tackle this problem, a signed and truncated gaussian prior is adopted over
every weight in PCVMs, where the sign of the prior is determined by the class label, $+1$ or $-1$. The
truncated gaussian prior not only restricts the sign of weights but also leads to a
sparse estimation of weight vectors, and thus controls the complexity of the model. The
empirical feature map is thereby automatically generated by a sparse adaptation scheme
using the expectation-maximization (EM) algorithm.
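The resulting decision function has a simple form. The following prediction-only sketch assumes that the sparse weight vector w and bias b have already been obtained by the EM scheme and uses a probit link as described in the PCVM papers; the function name and interface are our own illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def pcvm_predict_proba(S_test_train, w, b):
    """Sketch of the PCVM decision function: a sparse, sign-constrained linear
    combination of (possibly indefinite, even asymmetric) similarities,
    squashed by a probit link.  S_test_train[i, j] = s(test_i, train_j)."""
    used = np.flatnonzero(w)                  # only the few retained training points matter
    f = S_test_train[:, used] @ w[used] + b
    return norm.cdf(f)                        # probability of the positive class
```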
6.2.2 Supervised Learning with Similarity Functions
The theoretical foundations for classifier construction based on generic $(\epsilon, \gamma)$-good similarity functions were proposed in Balcan et al. (2008). This theory suggests a constructive approach to derive a classifier: after a mapping like the one already described, the similarities are normalized, and this representation is used in a linear SVM to find a large-margin classifier.
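A minimal sketch of this two-stage construction follows; the random landmark selection, the row normalization, and the use of scikit-learn's LinearSVC are our illustrative choices, under the assumption that the full training similarity matrix is available.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_similarity_map_classifier(S_train, y_train, n_landmarks=50, C=1.0, seed=0):
    """Represent each point by its similarities to a landmark set and fit a
    linear large-margin classifier in that empirical similarity space."""
    rng = np.random.default_rng(seed)
    landmarks = rng.choice(S_train.shape[0], size=n_landmarks, replace=False)
    X = S_train[:, landmarks]
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)  # normalize rows
    return LinearSVC(C=C).fit(X, y_train), landmarks

def predict_similarity_map(clf, landmarks, S_test_train):
    X = S_test_train[:, landmarks]            # similarities of test points to the landmarks
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return clf.predict(X)
```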
Wang et al. (2009) proposed a similar approach for dissimilarity functions whereby the landmarks set is optimized by a boosting procedure.
Some other related approaches are given by so-called median algorithms. Here the model parameters are specific data points of the original training set, identified during the optimization and considered as cluster centers or prototypes, which can be used to assign new points. One may consider this as a sparse version of one-nearest-neighbor classification, and it can also be related to the nearest mean classifier for dissimilarities proposed in Wilson and Hancock (2010). Examples of such median approaches can be found in Nebel et al. (2014) and Hammer and Hasenfuss (2010). Approaches along the same lines but with a weighted linear combination were proposed in D. Hofmann, Schleif, and Hammer (2014), Hammer, Hoffmann, Schleif, and Zhu (2014), and Gisbrecht, Mokbel, et al. (2012) for dissimilarity data. As discussed in Haasdonk (2005), these approaches may converge only to a saddle point for indefinite proximities.
6.2.3 Complexity
Algorithms that derive decision functions in the former way are in general very costly, involving $\mathcal{O}(N^2)$ to $\mathcal{O}(N^3)$ operations, or they make use of random selection strategies that can lead to models of very different generalization accuracy if the selection procedure is included in the evaluation. The approaches directly following Balcan et al. (2008) are, however, efficient if the similarity measure already separates the classes well, regardless of the specific landmark set. (See Table 3.)
Table 3: Memory complexity, run-time complexity, and out-of-sample complexity of the main approaches: eigenvalue correction (A1), proxy matrix (A3), proximity space (A2), embeddings like MDS (A2), iKFD (B2), PCVM (B1), and (linear) similarity functions (B1).
Notes: Most often the approaches are on average less costly than these worst-case bounds. For MDS-like approaches, the complexity depends very much on the method used and on whether the data are given as vectors or proximities. The proximity space approach may generate further costs if, for example, a classification model has to be calculated on the representation. Proxy matrix approaches are very costly due to the optimization problem involved and the classical solvers used. Some proxy approaches solve a similarly complex optimization problem for out-of-sample extensions. For low-rank proximity matrices, the costs can often be reduced by a magnitude or more. See section 7.
6.2.4 Out-of-Sample Extension to New Test Points
For PCVM and the median approaches, the weight vector is in general very sparse, so that out-of-sample extensions are easily calculated by computing only the few similarities between a test point and the training points with nonzero weights.
Because all approaches in section 6 can naturally
deal with nonmetric data, additional modifications of the similarities are avoided and
the out-of-sample extension is consistent.
7 Scaling Up Approaches of Proximity Learning for Larger Data Sets
A major issue with the application of the approaches explored so far is the scalability to large N. While we have provided a brief complexity analysis for each major branch, recent research has focused on improving the scalability of the approaches to reduce memory or run-time costs, or both. In the following, we briefly sketch some of the more recent techniques used in this context that have either already been proposed in the line of nonmetric proximity learning or can easily be transferred.
7.1 Nyström Approximation
The Nyström approximation technique has been proposed in the context of kernel methods in
Williams and Seeger (2000). Here, we give a short
review of this technique before it is employed in PCVM. One well-known way to approximate
an $N \times N$ Gram matrix $K$ is to use a low-rank approximation. This can be done by computing the eigendecomposition of the kernel matrix, $K = U \Lambda U^\top$, where $U$ is a matrix whose columns are orthonormal eigenvectors and $\Lambda$ is a diagonal matrix of eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots$, and keeping only the $m$ eigenspaces that correspond to the $m$ largest eigenvalues of the matrix. The approximation is $\tilde{K} = U_{N,m} \Lambda_{m,m} U_{m,N}$, where the indices refer to the size of the corresponding submatrix restricted to the largest $m$ eigenvalues. The Nyström method approximates the kernel matrix in a similar way, without computing the eigendecomposition of the whole matrix, which is an $\mathcal{O}(N^3)$ operation. Instead, $m$ landmark points are chosen, and with $K_{N,m}$ denoting the rectangular submatrix of similarities between all $N$ points and the landmarks and $K_{m,m}$ the landmark-landmark block, the matrix is approximated by $\tilde{K} = K_{N,m} K_{m,m}^{-1} K_{m,N}$, where the inverse is obtained from the eigendecomposition $K_{m,m} = U \Lambda U^\top$ with $U$ column orthonormal and $\Lambda$ a diagonal matrix.
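A minimal sketch of this construction with randomly chosen landmarks; the use of a pseudo-inverse for the landmark block is our choice to keep the sketch valid for rank-deficient or indefinite submatrices.

```python
import numpy as np

def nystroem_approximation(K, m, seed=0):
    """Low-rank Nystroem approximation of a symmetric similarity matrix K
    from m randomly chosen landmark columns."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(K.shape[0], size=m, replace=False)
    C = K[:, idx]                   # K_{N,m}: similarities of all points to the landmarks
    W = K[np.ix_(idx, idx)]         # K_{m,m}: landmark-landmark block
    return C @ np.linalg.pinv(W) @ C.T
```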
7.2 Linear Time Eigenvalue Decomposition Using the Nyström Approximation
Given the Nyström approximation $\tilde{K} = K_{N,m} K_{m,m}^{-1} K_{m,N}$, an eigenvalue decomposition no longer has to be computed on the full $N \times N$ matrix. Since $\tilde{K}$ has rank at most $m$, its nonzero eigenvalues and the corresponding eigenvectors can be recovered from an $m \times m$ eigenproblem that involves only $K_{N,m}$ and $K_{m,m}$, reducing the cost from $\mathcal{O}(N^3)$ to $\mathcal{O}(N m^2)$. This is particularly attractive for nonmetric proximities because eigenvalue corrections such as clip, flip, or square can then be applied to the approximated eigenspectrum at a cost that is linear in $N$.
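The following sketch shows one way to realize such a decomposition under the assumption that the k leading eigenvalues are nonzero; it is not necessarily the exact scheme of the cited works, but it illustrates that only an m x m eigenproblem is needed.

```python
import numpy as np

def nystroem_eigendecomposition(C, W, k):
    """Eigenvalues/eigenvectors of the Nystroem approximation
    K_tilde = C W^+ C^T (C: N x m, W: m x m) via an m x m eigenproblem."""
    # The nonzero eigenvalues of C (W^+ C^T) coincide with those of the small
    # matrix (W^+ C^T) C; this also holds for indefinite W.
    small = np.linalg.pinv(W) @ (C.T @ C)
    vals, vecs = np.linalg.eig(small)
    vals, vecs = vals.real, vecs.real          # K_tilde is symmetric, so its spectrum is real
    order = np.argsort(-np.abs(vals))[:k]
    vals, vecs = vals[order], vecs[:, order]
    U = C @ vecs                               # lift the small eigenvectors back to length N
    U = U / np.linalg.norm(U, axis=0, keepdims=True)
    return vals, U
```

The dominating cost is forming $C^\top C$, which is $\mathcal{O}(N m^2)$, so the overall procedure is linear in $N$ for a fixed number of landmarks $m$.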
7.3 Approximation Concepts for Low-Dimensional Embeddings
Recently, several strategies have been proposed to reduce the run-time complexity of various embedding approaches. Two general ideas have been suggested. One is based on the Barnes-Hut concept, widely known in the analysis of astrophysical data (Barnes & Hut, 1986), and the second is based on a representer concept where the latent projection of each point is constrained to be a local linear function of the latent projections of some landmarks (Vladymyrov & Carreira-Perpiñán, 2013). Both approaches assume that the mapped data have an intrinsic group structure in the input and the output space that can be effectively employed to reduce computation costs. As a consequence, they are in general efficient only if the target embeddings really live in a low-dimensional space, such that an efficient data structure for low dimensions can be employed.
Yang, Peltonen, and Kaski (2013) proposed a Barnes-Hut approach as a general framework for a multitude of embedding approaches. A specific strategy for t-SNE was recently presented in van der Maaten (2013). Here we briefly summarize the main ideas suggested in Yang et al. (2013). We refer to the corresponding journal papers for more details.
Gisbrecht and Schleif (2014) and Schleif and Gisbrecht (2013) proposed a generalization of Landmark-MDS that is also very efficient for nonmetric proximity data. Using the same concepts, it is also possible to obtain linear run-time complexity of Laplacian eigenmaps for (corrected) nonmetric input matrices.
7.4 Random Projection and Sparse Models
The proximity (dissimilarity) space discussed in section 5.5 makes use of all N similarities for a point i. To reduce the computational costs for generating a model, this N-dimensional space can be reduced in various ways. Various heuristics and multiobjective criteria have been employed to select an appropriate set of similarities, which are also sometimes called prototypes (Pekalska et al., 2006).
Random projection is another effective approach that has been widely studied in recent publications, also in the context of classification (Durrant & Kaban, 2010, 2013; Mylavarapu & Kaban, 2013). It is based on the Johnson-Lindenstrauss lemma, which states that a (random) mapping of N points from a high-dimensional (D) to a suitably chosen low-dimensional feature space distorts pairwise distances by at most a factor of $(1 \pm \epsilon)$. More recent work can be found in Kane and Nelson (2014). Another option is to derive the decision function directly on only a subset of the proximities; theoretical work discussing this option is available in Balcan et al. (2008), Wang et al. (2009), and Guo and Ying (2014).
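A minimal sketch of such a projection of the N-dimensional proximity-space representation (one row per point); the Gaussian projection matrix and the 1/sqrt(k) scaling are the textbook choice, and the function name is ours.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X (e.g., similarities of each point to all N training
    points) to k dimensions with a Gaussian random matrix."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)  # approximately distance-preserving
    return X @ R
```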
8 Experiments
In Table 5 we compare the previously discussed methods on various non-psd data sets with different attributes (Table 4 gives an overview of the data sets). As a baseline, we use the k-nearest-neighbor (kNN) algorithm, with k as the number of considered neighbors optimized on an independent hold-out meta-parameter tuning set over a fixed range of values. It should be noted that kNN is known to be very efficient in general but requires the storage of the full training set and is hence very unattractive in the test phase due to high memory load and computation costs. In the case of proximity data, a new test sample has to be compared to all training points to be mapped into the kNN model. We also compare to an SVM with different eigenvalue corrections, the SVM-proxy approach proposed by J. Chen and Ye (2008), and two native methods: the iKFD and PCVM approaches already discussed.
Table 4: Overview of the data sets (Aural Sonar, Chromosoms, Delft, FaceRec, ProDom, Protein, Sonatas, SwissProt, Voting, and Zongker): number of points, number of classes, whether the classes are balanced, and the numbers of positive (+EV) and negative (-EV) eigenvalues of each similarity matrix.
8.1 Data Sets
We consider data sets already used in Y. Chen et al. (2009) and Duin (2012) and additional larger-scale problems. All data are used as similarity matrices (dissimilarities have been converted to similarities by double-centering in advance) and shown in Figures 9 and 12. The data sets are from very different practical domains such as sequence alignments, image processing, or audio data analysis.
8.1.1 Aural Sonar
The Aural Sonar data set is taken from Philips, Pitton, and Atlas (2006), investigating the human ability to distinguish different types of sonar signals by ear. (For properties of this data set, see Figures 8a, 9a, and 10a.) The signals were returns from a broadband active sonar system, with 50 target-of-interest signals and 50 clutter signals. Every pair of signals was assigned a similarity score from 1 to 5 by two randomly chosen human subjects unaware of the true labels, and these scores were added to produce a similarity matrix with integer values from 2 to 10 (Y. Chen et al., 2009).
Embeddings of the similarity matrices of Aural Sonar, Chromosom, Delft, and ProDom using t-SNE.
Visualization of the proxy kernel matrices of Aural Sonar, Chromosom, Delft, and ProDom.
Eigenspectra of the proxy kernel matrices of Aural Sonar, Chromosom, Delft, and ProDom.
8.1.2 Chromosom
The Chromosoms data set, taken from Duin (2012), contains pairwise dissimilarities between band profiles of human chromosomes, computed by an edit distance on the profile strings (see Figures 8b to 10b).
8.1.3 Delft
The Delft gestures data set (DS5, 1,500 points, 20 classes, balanced), taken from Duin (2012), is a set of dissimilarities generated from a sign-language interpretation problem (see Figures 8c to 10c). It consists of 1,500 points in 20 classes with 75 points per class. The gestures are measured
by two video cameras observing the positions of the two hands in 75 repetitions of
creating 20 different signs. The dissimilarities are computed using a dynamic
time-warping procedure on the sequence of positions (Lichtenauer, Hendriks, &
Reinders, 2008).
8.1.4 Face Rec
The Face Rec data set consists of 945 sample faces of 139 people from the NIST Face
Recognition Grand Challenge data set. There are 139 classes, one for each person.
Similarities for pairs of the original three-dimensional face data were computed as the
cosine similarity between integral invariant signatures based on surface curves of the
face (Feng, Krim, & Kogan, 2007) with a a
signature of
8.1.5 ProDom
The ProDom data set consists of protein sequences with class labels (see Figures 8d to 10d). It contains a comprehensive set of protein families and appeared first in the work of Roth et al. (2002), with the pairwise structural alignments computed by Roth et al. Each sequence belongs to a group labeled by experts; here we use the data as provided in Duin (2012).
8.1.6 Protein
The Protein data set has sequence-alignment similarities for proteins from four classes, where classes 1
through 4 contain 72, 72, 39, and 30 points, respectively (Hofmann & Buhmann, 1997). (See Figures 11a to 13a.) The signature is (170, 40, 3).
Embeddings of the similarity matrices of Protein, SwissProt, Voting, and Zongker using t-SNE.
Visualization of the proxy kernel matrices of Protein, SwissProt, Voting, and Zongker.
Eigenspectra of the proxy kernel matrices of Protein, SwissProt, Voting, and Zongker.
8.1.7 Sonatas
The Sonatas data set contains complex symbolic data taken from Mokbel, Hasenfuss, and Hammer (2009). It comprises pairwise dissimilarities between 1,068 sonatas from the classical period (by Beethoven, Mozart, and Haydn) and
the baroque era (by Scarlatti and Bach). The musical pieces were given in the MIDI file
format, taken from the online MIDI collection Kunst der Fuge.9 Their mutual dissimilarities were measured with the
normalized compression distance (NCD; see Cilibrasi & Vitányi, 2005). The musical pieces are classified according to their
composer.
8.1.8 SwissProt
The SwissProt data set consists of 5,791 protein sequences taken as a subset from the popular SwissProt database of protein sequences (Boeckmann et al., 2003; see Figures 11b to 13b). The considered subset of the SwissProt database refers to release 37. A typical protein sequence consists of a string of amino acids, and the length of the full sequences varies strongly from sequence to sequence. The most frequent classes, such as Globin, Cytochrome b, and Protein kinase st, as provided by the Prosite labeling (Gasteiger et al., 2003), were taken, leading to 5,791 sequences. Due to this choice, an associated classification problem maps the sequences to their corresponding Prosite labels. These sequences are
compared using Smith-Waterman, which computes a local alignment of sequences (Gusfield, 1997). This database is the standard source for
identifying and analyzing protein sequences such that an automated classification and
processing technique would be very desirable.
8.1.9 Voting
The Voting data set comes from the UCI Repository (see Figures 11c to 13c). It is a two-class classification problem with 435 points, where each sample is a categorical feature vector with 16 components and three possibilities for each component. We compute the value difference metric (Stanfill & Waltz, 1986) from the categorical data, a dissimilarity that uses the training class labels to weight different components differently so as to achieve maximum probability of class separation.
8.1.10 Zongker
The Zongker digit dissimilarity data set (2,000 points in 10 classes) from Duin (2012) is based on deformable template matching (see Figures 11d to 13d). The dissimilarity measure was computed between handwritten NIST digits in 10 classes, with 200 entries per class, as a result of an iterative optimization of the nonlinear deformation of the grid (Jain & Zongker, 1997).
We also show the eigenspectra of the data sets in Figures 10 and 13, indicating how strongly a data set violates the metric properties. Additionally, some summarizing information about the data sets is provided in Table 4, and t-SNE embeddings of the data are shown in Figures 8 and 11 to give a rough indication of whether the data are classwise multimodal. Furthermore, we can inspect local neighborhood relations and whether data sets are more overlapping or well separated.10
We observe that there is no clear winning method, but we find an advantage for SVM-Squared (four times best) and kNN (three times best). If we remove kNN from the ranking due to the high costs in the test phase, the best two approaches would be SVM-Squared and iKFD.
If we analyze the prediction accuracy with respect to the negativity fraction (NF) of
the data, as shown in Figure 14, one can see that with increasing NF, the performance
variability of the methods increases. In a further experiment, we take the Protein data
and actively vary the negativity of the eigenspectrum by varying the number of negative
eigenvalues fixed at zero. We analyze the behavior of an SVM classifier by using the
different eigenvalue correction methods already discussed. The results are shown in
Figure 15. We see that for vanishing negativity, all correction methods perform very similarly. With increasing negativity, the differences between the eigenvalue correction methods become more pronounced. As the negativity grows, larger negative eigenvalues are included in the data, and we observe that flip and
square show a beneficial behavior. Without any corrections (blue dotted line), the
accuracy drops significantly with increasing negativity. The shift approach is the
worst. With respect to the discussion in section 5.4, this can now be easily explained. For the Protein data, the largest
negative eigenvalues are obviously encoding relevant information and smaller negative
eigenvalues appear to encode noise. The shift approach removes the largest negative
eigenvalue, suppresses the second, and so on, while increasing all originally
nonnegative eigenvalue contributions, including those close to zero. Similar
observations hold for the other data sets.
Analysis of eigenvalue correction approaches with respect to the negativity of the data sets. For each data set and each correction method, we show the prediction accuracy of the SVM with respect to the negativity of the data. The performance variability of the methods increases with increasing negativity of the eigenspectrum.
Analysis of eigenvalue correction approaches using the Protein data with varying negativity. The prediction accuracies have been obtained by using SVM. An increase in the negativity, such that the data set is less metric, leads to stronger errors in the SVM model. This effect is severe for larger negativity, especially for the shift correction or if no correction is applied.
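For reference, a minimal sketch of the four eigenvalue corrections compared here (clip, flip, square, shift), applied to a symmetric similarity matrix; the function name and interface are ours.

```python
import numpy as np

def correct_eigenvalues(S, mode="clip"):
    """Return a psd version of the symmetric similarity matrix S."""
    vals, vecs = np.linalg.eigh(S)
    if mode == "clip":
        vals = np.maximum(vals, 0.0)              # set negative eigenvalues to zero
    elif mode == "flip":
        vals = np.abs(vals)                       # keep magnitudes, flip negative signs
    elif mode == "square":
        vals = vals ** 2                          # equivalent to using S @ S
    elif mode == "shift":
        vals = vals - min(vals.min(), 0.0)        # add |smallest negative eigenvalue| to all
    return vecs @ np.diag(vals) @ vecs.T
```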
9 Discussion
This review shows that learning with indefinite proximities is a complex task that can be
addressed by a variety of methods. We discussed the sources of indefiniteness in proximity
data and have outlined a taxonomy of algorithmic approaches. We have identified two major
methodological directions: approaches modifying the input proximities such that a metric
representation is obtained and algorithmic formulations of dedicated methods that are
insensitive to metric violations. The metric direction is the most established field with a
variety of approaches and algorithms. From our experiments in section 8, we found that for many data sets, the differences between
algorithms of the metric direction are only minor regarding the prediction accuracy on the
test data. Small advantages could be found for the square and flip approaches. In particular, shift is in general worse than the other approaches, followed by clip. From the experiments,
one can conclude that the correction of indefinite proximities to metric ones is in general
effective. If the indefiniteness can be attributed to a significant amount of noise, a
clipping operation is preferable because it will reduce the noise in the input. If the
indefiniteness is due to relevant information, it is better to keep this information in the
data representation (e.g., by using the square operation). Besides the effect on model
accuracy, the methods also differ in the way out-of-sample extensions are treated and with
respect to the overall complexity of the approaches. We have addressed these topics in the
respective sections and provided efficient approximation schemes for some of the methods
given that the input data have low rank. If the rank of the input data is rather high,
approximations are inappropriate, and the methods have $\mathcal{O}(N^3)$ complexity.
The alternative direction is to preserve the input data in their given form and generate
models that are insensitive to indefinite proximities or can be directly derived in the
pseudo-Euclidean space. Comparing the results in Table 5, we observe that the methods that avoid modifications of the input proximities are in general competitive, but at a complexity of $\mathcal{O}(N^3)$. However, for many of these methods, low-rank approximation schemes can be applied as well. As a very
simple alternative, we also considered the nearest-neighbor classifier, which worked
reasonably well. However, NN is known to be very sensitive to outliers and requires the
storage of all training points to calculate out-of-sample extensions.
Table 5: Test-set prediction accuracies of PCVM (B1), iKFD (B2), kNN, SVM (no correction), SVM-Flip (A1), SVM-Clip, SVM-Squared, SVM-Shift, and SVM-Proxy (A3) on the ten data sets. The uncorrected SVM did not converge on ProDom and Zongker, and SVM-Proxy results are available only for the smaller data sets (Aural Sonar, Protein, and Voting).
In conclusion, the machine learning expert has to know a bit about the underlying data and especially the proximity function used to make an educated decision. In particular:
If the proximity function is derived from a mathematical distance or inner product, the presence of negative eigenvalues is likely caused by numerical errors. In this case, a very simple eigenvalue correction of the proximity matrix (e.g., clipping) (A1) may be sufficient.
If the given proximity function is domain specific and nonmetric, more careful modifications of the proximity matrix are in order (as discussed in sections 5.1 and 5.2 and shown in the experiments in section 8).
For asymmetric proximity measures, we have provided links to the few existing methods capable of dealing with asymmetric proximity matrices (see A2, B1). However, all of them are either costly in the model generation or in the out-of-sample extension (application to new test points). Fortunately, some form of symmetrization of the proximity matrix is often acceptable. For example, in the analysis of biological sequences, the proximity scores are in general almost symmetric and a symmetrization leads to no performance degradation.
If the rank of the proximity matrix is rather high (e.g., the FaceRec data), low-rank approximations (see section 7) will lead to information loss.
There are many open research questions in the field of indefinite proximity learning. The handling of nonmetric data is still not very convenient, although a compact set of efficient methods is available. As indefinite proximities can occur due to numerical errors or noise, it would be desirable to have a more systematic procedure to isolate these components from those that carry relevant information. It would also be very desirable to have a larger benchmark of indefinite proximity data similar to the UCI database for (most often) vectorial data sets. Within the algorithms, we can also find various open topics: the set of algorithms with explicit formulations in the Krein space (Haasdonk & Pekalska, 2008; Pekalska & Haasdonk, 2009; Liwicki et al., 2013; Zafeiriou, 2012) is still very limited. Further, the run-time performance for the processing of large-scale data is often inappropriate. It would also be of interest whether some of the methods can be extended to asymmetric input data or whether concepts from the analysis of large asymmetric graph networks can be transferred to the analysis of indefinite proximities.
Data Sets and Implementations
The data sets used in this review have been made available at http://promos-science.blogspot.de/p/blog-page.html. Parts of the implementations of the algorithms discussed can be accessed at http://www.techfak.uni-bielefeld.de/∼fschleif/review/. An implementation of the probabilistic classification vector machine is available at https://mloss.org/software/view/610/.
Acknowledgments
This work was funded by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Program (PIEF-GA-2012-327791). P.T. was also supported by EPSRC grant EP/L000296/1.
References
Notes
The associated similarity matrix can be obtained by double centering (Pekalska & Duin, 2005) of the dissimilarity matrix: $S = -\frac{1}{2} J D J$ with $J = I - \frac{1}{N}\mathbf{1}\mathbf{1}^\top$, identity matrix $I$, and vector of ones $\mathbf{1}$.
The validity of the transformation function can be easily shown by a direct calculation. Similar derivations can also be found for the other transformation functions (flip, shift, square).
Later extended to regression and one-class SVM.
It should be noted that the two-dimensional embedding is neither unique nor perfect because the intrinsic dimensionality of the observed matrix space is larger and t-SNE is a stochastic embedding technique. But with different parameter settings and multiple runs at different random starting points, we consistently observed similar results. As only local relations are valid within the t-SNE embedding, the Chen solutions can also be close to, for example, the squared matrix in the high-dimensional manifold and may potentially have been torn apart in the plot.
In general a matrix with negative entries can still be psd.
It can effectively be less than N if the rank of the matrix is smaller than N.
Unrelated to the eigenspectrum shift approach mentioned before.
In general the input is a symmetric kernel matrix, but the method is not restricted in this way.
t-SNE visualizations are not unique, and we have adapted the perplexity parameter to obtain reasonable visualizations.