Abstract

We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog of ridge regression. The method is unsupervised and gives worst-case guarantees on the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic data and on real-world data sets (a subset of the TechTC-300 collection) to support our theory. Experimental results indicate that the proposed methods perform better than existing feature selection methods.

1  Introduction

Ridge regression, a popular technique in machine learning and statistics, is a commonly used penalized regression method. Regularized least-squares classifier (RLSC) is a simple classifier based on least squares and has a long history in machine learning (Zhang & Peng, 2004; Poggio & Smale, 2003; Rifkin, Yeo, & Poggio, 2003; Fung & Mangasarian, 2001; Suykens & Vandewalle, 1999; Zhang & Oles, 2001; Agarwal, 2002). RLSC is also the classification analogue to ridge regression. It has been known to perform comparably to the popular support vector machines (SVM) (Rifkin et al., 2003; Fung & Mangasarian, 2001; Suykens & Vandewalle, 1999; Zhang & Oles, 2001). RLSC can be solved by simple vector space operations and does not require quadratic optimization techniques like SVM.

We propose a deterministic feature selection technique for RLSC with provable guarantees. There exist numerous feature selection techniques that work well empirically. There also exist randomized feature selection methods, such as leverage-score sampling (Dasgupta, Drineas, Harb, Josifovski, & Mahoney, 2007), that come with provable guarantees and work well empirically. But randomized methods have a failure probability and have to be rerun multiple times to get accurate results. Moreover, a randomized algorithm may not select the same features in different runs, whereas a deterministic algorithm selects the same features regardless of how many times it is run. This becomes important in many applications. Unsupervised feature selection involves selecting features oblivious to the class labels.

In this work, we present a new, provably accurate unsupervised feature selection technique for RLSC. We study a deterministic sampling-based feature selection strategy for RLSC with provable nontrivial worst-case performance bounds. We also use single-set spectral sparsification and leverage-score sampling as unsupervised feature selection algorithms for ridge regression in the fixed design setting. Because the methods are unsupervised, they are well suited to the fixed design setting, where the target variables have additive homoskedastic noise. The algorithms sample a subset of the features from the original data matrix and then perform the regression task on the reduced-dimension matrix. We provide risk bounds for the feature selection algorithms on ridge regression in the fixed design setting.

The number of features selected by both algorithms is proportional to the rank of the training set. The deterministic sampling-based feature selection algorithm performs better in practice when compared to existing methods of feature selection.

2  Our Contributions

We introduce single-set spectral sparsification as a provably accurate deterministic feature selection technique for RLSC in an unsupervised setting. The number of features selected by the algorithm is independent of the total number of features and depends only on the number of data points. The algorithm selects a small number of features and solves the classification problem using those features. Dasgupta et al. (2007) used a leverage-score-based randomized feature selection technique for RLSC and provided worst-case guarantees of the approximate classification function relative to the one obtained using all features. We use a deterministic algorithm to provide worst-case generalization error guarantees. The deterministic algorithm does not come with a failure probability, and the number of features required by the deterministic algorithm is less than that required by the randomized algorithm: the leverage-score-based algorithm has a larger sampling complexity and carries a failure probability δ, whereas single-set spectral sparsification requires only O(n/ε²) features to be picked, where n is the number of training points and ε is an accuracy parameter. As in Dasgupta et al. (2007), we also provide additive-error approximation guarantees for any test point and relative-error approximation guarantees for test points that satisfy certain conditions with respect to the training set.

We introduce single-set spectral sparsification and leverage-score sampling as unsupervised feature selection algorithms for ridge regression and provide risk bounds for the subsampled problems in the fixed design setting. The risk in the sampled space is comparable to the risk in the full-feature space. We give relative-error guarantees of the risk for both feature selection methods in the fixed design setting.

From an empirical perspective, we evaluate single-set spectral sparsification on synthetic data and 48 document-term matrices, which are a subset of the TechTC-300 (Davidov, Gabrilovich, & Markovitch, 2004) data set. We compare the single-set spectral sparsification algorithm with leverage-score sampling, information gain, rank-revealing QR factorization (RRQR), and random feature selection. We do not report running times because feature selection is an offline task. The experimental results indicate that single-set spectral sparsification outperforms all the methods in terms of out-of-sample error for all 48 TechTC-300 data sets. We observe that a much smaller number of features is required by the deterministic algorithm to achieve good performance when compared to leverage-score sampling.

3  Background and Related Work

3.1  Notation

denote matrices and denote column vectors; (for all ) is the standard basis, whose dimensionality will be clear from context; and is the identity matrix. The singular value decomposition (SVD) of a matrix is equal to where is an orthogonal matrix containing the left singular vectors, is a diagonal matrix containing the singular values , and is a matrix containing the right singular vectors. The spectral norm of is . and are the largest and smallest singular values of . is the condition number of . denotes any orthogonal matrix whose columns span the subspace orthogonal to . A vector can be expressed as for some vectors and , that is, has one component along and another component orthogonal to .

3.2  Matrix Sampling Formalism

We now present the tools of feature selection. Let be the data matrix consisting of n points and d dimensions, and let be a matrix such that contains r rows of . Matrix is a binary indicator matrix, which has exactly one nonzero element in each row; the nonzero element of indicates which row of will be selected. Let be the diagonal matrix such that rescales the rows of that are in . The matrices and are called the sampling and rescaling matrices, respectively. We will replace the sampling and rescaling matrices by a single matrix , where denotes the matrix specifying which of the r rows of are to be sampled and how they are to be rescaled.
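
To make the formalism concrete, the following minimal NumPy sketch builds a sampling matrix, a rescaling matrix, and their product. The selected feature indices and weights here are placeholders (choosing them is exactly the job of the algorithms in section 4), and we assume features are stored as the rows of the data matrix, as described above.

```python
import numpy as np

# A minimal sketch, assuming the data matrix A stores the d features as rows
# and the n points as columns; the selected indices and weights are hypothetical.
d, n, r = 6, 4, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((d, n))

rows = np.array([0, 2, 5])               # indices of the r selected features (placeholder)
weights = np.array([1.1, 0.9, 1.3])      # positive rescaling factors (placeholder)

S = np.zeros((r, d))
S[np.arange(r), rows] = 1.0              # binary sampling matrix: one nonzero per row
D = np.diag(weights)                     # diagonal rescaling matrix
R = D @ S                                # combined sampling-and-rescaling matrix

A_reduced = R @ A                        # r x n matrix of selected, rescaled features
assert np.allclose(A_reduced, weights[:, None] * A[rows])
```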

3.3  RLSC Basics

Consider training data of n points in d dimensions with respective labels for . The solution of binary classification problems via Tikhonov regularization in a reproducing kernel Hilbert space (RKHS) using the squared loss function results in the regularized least-squares classification (RLSC) problem (Rifkin et al., 2003), which can be stated as
\min_{x \in \mathbb{R}^n} \; \|Kx - y\|_2^2 + \lambda\, x^\top K x
3.1
where is the kernel matrix defined over the training data set, is a regularization parameter, and is the n-dimensional class label vector. In matrix notation, the training data set is a matrix, consisting of n data points and d features . Throughout this study, we assume that is a full-rank matrix. We shall consider the linear kernel, which can be written as Using the SVD of , the optimal solution of equation 3.1 in the full-dimensional space is
formula
3.2
The vector can be used as a classification function that generalizes to test data. If is the new test point, then the binary classification function is
formula
3.3
Then, gives the predicted label ( or ) to be assigned to the new test point .
Our goal is to study how RLSC performs when the deterministic sampling-based feature selection algorithm is used to select features in an unsupervised setting. Let be the matrix that samples and rescales r rows of , thus reducing the dimensionality of the training set from d to r, where r is proportional to the rank of the input matrix. The transformed data set in r dimensions is given by , and the RLSC problem becomes
formula
3.4
giving an optimal vector . The new test point is first dimensionally reduced to , where , and then classified by the function,
formula
3.5
In subsequent sections, we will assume that the test point is of the form . The first part of the expression is the portion of the test point that is similar to the training set, and the second part shows how much the test point is novel compared to the training set; that is, measures how much of lies outside the subspace spanned by the training set.
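
As a concrete illustration (not the paper's exact notation), the following sketch trains a linear-kernel RLSC classifier in the full space and in a sampled space, assuming the common formulation in which the dual solution is x_opt = (K + λI)⁻¹y and a test point is classified by the sign of its kernel expansion; the matrix R is any r × d sampling-and-rescaling matrix, for example one produced by BSS.

```python
import numpy as np

def rlsc_train(X, y, lam):
    """X: n x d training points (rows), y: labels in {-1, +1}.
    Returns x_opt = (K + lam*I)^{-1} y for the linear kernel K = X X^T,
    assuming the standard RLSC formulation (an assumption of this sketch)."""
    n = X.shape[0]
    K = X @ X.T
    return np.linalg.solve(K + lam * np.eye(n), y)

def rlsc_classify(X, x_opt, q):
    """Predicted label of a test point q: sign of the kernel expansion."""
    return np.sign((X @ q) @ x_opt)

# After feature selection with an r x d matrix R, the same routines are used
# with X R^T in place of X and R q in place of q.
def rlsc_train_sampled(X, y, lam, R):
    return rlsc_train(X @ R.T, y, lam)

def rlsc_classify_sampled(X, x_opt_s, q, R):
    return rlsc_classify(X @ R.T, x_opt_s, R @ q)
```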

3.4  Ridge Regression Basics

Consider a data set of n points in d dimensions with . Here, contains n independent and identically distributed (i.i.d.) samples from the d-dimensional independent variable, and is the real-valued response vector. Ridge regression (RR), or Tikhonov regularization, penalizes the norm of a parameter vector and shrinks the estimated coefficients toward zero. In the fixed design setting, we have , where is the homoskedastic noise vector with mean 0 and variance . Let be the solution to the ridge regression problem. The RR problem is stated as
formula
3.6
The solution to equation 3.6 is . One can also solve the same problem in the dual space. Using the change of variables , where , and letting be the linear kernel defined over the training data set, the optimization problem becomes
formula
3.7
Throughout this study, we assume that is a full-rank matrix. Using the SVD of , the optimal solution in the dual space (see equation 3.7) for the full-dimensional data is given by The primal solution is
In the sampled space, we have The dual problem in the sampled space can be posed as
formula
3.8
The optimal dual solution in the sampled space is The primal solution is
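
For concreteness, here is a small sketch of primal and dual ridge regression, assuming the unnormalized objective min_w ||y − Xw||² + λ||w||² (the paper's exact scaling of λ may differ). The standard identity Xᵀ(XXᵀ + λI)⁻¹ = (XᵀX + λI)⁻¹Xᵀ makes the two solutions coincide, which is why one can work in whichever space is smaller; in the sampled space, X is simply replaced by its reduced-feature version.

```python
import numpy as np

def ridge_primal(X, y, lam):
    # w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_dual(X, y, lam):
    # alpha = (K + lam I)^{-1} y with K = X X^T; the primal vector is w = X^T alpha
    n = X.shape[0]
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    return X.T @ alpha

rng = np.random.default_rng(1)
X, y = rng.standard_normal((10, 40)), rng.standard_normal(10)
assert np.allclose(ridge_primal(X, y, 0.5), ridge_dual(X, y, 0.5))

# In the sampled space, X is replaced by X R^T for an r x d
# sampling-and-rescaling matrix R, and the same two routines apply.
```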

3.5  Related Work

The work most closely related to ours is that of Dasgupta, Drineas, Harb, Josifovski, and Mahoney (2007), who used a leverage-score-based randomized feature selection technique for RLSC and provided worst-case bounds comparing the approximate classifier with the classifier that uses all features. The proof of their main quality-of-approximation results provides an intuition for the circumstances under which their feature selection method works well. The running time of leverage-score-based sampling is dominated by the time to compute the SVD of the training set (i.e., ), whereas for single-set spectral sparsification, it is . Single-set spectral sparsification is a slower but more accurate method than leverage-score sampling. Another work on dimensionality reduction for RLSC is that of Avron, Sindhwani, and Woodruff (2013), who used efficient randomized algorithms to solve RLSC in settings where the design matrix has a Vandermonde structure. This technique differs from ours, however, since their work focuses on dimensionality reduction using linear combinations of features rather than on actual feature selection.

Lu, Dhillon, Foster, and Ungar (2013) used the randomized Walsh-Hadamard transform to lower the dimension of the data matrix and subsequently solved the ridge regression problem in the lower-dimensional space. They provided risk bounds for their algorithm in the fixed design setting. However, this differs from our work, since they use linear combinations of features, while we select actual features from the data.

4  Our Main Tools

4.1  Single-Set Spectral Sparsification

We describe the single-set spectral sparsification algorithm (BSS1, for short) of Batson, Spielman, and Srivastava (2009) as algorithm  1. Algorithm  1 is a greedy technique that selects columns one at a time. Consider the input matrix as a set of d column vectors , with . Given and , we iterate over . Define the parameters and . For and a symmetric positive-definite matrix with eigenvalues , define
\Phi_L(L, A) = \sum_{i=1}^{n} \frac{1}{\lambda_i - L}, \qquad \Phi^U(U, A) = \sum_{i=1}^{n} \frac{1}{U - \lambda_i},
as the lower and upper potentials, respectively. These potential functions measure how far the eigenvalues of are from the upper and lower barriers U and L, respectively. We define and as follows:
formula
At every iteration, there exists an index and a weight such that and . Thus, there will be at most r columns selected after iterations. The running time of the algorithm is dominated by the search for an index satisfying
formula
and computing the weight . One needs to compute the upper and lower potentials and , and hence the eigenvalues of . The cost per iteration is , and the total cost is . For , we need to compute and for every , which can be done in per iteration, for a total of . Thus, the total running time of the algorithm is . We present the following lemma for the single-set spectral sparsification algorithm (algorithm 1).
formula
Lemma 1.
BSS (Batson et al., 2009). Given satisfying and , we can deterministically construct sampling and rescaling matrices and with , such that, for all ,
formula

We now present a slightly modified version of lemma 1 for our theorems:

Lemma 2.
Given satisfying and , we can deterministically construct sampling and rescaling matrices and such that for ,
formula
Proof.
From lemma 1, it follows that
formula
Thus,
formula
Similarly,
formula
Combining these, we have

Note: Let . It is possible to set an upper bound on by setting the value of r. We will assume .
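
For readers who want to experiment with the method, the following NumPy sketch implements the greedy barrier construction of Batson et al. (2009). The barrier-shift constants and the final rescaling follow the standard analysis as we read it, so details such as tie-breaking and constants may differ from algorithm 1 and from the implementation choices discussed in section 6.1.

```python
import numpy as np

def bss_weights(Vt, r):
    """Greedy single-set spectral sparsification (sketch).
    Vt is n x d with orthonormal rows (Vt @ Vt.T = I_n), e.g. the transposed
    top right singular vectors of the data; r > n is the number of weighted
    columns to pick. Returns nonnegative weights s with at most r nonzeros
    such that the eigenvalues of sum_i s_i v_i v_i^T lie (by the standard
    analysis) in [(1 - sqrt(n/r))^2, (1 + sqrt(n/r))^2]."""
    n, d = Vt.shape
    s = np.zeros(d)
    A = np.zeros((n, n))
    delta_L = 1.0
    delta_U = (1 + np.sqrt(n / r)) / (1 - np.sqrt(n / r))
    l = -np.sqrt(r * n)                       # lower barrier
    u = delta_U * np.sqrt(r * n)              # upper barrier
    for _ in range(r):
        u_new, l_new = u + delta_U, l + delta_L
        lam = np.linalg.eigvalsh(A)
        Mu = np.linalg.inv(u_new * np.eye(n) - A)
        Ml = np.linalg.inv(A - l_new * np.eye(n))
        dU = np.sum(1.0 / (u - lam)) - np.trace(Mu)   # drop in the upper potential
        dL = np.trace(Ml) - np.sum(1.0 / (lam - l))   # rise in the lower potential
        # Per-column upper/lower functions U(v_i) and L(v_i).
        quad = lambda M: np.sum(Vt * (M @ Vt), axis=0)
        UV = quad(Mu @ Mu) / dU + quad(Mu)
        LV = quad(Ml @ Ml) / dL - quad(Ml)
        i = int(np.argmax(LV - UV))           # some column satisfies U(v_i) <= L(v_i)
        t = 2.0 / (UV[i] + LV[i])             # any weight with U <= 1/t <= L works
        A += t * np.outer(Vt[:, i], Vt[:, i])
        s[i] += t
        u, l = u_new, l_new
    return s * (1 - np.sqrt(n / r)) / r       # final rescaling of the weights
```

The selected features are the indices with nonzero weight; under this convention, the rescaling factors on the diagonal of the matrix D are the square roots of the returned weights.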

4.2  Leverage Score Sampling

Our randomized feature selection method is based on importance sampling, or the so-called leverage-score sampling of Rudelson and Vershynin (2007). Let be the top-k left singular vectors of the training set . We select rows of in i.i.d. trials and rescale the rows with , where p_i is a carefully chosen probability distribution of the form
p_i = \frac{\big\| (U_k)_{(i)} \big\|_2^2}{k}, \qquad i = 1, \ldots, d,
4.1
that is, proportional to the squared Euclidean norms of the rows of the left-singular vectors. The time complexity is dominated by the time to compute the SVD of .
Lemma 3

(Rudelson & Vershynin, 2007). Let be an accuracy parameter and be the failure probability. Given satisfying , let , let p_i be as in equation 4.1, and let . Construct the sampling and rescaling matrix . Then, with probability at least ,
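
A minimal sketch of this sampling procedure, assuming (as in section 3.2) that features are the rows of the data matrix; the exact constant in the required sample size r is dictated by lemma 3 and is not enforced here.

```python
import numpy as np

def leverage_score_sampling(A, r, k=None, seed=0):
    """A: d x n data matrix with features as rows. Samples r features i.i.d.
    with probabilities p_i proportional to the squared Euclidean norms of the
    rows of the top-k left singular vectors, rescaling each draw by 1/sqrt(r p_i)."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    if k is None:
        k = np.linalg.matrix_rank(A)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    p = np.sum(U[:, :k] ** 2, axis=1) / k      # leverage-score distribution (sums to 1)
    rows = rng.choice(d, size=r, replace=True, p=p)
    weights = 1.0 / np.sqrt(r * p[rows])
    return rows, weights                       # reduced matrix: weights[:, None] * A[rows]
```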

5  Theory

In this section, we describe the theoretical guarantees of RLSC using BSS and also the risk bounds of ridge regression using BSS and leverage-score sampling. Before we begin, we state the following lemmas from numerical linear algebra, which will be required for our proofs.

Lemma 4

(Stewart & Sun, 1990). For any matrix , such that is invertible,

Lemma 5
(Stewart & Sun, 1990). Let and be invertible matrices. Then
formula
Lemma 6

(Demmel & Veselic, 1992). Let and be matrices such that the product is a symmetric positive-definite matrix with matrix . Let the product be a perturbation such that . Here, corresponds to the smallest eigenvalue of . Let be the ith eigenvalue of , and let be the ith eigenvalue of . Then,

Lemma 7.

Let . Then

The proof of this lemma is similar to that of lemma 4.3 of Drineas, Mahoney, and Muthukrishnan (2006).

5.1  Our Main Theorems on RLSC

Theorem 1 gives additive-error guarantees on the generalization bound of the approximate classifier with respect to the classifier obtained with no feature selection. The classification error bound of BSS on RLSC depends on the condition number of the training set and on how much of the test set lies in the subspace of the training set.

Theorem 1.

Let be an accuracy parameter and be the number of features selected by BSS. Let be the matrix, as defined in lemma 2. Let with , be the training set, the reduced dimensional matrix, and the test point of the form . Then the following hold:

  • If , then

  • If , then

Proof.
We assume that is a full-rank matrix. Let and . Using the SVD of , we define
formula
5.1
The optimal solution in the sampled space is given by
formula
5.2
It can be proven easily that and are invertible matrices. We focus on the term . Using the SVD of , we get
formula
5.3
formula
5.4
Equation 5.3 follows because of the fact that and by substituting from equation 3.2. Equation 5.4 follows from the fact that the matrices and are invertible. Now,
formula
5.5
formula
5.6
We bound equations 5.5 and 5.6 separately. Substituting the values of and :
formula
5.7
The last line follows from lemma 4, which states that , where . The spectral norm of is bounded by
formula
5.8
We now bound equation 5.5. Substituting equations 5.4 and 5.7 in 5.5,
formula
The last line follows because of lemma 5 and the fact that all matrices involved are invertible. Here,
formula
Since the spectral norms of and are bounded, we need only to bound the spectral norm of to bound the spectral norm of . The spectral norm of the matrix is the inverse of the smallest singular value of . From perturbation theory of matrices (Stewart & Sun, 1990) and equation 5.8, we get
formula
Here, represents the ith singular value of the matrix . Also, where are the singular values of :
formula
Thus,
formula
Here, and denote the largest and smallest singular value of . Since , (condition number of ), we bound equation 5.5:
formula
5.9
For , the term in equation 5.9 is always larger than , so it can be upper-bounded by (assuming ). Also,
formula
This follows from the fact that and as is a full-rank orthonormal matrix and the singular values of are equal to , making the spectral norm of its inverse at most one. Thus we get,
formula
5.10
We now bound equation 5.6. Expanding equation 5.6 using SVD and ,
formula
The first inequality follows from , and the second inequality follows from Lemma 7. To conclude the proof, we bound the spectral norm of . Note that from equation 5.1, and ,
formula
One can get a lower bound for the smallest singular value of using matrix perturbation theory and by comparing the singular values of this matrix to the singular values of We get
formula
5.11
We assumed that , which implies . Combining these, we get
formula
5.12
Combining equations 5.10 and 5.12, we complete the proof for the case . For , equation 5.9 becomes zero, and the result follows.

Our next theorem provides relative-error guarantees to the bound on the classification error when the test point has no new components, that is,

Theorem 2.
Let be an accuracy parameter, be the number of features selected by BSS, and . Let be the test point of the form , that is, it lies entirely in the subspace spanned by the training set, and the two vectors and satisfy the property
formula
for some constant . If we run RLSC after BSS, then
formula

The proof follows directly from the proof of theorem 1 if we consider .

5.2  Our Main Theorems on Ridge Regression

We compare the risk of subsampled ridge regression with the risk of true dual ridge regression in the fixed design setting. Recall that the response vector where is the homoskedastic noise vector with mean 0 and variance . Also, we assume that the data matrix is of full rank.

Lemma 8.
Let be the rank of . Form using BSS. Then,
formula
where For p.s.d matrices means is p.s.d.
Proof.
Using the SVD of , . Lemma 2 implies
formula
Multiplying the inequality above on the left and right by and , respectively, completes the proof.
Lemma 9.
Let be the rank of . Form using leverage-score sampling. Then, with probability at least , where ,
formula
where

5.3  Risk Function for Ridge Regression

Let . The risk for a prediction function is . For any positive symmetric matrix , we define the following risk function:
formula
Theorem 3.

Under the fixed design setting, the risk for the ridge regression solution in the full-feature space is , and the risk for the ridge regression in the reduced dimensional space is

Proof.
The risk of the ridge regression estimator in the reduced dimensional space is
formula
5.13
Taking as , we can write equation 5.13 as
formula
The expectation is only over the random noise and is conditional on the feature selection method used.
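
As a sanity check on these risk expressions, one can estimate the fixed-design risk by Monte Carlo over fresh noise draws. The sketch below uses uniform random feature selection as a stand-in for BSS or leverage-score sampling and the common definition of risk as the average squared error of the fitted values against the noiseless signal; it is not the closed-form expression of theorem 3.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, r, lam, sigma = 30, 200, 60, 0.5, 1.0
X = rng.standard_normal((n, d))
signal = X @ rng.standard_normal(d)            # fixed design: X and the signal are fixed

cols = rng.choice(d, size=r, replace=False)    # stand-in for BSS / leverage-score selection
Xs = X[:, cols]

def fitted(Xd, y):
    # dual ridge fitted values: K (K + lam I)^{-1} y
    K = Xd @ Xd.T
    return K @ np.linalg.solve(K + lam * np.eye(n), y)

trials, risks = 2000, np.zeros(2)
for _ in range(trials):
    y = signal + sigma * rng.standard_normal(n)        # fresh homoskedastic noise
    risks += [np.mean((fitted(X, y) - signal) ** 2),
              np.mean((fitted(Xs, y) - signal) ** 2)]
print("estimated risk, full features: %.3f, sampled features: %.3f" % tuple(risks / trials))
```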

Our next theorem bounds the risk inflation of ridge regression in the reduced dimensional space compared with the ridge regression solution in the full-feature space.

Theorem 4.
Let be the rank of the matrix . When using leverage-score sampling as a feature selection technique, with probability at least , where ,
formula
where
Proof.
For any positive semidefinite matrix, , we define the bias and variance of the risk function as follows:
formula
Therefore, . Now, from Bach (2013), we know that is nonincreasing in and is nondecreasing in . When lemma 9 holds,
formula

We can prove a similar theorem for BSS.

Theorem 5.
Let be the rank of the matrix . When using BSS as a feature selection technique, with
formula

6  Experiments

All experiments were performed in Matlab R2013b on an Intel i-7 processor with 16GB RAM.

6.1  BSS Implementation Issues

Batson et al. (2009) do not provide any implementation details of the BSS algorithm. Here we discuss several issues arising during the implementation.

6.1.1  Choice of Column Selection

At every iteration, there are multiple columns that satisfy the condition . Batson et al. (2009) suggest picking any column that satisfies this constraint. Instead of breaking ties arbitrarily, we choose the column that has not been selected in previous iterations and whose Euclidean norm is highest among the candidate set. Columns with zero Euclidean norm never get selected by the algorithm. In the inner loop of algorithm 1, and have to be computed for all d columns in order to pick a good column. This step can be done efficiently using a single line of Matlab code by making use of matrix and vector operations.
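
The tie-breaking rule can be written in a few vectorized lines. The sketch below assumes the per-column upper and lower function values UV and LV of the current iteration are available (as in the BSS sketch in section 4.1) and that `selected` is a boolean mask of columns already picked; all names here are illustrative.

```python
import numpy as np

def pick_column(Vt, UV, LV, selected):
    # The quadratic forms v_i^T M v_i for all columns at once can be computed
    # as np.sum(Vt * (M @ Vt), axis=0), which is the vectorized inner-loop step.
    candidates = np.flatnonzero((LV >= UV) & ~selected)   # columns satisfying the constraint
    norms = np.linalg.norm(Vt[:, candidates], axis=0)     # their Euclidean norms
    return candidates[np.argmax(norms)]                   # highest-norm unselected candidate
```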

6.2  Other Feature Selection Methods

In this section, we describe other feature-selection methods with which we compare BSS.

6.2.1  Rank-Revealing QR Factorization

Within the numerical linear algebra community, subset selection algorithms use the so-called rank-revealing QR (RRQR) factorization. Here we slightly abuse notation and treat as a short and fat matrix, as opposed to a tall and thin one. Let be a matrix with , let k be an integer with , and consider partial QR factorizations of the form
formula
where is an orthogonal matrix, is a permutation matrix, and . The above factorization is called an RRQR factorization if , where p(k, d) is a function bounded by a low-degree polynomial in k and d. The important columns are given by and with . We perform feature selection using RRQR by picking the important columns that preserve the rank of the matrix.
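
In practice, a column-pivoted QR factorization (for example, scipy.linalg.qr with pivoting=True) is a convenient stand-in for a strict RRQR factorization: the pivot order ranks columns by greedy importance, and the first k pivots give the selected features. A hedged sketch, assuming features are the columns of the short-and-fat matrix as above:

```python
from scipy.linalg import qr

def rrqr_select(A, k):
    """Select k 'important' columns of A via column-pivoted QR.
    Column pivoting is not guaranteed to be rank-revealing in the strict
    sense, but it is the usual practical choice."""
    _, _, piv = qr(A, mode='economic', pivoting=True)
    return piv[:k]

# Usage: if features are the rows of a d x n data matrix, pass its transpose:
# selected = rrqr_select(A.T, k)
```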

6.2.2  Random Feature Selection

We select features uniformly at random without replacement; this serves as a baseline method. To account for the randomness, we repeat the sampling process five times.

6.2.3  Leverage-Score Sampling

For leverage-score sampling, we repeat the experiments five times to account for the randomness. We pick the top-k left singular vectors of , where k is the rank of the matrix .

6.2.4  Information Gain

The information gain (IG) feature selection method (Yang & Pedersen, 1997) measures the amount of information obtained for binary class prediction by knowing the presence or absence of a feature in a data set. The method is a supervised strategy, whereas the other methods used here are unsupervised.
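
For completeness, here is a small sketch of the IG score for binary class prediction with presence/absence features, following the usual entropy-based definition (Yang & Pedersen, 1997); the smoothing and thresholding details of the original experiments are not specified here, so this is only illustrative.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(X, y):
    """IG of each presence/absence feature for binary class prediction (sketch).
    X: n x d matrix of term counts, y: labels in {-1, +1}."""
    n, d = X.shape
    labels = (y > 0).astype(int)
    h_y = entropy(np.bincount(labels, minlength=2).astype(float))
    gains = np.zeros(d)
    for j in range(d):
        present = X[:, j] > 0
        h_cond = 0.0
        for mask in (present, ~present):
            if mask.any():
                h_cond += mask.mean() * entropy(np.bincount(labels[mask], minlength=2).astype(float))
        gains[j] = h_y - h_cond
    return gains   # select the features with the largest gains
```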

6.3  Experiments on RLSC

The goal of this section is to compare BSS with existing feature selection methods for RLSC and show that BSS is better than the other methods.

6.3.1  Synthetic Data

We run our experiments on synthetic data where we control the number of relevant features in the data set and demonstrate the working of algorithm 1 on RLSC. We generate synthetic data in the same manner as given in Bhattacharyya (2004). The data set has n data points and d features. The class label y_i of each data point was randomly chosen to be 1 or −1 with equal probability. The first k features of each data point are drawn from a distribution, where is a random normal distribution with mean and variance and j varies from 1 to k. The remaining d − k features are chosen from a distribution. Thus, the data set has k relevant features and d − k noisy features. By construction, among the first k features, the kth feature has the most discriminatory power, followed by the (k − 1)th feature, and so on. We set n to 30 and d to 1000. We set k to 90 and 100 and ran two sets of experiments.
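
A sketch of this generator, assuming the class-conditional form in which feature j of point i is drawn from y_i · N(j, 1) for j ≤ k and from N(0, 1) otherwise (our reading of the construction in Bhattacharyya, 2004):

```python
import numpy as np

def make_synthetic(n=30, d=1000, k=90, seed=0):
    # Assumption: relevant feature j of point i is y_i * N(j, 1); the rest are N(0, 1) noise.
    rng = np.random.default_rng(seed)
    y = rng.choice([-1, 1], size=n)                 # equiprobable binary labels
    X = rng.standard_normal((n, d))                 # noise features
    means = np.arange(1, k + 1)                     # relevant feature j has mean j
    X[:, :k] = y[:, None] * (means + rng.standard_normal((n, k)))
    return X, y

X, y = make_synthetic()
```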

We set the value of r (i.e., the number of features selected by BSS) to 80 and 90 for all experiments. We performed ten-fold cross-validation and repeated it 10 times. The value of was set to 0, 0.1, 0.3, 0.5, 0.7, and 0.9. We compared BSS with RRQR, IG, and leverage-score sampling. The mean out-of-sample error was 0 for all methods for both and . Table 1 shows the set of five most frequently selected features by the different methods for one such synthetic data set across 100 training sets. The top features picked up by the different methods are the relevant features by construction and also have good discriminatory power. This shows that BSS is as good as any other method in terms of feature selection and often picks more discriminatory features than the other methods. We repeated our experiments on 10 different synthetic data sets, and each time, the 5 most frequently selected features were from the set of relevant features. Thus, by selecting only 8% to 9% of all features, we show that we are able to obtain the most discriminatory features along with good out-of-sample error using BSS.

Table 1:
Most Frequently Selected Features Using the Synthetic Data Set.
            k = 90               k = 100
BSS         89, 88, 87, 86, 85   100, 99, 98, 97, 95
RRQR        90, 80, 79, 78, 77   100, 80, 79, 78, 77
Lvg-Score   73, 85, 84, 81, 87   93, 87, 95, 97, 96
IG          80, 79, 78, 77, 76   80, 79, 78, 77, 76

BSS         90, 88, 87, 86, 85   100, 99, 98, 97, 96
RRQR        90, 89, 88, 87, 86   100, 90, 89, 88, 87
Lvg-Score   67, 88, 83, 87, 85   100, 97, 92, 48, 58
IG          90, 89, 88, 87, 86   90, 89, 88, 87, 86

Though running time is not the main subject of this study, we computed the running time of the different feature selection methods, averaged over 10 ten-fold cross-validation experiments. The time to perform feature selection was less than 1 second for each method (see Table 2), which shows that the methods can be implemented in practice.

Table 2:
Running Time of Various Feature Selection Methods (in Seconds).
                 BSS       IG       LVG      RRQR
Synthetic data   0.1025    0.0003   0.0031   0.0016
TechTC-300       75.7624   0.0242   0.4054   0.2631

Notes: For synthetic data, the running time corresponds to the experiment when and and is averaged over 10 ten-fold cross-validation experiments. For TechTC-300, the running time corresponds to the experiment when and is averaged over 10 ten-fold cross-validation experiments and over 48 TechTC-300 data sets.

6.3.2  TechTC-300

We use the TechTC-300 data (Davidov et al., 2004) consisting of a family of 295 document-term data matrices. The TechTC-300 data set comes from the Open Directory Project (ODP), a large, comprehensive directory of the web, maintained by volunteer editors. Each matrix in the TechTC-300 data set contains a pair of categories from the ODP. Each category corresponds to a label, and thus the resulting classification task is binary. The documents that are collected from the union of all the subcategories within each category are represented in the bag-of-words model, with the words constituting the features of the data (Davidov et al., 2004). Each data matrix consists of 150 to 280 documents, and each document is described with respect to 10,000 to 50,000 words. Thus, TechTC-300 provides a diverse collection of data sets for a systematic study of the performance of the RLSC using BSS. We removed all words of length at most four from the data sets. Next, we grouped the data sets based on the categories and selected data sets whose categories appeared at least thrice. There were 147 data sets, and we performed ten-fold cross-validation and repeated it 10 times on 48 such data sets. We set the values of the regularization parameter of RLSC to 0.1, 0.3, 0.5 and 0.7.

We set r to 300, 400, and 500. We report the out-of-sample error for all 48 data sets. BSS consistently outperforms leverage-score sampling, IG, RRQR, and random feature selection on all 48 data sets for all values of the regularization parameter. Table 3 and Figure 1 show the results. The out-of-sample error decreases with an increase in the number of features for all methods. In terms of out-of-sample error, BSS is the best, followed by leverage-score sampling, IG, RRQR, and random feature selection. BSS is at least 3% to 7% better than the other methods when averaged over the 48 document matrices. From Figures 1 and 2, it is evident that BSS is comparable to the other methods and often better on all 48 data sets. Leverage-score sampling requires a greater number of samples to achieve the same out-of-sample error as BSS (see Table 3, for Lvg-Score and for BSS). Therefore, for the same number of samples, BSS outperforms leverage-score sampling in terms of out-of-sample error. The out-of-sample error of supervised IG is worse than that of unsupervised BSS, which could be due to the worse generalization of the supervised IG metric. We also observe that the out-of-sample error decreases with an increase in for the different feature selection methods.

Figure 1:

Out-of-sample error of 48 TechTC-300 documents averaged over 10 ten-fold cross-validation experiments for different values of regularization parameter and number of features . Vertical bars represent standard deviations.

Figure 2:

Out-of-sample error of 48 TechTC-300 documents averaged over 10 ten-fold cross-validation experiments for different values of regularization parameter and number of features and . Vertical bars represent standard deviation.

Table 3:
Out-of-Sample Error of TechTC-300 Data Sets Averaged over 10 Ten-Fold Cross-Validation and over 48 Data Sets for Three Values of r.
           λ = 0.1         λ = 0.3         λ = 0.5         λ = 0.7
r = 300
BSS        31.76 ± 0.68    31.46 ± 0.67    31.24 ± 0.65    31.03 ± 0.66
Lvg-Score  38.22 ± 1.26    37.63 ± 1.25    37.23 ± 1.24    36.94 ± 1.24
RRQR       37.84 ± 1.20    37.07 ± 1.19    36.57 ± 1.18    36.10 ± 1.18
Randomfs   50.01 ± 1.2     49.43 ± 1.2     49.18 ± 1.19    49.04 ± 1.19
IG         38.35 ± 1.21    36.64 ± 1.18    35.81 ± 1.18    35.15 ± 1.17
r = 400
BSS        30.59 ± 0.66    30.33 ± 0.65    30.11 ± 0.65    29.96 ± 0.65
Lvg-Score  35.06 ± 1.21    34.63 ± 1.20    34.32 ± 1.2     34.11 ± 1.19
RRQR       36.61 ± 1.19    36.04 ± 1.19    35.46 ± 1.18    35.05 ± 1.17
Randomfs   47.82 ± 1.2     47.02 ± 1.21    46.59 ± 1.21    46.27 ± 1.2
IG         37.37 ± 1.21    35.73 ± 1.19    34.88 ± 1.18    34.19 ± 1.18
r = 500
BSS        29.80 ± 0.77    29.53 ± 0.77    29.34 ± 0.76    29.18 ± 0.75
Lvg-Score  33.33 ± 1.19    32.98 ± 1.18    32.73 ± 1.18    32.52 ± 1.17
RRQR       35.77 ± 1.18    35.18 ± 1.16    34.67 ± 1.16    34.25 ± 1.14
Randomfs   46.26 ± 1.21    45.39 ± 1.19    44.96 ± 1.19    44.65 ± 1.18
IG         36.24 ± 1.20    34.80 ± 1.19    33.94 ± 1.18    33.39 ± 1.17

Note: The first and second entries of each cell represent the mean and standard deviation. Items in bold indicate the best results.

We list the most frequently occurring words selected by BSS and leverage-score sampling for the case of for seven TechTC-300 data sets over the 100 training sets used in the cross-validation experiments. Table 4 shows the names of the seven TechTC-300 document-term matrices. The words shown in Tables 5 and 6 were selected in all cross-validation experiments for these seven data sets. The words are closely related to the categories to which the documents belong, which shows that BSS and leverage-score sampling select important features from the training set. For example, for the document pair , where 1092 belongs to the category “Arts:Music:Styles:Opera” and 789236 belongs to the category “US:Navy:Decommisioned Submarines,” the BSS algorithm selects submarine, shipyard, triton, opera, libretto, theatre, which are closely related to the two classes. The top words selected by leverage-score sampling for the same document pair are seawolf, sturgeon, opera, triton, finback, which are also closely related to the two classes. Another example is the document pair , where 10539 belongs to “US:Indiana:Localities:S” and 300332 belongs to the category “Canada:Ontario:Localities:E.” The top words selected by BSS for this document pair are ontario, elliot, shelbyville, indiana, schererville, which are closely related to the class values; the top words selected by leverage-score sampling are library, fishing, elliot, indiana, shelbyville, ontario, which are likewise closely related to the classes. Thus, we see that using only 2% to 4% of all features, we are able to select relevant features and obtain good out-of-sample error.

Table 4:
A Subset of the TechTC Matrices of Our Study.
id1_id2          id1                                  id2
1092_789236 Arts:Music:Styles:Opera US Navy:Decommisioned Submarines 
17899_278949 US:Michigan:Travel & Tourism Recreation:Sailing Clubs:UK 
17899_48446 US:Michigan:Travel & Tourism Chemistry:Analytical:Products 
14630_814096 US:Colorado:Localities:Boulder Europe:Ireland:Dublin:Localities 
10539_300332 US:Indiana:Localities:S Canada:Ontario:Localities:E 
10567_11346 US:Indiana:Evansville US:Florida:Metro Areas:Miami 
10539_194915 US:Indiana:Localities:S US:Texas:Localities:D 
Table 5:
Frequently Occurring Terms of the TechTC-300 Data Sets of Table 4 Selected by BSS.
id1_id2          Words
1092_789236 naval,shipyard,submarine,triton,music,opera,libretto,theatre 
17899_278949 sailing,cruising,boat,yacht,racing,michigan,leelanau,casino 
17899_48446 vacation,lodging,michigan,asbestos,chemical,analytical,laboratory 
14630_814096 ireland,dublin,boulder,colorado,lucan,swords,school,dalkey 
10539_300332 ontario,fishing,county,elliot,schererville,shelbyville,indiana,bullet 
10567_11346 florida,miami,beach,indiana,evansville,music,business,south 
10539_194915 texas,dallas,plano,denton,indiana,schererville,gallery,north 
Table 6:
Frequently Occurring Terms of the TechTC-300 Data Sets of Table 4 Selected by Leverage-Score Sampling.
id1_id2          Words
1092_789236 sturgeon, seawolf, skate, triton, frame, opera, finback 
17899_278949 sailing, yacht, laser, michigan,breakfast, county, clear 
17899_48446 analysis, michigan, water, breakfast, asbestos, environmental, analytical 
14630_814096 ireland, dublin, estate, lucan, dalkey, colorado, boulder 
10539_300332 library, fishing, service, lodge, ontario, elliot, indiana, shelbyville 
10567_11346 evansville, services, health, church, south, bullet, florida 
10539_194915 dallas, texas, schererville, indiana, shelbyville, plano 

Though feature selection is an offline task, we discuss the running times of the different methods to highlight that BSS can be implemented in practice. We computed the running time of the different feature selection methods averaged over 10 ten-fold cross-validation experiments and over the 48 data sets (see Table 2). The average time for feature selection by BSS is a little over 1 minute, while the rest of the methods take less than 1 second. This shows that BSS can be implemented in practice and can scale up to reasonably large data sets with 20,000 to 50,000 features. For BSS and leverage-score sampling, the running time includes the time to compute the SVD of the matrix. BSS takes approximately 1 minute to select features but is at least 3% to 7% better in terms of out-of-sample error than the other methods. IG takes less than 1 second to select features, but it is 4% to 7% worse than BSS in terms of out-of-sample error.

6.4  Experiments on Ridge Regression in the Fixed Design Setting

In this section, we describe experiments on feature selection on ridge regression in the fixed design setting using synthetic and real data.

6.4.1  Synthetic Data

We generate the features of the synthetic data in the same manner as described in section 6.3.1. We generate and , where and . We set n to 30 and d to 1000. We set the number of relevant features k to 90 and 100 and ran two sets of experiments. We set the value of r, the number of features selected by BSS and leverage-score sampling, to , where , for both experiments. The value of was set to 0.1, 0.3, 0.5, and 0.7. We compared the risk of ridge regression using BSS and leverage-score sampling with the risk in the full-feature space and report the MSE/risk in the fixed design setting as a measure of accuracy. Figure 3 shows the risk on synthetic data for both BSS and leverage-score sampling as a function of . The risk of the sampled data is comparable to the risk of the full data in most cases, which follows from our theory. We observe that for higher values of , the risk in the sampled space becomes worse than that of the full data for both BSS and leverage-score sampling. The risk in the sampled space is almost the same for both BSS and leverage-score sampling. The time to compute feature selection is less than 1 second for both methods (see Table 7).

Figure 3:

MSE/Risk for synthetic data for and using different feature selection methods as a function of . The risk after feature selection is comparable to the risk of full data.

Table 7:
Running Time of Various Feature Selection Methods (in Seconds).
       Synthetic Data   TechTC (10341-14525)   TechTC (10341-61792)
BSS    0.3368           68.8474                67.013
LVG    0.0045           0.3994                 0.3909

Note: For synthetic data, the running time corresponds to the experiment when . For TechTC-300, the running time corresponds to the experiment when .

6.4.2  TechTC-300

We use two TechTC-300 data sets, “10341-14525” and “10341-61792,” to illustrate our theory. We add Gaussian noise to the labels. We set the number of features to be selected to 300, 400, and 500. The value of was set to 0.1, 0.3, and 0.5. We compared the risk of ridge regression using BSS and leverage-score sampling with the risk in the full-feature space and report the MSE/risk in the fixed design setting as a measure of accuracy. Figure 4 shows the risk on real data for both BSS and leverage-score sampling as a function of . The risk of the sampled data is comparable to the risk of the full data in most cases, which follows from our theory. The risk of the sampled data decreases with an increase in . The time to perform feature selection is approximately 1 minute for BSS and less than 1 second for leverage-score sampling (see Table 7).

Figure 4:

MSE/Risk for TechTC-300 data using different feature selection methods as a function of . The risk after feature selection is comparable to the risk of full data.

7  Conclusion

We present a provably accurate feature selection method for RLSC that works well empirically and also gives better generalization performance than prior existing methods. The number of features required by BSS is of the order , which makes the result tighter than that obtained by leverage-score sampling. BSS has recently been used as a feature selection technique for k-means clustering (Boutsidis & Magdon-Ismail, 2013) and linear SVMs (Paul, Magdon-Ismail, & Drineas, 2015), and our work on RLSC helps to expand research in this direction. The risk of ridge regression in the sampled space is comparable to the risk of ridge regression in the full-feature space in the fixed design setting, and we observe this in both theory and experiments. Interesting future work in this direction would be to include feature selection for nonlinear kernels with provable guarantees.

Acknowledgments

Most of the work was done when S.P. was a graduate student at RPI. This work is supported by NSF CCF 1016501 and NSF IIS 1319280.

References

Agarwal, D. (2002). Shrinkage estimator generalizations of proximal support vector machines. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 173–182). New York: ACM.

Avron, H., Sindhwani, V., & Woodruff, D. (2013). Sketching structured matrices for faster nonlinear regression. In C. J. C. Burges, L. Bottou, Z. Ghahramani, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 26 (pp. 2994–3002). Red Hook, NY: Curran.

Bach, F. (2013). Sharp analysis of low-rank kernel matrix approximations. In Proceedings of the 26th Annual Conference on Learning Theory (COLT) (pp. 185–209). JMLR.org.

Batson, J., Spielman, D., & Srivastava, N. (2009). Twice-Ramanujan sparsifiers. In Proceedings of the 41st Annual ACM STOC (pp. 255–262). New York: ACM.

Bhattacharyya, C. (2004). Second order cone programming formulations for feature selection. JMLR, 5, 1417–1433.

Boutsidis, C., & Magdon-Ismail, M. (2013). Deterministic feature selection for k-means clustering. IEEE Transactions on Information Theory, 59(9), 6099–6110.

Dasgupta, A., Drineas, P., Harb, B., Josifovski, V., & Mahoney, M. (2007). Feature selection methods for text classification. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 230–239). New York: ACM.

Davidov, D., Gabrilovich, E., & Markovitch, S. (2004). Parameterized generation of labeled datasets for text categorization based on a hierarchical directory. In Proceedings of the 27th Annual International ACM SIGIR Conference (pp. 250–257). New York: ACM. http://techtc.cs.technion.ac.il/techtc300/techtc300.html

Demmel, J., & Veselic, K. (1992). Jacobi's method is more accurate than QR. SIAM Journal on Matrix Analysis and Applications, 13(4), 1204–1245.

Drineas, P., Mahoney, M., & Muthukrishnan, S. (2006). Sampling algorithms for l2 regression and applications. In Proceedings of the 17th Annual ACM-SIAM SODA (pp. 1127–1136). New York: ACM.

Fung, G., & Mangasarian, O. (2001). Proximal support vector machine classifiers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 77–86). New York: ACM.

Lu, Y., Dhillon, P., Foster, D., & Ungar, L. (2013). Faster ridge regression via the subsampled randomized Hadamard transform. In C. J. C. Burges, L. Bottou, Z. Ghahramani, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 26 (pp. 369–377). Red Hook, NY: Curran.

Paul, S., Magdon-Ismail, M., & Drineas, P. (2015). Feature selection for linear SVM with provable guarantees. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (pp. 735–743). JMLR.org.

Poggio, T., & Smale, S. (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50(5), 537–544.

Rifkin, R., Yeo, G., & Poggio, T. (2003). Regularized least-squares classification. Nato Science Series Sub Series III Computer and Systems Sciences, 190, 131–154.

Rudelson, M., & Vershynin, R. (2007). Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54(4).

Stewart, G., & Sun, J. (1990). Matrix perturbation theory. Boston: Academic Press.

Suykens, J., & Vandewalle, J. (1999). Least squares support vector machine classifiers. Neural Processing Letters, 9(3), 293–300.

Yang, Y., & Pedersen, J. (1997). A comparative study on feature selection in text categorization. In Proceedings of the International Conference on Machine Learning (Vol. 97, pp. 412–420). San Mateo, CA: Morgan Kaufmann.

Zhang, P., & Peng, J. (2004). SVM vs regularized least squares classification. In Proceedings of the 17th International Conference on Pattern Recognition (Vol. 1, pp. 176–179). IEEE Computer Society.

Zhang, T., & Oles, F. (2001). Text categorization based on regularized linear classification methods. Information Retrieval, 4(1), 5–31.

Note

1

The name BSS comes from the authors: Batson, Spielman, and Srivastava.