Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and lq regularization schemes are central in use. It is known that different values of q lead to estimators with different properties: l2 regularization leads to a smooth estimator, while l1 regularization leads to a sparse estimator. How the generalization capability of lq regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing lq coefficient regularization schemes in the sample-dependent hypothesis space associated with a gaussian kernel can attain the same almost optimal learning rates for all choices of q; that is, the upper and lower bounds of the learning rates for lq regularization learning are asymptotically identical for all q. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other, nongeneralization criteria such as smoothness, computational complexity, or sparsity.
Many scientific questions boil down to learning an underlying rule from finitely many input-output samples. Learning means synthesizing a function that represents or approximates the underlying rule based on the samples. A learning system is normally developed to tackle such a supervised learning problem. Generally, a learning system comprises a hypothesis space, an optimization strategy, and a learning algorithm. The hypothesis space is a family of parameterized functions that regulates the form and properties of the estimator to be found; the optimization strategy depicts the sense in which the estimator is defined; and the learning algorithm is an inference process that yields the objective estimator. A central question of learning is, and will always be, how well the synthesized function generalizes to reflect the reality that the given examples purport to show.
A recent trend in supervised learning is to use the kernel approach, which takes a reproducing kernel Hilbert space (RKHS) (Cucker & Smale, 2001) associated with a positive-definite kernel as the hypothesis space. An RKHS is a Hilbert space of functions in which pointwise evaluation is a continuous linear functional. This property makes sampling stable and effective, since the samples available for learning are commonly modeled by point evaluations of the unknown target function. Consequently, various learning schemes based on RKHS, such as regularized least squares (RLS) (Cucker & Smale, 2001; Wu, Ying, & Zhou, 2006; Steinwart, Hush, & Scovel, 2009) and the support vector machine (SVM) (Schölkopf & Smola, 2001; Steinwart & Scovel, 2007), have triggered enormous research activity in the past decade. From the point of view of statistics, the kernel approach has been proved to possess perfect learning capabilities (Wu et al., 2006; Steinwart et al., 2009). From the perspective of implementation, however, kernel methods amount to the following procedure: to deduce an estimator expressed as a linear combination of finitely many functions, one first tackles the problem in an infinite-dimensional space and then reduces the dimension by means of an optimization technique. Obviously, the infinite-dimensional assumption on the hypothesis space brings many difficulties to implementation and computation in practice.
This phenomenon was first observed by Wu and Zhou (2008), who suggested using the sample-dependent hypothesis space (SDHS) to construct the estimators. By the representer theorem of learning theory (Cucker & Smale, 2001), the learning procedure in RKHS can be converted into a problem whose hypothesis space consists of linear combinations of the kernel functions evaluated at the sample points, with finitely many coefficients. This implies that the generalization capabilities of learning in SDHS are, in a certain sense, not worse than those of learning in RKHS. Furthermore, because SDHS is an m-dimensional linear space, various optimization strategies, such as coefficient-based regularization strategies (Shi, Feng, & Zhou, 2011; Wu & Zhou, 2008) and greedy-type schemes (Barron, Cohen, Dahmen, & DeVore, 2008; Lin, Rong, Sun, & Xu, 2013), can be applied to construct the estimator.
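As a concrete illustration (ours, not taken from the letter), the following numpy sketch builds the SDHS estimator f(x) = sum_j a_j K(x, x_j) for a gaussian kernel and fits the coefficients by l2 coefficient regularization; the kernel width, regularization parameter, and toy data are all illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def fit_sdhs_l2(X, y, lam=1e-3, sigma=0.5):
    """l2 coefficient regularization in the SDHS:
    minimize (1/m) ||K a - y||^2 + lam ||a||_2^2 over the coefficient vector a."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Normal equations of the regularized objective: (K^T K / m + lam I) a = K^T y / m.
    a = np.linalg.solve(K.T @ K / m + lam * np.eye(m), K.T @ y / m)
    return a

# Toy data: noisy samples of an underlying rule.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(50)

a = fit_sdhs_l2(X, y)
pred = gaussian_kernel(X, X) @ a      # estimator evaluated at the samples
print(np.mean((pred - y) ** 2))       # training mean squared error
```

Because the SDHS is m-dimensional, the whole fit reduces to an m-by-m linear system; replacing the l2 penalty with another lq penalty changes only the optimization step, not the hypothesis space.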
1.1. Problem Setting
Answering that question is of great importance, since it uncovers the role of the penalty term in regularization learning, which underlies these learning strategies. However, it is known that the approximation capability of SDHS depends heavily on the choice of the kernel; it is therefore almost impossible to give a general, kernel-independent answer to the question above. In this letter, we aim to provide an answer for the widely used gaussian kernel.
1.2. Related Work and Our Contribution
The lq coefficient regularization strategy 1.1 is solvable for arbitrary q. Thus, studying the learning performance of the strategy for different q is of particular interest. Based on a series of works such as Feng and Lv (2011), Shi et al. (2011), Sun and Wu (2011), Tong, Chen, and Yang (2010), Wu and Zhou (2008), and Xiao and Zhou (2010), we showed in a previous paper (Lin, Xu, Zeng, & Fang, 2013) that there is a positive-definite kernel such that the learning rate of the corresponding lq regularizer is independent of q. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, seeking kernels that possess a similar property and can be easily implemented is worthy of investigation.
We show in this letter that the well-known gaussian kernel possesses a similar property; that is, as far as the learning rate is concerned, all lq regularization schemes (see equation 1.1) associated with the gaussian kernel can realize the same almost optimal theoretical rates. In other words, the influence of q on the learning rates of the learning scheme 1.1 with the gaussian kernel is negligible. Here, we emphasize that our conclusion rests on attaining almost the same optimal learning rate by appropriately tuning the regularization parameter. Thus, in applications, q can be arbitrarily specified, or specified merely by other criteria (e.g., complexity, sparsity).
2. Generalization Capabilities of lq Coefficient Regularization Learning
2.1. A Brief Review of Statistical Learning Theory
2.2. Learning Rate Analysis
The following theorem shows the learning capability of the learning strategy 2.2 for arbitrary q.
In this subsection, we give some explanations of and remarks on theorem 1: remarks on the learning rate, the choice of the width of gaussian kernel, the role of the regularization parameter, and the relationship between q and the generalization capability.
2.3.1. Learning Rate Analysis
From equation 2.6, we know that the learning strategy 2.2 is an almost optimal method if the smoothness information of the regression function is known. It should be noted that this optimality is established in the setting of worst-case analysis. That is, for a concrete regression function, the learning rate of strategy 2.2 may be much faster than the worst-case rate. The notion of an optimal learning rate refers to a whole class of regression functions rather than to a fixed regression function.
2.3.2. Choice of the Width
However, to deduce a good approximation capability, it can be deduced from Lin, Liu, Fang, and Xu (2014) that the width cannot be very small. Thus, we use equation 2.8 rather than equation 2.9 to describe the complexity of the hypothesis space. Noting equation 2.7, when the width is not very small, the two complexities asymptotically coincide. Under this circumstance, recalling that the optimal widths of the learning strategies 1.2 and 2.2 may not be very small, the capacities of the corresponding hypothesis spaces are asymptotically identical. Therefore, the optimal choice of the width in equation 2.2 is the same as that in equation 1.2.
2.3.3. Importance of the Regularization Term
We can view the regularized learning model as a collection of empirical minimization problems. Indeed, let B be the unit ball of a space related to the regularization term, and consider the empirical minimization problem in rB for some r>0. As r increases, the approximation error decreases and the sample error increases. We can achieve a small total error by choosing the correct value of r and performing empirical minimization in rB so that the approximation error and sample error are asymptotically identical. The role of the regularization term is to force the algorithm to choose the correct value of r for empirical minimization (Mendelson & Neeman, 2010), thereby providing a method of solving the bias-variance problem. The main role of the regularization term is thus to control the capacity of the hypothesis space.
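Schematically, writing B for the unit ball, A(r) for the approximation error, and S(r, m) for the sample error (generic symbols of ours, not necessarily the letter's notation), the balancing process reads:

```latex
f_{z,r} \;=\; \arg\min_{f \in rB}\ \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^{2},
\qquad
\mathcal{E}(f_{z,r}) - \mathcal{E}(f_\rho) \;\le\; \mathcal{A}(r) + \mathcal{S}(r,m),
```

with A(r) decreasing in r and S(r, m) increasing in r; the correct radius r* is the one at which A(r*) and S(r*, m) are of the same order.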
Compared with the regularized least squares strategy 1.2, a consensus is that the lq coefficient regularization schemes 2.2 may bring certain additional benefits, such as sparsity, for suitable choices of q (Shi et al., 2011). However, this assertion may not always be true.
There are usually two criteria for choosing the regularization parameter in such a setting: (1) the approximation error should be as small as possible, and (2) the sample error should be as small as possible. Under criterion 1, the regularization parameter should not be too large, while under criterion 2, it cannot be too small. As a consequence, there is an uncertainty principle in the choice of the parameter optimal for generalization. Moreover, if sparsity of the estimator is needed, another criterion should also be taken into consideration: (3) the estimator should be as sparse as possible.
This sparsity criterion requires that the regularization parameter be large enough, since the number of nonzero coefficients of the estimator decreases monotonically as the parameter grows. It should be pointed out that the parameter optimal for generalization may be smaller than the smallest value that guarantees sparsity. Therefore, to obtain a sparse estimator, the generalization capability may degrade in a certain sense. In summary, an lq coefficient regularization scheme may endow the estimator with certain additional attributes without sacrificing the generalization capability, but not always; whether it does may depend on the distribution, the choice of q, and the samples. In short, the lq coefficient regularization scheme 2.2 provides the possibility of bringing other advantages without degrading the generalization capability, and therefore it may outperform the classical kernel methods.
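To make criterion 3 concrete, the following numpy sketch solves the l1 coefficient regularization problem in the SDHS by proximal gradient descent (ISTA) — an algorithmic choice of ours, not prescribed by the letter — and reports how the number of nonzero coefficients varies with the regularization parameter; data and parameters are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_coef_regularizer(K, y, lam, n_iter=2000):
    """ISTA for the SDHS objective (1/m) ||K a - y||^2 + lam ||a||_1."""
    m = len(y)
    a = np.zeros(m)
    eta = m / (2 * np.linalg.norm(K, 2) ** 2)   # step size = 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ a - y)
        a = soft_threshold(a - eta * grad, eta * lam)
    return a

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(np.pi * X[:, 0])
K = np.exp(-((X - X.T) ** 2))   # gaussian kernel matrix on the samples, width 1

sparsity = [np.count_nonzero(l1_coef_regularizer(K, y, lam))
            for lam in (1e-4, 1e-2, 1.0)]
print(sparsity)   # nonzero-coefficient counts for increasing regularization parameter
```

The printed counts illustrate the tension described above: a parameter large enough to zero out many coefficients may be larger than the value that is optimal for generalization.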
2.3.4. q and the Learning Rate
Generally, the generalization capability of the lq regularization scheme 2.2 may depend on the width of the gaussian kernel, the regularization parameter, the behavior of the priors, the sample size m, and, obviously, the choice of q. Theorem 1 and equation 2.6 demonstrate that the learning schemes defined by equation 2.2 can achieve the asymptotically optimal rates for all choices of q. In other words, the choice of q has no influence on the learning rate, which means that q should be chosen according to other, nongeneralization considerations such as smoothness, sparsity, and computational complexity.
This assertion is not surprising if we cast the lq regularization schemes (see equation 2.2) into the process of empirical minimization. From the analysis, it is known that the width of the gaussian kernel depicts the complexity of the lq empirical unit ball, while the regularization parameter describes the choice of the radius of the lq ball. The choice of q, in turn, determines the path along which the capacity changes in the search for a hypothesis space of appropriate capacity. In terms of the bias-variance problem, a regularization scheme can be regarded as the following process: one first chooses a large hypothesis space to guarantee a small approximation error and then shrinks the capacity of the hypothesis space until the sample error and approximation error are asymptotically identical. From Figure 1, we see that lq regularization schemes with different q may possess different shrinking paths and thus derive estimators with different attributes. Figure 1 also shows that by appropriately tuning the regularization parameter (the radius of the lq empirical ball), we can always obtain lq regularizer estimators with similar learning rates for all q. In this sense, it can be concluded that the learning rate of lq regularization learning is independent of the choice of q.
In this section, we give several comparisons between theorem 1 and related work to show the novelty of our result. We divide the comparisons into three categories. First, we illustrate the difference between learning in RKHS and in SDHS associated with the gaussian kernel. Then we compare our result with other results on coefficient-based regularization in SDHS. Finally, we discuss papers concerning the choice of the regularization exponent q and show the novelty of our result.
2.4.1. Learning in RKHS and SDHS with Gaussian Kernel
Kernel methods with gaussian kernels are among the standard and state-of-the-art learning strategies. Accordingly, the corresponding properties, such as the covering numbers, RKHS norms, and the form of the elements of the RKHS associated with gaussian kernels, were studied by Steinwart and Christmann (2008), Minh (2010), Zhou (2002), and Steinwart, Hush, and Scovel (2006). Based on these analyses, the learning capabilities of gaussian kernel learning were thoroughly revealed in Eberts and Steinwart (2011), Ye and Zhou (2008), Hu (2011), Steinwart and Scovel (2007), and Xiang and Zhou (2009). For classification, Steinwart and Scovel (2007) showed that the learning rates for support vector machines with hinge loss and gaussian kernels can attain the order of m^{-1}. For regression, Eberts and Steinwart (2011) showed that the regularized least squares algorithm with gaussian kernel can achieve an almost optimal learning rate if the smoothness information of the regression function is given.
However, the learning capability of the coefficient-based regularization schemes 2.2 remains open. It should be stressed that the roles of the regularization terms in equations 2.2 and 1.2 are distinct, even though the solutions of the two schemes coincide for q=2. More specifically, without the regularization term, there are infinitely many solutions to the least squares problem in the gaussian RKHS. To obtain an expected and unique solution, we must impose a certain structure on the solution, which can be achieved by introducing a specified regularization term. The regularized least squares algorithm 1.2 can therefore be regarded as a structural risk minimization strategy, since it chooses the solution with the simplest structure among the infinitely many solutions. By contrast, due to the positive definiteness of the gaussian kernel, the least squares problem underlying equation 2.2 already has a unique solution, and the role of regularization there is only to improve the generalization capability. The introduction of regularization in equation 1.2 can thus be regarded as a passive choice, while that in equation 2.2 is an active operation.
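The uniqueness claim can be checked numerically; the sketch below (an illustration of ours, with arbitrary toy inputs) verifies that the gaussian kernel matrix at distinct sample points is positive definite, so the unregularized least squares problem in the SDHS already has a unique solution:

```python
import numpy as np

# At distinct sample points, the gaussian kernel matrix is positive definite,
# hence nonsingular: the least squares problem in the SDHS has a unique
# coefficient vector even before any regularization is added.
x = np.linspace(-1.0, 1.0, 10)
K = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)   # gaussian kernel, width 0.2
eigs = np.linalg.eigvalsh(K)
print(eigs.min())                    # strictly positive smallest eigenvalue

a = np.linalg.solve(K, np.cos(x))    # the unique interpolating coefficients
print(np.allclose(K @ a, np.cos(x)))
```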
This difference calls for different techniques to analyze the performance of strategy 2.2. The most widely used method was proposed in Wu and Zhou (2008). Based on Wu and Zhou (2005), Wu and Zhou (2008) pointed out that the generalization error can be divided into three terms: the approximation error, the sample error, and the hypothesis error. Basically, the generalization error can be bounded by the following three steps:
Find an alternative estimator outside the SDHS to approximate the regression function.
Find an approximation of the alternative function in SDHS and deduce the hypothesis error.
Bound the sample error that describes the distance between the approximant in SDHS and the lq regularizer.
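The three steps above correspond to the following telescoping decomposition, written with generic symbols (E for the expected risk, E_z for the empirical risk, f* for the alternative estimator outside the SDHS, f^diamond for its approximant inside the SDHS, and f_z for the lq regularizer); the grouping below is schematic rather than the letter's exact bound:

```latex
\begin{aligned}
\mathcal{E}(f_z) - \mathcal{E}(f_\rho)
&= \underbrace{\bigl[\mathcal{E}(f_z) - \mathcal{E}_z(f_z)\bigr]
  + \bigl[\mathcal{E}_z(f^{\diamond}) - \mathcal{E}(f^{\diamond})\bigr]}_{\text{sample error}} \\
&\quad + \underbrace{\bigl[\mathcal{E}_z(f_z) - \mathcal{E}_z(f^{\diamond})\bigr]
  + \bigl[\mathcal{E}(f^{\diamond}) - \mathcal{E}(f^{\ast})\bigr]}_{\text{hypothesis error}}
  + \underbrace{\mathcal{E}(f^{\ast}) - \mathcal{E}(f_\rho)}_{\text{approximation error}}.
\end{aligned}
```

The right-hand side telescopes exactly to the left-hand side, so bounding the three groups separately bounds the generalization error.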
In this letter, we also employ this technique to analyze the performance of learning strategy 2.5. We show that, similar to the regularized least squares algorithm (Eberts & Steinwart, 2011), the lq coefficient-based regularization scheme 2.2 can also achieve an almost optimal learning rate if the smoothness information of the regression function is given.
2.4.2. lq Regularizer with Fixed q
Several papers focus on the generalization capability of the lq regularization scheme 1.1. Wu and Zhou (2008) were, to the best of our knowledge, the first to lay out the mathematical foundation of learning algorithms in SDHS. They claimed that the data-dependent nature of the algorithm leads to an extra hypothesis error, which makes it essentially different from regularization schemes with sample-independent hypothesis spaces (SIHSs). Based on this, they proposed a coefficient-based regularization strategy and conducted a theoretical analysis of it by dividing the generalization error into the approximation error, sample error, and hypothesis error. Following their work, Xiao and Zhou (2010) derived a learning rate for the l1 regularizer by bounding the regularization error, sample error, and hypothesis error. Their result was improved in Shi et al. (2011) by adopting a concentration technique with l2 empirical covering numbers to tackle the sample error. For general lq regularizers, Tong et al. (2010) deduced an upper bound for the generalization error by using a different method to cope with the hypothesis error. Later, the learning rate of Tong et al. (2010) was further improved in Feng and Lv (2011) through a sharper estimate of the sample error.
In all of that research, a spectrum assumption on the regression function and a concentration property of the marginal distribution have to be satisfied. Noting this, Sun and Wu (2011) conducted a generalization capability analysis for the l2 regularizer by using the spectrum assumption on the regression function only. For the l1 regularizer, using a sophisticated functional analysis method, Zhang, Xu, and Zhang (2009) and Song, Zhang, and Hickernell (2013) built the regularized least squares algorithm on a reproducing kernel Banach space (RKBS) and proved that the regularized least squares algorithm in RKBS is equivalent to the l1 regularizer if the kernel satisfies certain restricted conditions. Following this method, Song and Zhang (2011) deduced a similar learning rate for the l1 regularizer and eliminated the concentration assumption on the marginal distribution.
To characterize the generalization capability of a learning strategy, the essential generalization bound, rather than an upper bound alone, is desired; that is, we must deduce both lower and upper bounds for the learning strategy and prove that the two are asymptotically identical. Only under this circumstance can we reveal the essential learning capability of the learning scheme. All of the above results for lq regularizers with fixed q concern only the upper bound, so it is generally difficult to reveal their essential learning capabilities. By contrast, as shown by theorem 1, our established learning rate is essential: it follows from equation 2.6 that, under the conditions of theorem 1, the deduced learning rate cannot be essentially improved.
2.4.3. The choice of q
Blanchard, Bousquet, and Massart (2008) were, to the best of our knowledge, the first to focus on the choice of the optimal q for the kernel method. Indeed, as far as the sample error is concerned, Blanchard et al. (2008) pointed out that there is an optimal exponent for the support vector machine with hinge loss. Then Mendelson and Neeman (2010) found that this assertion also holds for the regularized least squares strategy 1.3. That is, as far as the sample error is concerned, regularized least squares may have a design flaw. However, Steinwart et al. (2009) derived a q-independent optimal learning rate for strategy 1.3 in the minimax sense. They therefore concluded that, from the statistical point of view, the RLS algorithm 1.2 has no advantages or disadvantages compared with other values of q in equation 1.3.
Since the lq coefficient regularization strategy 1.1 is solvable for arbitrary q, and different q may yield estimators with different attributes, studying how the learning performance of strategy 1.1 depends on q is of particular interest. This topic was first studied in Lin, Xu, Zeng, and Fang (2013), where we showed that there is a positive-definite kernel such that the learning rate of the corresponding lq regularizer is independent of q. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, we study here the dependence of the generalization capability of lq regularization learning on q for the widely used gaussian kernel. Fortunately, a similar conclusion also holds for the gaussian kernel, as witnessed by theorem 1 of this letter.
3. Proof of Theorem 1
3.1. Error Decomposition
Let the RKHS associated with the gaussian kernel and its corresponding RKHS norm be denoted in the usual way. To prove theorem 1, the following error decomposition strategy is required.
3.2. Approximation Error Estimation
To bound the approximation error, the following three lemmas are required:
Let r>0. If, thensatisfies
Furthermore, it can be easily deduced from Eberts and Steinwart (2011, theorem 2.3) and lemma 1 that lemma 3 holds:
Lemma 3, together with lemma 2, yields the following approximation error estimate.
3.3. Sample Error Estimation
To bound the sample error, we need the following well-known Bernstein inequality (Shi et al., 2011).
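For the reader's convenience, the inequality in its standard one-sided form reads as follows (here xi is a random variable with |xi - E xi| <= M almost surely and variance sigma^2, and xi_1, ..., xi_m are independent copies of xi; the symbols are generic rather than the letter's exact notation):

```latex
\operatorname{Prob}\left\{ \frac{1}{m}\sum_{i=1}^{m}\xi_i - \mathbb{E}\xi \;\ge\; \varepsilon \right\}
\;\le\; \exp\!\left( -\frac{m\varepsilon^{2}}{2\bigl(\sigma^{2} + M\varepsilon/3\bigr)} \right),
\qquad \varepsilon > 0.
```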
With the help of lemma 4, we provide the following upper-bound estimate.
We are now in a position to deduce an upper-bound estimate for the sample error.
3.4. Hypothesis Error Estimation
In this section, we give an estimate for the hypothesis error.
3.5. Learning Rate Analysis
We thank two anonymous referees, who carefully read the manuscript and provided numerous constructive suggestions that noticeably enhanced the overall quality of the letter; for this we are much indebted and grateful. The research was supported by the National 973 Program (2013CB329404) and the Key Program of the National Natural Science Foundation of China (grant 11131006).