## Abstract

Regularization is a well-recognized, powerful strategy to improve the performance of a learning machine, and *l*^{q} regularization schemes with 0 < *q* < ∞ are central in use. It is known that different *q* leads to different properties of the deduced estimators, say, *l*^{2} regularization leads to a smooth estimator, while *l*^{1} regularization leads to a sparse estimator. How the generalization capability of *l*^{q} regularization learning varies with *q* is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing *l*^{q} coefficient regularization schemes in the sample-dependent hypothesis space associated with a gaussian kernel can attain the same almost optimal learning rates for all 0 < *q* < ∞. That is, the upper and lower bounds of learning rates for *l*^{q} regularization learning are asymptotically identical for all 0 < *q* < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of *q* might not have a strong impact on the generalization capability. From this perspective, *q* can be arbitrarily specified, or specified merely by other nongeneralization criteria like smoothness, computational complexity, or sparsity.

## 1. Introduction

Many scientific questions boil down to learning an underlying rule from finitely many input-output samples. Learning means synthesizing a function that can represent or approximate the underlying rule based on the samples. A learning system is normally developed for tackling such a supervised learning problem. Generally a learning system should comprise a hypothesis space, an optimization strategy, and a learning algorithm. The hypothesis space is a family of parameterized functions that regulate the forms and properties of the estimator to be found. The optimization strategy depicts the sense in which the estimator is defined, and the learning algorithm is an inference process to yield the objective estimator. A central question of learning is and will always be how well the synthesized function generalizes to reflect the reality that the given examples purport to show.

A recent trend in supervised learning is to use the kernel approach, which takes a reproducing kernel Hilbert space (RKHS) (Cucker & Smale, 2001) associated with a positive-definite kernel as the hypothesis space. RKHS is a Hilbert space of functions in which pointwise evaluation is a continuous linear functional. This property makes sampling stable and effective, since the samples available for learning are commonly modeled by point evaluations of the unknown target function. Consequently, various learning schemes based on RKHS, such as regularized least squares (RLS) (Cucker & Smale, 2001; Wu, Ying, & Zhou, 2006; Steinwart, Hush, & Scovel, 2009) and the support vector machine (SVM) (Schölkopf & Smola, 2001; Steinwart & Scovel, 2007), have triggered enormous research activity in the past decade. From the point of view of statistics, the kernel approach has been proved to possess excellent learning capabilities (Wu et al., 2006; Steinwart et al., 2009). From the perspective of implementation, however, kernel methods amount to the following procedure: to deduce an estimator expressed as a linear combination of finitely many functions, one first tackles the problem in an infinite-dimensional space and then reduces the dimension by using an optimization technique. Obviously, the infinite-dimensional assumption on the hypothesis space brings many difficulties to implementation and computation in practice.

This phenomenon was first observed in Wu and Zhou (2008), who suggested the use of the sample-dependent hypothesis space (SDHS) to construct the estimators. From the so-called representer theorem in learning theory (Cucker & Smale, 2001), the learning procedure in RKHS can be converted into a problem whose hypothesis space can be expressed as a linear combination of the kernel functions evaluated at the sample points with finitely many coefficients. This implies that the generalization capabilities of learning in SDHS are not worse than those of learning in RKHS in a certain sense. Furthermore, because SDHS is an *m*-dimensional linear space, various optimization strategies, such as coefficient-based regularization strategies (Shi, Feng, & Zhou, 2011; Wu & Zhou, 2008) and greedy-type schemes (Barron, Cohen, Dahmen, & Devore, 2008; Lin, Rong, Sun, & Xu, 2013), can be applied to construct the estimator.
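To make the SDHS concrete, here is a minimal numpy sketch (the kernel convention, data, and names are our illustrative choices, not the letter's): the hypothesis space is spanned by Gaussian kernel functions centered at the *m* sample points, so every candidate estimator is determined by a coefficient vector.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=0.5):
    """One common Gaussian kernel convention: K(x, x') = exp(-||x - x'||^2 / sigma^2)."""
    diff = np.atleast_2d(x)[:, None, :] - np.atleast_2d(xp)[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=-1) / sigma ** 2)

rng = np.random.default_rng(0)
m = 20
X = rng.uniform(0.0, 1.0, size=(m, 1))                    # samples in I^d with d = 1
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m)

# An element of the SDHS: f(x) = sum_i a_i K(x_i, x) -- an m-dimensional space,
# so an estimator is fully determined by its coefficient vector a.
a = rng.standard_normal(m)
f = lambda t: gaussian_kernel(t, X) @ a
print(f(np.array([[0.3]])))                               # one value per query point
```

Any coefficient-based scheme then searches over the vector `a` only, which is what makes finite-dimensional optimization strategies applicable.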

### 1.1. Problem Setting

The choice of *q* in equation 1.1 is critical, since it embodies the properties of the anticipated estimators, such as sparsity and smoothness, and also takes other perspectives, such as complexity and generalization capability, into consideration. For example, for an *l*^{2} regularizer, the solution to equation 1.1 is the same as the solution to the regularized least squares (RLS) algorithm in RKHS (Cucker & Smale, 2001), where *H*_{K} is the RKHS associated with the kernel *K*. Furthermore, the solution can be analytically represented by the kernel function (Cucker & Zhou, 2007). The obtained solution, however, is smooth but not sparse; the nonzero coefficients of the solution are potentially as many as the sampling points if no special treatment is taken. Thus, the *l*^{2} regularizer is a good smooth regularizer but not a sparse one. For 0 < *q* < 1, there are many algorithms, such as the iteratively reweighted least squares algorithm (Daubechies, Devore, Fornasier, & Güntürk, 2010) and the iterative half-thresholding algorithm (Xu, Chang, Xu, & Zhang, 2012), to obtain a sparse approximation of the target function. However, all of these algorithms suffer from the local minimum problem due to their nonconvex nature. For *q* = 1, many algorithms exist, say, the iterative soft-thresholding algorithm (Daubechies, Defrise, & De Mol, 2004), LASSO (Hastie, Tibshirani, & Friedman, 2001; Tibshirani, 1995), and the iteratively reweighted least squares algorithm (Daubechies et al., 2010), to yield sparse estimators of the target function. However, as far as sparsity is concerned, the *l*^{1} regularizer is somewhat worse than the *l*^{q} (0 < *q* < 1) regularizer, and as far as training speed is concerned, the *l*^{1} regularizer is slower than the *l*^{2} regularizer. Thus, we can see that different choices of *q* may lead to estimators with different forms, properties, and attributes. Since the study of generalization capabilities lies at the center of learning theory, we ask the following question: what about the generalization capabilities of the *l*^{q} regularization schemes 1.1 for 0 < *q* < ∞?

Answering that question is of great importance since it uncovers the role of the penalty term in regularization learning, which underlies the learning strategies. However, it is known that the approximation capability of SDHS depends heavily on the choice of the kernel; it is therefore almost impossible to give a general answer to the question above independent of kernel functions. In this letter, we aim to provide an answer to the question when the widely used gaussian kernel is used.
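As a concrete illustration of the *q* = 1 case discussed above, the iterative soft-thresholding algorithm (Daubechies, Defrise, & De Mol, 2004) can be sketched in a few lines; the design matrix, step size rule, and regularization parameter below are our own illustrative choices:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(K, y, lam, n_iter=2000):
    """Minimize ||K a - y||^2 + lam * ||a||_1 by iterative soft thresholding."""
    t = 1.0 / (2.0 * np.linalg.norm(K, 2) ** 2)   # step from the gradient's Lipschitz constant
    a = np.zeros(K.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - t * (2.0 * K.T @ (K @ a - y)), t * lam)
    return a

# Tiny demo: recover a sparse coefficient vector from noiseless observations.
rng = np.random.default_rng(1)
K = rng.standard_normal((30, 10))
a_true = np.zeros(10)
a_true[[2, 7]] = [1.5, -2.0]
y = K @ a_true
a_hat = ista(K, y, lam=0.1)
print(np.round(a_hat, 2))   # approximately recovers a_true
```

Each iteration is a gradient step on the squared loss followed by coordinatewise shrinkage, which is what produces exact zeros in the estimator.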

### 1.2. Related Work and Our Contribution

One may also consider choosing *q* in the following optimization strategy, equation 1.3, such that the performance of the learning process can be improved. To this end, Steinwart, Hush, and Scovel (2009) derived a *q*-independent optimal learning rate for equation 1.3 in the minimax sense. Therefore, they concluded that the RLS strategy 1.2 has no advantages or disadvantages compared with other values of *q* in equation 1.3, from the viewpoint of learning theory. However, even without such a result, it is unclear how to solve equation 1.3 when *q* ≠ 2. That is, *q* = 2 is currently the only feasible case, which makes the RLS strategy the method of choice.

The *l*^{q} coefficient regularization strategy 1.1, in contrast, is solvable for arbitrary 0 < *q* < ∞. Thus, studying the learning performance of the strategy with different *q* is more interesting. Based on a series of work such as Feng and Lv (2011), Shi et al. (2011), Sun and Wu (2011), Tong, Chen, and Yang (2010), Wu and Zhou (2008), and Xiao and Zhou (2010), we have shown in a previous paper (Lin, Xu, Zeng, & Fang, 2013) that there is a positive-definite kernel such that the learning rate of the corresponding *l*^{q} regularizer is independent of *q*. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, seeking kernels that possess a similar property and can be easily implemented is worthy of investigation.

We show in this letter that the well-known gaussian kernel possesses a similar property; that is, as far as the learning rate is concerned, all *l*^{q} regularization schemes (see equation 1.1) associated with the gaussian kernel can realize the same almost optimal theoretical rates for all 0 < *q* < ∞. That is, the influence of *q* on the learning rates of the learning scheme 1.1 with the gaussian kernel is negligible. Here, we emphasize that our conclusion rests on attaining almost the same optimal learning rate by appropriately tuning the regularization parameter. Thus, in applications, *q* can be arbitrarily specified or specified merely by other criteria (e.g., complexity, sparsity).

### 1.3. Organization

## 2. Generalization Capabilities of *l*^{q} Coefficient Regularization Learning


### 2.1. A Brief Review of Statistical Learning Theory

Let *M* > 0, let *X* be an input space, and let *Y* ⊆ [−*M*, *M*] be an output space. Let **z** = (*x*_{i}, *y*_{i})^{m}_{i=1} be a random sample set of finite size *m*, drawn independently and identically according to an unknown distribution ρ on *Z* = *X* × *Y*, which admits the decomposition ρ(*x*, *y*) = ρ_{X}(*x*)ρ(*y*|*x*). Suppose further that *f* : *X* → *Y* is a function used to model the correspondence between *x* and *y*, as induced by ρ. A natural measurement of the error incurred by using *f* for this purpose is the generalization error, defined by E(*f*) = ∫_{Z}(*f*(*x*) − *y*)² dρ, which is minimized by the regression function (Cucker & Smale, 2001), defined by *f*_{ρ}(*x*) = ∫_{Y} *y* dρ(*y*|*x*). However, we do not know this ideal minimizer because ρ is unknown. Instead, we can only access the random examples sampled according to ρ.

Let *L*²_{ρ_X} denote the space of ρ_{X} square-integrable functions on *X*, with the norm denoted by ‖·‖_{ρ}. Under the assumption that |*y*| ≤ *M* almost surely, it is known that for every *f* ∈ *L*²_{ρ_X}, there holds E(*f*) − E(*f*_{ρ}) = ‖*f* − *f*_{ρ}‖²_{ρ}. The task of the least squares regression problem is then to construct a function *f*_{**z**} that approximates *f*_{ρ}, in the sense of the norm ‖·‖_{ρ}, using the finitely many samples **z**.
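In this least squares setting, the excess generalization error E(*f*) − E(*f*_{ρ}) equals the squared distance ‖*f* − *f*_{ρ}‖²_{ρ}; the Monte Carlo sketch below checks this identity on a toy distribution of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.uniform(0.0, 1.0, n)                         # rho_X: uniform on [0, 1]
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)   # so f_rho(x) = sin(2 pi x)

f_rho = lambda t: np.sin(2 * np.pi * t)
f = lambda t: 2.0 * t - 1.0                          # an arbitrary candidate estimator

excess = np.mean((f(x) - y) ** 2) - np.mean((f_rho(x) - y) ** 2)   # E(f) - E(f_rho)
dist2 = np.mean((f(x) - f_rho(x)) ** 2)              # ||f - f_rho||_rho^2
print(excess, dist2)                                 # nearly equal
```

The agreement (up to Monte Carlo error) reflects that the regression function is the conditional mean, so the noise term cancels in expectation.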

### 2.2. Learning Rate Analysis

In this letter we consider the *l*^{q} coefficient-based regularization strategy over the SDHS, where 0 < *q* < ∞. The main purpose of this letter is to derive the optimal bound of the generalization error ‖*f*_{**z**} − *f*_{ρ}‖²_{ρ} for all 0 < *q* < ∞.

Let *X* = **I**^{d} ≔ [0, 1]^{d}, let *c*_{0} be a positive constant, let *v* ∈ (0, 1], and let *r* = *u* + *v* for some nonnegative integer *u*. A function is said to be (*r*, *c*_{0})-smooth if for every multi-index of order *u*, the partial derivatives exist and are Hölder continuous with exponent *v* and constant *c*_{0}. Denote the set of all (*r*, *c*_{0})-smooth functions accordingly. In our analysis, we assume the prior information that *f*_{ρ} is (*r*, *c*_{0})-smooth is known.
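Before stating the theorem, note that the *q* = 2 instance of a coefficient regularization scheme of this form admits a closed-form solution: min_a ‖Ka − y‖² + λ‖a‖² is solved by a = (KᵀK + λI)⁻¹Kᵀy. The sketch below (width, λ, and data are our illustrative choices, and the formulation is a simplified stand-in for equation 2.2) fits such an estimator on synthetic data:

```python
import numpy as np

def gaussian_gram(X, sigma):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / sigma ** 2)

def l2_coefficient_estimator(X, y, sigma, lam):
    """Closed form for min_a ||K a - y||^2 + lam * ||a||^2 (the q = 2 case)."""
    K = gaussian_gram(X, sigma)
    a = np.linalg.solve(K.T @ K + lam * np.eye(len(y)), K.T @ y)
    return K @ a                                     # fitted values at the samples

rng = np.random.default_rng(3)
m = 60
X = rng.uniform(0.0, 1.0, (m, 1))
y = np.cos(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m)

f_z = l2_coefficient_estimator(X, y, sigma=0.2, lam=1e-3)
print(np.mean((f_z - y) ** 2))                       # small empirical squared error
```

For other values of *q*, the same objective is minimized iteratively (e.g., by the thresholding algorithms recalled in section 1.1) rather than in closed form.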

The following theorem shows the learning capability of the learning strategy 2.2, for arbitrary .

*Let r > 0, c*_{0} *> 0, and let the remaining quantities be defined as in equation 2.2. If …, then, for arbitrary …, with probability at least …, there holds …, where C is a constant depending only on d, r, c*_{0}, *q, and M.*

### 2.3. Remarks

In this subsection, we give some explanations of and remarks on theorem 1: remarks on the learning rate, the choice of the width of gaussian kernel, the role of the regularization parameter, and the relationship between *q* and the generalization capability.

#### 2.3.1. Learning Rate Analysis

When *f*_{ρ} is (*r*, *c*_{0})-smooth, the learning rate of any estimator based on *m* samples cannot be faster than *m*^{−2r/(2r+d)}. More specifically, let the prior class be the set of all Borel measures on *Z* such that *f*_{ρ} is (*r*, *c*_{0})-smooth. We enter into a competition over all estimators and define the minimax quantity as the infimum over all estimators of the supremum over this class of the expected excess error. It is easy to see that this quantity quantitatively measures the quality of *f*_{**z**}. Then it can be found in Györfi et al. (2002) or DeVore et al. (2006) that it is bounded from below by *Cm*^{−2r/(2r+d)}, where *C* is a constant depending only on *M*, *d*, *c*_{0}, and *r*.

In equation 2.6, *C*_{1} and *C*_{2} are constants depending only on *r*, *c*_{0}, *M*, and *d*.

Due to equation 2.6, we know that the learning strategy 2.2 is an almost optimal method if the smoothness information of *f*_{ρ} is known. It should be noted that the optimality is stated in the setting of worst-case analysis. That is, for a concrete regression function, the learning rate of strategy 2.2 may be much faster than the minimax rate; for certain very regular regression functions, the rate of equation 2.2 can be even faster. The concept of optimal learning rate is based on the worst case over the whole smoothness class rather than on a fixed regression function.
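For intuition about the minimax rate *m*^{−2r/(2r+d)} for (*r*, *c*_{0})-smooth regression (Györfi et al., 2002), the snippet below tabulates its exponent for a few smoothness/dimension pairs of our choosing; higher smoothness speeds the rate, while higher dimension slows it (the curse of dimensionality):

```python
# Exponent in the minimax rate m^{-2r/(2r+d)}: higher smoothness r helps,
# higher dimension d hurts.
for r in (1.0, 2.0, 4.0):
    for d in (1, 10, 100):
        print(f"r={r:>4}, d={d:>3}: exponent 2r/(2r+d) = {2 * r / (2 * r + d):.3f}")
```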

#### 2.3.2. Choice of the Width

Since the gaussian SDHS is an *m*-dimensional linear space and the gaussian RKHS is an infinite-dimensional space for an arbitrary kernel width (Minh, 2010), the complexity of the gaussian SDHS may seem smaller than that of the gaussian RKHS at first glance. Hence, there naturally arises the following question: Does the optimal width for gaussian SDHS learning coincide with that for gaussian RKHS learning? Theorem 1, together with Eberts and Steinwart (2011), demonstrates that the optimal widths of the above two strategies are asymptotically identical. That is, if the smoothness information of the regression function is known, then the optimal choices of the width for both learning strategies 2.2 and 1.2 are the same. The above phenomenon can be explained as follows. Consider the unit ball of the gaussian RKHS and the *l*^{2} empirical ball, together with the *l*^{2}-empirical covering number (Shi et al., 2011), whose definition can be found in the descriptions above lemma 5 in this letter. Then it can be found in Steinwart and Scovel (2007) that the covering number bound of equation 2.7 holds, where *p* is an arbitrary real number in (0, 2] and the other exponent is an arbitrary positive number. For the gaussian SDHS, on one hand, we can use the fact that it is contained in the gaussian RKHS and deduce equation 2.8, with a constant depending only on the remaining parameters and *d*. On the other hand, it follows from Györfi et al. (2002) that equation 2.9 holds, where the finite-dimensional property of the SDHS is used. Therefore, it should be highlighted that the finite-dimensional bound 2.9 is preferable only under a condition that always implies the width is very small (it may even be smaller than 1/*m*).

However, to ensure a good approximation capability of the SDHS, it can be deduced from Lin, Liu, Fang, and Xu (2014) that the width cannot be very small. Thus, we use equation 2.8 rather than 2.9 to describe the complexity of the SDHS. Noting equation 2.7, when the width is not very small (relative to 1/*m*), the complexity of the SDHS asymptotically equals that of the gaussian RKHS. Under this circumstance, recalling that the optimal widths of the learning strategies 1.2 and 2.2 may not be very small, the capacities of the two hypothesis spaces are asymptotically identical. Therefore, the optimal choice of the width in equation 2.2 is the same as that in equation 1.2.

#### 2.3.3. Importance of the Regularization Term

We can address the regularized learning model as a collection of empirical minimization problems. Indeed, consider the ball of radius *r* in a space related to the regularization term, and consider the empirical minimization problem in that ball for some *r* > 0. As *r* increases, the approximation error decreases and the sample error increases. We can achieve a small total error by choosing the correct value of *r* and performing empirical minimization in the corresponding ball so that the approximation error and sample error are asymptotically identical. The role of the regularization term is to force the algorithm to choose the correct value of *r* for empirical minimization (Mendelson & Neeman, 2010) and thus to provide a method of solving the bias-variance problem. Therefore, the main role of the regularization term is to control the capacity of the hypothesis space.

Compared with the regularized least squares strategy 1.2, a consensus is that the *l*^{q} coefficient regularization schemes 2.2 may bring certain additional benefits, such as sparsity for a suitable choice of *q* (Shi et al., 2011). However, this assertion may not always be true.

There are usually two criteria for choosing the regularization parameter in such a setting: (1) the approximation error should be as small as possible, and (2) the sample error should be as small as possible. Under criterion 1, the regularization parameter should not be too large, while under criterion 2, it cannot be too small. As a consequence, there is an uncertainty principle in the choice of the parameter that is optimal for generalization. Moreover, if sparsity of the estimator is needed, another criterion should also be taken into consideration: (3) the estimator should be as sparse as possible.

This sparsity criterion requires the regularization parameter to be large enough, since the number of nonzero coefficients of the estimator decreases monotonically as the parameter grows. It should be pointed out that the parameter that is optimal for generalization may be smaller than the smallest value that guarantees sparsity. Therefore, to obtain a sparse estimator, the generalization capability may degrade in a certain sense. In summary, the *l*^{q} coefficient regularization scheme may endow the estimator with certain additional attributes without sacrificing the generalization capability, but not always; this may depend on the distribution, the choice of *q*, and the samples. In a word, the *l*^{q} coefficient regularization scheme 2.2 provides the possibility of bringing other advantages without degrading the generalization capability. Therefore, it may outperform the classical kernel methods.
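The monotone link between the regularization parameter and sparsity invoked in criterion 3 is easy to see in the *q* = 1 case with an orthogonal design, where the minimizer is an explicit soft threshold; the set-up below is our own toy example:

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Orthogonal design: min_a ||a - y||^2 + lam * ||a||_1 is solved coordinatewise
# by a_j = soft_threshold(y_j, lam / 2), so sparsity is monotone in lam.
rng = np.random.default_rng(4)
y = rng.standard_normal(50)

lams = (0.0, 0.5, 1.0, 2.0, 4.0)
nonzeros = [int(np.count_nonzero(soft_threshold(y, lam / 2.0))) for lam in lams]
print(nonzeros)   # nonincreasing: larger parameter, sparser estimator
```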

#### 2.3.4. *q* and the Learning Rate

Generally, the generalization capability of the *l*^{q} regularization schemes 2.2 may depend on the width of the gaussian kernel, the regularization parameter, the behavior of the priors, the sample size *m*, and, obviously, the choice of *q*. Theorem 1 and equation 2.6 demonstrate that the learning schemes defined by equation 2.2 can achieve the asymptotically optimal rates for all choices of *q*. In other words, the choice of *q* has no influence on the learning rate, which means that *q* should be chosen according to other nongeneralization considerations such as smoothness, sparsity, and computational complexity.

This assertion is not surprising if we cast the *l*^{q} regularization schemes (see equation 2.2) into the process of empirical minimization. From the analysis, it is known that the width of the gaussian kernel depicts the complexity of the *l*^{q} empirical unit ball, and the regularization parameter describes the choice of the radius of the *l*^{q} ball. Also, the choice of *q* determines the route along which the hypothesis space changes in order to reach the appropriate capacity. A regularization scheme can be regarded as the following process addressing the bias-variance problem: one first chooses a large hypothesis space to guarantee a small approximation error and then shrinks the capacity of the hypothesis space until the sample error and approximation error are asymptotically identical. From Figure 1, we see that *l*^{q} regularization schemes with different *q* may possess different shrinking paths and thus yield estimators with different attributes. Figure 1 also shows that by appropriately tuning the regularization parameter (the radius of the *l*^{q} empirical ball), we can always obtain *l*^{q} regularized estimators with similar learning rates for all *q*. In a sense, it can be concluded that the learning rate of *l*^{q} regularization learning is independent of the choice of *q*.

### 2.4. Comparisons

In this section, we compare theorem 1 with related work to show the novelty of our result. We divide the comparisons into three categories. First, we illustrate the difference between learning in RKHS and in SDHS associated with the gaussian kernel. Then we compare our result with other results on coefficient-based regularization in SDHS. Finally, we discuss papers concerning the choice of the regularization exponent *q* and show the novelty of our result.

#### 2.4.1. Learning in RKHS and SDHS with Gaussian Kernel

Kernel methods with gaussian kernels are among the standard, state-of-the-art learning strategies. Therefore, the corresponding properties associated with gaussian kernels, such as the covering numbers, RKHS norms, and the form of the elements of the RKHS, were studied by Steinwart and Christmann (2008), Minh (2010), Zhou (2002), and Steinwart, Hush, and Scovel (2006). Based on these analyses, the learning capabilities of gaussian kernel learning were thoroughly revealed in Eberts and Steinwart (2011), Ye and Zhou (2008), Hu (2011), Steinwart and Scovel (2007), and Xiang and Zhou (2009). For classification, Steinwart and Scovel (2007) showed that the learning rates for support vector machines with hinge loss and gaussian kernels can attain the order of *m*^{−1}. For regression, Eberts and Steinwart (2011) showed that the regularized least squares algorithm with gaussian kernel can achieve an almost optimal learning rate if the smoothness information of the regression function is given.

However, the learning capability of the coefficient-based regularization schemes 2.2 remains open. It should be stressed that the roles of the regularization terms in equations 2.2 and 1.2 are distinct even though the solutions to these two schemes are identical for *q* = 2. More specifically, without the regularization term, there are infinitely many solutions to the least squares problem in the gaussian RKHS. In order to obtain an expected and unique solution, we should impose a certain structure on the solution, which can be achieved by introducing a specified regularization term. Therefore, the regularized least squares algorithm 1.2 can be regarded as a structural risk minimization strategy, since it chooses the solution with the simplest structure among the infinitely many solutions. However, due to the positive definiteness of the gaussian kernel, there is a unique solution to equation 2.2 even without the regularization term, and the role of regularization can be regarded as improving the generalization capability only. The introduction of regularization in equation 1.2 can be regarded as a passive choice, while that in equation 2.2 is an active operation.

This difference requires different techniques to analyze the performance of strategy 2.2. Indeed, the most widely used method was proposed in Wu and Zhou (2008). Based on Wu and Zhou (2005), Wu and Zhou (2008) pointed out that the generalization error can be divided into three terms: approximation error, sample error, and hypothesis error. Basically, the generalization error can be bounded by the following three steps:

Find an alternative estimator outside the SDHS to approximate the regression function.

Find an approximation of the alternative function in SDHS and deduce the hypothesis error.

Bound the sample error, which describes the distance between the approximant in SDHS and the *l*^{q} regularizer.

In this letter, we also employ this technique to analyze the performance of learning strategy 2.5. We show that, similar to the regularized least squares algorithm (Eberts & Steinwart, 2011), the *l*^{q} coefficient-based regularization scheme 2.2 can also achieve an almost optimal learning rate if the smoothness information of the regression function is given.

#### 2.4.2. *l*^{q} Regularizer with Fixed *q*


Several papers focus on the generalization capability analysis of the *l*^{q} regularization scheme 1.1. Wu and Zhou (2008) was the first, to the best of our knowledge, to show the mathematical foundation of learning algorithms in SDHS. They claimed that the data-dependent nature of the algorithm leads to an extra hypothesis error, which is essentially different from regularization schemes with sample-independent hypothesis spaces (SIHSs). Based on this, the authors proposed a coefficient-based regularization strategy and conducted a theoretical analysis of the strategy by dividing the generalization error into approximation error, sample error, and hypothesis error. Following their work, Xiao and Zhou (2010) derived a learning rate for the *l*^{1} regularizer by bounding the regularization error, sample error, and hypothesis error, respectively. Their result was improved in Shi et al. (2011) by adopting a concentration technique with *l*^{2} empirical covering numbers to tackle the sample error. For the *l*^{q} regularizers, Tong et al. (2010) deduced an upper bound for the generalization error by using a different method to cope with the hypothesis error. Later, the learning rate of Tong et al. (2010) was further improved in Feng and Lv (2011) by a sharper estimation of the sample error.

In all that research, a spectrum assumption on the regression function and a concentration property of the marginal distribution have to be satisfied. Noting this, Sun and Wu (2011) conducted a generalization capability analysis for the *l*^{2} regularizer by using the spectrum assumption on the regression function only. For the *l*^{1} regularizer, by using a sophisticated functional analysis method, Zhang, Xu, and Zhang (2009) and Song, Zhang, and Hickernell (2013) built the regularized least squares algorithm on the reproducing kernel Banach space (RKBS) and proved that the regularized least squares algorithm in RKBS is equivalent to the *l*^{1} regularizer if the kernel satisfies certain restricted conditions. Following this method, Song and Zhang (2011) deduced a similar learning rate for the *l*^{1} regularizer and eliminated the concentration assumption on the marginal distribution.

To characterize the generalization capability of a learning strategy, the essential generalization bound rather than only an upper bound is desired; that is, we must deduce both lower and upper bounds for the learning strategy and prove that the two are asymptotically identical. Under this circumstance, we can deduce the essential learning capability of the learning scheme. All of the above results for *l*^{q} regularizers with fixed *q* were concerned only with the upper bound. Thus, it is generally difficult to reveal their essential learning capabilities. Nevertheless, as theorem 1 shows, our established learning rate is essential: it can be found in equation 2.6 that, under the stated conditions, the deduced learning rate cannot be essentially improved.

#### 2.4.3. The Choice of *q*

Blanchard, Bousquet, and Massart (2008) were the first, to the best of our knowledge, to focus on the choice of the optimal *q* for the kernel method. Indeed, as far as the sample error is concerned, Blanchard et al. (2008) pointed out that there is an optimal exponent for the support vector machine with hinge loss. Then, Mendelson and Neeman (2010) found that this assertion also holds for the regularized least squares strategy 1.3. That is, as far as the sample error is concerned, regularized least squares may have a design flaw. However, Steinwart et al. (2009) derived a *q*-independent optimal learning rate for strategy 1.3 in the minimax sense. Therefore, they concluded that the RLS algorithm 1.2 has no advantages or disadvantages compared with other values of *q* in equation 1.3 from the statistical point of view.

Since the *l*^{q} coefficient regularization strategy 1.1 is solvable for arbitrary 0 < *q* < ∞, and different *q* may yield estimators with different attributes, studying the dependence between the learning performance of strategy 1.1 and *q* is more interesting. This topic was first studied in Lin, Xu, Zeng, and Fang (2013), where we showed that there is a positive-definite kernel such that the learning rate of the corresponding *l*^{q} regularizer is independent of *q*. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, we study the dependence between the generalization capability and *q* for *l*^{q} regularization learning with the widely used gaussian kernel. Fortunately, we find that a similar conclusion also holds for the gaussian kernel, as witnessed by theorem 1 in this letter.

## 3. Proof of Theorem 1

### 3.1. Error Decomposition

Starting from the regression function on **I**^{d}, we first reflect it to obtain an even extension and then extend periodically; for every point of **R**^{d}, this defines the function *f*_{0} in equation 3.1. Therefore, we have constructed a function defined on all of **R**^{d}. From the definition, it follows that *f*_{0} is an even, continuous, and periodic function with respect to each variable.

Denote the RKHS associated with the gaussian kernel and its corresponding RKHS norm as usual. To prove theorem 1, the following error decomposition strategy is required.
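The construction behind *f*_{0} (reflect the function on [0, 1] to make it even, then extend periodically) can be sketched for *d* = 1; the target function here is our own illustrative choice:

```python
import numpy as np

def f0(f, x):
    """Even, 2-periodic extension of a function f given on [0, 1]."""
    t = np.abs(x) % 2.0                      # evenness handles negative arguments
    t = np.where(t > 1.0, 2.0 - t, t)        # reflect (1, 2] back onto [0, 1)
    return f(t)

f = lambda t: t ** 2                          # any continuous function on [0, 1]
x = np.array([0.3, -0.3, 1.7, 2.3])
print(f0(f, x))                               # all four values equal f(0.3) = 0.09
```

Because the extension reflects before it repeats, no endpoint matching is needed: continuity of `f` on [0, 1] already yields an even, continuous, periodic extension.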

### 3.2. Approximation Error Estimation

To bound the approximation error, the following three lemmas are required:

*Let r>0. If*, *then**satisfies*

there exists **k**_{j,h} such that …; that is, … Since *f*_{0} is even, we can deduce … Hence, by the definition of the modulus of smoothness, we have …, which finishes the proof of lemma 1.

*Let r > 0 and f*_{0} *be defined as in equation 3.1. If …, then …, where C is a constant depending only on d and r.*

It follows from the definition of *f*_{0} that … As …, it follows from lemma 1 that … Then, the same method as that of Eberts and Steinwart (2011) yields that …

Furthermore, it can be easily deduced from Eberts and Steinwart (2011, theorem 2.3) and lemma 1 that lemma 3 holds:

*Let f*_{0} *be defined as in equation 3.1. Then we have … with …*

Lemma 3, together with lemma 2 and …, yields the following approximation error estimation.

### 3.3. Sample Error Estimation

To bound the sample error, we need the following well-known Bernstein inequality (Shi et al., 2011).

With the help of lemma 4, we can provide an upper-bound estimate of the sample error. To this end, the *l*^{2} empirical covering number (Shi et al., 2011) should be introduced. Let … be a pseudo-metric space and *T* a subset. For every ε > 0, the covering number of *T* with respect to ε and the metric is defined as the minimal number of balls of radius ε whose union covers *T*, that is, … for some …, where …. The *l*^{2}-empirical covering number of a function set is defined by means of the normalized *l*^{2}-metric on the Euclidean space **R**^{d} given in … with … for …

The following two lemmas can be easily deduced from Steinwart and Scovel (2007) and Sun and Wu (2011), respectively.

We are now in a position to deduce an upper-bound estimate for the sample error.

*Let … and … be defined as in equation 2.2. Then for arbitrary … and arbitrary …, there exists a constant C depending only on d, …, p, and M such that, with confidence at least …, there holds …*

where … is defined on *Z*. Hence … and …, where *z*_{i} ≔ (*x*_{i}, *y*_{i}). Observe that … Therefore, … and … For … and arbitrary …, we have … It follows that …, which together with lemma 5 implies … By lemma 6 with *B* = *c* = 16*M*^{2}, …, we know that for any …, with confidence …, there exists a constant *C* depending only on *d* such that for all …, … holds. Hence, we obtain … Now we turn to estimating *R*_{q}. It follows from the definition of … that … Thus, … On the other hand, …; that is, … Set …, which finishes the proof of proposition 4.

### 3.4. Hypothesis Error Estimation

In this section, we give an estimate of the hypothesis error.

### 3.5. Learning Rate Analysis

For 0 < *q* < 1, if we set …, then … holds with confidence at least …, where *C* is a constant depending only on *d* and *r*.


## Acknowledgments

We are grateful to two anonymous referees, whose careful reading and numerous constructive suggestions have noticeably enhanced the overall quality of this letter. The research was supported by the National 973 Program (2013CB329404) and the Key Program of the National Natural Science Foundation of China (grant 11131006).

## References

… *l*^{q} regularization learning depend on *q*? A negative example

… *l*^{1}-regularizer and data-dependent hypothesis spaces

… *l*^{1} norm II: Error analysis for regularized least square regression

… *l*^{1} norm

… *l*^{p}-coefficient regularization

… *l*^{1}-regularizer

… *L*_{1/2} regularization: A thresholding representation theory and a fast solver

## Author notes

*Corresponding author.