Abstract

Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and lq regularization schemes with $0 < q < \infty$ are central in use. It is known that different values of q lead to estimators with different properties: l2 regularization, say, leads to a smooth estimator, while l1 regularization leads to a sparse estimator. How the generalization capability of lq regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing lq coefficient regularization schemes in the sample-dependent hypothesis space associated with a gaussian kernel can attain the same almost optimal learning rates for all $0 < q < \infty$; that is, the upper and lower bounds of the learning rates for lq regularization learning are asymptotically identical for all $0 < q < \infty$. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other, nongeneralization criteria such as smoothness, computational complexity, or sparsity.

1.  Introduction

Many scientific questions boil down to learning an underlying rule from finitely many input-output samples. Learning means synthesizing a function that can represent or approximate the underlying rule based on the samples. A learning system is normally developed for tackling such a supervised learning problem. Generally a learning system should comprise a hypothesis space, an optimization strategy, and a learning algorithm. The hypothesis space is a family of parameterized functions that regulate the forms and properties of the estimator to be found. The optimization strategy depicts the sense in which the estimator is defined, and the learning algorithm is an inference process to yield the objective estimator. A central question of learning is and will always be how well the synthesized function generalizes to reflect the reality that the given examples purport to show.

A recent trend in supervised learning is to use the kernel approach, which takes a reproducing kernel Hilbert space (RKHS) (Cucker & Smale, 2001) associated with a positive-definite kernel as the hypothesis space. An RKHS is a Hilbert space of functions in which pointwise evaluation is a continuous linear functional. This property makes sampling stable and effective, since the samples available for learning are commonly modeled by point evaluations of the unknown target function. Consequently, various learning schemes based on RKHS, such as regularized least squares (RLS) (Cucker & Smale, 2001; Wu, Ying, & Zhou, 2006; Steinwart, Hush, & Scovel, 2009) and the support vector machine (SVM) (Schölkopf & Smola, 2001; Steinwart & Scovel, 2007), have triggered enormous research activity in the past decade. From the point of view of statistics, the kernel approach has been proved to possess excellent learning capabilities (Wu et al., 2006; Steinwart et al., 2009). From the perspective of implementation, however, kernel methods amount to the following procedure: to deduce an estimator given by a linear combination of finitely many functions, one first tackles the problem in an infinite-dimensional space and then reduces the dimension by an optimization technique. Obviously, the infinite-dimensional nature of the hypothesis space brings many difficulties to implementation and computation in practice.

This phenomenon was first observed in Wu and Zhou (2008), who suggested the use of the sample-dependent hypothesis space (SDHS) to construct the estimators. By the so-called representation theorem in learning theory (Cucker & Smale, 2001), the learning procedure in RKHS can be converted into a problem whose hypothesis space is the set of linear combinations of the kernel functions evaluated at the sample points, with finitely many coefficients. This implies that the generalization capabilities of learning in SDHS are, in a certain sense, not worse than those of learning in RKHS. Furthermore, because SDHS is an m-dimensional linear space, various optimization strategies, such as coefficient-based regularization strategies (Shi, Feng, & Zhou, 2011; Wu & Zhou, 2008) and greedy-type schemes (Barron, Cohen, Dahmen, & DeVore, 2008; Lin, Rong, Sun, & Xu, 2013), can be applied to construct the estimator.

In this letter, we consider the general coefficient-based regularization strategies in SDHS. Let
$$\mathcal{H}_{K,\mathbf{z}} := \left\{ \sum_{i=1}^m a_i K(x, x_i) : a_i \in \mathbb{R},\ i = 1, \ldots, m \right\}$$
be an SDHS, where $x_1, \ldots, x_m$ are the sample inputs and K is a positive-definite kernel. The coefficient-based lq regularization strategy (lq regularizer) takes the form of
$$f_{\mathbf{z},\lambda,q} := \arg\min_{f \in \mathcal{H}_{K,\mathbf{z}}} \left\{ \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \Omega_{\mathbf{z}}^q(f) \right\}, \tag{1.1}$$
where $\lambda = \lambda(m, q) > 0$ is the regularization parameter and $\Omega_{\mathbf{z}}^q(f)$ is defined by
$$\Omega_{\mathbf{z}}^q(f) := \sum_{i=1}^m |a_i|^q \quad \text{for} \quad f = \sum_{i=1}^m a_i K(x, x_i).$$
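To make the scheme concrete, here is a minimal numerical sketch of strategy 1.1 (our illustration, not part of the letter's analysis): the helper names are hypothetical, a generic local optimizer stands in for the specialized solvers discussed below, and for 0 < q < 1 it returns only a local minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X1, X2, sigma):
    # K(x, x') = exp(-||x - x'||^2 / sigma^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def lq_regularizer_fit(X, y, q, lam, sigma):
    """Sketch of equation 1.1: minimize (1/m) * ||K a - y||^2 + lam * sum_i |a_i|^q
    over the coefficients a of f = sum_i a_i K(., x_i) in the SDHS."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    def objective(a):
        resid = K @ a - y
        return resid @ resid / m + lam * np.sum(np.abs(a) ** q)
    # warm start from the closed-form minimizer of the q = 2 coefficient problem
    a0 = np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)
    return minimize(objective, a0, method="Powell").x  # derivative-free local search
```

For q = 2 the warm start is already the exact minimizer; for q ≤ 1, the nondifferentiability of $|a_i|^q$ at zero is what drives coefficients exactly to zero and produces sparsity.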

1.1.  Problem Setting

In practice, the choice of q in equation 1.1 is critical, since it embodies the properties of the anticipated estimator, such as sparsity and smoothness, and also takes other aspects, such as computational complexity and generalization capability, into consideration. For example, for the l2 regularizer, the solution to equation 1.1 is the same as the solution to the regularized least squares (RLS) algorithm in RKHS (Cucker & Smale, 2001),
$$f_{\mathbf{z},\lambda} := \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \|f\|_K^2 \right\}, \tag{1.2}$$
where $\mathcal{H}_K$ is the RKHS associated with the kernel K. Furthermore, the solution can be analytically represented by the kernel function (Cucker & Zhou, 2007). The obtained solution, however, is smooth but not sparse: the nonzero coefficients of the solution are potentially as many as the sampling points if no special treatment is taken. Thus, the l2 regularizer is a good smooth regularizer but not a sparse one. For 0<q<1, there are many algorithms, such as the iteratively reweighted least squares algorithm (Daubechies, DeVore, Fornasier, & Güntürk, 2010) and the iterative half-thresholding algorithm (Xu, Chang, Xu, & Zhang, 2012), for obtaining a sparse approximation of the target function. However, all of these algorithms suffer from the local minimum problem due to their nonconvex nature. For q=1, many algorithms exist, such as the iterative soft-thresholding algorithm (Daubechies, Defrise, & De Mol, 2004), LASSO (Hastie, Tibshirani, & Friedman, 2001; Tibshirani, 1995), and the iteratively reweighted least squares algorithm (Daubechies et al., 2010), that yield sparse estimators of the target function. However, as far as sparsity is concerned, the l1 regularizer is somewhat worse than the lq (0<q<1) regularizer, and as far as training speed is concerned, the l1 regularizer is slower than the l2 regularizer. Thus, different choices of q may lead to estimators with different forms, properties, and attributes. Since the study of generalization capabilities lies at the center of learning theory, we ask the following question: what are the generalization capabilities of the lq regularization schemes 1.1 for $0 < q < \infty$?
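As an illustration of the q = 1 case, the following is a minimal sketch of the iterative soft-thresholding iteration in the spirit of Daubechies et al. (2004), applied to the coefficient problem (the function name and step-size rule are ours):

```python
import numpy as np

def ista_l1(K, y, lam, n_iter=500):
    """Soft-thresholding sketch for min_a (1/m) * ||K a - y||^2 + lam * ||a||_1,
    where K is the kernel matrix (K[i, j] = K(x_i, x_j))."""
    m = len(y)
    step = m / (2 * np.linalg.norm(K, 2) ** 2)   # 1/L, with L the gradient's Lipschitz constant
    a = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ a - y)     # gradient of the data-fidelity term
        u = a - step * grad
        a = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)  # prox of lam * ||.||_1
    return a
```

Each iteration is a gradient step on the fidelity term followed by the soft-thresholding map, which is exactly the proximal operator of the l1 penalty; this is what yields sparse coefficient vectors.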

Answering this question is of great importance, since it uncovers the role of the penalty term that underlies regularization learning strategies. However, it is known that the approximation capability of SDHS depends heavily on the choice of the kernel; it is therefore almost impossible to give a general answer to the question independent of the kernel function. In this letter, we aim to answer the question for the widely used gaussian kernel.

1.2.  Related Work and Our Contribution

There exists a large body of theoretical analysis of kernel methods, much of which is treated in Cucker and Smale (2001), Cucker and Zhou (2007), Eberts and Steinwart (2011), Caponnetto and De Vito (2007), Steinwart and Scovel (2007), and Schölkopf and Smola (2001). In particular, various results on the learning rate of algorithm 1.2 are available. Recent work by Mendelson and Neeman (2010) has suggested that the squared-norm penalty $\|f\|_K^2$ may not be the optimal choice from a statistical point of view; that is, the RLS strategy may have a design flaw. There may be a more appropriate choice of q in the following optimization strategy,
$$\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \|f\|_K^q \right\}, \tag{1.3}$$
such that the performance of the learning process can be improved. To this end, Steinwart, Hush, and Scovel (2009) derived a q-independent optimal learning rate for equation 1.3 in the minimax sense. Therefore, they concluded that the RLS strategy 1.2 has no advantages or disadvantages compared with other values of q in equation 1.3, from the viewpoint of learning theory. However, even without such a result, it is unclear how to solve equation 1.3 when $q \neq 2$; that is, q = 2 is currently the only feasible case, which makes the RLS strategy the method of choice.

The lq coefficient regularization strategy 1.1 is solvable for arbitrary $0 < q < \infty$. Thus, studying how the learning performance of the strategy varies with q is more interesting. Based on a series of works, such as Feng and Lv (2011), Shi et al. (2011), Sun and Wu (2011), Tong, Chen, and Yang (2010), Wu and Zhou (2008), and Xiao and Zhou (2010), we showed in a previous paper (Lin, Xu, Zeng, & Fang, 2013) that there is a positive-definite kernel such that the learning rate of the corresponding lq regularizer is independent of q. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, seeking kernels that possess a similar property and can be easily implemented is worthy of investigation.

We show in this letter that the well-known gaussian kernel possesses a similar property; that is, as far as the learning rate is concerned, all lq regularization schemes (see equation 1.1) associated with the gaussian kernel for $0 < q < \infty$ can realize the same almost optimal theoretical rates. In other words, the influence of q on the learning rate of the learning scheme 1.1 with the gaussian kernel is negligible. Here, we emphasize that our conclusion rests on attaining almost the same optimal learning rate by appropriately tuning the regularization parameter $\lambda$. Thus, in applications, q can be arbitrarily specified, or specified merely by other criteria (e.g., complexity, sparsity).

1.3.  Organization

The remainder of the letter is organized as follows. In section 2, after reviewing some basic concepts of statistical learning theory, we present the main results, that is, the learning rates of lq regularizers associated with the gaussian kernel. In section 3, we prove the main result.

2.  Generalization Capabilities of lq Coefficient Regularization Learning

2.1.  A Brief Review of Statistical Learning Theory

Let M > 0, let $X \subseteq \mathbb{R}^d$ be an input space, and let $Y \subseteq [-M, M]$ be an output space. Let $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m$ be a random sample set of finite size m, drawn independently and identically according to an unknown distribution $\rho$ on $Z := X \times Y$, which admits the decomposition
$$\rho(x, y) = \rho_X(x) \, \rho(y|x).$$
Suppose further that $f: X \to Y$ is a function used to model the correspondence between x and y, as induced by $\rho$. A natural measure of the error incurred by using f for this purpose is the generalization error, defined by
$$\mathcal{E}(f) := \int_Z (f(x) - y)^2 \, d\rho,$$
which is minimized by the regression function (Cucker & Smale, 2001), defined by
$$f_\rho(x) := \int_Y y \, d\rho(y|x).$$
However, we do not know this ideal minimizer, because $\rho$ is unknown. Instead, we can turn to the random examples $\mathbf{z}$ sampled according to $\rho$.
Let $L^2_{\rho_X}$ be the Hilbert space of square-integrable functions defined on X, with the norm denoted by $\|\cdot\|_\rho$. Under the assumption $f_\rho \in L^2_{\rho_X}$, it is known that for every $f \in L^2_{\rho_X}$, there holds
$$\mathcal{E}(f) - \mathcal{E}(f_\rho) = \|f - f_\rho\|_\rho^2. \tag{2.1}$$
The task of the least squares regression problem is then to construct a function $f_{\mathbf{z}}$ that approximates $f_\rho$, in the sense of the norm $\|\cdot\|_\rho$, using the finitely many samples $\mathbf{z}$.
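For completeness, equation 2.1 follows from a standard one-line computation: expanding the square and using the decomposition of $\rho$,
$$\mathcal{E}(f) - \mathcal{E}(f_\rho) = \int_X \int_Y \left[ (f(x) - y)^2 - (f_\rho(x) - y)^2 \right] d\rho(y|x) \, d\rho_X = \int_X (f(x) - f_\rho(x))^2 \, d\rho_X = \|f - f_\rho\|_\rho^2,$$
since the integrand equals $(f(x) - f_\rho(x))(f(x) + f_\rho(x) - 2y)$ and the definition $f_\rho(x) = \int_Y y \, d\rho(y|x)$ makes the cross term vanish.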

2.2.  Learning Rate Analysis

Let
$$K_\sigma(x, x') := \exp\!\left( -\frac{\|x - x'\|_2^2}{\sigma^2} \right), \quad x, x' \in X,$$
be the gaussian kernel, where $\sigma > 0$ is called the width of $K_\sigma$. The SDHS associated with $K_\sigma$ is then defined by
$$\mathcal{H}_{\sigma,\mathbf{z}} := \left\{ \sum_{i=1}^m a_i K_\sigma(x, x_i) : a_i \in \mathbb{R} \right\}.$$
We are concerned with the following lq coefficient-based regularization strategy,
$$f_{\mathbf{z},\lambda,q} := \arg\min_{f \in \mathcal{H}_{\sigma,\mathbf{z}}} \left\{ \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \sum_{i=1}^m |a_i|^q \right\}, \tag{2.2}$$
where $f = \sum_{i=1}^m a_i K_\sigma(x, x_i)$ and $0 < q < \infty$. The main purpose of this letter is to derive an optimal bound on the generalization error,
$$\mathcal{E}(\pi_M f_{\mathbf{z},\lambda,q}) - \mathcal{E}(f_\rho), \tag{2.3}$$
for all $0 < q < \infty$, where $\pi_M$ denotes the clipping operator defined below.
Generally, it is impossible to obtain a nontrivial rate of convergence for equation 2.3 without imposing strong restrictions on $\rho$ (Györfy, Kohler, Krzyzak, & Walk, 2002). A large portion of learning theory therefore proceeds under the condition that $f_\rho$ lies in a known set $\Theta$. A typical choice of $\Theta$ is a compact set determined by smoothness conditions (DeVore, Kerkyacharian, Picard, & Temlyakov, 2006), and such a choice is also adopted in our analysis. Let $X = I^d := [0, 1]^d$, let c0 be a positive constant, and let $r = u + v$ for some $u \in \mathbb{N}_0$ and $0 < v \le 1$. A function $f: X \to \mathbb{R}$ is said to be (r, c0)-smooth if for every $\alpha = (\alpha_1, \ldots, \alpha_d)$ with $\alpha_j \in \mathbb{N}_0$ and $\sum_{j=1}^d \alpha_j = u$, the partial derivative $\frac{\partial^u f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}$ exists and satisfies
$$\left| \frac{\partial^u f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}(x) - \frac{\partial^u f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}(x') \right| \le c_0 \|x - x'\|_2^v, \quad x, x' \in X.$$
Denote by $\mathcal{F}^{(r,c_0)}$ the set of all (r, c0)-smooth functions. In our analysis, we assume the prior information $f_\rho \in \mathcal{F}^{(r,c_0)}$ is known.
Let $\pi_M t$ denote the clipped value of t at $\pm M$, that is, $\pi_M t := \min\{M, |t|\} \, \mathrm{sgn}(t)$, where $\mathrm{sgn}(t)$ represents the signum function of t. Then it is obvious (Györfy et al., 2002; Steinwart et al., 2009; Zhou & Jetter, 2006) that for all $t \in \mathbb{R}$ and all $y \in [-M, M]$, there holds
$$(\pi_M t - y)^2 \le (t - y)^2.$$

The following theorem shows the learning capability of the learning strategy 2.2 for arbitrary $0 < q < \infty$.

Theorem 1. 
Let r > 0, c0 > 0, $0 < q < \infty$, $0 < \delta < 1$, and let $f_{\mathbf{z},\lambda,q}$ be defined as in equation 2.2. If $f_\rho \in \mathcal{F}^{(r,c_0)}$ and
formula
then, for arbitrary $\varepsilon > 0$, with probability at least $1 - \delta$, there holds
formula
2.4
where C is a constant depending only on d, r, c0, q, and M.

2.3.  Remarks

In this subsection, we give some explanations of and remarks on theorem 1: remarks on the learning rate, the choice of the width of the gaussian kernel, the role of the regularization parameter, and the relationship between q and the generalization capability.

2.3.1.  Learning Rate Analysis

It can be found in Györfy et al. (2002) and DeVore et al. (2006) that if we know only $f_\rho \in \mathcal{F}^{(r,c_0)}$, then the learning rates of all learning strategies based on m samples cannot be faster than $m^{-2r/(2r+d)}$. More specifically, let $\mathcal{M}$ be the class of all Borel measures $\rho$ on Z such that $f_\rho \in \mathcal{F}^{(r,c_0)}$. We enter into a competition over all estimators $\Phi_m : \mathbf{z} \mapsto f_{\mathbf{z}}$ and define
$$e_m := \inf_{\Phi_m} \sup_{\rho \in \mathcal{M}} E\left( \|f_\rho - f_{\mathbf{z}}\|_\rho^2 \right).$$
It is easy to see that $e_m$ quantitatively measures the quality of $f_{\mathbf{z}}$. Then it can be found in Györfy et al. (2002) or DeVore et al. (2006) that
$$e_m \ge C m^{-2r/(2r+d)}, \tag{2.5}$$
where C is a constant depending only on M, d, c0, and r.
Modulo the arbitrarily small positive number $\varepsilon$, the learning rate established in equation 2.4 is asymptotically optimal in the minimax sense. If we note the identity
$$\mathcal{E}(\pi_M f_{\mathbf{z},\lambda,q}) - \mathcal{E}(f_\rho) = \|\pi_M f_{\mathbf{z},\lambda,q} - f_\rho\|_\rho^2,$$
then there holds
$$C_1 m^{-\frac{2r}{2r+d}} \le \sup_{f_\rho \in \mathcal{F}^{(r,c_0)}} E\left( \|\pi_M f_{\mathbf{z},\lambda,q} - f_\rho\|_\rho^2 \right) \le C_2 m^{-\frac{2r}{2r+d} + \varepsilon}, \tag{2.6}$$
where C1 and C2 are constants depending only on r, c0, M, and d.

Due to equation 2.6, we know that learning strategy 2.2 is an almost optimal method if the smoothness information of $f_\rho$ is known. It should be noted that this optimality is stated in the setting of worst-case analysis. That is, for a concrete $f_\rho$, the learning rate of strategy 2.2 may be much faster than the worst-case rate $m^{-2r/(2r+d)}$ (for d = 1 and r = 2, for example, this worst-case rate is of order $m^{-4/5}$). The notion of optimal learning rate refers to the whole class $\mathcal{F}^{(r,c_0)}$ rather than to a fixed regression function.

2.3.2.  Choice of the Width

The width $\sigma$ of the gaussian kernel determines both the approximation capability and the complexity of the corresponding RKHS, and thus plays a crucial role in the learning process. Admittedly, the complexity of the gaussian RKHS is monotonically decreasing as a function of $\sigma$. Thus, due to the so-called bias and variance problem in learning theory (Cucker & Zhou, 2007), there exists an optimal choice of $\sigma$ for the gaussian kernel method. Since SDHS is essentially an m-dimensional linear space while the gaussian RKHS is an infinite-dimensional space for arbitrary width $\sigma$ (Minh, 2010), the complexity of the gaussian SDHS may, at first glance, seem smaller than that of the gaussian RKHS. Hence, there naturally arises the following question: Does the optimal $\sigma$ of gaussian SDHS learning coincide with that of gaussian RKHS learning? Theorem 1, together with Eberts and Steinwart (2011), demonstrates that the optimal widths of the above two strategies are asymptotically identical. That is, if the smoothness information of the regression function is known, then the optimal choices of $\sigma$ for both learning strategies 2.2 and 1.2 are the same. This phenomenon can be explained as follows. Let $B_\sigma$ be the unit ball of the gaussian RKHS $\mathcal{H}_\sigma$ and let $B_1$ be the l2 empirical unit ball. Denote by $\mathcal{N}_2(B_\sigma, \varepsilon)$ the l2-empirical covering number (Shi et al., 2011), whose definition can be found in the descriptions above lemma 5 in this letter. Then it can be found in Steinwart and Scovel (2007) that for any $\varepsilon > 0$, there holds
formula
2.7
where p is an arbitrary real number in (0, 2] and $\mu$ is an arbitrary positive number. For the gaussian SDHS $\mathcal{H}_{\sigma,\mathbf{z}}$, on one hand, we can use the fact that $\mathcal{H}_{\sigma,\mathbf{z}} \subset \mathcal{H}_\sigma$ and deduce
formula
2.8
where $c_{p,\mu,d}$ is a constant depending only on p, $\mu$, and d. On the other hand, it follows from Györfy et al. (2002) that
formula
2.9
where the finite-dimensional property of $\mathcal{H}_{\sigma,\mathbf{z}}$ is used. Therefore, it should be highlighted that the finite-dimensional bound 2.9 improves on equation 2.8 only if
formula
which always implies that $\varepsilon$ is very small (it may be smaller than 1/m).

However, to guarantee a good approximation capability of $\mathcal{H}_{\sigma,\mathbf{z}}$, it can be deduced from Lin, Liu, Fang, and Xu (2014) that $\varepsilon$ cannot be very small. Thus, we use equation 2.8 rather than 2.9 to describe the complexity of $\mathcal{H}_{\sigma,\mathbf{z}}$. In view of equation 2.7, when $\varepsilon$ is not very small (compared with 1/m), the complexity of $\mathcal{H}_{\sigma,\mathbf{z}}$ asymptotically equals that of $\mathcal{H}_\sigma$. Under this circumstance, and recalling that the optimal widths of the learning strategies 1.2 and 2.2 may not be very small, the capacities of $\mathcal{H}_{\sigma,\mathbf{z}}$ and $\mathcal{H}_\sigma$ are asymptotically identical. Therefore, the optimal choice of $\sigma$ in equation 2.2 is the same as that in equation 1.2.

2.3.3.  Importance of the Regularization Term

We can regard the regularized learning model as a collection of empirical minimization problems. Indeed, let B be the unit ball of a space related to the regularization term, and consider the empirical minimization problem in rB for some r > 0. As r increases, the approximation error of rB decreases while its sample error increases. We can achieve a small total error by choosing the correct value of r and performing empirical minimization in rB so that the approximation error and sample error are asymptotically identical. The role of the regularization term is to force the algorithm to choose the correct value of r for empirical minimization (Mendelson & Neeman, 2010) and thereby to provide a method of solving the bias-variance problem. Therefore, the main role of the regularization term is to control the capacity of the hypothesis space.

Compared with the regularized least squares strategy 1.2, a consensus is that the lq coefficient regularization scheme 2.2 may bring certain additional benefits, such as sparsity for suitable choices of q (Shi et al., 2011). However, this assertion may not always be true.

There are usually two criteria for choosing the regularization parameter $\lambda$ in such a setting: (1) the approximation error should be as small as possible, and (2) the sample error should be as small as possible. Under criterion 1, $\lambda$ should not be too large, while under criterion 2, $\lambda$ cannot be too small. As a consequence, there is an uncertainty principle in the choice of the $\lambda$ that is optimal for generalization. Moreover, if sparsity of the estimator is needed, another criterion should also be taken into consideration: (3) the estimator should be as sparse as possible.

This sparsity criterion requires that $\lambda$ be large enough, since the number of nonzero coefficients of the estimator decreases monotonically with $\lambda$. It should be pointed out that the $\lambda$ that is optimal for generalization may be smaller than the smallest $\lambda$ that guarantees sparsity. Therefore, to obtain a sparse estimator, the generalization capability may degrade in a certain sense. In summary, the lq coefficient regularization scheme may endow the estimator with certain additional attributes without sacrificing the generalization capability, but not always; whether it does may depend on the distribution $\rho$, the choice of q, and the samples. In a word, the lq coefficient regularization scheme 2.2 provides the possibility of bringing other advantages without degrading the generalization capability, and it may therefore outperform the classical kernel methods.
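In practice, the trade-off among criteria 1 to 3 can be probed by a simple validation-based grid search over $\lambda$. The sketch below is ours (hypothetical helper names; it reuses gaussian_kernel and lq_regularizer_fit from section 1 and assumes M = 1 for the clipping); it scores each candidate by validation error plus an optional sparsity weight tau.

```python
import numpy as np

def choose_lambda(X_tr, y_tr, X_val, y_val, q, sigma, lams, tau=0.0):
    """Grid search over the regularization parameter, trading generalization
    (validation error of the clipped estimator) against sparsity (tau > 0)."""
    best_lam, best_score = None, np.inf
    for lam in lams:
        a = lq_regularizer_fit(X_tr, y_tr, q, lam, sigma)
        K_val = gaussian_kernel(X_val, X_tr, sigma)
        pred = np.clip(K_val @ a, -1.0, 1.0)      # the projection pi_M with M = 1
        err = np.mean((pred - y_val) ** 2)
        sparsity = np.count_nonzero(np.abs(a) > 1e-8) / len(a)
        score = err + tau * sparsity
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam
```

Setting tau = 0 recovers the purely generalization-driven choice; increasing tau pushes the selected $\lambda$ upward, illustrating the tension described above.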

2.3.4.  q and the Learning Rate

Generally, the generalization capability of the lq regularization scheme 2.2 may depend on the width $\sigma$ of the gaussian kernel, the regularization parameter $\lambda$, the behavior of the priors, the size m of the sample, and, obviously, the choice of q. Theorem 1 and equation 2.6 demonstrate that the learning schemes defined by equation 2.2 can achieve the asymptotically optimal rate for all choices of q. In other words, the choice of q has no influence on the learning rate, which means that q should be chosen according to other, nongeneralization considerations such as smoothness, sparsity, and computational complexity.

This assertion is not surprising if we cast the lq regularization schemes (see equation 2.2) into the process of empirical minimization. From the analysis above, it is known that the width of the gaussian kernel describes the complexity of the lq empirical unit ball, and the regularization parameter describes the choice of the radius of the lq ball. The choice of q, in turn, determines the route along which the hypothesis space with the appropriate capacity is sought. According to the bias and variance problem, a regularization scheme can be regarded as the following process: one first chooses a large hypothesis space to guarantee a small approximation error and then shrinks the capacity of the hypothesis space until the sample error and approximation error are asymptotically identical. From Figure 1, we see that lq regularization schemes with different q may possess different shrinking routes and hence yield estimators with different attributes. Figure 1 also shows that by appropriately tuning the regularization parameter (the radius of the lq empirical ball), we can always obtain lq regularizer estimators with similar learning rates for all $0 < q < \infty$. In this sense, the learning rate of lq regularization learning is independent of the choice of q.

Figure 1:

The routes of the change of l2, l1, and l1/2 regularizers, respectively.

2.4.  Comparisons

In this section, we compare theorem 1 with related work to show the novelty of our result. We divide the comparisons into three categories. First, we illustrate the difference between learning in RKHS and in SDHS associated with the gaussian kernel. Then we compare our result with other results on coefficient-based regularization in SDHS. Finally, we discuss papers concerning the choice of the regularization exponent q and show the novelty of our result.

2.4.1.  Learning in RKHS and SDHS with Gaussian Kernel

Kernel methods with gaussian kernels are among the standard, state-of-the-art learning strategies. Accordingly, the corresponding properties, such as the covering numbers, RKHS norms, and form of the elements of the RKHS associated with gaussian kernels, were studied by Steinwart and Christmann (2008), Minh (2010), Zhou (2002), and Steinwart, Hush, and Scovel (2006). Based on these analyses, the learning capabilities of gaussian kernel learning were thoroughly revealed in Eberts and Steinwart (2011), Ye and Zhou (2008), Hu (2011), Steinwart and Scovel (2007), and Xiang and Zhou (2009). For classification, Steinwart and Scovel (2007) showed that the learning rates for support vector machines with hinge loss and gaussian kernels can attain the order of $m^{-1}$. For regression, Eberts and Steinwart (2011) showed that the regularized least squares algorithm with a gaussian kernel can achieve an almost optimal learning rate if the smoothness information of the regression function is given.

However, the learning capability of the coefficient-based regularization scheme 2.2 remains open. It should be stressed that the roles of the regularization terms in equations 2.2 and 1.2 are distinct, even though the solutions to these two schemes are identical for q=2. More specifically, without the regularization term, there are infinitely many solutions to the least squares problem in the gaussian RKHS. In order to obtain an expected and unique solution, we must impose a certain structure on the solution, which can be achieved by introducing a specified regularization term. Therefore, the regularized least squares algorithm 1.2 can be regarded as a structural risk minimization strategy, since it chooses the solution with the simplest structure among the infinitely many solutions. In contrast, due to the positive definiteness of the gaussian kernel, there is a unique solution to equation 2.2 even with $\lambda = 0$, and the role of the regularization can be regarded as improving the generalization capability only. The introduction of regularization in equation 1.2 can be regarded as a passive choice, while that in equation 2.2 is an active operation.

This difference requires different techniques to analyze the performance of strategy 2.2. Indeed, the most widely used method was proposed in Wu and Zhou (2008). Based on Wu and Zhou (2005), Wu and Zhou (2008) pointed out that the generalization error can be divided into three terms: approximation error, sample error, and hypothesis error. Basically, the generalization error can be bounded by the following three steps (a schematic summary follows the list):

  1. Find an alternative estimator outside the SDHS to approximate the regression function.

  2. Find an approximation of the alternative function in SDHS and deduce the hypothesis error.

  3. Bound the sample error, which describes the distance between the approximant in SDHS and the lq regularizer.
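Schematically, writing $\mathcal{D}$, $\mathcal{S}$, and $\mathcal{P}$ for the approximation, sample, and hypothesis errors of proposition 1 below (our shorthand here; the precise definitions appear in section 3.1), the three steps amount to the splitting
$$\mathcal{E}(\pi_M f_{\mathbf{z},\lambda,q}) - \mathcal{E}(f_\rho) \le \mathcal{D}(\lambda) + \mathcal{S}(\mathbf{z}, \lambda, q) + \mathcal{P}(\mathbf{z}, \lambda, q),$$
where step 1 controls $\mathcal{D}$, step 2 controls $\mathcal{P}$, and step 3 controls $\mathcal{S}$.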

In this letter, we also employ this technique to analyze the performance of learning strategy 2.2. We show that, similar to the regularized least squares algorithm (Eberts & Steinwart, 2011), the lq coefficient-based regularization scheme 2.2 can also achieve an almost optimal learning rate if the smoothness information of the regression function is given.

2.4.2.  lq Regularizer with Fixed q

Several papers focus on the generalization capability analysis of the lq regularization scheme 1.1. Wu and Zhou (2008) were the first, to the best of our knowledge, to show the mathematical foundation of learning algorithms in SDHS. They claimed that the data-dependent nature of the algorithm leads to an extra hypothesis error, which is essentially different from regularization schemes with sample-independent hypothesis spaces (SIHSs). Based on this, the authors proposed a coefficient-based regularization strategy and conducted a theoretical analysis of it by dividing the generalization error into approximation error, sample error, and hypothesis error. Following their work, Xiao and Zhou (2010) derived a learning rate for the l1 regularizer by bounding the regularization error, sample error, and hypothesis error, respectively. Their result was improved in Shi et al. (2011) by adopting a concentration technique with l2 empirical covering numbers to tackle the sample error. For lq regularizers, Tong et al. (2010) deduced an upper bound for the generalization error by using a different method to cope with the hypothesis error. Later, the learning rate of Tong et al. (2010) was further improved in Feng and Lv (2011) by a sharper estimate of the sample error.

In all of that research, a spectrum assumption on the regression function and a concentration property of the marginal distribution $\rho_X$ must be satisfied. Noting this, Sun and Wu (2011) conducted a generalization capability analysis for the l2 regularizer using the spectrum assumption on the regression function only. For the l1 regularizer, by using a sophisticated functional analysis method, Zhang, Xu, and Zhang (2009) and Song, Zhang, and Hickernell (2013) built the regularized least squares algorithm on the reproducing kernel Banach space (RKBS) and proved that the regularized least squares algorithm in RKBS is equivalent to the l1 regularizer if the kernel satisfies some restricted conditions. Following this method, Song and Zhang (2011) deduced a similar learning rate for the l1 regularizer and eliminated the concentration assumption on the marginal distribution.

To characterize the generalization capability of a learning strategy, an essential generalization bound rather than an upper bound alone is desired; that is, we must deduce both lower and upper bounds for the learning strategy and prove that the two are asymptotically identical. Under this circumstance, we can essentially determine the learning capability of the learning scheme. All of the above results for lq regularizers with fixed q concerned only the upper bound, so it is generally difficult to reveal the essential learning capabilities from them. Nevertheless, as shown by theorem 1, our established learning rate is essential: it can be seen from equation 2.6 that if $f_\rho \in \mathcal{F}^{(r,c_0)}$, the deduced learning rate cannot be essentially improved.

2.4.3.  The Choice of q

Blanchard, Bousquet, and Massart (2008) were the first, to the best of our knowledge, to focus on the choice of the optimal q for the kernel method. Indeed, as far as the sample error is concerned, Blanchard et al. (2008) pointed out that there is an optimal exponent q for the support vector machine with hinge loss. Mendelson and Neeman (2010) then found that this assertion also holds for the regularized least squares strategy 1.3; that is, as far as the sample error is concerned, regularized least squares may have a design flaw. However, Steinwart et al. (2009) derived a q-independent optimal learning rate for strategy 1.3 in a minimax sense. They therefore concluded that the RLS algorithm 1.2 has no advantages or disadvantages compared with other values of q in equation 1.3 from the statistical point of view.

Since the lq coefficient regularization strategy 1.1 is solvable for arbitrary $0 < q < \infty$, and different q may endow the estimator with different attributes, studying how the learning performance of strategy 1.1 depends on q is more interesting. This topic was first studied in Lin, Xu, Zeng, and Fang (2013), where we showed that there is a positive-definite kernel such that the learning rate of the corresponding lq regularizer is independent of q. However, the kernel constructed in Lin, Xu, Zeng, and Fang (2013) cannot be easily formulated in practice. Thus, we study here the dependence on q of the generalization capability of lq regularization learning with the widely used gaussian kernel. Fortunately, a similar conclusion also holds for the gaussian kernel, as witnessed by theorem 1 in this letter.

3.  Proof of Theorem 1

3.1.  Error Decomposition

For an arbitrary $f_\rho \in \mathcal{F}^{(r,c_0)}$, we construct an extension of it to $\mathbb{R}^d$ as follows. To obtain a function defined on $[-1, 1]^d$, we can define
formula
for arbitrary $x \in [-1, 1]^d$. Finally, for every $x \in \mathbb{R}^d$, we define
formula
Therefore, we have constructed a function defined on $\mathbb{R}^d$. From the definition, it follows that this extension is an even, continuous, and periodic function (with period 2) with respect to each variable.
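As a computational illustration of this extension (a sketch under our own reflection formula, which realizes the stated evenness and 2-periodicity but need not coincide with the letter's exact definitions):

```python
import numpy as np

def even_periodic_extension(f, x):
    """Evaluate an even, 2-periodic extension to R^d of a function f
    defined on [0, 1]^d, obtained by coordinatewise reflection."""
    x = np.asarray(x, dtype=float)
    t = np.abs(x) % 2.0                  # evenness: depends on x only through |x|
    t = np.where(t > 1.0, 2.0 - t, t)    # fold [1, 2] back onto [0, 1]
    return f(t)
```

On $[0, 1]^d$ the extension reproduces f itself, and continuity is preserved because the folds agree at the boundary points.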
In order to give an error decomposition strategy for the generalization error, we construct a function f0 as follows. Define
formula
3.1
where
formula

Denote by $\mathcal{H}_\sigma$ the RKHS associated with the gaussian kernel $K_\sigma$, and by $\|\cdot\|_{\mathcal{H}_\sigma}$ its corresponding RKHS norm. To prove theorem 1, the following error decomposition strategy is required.

Proposition 1. 
Let $f_{\mathbf{z},\lambda,q}$ and f0 be defined as in equations 2.2 and 3.1, respectively. Then we have
formula
where $\mathcal{E}_{\mathbf{z}}(f) := \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2$ denotes the empirical error.
Upon making the shorthand notations
formula
and
formula
for the approximation error, sample error, and hypothesis error, respectively, we have
formula
3.2

3.2.  Approximation Error Estimation

Let $A \subseteq \mathbb{R}^d$. Denote by C(A) the space of continuous functions defined on A, endowed with the norm $\|f\|_{C(A)} := \sup_{x \in A} |f(x)|$. Denote by
formula
the rth modulus of smoothness (DeVore & Lorentz, 1993), where the rth difference $\Delta_h^r f$ is defined by
formula
for $h \in \mathbb{R}^d$ and $x$ such that $x, x + rh \in A$. It is well known (DeVore & Lorentz, 1993) that
formula
3.3
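As a simple orienting instance of these definitions (our example): for d = 1, r = 1, and an f that is Lipschitz on A with constant c0,
$$\omega_1(f, t) = \sup_{0 < \|h\|_2 \le t} \left\| \Delta_h^1 f \right\|_{C(A)} \le c_0 t,$$
which is the kind of bound that equation 3.3 provides for general r and the smoothness class $\mathcal{F}^{(r,c_0)}$.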

To bound the approximation error, the following three lemmas are required:

Lemma 1. 

Let r > 0. If $f_\rho \in \mathcal{F}^{(r,c_0)}$, then its extension is even and 2-periodic in each variable, is continuous, and satisfies the same modulus-of-smoothness bound as $f_\rho$.

Proof. 
Based on the definition of the extension, it suffices to prove the third assertion. To this end, for an arbitrary $x \in \mathbb{R}^d$, noting that the period of the extension with respect to each variable is 2, there exists a $k_{j,h}$ such that
formula
Since the extension is even, we can deduce
formula
Hence, by the definition of the modulus of smoothness, we have
formula
which finishes the proof of lemma 1.
Lemma 2.
Let r > 0 and let f0 be defined as in equation 3.1. If $f_\rho \in \mathcal{F}^{(r,c_0)}$, then
formula
where C is a constant depending only on d and r.
Proof. 
It follows from the definition of f0 that
formula
As
formula
it follows from lemma 1 that
formula
Then, the same method as that of Eberts and Steinwart (2011) yields that
formula

Furthermore, it can be easily deduced from Eberts and Steinwart (2011, theorem 2.3) and lemma 1 that lemma 3 holds:

Lemma 3.
Let f0 be defined as in equation 3.1. Then we have $f_0 \in \mathcal{H}_\sigma$, with
formula

Lemma 3, together with lemma 2 and equation 3.3, yields the following approximation error estimate.

Proposition 2. 
Let r > 0. If $f_\rho \in \mathcal{F}^{(r,c_0)}$, then
formula
where C is a constant depending only on d, c0, and r.

3.3.  Sample Error Estimation

In this section, we bound the sample error. Using the shorthand notations
formula
and
formula
we have
formula
3.4

To bound the first of these terms, we need the following well-known Bernstein inequality (Shi et al., 2011).

Lemma 4. 
Let $\xi$ be a random variable on a probability space Z with variance $\sigma_\xi^2$, satisfying $|\xi - E\xi| \le M_\xi$ almost surely for some constant $M_\xi$. Then for any $0 < \delta < 1$, with confidence $1 - \delta$, we have
$$\frac{1}{m} \sum_{i=1}^m \xi(z_i) - E\xi \le \frac{2 M_\xi \log(1/\delta)}{3m} + \sqrt{\frac{2 \sigma_\xi^2 \log(1/\delta)}{m}}.$$
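As a quick numerical sanity check of this bound (our simulation, assuming the one-sided form reconstructed above), one can verify that the empirical failure rate stays below $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, M_xi, var_xi, delta = 200, 1.0, 1.0 / 3.0, 0.05
# xi uniform on [-1, 1]: E xi = 0, variance 1/3, |xi - E xi| <= 1
means = rng.uniform(-1.0, 1.0, size=(20000, m)).mean(axis=1)
bound = 2 * M_xi * np.log(1 / delta) / (3 * m) + np.sqrt(2 * var_xi * np.log(1 / delta) / m)
print((means > bound).mean())  # empirical failure rate; should be at most delta = 0.05
```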

With the help of lemma 4, we can provide an upper bound estimate for the first term of the sample error.

Proposition 3. 
For any $0 < \delta < 1$, with confidence $1 - \delta$, there holds
formula
Proof. 
Let the random variable $\xi$ on Z be defined by
formula
Since $|y| \le M$ and the function values involved are bounded by M almost everywhere, we have
formula
and almost surely
formula
Moreover, we have
formula
which implies that the variance of $\xi$ can be bounded accordingly. Applying lemma 4, with confidence $1 - \delta$, we have
formula
To bound the second term, an l2 empirical covering number (Shi et al., 2011) should be introduced. Let $(\mathcal{M}, d)$ be a pseudo-metric space and $T \subset \mathcal{M}$ a subset. For every $\varepsilon > 0$, the covering number $\mathcal{N}(T, \varepsilon, d)$ of T with respect to $\varepsilon$ and d is defined as the minimal number of balls of radius $\varepsilon$ whose union covers T, that is,
$$\mathcal{N}(T, \varepsilon, d) := \min \left\{ \ell \in \mathbb{N} : T \subset \bigcup_{j=1}^{\ell} B(t_j, \varepsilon) \right\}$$
for some $\{t_j\}_{j=1}^{\ell} \subset \mathcal{M}$, where $B(t_j, \varepsilon) := \{t \in \mathcal{M} : d(t, t_j) \le \varepsilon\}$. The l2-empirical covering number of a function set is defined by means of the normalized l2-metric $d_2$ on the Euclidean space $\mathbb{R}^m$ given in definition 1, with $d_2(\mathbf{a}, \mathbf{b}) := \left( \frac{1}{m} \sum_{i=1}^m |a_i - b_i|^2 \right)^{1/2}$ for $\mathbf{a} = (a_i)_{i=1}^m, \mathbf{b} = (b_i)_{i=1}^m \in \mathbb{R}^m$.
Definition 1. 
Let $\mathcal{F}$ be a set of functions on X, $\mathbf{x} = (x_i)_{i=1}^m$, and
$$\mathcal{F}|_{\mathbf{x}} := \{ (f(x_1), \ldots, f(x_m)) : f \in \mathcal{F} \} \subset \mathbb{R}^m.$$
Set $\mathcal{N}_{2,\mathbf{x}}(\mathcal{F}, \varepsilon) := \mathcal{N}(\mathcal{F}|_{\mathbf{x}}, \varepsilon, d_2)$. The l2-empirical covering number of $\mathcal{F}$ is defined by
$$\mathcal{N}_2(\mathcal{F}, \varepsilon) := \sup_{m \in \mathbb{N}} \sup_{\mathbf{x} \in X^m} \mathcal{N}_{2,\mathbf{x}}(\mathcal{F}, \varepsilon).$$
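To make definition 1 concrete, the following small sketch (our illustration, hypothetical names) evaluates the normalized l2-metric $d_2$ and greedily builds an $\varepsilon$-net for a finite dictionary of functions sampled at the points $\mathbf{x}$; since a maximal $\varepsilon$-separated set is an $\varepsilon$-cover, the returned size upper-bounds $\mathcal{N}_{2,\mathbf{x}}(\mathcal{F}, \varepsilon)$.

```python
import numpy as np

def d2_empirical(u, v):
    """Normalized l2-metric of definition 1: d2(u, v) = sqrt((1/m) * sum_i (u_i - v_i)^2)."""
    return np.sqrt(np.mean((u - v) ** 2))

def covering_number_upper_bound(F_vals, eps):
    """Greedy eps-net for the rows of F_vals (each row = one function evaluated
    at the m sample points). Every row ends up within eps of some retained center."""
    centers = []
    for f in F_vals:
        if all(d2_empirical(f, c) > eps for c in centers):
            centers.append(f)
    return len(centers)
```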

The following two lemmas can be easily deduced from Steinwart and Scovel (2007) and Sun and Wu (2011), respectively.

Lemma 5. 
Let $\sigma > 0$ and let $X \subset \mathbb{R}^d$ be a compact subset with nonempty interior. Then for all $0 < p \le 2$ and all $\mu > 0$, there exists a constant $c_{p,\mu,d} > 0$ independent of $\sigma$ such that for all $\varepsilon > 0$, we have
formula
Lemma 6. 
Let $\mathcal{F}$ be a class of measurable functions on Z. Assume that there are constants B, c > 0 and $\alpha \in [0, 1]$ such that $\|f\|_\infty \le B$ and $E f^2 \le c (E f)^\alpha$ for every $f \in \mathcal{F}$. If, for some a > 0 and $p \in (0, 2)$,
$$\log \mathcal{N}_2(\mathcal{F}, \varepsilon) \le a \varepsilon^{-p} \quad \text{for all } \varepsilon > 0, \tag{3.5}$$
then there exists a constant $c'_p$ depending only on p such that for any t > 0, with probability at least $1 - e^{-t}$, there holds
formula
where
formula

We are now in a position to deduce an upper-bound estimate for the second term of the sample error.

Proposition 4. 
Let $f_{\mathbf{z},\lambda,q}$ be defined as in equation 2.2. Then for arbitrary $p \in (0, 2]$ and arbitrary $0 < \delta < 1$, there exists a constant C depending only on d, $\mu$, p, and M such that
formula
with confidence at least $1 - \delta$, where
formula
Proof. 
We apply lemma 6 to the set of functions $\mathcal{F}_R$, where
formula
3.6
and
formula
Each such function has the form
formula
and is automatically a function on Z. Hence
formula
and
formula
where $z_i := (x_i, y_i)$. Observe that
formula
Therefore,
formula
and
formula
For two such functions and arbitrary $z \in Z$, we have
formula
It follows that
formula
which together with lemma 5 implies
formula
By lemma 6 with $B = c = 16M^2$, $\alpha = 1$, and a as above, we know that for any $0 < \delta < 1$, with confidence $1 - \delta$, there exists a constant C depending only on d such that for all R > 0,
formula
Here
formula
Hence, we obtain
formula
Now we turn to estimating Rq. It follows from the definition of $f_{\mathbf{z},\lambda,q}$ that
formula
Thus,
formula
On the other hand,
formula
that is,
formula
Set
formula
which finishes the proof of proposition 4.

3.4.  Hypothesis Error Estimation

In this section, we give an estimate for the hypothesis error.

Proposition 5. 
If $f_{\mathbf{z},\lambda,q}$ and f0 are defined as in equations 2.2 and 3.1, respectively, then we have
formula
Proof. 
Let $\mathbf{y} := (y_1, \ldots, y_m)^T$ and let $\mathbb{K}$ denote the matrix with elements $(K_\sigma(x_i, x_j))_{i,j=1}^m$. It then follows from the well-known representation theorem (Cucker & Zhou, 2007) that
formula
is the solution to
formula
Hence, we have
formula
Recalling that
formula
we get
formula
This finishes the proof of proposition 5.

3.5.  Learning Rate Analysis

Proof of Theorem 1. 
We assemble the results in propositions 1 through 5 to write
formula
which holds with confidence at least $1 - \delta$, where
formula
and
formula
Thus, for 0<q<1, if we choose $\lambda$ appropriately, then
formula
holds with confidence at least $1 - \delta$, where C is a constant depending only on d and r.
For $1 \le q \le 2$, if we again choose $\lambda$ appropriately, then
formula
holds with confidence at least $1 - \delta$, where C is a constant depending only on d and r.
For q > 2, if we set
formula
then
formula
holds with confidence at least $1 - \delta$, where C is a constant depending only on d, M, and r. This finishes the proof of the main result.

Acknowledgments

Two anonymous referees carefully read the manuscript for this letter and provided numerous constructive suggestions, which noticeably enhanced its overall quality; we are much indebted and grateful to them. The research was supported by the National 973 Program of China (2013CB329404) and the Key Program of the National Natural Science Foundation of China (Grant 11131006).

References

Barron, A., Cohen, A., Dahmen, W., & DeVore, R. (2008). Approximation and learning by greedy algorithms. Ann. Statist., 36, 64–94.

Blanchard, G., Bousquet, O., & Massart, P. (2008). Statistical performance of support vector machines. Ann. Statist., 36, 489–531.

Caponnetto, A., & De Vito, E. (2007). Optimal rates for the regularized least squares algorithm. Found. Comput. Math., 7, 331–368.

Cucker, F., & Smale, S. (2001). On the mathematical foundations of learning. Bull. Amer. Math. Soc., 39, 1–49.

Cucker, F., & Zhou, D. (2007). Learning theory: An approximation theory viewpoint. Cambridge: Cambridge University Press.

Daubechies, I., Defrise, M., & De Mol, C. (2004). An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math., 57, 1413–1457.

Daubechies, I., DeVore, R., Fornasier, M., & Güntürk, C. (2010). Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math., 63, 1–38.

DeVore, R., & Lorentz, G. (1993). Constructive approximation. Berlin: Springer-Verlag.

DeVore, R., Kerkyacharian, G., Picard, D., & Temlyakov, V. (2006). Approximation methods for supervised learning. Found. Comput. Math., 6, 3–58.

Eberts, M., & Steinwart, I. (2011). Optimal learning rates for least squares SVMs using gaussian kernels. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, & K. Q. Weinberger (Eds.), Advances in neural information processing systems 24 (pp. 1539–1547). Red Hook, NY: Curran.

Feng, Y., & Lv, S. (2011). Unified approach to coefficient-based regularized regression. Comput. Math. Appl., 62, 506–515.

Györfy, L., Kohler, M., Krzyzak, A., & Walk, H. (2002). A distribution-free theory of nonparametric regression. Berlin: Springer.

Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning. New York: Springer.

Hu, T. (2011). Online regression with varying Gaussians and non-identical distributions. Anal. Appl., 9, 395–408.

Lin, S., Liu, X., Fang, J., & Xu, Z. (2014). Is extreme learning machine feasible? A theoretical assessment (part II). arXiv:1401.6240.

Lin, S., Rong, Y., Sun, X., & Xu, Z. (2013). Learning capability of relaxed greedy algorithms. IEEE Trans. Neural Netw. Learn. Syst., 24, 1598–1608.

Lin, S., Xu, C., Zeng, J., & Fang, J. (2013). Does generalization performance of lq regularization learning depend on q? A negative example. arXiv:1307.6616.

Mendelson, S., & Neeman, J. (2010). Regularization in kernel learning. Ann. Statist., 38, 526–565.

Minh, H. (2010). Some properties of gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory. Constr. Approx., 32, 307–338.

Schölkopf, B., & Smola, A. (2001). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press.

Shi, L., Feng, Y., & Zhou, D. (2011). Concentration estimates for learning with l1-regularizer and data dependent hypothesis spaces. Appl. Comput. Harmon. Anal., 31, 286–302.

Song, G., & Zhang, H. (2011). Reproducing kernel Banach spaces with the l1 norm II: Error analysis for regularized least square regression. Neural Comput., 23, 2713–2729.

Song, G., Zhang, H., & Hickernell, F. (2013). Reproducing kernel Banach spaces with the l1 norm. Appl. Comput. Harmon. Anal., 34, 96–116.

Steinwart, I., & Christmann, A. (2008). Support vector machines. New York: Springer.

Steinwart, I., Hush, D., & Scovel, C. (2009). Optimal rates for regularized least squares regression. In Proceedings of the 22nd Conference on Learning Theory. Madison, WI: Omnipress.

Steinwart, I., Hush, D., & Scovel, C. (2006). An explicit description of the reproducing kernel Hilbert spaces of gaussian RBF kernels. IEEE Trans. Inform. Theory, 52, 4635–4643.

Steinwart, I., & Scovel, C. (2007). Fast rates for support vector machines using gaussian kernels. Ann. Statist., 35, 575–607.

Sun, H., & Wu, Q. (2011). Least square regression with indefinite kernels and coefficient regularization. Appl. Comput. Harmon. Anal., 30, 96–109.

Tibshirani, R. (1995). Regression shrinkage and selection via the LASSO. J. Roy. Statist. Soc. Ser. B, 58, 267–288.

Tong, H., Chen, D., & Yang, F. (2010). Least square regression with lp-coefficient regularization. Neural Comput., 22, 3221–3235.

Wu, Q., Ying, Y., & Zhou, D. (2006). Learning rates of least square regularized regression. Found. Comput. Math., 6, 171–192.

Wu, Q., & Zhou, D. (2005). SVM soft margin classifiers: Linear programming versus quadratic programming. Neural Comput., 17, 1160–1187.

Wu, Q., & Zhou, D. (2008). Learning with sample dependent hypothesis space. Comput. Math. Appl., 56, 2896–2907.

Xiang, D., & Zhou, D. (2009). Classification with gaussians and convex loss. J. Mach. Learn. Res., 10, 1447–1468.

Xiao, Q., & Zhou, D. (2010). Learning by nonsymmetric kernel with data dependent spaces and l1-regularizer. Taiwanese J. Math., 14, 1821–1836.

Xu, Z., Chang, X., Xu, F., & Zhang, H. (2012). L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst., 23, 1013–1018.

Ye, G., & Zhou, D. (2008). Learning and approximation by gaussians on Riemannian manifolds. Adv. Comput. Math., 29, 291–310.

Zhang, H., Xu, Y., & Zhang, J. (2009). Reproducing kernel Banach spaces for machine learning. J. Mach. Learn. Res., 10, 2741–2775.

Zhou, D. (2002). The covering number in learning theory. J. Complex., 18, 739–767.

Zhou, D., & Jetter, K. (2006). Approximation with polynomial kernels and SVM classifiers. Adv. Comput. Math., 25, 323–344.
