Shaobo Lin
Journal Articles
Neural Computation (2014) 26 (10): 2350–2378.
Published: 01 October 2014
Abstract
Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and lq regularization schemes with 0 < q < ∞ are in central use. It is known that different q lead to different properties of the deduced estimators: for example, l2 regularization yields a smooth estimator, while l1 regularization yields a sparse one. How the generalization capability of lq regularization learning varies with q is therefore worth investigating. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing lq coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞; that is, the upper and lower bounds of the learning rates for lq regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that, in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be specified arbitrarily, or chosen according to other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
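For concreteness, the lq coefficient regularization scheme the abstract refers to has, in its standard form, the shape sketched below. The notation is illustrative and not necessarily the paper's exact formulation: given a sample z = {(x_i, y_i)}_{i=1}^m and a Gaussian kernel K_σ, the estimator is sought in the sample-dependent hypothesis space spanned by {K_σ(x_i, ·)}:

\[
f_{z,q} = \sum_{i=1}^{m} a_i^{*}\, K_\sigma(x_i,\cdot), \qquad
a^{*} = \arg\min_{a\in\mathbb{R}^m}\ \frac{1}{m}\sum_{j=1}^{m}\Big(\sum_{i=1}^{m} a_i K_\sigma(x_i,x_j)-y_j\Big)^{2} \;+\; \lambda\sum_{i=1}^{m}|a_i|^{q},
\]

where K_\sigma(x,x') = \exp\big(-\|x-x'\|^{2}/\sigma^{2}\big) and \lambda > 0 is the regularization parameter. Setting q = 2 gives a kernel-ridge-type smooth estimator, while q = 1 gives a lasso-type sparse estimator, which is the distinction the abstract draws between the two classical cases.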