Jinshan Zeng: 1-2 of 2 results
Journal Articles
Neural Computation (2017) 29 (12): 3353–3380.
Published: 01 December 2017
Abstract
This letter aims at refined error analysis for binary classification using support vector machine (SVM) with gaussian kernel and convex loss. Our first result shows that for some loss functions, such as the truncated quadratic loss and quadratic loss, SVM with gaussian kernel can reach the almost optimal learning rate, provided the regression function is smooth. Our second result shows that for a large number of loss functions, under some Tsybakov noise assumption, if the regression function is infinitely smooth, then SVM with gaussian kernel can achieve the learning rate of order m^{-1}, where m is the number of samples.
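As a hedged illustration of the setting this abstract describes (not the authors' implementation), the following minimal scikit-learn sketch fits an SVM with a gaussian (RBF) kernel for binary classification; the synthetic data, kernel width gamma, and penalty C are arbitrary placeholders.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary classification data with a smooth decision boundary;
# m = 500 samples (placeholder values throughout).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM with gaussian (RBF) kernel k(x, x') = exp(-gamma * ||x - x'||^2).
# scikit-learn's SVC uses the hinge loss; the letter analyzes other convex
# losses as well, such as the (truncated) quadratic loss.
clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))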
Journal Articles
Neural Computation (2014) 26 (10): 2350–2378.
Published: 01 October 2014
Abstract
Regularization is a well-recognized powerful strategy to improve the performance of a learning machine, and l_q regularization schemes with 0 < q < ∞ are central in use. It is known that different q leads to different properties of the deduced estimators; say, l_2 regularization leads to a smooth estimator, while l_1 regularization leads to a sparse estimator. How the generalization capability of l_q regularization learning varies with q is therefore worthy of investigation. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing l_q coefficient regularization schemes in the sample-dependent hypothesis space associated with a gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞. That is, the upper and lower bounds of learning rates for l_q regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be arbitrarily specified, or specified merely by other nongeneralization criteria like smoothness, computational complexity, or sparsity.
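As a hedged sketch of the scheme this abstract studies (not the authors' code), the following Python snippet implements l_q coefficient regularization in the sample-dependent hypothesis space f(x) = sum_i alpha_i K(x, x_i) with a gaussian kernel, minimizing the empirical quadratic loss plus lam * sum_i |alpha_i|^q; the data, kernel width, regularization parameter lam, and the tried values of q are arbitrary placeholders.

import numpy as np
from scipy.optimize import minimize
from sklearn.metrics.pairwise import rbf_kernel

# Placeholder regression data: m = 100 noisy samples of a smooth target.
rng = np.random.default_rng(0)
m = 100
X = rng.uniform(-1.0, 1.0, size=(m, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=m)

# Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
K = rbf_kernel(X, X, gamma=5.0)

def objective(alpha, q, lam):
    # Empirical quadratic loss of f(x) = sum_i alpha_i K(x, x_i)
    # plus the l_q coefficient penalty lam * sum_i |alpha_i|^q.
    residual = K @ alpha - y
    return residual @ residual / m + lam * np.sum(np.abs(alpha) ** q)

# The letter's message: the achievable learning rate is essentially the same
# across q, so q may be picked by other criteria (e.g., sparsity or smoothness).
for q in (1.0, 1.5, 2.0):
    alpha = minimize(objective, x0=np.zeros(m), args=(q, 1e-3), method="L-BFGS-B").x
    print(f"q = {q}: training MSE = {np.mean((K @ alpha - y) ** 2):.4f}")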