Abstract
A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem has the advantage that the hypothesis error vanishes from this sum automatically. Following this direction, we illustrate how reproducing kernel Banach spaces with the ℓ1 norm can be applied to improve the learning rate estimate of ℓ1-regularization in machine learning.
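The ℓ1-regularized kernel scheme the abstract refers to can be illustrated with a minimal sketch. By the linear representer theorem, the minimizer lies in the span of the kernel sections K(·, x_i), so learning reduces to an ℓ1-penalized fit of the expansion coefficients. The Gaussian kernel, the ISTA solver, and all parameter values below are illustrative assumptions, not the paper's specific scheme or rates:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.3):
    # Pairwise Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_kernel_fit(X, y, lam=1e-3, sigma=0.3, n_iter=500):
    """ISTA on the l1-regularized objective
        (1/2n) ||K c - y||^2 + lam ||c||_1,
    where c are coefficients over the kernel sections K(., x_i)."""
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    step = n / (np.linalg.norm(K, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    c = np.zeros(n)
    for _ in range(n_iter):
        grad = K.T @ (K @ c - y) / n
        c = soft_threshold(c - step * grad, step * lam)
    return c, K

# Tiny usage example on noiseless 1-D data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(np.pi * X[:, 0])
c, K = l1_kernel_fit(X, y)
```

The ℓ1 penalty typically drives many coefficients to exactly zero, so the learned function uses only a sparse subset of the kernel sections.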
Issue Section: Letters
© 2011 Massachusetts Institute of Technology