Di-Rong Chen
Neural Computation (2014) 26 (1): 158–184.
Published: 01 January 2014
Abstract
We consider a kind of kernel-based regression with general convex loss functions in a regularization scheme. The kernels used in the scheme are not necessarily symmetric and thus are not positive semidefinite; the $\ell_1$-norm of the coefficients in the kernel ensembles is taken as the regularizer. Our setting in this letter is quite different from classical regularized regression algorithms such as regularized networks and support vector machine regression. Under an established error decomposition that consists of approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis for this scheme and, in particular, its learning rate. A reweighted empirical process theory is applied to the analysis of the produced learning algorithms and plays a key role in deriving the explicit learning rate under some assumptions.
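The minimization the abstract describes can be made concrete with a small numerical sketch. The code below is an assumed illustration rather than the construction analyzed in the letter: it uses the squared loss in place of a general convex loss, an invented asymmetric kernel, a synthetic one-dimensional sample, and an arbitrary regularization parameter, and it fits the coefficients of the kernel ensemble under an $\ell_1$ penalty by proximal gradient descent (ISTA).

```python
import numpy as np

# A minimal sketch (assumed setup, not the letter's analysis): l1-regularized
# kernel-ensemble regression with a deliberately asymmetric kernel, fit by ISTA.
rng = np.random.default_rng(0)
m = 40
x = rng.uniform(-1.0, 1.0, m)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(m)

def kernel(s, t):
    # Asymmetric, hence not positive semidefinite: K(s, t) != K(t, s).
    return np.exp(-(s - t) ** 2) * (1.0 + 0.5 * s)

K = kernel(x[:, None], x[None, :])   # m x m matrix, K[i, j] = K(x_i, x_j)
lam = 0.05                           # regularization parameter (assumed value)

# Objective: (1/m) * ||K alpha - y||^2 + lam * ||alpha||_1, with the squared
# loss standing in for the general convex loss of the abstract.
step = 1.0 / (2.0 * np.linalg.norm(K, 2) ** 2 / m)   # 1 / Lipschitz constant of the gradient
alpha = np.zeros(m)
for _ in range(2000):
    grad = (2.0 / m) * K.T @ (K @ alpha - y)          # gradient of the data-fit term
    z = alpha - step * grad
    alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

f_hat = lambda t: kernel(np.atleast_1d(t)[:, None], x[None, :]) @ alpha
print(f_hat(np.array([0.3])))
```

The soft-thresholding step is the proximal map of the $\ell_1$ regularizer, which is why many coefficients of the learned ensemble end up exactly zero.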
Neural Computation (2010) 22 (12): 3221–3235.
Published: 01 December 2010
Abstract
The selection of the penalty functional is critical for the performance of a regularized learning algorithm and thus deserves special attention. In this article, we present a least squares regression algorithm based on $\ell_p$-coefficient regularization. Compared with classical regularized least squares regression, the new algorithm differs in the regularization term. Our primary focus is on the error analysis of the algorithm. An explicit learning rate is derived under some standard assumptions.
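As a hedged illustration of $\ell_p$-coefficient regularized least squares: for $p$ strictly between 1 and 2 the penalty is differentiable, so a generic smooth optimizer suffices. The sketch below uses an assumed Gaussian kernel expansion, synthetic data, and arbitrary values $p = 1.5$ and $\lambda = 0.05$; none of these choices come from the article.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch (assumed setup, not the article's construction): least squares
# regression with an l_p coefficient regularizer on a Gaussian kernel expansion.
rng = np.random.default_rng(1)
m = 40
x = rng.uniform(-1.0, 1.0, m)
y = np.cos(np.pi * x) + 0.1 * rng.standard_normal(m)

K = np.exp(-(x[:, None] - x[None, :]) ** 2)   # symmetric Gaussian kernel matrix
lam, p = 0.05, 1.5                            # assumed regularization parameter and exponent

def objective(alpha):
    # (1/m) * sum_i (f(x_i) - y_i)^2 + lam * sum_j |alpha_j|^p,
    # where f = sum_j alpha_j K(., x_j).
    return np.mean((K @ alpha - y) ** 2) + lam * np.sum(np.abs(alpha) ** p)

# BFGS with a finite-difference gradient; the objective is smooth for p > 1.
res = minimize(objective, np.zeros(m), method="BFGS")
f_hat = lambda t: np.exp(-(np.atleast_1d(t)[:, None] - x[None, :]) ** 2) @ res.x
print(f_hat(np.array([0.25])))
```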