Tong Zhang
Journal Articles
Neural Computation (2005) 17 (9): 2077–2098.
Published: 01 September 2005
Abstract
Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted and has been used to argue that kernel learning can magically avoid the “curse-of-dimensionality” phenomenon encountered in statistical estimation problems. This letter shows that although a kernel representation embeds data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
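For a concrete handle on the kind of quantity involved, the minimal Python sketch below computes one common scale-sensitive effective dimension, d(λ) = Σ_i μ_i / (μ_i + λ), from the empirical eigenvalues of an RBF Gram matrix. The kernel, data, and λ values are illustrative assumptions, not the paper's construction.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) Gram matrix on the rows of X.
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq)

def effective_dimension(K, lam):
    # One common scale-sensitive effective dimension:
    # d(lam) = sum_i mu_i / (mu_i + lam), with mu_i the eigenvalues
    # of the (1/n)-normalized empirical Gram matrix.
    mu = np.linalg.eigvalsh(K) / K.shape[0]
    mu = np.clip(mu, 0.0, None)          # guard against tiny negative eigenvalues
    return float(np.sum(mu / (mu + lam)))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))        # 200 points in R^3
K = rbf_kernel(X)

for lam in (1e-1, 1e-2, 1e-3):
    print(f"lambda={lam:g}  effective dimension ~ {effective_dimension(K, lam):.1f}")

Even though the RBF feature space is infinite-dimensional, the effective dimension stays modest for moderate λ, which is the phenomenon the paper's bounds are built on.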
Journal Articles
Neural Computation (2003) 15 (6): 1397–1437.
Published: 01 June 2003
Abstract
In this article, we study leave-one-out style cross-validation bounds for kernel methods. The essential element in our analysis is a bound on the parameter estimation stability of regularized kernel formulations. Using this result, we derive bounds on expected leave-one-out cross-validation errors, which lead to expected generalization bounds for various kernel algorithms. In addition, we obtain variance bounds for leave-one-out errors. We apply our analysis to some classification and regression problems and compare the resulting bounds with previous results.
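To make the leave-one-out quantity concrete, the following Python sketch evaluates exact leave-one-out residuals for kernel ridge regression via the standard closed-form identity for ridge-type linear smoothers, r_i = (y_i − ŷ_i)/(1 − H_ii). It illustrates what the bounds are about rather than reproducing the paper's analysis; the kernel, data, and regularization value are assumptions.

import numpy as np

def loo_residuals_krr(K, y, lam):
    # Exact leave-one-out residuals for kernel ridge regression via the
    # closed form r_i = (y_i - yhat_i) / (1 - H_ii), where
    # H = K (K + n*lam*I)^{-1} is the smoother ("hat") matrix.
    n = K.shape[0]
    H = K @ np.linalg.inv(K + n * lam * np.eye(n))
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(100)
K = np.exp(-5.0 * (X - X.T) ** 2)        # RBF Gram matrix on 1-d inputs

loo_mse = np.mean(loo_residuals_krr(K, y, lam=1e-3) ** 2)
print(f"leave-one-out mean squared error: {loo_mse:.4f}")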
Journal Articles
Neural Computation (2002) 14 (12): 3013–3042.
Published: 01 December 2002
Abstract
Gaussian processes have been widely applied to regression problems with good performance. However, they can be computationally expensive. To reduce the computational cost, recent studies have considered sparse approximations in Gaussian processes. In this article, we investigate properties of certain sparse regression algorithms that approximately solve a Gaussian process. We obtain approximation bounds and compare our results with related methods.
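As one example of the kind of sparse algorithm studied, the Python sketch below implements a subset-of-regressors style approximation to the Gaussian process posterior mean: the representer coefficients are restricted to m inducing points, so only an m × m system is solved instead of the full n × n one. The kernel, inducing-point selection, and noise level are illustrative assumptions, not necessarily those analyzed in the paper.

import numpy as np

def rbf(X, Y, gamma=2.0):
    # RBF cross-covariance between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def sparse_gp_mean(X, y, Xs, Xtest, noise=0.1):
    # Subset-of-regressors sparse approximation: solve an m x m system
    # over the inducing points Xs instead of the n x n GP system.
    Kmn = rbf(Xs, X)                       # m x n cross-covariance
    Kmm = rbf(Xs, Xs)                      # m x m inducing covariance
    A = Kmn @ Kmn.T + noise**2 * Kmm       # m x m system matrix
    alpha = np.linalg.solve(A, Kmn @ y)
    return rbf(Xtest, Xs) @ alpha          # approximate posterior mean

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(500)
Xs = X[rng.choice(500, size=25, replace=False)]   # 25 inducing points
Xtest = np.linspace(-2.0, 2.0, 5)[:, None]

print(sparse_gp_mean(X, y, Xs, Xtest))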