Abstract
The formulation of nonlinear principal component analysis as a kernel eigenvalue problem has provided a powerful method of extracting nonlinear features for a number of classification and regression applications. Although the use of Mercer kernels makes it tractable to compute principal components in possibly infinite-dimensional feature spaces, the attendant numerical problem of diagonalizing large matrices remains. In this contribution, we propose an expectation-maximization approach to kernel principal component analysis and show it to be a computationally efficient method, especially when the number of data points is large.
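The authors' algorithm appears only in the full paper, so as a rough illustration of the general idea, the sketch below applies EM-style subspace iterations for PCA (in the spirit of Roweis's EM algorithm for PCA, carried into feature space via the kernel trick) to a centered kernel matrix. The function names (center_kernel, em_kpca) and the specific update rules are assumptions for illustration, not necessarily the authors' exact procedure; each iteration costs O(n²q) for q components, compared with the roughly O(n³) cost of a full eigendecomposition.

```python
import numpy as np

def center_kernel(K):
    """Center the kernel matrix in feature space (the standard kernel PCA step)."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one

def em_kpca(K, q, n_iter=100, seed=0):
    """Hypothetical sketch: estimate the leading q eigenpairs of a centered
    kernel matrix K by EM-style subspace iterations, avoiding a full
    eigendecomposition. Feature-space loadings are parameterized implicitly
    through an n x q coefficient matrix A (loadings W = Phi^T A), so only K
    is ever needed.
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, q))          # random initial coefficients
    for _ in range(n_iter):
        KA = K @ A                           # n x q, since W^T W = A^T K A and W^T Y = A^T K
        # E-step: latent coordinates X = (A^T K A)^{-1} A^T K  (q x n)
        X = np.linalg.solve(A.T @ KA, KA.T)
        # M-step: new coefficients A = X^T (X X^T)^{-1}  (n x q)
        A = np.linalg.solve(X @ X.T, X).T
    # At convergence, K @ A spans the leading eigenvectors of K; finish with
    # a small q x q eigenproblem restricted to that subspace.
    Q, _ = np.linalg.qr(K @ A)               # orthonormal basis, n x q
    evals, V = np.linalg.eigh(Q.T @ K @ Q)   # q x q symmetric problem
    order = np.argsort(evals)[::-1]
    return evals[order], Q @ V[:, order]     # eigenvalues and n x q coefficient vectors
```

In use, one would form K[i, j] = k(x_i, x_j) from the training data, center it with center_kernel, run em_kpca, and (as in standard kernel PCA) scale the i-th coefficient vector by 1/sqrt(eval_i) before projecting points onto the nonlinear components.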
© 2001 Massachusetts Institute of Technology