Ole Winther
On the Slow Convergence of EM and VBEM in Low-Noise Linear Models
Neural Computation (2005) 17 (9): 1921–1926. Published: 01 September 2005.
Abstract
We analyze convergence of the expectation maximization (EM) and variational Bayes EM (VBEM) schemes for parameter estimation in noisy linear models. The analysis shows that both schemes are inefficient in the low-noise limit. The linear model with additive noise includes as special cases independent component analysis, probabilistic principal component analysis, factor analysis, and Kalman filtering. Hence, the results are relevant for many practical applications.
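The low-noise slowdown is easy to reproduce in a toy setting. Below is a minimal sketch (our construction, not the paper's analysis): EM for the scalar linear model x = w*s + eps with s ~ N(0, 1) and a known noise variance sigma2. The number of iterations needed to reach a fixed tolerance grows sharply as sigma2 shrinks, because the EM update approaches the identity map in the low-noise limit.

```python
# Minimal sketch: EM for x = w*s + eps, s ~ N(0,1), eps ~ N(0, sigma2),
# with sigma2 treated as known. Iteration counts grow as sigma2 shrinks,
# illustrating the low-noise slowdown analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, w_true = 2000, 2.0
s = rng.standard_normal(n)

for sigma2 in (1.0, 0.1, 0.01, 0.001):
    x = w_true * s + np.sqrt(sigma2) * rng.standard_normal(n)
    w = 0.5  # deliberately poor initialization
    for it in range(1, 100_000):
        # E-step: posterior moments of each source given x and the current w
        v = sigma2 / (w**2 + sigma2)      # posterior variance (same for all t)
        m = w * x / (w**2 + sigma2)       # posterior means
        # M-step: maximize the expected complete-data log-likelihood over w
        w_new = np.dot(x, m) / (np.dot(m, m) + n * v)
        if abs(w_new - w) < 1e-8:
            break
        w = w_new
    print(f"sigma2={sigma2:6}: {it:6d} EM iterations, w={w:.4f}")
```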
Mean-Field Approaches to Independent Component Analysis
Neural Computation (2002) 14 (4): 889–918. Published: 01 April 2002.
Abstract
We develop mean-field approaches for probabilistic independent component analysis (ICA). The sources are estimated from the mean of their posterior distribution, and the mixing matrix (and noise level) is estimated by maximum a posteriori (MAP). The latter requires the computation of (a good approximation to) the correlations between sources. For this purpose, we investigate three increasingly advanced mean-field methods: the variational (also known as naive mean-field) approach, linear response corrections, and an adaptive version of the Thouless, Anderson, and Palmer (TAP, 1977) mean-field approach due to Opper and Winther (2001). The resulting algorithms are tested on a number of problems. On synthetic data, the advanced mean-field approaches are able to recover the correct mixing matrix in cases where the variational mean-field theory fails. For handwritten digits, sparse encoding is achieved using nonnegative source and mixing priors. For speech, the mean-field method is able to separate the sources in the underdetermined (overcomplete) case of two sensors and three sources. One major advantage of the proposed method is its generality and algorithmic simplicity. Finally, we point out several possible extensions of the approaches developed here.
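As a concrete, heavily simplified illustration of the variational (naive mean-field) variant, the sketch below assumes binary ±1 sources and a known noise variance; the paper treats general source priors and estimates the noise level by MAP, so this is a toy instance of the idea, not the paper's algorithm. The E-step iterates damped naive mean-field fixed-point equations for the posterior source means; the M-step re-estimates the mixing matrix from those means and the naive source covariance. All function and variable names are ours.

```python
# Hedged sketch: naive mean-field ICA for binary +/-1 sources with a known
# noise variance sigma2 and likelihood X = A S + noise.
import numpy as np

def naive_mf_ica(X, n_sources, sigma2, n_em=50, n_mf=200):
    d, T = X.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((d, n_sources))
    M = np.zeros((n_sources, T))            # posterior means of the sources
    for _ in range(n_em):
        J = A.T @ A / sigma2                # source-source couplings
        h = A.T @ X / sigma2                # data-dependent external fields
        Joff = J - np.diag(np.diag(J))      # s_i^2 = 1, so the diagonal is constant
        for _ in range(n_mf):               # E-step: damped fixed-point iteration
            M = 0.5 * M + 0.5 * np.tanh(h - Joff @ M)
        # naive mean-field second moment: <S S^T> ~ M M^T + diag(sum_t (1 - m^2))
        V = np.diag((1.0 - M**2).sum(axis=1))
        A = X @ M.T @ np.linalg.inv(M @ M.T + V)  # M-step for the mixing matrix
    return A, M

# toy demo: two sensors, two binary sources
rng = np.random.default_rng(1)
S = np.sign(rng.standard_normal((2, 500)))
A_true = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A_true @ S + 0.3 * rng.standard_normal((2, 500))
A_hat, M = naive_mf_ica(X, n_sources=2, sigma2=0.09)
print(np.round(A_hat, 2))  # recoverable only up to column permutation and sign
```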
Gaussian Processes for Classification: Mean-Field Algorithms
Neural Computation (2000) 12 (11): 2655–2684. Published: 01 November 2000.
Abstract
We derive a mean-field algorithm for binary classification with gaussian processes that is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed at no extra computational cost. We show that from the TAP approach, it is possible to derive both a simpler “naive” mean-field theory and support vector machines (SVMs) as limiting cases. For both mean-field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show that one may get state-of-the-art performance by using the leave-one-out estimator for model selection, and that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The second result is taken as strong support for the internal consistency of the mean-field approach.
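The "no extra computational cost" leave-one-out claim has a well-known exact counterpart in GP regression, shown below as an analogy (it is not the paper's mean-field classification estimator): the LOO residuals come out of the same inverse kernel matrix used for fitting, r_i = (K^{-1} y)_i / (K^{-1})_{ii}, with no refitting. The kernel and hyperparameter values below are illustrative choices.

```python
# Exact leave-one-out for GP regression from a single matrix inverse,
# analogous in spirit to the paper's built-in LOO estimator for classification.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

# squared-exponential kernel plus noise variance on the diagonal (assumed values)
K = np.exp(-0.5 * (X - X.T) ** 2) + 0.01 * np.eye(40)

Kinv = np.linalg.inv(K)
loo_residuals = (Kinv @ y) / np.diag(Kinv)   # r_i = (K^-1 y)_i / (K^-1)_ii

# brute-force check: actually refit with each point held out
brute = []
for i in range(40):
    idx = np.delete(np.arange(40), i)
    mean_i = K[i, idx] @ np.linalg.solve(K[np.ix_(idx, idx)], y[idx])
    brute.append(y[i] - mean_i)
print(np.allclose(loo_residuals, brute))     # True: LOO came for free
```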