William T. Freeman
Journal Articles
Neural Computation (2001) 13 (10): 2173–2200.
Published: 01 October 2001
Abstract
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. Local “belief propagation” rules of the sort proposed by Pearl (1988) are guaranteed to converge to the correct posterior probabilities in singly connected graphs. Recently, good performance has been obtained by using these same rules on graphs with loops, a method we refer to as loopy belief propagation. Perhaps the most dramatic instance is the near Shannon-limit performance of “Turbo codes,” whose decoding algorithm is equivalent to loopy propagation. Except for the case of graphs with a single loop, there has been little theoretical understanding of loopy propagation. Here we analyze belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly gaussian random variables. We give an analytical formula relating the true posterior probabilities with those calculated using loopy propagation. We give sufficient conditions for convergence and show that when belief propagation converges, it gives the correct posterior means for all graph topologies, not just networks with a single loop. These results motivate using the powerful belief propagation algorithm in a broader class of networks and help clarify the empirical performance results.
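The result stated in this abstract, that loopy belief propagation on a jointly gaussian model returns the exact posterior means whenever it converges, can be checked numerically. The following is a minimal sketch of scalar Gaussian belief propagation in information form on a four-node loop; the precision matrix, message schedule, and tolerance are illustrative assumptions, not the authors' implementation.

import numpy as np

def gaussian_loopy_bp(J, h, iters=200, tol=1e-10):
    # Scalar-variable Gaussian belief propagation in information form.
    # J: symmetric positive-definite precision matrix; h: potential vector,
    # so the exact posterior means are np.linalg.solve(J, h).
    n = len(h)
    nbrs = {i: [j for j in range(n) if j != i and J[i, j] != 0.0] for i in range(n)}
    edges = [(i, j) for i in range(n) for j in nbrs[i]]
    Lam = {e: 0.0 for e in edges}   # message precisions  Lambda_{i->j}
    eta = {e: 0.0 for e in edges}   # message potentials  eta_{i->j}
    for _ in range(iters):
        new_Lam, new_eta = {}, {}
        for (i, j) in edges:
            # Node i's own potential plus all incoming messages except the one from j.
            P  = J[i, i] + sum(Lam[(k, i)] for k in nbrs[i] if k != j)
            mu = h[i]    + sum(eta[(k, i)] for k in nbrs[i] if k != j)
            new_Lam[(i, j)] = -J[i, j] * J[i, j] / P
            new_eta[(i, j)] = -J[i, j] * mu / P
        delta = max(abs(new_Lam[e] - Lam[e]) + abs(new_eta[e] - eta[e]) for e in edges)
        Lam, eta = new_Lam, new_eta
        if delta < tol:
            break
    # Belief at each node combines its potential with all incoming messages.
    prec = np.array([J[i, i] + sum(Lam[(k, i)] for k in nbrs[i]) for i in range(n)])
    pot  = np.array([h[i]    + sum(eta[(k, i)] for k in nbrs[i]) for i in range(n)])
    return pot / prec

# A single loop of four gaussian variables: the simplest multiply connected case.
J = np.array([[1.0, 0.3, 0.0, 0.3],
              [0.3, 1.0, 0.3, 0.0],
              [0.0, 0.3, 1.0, 0.3],
              [0.3, 0.0, 0.3, 1.0]])
h = np.array([1.0, 0.0, -1.0, 0.5])
print(np.allclose(gaussian_loopy_bp(J, h), np.linalg.solve(J, h)))   # True on convergence

On this diagonally dominant precision matrix the message iteration converges, and the printed check confirms that the converged beliefs reproduce the exact posterior means on a graph with a loop, in line with the abstract's claim.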
Journal Articles
Neural Computation (2000) 12 (6): 1247–1283.
Published: 01 June 2000
Abstract
Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.
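The abstract's fitting procedure rests on the singular value decomposition. As a rough illustration of how a single SVD can separate the two factors in an asymmetric bilinear model of the form y(s, c) ≈ A(s) b(c), here is a short sketch on synthetic data; the dimensions, variable names, and noise level are assumptions for illustration, not the authors' experimental setup, and the EM-based fitting mentioned in the abstract is not sketched here.

import numpy as np

rng = np.random.default_rng(0)
S, C, K, J = 4, 5, 6, 2   # styles, content classes, observation dim, model dim

# Synthetic training data: one K-dimensional observation per (style, content) pair.
true_A = rng.normal(size=(S, K, J))          # style-specific linear maps
true_b = rng.normal(size=(J, C))             # content vectors
Y = np.stack([[true_A[s] @ true_b[:, c] for c in range(C)] for s in range(S)])
Y += 0.01 * rng.normal(size=Y.shape)         # small observation noise

# Stack observations style-wise into an (S*K) x C matrix and take its SVD.
Y_stacked = Y.transpose(0, 2, 1).reshape(S * K, C)   # rows indexed by (style, obs dim)
U, sv, Vt = np.linalg.svd(Y_stacked, full_matrices=False)

# Keep the top J components: style maps from U*diag(sv), content vectors from V^T.
A_hat = (U[:, :J] * sv[:J]).reshape(S, K, J)
b_hat = Vt[:J, :]

# Reconstruction check: the rank-J bilinear model should explain the data
# up to the injected noise.
Y_hat = np.stack([[A_hat[s] @ b_hat[:, c] for c in range(C)] for s in range(S)])
print(np.max(np.abs(Y_hat - Y)))   # small residual, on the order of the noise

In this complete-data setting the truncated SVD recovers a rank-J bilinear fit in one step, which is what makes the approach tractable; tasks such as extrapolating a font to unseen letters then reuse the learned content vectors with a newly estimated style map.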