Alan L. Yuille
Journal Articles
Neural Computation (2000) 12 (8): 1839–1867.
Published: 01 August 2000
Abstract
We develop a theory for the temporal integration of visual motion motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data association techniques that engineers use to study motion sequences. Our temporal grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory, we derive a parallel network that shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers. In deriving our theory, we assumed spatial factorizability of the probability distributions and made the approximation of updating the marginal distributions of velocity at each point. This allowed us to perform local computations and simplified our implementation. We argue that these approximations are suitable for the stimuli we are considering (for which spatial coherence effects are negligible).
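The recursive prediction-and-estimation loop described above can be illustrated with a standard one-dimensional Kalman filter over noisy per-frame velocity measurements. This is only a minimal sketch of the general idea; the noise parameters, toy data, and function name below are illustrative assumptions rather than the model developed in the paper.

import numpy as np

# Minimal 1-D Kalman filter over per-frame velocity measurements: each frame's
# noisy reading is fused with a prediction carried over from the previous frame.
# q and r (process and measurement noise variances) are illustrative values.
def kalman_velocity(measurements, q=0.01, r=0.25):
    v_est, p = 0.0, 1.0                  # initial mean and variance of the velocity belief
    estimates = []
    for z in measurements:
        p_pred = p + q                   # predict: the belief widens by the process noise
        k = p_pred / (p_pred + r)        # Kalman gain: how much to trust the new reading
        v_est = v_est + k * (z - v_est)  # update the mean toward the measurement
        p = (1.0 - k) * p_pred           # update the variance
        estimates.append(v_est)
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = 2.0 + rng.normal(0.0, 0.5, size=30)   # noisy readings of a true velocity of 2.0
    print(round(kalman_velocity(frames)[-1], 3))   # temporal integration converges near 2.0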
Journal Articles
Neural Computation (1995) 7 (3): 580–595.
Published: 01 May 1995
Abstract
Recent work by Becker and Hinton (1992) shows a promising mechanism, based on maximizing mutual information assuming spatial coherence, by which a system can self-organize to learn visual abilities such as binocular stereo. We introduce a more general criterion, based on Bayesian probability theory, and thereby demonstrate a connection to Bayesian theories of visual perception and to other organization principles for early vision (Atick and Redlich 1990). Methods for implementation using variants of stochastic learning are described.
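As a rough illustration of the mutual-information criterion being generalized here, the sketch below uses the Gaussian form of the Becker-Hinton objective, 0.5 * log(Var(a+b) / Var(a-b)), for two linear units looking at neighbouring patches that share a common signal. The synthetic data, the finite-difference ascent, and all parameter values are illustrative assumptions, not the learning procedures developed in the paper.

import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 4
common = rng.normal(size=(n, 1))                   # signal shared by the two patches
x_left = common + 0.3 * rng.normal(size=(n, d))    # two "neighbouring" input patches
x_right = common + 0.3 * rng.normal(size=(n, d))

def gaussian_mi(w):
    # Gaussian mutual-information proxy between the two units' outputs a and b.
    a, b = x_left @ w[:d], x_right @ w[d:]
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

w = rng.normal(scale=0.1, size=2 * d)
eps, lr = 1e-4, 0.5
for _ in range(200):                               # crude finite-difference gradient ascent
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        grad[i] = (gaussian_mi(w + e) - gaussian_mi(w - e)) / (2 * eps)
    w += lr * grad
print(round(gaussian_mi(w), 3))                    # grows as both units extract the shared signal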
Journal Articles
Neural Computation (1994) 6 (2): 334–340.
Published: 01 March 1994
Abstract
We show that there are strong relationships between approaches to optimization and learning based on statistical physics or mixtures of experts. In particular, the EM algorithm can be interpreted as converging either to a local maximum of the mixture of experts model or to a saddle point solution of the statistical physics system. An advantage of the statistical physics approach is that it naturally gives rise to a heuristic continuation method, deterministic annealing, for finding good solutions.
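A minimal illustration of the deterministic-annealing reading of EM: the E-step responsibilities are softened by a temperature T that is lowered toward 1, where the update reduces to ordinary EM for a two-component 1-D Gaussian mixture. The data, the annealing schedule, and the choice to hold the variances fixed are illustrative assumptions, not the analysis in the paper.

import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

mu = np.array([-1.0, 1.0])        # component means (poor initial guess)
pi = np.array([0.5, 0.5])         # mixing proportions
sigma2 = 1.0                      # variances held fixed for brevity

for T in [4.0, 2.0, 1.0]:         # annealing schedule; T = 1 is plain EM
    for _ in range(100):
        # E-step at temperature T: soften the log-posterior by 1/T
        log_p = (np.log(pi) - 0.5 * (x[:, None] - mu) ** 2 / sigma2) / T
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        pi = nk / len(x)

print(np.round(mu, 2))            # close to the true means -2 and 3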
Journal Articles
Neural Computation (1990) 2 (1): 1–24.
Published: 01 March 1990
Abstract
We describe how to formulate matching and combinatorial problems of vision and neural network theory by generalizing elastic and deformable template models to include binary matching elements. Techniques from statistical physics, which can be interpreted as computing marginal probability distributions, are then used to analyze these models and are shown to (1) relate them to existing theories and (2) give insight into the relations between, and relative effectiveness of, existing theories. In particular, we exploit the power of statistical techniques to impose global constraints on the set of allowable states of the binary matching elements. The binary elements can then be removed analytically before minimization. This is demonstrated to be preferable to existing methods of imposing such constraints by adding bias terms to the energy functions. We give applications to winner-take-all networks, correspondence for stereo and long-range motion, the traveling salesman problem, deformable template matching, learning, content-addressable memories, and models of brain development. The biological plausibility of these networks is briefly discussed.
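One concrete instance of removing the binary elements analytically: with match costs E_i and a global constraint that exactly one binary matching element is on, summing the Gibbs distribution over the allowed states gives marginals P(V_i = 1) in softmax form and an effective (free) energy -T log sum_i exp(-E_i / T). The sketch below covers only this one-of-N special case, with illustrative costs and temperatures; it is not the full derivation used for the applications listed in the abstract.

import numpy as np

def marginals_and_free_energy(costs, T):
    # Sum out binary match variables V_i under an "exactly one on" constraint.
    costs = np.asarray(costs, dtype=float)
    logits = -costs / T
    m = logits.max()                        # shift for numerical stability
    z = np.exp(logits - m).sum()
    p = np.exp(logits - m) / z              # marginals P(V_i = 1): a softmax over matches
    free_energy = -T * (m + np.log(z))      # F = -T log sum_i exp(-E_i / T)
    return p, free_energy

costs = [3.0, 1.0, 1.2, 4.0]
for T in [2.0, 0.5, 0.05]:
    p, F = marginals_and_free_energy(costs, T)
    print(T, np.round(p, 3), round(F, 3))
# As T decreases the marginals sharpen toward a hard winner-take-all on the
# cheapest match and F approaches min(costs); at high T they spread out.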
Journal Articles
Neural Computation (1989) 1 (3): 334–347.
Published: 01 September 1989
Abstract
A winner-take-all mechanism is a device that determines the identity and amplitude of its largest input (Feldman and Ballard 1982). Such mechanisms have been proposed for various brain functions. For example, a theory of visual velocity estimation (Grzywacz and Yuille 1989) postulates that a winner-take-all selects the strongest responding cell in the cortex's middle temporal area (MT). This theory proposes a circuitry that links the directionally selective cells in the primary visual cortex to MT cells, making them velocity selective. Generally, several velocity cells would respond, but only the winner would determine the perception. In another theory, a winner-take-all guides the spotlight of attention to the most salient part of the image (Koch and Ullman 1985). Such mechanisms also improve the signal-to-noise ratios of VLSI emulations of brain functions (Lazzaro and Mead 1989). Although computer algorithms for winner-take-all mechanisms exist (Feldman and Ballard 1982; Koch and Ullman 1985), good biologically motivated models do not. A candidate biological mechanism is lateral (mutual) inhibition (Hartline and Ratliff 1957). In some theoretical mutual-inhibition networks, the inhibition sums linearly with the excitatory inputs and the result is passed through a threshold nonlinearity (Hadeler 1974). However, these networks work only if the difference between the winner and the losers is large (Koch and Ullman 1985). We propose an alternative network, in which the output of each element feeds back to inhibit the inputs to the other elements. The action of this presynaptic inhibition is nonlinear, with a possible biophysical substrate. This paper shows that the new network converges stably to a solution that both relays the winner's identity and amplitude and suppresses information about the losers with arbitrary precision. We prove these results mathematically and illustrate the effectiveness of the network and some of its variants by computer simulations.
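A toy discrete-time network in the spirit of this description is sketched below: each unit's output feeds back to attenuate the inputs driving the other units. The divisive-quadratic form of the presynaptic inhibition, the gain k, and the step size are illustrative assumptions, not the specific circuit or the proofs given in the paper.

import numpy as np

def winner_take_all(x, k=50.0, dt=0.1, steps=500):
    # Each unit leakily integrates its input, which is first attenuated
    # (presynaptically inhibited) by the summed output of the *other* units.
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for _ in range(steps):
        inhib = y.sum() - y                  # feedback from all other units
        drive = x / (1.0 + k * inhib ** 2)   # nonlinear inhibition of the input
        y = y + dt * (-y + drive)            # leaky integration toward the gated input
    return y

print(np.round(winner_take_all([1.0, 0.7, 0.4, 0.2]), 3))
# The largest input is relayed at close to its original amplitude while the
# losers are suppressed toward zero; increasing k sharpens both effects.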