Thomas J. Anastasio
Journal Articles
Publisher: Journals Gateway
Neural Computation (2003) 15 (4): 783–810.
Published: 01 April 2003
Abstract
Cross-modal enhancement (CME) occurs when the neural response to a stimulus of one modality is augmented by another stimulus of a different modality. Paired stimuli of the same modality never produce supra-additive enhancement but may produce modality-specific suppression (MSS), in which the response to a stimulus of one modality is diminished by another stimulus of the same modality. Both CME and MSS have been described for neurons in the deep layers of the superior colliculus (DSC), but their neural mechanisms remain unknown. Previous investigators have suggested that CME involves a multiplicative amplifier, perhaps mediated by N-methyl-D-aspartate (NMDA) receptors, which is engaged by cross-modal but not modality-specific input. We previously postulated that DSC neurons use multisensory input to compute the posterior probability of a target using Bayes' rule. The Bayes' rule model reproduces the major features of CME. Here we use simple neural implementations of our model to simulate both CME and MSS and to argue that multiplicative processes are not needed for CME, but may be needed to represent input variance and covariance. Producing CME requires only weighted summation of inputs and the threshold and saturation properties of simple models of biological neurons. Multiplicative nodes allow accurate computation of posterior target probabilities when the spontaneous and driven inputs have unequal variances and covariances. Neural implementations of the Bayes' rule model account better than the multiplicative amplifier hypothesis for the effects of pharmacological blockade of NMDA receptors on the multisensory responses of DSC neurons. The neural implementations also account for MSS, given only the added hypothesis that input channels of the same modality have more spontaneous covariance than those of different modalities.
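The abstract's central claim, that supra-additive CME needs only weighted summation of inputs plus the threshold and saturation of a simple neuron model, can be illustrated with a sigmoidal unit. This is a minimal sketch, not the paper's implementation; the threshold and gain values are illustrative assumptions.

```python
import math

def squash(x, threshold=4.0, gain=1.5):
    # Sigmoidal activation: near-zero below threshold, saturating above it.
    # Parameters are illustrative assumptions, not values from the paper.
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

def response(*inputs):
    # Weighted summation only (equal weights here); no multiplicative term.
    return squash(sum(inputs))

weak = 2.0
r_uni = response(weak)        # one weak modality alone: below threshold
r_bi = response(weak, weak)   # cross-modal pair: pushed onto the steep part
print(r_uni, r_bi, r_bi > 2 * r_uni)  # supra-additive enhancement
```

With weak inputs the unimodal response sits below threshold, so the paired response exceeds the sum of the unimodal responses; with strong inputs, saturation makes the pairing sub-additive instead.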
Neural Computation (2000) 12 (5): 1165–1187.
Published: 01 May 2000
Abstract
The deep layers of the superior colliculus (SC) integrate multisensory inputs and initiate an orienting response toward the source of stimulation (target). Multisensory enhancement, which occurs in the deep SC, is the augmentation of a neural response to sensory input of one modality by input of another modality. Multisensory enhancement appears to underlie the behavioral observation that an animal is more likely to orient toward weak stimuli if a stimulus of one modality is paired with a stimulus of another modality. Yet not all deep SC neurons are multisensory. Those that are exhibit the property of inverse effectiveness: combinations of weaker unimodal responses produce larger amounts of enhancement. We show that these neurophysiological findings support the hypothesis that deep SC neurons use their sensory inputs to compute the probability that a target is present. We model multimodal sensory inputs to the deep SC as random variables and cast the computation function in terms of Bayes' rule. Our analysis suggests that multisensory deep SC neurons are those that combine unimodal inputs that would be more uncertain by themselves. It also suggests that inverse effectiveness results because the increase in target probability due to the integration of multisensory inputs is larger when the unimodal responses are weaker.
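The Bayes' rule account described above, including inverse effectiveness, can be sketched numerically. This is a minimal illustration assuming conditionally independent Poisson input channels; the spontaneous and driven rates and the prior are assumptions for the example, not the paper's parameters.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def posterior(counts, spont, driven, prior=0.1):
    # P(target | inputs) via Bayes' rule, assuming conditionally
    # independent Poisson channels (an assumption of this sketch).
    p_t = prior
    p_n = 1.0 - prior
    for k, ls, ld in zip(counts, spont, driven):
        p_t *= poisson_pmf(k, ld)   # likelihood given target present
        p_n *= poisson_pmf(k, ls)   # likelihood given target absent
    return p_t / (p_t + p_n)

# Strong vs. weak stimuli: driven rates far above vs. near the
# spontaneous rate (1.0). Enhancement is larger for the weak pair.
for rate in (8.0, 3.0):
    k = int(rate)
    p_uni = posterior([k], [1.0], [rate])            # one modality alone
    p_bi = posterior([k, k], [1.0] * 2, [rate] * 2)  # cross-modal pair
    enh = 100 * (p_bi - p_uni) / p_uni
    print(f"driven rate {rate}: unimodal {p_uni:.3f}, "
          f"bimodal {p_bi:.3f}, enhancement {enh:.0f}%")
```

For the strong stimulus the unimodal posterior is already near 1, so adding a second modality gains little; for the weak stimulus the combined evidence raises the posterior much more, which is the inverse-effectiveness pattern the abstract describes.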
Neural Computation (1989) 1 (2): 230–241.
Published: 01 June 1989
Abstract
The mechanisms of eye-movement control are among the best understood in motor neurophysiology. Detailed anatomical and physiological data have paved the way for theoretical models that have unified existing knowledge and suggested further experiments. These models have generally taken the form of black-box diagrams (for example, Robinson 1981) representing the flow of hypothetical signals between idealized signal-processing blocks. They approximate overall oculomotor behavior but indicate little about how real eye-movement signals would be carried and processed by real neural networks. Neurons that combine and transmit oculomotor signals, such as those in the vestibular nucleus (VN), actually do so in a diverse, seemingly random way that would be impossible to predict from a block diagram. The purpose of this study is to use a neural-network learning scheme (Rumelhart et al. 1986) to construct parallel, distributed models of the vestibulo-oculomotor system that simulate the diversity of responses recorded experimentally from VN neurons.
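The approach described, training a network by the backpropagation scheme of Rumelhart et al. (1986), might be sketched as follows. The task (mapping head velocity to a compensatory eye-velocity command with gain -1), the network size, and the learning rate are illustrative assumptions of this sketch, not details from the paper.

```python
import math
import random

random.seed(0)

# One-hidden-layer network trained by backpropagation.
# Hidden units play the role of model VN neurons.
n_hidden = 4
w_in = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
lr = 0.05

def forward(x):
    h = [math.tanh(w * x) for w in w_in]          # hidden-unit responses
    y = sum(wo, ) if False else sum(wo * hi for wo, hi in zip(w_out, h))
    return h, y

data = [x / 10.0 for x in range(-10, 11)]         # head velocities in [-1, 1]
for epoch in range(2000):
    for x in data:
        h, y = forward(x)
        err = y - (-x)                            # target: eye velocity = -head velocity
        for i in range(n_hidden):
            w_out[i] -= lr * err * h[i]
            w_in[i] -= lr * err * w_out[i] * (1 - h[i] ** 2) * x

_, y = forward(0.5)
print(round(y, 2))   # close to -0.5 after training
print(w_in)          # hidden units end up with diverse weights
```

After training, the network output approximates the compensatory command, while the hidden units carry the signal with a mixture of weights rather than a single stereotyped form, loosely analogous to the response diversity recorded from VN neurons.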