Elisa Magosso
Neural Computation (2017) 29 (3): 735–782.
Published: 01 March 2017
Abstract
Recent theoretical and experimental studies suggest that in multisensory conditions the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and for its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration based on probabilistic population coding, the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two topologically organized chains of unisensory neurons (auditory and visual). Each chain receives its input through a plastic receptive field, and the two chains are reciprocally connected by plastic cross-modal synapses that encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons computes a simple sum of the auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple synapse-learning rule, consisting of Hebbian reinforcement and a decay term, can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule encodes information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network realizes a maximum likelihood estimate of auditory (or visual) position in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, through the activity of the multimodal neurons, accounts for the automatic trial-by-trial reweighting of auditory and visual inputs according to the reliability of the individual cues.
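
Two ingredients named in this abstract have compact textbook forms, sketched below in Python. This is not the authors' code: the first function is the standard Bayes-optimal fusion rule for two independent Gaussian cues, which is the behavior the network is shown to approximate; the second is a generic Hebbian-plus-decay weight update, with learning gains gamma and beta chosen arbitrarily as assumed values.

    def bayes_optimal_estimate(x_a, sigma_a, x_v, sigma_v):
        """Reliability-weighted fusion of an auditory cue (x_a, std sigma_a)
        and a visual cue (x_v, std sigma_v). Assuming independent Gaussian
        likelihoods, each cue is weighted by its inverse variance, so the
        more reliable (lower-variance) cue automatically dominates."""
        w_a = 1.0 / sigma_a ** 2
        w_v = 1.0 / sigma_v ** 2
        return (w_a * x_a + w_v * x_v) / (w_a + w_v)

    def hebb_decay_update(w, pre, post, gamma=0.05, beta=0.01):
        """One step of a Hebbian-reinforcement-plus-decay rule of the kind
        the abstract describes (the gains gamma and beta are assumptions):
        dw = gamma * pre * post - beta * w."""
        return w + gamma * pre * post - beta * w

    # Ventriloquism-style example: a sharp visual cue at 0 degrees and a
    # noisy auditory cue at 10 degrees; the fused estimate is captured by
    # vision.
    print(bayes_optimal_estimate(x_a=10.0, sigma_a=8.0, x_v=0.0, sigma_v=2.0))
    # ~0.59 degrees: the perceived sound is pulled toward the visual stimulus.

The point of the paper, as the abstract states, is that this weighting is not computed by an explicit formula: it emerges from the trained population code, with the receptive fields encoding the unisensory likelihoods and the cross-modal synapses encoding the prior.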
Neural Computation (2010) 22 (1): 190–243.
Published: 01 January 2010
Abstract
Neurophysiological and behavioral studies suggest that the peripersonal space is represented in a multisensory fashion by integrating stimuli of different modalities. We developed a neural network to simulate the visual-tactile representation of the peripersonal space around the right and left hands. The model is composed of two networks (one per hemisphere), each with three areas of neurons: two unimodal areas (visual and tactile) that communicate via synaptic connections with a third, downstream multimodal (visual-tactile) area. The hemispheres are interconnected by inhibitory synapses. We applied a combination of analytic and computer simulation techniques. The analytic approach requires some simplifying assumptions and approximations (linearization and a reduced number of neurons) and is used to investigate network stability as a function of parameter values, revealing some emergent properties. These are then tested and extended by computer simulations of a more complex nonlinear network that does not rely on the previous simplifications. With basal parameter values, the extended network reproduces several in vivo phenomena: multisensory coding of peripersonal space, reinforcement of unisensory perception by multimodal stimulation, and coexistence of simultaneous right- and left-hand representations under bilateral stimulation. By reducing the strength of the synapses from the right tactile neurons, the network can mimic the responses characteristic of right-brain-damaged patients with left tactile extinction: perception of unilateral left tactile stimulation, and cross-modal extinction and cross-modal facilitation under bilateral stimulation. Finally, sensitivity analyses on key parameters were performed to shed light on the contribution of individual model components to network behavior. The model may help us understand the neural circuitry underlying peripersonal space representation and identify the alterations that explain neurological deficits. Looking ahead, it could help interpret the results of psychophysical and behavioral trials and clarify the neural correlates of multisensory-based rehabilitation procedures.
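
The architecture described here lends itself to a compact rate-model sketch. The Python fragment below is a minimal illustration, not the paper's implementation: it collapses each hemisphere to a single multimodal unit fed by its tactile and visual drive and inhibited by the opposite hemisphere. All weights, the sigmoid parameters, and the lesion factor 0.4 are assumed values, chosen only to reproduce the qualitative behaviors the abstract lists.

    import numpy as np

    def sigmoid(x, slope=4.0, thresh=0.5):
        # Static neuron nonlinearity; slope and threshold are assumed values.
        return 1.0 / (1.0 + np.exp(-slope * (x - thresh)))

    def steady_state(tactile, visual, w_t, w_v=1.0, w_inh=0.6,
                     dt=0.1, tau=1.0, steps=400):
        """Steady-state activity of the two multimodal units,
        m = [left hemisphere, right hemisphere]. Each unit sums its own
        hemisphere's tactile and visual drive (weights w_t, w_v) and is
        inhibited by the opposite hemisphere (weight w_inh)."""
        m = np.zeros(2)
        for _ in range(steps):
            drive = w_t * tactile + w_v * visual - w_inh * m[::-1]
            m += (dt / tau) * (-m + sigmoid(drive))
        return m

    # Index 0 = left hemisphere (right hand), 1 = right hemisphere (left hand).
    intact   = np.array([1.0, 1.0])   # normal tactile synapses
    lesioned = np.array([1.0, 0.4])   # weakened right-hemisphere tactile synapses

    # 1) Healthy, bilateral touch: both hand representations coexist.
    print(steady_state(np.array([1.0, 1.0]), np.zeros(2), intact))    # ~[0.62 0.62]

    # 2) Lesioned, left touch alone: still perceived, though weakly.
    print(steady_state(np.array([0.0, 1.0]), np.zeros(2), lesioned))  # ~[0.06 0.37]

    # 3) Lesioned, bilateral touch: the left-hand response is extinguished
    #    by inhibition from the intact hemisphere (left tactile extinction).
    print(steady_state(np.array([1.0, 1.0]), np.zeros(2), lesioned))  # ~[0.86 0.08]

    # 4) Lesioned, bilateral touch plus a visual stimulus near the left hand:
    #    the extra multimodal drive partially restores the left-hand response
    #    (cross-modal facilitation).
    print(steady_state(np.array([1.0, 1.0]),
                       np.array([0.0, 0.5]), lesioned))               # ~[0.70 0.48]

Even this two-unit caricature reproduces the qualitative pattern the abstract reports: under mutual inhibition, weakening one hemisphere's tactile synapses leaves unilateral perception intact but lets the opposite representation win under bilateral stimulation, and a spatially congruent visual input can rescue the weakened side.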