Emilio Salinas: 1–4 of 4 journal articles
Neural Computation (2006) 18 (6): 1349–1379.
Published: 01 June 2006
Abstract
Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might in fact be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. So if the algorithm used to find the optimal synaptic weights can take into account the variability of the model neurons, it can also take into account the variability of the synapses. Thus, the connection patterns generated with response noise are typically more resistant to synaptic degradation than those obtained without response noise. As a consequence of this interplay, if multiplicative synaptic noise is present, it is better to have response noise in the network than not to have it. These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensorimotor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn.
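The sketch below illustrates the kind of comparison described in this abstract; it is not the article's code, and the task, learning rule, and noise levels are my own illustrative assumptions. A two-input, one-output linear classifier is trained with or without Gaussian response noise on its inputs, and each resulting weight vector is then tested under multiplicative synaptic noise.

```python
# Sketch only: compare weights trained with vs. without response noise,
# then degrade each with multiplicative synaptic noise.  Task, noise
# levels, and learning rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=500):
    # Two Gaussian input classes to be separated by a single output unit.
    labels = rng.random(n)[:, None] < 0.5
    x = rng.normal(0.0, 1.0, size=(n, 2)) + np.where(labels, [1.5, 1.5], [-1.5, -1.5])
    y = (x.sum(axis=1) > 0).astype(float)   # label by the ideal boundary
    return x, y

def train(response_noise, epochs=200, lr=0.05):
    x, y = make_task()
    w = rng.normal(0.0, 0.1, 2)
    for _ in range(epochs):
        xn = x + rng.normal(0.0, response_noise, x.shape)  # noisy responses
        p = 1.0 / (1.0 + np.exp(-(xn @ w)))                # output unit
        w += lr * (xn.T @ (y - p)) / len(y)                # gradient step
    return w

def accuracy_under_synaptic_noise(w, syn_noise, trials=200):
    x, y = make_task(2000)
    acc = []
    for _ in range(trials):
        w_noisy = w * (1.0 + rng.normal(0.0, syn_noise, w.shape))  # multiplicative
        acc.append(np.mean(((x @ w_noisy) > 0) == y))
    return np.mean(acc)

for rnoise in (0.0, 0.5):
    w = train(response_noise=rnoise)
    print(f"response noise {rnoise}: accuracy under multiplicative "
          f"synaptic noise = {accuracy_under_synaptic_noise(w, 0.5):.3f}")
```

Whether the response-noise-trained weights actually tolerate synaptic degradation better depends on the task and noise levels chosen; the sketch only shows how the two noise sources enter such a model.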
Neural Computation (2003) 15 (7): 1439–1475.
Published: 01 July 2003
Abstract
A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.
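A minimal rate-model sketch of the switching idea follows, with two mutually inhibiting units; the gain function, inhibition strength, and background levels are illustrative assumptions rather than the article's integrate-and-fire simulations.

```python
# Minimal rate-model sketch: two mutually inhibiting units whose steady
# state depends on a uniform background input h.  All parameters are
# illustrative assumptions.
import numpy as np

def steady_rates(h, w_inh=8.0, steps=2000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.random(2) * 0.1                  # small random initial rates
    for _ in range(steps):
        inp = h - w_inh * r[::-1]            # background minus cross-inhibition
        target = 1.0 / (1.0 + np.exp(-4.0 * (inp - 1.0)))  # sigmoidal gain
        r += dt * (-r + target)              # relax toward input-driven rate
    return r

for h in (0.2, 1.5, 3.0):
    print(f"background h = {h}: steady rates = {np.round(steady_rates(h), 3)}")
```

With these parameters, a weak background leaves both units in a single low-activity state, whereas stronger backgrounds destabilize the symmetric state and the circuit settles into one of two winner-take-all states, i.e. the uniform input switches the circuit between one and two stable firing levels.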
Neural Computation (2002) 14 (9): 2111–2155.
Published: 01 September 2002
Abstract
Neurons are sensitive to correlations among synaptic inputs. However, models that explicitly include correlations are hard to solve analytically, so their influence on a neuron's response has been difficult to ascertain. To gain some intuition on this problem, we studied the firing times of two simple integrate-and-fire model neurons driven by a correlated binary variable that represents the total input current. Analytic expressions were obtained for the average firing rate and coefficient of variation (a measure of spike-train variability) as functions of the mean, variance, and correlation time of the stochastic input. The results of computer simulations were in excellent agreement with these expressions. In these models, an increase in correlation time in general produces an increase in both the average firing rate and the variability of the output spike trains. However, the magnitude of the changes depends differentially on the relative values of the input mean and variance: the increase in firing rate is higher when the variance is large relative to the mean, whereas the increase in variability is higher when the variance is relatively small. In addition, the firing rate always tends to a finite limit value as the correlation time increases toward infinity, whereas the coefficient of variation typically diverges. These results suggest that temporal correlations may play a major role in determining the variability as well as the intensity of neuronal spike trains.
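A rough simulation in the same spirit is sketched below, assuming a leaky integrate-and-fire neuron driven by a two-level (telegraph) current whose correlation time is set by its switching probability; the specific parameter values are illustrative, not taken from the article.

```python
# Rough sketch: leaky integrate-and-fire neuron driven by a two-level
# (telegraph) input current with correlation time tau_corr.  Parameter
# values are illustrative assumptions.
import numpy as np

def rate_and_cv(tau_corr, mu=1.1, sigma=0.5, t_max=100.0, dt=1e-3,
                tau_m=0.02, v_thresh=1.0, v_reset=0.0, seed=1):
    rng = np.random.default_rng(seed)
    p_switch = dt / tau_corr       # flip probability per step sets correlation time
    s, v, t, spikes = 1.0, 0.0, 0.0, []
    for _ in range(int(t_max / dt)):
        if rng.random() < p_switch:
            s = -s                 # binary input flips state
        v += dt / tau_m * (-v + mu + sigma * s)
        if v >= v_thresh:          # spike and reset
            spikes.append(t)
            v = v_reset
        t += dt
    isi = np.diff(spikes)
    rate = len(spikes) / t_max
    cv = isi.std() / isi.mean() if len(isi) > 1 else float("nan")
    return rate, cv

for tau_corr in (0.002, 0.02, 0.2):
    rate, cv = rate_and_cv(tau_corr)
    print(f"tau_corr = {tau_corr:5.3f} s: rate = {rate:5.1f} Hz, CV = {cv:.2f}")
```

In this toy version, longer correlation times produce longer suprathreshold and subthreshold episodes, which is what raises both the firing rate and the interspike-interval variability.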
Neural Computation (2000) 12 (2): 313–335.
Published: 01 February 2000
Abstract
Sets of neuronal tuning curves, which describe the responses of neurons as functions of a stimulus, can serve as a basis for approximating other functions of stimulus parameters. In a function-approximating network, synaptic weights determined by a correlation-based Hebbian rule are closely related to the coefficients that result when a function is expanded in an orthogonal basis. Although neuronal tuning curves typically are not orthogonal functions, the relationship between function approximation and correlation-based synaptic weights can be retained if the tuning curves satisfy the conditions of a tight frame. We examine whether the spatial receptive fields of simple cells in cat and monkey primary visual cortex (V1) form a tight frame, allowing them to serve as a basis for constructing more complicated extrastriate receptive fields using correlation-based synaptic weights. Our calculations show that the set of V1 simple cell receptive fields is not tight enough to account for the acuity observed psychophysically.
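A toy numerical check of the tight-frame idea is sketched below, using Gaussian tuning curves of a one-dimensional stimulus rather than measured V1 receptive fields; everything in it is an illustrative assumption. For a tight frame, the ratio sum_i <f, g_i>^2 / ||f||^2 equals the same constant A for every function f, in which case correlation-based weights w_i = <f, g_i> reconstruct f as (1/A) sum_i w_i g_i.

```python
# Toy check of the tight-frame condition with Gaussian tuning curves of a
# one-dimensional stimulus; not the paper's receptive-field data.
import numpy as np

s = np.linspace(-np.pi, np.pi, 400)                    # stimulus axis
ds = s[1] - s[0]
centers = np.linspace(-np.pi, np.pi, 40)               # preferred stimuli
g = np.exp(-0.5 * ((s[None, :] - centers[:, None]) / 0.3) ** 2)  # tuning curves

def frame_ratio(f):
    """sum_i <f, g_i>^2 / ||f||^2; constant across all f iff the set is a tight frame."""
    w = g @ f * ds                                     # correlation-based weights
    return (w ** 2).sum() / ((f ** 2).sum() * ds)

tests = {"cos(s)": np.cos(s),
         "cos(3s)": np.cos(3 * s),
         "narrow bump": np.exp(-0.5 * (s / 0.2) ** 2)}
for name, f in tests.items():
    print(f"{name:12s}: frame ratio = {frame_ratio(f):.3f}")
```

If the printed ratios were nearly identical across test functions, the set would be close to a tight frame; the extent to which they differ indicates how much correlation-based weights alone would distort the approximated function.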