Nicolas Brunel
Neural Computation (2006) 18 (5): 1066–1110.
Published: 01 May 2006
Abstract
GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the case in which noise in the network is absent or weak, or on the opposite situation in which it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single-neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state can be classified into two classes. The first consists of clustering instabilities, which exist in a restricted range of noise levels. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons, and the firing patterns of individual neurons are only weakly irregular. The second class of instabilities, termed oscillatory firing rate instabilities, exists at any noise level. They lead to cluster states at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous probability of firing that oscillates in time.
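
To make the setting concrete, here is a minimal simulation sketch of the simplest of the three model classes the abstract mentions: a fully connected network of inhibitory LIF neurons driven by white-noise input, with delayed all-to-all inhibitory feedback. All parameter values (membrane time constant, coupling J, delay, noise level) are illustrative assumptions, not the paper's; an oscillation in the population rate is a crude signature of the asynchronous state losing stability.

```python
# Sketch: fully connected inhibitory LIF network with white-noise drive.
# Parameter values below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

N = 1000             # neurons (fully connected inhibition)
dt, T = 0.1, 1000.0  # time step and duration (ms)
tau_m = 20.0         # membrane time constant (ms)
v_th, v_reset = 20.0, 10.0   # threshold / reset (mV)
mu, sigma = 25.0, 5.0        # mean external drive and noise amplitude (mV)
J = 0.1              # inhibition per presynaptic spike (mV)
delay = 2.0          # synaptic delay (ms)
d_steps = int(delay / dt)

steps = int(T / dt)
v = rng.uniform(v_reset, v_th, N)
spike_count = np.zeros(steps)   # population spike count per time step

for t in range(steps):
    # delayed all-to-all inhibitory feedback from the population
    rec = -J * spike_count[t - d_steps] if t >= d_steps else 0.0
    noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    v += dt / tau_m * (-v + mu) + noise + rec
    fired = v >= v_th
    spike_count[t] = fired.sum()
    v[fired] = v_reset

# an oscillating population rate signals loss of the asynchronous state
rate = spike_count / (N * dt * 1e-3)   # instantaneous rate (Hz)
print("mean rate %.1f Hz, rate std %.1f Hz" % (rate.mean(), rate.std()))
```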
Neural Computation (2003) 15 (10): 2281–2306.
Published: 01 October 2003
Abstract
We calculate the firing rate of the quadratic integrate-and-fire neuron in response to a colored noise input current. Such an input current is a good approximation to the noise due to the random bombardment of spikes, with the correlation time of the noise corresponding to the decay time of the synapses. The key parameter that determines the firing rate is the ratio of the correlation time of the colored noise, τ_s, to the neuronal time constant, τ_m. We calculate the firing rate exactly in two limits: when the ratio τ_s/τ_m goes to zero (white noise) and when it goes to infinity. The correction to the short correlation time limit is O(τ_s/τ_m), which is qualitatively different from that of the leaky integrate-and-fire neuron, where the correction is O(√(τ_s/τ_m)). The difference is due to the different boundary conditions of the probability density function of the membrane potential of the neuron at the firing threshold. The correction to the long correlation time limit is O(τ_m/τ_s). By combining the short and long correlation time limits, we derive an expression that provides a good approximation to the firing rate over the whole range of τ_s/τ_m in the suprathreshold regime, that is, a regime in which the average current is sufficient to make the cell fire. In the subthreshold regime, the expression breaks down somewhat when τ_s becomes large compared to τ_m.
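
As a rough numerical companion to this result, the sketch below drives a QIF neuron (dv/dt = (v² + μ + η)/τ_m, with reset from a large positive to a large negative voltage standing in for ±∞) with an Ornstein-Uhlenbeck process of correlation time τ_s, and estimates the firing rate for several ratios τ_s/τ_m. Parameter values are assumptions chosen to put the neuron in the suprathreshold regime; the brute-force estimate stands in for the paper's exact calculation.

```python
# Sketch: QIF neuron driven by colored (OU) noise, rate vs. tau_s/tau_m.
# Parameters are illustrative assumptions (suprathreshold regime: mu > 0).
import numpy as np

rng = np.random.default_rng(1)

tau_m = 10.0      # membrane time constant (ms)
mu = 0.5          # mean drive; mu > 0 makes the QIF fire without noise
sigma = 0.3       # stationary standard deviation of the noise
v_peak, v_reset = 50.0, -50.0   # numerical stand-ins for +infinity / -infinity
dt, T = 0.01, 5000.0            # time step and duration (ms)

for tau_s in (0.1, 1.0, 10.0, 100.0):   # noise correlation time (ms)
    v, eta, n_spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        # Ornstein-Uhlenbeck noise with correlation time tau_s and std sigma
        eta += -eta * dt / tau_s + sigma * np.sqrt(2 * dt / tau_s) * rng.standard_normal()
        v += dt / tau_m * (v * v + mu + eta)
        if v >= v_peak:
            v = v_reset
            n_spikes += 1
    print("tau_s/tau_m = %5.2f -> rate %.1f Hz" % (tau_s / tau_m, n_spikes / T * 1e3))
```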
Neural Computation (2002) 14 (9): 2057–2110.
Published: 01 September 2002
Abstract
Cortical neurons in vivo undergo a continuous bombardment due to synaptic activity, which acts as a major source of noise. Here, we investigate how filtering of this noise by synapses modeled at various levels of realism affects the dynamics of integrate-and-fire neurons. The noise input is modeled by white noise (for instantaneous synapses) or colored noise (for synapses with a finite relaxation time). Analytical results for the modulation of the firing probability in response to an oscillatory input current are obtained by expanding a Fokker-Planck equation in the small parameters of the problem, namely when the amplitude of the modulation is small compared to the background firing rate and the synaptic time constant is small compared to the membrane time constant. We report here the detailed calculations showing that if a synaptic decay time constant is included in the synaptic current model, the firing-rate modulation of the neuron due to an oscillatory input remains finite in the high-frequency limit, with no phase lag. In addition, we characterize the low-frequency behavior and the behavior of the high-frequency limit for intermediate decay times. We also characterize the effects of introducing a rise time to the synaptic currents and of the presence of several synaptic receptors with different kinetics. In both cases, we determine, using numerical simulations, an effective decay time constant that describes the neuronal response completely.
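
The following sketch illustrates the quantity being computed: a population of uncoupled LIF neurons receives a weakly modulated mean input plus OU-filtered (colored) noise, and the amplitude and phase of the rate modulation are read off at the drive frequency by Fourier projection. All parameters are illustrative assumptions, and this brute-force estimate stands in for the paper's Fokker-Planck calculation rather than reproducing it.

```python
# Sketch: firing-rate modulation of uncoupled LIF neurons with synaptically
# filtered (OU) noise and a sinusoidally modulated mean drive. Parameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

N = 2000                  # independent (uncoupled) LIF neurons
dt, T = 0.05, 2000.0      # time step and duration (ms)
tau_m, tau_s = 20.0, 2.0  # membrane and synaptic time constants (ms)
v_th, v_reset = 20.0, 10.0   # threshold / reset (mV)
mu0, eps = 25.0, 1.0      # mean drive and small modulation amplitude (mV)
f = 100.0                 # drive frequency (Hz)
sigma = 5.0               # noise amplitude (mV)

steps = int(T / dt)
t_ms = np.arange(steps) * dt
v = rng.uniform(v_reset, v_th, N)
eta = np.zeros(N)                 # OU-filtered ("synaptic") noise
rate = np.zeros(steps)

for i in range(steps):
    mu_t = mu0 + eps * np.cos(2 * np.pi * f * t_ms[i] * 1e-3)
    eta += -eta * dt / tau_s + sigma * np.sqrt(2 * dt / tau_s) * rng.standard_normal(N)
    v += dt / tau_m * (-v + mu_t + eta)
    fired = v >= v_th
    rate[i] = fired.sum() / (N * dt * 1e-3)   # instantaneous rate (Hz)
    v[fired] = v_reset

# Fourier projection of the rate onto the drive frequency
r1 = 2.0 * np.mean(rate * np.exp(-2j * np.pi * f * t_ms * 1e-3))
print("modulation: %.2f Hz amplitude, %.1f deg phase" % (abs(r1), np.degrees(np.angle(r1))))
```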
Neural Computation (1999) 11 (7): 1621–1671.
Published: 01 October 1999
Abstract
We study analytically the dynamics of a network of sparsely connected inhibitory integrate-and-fire neurons in a regime where individual neurons emit spikes irregularly and at a low rate. In the limit in which the number of neurons N → ∞, the network exhibits a sharp transition between a stationary regime and an oscillatory global activity regime in which neurons are weakly synchronized. The activity becomes oscillatory when the inhibitory feedback is strong enough. The period of the global oscillation is found to be mainly controlled by synaptic times, but it also depends on the characteristics of the external input. In large but finite networks, the analysis shows that global oscillations of finite coherence time generically exist both above and below the critical inhibition threshold. Their characteristics are determined as functions of system parameters in these two regimes. The results are found to be in good agreement with numerical simulations.
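
For intuition, here is a minimal sketch in the same spirit as the model the abstract describes: a sparsely connected inhibitory LIF network with a fixed in-degree and a synaptic delay, swept over a few coupling strengths. The coherence measure (std/mean of the population rate) should grow once inhibitory feedback is strong enough. Network size, in-degree, delay, and coupling values are assumptions for illustration only.

```python
# Sketch: sparse inhibitory LIF network with delay; coherence vs. coupling.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

N, C = 1000, 100          # network size, in-degree (sparse: C << N)
dt, T = 0.1, 500.0        # time step and duration (ms)
tau_m, delay = 20.0, 2.0  # membrane time constant and synaptic delay (ms)
v_th, v_reset = 20.0, 10.0   # threshold / reset (mV)
mu, sigma = 24.0, 1.0     # external drive mean and noise (mV)
d_steps, steps = int(delay / dt), int(T / dt)

# each neuron receives input from C randomly chosen presynaptic neurons
pre = np.array([rng.choice(N, size=C, replace=False) for _ in range(N)])

for J in (0.05, 0.2, 0.8):    # inhibitory synaptic strength (mV per spike)
    v = rng.uniform(v_reset, v_th, N)
    spikes = np.zeros((steps, N), dtype=bool)
    for t in range(steps):
        rec = 0.0
        if t >= d_steps:
            # delayed inhibition: count of delayed presynaptic spikes
            rec = -J * spikes[t - d_steps][pre].sum(axis=1)
        noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
        v += dt / tau_m * (-v + mu) + noise + rec
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = v_reset
    rate = spikes.sum(axis=1) / (N * dt * 1e-3)      # population rate (Hz)
    coherence = rate.std() / max(rate.mean(), 1e-9)  # crude synchrony index
    print("J = %.2f mV -> mean rate %.1f Hz, coherence %.2f" % (J, rate.mean(), coherence))
```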
Neural Computation (1998) 10 (7): 1731–1757.
Published: 01 October 1998
Abstract
In the context of parameter estimation and model selection, a direct link between the Fisher information and information-theoretic quantities has been exhibited only quite recently. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In light of this result, we consider the optimization of the tuning curve parameters in the case of neurons responding to a stimulus represented by an angular variable.
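
For reference, the large-N relation the abstract alludes to can be written as follows; this is a standard paraphrase under regularity conditions on the tuning and noise, not a quotation from the paper:

```latex
I(\theta; \mathbf{r}) \;\xrightarrow[\;N \to \infty\;]{}\;
H(\theta) \;-\; \frac{1}{2} \int p(\theta)\,
\log \frac{2\pi e}{J(\theta)} \, d\theta
```

where H(θ) is the entropy of the stimulus distribution p(θ) and J(θ) is the population Fisher information about θ. In this limit, the array is as informative about θ as an efficient unbiased estimator with variance 1/J(θ).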
Neural Computation (1996) 8 (8): 1677–1710.
Published: 01 November 1996
Abstract
Single-electrode recordings in the inferotemporal cortex of monkeys during delayed visual memory tasks provide evidence for attractor dynamics in the observed region. The persistent elevated delay activities could be internal representations of features of the learned visual stimuli shown to the monkey during training. When uncorrelated stimuli are presented during training in a fixed sequence, these experiments display significant correlations between the internal representations. Recently, a simple attractor neural network model has quantitatively reproduced the measured correlations. An underlying assumption of the model is that the synaptic matrix formed during the training phase contains in its efficacies information about the contiguity of persistent stimuli in the training sequence. We present here a simple unsupervised learning dynamics that produces such a synaptic matrix if sequences of stimuli are repeatedly presented to the network in a fixed order. The resulting matrix is then shown to convert temporal correlations during training into spatial correlations between attractors. The scenario is that, in the presence of selective delay activity, at the presentation of each stimulus the activity distribution in the neural assembly contains information about both the current stimulus and the previous one (carried by the attractor). Thus the recurrent synaptic matrix can code not only for each of the stimuli presented to the network but also for their context. We combine the idea that for learning to be effective, synaptic modification should be stochastic, with the fact that attractors provide learnable information about two consecutive stimuli. We calculate explicitly the probability distribution of synaptic efficacies as a function of the training protocol, that is, the order in which stimuli are presented to the network. We then solve for the dynamics of a network composed of integrate-and-fire excitatory and inhibitory neurons with a matrix of synaptic collaterals resulting from the learning dynamics. The network has a stable spontaneous activity, and stable delay activity develops after a critical learning stage. The availability of a learning dynamics makes possible a number of experimental predictions for the dependence of the delay activity distributions, and of the correlations between them, on the learning stage and the learning protocol. In particular, it makes specific predictions for pair-associate delay experiments.
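
To convey the core idea, that stochastic Hebbian learning on a fixed cyclic sequence embeds links between consecutive patterns in the synaptic matrix, here is a deliberately toy sketch. It uses binary patterns and binary synapses with potentiation/depression probabilities q_plus/q_minus, and a fraction lam of the previous stimulus persisting in the activity; all of these are invented stand-ins, not the paper's spiking network or its exact learning rule.

```python
# Toy sketch: stochastic Hebbian learning on a fixed cyclic sequence embeds
# links between consecutive patterns. All parameters are invented stand-ins.
import numpy as np

rng = np.random.default_rng(4)

N, P = 200, 10        # neurons, patterns in the (cyclic) training sequence
f = 0.1               # coding level (fraction of active neurons per pattern)
q_plus, q_minus = 0.1, 0.05   # stochastic potentiation / depression probs
lam = 0.3             # fraction of the previous stimulus carried over in the
                      # activity (the "context" of the abstract)

xi = (rng.random((P, N)) < f).astype(float)   # binary stimuli, fixed order
J = np.zeros((N, N))                          # binary synaptic efficacies

for epoch in range(50):
    for k in range(P):
        # activity = current stimulus plus a trace of the previous one
        act = np.clip(xi[k] + lam * xi[k - 1], 0.0, 1.0)
        coactive = np.outer(act, act) > 0
        pot = coactive & (rng.random((N, N)) < q_plus)
        dep = ~coactive & (rng.random((N, N)) < q_minus)
        J[pot] = 1.0
        J[dep] = 0.0

# strength of the learned link between two patterns, normalized
def link(a, b):
    return xi[a] @ J @ xi[b] / (f * f * N * N)

print("consecutive pairs:     %.3f" % np.mean([link(k, k + 1) for k in range(P - 1)]))
print("non-consecutive pairs: %.3f" % np.mean([link(k, (k + 3) % P) for k in range(P)]))
```

After training, the learned matrix links consecutive patterns in the sequence noticeably more strongly than non-consecutive ones, which is the mechanism the abstract invokes for converting temporal correlations into spatial correlations between attractors.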