1-10 of 10
André Longtin
Journal Articles
Publisher: Journals Gateway
Neural Computation (2021) 33 (2): 341–375.
Published: 01 February 2021
Abstract
Spike trains with negative interspike interval (ISI) correlations, in which long/short ISIs are more likely followed by short/long ISIs, are common in many neurons. They can be described by stochastic models with a spike-triggered adaptation variable. We analyze a phenomenon in these models where such statistically dependent ISI sequences arise in tandem with quasi-statistically independent and identically distributed (quasi-IID) adaptation variable sequences. The sequences of adaptation states and resulting ISIs are linked by a nonlinear decorrelating transformation. We establish general conditions on a family of stochastic spiking models that guarantee this quasi-IID property and establish bounds on the resulting baseline ISI correlations. Inputs that elicit weak firing rate changes in samples with many spikes are known to be more detectible when negative ISI correlations are present because they reduce spike count variance; this defines a variance-reduced firing rate coding benchmark. We performed a Fisher information analysis on these adapting models exhibiting ISI correlations to show that a spike pattern code based on the quasi-IID property achieves the upper bound of detection performance, surpassing rate codes with the same mean rate—including the variance-reduced rate code benchmark—by 20% to 30%. The information loss in rate codes arises because the benefits of reduced spike count variance cannot compensate for the lower firing rate gain due to adaptation. Since adaptation states have similar dynamics to synaptic responses, the quasi-IID decorrelation transformation of the spike train is plausibly implemented by downstream neurons through matched postsynaptic kinetics. This provides an explanation for observed coding performance in sensory systems that cannot be accounted for by rate coding, for example, at the detection threshold where rate changes can be insignificant.
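The adaptation mechanism this abstract describes is easy to illustrate numerically. Below is a minimal sketch (not the paper's model or parameters; all values are illustrative) of a leaky integrate-and-fire neuron with a spike-triggered adaptation variable: a long interval lets the adaptation decay further, which shortens the next interval, producing negative lag-1 ISI correlations. The adaptation state sampled at spike times is also recorded, since under the paper's conditions that sequence becomes quasi-IID.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (ours, not the paper's).
dt, mu, D = 0.01, 2.0, 0.1        # time step, mean drive, noise intensity
tau_a, delta = 2.0, 0.5           # adaptation time constant and jump size
v, a, t, t_last = 0.0, 0.0, 0.0, 0.0
isis, a_spikes = [], []
while len(isis) < 5000:
    v += dt * (mu - v - a) + np.sqrt(2 * D * dt) * rng.standard_normal()
    a -= dt * a / tau_a            # adaptation decays between spikes
    t += dt
    if v >= 1.0:                   # threshold crossing -> spike and reset
        v = 0.0
        a_spikes.append(a)         # adaptation state sampled at the spike
        a += delta                 # spike-triggered adaptation jump
        isis.append(t - t_last)
        t_last = t

isis, a_spikes = np.array(isis), np.array(a_spikes)
rho_isi = np.corrcoef(isis[:-1], isis[1:])[0, 1]
rho_a = np.corrcoef(a_spikes[:-1], a_spikes[1:])[0, 1]
print(f"lag-1 ISI correlation {rho_isi:.2f}, adaptation-state correlation {rho_a:.2f}")
```

In this regime `rho_isi` comes out negative, the signature discussed in the abstract; the adaptation-state sequence is markedly less correlated than the ISIs.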
Neural Computation (2020) 32 (8): 1448–1498.
Published: 01 August 2020
Abstract
Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic but low-dimensional and, hence, statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous pools of neurons and modeling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to optimize parameters by gradient ascent on the log likelihood or perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent in a mesoscopic population model affect the accuracy of the inferred single-neuron parameters.
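The inference loop described above (likelihood of parameters given activity, sampled by MCMC, with posterior correlations defining parameter trade-offs) can be sketched generically. In this hedged toy, random-walk Metropolis samples the posterior of a two-parameter linear model standing in for the far richer mesoscopic population likelihood; with positive regressors, slope and intercept are negatively correlated in the posterior, the kind of parameter correlation the abstract refers to. Everything here (model, parameters) is illustrative, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from a toy model y = a*x + b + noise, true (a, b) = (1.5, 0.5).
x = rng.uniform(1.0, 3.0, size=200)
y = 1.5 * x + 0.5 + rng.normal(0.0, 0.3, size=200)

def log_lik(a, b):
    # Gaussian log likelihood (known noise std 0.3), up to a constant.
    return -np.sum((y - a * x - b) ** 2) / (2 * 0.3**2)

theta = np.array([0.0, 0.0])
ll = log_lik(*theta)
samples = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(2)   # random-walk proposal
    ll_prop = log_lik(*prop)
    if np.log(rng.uniform()) < ll_prop - ll:       # Metropolis accept (flat prior)
        theta, ll = prop, ll_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                    # discard burn-in
a_hat, b_hat = post.mean(axis=0)
corr_ab = np.corrcoef(post[:, 0], post[:, 1])[0, 1]  # posterior correlation
```

The chain recovers the generating parameters, and `corr_ab` is strongly negative: the data constrain a combination of slope and intercept more tightly than either alone, exactly the "parameter subsets able to reproduce the data" phenomenon.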
Neural Computation (2006) 18 (8): 1896–1931.
Published: 01 August 2006
Abstract
In two recent articles, Rudolph and Destexhe (2003, 2005) studied a leaky integrator model (an RC-circuit) driven by correlated (“colored”) gaussian conductance noise and gaussian current noise. In the first article, they derived an expression for the stationary probability density of the membrane voltage; in the second, they modified this expression to cover a larger parameter regime. Here we show by standard analysis of solvable limit cases (white noise limit of additive and multiplicative noise sources; only slow multiplicative noise; only additive noise) and by numerical simulations that their first result does not hold for the general colored-noise case and uncover the errors made in the derivation of a Fokker-Planck equation for the probability density. Furthermore, we demonstrate analytically (including an exact integral expression for the time-dependent mean value of the voltage) and by comparison to simulation results that the extended expression for the probability density works much better but still does not exactly solve the full colored-noise problem. We also show that at stronger synaptic input, the stationary mean value of the linear voltage model may diverge and give an exact condition relating the system parameters for which this takes place.
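The model under discussion, a passive membrane driven by colored conductance noise, is straightforward to simulate, which is the kind of numerical check the analysis relies on. Below is a hedged Euler-Maruyama sketch with Ornstein-Uhlenbeck conductances; all parameter values are illustrative, not those of Rudolph and Destexhe. The naive static-conductance voltage estimate is printed alongside the simulated mean to show it is close but not exact, echoing the abstract's point that simple closed-form expressions do not solve the full colored-noise problem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters: membrane (ms, nF, uS, mV) and OU conductance noise.
dt, C, gL, EL = 0.05, 1.0, 0.05, -70.0
Ee, Ei = 0.0, -80.0
ge0, gi0, se, si = 0.01, 0.02, 0.003, 0.006   # conductance means and stds
tau_e, tau_i = 3.0, 10.0                      # noise correlation times (ms)

v, ge, gi = EL, ge0, gi0
vs = []
for _ in range(200000):
    # OU updates with stationary std se, si (Euler-Maruyama).
    ge += dt * (ge0 - ge) / tau_e + se * np.sqrt(2 * dt / tau_e) * rng.standard_normal()
    gi += dt * (gi0 - gi) / tau_i + si * np.sqrt(2 * dt / tau_i) * rng.standard_normal()
    # Leaky integrator with multiplicative (conductance) drive.
    v += dt * (-gL * (v - EL) - ge * (v - Ee) - gi * (v - Ei)) / C
    vs.append(v)

vs = np.array(vs[20000:])                     # discard transient
# Static-conductance estimate; ignores noise-voltage correlations, so it is
# only approximate for colored multiplicative noise.
v_naive = (gL * EL + ge0 * Ee + gi0 * Ei) / (gL + ge0 + gi0)
print(round(vs.mean(), 1), round(v_naive, 1))
```

At these mild noise levels the simulated mean stays finite and near the naive value; the divergence condition mentioned at the end of the abstract arises only at much stronger multiplicative input.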
Neural Computation (2005) 17 (10): 2139–2175.
Published: 01 October 2005
Abstract
Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
Neural Computation (2003) 15 (12): 2779–2822.
Published: 01 December 2003
Abstract
We examine the effects of paired delayed excitatory and inhibitory feedback on a single integrate-and-fire neuron with reversal potentials embedded within a feedback network. These effects are studied using bifurcation theory and numerical analysis. The feedback occurs through modulation of the excitatory and inhibitory conductances by the previous firing history of the neuron; as a consequence, the feedback also modifies the membrane time constant. Such paired feedback is ubiquitous in the nervous system. We assume that the feedback dynamics are slower than the membrane time constant, which leads to a rate model formulation. Our article provides an extensive analysis of the possible dynamical behaviors of such simple yet realistic neural loops as a function of the balance between positive and negative feedback, with and without noise, and offers insight into the potential behaviors such loops can exhibit in response to time-varying external inputs. With excitatory feedback, the system can be quiescent, can be periodically firing, or can exhibit bistability between these two states. With inhibitory feedback, quiescence, oscillatory firing rates, and bistability between constant and oscillatory firing-rate solutions are possible. The general case of paired feedback exhibits a blend of the behaviors seen in the extreme cases and can produce chaotic firing. We further derive a condition for a dynamically balanced paired feedback in which there is neither bistability nor oscillations. We also show how a biophysically plausible smoothing of the firing function by noise can modify the existence and stability of fixed points and oscillations of the system. We take advantage in our mathematical analysis of the existence of an invariant manifold, which reduces the dimensionality of the dynamics, and prove the stability of this manifold. The novel computational challenges involved in analyzing such dynamics with and without noise are also described.
Our results demonstrate that a paired delayed feedback loop can act as a sophisticated computational unit, capable of switching between a variety of behaviors depending on the input current, the relative strengths and asymmetry of the two parallel feedback pathways, and the delay distributions and noise level.
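The rate-model formulation described above can be sketched in a few lines. This hedged toy integrates a single delayed-feedback rate equation, tau·dr/dt = −r + f(I + w·r(t−d)), with a sigmoidal firing function and purely inhibitory delayed feedback (w < 0), one of the regimes the abstract analyzes; all parameters and the specific sigmoid are our illustrative choices, not the paper's.

```python
import numpy as np

def f(x):
    # Smooth sigmoidal firing function (illustrative choice).
    return 1.0 / (1.0 + np.exp(-4.0 * x))

dt, tau, d, w, I = 0.01, 1.0, 3.0, -4.0, 0.5   # delay d >> tau, strong inhibition
nd = int(d / dt)                                # delay in time steps
r = np.zeros(60000)
r[:nd] = 0.6                                    # constant initial history
for i in range(nd, len(r) - 1):
    # Delayed-feedback rate equation, forward Euler.
    r[i + 1] = r[i] + dt * (-r[i] + f(I + w * r[i - nd])) / tau

late = r[40000:]                                # steady-state segment
amplitude = late.max() - late.min()
print(round(amplitude, 2))
```

With this gain and delay, the quiescent fixed point of the loop is unstable and the rate settles onto a large-amplitude oscillation with period of order twice the delay, one of the oscillatory firing-rate solutions the abstract enumerates; shrinking `d` or `|w|` restores a constant firing rate.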
Neural Computation (2003) 15 (8): 1761–1788.
Published: 01 August 2003
Abstract
We study the one-dimensional normal form of a saddle-node system under the influence of additive gaussian white noise and a static “bias current” input parameter, a model that can be looked upon as the simplest version of a type I neuron with stochastic input. This is in contrast with the numerous studies devoted to the noise-driven leaky integrate-and-fire neuron. We focus on the firing rate and coefficient of variation (CV) of the interspike interval density, for which scaling relations with respect to the input parameter and noise intensity are derived. Quadrature formulas for rate and CV are numerically evaluated and compared to numerical simulations of the system and to various approximation formulas obtained in different limiting cases of the model. We also show that caution must be used to extend these results to the neuron model with multiplicative gaussian white noise. The correspondence between the first passage time statistics for the saddle-node model and the neuron model is obtained only in the Stratonovich interpretation of the stochastic neuron model, while previous results have focused only on the Ito interpretation. The correct Stratonovich interpretation yields CVs that are still relatively high, although smaller than in the Ito interpretation; it also produces certain qualitative differences, especially at larger noise intensities. Our analysis provides useful relations for assessing the distance to threshold and the level of synaptic noise in real type I neurons from their firing statistics. We also briefly discuss the effect of finite boundaries (finite values of threshold and reset) on the firing statistics.
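The model studied here is explicit enough to simulate directly: the saddle-node normal form dx/dt = β + x² + ξ(t) with additive gaussian white noise, where a "spike" is a passage from a finite reset −x_b to a finite threshold +x_b (the finite boundaries discussed at the end of the abstract). The sketch below estimates the firing rate and CV from simulated first-passage times; parameters are illustrative, and this is a numerical check, not the paper's quadrature formulas.

```python
import numpy as np

rng = np.random.default_rng(3)

beta, D, xb, dt = 0.1, 0.1, 10.0, 1e-3   # bias, noise intensity, boundary, step
noise_amp = np.sqrt(2 * D * dt)
isis = []
for _ in range(200):
    x, t = -xb, 0.0
    while x < xb:                         # integrate until threshold passage
        x += dt * (beta + x * x) + noise_amp * rng.standard_normal()
        t += dt
    isis.append(t)                        # first-passage time = interspike interval

isis = np.array(isis)
rate = 1.0 / isis.mean()
cv = isis.std() / isis.mean()             # coefficient of variation of the ISI
print(round(rate, 3), round(cv, 3))
```

For β > 0 the deterministic passage time is close to π/√β, so the rate sits near √β/π here, while the noise produces the ISI variability that the paper's scaling relations characterize.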
Neural Computation (2003) 15 (2): 253–278.
Published: 01 February 2003
Abstract
Neuronal adaptation as well as interdischarge interval correlations have been shown to be functionally important properties of physiological neurons. We explore the dynamics of a modified leaky integrate-and-fire (LIF) neuron, referred to as the LIF with threshold fatigue, and show that it reproduces these properties. In this model, the postdischarge threshold reset depends on the preceding sequence of discharge times. We show that in response to various classes of stimuli, namely, constant currents, step currents, white gaussian noise, and sinusoidal currents, the model exhibits new behavior compared with the standard LIF neuron. More precisely, (1) step currents lead to adaptation, that is, a progressive decrease of the discharge rate following the stimulus onset, while in the standard LIF, no such patterns are possible; (2) a saturation in the firing rate occurs in certain regimes, a behavior not seen in the LIF neuron; (3) interspike intervals of the noise-driven modified LIF under constant current are correlated in a way reminiscent of experimental observations, while those of the standard LIF are independent of one another; (4) the magnitude of the correlation coefficients decreases as a function of noise intensity; and (5) the dynamics of the sinusoidally forced modified LIF are described by iterates of an annulus map, an extension to the circle map dynamics displayed by the LIF model. Under certain conditions, this map can give rise to sensitivity to initial conditions and thus chaotic behavior.
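Property (1), adaptation to a step current, is simple to reproduce in a sketch of the threshold-fatigue idea: after each discharge the threshold is incremented and then relaxes, so the reset depends on the firing history. The parameters below are illustrative, not the paper's; in a standard LIF the same constant current would give identical ISIs, whereas here they lengthen toward a steady value.

```python
import numpy as np

# LIF with threshold fatigue (illustrative parameters).
dt, tau_m, tau_th = 0.01, 1.0, 20.0
I = 2.0                        # step current switched on at t = 0
th0, dth = 1.0, 0.3            # resting threshold and post-spike increment
v, th, t, t_last = 0.0, th0, 0.0, 0.0
isis = []
while len(isis) < 15:
    v += dt * (I - v) / tau_m   # membrane integrates the step current
    th += dt * (th0 - th) / tau_th  # threshold relaxes toward its resting value
    t += dt
    if v >= th:
        v = 0.0
        th += dth               # fatigue: threshold ratchets up with each spike
        isis.append(t - t_last)
        t_last = t

print([round(x, 2) for x in isis[:3]], round(isis[-1], 2))
```

The printed ISIs increase monotonically after stimulus onset: the discharge rate adapts downward and then saturates, behaviors (1) and (2) of the abstract, in the noise-free constant-plus-step setting.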
Neural Computation (2001) 13 (1): 227–248.
Published: 01 January 2001
Abstract
The influence of voltage-dependent inhibitory conductances on firing rate versus input current (f-I) curves is studied using simulations from a new compartmental model of a pyramidal cell of the weakly electric fish Apteronotus leptorhynchus. The voltage dependence of shunting-type inhibition enhances the subtractive effect of inhibition on f-I curves previously demonstrated in Holt and Koch (1997) for the voltage-independent case. This increased effectiveness is explained using the behavior of the average subthreshold voltage with input current and, in particular, the nonlinearity of Ohm's law in the subthreshold regime. Our simulations also reveal, for both voltage-dependent and -independent inhibitory conductances, a divisive inhibition regime at low frequencies (f < 40 Hz). This regime, dependent on stochastic inhibitory synaptic input and a coupling of inhibitory strength and variance, gives way to subtractive inhibition at higher output frequencies (f > 40 Hz). A simple leaky integrate-and-fire-type model that incorporates the voltage dependence supports the results from our full ionic simulations.
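The subtractive effect that this work builds on can be reproduced with a deterministic conductance-based LIF, the Holt and Koch configuration in which the shunting reversal potential sits at the reset. The sketch below (all values illustrative, and without the stochastic input needed for the divisive low-frequency regime) compares f-I curves with and without a shunting conductance: the curve shifts rightward, raising rheobase, rather than scaling.

```python
import numpy as np

def firing_rate(I, gi, dt=0.01, T=200.0):
    # Conductance-based LIF; shunting reversal Ei equals leak and reset (= 0).
    gL, EL, Ei, vth, vreset = 0.1, 0.0, 0.0, 1.0, 0.0
    v, spikes = vreset, 0
    for _ in range(int(T / dt)):
        v += dt * (-gL * (v - EL) - gi * (v - Ei) + I)
        if v >= vth:            # spike and reset
            v, spikes = vreset, spikes + 1
    return spikes / T

Is = np.arange(0.05, 0.5, 0.05)
f0 = [firing_rate(I, 0.0) for I in Is]   # no inhibition
f1 = [firing_rate(I, 0.1) for I in Is]   # shunting inhibition
print([round(r, 3) for r in f0])
print([round(r, 3) for r in f1])
```

With the shunt on, currents that fired the uninhibited neuron fall below rheobase (subtractive shift), and the inhibited rate never exceeds the uninhibited one at any current.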
Neural Computation (2000) 12 (5): 1067–1093.
Published: 01 May 2000
Abstract
We present a tractable stochastic phase model of the temperature sensitivity of a mammalian cold receptor. Using simple linear dependencies of the amplitude, frequency, and bias on temperature, the model reproduces the experimentally observed transitions between bursting, beating, and stochastically phase-locked firing patterns. We analyze the model in the deterministic limit and predict, using a Strutt map, the number of spikes per burst for a given temperature. The inclusion of noise produces a variable number of spikes per burst and also extends the dynamic range of the neuron, both of which are analyzed in terms of the Strutt map. Our analysis can be readily applied to other receptors that display various bursting patterns following temperature changes.
Neural Computation (1996) 8 (2): 215–255.
Published: 15 February 1996
Abstract
Mammalian cold thermoreceptors encode steady-state temperatures into characteristic temporal patterns of action potentials. We propose a mechanism for the encoding process. It is based on Plant's ionic model of slow wave bursting, to which stochastic forcing is added. The model reproduces firing patterns from cat lingual cold receptors as the parameters most likely to underlie the thermosensitivity of these receptors varied over a 25°C range. The sequence of firing patterns goes from regular bursting, to simple periodic, to stochastically phase-locked firing or “skipping.” The skipping at higher temperatures is shown to necessitate an interaction between noise and a subthreshold endogenous oscillation in the receptor. The basic period of all patterns is robust to noise. Further, noise extends the range of encodable stimuli. An increase in firing irregularity with temperature also results from the loss of stability accompanying the approach by the slow dynamics of a reverse Hopf bifurcation. The results are not dependent on the precise details of the Plant model, but are generic features of models where an autonomous slow wave arises through a Hopf bifurcation. The model also addresses the variability of the firing patterns across fibers. An alternate model of slow-wave bursting (Chay and Fan 1993) in which skipping can occur without noise is also analyzed here in the context of cold thermoreception. Our study quantifies the possible origins and relative contribution of deterministic and stochastic dynamics to the coding scheme. Implications of our findings for sensory coding are discussed.