Abstract

Recently, van Elburg and van Ooyen (2009) published a generalization of the event-based integration scheme introduced by Carnevale and Hines for an integrate-and-fire neuron model with exponentially decaying excitatory currents and double exponential inhibitory synaptic currents. They showed that the constraints on the synaptic time constants imposed by the Newton-Raphson iteration scheme can be relaxed. In this note, we show that, building on the results published in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009), a further generalization is possible that eliminates any constraint on the time constants. We also demonstrate that a wide range of linear neuron models can in fact be simulated efficiently with this computation scheme, including neuron models mimicking complex neuronal behavior. These results can change the way complex neuronal spiking behavior is modeled: instead of highly nonlinear neuron models with few state variables, it becomes possible to efficiently simulate linear models with a large number of state variables.

In Carnevale and Hines (2002) and Hines and Carnevale (2004), an event-driven approach was described for three types of integrate-and-fire models. The most general model in these contributions, IntFire4, uses four time constants: τe for fast excitatory currents, τj and τi for a slower inhibitory current with rise time, and τm for the membrane. The model was described by the following set of differential equations:
de/dt = −e/τe,
dj/dt = −j/τj,
di/dt = −i/τi + j,
dm/dt = −m/τm + e + i,
where e is the excitatory current, j and i together form the double exponential inhibitory current, m is the membrane state, and a spike is generated when m reaches the threshold.
To ensure correct convergence of the Newton-Raphson iteration scheme, the following restrictions were imposed: τe < τj < τi < τm. These constraints significantly limit the biological realism of the simulations: for example, the decay times of excitatory AMPA synapses are typically larger than the rise times of inhibitory GABA currents (Bier, Kits, & Borst, 1996; Häusser & Roth, 1997), and the simulation of high-conductance states involves the use of small membrane time constants (Destexhe, Rudolph, & Paré, 2003).
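To illustrate how such a linear model is handled between events, the following minimal Python sketch advances the state exactly over an arbitrary interval, for the simpler case of a membrane m driven by a single exponentially decaying excitatory current e. The function advance_state and its argument names are ours, chosen for illustration only; they do not appear in the cited publications.

import math

def advance_state(m, e, dt, tau_m, tau_e):
    """Advance m and e by dt using the closed-form solution of
        de/dt = -e/tau_e,   dm/dt = -m/tau_m + e,
    valid for tau_e != tau_m (the generic case)."""
    # Coefficient of the exp(-t/tau_e) term in the solution for m(t).
    a = e / (1.0 / tau_m - 1.0 / tau_e)
    m_new = (m - a) * math.exp(-dt / tau_m) + a * math.exp(-dt / tau_e)
    e_new = e * math.exp(-dt / tau_e)
    # Because the update is exact, dt can be the whole interval to the
    # next event; no intermediate time steps are needed.
    return m_new, e_new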

Recently, van Elburg and van Ooyen (2009) showed that these non-physiological constraints on the time constants can be relaxed. Multiple time constants can be used, with the only restriction that every excitatory synaptic current must decay faster than every inhibitory current, that is, τe,ν < τi,μ for any combination of ν and μ. The authors proved that when this constraint is satisfied, a standard Newton-Raphson iteration scheme can be used to find the firing time. Moreover, since the iterates never overestimate the exact firing time, the prediction overhead per incoming spike can be reduced by performing only a single Newton-Raphson iteration at a time; after each iteration, an internal self-event at the current estimated time is rescheduled in the event-driven environment.
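The sketch below illustrates this single-iteration strategy for a membrane function written as a sum of exponential terms. The Neuron class, the run loop, and the tolerance eps are illustrative assumptions of ours, not the actual implementation of van Elburg and van Ooyen (2009).

import heapq
import math
from dataclasses import dataclass

@dataclass
class Neuron:
    coeffs: list   # membrane potential v(s) = sum_k coeffs[k] * exp(-s / taus[k]),
    taus: list     # where s is the time elapsed since the last state update
    theta: float = 1.0

def newton_step(n, s):
    """One Newton-Raphson step toward v(s) = theta."""
    v = sum(c * math.exp(-s / tau) for c, tau in zip(n.coeffs, n.taus))
    dv = sum(-c / tau * math.exp(-s / tau) for c, tau in zip(n.coeffs, n.taus))
    return s - (v - n.theta) / dv

def run(neuron, eps=1e-9):
    """Each self-event performs a single iteration; if the estimate has not
    converged yet, a new self-event is scheduled at the updated estimate."""
    queue = [(newton_step(neuron, 0.0), neuron)]
    while queue:
        s, n = heapq.heappop(queue)
        s_next = newton_step(n, s)
        if abs(s_next - s) < eps:
            return s_next                       # converged: predicted firing time
        heapq.heappush(queue, (s_next, n))      # reschedule the self-event

# Example: v(s) = 2.0*exp(-s/20) - 1.5*exp(-s/2) crossing the threshold 1.0.
print(run(Neuron(coeffs=[2.0, -1.5], taus=[20.0, 2.0])))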

The generalizations discussed in van Elburg and van Ooyen (2009) make it possible to simulate more general neuron models in an event-driven way. The remaining constraint, that all excitatory currents should decay faster than any inhibitory current, is also assumed in many related publications on integrate-and-fire models with multiple synaptic time constants, since it significantly simplifies the detection of threshold crossings (e.g., Brette, 2007; Rudolph & Destexhe, 2006). However, in some circumstances this is still a physiologically unacceptable restriction. For example, the simulation of NMDA-like currents, which can be modeled by an excitatory double exponential current (when ignoring nonlinear effects such as the magnesium block), requires slower excitatory time constants (van Elburg & van Ooyen, 2009).

A similar approach for finding the firing time was proposed in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009). This paper presented an efficient integration scheme for arbitrary integrate-and-fire neuron models with multiple exponentially decaying synaptic currents, without any limitation on the values or the number of the time constants. The analytical solution of the membrane function is a sum of exponential functions. It is shown that convergence problems with Newton-Raphson-based iterative methods can occur only when the membrane function contains negative contributions with a time constant larger than that of any of the positive terms. In this case, the first-order approximation used in the Newton-Raphson iteration can underestimate the membrane function, so that the predicted firing time overestimates the true one and the iteration may converge to a wrong firing time or not converge at all. To prevent this, every contribution to the first derivative that can cause such convergence problems is replaced by a virtual contribution in which the time constant of the exponential function is set to the smallest time constant among the positive terms.
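The derivative modification can be sketched as follows. This is our reading of the rule as summarized above, with naming of our own choosing; the exact formulation in D'Haene et al. (2009) may differ in detail.

import math

def safe_newton_step(coeffs, taus, s, theta):
    """One Newton-Raphson step toward v(s) = theta for
    v(s) = sum_k coeffs[k] * exp(-s / taus[k]). Negative terms whose time
    constant exceeds the smallest time constant of the positive terms are
    replaced, in the derivative only, by a virtual contribution that uses
    that smallest positive time constant."""
    tau_safe = min(tau for c, tau in zip(coeffs, taus) if c > 0)
    v = sum(c * math.exp(-s / tau) for c, tau in zip(coeffs, taus))
    dv = 0.0
    for c, tau in zip(coeffs, taus):
        if c < 0 and tau > tau_safe:
            tau = tau_safe                     # virtual time constant
        dv += -c / tau * math.exp(-s / tau)
    return s - (v - theta) / dv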

The applicability of the algorithm of D'Haene et al. (2009) is not limited to integrate-and-fire neuron models with multiple exponentially decaying synapses. Since the solution of the double exponential inhibitory current used in the IntFire4 model also takes the form of a sum of two exponential functions, the algorithm of D'Haene et al. (2009) can be applied without modification. More generally, arbitrary synaptic functions can be simulated efficiently as long as they can be approximated by a sum of exponential functions. From the simulator's point of view, such a synaptic function is represented by multiple connections between the two neurons, each implementing a simple exponentially decaying synapse. By also exploiting the propagation delays of these connections, synaptic responses with multiple peaks can easily be approximated.
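As an illustration of this approximation, the sketch below builds a double-peaked postsynaptic response from three plain exponential connections with different weights, delays, and time constants. All numerical values are arbitrary and chosen only for the illustration.

import numpy as np

def psc(t, weights, delays, taus):
    """Superposition of delayed, exponentially decaying kernels:
    psc(t) = sum_k w_k * exp(-(t - d_k) / tau_k) for t >= d_k, else 0.
    Each term corresponds to one exponentially decaying connection
    between the same pair of neurons, with its own propagation delay d_k."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for w, d, tau in zip(weights, delays, taus):
        active = t >= d
        out[active] += w * np.exp(-(t[active] - d) / tau)
    return out

# A double-peaked response built from three exponential connections.
t = np.linspace(0.0, 50.0, 501)
response = psc(t, weights=[1.0, -0.8, 0.6], delays=[1.0, 1.0, 15.0], taus=[5.0, 2.0, 4.0])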

It can be shown that if the constraint from van Elburg and van Ooyen (2009) is applied to the IntFire4 model, the membrane function never contains negative exponential terms with time constants larger than those of the positive terms. In that case, the algorithm of D'Haene et al. (2009) does not need to replace any negative contribution during the computation of the first derivative of the membrane function, and on this level both methods perform the same computations. However, it was also shown in D'Haene et al. (2009) that the computationally costly Newton-Raphson iterations can be eliminated completely most of the time, as can the updates of all state variables after each incoming spike. This accounts for the largest part of the high speed-ups that were reported.

One of the main ideas behind many recently presented neuron models is to limit the number of state variables in order to decrease the simulation time, while maximizing the number of biologically observed neuron properties, such as frequency adaptation and bursting, that can be reproduced (e.g., Izhikevich, 2004; Brette & Gerstner, 2005). These models are generally characterized by strongly nonlinear differential equations and therefore lack an analytical solution for the firing times. Although limiting the number of state variables simplifies the model and allows a faster time-step-based simulation than biologically realistic neuron models (e.g., compartmental Hodgkin-Huxley neurons), the lack of an analytical solution and the complex nonlinearities make these models hard to optimize computationally using event-based methods. This stands in contrast to the high speed-up ratios that can be reached with the event-based simulation of linear models (D'Haene et al., 2009). An important side effect of the optimizations presented in that work is that the simulation time is almost unaffected by the number of state variables used to describe the neuron model. There is therefore increasing interest in powerful linear models that mimic more complex behavior by adding more state variables. An excellent example is the model recently presented by Mihalaş and Niebur (2009). This model starts from a simple integrate-and-fire neuron with exponential synapses, but the authors show that adding membrane-voltage-dependent threshold dynamics can reproduce many of the rich behaviors that are normally possible only in nonlinear models. The model is described by the following system:
dIj/dt = −kj Ij,   j = 1, …, N,
dV/dt = (1/C) (Ie + Σj Ij − G (V − EL)),
dθ/dt = a (V − EL) − b (θ − θ∞),
where the Ij are spike-induced currents, Ie is the external current, C is the membrane capacitance, G the membrane conductance, EL the resting potential, and θ∞ the resting threshold; a spike is generated when V(t) reaches θ(t), after which the state variables are reset.
This system of differential equations is analytically solvable and, moreover, the solutions for both the membrane potential V(t) and the threshold θ(t) take the form of a sum of exponential functions. The firing time can therefore be found by solving V(t) − θ(t) = 0, where V(t) − θ(t) is again a sum of exponential functions. Hence, the optimization methods presented in D'Haene et al. (2009) can be applied directly to this model, giving it an important computational advantage over nonlinear neuron models.
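Concretely, once the terms of V(t) and θ(t) are available as coefficient/time-constant pairs relative to the last update time, the firing condition can be handed to the same root-finding machinery. The sketch below uses names of our own choosing; constant offsets such as EL and θ∞ can be passed as terms with an infinite time constant.

import math

def firing_time(v_terms, theta_terms, t0=0.0, tol=1e-9, max_iter=100):
    """Newton-Raphson search for f(t) = V(t) - theta(t) = 0, where V and
    theta are given as lists of (coefficient, time constant) pairs, so that
    f(t) is again a sum of exponentials. Constant offsets can be supplied
    as terms with tau = math.inf."""
    # Negate the threshold terms and merge the two sums into one.
    terms = list(v_terms) + [(-c, tau) for c, tau in theta_terms]
    t = t0
    for _ in range(max_iter):
        f = sum(c * math.exp(-t / tau) for c, tau in terms)
        df = sum(-c / tau * math.exp(-t / tau) for c, tau in terms)
        step = f / df
        t -= step
        if abs(step) < tol:
            return t
    return None   # no reliable crossing found within max_iter iterations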

We conclude that there exists an interesting alternative to the recent trend of using highly nonlinear neuron models with the smallest possible number of state variables, models that cannot be accelerated well with event-based techniques: neuron models whose membrane potential can be written as a sum of (possibly many) exponentials can be simulated efficiently with the techniques discussed in this note, both in simple single-cell simulations and in large-scale biologically plausible networks (D'Haene et al., 2009). Moreover, these algorithms do not require any additional constraint on the time constants of the different state variables. The introduction of these event-based simulation techniques could lead to a paradigm shift in the way complex neuronal spiking behavior is modeled, moving from low-order nonlinear to high-order linear differential equations. This was already demonstrated by Mihalaş and Niebur (2009), who introduced a complex neuron model that exhibits rich neuronal behavior usually observed only in nonlinear models.

Acknowledgments

This work is partially funded by the Institute for the Promotion of Innovation Through Science and Technology in Flanders (IWT-Vlaanderen) and by the European Community's Seventh Framework Programme (EU FP7) under grant agreement no. 231267, Self-Organized Recurrent Neural Learning for Language Processing (ORGANIC).

References

Bier, M., Kits, K., & Borst, G. (1996). Relation between rise times and amplitude of GABAergic postsynaptic currents. Journal of Neurophysiology, 75, 1008–1012.
Brette, R. (2007). Exact simulation of integrate-and-fire models with exponential currents. Neural Computation, 19, 2604–2609.
Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94, 3637–3642.
Carnevale, N., & Hines, M. (2002). Efficient discrete event simulation of spiking neurons in NEURON. Poster presented at the 2002 SFN Meeting, Yale University, New Haven. SfN abstract 312.10.
Destexhe, A., Rudolph, M., & Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4, 739–751.
D'Haene, M., Schrauwen, B., Van Campenhout, J., & Stroobandt, D. (2009). Accelerating event-driven simulation of spiking neurons with multiple synaptic time constants. Neural Computation, 21(4), 1068–1099.
Häusser, M., & Roth, A. (1997). Estimating the time course of the excitatory synaptic conductance in neocortical pyramidal cells using a novel voltage jump method. Journal of Neuroscience, 17, 7606–7625.
Hines, M., & Carnevale, N. (2004). Discrete event simulation in the NEURON environment. Neurocomputing, 58, 1117–1122.
Izhikevich, E. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15, 1063–1070.
Mihalaş, Ş., & Niebur, E. (2009). A generalized linear integrate-and-fire neural model produces diverse spiking behaviors. Neural Computation, 21, 704–718.
Rudolph, M., & Destexhe, A. (2006). Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Computation, 18, 2146–2210.
van Elburg, R. A. J., & van Ooyen, A. (2009). Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models. Neural Computation, 21, 1913–1930.