## Abstract

Recently, van Elburg and van Ooyen (2009) published a generalization of the event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory currents and double-exponential inhibitory synaptic currents, introduced by Carnevale and Hines. In that paper, it was shown that the constraints on the synaptic time constants imposed by the Newton-Raphson iteration scheme can be relaxed. In this note, we show that, following the results published in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009), a further generalization is possible that eliminates any constraint on the time constants. We also demonstrate that a wide range of linear neuron models can in fact be simulated efficiently with this computation scheme, including neuron models mimicking complex neuronal behavior. These results can change the way complex neuronal spiking behavior is modeled: instead of highly nonlinear neuron models with few state variables, it is possible to efficiently simulate linear models with a large number of state variables.

The IntFire4 model of Carnevale and Hines uses a separate time constant for each state variable: τ_e for fast excitatory currents, τ_j and τ_i for a slower inhibitory current with rise time, and τ_m for the membrane. The model is described by a set of linear differential equations. In order to ensure correct convergence of a Newton-Raphson iteration scheme, the following restrictions were imposed: τ_e < τ_j < τ_i < τ_m. These constraints imply a significant limitation on the biological realism of the simulations: for example, the decay times of excitatory AMPA synapses are typically larger than the rise times of inhibitory GABA currents (Bier, Kits, & Borst, 1996; Häusser & Roth, 1997), and the simulation of high-conductance states involves the use of small membrane time constants (Destexhe, Rudolph, & Paré, 2003).
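The differential equations themselves are not reproduced here. As a reconstruction (assuming the standard NEURON IntFire4 formulation, with the weights on the coupling terms absorbed into the state variables), they take roughly the form

$$
\frac{de}{dt} = -\frac{e}{\tau_e}, \qquad
\frac{dj}{dt} = -\frac{j}{\tau_j}, \qquad
\frac{di}{dt} = -\frac{i}{\tau_i} + j, \qquad
\frac{dm}{dt} = -\frac{m}{\tau_m} + e + i,
$$

where the neuron fires when the membrane state *m* reaches threshold. The cascade from *j* into *i* produces the double-exponential inhibitory current with a finite rise time.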

Recently, van Elburg and van Ooyen (2009) showed that these non-physiological constraints on the time constants can be relaxed. It was shown that multiple time constants can be used, with the only restriction that the slowest-decaying excitatory synaptic current should decay faster than the fastest-decaying inhibitory current: τ_{e,ν} < τ_{i,μ} for any combination of ν and μ. The authors proved that when these constraints are applied, a normal Newton-Raphson iteration scheme can be used to find the firing time. Moreover, since the exact firing time is never overestimated, the overhead of firing-time prediction can be reduced by performing only a single Newton-Raphson iteration per incoming spike. After each iteration, an internal self-event with the current estimated time is rescheduled in the event-driven environment.
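This incremental scheme can be sketched in Python (all names and numeric values below are illustrative, not taken from either paper): the membrane potential is a sum of exponentials, and each processed self-event performs a single Newton-Raphson step before rescheduling itself.

```python
import heapq
import math

def membrane(terms, t):
    """Membrane potential as a sum of exponentials: V(t) = sum(a * exp(-t / tau))."""
    return sum(a * math.exp(-t / tau) for a, tau in terms)

def membrane_deriv(terms, t):
    """First derivative of the membrane potential."""
    return sum(-a / tau * math.exp(-t / tau) for a, tau in terms)

def newton_step(terms, t, theta):
    """One Newton-Raphson step toward V(t) = theta."""
    return t - (membrane(terms, t) - theta) / membrane_deriv(terms, t)

def predict_firing_time(terms, t0, theta=1.0, tol=1e-9, max_events=100):
    """Lazy prediction: each self-event refines the estimate by a single
    Newton-Raphson step and then reschedules itself in the event queue."""
    queue = [(t0, "self")]
    t_new = t0
    for _ in range(max_events):
        t_ev, _kind = heapq.heappop(queue)
        t_new = newton_step(terms, t_ev, theta)
        if abs(t_new - t_ev) < tol:
            break  # converged: t_new is the predicted firing time
        heapq.heappush(queue, (t_new, "self"))
    return t_new

# A slowly decaying positive drive plus a faster-decaying negative current;
# no negative term outlives the positive one, so plain Newton-Raphson is safe.
terms = [(3.0, 20.0), (-2.5, 5.0)]
t_spike = predict_firing_time(terms, 1.0)
```

In a real event-driven simulator, each incoming spike would simply append an (amplitude, time constant) pair to `terms` and enqueue one refinement event, instead of iterating to convergence immediately.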

The generalizations discussed in van Elburg and van Ooyen (2009) allow simulating more general neuron models in an event-driven way. The remaining constraint that all excitatory currents should decay faster than any inhibitory current is also assumed in many related publications on integrate-and-fire models with multiple synaptic time constants, since this significantly simplifies the threshold finding mechanism (e.g., Brette, 2007; Rudolph & Destexhe, 2006). However, in some circumstances, this is still a (physiologically) unacceptable restriction. For example, the simulation of NMDA-like currents, which can be modeled by an excitatory double exponential current (when ignoring nonlinear effects such as the magnesium block), requires slower excitatory time constants (van Elburg & van Ooyen, 2009).

A similar approach for finding the firing time was proposed in D'Haene, Schrauwen, Van Campenhout, and Stroobandt (2009). This paper presented an efficient integration scheme for arbitrary integrate-and-fire neuron models with multiple exponentially decaying synaptic currents, with no limitations on the values or the number of the time constants. The analytical solution of the membrane function is a sum of exponential functions. Convergence problems with Newton-Raphson-based iterative methods can occur only when the membrane function contains negative contributions with a time constant larger than that of any of the positive terms. In this case, the first derivative of the membrane function, used to perform a Newton-Raphson iteration, can underestimate the membrane function and converge to the wrong firing time, or not converge at all. To prevent this, all contributions to the first derivative that can cause convergence problems are replaced by a virtual contribution in which the time constant of the exponential function is set to the smallest time constant occurring among the positive terms.
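A sketch of this derivative safeguard in Python (our reading of D'Haene et al., 2009; the exact treatment of the amplitudes in the original algorithm may differ): any negative term whose time constant exceeds that of every positive term is evaluated, in the derivative only, as if it had the smallest positive time constant.

```python
import math

def true_deriv(terms, t):
    """Exact derivative of V(t) = sum(a * exp(-t / tau))."""
    return sum(-a / tau * math.exp(-t / tau) for a, tau in terms)

def safeguarded_deriv(terms, t):
    """Derivative in which slow negative contributions are replaced by a
    virtual fast contribution, intended to keep Newton-Raphson from
    overshooting the threshold crossing."""
    tau_fast = min(tau for a, tau in terms if a > 0)  # smallest positive-term tau
    d = 0.0
    for a, tau in terms:
        if a < 0 and tau > tau_fast:
            tau = tau_fast  # virtual contribution (sketch; see D'Haene et al., 2009)
        d += -a / tau * math.exp(-t / tau)
    return d

# A positive term (tau = 1) outlived by a negative term (tau = 10):
# the problematic case described above.
terms = [(1.0, 1.0), (-0.5, 10.0)]
```

When no negative term outlives the positive ones, `safeguarded_deriv` reduces to the exact derivative, so the safeguard costs nothing in the benign case.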

The applicability of the algorithm of D'Haene et al. (2009) is not limited to integrate-and-fire neuron models with multiple exponentially decaying synapses. Since the solution of the double-exponential inhibitory currents used in the IntFire4 model also takes the form of a sum of two exponential functions, the algorithm of D'Haene et al. (2009) can be applied without modifications. In general, arbitrary synaptic functions can be simulated efficiently when they can be approximated as a sum of exponential functions. From the simulator's point of view, such synapse functions are represented by multiple interconnections between two neurons, each implementing a simple exponentially decaying synapse. By also using the propagation delay of the interconnections, synaptic responses with multiple peaks can easily be approximated.
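For instance (a sketch with illustrative numbers, not taken from the papers): a double-exponential synaptic response is just the sum of two single-exponential "connections" with opposite signs, and adding a delayed pair yields a two-peaked response.

```python
import math

def exp_syn(t, amp, tau, delay=0.0):
    """One exponentially decaying synaptic contribution with propagation delay."""
    t = t - delay
    return amp * math.exp(-t / tau) if t >= 0.0 else 0.0

def double_exp(t, tau_rise=1.0, tau_decay=5.0):
    """Double-exponential response expressed as two exponential connections
    between the same pair of neurons."""
    return exp_syn(t, 1.0, tau_decay) + exp_syn(t, -1.0, tau_rise)

def two_peak(t):
    """A second, delayed connection pair approximates a response with two peaks."""
    return double_exp(t) + 0.8 * (exp_syn(t, 1.0, 5.0, delay=15.0)
                                  + exp_syn(t, -1.0, 1.0, delay=15.0))
```

Each contribution is itself a plain exponential, so the event-driven machinery above applies to every connection unchanged.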

It can be shown that if the constraints from van Elburg and van Ooyen (2009) are applied to the IntFire4 model, the membrane function will never contain negative exponential terms with larger time constants than the positive terms. In this case, the algorithm of D'Haene et al. (2009) does not need to replace any negative contributions during the computation of the first derivative of the membrane function, and on this level both methods perform the same computations. However, it was also shown in D'Haene et al. (2009) that the computationally costly Newton-Raphson iterations can be eliminated entirely most of the time, as can the updates of all state variables after each incoming spike. This accounts for the largest part of the reported speed-ups.

These techniques also extend to linear neuron models with a time-varying threshold. In such models, both the membrane potential *V*(*t*) and the threshold θ(*t*) take the form of a sum of exponential functions. The firing time can then be found by solving *V*(*t*) − θ(*t*) = 0, where the difference is again a sum of exponential functions. Therefore, the optimization methods presented in D'Haene et al. (2009) can be directly applied, giving such models an important computational benefit over nonlinear neuron models.
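To make this concrete, here is a small Python sketch (illustrative amplitudes and time constants; bisection stands in as a simple, robust substitute for the optimized Newton-Raphson scheme of D'Haene et al., 2009): merging the term lists of *V*(*t*) and θ(*t*) yields a single sum of exponentials whose zero crossing is the firing time.

```python
import math

def exp_sum(terms, t):
    """Evaluate f(t) = sum(a * exp(-t / tau))."""
    return sum(a * math.exp(-t / tau) for a, tau in terms)

# Membrane potential and adaptive threshold, each a sum of exponentials.
V_terms = [(2.0, 20.0), (-1.5, 2.0)]      # slow drive, fast inhibition
theta_terms = [(1.0, 1e9), (0.5, 10.0)]   # ~constant baseline + decaying part

# V(t) - theta(t) is again a sum of exponentials: negate the threshold terms.
diff_terms = V_terms + [(-a, tau) for a, tau in theta_terms]

def first_crossing(terms, t0, t1, tol=1e-9):
    """Bisection for f(t) = 0 on [t0, t1], assuming one sign change."""
    sign0 = exp_sum(terms, t0) > 0
    while t1 - t0 > tol:
        tm = 0.5 * (t0 + t1)
        if (exp_sum(terms, tm) > 0) == sign0:
            t0 = tm
        else:
            t1 = tm
    return 0.5 * (t0 + t1)

t_fire = first_crossing(diff_terms, 0.0, 5.0)
```

Because the merged function stays in the same sum-of-exponentials form, no special handling of the threshold dynamics is needed: the same root-finder serves fixed and adaptive thresholds alike.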

We conclude that there exists an interesting alternative to the recent trend of using highly nonlinear neuron models with the smallest possible number of state variables (models that cannot easily be accelerated using event-based techniques): neuron models in which the membrane potential can be written as a sum of (possibly many) exponentials can be simulated efficiently with the techniques discussed in this note, both in simple single-cell simulations and in large-scale biologically plausible networks (D'Haene et al., 2009). Moreover, these algorithms do not require any additional constraint on the time constants of the different state variables. The introduction of these novel event-based simulation techniques could lead to a paradigm shift in the way complex neuronal spiking behavior is modeled, moving from low-order nonlinear to high-order linear differential equations. This was already demonstrated in Mihalaş and Niebur (2009), where a complex neuron model was introduced that can exhibit rich neuronal behavior usually observed only in nonlinear models.

## Acknowledgments

This work is partially funded by the Institute for the Promotion of Innovation Through Science and Technology in Flanders (IWT-Vlaanderen) and by the European Community's Seventh Framework Programme (EU FP7) under grant agreement no. 231267, Self-Organized Recurrent Neural Learning for Language Processing (ORGANIC).