Abstract

The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNNs that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is largely independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.

1.  Introduction

Spiking neuron models are becoming more important for four reasons: (1) biologically inspired research shows that spike timing matters in neuronal systems (Theunissen & Miller, 1995; VanRullen, Guyonneau, & Thorpe, 2005); (2) they have more computational power than artificial neural networks (Maass, 1996), which use the average firing rate of neurons as inputs; (3) it has been shown that temporal problems, such as speech recognition, can be handled more efficiently than with artificial neural networks (Verstraeten, Schrauwen, Stroobandt, & Van Campenhout, 2005); and (4) spiking neurons communicate through discrete valueless spikes instead of analog values. This makes these neurons particularly suited for hardware implementations, although direct hardware implementations (Agis, Ros, Diaz, Carrillo, & Ortigosa, 2007; Schrauwen, D'Haene, Verstraeten, & Van Campenhout, 2008) have their limitations regarding flexibility and space. Therefore, very large spiking neural networks (SNNs) still have to be simulated using conventional computer architectures.

There are essentially two ways to simulate SNNs: time step based and event driven. In a time-step-based approach, the simulation interval is partitioned into fixed time steps. At the end of each time step interval, the network is evaluated, and the state variables of each neuron are updated. The precision of the simulation depends on the length of the time step, which obviously affects simulation speed. Although this is a very simple approach, the asynchronous nature of the spikes requires small time steps in order to achieve accurate simulation results (Softky, 1995). In order to improve the precision or the simulation speed, some recently developed time-step-based simulators use interpolation techniques to estimate the exact spike epochs more accurately (Hansel, Mato, Meunier, & Neltner, 1998; Shelley & Tao, 2001; Morrison, Straube, Plesser, & Diesmann, 2007), or use variable time step methods that use bigger time steps for neurons that are in a mostly inactive state (Lytton & Hines, 2005).

If we do not consider electrical gap junctions (which are sometimes modeled in biologically realistic neuron models), neurons can see only the spikes emitted by their presynaptic neurons. In that case, the internal dynamics of a presynaptic neuron are not relevant for the postsynaptic neurons and thus need not be communicated. Event-driven simulation takes advantage of this (Watts, 1993). Instead of evaluating the network at regular time intervals, the state of each neuron is evaluated only when necessary, that is, when a spike is received or emitted. This allows exact simulations, as the timings of spikes can be calculated up to machine precision. Moreover, because a typical SNN is characterized by low average activity (Grassmann & Anlauf, 1998; Muresan & Ignat, 2004; Destexhe, Rudolph, & Paré, 2003), this approach can result in very fast simulations (D'Haene, Schrauwen, & Stroobandt, 2006).

Leaky integrate-and-fire (LIF) models (Gerstner & Kistler, 2002) are one of the best-known examples of threshold-based spiking neuron models. In these models, a spike is generated as soon as the membrane potential reaches a certain threshold. After firing, the membrane potential is forced to a reset value, and the neuron optionally has a refractory period during which it cannot emit spikes. A very simple case occurs when synaptic interactions are modeled as instantaneous Dirac pulses, which causes the membrane potential to rise instantaneously and decay afterward to its resting potential. Because firing can happen only at the moment of an incoming spike, the event-driven simulation of this model is very simple, as no future firing time has to be calculated.

An important step to better capture the dynamics of neurons is the addition of synaptic dynamics to the neuron model. They model the biological behavior of neurotransmitters at a synaptic cleft (Gerstner & Kistler, 2002). Maass (1996) and Maass and Markram (2002) showed that synaptic models significantly increase the computational expressive power of neurons. Because a synaptic model applies a filtering action to the incoming spikes, the somatic current that the neuron receives is no longer a pointlike event, but a continuous (decaying) function. This behavior is often modeled using an exponentially decaying synaptic function, characterized by some time constant τs. As a result, a neuron is able to fire in the future, that is, strictly after a spike was received. An event-driven simulator has to be able to predict this firing time.

1.1.  Motivation.

In biological neurons, different synaptic time constants are usually involved in the same neuron model. Also, the amount of dendritic filtering (which slows the kinetics of synapses) depends on the length of the dendritic connection (Magee & Cook, 2000). From the simulation point of view, the dendritic filtering can be seen as a multitude of different synaptic time constants. Multiple synaptic time constants are also important when neural networks are used to solve engineering problems, because different timescales can be mixed in the same neuron. Some learning rules for SNNs even explicitly explore these dynamics (Schrauwen & Van Campenhout, 2006).

In this letter, we show that the use of multiple synaptic time constants can strongly complicate the simulation of neurons. We discuss the methods that are currently used to simulate these neuron models and show that they scale badly when the complexity of the calculations increases (i.e., when multiple synaptic time constants are involved). Hence, we propose a novel approach that can drastically reduce the computational effort of the simulation. This approach is based on the observation that there is a relatively high probability that a new spike arrives before the firing time of the neuron is reached, which renders all previous calculations obsolete. Therefore, the main idea of this work is to do the fewest possible calculations to improve the approximation of the next firing time.

We accomplish this goal using several techniques that combine into one efficient algorithm, which outperforms current state-of-the-art algorithms (Brette, 2007) in terms of speed, even when only a few time constants are used. Moreover, our algorithm does not impose any restrictions on the number of synaptic time constants (in the most extreme case, each input connection can use a separate time constant) or on their values.

1.2.  Organization and Contribution.

This letter is organized as follows. In section 2, we briefly review the LIF neuron model with multiple synaptic time constants, which is the reference model here. In section 3, a modified Newton-Raphson method is proposed that guarantees convergence to the first firing time for arbitrary synaptic time constants. In section 4, we discuss several techniques that can each significantly speed up the simulation of general LIF neurons, but that yield the largest speed-up over a wide range of synaptic activities when combined into one elegant algorithm that unites the best of all these techniques. Section 5 shows the results of our techniques on randomly generated recurrent neural networks for engineering problems and on biologically realistic neurons. Section 6 presents the conclusion.

This work significantly extends the results presented in D'Haene et al. (2006). The techniques presented in this prior work performed well on some typical engineering applications, but in large, biologically inspired networks with high activity, the speed gain was rather small. This work introduces some important new concepts (e.g., partial Newton-Raphson iterations) to solve these problems, while other techniques presented in D'Haene et al. (2006) (maximum approximation technique, single Newton-Raphson iteration) are significantly improved, resulting in even higher speed-up. Finally, this work provides mathematics that adds theoretical support for our techniques.

2.  The LIF Neuron with Multiple Synaptic Time Constants

One of the main difficulties in event-driven simulation of SNNs is the prediction of the firing time tf of a neuron, which can have an important impact on the simulation speed. Although the LIF neuron with exponentially decaying synaptic currents is a relatively simple model, it involves some simulation difficulties in an event-driven simulation system.

Consider an LIF neuron with multiple inputs, indexed by i. Gerstner and Kistler (2002) showed that the LIF model is a special case of the SRM0 model. The postsynaptic potential u(t) of such a neuron is given by
u(t) = \sum_{t^{(f)}} \eta_0\!\left(t - t^{(f)}\right) + \sum_i w_i \sum_{t_i^{(f)}} \epsilon_{0,i}\!\left(t - t_i^{(f)}\right)    (2.1)
with η0 the kernel function that models the reset and refractory behavior of the neuron and ϵ0,i the kernel function that models the neuron's response to presynaptic spikes on synapse i. The variable wi denotes the weight that is applied to incoming spikes at input i; the times of these incoming spikes are denoted by ti(f), and t(f) denotes the times at which the neuron generates a spike itself. This happens when u(t) equals a threshold value θ. For the LIF model with exponentially decaying synaptic currents, the kernel functions are given by the following equations (Gerstner & Kistler, 2002):
\eta_0(s) = -(\theta - u_r)\, e^{-s/\tau_m}\, \Theta(s)    (2.2)
\epsilon_{0,i}(s) = \left(e^{-s/\tau_m} - e^{-s/\tau_{s,i}}\right) \Theta(s)    (2.3)
with θ the threshold, ur the reset potential, and τm and τs,i the time constants of the membrane and of synapse i. The variable Θ(s) denotes the Heaviside step function.
During a simulation run, the membrane function defined in equation 2.1 is computed incrementally. Since the sum over the η0 terms is linear and since the η0 terms share the same time constant, they can be combined into one state variable Vr(t). Also, the sum over the ϵ0,i terms can be combined into the state variables Vm,i(t) and Vs,i(t) for each time constant. Thus, in intervals without spikes, the membrane potential u(t) evolves in the following way:
u(t) = V_r(t_l)\, e^{-(t - t_l)/\tau_m} + \sum_i \left[ V_{m,i}(t_l)\, e^{-(t - t_l)/\tau_m} + V_{s,i}(t_l)\, e^{-(t - t_l)/\tau_{s,i}} \right]    (2.4)
with tl the last update time of the neuron model. Because Vr(t) and all Vm,i(t) share the same decay constant τm, they can be combined into a single state variable Vtot,m(t) = Vr(t) + ∑iVm,i(t). Equation 2.4 can thus be written as
u(t) = V_{tot,m}(t_l)\, e^{-(t - t_l)/\tau_m} + \sum_i V_{s,i}(t_l)\, e^{-(t - t_l)/\tau_{s,i}}    (2.5)
The same can be done for inputs that share the same time constant τs,i: one state variable can be used for each group of inputs sharing the same time constant, which reduces the number of state variables. For clarity, we assume that each input i is connected to a separate synaptic function (unless mentioned otherwise), but it is trivial to implement a translation table that projects each input i on a state variable j.
Each time a new spike arrives at input i, the state variables Vtot,m(t) and Vs,i(t) have to be updated according to equations 2.1 and 2.3:
V_{tot,m}(t) \leftarrow V_{tot,m}(t) + w_i, \qquad V_{s,i}(t) \leftarrow V_{s,i}(t) - w_i    (2.6)
These update rules can be applied for both τm < τs,i and τm > τs,i. A special case occurs when τm = τs,i. In this case, equation 2.3 is transformed into the α-function. In this letter, we omit this special case without losing generality, because we can always replace τs,i by τs,i + Δ, with Δ sufficiently small.
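As an illustration, the following sketch shows how the state variables of equations 2.5 and 2.6 can be maintained incrementally in an event-driven setting. The class and member names are illustrative and do not correspond to our actual implementation; the ±wi update assumes the kernel normalization of equation 2.3.

    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Sketch of the incremental state handling of equations 2.5 and 2.6.
    // One state variable per synaptic time constant; names are illustrative.
    struct MultiTauLIF {
        double tau_m;                  // membrane time constant
        std::vector<double> tau_s;     // synaptic time constants
        double V_tot_m = 0.0;          // combined membrane state variable V_tot,m
        std::vector<double> V_s;       // synaptic state variables V_s,i
        double t_last = 0.0;           // time of the last update, t_l

        MultiTauLIF(double tm, std::vector<double> ts)
            : tau_m(tm), tau_s(std::move(ts)), V_s(tau_s.size(), 0.0) {}

        // Decay all state variables from t_last to t (equation 2.5).
        void decayTo(double t) {
            const double dt = t - t_last;
            V_tot_m *= std::exp(-dt / tau_m);
            for (std::size_t i = 0; i < V_s.size(); ++i)
                V_s[i] *= std::exp(-dt / tau_s[i]);
            t_last = t;
        }

        // Incoming spike of weight w on synapse i at time t (equation 2.6);
        // the +/- w update assumes the kernel normalization of equation 2.3.
        void receiveSpike(std::size_t i, double w, double t) {
            decayTo(t);
            V_tot_m += w;
            V_s[i]  -= w;
        }

        // Membrane potential at the last update time t_last.
        double potential() const {
            double u = V_tot_m;
            for (double v : V_s) u += v;
            return u;
        }
    };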

In order to find the next firing time tf of a neuron (assuming that no new spikes arrive in this interval), the time when u(t) first reaches the threshold θ has to be computed. Therefore, equation 2.5 with u(t) = θ must be solved for t. In the general case, this is not analytically solvable. Furthermore, simple iterative techniques such as Newton-Raphson (NR) cannot be used because they do not converge in general. However, in section 3, we present an efficient solution for this convergence problem.

A solution was recently proposed in Brette (2007). It is assumed that all synaptic time constants are commensurable—that they are related to each other by some rational ratio. In this case, the membrane function can be replaced by a higher-degree polynomial function. A polynomial root-finding algorithm can now be used to find all threshold crossings, and the first one is selected as the next firing time. However, the computational complexity of the analytical solution strongly depends on the degree of the polynomial representation. This degree can be very high when multiple time constants are involved or when τm and τs are not carefully chosen. In section 5, we show that even when the number of time constants is limited with carefully chosen time constants, our algorithm outperforms this method.

Another problem that occurs when multiple time constants are involved is that all state variables (i.e., Vtot,m and all Vs,i) have to be updated to the current time for each spike entering the neuron (otherwise the membrane state is not known). This also has an important negative influence on the simulation speed and does not depend on the complexity of the firing time prediction.

To avoid these problems, most LIF neuron models found in the literature impose further restrictions on the synaptic time constants. One rather drastic restriction that is often applied is to use only one time constant for all synapses in the neuron model: ∀i: τs,i = τs. In this case, the synaptic state variables can also be combined into one variable, which reduces the previous equation to a two-variable model:
u(t) = V_m(t_l)\, e^{-(t - t_l)/\tau_m} + V_s(t_l)\, e^{-(t - t_l)/\tau_s}    (2.7)
For arbitrary τm and τs, not related to each other by some rational ratio, this function still cannot be solved analytically for t. However, the membrane function is concave as long as the first derivative is positive. Therefore, NR iterations can be performed.
In order to further accelerate the calculations, the time-consuming NR calculations should also be eliminated. This can be accomplished by using a further simplified neuron model or by using look-up tables. A good example of the first technique is the neuron model developed by Olaf Booij (unpublished personal communication, 2003). He assumes a fixed relation between τm and τs such that τm = 2τs. In this case, equation 2.7 can be simplified to
\theta = V_m(t_l)\, y + V_s(t_l)\, y^2, \qquad \text{with } y = e^{-(t - t_l)/\tau_m}.

This is a quadratic equation in y that can be solved analytically; the firing time then follows from tf = tl − τm ln(y). As we show in section 5, the computational requirements of this model are comparable to those of simple LIF neurons (with instantaneous Dirac inputs).
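A minimal sketch of this analytical prediction, assuming equation 2.7 with τm = 2τs and the substitution y = e−(t−tl)/τm; the function name and the root handling are illustrative only:

    #include <cmath>
    #include <optional>

    // Sketch of the analytical firing-time prediction for Booij's simplified
    // model (tau_m = 2*tau_s): solve V_s*y^2 + V_m*y - theta = 0 for
    // y = exp(-(t - t_l)/tau_m) and keep the first crossing.
    std::optional<double> booijFiringTime(double V_m, double V_s,
                                          double theta, double tau_m, double t_l) {
        const double a = V_s, b = V_m, c = -theta;
        double best = -1.0;                            // largest valid y found
        if (a == 0.0) {                                // degenerate: linear in y
            if (b != 0.0) best = theta / b;
        } else {
            const double disc = b * b - 4.0 * a * c;
            if (disc < 0.0) return std::nullopt;       // no threshold crossing
            const double sq = std::sqrt(disc);
            const double y1 = (-b + sq) / (2.0 * a);
            const double y2 = (-b - sq) / (2.0 * a);
            // A crossing at or after t_l corresponds to 0 < y <= 1; the larger
            // such y is the earlier (i.e., the first) crossing.
            if (y1 > 0.0 && y1 <= 1.0) best = y1;
            if (y2 > 0.0 && y2 <= 1.0 && y2 > best) best = y2;
        }
        if (best <= 0.0 || best > 1.0) return std::nullopt;
        return t_l - tau_m * std::log(best);           // t_f = t_l - tau_m*ln(y)
    }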

The use of look-up tables is a well-known technique to estimate tf, which offers high speed-ups with sufficient accuracy (depending on the size of the table; Brette, 2006). Some researchers even replaced all calculations of the neuron model with higher-dimensional precalculated look-up tables (Carrillo, Ros, Ortigosa, Barbour, & Agis, 2005). An important drawback of look-up tables is that memory size grows dramatically with the desired accuracy. Moreover, the dimension of the look-up table is related to the number of state variables and therefore with the number of synaptic time constants. Thus, neurons using multiple synaptic time constants are prone to excessive memory use. The look-up table also has to be built separately for each set of neuron model characteristics. This forces researchers to limit the number of neuron characteristics to be used within the same network.

3.  Finding the Firing Time of Neurons with Multiple Synaptic Time Constants

As observed before, when multiple synaptic time constants are involved, a simple root finding algorithm (e.g., Newton-Raphson) cannot be used because the algorithm may not converge or may converge to the wrong firing time tf. An example is given in Figures 1a and 1b. There may also be several threshold crossings, and the root-finding method should guarantee that it converges to the first threshold crossing.

Figure 1:

Newton-Raphson iteration in a neuron model with two synaptic time constants. An excitatory spike is received at time t1 and an inhibitory spike at time t2. (a) The inhibitory spike causes the NR method to converge to the wrong threshold crossing of the membrane function. (b) Even worse, if a local minimum exists, NR will not converge at all, and the simulator will predict that no spike will occur. (c) Best solution: the derivative for an NR iteration is replaced by the tangent of the membrane function. This tangent is, however, hard to compute in the general case. (d) An approximation that involves no computational overhead. For each iteration, a virtual membrane function is created (the small dotted line) by replacing the time constant of the positive state variables with the slowest time constant of all negative state variables.

A solution for this convergence problem was proposed in Makino (2003): the time interval of a complex membrane function is partitioned into subintervals. In each subinterval, the membrane function is guaranteed to have at most one threshold crossing, and the second derivative does not change sign. As a result, NR can be applied in each subinterval. In order to compute subintervals efficiently, linear envelopes are used to estimate the range of function values. The linear envelope of a function sum is composed of the sum of the linear envelopes of each addend function. In the case of an LIF with multiple synaptic time constants, each synapse is in fact an addend function to the membrane potential (see equation 2.4). Makino's approach has several disadvantages when the number of addend functions increases. For example, the upper-bound envelope of inhibitory synapses is the horizontal asymptote of the function. However, this is a poor approximation of the real function, as inhibitory synapses are simply neglected in the computation of the total envelope. Therefore, in balanced neural networks, the linear envelope will be a very rough approximation of the function's envelope, resulting in very short subintervals. Better subintervals can be created using linear envelopes of the first and second derivative of the addend functions. However, this complicates the algorithm and also requires the computation of these more complex envelopes.

3.1.  Our Approach.

Our approach is related to Makino's but offers a drastically improved convergence speed for a high number of synaptic time constants with no computational overhead. We first introduce a formalism that expresses a minimum requirement for correct convergence and then propose an adapted NR iteration technique that guarantees convergence to the correct threshold crossing of the membrane potential.

In a normal NR iteration method, the root of a function f(t) is estimated by iteratively computing the following expression (Tjalling, 1995):
t_{i+1} = t_i - \frac{f(t_i)}{f'(t_i)}    (3.1)
However, this procedure can be unstable near a horizontal tangent or a local extremum. Rather than using the tangent line to f(t), we propose to use the weaker concept of dominating line, defined as follows:

Definition.
A dominating line of a function f(t) at point t0 is a line that contains (t0, f(t0)) and does not intersect f(t) for any t > t0. The slope of the most level dominating line is denoted by ḟdom(t0) and is given by

\dot{f}_{dom}(t_0) = \sup_{t > t_0} \frac{f(t) - f(t_0)}{t - t_0}.    (3.2)

Note that all lines passing through (t0, f(t0)) with a slope of at least ḟdom(t0) are dominating lines of f(t) in t0.

Theorem.

Assume that for a function f(t), the following conditions hold:

  1. f(t) is continuous.

  2. f(t0) < 0 for some point t0.

  3. f(t) has at least one root tr with tr > t0. Let tf denote the smallest such root: tf = min{tr > t0 : f(tr) = 0}.

  4. In the interval [t0, tf), the slope of the most level dominating line is bounded—there exists a number M such that ḟdom(t) ⩽ M for all t ∈ [t0, tf).

Then the iterative method given by

t_{i+1} = t_i - \frac{f(t_i)}{\dot{f}_i},    (3.3)

with ḟdom(ti) ⩽ ḟi ⩽ M, will converge to the first root tf of the function f(t).

Proof.

First note that conditions 1 and 2, together with condition 3, imply that f(t) < 0 for all t ∈ [t0, tf). Now start from t0 and execute the following steps:

  • Set i = 0.

  • Start at ti, and choose a slope ḟi fulfilling the following condition: ḟdom(ti) ⩽ ḟi ⩽ M. Compute ti+1 as ti+1 = ti − f(ti)/ḟi.

  • Repeat this procedure from ti+1.

We now observe that the sequence t0 < t1 < t2 < … is increasing (because f(ti) < 0 and ḟi > 0) and bounded, because ∀i: ti ⩽ tf. Indeed, assume the contrary: ti > tf for some i. Then the dominating line L through (ti−1, f(ti−1)) with slope ḟi−1 satisfies L(tf) < 0, while f(tf) = 0, contrary to the assumption that L dominates f(t). Hence, the sequence converges.

Finally, we have to show that the sequence actually converges to tf. Because the sequence converges, limi→∞(ti+1 − ti) = 0. According to equation 3.3, this implies that limi→∞ f(ti)/ḟi = 0. Since f(t) is continuous by assumption 1 and since the slopes ḟi are bounded by M, the function value converges to 0: limi→∞f(ti) = 0. Because f(t) < 0 on [t0, tf), the limit of the sequence must be tf itself. Hence, the sequence t0 < t1 < t2 < … converges to tf.

We shall call this method a safe NR method. Figure 1c illustrates this technique using the optimal slope at each iteration step. However, it is important that the calculation of a candidate slope can be executed quickly, as it has to be performed for each safe NR iteration step. Unfortunately, the calculation of ḟdom(ti) is a difficult problem when multiple synaptic time constants are involved and (in the general case) requires an iterative method itself.

3.2.  Fast Dominating Slope.

If we use a suboptimal dominating slope ḟi > ḟdom(ti), more iterations will have to be performed before an adequate precision for tf is reached. However, if this slope can be found with little computational overhead, the cost of a few extra iterations will be much lower than the cost of computing the optimal slope in each iteration step, especially when the optimizations discussed in section 4 are applied. Therefore, our algorithm uses a fairly good approximation of the optimal slope that involves no computational overhead at all.

For clarity, we first rename the state variables of the membrane function u(t) from equation 2.5: define Vtot,m = V0 with τm = τ0, and represent the state variable Vs,i of each synapse i by Vj (j ⩾ 1) with time constant τj. Next, we define J+ and J−, the collections of all positive and all negative state variables of the membrane function u(t). We can now rewrite equation 2.5 as follows:
u(t) = \sum_{j \in J^+} V_j\, e^{-(t - t_l)/\tau_j} + \sum_{j \in J^-} V_j\, e^{-(t - t_l)/\tau_j}    (3.4)
The first derivative of this function for t = tl is given by
\frac{du(t_l)}{dt} = -\sum_{j \in J^+} \frac{V_j}{\tau_j} - \sum_{j \in J^-} \frac{V_j}{\tau_j}    (3.5)
First, let us assume that there are no positive state variables: J+ = ∅. It is clear that for each Δt > 0, the decay factor exp(−Δt/τj) of each state variable is smaller than 1. Therefore, the following condition holds:
\forall \Delta t > 0: \quad \frac{du(t_l + \Delta t)}{dt} \leq \frac{du(t_l)}{dt}    (3.6)
As a result, du(tl)/dt is also the most level dominating line of u(t) at tl. However, if there are also positive state variables involved, they can temporarily decrease du/dt. This situation occurs if the time constant τj of a positive state variable is smaller than any of the negative time constants. Only then will a negative influence on the first derivative decay faster than any positive influence, and thus du(tl + Δt)/dt can be larger than du(tl)/dt for Δt>0.
We define τsafe as the largest time constant τj over all negative state variables. Instead of computing the first derivative of u(t) at tl, we compute a safe derivative of the membrane function according to
\frac{du_{safe}(t_l)}{dt} = -\sum_{j \in J^+} \frac{V_j}{\max(\tau_j, \tau_{safe})} - \sum_{j \in J^-} \frac{V_j}{\tau_j}    (3.7)
Since the first term on the right-hand side of equation 3.7 is never more negative than the corresponding term in equation 3.5, it is clear that dusafe(tl)/dt ⩾ du(tl)/dt.
This safe derivative is in fact the first derivative of a virtual function usafe(t), in which the time constants of the positive state variables of equation 3.4 are replaced by max(τj, τsafe). Since state variables sharing the same time constant τsafe can be combined into a single variable, this virtual function can be written as

u_{safe}(t) = \sum_{k \in J^+} V_k\, e^{-(t - t_l)/\tau'_k} + \sum_{j \in J^-} V_j\, e^{-(t - t_l)/\tau_j}, \qquad \tau'_k = \max(\tau_k, \tau_{safe}),    (3.8)

with the important property that τ′k ⩾ τj for every k ∈ J+ and every j ∈ J−.

Concerning these definitions, the following properties hold:

  1. At time tl, u(tl) = usafe(tl) and dusafe(tl)/dt ⩾ du(tl)/dt. Since the state variables in equation 3.8 decay slower than the corresponding state variables of u(t), it is clear that usafe(tl + Δt)>u(tl + Δt) for all Δt>0. We say that usafe(t) dominates u(t).

  2. Suppose that initially dusafe(tl)/dt < 0. Since the positive state variables Vk in equation 3.8 decay with a time constant that is at least as large as that of any of the negative state variables Vj, dusafe(tl + Δt)/dt will remain negative for all Δt>0. Because usafe(t) dominates u(t) (see point 1), this implies that no threshold crossing can occur in the future.

  3. If dusafe(tl)/dt ⩾ 0, the same argument can be followed, and dusafe(tl)/dt ⩾ dusafe(tl + Δt)/dt for all Δt>0. Thus, dusafe(tl)/dt is also the slope of the most level dominating line of usafe(t) when it is positive. Because usafe(t) dominates u(t) (see point 1), dusafe(tl)/dt is a valid dominating slope of u(t) at time tl.

Due to points 2 and 3, dusafe(tl)/dt can be used as an approximation of the optimal slope ḟdom(tl). This is illustrated in Figure 1d.
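The following sketch illustrates how such a safe iteration step could be implemented. It assumes that all state variables have already been updated to the current iteration time ti; all names are illustrative rather than taken from our simulator.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Sketch of one safe Newton-Raphson step (sections 3.1 and 3.2). V[j] and
    // tau[j] hold the state variables and their time constants, already updated
    // to the current iteration time t_i; theta is the threshold.
    struct SafeNRStep {
        double next_t;     // t_{i+1}; never overshoots the true firing time
        bool   may_fire;   // false if the dominated function can no longer rise
    };

    SafeNRStep safeNewtonStep(const std::vector<double>& V,
                              const std::vector<double>& tau,
                              double theta, double t_i) {
        // Membrane value u(t_i) and tau_safe, the largest time constant among
        // the negative state variables.
        double u = 0.0, tau_safe = 0.0;
        for (std::size_t j = 0; j < V.size(); ++j) {
            u += V[j];
            if (V[j] < 0.0) tau_safe = std::max(tau_safe, tau[j]);
        }
        if (u >= theta) return {t_i, true};       // threshold already reached

        // Safe derivative du_safe(t_i)/dt (equation 3.7): positive state
        // variables are treated as if they decayed with max(tau[j], tau_safe).
        double du_safe = 0.0;
        for (std::size_t j = 0; j < V.size(); ++j) {
            const double tj = (V[j] > 0.0) ? std::max(tau[j], tau_safe) : tau[j];
            du_safe -= V[j] / tj;
        }
        if (du_safe <= 0.0) return {t_i, false};  // no future threshold crossing

        // One safe NR iteration on f(t) = u(t) - theta (equation 3.3).
        return {t_i + (theta - u) / du_safe, true};
    }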

4.  Efficient Processing Techniques

Currently existing optimization methods for event-driven simulation have one main goal: they try to perform a very efficient firing time prediction. However, the prediction cost is only one part of the computational cost. Before the prediction can be performed, the current state of the neuron has to be computed, which requires that all state variables be updated to the current simulation time. When the number of state variables increases, this update cost increases as well. Moreover, the firing time prediction itself usually requires iterative updates of the state variables. In this section, we propose several techniques that pursue a broader goal: to eliminate as many unneeded operations (state variable updates) as possible in all stages of the algorithm.

In order to accomplish this goal, we introduce the concept of partial updating. The main idea behind this concept is that we allow the simulator to compute very rough firing time estimations of the neuron without actually knowing the current state of all state variables. This allows us to update only a small part of the state variables. Exact simulation can be guaranteed if the roughly computed firing time never overshoots the real firing time. Only when a more accurate firing time prediction is required will the simulator have to update more state variables to reach adequate precision.

A neuron can be in very different regimes, depending on the number of inputs, the input activity, the weight scaling, and so forth. This complicates the development of a technique that performs well in different circumstances. Also, the number of state variables has an important influence on the efficiency of a certain technique. As a result, we present several techniques, each of them delivering very good results in a certain circumstance. However, the real power of the algorithm lies in the fact that these techniques combine into one efficient algorithm that delivers high performance in all these circumstances. Pseudocode for our algorithm is given in the appendix. The C++ source code is available under the GPL license.

The techniques discussed are single NR iterations, maximum approximation, partial NR iterations, active synapses, and a sorting mechanism that determines which state variables can be best updated first in order to further minimize the number of required updates.

4.1.  Single Newton-Raphson Iterations.

In an event-driven environment, each time a neuron receives a spike, a new firing time has to be predicted. In rather complex models like those used in this letter, this requires several NR iterations in order to reach adequate precision. Each iteration requires an update of all state variables to the next iteration time, which generates a fixed computational cost. However, if another spike arrives before the neuron reaches the predicted firing time, the previous prediction becomes useless. In practice, this happens with most of the firing time predictions. A first step in improving the performance of the algorithm is reducing the computational cost of a firing time prediction that might be cancelled afterward.

One very useful property of the safe NR method that can be used here is the fact that an iteration will never overshoot the exact firing time. Therefore, the event simulator can perform only one safe NR iteration and schedule the resulting roughly estimated firing time as a virtual event in the event queue. We call this prediction t̂f. Only when the simulator has reached t̂f is a next safe NR iteration performed. This process is then repeated until t̂f has adequate precision, that is, until it coincides with the real firing time tf up to the desired tolerance. If a new spike arrives before t̂f, only the last safe NR iteration was useless, and thus the computational overhead is minimal. Additionally, the disadvantage of the nonoptimal convergence speed of the safe NR method is minimized in combination with this single NR iteration method.

A similar approach was used in the partitioning mechanism of Makino (2003): only when the simulator reaches the end of a partition is the next time interval checked for possible threshold crossings. This was called incremental partitioning. However, in our case, time is not partitioned: each interval between successive iterations moves closer to tf but never contains the firing time.

This method can be further optimized by taking into account the time the simulator spends scheduling a new firing time in the event queue. When the membrane potential gets closer to the threshold, the safe NR iteration steps become smaller, and the probability that a new spike arrives before t̂f decreases correspondingly. At a certain point, the speed gain of doing only a single NR iteration will be neutralized by the overhead of the extra scheduling actions.

After each iteration, the algorithm has to consider whether scheduling the current prediction t̂f in the event queue is more advantageous than doing another NR iteration. Therefore, the extra cost associated with each decision has to be considered. We define cu as the computational cost of updating the neuron and estimating a new firing time and cs as the computational cost of scheduling a prediction in the event queue. Consider the case that no new spike arrives before t̂f. Then the correct decision is to immediately perform another NR iteration. Scheduling the current time stamp would involve an extra cost cs, which is unnecessary in this case. On the other hand, if a new spike does arrive before t̂f, scheduling the event is the best option, since performing another NR iteration would generate an event that would have to be discarded due to the new incoming spike and thus generate an extra cost cu. This is summarized in Table 1, which shows the extra costs.

Table 1:
Summary of the Extra Computational Costs for Different Algorithmic Actions.
                      When Performing an Update    When Scheduling
Incoming spike        cu                           0
No incoming spike     0                            cs
In order to estimate the expected extra cost of each decision, we need to know the probabilities that a new spike or no new spike arrives in the interval Δt = t̂f − t, with t the current simulation time. These probabilities are given by 1 − P(W ⩾ Δt) and P(W ⩾ Δt), respectively, with W the waiting time before a new spike arrives. Scheduling after a single NR iteration should be preferred only if the expectation of the extra cost for scheduling is less than the expectation of the extra cost of performing another NR iteration. According to Table 1, this means
P(W \geq \Delta t)\, c_s < \left(1 - P(W \geq \Delta t)\right) c_u    (4.1)
In practice, cu is much larger than cs, but the exact relation depends on the complexity of the neuron model (i.e., the number of different synaptic time constants and the membrane function) and on the size of the network or the scheduling method used. For example, scheduling methods based on calendar queues (Brown, 1988) allow fast O(1) scheduling actions. If we assume that incoming spikes form a Poisson process with Tavg the average interspike arrival time, the waiting time W is exponentially distributed, and the probability that no new spike arrives before a predicted firing time is
P(W \geq \Delta t) = e^{-\Delta t / T_{avg}}    (4.2)
Combining this with equation 4.1, the condition to schedule becomes
\Delta t > T_{avg}\, \ln\!\left(1 + \frac{c_s}{c_u}\right)    (4.3)
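Written out, equation 4.3 follows from equations 4.1 and 4.2 (in the forms given above) by a short rearrangement:

e^{-\Delta t / T_{avg}}\, c_s < \left(1 - e^{-\Delta t / T_{avg}}\right) c_u
\;\Longleftrightarrow\; \frac{c_s}{c_u} < e^{\Delta t / T_{avg}} - 1
\;\Longleftrightarrow\; \Delta t > T_{avg}\, \ln\!\left(1 + \frac{c_s}{c_u}\right).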
If this condition is not met, another NR iteration is performed without scheduling the current t̂f. This process is repeated until Δt is sufficiently large according to equation 4.3 or until adequate precision is reached. The parameter Tavg can be adaptively estimated during the simulation run using a running average of the interspike arrival times of previous spikes.
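The following sketch shows how this decision rule and the running estimate of Tavg might be implemented; the cost constants and names are illustrative and would have to be tuned for a concrete simulator.

    #include <cmath>

    // Sketch of the schedule-or-iterate decision of section 4.1 (equation 4.3)
    // together with an adaptively estimated T_avg.
    struct SpikeRateEstimator {
        double T_avg = 1e-3;    // running average of interspike arrival times
        double alpha = 0.05;    // smoothing factor of the running average
        double t_prev = 0.0;    // arrival time of the previous spike

        void onSpike(double t) {
            T_avg = (1.0 - alpha) * T_avg + alpha * (t - t_prev);
            t_prev = t;
        }
    };

    // True if the step dt = t_hat_f - t is large enough that scheduling the
    // prediction as a virtual event is cheaper in expectation than immediately
    // performing another safe NR iteration (equation 4.3).
    bool shouldSchedule(double dt, double T_avg, double c_u, double c_s) {
        return dt > T_avg * std::log(1.0 + c_s / c_u);
    }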

4.2.  Partial Updating and Maximum Approximation.

The observation that a neuron generally has to receive several excitatory spikes before it reaches the threshold gives rise to another optimization idea: if we can ensure that no spike can occur, the complete cost of a firing time prediction can be eliminated. If such a “spike test” is faster than an actual firing time prediction, it can result in significant speed-up. This spike test idea has been used in Brette (2006) for a conductance-based LIF model and in Tonnelier (2007) for nonlinear LIF neurons, but both use only one synaptic time constant. In Brette (2007), a quick spike test method is used based on Descartes' rule of signs for LIF neurons with multiple synaptic time constants, but with the constraint that all time constants have to be commensurable and that excitatory time constants should be smaller than the inhibitory time constants in order to allow efficient processing. However, these techniques require that all state variables of the neuron model are updated to the current simulation time before a (quick) spike test can be performed.

In this section, we introduce a novel approach that can perform a safe maximum approximation of the membrane function, even if not all state variables are updated. This gives us potentially a much higher speed-up than can be reached with traditional spike test techniques. Also, our approach does not involve any constraints on the number and the values of the synaptic time constants. We first explain the case of a neuron with one exponential synaptic time constant and then extend this to multiple synaptic time constants.

In the case that only one synaptic time constant is involved, the state of the membrane function at time tl is given by the two state variables Vm and Vs. The maximum value of the membrane can be found by differentiating equation 2.7 with respect to t and setting the derivative to zero:
\frac{du(t)}{dt} = -\frac{V_m(t_l)}{\tau_m}\, e^{-(t - t_l)/\tau_m} - \frac{V_s(t_l)}{\tau_s}\, e^{-(t - t_l)/\tau_s} = 0    (4.4)
A positive maximum exists only if Vm > 0 and Vs < 0 for τm > τs, or if Vm < 0 and Vs > 0 for τm < τs. Solving equation 4.4 for t − tl gives
t - t_l = \frac{\tau_m \tau_s}{\tau_m - \tau_s}\, \ln\!\left(-\frac{V_s(t_l)\, \tau_m}{V_m(t_l)\, \tau_s}\right)    (4.5)
The maximum value of the membrane can be found by substituting the solution for (ttl) back into equation 2.7:
u_{max} = V_m(t_l)^{x_m} \left(-V_s(t_l)\right)^{-x_s} \left[\left(\frac{\tau_s}{\tau_m}\right)^{x_s} - \left(\frac{\tau_s}{\tau_m}\right)^{x_m}\right]    (4.6)
with xm = τm/(τm − τs) and xs = τs/(τm − τs).

The second factor of equation 4.6 is the maximum postsynaptic potential (PSP) for a spike with unit weight. This value can be precalculated for each neuron. If the result of equation 4.6 is smaller than the threshold θ of the neuron, no spike can occur, and thus tf = +∞. The first factor can be calculated quickly using look-up tables for exp(x) and ln(x). Because these functions are not dependent on the neuron parameters, only one table has to be built for the entire simulation kernel.
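A sketch of this spike test for a single synaptic time constant, assuming the factored form of equation 4.6 given above and τm > τs; names are illustrative.

    #include <algorithm>
    #include <cmath>

    // Sketch of the spike test for a single synaptic time constant
    // (equations 4.4 to 4.6), assuming tau_m > tau_s. The second factor of
    // equation 4.6 (the maximum PSP of a unit-weight spike) only depends on the
    // time constants and is precomputed; the first factor is evaluated with
    // exp and ln so that look-up tables could be used instead.
    struct SingleTauMaxTest {
        double x_m, x_s;         // tau_m/(tau_m - tau_s) and tau_s/(tau_m - tau_s)
        double unit_psp_max;     // precomputed second factor of equation 4.6

        SingleTauMaxTest(double tau_m, double tau_s)
            : x_m(tau_m / (tau_m - tau_s)),
              x_s(tau_s / (tau_m - tau_s)),
              unit_psp_max(std::pow(tau_s / tau_m, x_s) -
                           std::pow(tau_s / tau_m, x_m)) {}

        // Upper bound on the membrane potential for the current state.
        double membraneMax(double V_m, double V_s) const {
            if (V_m <= 0.0 || V_s >= 0.0)           // no positive maximum ahead
                return std::max(V_m + V_s, 0.0);    // bound by the current value
            const double first =
                std::exp(x_m * std::log(V_m) - x_s * std::log(-V_s));
            return first * unit_psp_max;            // equation 4.6
        }

        // The neuron cannot fire as long as this returns false.
        bool mayFire(double V_m, double V_s, double theta) const {
            return membraneMax(V_m, V_s) >= theta;
        }
    };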

If multiple synaptic time constants are involved, it becomes much more complicated to predict with certainty if a spike will occur. Also, as already mentioned, we do not want to update all state variables before being able to do a spike test. The solution is to use a safe maximum approximation test instead, which is an upper bound of the real maximum of the membrane potential: if the approximated maximum is smaller than the threshold, it is guaranteed that the neuron will not fire.

In the procedure that we have implemented, the maximum influence mi of each PSP from synapse i on the membrane potential is computed and stored. When a new spike arrives at the neuron, only the state variable(s) related to this input are updated and a new maximum mi is calculated. Only if the sum of all maxima reaches the threshold is there a possibility for the neuron to fire; in this case, the algorithm starts updating the other state variables (because the neuron's state has to be fully updated to perform an NR iteration).

During these update actions, the maximum influence mi of each synapse is also updated. If the PSP is decreasing, mi will become smaller. This can cause the approximated maximum to drop below the threshold after only a couple of synapse updates. At that moment, there is again a guarantee that no spike can occur, and updating the remaining synaptic state variables is no longer necessary.

How can we calculate mi? A simple method is to keep a separate variable for Vm,i and Vs,i for each synapse i, as in equation 2.4. The value of mi can now be calculated exactly as in the case of only one time constant (see equation 4.6); the maximum of inhibitory spikes is estimated as 0. This technique was used in D'Haene et al. (2006) and yielded significant speed-up of the simulations. However, this technique also causes an important overhead, as the state variable Vm,i has to be calculated for each synapse. It also requires more memory to store each Vm,i.

It is more efficient, both computationally and in terms of memory requirements, to use the optimization of equation 2.5 and keep only a global variable Vtot,m, which is updated each time a new spike arrives. Since we no longer have information about each individual PSP in this case, a different solution must be used to estimate mi. In our approach, we imagine that each Vm,i in equation 2.4 has the opposite value of Vs,i: ∀i: Vm,i = −Vs,i. Thus, the resulting PSP is positive if Vs,i < 0 and τs,i < τm or if Vs,i > 0 and τs,i > τm. The maximum of each PSP follows from equation 4.6:
m_i = \max\!\left(-V_{s,i}(t_l)\left[\left(\frac{\tau_{s,i}}{\tau_m}\right)^{x_{s,i}} - \left(\frac{\tau_{s,i}}{\tau_m}\right)^{x_{m,i}}\right],\; 0\right)    (4.7)

with xm,i and xs,i defined as before, but for time constant τs,i.
The “pretended” portions Vm,i = −Vs,i used to compute mi can be subtracted from Vtot,m in equation 2.5. A maximum of the membrane function can thus be estimated as the sum of the individual maxima mi and the maximum value of the remaining membrane's state variable:
\hat{u}_{max} = \max\!\left(V_{tot,m}(t_l) + \sum_i V_{s,i}(t_l),\; 0\right) + \sum_i m_i    (4.8)
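The following sketch illustrates this bookkeeping, assuming the forms of equations 4.7 and 4.8 given above and τs,i < τm for every synapse; names and the interface are illustrative.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Sketch of the safe maximum approximation of section 4.2 (equations 4.7
    // and 4.8). For every synapse i an upper bound m_i on the future
    // contribution of its PSP is cached; the spike test only needs the sum of
    // these bounds plus a bound on the remaining membrane variable, so state
    // variables of synapses that did not receive a spike need not be updated.
    struct MaxApproximation {
        std::vector<double> unit_psp_max;   // second factor of eq. 4.6 per synapse
        std::vector<double> m;              // cached per-synapse maxima m_i
        double sum_m = 0.0;                 // running sum of all m_i

        explicit MaxApproximation(std::vector<double> psp_max)
            : unit_psp_max(std::move(psp_max)), m(unit_psp_max.size(), 0.0) {}

        // Recompute m_i from the freshly updated state variable V_s,i
        // (equation 4.7); PSPs that can only be inhibitory are bounded by 0.
        void refreshSynapse(std::size_t i, double V_s_i) {
            const double m_new = (V_s_i < 0.0) ? -V_s_i * unit_psp_max[i] : 0.0;
            sum_m += m_new - m[i];
            m[i] = m_new;
        }

        // Safe spike test (equation 4.8). V_rem is V_tot,m plus the sum of all
        // V_s,i, i.e., the membrane variable with the pretended portions
        // -V_s,i subtracted. Returns false if no threshold crossing is possible.
        bool mayFire(double V_rem, double theta) const {
            return std::max(V_rem, 0.0) + sum_m >= theta;
        }
    };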

The computational overhead is significantly reduced compared to D'Haene et al. (2006), and the speed-up is further increased. As a result, this method yields very good results in networks with irregular spike patterns or for neurons with a limited number of synaptic time constants (<1000). However, it is clear that the approximated maximum is always larger than the real maximum of the membrane function. In some cases, the difference can be significant, depending on the number of state variables and the input activity. For example, in biologically inspired setups, the number of inputs can be very large (>10,000). When the input activity is in a balanced regime, the membrane function can be close to the threshold value for longer time periods, causing ûmax to be constantly larger than the threshold value. This results in updating most of the synaptic state variables each time a new spike arrives. This problem can be tackled by the technique we explain next.

4.3.  Partial Newton-Raphson Iteration.

If the maximum approximation method detects a possible threshold crossing, even after all state variables are updated, an NR iteration is started. However, if there is a lot of input activity, the prediction of a complete NR iteration is too accurate (the probability that a new spike arrives before this single iterated crossing point is very high because Tavg is very small).

With this in mind, we have developed a partial NR iteration mechanism that is able to predict the next firing time using partially updated synaptic state variables. In this mechanism, each additional state variable that is updated further increases t̂f. As soon as the probability of a new spike arrival within the predicted time interval Δt (see section 4.1) is sufficiently high, the algorithm stops updating the synaptic state variables, and the current partially predicted firing time is scheduled.

In order to predict a new firing time based on synaptic state variables that are not up-to-date, an upper bound of their influence on the membrane must be available. The maximum approximation method discussed in the previous section is not sufficient as an upper bound, because there is no timing information in this bound. Another requirement is that the computation of the upper bound should be very efficient because it is executed constantly.

We found a method that produces satisfying results and uses the characteristics of a safe NR iteration. As we explained in section 3, a safe slope is calculated for each iteration. This slope is actually the sum of the individual safe derivatives of each synapse. A linear approximation based on this derivative is an upper bound of the influence of the synapse on the membrane and can be used for the estimation of t̂f, even if the corresponding state variable is not updated. This technique is illustrated in Figure 2. In this figure, a situation with three inputs is shown. Each time a spike is received, only the current synaptic state variable is updated. When input S2 receives a new spike, it causes ûmax to reach the threshold. At that moment, the simulator also takes the current predicted firing time t̂f into consideration, which is based on the linear sum of the first safe derivatives of all exponential synaptic functions. In this case, Δt is sufficiently high due to the high input spike rate, and no additional update operations are performed.

Figure 2:

Partial NR iterations. The simulator uses the equation of the first safe derivative of the exponential synaptic function to estimate a safe slope for the modified NR method. If the state variables are not updated to the current time, a linear approximation based on their previously calculated first safe derivative is used to estimate the influence on the membrane potential. In the situation shown, ûmax reaches the threshold when input 2 receives a second spike. Based on its slope and on the linear approximation of the other synaptic state variables, a maximum slope is calculated. Because Δt is sufficiently high, the simulator immediately schedules t̂f without updating the other state variables.

Note that the conditional update operations are very similar to the conditional update operations of the maximum approximation. Indeed, they can be combined into the same update cycle: only if both conditions are not met are more synaptic state variables updated. Both techniques are complementary: while maximum approximation will do excellent work for low-activity situations (or few spikes with high impact), the partial NR will take over for high-input activity.

4.4.  Modeling Active Synapses.

If the number of synaptic time constants becomes very large, it is more difficult for the maximum approximation and the partial NR iterations to minimize the number of state variable updates, as there are so many of them. On the other hand, the average activity on each synapse decreases as the number of synapses increases for the same input activity. Based on this, a simple technique can be used to further improve the performance in these cases: if the activity of a certain synapse becomes negligible (the state variable becomes very small), its influence on the membrane potential becomes negligible as well, and it can thus be safely excluded from all computations. This technique has also been used in some advanced time-step-based simulators, such as CSIM (Natschläger, Markram, & Maass, 2002).

4.5.  Automatic Sorting of the Synaptic State Variables.

The speed-up reached by both the maximum approximation and the partial NR iteration method relies on the fact that in most cases, only a few synaptic state variables have to be updated when a new spike arrives. The fewer state variables that have to be updated, the better the simulation speed will be. The speed-up can thus be optimized if we first update the synaptic state variables with the highest likelihood to lower the approximated maximum or those that help to advance t̂f. This is the case for the state variables that have not been updated for a longer period of time, since the error introduced by the linear approximation increases in time. The efficiency of the maximum approximation and the partial NR iteration method can thus be improved by keeping the state variables ordered by their update time. We were able to implement this very efficiently by placing them in a circular doubly linked list, together with an index list that allows O(1) access time to the state variables (which is necessary when a new spike arrives). The index list is in fact the list of active synapses discussed in section 4.4. The setup of the sorting mechanism is shown in Figure 3.

Figure 3:

A circular doubly linked list is used to keep the active state variables ordered by their update time, thus eliminating a sorting action. Also, an index structure allows fast random access of the state variables and at the same time implements the list of active synapses (e.g., synapse 3, 5, and 6 are not active in the figure).

Each time a synapse receives a spike, its state variable is moved to the entry, denoted by a start arrow, of the circular list. If the input was inactive, a new synapse element is inserted before the position of the start pointer. When update operations are necessary, an end arrow is set at the position of the start pointer to mark the last state variable to be updated, and the start arrow is used to circulate through the list. When a synapse becomes inactive during these operations, it is removed from the circular list. Although circulating through the list is done in only one direction, a doubly linked list allows very easy removal of elements in the list. If the update operations stop earlier, the new entry point of the list is the position of the start arrow. When this algorithm is used, all synaptic state variables are sorted automatically, without explicit sorting actions.
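The following sketch illustrates one possible implementation of this data structure; the class layout and names are illustrative and simplified compared to our actual implementation.

    #include <cstddef>
    #include <vector>

    // Sketch of the bookkeeping of sections 4.4 and 4.5: active synaptic state
    // variables are kept in a circular doubly linked list ordered by their last
    // update time, and an index array gives O(1) access by synapse number while
    // doubling as the list of active synapses.
    struct ActiveSynapseList {
        static constexpr std::size_t npos = static_cast<std::size_t>(-1);
        struct Node { std::size_t prev, next; };

        std::vector<Node>        node;    // one fixed slot per synapse
        std::vector<std::size_t> index;   // synapse id -> slot, npos if inactive
        std::size_t start = npos;         // least recently updated active synapse

        explicit ActiveSynapseList(std::size_t n) : node(n), index(n, npos) {}

        bool isActive(std::size_t syn) const { return index[syn] != npos; }

        // A spike arrived at `syn`: (re)insert its state variable just before
        // the start pointer, so that traversal from `start` visits the least
        // recently updated synapses first.
        void touch(std::size_t syn) {
            if (isActive(syn)) unlink(syn);
            if (start == npos) {                       // list was empty
                node[syn] = {syn, syn};
                start = syn;
            } else {
                const std::size_t last = node[start].prev;
                node[syn] = {last, start};
                node[last].next = syn;
                node[start].prev = syn;
            }
            index[syn] = syn;
        }

        // The synapse's state variable became negligible: drop it.
        void deactivate(std::size_t syn) {
            if (!isActive(syn)) return;
            unlink(syn);
            index[syn] = npos;
        }

        // After the first `n` elements (starting at `start`) have been updated,
        // the start pointer is simply advanced past them; no explicit sorting
        // action is ever needed.
        void advanceStart(std::size_t n) {
            for (std::size_t i = 0; i < n && start != npos; ++i)
                start = node[start].next;
        }

    private:
        void unlink(std::size_t syn) {
            if (node[syn].next == syn) { start = npos; return; }  // only element
            node[node[syn].prev].next = node[syn].next;
            node[node[syn].next].prev = node[syn].prev;
            if (start == syn) start = node[syn].next;
        }
    };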

5.  Experimental Results

In this section, we measure the speed-up of our algorithm in comparison to an algorithm that implements only the safe NR method without optimizations. We did speed-up measurements for two totally different application domains to show that the algorithm performs very well under all circumstances: in section 5.1, a biologically realistic setup is used, and in section 5.2, a typical engineering problem is investigated. In section 5.3, we compare our algorithm with an LIF neuron with instantaneous Dirac inputs, with the optimized model of Booij, and with a specific implementation of the algorithm described in Brette (2007).

All measurements in this section are performed with near-exact precision (i.e., the error on the spike timing reaches machine precision). In fact, there are only two techniques used in this letter that can influence the precision of the simulation results: modeling active synapses and the NR iteration technique itself. Switching off active synapses too early will cause some error on the membrane potential. In the algorithm, they are switched off only if the value of the state variable is smaller than 10^−10. When single-precision floating-point numbers are used, this is sufficient for machine precision at the level of the threshold. The safe NR iterations in the algorithm stop if the maximum error is smaller than 10^−14.

Still, the required precision has minimal impact on the simulation speed, because this is exactly what the partial NR iterations are about: they use very rough estimations when high precision is not required. Only when the simulation needs more accurate estimations near the actual threshold crossing is the precision of the prediction increased. This was confirmed with test simulations: if the precision of the simulation is lowered by a factor of 10^4, the resulting simulation speed gain is less than 1%.

The event-driven simulator used is ESSpiNN, a simulator developed at our research group. The core of this simulator is based on MVASpike, a general event-based C++ simulator for SNNs by Rochel and Martinez (2003). We have adapted this simulator to meet our goals, made it easier to combine neuron models with different types of plasticity (by means of templates), and extended it in order to allow more general neuron models (e.g., SRM0) to be efficiently simulated. Besides a broad range of neuron models and arbitrary delayed connections, several types of synaptic adaptation have been implemented, such as STDP (van Rossum, Bi, & Turrigiano, 2000), dynamic synapses (Maass & Zador, 1999; Markram, Pikus, Gupta, & Tsodyks, 1998), and intrinsic plasticity (Triesch, 2005). An interface to Matlab allows the quick setup and easy interpretation of simulation results. All simulations were executed on an Intel Core 2 Duo processor clocked at 2.13 GHz (only one core was used for the simulations) equipped with 2 GiB RAM.

5.1.  A Neuron in a Biologically Realistic Setting.

We present the results of our algorithm on a typical neuron setup used in neurobiological studies: it has 10,000 excitatory and 3000 inhibitory inputs, and each input has an average incoming spike rate of 10 Hz. The weight distribution is balanced with respect to the number of excitatory and inhibitory inputs, such that the output activity of the neuron is 10 Hz. The number of independent synapses is varied between 1 and 10,000.

This is a worst-case scenario for both the active synapses and the maximum approximation: due to the high input activity on each input, with each synapse connected to several spiking inputs, all synapses stay active during the simulation. In real biologically inspired situations, however, the input activity can be highly irregular (Stevens & Zador, 1998). This will cause groups of synapses to be inactive during certain simulation intervals, which will result in even better speed-up. Furthermore, the continuous input activity keeps the membrane potential close to the threshold, which makes it difficult for the maximum approximation method to reduce the number of state variable updates.

The results are shown in Figure 4. Despite the difficult conditions for the algorithm, we see that the partial NR calculation is able to do excellent work in this case. When the number of simulated synaptic time constants is increased, the execution time of the unoptimized implementation increases from 0.6 seconds for a neuron with only one time constant to more than 3000 seconds for 10,000 synaptic time constants. However, the optimized implementation is not significantly influenced by the number of synaptic time constants as long as their number is much smaller than the total number of inputs. In our case, execution time starts to increase only when more than 100 synaptic time constants are simulated. Still, even then, it scales better with the number of synaptic time constants than the unoptimized algorithm. When their number comes close to the number of input connections (∼1 input connection per synapse), the slope of the curve becomes less steep. This is caused by the fact that the activity in each synapse decreases, which results in a higher probability of nonactive synapses. In this case, the speed-up of our method reaches a value of more than 300.

Figure 4:

Speed measurements for a biologically plausible neuron model with 10,000 excitatory and 3000 inhibitory inputs. The input weights are randomly distributed, but the negative weights are balanced according to the number of positive weights. Each input receives a Poisson spike train with λ = 10 Hz with a total simulated time of 5 seconds. The weights are scaled such that the output activity of the neuron is also 10 Hz. The membrane's time constant τm was set to 20 ms, and τs of each independent synapse was fixed at 5 ms to allow a fair comparison between different numbers of state variables (but it is of course also possible to choose a different τs for each synapse, as has been done for the speech recognition task).

5.2.  A Typical Engineering Application: The Speech Recognition Task.

In Figure 5, we measured the execution time of our model for a typical engineering application using SNNs: a liquid state machine (LSM) is used to perform speech recognition on isolated digits (Verstraeten et al., 2005). A network of 1000 randomly connected neurons was used where each neuron input has a separate synaptic time constant (worst-case scenario). Three different network setups were used where neurons have on average 10, 50, and 200 inputs. Since each input has a different time constant, more inputs also mean more state variables. The same speech samples (taken from Verstraeten et al., 2005) were applied to each network for both the optimized model and the nonoptimized model. It is clear that when the interconnection density is varied while the weight scaling is kept constant, denser networks will generate more internal events for the same input activity, because every neuron will receive on average more spikes.

Figure 5:

Speed measurements on a speech recognition task. Each speech sample is fed to three different networks consisting of 1000 neurons that are randomly interconnected with interconnection fractions (i.f.) of (resp.) 0.01, 0.05, and 0.2. The speech task consists of 30 samples of spoken digits from 0 to 10, spoken by different speakers. The analog speech signal is converted to 77 speech channels using the Lyon ear model and then converted to spikes using rate coding. Every speech channel is randomly connected to 5% of the neurons in the network. Weights are randomly chosen with a gaussian distribution and are rescaled afterward to obtain a spectral radius (this is the largest absolute eigenvalue of the interconnection matrix) of 0.7 (independent from the interconnection fraction). Seventy percent of the weights are excitatory, and 30% are inhibitory. The membrane time constant τm of each neuron is randomly chosen between 10 and 20 ms; each input has a separate synaptic time constant with τs a random value that is on average smaller for excitatory inputs and never larger than τm/2. Every neuron has a refractory period of 1 ms. The interconnection delays between the neurons are also randomly chosen between 0 and 10 ms.

It is clear from the figure that our method can significantly reduce the computation time of simulations, with a speed-up that varies from 5 to 20 times, depending on the average number of inputs of the neurons.

Another important property of our algorithm can be observed if we compare the execution times for different interconnect densities. In general, we would expect that if more synaptic time constants are used in the same neuron, the computation time per incoming event would increase drastically because more synaptic state variables have to be updated in order to predict a new firing time. This is indeed the case in the model that is not optimized. However, in the optimized model, increasing the neuron complexity does not have a significant influence on the simulation speed: an almost linear relation exists between computing time and network activity regardless of neuron complexity.

5.3.  Speed Comparisons Between Different Neuron Models.

In Figure 6A, the execution time of a biologically inspired neuron setup with a varying number of time constants (1, 100, and 13,000) is compared with the very simple model of Booij (τm = 2τs), both with and without applying the optimizations presented in this work. In addition, an LIF neuron with Dirac inputs was simulated. These two models can be easily solved analytically. Therefore, their execution speed is very fast and comparable. However, our more general algorithm is able to perform in a similar way for any network activity, even when using 100 synaptic time constants. In the case where every input is simulated with a different synaptic time constant, the speed-up of our algorithm reaches a factor of more than 1000 times compared to a nonoptimized neuron model. In fact, our optimized model with 13,000 synaptic time constants still performs better than a nonoptimized version with only 100 synaptic time constants. In Figure 6B, we show a detailed plot of the fastest models. We also added a neuron model with 1000 synaptic time constants to show that even in this case, our algorithm is less than five times slower than a simple LIF neuron without synaptic models.

Figure 6:

Comparison of the computational requirements of several neuron models for a neuron with 10,000 excitatory and 3000 inhibitory inputs. Weights are uniformly randomly distributed and balanced according to the numbers of positive and negative weights. Each input receives a Poisson spike train with λ varying from 0.3 to 100 Hz for 5 seconds. The weights are globally scaled so that the neuron fires at a rate of 10 Hz when the average input activity is also 10 Hz. The membrane time constant τm of each neuron model is set to 20 ms. (A) Comparison with the simple model of Booij for several numbers of synaptic time constants (1, 100, and 13,000 synaptic time constants, both with and without our optimizations) and with an LIF model with Dirac inputs. In order to make a fair comparison with regard to execution time, the behavior of the models is equalized by using a fixed τs = 10 ms for every synapse while still keeping a separate state variable for each. (B) A zoom of panel A, now comparing 1, 100, and 1000 synaptic time constants. (C) Comparison of our algorithm with an optimized case of the algorithm described in Brette (2007) that uses a polynomial representation of the membrane function. Two synaptic time constants are used, with τs,1 = 5 ms and τs,2 = 10 ms.

Finally, we also compared our algorithm with the algorithm described in Brette (2007). This model is based on a polynomial representation of the membrane function. However, a generic and efficient implementation of this algorithm is not straightforward, as each set of rationally related synaptic time constants generates a very different polynomial representation. Also, the execution speed depends on the degree of the polynomial representation (the Sturm sequences used for finding the roots grow with the polynomial degree). For our comparison test, a very simple model was used (which is also the example used in Brette, 2007). The model uses only two synaptic time constants with a simple relation: τm = 2τs,inh = 4τs,exc. When this relation holds, the decay factors exp(−(t−tl)/τs) of the synaptic state variables between two subsequent neuron updates can be calculated very easily: they are the square and the fourth power of the membrane decay factor exp(−(t−tl)/τm) and can therefore be derived from it without additional exponential evaluations. This is a big advantage in Brette's implementation, because evaluating the exponential function is one of the heaviest computational costs. The results are shown in Figure 6C. Despite the simple polynomial representation for this particular model, our general algorithm (which does not use these optimizations) appears to be twice as fast for any network activity. The speed-up is expected to be even higher when more synaptic time constants are involved.
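To make this relation concrete, the following small C++ sketch (our own illustration, not code from Brette's implementation) checks that, for τm = 2τs,inh = 4τs,exc, the synaptic decay factors over an interval Δt are the square and the fourth power of the membrane decay factor, so only one exponential has to be evaluated per neuron update:

#include <cassert>
#include <cmath>
#include <cstdio>

// Minimal sketch: with tau_m = 2*tau_s_inh = 4*tau_s_exc, the synaptic decay
// factors over an interval dt are integer powers of the membrane decay factor.
int main() {
    const double tau_m = 20e-3;            // membrane time constant [s]
    const double tau_s_inh = tau_m / 2.0;  // inhibitory synaptic time constant
    const double tau_s_exc = tau_m / 4.0;  // excitatory synaptic time constant
    const double dt = 3.7e-3;              // arbitrary interval between updates

    const double dm = std::exp(-dt / tau_m);   // membrane decay factor
    const double d_inh = dm * dm;              // equals exp(-dt / tau_s_inh)
    const double d_exc = dm * dm * dm * dm;    // equals exp(-dt / tau_s_exc)

    // Check against the direct (more expensive) evaluation.
    assert(std::fabs(d_inh - std::exp(-dt / tau_s_inh)) < 1e-12);
    assert(std::fabs(d_exc - std::exp(-dt / tau_s_exc)) < 1e-12);
    std::printf("dm = %g, d_inh = %g, d_exc = %g\n", dm, d_inh, d_exc);
    return 0;
}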

6.  Conclusion and Future Work

In this work, a novel approach for the fast event-driven simulation of LIF neurons with multiple synaptic time constants was proposed. The techniques are based on an important overarching concept that is rarely used in current event-driven simulators: do the least possible amount of work to advance the approximation of the next firing time. The rationale is that there is a relatively high probability that a new spike arrives before the predicted firing time, invalidating all previous calculations. Most current simulators instead predict each firing time immediately and to full precision, so many calculations are performed that are later discarded.

We first introduced an improvement of the partitioning method of Makino (2003) that allows performing NR in a simple way without using linear envelopes for each synaptic function. Based on this safe NR technique, we presented several complementary acceleration techniques for general LIF neurons with multiple synaptic time constants that take full advantage of the event-driven nature of the simulator. These techniques are the single NR iteration, the maximum approximation, the partial NR iteration, and the modeling of active synapses. An automatic sorting mechanism further optimizes these techniques. We combined the best properties of all of these techniques into a single elegant algorithm.

Our measurements show very large speed-ups compared to an algorithm that does not use these optimizations: speed-ups of over 1000 times were reached. When comparing our algorithm with some very simple neuron models (an LIF with instantaneous inputs and the neuron model introduced by Booij, described in this work), we observed that the execution speeds are very comparable, even though our model is much more generally applicable.

Possibly even more important is the fact that our algorithm shows excellent scalability with regard to the number of synaptic time constants. In fact, the simulation speed is highly independent of the number of synaptic time constants. This opens new opportunities for biologically inspired networks as well as for engineering applications, without sacrificing simulation speed. The use of multiple synaptic time constants allows, for example, a more detailed simulation of the dendritic connections of biologically inspired networks. Because there are no limitations regarding the value of the synaptic time constants, large time constants (τs > τm) can be used to imitate high conductance states in the neuron.

Another important consequence of our method is the fact that more complex synaptic models can be constructed using an LIF model with arbitrary synaptic exponential functions as a substrate: it is possible to imitate more complex behavior using a sum of exponential functions. This can be done by replacing each synaptic current function by a collection of exponential synaptic currents with different synaptic time constants and interconnection delays.
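As a simple illustration of this construction (a minimal sketch under our own assumptions, with hypothetical names, not code from our simulator), a double-exponential synaptic current can be reproduced exactly by feeding the same spike to two single-exponential synapses with opposite weights:

#include <cmath>
#include <cstdio>

// A single-exponential synaptic current, the building block of the LIF substrate.
static double exp_current(double w, double tau, double t) {
    return (t >= 0.0) ? w * std::exp(-t / tau) : 0.0;
}

int main() {
    // A double-exponential current I(t) = w * (exp(-t/tau_decay) - exp(-t/tau_rise))
    // is exactly the sum of two exponential currents with weights +w and -w.
    const double w = 1.0;
    const double tau_decay = 10e-3, tau_rise = 2e-3;  // example values [s]
    for (double t = 0.0; t <= 30e-3; t += 5e-3) {
        double direct = w * (std::exp(-t / tau_decay) - std::exp(-t / tau_rise));
        double composed = exp_current(+w, tau_decay, t) + exp_current(-w, tau_rise, t);
        std::printf("t=%4.1f ms  direct=%+.5f  composed=%+.5f\n",
                    t * 1e3, direct, composed);
    }
    return 0;
}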

The idea of performing only rough predictions if high precision is unnecessary is not limited to the LIF model with multiple synaptic time constants. There are many other (more advanced) neuron models that are popular today, for example, the conductance-based models (Brette, 2006; Rudolph & Destexhe, 2006), that could benefit from these techniques by allowing multiple synaptic time constants without having a large impact on the simulation speed. In future work, we will investigate how the event-driven simulation of more complex neuron models can be accelerated using the ideas presented in this work.

Appendix:  Pseudocode for Efficient Simulation of LIF Models with Multiple Synaptic Time Constants

We now describe the complete algorithm using pseudocode. One quality that stands out is its simplicity, although all techniques described in section 4 are included.

As stated in section 1, an event-driven simulator for SNNs has to deal with two types of events: external events (incoming spikes) and internal events generated by the neuron itself (the latter are in fact the firing time predictions of the algorithm). The simulator takes the next event from the event queue; if the event is an external incoming spike, it is processed by the following pseudocode:

external_event(t, s, w) {
  update Vm and the first derivative of Vm to time t
  if synapse s was inactive {
    activate synapse s
  } else {
    update Vs(s) to time t
  }

  // apply the incoming spike to the membrane and synaptic state variables
  Vm    += w / (1 - taus/taum);
  Vs(s) -= w / (1 - taus/taum);
  update first derivatives of Vm and Vs(s)
  update the maximum approximation
  update estimated total ‘safe’ derivative of the membrane function

  // estimate next threshold crossing using Newton-Raphson:
  next_update_time = (threshold - membrane potential)
                   / (total membrane derivative)

  update T_avg

  if approx. maximum > threshold {
    if stop condition not met {
      next_update_time = update_other_synaptic_inputs(t)
    }
    // else: keep the single Newton-Raphson estimate
  } else {
    next_update_time = infinity
  }
}

Parameter t is the arrival time of the spike, parameter s indicates the index of the synaptic state variable on which the spike is applied, and w is the weight of the connection. The stop condition is described in sections 4.1 and 4.3. The decision rule uses next_t and T_avg. next_update_time is the time stamp of the next internal event for this neuron that will be scheduled by the simulator. As can be seen in the code, a new spike will update only the current synaptic state variable and the membrane potential. Only if both the maximum approximation condition and the update condition are met will the other synaptic state variables have to be updated. This is handled in function update_other_synaptic_inputs(t):

double update_other_synaptic_inputs(t) {
  next_t = t
  update start and end pointer of circular list

  while (not all synapses are updated) and (stop condition not met)
        and (approx. maximum > threshold) {

    curr = select next active synapse
    update Vs(curr) to time t
    if Vs(curr) < minimum precision {
      remove this synapse from the list
    } else {
      adapt first derivatives of Vs(curr)
      update the maximum approximation
      update estimated total ‘safe’ derivative of the membrane function

      // estimate next threshold crossing using Newton-Raphson:
      next_t = (threshold - membrane potential)
             / (total membrane ‘safe’ derivative)
    }
  }

  return next_t
}

The code for updating a synaptic state variable is similar to the code used in the previously described function. The update routine keeps updating synaptic state variables until one of the stop conditions is met (the maximum approximation drops below the threshold, or the next predicted firing time lies far enough in the future). The synaptic state variables are selected according to their last update time; this mechanism is described in section 4.5.
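To give a flavor of what "update Vs(curr) to time t" and the sweep over the active synapses could look like in practice, the following simplified C++ sketch (our own; the data structure and all names are hypothetical and much cruder than the circular list of section 4.5) decays each visited synaptic state variable analytically and drops synapses whose contribution has become negligible:

#include <cmath>
#include <list>

// Hypothetical, simplified bookkeeping for the active synapses of one neuron.
struct Synapse {
    double V;            // synaptic state variable Vs
    double tau;          // synaptic time constant tau_s
    double last_update;  // time of the last update of this state variable
};

// Decay one state variable analytically from its last update time to time t.
void update_to(Synapse& s, double t) {
    s.V *= std::exp(-(t - s.last_update) / s.tau);
    s.last_update = t;
}

// Iterate over the active synapses (the full algorithm keeps them ordered by
// last update time in a circular list); drop a synapse once its contribution
// has decayed below the required precision (cf. the pseudocode above).
void sweep_active_synapses(std::list<Synapse>& active, double t,
                           double min_precision) {
    for (auto it = active.begin(); it != active.end();) {
        update_to(*it, t);
        if (std::fabs(it->V) < min_precision) {
            it = active.erase(it);  // synapse has become negligible
        } else {
            ++it;  // the full algorithm would also refine the maximum
                   // approximation and the Newton-Raphson estimate here,
                   // and stop early when a stop condition is met
        }
    }
}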

When an internal event has to be processed, the following pseudocode is used:

internal_event(t) {
  update Vm and the first derivative of Vm to time t
  next_update_time = update_other_synaptic_inputs(t);

  if membrane potential >= threshold {
    reset membrane potential

    next_update_time = (threshold - membrane potential)
                     / (total membrane derivative)
    send spike to successor neurons
  }
}

This routine is executed each time the neuron reaches its approximated firing time. Only if the membrane potential actually reaches the threshold at that point has the real firing time tf been reached. In that case, the neuron resets its membrane potential and emits a spike to its successor neurons, which is handled in the simulator core.
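For completeness, the simulator core that drives these two routines is a conventional event-driven loop. A minimal C++ sketch (our own, with hypothetical types, and omitting the rescheduling of events and the filtering of stale internal events) could look as follows:

#include <functional>
#include <queue>
#include <vector>

// Hypothetical event and neuron types; only the dispatch structure matters.
struct Event {
    double time;
    int neuron;
    bool internal;   // true: predicted firing time, false: incoming spike
    int synapse;     // only meaningful for external events
    double weight;   // only meaningful for external events
    bool operator>(const Event& other) const { return time > other.time; }
};

struct Neuron {
    // Stubs standing in for the two appendix routines.
    void external_event(double /*t*/, int /*s*/, double /*w*/) {}
    void internal_event(double /*t*/) {}
};

void run(std::vector<Neuron>& neurons,
         std::priority_queue<Event, std::vector<Event>, std::greater<Event>>& queue) {
    while (!queue.empty()) {
        Event e = queue.top();
        queue.pop();
        // A real simulator would discard stale internal events here, i.e.,
        // predictions that were superseded by a newer next_update_time.
        if (e.internal) {
            neurons[e.neuron].internal_event(e.time);
        } else {
            neurons[e.neuron].external_event(e.time, e.synapse, e.weight);
        }
        // New events produced by the handlers (spikes to successor neurons,
        // updated firing-time predictions) would be pushed onto the queue here.
    }
}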

References

Agis, R., Ros, E., Diaz, J., Carrillo, R., & Ortigosa, E. (2007). Hardware event-driven simulation engine for spiking neural networks. International Journal of Electronics, 94, 435–450.

Brette, R. (2006). Exact simulation of integrate-and-fire models with synaptic conductances. Neural Computation, 18, 2004–2027.

Brette, R. (2007). Exact simulation of integrate-and-fire models with exponential currents. Neural Computation, 19, 2604–2609.

Brown, R. (1988). Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem. Communications of the ACM, 31, 1220–1227.

Carrillo, R., Ros, E., Ortigosa, E., Barbour, B., & Agis, R. (2005). Lookup table powered neural event-driven simulator. In Proceedings of the 8th International Work-Conference on Artificial Neural Networks, IWANN 2005 (pp. 168–175). Berlin: Springer.

Destexhe, A., Rudolph, M., & Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Neuroscience, 4, 739–751.

D'Haene, M., Schrauwen, B., & Stroobandt, D. (2006). Accelerating event based simulation for multi-synapse spiking neural networks. In Proceedings of the 16th International Conference on Artificial Neural Networks (Vol. 4131, pp. 760–769). Heidelberg: Springer Berlin.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models. Cambridge: Cambridge University Press.

Grassmann, C., & Anlauf, J. (1998). Distributed, event driven simulation of spiking neural networks. In Proceedings of the International ICSC/IFAC Symposium on Neural Computation (pp. 100–105). Canada/Switzerland: ICSC Academic Press, International Computer Science Conventions.

Hansel, D., Mato, G., Meunier, C., & Neltner, L. (1998). On numerical simulations of integrate-and-fire neural networks. Neural Computation, 10, 467–483.

Lytton, W., & Hines, M. (2005). Independent variable time-step integration of individual neurons for network simulations. Neural Computation, 17, 903–921.

Maass, W. (1996). Lower bounds for the computational power of networks of spiking neurons. Neural Computation, 8(1), 1–40.

Maass, W., & Markram, H. (2002). Synapses as dynamic memory buffers. Neural Networks, 15, 155–161.

Maass, W., & Zador, A. (1999). Computing and learning with dynamic synapses. In W. Maass & C. Bishop (Eds.), Pulsed neural networks (pp. 321–336). Cambridge, MA: MIT Press.

Magee, J., & Cook, E. (2000). Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nature Neuroscience, 3, 895–903.

Makino, T. (2003). A discrete-event neural network simulator for general neural models. Neural Computing and Applications, 11, 210–223.

Markram, H., Pikus, D., Gupta, A., & Tsodyks, M. (1998). Potential for multiple mechanisms, phenomena and algorithms for synaptic plasticity at single synapses. Neuropharmacology, 37, 489–500.

Morrison, A., Straube, S., Plesser, H., & Diesmann, M. (2007). Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation, 19, 47–79.

Muresan, R., & Ignat, I. (2004). Principles of design for large-scale neural simulators. In International Conference on Automation, Quality and Testing, Robotics. New York: IEEE.

Natschläger, T., Markram, H., & Maass, W. (2002). Computer models and analysis tools for neural microcircuits. In R. Kötter (Ed.), A practical guide to neuroscience databases and associated tools. Norwell, MA: Kluwer.

Rochel, O., & Martinez, D. (2003). An event-driven framework for the simulation of networks of spiking neurons. In Proceedings of the 11th European Symposium on Artificial Neural Networks, ESANN 2003 (pp. 295–300). Brussels: D-Facto.

Rudolph, M., & Destexhe, A. (2006). Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Computation, 18, 2146–2210.

Schrauwen, B., D'Haene, M., Verstraeten, D., & Van Campenhout, J. (2008). Compact hardware liquid state machines on FPGA for real-time speech recognition. Neural Networks, 21, 511–523.

Schrauwen, B., & Van Campenhout, J. (2006). Backpropagation for population-temporal coded spiking neural networks. In Proceedings of the International Joint Conference on Neural Networks (pp. 1797–1804). New York: IEEE.

Shelley, M., & Tao, L. (2001). Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 11, 111–119.

Softky, W. R. (1995). Simple codes versus efficient codes. Current Opinion in Neurobiology, 5, 239–247.

Stevens, C., & Zador, A. (1998). Input synchrony and the irregular firing of cortical neurons. Nature Neuroscience, 1, 210–217.

Theunissen, F. E., & Miller, J. P. (1995). Temporal encoding in nervous systems: A rigorous definition. Journal of Computational Neuroscience, 2, 149–162.

Tjalling, J. (1995). Historical development of the Newton-Raphson method. SIREV, 37, 531–551.

Tonnelier, A. (2007). Event-driven simulations of nonlinear integrate-and-fire neurons. Neural Computation, 19, 3226–3238.

Triesch, J. (2005). Synergies between intrinsic and synaptic plasticity in individual model neurons. In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances in neural information processing systems, 17 (pp. 1417–1425).

van Rossum, M., Bi, G., & Turrigiano, G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience, 20(23), 8812–8821.

VanRullen, R., Guyonneau, R., & Thorpe, S. J. (2005). Spike times make sense. Trends in Neurosciences, 28(1), 1–4.

Verstraeten, D., Schrauwen, B., Stroobandt, D., & Van Campenhout, J. (2005). Isolated word recognition with the liquid state machine: A case study. Information Processing Letters, 95(6), 521–528.

Watts, L. (1993). Event-driven simulation of networks of spiking neurons. In Sixth Neural Information Processing Systems Conference (pp. 927–934). San Francisco: Morgan Kaufmann.