In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks’ dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially well suited to studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
In cortical cell assemblies, a neuron forms synapses onto about 10,000 target neurons (Braitenberg & Schüz, 1998). The resulting neural circuits are weakly driven by behaviorally relevant, highly preprocessed spiking patterns (Fiser, Chiu, & Weliky, 2004; Bruno & Sakmann, 2006; Larson, Perrone, Sen, & Billimoria, 2010). In turn, the neural processing performed by these circuits is crucially affected by noise, which results from background spontaneous activity or inherently stochastic mechanisms of spike generation (Rolls & Deco, 2010; Faisal, Selen, & Wolpert, 2008; Fellous, Rudolph, Destexhe, & Sejnowski, 2003). However, when studied in vivo, cortical neural circuits can exhibit precise and reliable neural responses (Abeles, Bergman, Margalit, & Vaadia, 1993; Berry, Warland, & Meister, 1997; Prut et al., 1998; Reinagel & Reid, 2000; Rust, Schwartz, Movshon, & Simoncelli, 2005; Ayzenshtat et al., 2010): upon repeated excitation by a stimulus, neural circuits consistently produce patterns of activity where spike timing is precisely time-locked to the features of the stimulus. The timing jitter, the stochastic temporal variability of a neuronal spike from repetition to repetition, quantifies in a stimulus-dependent fashion the neuronal noise (Cecchi et al., 2000). This suggests that neural circuits process information through temporal coding (Thorpe, Delorme, & Van Rullen, 2001), whereby the actual spiking times convey meaningful information (Huber et al., 2008; Yang, DeWeese, Otazu, & Zador, 2008).
Understanding how indirectly driven, noisy, interacting neurons can produce reliable and temporally precise neural response is central to the problem of neural transmission (Kumar, Rotter, & Aertsen, 2008, 2010). In that respect, noise plays a crucial and ambiguous part: it alternatively appears as a nuisance constraining neural circuits to operate through rate coding (London, Roth, Beeren, Hausser, & Latham, 2010) or as a benefit to precise temporal coding through stochastic facilitation (McDonnell & Ward, 2011). Computational neuroscience addresses the problem of neural transmission by studying in silico models of neural networks, which model the internal dynamics of isolated single units as well as their interactions, while explicitly including noise.
Despite comprehensive experimental and theoretical studies (Bryant & Segundo, 1976; Manwani & Koch, 1999a, 1999b; Mainen & Sejnowski, 1995), it is still a matter of debate whether the dominant source of stochasticity is internal or external. Internal sources of noise include all effects of thermal noise, such as the voltage fluctuations induced by any electrical resistance, the discrete chatter of ion channels, and other effects of molecular systems with a finite number of components. The external sources aggregate the ongoing influence of the many neurons that are not in the circuit being investigated but measurably interact with it, acting as a neural thermal bath. Regardless of the nature of the noise, when concerned with the temporal coding of a neural population, an adequate model of spiking neuron is the stochastic integrate-and-fire neuron (Lapicque, 1907; Knight, 1972).
In the context of a population, an integrate-and-fire neuron initiates a spike in response to the temporal spiking patterns of upstream neurons, when the corresponding synaptic deliveries reliably drive the neuron's membrane potential to its threshold. The transient nature of synaptic transmission and the existence of finite propagation delays conspire to render the integration of afferent activity extremely sensitive to the perturbations of incoming spikes’ timing (Izhikevich, 2006; Goodman & Brette, 2010). Thus, temporal precision is absolutely crucial for computing the activity of a neural population in response to both past and external spiking patterns.
Moreover, as the result of intrinsically noisy dynamics, the spiking patterns of a population of stochastic integrate-and-fire neurons are realizations of a high-dimensional stochastic process. A stochastic simulation method is then temporally exact if its algorithm produces exact samples of the intricate law of the network's evolution. Practically, the implementation of exact stochastic algorithms yields spiking patterns with a temporal precision that is limited only by the exactness of the random generators and the numerical precision of the computer. That said, there are undoubtedly situations for which stochasticity is hardly relevant for understanding neural computations (e.g., the visual system with high-contrast stimuli), whereas other neuronal processing cannot be grasped without its stochastic components (e.g., the visual system at low contrast, close to detection threshold). Stochastic simulations should exclusively address the latter since the integration of stochastic equations is much more computationally intensive than that of deterministic ones, due to the excruciating cost of generating the random numbers needed to faithfully depict noisy neural dynamics.
Despite substantial work (Mattia & Giudice, 2000; Reutimann, Giugliano, & Fusi, 2003; Ros, Carrillo, Ortigosa, Barbour, & Agís, 2006), the many difficulties inherent in the study of interacting stochastic processes have hindered the development of temporally exact algorithms with noisy dynamics (Brette et al., 2007). It is the purpose of this letter to remedy this situation for the simplest network of stochastic integrate-and-fire neurons.
1.1. Simulation of Integrate-and-Fire Networks.
The simulation of networks of integrate-and-fire neurons invariably consists of the implementation of propagation periods between two consecutive events (spike emission or reception) and network update rules following each event (modifying neuronal internal variables if necessary). Such algorithms generally fall into two categories (Brette et al., 2007):
Clock-driven algorithm: The integrative schema corresponding to the network evolution is discretized with a finite time step (typically using Runge-Kutta methods) and is paced by a central clock (Morrison, Straube, Plesser, & Diesmann, 2006).
Event-driven algorithm: The network evolves according to simple equations, and, given the network state at a particular spiking event, there exist analytical formulas predicting the next spiking event and network state (Mattia & Giudice, 2000).
Due to their flexibility, clock-driven algorithms have received the most investigation. Nevertheless, when concerned with precise temporal dynamics, these algorithms present two major flaws. First, their computational complexity is set by the duration of the time step, and most of the computational time is devoted to simulating the unobserved subthreshold dynamics of neurons that, in addition, can remain silent most of the time. Second, their discrete flavor introduces errors: noisy membrane trajectories can very well cross the spiking threshold between two subthreshold sample points (Taillefumier & Magnasco, 2010).
Another key issue is the existence of finite spike-propagation times, which affect the core functioning of a neuronal network. For instance, consider the recurrent network depicted in Figure 1, and suppose we need to update the network after neuron Na fires and that receiving one spike is enough to elicit a response. Then if the delays are such that da,c+dc,e<da,e, neuron Nc is activated and Ne is inhibited, whereas the situation is reversed if da,e+de,c<da,c. In the absence of delays, it is impossible to order the sequence of local updates consistently, and the problem is ill posed.
1.2. Stochastic Event-Driven Strategy.
Recently the Markovian structure of networks of linear integrate-and-fire neurons was identified and translated conceptually in terms of a stochastic event-driven strategy (Touboul & Faugeras, 2011). The special features of the linear integrate-and-fire model make it possible to design local update rules, which stipulate how a neuron behaves when receiving or emitting spikes. Incidentally, the network evolution of such neurons appears as a Markov chain of single-unit asynchronous updates. The time sequence with which these rules are called is determined as follows. For each instantaneous state of the network, the timing of the next event is estimated independently for every neuron, as if the neurons were not interacting. Selecting the first of these events is tantamount to drawing the next network event (since no other event happens before it), and the corresponding local update rule can be safely applied.
However, each time a local update rule is implemented, it propagates interactions to other neurons. Maintaining the Markovian property demands that updated neurons bear the memory of the fact that in the absence of interactions, they would spike at a known time in the future. From a mathematical standpoint, the bookkeeping of intermediary events means that the membrane potentials are sampled from random processes that are conditioned in the future. Stochastic event-driven algorithms consistently implement these conditional updates at the network level. They proceed by using information about the current state of the network to refine provisional estimates of future events, until each of these refined estimates sequentially becomes the next network event.
Such algorithms are efficient when they minimize the time instances at which the underlying membrane potential needs to be sampled. This sampling number is determined by how often a neuron's membrane is sampled when receiving inputs and how many estimates are required for a provisional neuronal event to become the next network event. Figuratively, the task of a stochastic event-driven algorithm is to simulate future potential scenarios at the cost of later discarding parts of them, so that the individual neurons are simulated only for those times when their update is meaningful at the network level.
1.3. New Practical Implementation.
Here, elaborating on results in Touboul and Faugeras (2011), we rigorously investigate the Markovian nature of networks of perfect stochastic integrate-and-fire neurons with time-delayed Dirac-like interactions and produce an efficient and exact event-driven implementation of their dynamics. A linear integrate-and-fire neuron is deemed “perfect” when there is no leak term in the equation of its subthreshold membrane dynamics. Such a model is general in the sense that the Markovian setting of its networks can be extended to any linear integrate-and-fire neuron. However, networks of perfect stochastic integrate-and-fire neurons are of special interest because they are simple enough to be practically implemented in an event-driven fashion.
Indeed, we show that the conditioned processes that intervene in the Markovian description of the network are well-known stochastic processes closely related to the canonical Wiener process (Karatzas & Shreve, 1991). From there, we explicitly implement our event-driven strategy as a Monte Carlo algorithm, whose efficiency is ensured by the existence of exact and efficient random generators for our conditioned processes. An exact random generator is an algorithmic procedure that, given exact samples from a simple probability law (typically the uniform law over [0, 1]), produces a number whose statistics follow exactly the law of the desired random variable. The efficiency of the generator is measured by the expected number of arithmetic operations that are required to simulate a sample. Moreover, we organize the scheduling of events so that the effective number of updates of a given neuron is smaller than the number of spikes it receives, while discarding little information in the refinement of provisional estimates. Incidentally, despite being a fully stochastic exact scheme, our algorithm operates with the same complexity as a deterministic event-driven method.
As a result, we argue that our event-driven implementation computationally outperforms any other stochastic simulation method and is especially suited to the study of temporal coding. In that respect, we underline that our algorithm is not a device to perform approximate integration, as in a Monte Carlo method. The intrinsic stochasticity of the model is an essential component of the biophysics that we wish to simulate, which prompts the need for exact simulation strategies. Even when approximately simulated, the integration of stochastic differential equations is excruciatingly costly by comparison with its deterministic counterparts. For example, the errors in a basic Euler integration of an ordinary differential equation are of order O(Δt), while the errors in the equivalent stochastic Euler method for the homologous Langevin equation are of order O(√Δt). This makes stochastic integration a phenomenally more costly proposition, as roughly the square of the number of time steps is required to achieve the same precision. Therefore, we claim that bringing a large-scale stochastic simulation into the same computational complexity class as a deterministic simulation is a substantial achievement. We assert that because it maintains a careful temporal ordering of events, our method is of particular interest for recurrent networks evolving under spike-timing-dependent plasticity (Dayan & Abbott, 2001; Abbott & Nelson, 2000; Caporale & Dan, 2008). From another perspective, networks of integrate-and-fire neurons have recently inspired Bayesian inference methods to analyze experimental neural data (Cocco, Leibler, & Monasson, 2009; Monasson & Cocco, 2011). Our algorithm provides a natural candidate to benchmark these methods on synthetic data.
The letter is organized as follows. We first describe the Markovian evolution of a network of perfect stochastic integrate-and-fire neurons and make explicit the corresponding local and network rules that allow us to design an event-driven strategy. We then devise its practical implementation and bound its complexity. Finally, we illustrate its behavior by successively considering the case of an inhibitory network, an excitatory network, and a network with both inhibition and excitation.
2. Exact Markovian Framework
In this section, we first define the evolution of a single unit embedded in a network of neurons as a first-passage Markov chain, where the interactions are recapitulated in terms of a fluctuating barrier. We then exhibit the update rules for an interacting single unit and establish their schedule at the network level in order to faithfully simulate the network's evolution.
2.1. Perfect Integrate-and-Fire Networks.
The preceding model is naturally endowed with a Markovian structure. To be more precise, at any given time t, suppose that we observe the last events of the spiking histories and that, for each pair of neurons a and b, we maintain the schedule of spike deliveries that neuron b is to receive as a result of the past activity of a. Then we can consistently construct the ensuing dynamics of the network by simulating the sample paths of the neurons' membrane potential processes. Event-driven strategies aim at simulating exactly the evolution of the network as a Markov chain, while simulating as few sample points as possible.
2.2. Local Neuron Update Rules.
Before explaining our stochastic event-driven implementation, we describe the local updates that a neuron undergoes. If U(a) is empty, neuron a evolves independently, and each of its interspike time intervals is an independent realization of the first-passage time of a Wiener process to a straight barrier. This first-passage law is known analytically and belongs to the class of inverse gaussian distributions (Seshadri, 1999). We denote this law IG to underline its parametric dependence on the threshold and drift values (see appendix A). If one can design a random generator for IG, it is thus possible to simulate the activity of neuron a without simulating the underlying sample paths of Xa.
It is possible to formulate these update rules rigorously from a probabilistic point of view. This requires some familiarity with conditioned stochastic processes that are related to the Wiener processes, namely the inverse gaussian bridge process and the first-passage bridge process (Bertoin, Chaumont, & Pitman, 2003). Due to the technicality of these results, we give only a brief account of their algorithmic formulation in the following and justify their cogency in Figure 3. Details about the different probability laws of the conditioned processes are given in appendix A.
2.3. Network Update Rule.
Equipped with the previous local neuronal update rule, we can lay out an event-driven strategy that simulates exactly the spontaneous activity of an arbitrary network of stochastic perfect integrate-and-fire neurons. The crucial point of such an implementation is to carefully schedule the individual asynchronous neuron updates so that the Markovian nature of the dynamics is preserved. The general approach consists in maintaining:
A fixed-size priority queue Qn of neurons a that are time-stamped by their provisional estimate of spiking in the absence of interactions pa
A varying-size priority queue Qd of spike deliveries, where receptions from neuron b to c are time-stamped by their time of delivery
At each iteration step, we select the next update event as the element of the two queues with highest priority, that is, with a minimum time stamp. If we select neuron a from Qn, a spontaneous spiking of a occurs at time pa, since a does not receive any input prior to its provisional spiking estimate pa. This then entails the insertion of spike deliveries to downstream neurons D(a) in Qd. Conversely, if we select a time of delivery ub,c from Qd, neuron c is updated according to the rules of the previous section. It can either stay silent or spike, in which case spike deliveries are scheduled.
From a mathematical standpoint, it is crucial to ensure that the order of events is proper, that is, that the probability that two events happen at the same time is zero. This requirement entails the satisfaction of simple, albeit technical, constraints on the initial spiking times, the initial deliveries, and the propagation delays. The initial spiking time of a neuron is defined as the last spike timing immediately preceding the zero time of a simulation, while initial deliveries consist of the schedule of spikes that are in transit at time zero. It is possible to show that if the initial spiking times, the initial deliveries, and the propagation delays are chosen from smooth distributions, the probability of coincidental events is zero; that is, coincidences occur with negligible probability. Under such mild assumptions, the proper order of events guarantees the well-posedness of our algorithm.
3. Event-Driven Implementation
In this section, we describe the algorithm of the stochastic event-driven strategy that has just been presented. We then discuss its implementation and analyze its complexity.
Structuring a program simulating a network of stochastic perfect integrate-and-fire neurons involves two kinds of operations. Updating consists of locally updating the state of an individual neuron and issuing its time stamp as the time it would spike if solely driven by its internal dynamics. There exist two types of updates:
Following the reception of an input, potentially triggering a spiking event
Following a spiking event, scheduling spike deliveries to downstream neurons
We summarize these principles by providing the pseudocode that implements an iterative step of the algorithm in procedure 1. We adopt the notations of the previous section:
Neuron parameters: The neurons’ internal dynamics are entirely prescribed by their threshold value la, negative drift, and refractory period.
Network parameters: The topology of the network is described by the weight wa,b of the synapses, whereas its temporal properties are summed up in the delivery delays da,b.
Priority stamps: Events are sorted according to the time separating their occurrence from the network's current time, coinciding with the time of the last update. The time to the next internal spiking event of neuron a is pa, and ua,b is the time to the next spike delivery from neuron a to b.
History load: The spiking or update of a neuron a is effective if the past history of inputs to a gives rise to a nonnegative load.
The iteration of updates is managed through the first step of our algorithm, which consists of selecting the next update event: either an internal event from the fixed-size priority queue Qn or an interaction event from the variable-size priority queue Qd. To implement these two priority queues Qn and Qd, we use binary heaps, a well-known data structure for which the cost of inserting an element (scheduling a future event) or deleting the root (selecting the next event) is O(log S), where S is the number of elements in the heap. As will be discussed later, the management of the priority queue is essential to the computational efficiency of the algorithm, and its principle is represented in Figure 5.
Including the refractory periods within our framework is tantamount to freezing the internal dynamics of a neuron for the duration of its refractory period after each spiking event: the state of a neuron a is left unchanged during that window, and the reset rule is altered so that the dynamics resumes from the reset state only once the refractory period has elapsed. Since it is relatively straightforward to do so, we do not include the implementation of refractory periods in the pseudocode of procedure 1.
3.2. Monte Carlo Sampling.
Before analyzing the complexity of the proposed algorithm, we must underline the fact that our implementation pertains to the class of Monte Carlo methods (Metropolis & Ulam, 1949; Robert & Casella, 2004), and as such, its cogency relies crucially on the existence of exact and efficient random generators.
The evolution of the network of integrate-and-fire neurons is fundamentally stochastic and requires extensive sampling from the parametric laws IG, BES3, IGB. In simulating the network for a vast number of neurons, and incidentally for an even vaster number of connections, all interaction events need to be precisely time ordered, thus making it necessary to employ exact random generators. Even a small bias can consistently introduce spurious orderings and cause the sustained propagation of “wrong” frustrated patterns of firing. From a more computational perspective, since a stochastic update is required each time an event occurs, the sampling process must demand as few arithmetic operations as possible; specifically, we want the number of required operations to be bounded deterministically. This last point precludes the use of the most common exact sampling method, the rejection algorithm (von Neumann, 1951), which consists of sampling from a known distribution and then subjecting the sample to a test that determines whether it is a true sample or whether a new round of drawing and testing is needed.
When concerned with the parametric laws IG, BES3, IGB, it is possible to avoid the rejection algorithm thanks to a powerful statistical result of Michael, Schucany, and Haas (1976). Its scope covers real random variables X for which there exists a polynomial P such that the transformed variable U=P(X) follows a distribution for which an exact and efficient generator is already known. If P is invertible with known inverse function P−1, simulating the original variable X consists of first sampling a value u of U and then returning P−1(u). A potential issue stems from the fact that P is often noninvertible, since the equation P(x)=u generally admits several roots x1, …, xn. The essential merit of the Michael-Schucany-Haas method is that it provides a simple rule for deciding in which proportion to choose among the n possible roots. As opposed to the rejection algorithm, it always provides an exact sample of the law of X without ever rejecting a sample u from U, and thus in a finite deterministic number of steps. Fortunately, the laws IG, BES3, IGB are closely connected to the chi-square law with one degree of freedom, which is simulated as Z2, where Z is a standard gaussian variable of law N(0, 1). We give the implementations of the generators for the laws IG, BES3, IGB in appendix A.
Equipped with such random generators, we are now in a position to discuss the complexity of the algorithm. It is natural to investigate the computational cost of the event-driven algorithm per event of interest. In our case, that is the cost of generating a spike.
To establish an upper bound on the cost of generating a spike, consider a network of N neurons for which each neuron connects, on average, to aN other neurons, where a<1 is the sparsity of the network. For simplicity, suppose that the network operates in a steady and homogeneous regime, implying that each neuron fires at the same constant rate r. During each time period 1/r, the homogeneity hypothesis entails that each neuron spikes on average once, scheduling the delivery of aN² spikes. Since we operate in a steady regime, the network in the meantime has received the delivery of as many spikes, each potentially leading to an update if the load of the neuron becomes excitatory. From these observations, we can deduce the overall computational cost per spike of our algorithm:
Update cost: Since the random generators operate in a finite number of arithmetic steps, the cost of a local Monte Carlo update is always finite. Its upper bound is denoted by c. Realizing that our implementation is such that the number of local updates is always less than the number of spike deliveries (aN² for N spikes within the network), we can readily estimate the cost of the local updates per spike as aNc.
Sorting cost: Since we use binary heap structures, the scheduling and deletion of aN² delivery events per duration 1/r incur a cost of 2aN²log(S), where S is the size of the priority queue in steady regime. The priority queue acts as a buffer, and its size S is the number of deliveries that have been scheduled and are still in transit, specifically, S = rDaN², if D denotes the average delay. Similarly, the local updates (which occur less than aN² times) may cause the reshaping of the priority queue of internal events Qn, which comprises N neurons, yielding aN²log(N) operations.
To summarize, we encapsulate the previous discussion in the following result:
- Property. For a network of N stochastic perfect integrate-and-fire neurons connected with sparsity a<1, in a steady-state and homogeneous regime with firing rate r and time delay D, the complexity of the stochastic event-driven algorithm, as the cost of generating a spike, is bounded by aN(c + 2log(rDaN²) + log N).
The computational cost of the algorithm lies primarily in the management of the priority queues Qn and Qd. For the case of the spike delivery events, it is essential to optimize the implementation of insertion in the heap by treating the volleys of spike deliveries that each spike elicits in a preordered fashion (see Figure 5). This way, the reshaping of the binary tree takes as few steps as possible. More important, the size of the priority heap Qd grows enormously with the number of connections as rDaN², so that memory management and memory access become the limiting factors to the speed of execution. However, by relying exclusively on local update rules organized through the sorting of prioritized events, this algorithm is readily parallelizable, thus offering a simple way to alleviate these constraints.
Interestingly, decreasing the average time delay of spike propagation D shortens the interaction memory of the network, offering another way to diminish the size of Qd. In the mammalian brain, the maximum spike delays are of the order of a few milliseconds (Ferraina, Paré, & Wurtz, 2002), and the maximum firing rates are typically of the order of 100 Hz (Schreiner & Raggio, 1996), suggesting that low values of rD are the norm, thus limiting the size of Qd.
These results contrast with stochastic clock-based algorithms for which neurons are allowed to emit and receive spikes only at those times that are multiples of the simulation time step Δt. In this context, it is possible to include finite propagation delays that, being also multiples of Δt, can be managed without being sorted (Izhikevich, 2006). However, at each time step and irrespective of the existence of an interaction event, the state of each neuron needs to be evaluated. Thus, the cost per spike for a network in the homogeneous steady regime is aNu + c/(rΔt), where u denotes the cost of an interaction update and c the cost of sampling the membrane potential. Although the dependence of the complexity is only linear in N, invariably updating every neuron at each time step takes a fixed computational toll in 1/Δt that becomes prohibitive even at reasonable precision.
Moreover, we argue that for practical use, the situation is actually much more unpropitious to stochastic clock-driven methods. Indeed, neural networks generally exhibit highly fluctuating temporal spiking activity that varies from one neuron to another. Clock-based implementations oversample neurons that spike rarely and undersample neurons involved in firing cascades. By contrast, stochastic event-driven strategies are impervious to this sampling defect since their computations affect only interactions. In that regard, our implementation is especially amenable to simulating networks with strong inhibition. The introduction of inhibitory interactions slows down the overall rate of events, in turn downsizing the heap of spikes in transit Qd, whose sorting accounts for most of the computational cost.
4. Simulation of Networks
In this section, we concretely illustrate our stochastic event-driven algorithm, exploiting the Markov nature of the network, seen as a collection of interacting homogeneous Markov chains. For clarity, we first distinguish between purely inhibitory and purely excitatory networks and then proceed to networks including both excitation and inhibition. All figures have been generated from the output of our machine-independent C-code implementation.
4.1. Inhibitory Interactions.
In Figure 6, we represent the sample points of a neuron's voltage trajectory, as simulated by our stochastic event-driven strategy for an inhibitory network. The network is made of four fully interconnected neurons: Na, Nb, Nc, and Nd.
For each neuron, the black downward traces represent the effective barriers. Every time a neuron spikes, the effective barrier is reset to start from the threshold value, after a refractory time materialized by silent periods between two vertical dashed lines. The succession of sample voltage values is sketched as follows. For a given neuron, the drawing of a provisional spiking time is shown as a point on the effective threshold, since the voltage trajectory hits the barrier at that time. If the neuron receives some interaction input before this provisional time, the estimate of the next spiking time needs to be updated. Since the network is inhibitory, the occurrence of the spike is delayed according to the local update rule. This procedure is represented by shifting downward the neuron's voltage by the amount of inhibition and drawing the next provisional estimate as for a noninteracting neuron. There are two types of internal events: the ones for which the neurons’ internal dynamics cause a spiking event (“true” events) and the ones that turn out not to be spiking events (“fake” events) because past inhibition had been ignored.
Notice that the simulation proceeds asynchronously: each neuron's current state comprises a time of last update tn (light arrows) and an estimate of the next spiking time (dark arrows). The network's implementation moves forward in time so that the network time stamp t, understood as the time of the last update (gray vertical line), satisfies tn ≤ t for all neurons. For instance, in Figure 6, the network is stalled at the last time neuron Na spikes, and thus the last-update state of Na is the reset state. At the same time, neuron Nb has not interacted since its last spiking time, and its last-update state is also its reset state. In contrast, neurons Nc and Nd have been updated once and twice, respectively. For a purely inhibitory network, the provisional estimates move only forward in time since they can only be postponed by interactions. Thus, the history of provisional spiking times exactly mirrors the history of the network interactions, and no computational power is wasted on simulating unnecessary sample points of the voltage trajectories.
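The asynchronous bookkeeping can be sketched as follows (a hypothetical Python fragment; the state layout and the helper name `step` are illustrative, not the paper's C data structures). The next network event is whichever comes first between the earliest provisional spiking time and the earliest spike delivery in transit:

```python
import heapq

def step(neurons, deliveries):
    # One iteration of the asynchronous loop: the next network event is
    # whichever comes first between the earliest provisional spiking time
    # and the earliest spike delivery in transit.
    #   neurons:    dict  name -> (t_last_update, t_provisional)
    #   deliveries: min-heap of (t_delivery, target, weight)
    name, (t_last, t_prov) = min(neurons.items(), key=lambda kv: kv[1][1])
    if deliveries and deliveries[0][0] < t_prov:
        return ("delivery",) + heapq.heappop(deliveries)
    return ("internal", t_prov, name)

neurons = {"Na": (0.0, 1.4), "Nb": (0.3, 0.9), "Nc": (0.5, 2.1)}
deliveries = [(0.6, "Nc", -0.2)]
heapq.heapify(deliveries)
# the delivery at t = 0.6 precedes Nb's provisional spike at t = 0.9
```

Each neuron is touched only when it takes part in an event, so neurons that stay silent between two network events cost nothing.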
4.2. Excitatory Interactions.
In Figure 7 we sketch out the sample points of a neuron's voltage trajectory, as simulated by our stochastic event-driven strategy for an excitatory network. Again, the network is made of four neurons (Na, Nb, Nc, and Nd) that fully connect to one another.
Contrary to the inhibitory case, for each neuron, the voltage trajectory is updated by being shifted upward by the amount of excitatory input it receives when interacting with another neuron. Since interactions cannot delay a provisional estimate, the only possible internal events are true spiking events. Accordingly, the shading of the voltage trace changes each time a spike happens. The spiking events fall into two classes: spikes that result from a neuron's own internal dynamics driving the trajectory to hit the barrier (black dashed lines) and spikes that directly follow the receipt of excitatory input (solid black lines). Notice that the network is stalled at the last time neuron Nc fires, and all the other neurons are shown in the state of their last update, in keeping with their asynchronous timing.
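A minimal sketch of this dichotomy (illustrative Python; `excitatory_update` is a hypothetical helper, not the paper's code):

```python
def excitatory_update(t_interaction, v, w, threshold=1.0):
    # Sketch of the two spike classes in an excitatory network: the input
    # either drives the voltage across the threshold (immediate spike,
    # solid black lines in Figure 7) or merely shrinks the effective
    # barrier, so the provisional estimate must be redrawn and can only be
    # hastened (black dashed lines).
    v_new = v + w
    if v_new >= threshold:
        return ("spike", t_interaction)
    return ("hastened", v_new)

# An input of 0.25 on a voltage at 0.8 crosses the unit threshold at once;
# on a voltage at 0.5, it only brings the neuron closer to the barrier.
assert excitatory_update(2.0, 0.8, 0.25) == ("spike", 2.0)
assert excitatory_update(2.0, 0.5, 0.25) == ("hastened", 0.75)
```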
To capture how the updating of the provisional estimate proceeds, we show the same network sample path with anticipations in Figure 8. By “anticipations,” we mean that we represent the sample points of the provisional estimates, which are progressively hastened every time a neuron receives an excitatory input. Concretely, the history of successive provisional estimates lies on the barrier (since we are concerned with potential spiking times) and moves backward in time along this barrier. To depict this process clearly, we do not represent the effective barrier in Figure 8. The extensions of the voltage trajectories after an actual spiking event reveal how far in the future the original estimates of the spiking time lie. In fact, the algorithm simulates admissible voltage traces from a future time back to the instant they cause the next network event, at which point the portion of the trajectory in the future can be discarded without altering the Markovian nature of the network evolution. Incidentally, although simulating the trajectory in the future ensures the consistency of the network update, it devotes computational power to sampling trajectories that do not appear in the sampled end results.
4.3. Balanced Network.
Finally, we represent in Figure 9 the sample paths of four neurons embedded in a network with excitation and inhibition. The specific network under consideration is made of 100 neurons that fully connect to one another. We adopt the same representation as for the case study of excitatory networks, sketching the network's sample path in asynchronous time in Figure 9a and with anticipations in Figure 9b.
Since the neurons Na, Nb, Nc, and Nd are the targets of both inhibitory and excitatory input, each of their sample paths presents the characteristics of excitatory and inhibitory updates. The asynchronous representation, Figure 9a, clearly shows that provisional estimates are regularly delayed by inhibition, while the representation with anticipations, Figure 9b, reveals that provisional estimates are also hastened by excitation. Notice that the update process is asymmetric: the downward shifts of the trajectories have much larger amplitudes than the upward shifts but are also less frequent. As long as a neuron's past history of interactions remains inhibitory, the implementation postpones local updates until the occurrence of an internal event, at which point the accumulated inhibition is realized. As a consequence, the number of times a trajectory needs to be sampled is drastically reduced, as exemplified by Figure 9: for the network at hand, although each spiking event causes a mean number of 100 spike deliveries, a trajectory leading up to a spiking time comprises only seven sample points on average.
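The lazy realization of accumulated inhibition can be sketched as follows (illustrative Python with hypothetical names; the point is that inhibitory deliveries cost only bookkeeping, while trajectory samples occur only at internal events):

```python
class LazyNeuron:
    # Sketch of the asymmetric update: inhibitory deliveries are only
    # accumulated (no trajectory sample), and the total is realized in a
    # single downward shift at the next internal event.
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.pending_inhibition = 0.0
        self.samples = 0  # number of times the trajectory was sampled

    def receive_inhibition(self, w):
        self.pending_inhibition += w  # bookkeeping only, no sample

    def internal_event(self, v_provisional):
        # Realize all accumulated inhibition in one downward shift.
        self.samples += 1
        v = v_provisional - self.pending_inhibition
        self.pending_inhibition = 0.0
        return ("true", v) if v >= self.threshold else ("fake", v)

n = LazyNeuron()
for _ in range(10):
    n.receive_inhibition(0.05)   # ten deliveries, zero trajectory samples
kind, v = n.internal_event(1.2)  # one sample realizes all ten inputs
# the accumulated inhibition turns the provisional spike into a fake event
```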
To conclude on the efficiency of our stochastic event-driven algorithm, we simulate 100,000 spikes for a fully connected network of 200 neurons (50 of them inhibitory). This represents the management of 40,000 connections and 20 million spike deliveries, which takes on the order of a minute on a standard computer for the relevant range of parameters. In Figure 10, we show the raster plots of the population for varying degrees of inhibition. The reduced parameters of every neuron are set to L = 1 for the threshold and to a fixed constant for the drift. The network is embedded in three-dimensional space, where neurons are points uniformly distributed on a sphere. The interneuronal delays are taken proportional to the great-circle distance on the sphere, so that they are bounded by a finite temporal radius. Notice that the delays are thus drawn from a smooth probability distribution. The interactions of the network are prescribed by randomly assigning three-fourths of the neurons to be excitatory, with synaptic weight w+ = 0.01 V, and the remaining fourth to be inhibitory, with synaptic weight w− = Rw+ for varying values of R in the range [0.75, 2]. For each value of R, the network's evolution starts from the same initial condition, with initial times drawn from the stationary distribution of the corresponding noninteracting network (w = 0) and empty delivery schedules. Suppressing inhibition by lowering R facilitates the appearance of firing cascades, until the network becomes excitatory enough to saturate at extremely high firing rates. In Figure 11, we show that this transition is paralleled by a drastic increase in the computational cost of generating a spike.
We have designed an efficient algorithm for simulating stochastic perfect integrate-and-fire neurons. The simulated spiking times are deemed exact in the sense that their numerical precision is limited only by the imperfection of the underlying random generator and the numerical accuracy of the computer. This is in stark contrast with stochastic Runge-Kutta methods, where the precision is set by the time step of the simulation and there is always a finite probability of missing the actual first-passage time, thus numerically overestimating these event timings. The specifics of our update rules are chosen to preserve the Markovian nature of the networks' stochastic dynamics (Touboul & Faugeras, 2011) while minimizing the number of updates. It comes as an additional benefit that when inhibition is at play, our algorithm implements fewer updates than the actual number of interaction events. All in all, the complexity of our stochastic event-driven strategy behaves similarly to that of deterministic ones.
That said, one might argue that a clock-driven algorithm, which proceeds without sorting interaction events, can perform better. However, even in this situation, it remains crucial that all the spikes a neuron receives be well ordered when it is subjected to both inhibition and excitation. In order to reliably separate N stochastic interaction events during a period T, a clock-driven implementation needs to bin the duration T into at least N² bins. This sets an upper bound on the time step of the simulation and shows that, if one requires well-ordered events, our method conceptually always outperforms a clock-driven algorithm.
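The quadratic scaling can be made concrete with the standard birthday-problem estimate (a sketch consistent with the appendix B argument; `bins_needed` is a hypothetical helper):

```python
import math

def bins_needed(n_events, p_collision):
    # Birthday-problem estimate: with N bins and n independent uniform
    # events, P(some two events share a bin) ~ n(n-1)/(2N), so keeping it
    # below p requires N >= n(n-1)/(2p), i.e., O(n^2) bins.
    return math.ceil(n_events * (n_events - 1) / (2.0 * p_collision))

# With even odds of a collision, separating 1,000 interaction events
# already demands on the order of a million bins.
assert bins_needed(1000, 0.5) == 999_000
assert bins_needed(2000, 0.5) == 3_998_000  # roughly 4x: quadratic growth
```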
Our method requires the identification of closed-form expressions for the update rules of the network, which is what makes exact simulation possible. The principles behind our implementation readily apply to general linear integrate-and-fire neurons. This encompasses the leaky integrate-and-fire model, for which the subthreshold dynamics follows an Ornstein-Uhlenbeck process and possesses a stationary distribution (Uhlenbeck & Ornstein, 1930). However, in the absence of closed-form expressions, little is known about the exact simulation of the first-passage times of such processes (Taillefumier & Magnasco, 2010).
In all generality, neural activity should be described in terms of excitable systems (Lindner, García-Ojalvo, Neiman, & Schimansky-Geier, 2004), whose dynamics are inherently nonlinear. Unfortunately, the development of an event-based approach for nonlinear integrate-and-fire neurons remains elusive, with the notable exception of deterministic quadratic integrate-and-fire neurons (Tonnelier et al., 2007). This is essentially because nonlinear subthreshold dynamics preclude the formulation of the first-passage problem with drift in terms of an effective barrier. As a result, it is impossible to decouple the current-mediated interactions from the intrinsic dynamics of the neurons, and the formulation of simple update rules appears out of reach.
Despite being valid only for stochastic perfect integrate-and-fire neurons, our numerical method is of primary interest for its ability to simulate large-scale neuronal networks with little computational cost and exquisite temporal precision. Indeed, the spiking activity of a network is sensitive to perturbations of spiking times. This fact is best illustrated by the concept of polychronization (Izhikevich, 2006), whereby only certain temporal spiking patterns of upstream neurons elicit the generation of specific cascades of spikes in downstream neurons. More importantly, networks are known to evolve according to spike-timing-dependent plasticity rules (Dayan & Abbott, 2001; Abbott & Nelson, 2000; Caporale & Dan, 2008), where the precise ordering of spike emissions and receptions is crucial. In the context of noisy neural assemblies, the exactness of our algorithm makes it especially suited to the simulation of such evolvable networks, while its efficiency facilitates the identification of polychronized motifs and elucidates the event structure of spike trains (Fellous, Tiesinga, Thomas, & Sejnowski, 2004; Toups, Fellous, Thomas, Sejnowski, & Tiesinga, 2011).
Appendix A: Monte Carlo Framework
This appendix specifies the probability distributions used in our algorithm and explicates the numerical generation of their samples. Despite being technical, these points are absolutely crucial to the implementation of the algorithm.
A.1 First-Passage Time of Drifted Wiener Process
A.2 Inverse Gaussian Bridge
A.3 First-Passage Brownian Bridge
Appendix B: Precision of Clock-Driven Simulation
Suppose we know that n events happen independently at random within a time period T and that we can record their occurrence only in N time bins with precision δt = T/N. If the probability of occurrence of a single event is uniform, the probability of an event falling in bin k is pN = 1/N. The precision δt resolves the n events if the probability that two events fall in the same bin is small, or at least controlled.
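Assuming a standard birthday-problem computation, the argument sketched above can be completed as follows:

```latex
\Pr(\text{two events share a bin})
= 1-\prod_{k=1}^{n-1}\Bigl(1-\frac{k}{N}\Bigr)
\approx 1-\exp\!\Bigl(-\frac{n(n-1)}{2N}\Bigr)\le p
\quad\Longleftrightarrow\quad
N \ge \frac{n(n-1)}{-2\ln(1-p)} \sim \frac{n^{2}}{2p}\,,\quad p\to 0\,.
```

In particular, keeping the collision probability p fixed requires a number of bins growing quadratically with the number of events n.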
The weights and delays can be taken as independent random variables without changing the analysis.
Actually, it is enough to choose the initial times and the propagation delays from distributions that are absolutely continuous with respect to the Lebesgue measure on their natural space of definition.
Notice that p tends to zero for increasing confidence of separation, so that the required number of bins N diverges.