Abstract

In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks’ dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we resolve these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially well suited to studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.

1.  Introduction

In cortical cell assemblies, a neuron forms synapses onto about 10,000 target neurons (Braitenberg & Schüz, 1998). The resulting neural circuits are weakly driven by behaviorally relevant, highly preprocessed spiking patterns (Fiser, Chiu, & Weliky, 2004; Bruno & Sakmann, 2006; Larson, Perrone, Sen, & Billimoria, 2010). In turn, the neural processing performed by these circuits is crucially affected by noise, which results from background spontaneous activity or inherently stochastic mechanisms of spike generation (Rolls & Deco, 2010; Faisal, Selen, & Wolpert, 2008; Fellous, Rudolph, Destexhe, & Sejnowski, 2003). However, when studied in vivo, cortical neural circuits can exhibit precise and reliable neural responses (Abeles, Bergman, Margalit, & Vaadia, 1993; Berry, Warland, & Meister, 1997; Prut et al., 1998; Reinagel & Reid, 2000; Rust, Schwartz, Movshon, & Simoncelli, 2005; Ayzenshtat et al., 2010): upon repeated excitation by a stimulus, neural circuits consistently produce patterns of activity where spike timing is precisely time-locked to the features of the stimulus. The timing jitter, the stochastic temporal variability of a neuronal spike from repetition to repetition, quantifies in a stimulus-dependent fashion the neuronal noise (Cecchi et al., 2000). This suggests that neural circuits process information through temporal coding (Thorpe, Delorme, & Van Rullen, 2001), whereby the actual spiking times convey meaningful information (Huber et al., 2008; Yang, DeWeese, Otazu, & Zador, 2008).

Understanding how indirectly driven, noisy, interacting neurons can produce reliable and temporally precise neural response is central to the problem of neural transmission (Kumar, Rotter, & Aertsen, 2008, 2010). In that respect, noise plays a crucial and ambiguous part: it alternatively appears as a nuisance constraining neural circuits to operate through rate coding (London, Roth, Beeren, Hausser, & Latham, 2010) or as a benefit to precise temporal coding through stochastic facilitation (McDonnell & Ward, 2011). Computational neuroscience addresses the problem of neural transmission by studying in silico models of neural networks, which model the internal dynamics of isolated single units as well as their interactions, while explicitly including noise.

Despite comprehensive experimental and theoretical studies (Bryant & Segundo, 1976; Manwani & Koch, 1999a, 1999b; Mainen & Sejnowski, 1995), it is still a matter of debate whether the dominant source of stochasticity is internal or external. Internal sources of noise include all effects of thermal noise, such as the voltage fluctuations induced by any electrical resistance, the discrete chatter of ion channels, and other effects of molecular systems with a finite number of components. The external sources aggregate the ongoing influence of the many neurons that are not in the circuit being investigated but measurably interact with it, acting as a neural thermal bath. Regardless of the nature of the noise, when concerned with the temporal coding of a neural population, an adequate model of spiking neuron is the stochastic integrate-and-fire neuron (Lapicque, 1907; Knight, 1972).

In this model, at any time t, the internal state of a neuron is given by its membrane potential Xt, which evolves according to a stochastic differential equation (or Langevin equation) of the diffusion type (Karatzas & Shreve, 1991; Stroock & Varadhan, 2006),
dX_t = F(X_t, I_t)\,dt + \sigma(X_t, I_t)\,dW_t,
1.1
where F is a drift function, σ the diffusion coefficient, I is the neuronal input, and W is a Wiener process (Gerstner & Kistler, 2002; Gerstner & Naud, 2009; Izhikevich, 2003, 2004). In equation 1.1, the contributions of external and internal noise are combined in an intrinsic standard gaussian white noise dW. Notice that in all generality, F and σ are both dependent on the time-varying input I of the neuron. Whenever the potential Xt reaches a given threshold l, the neuron spikes, and the membrane potential instantaneously resets to its base level. Integrate-and-fire models are advantageous because they articulate the discrete nature of spike counts and the continuous nature of spike timing (Victor, 2005) by prescribing the subthreshold dynamics of the membrane potential. They thus offer the possibility of readily including noise contributions in the model and of modeling temporally localized neuronal interactions as recapitulated by I, seen as the integrated synaptic input.

In the context of a population, an integrate-and-fire neuron initiates a spike in response to the temporal spiking patterns of upstream neurons, when the corresponding synaptic deliveries reliably drive the neuron's membrane potential to its threshold. The transient nature of synaptic transmission and the existence of finite propagation delays conspire to render the integration of afferent activity extremely sensitive to the perturbations of incoming spikes’ timing (Izhikevich, 2006; Goodman & Brette, 2010). Thus, temporal precision is absolutely crucial for computing the activity of a neural population in response to both past and external spiking patterns.

Moreover, as the result of intrinsically noisy dynamics, the spiking patterns of a population of stochastic integrate-and-fire neurons are realizations of a highly dimensional stochastic process. A stochastic simulation method is then temporally exact if its algorithm produces exact samples of the intricate law of the network's evolution. Practically, the implementation of exact stochastic algorithms yields spiking patterns with a temporal precision that is limited only by the exactness of the random generators and the numerical precision of the computer, thus ensuring exquisite temporal precision. That said, there are undoubtedly situations for which stochasticity is hardly relevant for understanding neural computations (e.g., the visual system with high-contrast stimuli), whereas other neuronal processing cannot be grasped without their stochastic components (e.g., the visual system at low contrast, close to detection threshold). Stochastic simulations should exclusively address the latter since the integration of stochastic equations is much more computationally intensive than deterministic ones due to the excruciating cost of generating random numbers to faithfully depict noisy neural dynamics.

Despite substantial work (Mattia & Giudice, 2000; Reutimann, Giugliano, & Fusi, 2003; Ros, Carrillo, Ortigosa, Barbour, & Agís, 2006), the many difficulties inherent in the study of interacting stochastic processes have hindered the development of temporally exact algorithms with noisy dynamics (Brette et al., 2007). It is the purpose of this letter to remedy this situation for the simplest network of stochastic integrate-and-fire neurons.

1.1.  Simulation of Integrate-and-Fire Networks.

The simulation of networks of integrate-and-fire neurons invariably consists of the implementation of propagation periods between two consecutive events (spike emission or reception) and network update rules following each event (modifying neuronal internal variables if necessary). Such algorithms generally fall into two categories (Brette et al., 2007):

  • Clock-driven algorithm: The integrative schema corresponding to the network evolution is discretized with a finite time step (typically using Runge-Kutta methods) and is paced by a central clock (Morrison, Straube, Plesser, & Diesmann, 2006).

  • Event-driven algorithm: The network evolves according to simple equations, and, given the network state at a particular spiking event, there exist analytical formulas predicting the next spiking event and network state (Mattia & Giudice, 2000).

Due to their flexibility, clock-driven algorithms have received the most investigation. Nevertheless, when concerned with precise temporal dynamics, these algorithms present two major flaws. First, their computational complexity is set by the duration of the time step, and most of the computational time is devoted to simulating the unobserved subthreshold dynamics of neurons that, in addition, can remain silent most of the time. Second, their discrete flavor introduces errors: noisy membrane trajectories can very well cross the spiking threshold between two subthreshold sample points (Taillefumier & Magnasco, 2010).
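To make the second flaw concrete, the following sketch simulates a single noisy perfect integrate-and-fire neuron with a naive clock-driven (Euler-Maruyama) scheme; the parameter values are hypothetical and chosen for illustration only. Because threshold crossings are checked only on the time grid, excursions above threshold between two sample points go undetected, biasing spike times late.

```c
/* Naive clock-driven simulation of dX = mu dt + sigma dW with threshold l
 * and reset to 0.  Illustrative sketch with hypothetical parameters: the
 * crossing test is performed only on the grid, so the first-passage time is
 * systematically overestimated. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double std_normal(void) {                  /* Box-Muller transform */
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void) {
    const double mu = 1.0, sigma = 1.0, l = 1.0;  /* drift, noise, threshold */
    const double dt = 1e-3, T = 10.0;             /* time step, total time   */
    double x = 0.0;
    for (double t = 0.0; t < T; t += dt) {
        x += mu * dt + sigma * sqrt(dt) * std_normal();  /* Euler-Maruyama step */
        if (x >= l) {                             /* crossing seen only on the grid */
            printf("spike at t = %.4f\n", t + dt);
            x = 0.0;                              /* reset to the base level */
        }
    }
    return 0;
}
```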

Another key issue is the existence of finite times for spike propagation, which affects the core functioning of a neuronal network. For instance, consider the recurrent network depicted in Figure 1, and suppose we need to update the network after neuron Na fires and that receiving one spike is enough to elicit a response. Then if the delays are such that da,c+dc,e<da,e, neuron Nc is activated and Ne is inhibited, whereas the situation is reversed if da,e+de,c<da,c. In the absence of delays, it is impossible to order the sequence of local updates consistently, and the problem is ill posed.

Figure 1:

Recurrent neural network with time delays.


As opposed to stochastic clock-driven algorithms, stochastic event-driven algorithms devote most of their computational power to the implementation of neuronal interactions, naturally include finite propagation times in their event scheduling, and are exact in principle. However, since the existence of analytical transition formulas is still required (Reutimann et al., 2003; Ros et al., 2006), event-driven algorithms remain restricted to a few deterministic network models (Rudolph & Destexhe, 2006; Tonnelier, Belmabrouk, & Martinez, 2007). A natural candidate spiking model to develop an event-driven strategy suited to noisy dynamics is the linear integrate-and-fire neuron (Lansky & Ditlevsen, 2008; Ricciardi & Sacerdote, 1979; Buonocore, Caputo, Pirozzi, & Ricciardi, 2010; Burkitt, 2006a, 2006b) with
dX_t = (\mu - \gamma X_t)\,dt + I(t)\,dt + \sigma\,dW_t
and Dirac-like time-delayed interactions as a weighted sum of Dirac functions,
I_a(t) = \sum_{b} \sum_{n} w_{b,a}\,\delta\!\left(t - t^b_n - d_{b,a}\right),
1.2
where wb,a and db,a denote the weight and delay of the synapse from neuron b to neuron a, and tnb refers to the timing of the nth spiking event of neuron b.1

1.2.  Stochastic Event-Driven Strategy.

Recently the Markovian structure of networks of linear integrate-and-fire neurons was identified and translated conceptually in terms of a stochastic event-driven strategy (Touboul & Faugeras, 2011). The special features of the linear integrate-and-fire model make it possible to design local update rules, which stipulate how a neuron behaves when receiving or emitting spikes. Incidentally, the network evolution of such neurons appears as a Markov chain of single-unit asynchronous updates. The time sequence with which these rules are called is determined as follows. For each instantaneous state of the network, the timing of the next event is estimated independently for every neuron, as if the neurons were not interacting. Selecting the first of these events is tantamount to drawing the next network event (since no other event happens before it), and the corresponding local update rule can be safely applied.

However, each time a local update rule is implemented, it propagates interactions to other neurons. Maintaining the Markovian property demands that updated neurons bear the memory of the fact that in the absence of interactions, they would spike at a known time in the future. From a mathematical standpoint, the bookkeeping of intermediary events means that the membrane potentials are sampled from random processes that are conditioned in the future. Stochastic event-driven algorithms implement consistently these conditional updates at the network level. They proceed by using information about the current state of the network to refine provisional estimates of future events, until each of these refined estimates sequentially becomes the next network event.

Such algorithms are efficient when they minimize the time instances at which the underlying membrane potential needs to be sampled. This sampling number is determined by how often a neuron's membrane is sampled when receiving inputs and how many estimates are required for a provisional neuronal event to become the next network event. Figuratively, the task of a stochastic event-driven algorithm is to simulate future potential scenarios at the cost of later discarding parts of them, so that the individual neurons are simulated only for those times when their update is meaningful at the network level.

1.3.  New Practical Implementation.

Here, elaborating on results in Touboul and Faugeras (2011), we rigorously investigate the Markovian nature of networks of perfect stochastic integrate-and-fire neurons with time-delayed Dirac-like interactions and produce an efficient and exact event-driven implementation of their dynamics. A linear integrate-and-fire neuron is deemed “perfect” when there is no leak term (γ = 0) in the equation of its subthreshold membrane dynamics. Such a model is general in the sense that the Markovian setting of its networks can be extended to any linear integrate-and-fire neuron. However, networks of perfect stochastic integrate-and-fire neurons are of special interest because they are simple enough to be practically implemented in an event-driven fashion.

Indeed, we show that the conditioned processes that intervene in the Markovian description of the network are well-known stochastic processes and are closely related to the canonical Wiener process (Karatzas & Shreve, 1991). From there, we implement explicitly our event-driven strategy as a Monte Carlo algorithm, whose efficiency is ensured by the existence of exact efficient random generators for our conditioned processes. An exact random generator is an algorithmic procedure that, given exact samples from a simple probability law (typically the uniform law over [0, 1]), produces a number whose statistics follows exactly the law of the desired random variable. The efficiency of the generator is measured by the expected number of arithmetic operations that are required to simulate a sample. Moreover, we organize the scheduling of events so that the effective number of updates of a given neuron is fewer than the number of spikes it receives, while discarding little information in the refinement of provisional estimates. Incidentally, despite being a fully stochastic exact scheme, our algorithm operates with the same complexity as a deterministic event-driven method.

As a result, we argue that our event-driven implementation computationally outperforms any other stochastic simulation method and is especially suited to the study of temporal coding. In that respect, we underline that our algorithm is not a device to perform approximate integration, as in a Monte Carlo method. The intrinsic stochasticity of the model is an essential component of the biophysics that we wish to simulate, which prompts the need for exact simulation strategies. Even when approximately simulated, the integration of stochastic differential equations is excruciatingly costly by comparison with its deterministic counterparts. For example, the errors in a basic Euler integration of an ordinary differential equation are of order Δt, while the errors in the equivalent stochastic Euler method for the homologous Langevin equation are of order √Δt. This makes stochastic integration a phenomenally more costly proposition, as roughly the square of the number of time steps is required to achieve the same precision. Therefore, we claim that bringing a large-scale stochastic simulation into the same computational complexity class as a deterministic simulation is a substantial achievement. We assert that because it maintains a careful temporal ordering of events, our method is of particular interest for recurrent networks evolving under spike-timing-dependent plasticity (Dayan & Abbott, 2001; Abbott & Nelson, 2000; Caporale & Dan, 2008). From another perspective, networks of integrate-and-fire neurons have recently inspired Bayesian inference methods to analyze experimental neural data (Cocco, Leibler, & Monasson, 2009; Monasson & Cocco, 2011). Our algorithm provides a natural candidate to benchmark these methods on synthetic data.

The letter is organized as follows. We first describe the Markovian evolution of a network of perfect stochastic integrate-and-fire neurons and make explicit the corresponding local and network rules that allow us to design an event-driven strategy. We then devise its practical implementation and bound its complexity. Finally, we illustrate its behavior by successively considering the case of an inhibitory network, an excitatory network, and a network with both inhibition and excitation.

2.  Exact Markovian Framework

In this section, we first define the evolution of a single unit embedded in a network of neurons as a first-passage Markov chain, where the interactions are recapitulated in terms of a fluctuating barrier. We then exhibit the update rules for an interacting single unit and establish their schedule at the network level in order to faithfully simulate the network's evolution.

2.1.  Perfect Integrate-and-Fire Networks.

The dynamics of the membrane potential Xa of a neuron a integrates a time-dependent input current Ia through the stochastic differential equation of a drifted Brownian motion,
dX^a_t = \mu_a\,dt + \sigma_a\,dW^a_t + I_a(t)\,dt,
where μa are constant positive drifts, σa are diffusion coefficients, and Wa denote independent standard Wiener processes (Burkitt, 2006a, 2006b). When the membrane potential Xa reaches the threshold la, that is, at the first-passage time (assuming Xa0<la), a spiking event occurs, and the membrane potential resets to zero. A spiking event entails the delivery of spikes to the set of downstream neurons D(a) that are connected to a. The past spiking history of a neuron a up to time t is the finite time-ordered sequence ha(t) that satisfies
h_a(t) = \{\, t^a_1 < t^a_2 < \cdots < t^a_{C_a(t)} \le t \,\},
and the instantaneous spike count Ca(t) is the number of events in ha(t).
Here, we posit that the currents Ia recapitulate only the interactions between connected neurons and consist of Dirac impulses, modeling time-delayed deliveries of spikes from the set of upstream neurons U(a) that are connected to a. Given the spiking histories of neurons b in U(a), the neuron a receives the input current
I_a(t) = \sum_{b \in U(a)} \;\sum_{t^b_n \in h_b(t)} w_{b,a}\,\delta\!\left(t - t^b_n - d_{b,a}\right),
where wb,a is the weight of the synapse from b to a and db,a is the propagation delay from neuron b to a. No neuron self-connects, and we impose that the matrices of weights and delays satisfy wa,a=0 and da,a=0.
The statistics of spiking events in the absence of interactions is entirely specified by the effective height of the threshold and the effective drift through the law of the first-passage time of Xa at level la,
\tau_a = \inf\{\, t > 0 \;:\; X^a_t \ge l_a \,\}.
From now on, we posit that σa=1, so that all quantities are expressed in these effective units. In Figure 2, we represent typical sample paths of stochastic perfect integrate-and-fire neurons in the absence of interaction.
Figure 2:

Sample paths of noninteracting stochastic perfect integrate-and-fire neurons. (a) In the direct representation, each neuron spikes when its voltage, modeled as an upward-drifted Wiener process, hits the firing threshold. (b) In the effective representation, the drift is subtracted from the same voltage traces so that the sketched traces are Brownian trajectories. A stochastic event-driven simulation aims at simulating the spiking times while sampling the voltage trajectories at a minimal number of points.


In the presence of interactions, if a neuron b spikes at time tb, the membrane potential Xa of a downstream neuron a in D(b) is instantaneously updated at time tb+db,a according to
X^a_{(t_b + d_{b,a})^+} = X^a_{(t_b + d_{b,a})^-} + w_{b,a}.
Therefore, the spiking history ha is made of first-passage times of the Wiener process Wa with the piecewise continuous effective barrier,
B_a(t) = l_a - \mu_a\,(t - t^a_i) - \sum_{b \in U(a)} \;\sum_{\substack{t^b_n \in h_b(t) \\ t^a_i < t^b_n + d_{b,a} \le t}} w_{b,a}, \qquad t^a_i \le t < t^a_{i+1},
with the convention that a passage happens at a time of discontinuity t if Wa(t) ≥ Ba(t+), whereas a passage cannot happen at t if Wa(t) < Ba(t+). In the formulation of interactions in terms of effective barriers, Ba is a nonanticipating jump process that depends only on the past spiking history of upstream neurons hb(t), b in U(a).

The preceding model is naturally endowed with a Markovian structure. To be more precise, at any given time t, suppose that we observe the last events of the spiking histories and that, for each pair of neurons a and b, we maintain the schedule of spike deliveries that neuron b is to receive as a result of the past activity of a. Then we can consistently construct the ensuing dynamics of the network by simulating the sample paths of the processes Wa. Event-driven strategies aim at simulating exactly the evolution of the network as a Markov chain, while simulating as few sample points as possible.
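As a concrete illustration of the state that such a Markovian simulation has to carry for each neuron, one possible layout in C is sketched below; the field names are ours, not taken from the authors' implementation, and anticipate the provisional estimate and load introduced in the next sections.

```c
/* Hypothetical per-neuron state for an event-driven Markovian simulation.
 * The field names are illustrative and not taken from the authors' code. */
typedef struct {
    double last_update;   /* time of the last local update (or last spike)      */
    double next_spike;    /* provisional estimate of the next spontaneous spike */
    double load;          /* accumulated synaptic input not yet applied         */
    double threshold;     /* threshold l_a                                      */
    double drift;         /* drift mu_a                                         */
} neuron_state;
```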

2.2.  Local Neuron Update Rules.

Before explaining our stochastic event-driven implementation, we describe the local updates that a neuron undergoes. If U(a) is empty, neuron a evolves independently, and each of its interspike time intervals is an independent realization of the first-passage time of a Wiener process through a straight barrier. The law of this first-passage time is known analytically and belongs to the class of inverse gaussian distributions (Seshadri, 1999). We denote this law as IG(la, μa) to underline its parametric dependence on the threshold and drift values (see appendix  A). If one can design a random generator for IG(la, μa), it is thus possible to simulate the activity of neuron a without simulating the underlying sample paths of Xa.
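For reference, the first-passage density of a drifted Wiener process through a constant level is the standard inverse gaussian density; in generic notation (threshold l, drift \mu, diffusion \sigma, all assumed positive), it reads

p_{T}(t) \;=\; \frac{l}{\sqrt{2\pi\,\sigma^{2}\,t^{3}}}\,
\exp\!\left(-\,\frac{(l-\mu t)^{2}}{2\,\sigma^{2}\,t}\right),
\qquad t>0,

so that, up to the choice of diffusion coefficient, IG(la, μa) above corresponds to this density with l = la and μ = μa.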

Now assume we know that neuron a spiked at time tai, and consider the set of times of scheduled spike deliveries from neurons b in U(a). In the absence of spike deliveries, this set is empty, and the next spiking event tai+1 follows the law of tai+IG(la, μa); we denote by pa the corresponding provisional estimate, so that the next spike is expected at tai+pa. If a spike is received before tai+pa, that is, before neuron a would spike in the absence of interactions, the estimate pa becomes incorrect. Indeed, if neuron a receives a spike from b at time ub,a with ub,a<tai+pa, the original continuous sample path instantaneously updates at ub,a upon receiving the spike:
X^a_{u_{b,a}^+} = X^a_{u_{b,a}^-} + w_{b,a}.
If the synapse is inhibitory, the update shifts the sample path downward and delays the time neuron a would spike in the absence of later interactions. If the synapse is excitatory, the update shifts the sample path upward. The shift may be strong enough to satisfy Xa ≥ la and cause neuron a to spike instantaneously. If Xa < la, neuron a does not spike, but the update hastens the time neuron a would spike in the absence of later interactions. Incidentally, the original estimate pa appears as a provisional estimate of the next spiking time that inhibitory input postpones and excitatory input accelerates.

It is possible to formulate these update rules rigorously from a probabilistic point of view. This requires some familiarity with conditioned stochastic processes that are related to the Wiener processes, namely the inverse gaussian bridge process and the first-passage bridge process (Bertoin, Chaumont, & Pitman, 2003). Due to the technicality of these results, we give only a brief account of their algorithmic formulation in the following and justify their cogency in Figure 3. Details about the different probability laws of the conditioned processes are given in appendix  A.

Figure 3:

In both panels, the top figures represent the voltage trace in direct representation; the bottom figures are formulated in terms of an effective barrier. In the absence of interactions, the neuron spikes at the first-passage time. Upon arrival of a spike at time ub,a, the trajectory is instantaneously updated. (Left) Inhibitory input. The voltage trace is shifted downward, and the estimate for the next spiking event pa needs postponing, which is equivalent to the addition of a first-passage time to a barrier shifted by wb,a. (Right) Excitatory input. The voltage trace is shifted upward. If the sample path does not instantaneously cross the barrier, the estimate for the next spiking event pa needs to be advanced, amounting to substituting pa with a new random variable with law given by equation 2.3.


The cases of inhibition and excitation need to be clearly distinguished. For an inhibitory input at time ub,a, the provisional estimate pa is updated by the addition of a delay whose distribution is given by IG(|wb,a|, μa), according to
p_a \leftarrow p_a + IG\!\left(|w_{b,a}|, \mu_a\right),
2.1
as explained in Figure 3. For an excitatory input at time ub,a and before the update, the membrane potential Xa follows the distribution of a drifted Wiener process conditioned to first reach the threshold la at time tai+pa, while starting in zero at tai. Such a conditioned process Xa is referred to as a first-passage bridge process (see appendix  A). It is known that la−Xa follows the distribution of a three-dimensional Bessel bridge BES3(la, 0) (Bertoin et al., 2003), making it possible to simulate the value of the conditioned sample path Xa at time ub,a. Updating the sample path as
X^a_{u_{b,a}^+} = X^a_{u_{b,a}^-} + w_{b,a}
allows us to decide whether the delivery causes the neuron to spike. In the event of a spike, pa is later updated according to the reset rule,
p_a \leftarrow (u_{b,a} - t^a_i) + IG\!\left(l_a, \mu_a\right),
2.2
as explained in Figure 3. If the sample path remains below the spiking threshold, we still have to design an update rule for the provisional estimate pa, which needs to be refined upon receiving excitatory input. As shown in Figure 3, the law of the new provisional estimate is the first-passage time of a drifted Wiener process Xa at level la−wb,a, knowing that Xa first hit la at time tai+pa, while starting in zero at tai. Such a conditional law follows the inverse gaussian bridge distribution IGB(wb,a, la−wb,a, ub,a, pa) (see appendix  A), and the update rule reads
p_a \leftarrow IGB\!\left(w_{b,a},\; l_a - w_{b,a},\; u_{b,a},\; p_a\right).
2.3
Notice that we need only simulate the value of Xa for those instances when neuron a receives an excitatory input.
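A minimal sketch of how these three rules could be wired together in C is given below. The samplers sample_ig, sample_bes3_bridge, and sample_igb stand for exact generators of the IG, BES3, and IGB laws of appendix A; their names and signatures are hypothetical, and the time convention (all times measured from the last spike of neuron a) is an assumption of the sketch.

```c
#include <stdbool.h>

/* Hypothetical exact samplers for the laws of appendix A (not shown here). */
double sample_ig(double height, double drift);             /* IG(height, drift)      */
double sample_bes3_bridge(double l, double u, double p);   /* BES3(l, 0) bridge at u */
double sample_igb(double w, double l, double u, double p); /* IGB(w, l, u, p)        */

/* Local update of neuron a upon a delivery of weight w at time u (measured
 * from the last spike of a), following rules 2.1-2.3; *p is the provisional
 * estimate of the next spiking time.  Returns true if the delivery triggers
 * an instantaneous spike.  Sketch only, with hypothetical conventions. */
bool update_on_delivery(double *p, double u, double w, double l, double mu) {
    if (w < 0.0) {
        *p += sample_ig(-w, mu);      /* rule 2.1: postpone by an extra passage over |w| */
        return false;
    }
    /* Excitatory input: sample the conditioned voltage at u via the Bessel bridge. */
    double x = l - sample_bes3_bridge(l, u, *p);
    if (x + w >= l) {                 /* rule 2.2: instantaneous spike and reset */
        *p = u + sample_ig(l, mu);
        return true;
    }
    *p = sample_igb(w, l - w, u, *p); /* rule 2.3: hasten the provisional estimate */
    return false;
}
```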

2.3.  Network Update Rule.

Equipped with the previous local neuronal update rule, we can lay out an event-driven strategy that simulates exactly the spontaneous activity of an arbitrary network of stochastic perfect integrate-and-fire neurons. The crucial point of such an implementation is to carefully schedule the individual asynchronous neuron updates so that the Markovian nature of the dynamics is preserved. The general approach consists in maintaining:

  • A fixed-size priority queue Qn of neurons a that are time-stamped by their provisional estimate of spiking in the absence of interactions pa

  • A varying-size priority queue Qd of spike deliveries, where receptions from neuron b to c are time-stamped by their time of delivery

At each iteration step, we select the next update event as the element of the two queues with the highest priority, that is, with the minimum time stamp. If we select neuron a from Qn, a spontaneous spiking of a occurs at time pa, since a does not receive any input prior to its provisional spiking estimate pa. This entails the insertion of spike deliveries to downstream neurons D(a) in Qd. Conversely, if we select a time of delivery ub,c from Qd, neuron c is updated according to the rules of the previous section. It can either stay silent or spike, in which case spike deliveries are scheduled.
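The resulting scheduling can be sketched as follows; the queue primitives and the helpers fire_neuron, deliver_spike, and schedule_deliveries are hypothetical placeholders for the local rules of the previous section, and the fake-event handling introduced below is omitted.

```c
#include <stdbool.h>

/* One iteration of the network update rule (sketch).  Qn holds provisional
 * spontaneous spikes of neurons; Qd holds scheduled spike deliveries.  The
 * heap type and the helpers are assumed, not part of the original letter. */
typedef struct { double time; int source, target; } event;
typedef struct heap heap;
event heap_peek(const heap *h);
void  heap_pop(heap *h);

void fire_neuron(int a);                         /* spike of a, redraw its estimate in Qn */
bool deliver_spike(int b, int c);                /* local update of c; true if c spikes   */
void schedule_deliveries(heap *Qd, int a);       /* push deliveries to D(a) into Qd       */

void network_step(heap *Qn, heap *Qd) {
    event en = heap_peek(Qn);                    /* next provisional spontaneous spike */
    event ed = heap_peek(Qd);                    /* next scheduled spike delivery      */
    if (en.time <= ed.time) {
        heap_pop(Qn);
        fire_neuron(en.target);                  /* spontaneous spike at time p_a      */
        schedule_deliveries(Qd, en.target);
    } else {
        heap_pop(Qd);
        if (deliver_spike(ed.source, ed.target)) /* delivery may trigger a spike       */
            schedule_deliveries(Qd, ed.target);
    }
}
```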

From a mathematical standpoint, it is crucial to ensure that the order of events is proper, that is, that the probability that two events happen at the same time is zero. This requirement entails the satisfaction of simple, albeit technical, constraints on the initial spiking times, the initial deliveries, and the propagation delays. The initial spiking time of a neuron is defined as the last spike timing immediately preceding the zero time of a simulation, while initial deliveries consist of the schedule of spikes that are in transit at time zero. It is possible to show that if the initial spiking times, the initial deliveries, and the propagation delays are chosen from smooth distributions, the probability of coincidental events is zero, which corresponds to the fact that the constraints for coincidence are met with negligible probability.2 Under such mild assumptions, the proper order of events guarantees the well-posedness of our algorithm.

To implement the strategy efficiently and consistently, we must make the following addition. If a neuron a receives a series of inputs between s and t from neurons in U(a), the net update effect at t is to shift the sample path by a value
L_a(s, t) = \sum_{b \in U(a)} \;\sum_{\substack{t^b_n \in h_b(t) \\ s < t^b_n + d_{b,a} \le t}} w_{b,a},
which we call the load of a between s and t. As long as La(s, t) ≤ 0, the input history of a between times s and t can only delay the occurrence of a spike. Instead of updating a at each spike delivery, we update it only if an excitatory interaction causes the load to become positive, and we reset it to zero after the update. This implicitly defines the load of a neuron as the stochastic process La obeying the update rules:
L_a \leftarrow L_a + w_{b,a} \quad \text{upon a delivery from } b, \qquad\quad L_a \leftarrow 0 \quad \text{after each update of } a.
Incidentally, it becomes possible to select for the next event a neuron a from Qn with La < 0. In other words, the cost of considering the load is to introduce “fake” spontaneous spiking events. Suppose the next event is selected from Qn by identifying neuron a as the neuron with the highest priority. If La < 0, the neuron has received some past inhibitory input whose effect has not been realized yet. Neuron a should then remain silent, and the event is deemed a “fake” spiking event. In such a case, we update pa according to
p_a \leftarrow p_a + IG\!\left(|L_a|, \mu_a\right),
2.4
which corresponds to the inhibitory rule, equation 2.1, where La is substituted for wb,a. Overall, pa is updated forward in time only when an update of neuron a is scheduled, with rule 2.4 for fake spiking events (La < 0) or rule 2.2 for real spiking events (La ≥ 0). In the meantime, pa is updated backward in time only if the load of a becomes excitatory upon a delivery at time u, according to the rule
p_a \leftarrow IGB\!\left(L_a,\; l_a - L_a,\; u,\; p_a\right),
which corresponds to the excitatory rule 2.3 where La replaces wb,a. As a consequence, we considerably diminish the number of neuron updates while ensuring that the successive estimates pa are consistently time ordered, as explained in Figure 4.
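In code, this bookkeeping amounts to a few lines around the local update; the sketch below assumes the hypothetical conventions of the earlier sketches, and apply_excitatory_update stands for rules 2.2/2.3 applied with the aggregated load.

```c
/* Load bookkeeping (sketch): deliveries only accumulate into the load, and the
 * costly conditioned update is performed only when the load turns positive.
 * apply_excitatory_update() stands for rules 2.2/2.3 applied with the
 * aggregated load; helpers and conventions are hypothetical. */
void apply_excitatory_update(double *p, double u, double load);   /* assumed helper */

void on_delivery(double *p, double *load, double u, double w) {
    *load += w;                               /* accumulate the synaptic weight          */
    if (*load > 0.0) {
        apply_excitatory_update(p, u, *load); /* realize the net excitatory input        */
        *load = 0.0;                          /* reset the load after the update         */
    }
    /* A net inhibitory load stays pending and is realized only when an internal
     * (possibly "fake") event of the neuron is selected, via rule 2.4.                  */
}
```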
Figure 4:

(Left) Net inhibitory input. (Right) Net excitatory input. A negative load can only delay the spiking occurrences of a neuron a (lighter trace), as opposed to when the load becomes positive (darker trace). We update a neuron a only when a spike delivery causes the load to become positive, at the cost of treating separately the case when the next event is a “fake” spiking event.


3.  Event-Driven Implementation

In this section, we describe the algorithm of the stochastic event-driven strategy that has just been presented. We then discuss its implementation and analyze its complexity.

3.1.  Pseudocode.

Structuring a program that simulates a network of stochastic perfect integrate-and-fire neurons involves two sides: updating and sorting. Updating consists of locally updating the state of an individual neuron and issuing its time stamp as the time it would spike if solely driven by its internal dynamics. There exist two types of updates:

  • Following the reception of an input, potentially triggering a spiking event

  • Following a spiking event, scheduling spike deliveries to downstream neurons

Sorting manages the set of future events, which are prioritized according to provisional spike timing and selected based on highest priority.

We summarize these principles by providing the pseudocode that implements an iterative step of the algorithm in procedure 1. We adopt the notations of the previous section:

  • Neuron parameters: The neurons’ internal dynamics are entirely prescribed by their threshold value la, negative drift, and refractory period.

  • Network parameters: The topology of the network is described by the weights wa,b of the synapses, whereas its temporal properties are summed up in the delivery delays da,b.

  • Priority stamps: Events are sorted according to the time separating their occurrence from the network's current time, which coincides with the time of the last update. The time to the next internal spiking event of neuron a is pa, and ua,b is the time to the next spike delivery from neuron a to b.

  • History load: The spiking or update of a neuron a is effective only if the past history of input of a gives rise to a nonnegative load La.

The iteration of updates is managed through the first step of our algorithm, which consists of selecting the next update event: either an internal event from the fixed-size priority queue Qn or an interaction event from the variable-size priority queue Qd. To implement these two priority queues Qn and Qd, we use binary heaps, a well-known data structure for which the cost of inserting an element (scheduling a future event) or deleting the root (selecting the next event) is O(log S), where S is the number of elements in the heap. As will be discussed later, the management of the priority queue Qd is essential to the computational efficiency of the algorithm, and its principle is represented in Figure 5.
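For concreteness, a self-contained binary min-heap over time-stamped events, of the kind that could back Qn and Qd, is sketched below; it is a generic textbook implementation, not the authors' code.

```c
/* Generic binary min-heap of time-stamped events (sketch).  Insertion and
 * root deletion are O(log S), as used in the complexity discussion below. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { double time; int source, target; } event;

typedef struct {
    event *data;
    int    size, capacity;
} heap;

static void swap(event *a, event *b) { event t = *a; *a = *b; *b = t; }

void heap_push(heap *h, event e) {
    if (h->size == h->capacity) {
        h->capacity = h->capacity ? 2 * h->capacity : 16;
        h->data = realloc(h->data, h->capacity * sizeof(event));
    }
    int i = h->size++;
    h->data[i] = e;
    while (i > 0 && h->data[(i - 1) / 2].time > h->data[i].time) {  /* sift up */
        swap(&h->data[i], &h->data[(i - 1) / 2]);
        i = (i - 1) / 2;
    }
}

event heap_pop(heap *h) {                       /* remove and return the earliest event */
    event root = h->data[0];
    h->data[0] = h->data[--h->size];
    int i = 0;
    for (;;) {                                  /* sift down */
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < h->size && h->data[l].time < h->data[m].time) m = l;
        if (r < h->size && h->data[r].time < h->data[m].time) m = r;
        if (m == i) break;
        swap(&h->data[i], &h->data[m]);
        i = m;
    }
    return root;
}

int main(void) {                                /* tiny demonstration */
    heap q = {0};
    heap_push(&q, (event){2.5, 0, 1});
    heap_push(&q, (event){0.7, 1, 2});
    heap_push(&q, (event){1.3, 2, 0});
    while (q.size) printf("t = %.2f\n", heap_pop(&q).time);  /* 0.70 1.30 2.50 */
    free(q.data);
    return 0;
}
```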

Figure 5:

Heap structure of spike deliveries. The synapses are organized as an array of lists of synapses ordered according to increasing delay. Each time a neuron i spikes, a delivery event is created for every neuron j to which i connects, with a priority stamp initialized to the delay di,j. The events are inserted into the dynamic heap structure that keeps track of the delivery schedule of spikes in the network.


Including the refractory periods within our framework is tantamount to freezing the internal dynamics of a neuron for a duration τ after the last spiking event. Refractory periods are implemented by updating the state of a neuron a only for times t that fall after the refractory period following its last spike, while altering the reset rule so that the frozen duration τ is added to the drawn first-passage time. Since it is relatively straightforward to do so, we do not include the implementation of refractory periods in the pseudocode of procedure 1.

3.2.  Monte Carlo Sampling.

Before analyzing the complexity of the proposed algorithm, we must underline the fact that our implementation pertains to the class of Monte Carlo methods (Metropolis & Ulam, 1949; Robert & Casella, 2004), and as such, its cogency relies crucially on the existence of exact and efficient random generators.

The evolution of the network of integrate-and-fire neurons is fundamentally stochastic and requires extensive sampling from the parametric laws IG, BES3, IGB. In simulating the network for a vast number of neurons, and incidentally for an even vaster number of connections, all interaction events need to be precisely time ordered, thus making it necessary to employ exact random generators. Even a small bias can consistently introduce spurious ordering and cause the sustained propagation of “wrong” frustrated patterns of firing. From a more computational perspective, since a stochastic update is required each time an event occurs, the sampling process must demand as few arithmetic operations as possible; specifically, we want the number of required operations to be bounded deterministically. This last point precludes the use of the most common exact sampling method, the rejection algorithm (von Neumann, 1951), that consists of sampling from a known distribution and then subjecting the sample to a test, to determine whether it is a true sample or a new sequence of drawing and testing is needed.

When concerned with the parametric laws IG, BES3, and IGB, it is possible to avoid the rejection algorithm thanks to a powerful statistical result of Michael, Schucany, and Haas (1976). Its scope covers real random variables X for which there exists a polynomial P such that the transformed variable U=P(X) follows a distribution for which an exact and efficient generator is already known. If P is invertible with known inverse function P−1, simulating the original variable X consists of first sampling from U and then returning P−1(U). A potential issue stems from the fact that P is often noninvertible, since the equation P(x)=u generally admits several roots x1, …, xn. The essential merit of the Michael-Schucany-Haas method is that it provides a simple rule for deciding in which proportions to choose among the n possible roots. As opposed to the rejection algorithm, it always provides an exact sample of the law of X without ever rejecting a sample u from U, and thus in a finite deterministic number of steps. Fortunately, the laws IG, BES3, and IGB are closely connected to the chi-square law with one degree of freedom, which is simulated as the square of a standard gaussian variable of law N(0, 1). We give the implementations of the generators for the laws IG, BES3, and IGB in appendix  A.
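As an illustration of the Michael-Schucany-Haas construction, a sketch of an exact sampler for the standard inverse gaussian law IG(mean m, shape s) is given below; translating the first-passage parameterization (threshold, drift) of the letter into (m, s) is an assumption of the sketch (for a unit-diffusion drifted Wiener process, m = l/μ and s = l²).

```c
/* Exact sampler for the inverse gaussian law IG(m, s) (mean m, shape s)
 * using the transformation method of Michael, Schucany & Haas (1976):
 * a single chi-square(1) draw, a root selection, and one uniform draw,
 * hence a deterministically bounded number of operations.
 * For a unit-diffusion Wiener process with drift mu hitting level l, the
 * first-passage time is IG(m = l/mu, s = l*l) (assumed parameterization). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double uniform01(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

static double std_normal(void) {                       /* Box-Muller transform */
    return sqrt(-2.0 * log(uniform01())) * cos(6.283185307179586 * uniform01());
}

double sample_ig(double m, double s) {
    double n = std_normal();
    double y = n * n;                                   /* chi-square with 1 dof */
    double x = m + (m * m * y) / (2.0 * s)
                 - (m / (2.0 * s)) * sqrt(4.0 * m * s * y + m * m * y * y);
    /* The transformation admits two roots, x and m*m/x; choosing x with
     * probability m/(m + x) restores the exact IG(m, s) law. */
    return (uniform01() <= m / (m + x)) ? x : (m * m) / x;
}

int main(void) {                                        /* quick sanity check */
    double m = 2.0, s = 4.0, acc = 0.0;
    int n = 100000;
    for (int i = 0; i < n; i++) acc += sample_ig(m, s);
    printf("empirical mean %.3f (theoretical %.3f)\n", acc / n, m);
    return 0;
}
```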

3.3.  Complexity.

Equipped with such random generators, we are now in a position to discuss the complexity of the algorithm. It is natural to investigate the computational cost of the event-driven algorithm per event of interest. In our case, that is the cost of generating a spike.

To establish an upper bound on the cost of generating a spike, consider a network of N neurons for which each neuron connects, on average, to aN other neurons, where a<1 is the sparsity of the network. For simplicity, suppose that the network operates in a steady and homogeneous regime, implying that each neuron has the same constant firing rate r. During each time period 1/r, the homogeneity hypothesis entails that each neuron spikes on average once, scheduling the delivery of aN2 spikes. Since we operate in a steady regime, the network in the meantime has received the delivery of as many spikes, potentially leading to an update if the load of the neuron becomes excitatory. From these observations, we can deduce the overall computational cost per spike of our algorithm:

  • Update cost: Since the random generators operate in a finite number of arithmetic steps, the cost of a local Monte Carlo update is always finite. Its upper bound is denoted by c. Realizing that our implementation is such that the number of local updates is always less than the number of spike deliveries (aN2 for N spikes within the network), we can readily estimate the cost of the local updates per spike as aNc.

  • Sorting cost: Since we use binary heap structures, the scheduling and deletion of aN2 delivery events per duration 1/r incur a cost of 2aN2log(S), where S is the size of the priority queue Qd in steady regime. The priority queue acts as a buffer, and its size S is the number of deliveries that have been scheduled and are still in transit, specifically S=rDaN2, if D denotes the average delay. Similarly, the local updates (which occur less frequently than the aN2 deliveries) may cause the reshaping of the priority queue of internal events Qn, which comprises N neurons, yielding aN2log(N) operations.

To summarize, we encapsulate the previous discussion in the following result:

  • Property. 
    For a network of N stochastic perfect integrate-and-fire neurons connected with sparsity a<1, in a steady-state and homogeneous regime with firing rate r and time delay D, the complexity of the stochastic event-driven algorithm, as the cost of generating a spike, is bounded by
    aN\left[\,c + 2\log\!\left(rDaN^{2}\right) + \log N\,\right],
    where c denotes the finite unit cost of a local Monte Carlo update.

The computational cost of the algorithm lies primarily in the management of the priority queues Qn and Qd. For the case of the spike delivery events, it is essential to optimize the implementation of insertion in the heap by treating the volleys of spike deliveries that each spike elicits in a preordered fashion (see Figure 5). This way, the reshaping of the binary tree takes as few steps as possible. More important, the size of the priority heap Qd grows enormously with the number of connections, as rDaN2, so that memory management and memory access become the limiting factors to the speed of execution. However, by relying exclusively on local update rules organized through the sorting of prioritized events, this algorithm is readily parallelizable, thus offering a simple way to alleviate these constraints.

Interestingly, decreasing the average time delay of spike propagation D shortens the interaction memory of the network, offering another way to diminish the size of Qd. In the mammalian brain, the maximum spike delays are of the order of a few milliseconds (Ferraina, Paré, & Wurtz, 2002), and the maximum firing rates are typically of the order of 100 Hz (Schreiner & Raggio, 1996), suggesting that low values of rD are the norm, thus limiting the size of Qd.

These results contrast with stochastic clock-based algorithms, for which neurons are allowed to emit and receive spikes only at times that are multiples of the simulation time step δt. In this context, it is possible to include finite propagation delays that, being also multiples of δt, can be managed without being sorted (Izhikevich, 2006). However, at each time step and irrespective of the existence of an interaction event, the state of each neuron needs to be evaluated. Thus, the cost per spike for a network in the homogeneous steady regime is aNu+c/(rδt), where u denotes the cost of an interaction update and c the cost of sampling the membrane potential. Although the dependence of the complexity is only linear in N, invariably updating neurons at each time step takes a fixed computational toll that becomes prohibitive even at reasonable precision.

Indeed, to avoid ambiguities about the order of network updates, the precision should be high enough to ensure with confidence that all interaction events happen at separate times. In appendix  B, we show that for a homogeneous network in a steady regime, that requirement constrains the time step δt in terms of the confidence q of event separation, defined as
q = P\left(\, |t_i - t_j| > \delta t \ \text{ for all distinct } t_i, t_j \in \mathcal{S} \,\right),
where the set \mathcal{S} contains the times of incoming spikes that a neuron receives during an interspike period.3 The resulting cost per spike becomes quadratic in the size of the network, which is worse than our stochastic event-driven strategy in O(Nlog(N)).

Moreover, we argue that for practical use, the situation is actually much more unfavorable to stochastic clock-driven methods. Indeed, neural networks generally exhibit highly fluctuating temporal spiking activity that varies from one neuron to another. Clock-based implementations oversample neurons that spike rarely and undersample neurons involved in firing cascades. By contrast, stochastic event-driven strategies are impervious to this sampling defect since their computations affect only interactions. In that regard, our implementation is especially amenable to simulating networks with strong inhibition. The introduction of inhibitory interactions slows down the overall rate of events, in turn downsizing the heap Qd of spikes in transit, whose sorting accounts for most of the computational cost.

4.  Simulation of Networks

In this section, we concretely illustrate our stochastic event-driven algorithm, which exploits the Markov nature of the network, seen as a collection of interacting homogeneous Markov chains. For clarity, we first distinguish between purely inhibitory and purely excitatory networks and then proceed to networks including both excitatory and inhibitory interactions. All figures were produced from the output of our machine-independent C-code implementation.

4.1.  Inhibitory Interactions.

In Figure 6, we represent the sample points of a neuron's voltage trajectory, as simulated by our stochastic event-driven strategy for an inhibitory network. The network is made of four fully interconnected neurons: Na, Nb, Nc, and Nd.

Figure 6:

Sample paths of four neurons embedded in a fully connected inhibitory network. The parameters of the simulation have been chosen for the sake of a clear illustration and should not be considered relevant.


For each neuron, the black downward traces represent the effective barriers. Every time a neuron spikes, the effective barrier is reset to start from the threshold value, after a refractory time materialized by silent periods between two vertical dashed lines. The succession of sample voltage values is sketched as follows. For a given neuron, the drawing of a provisional spiking time is shown as a point on the effective threshold, since the voltage trajectory hits the barrier at that time. If the neuron receives some interaction input before this provisional time, the estimate of the next spiking time needs to be updated. Since the network is inhibitory, the occurrence of the spike is delayed according to the local update rule. This procedure is represented by shifting downward the neuron's voltage by the amount of inhibition and drawing the next provisional estimate as for a noninteracting neuron. There are two types of internal events: the ones for which the neurons’ internal dynamics cause a spiking event (“true” events) and the ones that turn out not to be spiking events (“fake” events) because past inhibition had been ignored.

Notice that the simulation proceeds asynchronously: each neuron's current state comprises a time of last update tn (light arrows) and an estimate of the next spiking time (dark arrows). The network's implementation moves forward in time so that the network time stamp t, understood as the time of the last update (gray vertical line), satisfies tn ≤ t for all neurons. For instance, in Figure 6, the network is stalled at the last time neuron Na spikes, and thus the last update state of Na is the reset state. At the same time, neuron Nb has not interacted since its last spiking time, and its last update state is also its reset state. By contrast, neurons Nc and Nd have been updated once and twice, respectively. For a purely inhibitory network, the provisional estimates move only forward in time since they can only be postponed by interactions. Thus, the history of provisional spiking times exactly reflects the history of the network interactions, and no computational power is lost simulating unnecessary sample points of the voltage trajectories.

4.2.  Excitatory Interactions.

In Figure 7 we sketch out the sample points of a neuron's voltage trajectory, as simulated by our stochastic event-driven strategy for an excitatory network. Again, the network is made of four neurons (Na, Nb, Nc, and Nd) that fully connect to one another.

Figure 7:

Sample paths of four neurons embedded in a fully connected excitatory network in asynchronous time. The parameters of the simulation have been chosen to give a clear illustration and should not be considered relevant.


Contrary to the inhibitory case, for each neuron, the voltage trajectory is updated by being shifted upward by the amount of excitatory input it receives when interacting with another neuron. Since interactions cannot delay a provisional estimate, the only possible internal events are true spiking events. Accordingly, the shading of the voltage trace changes each time a spike happens. The spiking events can be distinguished in two classes: the spikes that result from a neuron's own internal dynamics driving the trajectory to hit the barrier (black dashed lines) and the spikes that directly follow the receiving of excitatory input (solid black lines). Notice that the network is stalled at the last time neuron Nc fires, and all the other neurons are shown in the state of their last update, in keeping with their asynchronous timing.

To capture how the updating of the provisional estimate proceeds, we show the same network sample path with anticipations in Figure 8. By “anticipations,” we mean that we represent the sample points of the provisional estimates, which are progressively hastened every time a neuron receives an excitatory input. Concretely, the history of successive provisional estimates lies on the barrier (since we are concerned with potential spiking times) and moves backward in time on this barrier. In order to clearly depict this process, we do not represent the effective barrier in Figure 8. The extensions of the voltage trajectories after an actual spiking event reveal how far in the future the original estimates of the spiking time are. In fact, the algorithm simulates admissible voltage traces from a future time back to the instant they cause the next network event, at which point the portion of the trajectory in the future can be considered irrelevant without altering the Markovian nature of the network evolution. Incidentally, although simulating the trajectory in the future ensures the consistency of the network update, it devotes computational power to sampling trajectories that do not appear in the final sampled results.

Figure 8:

Sample paths of four neurons embedded in a fully connected excitatory network with anticipations. The parameters of the simulation have been chosen to give clear illustration and should not be considered relevant.


4.3.  Balanced Network.

Finally, we represent in Figure 9 the sample paths of four neurons embedded in a network with excitation and inhibition. The specific network under consideration is made of 100 neurons that fully connect to one another. We adopt the same representation as for the case study of excitatory networks, sketching the network's sample path in asynchronous time in Figure 9a and with anticipations in Figure 9b.

Figure 9:

Sample paths of 100 neurons embedded in a fully connected excitatory and inhibitory network. The parameters of the simulation have been chosen to give a clear illustration and should not be considered relevant.


Since the neurons Na, Nb, Nc, and Nd are the targets of both inhibitory and excitatory input, each of their sample paths presents the characteristics of both excitatory and inhibitory updates. The asynchronous representation, Figure 9a, clearly shows that provisional estimates are regularly delayed by inhibition, while the representation with anticipations, Figure 9b, reveals that provisional estimates are also hastened by excitation. Notice that the update process is asymmetric: the downward shifts of the trajectories have much larger amplitude than the upward shifts but are also less frequent. As long as a neuron's past history of interaction remains inhibitory, the implementation postpones local updates until the occurrence of an internal event, at which point the accumulated inhibition is realized. As a consequence, the number of times a trajectory needs to be sampled is drastically reduced, as exemplified by Figure 9. For the network at hand, although each spiking event causes on average 100 spike deliveries, a trajectory leading up to a spiking time counts only seven sample points on average.

To conclude on the efficiency of our stochastic event-driven algorithm, we simulate 100,000 spikes for a fully connected network of 200 neurons (50 of them inhibitory). This represents the management of 40,000 connections and 20 million spike deliveries, which takes on the order of a minute on a standard computer for a relevant range of parameters. In Figure 10, we show the raster plots of the population for varying degrees of inhibition. The reduced parameters of every neuron are set to l=1 V for the threshold and to a common value for the drift. The network is embedded in a three-dimensional space where neurons are points that are uniformly distributed on a sphere. The interneuronal delays are taken proportional to the great-circle distance on the sphere, which defines the temporal radius of the network. Notice that the delays are thus drawn from a smooth probability distribution. The interactions of the network are prescribed by choosing at random three-fourths of the neurons to be excitatory with synaptic weight w+=0.01 V and the remaining fourth to be inhibitory with synaptic weight w−=−Rw+, for varying values of R in the range [0.75, 2]. For each value of R, the network's evolution starts from the same initial condition, with initial times drawn from the stationary distribution of the corresponding noninteracting network (w=0) and empty delivery schedules. Suppressing inhibition by lowering R facilitates the appearance of cascades of firing, until the network becomes excitatory enough to saturate at extremely high firing rates. In Figure 11, we show that this transition is paralleled by a drastic increase in the computational cost of generating a spike.
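For reproducing this kind of geometry, a sketch of how one might draw the neuron positions and the delay matrix is given below; the scale factor relating great-circle distance to delay is a free, hypothetical parameter and not a value from the letter.

```c
/* Place N neurons uniformly on the unit sphere and set the delay between two
 * neurons proportional to their great-circle distance (sketch; the scale
 * factor delay_per_radian is a hypothetical free parameter). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 200

static double std_normal(void) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void) {
    static double p[N][3], d[N][N];
    const double delay_per_radian = 1e-3;       /* hypothetical scale (seconds) */
    for (int i = 0; i < N; i++) {               /* normalized gaussians are uniform on the sphere */
        double x = std_normal(), y = std_normal(), z = std_normal();
        double r = sqrt(x * x + y * y + z * z);
        p[i][0] = x / r; p[i][1] = y / r; p[i][2] = z / r;
    }
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double dot = p[i][0] * p[j][0] + p[i][1] * p[j][1] + p[i][2] * p[j][2];
            if (dot > 1.0) dot = 1.0; else if (dot < -1.0) dot = -1.0;
            d[i][j] = delay_per_radian * acos(dot);   /* great-circle distance */
        }
    printf("d[0][1] = %.6f s\n", d[0][1]);
    return 0;
}
```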

Figure 10:

Raster plot of the spontaneous activity of a fully connected network of 200 neurons with a subpopulation of 50 inhibitory neurons. Each raster is made of 100 million spiking events for different conditions of inhibition. R is the ratio of the weight of inhibitory synapses over the weight of excitatory synapses, T denotes the computational time required to run the simulation. The parameters of the simulation have been chosen to be relevant within the limits of a population of 200 neurons (see the text).


Figure 11:

Computational time needed to simulate the network of 200 neurons of Figure 10. For inhibitory and excitatory synapses of the same weight ($R = 1$), the computation takes 2 minutes. Notice the sharp increase of the simulation time when the weakening of inhibition causes the network to saturate. Inset: H is the mean size of the heap of spike deliveries, that is, the average number of spikes in transit at each network update.

5.  Conclusion

We have designed an efficient algorithm for simulating stochastic perfect integrate-and-fire neurons. The simulated spiking times are deemed exact in the sense that their numerical precision is limited only by the imperfection of the underlying random number generator and the numerical accuracy of the computer. This is in stark contrast with stochastic Runge-Kutta methods, where the precision is set by the time step of the simulation and there is always a finite probability of missing the actual first-passage time, thus numerically overestimating these event timings. The specifics of our update rules are chosen to preserve the Markovian nature of the networks' stochastic dynamics (Touboul & Faugeras, 2011) while minimizing the number of updates. It actually comes as an additional benefit that, when inhibition is at play, our algorithm implements fewer updates than the actual number of interaction events. All in all, the complexity of our stochastic event-driven strategy behaves similarly to that of deterministic ones.

That said, one might argue that a clock-driven algorithm, which proceeds without sorting interaction events, can perform better. However, even in this situation, it remains crucial that all the spikes a neuron receives be well ordered when it is subjected to both inhibition and excitation. In order to reliably separate N stochastic interaction events during a period T, a clock-driven implementation needs to bin the duration T into at least $N^2$ bins (see appendix B). This sets an upper bound on the time step of the simulation and shows that, if one requires well-ordered events, our method conceptually always outperforms a clock-driven algorithm.

Our method supposes the identification of closed-form expressions for the update rules of the network, which makes the exact simulation possible. The principles behind our implementation readily apply to general linear integrate-and-fire neurons. This encompasses the leaky integrate-and-fire model, for which the subthreshold dynamics follows an Ornstein-Uhlenbeck process and possesses a stationary distribution (Uhlenbeck & Ornstein, 1930). However, in the absence of closed-form expressions, little is known about the exact simulation of the first-passage times of such processes (Taillefumier & Magnasco, 2010).

In full generality, neural activity should be described in terms of excitable systems (Lindner, Garca-Ojalvo, Neiman, & Schimansky-Geier, 2004), whose dynamics are inherently nonlinear. Unfortunately, the development of an event-based approach for nonlinear integrate-and-fire neurons remains elusive, with the notable exception of deterministic quadratic integrate-and-fire neurons (Tonnelier et al., 2007). This is essentially due to the fact that a nonlinear subthreshold dynamics precludes the formulation of the first-passage problem with drift in terms of an effective barrier. As a result, it is impossible to decouple the current-mediated interactions from the intrinsic dynamics of the neurons, and the formulation of simple update rules appears out of reach.

Despite being valid only for stochastic perfect integrate-and-fire neurons, our numerical method is of primary interest for its ability to simulate large-scale neuronal networks with little computational cost and exquisite temporal precision. Indeed, the spiking activity of a network is sensitive to perturbations of spiking times. This fact is best illustrated through the concept of polychronization (Izhikevich, 2006), whereby only certain temporal spiking patterns of upstream neurons elicit the generation of specific cascades of spikes in downstream neurons. More importantly, networks are known to evolve according to spike-timing-dependent plasticity rules (Dayan & Abbott, 2001; Abbott & Nelson, 2000; Caporale & Dan, 2008), where the precise ordering of spike emissions and receptions is absolutely crucial. In the context of noisy neural assemblies, the exactness of our algorithm makes it especially suited to the simulation of such evolvable networks, while its efficiency facilitates the identification of polychronized motifs and the elucidation of the event structure of spike trains (Fellous, Tiesinga, Thomas, & Sejnowski, 2004; Toups, Fellous, Thomas, Sejnowski, & Tiesinga, 2011).

Appendix A: Monte Carlo Framework

This appendix specifies the probability distributions used in our algorithm and explicates the numerical generation of their samples. Despite being technical, these points are absolutely crucial to the implementation of the algorithm.

A.1 First-Passage Time of Drifted Wiener Process

For a Wiener process $W$ with $W_0 = 0$ and $x$ a strictly positive real, the first-hitting time of $W$ to the constant barrier of height $x$ is defined as
$$\tau_x \;=\; \inf\,\{\, t > 0 \;:\; W_t \geq x \,\}$$
and admits the following continuous probability density (Karatzas & Shreve, 1991),
$$p_x(t) \;=\; \frac{x}{\sqrt{2\pi t^3}}\,\exp\!\left(-\frac{x^2}{2t}\right), \qquad t > 0.$$
Such a density is proper since it is normalized to one. However, we have $\mathbb{E}[\tau_x] = \infty$, and none of its moments are defined.
The distribution of first-passage times of $W$ with a linear barrier of negative slope, $t \mapsto x - \mu t$ with $\mu > 0$, is the same as for a Wiener process with positive drift $\mu$ and a constant barrier at $x$:
$$\tau_{x,\mu} \;=\; \inf\,\{\, t > 0 \;:\; W_t \geq x - \mu t \,\} \;=\; \inf\,\{\, t > 0 \;:\; W_t + \mu t \geq x \,\}.$$
Through standard arguments of stochastic calculus (Karatzas & Shreve, 1991), it can be shown that such a distribution admits the continuous density
$$q_{x,\mu}(t) \;=\; \frac{x}{\sqrt{2\pi t^3}}\,\exp\!\left(-\frac{(x - \mu t)^2}{2t}\right), \qquad t > 0.$$
This distribution is well defined as long as $\mu > 0$, and so are all its moments. For example, we have
$$\mathbb{E}\big[\tau_{x,\mu}\big] \;=\; \frac{x}{\mu}, \qquad \mathrm{Var}\big[\tau_{x,\mu}\big] \;=\; \frac{x}{\mu^3}.$$
When the drift is zero, the distribution $q_x = q_{x,0}$ is known as the Wald distribution, whereas the family of functions $q_{x,\mu}$ is collectively referred to as the inverse gaussian distributions; we denote them $\mathrm{IG}(x, \mu)$. We are interested in the case for which the drift is strictly positive ($\mu > 0$). In this situation, there exists a very efficient method to simulate numerically samples from the distribution $\mathrm{IG}(x, \mu)$. The method generates exact random deviates by transformation of a standard normal variable (Michael et al., 1976). Its implementation is given in procedure 2.
[Procedure 2: exact sampling from $\mathrm{IG}(x, \mu)$ by transformation of a standard normal variable.]
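For concreteness, here is a short Python transcription of the transformation method of Michael et al. (1976) for sampling a first-passage time with barrier $x$ and drift $\mu > 0$ (an $\mathrm{IG}$ variate with mean $x/\mu$ and shape $x^2$); it is a sketch of the standard algorithm rather than a verbatim copy of procedure 2.

```python
import random


def sample_first_passage(x, mu, rng=random):
    """Draw tau ~ IG with mean x/mu and shape x**2 (barrier x, drift mu > 0)."""
    mean = x / mu            # mean of the inverse gaussian law
    shape = x * x            # shape parameter lambda
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    # Smaller root of the quadratic produced by the transformation.
    t1 = mean + (mean * mean * y) / (2.0 * shape) \
        - (mean / (2.0 * shape)) * (4.0 * mean * shape * y
                                    + mean * mean * y * y) ** 0.5
    # Accept the smaller root with probability mean / (mean + t1),
    # otherwise return the conjugate root mean**2 / t1.
    if rng.random() <= mean / (mean + t1):
        return t1
    return mean * mean / t1
```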

A.2 Inverse Gaussian Bridge

Given two strictly positive constants $x$ and $y$, we consider the two first-passage times $\tau_{x,\mu}$ and $\tau_{x+y,\mu}$. We obviously have $\tau_{x,\mu} \leq \tau_{x+y,\mu}$, and these two random variables are clearly correlated. However, the random variables $\tau_{x,\mu}$ and $\tau_{x+y,\mu} - \tau_{x,\mu}$ are independent, a fact that is formulated by saying that the process $x \mapsto \tau_{x,\mu}$ has independent increments. In particular, $x \mapsto \tau_{x,\mu}$ has the Markov property, which allows defining the probability density of the conditioned random variable $\tau_{x,\mu} \,\big\vert\, \tau_{x+y,\mu} = t$ as
$$b^{\mu}_{x,y,t}(s) \;=\; \frac{q_{x,\mu}(s)\; q_{y,\mu}(t-s)}{q_{x+y,\mu}(t)}, \qquad 0 < s < t.$$
These functions form a parametric family of distributions known as the inverse gaussian bridge distributions and denoted $\mathrm{IGB}(x, y, t)$. Actually, assuming that the underlying Wiener process hits a linear barrier $s \mapsto l - \mu s$, $l > 0$, at a time $t$, the conditional first-passage process $x \mapsto \tau_{x,\mu} \,\vert\, \tau_{l,\mu} = t$, $0 < x < l$, can be defined formally and is called the inverse gaussian bridge process (Webber & Ribeiro, 2003). Because of the conditioning, its law does not depend on $\mu$, and, consistently, the density can be written $b_{x,y,t}$, dropping the index $\mu$.
The family of distributions $\mathrm{IGB}(x, y, t)$ exhibits an interesting behavior when its parameters are varied, since it can be either unimodal or bimodal. In the same fashion as for the distributions $\mathrm{IG}(x, \mu)$, there exists an exact numerical method to simulate random variates from the law $\mathrm{IGB}(x, y, t)$ (Webber & Ribeiro, 2003). It again proceeds by transformation of a standard normal variable (see procedure 3).
[Procedure 3: exact sampling from $\mathrm{IGB}(x, y, t)$ by transformation of a standard normal variable.]
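We do not reproduce the exact transformation of procedure 3 here; instead, the following Python sketch draws approximate samples by numerically inverting the cumulative distribution of the (drift-independent) bridge density proportional to $q_{x,0}(s)\,q_{y,0}(t-s)$ on a grid. It is a stand-in meant only to make the distribution concrete; the grid size and seed are arbitrary assumptions.

```python
import numpy as np


def fpt_density(a, s):
    """Zero-drift first-passage density q_{a,0}(s) of a Wiener process."""
    return a / np.sqrt(2.0 * np.pi * s ** 3) * np.exp(-a * a / (2.0 * s))


def sample_ig_bridge(x, y, t, rng, grid_size=4096):
    """Approximate draw of tau_x conditioned on tau_{x+y} = t, i.e. IGB(x, y, t)."""
    s = np.linspace(t / grid_size, t * (1.0 - 1.0 / grid_size), grid_size)
    density = fpt_density(x, s) * fpt_density(y, t - s)  # drift cancels out
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    return np.interp(rng.random(), cdf, s)


rng = np.random.default_rng(0)
draw = sample_ig_bridge(x=1.0, y=0.5, t=2.0, rng=rng)
```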

A.3 First-Passage Brownian Bridge

Conditioning on $W_t = x < L(t)$, the probability that a Wiener process $W$ hits the barrier $L$ before $t$ is given, for a linear barrier, by
$$\pi_L(x, t) \;=\; \mathbb{P}\big[\, \exists\, s \leq t,\; W_s \geq L(s) \;\big\vert\; W_t = x \,\big] \;=\; \exp\!\left(-\frac{2\,L(0)\,\big(L(t) - x\big)}{t}\right),$$
and
$$u_L(x, t) \;=\; \big(1 - \pi_L(x, t)\big)\,\frac{1}{\sqrt{2\pi t}}\,e^{-x^2/(2t)}$$
is the probability that $W$, the Wiener process killed on the barrier, survives up to $(x, t)$. Accordingly, we can verify that $u_L$ satisfies the heat equation with absorbing boundary condition on the barrier $L$ and initial condition $u_L(\cdot, 0) = \delta_0$, which demonstrates that $u_L$ is the transition kernel of the killed Wiener process.
Although the kernels $u_L(\cdot, t)$, defined for $x < L(t)$, are nongaussian and define improper probability transitions on $\mathbb{R}$, they satisfy the Markov property. Therefore, denoting by $u_L(y, t;\, x, s+t)$ the transition density of the killed process from $(y, t)$ to $(x, s+t)$, we can exhibit the probability density of the random variable $W_t$ conditioned on $\{W_{s+t} = x,\; \tau_L > s+t\}$ as
$$f_{s,t,x}(y) \;=\; \frac{u_L(y, t)\;u_L(y, t;\, x, s+t)}{u_L(x, s+t)}, \qquad y < L(t),$$
which is a proper density. Now, letting $x$ tend to $L(s+t)$, that is, assuming the process $W$ dies at $s+t$, the function $f_{s,t,x}$ converges to a limit density,
$$f_{s,t}(y) \;=\; \lim_{x \uparrow L(s+t)} f_{s,t,x}(y) \;=\; \beta_{s,t}(y)\,\Big(1 - e^{-2 L(0)\,(L(t)-y)/t}\Big)\,\frac{(s+t)\,\big(L(t)-y\big)}{s\,L(0)}, \qquad y < L(t),$$
where the function $\beta_{s,t}$ denotes the density at time $t$ of the Brownian bridge pinned at $0$ at time $0$ and at $L(s+t)$ at time $s+t$. The preceding limit function represents the density of the probability that the Wiener process belongs to $[y, y+dy)$ at time $t$, knowing that it first hits $L$ at time $t + s$. We denote such a density $f_{s,t}$ and the corresponding distributions $\mathrm{FPB}(s, t)$.
The conditioned process can be defined formally and is called the first-passage Brownian bridge (Bertoin, Chaumont, & Pitman, 2003). The rigorous treatment of such a process is, however, beyond the scope of this appendix. We just mention that, over $[0, s+t]$, the first-passage bridge to a level $l > 0$ has the same law as $\big(l - B_{s+t-u}\big)_{0 \leq u \leq s+t}$, where $B$ denotes a three-dimensional Bessel bridge from $0$ to $l$. Since the three-dimensional Bessel process is defined as $\lVert \boldsymbol{W} \rVert$, the Euclidean norm of a three-dimensional Wiener process $\boldsymbol{W}$, it is straightforward to simulate random deviates from $\mathrm{FPB}$ when equipped with a method to simulate the law $\mathrm{BES}^3(s, t, x)$ of the three-dimensional Bessel bridge (see procedure 4).
[Procedure 4: sampling from the law $\mathrm{BES}^3(s, t, x)$ of the three-dimensional Bessel bridge.]
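Assuming the time-reversal representation stated above and a constant level $l$ in the driftless case, a single intermediate value of the first-passage bridge can be drawn through the three-dimensional Bessel bridge, itself realized as the norm of a three-dimensional Brownian bridge. The Python sketch below is illustrative only and is not a transcription of procedure 4; mapping it back onto the paper's barrier conventions is left as an assumption.

```python
import numpy as np


def bessel3_bridge(v, T, l, rng):
    """Value at time v of a BES(3) bridge from 0 (time 0) to l (time T).

    Realized as the Euclidean norm of a 3D Brownian bridge from the origin to
    the point (l, 0, 0): each coordinate at time v is gaussian with mean
    (v / T) * endpoint and variance v * (T - v) / T.
    """
    mean = np.array([l, 0.0, 0.0]) * (v / T)
    std = np.sqrt(v * (T - v) / T)
    return np.linalg.norm(mean + std * rng.normal(size=3))


def first_passage_bridge(u, T, l, rng):
    """Value at time u of a driftless Wiener path that first hits l at time T."""
    # Time-reversal representation: W_u = l - R_{T-u}, with R a BES(3) bridge
    # from 0 to l over [0, T].
    return l - bessel3_bridge(T - u, T, l, rng)


rng = np.random.default_rng(1)
sample = first_passage_bridge(u=0.3, T=1.0, l=2.0, rng=rng)
```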

Appendix B: Precision of Clock-Driven Simulation

Suppose we know that $n$ events happen independently at random within a time period $T$ and that we can record their occurrence only in $N$ time bins with precision $\delta = T/N$. If the probability of occurrence of a single event is uniform, the probability of an event happening in bin $k$ is $p_N = 1/N$. The precision $\delta$ resolves the $n$ events if the probability that two events fall in the same bin is small, or at least controlled.

The probability $P_N$ that the $n$ events fall in $n$ distinct bins is given by
$$P_N \;=\; \prod_{k=1}^{n-1}\left(1 - \frac{k}{N}\right), \tag{B.1}$$
which is increasing with $N$, as expected. Enforcing good confidence amounts to taking a confidence level $q$ such that $p = 1 - q \ll 1$ and choosing $N$ such that $P_N > q$. Since the union bound gives $1 - P_N \leq n(n-1)/(2N)$, for a given confidence $q$ it is enough to choose $N$ that satisfies the following criterion:
$$N \;\geq\; \frac{n(n-1)}{2\,(1-q)}.$$
Since expression B.1 shows that good confidence requires setting $N \gg n^2$, the upper bound $n(n-1)/(2N)$ on the collision probability is also a good approximation. Thus, the smallest $N$ that ensures good confidence $q$ is, in first approximation, $N \simeq n(n-1)/\big(2(1-q)\big)$.
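As a quick numerical check of this criterion (under the assumed values $q = 0.99$ and $n = 100$, chosen only for illustration), the following snippet compares the first-approximation bin count with the exact product B.1:

```python
import math


def separation_probability(n, N):
    """Exact probability B.1 that n uniform events occupy n distinct bins."""
    p = 1.0
    for k in range(1, n):
        p *= 1.0 - k / N
    return p


n, q = 100, 0.99                         # events to resolve, target confidence
N_min = math.ceil(n * (n - 1) / (2.0 * (1.0 - q)))
print(N_min)                             # 495000 bins in first approximation
print(separation_probability(n, N_min))  # ~0.990, consistent with q
```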

References

Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: Taming the beast. Nature Neuroscience, 3, 1178–1183.
Abeles, M., Bergman, H., Margalit, E., & Vaadia, E. (1993). Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. Journal of Neurophysiology, 70(4), 1629–1638.
Ayzenshtat, I., Meirovithz, E., Edelman, H., Werner-Reiss, U., Bienenstock, E., Abeles, M., et al. (2010). Precise spatiotemporal patterns among visual cortical areas and their relation to visual stimulus processing. Journal of Neuroscience, 30, 11232–11245.
Berry, M. J., Warland, D. K., & Meister, M. (1997). The structure and precision of retinal spike trains. Proceedings of the National Academy of Sciences, 94, 5411–5416.
Bertoin, J., Chaumont, L., & Pitman, J. (2003). Path transformations of first passage bridges. Electron. Comm. Probab., 8, 155–166.
Braitenberg, V., & Schüz, A. (1998). Cortex: Statistics and geometry of neuronal connectivity. New York: Springer.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J., et al. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience, 23, 349–398.
Bruno, R. M., & Sakmann, B. (2006). Cortex is driven by weak but synchronously active thalamocortical synapses. Science, 312(5780), 1622–1627.
Bryant, H., & Segundo, J. (1976). Spike initiation by transmembrane current: A white-noise analysis. Journal of Physiology, 260, 279–314.
Buonocore, A., Caputo, L., Pirozzi, E., & Ricciardi, L. M. (2010). On a stochastic leaky integrate-and-fire neuronal model. Neural Computation, 22, 2558–2585.
Burkitt, A. (2006a). A review of the integrate-and-fire neuron model: 1. Homogeneous synaptic input. Biological Cybernetics, 95, 1–19.
Burkitt, A. (2006b). A review of the integrate-and-fire neuron model: 2. Inhomogeneous synaptic input and network properties. Biological Cybernetics, 95, 97–112.
Caporale, N., & Dan, Y. (2008). Spike timing–dependent plasticity: A Hebbian learning rule. Annual Review of Neuroscience, 31, 25–46.
Cecchi, G. A., Sigman, M., Alonso, J.-M., Martínez, L., Chialvo, D. R., & Magnasco, M. O. (2000). Noise in neurons is message dependent. Proceedings of the National Academy of Sciences, 97, 5557–5561.
Cocco, S., Leibler, S., & Monasson, R. (2009). Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proceedings of the National Academy of Sciences, 106, 14058–14062.
Dayan, P., & Abbott, L. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.
Faisal, A. A., Selen, L. P. J., & Wolpert, D. M. (2008). Noise in the nervous system. Nat. Rev. Neurosci., 9, 292–303.
Fellous, J. M., Rudolph, M., Destexhe, A., & Sejnowski, T. J. (2003). Synaptic background noise controls the input/output characteristics of single cells in an in vitro model of in vivo activity. Neuroscience, 122, 811–829.
Fellous, J.-M., Tiesinga, P. H. E., Thomas, P. J., & Sejnowski, T. J. (2004). Discovering spike patterns in neuronal responses. Journal of Neuroscience, 24, 2989–3001.
Ferraina, S., Paré, M., & Wurtz, R. H. (2002). Comparison of cortico-cortical and cortico-collicular signals for the generation of saccadic eye movements. Journal of Neurophysiology, 87, 845–858.
Fiser, J., Chiu, C., & Weliky, M. (2004). Small modulation of ongoing cortical dynamics by sensory input during natural vision. Nature, 431, 573–578.
Gerstner, W., & Kistler, W. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge: Cambridge University Press.
Gerstner, W., & Naud, R. (2009). How good are neuron models? Science, 326, 379–380.
Goodman, D. F. M., & Brette, R. (2010). Spike-timing-based computation in sound localization. PLoS Comput. Biol., 6(11), e1000993.
Huber, D., Petreanu, L., Ghitani, N., Ranade, S., Hromadka, T., Mainen, Z., et al. (2008). Sparse optical microstimulation in barrel cortex drives learned behaviour in freely moving mice. Nature, 451, 61–64.
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Networks, 14, 1569–1572.
Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15, 1063–1070.
Izhikevich, E. M. (2006). Polychronization: Computation with spikes. Neural Computation, 18, 245–282.
Karatzas, I., & Shreve, S. E. (1991). Brownian motion and stochastic calculus (2nd ed.). New York: Springer-Verlag.
Knight, B. (1972). Dynamics of encoding in a population of neurons. Journal of General Physiology, 59, 734–766.
Kumar, A., Rotter, S., & Aertsen, A. (2008). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. Journal of Neuroscience, 28, 5268–5280.
Kumar, A., Rotter, S., & Aertsen, A. (2010). Spiking activity propagation in neuronal networks: Reconciling different perspectives on neural coding. Nat. Rev. Neurosci., 11, 615–627.
Lansky, P., & Ditlevsen, S. (2008). A review of the methods for signal estimation in stochastic diffusion leaky integrate-and-fire neuronal models. Biol. Cybern., 99, 253–262.
Lapicque, L. (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarization. Journal de physiologie et de pathologie générale, 9, 620–635.
Larson, E., Perrone, B. P., Sen, K., & Billimoria, C. P. (2010). A robust and biologically plausible spike pattern recognition network. Journal of Neuroscience, 30(46), 15566–15572.
Lindner, B., Garca-Ojalvo, J., Neiman, A., & Schimansky-Geier, L. (2004). Effects of noise in excitable systems. Physics Reports, 392(6), 321–424.
London, M., Roth, A., Beeren, L., Hausser, M., & Latham, P. E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature, 466(7302), 123–127.
Mainen, Z., & Sejnowski, T. (1995). Reliability of spike timing in neocortical neurons. Science, 268(5216), 1503–1506.
Manwani, A., & Koch, C. (1999a). Detecting and estimating signals in noisy cable structures, I: Neuronal noise sources. Neural Computation, 11, 1797–1829.
Manwani, A., & Koch, C. (1999b). Detecting and estimating signals in noisy cable structures, II: Information theoretical analysis. Neural Computation, 11, 1831–1873.
Mattia, M., & Giudice, P. D. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12, 2305–2329.
McDonnell, M. D., & Ward, L. M. (2011). The benefits of noise in neural systems: Bridging theory and experiment. Nat. Rev. Neurosci., 12, 415–426.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. J. Amer. Statist. Assoc., 44, 335–341.
Michael, J. R., Schucany, W. R., & Haas, R. W. (1976). Generating random variates using transformations with multiple roots. American Statistician, 30, 88–90.
Monasson, R., & Cocco, S. (2011). Fast inference of interactions in assemblies of stochastic integrate-and-fire neurons from spike recordings. Journal of Computational Neuroscience, 31, 199–227.
Morrison, A., Straube, S., Plesser, H. E., & Diesmann, M. (2006). Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation, 19(1), 47–79.
Prut, Y., Vaadia, E., Bergman, H., Haalman, I., Slovin, H., & Abeles, M. (1998). Spatiotemporal structure of cortical activity: Properties and behavioral relevance. Journal of Neurophysiology, 79, 2857–2874.
Reinagel, P., & Reid, R. C. (2000). Temporal coding of visual information in the thalamus. Journal of Neuroscience, 20, 5392–5400.
Reutimann, J., Giugliano, M., & Fusi, S. (2003). Event-driven simulation of spiking neurons with stochastic dynamics. Neural Computation, 15(4), 811–830.
Ricciardi, L. M., & Sacerdote, L. (1979). The Ornstein-Uhlenbeck process as a model for neuronal activity. Biological Cybernetics, 35, 1–9.
Robert, C. P., & Casella, G. (2004). Monte Carlo statistical methods (2nd ed.). New York: Springer-Verlag.
Rolls, E. T., & Deco, G. (2010). The noisy brain: Stochastic dynamics as a principle of brain function. New York: Oxford University Press.
Ros, E., Carrillo, R., Ortigosa, E. M., Barbour, B., & Agís, R. (2006). Event-driven simulation scheme for spiking neural networks using lookup tables to characterize neuronal dynamics. Neural Computation, 18, 2959–2993.
Rudolph, M., & Destexhe, A. (2006). Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Computation, 18, 2146–2210.
Rust, N. C., Schwartz, O., Movshon, J. A., & Simoncelli, E. P. (2005). Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6), 945–956.
Schreiner, C. E., & Raggio, M. W. (1996). Neuronal responses in cat primary auditory cortex to electrical cochlear stimulation. II. Repetition rate coding. Journal of Neurophysiology, 75, 1283–1300.
Seshadri, V. (1999). The inverse gaussian distribution. New York: Springer-Verlag.
Stroock, D. W., & Varadhan, S. R. S. (2006). Multidimensional diffusion processes. New York: Springer-Verlag.
Taillefumier, T., & Magnasco, M. (2010). A fast algorithm for the first-passage times of Gauss-Markov processes with Hölder continuous boundaries. Journal of Statistical Physics, 140(6), 1–27.
Thorpe, S., Delorme, A., & Van Rullen, R. (2001). Spike-based strategies for rapid processing. Neural Networks, 14, 715–725.
Tonnelier, A., Belmabrouk, H., & Martinez, D. (2007). Event-driven simulations of nonlinear integrate-and-fire neurons. Neural Computation, 19, 3226–3238.
Touboul, J., & Faugeras, O. (2011). A Markovian event-based framework for stochastic spiking neural networks. Journal of Computational Neuroscience, 31, 1–23. doi:10.1007/s10827-011-0327-y
Toups, J. V., Fellous, J.-M., Thomas, P. J., Sejnowski, T. J., & Tiesinga, P. H. (2011). Finding the event structure of neuronal spike trains. Neural Computation, 23(9), 2169–2208.
Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the theory of the Brownian motion. Phys. Rev., 36, 823–841.
Victor, J. D. (2005). Spike train metrics. Current Opinion in Neurobiology, 15, 585–592.
von Neumann, J. (1951). Various techniques used in connection with random digits. National Bureau of Standards, Applied Math Series, 11, 36–38.
Webber, N., & Ribeiro, C. (2003). A Monte Carlo method for the normal inverse gaussian option valuation model using an inverse gaussian bridge (Tech. Rep. 5). Society for Computational Economics.
Yang, Y., DeWeese, M. R., Otazu, G. H., & Zador, A. M. (2008). Millisecond-scale differences in neural activity in auditory cortex can drive decisions. Nat. Neurosci., 11, 1262–1263.

Notes

1

The weights and delays can be taken as independent random variables without changing the analysis.

2

Actually, it is enough to choose the initial times and the propagation delays from distributions that are absolutely continuous with respect to the Lebesgue measure on their natural space of definition.

3

Notice that $p = 1 - q$ tends to zero for increasing confidence of separation, so that $N$ diverges.