Abstract

Delivery of neurotransmitter at a synapse produces a current that flows through the membrane and is transmitted to the soma of the neuron, where it is integrated. The decay time of the current depends on the type of synaptic receptor and ranges from a few milliseconds (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of the variety of synaptic timescales, several of them coexisting in the same neuron, is at present not understood. A prime question is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., not required to be gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). The theory determines with exact analytical expressions the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants be longer than the mean interspike interval or that the noise have small variance. The distribution of the potential for general IF neurons is also characterized. Our results provide powerful analytical tools that can allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.

1.  Introduction

A neuron communicates with other neurons by generating synaptic currents through the corresponding synapses. The nature of these events depends on the presynaptic neurotransmitter and the postsynaptic receptors. Several types of receptors can coexist in the same neuron, each with its characteristic timescale. Within the excitatory class, AMPA-type synaptic receptors open during 1–5 ms (Silver, Traynelis, & Cull-Candy, 1992; Barbour, Keller, Llano, & Marty, 1994; Umemiya, Senda, & Murphy, 1999; Angulo, Rossier, & Audinat, 1999; Zamanillo et al., 1999), while the activation of NMDA receptors lasts for about 100 ms (see, e.g., Umemiya et al., 1999; Myme, Sugino, Turrigiano, & Nelson, 2003). Both synaptic receptors are activated by release of neurotransmitter glutamate from glutamatergic presynaptic cells. Similarly, there are also fast and slow inhibitory synapses— ∼ 5 to 10 ms (Xiang, Huguenard, & Prince, 1998; Banks, Li, & Pearce, 1998; Okada, Onodera, van Renterghem, & Takahashi, 2000) and ∼ 100 ms (Otis, De Koninck, & Mody, 1993), which are activated by release of GABA from GABAergic presynaptic cells. Therefore, spikes arriving at the presynaptic terminals can initiate a variety of synaptic currents on the postsynaptic neuron with different time courses and lasting for quite different time intervals. The variety in duration of spike aftereffects on postsynaptic neurons could have important computational consequences, because it could allow the same information to be present in the neuron at different timescales. In a similar way, it could provide a basis for transmitting and combining information carried at several temporal resolutions.

In addition, the effect of an impinging spike on the membrane potential of a neuron depends on the membrane time constant of the neuron. While in resting conditions the membrane time constant is quite large (τm ∼ 20 ms; see, e.g., Paré, Shink, Gaudreau, Destexhe, & Lang, 1998), during intense presynaptic background activity or intense stimulation its value can be reduced severalfold (Bernander, Douglas, Martin, & Koch, 1991; Paré et al., 1998; Destexhe & Paré, 1999; Borg-Graham, Monier, & Frégnac, 1998; Hirsch, Alonso, Reid, & Martinez, 1998; Anderson, Carandini, & Ferster, 2000). Thus, depending on the background activity and the nature of the stimulation, the same synapse can produce different effects on the neuron. It is then reasonable to consider the synaptic time constants τs in relation to the effective membrane time constant: what matters for the neuron's behavior is the ratio τs/τm. According to this idea, synaptic filters can be classified as slow or fast, depending on whether that ratio is larger or smaller than one, respectively.

The above considerations imply that it is important to know how the presence of synaptic filters with timescales longer or shorter than the membrane time constant affects the neuron's firing statistics. Previous work on leaky integrate-and-fire (LIF) neurons has determined analytically their firing rate when synapses have long time constants (Moreno-Bote & Parga, 2004), as well as when they have short time constants (Ricciardi, 1977; Brunel & Sergi, 1998; Fourcaud & Brunel, 2002; La Camera, Giugliano, Senn, & Fusi, 2008). By interpolating between these two limits, an analytical expression for the firing rate exists that determines its value for all τs (Moreno-Bote & Parga, 2004). Neurons with both fast and slow synaptic filtering have also been studied in Moreno-Bote and Parga (2004). Further developments have addressed the case of conductance-based IF neurons (Moreno-Bote & Parga, 2005) and the effect of input correlations on a pair of LIF neurons (Moreno-Bote & Parga, 2006). The expressions for the firing rate are exact in the specified limits of short or long τs compared to τm and do not require further assumptions about the amplitude of the noise. A related important issue is to know whether the ratio between synaptic and membrane time constants determines the operating regime of the neurons and their computational capabilities. For instance, it is known that the firing variability depends on that ratio (Svirskis & Rinzel, 2000; Moreno-Bote & Parga, 2004; Muller, Buesing, Schemmel, & Meier, 2007; Chizhov & Graham, 2008). Also, in neural networks in which the effective membrane time constant of the neurons can become very short, it would be very useful to have analytical predictions for the firing rate (Shelley, McLaughlin, Shapley, & Wielaard, 2002; Moreno-Bote & Parga, 2005; Cai, Rangan, & McLaughlin, 2005; Apfaltrer, Ly, & Tranchina, 2006).

Here we introduce a theory to describe the firing rate of general IF neurons receiving arbitrarily filtered inputs, which extends previous results for LIF neurons with gaussian inputs (Moreno-Bote & Parga, 2004, 2005, 2006; Brunel & Sergi, 1998; Fourcaud & Brunel, 2002). The formalism is presented in a detailed, didactic manner along with a consideration of useful examples. We first derive the expressions for the firing rate and spike correlation function in a qualitative way using the adiabatic approach introduced in Moreno-Bote and Parga (2004). The formal derivation of the expression for the firing rate valid for arbitrary IF neurons with arbitrary input structure in the limit of long synaptic timescales, with or without fast filters, is provided in the appendix (finer details for the LIF neuron case are presented in the Supporting Information, available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2010.06-09-1036). Then we analyze the expressions of the firing rate and correlation function for LIF neurons and use them to predict the input-output transfer function of individual neurons and the synchronous firing pattern of pairs of neurons receiving both common and independent sources of inputs. We then apply the formalism to describe the firing rate of quadratic IF (QIF) and noise-thresholded IF (NTIF) neurons. Finally, we provide an exhaustive list of the analytical expressions for the firing rate and correlation function of general IF neurons and for the particular cases of LIF, QIF, and NTIF neurons.

2.  Results

2.1.  The Adiabatic Approach.

We are interested in describing the firing statistics of simple but realistic neuron models receiving temporally correlated inputs. In this section, we study in a general way the response properties of neurons with randomly varying inputs. We apply the results to completely determine the firing rate of IF neurons driven by stochastic currents with a long correlation timescale. The firing rate in this limit, called the adiabatic firing rate, is particularly simple and can be derived by qualitative means. Thus, we leave its formal derivation for the appendix. The adiabatic firing rate is compared to another candidate simple expression, and we show that the latter gives worse fits of the simulated data. Then the case of fast and slow stochastic inputs is considered. We finally show that our formalism can be extended to study the correlated firing of a pair of neurons receiving common as well as independent sources of noise.

2.1.1.  The Adiabatic Firing Rate.

We start by considering a neuron model in which the firing rate as a function of a constant input current I can be computed. We call this quantity ν(I). Under constant stimulation and for deterministic neurons, the rate ν(I) describes completely the statistics of the output spike train except for an initial phase: the output spike train is a periodic pattern with interspike intervals of length T(I) = 1/ν(I). The firing rate for a fixed input current can be very easily calculated for IF-like neurons. However, this idea can be extended to any other neuron model or real neurons in which the function ν(I) can be computed numerically or experimentally.

Because we are ultimately interested in the response of neurons to stochastic inputs, the steady-state description alone does not suffice, yet it can be easily extended to the case in which inputs change slowly compared to the dynamics of the neuron under consideration—for instance, to an LIF neuron with membrane time constant τm and gaussian white noise filtered with synaptic time constant τs ≫ τm. For more complex neurons, τs should also be larger than all other timescales present in the system. We will show that although the timescale separation condition could seem restrictive, the equations obtained are applicable even when the input changes as fast as the dynamics of the neuron.

Let us assume that the condition that the neuron's dynamics is faster than the synaptic time constant is satisfied. Then, during a time interval Δt shorter than τs, the current I(t) will be reasonably constant. Therefore, during that interval, the neuron will fire with a constant rate ν(I(t)) and ν(I(t)) × Δt spikes will be emitted. Since this spike count can be smaller than one, ν(I(t)) × Δt < 1, the product ν(I(t)) × Δt should be interpreted as a firing probability rather than a literal spike count.

Finally, let P(I) be the distribution of input currents, not necessarily time independent. For instance, in an Ornstein-Uhlenbeck process, the distribution will be a time-independent gaussian (Risken, 1989), but in general it can be skewed, bimodal, or flat, or have any other shape, and it can depend on time. Hence, the probability density that the neuron emits a spike can be computed by averaging the rate ν(I(t)) with the probability distribution of currents as
ν = ∫ dI P(I) ν(I)
2.1
We call this expression the adiabatic firing rate, in analogy with the timescale separation technique introduced in the early developments of quantum mechanics by Born and Oppenheimer (1927) to deal with the slow motion of the nuclei in molecules. (See also Risken, 1989, for later applications of this idea in the field of stochastic dynamical systems to eliminate fast variables.) This equation shows that the firing rate of a neuron with slow stochastic inputs can be estimated using the input-to-rate transfer function of the neuron for stationary inputs and the distribution of the inputs. The calculations required to compute equation 2.1 are illustrated in Figure 1B. A complete proof of this result for general IF models and arbitrary forms of slow stochastic input processes is given in the appendix. Note that the current does not need to be a scalar.
Figure 1:

Adiabatic firing rate for an IF neuron driven by a noisy current filtered by slow synapses (B) and when, in addition, there is fast noise (C). (A) Schematic of a neuron receiving a noisy input filtered by synapses, I(t), and its response. Here, the distribution of the current is stationary (same at all times), although this is not necessary in general. (B) A slowly fluctuating current (left) generates a slowly fluctuating firing probability (right) in the neuron. The firing rate of the neuron can be obtained by time-averaging the instantaneous firing probability ν(t), equation 2.2. Alternatively, the firing rate can be analytically computed (middle) by integrating over the current the product of the probability distribution of the current, P(I), and the steady-state firing rate of the neuron receiving constant current I, ν(I), equation 2.1. (C) When there is also fast noise, the rate with averaged fast noise (solid line), νfast(I), has to be used instead of the noise-free firing rate ν(I) (dotted line). Typically νfast is a smooth function of I.


The adiabatic expression of the firing rate is simple and generally valid under the following conditions. First, the neuron should have a known sustained response ν(I) for constant input current. Second, the current has to be a slow enough stochastic process with known distribution P(I). For stationary input statistics, P(I) is the steady-state probability distribution of the current.

When the input statistics is time independent with finite correlation time, equation 2.1 is equivalent to the temporal average of ν(I(t)),
ν = (1/T) ∫_0^T dt ν(I(t))
2.2
Here, the time window T over which the firing rate is averaged is much longer than the correlation time so that many independent realizations of the current I(t) occur. In this work we will focus on inputs with stationary statistics with finite correlation time.

2.1.2.  A Suboptimal Alternative Expression for the Firing Rate.

One could argue that equation 2.1 is not the only plausible way of estimating the firing rate. In fact, another plausible estimate of the firing rate can be built as follows. First, when the neuron receives a slow current I, it will emit spikes at intervals T(I) = 1/ν(I). This quantity is calculated as the inverse of the firing rate ν(I) (since for fixed current, the spike train is periodic). Then we could estimate the mean interspike interval (ISI), denoted T, by averaging T(I) with the known probability P(I) as
T = ∫ dI P(I) T(I)
2.3
From this mean ISI, the firing rate of the neuron can be estimated as ν = 1/T, which is different from that given by the adiabatic expression, equation 2.1. We call equation 2.3 the fake adiabatic expression for the firing rate.
Which is the correct equation? In this letter, we prove that equation 2.1 provides the correct estimate of the firing rate for slow stochastic inputs. Equation 2.3 (with ν = 1/T) or variations of it (see equation 2.22), on the contrary, deviate systematically from the true firing rate. This is because averaging T(I) with P(I) introduces biases in the estimate of the mean ISI due to the oversampling of long ISIs (i.e., T(I)s generated with weak input currents) relative to short ISIs (i.e., T(I)s generated with strong input currents), a problem known as biased sampling (Cox & Lewis, 1966; Middleton, Chacron, Lindner, & Longtin, 2003). However, when the bias in equation 2.3 is corrected appropriately, the original expression for the firing rate in equation 2.1 is recovered, as expected. To see this, note that the implicit assumption in equation 2.3 that the current I is constant during each ISI, with a duration T(I), leads to the result that the distribution of I does not follow the desired distribution P(I), but rather cT(I)P(I), where c is a normalization constant. This suggests that the correction for the bias in equation 2.3 consists of replacing the distribution P(I) in equation 2.3 by CP(I)T−1(I), where C is a normalization constant. With this replacement, the currents are distributed according to P(I). Hence, the corrected expression for the mean ISI becomes
T = C ∫ dI P(I) T^{−1}(I) T(I) = [∫ dI P(I) ν(I)]^{−1}
2.4
which is identical to the adiabatic expression of the firing rate in equation 2.1. The above reasoning, however, requires that T(I) is finite for all I, which is a very restrictive condition. For instance, for LIF neurons in both the sub- and suprathreshold regimes, there are values of the current for which T(I) becomes infinite (see section 2.2), making the derivation presented above inappropriate in this case. On the contrary, we will show that equation 2.1 holds true even when T(I) becomes infinite for some set of currents (i.e., ν(I) becomes zero). In the following we will use equation 2.3 and a variation of it, equation 2.22, to highlight how large the biased-sampling effect is on the estimation of the firing rate.

2.1.3.  Fast Noise and Slow Noise.

An important case arises when fast currents are present. For instance, AMPA synaptic receptors receiving Poisson spike trains will produce current fluctuations with a correlation timescale of a few milliseconds, which are better modeled as fast rather than slow noise.

Our theory can also be extended to include this case. Let νfast(I) be the firing rate of a neuron receiving a constant current I and where all sources of fast noise have been averaged. The function νfast(I) does not have a hard threshold below which firing is forbidden, but it is a rather smooth function of the input I (see Figure 1C). This is because the presence of fast noise allows firing even when I is below the firing threshold of the neuron. Under this condition, the firing rate of a neuron receiving both fast noise and slow noise with a known distribution P(I) can be calculated as
ν = ∫ dI P(I) νfast(I)
2.5
as shown in the appendix.

2.1.4.  Cross-Correlation Function.

The formalism that we have described is not limited to the study of the first-order statistics of the neuron's firing, but it can also be extended to account for the second-order statistics. Here we find the two-point correlation function of the output spike trains of a pair of IF neurons receiving arbitrary forms of correlated and independent inputs. The equations are derived in an intuitive way. Finally, simpler equations are obtained for weakly correlated signals.

We now consider two neurons, not necessarily identical. They fire with rates ν1(Itot,1) and ν2(Itot,2) when they receive constant input currents Itot,1, Itot,2. Let us assume that the neurons receive a common stochastic current, Ic(t), as well as independent currents I1(t) and I2(t), as shown in Figure 2A. Therefore, the total currents to neurons 1 and 2 are Itot,1(t) = I1(t) + Ic(t) and Itot,2(t) = I2(t) + Ic(t), respectively. Since the two neurons have a common stochastic input, Ic(t), they will fire in a correlated way: if neuron 1 emits a spike at time t, neuron 2 will be more or less likely to fire a spike at time t′ than expected by chance (chance probability here means the firing rate of neuron 2). The cross-correlation function of the output spike trains of the neurons is defined as
C(t, t′) = 〈 Σi Σj δ(t − ti) δ(t′ − tj) 〉
2.6
where ti(j) are the spike times from neuron 1(2), the sums extend over all output spikes, and the average is across all possible realizations of the output spikes (see, e.g., Riehle, Grun, Diesmann, & Aertsen, 1997; Bair, Zohary, & Newsome, 2001). The cross-correlation function describes the synchronization pattern between the two spike trains up to second-order statistics, and it expresses the joint probability density that neuron 1 fires at time t and that neuron 2 fires at time t′. When the two neurons fire independently, the cross-correlation function becomes the product of their firing rates, ν1ν2. In general, however, the cross-correlation function is different from the pure product of the firing rates of the two neurons.
Figure 2:

Cross-correlation between the output spike trains of a pair of neurons receiving independent and common noisy inputs. (A) Two IF neurons with common sources of noise will fire in a correlated way; the probability of having a spike at time t from neuron 1 and another spike at time t′ from neuron 2 will not be the product of the instantaneous firing rates of the individual neurons. (B) To compute the correlation function of the output spike trains, C(t, t′), one has to average the instantaneous firing rates of neuron 1 receiving the currents I1 and Ic at time t, and the rate of neuron 2 receiving the currents I2 and Ic at time t′, over the distributions of the currents, equation 2.7.


As usual, we assume that the current fluctuations are slower than the membrane time constant of the neurons. The two-point probability density of the common current is some known function P(Ic, t; I′c, t′), which specifies the probability density of having the common current with value Ic at time t and with value I′c at time t′ (primes denote quantities at time t′).

In the adiabatic approach, the two-point cross-correlation function, equation 2.6, can be expressed as
C(t, t′) = ∫ dI1 dI2 dIc dI′c P(I1) P(I2) P(Ic, t; I′c, t′) ν1(I1 + Ic) ν2(I2 + I′c)
2.7
This equation can be understood as follows. Neuron 1 receives a current I1 + Ic at time t, while neuron 2 receives the current I2 + I′c at time t′, as shown in Figure 2B. Since the current fluctuations are slow, at those times the neurons fire with probabilities ν1(I1 + Ic) and ν2(I2 + I′c), respectively. Equation 2.7 simply states that the two-point correlation function of the output spike trains is the average of the product of the instantaneous firing rates of the two neurons evaluated at times t and t′. This average of instantaneous firing rates over synaptic currents approximates the average over stochastic realizations of the spikes in equation 2.6.
The integral expression in equation 2.7 can be rewritten in a simpler way. Since the currents I1 and I2 are independent, the integrals of those two variables with the factorized distribution P(I1)P(I2) in equation 2.7 can be computed first, obtaining
C(t, t′) = ∫ dIc dI′c P(Ic, t; I′c, t′) ν̄1(Ic) ν̄2(I′c)
2.8
where ν̄i(Ic) is the firing rate of neuron i (i = 1, 2) averaged over its independent current Ii for fixed common current Ic.

This intuitive derivation of the cross-correlation function in the adiabatic approach is presented here for the first time, and it can be shown to be identical to the one obtained in Moreno-Bote and Parga (2006) for the case of LIF neurons receiving filtered white noise (see below). It is worth emphasizing that equation 2.7 can be applied to more general models of spiking neurons and to rather general forms of noise distributions and noise correlation structure.

Equations 2.7 and 2.8 become particularly simple when the fluctuations of the common input are a small fraction of the total current fluctuations to the neurons. Then the averaged firing rates can be expanded around the mean value of the common current, μc, in powers of Ic − μc and I′c − μc; taking the linear approximation, we obtain
ν̄i(Ic) ≈ ν̄i(μc) + ν̄′i(μc) (Ic − μc)
2.9
(here ν̄′i is the derivative of ν̄i evaluated at the mean current μc). After averaging over Ic and I′c we find
C(t, t′) = ν̄1(μc) ν̄2(μc) + ν̄′1(μc) ν̄′2(μc) ∫ dIc dI′c P(Ic, t; I′c, t′) (Ic − μc)(I′c − μc)
2.10
By noting that the integral in the second term on the right-hand side is the cross-correlation function of the common input, denoted CI,c(t, t′), equation 2.10 can be written simply as
C(t, t′) = ν̄1(μc) ν̄2(μc) + ν̄′1(μc) ν̄′2(μc) CI,c(t, t′)
2.11
The first term in the sum is the chance probability of observing spikes emitted at times t and t′ from neurons 1 and 2, respectively. The second term expresses the excess probability above that expected by chance that the spikes are emitted at those times. It is concluded that the autocorrelation of the common fluctuating input is linearly transformed into the cross-correlation of the neurons' output spike trains. For instance, for an Ornstein-Uhlenbeck process with timescale τs and variance σ2I,c one obtains
CI,c(t, t′) = σ2I,c e^{−|t−t′|/τs}
2.12
and therefore the cross-correlation function of the output spike trains will also be an exponential with the same timescale and whose amplitude increases with the square of the common noise amplitude. (It is easy to see that the cross-correlation function depends in general only on the time difference t − t′ for stochastic inputs with stationary statistics.)

It is important to note that equations 2.7 and 2.8 apply not only to the case of small common noise amplitude but also to large amplitudes, since the adiabatic approach is nonlinear in σ2I,c.

2.2.  Firing Rate and Correlations of LIF Neurons.

Here we summarize the analytical results for the case of an LIF neuron that follow from the general expressions presented above and are formally derived in the appendix. A more detailed exposition of the LIF neuron case is provided in the Supporting Information.

2.2.1.  LIF Neurons with Slow Filters.

We start by considering the case of synaptic receptors with long time constants. This case is the relevant one to study the dynamics of neurons in the so-called high-conductance regime, in which the membrane time constant can become shorter or comparable to the synaptic time constants. This case naturally arises also when strongly fluctuating GABAA synaptic currents pass through a neuronal membrane with relatively short τm. It also accounts for the case of neurons strongly innervated by NMDA or GABAB receptors, hypothesized to be crucial to stabilize working-memory states (Wang, 1999). Here we will focus on synaptic receptors with a single timescale, while the more general case with two or more slow synapses with different timescales is considered in the Supporting Information.

The membrane potential V of an LIF neuron obeys
dV(t)/dt = −V(t)/τm + I(t)
2.13
where τm is the membrane time constant and I(t) is the synaptic current. In the model, a spike is evoked when V reaches the threshold Θ, and then the voltage is reset to H. Without loss of generality, the resting potential is set at V = 0. We take the absolute refractory period to be zero.
When the number of presynaptic spikes is large and the evoked postsynaptic potentials are small compared to the voltage threshold, a sum of input Poisson spike trains can be approximated using the diffusion approximation by a white noise with mean μ and deviation σ (see Supporting Information, section 1, and Ricciardi, 1977), generating a current given by Brunel and Sergi (1998):
τs dI(t)/dt = −(I(t) − μ) + σ η(t)
2.14
Here, η(t) is a white noise process with zero mean and unit variance, 〈η(t)〉 = 0 and 〈η(t)η(t′)〉 = δ(t − t′), μ is the mean current, and σ2I = σ2/2τs is the variance of the current. The filter introduces exponential correlations in the current with a correlation time τs,
〈(I(t) − μ)(I(t′) − μ)〉 = (σ2/2τs) e^{−|t−t′|/τs}
2.15
and therefore the noisy input to the LIF neuron cannot be described by a white noise process. The process defined in equation 2.14 is known as the Ornstein-Uhlenbeck process (Risken, 1989). Note that the variance of the current can be obtained from the correlation function when t = t′.
The adiabatic expression of the firing rate for the LIF neuron defined in equations 2.13 and 2.14 can be built using equation 2.1 as follows. First, the distribution of the current is a gaussian with mean μ and variance σ2I = σ2/2τs,
P(I) = (2πσ2I)^{−1/2} exp[−(I − μ)^2/(2σ2I)]
2.16
Second, solving equation 2.13 for constant current I with initial condition H and final condition Θ leads to the expression of the instantaneous firing rate
ν(I) = [τm ln((Iτm − H)/(Iτm − Θ))]^{−1}
2.17
for I>Imin = Θ/τm and zero otherwise. Inserting the two above quantities into equation 2.1, one gets that the firing rate is
ν = ∫_{Imin}^{∞} dI (2πσ2I)^{−1/2} exp[−(I − μ)^2/(2σ2I)] [τm ln((Iτm − H)/(Iτm − Θ))]^{−1}
2.18
The system defined in equations 2.13 and 2.14 can be linearly transformed into an equivalent one written in terms of a normalized voltage and the normalized current z = (I − μ)/σI. The normalized current z is distributed as a gaussian with zero mean and unit variance (see the appendix). In the new variables, the reset and threshold potentials are rescaled accordingly, and the firing rate in equation 2.18 becomes
ν = ∫_{zmin}^{∞} dz p(z) ν0(z)
2.19
with
ν0(z) = [τm ln(((μ + σI z)τm − H)/((μ + σI z)τm − Θ))]^{−1}
2.20
where p(z) is a gaussian with zero mean and unit variance, σI = σ/√(2τs) is the standard deviation of the current, zmin = (Imin − μ)/σI is the value of the normalized current below which the noiseless neuron does not fire, and ν0(z) is taken to be zero for z < zmin.
Expanding equation 2.19 in powers of ϵ in the suprathreshold regime (μτm>Θ), we find that the firing rate up to second order is
formula
2.21
The leading term is the rate of an LIF neuron driven by a noiseless current with intensity μ. An identical expression for the rate in the suprathreshold regime is found in the Supporting Information by expanding the Fokker-Planck equation (FPE) associated with the variables (x, z) in powers of ϵ. We also show in the Supporting Information that the firing rate in equation 2.19 does not admit an expansion in powers of ϵ in the subthreshold regime (μτm < Θ). This indicates that a naive expansion of the solution of the FPE in powers of ϵ will not work for all regimes. In the appendix for the general case and in the Supporting Information for the LIF neuron, we show how to regularize the expansion and find an asymptotic solution valid for all regimes. The zeroth-order term in the regularized expansion of the exact rate of an LIF neuron equals equation 2.19.

The prediction of the firing rate given by the adiabatic approach, equation 2.19, is compared with simulation results in Figure 3. In the left panel, the noise σ2 is kept fixed. The adiabatic firing rate (bottom line) becomes exact when τs is much longer than the membrane time constant of the neuron, but it also provides a good approximation when τs is comparable to τm = 10 ms. Only the case of subthreshold neurons is shown here, but similar fits are found for suprathreshold neurons. In the right panel, the noise has been increased linearly with the timescale of the noise as σ2 = σ20τs for some fixed σ20. Since the adiabatic firing rate depends on the noise level only through the ratio σ2/τs (recall the definitions of ϵ, zmin, and the normalized thresholds), by increasing σ2 linearly with τs, the adiabatic firing rate does not change (horizontal lines). The simulation results show that the firing rate approaches the adiabatic limit when τs becomes long and that the adiabatic rate provides a good prediction when τs ∼ 2τm = 20 ms, and an acceptable prediction even when τs = τm = 10 ms (the adiabatic rate accounts for 80% of the simulated rate on average in the latter case). These comparisons also show that the adiabatic approach does not require that the noise amplitude be small.

Figure 3:

Firing rate for an LIF neuron with slow synapses. (Left) The adiabatic firing rate (bottom line), equation 2.19, is compared to the fake adiabatic rate (top line), equation 2.22, and simulation results (data points). The last rate provides very poor predictions. Parameters are μ = 70 Hz, σ2 = 40 Hz, H = 0, Θ = 1 and τm = 10 ms. (Right) The adiabatic firing rate as a function of τs remains constant (horizontal lines) when the amount of noise increases as σ2 = σ20τs for fixed σ20. The adiabatic expression is good even when τs ∼ 2τm = 20 ms. The data points approach the analytical limit as τs increases. When H is very close to Θ, the firing rate converges more slowly to the adiabatic limit (not shown). As τs approaches zero, the total noise σ2 approaches zero. In that limit, the absence of noise produces a lack of firing in the subthreshold regime, as shown by the simulation. Parameters from bottom to top: μ = 60 Hz, σ20 = 1500 Hz2; μ = 70 Hz, σ20 = 2500 Hz2; μ = 70 Hz, σ20 = 5000 Hz2; μ = 80 Hz, σ20 = 5000 Hz2. Other parameters are as in the left panel.


2.2.2.  Range of Validity of the Adiabatic Approach.

It is important to note that the mean ISIs obtained in the left panel of Figure 3 are very long compared to the synaptic time constant used (e.g., a rate of 10 Hz equals a mean ISI of 100 ms, which is much longer than the synaptic time constant at that point, 20 ms). Therefore, our theory does not require that the fluctuations live for a long period of time compared to the mean ISI of the neuron (∼100 ms), but rather that they live for a time of the order of its membrane time constant (∼10 ms).

2.2.3.  Comparison with the Fake Adiabatic Approach.

The adiabatic expression is much better than other alternatives. For instance, we find that equation 2.3 predicts a zero firing rate in the subthreshold regime of LIF neurons for all values of τs, since there are values of the current for which the ISI becomes infinite. Although a zero rate is correct in the long τs limit, and it is also predicted by the adiabatic firing rate, this shows that equation 2.3 is strictly valid only in that limit. An improved version of the expression that does not give a trivially erroneous result in the subthreshold regime is one in which the probability distribution of I is renormalized to include only those I>Imin for which the ISI is finite, T(I) < ∞, as
T = [∫_{Imin}^{∞} dI P(I)]^{−1} ∫_{Imin}^{∞} dI P(I) T(I)
2.22
However, in this case too, the prediction is worse than that given by the adiabatic rate, as shown in the left panel of Figure 3 (top line). Smaller but still substantial disagreements are obtained in the suprathreshold regime.

2.2.4.  Equivalent But Computationally Faster Implementation of the Adiabatic Firing Rate.

The adiabatic expression of the firing rate for an LIF neuron in the long τs limit, equation 2.19, is appealing because of its simplicity. It is also very efficient computationally, since it involves the calculation of a single integral, requiring only a summation of the order of 10³ terms.

However, it is possible to enhance further the computational efficiency of the adiabatic firing rate (Moreno-Bote & Parga, 2006). The fast expression for the firing rate of an LIF neuron is
formula
2.23
where . Unlike equation 2.19, which involves an integral, equation 2.23 requires only a sum of a series that can be cut at n ∼ 200 (using t = 200 ms), giving high accuracy. This results in almost an order of magnitude improvement in computation speed in comparison with the integral form, equation 2.19. Equations 2.19 and 2.23 predict the same firing rate.

2.2.5.  Distribution of Currents Conditioned to Output Spike Times.

The distribution of z (normalized currents) at the output spike time, denoted p(z|spike), can be computed directly from the probability density flow of the voltage at threshold (see equations A.20 and A.21 in the appendix; see also Supporting Information). It can be written as
p(z|spike) = C ν0(z) p(z)
2.24
defined for z > zmin, where C is a normalization constant. Since ν0(z) is zero for z < zmin, the distribution of z conditioned on the spike times is skewed toward positive values of the fluctuations. This distribution has been derived in Moreno-Bote and Parga (2004, 2006) and Schwalger and Schimansky-Geier (2008).

2.2.6.  The Diffusion Approximation.

In Figure 4 we check the validity of the diffusion approximation, equation 2.14. For that purpose, we have generated excitatory and inhibitory Poisson trains with firing rates νE and νI and predicted the output firing rate of an LIF neuron with a single synaptic time constant using formula 2.19. The parameters μ and σ2 can be written as (see Supporting Information and Ricciardi, 1977)
formula
2.25
The predictions are in good agreement with the simulation results even for values of τs lower than τm.
Figure 4:

Output firing rate of an LIF neuron driven by Poisson trains as a function of τs. The parameters for the Poisson input (see equation 2.25) are JE = 0.02, νE = 4500 Hz (we have taken NE = 1 and no inhibition) for the bottom curve, JE = 0.03, νE = 3000 Hz for the middle curve, and JE = 0.03, νE = 5000 Hz, JI = 0.02, νI = 1000 Hz (NE = NI = 1) for the upper curve. With these parameters, we have, respectively, σ2 = 1.7, 2.7, and 5.7 Hz, and μ = 90 Hz in all cases. The other parameters are as in Figure 3 (τm = 10 ms). Full lines are obtained with equation 2.19.


2.2.7.  Short τs Limit and Interpolation Procedure.

Until now, we have determined the firing rate of an LIF neuron in the presence of a slow filter. It would be desirable to know the firing response of these neurons in the opposite limit in which the synaptic time constants are short but nonzero. Here we describe how to estimate the firing rate in the presence of a single filter characterized by any value of τs (Moreno-Bote & Parga, 2004).

A general analytical expression to compute the rate of crossing an absorbing barrier for an arbitrary nonlinear system driven by colored noise with short timescale τs was first found by Doering, Hagan, and Levermore (1987). The authors showed, in particular, that the correction to the rate relative to the rate with white noise is of order √τs. This technique has been applied to compute the firing rate of an LIF neuron with a fast but finite synaptic timescale (Brunel & Sergi, 1998; Fourcaud & Brunel, 2002), obtaining
formula
2.26
where the quantities involved are defined as in equation 2.39 below.
We can now interpolate between the two limits, equations 2.19 and 2.26, by introducing higher orders in √(τs/τm) (Moreno-Bote & Parga, 2004). At short τs we use
ν(τs) = ν(0) + A (τs/τm)^{1/2} + B (τs/τm) + C (τs/τm)^{3/2}
2.27
where A is the coefficient of the correction term in equation 2.26, while at long τs we employ equation 2.19. Both limits are joined at an intermediate value of the synaptic time constant, τs,inter ∼ τm, and B and C are set to obtain a continuous and differentiable interpolating curve at τs = τs,inter. The exact value of τs,inter does not alter the interpolation curve much, and it can be safely taken as a constant independent of the input parameters in the subthreshold regime. When the input is in the suprathreshold regime, a somewhat larger τs,inter has to be chosen, but again it does not depend much on the values of the parameters.

2.2.8.  Cross-Correlogram of Pairs of Neurons with Synaptic Filters.

Another application of our theory refers to the correlation pattern of two neurons in response to independent and common inputs. We consider two LIF neurons whose voltages obey the equations
dVi(t)/dt = −Vi(t)/τm + Ii(t) + Ic(t),   i = 1, 2
2.28
Each neuron receives an independent current Ii(t) (i = 1, 2) and a common current Ic(t). The latter can result from shared presynaptic inputs, but also from different presynaptic inputs that are themselves correlated. The independent and common currents are described by the equations
τs dIi(t)/dt = −(Ii(t) − μind) + σind ηi(t),   i = 1, 2
τs dIc(t)/dt = −(Ic(t) − μc) + σc ηc(t)
2.29
where the ηs are independent white noise processes with zero mean and unit variance. The currents are therefore colored gaussian noises with timescale τs, with mean μind and variance σ2ind/2τs for the independent components and mean μc and variance σ2c/2τs for the common component. Note that each neuron receives inputs with total mean μ = μind + μc and total variance σ2 = σ2ind + σ2c. The two neurons do not need to be identical, but for simplicity, we consider only identical cells here.
The synchronous firing pattern between the two LIF neurons is described by the cross-correlation function of the output spike trains, C(t, t′), which in the adiabatic approach is given by equation 2.7. Because the inputs have stationary statistics, we can write C(Δ) ≡ C(t, t + Δ). Introducing normalized variables u1 and u2 for the currents at times t and t + Δ and integrating out two of the current variables, equation 2.7 becomes
formula
2.30
where ν0(u) is as in equation 2.19, P(u2, Δ|u1) is a gaussian distribution over the variable u2,
formula
2.31
and p(u1) is a gaussian with mean zero and unit variance.
An equivalent expression for the cross-correlation function of the output spike trains for two LIF cells has been found in Moreno-Bote and Parga (2006),
formula
2.32
where anun(t) and bmum(t + Δ), with
formula
2.33
The functions P(u2, Δ|u1) and p(u1) are the gaussian distributions defined in equations 2.30 and 2.31. These distributions need to be evaluated at the corresponding values of an and bm, which depend on times t and t + Δ, respectively.

The two analytical expressions, equations 2.30 and 2.32, are compared to simulations in Figure 5. The two predictions give the same numerical values (thick full line) and are very close to the simulated cross-correlation function even when similar values of τs and τm are used. The linear approximation of the cross-correlation function, equations 2.10 and 2.11 (see Section 2.5), underestimates the true values but also provides a good match. The linear prediction improves as the amount of common noise relative to the independent noise decreases (note that in the simulations the amount of common noise used is not small compared to the amount of independent noise). The linear approximation provides a fast estimate of the cross-correlation function. Equation 2.32 consists of a double sum over an infinite series, but in practice it is extremely fast because it can be cut using the first two hundred terms in each sum (use t = 200 ms). Equation 2.30 provides the slowest prediction, since it involves a double integral.

Figure 5:

Normalized cross-correlation function of the output spike trains from two LIF neurons receiving common as well as independent sources of noise filtered by synapses. This is defined as Cnor(Δ) = ν−1C(Δ) − ν, where ν is the firing rate of the neurons. The full, thick full, and dashed lines correspond to numerical simulations, the prediction using equation 2.30 or 2.32, and the linear approximation given by equations 2.10 and 2.11, respectively. Parameters are μ = 80 Hz, σ2ind = σ2c = 16 Hz, H = 0.5, Θ = 1, τm = 10 ms, and τs = 20 ms.


The cross-correlograms are typically characterized by a single peak in both the sub- and suprathreshold regimes when the fraction of total noise that is common is small, while oscillatory patterns arise when the fraction approaches one (not shown; Moreno-Bote & Parga, 2006). For small fractions, the correlation timescale of the output spike trains is τs, the time constant of the synapses driving the two neurons, as predicted by the linear approximation in equations 2.10 and 2.11.

Here, we have described the cross-correlation function of the output spike trains of a pair of LIF neurons. However, our theory also allows a detailed description of other statistical properties of the spiking response, such as the coefficient of variation of the ISIs (CVISI), the Fano factor of the spike count of the output spike train (FN), and its autocorrelation function (Moreno-Bote & Parga, 2006).

2.2.9.  Probability Distribution of the Voltage.

It is also possible to determine the probability distribution of the normalized voltages, P0(x), as (see Supporting Information)
formula
2.34
where θ denotes the Heaviside (step) function. This distribution reveals the existence of two different states. One of them corresponds to firing periods of the neuron (first term) and the other to silent periods (second term). The latter, which can be considered as the free distribution (i.e., no effect of the voltage threshold), is a gaussian. A similar expression for the voltage distribution in the case of a conductance-based IF neuron has been found in Moreno-Bote and Parga (2005), where two different regimes were also characterized.

2.2.10.  LIF Neurons with One Fast and One Slow Synaptic Type.

We continue the discussion of slow filters, this time in the presence of a second, fast filter. This is a possible scenario found in a study of the effect of background activity on τm when AMPA (fast) and GABAA (slow) synaptic receptor types are present (Destexhe, Rudolph, Fellous, & Sejnowski, 2001). In that work, it was argued that background activity reduces the membrane time constant of the neuron several-fold, so that τm ∼ 5 ms. Then AMPA synapses are fast compared with τm, and we can approximate τAMPA = 0. However, GABAA receptors display a slower decay time, on the order of 10 ms, and they can be taken as slow compared with the membrane dynamics.

In this case, the total current in equation 2.13 has two contributions, I(t) = I1(t) + I2(t), which in the diffusion limit are
τs dI1(t)/dt = −(I1(t) − μ1) + σ1 η(t),   I2(t) = μ2 + σ2 ζ(t)
2.35
The first equation corresponds to the inhibitory current, in which τs can be set equal to the GABAA decay time constant. The second equation corresponds to the fast approximation of AMPA synaptic receptors. The quantities μ1, μ2 and σ21, σ22 are the means and variances of the inhibitory and excitatory inputs, and η(t) and ζ(t) are two independent white noise processes with unit variance. Defining μ ≡ μ1 + μ2 and performing a linear transformation of the voltage and the slow current (see the appendix), the equations for the voltage and the current are transformed into
formula
2.36
Here, α ≡ σ21/σ22, and the threshold and reset potentials are rescaled accordingly. The current autocorrelation is
〈(I(t) − μ)(I(t′) − μ)〉 = σ22 δ(t − t′) + (σ21/2τs) e^{−|t−t′|/τs}
2.37
Note that the autocorrelation has a delta function, something that did not happen with a single slow filter, equation 2.15.
The firing rate valid for long τs can be found by performing a long-τs expansion of the FPE associated with the variables (x, z) (see the appendix and Supporting Information). Up to second order, it takes the expression
ν = ∫ dz p(z) νfast(z)
2.38
with
formula
2.39
where erf denotes the error function.

The quantity νfast(z) in equation 2.38 has an intuitive meaning: it is the rate of an LIF neuron driven by a white noise input with a z-dependent effective mean and variance σ22 (Ricciardi, 1977). As can be seen, the output firing rate is given by the average of νfast(z) over the stationary distribution of z, as in the case with a single slow filter.

Formula 2.38 admits an expansion in powers of ϵ (Supporting Information). At leading order, the rate is simply that of an LIF neuron driven by a white noise input with mean μ and variance σ22 (Ricciardi, 1977). The firing rate approaches this limiting value as the synaptic time constant increases, as shown in Figure 6, where a comparison between the predictions provided by equation 2.38 and simulated data is presented. A perturbative expansion of the firing rate in equation 2.38 in powers of 1/τs exists in both the supra- and the subthreshold regimes. This contrasts with the case of a single slow filter, where the firing rate admitted a perturbative expansion in powers of 1/τs only in the suprathreshold regime.

Figure 6:

Output firing rate for an LIF neuron with one fast and one finite-τs synaptic filter in both the sub- (left) and suprathreshold (right) regimes as a function of τs. The synaptic timescale τs can correspond to the decay time constant of inhibitory synapses (see text). Solid lines correspond to the rate predicted by formula 2.38, and the ticks on the right indicate the firing rate limit as τs approaches infinity. In the subthreshold regime, μ = 80 Hz (top curve) and μ = 40 Hz (bottom curve), and in both cases σ2AMPA = 20 Hz and σ2GABA = 80 Hz (α = 4). In the suprathreshold regime μ = 210 Hz, σ2AMPA = 0.1 Hz and σ2GABA = 3.6 Hz (α = 36). The horizontal line indicates the limiting firing rate. In both regimes τm = 5 ms and the other parameters are as in Figure 3.


An expression for the output firing rate identical to that in equation 2.38 has been found in Moreno, de la Rocha, Renart, and Parga (2002) and Moreno-Bote, Renart, and Parga (2008) for an input with spike correlations. More specifically, in those works, we calculated the output firing rate of an LIF neuron driven by exponentially correlated presynaptic spikes characterized by a correlation time constant τc and magnitude α. However, in the situation presented in this work, the presynaptic currents in equation 2.35 are modeled as white noises that approximate independent Poisson firing of a pool of presynaptic neurons (see the Supporting Information). The two expressions are identical because in the presence of two filters, one slow and the other infinitely fast, the total input I(t) has exactly the exponential correlations (see equation 2.37) that were considered in Moreno et al. (2002) and Moreno-Bote et al. (2008) to model exponentially correlated spikes with correlation time τc = τs and positive correlation magnitude α = σ21/σ22.

The results found above can be extended to any other IF neuron model. A general formula similar to equation 2.38 for the firing rate of a general IF neuron with both fast and slow filters is given in the appendix.

2.2.11.  The Transfer Function with Slow and Fast Synaptic Filters.

We plot in Figure 7 the input-to-rate transfer function for an LIF neuron. The firing rate is plotted as a function of the mean current μ for three different values of τs, both for a single slow filter (left) and for one slow plus one fast filter (right). As τs increases, the fluctuations of the slow input noise are filtered out, and the curve becomes steeper as a function of μ. For the same mean input drive μ, the firing rate decreases as a function of τs. In these figures, formulas 2.19 and 2.38 are used alone, without interpolation, to test their range of validity. Even when the synaptic time constant is chosen to be τs = τm = 10 ms (top curves), the prediction is rather close to the simulation results. Note that in the presence of fast noise, the transfer function is smoother than in the case of a single slow filter.

Figure 7:

Input-to-rate transfer function for an LIF neuron. (Left) Output firing rate as a function of μ for a neuron with a single synaptic filter. The synaptic time constant is τs = 10, 40, and 150 ms for the upper, intermediate, and bottom curves. In all cases, the input variance is σ2 = 30 Hz. (Right) Output firing rate as a function of μ for a neuron with an infinitely fast and a slow synaptic filter. The synaptic time constant is τs = 10, 30, and 100 ms for the upper, intermediate, and bottom curves. In all cases, the input variances are σ2AMPA = 4 Hz and σ2GABA = 20 Hz, giving α = 5. In both panels, H = 0, Θ = 1, and τm = 10 ms.


2.3.  Firing Rate for the QIF Neuron.

The adiabatic expression for the firing rate of an LIF neuron, equation 2.19, is an example of the more general expression for the rate of a spiking neuron with slow noise given in equation 2.1 (see also equation A.22). Here, we apply the general theory to the QIF neuron (Ermentrout & Kopell, 1986; Wang & Buzsaki, 1996; Hansel & Mato, 2003; Brunel & Latham, 2003). This neuron model is expected to describe the dynamics of type I neurons when the output firing rate is low. It obeys the equation
formula
2.40
where the current I(t) is an Ornstein-Uhlenbeck process with timescale τs, mean μ, and variance σ2/2τs, as defined in equation 2.14. For constant current I, the firing rate is
formula
2.41
where Θ and H are the threshold and reset potentials of the neuron (Hansel & Mato, 2003). If these potentials are set at H = −∞ and Θ = ∞, then the firing rate is
formula
2.42
For this case, because of the quadratic term in equation 2.40, the membrane potential can travel from the reset to the threshold voltages in a finite time. Using our general equation for the firing rate of an IF neuron with slow filters (equations 2.1 and A.22), we find that the output firing rate for long τs of a QIF neuron with infinite threshold and reset potentials is
formula
2.43
(A similar expression holds when the potentials take finite values.) In the subthreshold regime (μ < 0), this firing rate cannot be expressed as a series in 1/τs. However, in the suprathreshold regime (μ>0), the expansion is possible, and it is
formula
2.44
(see Brunel & Latham, 2003, and section 3). Equation 2.43 provides a general expression for both sub- and suprathreshold regimes. In Figure 8 we have plotted the output firing rate of a QIF neuron simulated numerically by equations 2.40 and 2.14, and we have compared the results with the theoretical prediction. The predictions are excellent for τs ⩾ τm = 10 ms and good even for τs shorter than τm.
Figure 8:

Output firing rate for a QIF neuron in the subthreshold regime (μ < 0) as a function of τs. Simulation results are compared with the theoretical prediction, equation 2.43 (solid curves). Input parameters are μ = −10³ Hz and σ2 = 2 × 10⁵, 10⁵, and 2 × 10⁴ Hz from top to bottom. The reset and threshold potentials are at −∞ and ∞, respectively, and τm = 10 ms.


2.4.  Noise-Thresholded IF Neuron.

Here we consider a neuronal integrator in which the noise is rectified so that the drift is always nonnegative (i.e., the voltage always moves upward). The voltage of the noise-thresholded IF (NTIF) neuron obeys
formula
2.45
with . With this choice, the drift is always a nonnegative function. In this case, the formula derived for long τs,
formula
2.46
is exact for all τs (see the right panel in Figure 9). It is also easy to check that the density , where p(z) is a normal, solves exactly the FPE associated with equation 2.45. Equation 2.46 can also be obtained from the general expression for the firing rate of nonleaky integrators found in Brette (2004).
Figure 9:

Simulated and exact theoretical output firing rate as a function of τs for an NTIF neuron in the suprathreshold regime (top) and in the subthreshold regime (bottom). Input parameters are μ = 50 Hz and σ2 = 50 Hz (top) and μ = −100 Hz and σ2 = 450 Hz (bottom). In the two cases, H = 0 and Θ = 1.


2.5.  List of Expressions.

In this section, we provide an exhaustive list of the analytical expressions found in the letter. The expressions for the firing rates can be obtained from the general theory presented in the appendix. In the examples considered here, we use Ornstein-Uhlenbeck processes (colored noise) to generate the input currents, equation 2.14, but a broad range of noises can be considered, as described in the appendix.1

2.5.1.  The Adiabatic Firing Rate for Slow Input Currents.

a. The general expression for the adiabatic firing rate is (see equation 2.1)
formula
2.47
where ν(I) is the steady-state firing rate of the neuron receiving a constant current I, and P(I) is the distribution of currents. The formal derivation for this expression for a general IF neuron and arbitrary input distributions is provided in equation A.22.
b. The firing rate of an LIF neuron driven by colored inputs with long timescale τs can be calculated as (see equation 2.19)
formula
2.48
with , , , and . The adiabatic expression provides an excellent approximation to the firing rate as long as τs ≳ τm. A detailed proof of this particular case is provided in the Supporting Information. In the suprathreshold regime, the firing rate up to second order in ϵ is (see equation 2.21)
formula
2.49
where .
c. An equivalent expression for the LIF neuron receiving colored inputs with long timescale τs is given by (see equation 2.18)
formula
2.50
where Imin = Θ/τm and σ2I = σ2/2τs is the variance of the input current. This expression is linked to that in b by the transformation . See the Supporting Information for an application to AMPA and NMDA synaptic receptors.
d. The fast implementation of the firing rate for the LIF neuron driven by colored inputs with long timescale τs is (see equation 2.23)
formula
2.51
with . This expression is equivalent to that in b. Only the first 200 terms are required for an excellent approximation (use t = 200 ms).
e. The firing rate of a QIF neuron receiving colored inputs with long timescale τs is (see equation 2.43)
formula
2.52
with (a similar expression holds when the potentials take finite values). In the suprathreshold regime (μ>0) this expression is equivalent to (see equation 2.44)
formula
2.53
f. The firing rate of an NTIF neuron receiving colored inputs with long timescale τs is (see equation 2.46)
formula
2.54
with . In this case, the expression is exact for all τs.

2.5.2.  The Firing Rate with Fast and Slow Filters.

a. The general expression for the firing rate is (see equation 2.5)
formula
2.55
where νfast(I) is the steady-state firing rate of the neuron receiving a constant current I and averaged across the fast noise. A proof for this expression for a general IF neuron and arbitrary input distributions can be found in equation A.33.
b. The firing rate of an LIF neuron receiving both white noise inputs and colored inputs with long timescale τs is written as (see equation 2.38)
formula
2.56
with .
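Item b above admits the same kind of numerical evaluation as the purely slow case. Because equation 2.56 itself is not reproduced in this rendering, the sketch below only follows the structure of equation 2.55 for the LIF: for each frozen value of the slow current, the fast-noise-averaged rate νfast is taken to be the standard diffusion-approximation (white-noise) rate of the LIF (see, e.g., Ricciardi, 1977; Brunel & Hakim, 1999), and it is then averaged over the gaussian distribution of the slow current. The voltage fluctuation amplitude sigma_V produced by the fast filter is a free, hypothetical parameter here; the paper's mapping from σ2AMPA, σ2GABA, and α to that amplitude is not reproduced.

```python
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad

tau_m, theta, H = 0.010, 1.0, 0.0

def nu_fast(I, sigma_V):
    """Diffusion-approximation rate of an LIF receiving white noise that produces voltage
    fluctuations of std sigma_V around the mean potential I*tau_m (standard result)."""
    lo = (H - I * tau_m) / sigma_V
    hi = (theta - I * tau_m) / sigma_V
    val, _ = quad(lambda u: erfcx(-u), lo, hi)   # erfcx(-u) = exp(u^2) * (1 + erf(u))
    return 1.0 / (tau_m * np.sqrt(np.pi) * val)

def rate_fast_slow(mu, tau_s, sigma2_slow, sigma_V):
    """Structure of equation 2.55: average nu_fast over the slow gaussian current."""
    sigma_I = np.sqrt(sigma2_slow / (2.0 * tau_s))
    I_grid = np.linspace(mu - 6.0 * sigma_I, mu + 6.0 * sigma_I, 1001)
    P = np.exp(-0.5 * ((I_grid - mu) / sigma_I) ** 2) / (sigma_I * np.sqrt(2.0 * np.pi))
    nus = np.array([nu_fast(I, sigma_V) for I in I_grid])
    return np.trapz(nus * P, I_grid)

print(rate_fast_slow(mu=100.0, tau_s=0.030, sigma2_slow=20.0, sigma_V=0.2))
```

The fast noise smooths the transfer function before the slow average is taken, which is the effect visible in the right panel of Figure 7.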

2.5.3.  Correlation Function.

a. The general expression for the two-point correlation function for two neurons receiving correlated colored noise with long timescale is (see equation 2.7)
formula
2.57
where νi(I) is the steady-state firing rate of neuron (i = 1, 2) driven by constant current I, P(Ii) is the distribution of current Ii to neuron i, and P(Ic, t; Ic, t′) is the joint probability density of the common current at two different times.
b. The linear approximation (valid for weak correlations), equation 2.11, of the previous expression is
formula
2.58
where , is the derivative of evaluated at the mean value of Ic, μc, and
formula
2.59
c. The two-point correlation function of the output spike train for two LIF neurons receiving independent as well as common colored inputs (common and total input variances σ2c and σ2, respectively; see equations 2.28 and 2.29) with long timescale τs is calculated as (see equation 2.30)
formula
2.60
where
formula
2.61
and p(u1) is a gaussian with mean zero and unit variance.
d. The linear approximation of the cross-correlation function for two LIF neurons given in c is
formula
2.62
where with , is the derivative of evaluated at μc, and
formula
2.63
e. The fast implementation of the correlation function given in c for two LIF neurons is (see equation 2.32)
formula
where anun(t) and bmum(t + Δ) with
formula
2.64
P(u2, Δ|u1) is a gaussian distribution with mean and variance , and p(u1) is a gaussian distribution with mean zero and unit variance, as in c. Only the first two hundred terms in each sum are required for an excellent approximation (use t = 200 ms). This expression is equivalent to that in c.
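The formulas listed in this subsection are not reproduced in this rendering, but the quantity they describe can be estimated directly by simulation, which is also how the theory is tested in the letter. The sketch below drives two LIF neurons with one shared and two private Ornstein-Uhlenbeck currents (common and total variance parameters σ2c and σ2, as in item c) and histograms the spike-time differences to form the raw cross-correlogram; all numerical values are illustrative, and the flat baseline set by the product of the two rates would still have to be subtracted to compare with a covariance-style cross-correlation function.

```python
import numpy as np

tau_m, theta, H = 0.010, 1.0, 0.0
tau_s = 0.050                      # slow synaptic time constant (s)
mu = 90.0                          # mean input (Hz)
sigma2, sigma2_c = 30.0, 15.0      # total and common input variance parameters (Hz)

def ou_step(x, mean, var, dt, rng):
    """One Euler step of an OU process with timescale tau_s and stationary variance `var`."""
    return x + (mean - x) * dt / tau_s + np.sqrt(2.0 * var * dt / tau_s) * rng.standard_normal()

def simulate(T=100.0, dt=1e-4, seed=1):
    """Two LIF neurons sharing a common OU current; slow in pure Python but simple."""
    rng = np.random.default_rng(seed)
    var_c = sigma2_c / (2.0 * tau_s)              # variance of the common current
    var_i = (sigma2 - sigma2_c) / (2.0 * tau_s)   # variance of each private current
    Ic, I1, I2 = 0.0, 0.0, 0.0
    V = np.zeros(2)
    spikes = [[], []]
    for step in range(int(T / dt)):
        Ic = ou_step(Ic, 0.0, var_c, dt, rng)
        I1 = ou_step(I1, 0.0, var_i, dt, rng)
        I2 = ou_step(I2, 0.0, var_i, dt, rng)
        drive = mu + np.array([Ic + I1, Ic + I2])
        V += (-V / tau_m + drive) * dt
        for k in range(2):
            if V[k] >= theta:
                spikes[k].append(step * dt)
                V[k] = H
    return [np.array(s) for s in spikes]

def cross_correlogram(s1, s2, window=0.2, bin_w=0.005):
    """Histogram of pairwise spike-time differences t2 - t1 within +/- window (s)."""
    diffs = []
    for t in s1:
        sel = s2[(s2 > t - window) & (s2 < t + window)]
        diffs.extend(sel - t)
    bins = np.arange(-window, window + bin_w, bin_w)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

s1, s2 = simulate()
counts, edges = cross_correlogram(s1, s2)
```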

3.  Discussion

We have developed a theory that describes analytically the firing rate of IF neurons driven by arbitrary forms of slow stochastic inputs, also when fast forms of noise are present. The theory is exact when the timescale governing the noise fluctuations is much longer than the intrinsic timescales of the neuron, but it can also be applied to the case in which the timescales are comparable. It is worth emphasizing that our theory does not require that the interspike intervals are short compared to the timescale of the stochastic inputs, but rather that the latter is longer than or comparable to the membrane time constant of the neuron. Our approach does not require that the noise amplitude is small either.

Other work has also addressed the problem of studying the firing properties of IF-like neurons driven by slow stochastic inputs. Salinas and Sejnowski (2002) considered an input current that could take two discrete values. Although interesting, the input model cannot approximate the current generated by a sum of spike trains. Svirskis and Rinzel (2000) have found an estimate of the firing rate of a neuron model in which the potential can be above threshold and the reset effect is not included. Middleton et al. (2003) have studied the distributions of interspike intervals in nonleaky IF neurons with slow stochastic inputs and provided analytical expressions for them that are valid in the limit of small noise amplitude and when the synaptic timescale is at least one order of magnitude longer than the mean interspike interval. Their technique cannot be applied to compute the firing rate of LIF neurons in the noise-driven regime since it requires that, for any frozen value of the input noise, the interspike interval remains finite. Schwalger and Schimansky-Geier (2008) have studied the interspike interval distributions and the Fano factor of the spike count of the output spike train in LIF neurons driven by slow stochastic inputs and derived analytical expressions for them. The computation of the interspike interval distribution requires knowing the distribution of synaptic currents at the spike times found in Moreno-Bote and Parga (2004, 2006). Their analytical expressions are valid when the synaptic time constant is several orders of magnitude longer than the membrane time constant. In Moreno-Bote and Parga (2006), we have developed a method that allows computing the Fano factor and the autocorrelation function of the output spike trains accurately even when τs is similar to τm. The crucial difference between the two approaches lies in the fact that Schwalger and Schimansky-Geier (2008) assume that the currents are constant across time after an output spike, while in Moreno-Bote and Parga (2006), we fully consider the stochastic temporal evolution of the currents after an output spike. Gerstner (1999) has studied models in which the threshold potential after a spike is a slow, random variable. These models can be solved exactly, but they cannot be mapped to IF neurons with slow, noisy inputs. This is because the noisy threshold is drawn from a distribution only at the moments of the spikes, not continuously over time, as happens in neuron models receiving fluctuating inputs. In a recent work, Brunel and Latham (2003) have used our naive expansion in powers of 1/τs for long synaptic time constants (described in detail in the Supporting Information) to compute the firing rate of a QIF neuron in the suprathreshold regime in that limit. However, our naive expansion can be applied only to the suprathreshold regime, in which the mean input drive dominates the spiking behavior of the neuron and noise plays a secondary role. Here, using a regularized expansion, we have found an expression for the firing rate of a QIF neuron valid in both the supra- and subthreshold regimes.

Chizhov and Graham (2008) have developed a new procedure to compute the firing rate of LIF neurons receiving colored noise with arbitrary timescale τs. Our and their analytical expressions for the firing rate were compared in their Figure 4, and both provide a good match with the simulated data. Their method, however, differs from ours substantially. The reset effect after generation of a spike is not considered in Chizhov and Graham (2008), which makes the statement of the problem easier. This approximation is well justified in cases in which the interspike intervals are longer than the membrane time constant. Moreover, their calculation of the firing rate in the stationary case involves two steps. First, the associated FPE without reset effect is solved numerically for several values of the neuron parameters and τs, and the results are then fit by simple functions; the fits are good at least in the parameter regime used. Second, these simple fit functions are employed in the final expression of the firing rate, which involves a double integral that can only be computed numerically. In contrast, by solving analytically the FPE with reset effect, we have provided analytical expressions for the firing rate of general IF neurons that are exact in the long-τs limit, involve a single integral for the LIF and QIF neurons, and have an asymptotic behavior that is mathematically different from that obtained in Chizhov and Graham (2008) for the LIF neuron.

Recently, Carandini (2004) has introduced a model (the gaussian-rectification model) to characterize the firing response of neurons under stimulation in visual cortex. First, the voltage responses of the neuron to the stimulus are averaged over trials and then filtered at 50 Hz to obtain a coarse voltage, Vc. Then, for any arbitrary value of the coarse voltage, the firing probability of the neuron given the voltage, ν(Vc), is experimentally obtained. Several interesting characteristics of the firing response can then be studied, but of particular interest to us is the way the mean firing rate is computed. Since the voltage distribution, PG(Vc), can be experimentally computed (it is well approximated by a gaussian), the mean firing rate can be obtained by averaging the rate over the voltage as
formula
3.1
Typically the function ν(Vc) is smooth, since the voltage has been averaged over all fast fluctuations within a time window of about 20 ms (50 Hz low-pass filtering). Therefore, equation 3.1 resembles the adiabatic expression for the firing rate of a neuron driven with fast and slow stochastic inputs, equation 2.5, with the difference that the voltage average is replaced by the synaptic current average. Carandini (2004) has presented the above formalism to describe quantitatively the firing response of visually stimulated neurons. We have derived a similar description in terms of input currents (Moreno-Bote & Parga, 2004) as a formal way of characterizing the firing rate of neurons receiving inputs with long correlation timescales.
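As a toy illustration of how an average of the form of equation 3.1 is evaluated, the snippet below integrates a hypothetical rectified power-law rate function ν(Vc) (an illustrative stand-in, not the measured curve of Carandini, 2004) against a gaussian voltage distribution; all numbers are arbitrary.

```python
import numpy as np

# Hypothetical rectified power-law rate function and gaussian voltage distribution;
# parameter values are illustrative only.
V_th, k, p = -55.0, 2.0, 1.5        # threshold (mV), gain, exponent
mu_V, sigma_V = -58.0, 3.0          # mean and std of the coarse voltage (mV)

def nu_of_V(Vc):
    return k * np.maximum(Vc - V_th, 0.0) ** p

Vc = np.linspace(mu_V - 6.0 * sigma_V, mu_V + 6.0 * sigma_V, 4001)
P_G = np.exp(-0.5 * ((Vc - mu_V) / sigma_V) ** 2) / (sigma_V * np.sqrt(2.0 * np.pi))
mean_rate = np.trapz(nu_of_V(Vc) * P_G, Vc)     # the average of equation 3.1
print(mean_rate)
```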

The difference between averaging voltage in Carandini's model and synaptic inputs (currents or conductances) in our theory is substantial. Since voltages are necessarily upper-bounded by the spiking threshold of the neuron, computing the firing rate as a function of the voltage might be susceptible to large statistical errors, since many possible firing rates will correspond to similar voltages around the spiking threshold. However, computing the firing rate as a function of the input current (or synaptic conductances) does not suffer this statistical problem, since currents are not upper-bounded in the range of values typically observed. In fact, we have previously shown that the firing rate of conductance-based IF neurons can be computed using an average of the firing rate as a function of the instantaneous synaptic conductances over the distribution of synaptic conductances (Moreno-Bote & Parga, 2005). Since voltages and synaptic conductances can be measured in vivo, it would be interesting to compare quantitatively the predictions for the firing properties of visual cortex neurons using the two alternative ways of averaging discussed above.

In this letter, we have also provided expressions for the cross-correlation function between the output spike trains of two IF neurons receiving common as well as independent sources of noise and applied the theory to the LIF neuron (see also Moreno-Bote & Parga, 2006). The theory quantitatively describes the peak, width, and area of the cross-correlation function, as well as the correlation coefficient of the output spike trains. We have also found simplified equations for the cross-correlation function that establish a linear relationship between input and output correlations. Several recent works have described analytically the temporal profile of the cross-correlation function of a pair of spiking neurons, but using simplified models that do not have the after-spike reset characteristic of the IF neuron (Svirskis & Hounsgaard, 2003; Tchumatchenko, Malyshev, Geisel, Volgushev, & Wolf, 2010; Burak, Lewallen, & Sompolinsky, 2009). De la Rocha, Doiron, Shea-Brown, Josic, and Reyes (2007) have presented analytical expressions to compute the coefficient of correlation for LIF neurons in the limit of weak input correlations, but these expressions do not allow an analytical description of the temporal profile of the cross-correlation function. Our theory and its extension to interconnected neurons might be crucial for describing the temporal correlation patterns found in retina and cortex (Riehle et al., 1997; Bair et al., 2001; Kohn & Smith, 2005; Pillow et al., 2008) and for determining the connectivity matrices underlying those patterns, using IF neurons as the basic functional units.

Although we have focused on the description of the firing rate and cross-correlation function of the output spike trains of a pair of IF neurons, our theory can also be applied to study other statistical properties of the spiking response, such as the coefficient of variation of the ISIs (CVISI), the Fano factor of the spike count of the output spike train (FN), and its autocorrelation function. It has been shown previously that those statistical quantities can be obtained for LIF neurons from the adiabatic approach (Moreno-Bote & Parga, 2006). A generalization of that formalism is possible and will allow the description of second-order firing statistics for general IF neurons driven by arbitrary forms of noise. In addition, it would be desirable to extend our adiabatic approach to describe the transient response of IF neurons. Analytical expressions for the response of LIF neurons to sinusoidal stimuli in the limit of small amplitudes and infinitely fast synapses, τs = 0, are available (Brunel & Hakim, 1999; Lindner & Schimansky-Geier, 2001; Richardson, 2007), but no solutions are known that are valid for all stimulus frequencies when τs is nonzero.

A prime question is what the effect of synaptic time constants is on neuronal network dynamics. Recent work has shown that the temporal dynamics of the synapses can play an important role in setting the response properties of IF neurons working in the high-conductance regime (Shelley et al., 2002; Moreno-Bote & Parga, 2005; Vogels & Abbott, 2005; Cai et al., 2005; Apfaltrer et al., 2006; Kumar, Schrader, Aertsen, & Rotter, 2008). We think that the general theory on synaptic filtering presented here can be of utility for building mean field theories that use the rate variables as well as second-order statistics to describe the temporal dynamics of these networks (see Renart, Moreno-Bote, Wang, & Parga, 2007).

Appendix:  Methods

A.1.  IF Neurons with Fast and Slow Filters.

Here we provide the details for computing the output firing rate for IF neurons described by rather general drift functions and noise models (see equation A.1 below). First, we define the model, then we compute the adiabatic firing rate of an IF neuron driven by slow input noise, and finally we study the case of both fast and slow synaptic filters.

A.1.1.  General IF Neuron and Noise Models.

The voltage of an IF neuron satisfies the equation
formula
A.1
where is the drift (function) of the voltage, which depends on the voltage and on the synaptic fluctuation variables with time constants . The synaptic fluctuation variables obey
formula
A.2
where the ηi(t)s are independent white noise processes with zero mean and unit variance. Therefore, the fluctuation variable zi drifts with rate −μi(zi)/τi and has diffusion coefficient . A spike is emitted in the model when the voltage reaches a threshold Θ, after which it is reset to H. The synaptic variables are not reset after a spike, since they model external stochastic inputs that do not depend on the state of the neuron.
The drift function determines the direction and the velocity of the displacement of V when the potential is at V and the synaptic fluctuations take value . Any IF neuron can be written in this form: for example, for an LIF neuron with a membrane time constant τm and a single linear filter of timescale τs, it is written as (see equations 2.13–2.14)
formula
A.3
where z obeys equation A.2 with μ(zi) = zi and σ2i(zi) = 1. Other relevant drift functions can include the driving forces of synapses and also quadratic terms in the voltage that generate a sort of action potential (e.g., as in the QIF neuron). Nonlinear filters are included in the formalism by making the drift function nonlinear in z.
The noise model in equation A.2 specifies the distribution of z. Its stationary distribution, whenever it exists, has the form
formula
A.4
where A is the normalization constant. This density solves the stationary FPE for the zi variable
formula
A.5
where is the linear operator . To obtain the above FPE, we interpret the diffusion term in equation A.2 in the Stratonovich sense, that is, the white noise process is understood to arise from the limit of a continuous stochastic process. Other interpretations, like the Ito interpretation, can be mapped into the previous one easily. For an Ornstein-Uhlenbeck process with mean zero and unit variance, μi(zi) = zi and σ2i(zi) = 1, and from equation A.4 it is easy to check that its steady-state distribution follows a standard normal distribution.
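As a quick numerical check of the Ornstein-Uhlenbeck special case just mentioned, the snippet below integrates a single fluctuation variable z with drift −z/τ and a noise amplitude chosen (as an assumption, since the normalization of the diffusion term in equation A.2 is not reproduced in this rendering) so that the stationary density is a standard normal; the sample mean and variance should come out close to 0 and 1. For this constant-σ case, the Ito and Stratonovich interpretations coincide.

```python
import numpy as np

# Assumption: noise amplitude sqrt(2*dt/tau) is chosen so that the stationary density of z
# is a standard normal, matching the OU special case mu(z) = z, sigma^2(z) = 1 in the text.
tau, dt, n_steps = 0.1, 1e-3, 1_000_000
rng = np.random.default_rng(0)
z, samples = 0.0, np.empty(n_steps)
for t in range(n_steps):
    z += -z * dt / tau + np.sqrt(2.0 * dt / tau) * rng.standard_normal()
    samples[t] = z
print(samples.mean(), samples.var())   # should be close to 0 and 1
```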

Note that the neuron model defined by equations A.1 and A.2 considers both slow and fast filters. Infinitely fast filters are also included by taking the limit τi → 0 for some synaptic filters.

A.1.2.  The Case of Slow Filtering.

Let us first study equations A.1 and A.2 in the limit of long synaptic time constants. Although an approximation to the firing rate could be proposed as an expansion in powers of the inverse of the τis, this perturbative expansion often does not work (see the Supporting Information for the proof of the failure of the naive expansion for the LIF neuron). Instead of attempting to solve the problem in this perturbative fashion, the problem of finding the firing rate is attacked by assuming that the drift function in equation A.1 does not depend on the synaptic time constants, while equations A.2 do. This is done by defining a new constant vector that substitutes in equation A.1 as
formula
A.6
As we take the limit of long synaptic time constants, the effect of the noise variables on the membrane potential in equation A.6 remains fixed because the vector is maintained fixed (as an example, see equation A.3). The new model defined by equations A.6 and A.2 is equivalent to that defined by equations A.1 and A.2 when . Then all we have to do is to calculate the firing rate for the system defined by equations A.6 and A.2 and at the end to replace by its true value . (The limit with fixed is called the distinguished limit in singular perturbation theory; see, e.g., Bender & Orszag, 1978).
However, this trick does not always work. A necessary condition is that for any fixed , the dynamics defined in equation A.6 does not lead V to −∞ as t → ∞. This implies that the potential function
formula
A.7
increases as V → −∞. In other words, either the voltage of the neuron drifts upward or settles down to a stationary value for any fixed . For simplicity in our arguments, we will also restrict generality a bit more by assuming that the potential function has at most one (absolute) minimum in the interval (−∞, Θ]. This is a reasonable assumption, because the voltage cannot travel to −∞ and neurons do not usually show intrinsic bistability. If this is the case, there exists a stationary FPE associated with equations A.6 and A.2, and it is
formula
A.8
where is the probability density of having the neuron with membrane potential V and receiving a synaptic fluctuation . (A restricted version of the FPE A.8 has been used by Doering et al. (1987), who considered a general nonlinear system driven by linearly filtered white noise. Brunel and Sergi (1998) considered a similar version of the FPE A.8 for a linear system driven by linearly filtered white noise in the context of neurons driven by colored stochastic inputs.) The probability density current is the flux of probability escaping at threshold,
formula
A.9
Crucially, this self-consistency condition states that once the neuron hits threshold, it should be reinjected at the reset potential with the same distribution of fluctuations that it had when it escaped. If one views the process described by the FPE A.8 as a population of independent particles diffusing over the variables V and , equation A.9 states that when a particle escapes (hits threshold), it has to be reinjected at the reset potential with the same that it had when it escaped.
The probability density current cannot be negative because there cannot be probability entering from above threshold. To determine when the probability current is positive (i.e., for which do the neurons fire?), we have to determine for which fixed a trajectory of V starting at the reset potential H can travel up to the threshold potential without being stopped. This condition determines a region over , which we call Ω, defined more formally as
formula
A.10
The firing rate of the IF neuron can be finally obtained by integrating the probability current over Ω as
formula
A.11
where we have replaced by its true (long) value .
Solution of the FPE for long synaptic time constants. The FPE A.8 has to be solved along with the conditions
formula
A.12
formula
A.13
formula
A.14
formula
A.15
We propose an expansion of the density P(V, z) and of the probability current in powers of 1/τi as
formula
A.16
where the vector is assumed to be constant. Condition iv has to be satisfied order by order in the expansion. Condition iii means that the integral of P0 has to be one and that the integral of any higher-order term has to be zero.
Since the zis are independent (see equations A.2), the marginal distribution for is
formula
A.17
If the variables are independent Ornstein-Uhlenbeck processes, then the marginal distribution is a normal in N dimensions.
Now the zeroth-order equation of the FPE A.8 is
formula
A.18
This equation has to be solved for two different cases: when belongs to Ω and when does not belong to it. In the first case, is positive (the neuron can fire), and the solution for P0 using condition A.13 is
formula
A.19
Using condition A.17, we find that the probability current is
formula
A.20
with
formula
A.21
Note that is the firing rate of the IF neuron when it receives a fixed synaptic fluctuation and the synaptic time constants have value . The firing rate, equation A.11, up to zeroth order is then
formula
A.22
where we have replaced by its true value . In the second case, when does not belong to Ω, the neuron cannot fire, . Thus, the solution of equation A.18 with that satisfies condition A.14 is
formula
A.23
where is the unique solution of . To obtain equation A.23, it is required that the function defined in equation A.7 have at most one minimum. Using condition A.17 leads to .

The distribution is computed by combining equations A.19 and A.23 and replacing by . The firing rate, equation A.22, and the distribution of the membrane potential, equations A.19 and A.23, describe completely the problem at leading order.

Equation A.22 is the adiabatic expression for the firing rate of IF neurons driven by slow inputs with arbitrary distributions, equation 2.1.
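The zeroth-order construction can be condensed into a small numerical recipe: for each frozen fluctuation, check that the drift stays positive on [H, Θ] (membership in Ω, equation A.10), obtain the deterministic rate as the inverse of the transit time from H to Θ computed by quadrature (which is how we read equation A.21, although that equation is not reproduced in this rendering), and average over the stationary distribution of the fluctuation as in equation A.22. The drift below is the LIF example of equation A.3, up to notation, with a standard normal fluctuation; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

tau_m, theta, H = 0.010, 1.0, 0.0
mu, sigma = 110.0, 30.0        # mean and amplitude of the frozen current (Hz); illustrative

def drift(V, z):
    """LIF drift with a frozen fluctuation z (the example of equation A.3, up to notation)."""
    return -V / tau_m + mu + sigma * z

def nu_frozen(z, n_check=200):
    """Deterministic rate for frozen z: inverse transit time from H to theta;
    zero if the drift vanishes somewhere on [H, theta], i.e., z is outside Omega."""
    Vs = np.linspace(H, theta, n_check)
    if np.min(drift(Vs, z)) <= 0.0:
        return 0.0
    transit, _ = quad(lambda V: 1.0 / drift(V, z), H, theta)
    return 1.0 / transit

# Average over the stationary (standard normal) distribution of z, as in equation A.22.
zs = np.linspace(-6.0, 6.0, 2001)
pz = np.exp(-0.5 * zs ** 2) / np.sqrt(2.0 * np.pi)
rate = np.trapz(np.array([nu_frozen(z) for z in zs]) * pz, zs)
print(rate)
```

Replacing the drift and the distribution of z by any other choice allowed by equations A.1 and A.2 gives the corresponding adiabatic rate.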

A.1.3.  The Case of Slow and Fast Filtering.

In this section we consider the case of a neuron with several slow filters and a single additive fast filter. Several nonadditive fast filters can also be included in the formalism without extra complications, and therefore we restrict our discussion to the simplest case described below.

The membrane potential obeys
formula
A.24
where is described by equations A.2 and ζ(t) is a normalized white noise process that represents the noisy contribution of a current passing through an infinitely fast filter. The strength of the fast noise is determined by the prefactor σf. The FPE associated with equations A.24 and A.2 is
formula
A.25
where the density P and the linear operator have the same definitions as in the FPE A.8. Again, the probability current injected at H has to equal the probability current escaping at threshold Θ, and then
formula
A.26
Note that the probability current now includes a derivative of the probability density P evaluated at the voltage firing threshold, a term that was not present in equation A.9. This fact imposes different boundary conditions on the density P at threshold. In particular, has to be zero for all , because when V>Θ, and no discontinuity in the function can exist at threshold.
The solution should satisfy the conditions
formula
A.27
formula
A.28
formula
A.29
while condition iii is the same as condition A.14. Using the same reasoning as in the case with slow filters only, one can derive an additional constraint, identical to equation A.17, which states that the marginal distribution is .
To carry on with our analysis, we propose an expansion of both P and J in powers of the inverse of the synaptic time constants, as in equations A.16. The coefficients of the expansion have to satisfy the boundary conditions defined above. The zeroth order FPE is
formula
A.30
The solution at zeroth order can be found using standard perturbative techniques (Ricciardi, 1977; Risken, 1989), and it is
formula
A.31
where we have defined the “potential” function as in equation A.7. The probability current is found by inserting the equation for P0 into equation A.17,
formula
A.32
where the function is defined as
formula
Note that is the output firing rate of an IF neuron driven by (fast) white noise with deviation σf and experiencing a drift as if were constant. For the firing rate to be well defined, the potential has to increase fast enough as V → −∞, so that the integral is finite for all x. Finally, the output firing rate of the IF neuron with both fast and slow filters can be obtained by integration over of equation A.32 as
formula
A.33
where the integral extends over the whole space. Note that at this point, we have replaced by .

Under the condition that the potential function U has at most one minimum, as also required in the previous section, taking the limit σf → 0 in equation A.33 leads to the adiabatic firing rate of an IF neuron with slow filters, equation A.22.
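A numerical counterpart of this construction can be sketched under the same hedges as before: since equations A.31 to A.33 are not reproduced in this rendering, the prefactors below follow the standard mean-first-passage-time result for a one-dimensional diffusion in a potential (Ricciardi, 1977; Risken, 1989), which we take as an assumption about the form of νfast. The rate for each frozen slow current is computed from the potential U of equation A.7 and then averaged over the gaussian distribution of that current, as in equation A.33. The drift is again the LIF example, and all parameter values are illustrative.

```python
import numpy as np

tau_m, theta, H = 0.010, 1.0, 0.0
sigma_f = 2.8       # white-noise amplitude; chosen so voltage fluctuations are ~0.2*Theta

def U(V, I):
    """Potential for the LIF drift -V/tau_m + I (frozen slow current I), as in equation A.7."""
    return V * V / (2.0 * tau_m) - I * V

def nu_fast(I, V_low=-3.0, n=400):
    """White-noise rate for frozen I from the standard mean-first-passage-time formula for
    diffusion in the potential U; the lower limit -infinity is truncated at V_low, where
    exp(-U/D) is negligible."""
    D = 0.5 * sigma_f ** 2
    x = np.linspace(H, theta, n)
    y = np.linspace(V_low, theta, 3 * n)
    w = np.exp(-U(y, I) / D)
    inner = np.array([np.trapz(w[y <= xi], y[y <= xi]) for xi in x])
    T = np.trapz(np.exp(U(x, I) / D) * inner, x) / D
    return 1.0 / T

def rate_fast_slow(mu, tau_s, sigma2_slow):
    """Structure of equation A.33: average the frozen-current rate over the slow gaussian
    current distribution of variance sigma2_slow/(2*tau_s)."""
    sigma_I = np.sqrt(sigma2_slow / (2.0 * tau_s))
    I_grid = np.linspace(mu - 5.0 * sigma_I, mu + 5.0 * sigma_I, 101)
    P = np.exp(-0.5 * ((I_grid - mu) / sigma_I) ** 2) / (sigma_I * np.sqrt(2.0 * np.pi))
    nus = np.array([nu_fast(I) for I in I_grid])
    return np.trapz(nus * P, I_grid)

print(rate_fast_slow(mu=95.0, tau_s=0.050, sigma2_slow=20.0))
```

For the LIF drift, this double-quadrature form reduces to the error-function expression used in the sketch of section 2.5.2, so the two give consistent numbers for matching parameters.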

Acknowledgments

Support was provided by the Spanish Grant FIS 2006-09294 and by the Swartz Foundation (to R.M.B.). We thank A. Renart and R. Brette for useful discussions. R.M.B. also thanks S. Deneve, B. Gutkin, C. Machens, M. Tsodyks, and A. Pouget for their hospitality at the Collège de France at Paris.

Notes

1

The firing rate and correlation function for other models of noise can be obtained by replacing the gaussian distributions in the expressions presented here by the corresponding steady-state distributions.

References

Anderson, J. S., Carandini, M., & Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. J. Neurophysiol., 84(2), 909–926.
Angulo, M. C., Rossier, J., & Audinat, E. (1999). Postsynaptic glutamate receptors and integrative properties of fast-spiking interneurons in the rat neocortex. J. Neurophysiol., 82, 1295–1302.
Apfaltrer, F., Ly, C., & Tranchina, D. (2006). Population density methods for stochastic neurons with realistic synaptic kinetics: Firing rate dynamics and fast computational methods. Network: Computation in Neural Systems, 17(4), 373–418.
Bair, W., Zohary, E., & Newsome, W. T. (2001). Correlated firing in macaque visual area MT: Time scales and relationship to behavior. J. Neurosci., 21(5), 1676–1697.
Banks, M. I., Li, T.-B., & Pearce, R. A. (1998). The synaptic basis of GABAA,slow. J. Neurosci., 18(4), 1305–1317.
Barbour, B., Keller, B., Llano, I., & Marty, A. (1994). Prolonged presence of glutamate during excitatory synaptic transmission to cerebellar Purkinje cells. Neuron, 12, 1331–1343.
Bender, C. M., & Orszag, S. (1978). Advanced mathematical methods for scientists and engineers. New York: McGraw-Hill.
Bernander, Ö., Douglas, R. J., Martin, K. A., & Koch, C. (1991). Synaptic background activity influences spatiotemporal integration in single pyramidal cells. Proc. Natl. Acad. Sci. USA, 88, 11569–11573.
Born, M., & Oppenheimer, J. R. (1927). On the quantum theory of molecules. Ann. Physik, 84, 457.
Borg-Graham, L. J., Monier, C., & Frégnac, Y. (1998). Visual input evokes transient and strong shunting inhibition in visual cortical neurons. Nature, 393, 369–373.
Brette, R. (2004). Dynamics of one-dimensional spiking neuron models. J. Math. Biol., 48(1), 38–56.
Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7), 1621–1671.
Brunel, N., & Latham, P. E. (2003). Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation, 15, 2281–2306.
Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. J. Theor. Biol., 195, 87–95.
Burak, Y., Lewallen, S., & Sompolinsky, H. (2009). Stimulus-dependent correlations in threshold-crossing spiking neurons. Neural Computation, 21(8), 2269–2308.
Cai, D., Rangan, A. V., & McLaughlin, D. W. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. PNAS, 102(16), 5868–5873.
Camera, G., Giugliano, M., Senn, W., & Fusi, S. (2008). The response of cortical neurons to in vivo-like input current: Theory and experiment I. Noisy inputs with stationary statistics. Biol. Cybern., 99, 279–301.
Carandini, M. (2004). Amplification of trial-to-trial response variability by neurons in visual cortex. PLoS Biol., 2(9), e264.
Chizhov, V., & Graham, L. J. (2008). Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Physical Review E, 77, 011910.
Cox, D. R., & Lewis, P. A. W. (1966). The statistical analysis of series of events. London: Methuen.
de la Rocha, J., Doiron, B., Shea-Brown, E., Josic, K., & Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. Nature, 448, 802–807.
Destexhe, A., & Paré, D. (1999). Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J. Neurophysiol., 81(4), 1531–1547.
Destexhe, A., Rudolph, M., Fellous, J. M., & Sejnowski, T. J. (2001). Fluctuating synaptic conductances recreate in vivo–like activity in neocortical neurons. Neuroscience, 107, 13–24.
Doering, C. R., Hagan, P. S., & Levermore, C. D. (1987). Bistability driven by weakly colored gaussian noise: The Fokker-Planck equation boundary layer and mean first-passage times. Physical Review Letters, 59(19), 2129–2132.
Ermentrout, G. B., & Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math., 46, 233–253.
Fourcaud, N., & Brunel, N. (2002). Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Computation, 14, 2057–2110.
Gerstner, W. (1999). Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, 12(1), 43–89.
Hansel, D., & Mato, G. (2003). Asynchronous states and the emergence of synchrony in large networks of interacting excitatory and inhibitory neurons. Neural Computation, 15, 1–56.
Hirsch, J. A., Alonso, J.-M., Reid, R. C., & Martinez, L. M. (1998). Synaptic integration in striate cortical simple cells. J. Neuroscience, 18(22), 9517–9528.
Kohn, A., & Smith, M. A. (2005). Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. J. Neurosci., 25(14), 3661–3673.
Kumar, A., Schrader, S., Aertsen, A., & Rotter, S. (2008). The high-conductance state of cortical networks. Neural Computation, 20(1), 1–43.
Lindner, B., & Schimansky-Geier, L. (2001). Transmission of noise coded versus additive signals through a neuronal ensemble. Physical Review Letters, 86(14), 2934–2937.
Middleton, J. W., Chacron, M. J., Lindner, B., & Longtin, A. (2003). Firing statistics of a neuron model driven by long-range correlated noise. Phys. Rev. E, 68, 021920.
Moreno, R., de la Rocha, J., Renart, A., & Parga, N. (2002). Response of spiking neurons to correlated inputs. Physical Review Letters, 89(28), 288101.
Moreno-Bote, R., & Parga, N. (2004). Role of synaptic filtering on the firing response of simple model neurons. Physical Review Letters, 92(2), 028102.
Moreno-Bote, R., & Parga, N. (2005). Membrane potential and response properties of populations of cortical neurons in the high conductance state. Physical Review Letters, 94, 088103.
Moreno-Bote, R., & Parga, N. (2006). Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Physical Review Letters, 96, 028101.
Moreno-Bote, R., Renart, A., & Parga, N. (2008). Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons. Neural Computation, 20, 1651–1705.
Muller, E., Buesing, L., Schemmel, J., & Meier, K. (2007). Spike-frequency adapting neural ensembles: Beyond mean adaptation and renewal theories. Neural Computation, 19, 2958–3010.
Myme, C. I. O., Sugino, K., Turrigiano, G. G., & Nelson, B. (2003). The NMDA-to-AMPA ratio at synapses onto layer 2/3 pyramidal neurons is conserved across prefrontal and visual cortices. J. Neurophysiol., 90, 771–779.
Okada, M., Onodera, K., van Renterghem, C., & Takahashi, T. (2000). Functional correlation of GABA-A receptor α subunits expression with the properties of IPSCs in the developing thalamus. J. Neurosci., 20(6), 2202–2208.
Otis, T. S., De Koninck, Y., & Mody, I. (1993). Characterization of synaptically elicited GABAB responses using patch-clamp recording in rat hippocampal slices. J. Physiology, 463(1), 391–407.
Paré, D., Shink, E., Gaudreau, H., Destexhe, A., & Lang, E. (1998). Impact of spontaneous synaptic activity on the resting properties of cat neocortical pyramidal neurons in vivo. J. Neurophysiol., 79(3), 1450–1460.
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. I., et al. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.
Renart, A., Moreno-Bote, R., Wang, X.-J., & Parga, N. (2007). Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Computation, 19, 1–46.
Ricciardi, L. M. (1977). Diffusion processes and related topics in biology. Berlin: Springer-Verlag.
Richardson, M. J. E. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Physical Review E, 76(2 Pt. 1), 021919.
Riehle, A., Grun, S., Diesmann, M., & Aertsen, A. (1997). Spike synchronization and rate modulation differentially involved in motor cortical function. Science, 278, 1950–1953.
Risken, H. (1989). The Fokker-Planck equation (2nd ed.). Berlin: Springer-Verlag.
Salinas, E., & Sejnowski, T. J. (2002). Integrate-and-fire neurons driven by correlated stochastic input. Neural Computation, 14, 2111–2155.
Schwalger, T., & Schimansky-Geier, L. (2008). Interspike interval statistics of a leaky integrate-and-fire neuron driven by gaussian noise with large correlation times. Physical Review E, 77, 031914.
Shelley, M., McLaughlin, D., Shapley, R., & Wielaard, D. J. (2002). States of high conductance in a large-scale model of the visual cortex. Journal of Comp. Neurosci., 13, 93–109.
Silver, R. A., Traynelis, S. F., & Cull-Candy, S. G. (1992). Rapid-time-course miniature and evoked excitatory currents at cerebellar synapses in situ. Nature (Lond.), 355, 163–166.
Svirskis, G., & Hounsgaard, J. (2003). Influence of membrane properties on spike synchronization in neurons: Theory and experiments. Network: Comput. Neural Syst., 14, 747–763.
Svirskis, G., & Rinzel, J. (2000). Influence of temporal correlation of synaptic input on the rate and variability of firing in neurons. Biophysical Journal, 79, 629–637.
Tchumatchenko, T., Malyshev, A., Geisel, T., Volgushev, M., & Wolf, F. (2010). Correlations and synchrony in threshold neuron models. Phys. Rev. Lett., 104, 058102.
Umemiya, M., Senda, M., & Murphy, T. H. (1999). Behavior of NMDA and AMPA receptor-mediated miniature EPSCs of rat cortical neuron synapses identified by calcium imaging. J. Physiology, 521, 113–122.
Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci., 25(46), 10786–10795.
Wang, X.-J. (1999). Synaptic basis of cortical persistent activity: The importance of NMDA receptors to working memory. J. Neurosci., 19, 958–963.
Wang, X.-J., & Buzsaki, G. (1996). Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. Journal of Neuroscience, 16(20), 6402–6413.
Xiang, Z., Huguenard, J. R., & Prince, D. A. (1998). GABA-A receptor-mediated currents in interneurons and pyramidal cells of rat visual cortex. J. Physiology, 506, 715–730.
Zamanillo, D., Sprengel, R., Hvalby, O., Jensen, V., Burnashev, N., Rozov, A., et al. (1999). Importance of AMPA receptors for hippocampal synaptic plasticity but not for spatial learning. Science, 284, 1805–1811.