## Abstract

Delivery of neurotransmitter at a synapse produces a current that flows through the membrane and is transmitted into the soma of the neuron, where it is integrated. The decay time of the current depends on the type of synaptic receptor and ranges from a few milliseconds (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of this variety of synaptic timescales, several of which coexist in the same neuron, is at present not understood. A prime question to answer is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., the noise is not required to be gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). The theory determines with exact analytical expressions the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants be longer than the mean interspike interval or that the noise have small variance. The distribution of the potential for general IF neurons is also characterized.
Our results provide powerful analytical tools that can allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.

## 1. Introduction

A neuron communicates with other neurons by generating synaptic currents through the corresponding synapses. The nature of these events depends on the presynaptic neurotransmitter and the postsynaptic receptors. Several types of receptors can coexist in the same neuron, each with its characteristic timescale. Within the excitatory class, AMPA-type synaptic receptors open during 1–5 ms (Silver, Traynelis, & Cull-Candy, 1992; Barbour, Keller, Llano, & Marty, 1994; Umemiya, Senda, & Murphy, 1999; Angulo, Rossier, & Audinat, 1999; Zamanillo et al., 1999), while the activation of NMDA receptors lasts for about 100 ms (see, e.g., Umemiya et al., 1999; Myme, Sugino, Turrigiano, & Nelson, 2003). Both receptor types are activated by release of the neurotransmitter glutamate from glutamatergic presynaptic cells. Similarly, there are also fast and slow inhibitory synapses, with timescales of ∼5 to 10 ms (Xiang, Huguenard, & Prince, 1998; Banks, Li, & Pearce, 1998; Okada, Onodera, van Renterghem, & Takahashi, 2000) and ∼100 ms (Otis, De Koninck, & Mody, 1993), respectively, which are activated by release of GABA from GABAergic presynaptic cells. Therefore, spikes arriving at the presynaptic terminals can initiate a variety of synaptic currents in the postsynaptic neuron, with different time courses and lasting for quite different time intervals. The variety in duration of spike aftereffects on postsynaptic neurons could have important computational consequences, because it could allow the same information to be present in the neuron at different timescales. In a similar way, it could provide a basis for transmitting and combining information carried at several temporal resolutions.

In addition, the effect of an impinging spike on the membrane potential of a neuron
depends on the membrane time constant of the neuron. While in resting conditions,
the membrane time constant is quite large (τ_{m}∼ 20 ms; see e.g., Paré, Shink, Gaudreau, Destexhe, & Lang, 1998), during intense presynaptic
background activity or intense stimulation, its value can be reduced
severalfold (Bernander, Douglas, Martin, & Koch, 1991; Paré et al., 1998;
Destexhe & Paré, 1999;
Borg-Graham, Monier, & Frégnac, 1998; Hirsch, Alonso, Reid, & Martinez, 1998; Anderson, Carandini, & Ferster, 2000). Thus, depending on the background activity and the
nature of the stimulation, the same synapse can produce different effects on the
neuron. It is then reasonable that the synaptic time constants τ_{s} have to be considered in relation to the effective membrane time constant:
what matters for the neuron behavior is the ratio τ_{s}/τ_{m}. According to this idea, synaptic filters can be classified as slow or fast,
depending on whether that ratio is larger or smaller than one, respectively.

The above considerations imply that it is important to know how the presence of
synaptic filters with timescales longer or shorter than the membrane time constant
affects the neuron's firing statistics. Previous work on LIF neurons has determined
analytically their firing rate when synapses have long time constants (Moreno-Bote
& Parga, 2004), as well as when they have
short time constants (Ricciardi, 1977; Brunel
& Sergi, 1998; Fourcaud & Brunel, 2002; La Camera, Giugliano, Senn, &
Fusi, 2008). By interpolating between these
two limits, an analytical expression for the firing rate exists that determines its
value for all τ_{s} (Moreno-Bote & Parga, 2004).
Neurons with both fast and slow synaptic filtering have also been studied in
Moreno-Bote and Parga (2004). Further
developments have addressed the case of conductance-based IF neurons (Moreno-Bote
& Parga, 2005) and the effect of input
correlations on a pair of LIF neurons (Moreno-Bote & Parga, 2006). The expressions for the firing rate are exact in
the specified limits of short or long τ_{s} compared to τ_{m} and do not require further assumptions about the amplitude of the noise. A
related important issue is whether the ratio between synaptic and membrane
time constants determines the operating regime of the neurons and their computational
capabilities. For instance, it is known that the firing variability depends on that
ratio (Svirskis & Rinzel, 2000;
Moreno-Bote & Parga, 2004; Muller,
Buesing, Schemmel, & Meier, 2007; Chizhov
& Graham, 2008). Also, in neural networks
in which the effective membrane time constant of the neurons can become very short,
it would be very useful to have analytical predictions for the firing rate (Shelley,
McLaughlin, Shapley, & Wielaard, 2002;
Moreno-Bote & Parga, 2005; Cai, Rangan,
& McLaughlin, 2005; Apfaltrer, Ly, &
Tranchina, 2006).

Here we introduce a theory to describe the firing rate of general IF neurons receiving arbitrarily filtered inputs, which extends previous results for LIF neurons with gaussian inputs (Moreno-Bote & Parga, 2004, 2005, 2006; Brunel & Sergi, 1998; Fourcaud & Brunel, 2002). The formalism is presented in a detailed, didactic manner along with useful examples. We first derive the expressions for the firing rate and spike correlation function in a qualitative way using the adiabatic approach introduced in Moreno-Bote and Parga (2004). The formal derivation of the expression for the firing rate, valid for arbitrary IF neurons with arbitrary input structure in the limit of long synaptic timescale, with or without additional fast filters, is provided in the appendix (finer details for the LIF neuron case are presented in the Supporting Information, available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2010.06-09-1036). Then we analyze the expressions for the firing rate and correlation function of LIF neurons and use them to predict the input-output transfer function of individual neurons and the synchronous firing pattern of pairs of neurons receiving both common and independent sources of inputs. We then apply the formalism to describe the firing rate of QIF and NTIF neurons. Finally, we provide an exhaustive list of the analytical expressions for the firing rate and correlation function of general IF neurons and for the particular cases of LIF, QIF, and NTIF neurons.

## 2. Results

### 2.1. The Adiabatic Approach.

We are interested in describing the firing statistics of simple but realistic neuron models receiving temporally correlated inputs. In this section, we study in a general way the response properties of neurons with randomly varying inputs. We apply the results to completely determine the firing rate of IF neurons driven by stochastic currents with a long correlation timescale. The firing rate in this limit, called the adiabatic firing rate, is particularly simple and can be derived by qualitative means. Thus, we leave its formal derivation for the appendix. The adiabatic firing rate is compared to another candidate simple expression, and we show that the latter gives worse fits of the simulated data. Then the case of fast and slow stochastic inputs is considered. We finally show that our formalism can be extended to study the correlated firing of a pair of neurons receiving common as well as independent sources of noise.

#### 2.1.1. The Adiabatic Firing Rate.

We start by considering a neuron model in which the firing rate as a function
of a constant input current *I* can be computed. We call this
quantity ν(*I*). Under constant stimulation and for
deterministic neurons, the rate ν(*I*) describes
completely the statistics of the output spike train except for an initial
phase: the output spike train is a periodic pattern with interspike
intervals of length *T*(*I*) =
1/ν(*I*). The firing rate for a fixed input
current can be very easily calculated for IF-like neurons. However, this
idea can be extended to any other neuron model or real neurons in which the
function ν(*I*) can be computed numerically or
experimentally.

Because we are ultimately interested in the response of neurons to stochastic
inputs, the steady-state description alone does not suffice, yet it can be
easily extended to the case in which inputs change slowly compared to the
dynamics of the neuron under consideration—for instance, to an LIF
neuron with membrane time constant τ_{m} and gaussian white noise filtered with synaptic time constant τ_{s} ≫ τ_{m}. For more complex neurons, τ_{s} should also be larger than all other timescales present in the
system. We will show that although the timescale separation condition could
seem restrictive, the equations obtained are applicable even when the input
changes as fast as the dynamics of the neuron.

Let us assume that the condition that the neuron's dynamics is faster than
the synaptic time constant is satisfied. Then, during a time interval
Δ*t* shorter than τ_{s}, the current *I*(*t*) will be
reasonably constant. Therefore, during that interval, the neuron will fire
with a constant rate ν(*I*(*t*)) and
ν(*I*(*t*)) ×
Δ*t* spikes will be emitted. Since this spike
count can be smaller than one,
ν(*I*(*t*)) ×
Δ*t* < 1,
ν(*I*(*t*)) needs to be interpreted
as a firing probability rather than a firing rate.

Let *P*(*I*) be the distribution of input currents, not necessarily time independent. For instance, in an Ornstein-Uhlenbeck process, the distribution is a time-independent gaussian (Risken, 1989), but in general it can be skewed, bimodal, or flat, or have any other shape, and it can depend on time. Hence, the probability density that the neuron emits a spike can be computed by averaging the rate ν(*I*(*t*)) with the probability distribution of currents as

ν = ∫ *dI* ν(*I*) *P*(*I*).     (2.1)

We call this expression the *adiabatic firing rate*, in analogy with the timescale separation technique introduced in the early developments of quantum mechanics by Born and Oppenheimer (1927) to deal with the slow motion of the nuclei in molecules. (See also Risken, 1989, for later applications of this idea in the field of stochastic dynamical systems to eliminate fast variables.) This equation shows that the firing rate of a neuron with slow stochastic inputs can be estimated using the input-to-rate transfer function of the neuron for stationary inputs and the distribution of the inputs. The calculations required to compute equation 2.1 are illustrated in Figure 1B. A complete proof of this result for general IF models and arbitrary forms of slow stochastic input processes is given in the appendix. Note that the current does not need to be a scalar.

The adiabatic expression of the firing rate is simple and generally valid
under the following conditions. First, the neuron should have a known
sustained response ν(*I*) for constant input current.
Second, the current has to be a slow enough stochastic process with known
distribution *P*(*I*). For stationary input
statistics, *P*(*I*) is the steady-state
probability distribution of the current.
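As a concrete illustration, equation 2.1 can be evaluated numerically by Monte Carlo: draw currents from *P*(*I*) and average the stationary transfer function ν(*I*). The sketch below uses the standard LIF transfer function and a gaussian current distribution; all numerical parameter values (τ_{m}, Θ, *H*, μ, σ_{I}) are illustrative assumptions, not values from the paper.

```python
import math
import random

# Illustrative parameters (assumed, not taken from the paper's figures):
tau_m = 0.010             # membrane time constant (s)
theta, H = 1.0, 0.0       # threshold and reset potentials
mu, sigma_I = 80.0, 40.0  # mean and std of the slow current distribution P(I)

def nu(I):
    """Stationary LIF rate for a constant current I (zero refractory period)."""
    if tau_m * I <= theta:
        return 0.0  # subthreshold constant current: the neuron never fires
    return 1.0 / (tau_m * math.log((tau_m * I - H) / (tau_m * I - theta)))

# Equation 2.1: average nu(I) over the current distribution P(I).
random.seed(0)
samples = [random.gauss(mu, sigma_I) for _ in range(200_000)]
nu_adiabatic = sum(nu(I) for I in samples) / len(samples)
print(f"adiabatic firing rate ~ {nu_adiabatic:.1f} Hz")
```

The same average can be computed by quadrature; Monte Carlo is shown here only because it extends directly to skewed, bimodal, or otherwise nongaussian current distributions, as noted in the text.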

The adiabatic firing rate should be compared with the time-averaged firing rate obtained by averaging the instantaneous rate ν(*I*(*t*)) over time. Here, the time window *T* over which the firing rate is averaged is much longer than the correlation time, so that many independent realizations of the current *I*(*t*) occur. In this work, we focus on inputs with stationary statistics and finite correlation time.

#### 2.1.2. A Suboptimal Alternative Expression for the Firing Rate.

If a neuron is driven by a constant current *I*, it will emit spikes at intervals *T*(*I*) = 1/ν(*I*). This quantity is calculated as the inverse of the firing rate ν(*I*) (since for fixed current, the spike train is periodic). Then one could estimate the mean interspike interval (ISI), denoted ⟨*T*⟩, by averaging *T*(*I*) with the known probability *P*(*I*) as

⟨*T*⟩ = ∫ *dI* *T*(*I*) *P*(*I*).     (2.3)

From this mean ISI, the firing rate of the neuron can be estimated as ν = 1/⟨*T*⟩, which is different from that given by the adiabatic expression, equation 2.1. We call equation 2.3 the fake adiabatic expression for the firing rate.

The adiabatic firing rate, equation 2.1, matches the simulated firing rates very well (see below). Estimates based on the mean ISI ⟨*T*⟩, equation 2.3, or variations of it (see equation 2.22), on the contrary, deviate systematically from the true firing rate. This is because averaging *T*(*I*) with *P*(*I*) introduces biases in the estimate of the mean ISI due to the oversampling of long ISIs (i.e., *T*(*I*)s generated with weak input currents) relative to short ISIs (i.e., *T*(*I*)s generated with strong input currents), a problem known as biased sampling (Cox & Lewis, 1966; Middleton, Chacron, Lindner, & Longtin, 2003). However, when the bias in equation 2.3 is corrected appropriately, the original expression for the firing rate in equation 2.1 is recovered, as expected. To see this, note that the implicit assumption in equation 2.3 that the current *I* is constant during each ISI, of duration *T*(*I*), leads to the result that the sampled currents do not follow the desired distribution *P*(*I*), but rather *c* *T*(*I*) *P*(*I*), where *c* is a normalization constant. This suggests that the correction for the bias in equation 2.3 consists of replacing the distribution *P*(*I*) by *C* *P*(*I*) *T*^{−1}(*I*), where *C* is a normalization constant. With this replacement, the currents are distributed according to *P*(*I*). Hence, the corrected expression for the mean ISI becomes

⟨*T*⟩ = ∫ *dI* *T*(*I*) *C* *P*(*I*) *T*^{−1}(*I*) = [∫ *dI* ν(*I*) *P*(*I*)]^{−1},

which is identical to the adiabatic expression of the firing rate in equation 2.1. The above reasoning, however, requires that *T*(*I*) is finite for all *I*, which is a very restrictive condition. For instance, for LIF neurons in both the sub- and suprathreshold regimes, there are values of the current for which *T*(*I*) becomes infinite (see section 2.2), making the derivation presented above inappropriate in this case. On the contrary, we will show that equation 2.1 holds true even when *T*(*I*) becomes infinite for some set of currents (i.e., ν(*I*) becomes zero). In the following, we use equation 2.3 and a variation of it, equation 2.22, to highlight how large the biased sampling effect is on the estimation of the firing rate.
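The size of the biased-sampling error can already be seen in a toy example with a two-valued current distribution (both values suprathreshold, so that *T*(*I*) is finite everywhere). The parameter values below are hypothetical; the comparison illustrates that 1/⟨*T*⟩ from equation 2.3 systematically underestimates the adiabatic rate of equation 2.1, as guaranteed by Jensen's inequality:

```python
import math

tau_m, theta, H = 0.010, 1.0, 0.0  # illustrative LIF parameters (assumed, not from the paper)

def T(I):
    """Interspike interval of an LIF neuron for a constant suprathreshold current I."""
    return tau_m * math.log((tau_m * I - H) / (tau_m * I - theta))

# A toy two-valued current distribution P(I); both currents are suprathreshold.
currents, probs = [120.0, 200.0], [0.5, 0.5]

nu_adiabatic = sum(p / T(I) for I, p in zip(currents, probs))    # equation 2.1: <1/T>
nu_fake = 1.0 / sum(p * T(I) for I, p in zip(currents, probs))   # equation 2.3: 1/<T>

# Jensen's inequality gives 1/<T> <= <1/T>: the fake estimate is biased downward.
print(nu_adiabatic, nu_fake)
```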

#### 2.1.3. Fast Noise and Slow Noise.

An important case arises when fast currents are present. For instance, AMPA synaptic receptors receiving Poisson spike trains will produce current fluctuations with a correlation timescale of a few milliseconds, which are better modeled as fast rather than slow noise.

Let ν_{fast}(*I*) be the firing rate of a neuron receiving a constant current *I* where all sources of fast noise have been averaged out. The function ν_{fast}(*I*) does not have a hard threshold below which firing is forbidden; rather, it is a smooth function of the input *I* (see Figure 1C). This is because the presence of fast noise allows firing even when *I* is below the firing threshold of the neuron. Under this condition, the firing rate of a neuron receiving both fast noise and slow noise with a known distribution *P*(*I*) can be calculated as

ν = ∫ *dI* ν_{fast}(*I*) *P*(*I*),

as shown in the appendix.
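A minimal sketch of how a fast filter smooths the transfer function: averaging the hard-threshold LIF rate ν(*I*) over gaussian fast fluctuations yields a ν_{fast}(*I*) that is positive even below the threshold current *I*_{min} = Θ/τ_{m}. All parameter values are assumptions for illustration:

```python
import math
import random

tau_m, theta, H = 0.010, 1.0, 0.0  # assumed LIF parameters
sigma_fast = 20.0                  # std of the fast current fluctuations (assumed)

def nu(I):
    """Hard-threshold LIF rate: strictly zero for subthreshold constant currents."""
    if tau_m * I <= theta:
        return 0.0
    return 1.0 / (tau_m * math.log((tau_m * I - H) / (tau_m * I - theta)))

def nu_fast(I, n=50_000, seed=1):
    """Transfer function with the fast noise averaged out: a smooth function of I."""
    rng = random.Random(seed)
    return sum(nu(I + sigma_fast * rng.gauss(0.0, 1.0)) for _ in range(n)) / n

# Below I_min = theta / tau_m = 100, nu(I) is zero, yet nu_fast(I) stays positive:
# fast fluctuations let the neuron fire even for a subthreshold mean input.
print(nu(90.0), nu_fast(90.0))
```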

#### 2.1.4. Cross-Correlation Function.

The formalism that we have described is not limited to the study of the first-order statistics of the neuron's firing, but it can also be extended to account for the second-order statistics. Here we find the two-point correlation function of the output spike trains of a pair of IF neurons receiving arbitrary forms of correlated and independent inputs. The equations are derived in an intuitive way. Finally, simpler equations are obtained for weakly correlated signals.

Consider two neurons that fire with rates ν_{1}(*I*_{tot,1}) and ν_{2}(*I*_{tot,2}) when they receive constant input currents *I*_{tot,1} and *I*_{tot,2}. Let us assume that the neurons receive a common stochastic current, *I*_{c}(*t*), as well as independent currents *I*_{1}(*t*) and *I*_{2}(*t*), as shown in Figure 2A. Therefore, the total currents to neurons 1 and 2 are *I*_{tot,1}(*t*) = *I*_{1}(*t*) + *I*_{c}(*t*) and *I*_{tot,2}(*t*) = *I*_{2}(*t*) + *I*_{c}(*t*), respectively. Since the two neurons have a common stochastic input, *I*_{c}(*t*), they will fire in a correlated way: if neuron 1 emits a spike at time *t*, neuron 2 will fire a spike at time *t*′ more or less likely than expected by chance (chance probability here means the firing rate of neuron 2). The cross-correlation function of the output spike trains of the neurons is defined as

*C*(*t*, *t*′) = ⟨ Σ_{i} Σ_{j} δ(*t* − *t*_{i}) δ(*t*′ − *t*_{j}) ⟩,     (2.6)

where *t*_{i(j)} are the spike times from neuron 1 (2), the sums extend over all output spikes, and the average is across all possible realizations of the output spikes (see, e.g., Riehle, Grun, Diesmann, & Aertsen, 1997; Bair, Zohary, & Newsome, 2001). The cross-correlation function describes the synchronization pattern between the two spike trains up to second-order statistics, and it expresses the joint probability density that neuron 1 fires at time *t* and that neuron 2 fires at time *t*′. When the two neurons fire independently, the cross-correlation function becomes the product of their firing rates, ν_{1}ν_{2}. In general, however, the cross-correlation function differs from the pure product of the firing rates of the two neurons.

As usual, we assume that the current fluctuations are slower than the membrane time constant of the neurons. The two-point probability density of the common current is some known function *P*(*I*_{c}, *t*; *I*′_{c}, *t*′), which specifies the probability density of having the common current with value *I*_{c} at time *t* and with value *I*′_{c} at time *t*′ (primes denote quantities at time *t*′).

Suppose that neuron 1 receives the current *I*_{1} + *I*_{c} at time *t*, while neuron 2 receives the current *I*′_{2} + *I*′_{c} at time *t*′, as shown in Figure 2B. Since the current fluctuations are slow, at those times the neurons fire with probabilities ν_{1}(*I*_{1} + *I*_{c}) and ν_{2}(*I*′_{2} + *I*′_{c}), respectively. Averaging the product of these instantaneous rates over the current distributions gives

*C*(*t*, *t*′) = ∫ *dI*_{1} *dI*′_{2} *dI*_{c} *dI*′_{c} ν_{1}(*I*_{1} + *I*_{c}) ν_{2}(*I*′_{2} + *I*′_{c}) *P*(*I*_{1}) *P*(*I*′_{2}) *P*(*I*_{c}, *t*; *I*′_{c}, *t*′).     (2.7)

Equation 2.7 simply states that the two-point correlation function of the output spike trains is the average of the product of the instantaneous firing rates of the two neurons evaluated at times *t* and *t*′. This average of instantaneous firing rates over synaptic currents approximates the average over stochastic realizations of the spikes in equation 2.6.

Since *I*_{1} and *I*′_{2} are independent, the integrals over those two variables with the factorized distribution *P*(*I*_{1}) *P*(*I*′_{2}) in equation 2.7 can be computed first, obtaining

*C*(*t*, *t*′) = ∫ *dI*_{c} *dI*′_{c} ν̄_{1}(*I*_{c}) ν̄_{2}(*I*′_{c}) *P*(*I*_{c}, *t*; *I*′_{c}, *t*′),     (2.8)

where

ν̄_{i}(*I*_{c}) = ∫ *dI*_{i} ν_{i}(*I*_{i} + *I*_{c}) *P*(*I*_{i})     (2.9)

is the firing rate of neuron *i* (*i* = 1, 2) averaged over its independent current *I*_{i} for fixed common current *I*_{c}.

This intuitive derivation of the cross-correlation function in the adiabatic approach is presented here for the first time, and it can be shown to be identical to the one obtained in Moreno-Bote and Parga (2006) for the case of LIF neurons receiving filtered white noise (see below). It is worth emphasizing that equation 2.7 can be applied to more general models of spiking neurons and to rather general forms of noise distributions and noise correlation structure.

When the common fluctuations are weak, we can expand the averaged rates ν̄_{i} around the mean common current μ_{c}, in powers of *I*_{c} − μ_{c} and *I*′_{c} − μ_{c}, and take the linear approximation to obtain

ν̄_{i}(*I*_{c}) ≈ ν̄_{i}(μ_{c}) + ν̄′_{i}(μ_{c}) (*I*_{c} − μ_{c})

(here ν̄′_{i} is the derivative of ν̄_{i} evaluated at the mean current). After averaging over *I*_{c} and *I*′_{c} we find

*C*(*t*, *t*′) ≈ ν̄_{1}(μ_{c}) ν̄_{2}(μ_{c}) + ν̄′_{1}(μ_{c}) ν̄′_{2}(μ_{c}) ∫ *dI*_{c} *dI*′_{c} (*I*_{c} − μ_{c})(*I*′_{c} − μ_{c}) *P*(*I*_{c}, *t*; *I*′_{c}, *t*′).     (2.10)

By noting that the integral in the second term on the right-hand side is the cross-correlation function of the common input, denoted *C*_{I,c}(*t*, *t*′), equation 2.10 can be written simply as

*C*(*t*, *t*′) ≈ ν̄_{1}ν̄_{2} + ν̄′_{1}ν̄′_{2} *C*_{I,c}(*t*, *t*′).     (2.11)

The first term in the sum is the chance probability of observing spikes emitted at times *t* and *t*′ from neurons 1 and 2, respectively. The second term expresses the excess probability above that expected by chance that the spikes are emitted at those times. It is concluded that the autocorrelation of the common fluctuating input is linearly transformed into the cross-correlation of the neurons' output spike trains. For instance, for an Ornstein-Uhlenbeck process with timescale τ_{s} and variance σ^{2}_{I,c}, one obtains *C*_{I,c}(*t*, *t*′) = σ^{2}_{I,c} exp(−|*t* − *t*′|/τ_{s}), and therefore the cross-correlation function of the output spike trains is also an exponential with the same timescale, whose amplitude increases with the square of the common noise amplitude. (It is easy to see that the cross-correlation function depends in general only on the time difference *t* − *t*′ for stochastic inputs with stationary statistics.)
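A sketch of the linear approximation, equation 2.11, for an Ornstein-Uhlenbeck common input. The averaged rate ν̄ and its derivative are estimated numerically from a toy LIF transfer function; all parameter values are hypothetical choices for illustration:

```python
import math
import random

tau_m, theta, H = 0.010, 1.0, 0.0  # assumed LIF parameters
tau_s = 0.020                      # common-noise correlation time (s)
mu_c, sigma_c = 30.0, 10.0         # mean and std of the common current (assumed)
mu_ind, sigma_ind = 60.0, 30.0     # mean and std of each independent current (assumed)

def nu(I):
    """Stationary LIF rate for a constant current I."""
    if tau_m * I <= theta:
        return 0.0
    return 1.0 / (tau_m * math.log((tau_m * I - H) / (tau_m * I - theta)))

def nu_bar(I_c, n=100_000, seed=2):
    """Rate averaged over the independent current, for a fixed common current I_c."""
    rng = random.Random(seed)  # same seed: common random numbers for finite differences
    return sum(nu(I_c + rng.gauss(mu_ind, sigma_ind)) for _ in range(n)) / n

rate = nu_bar(mu_c)                                      # chance level (identical neurons)
gain = (nu_bar(mu_c + 1.0) - nu_bar(mu_c - 1.0)) / 2.0   # derivative by finite differences

def C(delta):
    """Linear approximation, equation 2.11, for an OU common input."""
    C_input = sigma_c**2 * math.exp(-abs(delta) / tau_s)  # OU input autocorrelation
    return rate**2 + gain**2 * C_input

# The excess correlation decays toward chance level rate**2 on the timescale tau_s.
print(C(0.0), C(0.050))
```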

### 2.2. Firing Rate and Correlations of LIF Neurons.

Here we summarize the analytical results for the case of an LIF neuron that follow from the general expressions presented above and are formally derived in the appendix. A more detailed exposition of the LIF neuron case is provided in the Supporting Information.

#### 2.2.1. LIF Neurons with Slow Filters.

We start by considering the case of synaptic receptors with long time
constants. This case is the relevant one to study the dynamics of neurons in
the so-called high-conductance regime, in which the membrane time constant
can become shorter or comparable to the synaptic time constants. This case
naturally arises also when strongly fluctuating GABA_{A} synaptic
currents pass through a neuronal membrane with relatively short τ_{m}. It also accounts for the case of neurons strongly innervated by NMDA
or GABA_{B} receptors, hypothesized to be crucial to stabilize
working-memory states (Wang, 1999).
Here we will focus on synaptic receptors with a single timescale, while the
more general case with two or more slow synapses with different timescales
is considered in the Supporting Information.

The membrane potential *V* of an LIF neuron obeys

τ_{m} (*dV*/*dt*) = −*V* + τ_{m} *I*(*t*),     (2.13)

where τ_{m} is the membrane time constant and *I*(*t*) is the synaptic current. In the model, a spike is evoked when *V* reaches the threshold Θ, and then the voltage is reset to *H*. Without loss of generality, the resting potential is set at *V* = 0. We take the absolute refractory period to be zero.

The current is modeled as gaussian white noise filtered by the synapse,

τ_{s} (*dI*/*dt*) = −(*I* − μ) + σ η(*t*),     (2.14)

where η(*t*) is a white noise process with zero mean and unit variance, ⟨η(*t*)⟩ = 0 and ⟨η(*t*)η(*t*′)⟩ = δ(*t* − *t*′), μ is the mean current, and σ^{2}_{I} = σ^{2}/2τ_{s} is the variance of the current. The filter introduces exponential correlations in the current with a correlation time τ_{s}, and therefore the noisy input to the LIF neuron cannot be described by a white noise process. The process defined in equation 2.14 is known as the Ornstein-Uhlenbeck process (Risken, 1989). Note that the variance of the current can be obtained from the correlation function evaluated at *t* = *t*′.
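The Ornstein-Uhlenbeck current of equation 2.14 can be simulated with a simple Euler-Maruyama scheme, checking that the stationary variance matches σ^{2}/2τ_{s}; the parameter values below are arbitrary illustrations:

```python
import math
import random

# Euler-Maruyama integration of equation 2.14 (parameter values are illustrative).
tau_s = 0.020          # synaptic time constant (s)
mu, sigma = 0.0, 1.0   # mean current and noise strength
dt = 1e-4              # time step, much smaller than tau_s

rng = random.Random(3)
I = mu
n, s, s2 = 0, 0.0, 0.0
for step in range(2_000_000):
    I += (-(I - mu) / tau_s) * dt + (sigma / tau_s) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    if step > 10_000:  # discard the initial transient (~50 correlation times)
        n += 1
        s += I
        s2 += I * I

var_est = s2 / n - (s / n) ** 2
var_theory = sigma ** 2 / (2.0 * tau_s)  # stationary variance sigma_I^2 = sigma^2 / 2 tau_s
print(var_est, var_theory)               # should agree within a few percent
```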

To compute the adiabatic firing rate, equation 2.1, two quantities are needed. First, the stationary distribution of the current is a gaussian with mean μ and variance σ^{2}_{I} = σ^{2}/2τ_{s},

*P*(*I*) = (2πσ^{2}_{I})^{−1/2} exp(−(*I* − μ)^{2}/2σ^{2}_{I}).

Second, solving equation 2.13 for constant current *I* with initial condition *H* and final condition Θ leads to the expression of the instantaneous firing rate,

ν(*I*) = [τ_{m} ln((τ_{m}*I* − *H*)/(τ_{m}*I* − Θ))]^{−1}

for *I* > *I*_{min} = Θ/τ_{m}, and zero otherwise. Inserting the two above quantities into equation 2.1, one gets that the firing rate is

ν = ∫_{*I*_{min}}^{∞} *dI* *P*(*I*) ν(*I*).     (2.18)

It is convenient to introduce the normalized current *z* = (*I* − μ)/σ_{I}, which is distributed as a gaussian with zero mean and unit variance (see the appendix). In the new variables, the reset and threshold potentials are replaced by their normalized counterparts, and the firing rate in equation 2.18 becomes

ν = ∫_{*z*_{min}}^{∞} *Dz* ν_{0}(*z*),     (2.19)

where ν_{0}(*z*) is the instantaneous rate expressed in terms of *z*, *z*_{min} = (*I*_{min} − μ)/σ_{I}, and *Dz* = (2π)^{−1/2} e^{−*z*^{2}/2} *dz* is the gaussian measure.

In the suprathreshold regime (μτ_{m} > Θ), we find that the firing rate admits an expansion in powers of ϵ up to second order around the deterministic rate; the zeroth-order term is the rate of an LIF neuron driven by a noiseless current of intensity μ. An identical expression for the rate in the suprathreshold regime is found in the Supporting Information using the Fokker-Planck equation (FPE) associated with the variables (*x*, *z*), expanded in powers of ϵ. We also show in the Supporting Information that the firing rate in equation 2.19 does not admit an expansion in powers of ϵ in the subthreshold regime (μτ_{m} < Θ). This indicates that a naive expansion of the solution of the FPE in powers of ϵ will not work in all regimes. In the appendix for the general case, and in the Supporting Information for the LIF neuron, we show how to regularize the expansion and find an asymptotic solution valid for all regimes. The zeroth-order term in the regularized expansion of the exact rate of an LIF neuron equals equation 2.19.

The prediction of the firing rate given by the adiabatic approach, equation 2.19, has been compared
with simulation results in Figure 3. In
the left panel, the noise σ^{2} is kept fixed. The adiabatic
firing rate (bottom line) becomes exact when τ_{s} is much longer than the membrane time constant of the neuron, but it
also provides a good approximation when τ_{s} is comparable to τ_{m} = 10 ms. Only the case of subthreshold neurons has been shown
here, but similar fits are found for suprathreshold neurons. In the right
panel, the noise has been increased linearly with the timescale of the noise
as σ^{2} = σ^{2}_{0}τ_{s} for some fixed σ^{2}_{0}. Since the adiabatic firing rate depends on the noise level
through the ratio σ^{2}/τ_{s} (recall the definitions of ϵ, *z*_{min}, and the normalized thresholds), by increasing
σ^{2} linearly with τ_{s}, the adiabatic firing rate does not change (horizontal lines). The
simulation results show that the firing rate approaches the adiabatic limit
when τ_{s} becomes long and that the adiabatic rate provides a good prediction
when τ_{s} ∼ 2τ_{m} = 20 ms, and an acceptable prediction even when τ_{s} = τ_{m} = 10 ms (the adiabatic rate accounts for 80% of the simulated
rate on average in the last case). These comparisons also show that the
adiabatic approach does not require that the noise amplitude is small.

#### 2.2.2. Range of Validity of the Adiabatic Approach.

It is important to note that the mean ISIs obtained in the left panel of Figure 3 are very long compared to the synaptic time constant used (e.g., a rate of 10 Hz equals a mean ISI of 100 ms, which is much longer than the synaptic time constant at that point, 20 ms). Therefore, our theory does not require that the fluctuations live for a long period of time compared to the mean ISI of the neuron (∼100 ms), but rather that they live for a time of the order of its membrane time constant (∼10 ms).

#### 2.2.3. Comparison with the Fake Adiabatic Approach.

In the subthreshold regime, the fake adiabatic expression, equation 2.3, predicts a zero firing rate for any τ_{s}, since there are values of the current for which the ISI becomes infinite and hence the mean ISI diverges. Although the zero value is correct in the long-τ_{s} limit, and it is also predicted by the adiabatic firing rate, it shows that equation 2.3 is strictly valid only in that limit. An improved version of the expression, which does not give a trivially erroneous result in the subthreshold regime, is one in which the probability distribution of *I* is renormalized to include only those currents *I* > *I*_{min} for which the ISI is finite, *T*(*I*) < ∞ (equation 2.22). However, in this case too, the prediction is worse than that given by the adiabatic rate, as shown in the left panel of Figure 3 (top line). Smaller but still substantial disagreements are obtained in the suprathreshold regime.

#### 2.2.4. Equivalent But Computationally Faster Implementation of the Adiabatic Firing Rate.

The adiabatic expression of the firing rate for an LIF neuron in the long τ_{s} limit, equation 2.19, is appealing because of its simplicity. It is also very
efficient computationally, since it involves the calculation of a single
integral, requiring only a summation of the order of 10^{3} terms.

However, equation 2.19 can also be recast as a rapidly converging series, equation 2.23, which can be truncated at *n* ∼ 200 terms (using *t* = 200 ms) while retaining high accuracy. This results in almost an order of magnitude improvement in computation speed in comparison with the integral form, equation 2.19. Equations 2.19 and 2.23 predict the same firing rate.

#### 2.2.5. Distribution of Currents Conditioned to Output Spike Times.

The distribution of the normalized currents *z* at the output spike times, denoted *p*(*z*|*spike*), can be computed directly from the probability density flow of the voltage at threshold (see equations A.20 and A.21 in the appendix; see also the Supporting Information). It can be written as

*p*(*z*|*spike*) = *C* ν_{0}(*z*) *p*(*z*),

defined for *z* > *z*_{min}, where *p*(*z*) is the gaussian density of the normalized current and *C* is the normalization constant. Since ν_{0}(*z*) is zero for *z* < *z*_{min}, the distribution of *z* conditioned on the spike times is skewed toward positive values of the fluctuations. This distribution has been derived in Moreno-Bote and Parga (2004, 2006) and Schwalger and Schimansky-Geier (2008).

#### 2.2.6. The Diffusion Approximation.

We have also simulated an LIF neuron driven by excitatory and inhibitory Poisson spike trains with rates ν_{E} and ν_{I} and predicted its output firing rate with formula 2.19. In the diffusion approximation, the parameters μ and σ^{2} can be written in terms of the input rates and the synaptic efficacies (see Supporting Information and Ricciardi, 1977). The predictions are in good agreement with the simulation results even for values of τ_{s} lower than τ_{m}.

#### 2.2.7. Short τ_{s} Limit and Interpolation Procedure.

Until now, we have determined the firing rate of an LIF neuron in the
presence of a slow filter. It would be desirable to know the firing response
of these neurons in the opposite limit in which the synaptic time constants
are short but nonzero. Here we describe how to estimate the firing rate in
the presence of a single filter characterized by any value of τ_{s} (Moreno-Bote & Parga, 2004).

The firing rate of an LIF neuron driven by white noise filtered with a short synaptic time constant τ_{s} was first found by Doering, Hagan, and Levermore (1987). The authors showed, in particular, that the correction to the rate relative to the white noise rate is of order √(τ_{s}/τ_{m}). This technique has been applied to compute the firing rate of an LIF neuron with a fast but finite synaptic timescale (Brunel & Sergi, 1998; Fourcaud & Brunel, 2002), obtaining equation 2.26, in which the threshold is effectively shifted by an amount proportional to √(τ_{s}/τ_{m}) (see equation 2.39).

To obtain a prediction valid for any τ_{s}, the two limits are interpolated. At short τ_{s}, we use the short-timescale expression, where *A* is the coefficient of the correction term in equation 2.26, while at long τ_{s}, we employ equation 2.19. Both limits are joined at an intermediate value of the synaptic time constant, τ_{s,inter} ∼ τ_{m}, and the constants *B* and *C* are set to obtain a continuous and differentiable interpolating curve at τ_{s} = τ_{s,inter}. The exact value of τ_{s,inter} does not alter the interpolation curve much, and it can safely be taken as a constant independent of the input parameters in the subthreshold regime. When the input is in the suprathreshold regime, a somewhat higher τ_{s,inter} has to be chosen, but again it does not depend much on the values of the parameters.

#### 2.2.8. Cross-Correlogram of Pairs of Neurons with Synaptic Filters.

Consider two LIF neurons, each driven by an independent current *I*_{i}(*t*) (*i* = 1, 2) and by a common current *I*_{c}(*t*). The latter can result from shared presynaptic inputs, but also from different presynaptic inputs that are themselves correlated. The independent and common currents are described by Ornstein-Uhlenbeck equations in which the ηs are independent white noise processes with zero mean and unit variance. The currents are therefore colored gaussian noises with timescale τ_{s}, mean μ_{ind} and variance σ^{2}_{ind}/2τ_{s} for the independent components, and mean μ_{c} and variance σ^{2}_{c}/2τ_{s} for the common component. Note that each neuron receives inputs with total mean μ = μ_{ind} + μ_{c} and total variance σ^{2} = σ^{2}_{ind} + σ^{2}_{c}. The two neurons do not need to be identical, but for simplicity, we consider only identical cells here.

The correlated spiking of the two neurons is quantified by the cross-correlation function *C*(*t*, *t*′), which in the adiabatic approach is given by equation 2.7. Because the inputs have stationary statistics, we can write *C*(Δ) ≡ *C*(*t*, *t* + Δ). After a change of variables and integrating out two of the current variables, equation 2.7 becomes an average in which ν_{0}(*u*) is as in equation 2.19, *P*(*u*_{2}, Δ|*u*_{1}) is a gaussian distribution over the variable *u*_{2}, and *p*(*u*_{1}) is a gaussian with mean zero and unit variance.

In the fast implementation, equation 2.32, *a*_{n} ≡ *u*_{n}(*t*) and *b*_{m} ≡ *u*_{m}(*t* + Δ). The function *P*(*u*_{2}, Δ|*u*_{1}) is a gaussian whose mean and variance depend on Δ, and *p*(*u*_{1}) is a gaussian with mean zero and unit variance, as defined in equation 2.30. These distributions need to be evaluated at the corresponding values of *a*_{n} and *b*_{m}, which depend on times *t* and *t* + Δ, respectively.

The two analytical expressions, equations 2.30 and 2.32, are compared to simulations in
Figure 5. The two predictions give the
same numerical values (thick full line) and are very close to the simulated
cross-correlation function even when similar values of τ_{s} and τ_{m} are used. The linear approximation of the cross-correlation function,
equations 2.10 and 2.11 (see Section 2.5), underestimates the true values but
also provides a good match. The linear prediction improves as the amount of
common noise relative to the independent noise decreases (note that in the
simulations the amount of common noise used is not small compared to the
amount of independent noise). The linear approximation provides a fast
estimate of the cross-correlation function. Equation 2.32 consists of a double sum over an
infinite series, but in practice it is extremely fast to evaluate because each sum can be
truncated after the first two hundred terms (use *t* = 200 ms). Equation 2.30 provides the slowest prediction, since it involves a double
integral.

The cross-correlograms are typically characterized by a single peak in both
the sub- and suprathreshold regimes when the fraction of total noise that is
common is small, while oscillatory patterns arise when the fraction
approaches one (not shown; Moreno-Bote & Parga, 2006). For small fractions, the correlation
timescale of the output spike trains is τ_{s}, the time constant of the synapses driving the two neurons, as
predicted by the linear approximation in equations 2.10 and 2.11.
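For comparison with these predictions, a cross-correlogram can be estimated directly from binned spike trains. The normalization below (coincidence counts divided by *T*·*dt*) is one common convention, assumed here rather than taken from the letter.

```python
import numpy as np

def cross_correlogram(spikes1, spikes2, dt, max_lag):
    """Estimate the cross-correlogram of two binned spike trains (0/1 arrays):
    rate of coincidences of train 2 at lag Delta relative to spikes of train 1."""
    lags = np.arange(-max_lag, max_lag + 1)
    n = len(spikes1)
    T = n * dt                        # total duration
    cc = np.empty(len(lags))
    for i, L in enumerate(lags):
        if L >= 0:
            cc[i] = np.dot(spikes1[: n - L], spikes2[L:])
        else:
            cc[i] = np.dot(spikes1[-L:], spikes2[: n + L])
        cc[i] /= (T * dt)             # normalize counts to a rate density
    return lags * dt, cc
```

For two identical periodic trains, the estimator peaks at zero lag and vanishes at lags between spikes, as expected.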

Here, we have described the cross-correlation function of the output spike
trains of a pair of LIF neurons. However, our theory also allows a detailed
description of other statistical properties of the spiking response, such as
the coefficient of variation of the ISIs (*CV _{ISI}*), the Fano factor of the spike count of the output spike train (*F*_{N}), and its autocorrelation function (Moreno-Bote & Parga, 2006).

#### 2.2.9. Probability Distribution of the Voltage.

From our formalism, the stationary probability distribution of the voltage, *P*_{0}(*x*), can be computed (see Supporting Information); the resulting expression contains the Heaviside (step) function. This distribution reveals the existence of two different states. One of them corresponds to firing periods of the neuron (first term) and the other to silent periods (second term). The latter, which can be considered as the free distribution (i.e., no effect of the voltage threshold), is a gaussian. A similar expression for the voltage distribution in the case of a conductance-based IF neuron has been found in Moreno-Bote and Parga (2005), where two different regimes were also characterized.

#### 2.2.10. LIF Neurons with One Fast and One Slow Synaptic Type.

We continue the discussion of slow filters, this time in the presence of a
second, fast filter. This is a possible scenario found in a study of the
effect of background activity on τ_{m} when AMPA (fast) and GABA_{A} (slow) synaptic receptor types
are present (Destexhe, Rudolph, Fellous, & Sejnowski, 2001). In that work, it was argued that
background activity reduces the membrane time constant of the neuron
severalfold, so that τ_{m} ∼ 5 ms. Then AMPA synapses are fast compared with τ_{m}, and we can approximate τ_{AMPA} = 0. However,
GABA_{A} receptors display a slower decay time, and they can be taken as slow compared with the
membrane dynamics.

Consider, then, an LIF neuron driven by a total current *I*(*t*) = *I*_{1}(*t*) + *I*_{2}(*t*), whose components in the diffusion limit obey equations 2.35. The first equation corresponds to the inhibitory current, in which τ_{s} can be identified with the GABA_{A} decay time. The second equation corresponds to the fast approximation of AMPA synaptic receptors. The quantities μ_{1}, μ_{2} and σ^{2}_{1}, σ^{2}_{2} are the means and variances of the inhibitory and excitatory inputs, and η(*t*) and ζ(*t*) are two independent white noise processes with unit variance. Defining μ ≡ μ_{1} + μ_{2} and performing a linear transformation of the voltage and current variables, the equations for the voltage and the current are transformed into a standard form in which α ≡ σ^{2}_{1}/σ^{2}_{2}, and the threshold and reset potentials are rescaled accordingly. The resulting current autocorrelation contains a delta function, something that did not happen with a single slow filter, equation 2.15.

The quantity ν_{fast}(*z*) in equation 2.38 has an intuitive meaning: it is the
rate of an LIF neuron driven by a white noise input with an effective, *z*-dependent mean and variance σ^{2}_{2} (Ricciardi, 1977). The output firing rate is then given by the average of ν_{fast}(*z*) over the stationary distribution of *z*, as in the case of a single slow filter.

Formula 2.38 admits an expansion in powers of ϵ (Supporting
Information). At leading order, the rate is just the firing rate of an LIF neuron driven by a white noise
input with mean μ and variance σ^{2}_{2} (Ricciardi, 1977). The
firing rate approaches this leading-order value as the synaptic time constant increases, as shown in
Figure 6, where a comparison between
the predictions provided by equation 2.38 and simulated data is presented. A
perturbative expansion of the firing rate in equation 2.38 in powers of 1/τ_{s} exists in both the supra- and the subthreshold regimes. This
contrasts with the case of a single slow filter, where the firing rate
admitted a perturbative expansion in powers of 1/τ_{s} only in the suprathreshold regime.
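The structure of equation 2.38 — a white-noise rate averaged over the slow gaussian variable — can be sketched numerically. Below, `nu_fast` is the standard Ricciardi white-noise LIF rate under the convention τ_m dV/dt = −V + μ + σ√τ_m·η(t), and the slow input is assumed to shift the effective mean by σ_slow·z with z ~ N(0, 1); conventions and parameters are illustrative, not the letter's exact definitions.

```python
import math
import numpy as np

def nu_fast(mu, sigma, theta=20.0, H=10.0, tau_m=10.0, n_grid=2001):
    """Ricciardi white-noise rate of an LIF neuron (rate in 1/ms for tau_m in ms),
    convention: tau_m dV/dt = -V + mu + sigma * sqrt(tau_m) * eta(t)."""
    u = np.linspace((H - mu) / sigma, (theta - mu) / sigma, n_grid)
    f = np.exp(u**2) * np.array([1.0 + math.erf(x) for x in u])
    integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))  # trapezoid rule
    return 1.0 / (tau_m * math.sqrt(math.pi) * integral)

def nu_adiabatic(mu, sigma_fast, sigma_slow, n_gh=40, **kw):
    """Average nu_fast over the gaussian stationary distribution of the slow
    input, assumed to shift the mean by sigma_slow * z with z ~ N(0, 1)."""
    x, w = np.polynomial.hermite.hermgauss(n_gh)   # physicists' Gauss-Hermite
    rates = [nu_fast(mu + sigma_slow * math.sqrt(2.0) * xi, sigma_fast, **kw)
             for xi in x]
    return float(np.dot(w, rates)) / math.sqrt(math.pi)
```

As σ_slow → 0, the average collapses onto ν_fast(μ, σ_fast), the leading-order rate mentioned above.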

An expression for the output firing rate identical to that in equation 2.38 has been found in
Moreno, de la Rocha, Renart, and Parga (2002) and Moreno-Bote, Renart, and Parga (2008) for an input with spike correlations. More
specifically, in that work, we calculated the output firing rate of an LIF
neuron driven by exponentially correlated presynaptic spikes characterized
by a correlation time constant τ_{c} and magnitude α. However, in the situation presented in this
work, the presynaptic currents in equation 2.35 are modeled as white noises that
approximate independent Poisson firing of a pool of presynaptic neurons (see
the Supporting Information). The two expressions are identical because in
the presence of two filters, one slow and another infinitely fast, the total
input *I*(*t*) has exactly the exponential
correlations (see equation 2.37) that were considered in Moreno et al. (2002) and Moreno-Bote et al. (2008) to model exponentially correlated spikes
with correlation time τ_{c} = τ_{s} and positive correlation magnitude α =
σ^{2}_{1}/σ^{2}_{2}.

The results found above can be extended to any other IF neuron model. A general formula similar to equation 2.38 for the firing rate of a general IF neuron with both fast and slow filters is given in the appendix.

#### 2.2.11. The Transfer Function with Slow and Fast Synaptic Filters.

We plot in Figure 7 the input-to-rate
transfer function for an LIF neuron. The firing rate is plotted as a
function of the mean current μ for three values of τ_{s}, for both a single slow filter (left) and one slow plus one fast
filter (right). As τ_{s} increases, the fluctuations of the slow input noise are filtered out,
and the curve becomes steeper as a function of μ. For the same mean
input drive μ, the firing rate decreases as a function of τ_{s}. In these figures, the single formulas 2.19 and 2.38 are used without
interpolation to test their range of validity. Even when the synaptic time
constant is chosen to be τ_{s} = τ_{m} = 10 ms (top curves), the prediction is rather close to the
simulation results. Note that in the presence of fast noise, the transfer
function is smoother than in the case of a single slow filter.

### 2.3. Firing Rate for the QIF Neuron.

The QIF neuron obeys equation 2.40, where *I*(*t*) is an Ornstein-Uhlenbeck process with timescale τ_{s}, mean μ, and variance σ^{2}/2τ_{s}, as defined in equation 2.14. For constant current *I*, the firing rate has a closed form in which Θ and *H* are the threshold and reset potentials of the neuron (Hansel & Mato, 2003). If these potentials are set at *H* = −∞ and Θ = ∞, the firing rate remains finite: because of the quadratic term in equation 2.40, the membrane potential can travel from the reset to the threshold voltage in a finite time. Using our general equation for the firing rate of an IF neuron with slow filters (equations 2.1 and A.22), we find the output firing rate for long τ_{s} of a QIF neuron with infinite threshold and reset potentials, equation 2.43 (a similar expression holds when the potentials take finite values). In the subthreshold regime (μ < 0), this firing rate cannot be expressed as a series in 1/τ_{s}. However, in the suprathreshold regime (μ > 0), the expansion is possible (see Brunel & Latham, 2003, and section 3). Equation 2.43 provides a general expression for both the sub- and suprathreshold regimes. In Figure 8 we have plotted the output firing rate of a QIF neuron simulated numerically from equations 2.40 and 2.14, and we have compared the results with the theoretical prediction. The predictions are excellent for τ_{s} ⩾ τ_{m} = 10 ms and good even for τ_{s} shorter than τ_{m}.
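The zeroth-order adiabatic average for the QIF neuron (equation 2.1 applied to the frozen-current rate) can be sketched as follows; the regularized expression 2.43 is not reproduced here, and the dimensionless form τ_m dV/dt = V² + I, giving ν = √I/(πτ_m) for constant I > 0, is an assumed convention.

```python
import math
import numpy as np

def qif_rate_const(I, tau_m=10.0):
    """Rate of tau_m dV/dt = V**2 + I with infinite threshold and reset
    potentials: sqrt(I) / (pi * tau_m) for I > 0, zero otherwise."""
    return math.sqrt(I) / (math.pi * tau_m) if I > 0 else 0.0

def qif_rate_adiabatic(mu, sigma_I, tau_m=10.0, n_gh=60):
    """Zeroth-order adiabatic rate: average of the frozen-current rate over
    the gaussian stationary current distribution (mean mu, std sigma_I)."""
    x, w = np.polynomial.hermite.hermgauss(n_gh)   # Gauss-Hermite nodes/weights
    I = mu + math.sqrt(2.0) * sigma_I * x
    rates = np.array([qif_rate_const(Ii, tau_m) for Ii in I])
    return float(np.dot(w, rates)) / math.sqrt(math.pi)
```

This naive average is already nonzero in the subthreshold regime (μ < 0) whenever σ_I > 0, since the gaussian current distribution has weight above zero.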

### 2.4. Noise-Thresholded IF Neuron.

The adiabatic firing rate of the noise-thresholded IF (NTIF) neuron, equation 2.46, although derived in the limit of long τ_{s}, is *exact* for all τ_{s} (see the right panel in Figure 9). It is also easy to check that the associated density, where *p*(*z*) is a normal distribution, solves exactly the FPE associated with equation 2.45. Equation 2.46 can also be obtained from the general expression for the firing rate of nonleaky integrators found in Brette (2004).

### 2.5. List of Expressions.

In this section, we provide an exhaustive list of the analytical expressions
found in the letter. The expressions for the firing rates can be obtained from
the general theory presented in the appendix. In the examples considered here,
we use Ornstein-Uhlenbeck processes (colored noise) to generate the input
currents, equation 2.14, but a
broad range of noises can be considered, as described in the appendix.^{1}

#### 2.5.1. The Adiabatic Firing Rate for Slow Input Currents.

**a.** The general expression for the adiabatic firing rate is (see equation 2.1)

ν_{out} = ∫ *dI* ν(*I*) *P*(*I*),

where ν(*I*) is the steady-state firing rate of the neuron receiving a constant current *I*, and *P*(*I*) is the distribution of currents. The formal derivation of this expression for a general IF neuron and arbitrary input distributions is provided in equation A.22.

**b.** The firing rate of an LIF neuron driven by colored inputs with long timescale τ_{s} can be calculated from equation 2.19. The adiabatic expression provides an excellent approximation to the firing rate as long as τ_{s} > τ_{m}. A detailed proof of this particular case is provided in the Supporting Information. In the suprathreshold regime, the firing rate up to second order in ϵ is given by equation 2.21.

**c.** An equivalent expression for the LIF neuron receiving colored inputs with long timescale τ_{s} is given by equation 2.18, in which *I*_{min} = Θ/τ_{m} and σ^{2}_{I} = σ^{2}/2τ_{s} is the variance of the input current. This expression is linked to that in **b** by a change of variables. See the Supporting Information for an application to AMPA and NMDA synaptic receptors.

**d.** The fast implementation of the firing rate for the LIF neuron driven by colored inputs with long timescale τ_{s} is given by equation 2.23. This expression is equivalent to that in **b**. Only the first 200 terms are required for an excellent approximation (use *t* = 200 ms).

**f.** The firing rate of an NTIF neuron receiving colored inputs with long timescale τ_{s} is given by equation 2.46. In this case, the expression is exact for all τ_{s}.

#### 2.5.2. The Firing Rate with Fast and Slow Filters.

**a.** The general expression for the firing rate is (see equation 2.5)

ν_{out} = ∫ *dI* ν_{fast}(*I*) *P*(*I*),

where ν_{fast}(*I*) is the steady-state firing rate of the neuron receiving a constant current *I*, averaged across the fast noise. A proof of this expression for a general IF neuron and arbitrary input distributions can be found in equation A.33.

**b.** The firing rate of an LIF neuron receiving both white noise inputs and colored inputs with long timescale τ_{s} is given by equation 2.38.

#### 2.5.3. Correlation Function.

**a.** The general expression for the two-point correlation function for two neurons receiving correlated colored noise with long timescale is given by equation 2.7, where ν_{i}(*I*) is the steady-state firing rate of neuron *i* (*i* = 1, 2) driven by constant current *I*, *P*(*I*_{i}) is the distribution of current *I*_{i} to neuron *i*, and *P*(*I*_{c}, *t*; *I*′_{c}, *t*′) is the joint probability density of the common current at two different times.

**b.** The linear approximation (valid for weak correlations) of the previous expression is given by equation 2.11, which involves the derivative of the steady-state firing rate evaluated at the mean value of *I*_{c}, μ_{c}.

**c.** The two-point correlation function of the output spike trains for two LIF neurons receiving independent as well as common colored inputs (common and total input variances σ^{2}_{c} and σ^{2}, respectively; see equations 2.28 and 2.29) with long timescale τ_{s} is calculated from equation 2.30, where *p*(*u*_{1}) is a gaussian with mean zero and unit variance.

**e.** The fast implementation of the correlation function given in **c** for two LIF neurons is equation 2.32, where *a*_{n} ≡ *u*_{n}(*t*) and *b*_{m} ≡ *u*_{m}(*t* + Δ). *P*(*u*_{2}, Δ|*u*_{1}) is a gaussian distribution whose mean and variance depend on Δ, and *p*(*u*_{1}) is a gaussian distribution with mean zero and unit variance, as in **c**. Only the first two hundred terms in each sum are required for an excellent approximation (use *t* = 200 ms). This expression is equivalent to that in **c**.

## 3. Discussion

We have developed a theory that describes analytically the firing rate of IF neurons driven by arbitrary forms of slow stochastic inputs and when fast forms of noise are present too. The theory is exact when the timescale governing the noise fluctuations is much longer than the intrinsic timescales of the neuron, but it can also be applied to the case in which the timescales are comparable. It is worth emphasizing that our theory does not require that the interspike intervals are short compared to the timescale of the stochastic inputs, but rather that the latter is longer or comparable to the membrane time constant of the neuron. Our approach does not require that the noise amplitude is small either.

Other work has also addressed the problem of studying the firing properties of
IF-like neurons driven by slow stochastic inputs. Salinas and Sejnowski (2002) considered an input current that could
take two discrete values. Although interesting, the input model cannot approximate
the current generated by a sum of spike trains. Svirskis and Rinzel (2000) have found an estimate of the firing rate of a
neuron model in which the potential can be above threshold and the reset effect is
not included. Middleton et al. (2003) have
studied the distributions of interspike intervals in nonleaky IF neurons with slow
stochastic inputs and provided analytical expressions for them that are valid in the
limit of small noise amplitude and when the synaptic timescale is at least one order
of magnitude longer than the mean interspike interval. Their technique cannot be
applied to compute the firing rate of LIF neurons in the noise-driven regime since
it requires that for any frozen value of the input noise, the interspike interval be
finite. Schwalger and Schimansky-Geier (2008) have studied the interspike interval distributions and the Fano
factor of the spike count of the output spike train in LIF neurons driven by slow
stochastic inputs and derived analytical expressions for them. The computation of
the interspike interval distribution requires knowing the distribution of synaptic
currents at the spike times found in Moreno-Bote and Parga (2004, 2006). Their
analytical expressions are valid when the synaptic time constant is several orders
of magnitude longer than the membrane time constant. In Moreno-Bote and Parga (2006), we have developed a method that allows
computing the Fano factor and the autocorrelation function of the output spike
trains accurately even when τ_{s} is similar to τ_{m}. The crucial difference between the two approaches lies in that Schwalger and
Schimansky-Geier (2008) assume that the
currents are constant across time after an output spike, while in Moreno-Bote and
Parga (2006), we fully consider the
stochastic temporal evolution of the currents after an output spike. Gerstner (1999) has studied models in which the threshold
potential after a spike is a slow, random variable. These models can be solved
exactly, but they cannot be mapped to IF neurons with slow, noisy inputs. This is
because the noisy threshold is drawn from a distribution only at the moments of the
spikes, not continuously over time, as it happens in neuron models receiving
fluctuating inputs. In a recent work, Brunel and Latham (2003) have used our naive expansion in powers of 1/τ_{s} for long synaptic time constants (described in detail in
Supporting Information) to compute the firing rate of a QIF neuron in the
suprathreshold regime in that limit. However, our naive expansion can be applied
only to the suprathreshold regime, in which the mean input drive dominates the
spiking behavior of the neuron and noise plays a secondary role. Here, using a
regularized expansion, we have found an expression for the firing rate of a QIF
neuron valid in both the supra- and subthreshold regimes.

Chizhov and Graham (2008) have elaborated a
new procedure to compute the firing rate of LIF neurons receiving colored noise with
arbitrary timescale τ_{s}. Their analytical expressions for the firing rate and ours have been compared
in their Figure 4, both providing a good match
with the simulated data. Their method, however, differs from ours substantially. The
reset effect after generation of a spike is not considered in Chizhov and Graham
(2008), which makes the statement of the
problem easier. This approximation is well justified in cases in which the
interspike intervals are longer than the membrane time constant. Moreover, their
calculation of the firing rate in the stationary case involves two steps. First, the
associated FPE without reset effect is solved numerically for several values of the
neuron parameters and τ_{s}, and the results are then fit by simple functions; the fits are good at least
in the parameter regime used. And second, these simple fit functions are employed in
the final expression of the firing rate, which involves a double integral that can
only be computed numerically. In contrast, by solving analytically the FPE with
reset effect, we have provided analytical expressions for the firing rate of general
IF neurons that are exact in the long τ_{s} limit, involve a single integral for the LIF and QIF neurons, and whose
asymptotic behavior is mathematically different from that obtained in Chizhov and
Graham (2008) for the LIF neuron.

In a related experimental approach, the firing rate of a neuron has been described as a function of its coarse (low-pass-filtered) voltage *V*_{c} (Carandini, 2004). For any value of the coarse voltage, the firing probability of the neuron given the voltage, ν(*V*_{c}), is obtained experimentally. Several interesting characteristics of the firing response can then be studied, but of interest to us is the way the mean firing rate is computed. Since the voltage distribution, *P*_{G}(*V*_{c}), can be computed experimentally (it is well approximated by a gaussian), the mean firing rate can be obtained by averaging the rate over the voltage, equation 3.1. Typically the function ν(*V*_{c}) is smooth, as it corresponds to averaging the voltage over all fast fluctuations within a time window of 20 ms (50 Hz low-pass filtering). Therefore, equation 3.1 resembles the adiabatic expression for the firing rate of a neuron driven by fast and slow stochastic inputs, equation 2.5, with the voltage average playing the role of the synaptic current average. Carandini (2004) presented this formalism to describe quantitatively the firing response of visually stimulated neurons. We have derived a similar description in terms of input currents (Moreno-Bote & Parga, 2004) as a formal way of characterizing the firing rate of neurons receiving inputs with long correlation timescales.

The difference between averaging over the voltage in Carandini's model and over the synaptic inputs (currents or conductances) in our theory is substantial. Since voltages are necessarily upper-bounded by the spiking threshold of the neuron, computing the firing rate as a function of the voltage might be susceptible to large statistical errors, since many possible firing rates will correspond to similar voltages near the spiking threshold. Computing the firing rate as a function of the input current (or synaptic conductances) does not suffer from this statistical problem, since currents are not upper-bounded in the range of values typically observed. In fact, we have previously shown that the firing rate of conductance-based IF neurons can be computed by averaging the firing rate as a function of the instantaneous synaptic conductances over the distribution of synaptic conductances (Moreno-Bote & Parga, 2005). Since voltages and synaptic conductances can be measured in vivo, it would be interesting to compare quantitatively the predictions for the firing properties of visual cortex neurons using the two alternative ways of averaging discussed above.

In this letter, we have also provided expressions for the cross-correlation function between the output spike trains of two IF neurons receiving common as well as independent sources of noise and applied the theory to the LIF neuron (see also Moreno-Bote & Parga, 2006). The theory allows describing quantitatively the peak, width, and area of the cross-correlation function, as well as the correlation coefficient of the output spike trains. We have also found simplified equations for the cross-correlation function that establish a linear relationship between input and output correlations. Several recent works have described analytically the temporal profile of the cross-correlation function of a pair of spiking neurons, but using simplified models that do not have the after-spike reset characteristic of the IF neuron (Svirskis & Hounsgaard, 2003; Tchumatchenko, Malyshev, Geisel, Volgushev, & Wolf, 2010; Burak, Lewallen, & Sompolinsky, 2009). De la Rocha, Doiron, Shea-Brown, Josic, and Reyes (2007) have presented analytical expressions to compute the coefficient of correlation for LIF neurons in the limit of weak input correlations, but these expressions do not allow an analytical description of the temporal profile of the cross-correlation function. Our theory and its extension to interconnected neurons might be crucial for describing the temporal correlation patterns found in retina and cortex (Riehle et al., 1997; Bair et al., 2001; Kohn & Smith, 2005; Pillow et al., 2008) and the determination of connectivity matrices underlying those patterns using IF neurons as the basic functional units.

Although we have focused on the description of the firing rate and cross-correlation
function of the output spike trains of a pair of IF neurons, our theory can also be
applied to study other statistical properties of the spiking response, such as the
coefficient of variation of the ISIs (*CV _{ISI}*), the Fano factor of the spike count of the output spike train (*F*_{N}), and its autocorrelation function. It has been shown previously that those statistical quantities can be obtained for LIF neurons from the adiabatic approach (Moreno-Bote & Parga, 2006). A generalization of that formalism is possible and will allow the description of second-order firing statistics for general IF neurons driven by arbitrary forms of noise. In addition, it would be desirable to extend our adiabatic approach to describe the transient response of IF neurons. Analytical expressions for the response of LIF neurons to sinusoidal stimuli in the limit of small amplitudes and infinitely fast synapses, τ_{s} = 0, are available (Brunel & Hakim, 1999; Lindner & Schimansky-Geier, 2001; Richardson, 2007), but no known solutions are valid for all stimulus frequencies for nonzero τ_{s}.

A prime question concerns the effect of synaptic time constants on neuronal network dynamics. Recent work has shown that the temporal dynamics of the synapses can play an important role in setting the response properties of IF neurons working in the high-conductance regime (Shelley et al., 2002; Moreno-Bote & Parga, 2005; Vogels & Abbott, 2005; Cai et al., 2005; Apfaltrer et al., 2006; Kumar, Schrader, Aertsen, & Rotter, 2008). We think that the general theory of synaptic filtering presented here can be useful for building mean field theories that use rate variables as well as second-order statistics to describe the temporal dynamics of these networks (see Renart, Moreno-Bote, Wang, & Parga, 2007).

## Appendix: Methods

### A.1. IF Neurons with Fast and Slow Filters.

Here we provide the details for computing the output firing rate for IF neurons described by rather general drift functions and noise models (see equation A.1 below). First, we define the model, then we compute the adiabatic firing rate of an IF neuron driven by slow input noise, and finally we study the case of both fast and slow synaptic filters.

#### A.1.1. General IF Neuron and Noise Models.

The η_{i}(*t*)s are independent white noise processes with zero mean and unit variance. Therefore, each fluctuation variable *z*_{i} drifts with rate −μ_{i}(*z*_{i})/τ_{i} and has a diffusion coefficient set by σ^{2}_{i}(*z*_{i}). A spike is emitted in the model when the voltage reaches a threshold Θ, after which it is reset to *H*. The synaptic variables are not reset after a spike, since they model external stochastic inputs that do not depend on the state of the neuron.

The drift function in equation A.1 gives the velocity of *V* when the potential is at *V* and the synaptic fluctuations take a given value. Any IF neuron can be written in this form: for example, an LIF neuron with a membrane time constant τ_{m} and a single linear filter of timescale τ_{s} is written as in equations 2.13–2.14, where *z* obeys equation A.2 with μ_{i}(*z*_{i}) = *z*_{i} and σ^{2}_{i}(*z*_{i}) = 1. Other relevant drift functions can include the driving forces of synapses and also quadratic terms in the voltage that generate a sort of action potential (e.g., as in the QIF neuron). Nonlinear filters are included in the formalism by making the drift function nonlinear in *z*_{i}.

Each synaptic variable *z*_{i} evolves according to equation A.2. Its stationary distribution, whenever it exists, has the form given in equation A.4, where *A* is the normalization constant. This density solves the stationary FPE for the *z*_{i} variable, written in terms of a linear operator. To obtain the above FPE, we interpret the diffusion term in equation A.2 in the Stratonovich sense; that is, the white noise process is understood to arise from the limit of a continuous stochastic process. Other interpretations, like the Ito interpretation, can be mapped into the previous one easily. For an Ornstein-Uhlenbeck process with mean zero and unit variance, μ_{i}(*z*_{i}) = *z*_{i} and σ^{2}_{i}(*z*_{i}) = 1, and from equation A.4 it is easy to check that its steady-state distribution is a normal distribution.
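This last check can be reproduced numerically: with μ_i(z) = z and σ²_i(z) = 1, the stationary probability flux of the unit-variance Ornstein-Uhlenbeck FPE should vanish for the standard normal density. A finite-difference sketch (taking the diffusion coefficient as 1/τ_i, an assumption consistent with unit stationary variance):

```python
import numpy as np

tau_i = 10.0
z = np.linspace(-5.0, 5.0, 2001)
dz = z[1] - z[0]
p = np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi)   # candidate: standard normal

# Stationary FPE for the unit-variance OU process:
#   d/dz [ (z * p + dp/dz) / tau_i ] = 0,
# so the flux (z * p + dp/dz) / tau_i must itself be (a) constant, here zero.
flux = (z * p + np.gradient(p, dz)) / tau_i
print(float(np.max(np.abs(flux))))               # ~0 up to discretization error
```

Analytically, dp/dz = −z·p for the gaussian, so the flux cancels exactly; the finite-difference residual is of order dz².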

#### A.1.2. The Case of Slow Filtering.

For long but finite synaptic time constants τ_{i}, a naive perturbative expansion often does not work (see the Supporting Information for the proof of the failure of the naive expansion for the LIF neuron). Instead of attempting to solve the problem in this perturbative fashion, we find the firing rate by assuming that the drift function in equation A.1 does not depend on the synaptic time constants, while equations A.2 still do. This is done by defining a new constant vector of time constants that substitutes for the true one in equation A.1, as in equation A.6. As we take the limit of long synaptic time constants, the effect of the noise variables on the membrane potential in equation A.6 remains fixed because this vector is held fixed (as an example, see equation A.3). The new model defined by equations A.6 and A.2 is equivalent to that defined by equations A.1 and A.2 when the two vectors coincide. Then all we have to do is calculate the firing rate for the system defined by equations A.6 and A.2 and, at the end, replace the constant vector by its true value. (The limit taken with this vector fixed is called the distinguished limit in singular perturbation theory; see, e.g., Bender & Orszag, 1978.)

We consider drift functions that do not allow the voltage *V* to escape to −∞ as *t* → ∞. This implies that the potential function increases as *V* → −∞. In other words, for any fixed fluctuation, the voltage of the neuron either drifts upward or settles down to a stationary value. For simplicity in our arguments, we will also restrict the generalization a bit more by assuming that the potential function has at most an (absolute) minimum in the interval (−∞, Θ]. This is a reasonable assumption, because the voltage cannot travel to −∞ and neurons do not usually show intrinsic bistability. If this is the case, there exists a stationary FPE associated with equations A.6 and A.2, equation A.8, whose solution is the probability density of having the neuron at membrane potential *V* while receiving a given synaptic fluctuation. (A restricted version of the FPE A.8 has been used by Doering et al. (1987), who considered a general nonlinear system driven by linearly filtered white noise. Brunel and Sergi (1998) considered a similar version of FPE A.8 for a linear system driven by linearly filtered white noise in the context of neurons driven by colored stochastic inputs.) The probability density current escaping at threshold obeys the self-consistency condition A.9: once the neuron hits threshold, it has to be reinjected at the reset potential with the same distribution of fluctuations that it had when it escaped. If one views the process described by the FPE A.8 as a population of independent particles diffusing over the variable *V* and the fluctuations, equation A.9 states that when a particle escapes (hits threshold), it has to be reinjected at the reset potential with the same fluctuation value that it had when it escaped.

The neuron fires only if the voltage *V*, starting at the reset potential *H*, can travel up to the threshold potential without being stopped. This condition determines a region over the fluctuation variables, which we call Ω, defined formally in equation A.10. The firing rate of the IF neuron can finally be obtained by integrating the probability current over Ω, as in equation A.11, where the constant vector of time constants has been replaced by its true (long) value.

We expand the probability density *P*(*V*, *z*) and the probability current in powers of τ^{−1}_{i}, as in equations A.16, where the vector of time constants is assumed to be constant. Condition iv has to be satisfied order by order in the expansion. Condition iii means that the integral of *P*_{0} has to be one and that the integral of any other higher order has to be zero.

Because the *z*_{i}s are independent (see equations A.2), the marginal distribution for *z* factorizes (equation A.15). If the variables *z*_{i} are independent Ornstein-Uhlenbeck processes, then the marginal distribution is a normal distribution in *N* dimensions.
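This statement is easy to check numerically. The sketch below (with assumed parameter values of our own) integrates an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme and verifies that its stationary marginal has the expected gaussian mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma = 0.1, 2.0            # OU time constant (s) and stationary std (assumed)
dt, n_steps, n_paths = 1e-3, 5000, 2000

# Euler-Maruyama integration of  tau dz/dt = -z + sqrt(2 tau) sigma xi(t),
# whose stationary distribution is N(0, sigma^2).
z = np.zeros(n_paths)
for _ in range(n_steps):
    z += (-z / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal(n_paths)

print(z.mean(), z.std())         # should approach 0 and sigma, respectively
```

With *N* independent channels, each coordinate equilibrates in this way, so the joint stationary marginal is a product of gaussians, that is, a normal in *N* dimensions.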

The solution for *P*_{0} that satisfies condition A.13 follows, and using condition A.17 we obtain the probability current. Note that ν(*z*) is the firing rate of the IF neuron when it receives a fixed synaptic fluctuation *z*, in the limit of very long synaptic time constants. The firing rate, equation A.11, up to zeroth order is then obtained, where we have replaced the synaptic time constants by their true values. In the second case, when *z* does not belong to Ω, the neuron cannot fire, and ν(*z*) = 0. Thus, the solution of equation A.18 with ν(*z*) = 0 that satisfies condition A.14 is equation A.23, in which the voltage settles at the unique minimum of the potential. To obtain equation A.23, it is required that the function defined in equation A.7 have at most one minimum. Using condition A.17 then leads to a vanishing probability current.
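The structure of this zeroth-order solution can be summarized schematically. The following is a sketch in our own notation, writing *f*(*V*) for the deterministic drift of the membrane potential, *Q*(*z*) for the stationary distribution of the fluctuations, and ν(*z*) for the fixed-*z* firing rate; it is a reconstruction of the adiabatic limit described in the text, not a verbatim transcription of equations A.18 to A.23.

```latex
% For z inside the firing region Omega (suprathreshold drift):
P_0(V,\mathbf{z}) \;=\; \frac{\nu(\mathbf{z})\,Q(\mathbf{z})}{f(V)+\sum_i z_i},
\qquad
\nu(\mathbf{z})^{-1} \;=\; \int_{H}^{\Theta} \frac{dV}{f(V)+\sum_i z_i}.

% For z outside Omega, the voltage settles at the minimum of the potential:
P_0(V,\mathbf{z}) \;=\; Q(\mathbf{z})\,\delta\!\big(V-V_{\min}(\mathbf{z})\big),
\qquad \nu(\mathbf{z}) = 0.

% Zeroth-order firing rate:
\nu \;=\; \int_{\Omega} d\mathbf{z}\;\nu(\mathbf{z})\,Q(\mathbf{z}).
```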

#### A.1.3. The Case of Slow and Fast Filtering.

In this section we consider the case of a neuron with several slow filters and a single additive fast filter. Several nonadditive fast filters can also be included in the formalism without extra complications, and therefore we restrict our discussion to the simplest case described below.

In equations A.24, the white noise term is a normalized process that represents the noisy contribution of a current passing through an infinitely fast filter; the strength of the fast noise is determined by the prefactor σ_{f}. The FPE associated with equations A.24 and A.2 follows, where the density *P* and the linear operator have the same definitions as in the FPE A.8. Again, the probability current injected at *H* has to equal the probability current escaping at threshold Θ. Note that the probability current now includes a derivative of the probability density *P* evaluated at the voltage firing threshold, a term that was not present in equation A.9. This fact imposes different boundary conditions on the density *P* at threshold. In particular, *P* has to be zero at threshold for all *z*, because *P* vanishes when *V* > Θ and no discontinuity in the function can exist at threshold.
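The modified boundary condition can be sketched as follows in our own notation, with *f*(*V*) the deterministic drift and σ_{f} the fast-noise strength (an illustrative reconstruction, not a verbatim transcription of the article's equations).

```latex
% Probability current at threshold with an additive fast (white) noise term:
J(\Theta,\mathbf{z}) \;=\;
\Big[\Big(f(V)+\textstyle\sum_i z_i\Big)\,P(V,\mathbf{z})
 \;-\; \frac{\sigma_f^{2}}{2}\,\partial_V P(V,\mathbf{z})\Big]_{V=\Theta}.

% Since P(\Theta,\mathbf{z}) = 0 (absorbing boundary), only the
% derivative term survives at threshold:
J(\Theta,\mathbf{z}) \;=\;
 -\,\frac{\sigma_f^{2}}{2}\,\partial_V P(V,\mathbf{z})\Big|_{V=\Theta}.
```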

We expand *P* and *J* in powers of the inverse of the synaptic time constants, as in equations A.16. The coefficients of the expansion have to satisfy, order by order, the boundary conditions defined above. The zeroth-order FPE follows, and its solution can be found using standard perturbative techniques (Ricciardi, 1977; Risken, 1989), where we have defined the "potential" function as in equation A.7. The probability current is found by inserting the expression for *P*_{0} into equation A.17. Note that ν(*z*) is the output firing rate of an IF neuron driven by (fast) white noise with standard deviation σ_{f} and experiencing a drift as if *z* were constant. For the firing rate to be well defined, the potential has to increase fast enough as *V* → −∞, so that the inner integral is finite for all *x*. Finally, the output firing rate of the IF neuron with both fast and slow filters is obtained by integrating equation A.32 over *z*, as in equation A.33, where the integral extends over the whole *z* space. Note that at this point we have replaced the synaptic time constants by their true values.
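The structure of this slow-plus-fast result can be sketched numerically. The example below uses a leaky IF neuron and our own parameter choices: the fixed-*z* rate is computed with the standard mean-first-passage-time (Siegert-type) formula for a white-noise-driven LIF, and is then averaged over the gaussian distribution of the frozen slow fluctuation *z*.

```python
import numpy as np
from math import erfc

def siegert_rate(mu, sigma_f, theta=15.0, H=0.0, tau_m=0.02):
    """Rate of an LIF driven by white noise of strength sigma_f, for a fixed
    mean drive mu (standard mean-first-passage-time formula; assumed units)."""
    a, b = (H - mu) / sigma_f, (theta - mu) / sigma_f
    if b > 12.0:
        return 0.0                       # drive far below threshold: negligible rate
    u = np.linspace(a, b, 1501)
    # exp(u^2) * erfc(-u) remains finite even for strongly negative u.
    integrand = np.exp(u**2) * np.array([erfc(-x) for x in u])
    T = tau_m * np.sqrt(np.pi) * np.sum(integrand) * (u[1] - u[0])
    return 1.0 / T

def rate_slow_plus_fast(mu=12.0, sigma_s=4.0, sigma_f=2.0):
    """Average the fast-noise rate nu(z) over the gaussian distribution of the
    frozen slow fluctuation z: the structure of the slow-plus-fast result."""
    z = np.linspace(-6.0 * sigma_s, 6.0 * sigma_s, 241)
    dz = z[1] - z[0]
    Q = np.exp(-z**2 / (2.0 * sigma_s**2)) / np.sqrt(2.0 * np.pi * sigma_s**2)
    nu = np.array([siegert_rate(mu + zi, sigma_f) for zi in z])
    return float(np.sum(nu * Q) * dz)

print(rate_slow_plus_fast())             # firing rate in Hz
```

Unlike the purely slow case, here ν(*z*) is nonzero for every *z*, because the fast noise can always push the voltage across threshold; the gaussian average simply weights each frozen drive by its probability.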

## Acknowledgments

Support was provided by Spanish grant FIS 2006-09294 and by the Swartz Foundation (to R.M.B.). We thank A. Renart and R. Brette for useful discussions. R.M.B. also thanks S. Deneve, B. Gutkin, C. Machens, M. Tsodyks, and A. Pouget for their hospitality at the Collège de France in Paris.

## Notes

^{1} The firing rate and correlation function for other models of noise can be obtained by replacing the gaussian distributions in the expressions presented here by the corresponding steady-state distributions.

## References

Banks, M. I., Li, T. B., & Pearce, R. A. (1998). The synaptic basis of *GABA*_{A,slow}. *Journal of Neuroscience*, 18, 1305–1317.

Otis, T. S., De Koninck, Y., & Mody, I. (1993). Characterization of synaptically elicited *GABA*_{B} responses using patch-clamp recordings in rat hippocampal slices. *Journal of Physiology*, 463, 391–407.