Release of neurotransmitter at a synapse produces a current that flows through the membrane and is transmitted to the soma of the neuron, where it is integrated. The decay time of the current depends on the type of synaptic receptor and ranges from a few milliseconds (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of this variety of synaptic timescales, several of which coexist in the same neuron, is at present not understood. A prime question to answer is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., the noise is not required to be gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). Using the adiabatic approach, the theory determines, with exact analytical expressions, the firing rate of an IF neuron for long synaptic time constants. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants be longer than the mean interspike interval or that the noise have small variance. The distribution of the potential for general IF neurons is also characterized.
Our results provide powerful analytical tools that can allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.
A neuron communicates with other neurons by generating synaptic currents through the corresponding synapses. The nature of these events depends on the presynaptic neurotransmitter and the postsynaptic receptors. Several types of receptors can coexist in the same neuron, each with its characteristic timescale. Within the excitatory class, AMPA-type synaptic receptors open for 1–5 ms (Silver, Traynelis, & Cull-Candy, 1992; Barbour, Keller, Llano, & Marty, 1994; Umemiya, Senda, & Murphy, 1999; Angulo, Rossier, & Audinat, 1999; Zamanillo et al., 1999), while the activation of NMDA receptors lasts for about 100 ms (see, e.g., Umemiya et al., 1999; Myme, Sugino, Turrigiano, & Nelson, 2003). Both receptor types are activated by release of the neurotransmitter glutamate from glutamatergic presynaptic cells. Similarly, there are also fast and slow inhibitory synapses, with timescales of ∼5 to 10 ms (Xiang, Huguenard, & Prince, 1998; Banks, Li, & Pearce, 1998; Okada, Onodera, van Renterghem, & Takahashi, 2000) and ∼100 ms (Otis, De Koninck, & Mody, 1993), which are activated by release of GABA from GABAergic presynaptic cells. Therefore, spikes arriving at the presynaptic terminals can initiate a variety of synaptic currents in the postsynaptic neuron, with different time courses lasting for quite different time intervals. The variety in the duration of spike aftereffects on postsynaptic neurons could have important computational consequences, because it could allow the same information to be present in the neuron at different timescales. In the same way, it could provide a basis for transmitting and combining information carried at several temporal resolutions.
In addition, the effect of an impinging spike on the membrane potential of a neuron depends on the membrane time constant of the neuron. While in resting conditions the membrane time constant is quite large (τm ∼ 20 ms; see, e.g., Paré, Shink, Gaudreau, Destexhe, & Lang, 1998), during intense presynaptic background activity or intense stimulation its value can be reduced severalfold (Bernander, Douglas, Martin, & Koch, 1991; Paré et al., 1998; Destexhe & Paré, 1999; Borg-Graham, Monier, & Frégnac, 1998; Hirsch, Alonso, Reid, & Martinez, 1998; Anderson, Carandini, & Ferster, 2000). Thus, depending on the background activity and the nature of the stimulation, the same synapse can produce different effects on the neuron. It is then natural to consider the synaptic time constants τs in relation to the effective membrane time constant: what matters for the neuron's behavior is the ratio τs/τm. According to this idea, synaptic filters can be classified as slow or fast, depending on whether that ratio is larger or smaller than one, respectively.
The above considerations imply that it is important to know how the presence of synaptic filters with timescales longer or shorter than the membrane time constant affects the neuron's firing statistics. Previous work on LIF neurons has determined analytically their firing rate when synapses have long time constants (Moreno-Bote & Parga, 2004), as well as when they have short time constants (Ricciardi, 1977; Brunel & Sergi, 1998; Fourcaud & Brunel, 2002; La Camera, Giugliano, Senn, & Fusi, 2008). By interpolating between these two limits, an analytical expression for the firing rate exists that determines its value for all τs (Moreno-Bote & Parga, 2004). Neurons with both fast and slow synaptic filtering have also been studied in Moreno-Bote and Parga (2004). Further developments have addressed the case of conductance-based IF neurons (Moreno-Bote & Parga, 2005) and the effect of input correlations on a pair of LIF neurons (Moreno-Bote & Parga, 2006). The expressions for the firing rate are exact in the specified limits of short or long τs compared to τm and do not require further assumptions about the amplitude of the noise. A related important issue is to know whether the ratio between synaptic and membrane time constants determines the operating regime of the neurons and their computational capabilities. For instance, it is known that the firing variability depends on that ratio (Svirskis & Rinzel, 2000; Moreno-Bote & Parga, 2004; Muller, Buesing, Schemmel, & Meier, 2007; Chizhov & Graham, 2008). Also, in neural networks in which the effective membrane time constant of the neurons can become very short, it would be very useful to have analytical predictions for the firing rate (Shelley, McLaughlin, Shapley, & Wielaard, 2002; Moreno-Bote & Parga, 2005; Cai, Rangan, & McLaughlin, 2005; Apfaltrer, Ly, & Tranchina, 2006).
Here we introduce a theory to describe the firing rate of general IF neurons receiving arbitrarily filtered inputs, which extends previous results for LIF neurons with gaussian inputs (Moreno-Bote & Parga, 2004, 2005, 2006; Brunel & Sergi, 1998; Fourcaud & Brunel, 2002). The formalism is presented in a detailed, didactic manner along with a consideration of useful examples. We first derive the expressions for the firing rate and spike correlation function in a qualitative way using the adiabatic approach introduced in Moreno-Bote and Parga (2004). The formal derivation of the expression for the firing rate, valid for arbitrary IF neurons with arbitrary input structure in the limit of long synaptic timescales, with or without fast filters, is provided in the appendix (finer details for the LIF neuron case are presented in the Supporting Information, available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2010.06-09-1036). We then analyze the expressions for the firing rate and correlation function of LIF neurons and use them to predict the input-output transfer function of individual neurons and the synchronous firing pattern of pairs of neurons receiving both common and independent sources of inputs. We then apply the formalism to describe the firing rate of QIF and NTIF neurons. Finally, we provide an exhaustive list of the analytical expressions for the firing rate and correlation function of general IF neurons and for the particular cases of LIF, QIF, and NTIF neurons.
2.1. The Adiabatic Approach.
We are interested in describing the firing statistics of simple but realistic neuron models receiving temporally correlated inputs. In this section, we study in a general way the response properties of neurons with randomly varying inputs. We apply the results to completely determine the firing rate of IF neurons driven by stochastic currents with a long correlation timescale. The firing rate in this limit, called the adiabatic firing rate, is particularly simple and can be derived by qualitative means. Thus, we leave its formal derivation for the appendix. The adiabatic firing rate is compared to another candidate simple expression, and we show that the latter gives worse fits to the simulated data. Then the case of fast and slow stochastic inputs is considered. We finally show that our formalism can be extended to study the correlated firing of a pair of neurons receiving common as well as independent sources of noise.
2.1.1. The Adiabatic Firing Rate.
We start by considering a neuron model in which the firing rate as a function of a constant input current I can be computed. We call this quantity ν(I). Under constant stimulation and for deterministic neurons, the rate ν(I) describes completely the statistics of the output spike train except for an initial phase: the output spike train is a periodic pattern with interspike intervals of length T(I) = 1/ν(I). The firing rate for a fixed input current can be very easily calculated for IF-like neurons. However, this idea can be extended to any other neuron model or real neurons in which the function ν(I) can be computed numerically or experimentally.
Because we are ultimately interested in the response of neurons to stochastic inputs, the steady-state description alone does not suffice, yet it can be easily extended to the case in which inputs change slowly compared to the dynamics of the neuron under consideration—for instance, to an LIF neuron with membrane time constant τm and gaussian white noise filtered with synaptic time constant τs ≫ τm. For more complex neurons, τs should also be larger than all other timescales present in the system. We will show that although the timescale separation condition could seem restrictive, the equations obtained are applicable even when the input changes as fast as the dynamics of the neuron.
Let us assume that the condition that the neuron's dynamics is faster than the synaptic time constant is satisfied. Then, during a time interval Δt shorter than τs, the current I(t) is approximately constant. Therefore, during that interval, the neuron fires with a constant rate ν(I(t)), emitting ν(I(t)) × Δt spikes on average. Since this spike count can be smaller than one, ν(I(t)) × Δt < 1, the product ν(I(t)) × Δt should be interpreted as a firing probability rather than a deterministic spike count.
The adiabatic expression of the firing rate is simple and generally valid under the following conditions. First, the neuron should have a known sustained response ν(I) for constant input current. Second, the current has to be a slow enough stochastic process with known distribution P(I). For stationary input statistics, P(I) is the steady-state probability distribution of the current.
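As a minimal concrete sketch of these two ingredients (the parameter values, function names, and gaussian choice for P(I) below are our own illustrative assumptions, not the letter's), the adiabatic rate can be computed by averaging the deterministic LIF rate ν(I) over the stationary current distribution:

```python
import math

def nu_const(I, tau_m=10.0, theta=1.0, v_reset=0.0):
    """Deterministic LIF rate (spikes/ms, times in ms) for a constant current I.
    Model: dV/dt = (-V + I)/tau_m; spike at V = theta, reset to v_reset.
    Subthreshold currents (I <= theta) give zero rate."""
    if I <= theta:
        return 0.0
    return 1.0 / (tau_m * math.log((I - v_reset) / (I - theta)))

def adiabatic_rate(mu, sigma, n=2001, width=6.0):
    """Adiabatic firing rate: average of nu_const over a gaussian stationary
    current distribution P(I) with mean mu and std sigma, via a Riemann sum
    over +/- width standard deviations."""
    lo = mu - width * sigma
    dI = 2.0 * width * sigma / (n - 1)
    total = 0.0
    for k in range(n):
        I = lo + k * dI
        p = math.exp(-0.5 * ((I - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += p * nu_const(I) * dI
    return total
```

In the subthreshold regime (mean current below threshold), nu_const vanishes at the mean, so the entire output rate comes from the slow excursions of I(t) above threshold, which is exactly the regime where the adiabatic picture is most useful.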
2.1.2. A Suboptimal Alternative Expression for the Firing Rate.
2.1.3. Fast Noise and Slow Noise.
An important case arises when fast currents are present. For instance, AMPA synaptic receptors receiving Poisson spike trains produce current fluctuations with a correlation timescale of a few milliseconds, which are better modeled as fast rather than slow noise.
2.1.4. Cross-Correlation Function.
The formalism that we have described is not limited to the study of the first-order statistics of the neuron's firing, but it can also be extended to account for the second-order statistics. Here we find the two-point correlation function of the output spike trains of a pair of IF neurons receiving arbitrary forms of correlated and independent inputs. The equations are derived in an intuitive way. Finally, simpler equations are obtained for weakly correlated signals.
As usual, we assume that the current fluctuations are slower than the membrane time constant of the neurons. The two-point probability density of the common current is some known function P(Ic, t; I′c, t′), which specifies the probability density of having the common current with value Ic at time t and with value I′c at a time t′ (primes denote quantities at time t′).
This intuitive derivation of the cross-correlation function in the adiabatic approach is presented here for the first time, and it can be shown to be identical to the one obtained in Moreno-Bote and Parga (2006) for the case of LIF neurons receiving filtered white noise (see below). It is worth emphasizing that equation 2.7 can be applied to more general models of spiking neurons and to rather general forms of noise distributions and noise correlation structure.
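As a practical aside, the quantity predicted here can also be estimated directly from simulated spike trains. Below is a stdlib-Python sketch (the function name, window, and bin settings are our own illustrative choices) of a raw cross-correlogram, the histogram of spike-time differences between two trains:

```python
def crosscorrelogram(spikes_a, spikes_b, window=100.0, bin_ms=5.0):
    """Histogram of spike-time differences t_b - t_a (in ms) within
    +/- window, for two sorted lists of spike times."""
    n_bins = int(2 * window / bin_ms)
    counts = [0] * n_bins
    j0 = 0
    for ta in spikes_a:
        # advance past partner spikes that are too early for this reference spike
        while j0 < len(spikes_b) and spikes_b[j0] < ta - window:
            j0 += 1
        j = j0
        while j < len(spikes_b) and spikes_b[j] <= ta + window:
            k = int((spikes_b[j] - ta + window) / bin_ms)
            if 0 <= k < n_bins:
                counts[k] += 1
            j += 1
    return counts
```

Normalizing such a histogram by the bin width and the number of reference spikes gives an empirical estimate of the cross-correlation function to compare against the analytical predictions.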
2.2. Firing Rate and Correlations of LIF Neurons.
Here we summarize the analytical results for the case of an LIF neuron that follow from the general expressions presented above and are formally derived in the appendix. A more detailed exposition of the LIF neuron case is provided in the Supporting Information.
2.2.1. LIF Neurons with Slow Filters.
We start by considering the case of synaptic receptors with long time constants. This case is the relevant one to study the dynamics of neurons in the so-called high-conductance regime, in which the membrane time constant can become shorter or comparable to the synaptic time constants. This case naturally arises also when strongly fluctuating GABAA synaptic currents pass through a neuronal membrane with relatively short τm. It also accounts for the case of neurons strongly innervated by NMDA or GABAB receptors, hypothesized to be crucial to stabilize working-memory states (Wang, 1999). Here we will focus on synaptic receptors with a single timescale, while the more general case with two or more slow synapses with different timescales is considered in the Supporting Information.
The prediction of the firing rate given by the adiabatic approach, equation 2.19, is compared with simulation results in Figure 3. In the left panel, the noise σ² is kept fixed. The adiabatic firing rate (bottom line) becomes exact when τs is much longer than the membrane time constant of the neuron, but it also provides a good approximation when τs is comparable to τm = 10 ms. Only the case of subthreshold neurons is shown here, but similar fits are found for suprathreshold neurons. In the right panel, the noise is increased linearly with the timescale of the noise as σ² = σ₀²τs for some fixed σ₀². Since the adiabatic firing rate depends on the noise level only through the ratio σ²/τs (recall the definitions of ϵ, zmin, and the normalized thresholds), increasing σ² linearly with τs leaves the adiabatic firing rate unchanged (horizontal lines). The simulation results show that the firing rate approaches the adiabatic limit when τs becomes long, and that the adiabatic rate provides a good prediction when τs ∼ 2τm = 20 ms and an acceptable one even when τs = τm = 10 ms (in the latter case, the adiabatic rate accounts for 80% of the simulated rate on average). These comparisons also show that the adiabatic approach does not require the noise amplitude to be small.
2.2.2. Range of Validity of the Adiabatic Approach.
It is important to note that the mean ISIs obtained in the left panel of Figure 3 are very long compared to the synaptic time constant used (e.g., a rate of 10 Hz equals a mean ISI of 100 ms, which is much longer than the synaptic time constant at that point, 20 ms). Therefore, our theory does not require that the fluctuations live for a long period of time compared to the mean ISI of the neuron (∼100 ms), but rather that they live for a time of the order of its membrane time constant (∼10 ms).
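Claims of this kind can be checked numerically by simulating an LIF neuron driven by an Ornstein-Uhlenbeck current and measuring its rate directly. The following is a minimal Euler-scheme sketch (the parameter values and names are illustrative, not those used in Figure 3):

```python
import math
import random

def simulate_lif_ou(mu=1.2, sigma=0.3, tau_s=50.0, tau_m=10.0,
                    theta=1.0, v_reset=0.0, dt=0.05, t_max=2.0e4, seed=1):
    """Euler simulation of an LIF neuron, dV/dt = (-V + I)/tau_m, driven by
    an Ornstein-Uhlenbeck current with mean mu, stationary std sigma, and
    correlation time tau_s (all times in ms). Returns the rate in spikes/ms."""
    rng = random.Random(seed)
    v, I, spikes = 0.0, mu, 0
    noise_amp = sigma * math.sqrt(2.0 * dt / tau_s)
    for _ in range(int(t_max / dt)):
        I += (mu - I) * dt / tau_s + noise_amp * rng.gauss(0.0, 1.0)
        v += (-v + I) * dt / tau_m
        if v >= theta:  # threshold crossing: count a spike and reset
            spikes += 1
            v = v_reset
    return spikes / t_max

rate = simulate_lif_ou()  # suprathreshold mean drive
```

The measured rate can then be compared with the adiabatic average of ν(I) over the stationary gaussian distribution of the current, with the agreement improving as τs/τm grows.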
2.2.3. Comparison with the Fake Adiabatic Approach.
2.2.4. Equivalent But Computationally Faster Implementation of the Adiabatic Firing Rate.
The adiabatic expression of the firing rate for an LIF neuron in the long τs limit, equation 2.19, is appealing because of its simplicity. It is also very efficient computationally, since it involves the calculation of a single integral, requiring only a summation on the order of 10³ terms.
2.2.5. Distribution of Currents Conditioned to Output Spike Times.
2.2.6. The Diffusion Approximation.
2.2.7. Short τs Limit and Interpolation Procedure.
Until now, we have determined the firing rate of an LIF neuron in the presence of a slow filter. It would be desirable to know the firing response of these neurons in the opposite limit in which the synaptic time constants are short but nonzero. Here we describe how to estimate the firing rate in the presence of a single filter characterized by any value of τs (Moreno-Bote & Parga, 2004).
2.2.8. Cross-Correlogram of Pairs of Neurons with Synaptic Filters.
The two analytical expressions, equations 2.30 and 2.32, are compared to simulations in Figure 5. The two predictions give the same numerical values (thick solid line) and are very close to the simulated cross-correlation function even when similar values of τs and τm are used. The linear approximation of the cross-correlation function, equations 2.10 and 2.11 (see Section 2.5), underestimates the true values but also provides a good match. The linear prediction improves as the amount of common noise relative to the independent noise decreases (note that in the simulations, the amount of common noise used is not small compared to the amount of independent noise). The linear approximation provides a fast estimate of the cross-correlation function. Equation 2.32 consists of a double sum over an infinite series, but in practice it is extremely fast to evaluate because each sum can be truncated to its first two hundred terms (using t = 200 ms). Equation 2.30 provides the slowest prediction, since it involves a double integral.
The cross-correlograms are typically characterized by a single peak in both the sub- and suprathreshold regimes when the fraction of total noise that is common is small, while oscillatory patterns arise when the fraction approaches one (not shown; Moreno-Bote & Parga, 2006). For small fractions, the correlation timescale of the output spike trains is τs, the time constant of the synapses driving the two neurons, as predicted by the linear approximation in equations 2.10 and 2.11.
Here, we have described the cross-correlation function of the output spike trains of a pair of LIF neurons. However, our theory also allows a detailed description of other statistical properties of the spiking response, such as the coefficient of variation of the ISIs (CVISI), the Fano factor of the spike count of the output spike train (FN), and its autocorrelation function (Moreno-Bote & Parga, 2006).
2.2.9. Probability Distribution of the Voltage.
2.2.10. LIF Neurons with One Fast and One Slow Synaptic Type.
We continue the discussion of slow filters, this time in the presence of a second, fast filter. This scenario was considered in a study of the effect of background activity on τm when AMPA (fast) and GABAA (slow) synaptic receptor types are present (Destexhe, Rudolph, Fellous, & Sejnowski, 2001). In that work, it was argued that background activity reduces the membrane time constant of the neuron severalfold, so that τm ∼ 5 ms. AMPA synapses are then fast compared with τm, and we can approximate τAMPA = 0. However, GABAA receptors display a slower time decay, and they can be taken as slow compared with the membrane dynamics.
The quantity νfast(z) in equation 2.38 has an intuitive meaning: it is the rate of an LIF neuron driven by a white noise input with an effective mean and variance σ₂² (Ricciardi, 1977). As can be appreciated, the output firing rate is given by the average of νfast(z) over the stationary distribution of z, as in the case of a single slow filter.
Formula 2.38 admits an expansion in powers of ϵ (Supporting Information). At leading order, the rate is simply the firing rate of an LIF neuron driven by a white noise input with mean μ and variance σ₂² (Ricciardi, 1977), and the full rate approaches this leading-order value as the synaptic time constant increases, as shown in Figure 6, where the predictions of equation 2.38 are compared with simulated data. A perturbative expansion of the firing rate in equation 2.38 in powers of 1/τs exists in both the supra- and subthreshold regimes. This contrasts with the case of a single slow filter, where the firing rate admits a perturbative expansion in powers of 1/τs only in the suprathreshold regime.
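For completeness, the white-noise rate that appears at this leading order is the classical Ricciardi (1977) expression. A stdlib-only numerical sketch of it follows (the function name and quadrature scheme are our own; direct evaluation is safe only for moderate integration limits because of the exp(u²) factor):

```python
import math

def ricciardi_rate(mu, sigma, tau_m=10.0, theta=1.0, v_reset=0.0, n=4000):
    """LIF firing rate under gaussian white-noise input (Ricciardi, 1977):
        nu = [ tau_m * sqrt(pi) * integral from (v_reset-mu)/sigma
               to (theta-mu)/sigma of exp(u^2) * (1 + erf(u)) du ]^{-1}
    evaluated with a midpoint Riemann sum. Note: exp(u^2) overflows for
    large |u|, so keep the limits moderate."""
    lo = (v_reset - mu) / sigma
    hi = (theta - mu) / sigma
    du = (hi - lo) / n
    total = 0.0
    for k in range(n):
        u = lo + (k + 0.5) * du
        total += math.exp(u * u) * (1.0 + math.erf(u)) * du
    return 1.0 / (tau_m * math.sqrt(math.pi) * total)
```

Averaging this rate over the slow variable, as equation 2.38 prescribes, then yields the firing rate in the mixed fast-plus-slow case.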
An expression for the output firing rate identical to that in equation 2.38 has been found in Moreno, de la Rocha, Renart, and Parga (2002) and Moreno-Bote, Renart, and Parga (2008) for an input with spike correlations. More specifically, in those works, we calculated the output firing rate of an LIF neuron driven by exponentially correlated presynaptic spikes characterized by a correlation time constant τc and magnitude α. However, in the situation presented in this work, the presynaptic currents in equation 2.35 are modeled as white noises that approximate independent Poisson firing of a pool of presynaptic neurons (see the Supporting Information). The two expressions are identical because in the presence of two filters, one slow and another infinitely fast, the total input I(t) has exactly the exponential correlations (see equation 2.37) that were considered in Moreno et al. (2002) and Moreno-Bote et al. (2008) to model exponentially correlated spikes with correlation time τc = τs and positive correlation magnitude α = σ₁²/σ₂².
The results found above can be extended to any other IF neuron model. A general formula similar to equation 2.38 for the firing rate of a general IF neuron with both fast and slow filters is given in the appendix.
2.2.11. The Transfer Function with Slow and Fast Synaptic Filters.
We plot in Figure 7 the input-to-rate transfer function of an LIF neuron. The firing rate is plotted as a function of the mean current μ for three values of τs, for both a single slow filter (left) and one slow plus one fast filter (right). As τs increases, the fluctuations of the slow input noise are filtered out, and the curve becomes steeper as a function of μ. For the same mean input drive μ, the firing rate decreases as a function of τs. In these figures, the single formulas 2.19 and 2.38 are used without interpolation to test their range of validity. Even when the synaptic time constant is chosen to be τs = τm = 10 ms (top curves), the prediction is rather close to the simulation results. Note that in the presence of fast noise, the transfer function is smoother than in the case of a single slow filter.
2.3. Firing Rate for the QIF Neuron.
2.4. Noise-Thresholded IF Neuron.
2.5. List of Expressions.
In this section, we provide an exhaustive list of the analytical expressions found in the letter. The expressions for the firing rates can be obtained from the general theory presented in the appendix. In the examples considered here, we use Ornstein-Uhlenbeck processes (colored noise) to generate the input currents, equation 2.14, but a broad range of noise models can be considered, as described in the appendix.¹
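As an aside, an Ornstein-Uhlenbeck input current of this kind can be generated with the exact one-step update, which is valid for any time step, unlike a plain Euler scheme (a standard discretization; the names and parameter values below are illustrative):

```python
import math
import random

def ou_samples(mu=0.0, sigma=0.3, tau_s=10.0, dt=1.0, n=100000, seed=7):
    """Generate an Ornstein-Uhlenbeck current I(t) with mean mu, stationary
    std sigma, and correlation time tau_s (ms), using the exact one-step
    update I(t+dt) = mu + (I(t)-mu)*exp(-dt/tau_s) + noise."""
    rng = random.Random(seed)
    decay = math.exp(-dt / tau_s)
    amp = sigma * math.sqrt(1.0 - decay * decay)
    I = mu  # start at the mean
    out = []
    for _ in range(n):
        I = mu + (I - mu) * decay + amp * rng.gauss(0.0, 1.0)
        out.append(I)
    return out

xs = ou_samples()
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

For a long trace, the sample mean and variance converge to mu and sigma², and the autocorrelation of the samples decays as exp(-|t - t'|/tau_s), the colored-noise structure assumed throughout.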
2.5.1. The Adiabatic Firing Rate for Slow Input Currents.
2.5.2. The Firing Rate with Fast and Slow Filters.
2.5.3. Correlation Function.
We have developed a theory that describes analytically the firing rate of IF neurons driven by arbitrary forms of slow stochastic inputs and when fast forms of noise are present too. The theory is exact when the timescale governing the noise fluctuations is much longer than the intrinsic timescales of the neuron, but it can also be applied to the case in which the timescales are comparable. It is worth emphasizing that our theory does not require that the interspike intervals are short compared to the timescale of the stochastic inputs, but rather that the latter is longer or comparable to the membrane time constant of the neuron. Our approach does not require that the noise amplitude is small either.
Other work has also addressed the problem of studying the firing properties of IF-like neurons driven by slow stochastic inputs. Salinas and Sejnowski (2002) considered an input current that could take two discrete values. Although interesting, the input model cannot approximate the current generated by a sum of spike trains. Svirskis and Rinzel (2000) have found an estimate of the firing rate of a neuron model in which the potential can be above threshold and the reset effect is not included. Middleton et al. (2003) have studied the distributions of interspike intervals in nonleaky IF neurons with slow stochastic inputs and provided analytical expressions for them that are valid in the limit of small noise amplitude and when the synaptic timescale is at least one order of magnitude longer than the mean interspike interval. Their technique cannot be applied to compute the firing rate of LIF neurons in the noise-driven regime, since it requires that, for any frozen value of the input noise, the interspike interval be finite. Schwalger and Schimansky-Geier (2008) have studied the interspike interval distributions and the Fano factor of the spike count of the output spike train in LIF neurons driven by slow stochastic inputs and derived analytical expressions for them. The computation of the interspike interval distribution requires knowing the distribution of synaptic currents at the spike times found in Moreno-Bote and Parga (2004, 2006). Their analytical expressions are valid when the synaptic time constant is several orders of magnitude longer than the membrane time constant. In Moreno-Bote and Parga (2006), we have developed a method that allows computing the Fano factor and the autocorrelation function of the output spike trains accurately even when τs is similar to τm.
The crucial difference between the two approaches is that Schwalger and Schimansky-Geier (2008) assume that the currents are constant across time after an output spike, while in Moreno-Bote and Parga (2006), we fully consider the stochastic temporal evolution of the currents after an output spike. Gerstner (1999) has studied models in which the threshold potential after a spike is a slow, random variable. These models can be solved exactly, but they cannot be mapped to IF neurons with slow, noisy inputs. This is because the noisy threshold is drawn from a distribution only at the moments of the spikes, not continuously over time, as happens in neuron models receiving fluctuating inputs. In a recent work, Brunel and Latham (2003) have used our naive expansion for long synaptic time constants (described in detail in the Supporting Information) to compute the firing rate of a QIF neuron in the suprathreshold regime in that limit. However, our naive expansion can be applied only to the suprathreshold regime, in which the mean input drive dominates the spiking behavior of the neuron and noise plays a secondary role. Here, using a regularized expansion, we have found an expression for the firing rate of a QIF neuron valid in both the supra- and subthreshold regimes.
Chizhov and Graham (2008) have developed a new procedure to compute the firing rate of LIF neurons receiving colored noise with arbitrary timescale τs. Our analytical expressions for the firing rate and theirs have been compared in their Figure 4, both providing a good match with the simulated data. Their method, however, differs from ours substantially. The reset effect after generation of a spike is not considered in Chizhov and Graham (2008), which makes the statement of the problem easier. This approximation is well justified in cases in which the interspike intervals are longer than the membrane time constant. Moreover, their calculation of the firing rate in the stationary case involves two steps. First, the associated FPE without reset effect is solved numerically for several values of the neuron parameters and τs, and the results are then fit by simple functions; the fits are good at least in the parameter regime used. Second, these simple fit functions are employed in the final expression of the firing rate, which involves a double integral that can only be computed numerically. In contrast, by solving analytically the FPE with reset effect, we have provided analytical expressions for the firing rate of general IF neurons that are exact in the long τs limit, involve a single integral for the LIF and QIF neurons, and whose asymptotic behavior is mathematically different from that obtained in Chizhov and Graham (2008) for the LIF neuron.
The difference between averaging voltages in Carandini's model and averaging synaptic inputs (currents or conductances) in our theory is substantial. Since voltages are necessarily upper-bounded by the spiking threshold of the neuron, computing the firing rate as a function of the voltage might be susceptible to large statistical errors, since many possible firing rates will correspond to similar voltages around the spiking threshold. However, computing the firing rate as a function of the input current (or synaptic conductances) does not suffer from this statistical problem, since currents are not upper-bounded in the range of values typically observed. In fact, we have previously shown that the firing rate of conductance-based IF neurons can be computed using an average of the firing rate as a function of the instantaneous synaptic conductances over the distribution of synaptic conductances (Moreno-Bote & Parga, 2005). Since voltages and synaptic conductances can be measured in vivo, it would be interesting to compare quantitatively the predictions for the firing properties of visual cortex neurons using the two alternative ways of averaging discussed above.
In this letter, we have also provided expressions for the cross-correlation function between the output spike trains of two IF neurons receiving common as well as independent sources of noise and applied the theory to the LIF neuron (see also Moreno-Bote & Parga, 2006). The theory allows a quantitative description of the peak, width, and area of the cross-correlation function, as well as the correlation coefficient of the output spike trains. We have also found simplified equations for the cross-correlation function that establish a linear relationship between input and output correlations. Several recent works have described analytically the temporal profile of the cross-correlation function of a pair of spiking neurons, but using simplified models that do not have the after-spike reset characteristic of the IF neuron (Svirskis & Hounsgaard, 2003; Tchumatchenko, Malyshev, Geisel, Volgushev, & Wolf, 2010; Burak, Lewallen, & Sompolinsky, 2009). De la Rocha, Doiron, Shea-Brown, Josic, and Reyes (2007) have presented analytical expressions to compute the coefficient of correlation for LIF neurons in the limit of weak input correlations, but these expressions do not allow an analytical description of the temporal profile of the cross-correlation function. Our theory and its extension to interconnected neurons might be crucial for describing the temporal correlation patterns found in retina and cortex (Riehle et al., 1997; Bair et al., 2001; Kohn & Smith, 2005; Pillow et al., 2008) and for determining the connectivity matrices underlying those patterns using IF neurons as the basic functional units.
Although we have focused on the description of the firing rate and cross-correlation function of the output spike trains of a pair of IF neurons, our theory can also be applied to study other statistical properties of the spiking response, such as the coefficient of variation of the ISIs (CVISI), the Fano factor of the spike count of the output spike train (FN), and its autocorrelation function. It has been shown previously that those statistical quantities can be obtained for LIF neurons from the adiabatic approach (Moreno-Bote & Parga, 2006). A generalization of that formalism is possible and will allow the description of second-order firing statistics for general IF neurons driven by arbitrary forms of noise. In addition, it would be desirable to extend our adiabatic approach to describe the transient response of IF neurons. Analytical expressions for the response of LIF neurons to sinusoidal stimuli in the limit of small amplitudes and infinitely fast synapses, τs = 0, are available (Brunel & Hakim, 1999; Lindner & Schimansky-Geier, 2001; Richardson, 2007), but no solutions valid for all stimulus frequencies are known for nonzero τs.
A prime question is what effect synaptic time constants have on neuronal network dynamics. Recent work has shown that the temporal dynamics of the synapses can play an important role in setting the response properties of IF neurons working in the high-conductance regime (Shelley et al., 2002; Moreno-Bote & Parga, 2005; Vogels & Abbott, 2005; Cai et al., 2005; Apfaltrer et al., 2006; Kumar, Schrader, Aertsen, & Rotter, 2008). We think that the general theory of synaptic filtering presented here can be useful for building mean-field theories that use the rate variables as well as second-order statistics to describe the temporal dynamics of these networks (see Renart, Moreno-Bote, Wang, & Parga, 2007).
A.1. IF Neurons with Fast and Slow Filters.
Here we provide the details for computing the output firing rate for IF neurons described by rather general drift functions and noise models (see equation A.1 below). First, we define the model, then we compute the adiabatic firing rate of an IF neuron driven by slow input noise, and finally we study the case of both fast and slow synaptic filters.
A.1.1. General IF Neuron and Noise Models.
A.1.2. The Case of Slow Filtering.
The distribution is computed by combining equations A.19 and A.23 with the appropriate substitution. The firing rate, equation A.22, and the distribution of the membrane potential, equations A.19 and A.23, describe the problem completely at leading order.
A.1.3. The Case of Slow and Fast Filtering.
In this section we consider the case of a neuron with several slow filters and a single additive fast filter. Several nonadditive fast filters can also be included in the formalism without extra complications, and therefore we restrict our discussion to the simplest case described below.
Support was provided by the Spanish Grant FIS 2006-09294 and by the Swartz Foundation (to R.M.B.). We thank A. Renart and R. Brette for useful discussions. R.M.B. also thanks S. Deneve, B. Gutkin, C. Machens, M. Tsodyks, and A. Pouget for their hospitality at the Collège de France in Paris.
The firing rate and correlation function for other models of noise can be obtained by replacing the gaussian distributions in the expressions presented here by the corresponding steady-state distributions.