Abstract

There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking network model with delays, which in general may be very difficult to understand analytically. Here we develop mean-field methods for approximating the stimulus-driven firing rates (in both the time-varying and steady-state cases), auto- and cross-correlations, and stimulus-dependent filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and, hence, the total input to a neuron is approximately gaussian. These approximations lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity. They also provide an analytical way to evaluate the linear input-output filter of neurons and how this filter is modulated by network interactions and by the baseline stimulus level. Finally, in the case of strong refractory effects, the mean-field approximations in the generalized linear model become inaccurate; therefore, we introduce a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations that retain fine temporal behaviors.

1.  Introduction

One of the fundamental problems in the statistical analysis of neural data is to infer the connectivity and stimulus dependence of a network of neurons given an observed sequence of extracellular spike times in the network (Brown, Kass, & Mitra, 2004). Estimating the parameters of network models from data is in general a computationally difficult problem because of the high dimensionality of the parameter space and the possible complexity of the model's objective function given an observed data set of network spike times.

One class of models that has proven quite useful is known in the statistics literature as the generalized linear model (GLM; Brillinger, 1988; Chornoboy, Schramm, & Karr, 1988; see also Martignon et al., 2000; Kulkarni & Paninski, 2007; Nykamp, 2007), variants of which have been applied successfully in the hippocampus (Harris, Csicsvari, Hirase, Dragoi, & Buzsaki, 2003; Okatan, Wilson, & Brown, 2005), motor cortex (Paninski, Fellows, Shoham, Hatsopoulos, & Donoghue, 2004; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005), retina (Pillow et al., 2008), and cultured cortical slice (Rigat, de Gunst, & van Pelt, 2006). In its simplest form, this model incorporates both stimulus-dependence terms and direct coupling terms between each observed neuron in the network; fitting the model parameters leads to an inhomogeneous (stimulus-driven), nonlinear coupled spiking model with delay terms. Statistically speaking, the model is attractive due to its explicitly probabilistic nature and because fitting the model parameters is surprisingly simple: under certain simple conditions, the log-likelihood function with respect to the model parameters is concave, and the maximum likelihood estimation of the parameters can be easily performed using standard ascent methods (Paninski, 2004; Paninski, Pillow, & Lewi, 2007).

Once the model parameters are obtained, we are left with an obvious question: What do we do next? One of the key applications of such a network model is to better understand the input-output properties of the network. For example, we would like to be able to predict the mean firing response of the network given a novel input and to carve out the impact of the network coupling terms on this stimulus-response relationship (e.g., what is local inhibition's impact on the stimulus filtering properties of the network?). We would also like to know how the correlation properties of spike trains in the network might depend on the stimulus, and in general how correlations might encode stimulus information (Riehle, Grün, Diesmann, & Aertsen, 1997; Hatsopoulos, Ojakangas, Paninski, & Donoghue, 1998; Oram, Hatsopoulos, Richmond, & Donoghue, 2001; Nirenberg, Carcieri, Jacobs, & Latham, 2002; Schnitzer & Meister, 2003; Kohn & Smith, 2005; Pillow et al., 2008).

We can in general study these questions with direct Monte Carlo simulations. However, simulation of a large-scale probabilistic spiking network is computationally expensive, since we often need to draw many samples (i.e., run many simulated trials) in order to compute the quantities of interest to the desired precision. More importantly, direct numerical simulation often provides limited analytical insight into the mechanisms underlying the observed phenomena.

The goal of this study is to investigate how much of the behaviors of these GLM networks can be understood using standard analytical mean-field approximations (Renart, Brunel, & Wang, 2003; Wilson & Cowan, 1972; Amit & Tsodyks, 1991; Ginzburg & Sompolinsky, 1994; Hertz, Krogh, & Palmer, 1991; Kappen & Spanjers, 2000; Gerstner & Kistler, 2002; Meyer & van Vreeswijk, 2002). In particular, we develop approximations for the mean firing rates of the network given novel stimuli, as well as the auto- and cross-correlation and input-output filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity.

However, in the case of strong refractory effects, these mean-field approximations become inaccurate, since the spike history terms in the generalized linear model must be large to induce strong refractoriness, and this pushes our approximations beyond their region of accuracy. Therefore we introduce a modified model: a generalized linear model with Markovian refractoriness. This model has several advantages in this setting: it captures strong refractoriness, retains all of the easy-fitting properties of the standard generalized linear model, and leads to much more accurate mean-field approximations.

2.  The Generalized Linear Point Process Model

We begin by precisely defining the generalized linear point process model (see Figure 1). We consider N recurrently connected spiking neurons, modeled as a multivariate point process (Snyder & Miller, 1991). The firing rate (conditional intensity function) of neuron i at time t is a function of the total input, ui(t), the neuron receives:
$$\lambda_i(t) = f(u_i(t)), \tag{2.1}$$
where f(.) is a smooth, nonnegative, and monotonically increasing function. For numerical simulations, we use the exponential nonlinearity, f(u) = e^u, but the analysis in this article is not limited to this choice. The total input to the neuron is expressed as the sum of external input Ii and recurrent input Hi:
$$u_i(t) = I_i(t) + H_i(t). \tag{2.2}$$
The recurrent input is modeled as a sum of identical responses caused by each presynaptic spike and written as
$$H_i(t) = \sum_j \sum_n w_i^j(t - t_{j,n}), \tag{2.3}$$
where wji(t) is a coupling term from neuron j (upper index) to i (lower index), and tj,n is the nth spike time of neuron j. Without loss of generality, we may express each coupling term wji(t) as a weighted sum of exponential functions of different time constants. As we will see below, this sum-of-exponentials representation is extremely useful: we will take full advantage of the Markovian nature of the decaying exponential function. To keep notation under control, we will restrict our attention to the case of a single exponential function with time constant τij,
$$w_i^j(t) = J_i^j\, e^{-t/\tau_{ij}}\, \Theta(t), \tag{2.4}$$
where Θ is the Heaviside step function, which takes the value one for positive arguments and zero otherwise; generalizations to the case where the wji(t) are given by sums of exponentials with different time constants will be left implicit and are straightforward in all cases. If we write the output spike train of neuron i as
$$S_i(t) = \sum_n \delta(t - t_{i,n}), \tag{2.5}$$
the recurrent input of equation 2.3 is rewritten as the convolution of the synaptic filter and the presynaptic spike train:
$$H_i(t) = \sum_j \big(w_i^j * S_j\big)(t) = \sum_j J_i^j\, h_i^j(t), \tag{2.6}$$
with
$$h_i^j(t) = \int_{-\infty}^{t} e^{-(t-s)/\tau_{ij}}\, S_j(s)\, ds. \tag{2.7}$$
This model is a generalization of the inhomogeneous Poisson process, since past output spikes modulate the firing intensity function. If we set all the coupling weights Jji to zero, we recover the inhomogeneous Poisson process model. The model is also closely related to the spike-response model of Gerstner and Kistler (2002) and the inhomogeneous Markov interval process of Kass and Ventura (2001), in which the firing rate depends on only the last observed spike. Note that in the GLM, the firing intensity of a neuron may depend not only on the last spike of the neuron but on the past several spikes of this neuron (see, e.g., Paninski, 2004; Paninski et al., 2007, for further discussion). As mentioned in section 1, this model is attractive because it is fairly easy to fit to data and seems to capture the firing statistics of neural populations in a variety of experimental settings. However, in this article, we will not discuss the estimation problem but rather limit our attention to the problem of determining the network firing rate, auto- and cross-correlation functions, and input-output properties, given fixed, known model parameters.
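As a concrete reference point, the following Python sketch simulates this network directly by Euler time stepping, approximating the point process with a Bernoulli draw in each small bin. The function name, array conventions, and parameter handling are illustrative choices of ours, not part of the model specification.

```python
import numpy as np

def simulate_glm_network(I, J, tau, dt=1e-4, f=np.exp, rng=None):
    """Euler/Bernoulli simulation of the GLM network of eqs. 2.1-2.7.

    I   : (N, T) array, external input I_i(t) on the time grid
    J   : (N, N) array, J[i, j] = coupling weight from neuron j onto i
    tau : (N, N) array, synaptic time constants tau_ij (in seconds)
    """
    rng = np.random.default_rng() if rng is None else rng
    N, T = I.shape
    h = np.zeros((N, N))                   # h[i, j]: filtered spikes of j seen by i (eq. 2.7)
    spikes = np.zeros((N, T), dtype=bool)
    for t in range(T):
        u = I[:, t] + (J * h).sum(axis=1)  # total input, eq. 2.2 with eq. 2.6
        lam = f(u)                         # conditional intensity, eq. 2.1
        s = rng.random(N) < lam * dt       # Bernoulli approximation of the point process
        spikes[:, t] = s
        h -= dt * h / tau                  # exponential decay between spikes
        h[:, s] += 1.0                     # unit jump for each presynaptic spike
    return spikes
```

Averaging the resulting spike trains (divided by dt) over many independent runs yields the Monte Carlo rate estimates against which the approximations below are compared.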
Figure 1:

A schematic illustration of the generalized linear point process model.


3.  Master Equation

The main object we would like to compute here is P(h, t): the distribution of the state variables hji at time t. These hji are the solutions of the multivariate coupled Markov process,
$$\frac{dh_i^j}{dt} = -\frac{h_i^j}{\tau_{ij}} + S_j(t), \tag{3.1}$$
and therefore solving for P(h, t) will allow us to compute a number of important quantities. For example, to compute the mean firing intensity νi(t) = E[Si(t)] of neuron i given some input {Ii(t)}, we may compute
$$\nu_i(t) = E[\lambda_i(t)] = \int f\Big(I_i(t) + \sum_j J_i^j h_i^j\Big)\, P(h, t)\, dh,$$
where E[.] is the average over the spike history over the entire duration T, given the external input {Ii(t)}. Hence, we can read the mean firing rate off directly from P(h, t).
We begin by writing down the master equation—the Kolmogorov forward equation that governs the time evolution of this Markov process (Oksendal, 2002):
$$\frac{\partial P(h, t)}{\partial t} = \sum_{i,j} \frac{1}{\tau_{ij}} \frac{\partial}{\partial h_i^j}\Big[h_i^j\, P(h, t)\Big] + \sum_j \Big[\lambda_j(h - e_j, t)\, P(h - e_j, t) - \lambda_j(h, t)\, P(h, t)\Big], \tag{3.2}$$
with λj as in equation 2.1 and the term ej denoting the jump vector whose (i, j′) slots are δjj′; that is, a spike of neuron j increments each hij by one. Note that there are no diffusion terms here; the random nature of the stochastic differential equation enters instead through the jump terms.

While this jump-advection partial differential equation (PDE) in principle completely describes the behavior of this system, unfortunately this equation remains difficult to handle analytically (due to the nonlinearity in the λ terms) and numerically (due to the large dimensionality of the state variable h). We will pursue a direct analysis of this PDE elsewhere; here, we will attack this system using a different approach, based on a self-consistent expansion of the first and second moments of Si(t).

In the one-dimensional case (i.e., a single neuron, with one h(.)), however, we may easily solve the PDE numerically to obtain an exact solution for the mean firing rate. Figure 2 compares this exact solution with the mean-field approximations derived in the following section. The mean-field approximation is reasonably good here but not perfect, because of the fluctuations in the recurrent input (J = −1, τ = 10 ms).
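The following sketch shows one way such a direct solution can be organized: an upwind discretization of the drift toward h = 0 plus a shift-by-one jump term. The grid resolution, time step, and boundary handling are our own illustrative assumptions.

```python
import numpy as np

def solve_master_1d(I_of_t, J=-1.0, tau=0.010, f=np.exp,
                    h_max=8.0, n_bins=801, dt=1e-5, T=0.2):
    """Grid solver for the 1D master equation: u = I(t) + J*h,
    drift dh/dt = -h/tau between spikes, jump h -> h + 1 at each spike."""
    h = np.linspace(0.0, h_max, n_bins)
    dh = h[1] - h[0]
    shift = int(round(1.0 / dh))          # a spike moves mass by exactly +1 in h
    P = np.zeros(n_bins)
    P[0] = 1.0 / dh                       # all mass at h = 0 initially
    ts = np.arange(0.0, T, dt)
    rate = np.empty(ts.shape)
    for n, t in enumerate(ts):
        lam = f(I_of_t(t) + J * h)        # conditional intensity on the grid
        flux = h * P / tau                # leftward advection flux
        dP = np.zeros_like(P)
        dP[:-1] += (flux[1:] - flux[:-1]) / dh   # upwind drift toward h = 0
        dP[-1] -= flux[-1] / dh                  # outflow at the top boundary
        dP -= lam * P                            # mass leaving via a spike...
        dP[shift:] += (lam * P)[:-shift]         # ...reappears shifted by +1
        P += dt * dP                             # (mass jumping past h_max is dropped)
        rate[n] = np.dot(lam, P) * dh            # mean intensity E[lambda]
    return ts, rate

# e.g., a pulse input as in Figure 2A:
ts, rate = solve_master_1d(lambda t: 2.0 + 2.0 * (0.05 <= t < 0.10))
```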
Figure 2:

Direct solution of the master equation in the one-dimensional case. (A) Top: A single neuron is simulated with a pulse input I(t) shown. Bottom: Comparison of the firing rates computed via direct Monte Carlo sampling (simulation); the inhomogeneous Poisson case, with no history dependence, that is, w = 0 (noH); the master equation of equation 3.2 (exact); and the mean-field approximation methods in the limit of Nc → ∞ introduced in section 4. The mean-field approximation gives good approximations of the output firing intensity. (B) Top: The probability distribution P(h,t); grayscale axis indicates the height of the probability density. The self-history term h(t) was the convolution of the output spike train and a single exponential filter with time constant τ = 10 ms and weight J = −1; thus, the neuron was inhibited for a brief period following each spike. Note the discontinuous behavior visible at stimulus onset, where probability mass jumps significantly toward h = 1, then decays back down toward zero, according to equation 3.1. Bottom: The mean of h calculated from the probability distribution P(h,t). This E[h] also increases following the input pulse, which has the effect of slowing the firing rate. Since the weight J is negative, the history term is inhibitory here. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


4.  Mean-Field Approximation of GLM Statistics

Now we turn to the main topic of this article. As emphasized above, we would like to calculate the mean firing intensities of neurons, νi(t) = E[Si(t)], or the spike cross-correlation function between neurons, ϕij(t, t′). The calculation of these statistics is difficult, however, because of the nonlinearity f and the recurrent input term H in the model. A typical way of approximating these recurrent effects is to replace the interaction term H by its mean field (Renart et al., 2003; Wilson & Cowan, 1972; Amit & Tsodyks, 1991; Ginzburg & Sompolinsky, 1994; Hertz et al., 1991; Kappen & Spanjers, 2000; Gerstner & Kistler, 2002; Meyer & van Vreeswijk, 2002).

Let us assume that each neuron receives input from some subset, Nc, of the total population of N neurons. If the synaptic strength scales as 1/Nc, and assuming that the population is in the “asynchronous” state, then it is known that the cross-covariance function between two neurons scales with 1/Nc for large Nc (Ginzburg & Sompolinsky, 1994). This means that the contribution of each network interaction term is negligible and all the neuron activities are independent in the Nc → ∞ limit. Hence, calculating the mean firing intensities and cross-correlations is easy in this limit.

We first derive the mean-firing intensity in the limit of Nc → ∞ and then evaluate the finite-size effect of order 1/Nc. Note that this mean-field solution is a good approximation not only when Nc is very large; equivalently, this is a good approximation when the synaptic coupling J is small for fixed Nc, where the contribution of an individual synapse is small.

Now our first key approximation is to appeal to the central limit theorem applied to the sum of equation 2.6 and to assume that the recurrent input H is roughly gaussian. We can express the mean firing intensities of the neurons in our network as a function of two sufficient state variables: the mean recurrent input E[H] and the variance of the recurrent input Var[H],
formula
4.1
where, in the third line, the distribution of the recurrent input, Hi(t), is assumed to be gaussian with mean μi(t) = E[Hi(t)] and variance σi2(t) = Var[Hi(t)], and the expectation was replaced by a gaussian integral:
$$F(\mu, \sigma) = \int_{-\infty}^{\infty} \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\, f(\mu + \sigma z). \tag{4.2}$$
Note that for the exponential nonlinearity f(u) = e^u, the gaussian integral is given in closed form by F(μ, σ) = exp(μ + σ2/2).
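As a quick numerical check (our own illustration, not part of the original analysis), Gauss-Hermite quadrature reproduces this closed form:

```python
import numpy as np

def F_gauss(f, mu, sigma, order=80):
    """Evaluate the gaussian integral of eq. 4.2 by Gauss-Hermite quadrature."""
    z, w = np.polynomial.hermite_e.hermegauss(order)  # probabilists' nodes/weights
    return np.dot(w, f(mu + sigma * z)) / np.sqrt(2.0 * np.pi)

mu, sigma = 0.5, 1.2
print(F_gauss(np.exp, mu, sigma))      # numerical: ~3.3872
print(np.exp(mu + sigma**2 / 2.0))     # closed form for f(u) = e^u: ~3.3872
```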
Next we calculate the cross-correlation function,
formula
4.3
where in the second line, we decomposed the correlation function into three terms: the first term is the simultaneous correlation, with amplitude proportional to the firing intensity due to the point-process nature of the spike train; the second term is the correlation for t > t′ (the spike variable Si(t) is averaged given the past spike history to yield λi(t)); and the third term is the correlation for t′ > t. In the third line, we introduced the conditional firing intensity,
formula
4.4
which describes the probability of neuron i firing at t given the spike of j at t′. We also introduce an abbreviation in the following and write E[.|Sj(t′)] instead of E[.|spike of j at t′]. Similar to the approximation in equation 4.1, we assume a gaussian distribution of H given a spike of neuron j at time t′ to find
formula
4.5
for t > t′, with the corresponding conditional mean and conditional variance. Note that equation 4.5 is valid only for t > t′; if t < t′ instead, we evaluate νji(t′, t) and use Bayes' theorem to calculate νij(t, t′). The unconditional and conditional mean and variance of H in equations 4.1 and 4.5 are evaluated as
formula
4.6
where we defined
formula
4.7
with the auxiliary quantities defined analogously. The evolution of these quantities and of Akli in equation 4.7 follows simple ordinary differential equations (ODEs) and is easy to simulate,
formula
4.8
Note that the differentiation of Akli(t) for k = l is a little tricky because E[Sk(s)Sk(s′)] has a delta peak at s = s′. While the definition of Akli(t) in equation 4.7 is clear, we have to make sure in its evaluation not to double-count the integration of the delta peak. Hence, in the third equation of equation 4.8, we included the delta peak along the integration over s (including the peak at s = t) but excluded this peak from the integration along s′ (excluding the peak at s′ = t).

4.1.  Solution in the Mean-Field Limit.

In this section we evaluate the mean firing intensities, νi, and cross-correlation functions, ϕij, in the mean-field limit of Nc → ∞ (which is also valid in the noncoupled limit J → 0 for finite Nc); that is, synaptic strengths are scaled as J ∼ 1/Nc so that the individual contribution of a spike is negligible for large Nc, and we also assume, as discussed above, that the network is in the asynchronous state. Under these conditions, the cross-covariances between neurons are known to scale as Δϕij ∼ 1/Nc for i ≠ j and Δϕij ∼ 1 for i = j (Ginzburg & Sompolinsky, 1994). Then we can easily see that σi2(t) ∼ 1/Nc, because Aikl ∼ 1 for k = l and Aikl ∼ 1/Nc for k ≠ l from equation 4.7. Hence, in the limit of Nc → ∞, we obtain from equations 4.1 and 4.8,
$$\nu_i(t) = f\Big(I_i(t) + \sum_j J_i^j\, E[h_i^j(t)]\Big), \qquad \frac{d}{dt} E[h_i^j(t)] = -\frac{E[h_i^j(t)]}{\tau_{ij}} + \nu_j(t). \tag{4.9}$$
This self-consistent update rule corresponds to the well-known mean-field dynamics of the mean firing intensity (Ginzburg & Sompolinsky, 1994). Similarly, the scaling of the cross-covariance, Δϕij ∼ 1/Nc for i ≠ j, implies from equation 4.7 that the conditional terms are of order 1/Nc for k ≠ j and of order one for k = j. Moreover, the conditional covariance scales with 1/Nc for k, l ≠ j, and thus the conditional variance is also negligible in this limit. Hence, in the limit of Nc → ∞, we find the conditional intensity for t > t′ and, from equation 4.3,
formula
4.10
These equations are very easy to simulate, and the approximation of the mean firing intensity is good even for a small Nc if J is small (see Figure 2A). On the other hand, the mean-field approximation is not good for large J (see Figures 3 and 4B). This is because, for fixed Nc, the fluctuation of the recurrent input increases with J, making the mean-field assumption (which neglects these fluctuations) invalid. Figure 4 shows the cross-correlation functions between two neurons. Because the self-interaction term J ∼ 1/Nc vanishes in the large-Nc limit, the autocorrelation function in this limit captures only the Poisson peak at t = t′; it does not capture any nontrivial correlation caused by the interaction term J for finite Nc (see Figure 4).
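These mean-field equations are straightforward to integrate numerically. A minimal sketch, with array shapes and parameter conventions of our own choosing:

```python
import numpy as np

def mean_field_rates(I, J, tau, dt=1e-4, f=np.exp):
    """Integrate the Nc -> infinity mean-field dynamics (eq. 4.9).

    I: (N, T) input; J, tau: (N, N) coupling weights and time constants.
    Returns the (N, T) array of mean firing intensities nu_i(t)."""
    N, T = I.shape
    m = np.zeros((N, N))                       # m[i, j] tracks E[h_i^j]
    nu = np.zeros((N, T))
    for t in range(T):
        nu[:, t] = f(I[:, t] + (J * m).sum(axis=1))
        m += dt * (-m / tau + nu[None, :, t])  # dE[h_ij]/dt = -E[h_ij]/tau_ij + nu_j
    return nu
```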
Figure 3:

Comparison of the direct Monte Carlo simulation of a single neuron (solid), the firing intensity without recurrent input H (dotted), and the mean-field approximation (dashed). (A) Low step input (baseline I = 2, peak I = 4) is applied. The self-inhibition term is set to J = −5 and τ = 2 ms. (B) High step input (baseline I = 2, peak I = 8) is applied. The self-inhibition term is set to J = −5 and τ = 2 ms. Because of the large nongaussian fluctuations of input caused by the strong self-inhibition, the mean-field approximation loses accuracy here. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


Figure 4:

Comparison of cross-correlation functions of two neurons calculated by direct simulation (solid), calculated from the mean-field approximation (see equations 4.9 and 4.10; dashed), and the mean-field approximation with finite size correction (see equations 4.11 and 4.12; dot-dashed). (A) Weak self-interaction. Parameters are set to J11 = −1, J22 = −1, J21 = 1, J12 = −0.5, , and I1 = 2, I2 = 4. The result with finite size correction is obtained after five iterations of the self-consistent equations. The calculation in the mean-field limit captures only the steady-state firing intensity, not any temporal cross-correlation function. Cross-correlation functions are well approximated with the finite-size correction terms. (B) Strong self-interaction: parameters are set to J11 = −5, J22 = −5, J21 = 1, J12 = −0.5, , and I1 = 2, I2 = 4. Because of the large nongaussian fluctuation of the input caused by the strong self-inhibition term, the mean-field approximation does not work very well. In this case the finite size solution converges rather slowly. Hence, the result is after 10 iterations of the self-consistent equations. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


4.2.  Estimating the Finite Size Effect.

In the previous section, we calculated the mean firing intensity, νi(t), and the cross-covariance function, ϕij(t, t′), in the Nc → ∞ limit. Ideally, provided the true conditional variance term (which differs from σi2(t) only at order 1/Nc2), we could iteratively evaluate equations 4.1 to 4.8, starting from the mean-field limit solution of equations 4.9 and 4.10, to find a precise approximation of νi(t) and ϕij(t, t′), as long as the gaussian assumption on H is valid. However, we need the third-order correlations to evaluate this term, and so the equations are not closed. Here, we give up evaluating O(1/Nc2) terms and evaluate the finite size corrections of νi(t) and ϕij(t, t′) only up to O(1/Nc) terms. Up to this order, it can be argued that the conditional variance may be replaced by the unconditional one, and under this approximation, we obtain a self-consistent expression for νi(t) and ϕij(t, t′). Equations 4.1 and 4.5 yield, up to the first order of 1/Nc,
formula
4.11
where the conditional variance is replaced by σi2(t), which is valid to the first order of 1/Nc. Equation 4.11 can be evaluated using equations 4.6 to 4.8, which we repeat here for convenience:
formula
4.12
In principle, the above equations are valid even with time-dependent input, and several iterations of the above set of equations yield time-dependent cross-correlation functions. However, evaluation of these equations is computationally expensive with time-dependent input. If one neuron is connected to Nc surrounding neurons and time is discretized into T time bins, it takes O(N2NcT2) operations and memory to evaluate the conditional intensities. If the input is constant in time, νij(t, t′) is a function of t − t′ only, and just O(N2NcT) operations are required to solve the equations. Therefore, we focus on calculating the finite size effect for constant input in this article.1 Generally, several iterations of the above set of equations are required for the convergence of the mean firing intensities and cross-correlations. However, by initially setting those variables to the solution in the limit of Nc → ∞, we found good approximations after a couple of iterations in many cases.

The mean firing rate and cross-correlation functions calculated from equations 4.11 and 4.12 are precise for small J and large Nc because the recurrent input is close to gaussian in this case (see Figure 4A). On the other hand, for large J and small Nc, the gaussian input approximation is poor, and therefore the approximation becomes worse (see Figure 4B). One key result here is that both the autocorrelations and the cross-correlations in these networks depend on the inputs I(t) in general, as can be seen from equations 4.6 to 4.8. Changing just the baseline (mean) of the input I(t) can significantly change the correlation structure in the network as seen in a number of physiological preparations (e.g., Hatsopoulos et al., 1998; Kohn & Smith, 2005), even if the coupling terms J and w are unchanged.

4.3.  Linear Input-Output Filter.

The ODE derived in equation 4.9 is general and may be applied given arbitrary time-dependent input {Ii(t)}. In this section, we exploit these equations to explore the linear response properties of these spiking networks. A standard physiological technique is white noise analysis (Marmarelis & Marmarelis, 1978; Rieke, Warland, de Ruyter van Steveninck, & Bialek, 1997), in which we inject white noise input in order to study a network's input-output filtering properties. When this analysis is applied to our GLM system, the external input is given by the convolution Ii(t) = ∫ Ki(t − s)ξi(s) ds, where Ki is neuron i's linear stimulus filter and ξi(t) is the white noise stimulus with mean ξi(0) and small variance σξ2. We keep the input variance small here in order to treat the input as almost constant and develop a consistent linear response theory. Note that 〈.〉 describes the average over the fluctuating stimulus ξi(t) here. We use δX = X − 〈X〉 for any function X of the input stimulus and neglect O(δI2) terms in the following calculations.

Let us summarize the calculations in the mean-field (Nc → ∞) limit. The mean-firing intensity of a neuron is given by the following self-consistent equations:
formula
4.13
In particular, for a constant stimulus, ξi(t) = ξi(0), we obtain
formula
4.14
from equation 2.4. Note that X(0) describes the response to the constant stimulus ξi(0); for example, νi(0) is the corresponding steady-state firing intensity. We can solve the zeroth-order terms in equation 4.14 self-consistently by finding the roots of the fixed-point equations νi(0) = f(Ii(0) + Σj Jij τij νj(0)). In Figure 5, we compare this self-consistent mean-field solution with numerical simulation of a single neuron as a function of the constant input, I(0), and the self-inhibition strength, J. We see that the approximation is good for small values of the self-interaction J, but that the error increases with ∣ J ∣ and I(0), as expected given the analysis in the previous section.
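For a single self-coupled neuron, the zeroth-order fixed point reduces (using the steady state E[h] = τν) to the scalar equation ν(0) = f(I(0) + Jτν(0)), which standard root finding solves directly; parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

def steady_rate(I0, J, tau, f=np.exp, nu_max=1e4):
    """Zeroth-order rate: root of nu = f(I0 + J*tau*nu) (steady state of eq. 4.9).
    For inhibitory J the bracket [0, nu_max] always contains the root."""
    return brentq(lambda nu: f(I0 + J * tau * nu) - nu, 0.0, nu_max)

print(steady_rate(I0=2.0, J=-1.0, tau=0.010))  # below the uncoupled rate f(2) = e^2
```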
Figure 5:

Comparison of (A) a simulated firing rate of a single neuron and (B) the self-consistent solutions of equation 4.13 for different input levels, I(0), and self-inhibition strengths, J. (C) The percentage errors of the mean-field approximation are shown with respect to the simulated firing intensity. We set τ = 10 ms. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


To perform the linear response analysis, we require the first-order terms. Differentiation of equation 4.13 with respect to the stimulus perturbation, δξ, yields
formula
4.15
Now we can rewrite equation 4.15 by introducing vector notation as
formula
4.16
where we defined the corresponding vectors of rate and stimulus perturbations. Therefore, the Fourier transformation of equation 4.16 yields
formula
4.17
with a gain function G(ω). Note that I is an identity matrix here (not the input vector). We should note that a mathematically identical linear response filter was derived for a related model in Ginzburg and Sompolinsky (1994). One difference is that in their model, spinlike binary variables flip depending on the input level with a fixed time constant. This corresponds, in our model, to the case in which all the synaptic time constants τij are identical. Hence, the linear response filter in equation 4.17 is a natural generalization of their result.
We show in Figure 6A a simulated normalized receptive field of a single neuron and the analytical result of equation 4.17. In this single-neuron case, where K is set to be a single exponential function with time constant τI for simplicity, the Fourier transformations of the synaptic filter and the input filter are ŵ(ω) = Jτ/(1 + iωτ) and K̂(ω) = τI/(1 + iωτI), respectively. In this case, we obtain, after an inverse Fourier transformation by a Cauchy integral,
formula
4.18
where the amplitudes and effective time constants of the two exponential components follow from the Cauchy integral. We see that the linear response may be described as a combination of the stimulus-filter term K along with an additional term due to the recurrent filter w. Of course, in the limit J → 0, we recover G = f′K. Note also that the input-output filter changes with the baseline input through f′. In the case of the exponential nonlinearity f and J < 0, the larger the input baseline, the sharper the linear filter. This is because the effective strength of the spike history effect is modulated by the input-output nonlinearity and the baseline input level (see, e.g., Bryant & Segundo, 1976, for some similar effects in an in vitro physiological preparation).
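The filter can also be evaluated numerically by sampling the gain function on a frequency grid and inverting with an FFT. The sketch below assumes the exponential filter forms quoted above and, for f(u) = e^u, takes f′ equal to an illustrative baseline rate:

```python
import numpy as np

def linear_filter(fprime, J, tau, tauI, dt=1e-4, T=0.5):
    """Invert the single-neuron gain function of eq. 4.17 numerically:
    G(w) = fprime*K(w) / (1 - fprime*J*w_syn(w)), exponential K and w_syn."""
    n = int(T / dt)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequency grid
    K_hat = tauI / (1.0 + 1j * w * tauI)        # stimulus filter K(t) = exp(-t/tauI)
    w_hat = tau / (1.0 + 1j * w * tau)          # synaptic kernel exp(-t/tau)
    G_hat = fprime * K_hat / (1.0 - fprime * J * w_hat)
    return np.arange(n) * dt, np.real(np.fft.ifft(G_hat)) / dt  # impulse response

# illustrative operating point: for f(u) = e^u, fprime equals the baseline rate
t, G = linear_filter(fprime=4.0, J=-1.0, tau=0.010, tauI=0.020)
```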
Figure 6:

Comparison of the analytically derived and simulated input-output filters. (A) A simulated normalized linear input-output filter for I(0) = 2 (thin-dotted) and I(0) = 4 (thick-dotted) is compared with analytical results at I(0) = 2 (thin-solid) and I(0) = 4 (thick-solid) of equation 4.18. Because the strength of the self-inhibition component changes with the baseline input through f′(I(0) + μ(0)), the input filter is sharpened for high baseline input. Other parameters are set to J = 1, τ = 10 ms, with τI = 20 ms and σξ = 1 Hz. All filter amplitudes are normalized by their maximum amplitudes. (B) Left panels show the comparison of analytically derived input-output filters G11(t) and G12(t) of equation 4.20 (solid lines) and simulated input-output filters (dotted lines) for I(0) = 2 (thin lines) and I(0) = 4 (thick lines). Right panels show neuron 1's spatiotemporal input-output filter for I(0) = 2 (top) and I(0) = 4 (bottom). See the text for parameter values. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


Next, we calculate how interactions between neurons can change the input-output filter. For concreteness, we consider two neurons that receive a spatiotemporal white noise stimulus ξ(z, t) with unit variance. The input to neuron i is obtained by filtering ξ(z, t) with a spatial filter centered at zi (with z1 = 1 and z2 = −1) and a temporal filter. For simplicity, we assume that the two neurons receive the same baseline input, I1(0) = I2(0) = I(0), with an identical temporal stimulus filter, and have symmetric synaptic interactions J12 = J21 = J = −1, J11 = J22 = 0, with identical synaptic time constants. The derivative of the nonlinear function, f′(I(0) + μ(0)), is then identical for i = 1, 2. Proceeding as above, we can calculate the spatiotemporal input-output filter of neuron i. As in the previous single-neuron case, the Fourier transformation of the temporal stimulus filter is K̂(ω) = τI/(1 + iωτI), while that of the synaptic filter is ŵ(ω) = Jτ/(1 + iωτ), which yield the Fourier transformation of the temporal input-output filter,
formula
4.19
After a Fourier inverse transformation using Cauchy integration, we find
formula
4.20
Figure 6B compares the analytically derived input-output filter of equation 4.20 with numerical simulation for different input baselines, I(0) = 2 and I(0) = 4. Figure 6B also shows the spatiotemporal input-output filter of neuron 1 for the two input baselines. Under the large input baseline condition, we see stronger interactions between the two neurons and, hence, significant differences in the effective input-output filtering properties of the neuron in the two conditions. Thus, as emphasized above, the filtering properties of the network can change in a stimulus-dependent way, without any changes in the system parameters K, J, or w. (See, e.g., Pillow et al., 2008, for some related results concerning the effects of interneuronal coupling terms on the filtering properties of populations of primate retinal ganglion neurons.)

5.  The Generalized Linear Point Process Model with Markov Refractoriness

In section 4, we saw that we could derive simple approximate expressions for many important quantities (firing rate, correlations) in the basic generalized linear point process model as long as the individual synaptic coupling terms ∣ J ∣ are not so large that the accuracy of the mean-field expansion is compromised (see Figures 3 and 4).

In particular, the small ∣ J ∣ condition is likely acceptable for the multineuronal coupling terms, since in many cases, the inferred coupling parameters—in motor cortex (Paninski et al., 2004; Truccolo et al., 2005) and retina (Pillow et al., 2008), for example—have been empirically found to be small (though larger network effects are found in hippocampus; see Harris et al., 2003; Okatan et al., 2005). Similarly, we might expect the “slow” self-inhibition terms in the GLM (the terms responsible for adaptation in the firing rate, for example) to be relatively small as well (Pillow, Paninski, Uzzell, Simoncelli, & Chichilnisky, 2005; Truccolo et al., 2005).

However, it is clear that the generalized linear model with small weights ∣ J ∣ will not be able to account for the strong refractoriness that is a fundamental feature of neural point-process dynamics at fine timescales: in the GL model, strong, brief inhibition is produced by large, negatively weighted, and sharply decaying history effects wii(t), and these large wii(t) terms spoil our mean-field approximation. Thus, it is clear that we need to adapt the methods introduced above in order to handle strong refractory effects.

One possible solution would be to take a discrete-time approach: instead of modeling neural responses as point processes in continuous time, we could simply model each cell's spike count within a discrete-time bin as a Bernoulli random variable whose rate νi(t) is an increasing function of ui(t), very much as in the continuous-time model described above (Chornoboy et al., 1988; Okatan et al., 2005). This discrete-time Bernoulli model shares much of the ease of fitting (including concavity of the log likelihood; Escola & Paninski, 2007) with the continuous-time model. In fact, the discrete-time model converges in a natural sense to the continuous-time model as the time binwidth dt becomes small. The advantage of this discrete-time formalism is that the firing rate νi(t) can never exceed 1/dt (since the Bernoulli probabilities cannot exceed one), and therefore, in a crude sense, the discrete-time model has a refractory effect of length dt. A mean-field analysis can be developed for this discrete-time model, which exactly mirrors that developed above in the continuous-time model. We simply need to exchange our ordinary differential equations for discrete-time difference equations.
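A minimal sketch of this discrete-time variant (capping the per-bin probability at one is one illustrative choice of link function, not the only possibility):

```python
import numpy as np

def simulate_bernoulli(u, dt=1e-3, f=np.exp, rng=None):
    """Discrete-time Bernoulli spiking: the per-bin probability is capped at
    one, so the effective rate can never exceed 1/dt."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.minimum(f(u) * dt, 1.0)   # Bernoulli spike probability per bin
    return rng.random(np.shape(u)) < p
```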

Of course, this discrete-time approach is unsatisfactory in at least one respect, since we would like, if possible, to model the firing behavior of neurons down to a millisecond timescale (Berry & Meister, 1998; Keat, Reinagel, Reid, & Meister, 2001; Pillow et al., 2005; Paninski et al., 2007), and the discrete-time approach, by construction, ignores these fine timescales. Thus, we introduce a model that allows us to incorporate strong refractory effects directly. We set the rates,
formula
5.1
where f(.) and ui(t) are defined as in section 4 and I(.) is the indicator function that takes the value 1 if the argument is true. We have introduced an auxiliary refractory variable xi(t), which takes a discrete state from 1 to M. We assume that this variable xi(t) is itself Markovian, with transition rates
formula
5.2
that is, when xi(t) is in state M, it will transition to state 1 with each spike, and then transitions occur from state m to state m + 1 with rate 1/τr until xi(t) has reached the spiking state M once again. Refractoriness in this model is enforced because the neuron is silent whenever xi(t) is in one of the postspike quiescent states xi(t) < M. It is easy to see that this is a kind of inhomogeneous renewal model (the delay τd required for the state variable xi to move from state 1 to first reach the active state M is a sum of M − 1 independent and identically distributed (i.i.d.) exponential random variables of mean τr, and so τd is a gamma random variable with parameters (M − 1, 1/τr)) and may be considered a special case of the “inhomogeneous Markov interval” model introduced by Kass and Ventura (2001). By adjusting the number of states M and the rate 1/τr, we may adjust the refractory properties of the model: for example, M = 1 implies that we have an inhomogeneous Poisson model, while M = 2 provides an exponentially decaying relative refractory effect, and if we let M be large, with τr scaling like τr ∼ τ/M, we obtain an absolute refractory effect of length τ.
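The model is easy to simulate directly; the sketch below steps the refractory chain and a single self-history term in discrete time, with illustrative names and parameter values:

```python
import numpy as np

def simulate_markov_refractory(I_of_t, M=3, tau_r=0.002, J=-1.0, tau=0.010,
                               dt=1e-5, T=1.0, f=np.exp, rng=None):
    """Single neuron with the Markov refractory chain of eqs. 5.1-5.2 plus
    one exponential self-history term h (weight J, time constant tau)."""
    rng = np.random.default_rng() if rng is None else rng
    x, h, spikes = M, 0.0, []
    for n in range(int(T / dt)):
        t = n * dt
        if x == M:                                        # only state M can fire
            if rng.random() < f(I_of_t(t) + J * h) * dt:  # eq. 5.1
                spikes.append(t)
                x = 1                                     # reset to state 1
                h += 1.0                                  # self-history jump
        elif rng.random() < dt / tau_r:
            x += 1                                        # climb m -> m + 1
        h -= dt * h / tau                                 # decay between spikes
    return np.array(spikes)
```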

Finally, it is easy to show that the additional Markov term in the definition of the rate λi(t) does not have a negative impact on the estimation of the GL model parameters J given spiking data; as discussed in Paninski (2004), maximizing the likelihood in this model (or constructing an EM algorithm, as in Escola & Paninski, 2007) requires that we maximize a nonnegatively weighted version of the standard point-process log likelihood, and this weighted log likelihood retains all of the concavity properties of the original GL model. Thus, this new model requires just a single concave optimization and is as easy to fit as the standard GL point process model. The advantage is that a strong relative refractory effect is intrinsic to the model and does not need to be enforced by a large, brief, negative wii(t) term. Instead, we may use small adjustments to the Jii terms to fine-tune the model to match the short-time details of the observed interspike interval density.2 As we will see below, this leads to much more accurate mean-field approximations of the firing rates and correlation functions in these networks.

Let us define pi(t) as the vector of probabilities that neuron i is in state xi(t) = 1, …, M. Note that the boldface characters denote vectors. The dynamics of pi is described by
formula
5.3
where the effective transition matrix has the same elements as Wi(t) of equation 5.2 except for
formula
5.4
which describes the probability of neuron i firing at time t given xi(t) = M. Note that we write the last component of pi(t) as [pi(t)]M = pi(t).
We also define the probability of neuron i being in state m given a spike of neuron j at time t′; in particular, the last component is abbreviated in the same way. The evolution of this conditional probability is described by a similar equation,
formula
5.5
where the transition matrix has the same elements as Wi(t) of equation 5.2 except for
formula
5.6

Before turning to the “mean-field” approximation, it is worth noting that we may solve for the firing rates and correlations exactly in the special case of no synaptic couplings, J = 0. (The analogous case in the standard GL model is the inhomogeneous Poisson case, which is of course trivial.) We can proceed by exploiting either the renewal nature of the model (Gerstner & Kistler, 2002), which leads to convolution or infinite-sum formulas for the firing rate, or the Markovian structure, which leads to somewhat more intuitive ordinary differential equations. We pursue the second approach here.

In this case, the mean firing intensity is given by
$$\nu_i(t) = E[\lambda_i(t)] = f(I_i(t))\, \Pr\{x_i(t) = M\}. \tag{5.7}$$
This in turn gives, together with equation 5.4, νi(t) = f(Ii(t))pi(t). Hence, we can find the dynamics of the mean firing intensity by solving equation 5.3. In the special case of constant input Ii, it is easy to solve for the fixed point of pi(t); we find for the last component that pi = [1 + (M − 1)τr f(Ii)]−1.
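Concretely, this amounts to integrating an M-dimensional linear ODE for the state occupancies; the bookkeeping below follows the verbal description of equation 5.2, with illustrative values:

```python
import numpy as np

def exact_rate_uncoupled(I_of_t, M=3, tau_r=0.002, dt=1e-5, T=0.2, f=np.exp):
    """J = 0: integrate the M-state occupancies p(t) (eq. 5.3) and read off
    nu(t) = f(I(t)) * p_M(t) (eq. 5.7)."""
    p = np.zeros(M)
    p[-1] = 1.0                          # start in the excitable state M
    ts = np.arange(0.0, T, dt)
    nu = np.empty(ts.shape)
    for n, t in enumerate(ts):
        lam = f(I_of_t(t))
        nu[n] = lam * p[-1]
        dp = np.zeros(M)
        dp[0] += lam * p[-1]             # spike resets M -> 1
        dp[-1] -= lam * p[-1]
        dp[1:] += p[:-1] / tau_r         # climb m -> m + 1 at rate 1/tau_r
        dp[:-1] -= p[:-1] / tau_r
        p += dt * dp
    return ts, nu
```

For constant input, the last component of p relaxes to the fixed point [1 + (M − 1)τr f(I)]−1 quoted above.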
The autocorrelation function in this case is similarly easy to derive. (Note that the cross-covariance functions are zero in this J = 0 case, since the cross-coupling terms are set to zero.) In the case of no couplings, J = 0, the conditional firing intensity is given by
formula
5.8
for t > t′. We find the corner elements of the conditional transition matrix as
formula
5.9
for t > t′. This means that, for t > t′, the conditional state probability follows the same differential equation as pi(t) in this J = 0 case, but starting from the initial condition
formula
5.10
because the state xi is reset to state 1 just after a given spike of neuron i.

In the following sections, we apply mean-field approximations for the firing rates and correlations in the nonzero J case.

5.1.  Mean-Field Approximation of the GL Model with Markov Refractoriness.

Let us approximate the mean firing intensity and the cross-correlations of the GL model with Markov refractoriness, assuming that the contribution of each individual synaptic coupling is small. As in the calculation without Markov refractoriness, we assume a gaussian distribution of the recurrent input Hi(t) given xi(t) = M to find
formula
5.11
with the corresponding conditional mean and conditional variance. Also, assuming a gaussian distribution of the recurrent input Hi(t) given xi(t) = M and a spike of neuron j at time t′, we find
formula
5.12
for t > t′, with the corresponding conditional mean and variance. The dynamics of pi(t) and of the conditional state probabilities are described by equations 5.3 and 5.5, respectively.
Next we calculate the conditional means and variances of the recurrent input. We find
formula
5.13
with auxiliary variables defined as
formula
5.14
Note that we use the final component of the above quantities without an index. After some calculation (see the appendix), we obtain
formula
5.15
where, as above, the last component is written without an index.

5.2.  Solution in the Mean-Field Limit.

Similar to the case without Markov refractoriness, we first derive a self-consistent equation for the mean firing intensity and cross-covariances in the mean-field limit Nc → ∞, where J ∼ 1/Nc, assuming an asynchronous state so that Δϕij ∼ 1/Nc. Because all the cross-covariance functions vanish in this limit, we find, from equations 5.11 to 5.13, for t > t′,
formula
5.16
while the cross-correlation function is given by equation 4.3. For the corner terms of the transition matrices, we find the unconditional rate from equation 5.4 and the conditional rate for t > t′ from equation 5.6. Hence, the evolution of pi(t) and of the conditional state probabilities is described by
formula
5.17
for t > t′, where the initial condition of the conditional probability is given by equation 5.10. Finally, from equation 5.15, the evolution of the auxiliary variables is written as an ODE,
formula
5.18
where we can apply equation 5.10 to evaluate the conditional terms in this limit. Figure 7 plots the mean firing intensity calculated from the above equations for various inputs, and the mean-field approximation provides a good approximation of the time-dependent firing intensities. Figure 8 compares the above approximation (the Nc → ∞ limit) with the mean-field approximation including the finite size effect, which we discuss in the next section. In the mean-field limit, all the cross-correlations vanish, and the autocorrelation functions are approximated by including the Markov refractory effect but not the self-interaction terms Jii. We can also calculate the linear input-output filter discussed in section 4.3 with this Markov refractoriness (see the appendix).
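Putting the pieces together, the mean-field limit with Markov refractoriness couples the occupancy ODE to the low-passed rates. A sketch under the same conventions as the earlier snippets (the M → 1 transition rate is taken as f(Ii + μi), per equation 5.17; all names and values are illustrative):

```python
import numpy as np

def mean_field_markov(I, J, tau, M=3, tau_r=0.002, dt=1e-5, f=np.exp):
    """Nc -> infinity mean-field dynamics with Markov refractoriness:
    occupancies p_i follow eq. 5.3 with the M -> 1 rate f(I_i + mu_i),
    and the mean synaptic variables low-pass the rates (cf. eqs. 5.16-5.18).

    I: (N, T) input; J, tau: (N, N). Returns (N, T) firing intensities."""
    N, T = I.shape
    p = np.zeros((N, M))
    p[:, -1] = 1.0
    m = np.zeros((N, N))                        # m[i, j] tracks E[h_i^j]
    nu = np.zeros((N, T))
    for t in range(T):
        lam = f(I[:, t] + (J * m).sum(axis=1))  # rate given x_i = M
        nu[:, t] = lam * p[:, -1]               # mean intensity, eq. 5.16
        dp = np.zeros((N, M))
        dp[:, 0] += lam * p[:, -1]              # spike: M -> 1
        dp[:, -1] -= lam * p[:, -1]
        dp[:, 1:] += p[:, :-1] / tau_r          # climb m -> m + 1
        dp[:, :-1] -= p[:, :-1] / tau_r
        p += dt * dp
        m += dt * (-m / tau + nu[None, :, t])
    return nu
```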
Figure 7:

Comparison of the mean firing intensity obtained from direct simulation of a single neuron (solid), without recurrent input H and no Markov refractory effect (dotted), and the mean-field equation in the limit of Nc → ∞ (dashed). In all cases, a Markov refractory effect was included. (A) Weak step input (baseline I = 2, peak I = 4) and refractoriness M = 3; J = −1, τ = 10 ms. Although the model includes a strong Markov refractory effect, the mean-field approximation remains accurate. (B) Strong step input (baseline I = 2, peak I = 8) and refractoriness M = 10; J = −1, τ = 10 ms. The mean-field approximation also captures very well the transient oscillation caused by the strong refractory effect and the steep input. (C) The mean-field approximation works for colored noise input with E[ξ(t)] = 4 and τI = 20 ms; M = 3, J = −1, τ = 10 ms. The initial error is due to the preset initial values, which deviate from the true ones. (D) Sinusoidal input; M = 3, J = −1, τ = 10 ms. The mean-field results give good approximations. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


Figure 8:

Comparison of spike cross-correlation functions of two neurons obtained by direct numerical simulation (solid), mean-field approximation in the limit of Nc → ∞ (dashed), and with finite size correction terms (dot-dashed). While the approximation in the limit of Nc → ∞ captures only the strong Markov refractory effect, the weaker refractory effect caused by the self-interaction terms is well captured with the finite size correction. The finite size correction provides a good approximation despite the strong refractoriness because of an explicit evaluation of the Markov refractory effect. Mean-field equations are iteratively solved over five iterations. (A) Two neurons are directly coupled: parameters are set to J11 = −1, J22 = −1, J12 = −0.5, J21 = 1, , M = 3, I1 = 2, I2 = 4. (B) Two neurons are not directly connected but receive common input from a third neuron: parameters are set to J13 = J23 = 2, , M = 3, I1 = I2 = I3 = 2. (A color version of this figure is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/neco.2008.04-08-757.)


5.3.  Estimating the Finite Size Effect.

Similar to the case without Markov refractoriness, we evaluate the mean firing intensity and cross-covariance functions up to 1/Nc terms. We find, as in the case without Markov refractoriness, that the cross-covariances are of order 1/Nc for i ≠ j, that the conditional covariances are of order 1/Nc for i ≠ k, and that the third-order covariances are of order 1/Nc2 under the asynchronous state, if the contributions of individual synaptic weights are small, J ∼ 1/Nc. This implies that the conditional variances may again be replaced by their unconditional counterparts at this order.

The dynamics of pi(t) and of the conditional state probabilities are described by equations 5.3 and 5.5, respectively. We now want to calculate μi(t), the conditional means, and σi2(t) to the first order of 1/Nc. First, direct differentiation of the quantities in equation 5.15 yields an ODE update equation (see the appendix):
formula
5.19
The origin of the second term on the right-hand side of equation 5.19 is the correlation between the spike variable Sk(s) and the refractory variable xi(t). Next, we need to evaluate this term only to the zeroth order of 1/Nc for k = i or k = j, because J ∼ 1/Nc already makes its contribution of order 1/Nc, and to the first order of 1/Nc for the case k ≠ i and k ≠ j. Except for the case i = j = k, we find (see the appendix)
formula
5.20
to the order described above. The precise evaluation of this term for i = j = k is much harder, but to a good approximation, equation 5.20 holds (see the appendix). Note that the approximation of the i = j = k term affects the order 1/Nc term of the autocovariance function but affects only the O(1/Nc2) terms of the mean firing intensities and cross-covariance functions.
Finally, dropping higher-order terms that are O(1/Nc2) in the quantities of equation 5.13, we find (see the appendix)
formula
5.21
Altogether, using equation 4.12, we obtain the following update equations to evaluate the finite size effect up to order 1/Nc:
formula
5.22
and the dynamics of pi(t) and of the conditional state probabilities follow equations 5.3 and 5.5. As in the case without Markov refractoriness, calculating this finite size effect is computationally expensive for time-dependent input, although in principle, the equations apply in that case as well. We plot the mean firing intensities and cross-correlations of two neurons in Figure 8. The finite size correction captures the cross-correlation functions well. It also successfully captures a peak in the cross-correlation function when two neurons are not directly connected but receive a common input from a third neuron (see Figure 8). Again, if the values of the above equations are initially set to the solution in the limit of Nc → ∞ given by equations 5.16 to 5.18, only a couple of iterations of the above equations are sufficient for the evaluation of the cross-correlation functions.

6.  Discussion

We have introduced mean-field methods for analyzing the dynamics of a coupled population of neurons whose activity may be well approximated as the output of a generalized linear point process model. Our approximations for the mean time-varying firing rate and correlations in the population are exact in the mean-field limit (Nc → ∞ under J ∼ 1/Nc scaling), though we have found numerically that the finite size correction to the mean-field equations is useful even for physiologically relevant connectivity strengths. The approximations may be computed by solving a set of coupled ordinary differential equations. This approach is much more computationally tractable than direct Monte Carlo sampling and may lead to greater analytical insight into the behavior of the network. In addition, we have introduced a new model, the generalized linear point process model with Markovian refractoriness, that captures strong refractoriness, retains all of the easy-fitting properties of the standard generalized linear model, and whose firing rate dynamics are much more amenable to mean-field analysis than those of the standard model. Note that most of our illustrations involved the simulation of only a couple of mutually connected neurons; this small-Nc case can be considered a kind of worst-case analysis, since with more asynchronous neurons (larger Nc), the central limit theorem becomes progressively more applicable, the distribution of recurrent input approaches the assumed gaussian distribution, and our approximations become more accurate.

This kind of mean-field analysis—or more generally, approaches for reducing the complexity of the dynamics of networks of noisy, nonlinear neurons—has a long history in computational neuroscience, as reviewed in Hertz et al. (1991), Gerstner and Kistler (2002), and Renart et al. (2003). While the point-process models we have discussed here are somewhat distinct from the noisy integrate-and-fire-type models that have been analyzed most extensively in this literature (e.g., Mattia & Del Giudice, 2002; Shriki, Hansel, & Sompolinsky, 2003; Moreno-Bote & Parga, 2004, 2006; Fourcaud-Trocme, Hansel, van Vreeswijk, & Brunel, 2003; Chizhov & Graham, 2007; Doiron, Lindner, Longtin, Maler, & Bastian, 2004; Lindner, Doiron, & Longtin, 2005), it is more important to stress the differences between the motivation of this work and that of previous work. Our main goal here was to provide analytical tools that an experimentalist who has fit a point process model to his or her data (as in, e.g., Paninski, 2004; Truccolo et al., 2005; Okatan et al., 2005; Pillow et al., 2008) may use to understand the behavior of the network or single-cell models that have just been inferred. In particular, we were interested in predicting, for example, the mean firing rate of a specific GLM network given a novel, arbitrary dynamic input stimulus. This contrasts with the literature cited above, which has for the most part focused on the mean firing rates, first-order responses, and fixed-point stability properties of idealized, infinitely large populations (Mattia & Del Giudice, 2002, is an exception here) with homogeneous membrane and connectivity properties. In addition, most analytical studies of the input-output properties of model neurons with dynamical input start from a Fokker-Planck formalism and rely on a direct numerical simulation of the partial differential equation (Nykamp & Tranchina, 2000; Knight, Omurtag, & Sirovich, 2000; Chizhov & Graham, 2007; this direct approach is feasible only in the case that the system dynamics may be reduced to a state space of dimension at most two or so, and therefore does not apply in the networks studied here), or on linear response theory that assumes a small time-dependent component relative to the baseline component (Shriki et al., 2003; Fourcaud-Trocme et al., 2003; Doiron et al., 2004; Lindner et al., 2005), or on some quasi-stationary assumption (Knight et al., 2000; Mattia & Del Giudice, 2002; Fourcaud-Trocme & Brunel, 2005; Moreno-Bote & Parga, 2004, 2006), restricting the applicable input to slowly changing stimuli or to one or two simple step stimuli with sufficiently long gaps between steps that the network may reach equilibrium. Thus, it is difficult to apply the insights developed in these previous analyses directly to obtain a quantitative prediction of the dynamic firing rates of a specific nonhomogeneous, nonsymmetric network.

In this article, we derived the finite-size correction to the mean-field equations that captures the cross-correlation functions between neurons. This finite-size effect originates from the Jij ∼ 1/Nc scaling of synaptic strengths, which leaves small input fluctuations in the asynchronous state; the finite-size effect described in this article is thus the contribution of small fluctuations of the input around its mean. Another kind of finite-size effect has also been discussed in the literature. In a sparsely connected network, correlations between two neurons disappear if Nc is sufficiently smaller than the total number of neurons, N (Derrida, Gardner, & Zippelius, 1987). This finite-Nc effect has been evaluated using stochastic Fokker-Planck equations (Brunel & Hakim, 1999; Mattia & Del Giudice, 2002). One should note, however, that evaluating this finite-size effect for an experimentally estimated network structure is generally not straightforward, because solving a Fokker-Planck equation with many state-space variables is computationally hard. Finally, Jij ∼ 1/Nc is not the only way to scale synapses. Under the balanced-input assumption, the scaling Jij ∼ 1/√Nc yields order-one input fluctuations even in the limit Nc → ∞ (Sompolinsky, Crisanti, & Sommers, 1988; van Vreeswijk & Sompolinsky, 1996, 1998). However, nontrivial cross-correlation functions cannot be captured under the standard asynchronous assumption of mean-field analysis in the Nc → ∞ limit. The calculation of the finite-size effect in a network with balanced scaling is beyond the scope of this article.
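As a hedged numerical illustration of the two scalings just discussed (our own sketch; the presynaptic rate, bin width, and variable names are assumptions), the following snippet compares the standard deviation of the summed recurrent input under Jij ∼ 1/Nc and under the balanced Jij ∼ 1/√Nc scaling as Nc grows; the former shrinks while the latter stays of order one.

```python
# Compare input fluctuations under 1/N_c versus balanced 1/sqrt(N_c) scaling.
import numpy as np

rng = np.random.default_rng(1)
rate, dt, trials = 20.0, 1e-3, 20000   # presynaptic Poisson rate (Hz), bin (s), samples

for Nc in (10, 100, 1000):
    pre = rng.random((trials, Nc)) < rate * dt        # independent presynaptic spikes
    weak = (pre / Nc).sum(axis=1)                     # J ~ 1/N_c
    signs = rng.choice([1, -1], Nc)                   # mixed excitation/inhibition
    balanced = (pre * signs / np.sqrt(Nc)).sum(axis=1)  # J ~ 1/sqrt(N_c)
    print(f"N_c={Nc:5d}  sd(1/N_c input)={weak.std():.4f}  "
          f"sd(balanced input)={balanced.std():.4f}")
```

Running this shows the 1/Nc input fluctuations decaying roughly as 1/√Nc, while the balanced-scaling fluctuations remain roughly constant, consistent with the discussion above.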

Two works that bear a stronger mathematical resemblance to ours are Ginzburg and Sompolinsky (1994) and Meyer and van Vreeswijk (2002). These authors applied mean-field approximations to coupled model neurons to study the mean firing rates and the autocorrelation and cross-correlation functions in the network. However, the two-state neuron model discussed in Ginzburg and Sompolinsky (1994) (where the neurons flip between “active” and “inactive” firing states according to a Markov process with some finite rate constant) is somewhat distinct from the point-process models we have treated here (where the neuron remains in the “active” state for a negligible time, i.e., the neuron spikes instantaneously), and we have not been able to translate these earlier results to the problems of interest in this article. Meyer and van Vreeswijk (2002) analyzed renewal point-process spiking neurons, and their results are closer to ours. However, in their refractory model, one has to consider infinitely many refractory states in the continuous-time limit, whereas our spiking neuron model has M refractory states, and M can be as small as 2 while retaining an order-one strong refractory effect. Accordingly, their cross-correlations must be evaluated via integral equations that are computationally (and conceptually) somewhat more involved, while the methods developed here require us to evaluate just a couple of differential equations.

Finally, the effect of common input on the cross-correlation function was previously studied in a related model (Nykamp, 2007), using an expansion of the output nonlinearity f(·) around J = 0. One major difference between the analysis we have presented here and this previous work is that a simple expansion around J = 0 does not lead to a good approximation of the firing rate, even in the mean-field limit Nc → ∞, because the recurrent input changes the baseline firing rate (cf. Figure 5); thus, it is much more accurate to expand around the zeroth-order firing rate given by the roots of equation 4.13, as discussed in section 4.3. (On the other hand, our mean-field approach does require a gaussian approximation that is known to be inaccurate in the case of large J terms.) It would be interesting to explore whether the methods developed here could lead to more accurate inference of the common-input effects discussed in Nykamp (2007).
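To illustrate why expanding around J = 0 can fail, consider a generic self-consistent rate equation of the form ν = f(I + wν), standing in for equation 4.13 (whose exact form is given in the main text). The sketch below (our own illustration, with assumed parameter values) finds the self-consistent rate by damped fixed-point iteration and shows that it can sit well away from the uncoupled rate f(I), which is the J = 0 expansion point.

```python
# Solve a generic self-consistent mean-field rate equation nu = f(I + w*nu)
# by damped fixed-point iteration (illustrative stand-in for equation 4.13).
import numpy as np

f = np.exp            # exponential link, as in a standard GLM
I, w = -1.0, -0.5     # illustrative drive and net (inhibitory) coupling

nu = f(I)             # zeroth-order guess: the uncoupled (J = 0) rate
for _ in range(100):  # damped iteration toward the self-consistent root
    nu = 0.5 * nu + 0.5 * f(I + w * nu)

print("uncoupled rate f(I):     ", f(I))
print("self-consistent rate nu: ", nu)
```

The gap between the two printed rates is the shift in baseline firing caused by the recurrent input, which a naive expansion around J = 0 misses.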

There are many possible applications of this mean-field method to problems that require fast evaluation of mean firing rates and cross-correlation functions. One example is to evaluate the information that the spiking of a recurrently connected network carries about the input stimulus. This kind of information calculation usually requires averaging over recent spike history (Toyoizumi, Aihara, & Amari, 2006), but the gaussian approximation of the input described here could greatly reduce the computational cost of evaluating the stimulus information encoded by the network. We hope to pursue this direction in future work.

Appendix:  Mathematical Details

In this appendix, we collect the details of the expansion analyses summarized in the main text.

A.1.  Calculation of , , and Bkli.

In this appendix, we use to simplify some expressions. Direct evaluation of equation 5.14 yields
formula
A.1
where and . Similarly, we find
formula
A.2

A.2.  Approximation of , and Bkli(t).

In order to evaluate the finite size effect—order 1/Nc terms of the mean firing intensity and cross-covariance functions—we need to evaluate , , and to the first order of 1/Nc. Assuming that the synaptic strengths scale as J ∼ 1/Nc, we need to evaluate
formula
to the first order of 1/Nc,
formula
to the zeroth order of 1/Nc if k = i or k = j because J ∼ 1/Nc but to the first order of 1/Nc otherwise (in the following, we divide these into five cases and consider each case separately), and
formula
to the first order of 1/Nc if i, j, k are all different, and to the zeroth order of 1/Nc if k = l, k = i, or l = i.
First, direct differentiation of yields
formula
A.3
where , and in the above calculation we used, from equations 5.4 and 5.6,
formula
A.4

Next, to evaluate , we use if i, k, l are all different, and for i ≠ k. As we discussed at the beginning of this section, in order to evaluate to the first order of 1/Nc, we have to consider the number of combinations of indexes. For example, there are Nc terms of and N²c terms of . Hence, we consider the following five possibilities for combining indexes:

  • Case 1 (i ≠ j, i ≠ k); evaluated up to the first order of 1/Nc:
    formula
    A.5
    Note that in the second line, we used and , for example, and neglected higher-order terms such as Δpik Δνkj.
  • Case 2 (i = j, k ≠ i); evaluated up to the first order of 1/Nc:
    formula
    A.6
  • Case 3 (i = k, i ≠ j); evaluated up to the zeroth order of 1/Nc:
    formula
    A.7
  • Case 4 (j = k, i ≠ j); evaluated up to the zeroth order of 1/Nc:
    formula
    A.8
  • Case 5 (i = j = k); evaluated up to the first order of 1/Nc:
    formula
    A.9

This last case is difficult to analyze precisely. However, unless neuron i receives extremely strong input, the conditional firing intensity is almost zero whenever s and t′ are closer than the refractory time constant of the neuron, (M − 1)τr, so the contribution of this interval to the integral is small. If s and t′ are separated by more than the refractory time constant, the interaction between the spikes at s and t′ becomes small, that is, , and becomes small. Hence, to a good approximation,
formula
A.10
The validity of this intuitive approximation should be examined by numerical simulations. Note that contributes to the order-1/Nc term of the autocovariance function Δϕii(t, t′) but contributes only to order-1/N²c terms of the mean firing intensity νi(t) and of the cross-covariance functions with i ≠ j.
Finally, we can rewrite Bkli as
formula
A.11
where we used and . We need to evaluate Bkli up to the first order of 1/Nc if i, k, l are all different and up to the zeroth order of 1/Nc if k = l, k = i, or l = i. Up to these orders, Bkli can be approximated as
formula
A.12
Note that for i ≠ k, if i, k, l are all different, and for i ≠ k.

A.3.  Calculation of the Linear Filter in the Generalized Linear Model with Markov Refractoriness.

We assume that the input is given by
formula
A.13
with stimulus , and calculate the linearized input-output filter for a small stimulus fluctuation δξi(t) about a constant baseline ξi(0).
In the limit of Nc → ∞, the mean-field equation is given by
formula
A.14
First, for a constant input I(0)i, we obtain
formula
A.15
where , and the second-to-last equation is easily solvable with respect to p(0)i; we find
formula
A.16
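For readers who prefer a numerical check of this constant-input steady state, the following sketch solves Wp = 0 together with the normalization Σp = 1 directly. It is our own reading of the Markov refractory chain (one active state firing at intensity f0, followed by M − 1 refractory states traversed at rate 1/τr), with illustrative values of f0 and τr; it is a sketch under these assumptions, not the article's code.

```python
# Numerically solve the constant-input steady state of an M-state Markov
# refractory chain: stationary occupancies p satisfy W p = 0, sum(p) = 1.
import numpy as np

M, tau_r, f0 = 3, 5e-3, 40.0     # states, refractory rate constant (s), intensity (Hz)

W = np.zeros((M, M))             # generator matrix: each column sums to zero
W[0, 0], W[1, 0] = -f0, f0       # active state -> first refractory state at rate f0
for k in range(1, M):
    nxt = (k + 1) % M            # refractory states advance; the last returns to active
    W[k, k], W[nxt, k] = -1.0 / tau_r, 1.0 / tau_r

A = np.vstack([W, np.ones(M)])   # append the normalization constraint sum(p) = 1
p0 = np.linalg.lstsq(A, np.r_[np.zeros(M), 1.0], rcond=None)[0]

print("stationary occupancies:", p0)
print("mean rate f0 * p_active =", f0 * p0[0], "Hz")
```

For this cyclic chain the result agrees with the closed form p_active = 1/(1 + (M − 1) f0 τr), consistent with the easily solvable structure noted above.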
Next, the linear response to a small perturbation is given by
formula
A.17
where and . Remember that and . Fourier transformation of the above equations gives
formula
A.18
formula
The above equation is still rather complicated, so it is worth considering the special case Jki = 0. In this case, the linear response is described by
formula
A.19
The calculation of is straightforward thanks to the bidiagonal nature of . After direct matrix inversion, we find
formula
A.20
with and . Hence, the linear response simplifies to
formula
A.21
In particular, if M = 2, the inverse Fourier transform of the gain function has an analytical form; that is, using , we find
formula
A.22
Note that in the limit of τr → 0, the trivial gain function is . Hence, we can see that a suppressive kernel is added to this instantaneous gain function by the Markov refractoriness. This linear response for the Jki = 0 case is a special case of a more general linear response for a renewal neuron (Gerstner & Kistler, 2002). Note that in equation A.18, we treated the more general case, including the interaction between the refractory effect and the spike-interaction effects in a network of recurrently connected neurons.
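As a complement to the analytical filter, the following hedged sketch (the same assumed chain as in the steady-state check above, with illustrative parameters) integrates the master equation dp/dt = W(t)p through a small step in the firing intensity for the uncoupled Jki = 0 case. The rate first jumps with the instantaneous gain and then relaxes back, exhibiting the suppressive refractory component of the kernel.

```python
# Integrate dp/dt = W(t) p through a small intensity step and observe the
# suppressive refractory transient in the rate (illustrative sketch only).
import numpy as np

def generator(f, M, tau_r):
    W = np.zeros((M, M))
    W[0, 0], W[1, 0] = -f, f                          # active -> refractory at rate f
    for k in range(1, M):
        W[k, k], W[(k + 1) % M, k] = -1.0 / tau_r, 1.0 / tau_r
    return W

M, tau_r, dt = 2, 5e-3, 1e-5
f = lambda t: 40.0 * (1.0 + 0.05 * (t > 0.05))        # 5% intensity step at t = 50 ms

p = np.zeros(M); p[0] = 1.0
ts = np.arange(0.0, 0.1, dt)
rate = np.empty_like(ts)
for i, t in enumerate(ts):
    p += dt * generator(f(t), M, tau_r) @ p           # forward Euler on the master eq.
    rate[i] = f(t) * p[0]                             # firing rate = intensity * p_active

# the rate overshoots with the instantaneous gain, then relaxes as the
# suppressive (refractory) part of the linear kernel takes effect
print("rate just after step:", rate[np.searchsorted(ts, 0.05) + 5])
print("new steady-state rate:", rate[-1])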

Acknowledgments

We thank E. Shea-Brown for helpful conversations. T.T. is supported by the Robert Leet and Clara Guthrie Patterson Trust Postdoctoral Fellowship, Bank of America, Trustee. L.P. is supported by an NSF CAREER award, an Alfred P. Sloan Research Fellowship, the McKnight Scholar award, and NEI grant EY018003.

Notes

1

Note that the computational complexity of calculating the finite-size effect is not O(N²T) but O(N²NcT), because each synaptic time constant, τik, is distinct; hence, we needed to evaluate each term separately. However, if all the synaptic time constants onto each neuron are equal, that is, τik = τi, we can perform the evaluation directly without computing each αijk separately, and the computational complexity of evaluating the finite-size effect reduces to O(N²T).

2

In this article, we mostly use M = 3 (except for Figure 7B, where the effect of abrupt refractoriness is studied), because this is the smallest value of M that requires the following mean-field formulation (we do not need a vector formulation for an M = 2 case because there is only one degree of freedom that corresponds to the probability of being in the active state). It is worth noting that Escola and Paninski (2007) discuss methods for additionally estimating an optimal transition matrix Wi(t) via an EM algorithm. This provides another method for adjusting the short-time details of the model's responses. In addition, we may extend many of the mean-field methods developed below to the case of more general rate matrices Wi(t). However, for simplicity, we will not pursue these extensions here.

References

Amit, D. J., & Tsodyks, M. V. (1991). Quantitative study of attractor neural networks retrieving at low spike rates. I: Substrate—spikes, rates, and neuronal gain. Network, 2, 259–273.
Berry, M., & Meister, M. (1998). Refractoriness and neural precision. J. Neurosci., 18, 2200–2211.
Brillinger, D. (1988). Maximum likelihood analysis of spike trains of interacting nerve cells. Biological Cybernetics, 59, 189–200.
Brown, E., Kass, R., & Mitra, P. (2004). Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience, 7, 456–461.
Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput., 11, 1621–1671.
Bryant, H., & Segundo, J. (1976). Spike initiation by transmembrane current: A white-noise analysis. Journal of Physiology, 260, 279–314.
Chizhov, A. V., & Graham, L. J. (2007). Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons. Phys. Rev. E Stat. Nonlin. Soft Matter Phys., 75, 011924.
Chornoboy, E., Schramm, L., & Karr, A. (1988). Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59, 265–275.
Derrida, B., Gardner, E., & Zippelius, A. (1987). An exactly solvable asymmetric neural network model. Europhys. Lett., 4, 167–173.
Doiron, B., Lindner, B., Longtin, A., Maler, L., & Bastian, J. (2004). Oscillatory activity in electrosensory neurons increases with the spatial correlation of the stochastic input stimulus. Phys. Rev. Lett., 93(4), 048101.
Escola, S., & Paninski, L. (2007). Hidden Markov models applied toward the inference of neural states and the improved estimation of linear receptive fields. Manuscript submitted for publication.
Fourcaud-Trocme, N., & Brunel, N. (2005). Dynamics of the instantaneous firing rate in response to changes in input statistics. J. Comput. Neurosci., 18, 311–321.
Fourcaud-Trocme, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci., 23, 11628–11640.
Gerstner, W., & Kistler, W. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge: Cambridge University Press.
Ginzburg, I., & Sompolinsky, H. (1994). Theory of correlations in stochastic neural networks. Phys. Rev. E, 50(4), 3171–3191.
Harris, K., Csicsvari, J., Hirase, H., Dragoi, G., & Buzsaki, G. (2003). Organization of cell assemblies in the hippocampus. Nature, 424, 552–556.
Hatsopoulos, N., Ojakangas, C., Paninski, L., & Donoghue, J. (1998). Information about movement direction obtained by synchronous activity of motor cortical neurons. PNAS, 95, 15706–15711.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.
Kappen, H. J., & Spanjers, J. J. (2000). Mean field theory for asymmetric neural networks. Phys. Rev. E, 61(5 Pt B), 5658–5663.
Kass, R., & Ventura, V. (2001). A spike-train probability model. Neural Comput., 13, 1713–1720.
Keat, J., Reinagel, P., Reid, R., & Meister, M. (2001). Predicting every spike: A model for the responses of visual neurons. Neuron, 30, 803–817.
Knight, B., Omurtag, A., & Sirovich, L. (2000). The approach of a neuron population firing rate to a new equilibrium: An exact theoretical result. Neural Comput., 12, 1045–1055.
Kohn, A., & Smith, M. (2005). Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25, 3661–3673.
Kulkarni, J., & Paninski, L. (2007). Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems, 18, 375–407.
Lindner, B., Doiron, B., & Longtin, A. (2005). Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys. Rev. E Stat. Nonlin. Soft Matter Phys., 72(6 Pt 1), 061919.
Marmarelis, P., & Marmarelis, V. (1978). Analysis of physiological systems: The white-noise approach. New York: Plenum Press.
Martignon, L., Deco, G., Laskey, K., Diamond, M., Freiwald, W., & Vaadia, E. (2000). Neural coding: Higher-order temporal patterns in the neurostatistics of cell assemblies. Neural Comput., 12, 2621–2653.
Mattia, M., & Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Phys. Rev. E Stat. Nonlin. Soft Matter Phys., 66, 051917.
Meyer, C., & van Vreeswijk, C. (2002). Temporal correlations in stochastic networks of spiking neurons. Neural Comput., 14, 369–404.
Moreno-Bote, R., & Parga, N. (2004). Role of synaptic filtering on the firing response of simple model neurons. Phys. Rev. Lett., 92, 028102.
Moreno-Bote, R., & Parga, N. (2006). Auto- and cross-correlograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Phys. Rev. Lett., 96, 028101.
Nirenberg, S., Carcieri, S., Jacobs, A., & Latham, P. (2002). Retinal ganglion cells act largely as independent encoders. Nature, 411, 698–701.
Nykamp, D. (2007). A mathematical framework for inferring connectivity in probabilistic neuronal networks. Math. Biosci., 205, 204–251.
Nykamp, D., & Tranchina, D. (2000). A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8, 19–50.
Okatan, M., Wilson, M., & Brown, E. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Comput., 17, 1927–1961.
Oksendal, B. (2002). Stochastic differential equations. Berlin: Springer.
Oram, M., Hatsopoulos, N., Richmond, B., & Donoghue, J. (2001). Excess synchrony in motor cortical neurons provides redundant direction information with that from coarse temporal measures. Journal of Neurophysiology, 86, 1700–1716.
Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15, 243–262.
Paninski, L., Fellows, M., Shoham, S., Hatsopoulos, N., & Donoghue, J. (2004). Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J. Neurosci., 24, 8551–8561.
Paninski, L., Pillow, J., & Lewi, J. (2007). Statistical models for neural encoding, decoding, and optimal stimulus design. In P. Cisek, T. Drew, & J. Kalaska (Eds.), Computational neuroscience: Progress in brain research. Dordrecht: Elsevier.
Pillow, J., Paninski, L., Uzzell, V., Simoncelli, E., & Chichilnisky, E. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience, 25, 11003–11013.
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E., et al. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454, 995–999.
Renart, A., Brunel, N., & Wang, X.-J. (2003). Computational neuroscience: A comprehensive approach. Boca Raton, FL: CRC Press.
Riehle, A., Grün, S., Diesmann, M., & Aertsen, A. (1997). Spike synchronization and rate modulation differentially involved in motor cortical function. Science, 278, 1950–1953.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997). Spikes: Exploring the neural code. Cambridge, MA: MIT Press.
Rigat, F., de Gunst, M., & van Pelt, J. (2006). Bayesian modelling and analysis of spatio-temporal neuronal networks. Bayesian Analysis, 1, 733–764.
Schnitzer, M., & Meister, M. (2003). Multineuronal firing patterns in the signal from eye to brain. Neuron, 37, 499–511.
Shriki, O., Hansel, D., & Sompolinsky, H. (2003). Rate models for conductance-based cortical neuronal networks. Neural Comput., 15, 1809–1841.
Snyder, D., & Miller, M. (1991). Random point processes in time and space. Berlin: Springer-Verlag.
Sompolinsky, H., Crisanti, A., & Sommers, H. (1988). Chaos in random neural networks. Physical Review Letters, 61, 259–262.
Toyoizumi, T., Aihara, K., & Amari, S. (2006). Fisher information for spike-based population decoding. Phys. Rev. Lett., 97, 098102.
Truccolo, W., Eden, U., Fellows, M., Donoghue, J., & Brown, E. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. Journal of Neurophysiology, 93, 1074–1089.
van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.
van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Comput., 10, 1321–1371.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J., 12(1), 1–24.