Abstract

Emerging evidence indicates that information processing, as well as learning and memory processes, at both the network and the single-neuron level is highly dependent on the correlation structure of multiple spike trains. Contemporary experimental as well as theoretical studies that involve quasi-realistic neuronal stimulation thus require a method for controlling spike train correlations. This letter introduces a general new strategy for generating multiple spike trains with exactly controlled mean firing rates and correlation structure (defined in terms of auto- and cross-correlation functions). Our approach nonlinearly transforms random gaussian-distributed processes with a predistorted correlation structure into nonnegative rate processes, which are then used to generate doubly stochastic Poisson point processes with the required correlation structure. We show how this approach can be used to generate stationary or nonstationary spike trains from small or large groups of neurons with diverse auto- and cross-correlation structures. We derive analytical formulas for the high-order correlation structure of the generated spike trains and discuss the limitations of the approach.

1.  Introduction

The principal representation of neural information is in sequences of action potentials (APs). Within the mammalian central nervous system, each neuron receives thousands of such input sequences (spike trains) from presynaptic cells. A neuron's output is believed to be dependent on not only the mean firing rate of presynaptic cells but also the degree of spatial and temporal correlation between spike trains from different presynaptic neurons (for reviews, see, e.g., Abeles, 1991; Mel, 1994; Segev & London, 2000; Hausser & Mel, 2003; London & Hausser, 2005). Furthermore, synaptic plasticity processes are also known to be highly dependent on the correlation structure between pre- and postsynaptic neurons, a process termed spike-timing-dependent plasticity (STDP; Dayan & Abbott, 2001). The multicorrelation structure of spike trains is also believed to be crucially important to the representation and processing of network information (Abeles, 1991), where even weak pair-wise correlations can lead to highly organized network states (Schneidman, Berry, Segev, & Bialek, 2006).

Methodological advances in recent years have led to an experimental ability to rapidly stimulate dendrites with a high temporal resolution and at a resolution level approaching a single synapse (Matsuzaki et al., 2001; Shoham, O'Connor, Sarkisov, & Wang, 2005; Araya, Eisenthal, & Yuste, 2006; Losonczy & Magee, 2006). Similar optical tools can also be used to individually stimulate groups of presynaptic neurons (Boucsein, Nawrot, Rotter, Aertsen, & Heck, 2005; Shoham, O'Connor et al., 2005; Farah, Reutsky, & Shoham, 2007; Nikolenko, Poskanzer, & Yuste, 2007). These tools enable the study of responses to controlled yet realistically complex input patterns. Of particular interest among the large space of possible stimulation patterns is the study of responses to varying degrees of correlation between spike trains. Systematic studies of this kind, however, require a standard method for generating spike trains with a specified degree of correlation between them.

The central measures of correlations among and between random processes are the auto- and cross-correlation functions (estimated by the auto- and cross-correlograms). Spike train correlation functions describe not only the coincidence of spikes, but also probabilities for spikes to appear at different relative delays, whether in two different sequences (cross-correlation function) or in the same one (autocorrelation function). Currently, there is no method that enables spike train generation with these functions predefined. Recent work by Niebur (2007) proposes a partial solution that allows the generation of spike trains with a specific degree of coincident spikes (with predefined time domain binning) between different sequences. Here we present a new approach to spike train generation (stationary or nonstationary) in continuous time (no prebinning of the time domain) with predefined auto- and cross-correlation functions. Our approach is based on analytical solutions of the nonlinear distortion of correlation structure in a linear-nonlinear-Poisson (LNP) generative model with gaussian white noise inputs.

2.  Method

2.1.  Outline.

We consider the problem of generating spike trains as doubly stochastic Poisson point processes (Snyder & Miller, 1991; also known as Cox processes: Cox, 1955; Cox & Lewis, 1966) with stochastic intensity functions (instantaneous rates) having specific auto- and cross-correlation functions. A Poisson process is called doubly stochastic if its rate profile is itself a random process. Interestingly, point processes generated this way preserve the full correlation structure of the corresponding stochastic intensity processes (see Knox, 1970 and proof below). This important property reduces the problem of generating correlated point processes to the problem of generating correlated continuous stochastic nonnegative processes. We next reduce the complexity of the problem by generating these nonnegative processes through a nonlinear transformation applied to correlated gaussian processes, an approach reviewed in Johnson (1994). This transformation affects not only the marginal distributions of the processes but also their correlation functions, so we create correlated gaussian processes whose correlation structure is appropriately predistorted to yield the desired final correlations in the nonnegative processes. Table 1 and Figure 1 outline the full algorithm.
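As a minimal end-to-end sketch of these three stages (Python with NumPy; the numerical targets are illustrative assumptions, not values from the letter, and the gaussian values are taken independent across time bins so that the controlled correlations sit at zero lag only), using the exponential nonlinearity and the predistortion formulas derived in section 2.2.1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative targets (assumed): two rate processes with mean E = 20 Hz,
# zero-lag moments R_ii = E[lambda_i^2] and R_ij = E[lambda_i * lambda_j].
E, R_auto, R_cross = 20.0, 500.0, 450.0
dt, n = 1e-3, 1_000_000

# Predistortion for the exponential transform (eqs. 2.6-2.7):
sigma2 = np.log(R_auto / E**2)            # sigma_i^2
mu = np.log(E) - sigma2 / 2               # mu_i
r_pre = np.log(R_cross / E**2) / sigma2   # predistorted gaussian correlation

# Stage 1: correlated gaussian variables (independent across bins here,
# so the target correlations live at lag 0 only).
x1 = rng.standard_normal(n)
x2 = r_pre * x1 + np.sqrt(1.0 - r_pre**2) * rng.standard_normal(n)

# Stage 2: nonlinear transformation to nonnegative rate processes.
lam1 = np.exp(mu + np.sqrt(sigma2) * x1)
lam2 = np.exp(mu + np.sqrt(sigma2) * x2)

# Stage 3: doubly stochastic Poisson spikes (binwise Bernoulli
# approximation, valid while lam * dt << 1).
spk1 = rng.random(n) < lam1 * dt
spk2 = rng.random(n) < lam2 * dt
```

The empirical zero-lag cross-moment `(spk1 & spk2).mean() / dt**2` then estimates `R_cross`, illustrating the correlation-preservation property proved in section 2.3.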

Table 1:
Algorithm Outline.
[table graphic not reproduced; the outlined steps correspond to the three algorithm stages shown in Figure 1]
Figure 1:

Algorithm outline. Simulation of two correlated point processes with predefined auto- and cross-correlation functions (see Table 1). (1) Linearly filtering white gaussian noise processes to obtain correlated gaussian processes with predistorted correlations. (2) Nonlinearly transforming the gaussian processes with predistorted correlations to obtain nonnegative processes with the predefined correlation structure and mean values. (3) The nonnegative correlated processes are used as rate processes of doubly stochastic Poisson point processes. These point processes possess the correlation structure and mean rates of the corresponding rate processes.


2.2.  Correlation Distortions.

Our method is based on predistorting the correlation structure of random gaussian processes in an exact way such that a nonnegative transformation applied to them will yield the desired auto- and cross-correlation functions (Johnson, 1994). We derive below the distortions resulting from three basic nonnegative transformations: exponential, square, and absolute value. We begin by restricting the discussion to correlation coefficients between draws of random variables Xi and Λi. In section 2.4, we expand this treatment to auto- and cross-correlation functions between stochastic processes xi(t) and λi(t) by appropriately arranging and filtering uncorrelated processes.

2.2.1.  Exponential Transformation.

We transform the gaussian variable Xi ∼ N(0, 1) using the following transformation:
$$\Lambda_i = \exp(\mu_i + \sigma_i X_i). \tag{2.1}$$
The resulting variable Λi has a log-normal distribution with expectation
$$E[\Lambda_i] = \exp\!\left(\mu_i + \frac{\sigma_i^2}{2}\right), \tag{2.2}$$
where we have used Wick's theorem (Simon, 1974) for zero-mean normal random variables v ∼ N(0, σv²):
$$E[e^{v}] = \exp\!\left(\frac{\sigma_v^2}{2}\right). \tag{2.3}$$
Using the same theorem, we can also derive expressions for the distorted correlations in terms of the predistorted gaussian random variable correlations: E[Xi · Xj] = rij,
$$R_{ij} = E[\Lambda_i \Lambda_j] = \exp\!\left(\mu_i + \mu_j + \frac{\sigma_i^2 + \sigma_j^2}{2} + \sigma_i\sigma_j r_{ij}\right) = E[\Lambda_i]\,E[\Lambda_j]\,e^{\sigma_i\sigma_j r_{ij}}, \tag{2.4}$$
and for the case i = j (rii = 1):
$$R_{ii} = E[\Lambda_i^2] = \exp\!\left(2\mu_i + 2\sigma_i^2\right). \tag{2.5}$$
Equations 2.2 and 2.5 together allow us to evaluate the parameters μi and σi:
$$\sigma_i^2 = \ln\frac{R_{ii}}{E^2[\Lambda_i]}, \qquad \mu_i = \ln E[\Lambda_i] - \frac{\sigma_i^2}{2}. \tag{2.6}$$
Finally, from equation 2.4, we derive the correlations predistortion as
$$r_{ij} = \frac{1}{\sigma_i\sigma_j}\,\ln\frac{R_{ij}}{E[\Lambda_i]\,E[\Lambda_j]}. \tag{2.7}$$
From equation 2.6, we easily obtain the following restriction on the range of permissible values for Rii: Rii ⩾ E²[Λi].
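As a sketch, equations 2.6 and 2.7 translate directly into code (Python; the function names and numerical values are ours, chosen for illustration):

```python
import numpy as np

def exp_params(E_lam, R_auto):
    """Eq. 2.6: parameters of Lambda = exp(mu + sigma*X) from the target
    mean E[Lambda] and zero-lag moment R_ii = E[Lambda^2] (R_ii >= E^2)."""
    sigma2 = np.log(R_auto / E_lam**2)
    return np.log(E_lam) - sigma2 / 2, np.sqrt(sigma2)

def exp_predistort(R_cross, E_i, E_j, sigma_i, sigma_j):
    """Eq. 2.7: gaussian correlation r_ij that the exponential transform
    maps onto the target R_ij = E[Lambda_i * Lambda_j]."""
    return np.log(R_cross / (E_i * E_j)) / (sigma_i * sigma_j)

# Round trip through eq. 2.4: R_ij = E_i * E_j * exp(sigma_i*sigma_j*r_ij)
mu, sigma = exp_params(20.0, 500.0)
r = exp_predistort(450.0, 20.0, 20.0, sigma, sigma)
recovered = 20.0 * 20.0 * np.exp(sigma * sigma * r)   # ~450
```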

2.2.2.  Square Transformation.

We define the following transformation for gaussian random variables Xi ∼ N(0, 1):
$$\Lambda_i = (\mu_i + \sigma_i X_i)^2. \tag{2.8}$$
The resulting random variable Λi has an unnormalized noncentral χ² distribution (one degree of freedom) with expectation
$$E[\Lambda_i] = \mu_i^2 + \sigma_i^2. \tag{2.9}$$
This transformation distorts the correlation coefficients of the gaussian variables into
$$R_{ij} = E[\Lambda_i \Lambda_j] = (\mu_i^2 + \sigma_i^2)(\mu_j^2 + \sigma_j^2) + 4\mu_i\mu_j\sigma_i\sigma_j r_{ij} + 2\sigma_i^2\sigma_j^2 r_{ij}^2, \tag{2.10}$$
where we have used Isserlis's formula (1918) to calculate the fourth-order moments of zero mean gaussian variables:
$$E[x_1 x_2 x_3 x_4] = E[x_1 x_2]E[x_3 x_4] + E[x_1 x_3]E[x_2 x_4] + E[x_1 x_4]E[x_2 x_3]. \tag{2.11}$$
The quadratic equation, 2.10, has the following solutions for the predistortion:
$$r_{ij} = \frac{-\mu_i\mu_j - \sqrt{\mu_i^2\mu_j^2 + \frac{1}{2}\left(R_{ij} - E[\Lambda_i]E[\Lambda_j]\right)}}{\sigma_i\sigma_j}, \tag{2.12}$$
$$r_{ij} = \frac{-\mu_i\mu_j + \sqrt{\mu_i^2\mu_j^2 + \frac{1}{2}\left(R_{ij} - E[\Lambda_i]E[\Lambda_j]\right)}}{\sigma_i\sigma_j}. \tag{2.13}$$
Solving equation 2.13 together with 2.9 yields the transformation's parameters,
$$\mu_i^2 = \sqrt{\frac{3E^2[\Lambda_i] - R_{ii}}{2}}, \qquad \sigma_i^2 = E[\Lambda_i] - \mu_i^2, \tag{2.14}$$
with the following bounds for the nonnegative process variance derived from equation 2.14: E²[Λi] ⩽ Rii ⩽ 3E²[Λi].
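The square-transformation formulas can likewise be sketched in code (Python; function names and targets are ours; we use the root branch of the quadratic for which rij → 0 as Rij → E[Λi]E[Λj]):

```python
import numpy as np

def square_params(E_lam, R_auto):
    """Eq. 2.14: parameters of Lambda = (mu + sigma*X)^2 from E[Lambda]
    and R_ii = E[Lambda^2], valid for E^2 <= R_ii <= 3*E^2."""
    mu2 = np.sqrt((3 * E_lam**2 - R_auto) / 2)
    return np.sqrt(mu2), np.sqrt(E_lam - mu2)

def square_predistort(R_cross, E_i, E_j, mu_i, mu_j, sig_i, sig_j):
    """Root of the quadratic eq. 2.10 (the branch with r_ij -> 0 as
    R_ij -> E_i * E_j)."""
    disc = mu_i**2 * mu_j**2 + (R_cross - E_i * E_j) / 2
    return (-mu_i * mu_j + np.sqrt(disc)) / (sig_i * sig_j)

mu, sig = square_params(20.0, 500.0)
r = square_predistort(450.0, 20.0, 20.0, mu, mu, sig, sig)
# forward check through eq. 2.10:
forward = (mu**2 + sig**2)**2 + 4 * mu**2 * sig**2 * r + 2 * sig**4 * r**2
```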

2.2.3.  Absolute Value Transformation.

We define the following transformation for the gaussian random variables Xi ∼ N(0, 1):
$$\Lambda_i = |\mu_i + \sigma_i X_i|. \tag{2.15}$$

In the appendix, we derive a numerical solution for this transformation, which is inherently more expensive computationally than the analytical solutions for the two other transformations above.

2.3.  Doubly Stochastic Poisson Processes Possess the Same Correlations as Their Stochastic Intensity Functions.

Definitions:
We represent each point process using a discrete-time counting process N(t), which has unit increments every time there is an event and remains constant until the next event occurs. Its respective increment process ΔN(t) = N(t + Δt) − N(t) equals unity at all times t where an event occurs, and zero otherwise. For regular point processes, the conditional probability for an event obeys
$$\Pr\{\Delta N(t) = 1 \mid \lambda(t)\} = \lambda(t)\,\Delta t + o(\Delta t), \tag{2.16}$$
where λ(t) is the stochastic intensity, or instantaneous firing rate of the point process at time t. Nonstationary auto- and cross-correlation functions between two stochastic intensity functions are defined as
$$R_{ij}(t, \tau) = E[\lambda_i(t + \tau)\,\lambda_j(t)]. \tag{2.17}$$
Similarly, for point processes,
$$R^P_{ij}(t, \tau) = \frac{E[\Delta N_i(t + \tau)\,\Delta N_j(t)]}{\Delta t^2}. \tag{2.18}$$

Theorem 1.

The instantaneous cross-correlation function R^P_ij(t, τ) between two doubly stochastic Poisson processes is equal to the cross-correlation function between their respective intensity functions Rij(t, τ). This property also holds for the autocorrelation function of a single point process, except for τ = 0, where it behaves as a delta function.

Proof.
Applying the law of total expectation (the smoothing theorem), we obtain
$$R^P_{ij}(t, \tau) = \frac{E\big[E[\Delta N_i(t+\tau)\,\Delta N_j(t) \mid \lambda_i, \lambda_j]\big]}{\Delta t^2}.$$
Now, assuming small Δt, using equation 2.16, and noting that the two increments are conditionally independent given the intensity functions, we arrive at
$$E[\Delta N_i(t+\tau)\,\Delta N_j(t) \mid \lambda_i, \lambda_j] = \lambda_i(t+\tau)\,\lambda_j(t)\,\Delta t^2 + o(\Delta t^2),$$
and then:
$$R^P_{ij}(t, \tau) = E[\lambda_i(t+\tau)\,\lambda_j(t)] = R_{ij}(t, \tau).$$
Similarly, for the autocorrelation of a single point process, we get (τ ≠ 0):
$$R^P_{ii}(t, \tau) = E[\lambda_i(t+\tau)\,\lambda_i(t)] = R_{ii}(t, \tau).$$
And in the case of τ = 0,
$$R^P_{ii}(t, 0) = \frac{E[\lambda_i(t)\,\Delta t + o(\Delta t)]}{\Delta t^2} \approx \frac{E[\lambda_i(t)]}{\Delta t},$$
which diverges as Δt → 0; that is, the autocorrelation behaves as a delta function at the origin,
thereby completing the proof.
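The essence of the proof, conditional independence of the counts given the rates, can be checked numerically; a Monte Carlo sketch with an arbitrary (lognormal) rate distribution, constant within each trial (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared random rate Lambda, constant over a window of length T; given
# Lambda, the two Poisson counts are independent, so E[N_i * N_j] / T^2
# should recover E[Lambda^2], as in theorem 1.
T, trials = 1.0, 200_000
lam = rng.lognormal(mean=np.log(20.0) - 0.1, sigma=np.sqrt(0.2), size=trials)
n_i = rng.poisson(lam * T)
n_j = rng.poisson(lam * T)          # conditionally independent given lam

point_estimate = np.mean(n_i * n_j) / T**2    # from the point processes
rate_estimate = np.mean(lam**2)               # from the rate processes
# both approach E[Lambda^2] = 400 * exp(0.2) ~ 488.6
```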

2.4.  Generating the Correlated Processes.

The results of the previous sections provide a general route for translating the predefined spike train correlation structure into a multicorrelation structure defined between gaussian random processes. To actually move from the individual pair-wise correlations between random variables (see section 2.2) to temporally structured (with multiple delays) auto- and cross-correlation functions rij(τ), we construct a matrix containing all the target pair-wise correlations in block Toeplitz structure. To create the desired correlated gaussian processes, we construct finite impulse response (FIR) filters and apply them to N white gaussian processes wi(t) of length L in the discrete time form. To apply these filters in the stationary setting, we first build Toeplitz matrices Wi where each of the N processes of length L is copied into 2K + 1 rows with increasing delays (−K · Δt ⩽ τ ⩽ K · Δt). These matrices, when concatenated vertically, construct a matrix that contains all the processes with all the required delayed replications:
$$\mathbf{W} = \begin{bmatrix} W_1 \\ W_2 \\ \vdots \\ W_N \end{bmatrix}. \tag{2.19}$$
To obtain gaussian processes with the desired correlations rij(τ) = E[xi(t + τ) · xj(t)], the original white processes are multiplied by the correlation matrix square root (Johnson, 1994): X = R^{1/2}·W, where R^{1/2}(R^{1/2})^T = R and R is a block matrix composed of Toeplitz matrices corresponding to each of the auto- and cross-correlation functions:
$$\mathbf{R} = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1N} \\ r_{21} & r_{22} & \cdots & r_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ r_{N1} & r_{N2} & \cdots & r_{NN} \end{bmatrix}, \tag{2.20}$$
where each block rij is the (2K + 1) × (2K + 1) Toeplitz matrix with entries rij((a − b) · Δt), −K ⩽ a, b ⩽ K.

In practice, X is highly redundant as a consequence of the replication procedure applied in equations 2.19 and 2.20, and we select only N of its rows (the central row in each block) to perform the filtering. In the nonstationary case, the matrices Wi and rij do not have a Toeplitz structure, and the matrix X is not redundant.
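A compact sketch of this filtering step for the stationary case (Python; the gaussian-shaped target correlation functions are an illustrative assumption, and the construction is only approximate near the edges of the lag window):

```python
import numpy as np

def correlated_gaussians(r_funcs, K, L, rng):
    """Build the block matrix of eq. 2.20 from target correlation
    functions r_funcs[i][j](tau), take its square root, and use the
    central row of each block as an FIR filter for white noise."""
    N, m = len(r_funcs), 2 * K + 1
    lags = np.arange(-K, K + 1)
    R = np.empty((N * m, N * m))
    for i in range(N):
        for j in range(N):
            # Toeplitz block: entry (a, b) is r_ij(lag_a - lag_b)
            R[i*m:(i+1)*m, j*m:(j+1)*m] = r_funcs[i][j](lags[:, None] - lags[None, :])
    w, V = np.linalg.eigh(R)                   # R must be positive semidefinite
    Rsqrt = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    white = rng.standard_normal((N, L))
    out = np.zeros((N, L))
    for i in range(N):
        taps = Rsqrt[i * m + K].reshape(N, m)  # central row of block i
        for j in range(N):
            out[i] += np.convolve(white[j], taps[j], mode="same")
    return out

rng = np.random.default_rng(3)
g = lambda tau: np.exp(-tau**2 / 18.0)         # assumed gaussian-shaped target
r = [[g, lambda t: 0.5 * g(t)], [lambda t: 0.5 * g(t), g]]
x = correlated_gaussians(r, K=20, L=200_000, rng=rng)
```

Here the two processes should come out with unit variance and a zero-lag cross-correlation of about 0.5.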

We note that in addition to this FIR-based approach, an alternative method for constructing multivariate correlated gaussian processes using multivariate autoregressive models is also possible here.

Once the correlated gaussian processes have been generated, they are nonlinearly transformed into rate processes using one of the transformations discussed above. Generating the Poisson spike trains from these inhomogeneous rate processes λ(t) proceeds using the standard procedure described by Johnson (1996, eq. 7). In this procedure, a sequence of independent exponentially distributed random variables ζk ∼ exp(1) is generated, and the event times tk are then calculated by solving
$$\int_{t_{k-1}}^{t_k} \lambda(t)\,dt = \zeta_k.$$
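This inversion can be sketched as follows (Python; the piecewise-linear interpolation of the cumulative intensity and the function name are our implementation choices):

```python
import numpy as np

def spikes_from_rate(lam, dt, rng):
    """Draw event times of an inhomogeneous Poisson process with sampled
    rate lam[k] ~ lambda(k*dt): successive unit-mean exponential variables
    are inverted through the cumulative intensity (Johnson, 1996)."""
    Lam = np.concatenate(([0.0], np.cumsum(lam) * dt))   # int_0^t lambda ds
    t_grid = np.arange(Lam.size) * dt
    times, target = [], rng.exponential(1.0)
    while target < Lam[-1]:
        times.append(np.interp(target, Lam, t_grid))     # invert the integral
        target += rng.exponential(1.0)
    return np.array(times)

rng = np.random.default_rng(4)
st = spikes_from_rate(np.full(500_000, 20.0), 1e-3, rng)  # 20 Hz for 500 s
```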

3.  Results

Using our approach, it is possible to generate spike trains with different correlation structures that are typical of real neural activity (see Figures 2C–2F for examples). The cross-correlation function between two signals changes as it propagates through the steps of the algorithm and matches its theoretical values at every step (see Figures 2A–2C).

Figure 2:

Controlled cross-correlation structure between two stationary spike trains. Each panel compares an empirical cross-correlogram (dotted line) with the desired cross-correlation function (solid line) during the steps of the algorithm (A–C) and in several examples (C–F). (A) Cross-correlation function with predistorted values between two gaussian processes. (B) After the nonlinear transformation, we obtain rate processes with the desired cross-correlation function. (C) Spike trains generated from correlated rate processes possess the same cross-correlation function and simulate a simple excitatory relation between two cells. (D) Simulation of an inhibitory connection between two cells. (E) STDP-type behavior of two spike trains (biphasic inhibition-excitation relation). (F) Wide gaussian-shaped cross-correlation function.


The generated point processes have intrinsically much more variability than the underlying intensity processes, which is reflected in the variance of the correlation function estimates (see Figures 2C–2F).

In principle, the approach developed in this letter is general enough to also generate nonstationary point processes with a predefined correlation structure. We tested the algorithm's ability to generate spike trains with time-varying rates and instantaneous, time-varying correlation structures. Correlation structures in this general scenario are typically represented in neuroscience research using the joint peristimulus time histogram (JPSTH). Figure 3 presents a predefined and a data-estimated JPSTH for processes whose mean has a gaussian temporal profile and displays a complex biphasic time-varying interaction. As can be seen, the algorithm can jointly capture the full temporal structure of both time-varying rates and the interaction.

Figure 3:

Controlled cross-correlation structure between two nonstationary random spike trains. (A) Target time-varying mean rates and unnormalized joint peristimulus time histogram (JPSTH) of two units with a nonstationary correlation structure (transient version of the inhibitory-excitatory correlation in Figure 2E). (B) Estimated empirical JPSTH and mean rates for the spike trains generated by the algorithm.


Next, we tested our algorithm's ability to simulate spike trains resulting from networks of pair-interacting neurons. Figure 4 shows typical activity patterns in a network of 100 units simulated by controlling pair-wise cross-correlation functions. Varying this correlation structure leads to a large diversity of network activity patterns that can be simulated.

Figure 4:

Population behavior. Raster plots of different patterns of network activity simulated for 100 units by controlling pair-wise cross-correlation functions. All of the units possess mean firing rates of 10 Hz and narrow 1 ms auto- and cross-correlation functions (effectively delta functions): Rii(τ = 0) = 1100 Hz²; Rij(τ = τpeak) = 1000 Hz². (A) Uncorrelated network activity. (B) Synchronous (τpeak = 0) cross-correlation between cells results in spontaneous synchronized bursting activity. (C) Gradually changing the delay between the firing of different cells (0 < τpeak < 100 ms) leads to a wave propagation effect. (D) Dividing the units into groups with different within-group (τpeak = 0) and between-group (Δτpeak = 10 ms) cross-correlation delays results in synfire chain-like behavior.


4.  High-Order Correlation Structure

Nonlinearly transforming gaussian processes, where the entire statistical structure is determined by the first- and second-order moments, introduces higher-order moments, which then carry over to the spike train statistics. These moments depend exclusively on the nonlinear function applied. In Figures 5A and 5B, we present the complexity distribution histogram and quantile-quantile plots for the distribution of synchronous events in spike trains with the same first- and second-order statistics generated by the three different nonlinear transformations. The complexity distribution is a useful way of characterizing the zero-lag high-order correlation structure that is defined in Grün, Abeles, and Diesmann (2008) as the probability p(ξ) to observe a particular number of synchronous spikes ξ (complexity). These results support the notion that the three nonlinear transformations lead to a different higher-order correlation structure. While the mean level of coincidences is the same (here E(ξ) = 5), the complexity distribution resulting from the exponential nonlinear transformation has significantly wider tails (corresponding to a higher proportion of highly synchronous events). The square and absolute value transformations appear to lead to a fairly similar high-order correlation structure.

Figure 5:

High-order correlation structure of generated spike trains. (A) Complexity distribution histograms for the three nonlinear transformations (ξ is the number of simultaneous spikes). N = 100 spike trains with 50 Hz mean firing rate and delta auto- and cross-correlations were simulated (1 ms time bins, Rii(τ = 0) = 3875 Hz²; Rij(τ = 0) = 3750 Hz²). (B) Quantile-quantile plots for the complexity distributions in A. The exponential transformation is compared to the two other methods, revealing large differences in the tails of the distributions. (C) Unnormalized third-order correlation functions for three stationary spike trains generated using the exponential nonlinear transformation (mean firing rate 20 Hz, exponentially shaped pair-wise cross-correlation functions, delays: τ2peak = −10 ms, τ3peak = 10 ms, (τ3 − τ2)peak = 0). (D) Analytical third-order rate correlation structure for the simulation parameters (from equation 4.3).


To gain a better understanding of the high-order statistics of processes generated using this type of method, we have derived analytical formulas for the exponential transformation Λi = exp(μi + σiXi). The n-order correlation of transformed correlated normally distributed variables Xi ∼ N(0, 1), E[XiXj] = rij, has the following form:
$$R_n \equiv E\!\left[\prod_{i=1}^{n}\Lambda_i\right] = E\!\left[\exp\!\left(\sum_{i=1}^{n}(\mu_i + \sigma_i X_i)\right)\right] = \exp\!\left(\sum_{i=1}^{n}\mu_i + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sigma_i\sigma_j r_{ij}\right), \tag{4.1}$$
where the second step follows from Wick's theorem. Expanding and then collecting terms,
$$R_n = \prod_{i=1}^{n}\exp\!\left(\mu_i + \frac{\sigma_i^2}{2}\right)\cdot\prod_{i<j}\exp(\sigma_i\sigma_j r_{ij}) = \prod_{i=1}^{n}E[\Lambda_i]\cdot\prod_{i<j}\exp(\sigma_i\sigma_j r_{ij}), \tag{4.2}$$
which can be further simplified using equation 2.4:
$$R_n = \prod_{i=1}^{n}E[\Lambda_i]\cdot\prod_{i<j}\frac{R_{ij}}{E[\Lambda_i]\,E[\Lambda_j]}. \tag{4.3}$$

The theorem in section 2.3 can be easily generalized to higher-order correlations; hence, equation 4.3 can also be applied to calculate the high-order correlation structure between spike trains, Rn^ΔN (see Figures 5C and 5D).
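Equation 4.3 is easy to verify numerically for, say, n = 3 exponentially transformed gaussians (Monte Carlo sketch; all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

mu, sigma = np.log(20.0) - 0.1, np.sqrt(0.2)     # identical units, eq. 2.1
C = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])                  # gaussian correlations r_ij
X = rng.multivariate_normal(np.zeros(3), C, size=500_000)
Lam = np.exp(mu + sigma * X)

E = np.exp(mu + sigma**2 / 2)                    # eq. 2.2 (here E = 20 Hz)
R = lambda r: E * E * np.exp(sigma**2 * r)       # eq. 2.4
# eq. 4.3 for n = 3:
predicted = E**3 * (R(0.3) / E**2) * (R(0.2) / E**2) * (R(0.4) / E**2)
empirical = np.mean(np.prod(Lam, axis=1))
```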

5.  Discussion

The generation of continuous gaussian processes with a predefined correlation structure is a classical problem with well-known solutions and numerous applications in all fields of signal and image processing. However, while the empirical correlation structure of neural spike trains has been measured many thousands of times using well-known analysis tools introduced in the 1960s (Perkel, Gerstein, & Moore, 1967a, 1967b; Gerstein & Perkel, 1969), the development of a complementary simulation tool remained largely unaddressed for four decades and has only recently begun to attract attention. The recent interest in controlled-correlation spike trains is mostly motivated by the emergence of experimental techniques for rapid patterned stimulation of neurons with high temporal and spatial resolution (Shoham, O'Connor et al., 2005; Gasparini & Magee, 2006; Losonczy & Magee, 2006; Nikolenko et al., 2007), as well as the recent suggestion in neural coding studies that second-order statistics dominate the observed neuroretinal spiking pattern and can be used to explain essentially its full structure (Schneidman et al., 2006; Shlens et al., 2006). In addition, these developments could potentially find applications in econometric time series, earthquake data, network traffic, reliability research, optical communication, and heartbeat variability.

To generate the stochastic intensity functions, we have provided solutions using three of the most fundamental nonlinear transformations to nonnegative variables: exponentiation, absolute value, and squaring. Doubly stochastic Poisson processes with rate functions that are nonlinearly transformed gaussian processes (the so-called linear-nonlinear-Poisson, or LNP, model) have previously been studied in various contexts. For example, square-transformed processes were studied extensively in the framework of low-light optical communication, where the gaussian process represents a quantum probability density (Snyder & Miller, 1991), while exponentially transformed rates have recently been studied in the context of neural encoding models (Paninski, Shoham, Fellows, Hatsopoulos, & Donoghue, 2004; Shoham, Paninski et al., 2005; Pillow et al., 2008). The different nonlinearities lead to solutions with different characteristics and complexity, which may be preferable in different applications. The exponential and square transformations have an analytical solution for the correlation predistortion, while for the absolute value transformation, the problem of computing the correlation predistortion can be solved only numerically. The three transformations lead to spike trains with different permissible levels of dynamic variation, as measured, for example, by the processes' coefficient of variation,
$$CV_\lambda = \frac{\sqrt{R_{ii} - E^2[\Lambda_i]}}{E[\Lambda_i]},$$
or equivalently,
$$R_{ii} = E^2[\Lambda_i]\,\big(1 + CV_\lambda^2\big).$$
The exponential transformation is the most flexible, allowing essentially every possible CV; the squaring transformation is more restricted (0 ⩽ CVλ ⩽ √2); and the absolute value transformation is the most limited (0 ⩽ CVλ ⩽ √(π/2 − 1) ≈ 0.76). Figure 6 presents the probability density functions of the rate processes for different CV values. Note how the higher CV values are obtained through nonlinear stretching of the distributions' tails. For the rectifying transformation, the formulas are derived similarly to the absolute value transformation case by using equations A.4, A.5, and A.15 instead of A.6, A.7, and A.16, respectively.
Figure 6:

Effect of the nonlinear transformations on the resulting probability density functions. The four plots (A–D) show the resulting marginal distributions Pλ(λ) for mean firing rates of 10 Hz and four different levels of variation. Note that as the CV increases, the mode of Pλ(λ) decreases and the tails become stronger.


It is interesting to compare our approach with other methods developed recently for controlling the correlation structure of spike trains. The method by Niebur (2007) probabilistically distributes spikes from a “mother” spike train (Kuhn, Aertsen, & Rotter, 2003) to tune synchrony in stationary homogeneous Poisson processes. For addressing the more general problem of temporal correlation functions among different spike trains, Brette (2009, method II) extended this approach by adding a controlled temporal jitter to the spike timing. However, this general strategy was solved only for stationary cases, introduces only positive correlations, and generally leads to strong coupling between the auto- and cross-correlation structure (Tetzlaff et al., 2008). It is also not well connected to central concepts in the theory of random point processes, such as stochastic intensity, and could potentially introduce spurious high-order interdependencies.

In contrast, our strategy was based on using correlated gaussian processes as an underlying structure for generating the spike trains. This general concept was also developed independently in two very recent studies (Brette, 2009; Macke, Berens, Ecker, Tolias, & Bethge, 2009), in both cases using a threshold nonlinearity. The study by Brette (2009, method I) generates rate processes by applying a rectifying nonlinearity to gaussian processes with a specific exponential autocorrelation function. The rectifying nonlinearity is conceptually close to the absolute-value transformation studied here, and formulas for the correlation distortion it creates can be derived using the procedures we introduce in the appendix (see also above). However, as noted by Brette (2009), it has the undesirable property that when the correlations are strong, the rates are zero most of the time, and it is therefore of limited use for simulating realistic spike trains. In addition, Brette limits his discussion to the case of low correlations, without the concept of correlation distortion, which is central to the generality of our approach. In an alternative “dichotomized gaussian” approach introduced by Macke et al. (2009), gaussian processes are thresholded to directly generate the binary spike trains in the time domain, without going through the intermediate step of a rate process (see also Bethge & Berens, 2008, and Johnson, 1994, section IIID, for related analysis). Because the second step is deterministic, the structure is singly stochastic, in contrast to the doubly stochastic Poisson structure explored here. The dichotomized gaussian approach requires numerical solutions of nonlinear equations and was found to lead to spike trains with nearly maximal entropy.

Our methods can generate a flexible and very broad range of auto- and cross-correlation structures. For example, the method allows the generation of processes with the same autocorrelation and very different cross-correlation structures (see Figure 2), and either one can be very different from an exponential function (in contrast to Brette, 2009). The method is practically limited to correlation structures that lead to positive-definite matrices (see equation 2.20). More generally, the method is limited to correlation structures that are realizable by pure rate covariation, without any precise spike coordination (Staude, Rotter, & Grün, 2008). The analysis by Staude et al. demonstrated that rate-covariation models have peak cross-correlations between spike trains that are weaker than the respective peak autocorrelations (the smaller of the two). Moreover, when two processes have slow dynamics (broad autocorrelations), they have a limited ability to sustain a narrow cross-correlation.

Appendix:  Correlation Distortion for an Absolute Value Transformation

We consider the transformation Λi = |μi + σiXi|, where Xi ∼ N(0, 1) and rii = 1, which results in a folded normal distribution of Λi. The solution is divided into two steps. First, we derive a solution for μi and σi as a function of a desired Rii = E[Λi²] and E[Λi], and then proceed to derive rij as a function of the calculated μi, σi and the desired Rij. Expressions for the first and second moments of truncated gaussian random variables (both univariate and bivariate) were derived earlier by Rosenbaum (1961). To derive moments for the absolute value transformation, we treat it as a superposition of two (univariate) or four (bivariate) rectified gaussian random variables.

A.1.  Step 1.

For a standard normal random variable Z ∼ N(0, 1) with density f_Z, we define w(h), the relative weight of a truncated gaussian random variable Z_h (Z conditioned on Z ≥ h):

w(h) = ∫_h^∞ f_Z(α) dα        (A.1)

Note that w(h) = (1/2)[1 − erf(h/√2)], where erf(x) = (2/√π)∫_0^x e^(−α²) dα is the error function.
The univariate truncated gaussian Z_h has the following first and second moments (Rosenbaum, 1961):

E[Z_h] = f_Z(h)/w(h)        (A.2)

E[Z_h²] = 1 + h f_Z(h)/w(h)        (A.3)
After translation and scaling, and substituting h_i = −μ_i/σ_i, these moments become

E[μ_i + σ_iZ_{h_i}] = μ_i + σ_i f_Z(h_i)/w(h_i)        (A.4)

E[(μ_i + σ_iZ_{h_i})²] = μ_i² + 2μ_iσ_i f_Z(h_i)/w(h_i) + σ_i²[1 + h_i f_Z(h_i)/w(h_i)]        (A.5)

These expressions can now be used to derive the moments of Λ_i, which can be seen as a sum of two truncated, scaled, and shifted gaussian random variables (one for each sign of μ_i + σ_iX_i):

E[Λ_i] = 2σ_i f_Z(μ_i/σ_i) + μ_i[1 − 2w(μ_i/σ_i)]        (A.6)

R_ii = E[Λ_i²] = μ_i² + σ_i²        (A.7)
Several algebraic steps were omitted in the last expression.

Bounds on R_ii can be derived from the above expressions: E[Λ_i]² ≤ R_ii ≤ (π/2)E[Λ_i]² (for μ_i ≥ 0).

In addition, note that E[Λ_i] and √R_ii both scale linearly with σ_i when the ratio h = μ_i/σ_i is held fixed. As a consequence, E[Λ_i]²/R_ii is a function of the single variable h that can be found numerically. After h is found, the transformation parameters can be calculated by deriving σ_i from equation A.7:

σ_i = √(R_ii/(1 + h²)),   μ_i = hσ_i        (A.8)
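This step can be sketched numerically as follows. The snippet is a minimal standard-library illustration of equations A.6 to A.8 (the function names are ours, and μ_i ≥ 0 is assumed, so that h ≥ 0 and the moment ratio increases monotonically with h):

```python
import math

def folded_moments(mu, sigma):
    """Mean and second moment of |mu + sigma*Z|, Z ~ N(0,1) (equations A.6-A.7)."""
    h = mu / sigma
    phi = math.exp(-0.5 * h * h) / math.sqrt(2 * math.pi)  # f_Z(h)
    mean = 2 * sigma * phi + mu * math.erf(h / math.sqrt(2))
    second = mu * mu + sigma * sigma
    return mean, second

def solve_mu_sigma(mean_target, second_target):
    """Invert A.6-A.7: recover mu, sigma from desired E[L] and E[L^2] = R_ii
    by bisection on h = mu/sigma (assumes mu >= 0)."""
    ratio_target = mean_target**2 / second_target  # lies in (2/pi, 1)
    lo, hi = 0.0, 50.0
    for _ in range(200):  # the ratio E[L]^2/E[L^2] is increasing in h
        h = 0.5 * (lo + hi)
        m, s = folded_moments(h, 1.0)  # scale-free: the ratio depends on h only
        if m * m / s < ratio_target:
            lo = h
        else:
            hi = h
    sigma = math.sqrt(second_target / (1 + h * h))  # equation A.8
    return h * sigma, sigma
```

For example, computing the folded moments of a known (μ, σ) pair and feeding them back into the solver recovers the original parameters.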

A.2.  Step 2.

For a bivariate standard gaussian random variable (X, Y) with correlation r and joint density f_{XY}, we define w(h, k; r), the relative weight of a truncated bivariate gaussian random variable (truncated at X ≥ h and Y ≥ k):

w(h, k; r) = ∫_h^∞ ∫_k^∞ f_{XY}(α, β; r) dα dβ        (A.9)

Expressions for the moments of this distribution were derived by Rosenbaum (1961). With H = (h − rk)/√(1 − r²) and K = (k − rh)/√(1 − r²),

w(h, k; r) E[X_{hk}] = f_Z(h) w(K) + r f_Z(k) w(H)        (A.10)

w(h, k; r) E[X_{hk}²] = w(h, k; r) + h f_Z(h) w(K) + r²k f_Z(k) w(H) + r(1 − r²) f_{XY}(h, k; r)        (A.11)

w(h, k; r) E[X_{hk}Y_{hk}] = r w(h, k; r) + rh f_Z(h) w(K) + rk f_Z(k) w(H) + (1 − r²) f_{XY}(h, k; r)        (A.12)

(The expressions for the moments of Y_{hk} follow by exchanging h and k.)
Now, in a similar way as for the univariate variables, after scaling and translating (where μ_i + σ_iX and μ_j + σ_jY are truncated at h_i = −μ_i/σ_i and h_j = −μ_j/σ_j, respectively), these expressions become

E[μ_i + σ_iX_{h_ih_j}] = μ_i + σ_i E[X_{h_ih_j}]        (A.13)

E[(μ_i + σ_iX_{h_ih_j})²] = μ_i² + 2μ_iσ_i E[X_{h_ih_j}] + σ_i² E[X_{h_ih_j}²]        (A.14)

E[(μ_i + σ_iX_{h_ih_j})(μ_j + σ_jY_{h_ih_j})] = μ_iμ_j + μ_iσ_j E[Y_{h_ih_j}] + μ_jσ_i E[X_{h_ih_j}] + σ_iσ_j E[X_{h_ih_j}Y_{h_ih_j}]        (A.15)

And after a superposition of four such solutions (one for each sign combination of μ_i + σ_iX and μ_j + σ_jY), we get the following expression for the absolute value transformation. Writing M(μ_i, σ_i, μ_j, σ_j; r) = w(−μ_i/σ_i, −μ_j/σ_j; r) E[(μ_i + σ_iX_{h_ih_j})(μ_j + σ_jY_{h_ih_j})] for the unnormalized product moment of equation A.15,

R_ij = M(μ_i, σ_i, μ_j, σ_j; r_ij) + M(−μ_i, σ_i, −μ_j, σ_j; r_ij) + M(μ_i, σ_i, −μ_j, σ_j; −r_ij) + M(−μ_i, σ_i, μ_j, σ_j; −r_ij)        (A.16)
Here R_ij is a function of the single variable r_ij (μ_{i,j} and σ_{i,j} were calculated previously). The predistorted value r_ij can be calculated by numerically solving this equation.

In summary, we calculate the transformation parameters (μ_i and σ_i) from the desired mean rate and the zero-lag (τ = 0) autocorrelation of the rate process, and then calculate the predistorted correlations r_ij for the gaussian processes from the desired correlations R_ij of the nonnegative rate processes.
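The numerical inversion in step 2 can also be carried out without coding equation A.16 explicitly, by combining a Monte Carlo estimate of the distorted moment with bisection. The sketch below is our illustration, not the article's code; it assumes positive μ and a nonnegative target correlation (a regime where the distorted moment increases with r), and reuses one random seed so that the estimate varies smoothly with r:

```python
import numpy as np

def rate_cross_moment(mu1, s1, mu2, s2, r, n=200_000, seed=0):
    """Monte Carlo estimate of R12 = E[|mu1 + s1*X| |mu2 + s2*Y|],
    where X, Y are standard gaussians with correlation r."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = r * x + np.sqrt(1.0 - r * r) * rng.standard_normal(n)
    return float(np.mean(np.abs(mu1 + s1 * x) * np.abs(mu2 + s2 * y)))

def predistort(mu1, s1, mu2, s2, R_target, tol=1e-3):
    """Bisection for the gaussian correlation r whose distorted moment
    matches R_target (assumes the moment increases with r on [0, 1])."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        if rate_cross_moment(mu1, s1, mu2, s2, r) < R_target:
            lo = r
        else:
            hi = r
    return 0.5 * (lo + hi)
```

As a self-consistency check, a target moment generated from a known correlation is recovered by the solver to within the bisection tolerance.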

Acknowledgments

This work was supported by Israeli Science Foundation grant 1248/06 and European Research Council starting grant 211055. We thank the two anonymous reviewers for their comments and suggestions.

References

Abeles, M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge: Cambridge University Press.

Araya, R., Eisenthal, K. B., & Yuste, R. (2006). Dendritic spines linearize the summation of excitatory potentials. PNAS, 103(49), 18799–18804.

Bethge, M., & Berens, P. (2008). Near-maximum entropy models for binary neural representations of natural images. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems, 20 (pp. 97–104). Cambridge, MA: MIT Press.

Boucsein, C., Nawrot, M., Rotter, S., Aertsen, A., & Heck, D. (2005). Controlling synaptic input patterns in vitro by dynamic photo stimulation. J. Neurophysiol., 94(4), 2948–2958.

Brette, R. (2009). Generation of correlated spike trains. Neural Computation, 21(1), 188–215.

Cox, D. R. (1955). Some statistical methods connected with series of events. J. Royal Statistical Society B, 17, 129–164.

Cox, D. R., & Lewis, P. A. W. (1966). The statistical analysis of series of events. London: Methuen.

Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.

Farah, N., Reutsky, I., & Shoham, S. (2007). Patterned optical activation of retinal ganglion cells. Paper presented at the Engineering in Medicine and Biology Society, 2007, 29th Annual International Conference of the IEEE.

Gasparini, S., & Magee, J. C. (2006). State-dependent dendritic computation in hippocampal CA1 pyramidal neurons. J. Neurosci., 26(7), 2088–2100.

Gerstein, G. L., & Perkel, D. H. (1969). Simultaneously recorded trains of action potentials: Analysis and functional interpretation. Science, 164(881), 828–830.

Grün, S., Abeles, M., & Diesmann, M. (2008). Impact of higher-order correlations on coincidence distributions of massively parallel data. In M. Marinaro, S. Scarpetta, & Y. Yamaguchi (Eds.), Dynamic brain—from neural spikes to behaviors (Vol. 5286, pp. 96–114). Berlin: Springer.

Hausser, M., & Mel, B. (2003). Dendrites: Bug or feature? Current Opinion in Neurobiology, 13(3), 372–383.

Isserlis, L. (1918). On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika, 12, 134–139.

Johnson, D. H. (1996). Point process models of single-neuron discharges. Journal of Computational Neuroscience, 3(4), 275–299.

Johnson, G. E. (1994). Construction of particular random processes. Proceedings of the IEEE, 82(2), 270–285.

Knox, C. K. (1970). Signal transmission in random spike trains with applications to the statocyst neurons of the lobster. Kybernetik, 7(5), 167–174.

Kuhn, A., Aertsen, A., & Rotter, S. (2003). Higher-order statistics of input ensembles and the response of simple model neurons. Neural Computation, 15(1), 67–101.

London, M., & Hausser, M. (2005). Dendritic computation. Annual Review of Neuroscience, 28(1), 503–532.

Losonczy, A., & Magee, J. C. (2006). Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron, 50(2), 291–307.

Macke, J. H., Berens, P., Ecker, A. S., Tolias, A. S., & Bethge, M. (2009). Generating spike trains with specified correlation coefficients. Neural Computation, 21(2), 397–423.

Matsuzaki, M., Ellis-Davies, G. C. R., Nemoto, T., Miyashita, Y., Iino, M., & Kasai, H. (2001). Dendritic spine geometry is critical for AMPA receptor expression in hippocampal CA1 pyramidal neurons. Nat. Neurosci., 4(11), 1086–1092.

Mel, B. W. (1994). Information processing in dendritic trees. Neural Computation, 6, 1031–1085.

Niebur, E. (2007). Generation of synthetic spike trains with defined pairwise correlations. Neural Computation, 19(7), 1720–1738.

Nikolenko, V., Poskanzer, K. E., & Yuste, R. (2007). Two-photon photostimulation and imaging of neural circuits. Nat. Meth., 4(11), 943–950.

Paninski, L., Shoham, S., Fellows, M. R., Hatsopoulos, N. G., & Donoghue, J. P. (2004). Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J. Neurosci., 24(39), 8551–8561.

Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967a). Neuronal spike trains and stochastic point processes. I. The single spike train. Biophys. J., 7(4), 391–418.

Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967b). Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys. J., 7(4), 419–440.

Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., et al. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.

Rosenbaum, S. (1961). Moments of a truncated bivariate normal distribution. Journal of the Royal Statistical Society, Series B (Methodological), 23(2), 405–408.

Schneidman, E., Berry, M. J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007–1012.

Segev, I., & London, M. (2000). Untangling dendrites with quantitative models. Science, 290(5492), 744–750.

Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher, A., et al. (2006). The structure of multi-neuron firing patterns in primate retina. J. Neurosci., 26(32), 8254–8266.

Shoham, S., O'Connor, D. H., Sarkisov, D. V., & Wang, S. S. H. (2005). Rapid neurotransmitter uncaging in spatially defined patterns. Nat. Meth., 2(11), 837–843.

Shoham, S., Paninski, L. M., Fellows, M. R., Hatsopoulos, N. G., Donoghue, J. P., & Normann, R. A. (2005). Statistical encoding model for a primary motor cortical brain-machine interface. IEEE Transactions on Biomedical Engineering, 52(7), 1312–1322.

Simon, B. (1974). The P(ϕ)2 Euclidean (quantum) field theory. Princeton, NJ: Princeton University Press.

Snyder, D. L., & Miller, M. I. (1991). Random point processes in time and space (2nd ed.). Berlin: Springer.

Staude, B., Rotter, S., & Grün, S. (2008). Can spike coordination be differentiated from rate covariation? Neural Computation, 20(8), 1973–1999.

Tetzlaff, T., Rotter, S., Stark, E., Abeles, M., Aertsen, A., & Diesmann, M. (2008). Dependence of neuronal correlations on filter characteristics and marginal spike train statistics. Neural Computation, 20(9), 2133–2184.