Abstract

The unitary events (UE) method is one of the most popular and efficient methods used over the past decade to detect patterns of coincident joint spike activity among simultaneously recorded neurons. The detection of coincidences is usually based on the binned coincidence count (Grün, 1996), which is known to be subject to loss in synchrony detection (Grün, Diesmann, Grammont, Riehle, & Aertsen, 1999). This defect has been corrected by the multiple shift coincidence count (Grün et al., 1999). The statistical properties of this count have not been further investigated until this work, the formula being more difficult to deal with than the original binned count. First, we propose a new notion of coincidence count, the delayed coincidence count, which is equal to the multiple shift coincidence count when discretized point processes are involved as models for the spike trains. Moreover, it generalizes this notion to nondiscretized point processes, allowing us to propose a new gaussian approximation of the count. Since unknown parameters are involved in the approximation, we perform a plug-in step, where unknown parameters are replaced by estimated ones, leading to a modification of the approximating distribution. Finally, the method takes the multiplicity of the tests into account via a Benjamini and Hochberg approach (Benjamini & Hochberg, 1995), to guarantee a prescribed control of the false discovery rate. We compare our new method, MTGAUE (multiple tests based on a gaussian approximation of the unitary events), with the UE method proposed in Grün et al. (1999) over various simulations, showing that MTGAUE extends the validity of the previous method. In particular, MTGAUE is able to detect both profusion and lack of coincidences with respect to the independence case and is robust to changes in the underlying model. Furthermore, MTGAUE is applied to real data.

1.  Introduction

The study of how neural networks transmit activity in the brain and somehow code information requires considering various aspects of spike activity. Historically, firing rates were first considered as the main way for neurons or populations of neurons to transmit activity, in correlation with experimental or behavioral events. Such correlations have been shown mainly through the use of the peristimulus time histogram (PSTH) (Abeles, 1982; Gerstein & Perkel, 1969; Shinomoto, 2010).

Besides the role of the firing rate, it has been argued, first in theoretical studies, that the activity of an ensemble of neurons may be coordinated in the spatiotemporal domain (i.e., coordination of the occurrence of spikes between different neurons) to form neuronal assemblies (Hebb, 1949; Palm, 1990; Sakurai, 1999; von der Malsburg, 1981). Indeed, such assemblies could be constituted on the basis of specific spike timing thanks to several mechanisms at the synaptic level (König, Engel, & Singer, 1996; Rudolph & Destexhe, 2003; Softky & Koch, 1993). The required neural circuitry could spontaneously emerge with spike-timing-dependent plasticity (Brette, 2012). Such coordinated activity could easily propagate over neural networks (Abeles, 1991; Diesmann, Gewaltig, & Aertsen, 1999; Goedeke & Diesmann, 2008) and be used as a potential code for the brain (Singer, 1993, 1999). Moreover simulation studies have shown how synchronization emerges and propagates in neural networks, with or without oscillations (Diesmann et al., 1999; Golomb & Hansel, 2000; Tiesinga & Sejnowski, 2004; Goedeke & Diesmann, 2008; Rudolph & Destexhe, 2003).

In addition to these theoretical considerations, much experimental evidence has been accumulated, showing that coordination between neurons is indeed taking place. In particular, it has been shown that the mechanisms of spike generation can be very precise (Mainen & Sejnowski, 1995) under physiological conditions (Boucsein, Nawrot, Schnepel, & Aertsen, 2011; Konishi, Takahashi, Wagner, Sullivan, & Carr, 1988; Lestienne, 2001; Prescott, Ratté, De Koninck, & Sejnowski, 2008). Spike synchronization, with or without oscillation, has been shown to be involved in the so-called binding problem (Engel & Singer, 2001; Singer & Gray, 1995; Singer, 1999; Super, van der Togt, Spekreijse, & Lamme, 2003). It has also been studied in relation to firing rate (Abeles & Gat, 2001; Eyherabide, Rokem, Herz, & Samengo, 2009; Gerstein, 2004; Grammont & Riehle, 1999, 2003; Heinzle, König, & Salazar, 2007; König et al., 1996; Kumar, Rotter, & Aertsen, 2010; Lestienne, 1996; Maldonado, Friedman-Hill, & Gray, 2000; Masuda & Aihara, 2007; Riehle, Grün, Diesmann, & Aertsen, 1997; Vaadia et al., 1995).

Most of this experimental evidence could not have been obtained without the development of specific descriptive methods for the analysis of spike timing: the cross-correlogram (Perkel, Gerstein, & Moore, 1967), gravitational clustering (Gerstein & Aertsen, 1985), and the joint peristimulus time histogram (JPSTH) (Aertsen, Gerstein, Habib, & Palm, 1989). However, these methods do not necessarily answer a major criticism: spike synchronization might just be an epiphenomenon of the variations of the firing rate. That is why, in direct line with these methods, Grün and collaborators developed the unitary events (UE) analysis method (Grün, 1996).

The UE method is based on a binned coincidence count (see also section 2.1 for a more precise definition). This method has been continuously improved (Grün, Diesmann, & Aertsen, 2002a, 2002b; Grün, 2009; Grün, Diesmann, & Aertsen, 2010; Gütig, Aertsen, & Rotter, 2001; Grün, Riehle, & Diesmann, 2003; Pipa & Grün, 2003; Pipa, Diesmann, & Grün, 2003; Louis, Borgelt, & Grün, 2010; Pipa, Grün, & van Vreeswijk, 2013), and it has been used successfully in several experimental studies (Riehle et al., 1997; Riehle, Grammont, Diesmann, & Grün, 2000; Grammont & Riehle, 1999, 2003; Maldonado et al., 2008; Kilavik et al., 2009). However, these approaches suffer from several defects due to the use of a binned coincidence count. Indeed, as Grün et al. (1999) and Grün et al. (2010) pointed out, there may be a large loss in synchrony detection, coincidences being discarded when they correspond to two spikes lying in two adjacent distinct bins. Actually, up to 60% of the coincidences can be lost when the bin length is the typical delay or jitter between two spikes participating in the same coincidence. Another version of the UE method has consequently been proposed: the multiple shift coincidence count (Grün et al., 1999) (see section 2.1 for precise definitions; see also Pazienti, Maldonado, Diesmann, & Grün, 2008, for another development). However, to our knowledge, this notion has not been as well explored as the notion of binned coincidence count. Indeed, as already pointed out in Grün et al. (2010), the various shifts can make the coincidence count more complex than a sum of independent variables, depending on the underlying model.

Therefore, the main aim of this letter is to complete the study of this notion of multiple shift coincidence count and propose a new method that extends the validity of the original multiple shift UE method (Grün et al., 1999). To do so, we focus on the symmetric multiple shift coincidence count, which is much better adapted to the purpose of testing independence between two spike trains on a given window (see section 2.1 for the distinction between symmetric and asymmetric multiple shift coincidence counts). Then we generalize the notion, which was given for spike trains discretized at a certain resolution level. The delayed coincidence count defined in section 2.3 is exactly the same coincidence count for discretized spike trains, but this new formula can also be applied to nondiscretized point processes. This mathematical notion allows us to compute the expectation and the variance in the simplest case of Poisson processes, which approximate the Bernoulli processes used in Grün et al. (1999). Therefore, the Fano factor can be derived for the symmetric multiple shift or delayed coincidence count, as has been done, for instance, in Pipa et al. (2013) for the more classical notion of binned coincidence count. This also leads to a gaussian approximation of the distribution of the symmetric multiple shift or delayed coincidence count when Poisson or Bernoulli processes are involved as models for the spike trains.

However, this approximation depends on unknown parameters in practice, namely, the underlying firing rates. Such problems due to unknown parameters can be bypassed by several methods, mainly based on surrogate data (see Louis, Gerstein, Grün, & Diesmann, 2010, for dithering and Grün, 2009, for a more general review). For the binned coincidence count, there are two main methods that can be easily statistically interpreted: trial shuffling (Pipa & Grün, 2003; Pipa et al., 2003), which is a permutation resampling method, and conditional distributions (Gütig et al., 2001). However, trial shuffling clearly requires nontrivial and specific implementation solutions, and when working conditionally on the observed number of spikes (Gütig et al., 2001), the solution is completely linked to the form of the binned coincidence count and the use of Bernoulli models. Consequently, neither solution is used here for the symmetric multiple shift or delayed coincidence count, which is more intricate than the classical binned coincidence count. We prefer to look more carefully at the replacement of the unknown firing rates by estimated ones, a step known in statistics as a plug-in step. By looking closely at the plug-in procedure, we show in section 3.1 that it changes the variance of the asymptotic gaussian distribution, and therefore we correct the approximation to take this phenomenon into account. To our knowledge, no correction due to the plug-in effect has been taken into account even for the classical binned coincidence count. The last step (see section 3.2) of our procedure consists in carefully controlling the false discovery rate (FDR), when several windows of analysis are considered, by using Benjamini and Hochberg's (1995) procedure.

Each time, a thorough simulation study shows the performance of our procedure. An analysis of real data is also performed in section 4 on data that have already been partially published, so that the detection ability of the method can be demonstrated in concrete situations. Finally, we discuss in section 5 the overall improvements due to our procedure with respect to the original method of Grün et al. (1999).

In the sequel, we write in italic technical expressions the first time they are encountered, and we give their definition. We also use the following notation, which is standard in point process theory (Daley & Vere-Jones, 2003). A point process N is a random countable set of points (of ℝ here). Each point corresponds to the detection time of a spike of the considered neuron by the recording electrode. For any subset A of ℝ, N(A) is the number of points of N in A, and N(dt) is the associated point measure, that is, N(dt) = ∑_{S ∈ N} δ_S(dt), where δ_S is the Dirac measure at point S. This means that for every function f, ∫ f(t) N(dt) = ∑_{S ∈ N} f(S). The point process corresponding to the spike train of neuron j is denoted Nj, and when M trials are recorded, the point process corresponding to the spike train of neuron j during trial m is denoted N(m)j. In the sequel and whatever the chosen model, we assume that the M trials are independent and identically distributed (i.i.d.), which means ergodicity across the trials, except when stated otherwise. We denote by ℙ the probability measure, by 𝔼 its corresponding expectation, and by Var its corresponding variance. Also, 1A denotes the indicator function of the event A, which takes the value 1 when A is true and 0 otherwise. Hence, a function f1A takes the value f on A and 0 on the complementary event Ac.

Fundamental notions for this letter are given in the following definition (see also Staude, Rotter, & Grün, 2010, for this kind of distinction):

Definition 1. 

Real single unit data are recorded with a certain resolution h, which is usually 10−3 s or 10−4 s depending on the experiment. Formally time is cut into intervals of length h and of the form [ih−h/2, ih+h/2). Then one associates with any point process, N, its associated sequence at resolution h, that is, a sequence of 0 and 1, (Hn)n, where Hi=1 corresponds to the presence of (at least) one point of N in [ih−h/2, ih+h/2) (see also Figure 1A). Reciprocally, to a sequence of 0 and 1, (Hn)n, we associate a point process N by taking the set of all points of the type S=ih such that Hi=1. Such a point process that is forced to have only points of the type ih, for some integer i, is referred to as discretized at resolution h.
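As an illustration, the two maps of definition 1 can be sketched in a few lines of code (a minimal sketch of ours, using a sparse set-of-indices representation for the 0/1 sequence; the function names are not from the letter):

```python
import math

def discretize(spikes, h):
    """Associated sequence at resolution h (definition 1), stored sparsely:
    the set of indices i such that H_i = 1, that is, such that at least
    one spike time falls in [ih - h/2, ih + h/2)."""
    return {math.floor(t / h + 0.5) for t in spikes}

def undiscretize(indices, h):
    """Reciprocal map: the point process discretized at resolution h,
    with one point ih for every index i with H_i = 1."""
    return sorted(i * h for i in indices)

# two spikes in the same segment are clipped to a single 1
print(discretize([0.0012, 0.0014, 0.0029], 0.001))  # -> {1, 3}
```

Note that, as in the definition, several spikes falling in the same segment are clipped to a single 1, so discretization followed by the reciprocal map may lose points.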

Figure 1:

Discretization and multiple shift coincidence count. (A) Illustration of the discretization of a spike train on the window W and on the enlarged window Wd. (B) Illustration of both symmetric and asymmetric multiple shift coincidence counts. (C) Interpretation of those notions in terms of delay: an edge corresponds to a couple (i, k), where i (resp. k) is the position of a 1 in the first (resp. second) spike train, and for which the delay |i − k| is at most d. The number of coincidences is then the number of edges. Data that are outside W but inside Wd, and that are therefore taken into account in the asymmetric notion but not in the symmetric one, are represented in gray.

2.  Probabilistic Study of the Coincidence Count

2.1.  The Multiple Shift Coincidence Count.

When a pair of neurons is being recorded, the question is how to properly define a coincidence and, more precisely, the coincidence count. Usually those counts are computed over several windows of time. We focus in this section on only one window W of length T, keeping in mind that this small window is strictly included in a much larger recording. Because processes can be discretized and to avoid tedious indexations, it is easier to think that W corresponds to the indices i in {1, …, n}, with T = nh (see Figure 1A). However, there are points outside this window, corresponding to indices i that can be nonpositive or larger than n.

In classical UE methods, both spike trains are usually represented by sequences of 0 and 1 of length n′, for some integer n′ (Grün, 1996; Grün et al., 2010). The presence of a 1 at position i indicates that there is at least one spike in the ith segment; these segments are usually called bins. This construction means that the real data have been binned at a coarser level (the bin length) than their original resolution h. We denote by (D1i) and (D2i) the sequences associated with the first and second neurons, respectively. According to this bin construction, a coincidence occurs in bin i if D1i = D2i = 1. The coincidence count in this binning framework is then the number of indices i such that D1i = D2i = 1. The problem, underscored in Grün et al. (1999), is that for reasonable bin lengths (typically a few milliseconds), a significant number of spikes that are at a distance less than the bin length are not counted, because they fall into two adjacent and distinct bins: the binning effect generates a significant loss in synchrony detection.
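The loss described above is easy to reproduce. The sketch below is our own toy implementation (with bins aligned at multiples of the bin length rather than centered, for simplicity); it shows two spikes only 0.2 ms apart that are missed with 5 ms bins because they fall in adjacent bins:

```python
def binned_coincidence_count(spikes1, spikes2, bin_length):
    """Clipped binned coincidence count: the number of bins containing
    at least one spike of each train."""
    bins1 = {int(t // bin_length) for t in spikes1}
    bins2 = {int(t // bin_length) for t in spikes2}
    return len(bins1 & bins2)

# spikes at 4.9 ms and 5.1 ms: 0.2 ms apart, yet in adjacent 5 ms bins
print(binned_coincidence_count([0.0049], [0.0051], 0.005))  # -> 0
print(binned_coincidence_count([0.0049], [0.0051], 0.010))  # -> 1
```

Enlarging the bins recovers this coincidence but degrades the temporal precision of the analysis, which is the trade-off the multiple shift method avoids.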

The multiple shift method, introduced in Grün et al. (1999), uses a notion of coincidence count that corrects this loss in synchrony detection by shifting one spike train (N2) with respect to another spike train (N1), which is fixed by a step of size h, both spike trains being kept at their original resolution level h. There are two ways of defining the multiple shift coincidence count depending on whether data outside the window of interest, W, may enter or not when the spike train N2 is shifted (see Figure 1B).

In the symmetric multiple shift coincidence count, both spike trains, N1 and N2, are observed on a window W, and data outside W are discarded. Those spike trains are discretized at resolution h and are consequently considered as sequences of 0 and 1, denoted (H1i)1≤i≤n and (H2i)1≤i≤n, respectively, in the sequel. Then a (near) coincidence is observed at time ih on the window W if there exists a shift j, an integer in {−d, …, d}, such that H1i = H2i+j = 1. Note that this definition implies in particular that such a j should also satisfy
1 \le i + j \le n,
2.1
since the recordings outside the window W of interest are discarded. The symmetric multiple shift coincidence count is then defined by the total number of (near) coincidences on the window W, or, formally,
X = \sum_{i=1}^{n} \sum_{j=-d}^{d} H^1_i H^2_{i+j}\, \mathbf{1}_{1 \le i+j \le n}.
2.2
Taking k=i+j in the previous sum, we obtain
X = \sum_{1 \le i, k \le n,\ |i-k| \le d} H^1_i H^2_k.
2.3
Hence, it can also be understood as the total number of couples (i, k) such that H1i = H2k = 1, such that the delay |i − k| is at most d, and such that both i and k belong to {1, …, n}. Both i and k correspond to a spike in the window of interest W; this notion is therefore symmetric in the first and second spike trains (see Figure 1C).
If condition 2.1 is not fulfilled and if j is free, then when the shift is performed, data outside W "enter" and interact with data inside W (see Figure 1B). This leads to the asymmetric multiple shift coincidence count, formally defined as
X^a = \sum_{i=1}^{n} \sum_{j=-d}^{d} H^1_i H^2_{i+j}.
2.4
Taking k=i+j in the previous sum, we obtain
X^a = \sum_{i=1}^{n} \sum_{k=i-d}^{i+d} H^1_i H^2_k.
2.5
Hence, it can also be understood as the total number of couples (i, k) such that H1i = H2k = 1, such that the delay |i − k| is at most d, such that i (in {1, …, n}) corresponds to a spike of N1 in W, and such that k (in {1 − d, …, n + d}) corresponds to a spike of N2 in the enlarged window Wd. Note that with h=10−3 s, it is quite usual to have n=100 and up to d=20. Hence, in the asymmetric notion, one sequence can be 40% larger than the other one.
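Both counts can be sketched directly from their couple-counting interpretation (equations 2.3 and 2.5). This is our own illustrative code, not from the letter: spike trains are represented as sets of indices i with Hi = 1, and the window is {1, …, n}:

```python
def symmetric_count(H1, H2, n, d):
    """Symmetric multiple shift coincidence count: couples (i, k) with
    H1_i = H2_k = 1, |i - k| <= d, and both i and k in {1, ..., n}."""
    return sum(1 for i in H1 for k in H2
               if 1 <= i <= n and 1 <= k <= n and abs(i - k) <= d)

def asymmetric_count(H1, H2, n, d):
    """Asymmetric count: i stays in {1, ..., n}, while k may lie in the
    enlarged window {1 - d, ..., n + d}."""
    return sum(1 for i in H1 for k in H2
               if 1 <= i <= n and 1 - d <= k <= n + d and abs(i - k) <= d)

# a spike just outside W is seen by the asymmetric notion only
print(symmetric_count({10}, {11}, 10, 2))   # -> 0
print(asymmetric_count({10}, {11}, 10, 2))  # -> 1
```

The example makes the asymmetry concrete: a 1 at position 11, outside the window {1, …, 10}, creates a coincidence in the asymmetric notion but not in the symmetric one.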

In the original Matlab code of Grün et al. (1999), coincidences, that is, couples (i, k) such that H1i = H2k = 1 and |i − k| ≤ d, were identified first and indexed by their position on the first spike train, N1. Therefore, when focusing on a given window W, they counted all the coincidences (i, k) for which i corresponds to a spike of the first spike train N1 in W, whatever the value of k. This is exactly the asymmetric multiple shift coincidence count.

However, on the one hand, all coincidence counts are classically used to detect dependence. They are compared to the distribution expected under the independence hypothesis. Since the UE method aims at locally detecting the dependence, the independence hypothesis is a local one, which depends on the underlying W and is classically understood as H0: N1 and N2 are independent on the window of interest W. But using an asymmetric test statistic (as the asymmetric multiple shift coincidence count) for testing a symmetric notion (namely, the independence) can lead to different detections with the same data set, depending on which spike train is referred to as N1 (see Figure 11 for a concrete example on real data for the original UE method). Therefore, the symmetric notion leading to symmetric answers to the question of whether the two spike trains are independent seems the more natural.

On the other hand, one faces the problem of understanding the distribution under the independence hypothesis. Usually models of independence need to be imposed, and in many articles on the UE method (in particular, in Grün et al., 1999), the spike trains are modeled by independent Bernoulli processes. More precisely, the Hji are assumed to be i.i.d. Bernoulli variables with parameter pj = λjh, where λj is the firing rate of neuron j. Based on the fact that Uik = H1iH2k is also a Bernoulli variable, of parameter p1p2, and that for each i there are 2d+1 corresponding Uik in the sum of equation 2.5, it is easy to prove that
m_g = (2d+1)\, n\, p_1 p_2
2.6
is the expectation of the asymmetric multiple shift coincidence count, Xa (see equation 2.5). This is the main quantity used in Grün et al. (1999) to understand the distribution under the independence assumption (see equation 15 in Grün et al., 1999). The problem is more complex for the symmetric multiple shift coincidence count. The main problem with such a derivation for the symmetric notion is that when i is close to 1 or n, one cannot always find (2d+1) indices k such that |i − k| ≤ d and 1 ≤ k ≤ n. There could be fewer. This edge effect is negligible for small d but becomes more critical when d is large. Because we consider that the symmetric notion is the most relevant one, one of our first aims is to take this edge effect into account. Therefore, we want to propose a correct expectation formula for X too.
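The edge effect can be made concrete by counting the admissible couples themselves: for 1 ≤ i, k ≤ n with |i − k| ≤ d there are (2d + 1)n − d(d + 1) couples, not (2d + 1)n, because the indices within d of each border lose 1 + 2 + … + d = d(d + 1)/2 potential partners per edge. This closed form and the check below are ours, given only to quantify the edge effect:

```python
def n_symmetric_couples(n, d):
    # (2d+1)n couples minus the d(d+1) couples lost at the two edges
    return (2 * d + 1) * n - d * (d + 1)

# brute-force enumeration of couples (i, k), 1 <= i, k <= n, |i - k| <= d
brute = sum(1 for i in range(1, 101) for k in range(1, 101) if abs(i - k) <= 2)
print(brute, n_symmetric_couples(100, 2))  # both equal 494
```

For n = 100 and d = 2 the deficit is only 6 couples out of 500, but for n = 100 and d = 20 it is 420 out of 4100, which is why the effect cannot be neglected for large d.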

Note also that when deriving equation 2.6 for the asymmetric notion Xa, we have been forced to implicitly assume more than H0: N1 and N2 are independent on the window of interest W, because the index k may correspond to points in the enlarged window Wd that are not in the window of interest W. In fact, we have assumed that H′0: N1 on W is independent of N2 on Wd. It is actually this last asymmetric hypothesis, H′0, which is natural when considering the asymmetric multiple shift coincidence count, and not the symmetric hypothesis H0. However, if one rephrases the independence hypothesis as H′0, two different sets of windows need to be considered: one set of classical windows for N1 and one set of enlarged windows for N2. This is not reasonable either, since, again, conclusions of the UE detection method can be different when exchanging the roles of N1 and N2 (see also Figure 11).

Finally, in Grün et al. (1999), the distribution of the multiple shift coincidence count is approximated by a Poisson distribution, as is classically done for the binned coincidence count, a setup in which the Poisson distribution is viewed as the approximation of a binomial distribution. However, although it is true that the coincidence count is a sum of Bernoulli variables, these variables are not independent, because the variable Hji may participate in more than one coincidence, as already noted in Grün et al. (2010). Therefore, the multiple shift coincidence count is not a binomial variable when Bernoulli processes are considered. This fact makes the behavior of this precise multiple shift coincidence count different from other classical notions of coincidence count based on binning. Therefore, this letter proposes a limit distribution for the coincidence count that takes this dependency into account, so that the approximation is valid for a larger set of parameters than the Poisson approximation of Grün et al. (1999). While it is quite difficult to do so directly for Bernoulli processes, this probabilistic result can be easily derived if we approximate Bernoulli processes by Poisson processes.

2.2.  Bernoulli and Poisson Processes.

Recall that Bernoulli processes are generated as follows. For a window W of length T, at resolution h, n = T/h independent Bernoulli variables, Bi, with parameter p = λh are simulated, where λ is the firing rate of the considered neuron. The associated point process (see definition 1) is denoted NB in the sequel.

It is well known that when h tends to 0, the Bernoulli process tends to a Poisson process. This can be seen, for instance, because the number of points, NB(W), is a binomial variable that tends in distribution toward a Poisson variable with parameter λT when h tends to 0. In particular, the approximation is valid as soon as n is large and p is small (Hogg & Tanis, 2009). Since, by construction, the numbers of points of NB in disjoint subsets of W are independent variables, we recover the classical definition of a homogeneous Poisson process of intensity λ (see Daley & Vere-Jones, 2003, for a precise definition). Note that Poisson processes are not discretized at any resolution level, whereas Bernoulli processes are (see definition 1).
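To see how close the binomial count is to its Poisson limit in this regime, one can compute the total variation distance between the two distributions with the standard library. The parameter values below (n = 1000 bins and p = 5 × 10−3, which would correspond, e.g., to T = 0.1 s, h = 10−4 s, and a 50 Hz firing rate) are illustrative choices of ours:

```python
import math

def binom_pmf(n, p, k):
    # probability that a Binomial(n, p) count equals k
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def pois_pmf(lam, k):
    # probability that a Poisson(lam) count equals k
    return math.exp(-lam) * lam**k / math.factorial(k)

n, p = 1000, 5e-3  # e.g., T = 0.1 s, h = 1e-4 s, 50 Hz
# total variation distance, truncated where both tails are negligible
tv = 0.5 * sum(abs(binom_pmf(n, p, k) - pois_pmf(n * p, k)) for k in range(40))
print(tv)  # well below 0.01
```

The resulting distance is a fraction of a percent, consistent with the statement that the two processes are almost statistically indistinguishable at this resolution.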

More precisely, in our setup, windows of length 0.1 s are classically considered, with firing rates less than 100 Hz and with resolution h=10−3 s or h=10−4 s. We are consequently typically in a case where the Poisson approximation is valid. Reynaud-Bouret, Tuleau-Malot, Rivoirard, and Grammont (in press) used several classical tests, originally due to Ogata (1988), to test whether a point process is a homogeneous Poisson process; we refer readers to this article for detailed explanations of the procedures. In Figure 2, we run those tests on a simulated Bernoulli process. The p-values are large, meaning that the various tests accept the Poisson assumption (see also Ventura, 2010, for precise definitions of tests and p-values).1 Moreover, the empirical distributions of the p-values are close to the diagonal, meaning that the distributions of the various statistics (positions, numbers of spikes, or delays between spikes) are the ones given by a classical Poisson process, and they become closer to the diagonal when h decreases.

Figure 2:

Poisson approximation of the Bernoulli process. Two thousand p-values for the following tests are displayed as a function of their rank divided by 2000; the diagonal is also represented. Two Bernoulli processes are simulated with Hz and h=10−3 s or 10−4 s. (A) For each simulation, a Kolmogorov-Smirnov uniformity test is performed on a Bernoulli process simulated on a window W=[h/2, T+h/2] of length T=2 s. The test statistic is , where is the empirical cumulative distribution function of the points of the considered process. The corresponding test at level rejects the uniformity hypothesis if the test statistic is larger than the quantile of the tabulated Kolmogorov-Smirnov distribution. (B) A Kolmogorov-Smirnov uniformity test is performed on the aggregated process over 40 simulated trials (i.e., the considered point process is the union of 40 i.i.d. Bernoulli processes simulated on W with T=0.1 s). (C) The chi-square Poisson test over 40 trials is performed on the total number of points per trials for Bernoulli point processes simulated on W with T=0.1 s. (D) Exponentiality test 1 (Reynaud-Bouret et al., in press) of the delays (ISI) between points of a Bernoulli process on W with T=10 s is performed. See test 1 of Reynaud-Bouret et al. (in press) for more insight, here used with a subsample size given by (total number of delays)2/3.

Hence, Bernoulli processes can be well approximated by Poisson processes on the typical set of parameters used in neuroscience, and they are almost statistically indistinguishable from Poisson processes, at the resolution h=10−4 s.

2.3.  The Delayed Coincidence Count.

We focus now on the symmetric notion, at least in a first approach. If we want to use Poisson processes instead of Bernoulli processes to perform the computations, we need to rewrite the symmetric multiple shift coincidence count in terms of point processes that are not necessarily discretized at the resolution level h (see definition 1). To turn equation 2.3 into a more generic formula valid for any point process, we remark on the following phenomenon. Fix d and fix some x=ih as a point of N1. If N2 is not discretized and if we consider its associated sequence at resolution h (see definition 1), then a point y of N2 that corresponds to a k such that |i − k| ≤ d could be as far as dh + h/2 from x and still be counted as a near coincidence. In particular, when d=0, y is in the segment of center x with length h if and only if |x − y| ≤ h/2. Therefore, let
\delta = \left( d + \frac{1}{2} \right) h.
2.7
The delayed coincidence count, generalizing the notion of symmetric multiple shift coincidence count to general point processes, can be written as follows:
Definition 2. 
The delayed coincidence count with delay δ on the window W is given by
X = \iint_{W \times W} \mathbf{1}_{|x-y| \le \delta}\; N_1(dx)\, N_2(dy).
2.8

When N1 and N2 are discretized at resolution h, both equations 2.3 and 2.8 coincide, and both coincidence counts are exactly the same.
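For point processes given as lists of spike times, definition 2 is a one-liner. The sketch below (our code) also checks the equality just claimed on a small discretized example with h = 10−3 s and d = 2, hence δ = 2.5 × 10−3 s:

```python
def delayed_coincidence_count(spikes1, spikes2, delta):
    """Delayed coincidence count (definition 2): number of couples (x, y),
    x a point of N1 and y a point of N2, both in W, with |x - y| <= delta."""
    return sum(1 for x in spikes1 for y in spikes2 if abs(x - y) <= delta)

# discretized trains (points of the form ih): same count as equation 2.3,
# since for integers i, k, |ih - kh| <= (d + 1/2)h iff |i - k| <= d
h, d = 1e-3, 2
delta = (d + 0.5) * h
print(delayed_coincidence_count([1 * h, 5 * h], [3 * h, 12 * h], delta))  # -> 2
```

The count is manifestly symmetric in the two spike trains, which is the property the letter insists on.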

Although the symmetric case is the most relevant one in our framework, and although the delayed coincidence count should always be understood as a symmetric quantity in (N1, N2), the same translation can be done for the asymmetric notion, and one can also introduce
X^a = \int_{x \in W} \int_{y \in W_d} \mathbf{1}_{|x-y| \le \delta}\; N_1(dx)\, N_2(dy),
2.9
which is clearly not symmetric.
Now we translate the multiple shift UE method introduced by Grün and her collaborators to the general point-process framework. Equation 2.6 can be easily rewritten as
m_g = 2 \delta\, \lambda_1 \lambda_2\, T,
2.10
where T is the length of the window W.

In Grün et al. (1999), the distribution of Xa is approximated by a Poisson distribution with parameter mg. We will see in section 3 that assuming X to be Poisson distributed with parameter mg may also lead in certain cases to a reasonably correct behavior of the UE procedure. Therefore in the sequel, we always study both cases—the UE symmetric procedure (UEs), where X is assumed to be Poisson distributed with parameter mg, and the UE asymmetric procedure (UEa), where Xa is assumed to be Poisson distributed with parameter mg.
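For reference, the tail computation behind both UE procedures is elementary: the total count over the M trials is compared to a Poisson distribution with mean Mmg. A stdlib sketch of ours (the numerical values of M, mg, and the observed total are illustrative, not taken from the letter):

```python
import math

def poisson_sf(k, mu):
    """P(Poisson(mu) >= k), by summing the pmf over {0, ..., k - 1}."""
    return 1.0 - sum(math.exp(-mu) * mu**j / math.factorial(j) for j in range(k))

# detect an excess of coincidences: observed total against its expected mean
M, m_g = 40, 1.25                   # illustrative values
p_value = poisson_sf(70, M * m_g)   # P(Poisson(50) >= 70): a small p-value
```

A small p-value flags the window as a unitary event; the sections below argue that the Poisson shape itself is the weak point of this approximation.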

Our new method focuses on the (symmetric) delayed coincidence count X. For this count, assuming that both N1 and N2 are now Poisson processes, one can prove the following result:

Theorem 1. 
Let us fix δ in equation 2.8 such that

\delta \le T/2,
2.11

where T is the length of the window W. If N1 and N2 are independent homogeneous Poisson processes with respective intensities λ1 and λ2 on W, then the expectation of the delayed coincidence count X and its variance are given by

m_0 := \mathbb{E}(X) = \lambda_1 \lambda_2 \left( 2 \delta T - \delta^2 \right)
2.12

and

\operatorname{Var}(X) = m_0 + \lambda_1 \lambda_2 (\lambda_1 + \lambda_2) \left( 4 \delta^2 T - \tfrac{10}{3} \delta^3 \right).
2.13

Moreover if M i.i.d. trials are available, then

\sqrt{M}\, \frac{\bar{m} - m_0}{\sqrt{\operatorname{Var}(X)}} \xrightarrow{\ d\ } \mathcal{N}(0, 1),
2.14

where \bar{m} is the average observed coincidence count with delay δ, that is,

\bar{m} = \frac{1}{M} \sum_{m=1}^{M} \iint_{W \times W} \mathbf{1}_{|x-y| \le \delta}\; N_1^{(m)}(dx)\, N_2^{(m)}(dy).
2.15

The symbol \xrightarrow{d} means convergence in distribution when M tends to infinity. This means, for instance, that the quantiles of \sqrt{M}(\bar{m} - m_0)/\sqrt{\operatorname{Var}(X)} tend to those of the standard gaussian distribution \mathcal{N}(0,1) when M becomes larger. The proof is given in the supplementary material, available at http://www.mitpressjournals.org/doi/suppl/10.1162/NECO_a_00604.
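Given this limit, the corresponding two-sided test is immediate to write down. This is our sketch; it uses the Poisson-model variance directly and therefore does not yet include the plug-in correction of section 3:

```python
import math

def gaussian_test(m_bar, m0, var_x, M):
    """z-statistic of the gaussian approximation and its two-sided p-value:
    under independence, sqrt(M)(m_bar - m0)/sqrt(var_x) ~ N(0, 1)."""
    z = math.sqrt(M) * (m_bar - m0) / math.sqrt(var_x)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * P(N(0,1) > |z|)
    return z, p

z, p = gaussian_test(m_bar=2.1, m0=1.6, var_x=2.5, M=40)
print(round(z, 3), round(p, 4))  # z = 2.0, p ~ 0.0455
```

Because the test is two-sided, it can flag both a profusion (z > 0) and a lack (z < 0) of coincidences with respect to the independence case, as announced in the abstract.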

This result states first that 𝔼(X) can be computed when both observed point processes are independent homogeneous Poisson processes and that the edge effects appear in m0 via a quadratic term in δ, which is the difference with respect to mg. Therefore, it needs to be taken into account if one wants to compute the delayed coincidence count with a large δ. Note that if equation 2.11 is not satisfied, then all the couples (x, y) in W are affected by this edge effect, and in this case, the above formulas for the expectation and variance are no longer valid.
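The closed forms of theorem 1 can be checked by brute-force Monte Carlo (our sketch, with illustrative parameters); we write out the formulas explicitly here, m0 = λ1λ2(2δT − δ²) and Var(X) = m0 + λ1λ2(λ1 + λ2)(4δ²T − (10/3)δ³), matching the expectation and variance of the theorem:

```python
import random

def poisson_process(rate, T, rng):
    """Homogeneous Poisson process on [0, T] via i.i.d. exponential gaps."""
    t = rng.expovariate(rate)
    points = []
    while t < T:
        points.append(t)
        t += rng.expovariate(rate)
    return points

def delayed_count(s1, s2, delta):
    return sum(1 for x in s1 for y in s2 if abs(x - y) <= delta)

rng = random.Random(42)
lam1 = lam2 = 30.0
T, delta = 0.1, 0.005  # delta <= T/2, as required by condition 2.11
m0 = lam1 * lam2 * (2 * delta * T - delta**2)
var = m0 + lam1 * lam2 * (lam1 + lam2) * (4 * delta**2 * T - 10 * delta**3 / 3)

counts = [delayed_count(poisson_process(lam1, T, rng),
                        poisson_process(lam2, T, rng), delta)
          for _ in range(20000)]
mean_mc = sum(counts) / len(counts)
var_mc = sum((c - mean_mc) ** 2 for c in counts) / (len(counts) - 1)
print(m0, mean_mc)   # both close to 0.8775
print(var, var_mc)   # both close to 1.395: clearly overdispersed
```

With these parameters the variance is roughly 1.6 times the mean, an empirical illustration of the Fano factor discussion that follows.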

Note also that the Fano factor,2

F = \frac{\operatorname{Var}(X)}{\mathbb{E}(X)} = 1 + (\lambda_1 + \lambda_2)\, \frac{4 \delta^2 T - \tfrac{10}{3} \delta^3}{2 \delta T - \delta^2},
2.16

is strictly larger than 1. The gap between the variable X and a Poisson variable increases with the firing rates and with δ. Several articles have also considered the Fano factor for the binned coincidence count, showing that the distribution may be different from a Poisson distribution, but to our knowledge, nothing has been done for the multiple shift coincidence count. In particular, Pipa et al. (2013) computed Fano factors for renewal processes only in the case of a coincidence count that is binned but not clipped. The multiple shift coincidence count and the binned clipped coincidence count are exactly the same when the size of the bin is h for the binned coincidence count and when d=0 in equation 2.2 or 2.4 for the multiple shift symmetric or asymmetric coincidence count. The fact that coincidence counts are clipped or not has almost no effect for very small resolution h. Since the delayed coincidence count is a generalization of the symmetric multiple shift coincidence count, it is logical that we recover results of the same flavor as the ones of Pipa et al. (2013), in the Poisson case, with δ = h/2 (i.e., equation 2.7 with d=0). Note, however, that both results are not equivalent since they are not based on the same notion of coincidence count. Using Poisson processes instead of Bernoulli processes allows us to produce such results for the generalization of the symmetric multiple shift coincidence count to the more general but not necessarily discretized point process case.

Note that for the asymmetric notion, one can also show that , when Xa is defined by equation 2.9. We will see later, on simulations, that Xa (discretized or not) is not Poisson either. Similar computations would lead to another gaussian approximation for Xa. However, we do not pursue them, and consequently do not correct the asymmetric UE procedure in this way. Indeed, the asymmetric notion can lead to very awkward results that depend on which spike train is referred to as N1 when testing the independence hypothesis on W, which is a symmetric statement (see Figure 11 for a practical example). Even if we corrected the approximation, these awkward conclusions would remain: they are intrinsic to the notion of asymmetric coincidence count itself.

We conclude this section with some simulations, both to underline the fact that the approximation of equation 2.14 is valid not only for Poisson processes in theory but also for Bernoulli processes in practice, and to show that the distributions of X and Xa are not Poisson in any case.

In Figure 3, the coincidence counts are symmetric. The three distributions—that of the delayed coincidence counts for Poisson processes with , that with , and that of the symmetric multiple shift coincidence counts for Bernoulli processes with resolution h—are almost indistinguishable. All three are very well approximated by the gaussian approximation of equation 2.14, and no distinction between or can be made when h=10−4 s. On the contrary, none of the Poisson distributions, with either mean Mmg (biased, edge effects neglected) or Mm0 (unbiased, edge effects taken into account), with or , fits the coincidence count distribution: the variance is larger than what the Poisson approximation predicts.

Figure 3:

Repartition of the symmetric total coincidence count (i.e., the sum of the coincidence counts over M trials). In all the experiments, Hz, M=40 trials, and a window W of length T=0.1 s are used. Bernoulli processes have been simulated with resolution h. We plotted histograms over 5000 runs of symmetric multiple shift coincidence count for Bernoulli processes (see equation 2.2) and of delayed coincidence count with or for Poisson processes (see definition 2). We also plotted densities of the corresponding gaussian approximation, as well as probability distribution functions of the corresponding Poisson approximation, with mean Mmg or Mm0, for the different choices of .


In Figure 4, the coincidence counts are asymmetric. It is once again clear that the asymmetric multiple shift coincidence count on Bernoulli processes is almost indistinguishable from its generalization on Poisson processes. It is also clear that they are not Poisson distributed, in particular, for large , even if mg correctly matches the mean, the difference being less obvious for small . Once again there is no real difference in considering or when h is small (h=10−4 s).

Figure 4:

Repartition of the asymmetric total coincidence count. In all the experiments, Hz, M=40 trials, and a window W of length T=0.1 s are used. Bernoulli processes have been simulated with resolution h. We plotted histograms over 5000 runs of the asymmetric multiple shift coincidence count (see equation 2.4) for Bernoulli processes and of Xa (see equation 2.9) with or for Poisson processes. We also plotted probability distribution functions of the corresponding Poisson approximation with mean Mmg for the different choices of .


As a summary of this section, note that:

  • Symmetric coincidence counts are much better suited to testing an independence hypothesis between N1 and N2 on a fixed window W, which is a symmetric statement.

  • It is equivalent to simulate Poisson or Bernoulli processes for coincidence counts (symmetric or not).

  • Symmetric multiple shift coincidence counts are distributed as delayed coincidence counts, the latter being the generalization of the former to general point-process theory. A similar correspondence exists for the asymmetric case.

  • In either case, the distribution is not Poisson.

  • In both cases (Poisson or Bernoulli processes), the gaussian approximation of equation 2.14 is valid for the symmetric notion on the classical set of parameters.

  • Edge effects need to be taken into account for large when dealing with the symmetric notion of coincidence count.

  • Considering or with h=10−4 s is completely equivalent.

Therefore, in the sequel, we use a delayed coincidence count with of type . Bernoulli processes are replaced by Poisson processes when necessary. When real data are considered, we use the resolution h=10−4 s, which is the machine resolution of the recorded spike trains.

3.  Statistical Study of the Independence Tests

Section 2 gives a probability result, namely, the gaussian approximation. Now we will see how this approximation can be turned into a fully operational statistical method. Two main points need to be taken into account. First, we do not know the value of m0 in practice and therefore need to plug an estimate in. How does this plug-in affect the distribution? Second, we usually consider several windows; therefore, several tests are performed at once. How can one guarantee a small false discovery rate for all the tests at once?

3.1.  Plug-In and Modification of the Gaussian Approximation.

Equation 2.14 depends on m0 and , which are unknown. Hence, to use the approximation in practice, we need to replace them with estimates based on the observations. This step is known in statistics as a plug-in step, and it can sometimes dramatically modify the distribution. One of the most famous examples is the gaussian distribution, which has to be replaced by a Student distribution when the variance is unknown and estimated from fewer than about 30 observations (Hogg & Tanis, 2009).
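This classical plug-in effect is easy to reproduce numerically. The following sketch (sample size and threshold chosen for illustration) standardizes the mean of n=5 gaussian observations by its estimated standard deviation and shows that the nominal two-sided 5% gaussian threshold is exceeded far more often than 5% of the time:

```python
import random, statistics

rng = random.Random(1)
n, runs, z = 5, 100_000, 1.96  # z is the two-sided 5% gaussian threshold

exceed = 0
for _ in range(runs):
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)      # plugged-in estimate of the std dev
    t = m / (s / n ** 0.5)            # studentized statistic
    exceed += abs(t) > z

rate = exceed / runs
print(f"P(|T| > 1.96) ~ {rate:.3f}")  # well above the nominal 0.05
```

With 4 degrees of freedom the true two-sided exceedance probability is about 12%, more than twice the nominal level, which is exactly the kind of distortion a plug-in step can introduce.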

3.1.1.  Theoretical Study.

In the present setup, as far as asymptotics in the number of trials M are concerned, the plug-in of an estimate of does not change the gaussian distribution, whereas the plug-in of an estimate of m0 changes the variance of the limit, as the following result shows:

Theorem 2. 
With the same notation as in theorem 1, let be the unbiased estimate of , the firing rate of neuron j, defined by
formula
3.1
Also let be an estimate of m0 defined by
formula
3.2
Then under the assumptions of theorem 1,
formula
3.3
where
formula
3.4
Moreover, v2 can be estimated by
formula
3.5
and
formula
3.6

The proof is given in the supplementary file available at http://www.mitpressjournals.org/doi/suppl/10.1162/NECO_a_00604.

Figure 5 illustrates the impact of the plug-in on the renormalized coincidence count distribution. When using plug-in, we need to renormalize the count, since at each run a new value of the estimate is drawn. Therefore, our reference in Figure 5 is the standard gaussian variable. First, we see that the gaussian approximation of equation 2.14 is still valid and, more important, that the plug-in steps of equations 3.3 and 3.6 are valid in Figure 5A. Instead of the new variance v2 and its estimate , we have also plugged in the original variance, , or a basic estimate of , namely,
formula
3.7
The result in Figure 5B clearly shows that this variance, or its plug-in, is wrong. Hence the plug-in correction needs to be taken into account. Figures 5C and 5D show what happens for the Poisson approximation of Grün et al. (1999). More precisely, Poisson variables with parameter are well approximated by as soon as is large enough, so the variables are renormalized accordingly so that they can be plotted on the same scale as the standard gaussian variables. The Poisson approximation with parameter Mmg, or its estimation with
formula
3.8
is clearly not satisfactory for the symmetric count (see Figure 5C), because of a bias toward the left due to the neglected edge effects. For the asymmetric count (see Figure 5D), while the variance is too large when using mg, the approximation is much more accurate when mg is replaced by .
Figure 5:

Repartition of the different renormalizations of the various coincidence counts. Each time, 5000 simulations of two independent Poisson processes with Hz, M=40 trials, a window W of length T=0.1 s and s are performed. (A) Histograms of , of and of with the average delayed coincidence count. (B) Histogram of and of with the average delayed coincidence count. (C) Histogram of and of with the average delayed coincidence count. (D) Same thing as in panel C but with , the average of Xa (see equation 2.9) the asymmetric coincidence count.


The gaussian approximation of the UE method, denoted GAUE in the following, given by theorem 2, leads to three different single tests depending on what needs to be detected.

Definition 3. 
The GAUE tests:
  • The symmetric test of H0: “N1 and N2 are independent” versus H1: “N1 and N2 are dependent,” which rejects H0 when and are too different:
    formula
    3.9

  • The unilateral test by upper value , which rejects H0 when is too large:
    formula
    3.10

  • The unilateral test by lower value , which rejects H0 when is too small:
    formula
    3.11
where zt is the t-quantile of , that is, the real number zt such that
formula
and where is the average delayed coincidence count (see definition 2).

By theorem 2, those three tests are asymptotically of type 1 error if the processes Nj are homogeneous Poisson processes. This means that under this assumption, the probability that the test rejects the independence hypothesis while the processes are actually independent tends to as the number of trials M tends to infinity.
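Schematically, and assuming the standardized statistic of theorem 2 takes the form sqrt(M)(mean − m̂0)/v̂ (the plug-in estimates m̂0 and v̂ are those of equations 3.2 and 3.5, passed here as precomputed arguments; all numerical values below are purely illustrative), the symmetric test can be sketched as:

```python
from statistics import NormalDist

def gaue_symmetric_test(mean_count, m0_hat, v_hat, M, alpha=0.05):
    """Two-sided GAUE-style test: reject independence when the standardized
    average coincidence count leaves the gaussian acceptance region.
    m0_hat, v_hat: plug-in estimates (equations 3.2 and 3.5 of the text)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)          # z_{1 - alpha/2}
    stat = (M ** 0.5) * (mean_count - m0_hat) / v_hat
    p_value = 2 * (1 - NormalDist().cdf(abs(stat)))  # symmetric p-value
    return stat, p_value, abs(stat) > z

stat, p, reject = gaue_symmetric_test(mean_count=6.0, m0_hat=4.75,
                                      v_hat=3.0, M=40, alpha=0.05)
print(f"stat={stat:.2f}  p={p:.3f}  reject={reject}")
```

The unilateral tests by upper and lower value follow by replacing the two-sided region with a one-sided threshold at z_{1−α} or z_{α}.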

The original UE multiple shift method of Grün et al. (1999) can be formalized in the same way:

Definition 4. 

The UE tests:

  • A symmetric test, which rejects H0 when

  • The unilateral test by upper value, which rejects H0 when

  • The unilateral test by lower value, which rejects H0 when

where qt is the t-quantile of a Poisson variable whose parameter is given by

Each of the previous tests exists in two versions (UEs or UEa, respectively) depending on whether is the average delayed coincidence count (symmetric notion; see equation 2.8) or the average Xa (asymmetric notion; see equation 2.9).

To our knowledge, no other operational method based on multiple shift coincidence count has been developed. In particular, the distribution-free methods such as trial-shuffling methods developed by Pipa and collaborators, which avoid plug-in problems, are based on binned coincidence count (Pipa & Grün, 2003; Pipa et al., 2003) and not on multiple shift coincidence count. Plug-in effects can also be avoided in another way, on binned coincidence count, by considering conditional distribution (Gütig et al., 2001).

3.1.2.  Simulation Study on One Window.

The simulation study is consequently restricted to the two previous tests, GAUE and UE, to focus on this notion of delayed or multiple shift coincidence count, which is drastically different from binned coincidence count.

Simulated Processes. Several types of processes have been simulated. The Poisson processes have already been described in section 2.2. They constitute a particular case of more general counting processes, called Hawkes processes, which can be simulated by thinning algorithms (Daley & Vere-Jones, 2003; Ogata, 1981; Reimer, Staude, Ehm, & Rotter, 2012). After a brief appearance in Chornoboy, Schramm, and Karr (1988), they have recently been used again to model spike trains in Krumin, Reutsky, and Shoham (2010) and Pernice, Staude, Cardanobile, and Rotter (2011, 2012). A bivariate Hawkes process (N1, N2) is described by its respective conditional intensities with respect to the past, . Informally, the quantity gives the probability that a new point of Nj appears in [t, t+dt] given the past. We refer the reader to Brown, Barbieri, Ventura, Kass, and Frank (2002) for a more precise definition. General bivariate Hawkes processes are given for all times t and all indexes in by
formula
3.12
The ’s are real parameters called the spontaneous parameters. The functions , representing self-interaction, and , representing the interaction of neuron i on neuron j, are functions with support in and are called the interaction functions. This equation means that before the first occurrence of a spike, the Nj’s behave like homogeneous Poisson processes with intensity . The first occurrence of a spike (and the next ones) affects all the processes by increasing or decreasing the conditional intensities via the interaction functions.

For instance, if takes large positive values in the neighborhood of the delay x=5 ms and is null elsewhere, then 5 ms after a spike in Ni, the probability of a new spike in Nj significantly increases: the process Ni excites the process Nj. On the contrary, if is negative around x, then 5 ms after a spike in Ni, the probability of a new spike in Nj significantly decreases: the process Ni inhibits the process Nj. Hawkes processes therefore enable us to model a lack of coincidences as well as a profusion of coincidences, depending on the sign of the interaction functions. The processes (N1, N2) in this Hawkes model are independent if and only if . Note also that the self-interaction functions , when very negative at short range, model refractory periods, making the Hawkes model more realistic than Poisson processes with respect to real data sets, even in the independence case. In particular, when , all the other interaction functions being null, such Hawkes processes are independent Poisson processes with dead time x (PPD), modeling strict refractory periods of length x (Reimer et al., 2012). Finally, like Poisson processes, Hawkes processes are not discretized at any resolution level.
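As an illustration of the thinning simulation mentioned above, here is a minimal univariate sketch with a single exponential self-interaction function αe^(−βu) (the kernel choice and all parameter values are hypothetical; a bivariate version would add the cross-interaction terms of equation 3.12 to each conditional intensity):

```python
import math, random

def hawkes_thinning(mu, alpha, beta, T, rng):
    """Ogata-style thinning for a self-exciting Hawkes process with
    conditional intensity lam(t) = mu + sum_i alpha * exp(-beta*(t - t_i)).
    Between events the intensity decays, so its current value is a valid
    upper bound for the thinning step."""
    points, t = [], 0.0
    while t < T:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in points)
        t += rng.expovariate(lam_bar)          # candidate from the bound
        if t >= T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in points)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lam(t)/bound
            points.append(t)
    return points

rng = random.Random(2)
pts = hawkes_thinning(mu=20.0, alpha=30.0, beta=100.0, T=1.0, rng=rng)
print(len(pts), "spikes")  # stationary mean rate mu/(1 - alpha/beta) = 28.6 Hz
```

The branching ratio α/β must stay below 1 for the process to be stable; inhibition (negative interaction functions) requires clipping the intensity at 0 and a looser upper bound.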

Another popular example is the injection model, which is discretized at resolution h and is used in Grün et al. (1999). Two independent Bernoulli processes and are generated with respective firing rates and . Then a third Bernoulli process Nc is generated with firing rate . A fourth point process is generated from Nc by independently moving each point of Nc by a random uniform shift in , for a prescribed nonnegative integer x. The two spike trains are then given by and (see Grün et al., 1999, for more details). This injection model can only produce a profusion of coincidences, not a lack of coincidences. (We refer interested readers to the supplementary file for a more precise correspondence of the parameters between the Hawkes and injection models.)
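A sketch of the injection mechanism just described (parameter values are hypothetical; spike trains are stored as boolean arrays of bins of size h, and each injected point is copied into the first train as-is and into the second train with a uniform jitter of at most x bins):

```python
import random

def bernoulli_train(rate, h, n_bins, rng):
    """Bernoulli process: each bin of size h spikes with probability rate*h."""
    return [rng.random() < rate * h for _ in range(n_bins)]

def injection_pair(rate, rate_c, h, n_bins, x, rng):
    """Injection model: two independent trains plus copies of a common
    'injected' train Nc, jittered on the second train, yielding a pair
    of correlated spike trains (N1, N2)."""
    n1 = bernoulli_train(rate, h, n_bins, rng)
    n2 = bernoulli_train(rate, h, n_bins, rng)
    nc = bernoulli_train(rate_c, h, n_bins, rng)
    for i, spike in enumerate(nc):
        if spike:
            n1[i] = True
            j = i + rng.randint(-x, x)         # uniform jitter of the copy
            if 0 <= j < n_bins:
                n2[j] = True
    return n1, n2

rng = random.Random(3)
n1, n2 = injection_pair(rate=20.0, rate_c=5.0, h=1e-4,
                        n_bins=1000, x=200, rng=rng)
print(sum(n1), sum(n2))
```

Every injected spike creates a near-coincidence within x bins, which is why this model can only add coincidences on top of the independent background, never remove them.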

Injection and Hawkes models are stationary, which means that their distribution does not change by shift in time (see Daley & Vere-Jones, 2003, for a more precise definition). This is also the case of homogeneous Poisson processes. One can also simulate inhomogeneous Poisson processes, which correspond to a conditional intensity , which is deterministic but not constant. These inhomogeneous Poisson processes are therefore nonstationary in time (see Daley & Vere-Jones, 2003, for more details).

Figure 6 gives the percentage of rejections over various numerical experiments that have been carried out on these simulated processes.

Figure 6:

Percentage of rejections of the independence hypothesis over 5000 numerical experiments on a window W=[0, T] of length 0.1 s, tests being built with . For UEa (asymmetric count), data for N2 have been simulated on the corresponding enlarged window . For the experiments under H0, the horizontal line corresponds to the expected level 0.05. Experiments A and B correspond to independent homogeneous Poisson processes with, respectively, Hz and Hz. Experiments C correspond to independent inhomogeneous Poisson processes with intensity . Experiments D and E correspond to Hawkes processes with Hz and with, respectively, and , the other interaction functions being null. Experiments F and G correspond to injection models with resolution h=10−4 s, Hz, x=200 and, respectively, and . Experiments H, I, J, and K correspond to Hawkes processes with Hz and only one nonzero interaction function given by, respectively, , , , and .


Type 1 Error. Since H0 refers to “N1 and N2 are independent on W” (or “N1 on W is independent of N2 on ” for UEa), we have simulated various situations of independence. Our theoretical work proves that the level is asymptotically guaranteed if the processes are homogeneous Poisson processes. Our aim is now twofold: first, check whether the level is controlled for a finite, relatively small number of trials (experiments A and B), and second, check whether it still holds when the processes are not homogeneous Poisson processes (experiments C, D, and E). Moreover, we want to compare our results to those of UEs and UEa. The upper-left part of Figure 6 shows that the three forms of GAUE (symmetric, upper, and lower value) guarantee a level of roughly 5%, even for a very small number of trials (M=20) with a very small firing rate ( Hz) or with large ( s). In this sense, it clearly extends the validity of the original UE method (UEa in the lower part of Figure 6), which is known to be inadequate for firing rates less than 7 Hz (Roy, Steinmetz, & Niebur, 2000), as one can see with experiments A. Note also that the level for GAUE, as well as for UEa, seems robust to changes in the model: nonstationarity for inhomogeneous Poisson processes (experiments C) and refractory periods when using Hawkes processes (experiments D and E). Note finally that the UEs method does not guarantee the correct level, except for the test by upper value, whose level is much smaller than 5%.

Power. Several dependence situations have been tested in the right part of Figure 6: GAUE tests by upper value adequately detect the profusion of coincidences induced by injection models (experiments F and G) or Hawkes models (experiments H and I). GAUE tests by lower value can, on the contrary, detect a lack of coincidences, simulated by inhibitory Hawkes processes (experiments J and K). Note, moreover, that symmetric GAUE tests can detect both situations. The same conclusions hold for both UE methods, the power being of the same order as that of the GAUE tests, except for the injection case with low (experiments F with and M=100), where GAUE is clearly better.

As a partial conclusion, the gaussian approximation of theorem 1 needs to be modified to take the plug-in effect into account. Once this modification is done (see theorem 2), the gaussian approximation leads to tests that are shown to be of asymptotic level . Our simulation study has shown (see Figure 6) that the MTGAUE type 1 error is of order 5% even for small firing rates and that those tests seem robust to variations in the model (nonstationarity, refractory periods). Moreover, symmetric GAUE tests are able to detect both profusion and lack of coincidences. Except for very small firing rates, where its level is not controlled, the original UE method (UEa) shares the same properties, whereas the level of UEs is not controlled in any case except for the test by upper values.

3.2.  Multiple Tests and False Discovery Rate.

Classical UE analysis (Grün, 1996; Grün et al., 1999) is performed on several windows, so that dependence regions can be detected through time. We want to produce the same kind of analysis with GAUE. However, since a test is by nature a random answer, the control of one test at level does not automatically induce a controlled number of false rejections over many tests.

Indeed, let us consider a collection of possibly overlapping windows W, with cardinality K, and to illustrate the problem, let us assume that we observe two independent homogeneous Poisson processes. Now let us perform any of the previous GAUE tests at level on each of the previous windows. Then, by linearity of the expectation, one has that
formula
3.13
Moreover, if L is the maximal number of disjoint windows in , then the probability that the K tests accept the independence hypothesis is upper-bounded by
formula
3.14
by independence of the test statistics between disjoint windows. This means that for large M, this procedure is doomed to reject on average tests and to reject at least one test with probability tending to one as L grows. Consequently, one cannot apply several tests at once without correcting for multiplicity. Ventura (2010) also underlined the problem of the multiplicity of the tests and proposed a procedure that is not as general as the one described here.
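The inflation described by equation 3.13 is easy to visualize: under the null hypothesis, p-values are uniform on [0, 1], so among K independent level-α tests, about αK reject by chance alone. A quick check (K and α are illustrative):

```python
import random

rng = random.Random(4)
K, alpha, runs = 200, 0.05, 500

false_rejections = []
for _ in range(runs):
    p_values = [rng.random() for _ in range(K)]   # uniform under the null
    false_rejections.append(sum(p < alpha for p in p_values))

avg = sum(false_rejections) / runs
print(f"average false rejections: {avg:.1f} (expected alpha*K = {alpha * K})")
```

With K = 200 windows, an uncorrected 5% level thus produces about 10 spurious detections per analysis, which is exactly what the multiplicity correction below is designed to avoid.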

3.2.1.  Multiple Testing Correction: A Benjamini and Hochberg Approach.

Let us denote by the test considered on the window W.

One way to control a multiple testing procedure based on the ’s is to control the so-called familywise error rate (FWER) (Hochberg & Tamhane, 1987), which consists in controlling
formula
3.15
This can be easily done by Bonferroni bounds:
formula
3.16
So Bonferroni's method (Holm, 1979) consists in applying each test at level instead of to guarantee an FWER less than . However, the smaller the type 1 error, the harder it is to reject. The rejected tests are usually called detections (or discoveries). So when K is large, Bonferroni's procedure potentially leads to no discovery at all, even in cases where dependence structures exist.

Another notion, popularized by Benjamini and Hochberg (1995), has consequently been introduced in the multiple testing literature, leading to a large number of publications in statistics, genomics, medicine, and other areas over the past 10 years (Benjamini, 2010). This is the false discovery rate (FDR). Actually, a false discovery (also called a false detection) is not that harmful as long as the number of false discoveries divided by the total number of discoveries is small. More formally, we use the notation given in Table 1.

Table 1:
Repartition of the Answers for the Multiple Testing Procedure.

Number of W such that | Accepts | Rejects | Total
Independence on W | Tnd = number of correct nondiscoveries | Fd = number of false discoveries | K0 = number of windows where independence is satisfied
Dependence on W | Fnd = number of false nondiscoveries | Td = number of correct discoveries | K1 = number of windows where dependence exists
Total | K − R | R = number of discoveries | K
Then the false discovery rate is defined by
formula
3.17
Note that when both spike trains are independent for all windows, K1=0, which leads to Td=0 and Fd=R. Hence, the FDR in the full independent case is also a control of , that is, the FWER. In all other cases, . This means that when there are some W for which the independence assumption does not hold, controlling the FDR is less stringent, whereas the relative confidence that we can have in the discoveries is still good: if we make 100 discoveries with an FDR of 5%, this means that on average, only 5 of those discoveries will be potentially wrong.
The question now is how to guarantee a small FDR. To do so, Benjamini and Hochberg (1995) proposed the following procedure: For each test , the corresponding p-value PW is computed. They are next ordered such that
formula
3.18
Let be a fixed upper bound that we desire on the FDR and define
formula
3.19
Then the discoveries of this BH method are given by the windows corresponding to the k smallest p-values.

The theoretical result of Benjamini and Hochberg (1995) can be translated into our framework as follows: if the p-values are uniformly and independently distributed under the null hypothesis, then the procedure guarantees an FDR less than q.
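The step-up rule of equations 3.18 and 3.19 can be implemented in a few lines (a generic sketch of the BH procedure, not the authors' R code):

```python
def benjamini_hochberg(p_values, q):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    rejected (discovered) tests at target FDR level q.  Finds the largest
    rank k such that the k-th smallest p-value is <= k*q/K."""
    K = len(p_values)
    order = sorted(range(K), key=lambda i: p_values[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / K:
            k = rank                    # largest rank passing the threshold
    return sorted(order[:k])

p = [0.001, 0.008, 0.039, 0.041, 0.09, 0.205, 0.41, 0.57, 0.75, 0.99]
print(benjamini_hochberg(p, q=0.05))  # → [0, 1]
```

With these illustrative p-values, only the two smallest pass their thresholds (0.001 ≤ 0.005 and 0.008 ≤ 0.010), so the two corresponding tests are the discoveries; a Bonferroni correction at level 0.05/10 = 0.005 would keep only the first.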

Now let us finish describing our method, named MTGAUE, for multiple tests based on a gaussian approximation of the unitary events, which is based on symmetric tests to detect both profusion and lack of coincidences.

Definition 5. 

MTGAUE:

  • For each W in the collection of possibly overlapping windows, compute the p-value of the symmetric GAUE test (see definition 3).

  • For a fixed parameter q, which controls the FDR, order the p-values according to equation 3.18 and find k satisfying equation 3.19.

  • Return as the set of detections the k windows corresponding to the k smallest p-values.

The corresponding program in R is available at math.unice.fr/~malot/liste-MTGAUE.html.

Note that in our case, the assumptions required by the Benjamini and Hochberg approach are not satisfied. Indeed, the tests are only asymptotically of type 1 error , which means that the p-values are uniformly distributed only asymptotically and not for fixed M. Therefore, there is a gap between the theory and what we have in practice. However, as we illustrate in simulations, this difference does not seem to have a significant impact on the FDR.

Moreover, to have independent p-values, we should have considered disjoint windows W. However, it is then possible to miss some detections, because a dependence region may be small and straddle two disjoint windows. It is therefore preferable to consider sliding windows that overlap. In theory, few results exist in this context (see Benjamini & Yekutieli, 2001). In practice, we will see in the next section that this lack of independence does not noticeably affect the FDR either.

Finally, note that as the FDR parameter q grows, there are more and more detections. Hence, if a window is detected for q=q0, it is detected for all . In other words, the set of detected windows is monotone, in terms of inclusion, as a function of q.

3.2.2.  Simulation Study.

In Figure 7, we see an example of detection by both methods, MTGAUE and UE, with symmetric tests. Note that UE is performed here without correction for the multiplicity of the tests, as in Grün et al. (1999), and therefore its parameter is , which should reflect the level of each individual test. MTGAUE clearly detects the relevant regions with very few false positives, even for large delays , and with a clear continuity in : once a region has been detected for a small , it generally remains detected for larger , until the imposed coincidences are diluted because is much too large. Therefore, there is usually an interval of possible around the actual maximal true interaction (here 0.02 s, whatever the model, injection or Hawkes) where the same window is detected. Note also that profusion of coincidences in the injection model (experiments G) or in the Hawkes model (experiments I) is usually detected and that the detected regions correspond to positive values of the difference . Conversely, negative interactions in the Hawkes model (experiments K), that is, a lack of coincidences, are also detected and correspond to negative values of , since they appear in Figure 7 but not in the positive-detections part. Both UE methods have a much larger number of false discoveries, except when considering the positive detections of UEs.

Figure 7:

Detections with symmetric tests of MTGAUE with q=0.05 and of both UEs and UEa methods with on one run with M=20. The positive detections of MTGAUE or UEs correspond to the detections for which for MTGAUE or for which for UE (see definition 4). Nineteen hundred single tests have been performed on 1900 overlapping (sliding) windows of length 0.1 s shifted by 0.001 s. The corresponding detections are marked by a point at the center of the windows. Each line corresponds to a different delay. Forty different delays from 0.001 s to 0.04 s are considered. Each time, two homogeneous independent Poisson processes are simulated on s with firing rates Hz. On [0.5, 0.7] s and [1.5, 1.6] s two dependent processes are simulated. Those dependent processes are simulated according to experiments G, I, or K (see Figure 6). The black vertical lines delimit the regions where the tests should detect a dependence, that is, each time the window W intersects a dependence region.


Figure 8 gives the FDR and the false nondiscovery rate. Clearly, MTGAUE ensures an FDR less than 5%, as expected from the choice of the parameter q=0.05, in all simulations. Moreover, the proportion of false nondiscoveries is relatively small (less than 30%) and clearly decreases as M and increase. Note also that even if the trials are not i.i.d. (case L), the method still guarantees a controlled FDR and a reasonable number of false negatives. Both UE methods have a large FDR, except the UEs method with tests by upper values and the UEa method with tests by lower values.

Figure 8:

Average over 5000 runs of the ratio Fd/R (see Table 1), giving the false discovery rate, and of the ratio Fnd/(K-R), giving the false nondiscovery rate. Each time, in the first three columns (G, I, K), two homogeneous independent Poisson processes are simulated. On [0.5, 0.7] s and [1.5, 1.6] s, two dependent processes are simulated. Those dependent processes are simulated according to experiments G, I, or K (see Figure 6). In the fourth column (L), 50% of the trials are simulated as for column I, 25% of the trials are similar to column I except that the dependence regions are slightly modified (now [0.55, 0.75] s and [1.45, 1.6] s), and 25% of the trials are pure independent Poisson processes. A window W is declared a false discovery (resp. false nondiscovery) if it is detected whereas it does not intersect a dependence region (resp. it is not detected whereas it does intersect one). The horizontal line refers to the expected rate of 5% for the false discovery rate (FDR).


4.  Real Data Study

Now that MTGAUE has been validated on simulated data, the method is applied to real data, some of which have already been partially published. This study shows that MTGAUE is able to detect phenomena in line with the timing of the experiment. Furthermore, the method reveals some novel aspects, completing the existing results on those data.

4.1.  Description of the Data.

4.1.1.  Behavioral Procedure.

The data used in this theoretical letter to test the detection ability of the MTGAUE method have been partially published in previous experimental studies (Riehle et al., 2000; Grammont & Riehle, 2003; Riehle, Grammont, & MacKay, 2006). These data were collected on a five-year-old male rhesus monkey trained to perform a delayed multidirectional pointing task. The animal sat in a primate chair in front of a vertical panel on which seven touch-sensitive light-emitting diodes were mounted: one in the center and six placed equidistantly (60 degrees apart) on a circle around it. The monkey had to initiate a trial by touching the central target and then holding it with the left hand. After a fixed delay of 500 ms, the preparatory signal (PS) was presented by illuminating one of the six peripheral targets in green. After a delay of either 600 ms or 1200 ms, selected at random with varying probability, the target turned red, serving as the response signal and pointing target. During the first part of the delay, the probability presp for the response signal to occur at (500 + 600) ms = 1.1 s was either 0.3, 0.5, or 0.7, depending on the experimental condition. Once this moment passed without signal occurrence, the conditional probability for the signal to occur at (500 + 600 + 600) ms = 1.7 s changed to 1. The monkey was rewarded with a drop of juice after each correct trial. Reaction time (RT) was measured at the release of the central target, and movement time (MT) at the touching of the correct peripheral target.

4.1.2.  Recording Technique.

Signals recorded from up to seven microelectrodes (quartz-insulated platinum-tungsten electrodes, impedance: 2–5 MΩ at 1000 Hz) were amplified and bandpass-filtered from 300 Hz to 10 kHz. Using a window discriminator, spikes from only one single neuron per electrode were isolated. Neuronal data, along with behavioral events (occurrences of signals and performance of the animal), were stored on a PC for off-line analysis with a time resolution of 10 kHz.

In the following study, only trials where the response signal occurs at 1.7 s are considered. When presp=0.3 (resp. 0.5 or 0.7), the corresponding data are called Data30 (resp. Data50 and Data70). Respectively 43, 34, and 27 pairs of neurons have been recorded in Data30, Data50, and Data70.

Assuming that the synchrony depends only on the time of signal occurrences and not on the movement direction, we test for the independence of neuron pairs in a pooled fashion over all directions of movement, as was already done, in a different way, in the previous study (Grammont & Riehle, 2003). Note that a study with respect to the movement directions could also have been done if data with fewer than 20 trials per direction were discarded, but then Data70 would have almost completely disappeared. Note also that the small heterogeneity in the data, as shown with case L of Figure 8, does not affect the method. Moreover, we did not discard pairs of neurons whose firing rate is smaller than 7 Hz (Roy et al., 2000), as done by Riehle et al. (2000) or Grammont and Riehle (2003), because even in this case the MTGAUE detections can be trusted (see Figure 6).

4.2.  Symmetric Detections.

Before analyzing the three data sets, we focus on some examples to underline the main difference between MTGAUE and both UE methods. MTGAUE (see definition 5) has been applied to the activity of two pairs of neurons (pairs 13 and 40 of Data30), recorded during the experiment described in section 4.1, for various choices of the FDR control parameter q. Both multiple-shift UE methods with symmetric tests have also been applied. The results are displayed in Figure 9.

Figure 9:

Representation of the detections of the three methods (MTGAUE, UEs, UEa) as a function of the FDR control parameter q of the MTGAUE method, performed over sliding windows of length 0.1 s shifted by 0.001 s on pairs 13 and 40 of Data30. Each line corresponds to a different value of the delay, from 0.001 s to 0.04 s. Each window is associated with a colored point at the center of the window, whose color depends on the value of the parameter q in the MTGAUE method for which this window is detected. For the UEs and UEa methods, the colored point exists if and only if the corresponding window was detected by the method with symmetric tests. For MTGAUE, all the colored points (i.e., all the windows) are represented. The first black vertical bar corresponds to the preparatory signal (PS), the blue vertical bar to the expected signal (ES), and the second black vertical bar to the response signal (RS). The first hatched box corresponds to the interval [mean reaction time (RT) minus its standard deviation, mean reaction time (RT) plus its standard deviation]; the second hatched box corresponds to the same interval for the movement time (MT).


The black points correspond to MTGAUE detections with q=0.01; consequently, at most 1% of those detections are false discoveries on average. These are the safest detections. Because of the monotonicity in terms of q, the classical detections with q=0.05 correspond to the union of black and magenta points. At the other extreme, yellow circles correspond to untrusted detections, since they are detected only for an FDR control parameter q strictly larger than 80%. In this respect, in Figure 9, several periods are detected by MTGAUE that correlate with the occurrence of specific events of the behavioral protocol (see also Figure 12 in this respect). Interestingly, there is no real gain in looking at the detections for q=0.05: most of them are already detected for q=0.01 for larger delays, and therefore these detections are significant. Both UE methods with symmetric tests detect several intervals corresponding to q>0.8, and therefore those are untrusted (yellow) detections for the MTGAUE method.
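The monotonicity in q reflects the fact that each window can be assigned the smallest FDR level at which the Benjamini and Hochberg procedure would detect it (its BH-adjusted p-value), which is exactly what the color scale encodes. A standard computation of these adjusted p-values might look as follows (a sketch with names of our choosing, not code from the paper):

```python
import numpy as np

def bh_adjusted_pvalues(p_values):
    """Smallest FDR level q at which each test is detected by the
    Benjamini-Hochberg step-up procedure."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # p_(k) * m / k for the sorted p-values ...
    scaled = p[order] * m / np.arange(1, m + 1)
    # ... made monotone nonincreasing from the largest p-value downward.
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adjusted, 1.0)
    return out
```

A window colored for q=0.01 is then simply one whose adjusted p-value is at most 0.01, and the set of detected windows can only grow as q increases.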

4.3.  Is the Count Significantly Too Low or Too Large?

For single tests, a symmetric detection at a given level corresponds to a detection by either the test by upper values or the test by lower values at that level, for both the UE and GAUE methods (see definitions 3 and 4). With a collection of tests, this result is still valid for the UE method, which does not correct for multiplicity. However, it is no longer valid for MTGAUE (see definition 5), since this method is based on the ranks of all the p-values of all the symmetric tests. Indeed, the set of considered p-values corresponds to a set of test statistics whose positive and negative values are mixed and whose ranks depend only on their absolute values. The Benjamini and Hochberg procedure consequently intertwines positive and negative detections (i.e., detections for which the test statistic is positive or negative). Since both kinds of detections (upper and lower values) are of interest on the experimental data, it is meaningless to independently perform tests by upper values and then tests by lower values. The correct way to use the method is to perform MTGAUE with the symmetric tests, as set out in definition 5. Once a window is detected, one can then ask whether this detection was due to a low count or a large count by looking at the sign of the test statistic, which is coherent with Figure 7.
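To make the order of operations explicit, here is a minimal sketch (our own naming and conventions, not the paper's code) of the recommended usage: one single Benjamini-Hochberg pass over the symmetric p-values, followed by a sign split of the detected windows:

```python
import numpy as np

def symmetric_detections(stats, p_values, q):
    """Benjamini-Hochberg step-up applied once to the two-sided p-values;
    the sign of the test statistic then labels each detected window as
    positive (too many coincidences) or negative (too few)."""
    p = np.asarray(p_values, dtype=float)
    stats = np.asarray(stats, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    if not passed.any():
        return [], []
    # Step-up: reject all hypotheses up to the largest passing rank.
    detected = order[: np.max(np.nonzero(passed)[0]) + 1]
    positive = sorted(i for i in detected if stats[i] > 0)
    negative = sorted(i for i in detected if stats[i] < 0)
    return positive, negative
```

Running the upper-value and lower-value tests separately through the procedure would instead break the rank-based intertwining described above.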

Figures 10 and 11 show the detections according to their signs. In Figure 10, the UEs positive detections are more or less included in those of MTGAUE, and the detections made by UEs but not by MTGAUE are mostly negative detections, both facts being coherent with Figure 7. In Figure 11, UEa detects more coincidences than MTGAUE, a fact that is also coherent with Figure 7. More importantly, we see here on real data that exchanging the roles of N1 and N2 can drastically change the conclusion: whole regions are detected or not depending on which spike train is restricted to the family of windows W and which spike train is allowed to be in the family of enlarged windows. Note that those regions are not present in the MTGAUE detections (see Figure 10).

Figure 10:

Detections according to the sign of the test statistics for MTGAUE and UEs (symmetric count). Both methods have been run on pairs 13 and 40, over sliding windows of length 0.1 s shifted by 0.001 s, with symmetric tests. Lines correspond to trials, the bottom half corresponding to N1 and the upper half to N2. Points correspond to spikes; by default, their color is gray. Couples of points (x, y) with a delay less than δ are in red if they belong to a window that is a positive detection by the considered method, and in blue if they belong to a window that is a negative detection by the considered method.


Figure 11:

Detections according to the sign of the test statistics for UEa, the original UE method. Since the coincidence count is not symmetric in N1/N2 (see equation 2.9), the method has been run twice on pairs 13 and 40, over sliding windows of length 0.1 s shifted by 0.001 s, with symmetric tests (see definition 4). The first time, in the asymmetric count Xa (see equation 2.9), points of N1 belong to W and points of N2 belong to the enlarged window. The second time, the roles of N1 and N2 have been exchanged: points of N2 belong to W and points of N1 belong to the enlarged window in the coincidence count Xa. Lines correspond to trials, with the bottom half corresponding to N1 and the upper half to N2. Points correspond to spikes; by default, their color is gray. In the first case, couples of points (x, y) with a delay less than δ are in red if the point x of N1 belongs to a window W that is a positive detection (the point y of N2 belonging to the enlarged window), and in blue if the point x of N1 belongs to a window W that is a negative detection. In the second case, the roles of N1 and N2 have been exchanged.


4.4.  Aggregation of the Results Obtained from Several Recording Sessions.

We run the MTGAUE method over the three data sets: Data30, Data50, and Data70. The main point is to see the influence of presp, the probability of occurrence of RS at 1.1 s, on the synchronized activity. Note that although this pool of data has already been published (Grammont & Riehle, 2003), there has been no systematic analysis of the impact of this varying probability. Nor has there been any systematic analysis of the proportion of windows showing a lack of coincidences.

MTGAUE has been performed on each pair of neurons, leading to the positive (resp. negative) detections of windows, that is, detections for which the test statistic is positive (resp. negative), meaning that on those windows this pair is significantly synchronized (resp. antisynchronized). Then an aggregation of the detections over all pairs of each data set was performed. The proportions of these significantly synchronized or antisynchronized pairs are displayed in Figure 12. As already observed in Grammont and Riehle (2003), and as a confirmation of our approach, the maximal proportions of significantly synchronized pairs of neurons occur in correlation with the occurrence of the different behavioral events of the task (preparatory, expected, and response signals). Note that the significant synchronized activity observed before the preparatory signal can be explained by the fact that this signal always occurred after a fixed delay of 500 ms, which could perfectly be anticipated by the monkey. The same holds when the response signal finally occurs 1200 ms after the preparatory signal.

Figure 12:

Proportion of positive and negative detections, that is, of significantly synchronized or antisynchronized pairs of neurons, using the MTGAUE method (sliding windows of length 0.1 s shifted by 0.001 s, q=0.05) through time. Each line represents a different delay parameter. PS was presented at 0.5 s (first vertical black line) and RS at 1.7 s (third vertical black line). Even if those trials are not used in the average, a response signal at 1.1 s (ES = second vertical black line) was also presented to the animal in 30% of the cases for Data30, in 50% of the cases for Data50, and in 70% of the cases for Data70.


A huge proportion of detections (up to 40% of the pairs) are due to a coincidence count that is significantly too low, whereas at most 20% of the pairs have a coincidence count that is significantly too large. In other words, at a given moment of the task, a maximum of about 20% of the pairs can be significantly synchronized and a maximum of 40% of them can be antisynchronized. The fact that a larger proportion of pairs can be antisynchronized at a given moment of the task remains to be explained in neurophysiological terms and is beyond the scope of this letter. As far as we know, although antisynchronization can technically be studied more or less rigorously by most analysis methods in this domain, no systematic study has been performed on this phenomenon from a more neurophysiological point of view. We know of only a few examples where such antisynchronizations are mentioned: in Grammont and Riehle (1999), with the classical UE method based on the binned coincidence count (Grün, 1996), and in Riehle et al. (2000), with the UE method based on the multiple shift coincidence count (Grün et al., 1999):

  • Significantly too large coincidence count. The proportion of significantly synchronized pairs in Figure 12 is much higher for Data30, showing two maxima, just after the expected signal (ES) and the response signal (RS). When the probability of ES increases (Data50 and Data70), those maxima still appear, but in a lower proportion. Therefore, it appears with this new analysis that the proportion of synchronized pairs tends to diminish when the probability of RS at 1.1 s increases. Moreover, when the response signal at 1.1 s is much more likely than the response signal at 1.7 s (in Data70), those maxima are shifted in time, as if the animal continued to expect the signal for a longer time.

  • Significantly too low coincidence count. In Figure 12, it appears that all three data sets have roughly the same maximal proportion of pairs that are significantly antisynchronized. The main difference comes from the localization and the fuzziness of these proportions. The maxima are large and strong in Data30, in particular around the preparatory and expected signals. In Data70, the only localizations reaching such high scores are just before and after ES. In Data50, the signal is fuzzier, and it is harder to give a rigorous interpretation.

5.  Discussion

We have presented a generalization, the delayed coincidence count, of the notion of multiple shift coincidence count (Grün et al., 1999) to point processes that are not discretized at a given resolution.

The multiple shift notion is already known to clearly outperform the classical binned coincidence count, since there is, in particular, no loss in synchrony detection due to binning effects, as already pointed out in Grün et al. (1999). However, this notion is not as popular as the binned coincidence count, mainly because its statistical properties remained much more opaque until this work. Actually, the multiple shift coincidence count can be interpreted in two ways: symmetric and asymmetric (see Figure 1). But when testing a symmetric hypothesis (the independence of both spike trains), the use of an asymmetric notion (the original version programmed in the UE method) leads to answers that depend on which spike train is referred to as N1 (see Figure 11 for an example on real data). Therefore, we focused in this letter on the symmetric notion of the multiple shift coincidence count to prevent this effect. The delayed coincidence count introduced in this letter is the generalization of the symmetric multiple shift coincidence count to the nondiscretized point-process framework. This notion allows us to perform complete computations when Poisson processes are involved. It consequently also reveals some basic properties of the (symmetric) multiple shift coincidence count for Bernoulli processes, this model being very well approximated by Poisson processes, as proved in section 2.3.
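As a concrete illustration, the symmetric delayed coincidence count over a window reduces to counting pairs of spikes, one per train, at a distance of at most δ. The following sketch is our own (the function name is hypothetical, spike trains are represented as arrays of spike times already restricted to the same window, and edge effects are ignored):

```python
import numpy as np

def delayed_coincidence_count(n1, n2, delta):
    """Number of pairs (x, y), x in n1 and y in n2, with |x - y| <= delta.

    Both trains are assumed to be already restricted to the same
    observation window W.
    """
    n1 = np.asarray(n1, dtype=float)
    n2 = np.sort(np.asarray(n2, dtype=float))
    # For each x in n1, count the spikes of n2 in [x - delta, x + delta].
    lo = np.searchsorted(n2, n1 - delta, side="left")
    hi = np.searchsorted(n2, n1 + delta, side="right")
    return int(np.sum(hi - lo))
```

By construction, the count is symmetric: exchanging n1 and n2 leaves the result unchanged.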

The first novel property is that the multiple shift coincidence count (symmetric or not), as well as the delayed coincidence count, is not Poisson distributed but more dispersed when both spike trains are independent homogeneous Poisson processes. The Fano factor, computed in equation 2.16 for the delayed coincidence count and Poisson processes, is strictly larger than 1, and this gap tends to increase with the firing rate and the delay δ. In Grün et al. (1999), the distribution of the (asymmetric) multiple shift coincidence count is approximated by a Poisson distribution with mean mg (see equation 2.6 or 2.10). However, Figures 3 and 4 show that neither the symmetric nor the asymmetric multiple shift coincidence count is Poisson, whereas the gaussian approximation derived in theorem 1 correctly approximates the symmetric multiple shift coincidence count when Poisson or Bernoulli processes are involved. Moreover, when considering the symmetric multiple shift coincidence count, an edge effect in the mean (m0 and not mg; see equation 2.12) needs to be taken into account (see again Figure 3 and also theorem 1). For the asymmetric multiple shift coincidence count, the mean, but not the variance, is correctly predicted by mg.
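This overdispersion is easy to reproduce numerically. The following Monte Carlo sketch (our own; the rate, window length, delay, and number of trials are arbitrary choices, not the paper's settings) estimates the Fano factor of the delayed coincidence count for two independent homogeneous Poisson processes:

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_count(n1, n2, delta):
    # Pairs (x, y) with |x - y| <= delta, both trains in the same window.
    n2 = np.sort(n2)
    lo = np.searchsorted(n2, n1 - delta, side="left")
    hi = np.searchsorted(n2, n1 + delta, side="right")
    return int(np.sum(hi - lo))

def poisson_train(rate, t_max):
    # Homogeneous Poisson process on [0, t_max].
    n_spikes = rng.poisson(rate * t_max)
    return np.sort(rng.uniform(0.0, t_max, size=n_spikes))

rate, t_max, delta, n_trials = 50.0, 0.1, 0.01, 2000  # arbitrary settings
counts = [delayed_count(poisson_train(rate, t_max),
                        poisson_train(rate, t_max), delta)
          for _ in range(n_trials)]
fano = np.var(counts) / np.mean(counts)  # variance-to-mean ratio
```

With these settings, the empirical Fano factor comes out well above 1, the value that a Poisson-distributed count would give.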

Those properties would be useless if they could not be turned into a statistical test. To do so, we investigated the plug-in step, showing that a modification of the variance needs to be taken into account (see theorem 2). Our GAUE (gaussian approximation of the unitary events) tests (see also definition 3) are therefore proved to be of asymptotic level α when homogeneous Poisson processes are considered as models of the spike trains. They are also proved to be robust on simulations where small firing rates, nonstationarity, or refractory periods have been imposed, even for the test by a lower value, which is able to detect a lack of coincidences (see Figure 6). We also see that for large enough firing rates, the three original UEa tests of Grün et al. (1999), based on the asymmetric count (see also definition 4), have a controlled type 1 error. This is the range of parameters where the original UE method with the multiple shift coincidence count, based on a Poissonian approximation, can be used. For very small firing rates, the type 1 errors of all UE tests (with symmetric or asymmetric count) are huge, a fact that is consistent with Roy et al. (2000; see Figure 6, experiments A). For large δ and large firing rates, the UEs test by upper values (with symmetric count) becomes too conservative (see Figure 6, experiments B). On the contrary, both other UEs tests (symmetric or by lower values) have a very large type 1 error (more than 20%), which can increase with the number of trials M. This is consistent with Figure 5, where we clearly see that the Poisson approximation of the asymmetric count is quite accurate but that the Poisson approximation of the symmetric count is not correctly centered, making the UEs test by upper values more conservative but the other UEs tests (symmetric and by lower values) untrustworthy.
In this sense, even if the UEs tests (symmetric or by lower values) have a larger power than the GAUE tests for detecting a lack of coincidences (see Figure 6, experiments J and K), they cannot be preferred to the GAUE tests, since there is no control of their type 1 error. Finally, note that all UEa tests and the UEs test by upper values have a controlled level when nonstationary processes or refractory periods with small δ are tested (experiments C, D, and E). But both UEs tests (symmetric and by lower values) with large δ have a type 1 error that clearly becomes huge when M, the number of trials, grows and when refractory periods are present (experiments D and E).

Two other methods in the literature do not need plug-in steps. Conditional probability computations can be performed on a binned coincidence count (Gütig et al., 2001), but it will be, in our opinion, very difficult to transfer those computations to the delayed coincidence count, since the use of independent Bernoulli variables is fundamental there. Trial-shuffling methods (Pipa et al., 2003; Pipa & Grün, 2003) have also been used for the binned coincidence count. This kind of resampling method could be mathematically adapted to the delayed coincidence count and could furnish a prescribed type 1 error control for the resulting test. However, the combinatorics of the delayed coincidence count is much more complex than that of the binned coincidence count. Therefore, a first step in this direction would be to develop a fast algorithm for the delayed coincidence count. As a second step, classical parallel programming could make the trial-shuffling procedure combined with the delayed coincidence count effective in practice (Denker, Wiebelt, Fliegner, Diesmann, & Morrison, 2010). More generally, many other surrogate data methods exist (see Louis, Gerstein, Grün, & Diesmann, 2010, for dithering, or Grün, 2009, for a more general review). In our opinion, it would be decisive, as future work, to find a procedure that combines an effective surrogate data method of this kind with a prescribed type 1 error control.
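To fix ideas, such a trial-shuffling surrogate for the delayed coincidence count could be organized as follows. This is purely a sketch of the idea, not an implementation from the cited works; names and conventions are ours, and the random permutation may occasionally leave a trial paired with itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def delayed_count(n1, n2, delta):
    # Pairs (x, y) with |x - y| <= delta.
    n2 = np.sort(n2)
    lo = np.searchsorted(n2, n1 - delta, side="left")
    hi = np.searchsorted(n2, n1 + delta, side="right")
    return int(np.sum(hi - lo))

def trial_shuffle_pvalue(trials1, trials2, delta, n_surrogates=200):
    """Upper-value p-value of the summed delayed coincidence count,
    against surrogates pairing N1 trials with permuted N2 trials."""
    m = len(trials1)
    observed = sum(delayed_count(trials1[i], trials2[i], delta)
                   for i in range(m))
    exceed = 0
    for _ in range(n_surrogates):
        perm = rng.permutation(m)
        surrogate = sum(delayed_count(trials1[i], trials2[perm[i]], delta)
                        for i in range(m))
        exceed += surrogate >= observed
    return (1 + exceed) / (1 + n_surrogates)
```

The cost is one full count per surrogate and per trial, which is why a fast delayed-count algorithm and parallelization are the natural first steps mentioned above.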

The final step of our procedure is the use of Benjamini and Hochberg's (1995) method to control the false discovery rate (FDR) by the choice of a parameter q. Our method, MTGAUE, is shown on various simulations to guarantee the prescribed FDR (see Figure 8). On the contrary, the classical UE method is not corrected for multiplicity. Simulations show that this choice leads to a huge FDR, sometimes as high as 80%. The UE method is trustworthy only for the UEs tests by upper values with large delays and for the UEa tests by lower values with very large M (M=100) and large delays. This is consistent with the visualization of both methods on just one run (see Figure 7). Clearly, for large δ, the number of false discoveries by the symmetric UE method (UEs or UEa) is huge, whereas the detections by upper values of UEs are globally adequate, with a tendency to be less accurate for small δ. This phenomenon can be explained as follows. For large δ and reasonable firing rates, the UEs tests by upper values are too conservative (see experiments B of Figure 6) because the edge effects have not been taken into account. Therefore, the fact that the method has not been corrected for the multiplicity of tests somehow compensates for its conservativeness, leading in this situation to a reasonable method. But this happens only for this range of parameters and only for the detection of a profusion of coincidences. In the same range of parameters, the original method UEa with tests by lower values also seems reasonable: its FDR is still of the order of 5%, even if the FDR of UEs with tests by upper values is much smaller. In all other situations (for the symmetric or the original asymmetric count), there is no guarantee in terms of FDR for the multiple shift UE method of Grün et al. (1999).

To summarize those simulations: MTGAUE, which relies on a theoretical approximation of the distribution of the coincidence count by a gaussian distribution, seems to be robust with respect to small firing rates, nonstationarity in time, refractory periods, and nonergodicity across trials. It guarantees a control of the FDR by the choice of the parameter q. This method is also able to detect both profusion and lack of coincidences. It therefore clearly extends the range of situations where the original multiple-shift UE method applies, the latter being trustworthy only for delays of the order of 0.02 s (see Figure 8), reasonable firing rates (at least 7 Hz; see Roy et al., 2000), and detections when the coincidence count is significantly too large for UEs (symmetric count) or significantly too low for UEa (asymmetric count).

Therefore, when MTGAUE is applied on real data (already partly published in Grammont & Riehle, 2003), it can detect lack of coincidences (see Figures 10 and 12), whereas the detections of UE cannot in general be trusted in this range of parameters. This leads to new insights into these already published data. Note that on the same data, the symmetric UE method (with symmetric or asymmetric count) declares as statistically significant regions for which the parameter q of MTGAUE needs to be larger than 80% (see Figure 9). In accordance with our observations on the previous simulation studies, those detections are quite likely to be false discoveries. The fact that the previous published results appear to be nicely coherent in neurophysiological, cognitive, and behavioral terms is probably due to the fact that the biological phenomenon was sufficiently strong and the data aggregated to provide some averaged results, reducing in practice the hypersensitivity of the original UE method, a feature that leads on simulations to high FDR (see Figure 8).

MTGAUE also allows studying coincidences for values of δ up to T/2, where T is the size of the considered window (see theorem 1, for instance). Here we have stopped the study at δ=0.04 s, whereas T=0.1 s, because the results were about the same on [0.041, 0.049] s with a larger computational time. However, there is no absolute criterion from the neurophysiological point of view to determine the relevant δ. It remains an exploratory question. It depends mainly on the patterns of local and global connectivity of the specific cerebral structure under study, on both the structural and functional levels. In previous studies, delays only up to 0.02 s were generally studied, mainly because of the limitations of the UE method (Grammont & Riehle, 2003). MTGAUE allows studying coincidences with larger δ and thus exploring potentially more complex patterns of synchrony.

Note finally that once MTGAUE has been performed, one can use the sign of the test statistics to classify the detections into two types: positive detections, which reveal a profusion of coincidences, and negative detections, which reveal a lack of coincidences. Figure 10 shows that the UEs method gives roughly the same trustworthy positive detections as MTGAUE, whereas its negative detections are likely to be false detections because of the large FDR in this case (see Figures 8 and 9). Figure 11, combined with Figure 9, also shows that the original UEa method can detect additional windows that correspond to a large FDR parameter q. More importantly, it shows that the original UEa method does not give the same answer when the roles of the first and second spike trains are exchanged. This is due to the fact that the coincidence count is in this case asymmetric, and even if the method could be corrected for multiplicity, it would never avoid this potential lack of symmetry in the answers.

In conclusion, it was already known that the UE method with multiple-shift coincidence count (Grün et al., 1999) avoids the loss in synchrony detection suffered by the classical UE method based on the binned coincidence count (Grün, 1996). In this letter, we show that MTGAUE notably extends the properties of this UE method with multiple-shift coincidence count thanks to its clear control of the false discovery rate, its robustness, and its ability to detect both lack and profusion of coincidences for a large set of parameters (delays, number of trials, firing rates). Moreover, and contrary to the original multiple-shift method (UEa), the MTGAUE answer remains identical when exchanging the roles of both spike trains, since it relies on a symmetric test statistic, the delayed coincidence count.

In terms of perspective, it should be possible to extend the use of the delayed coincidence count to the analysis of more than two neurons at a time, as has already been done for binned coincidences (see Grün, 1996; Grün et al., 2010). This would allow a more direct study of neuronal assemblies. Adaptations of trial shuffling or other surrogate data methods should also be considered. However, we think that pure testing procedures do not give fully satisfying answers and that it would be legitimate to provide an estimation of the dependence, notably through the Hawkes model (Krumin et al., 2010). It is already known that this model can easily deal with more than two neurons (Daley & Vere-Jones, 2003; Pernice et al., 2011, 2012; Chornoboy et al., 1988). However, it is only in a recent work that we have proposed theoretical statistical methods to deal with several neurons and large delays of interaction through lasso methods (Hansen, Reynaud-Bouret, & Rivoirard, 2013). We aim to generalize those results to data that are nonstationary in time.

Acknowledgments

We first thank Alexa Riehle, leader of the laboratory in which the data used in this article were previously collected. We also thank Y. Bouret and F. Picard for fruitful discussions and programming help at several stages of this work. Finally, we thank two anonymous referees for their thorough work on our manuscript and their helpful comments. This research is partly supported by the French Agence Nationale de la Recherche (ANR 2011 BS01 010 01 projet Calibration), the PEPS BMI 2012-2013 (Estimation of Dependence Graphs for Thalamo-Cortical Neurons and Multivariate Hawkes Processes), and the PEPS BMI 2012-2013 NeuroConf.

C.T.M. is the main contributor to the applied method, its simulation, and the treatment of the real data sets. P.R.B. originated the theoretical method, contributed to the applied study, and did most of the writing. A.R. is a predoctoral student who ran most of the preliminary study during her internship. F.G. initiated the project by introducing C.T.M., P.R.B., and A.R. to the neuroscientific issues and to the informal notion of coincidence; he also provided insight into the real data sets that he recorded in Alexa Riehle's laboratory.

References

Abeles, M. (1982). Quantification, smoothing, and confidence limits for single-unit histograms. Journal of Neuroscience Methods, 5(4), 317–325.
Abeles, M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge: Cambridge University Press.
Abeles, M., & Gat, I. (2001). Detecting precise firing sequences in experimental data. Journal of Neuroscience Methods, 107, 141–154.
Aertsen, A. M., Gerstein, G. L., Habib, M. K., & Palm, G. (1989). Dynamics of neuronal firing correlation: Modulation of "effective connectivity." Journal of Neurophysiology, 61(5), 900–917.
Benjamini, Y. (2010). Discovering the false discovery rate. Journal of the Royal Statistical Society, Series B, 72(4), 405–416.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1), 289–300.
Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4), 1165–1188.
Boucsein, C., Nawrot, M. P., Schnepel, P., & Aertsen, A. M. (2011). Beyond the cortical column: Abundance and physiology of horizontal connections imply a strong role for inputs from the surround. Frontiers in Neuroscience, 5, 32.
Brette, R. (2012). Computing with neural synchrony. PLoS Computational Biology, 8(6), e1002561.
Brown, E., Barbieri, R., Ventura, V., Kass, R., & Frank, L. (2002). The time rescaling theorem and its application to neural spike train analysis. Neural Computation, 14(2), 325–346.
Chornoboy, E. S., Schramm, L. P., & Karr, A. F. (1988). Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59, 265–275.
Daley, D. J., & Vere-Jones, D. (2003). An introduction to the theory of point processes. New York: Springer.
Denker, M., Wiebelt, B., Fliegner, D., Diesmann, M., & Morrison, A. (2010). Practically trivial parallel data processing in a neuroscience laboratory. In S. Grün & S. Rotter (Eds.), Analysis of parallel spike trains. New York: Springer.
Diesmann, M., Gewaltig, M. O., & Aertsen, A. M. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402, 529–533.
Engel, A. K., & Singer, W. (2001). Temporal binding and the neural correlates of sensory awareness. Trends in Cognitive Sciences, 5(1), 16–25.
Eyherabide, H. G., Rokem, A., Herz, A. V. M., & Samengo, I. (2009). Bursts generate a non-reducible spike-pattern code. Frontiers in Neuroscience, 3(1), 8–14.
Gerstein, G. L. (2004). Searching for significance in spatio-temporal firing patterns. Acta Neurobiologiae Experimentalis, 64, 203–207.
Gerstein, G. L., & Aertsen, A. M. (1985). Representation of cooperative firing activity among simultaneously recorded neurons. Journal of Neurophysiology, 54(6), 1513–1528.
Gerstein, G. L., & Perkel, D. H. (1969). Simultaneously recorded trains of action potentials: Analysis and functional interpretation. Science, 164, 828–830.
Goedeke, S., & Diesmann, M. (2008). The mechanism of synchronization in feedforward neuronal networks. New Journal of Physics, 10, 015007.
Golomb, D., & Hansel, D. (2000). The number of synaptic inputs and the synchrony of large sparse neuronal networks. Neural Computation, 12(5), 1095–1139.
Grammont, F., & Riehle, A. (1999). Precise spike synchronization in monkey motor cortex involved in preparation for movement. Experimental Brain Research, 128(1–2), 118–122.
Grammont, F., & Riehle, A. (2003). Spike synchronisation and firing rate in a population of motor cortical neurons in relation to movement direction and reaction time. Biological Cybernetics, 88, 360–373.
Grün, S. (1996). Unitary joint-events in multiple-neuron spiking activity: Detection, significance and interpretation. Thun: Verlag Harri Deutsch.
Grün, S. (2009). Data-driven significance estimation for precise spike correlation. Journal of Neurophysiology, 101, 1126–1140.
Grün, S., Diesmann, M., & Aertsen, A. M. (2002a). Unitary events in multiple single-neuron spiking activity: I. Detection and significance. Neural Computation, 14, 43–80.
Grün, S., Diesmann, M., & Aertsen, A. M. (2002b). Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Computation, 14, 81–119.
Grün, S., Diesmann, M., & Aertsen, A. M. (2010). Unitary events analysis. In S. Grün & S. Rotter (Eds.), Analysis of parallel spike trains. New York: Springer.
Grün, S., Diesmann, M., Grammont, F., Riehle, A., & Aertsen, A. M. (1999). Detecting unitary events without discretization of time. Journal of Neuroscience Methods, 94, 67–79.
Grün, S., Riehle, A., & Diesmann, M. (2003). Effect of cross-trial nonstationarity on joint-spike events. Biological Cybernetics, 88, 335–351.
Gütig, R., Aertsen, A. M., & Rotter, S. (2001). Statistical significance of coincident spikes: Count-based versus rate-based statistics. Neural Computation, 14, 121–153.
Hansen, N. R., Reynaud-Bouret, P., & Rivoirard, V. (2013). Lasso and probabilistic inequalities for multivariate point processes. To appear in Bernoulli. Retrieved from http://arxiv.org/abs/1208.0570
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
Heinzle, J., König, P., & Salazar, R. F. (2007). Modulation of synchrony without changes in firing rate. Cognitive Neurodynamics, 1, 225–235.
Hochberg, Y., & Tamhane, A. (1987). Multiple comparison procedures. New York: Wiley.
Hogg, R. V., & Tanis, E. A. (2009). Probability and statistical inference. Englewood Cliffs, NJ: Pearson.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2), 65–70.
Kilavik, B. E., Roux, S., Ponce-Alvarez, A., Confais, J., Grün, S., & Riehle, A. (2009). Long-term modifications in motor cortical dynamics induced by intensive practice. Journal of Neuroscience, 29(40), 12653–12663.
König, P., Engel, A. K., & Singer, W. (1996). Integrator or coincidence detector? The role of the cortical neuron revisited. Trends in Neurosciences, 19(4), 130–137.
Konishi, M., Takahashi, T. T., Wagner, H., Sullivan, W. E., & Carr, C. E. (1988). Neurophysiological and anatomical substrates of sound localization in the owl. In G. M. Edelman, W. E. Gall, & W. M. Cowan (Eds.), Auditory function. New York: Wiley.
Krumin, M., Reutsky, I., & Shoham, S. (2010). Correlation-based analysis and generation of multiple spike trains using Hawkes models with an exogenous input. Frontiers in Computational Neuroscience, 4, 147.
Kumar, A., Rotter, S., & Aertsen, A. M. (2010). Spiking activity propagation in neuronal networks: Reconciling different perspectives on neural coding. Nature Reviews Neuroscience, 11, 615–627.
Lestienne, R. (1996). Determination of the precision of spike timing in the visual cortex of anaesthetised cats. Biological Cybernetics, 74, 55–61.
Lestienne, R. (2001). Spike timing, synchronization and information processing on the sensory side of the central nervous system. Progress in Neurobiology, 65, 545–591.
Louis, S., Borgelt, C., & Grün, S. (2010). Generation and selection of surrogate methods for correlation analysis. In S. Grün & S. Rotter (Eds.), Analysis of parallel spike trains. New York: Springer.
Louis, S., Gerstein, G. L., Grün, S., & Diesmann, M. (2010). Surrogate spike train generation through dithering in operational time. Frontiers in Computational Neuroscience, 4(127). doi:10.3389/fncom.2010.00127
Mainen, Z. F., & Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neurons. Science, 268, 1503–1506.
Maldonado, P. E., Babul, C., Singer, W., Rodriguez, E., Berger, D., & Grün, S. (2008). Synchronization of neuronal responses in primary visual cortex of monkeys viewing natural images. Journal of Neurophysiology, 100, 1523–1532.
Maldonado, P. E., Friedman-Hill, S., & Gray, C. M. (2000). Dynamics of striate cortical activity in the alert macaque: II. Fast time scale synchronization. Cerebral Cortex, 10, 1117–1131.
Masuda, N., & Aihara, K. (2007). Dual coding hypotheses for neural information representation. Mathematical Biosciences, 207, 312–321.
Ogata, Y. (1981). On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1), 23–31.
Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83(401), 9–27.
Palm, G. (1990). Cell assemblies as a guideline for brain research. Concepts in Neuroscience, 1, 133–148.
Pazienti, A., Maldonado, P. E., Diesmann, M., & Grün, S. (2008). Effectiveness of systematic spike dithering depends on the precision of cortical synchronization. Brain Research, 1225, 39–46.
Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967). Neuronal spike trains and stochastic point processes. Biophysical Journal, 7, 419–440.
Pernice, V., Staude, B., Cardanobile, S., & Rotter, S. (2011). How structure determines correlations in neuronal networks. PLoS Computational Biology, 7, e1002059.
Pernice, V., Staude, B., Cardanobile, S., & Rotter, S. (2012). Recurrent interactions in spiking networks with arbitrary topology. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 85, 031916.
Pipa, G., Diesmann, M., & Grün, S. (2003). Significance of joint-spike events based on trial-shuffling by efficient combinatorial methods. Complexity, 8(4), 1–8.
Pipa, G., & Grün, S. (2003). Non-parametric significance estimation of joint-spike events by shuffling and resampling. Neurocomputing, 52–54, 31–37.
Pipa, G., Grün, S., & van Vreeswijk, C. (2013). Impact of spike train autostructure on probability distribution of joint spike events. Neural Computation, 25, 1123–1163.
Prescott, S. A., Ratté, S., De Koninck, Y., & Sejnowski, T. J. (2008). Pyramidal neurons switch from integrators in vitro to resonators under in vivo–like conditions. Journal of Neurophysiology, 100, 3030–3042.
Reimer, I. C. G., Staude, B., Ehm, W., & Rotter, S. (2012). Modeling and analyzing higher-order correlations in non-Poissonian spike trains. Journal of Neuroscience Methods, 208, 18–33.
Reynaud-Bouret, P., Tuleau-Malot, C., Rivoirard, V., & Grammont, F. (in press). Spike trains as (in)homogeneous Poisson processes or Hawkes processes: Non-parametric adaptive estimation and goodness-of-fit tests. Journal of Mathematical Neuroscience.
Riehle, A., Grammont, F., Diesmann, M., & Grün, S. (2000). Dynamical changes and temporal precision of synchronised spiking activity in monkey motor cortex during movement preparation. Journal of Physiology, 94, 569–582.
Riehle, A., Grammont, F., & MacKay, A. (2006). Cancellation of a planned movement in monkey motor cortex. Neuroreport, 17(3), 281–285.
Riehle, A., Grün, S., Diesmann, M., & Aertsen, A. M. (1997). Spike synchronization and rate modulation differentially involved in motor cortical function. Science, 278, 1950–1953.
Roy, A., Steinmetz, P. N., & Niebur, E. (2000). Rate limitations of unitary event analysis. Neural Computation, 12(9), 2063–2082.
Rudolph, M., & Destexhe, A. (2003). Tuning neocortical pyramidal neurons between integrators and coincidence detectors. Journal of Computational Neuroscience, 14, 239–251.
Sakurai, Y. (1999). How do cell assemblies encode information in the brain? Neuroscience and Biobehavioral Reviews, 23, 785–796.
Shinomoto, S. (2010). Estimating the firing rate. In S. Grün & S. Rotter (Eds.), Analysis of parallel spike trains. New York: Springer.
Singer, W. (1993). Synchronization of cortical activity and its putative role in information processing and learning. Annual Review of Physiology, 55, 349–374.
Singer, W. (1999). Neuronal synchrony: A versatile code for the definition of relations? Neuron, 24, 49–65.
Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18, 555–586.
Softky, W. R., & Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13(1), 334–350.
Staude, B., Rotter, S., & Grün, S. (2010). CuBIC: Cumulant based inference of higher-order correlations in massively parallel spike trains. Journal of Computational Neuroscience, 29, 327–350.
Supèr, H., van der Togt, C., Spekreijse, H., & Lamme, V. A. (2003). Internal state of monkey primary visual cortex (V1) predicts figure-ground perception. Journal of Neuroscience, 23(8), 3407–3414.
Tiesinga, P. H. E., & Sejnowski, T. J. (2004). Rapid temporal modulation of synchrony by competition in cortical interneuron networks. Neural Computation, 16, 251–275.
Vaadia, E., Haalman, I., Abeles, M., Bergman, H., Prut, Y., Slovin, H., & Aertsen, A. (1995). Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature, 373, 515–518.
Ventura, V. (2010). Bootstrap tests of hypotheses. In S. Grün & S. Rotter (Eds.), Analysis of parallel spike trains. New York: Springer.
von der Malsburg, C. (1981). The correlation theory of brain function (Internal Report). Göttingen: Max-Planck-Institute for Biophysical Chemistry, Department of Neurobiology.

Notes

1

A p-value is the random value of α for which a test of level α passes from accept to reject. Note that usually when α is smaller than this value, the test accepts, whereas it rejects when α is larger: therefore, there is a limit value, depending on the observations, at which one passes from one decision to the other. If the test has type I error exactly α for all α, then one can prove that the corresponding p-value is uniformly distributed on [0, 1] under H0. Therefore, the p-values of an i.i.d. sample, plotted against their normalized ranks, should be close to the diagonal of the unit square.
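This uniformity can be checked empirically. The sketch below (an illustrative check written for this note, not part of the original method) simulates exact two-sided z-test p-values under H0 and measures the maximal gap between their sorted values and the diagonal:

```python
import math
import random

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(42)
n, trials = 20, 2000
pvals = []
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = math.sqrt(n) * (sum(sample) / n)   # exactly N(0, 1) under H0
    pvals.append(2.0 * (1.0 - phi(abs(z))))  # exact two-sided p-value
pvals.sort()
# maximal deviation of sorted p-values from the diagonal of the unit square
max_gap = max(abs(p - (i + 1) / trials) for i, p in enumerate(pvals))
print(max_gap)
```

For an exact test the gap is of order 1/sqrt(trials), so the sorted p-values hug the diagonal; a test that is only approximately of level α would show a systematic departure.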

2

In the following equation, o(1) denotes a quantity that tends to 0 when δ tends to 0.

3

More precisely, if X1, …, Xn are i.i.d. gaussian variables with mean m and variance σ², then √n(X̄n − m)/σ is exactly standard gaussian, whereas √n(X̄n − m)/σ̂n follows a Student distribution with n − 1 degrees of freedom, where σ̂n² is the unbiased estimate of σ².
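The effect of the plug-in step described in this note can be sketched numerically (an illustrative computation with made-up data, not the authors' code): with the true σ the standardized mean is exactly standard gaussian, while replacing σ by its unbiased estimate yields a Student statistic with n − 1 degrees of freedom.

```python
import math
import random

random.seed(0)
n, m, sigma = 10, 0.0, 1.0
sample = [random.gauss(m, sigma) for _ in range(n)]
xbar = sum(sample) / n
s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)  # unbiased variance estimate

z = math.sqrt(n) * (xbar - m) / sigma           # exactly N(0, 1)
t = math.sqrt(n) * (xbar - m) / math.sqrt(s2)   # Student t with n - 1 d.f.
```

The two statistics coincide only asymptotically; for moderate n, quantiles must be taken from the Student distribution rather than the gaussian one, which is the modification of the approximating distribution mentioned in the text.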

Author notes

*Corresponding author