## Abstract

Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that cover a large number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner so that the data can be handled within the framework of a simple binary statistical model, the Ising or generalized McCulloch–Pitts model. The procedure has two steps: (1) determining the time bin size for transforming the point process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we determine the optimal time bin size by introducing the null hypothesis that all neurons fire independently and choosing the time bin size for which the null hypothesis is most strongly rejected. The likelihood associated with the null hypothesis is evaluated analytically and used in the rejection process. For the second, postprocessing step, once an estimate of the couplings is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those from the randomized data sets. These manipulations suppress false positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings, including their signs, fairly well for both the synthetic and experimental data. In particular, the results support the idea that we can infer the physical connections of the underlying systems in favorable situations, even when using a simple statistical model.

## 1  Introduction

Recent tremendous technological advances have enabled us to measure multipoint signals simultaneously, which may reveal much about the central nervous system and other biological systems (Weigt, White, Szurmant, Hoch, & Hwa, 2009; Lezon, Banavar, Cieplak, Maritan, & Fedoroff, 2006). Multipoint neuronal activities can be recorded by microelectrode arrays (Buzsáki, 2004; Brown, Kass, & Mitra, 2004; Steinmetz, Koch, Harris, & Carandini, 2018) or calcium imaging (Ikegaya et al., 2004; Cheng, Gonçalves, Golshani, Arisaka, & Portera-Cailliau, 2011; Grewe, Langer, Kasper, Kampa, & Helmchen, 2010). These types of studies provide rich data and help us to understand the mechanisms of information processing in large, coupled systems beyond the single-neuron level (Yuste, 2015; Roudi, Dunn, & Hertz, 2015; Grosmark & Buzsáki, 2016; Maass, 2016; Paninski & Cunningham, 2018).

Theoretical results from the field of statistical physics have offered powerful tools for inferring intrinsic structures from such measurements. For undirected (symmetric) couplings, where the systems are in equilibrium, mean-field formulas for statistical inference have been developed (Kappen & Rodríguez, 1998; Tanaka, 1998; Sessak & Monasson, 2009; Roudi, Tyrcha, & Hertz, 2009) and successfully applied to biological data (Schneidman, Berry, Segev, & Bialek, 2006; Shlens et al., 2006; Cocco, Leibler, & Monasson, 2009; Tang et al., 2008; Ohiorhenuan et al., 2010). More recently, the more general case of directed (asymmetric) connectivity structures in nonequilibrium states has become tractable at reasonable computational cost thanks to improved systematic techniques (Roudi & Hertz, 2011; Mézard & Sakellariou, 2011; Zeng, Aurell, Alava, & Mahmoudi, 2011; Sakellariou, Roudi, Mézard, & Hertz, 2012; Aurell & Ekeberg, 2012; Zeng, Alava, Aurell, Hertz, & Roudi, 2013), leading to applications to real data (Tyrcha, Roudi, Marsili, & Hertz, 2013; Dunn, Mørreaunet, & Roudi, 2015).

While statistical-mechanics approaches have revealed the nontrivial nature of biological systems, earlier studies leave some points unsatisfactory. Two crucial ones are the lack of an objective criterion for determining the bin size used in the temporal discretization of signals and the lack of a principled way to screen the relevant couplings among those obtained by inference techniques. In this letter, we describe an objective procedure to resolve these issues based on information theory (method I in Figure 1) and computational statistics (method II in Figure 1), respectively. These methods can be applied to a wide variety of dynamical systems that exhibit event sequences, regardless of the directionality of connectivities. As a representative example, we apply them to a mathematical model of plausible neuronal networks, the Izhikevich model (Izhikevich, 2003). As a result, we can reconstruct the underlying couplings of the networks with high accuracy, even when using the simple kinetic Ising model as the inference model. Motivated by this finding, we also apply these methods to in vitro neuronal networks of rat cortical cells cultured in a simple circular structure (Isomura et al., 2015). Thanks to this physical constraint, the anatomical connections of the cultured system are expected to have a circular structure as well. The couplings derived by our inference methods indeed exhibit the clear circular structure expected from the experimental design. This result suggests that the kinetic Ising model, despite its simplicity, can be a reliable estimator of the sign pattern of the true synaptic couplings.
Figure 1:

The procedure to reconstruct couplings. Using multipoint observation techniques, we first obtain event-series data that record the times when events occurred. We use the criterion of equation 2.4 to bin the original series with $\Delta\tau_{\mathrm{opt}}$. Applying the inverse mean-field formula, we estimate the coupling matrix and obtain the bare estimates $\{J_{ij}^{\mathrm{est}}\}$, whose elements are continuous. Finally, we select relevant couplings from them using a computational-statistical method with randomized sequences.

## 2  Inference Model and Methods

### 2.1  Inference Model: The Kinetic Ising Model

In this work, we use the kinetic Ising model, which consists of $N$ elements with two possible discrete state values, $s_i(t)=\pm 1$. The state $s_i(t)=1$ corresponds to the firing of neuron $i$, whereas $s_i(t)=-1$ means no firing at time $t$. The model is assumed to obey Glauber dynamics:
$P\left(\boldsymbol{s}(t+1)\mid\boldsymbol{s}(t)\right)=\prod_{i=1}^{N}\frac{\exp\left[s_i(t+1)H_i(t)\right]}{\exp\left[H_i(t)\right]+\exp\left[-H_i(t)\right]},$
(2.1)
where $H_i(t)$ is the effective field defined as $H_i(t)=h_i(t)+\sum_{j=1}^{N}J_{ij}s_j(t)$, $h_i(t)$ is the external force, and $J_{ij}$ is the coupling strength from neuron $j$ to neuron $i$. This model corresponds to the generalized McCulloch–Pitts model (McCulloch & Pitts, 1943) in theoretical neuroscience and to logistic regression (Cox, 1958) in statistics.
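As a concrete illustration, one parallel update of the Glauber dynamics in equation 2.1 can be sketched in a few lines (a minimal sketch assuming NumPy; the function name and the toy network parameters are illustrative, not from the letter):

```python
import numpy as np

def glauber_step(s, J, h, rng):
    """One parallel-update step of the kinetic Ising (Glauber) dynamics.

    s : (N,) array of +/-1 states at time t
    J : (N, N) coupling matrix, J[i, j] = coupling from j to i
    h : (N,) external forces
    Returns the (N,) state vector at time t + 1.
    """
    H = h + J @ s                          # effective fields H_i(t)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * H))  # P(s_i(t+1) = +1 | s(t)) from eq. 2.1
    return np.where(rng.random(s.shape) < p_up, 1, -1)

rng = np.random.default_rng(0)
N = 5
J = 0.3 * rng.standard_normal((N, N))      # small asymmetric toy couplings
h = np.zeros(N)
s = rng.choice([-1, 1], size=N)
for _ in range(10):                        # iterate the dynamics
    s = glauber_step(s, J, h, rng)
```

Note that $\exp(H_i)/(\exp(H_i)+\exp(-H_i))=1/(1+e^{-2H_i})$, which is the logistic form used above.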

In this study, the maximum likelihood method forms the basis for inferring the parameters. However, a direct maximization of the likelihood often entails a substantial computational cost, which can become a bottleneck in the screening postprocess. Therefore, we use a simple mean-field formula to reduce the computational cost; this approximation preserves the inference accuracy in terms of coupling signs, as shown in section 4.4. The direct maximum likelihood method and other, more sophisticated formulas can in principle also be applied in combination with our methodology. The mean-field inverse formula derived by Roudi and Hertz (2011) is written as $J^{\mathrm{est}}=A^{-1}DC^{-1}$, where $m_i=\langle s_i(t)\rangle$, $C_{ij}=\langle s_i(t)s_j(t)\rangle-m_im_j$, $D_{ij}=\langle s_i(t+1)s_j(t)\rangle-m_im_j$, and $A_{ij}=(1-m_i^2)\delta_{ij}$. The angled brackets represent the time average, which is expected to agree with the ensemble average under the ergodic assumption, and we make this assumption in this work (Churchland, Yu, Sahani, & Shenoy, 2007; Cunningham & Yu, 2008).
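A direct transcription of this inverse formula might look as follows (a sketch assuming NumPy; `mean_field_couplings` is an illustrative name, and the time averages are replaced by empirical averages over the binned data):

```python
import numpy as np

def mean_field_couplings(S):
    """Mean-field estimate J^est = A^{-1} D C^{-1} (Roudi & Hertz, 2011).

    S : (M, N) array of +/-1 states, one row per time bin.
    """
    S = np.asarray(S, dtype=float)
    m = S.mean(axis=0)                         # magnetizations m_i
    dS = S - m
    C = dS.T @ dS / S.shape[0]                 # equal-time covariance C_ij
    D = dS[1:].T @ dS[:-1] / (S.shape[0] - 1)  # one-step-delayed covariance D_ij
    A_inv = np.diag(1.0 / (1.0 - m ** 2))      # A_ij = (1 - m_i^2) delta_ij
    return A_inv @ D @ np.linalg.inv(C)
```

For weak couplings this estimator recovers the correct sign and approximate magnitude of each $J_{ij}$ at a cost dominated by one matrix inversion.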

Throughout this letter, we assume that the data do not involve dominant long-range collective modes of statistical fluctuation that would interfere with accurate coupling inference (Das & Fiete, 2019). This marks the limit of the application range of the proposed methods. Although such modes are observed in some natural setups, data without long-range modes have recently attracted much attention because they are observed, for example, in neuronal activity during tasks (Dahmen, Grün, Diesmann, & Helias, 2019; Stringer, Pachitariu, Steinmetz, Carandini, & Harris, 2019) as well as in spontaneous activity. Therefore, we focus on these types of data. Note that the absence of long-range modes in data can be tested by a statistical method, as shown in section 4.5.

### 2.2  Selecting Optimal Bin Size

For the kinetic Ising model and other types of generalized linear models (Okatan, Wilson, & Brown, 2005; Pillow et al., 2008; Field et al., 2010; Kobayashi et al., 2019), we have to determine the size of the time bins when preprocessing the raw data. Although the size of the time bins used to binarize the spiking data is known to affect the inference results (Capone, Filosa, Gigante, Ricci-Tersenghi, & Del Giudice, 2015), objective criteria for choosing the bin size have not been investigated sufficiently. To address this issue, we employ an information-theoretic method to determine the bin size in an objective way.

To apply the inverse kinetic Ising scheme to spike train data, we first determine the length $\Delta\tau$ of the time bin from an information theory viewpoint (Kullback, 1997), following the procedure shown in Figure 1. We begin with the spike series obtained at a measurement frequency $\Delta t^{-1}$, which sets the minimal time unit (i.e., the temporal resolution). We divide the spike trains into time bins of width $\Delta\tau$ and binarize them, obtaining a spike raster of total length $T$ in which each bin takes the value $+1$ or $-1$ according to the presence or absence of spikes in that bin, respectively. Choosing an appropriate $\Delta\tau$, which corresponds to the unit time of the kinetic Ising system, coarse-grains the data so that the inverse formula operates effectively.
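The binarization step can be sketched as follows (an illustrative helper assuming NumPy; spike times and bin width are in the same, arbitrary time unit):

```python
import numpy as np

def bin_spikes(spike_times, T, dtau):
    """Convert per-neuron spike-time lists into a +/-1 raster.

    spike_times : list of arrays, spike times of each neuron
    T : total recording length
    dtau : bin width; a bin is +1 if it contains at least one spike, else -1.
    Returns an (M, N) array with M = floor(T / dtau) bins.
    """
    M = int(T // dtau)
    S = -np.ones((M, len(spike_times)), dtype=int)
    for i, times in enumerate(spike_times):
        idx = (np.asarray(times) // dtau).astype(int)  # bin index of each spike
        idx = idx[idx < M]                             # drop spikes past the last bin
        S[idx, i] = 1
    return S
```

Multiple spikes falling into one bin are mapped to a single $+1$, which is why the fraction of such bins is monitored when $\Delta\tau$ is chosen.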

In preparation for calculating the optimal bin size $\Delta\tau$, we suppose that the system has reached its stationary state and set a null hypothesis that each neuron fires or does not fire independently of the other neurons. We then determine the optimal bin size $\Delta\tau$ as the one for which this hypothesis is rejected most strongly. For each pair of neurons $i$ and $j$, we count how many of the $M-1\,(=T/\Delta\tau-1)$ successive-bin state pairs $(s_i(t+1),s_j(t))$ fall on the four possible patterns $(+1,+1)$, $(+1,-1)$, $(-1,+1)$, and $(-1,-1)$, and denote these counts as $n_{++}^{ij}$, $n_{+-}^{ij}$, $n_{-+}^{ij}$, and $n_{--}^{ij}$, respectively. Under the null hypothesis, neurons $i$ and $j$ are considered to fire with probabilities $p_{+}^{i\cdot}=1-p_{-}^{i\cdot}=(n_{++}^{ij}+n_{+-}^{ij})/(M-1)$ and $p_{+}^{\cdot j}=(n_{++}^{ij}+n_{-+}^{ij})/(M-1)$, respectively. Given the coarse-grained data with counts $\{n_{++}^{ij},n_{+-}^{ij},n_{-+}^{ij},n_{--}^{ij}\}$, this provides us with the likelihood of the hypothesis as
$\prod_{i\neq j}P\left(n_{++}^{ij},n_{+-}^{ij},n_{-+}^{ij},n_{--}^{ij}\mid p_{+}^{i\cdot},p_{+}^{\cdot j}\right)=\prod_{i\neq j}\frac{(M-1)!}{n_{++}^{ij}!\,n_{+-}^{ij}!\,n_{-+}^{ij}!\,n_{--}^{ij}!}\left(p_{+}^{i\cdot}p_{+}^{\cdot j}\right)^{n_{++}^{ij}}\left(p_{+}^{i\cdot}(1-p_{+}^{\cdot j})\right)^{n_{+-}^{ij}}\left((1-p_{+}^{i\cdot})p_{+}^{\cdot j}\right)^{n_{-+}^{ij}}\left((1-p_{+}^{i\cdot})(1-p_{+}^{\cdot j})\right)^{n_{--}^{ij}}\simeq\exp\left(-(M-1)\sum_{i\neq j}I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right)\right),$
(2.2)
where the last expression is derived using Stirling's formula and
$I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right)=\sum_{(\alpha,\beta)\in\{+,-\}^2}\frac{n_{\alpha\beta}^{ij}}{M-1}\log\frac{n_{\alpha\beta}^{ij}}{(M-1)\,p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}}$
(2.3)
represents the mutual information between the states of neurons $i$ and $j$ in successive time bins. The notation $\{+,-\}^2$ denotes the direct product $\{+,-\}\times\{+,-\}$. In equation 2.2, contributions from self-interactions are omitted to focus on interactions among different neurons. Our aim is to determine the bin size $\Delta\tau$ such that the likelihood in equation 2.2 is minimized:
$\Delta\tau_{\mathrm{opt}}=\operatorname*{argmax}_{\Delta\tau}\left(\frac{T}{\Delta\tau}-1\right)\sum_{i\neq j}I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right).$
(2.4)
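A direct numerical transcription of equations 2.3 and 2.4 might look as follows (a sketch assuming NumPy; the function name is illustrative). Scanning this quantity over candidate $\Delta\tau$ values and taking the argmax yields $\Delta\tau_{\mathrm{opt}}$:

```python
import numpy as np

def gross_mutual_information(S):
    """Gross MI (M-1) * sum_{i != j} I(s_i(t+1); s_j(t)) for a +/-1 raster S (M, N)."""
    X = (np.asarray(S) > 0).astype(float)   # map +/-1 to 1/0
    a, b = X[1:], X[:-1]                    # states at t+1 and t
    Mm1 = a.shape[0]                        # M - 1 successive-bin pairs
    p_i = a.mean(axis=0)[:, None]           # firing probabilities p_+^{i.}
    p_j = b.mean(axis=0)[None, :]           # firing probabilities p_+^{.j}
    counts = (a.T @ b, a.T @ (1 - b), (1 - a).T @ b, (1 - a).T @ (1 - b))
    probs = (p_i * p_j, p_i * (1 - p_j), (1 - p_i) * p_j, (1 - p_i) * (1 - p_j))
    mi = np.zeros((X.shape[1], X.shape[1]))
    for n, p in zip(counts, probs):
        with np.errstate(divide="ignore", invalid="ignore"):
            term = n * np.log(n / (Mm1 * p))  # n log(n / ((M-1) p q))
        mi += np.where(n > 0, term, 0.0)      # convention: 0 log 0 = 0
    np.fill_diagonal(mi, 0.0)               # omit self-interactions (i = j)
    return mi.sum()
```

In practice one would evaluate this for rasters binned at several candidate $\Delta\tau$ and select the maximizer, as in equation 2.4.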
We note that the quantity in the argument on the right-hand side of equation 2.4 is related to the chi-squared test, a conventional statistical-testing method. When the dependency is weak, we can write $n_{\alpha\beta}^{ij}/(M-1)=p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}+\varepsilon_{\alpha\beta}^{ij}$ with small $\varepsilon_{\alpha\beta}^{ij}$ for each $(\alpha,\beta)\in\{+,-\}^2$. Then the gross mutual information $(M-1)I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right)$ is approximated as
$(M-1)I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right)=(M-1)\sum_{(\alpha,\beta)\in\{+,-\}^2}\frac{n_{\alpha\beta}^{ij}}{M-1}\log\left(1+\frac{\varepsilon_{\alpha\beta}^{ij}}{p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}}\right)\simeq(M-1)\sum_{(\alpha,\beta)\in\{+,-\}^2}\left(p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}+\varepsilon_{\alpha\beta}^{ij}\right)\left(\frac{\varepsilon_{\alpha\beta}^{ij}}{p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}}-\frac{1}{2}\left(\frac{\varepsilon_{\alpha\beta}^{ij}}{p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}}\right)^2\right)\simeq(M-1)\sum_{(\alpha,\beta)\in\{+,-\}^2}\frac{1}{2}\frac{\left(\varepsilon_{\alpha\beta}^{ij}\right)^2}{p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}}=\frac{1}{2}\sum_{(\alpha,\beta)\in\{+,-\}^2}\frac{\left(n_{\alpha\beta}^{ij}-(M-1)p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}\right)^2}{(M-1)p_{\alpha}^{i\cdot}p_{\beta}^{\cdot j}},$
(2.5)
which holds up to the second order in $\varepsilon$. In the third line, we use the constraint $\sum_{(\alpha,\beta)\in\{+,-\}^2}\varepsilon_{\alpha\beta}^{ij}=0$ and keep terms up to second order. The right-hand side of equation 2.5 is just half of Pearson's chi-squared test statistic with 1 degree of freedom. This implies that our method can be regarded as a generalization of the chi-squared test or G-test (Sokal & Rohlf, 2011).
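The correspondence with Pearson's statistic can be checked numerically on a single pair (an illustrative 2×2 table; the counts are invented for the example):

```python
import numpy as np

# Illustrative 2x2 contingency table of successive-bin counts for one neuron pair:
# rows index s_i(t+1) = +/-, columns index s_j(t) = +/- (counts are invented).
n = np.array([[420.0, 180.0], [380.0, 220.0]])
M1 = n.sum()                                   # plays the role of M - 1
p_row = n.sum(axis=1) / M1                     # p_alpha^{i.}
p_col = n.sum(axis=0) / M1                     # p_beta^{.j}
expected = M1 * np.outer(p_row, p_col)         # (M-1) p_alpha p_beta under the null
gross_mi = (n * np.log(n / expected)).sum()    # (M-1) * I, cf. equation 2.3
chi2 = ((n - expected) ** 2 / expected).sum()  # Pearson's chi-squared statistic
# For weak dependence, 2 * gross_mi and chi2 agree closely, as equation 2.5 predicts.
```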

### 2.3  Screening Relevant Couplings

Once the optimal time bin size is decided, applying the inverse Ising formula to the coarse-grained binary sequence is straightforward. Calculating the estimated coupling matrix provides a continuous-valued $J_{ij}^{\mathrm{est}}$ for each pair $(i,j)$ of neurons. This continuity can make the results unclear because it is not easy to distinguish statistically significant couplings from the others. Therefore, we introduce an additional computational-statistical step (Aru et al., 2015; Xu, Aurell, Corander, & Kabashima, 2017; Xu, Puranen, Corander, & Kabashima, 2018) to extract the relevant couplings from $J^{\mathrm{est}}$. To prepare, we generate $L$ randomized time series, each obtained by shuffling the original coarse-grained sequence with $\Delta\tau_{\mathrm{opt}}$ in the time direction, independently for each element. For all these series, we calculate the coupling matrices $\{(J_{\mathrm{ran}}^{\mathrm{est}})^{(r)}\}_{r=1}^{L}$. Then, for each pair $(i,j)$, we have $L$ reference values $\{(J_{\mathrm{ran}}^{\mathrm{est}})_{ij}^{(r)}\}_{r=1}^{L}$ against which to compare the value for the nonrandomized data, $J_{ij}^{\mathrm{est}}$. If the nonrandomized value corresponds to a relevant coupling, its absolute value should be sufficiently larger than the $L$ reference values. Following this idea, we accept $J_{ij}^{\mathrm{est}}$ as a relevant coupling only if its absolute value is larger than the $p_{\mathrm{th}}L$th largest value among $\{|(J_{\mathrm{ran}}^{\mathrm{est}})_{ij}^{(r)}|\}_{r=1}^{L}$. The value $p_{\mathrm{th}}$ is a parameter that controls the tightness of this criterion and should be small. Although we use the simple mean-field formula to obtain $J^{\mathrm{est}}$ in this letter, this procedure can be combined with any other inference algorithm that solves the inverse Ising problem. Similar data-processing methods have been used in other contexts of spike statistics (Bialek & van Steveninck, 2005; Schwartz et al., 2006).
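The screening step might be implemented as follows (a sketch assuming NumPy; `screen_couplings` and its arguments are illustrative, and any estimator solving the inverse problem can be passed in):

```python
import numpy as np

def screen_couplings(S, estimator, L=1000, p_th=0.001, seed=0):
    """Keep J_ij^est only if |J_ij^est| exceeds the p_th*L-th largest randomized value.

    S : (M, N) +/-1 raster; estimator : callable mapping a raster to an (N, N) matrix.
    Each column (neuron) of S is shuffled independently in the time direction.
    Returns the coupling matrix with irrelevant entries set to zero.
    """
    rng = np.random.default_rng(seed)
    J = estimator(S)
    ref = np.empty((L,) + J.shape)
    for r in range(L):
        S_ran = np.column_stack([rng.permutation(S[:, j]) for j in range(S.shape[1])])
        ref[r] = np.abs(estimator(S_ran))   # reference values from randomized data
    k = max(int(p_th * L), 1)
    threshold = np.sort(ref, axis=0)[-k]    # elementwise k-th largest reference
    return np.where(np.abs(J) > threshold, J, 0.0)
```

Shuffling destroys temporal dependence while preserving each neuron's firing rate, so any estimate that survives the comparison is unlikely to be explained by rate alone.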

This numerical manipulation can incur a large computational cost when dealing with a huge amount of data. In such cases, it is better to use the approximation proposed in Terada, Obuchi, Isomura, and Kabashima (2018).

### 2.4  Symmetric Inference

If the coupling matrix is symmetric, the system is described by an equilibrium distribution in a static manner (Kappen & Rodríguez, 1998; Tanaka, 1998; Sessak & Monasson, 2009; Roudi et al., 2009). Hence, we use the mutual information between equal-time states, $I_{\Delta\tau}\left(s_i(t);s_j(t)\right)$, instead of $I_{\Delta\tau}\left(s_i(t+1);s_j(t)\right)$ in equation 2.4. The mean-field inverse formula is replaced by $J^{\mathrm{est}}=A^{-1}-C^{-1}$ (Kappen & Rodríguez, 1998; Tanaka, 1998).
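A sketch of the symmetric (equilibrium) estimator under the same conventions (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def symmetric_couplings(S):
    """Equilibrium naive mean-field estimate J^est = A^{-1} - C^{-1}.

    S : (M, N) array of +/-1 states, one row per time bin.
    """
    S = np.asarray(S, dtype=float)
    m = S.mean(axis=0)                     # magnetizations m_i
    dS = S - m
    C = dS.T @ dS / S.shape[0]             # equal-time covariance C_ij
    A_inv = np.diag(1.0 / (1.0 - m ** 2))  # (A^{-1})_ij = delta_ij / (1 - m_i^2)
    return A_inv - np.linalg.inv(C)
```

Because $C$ is symmetric, the resulting estimate is symmetric by construction, in contrast to the asymmetric formula $J^{\mathrm{est}}=A^{-1}DC^{-1}$.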

To characterize the asymmetric property, we compare the results obtained using symmetric and asymmetric inferences in section 3.

## 3  Results

### 3.1  Synthetic Data: The Izhikevich Model

To confirm the effectiveness of our methods, we first consider the Izhikevich model, a standard neuronal model that generates neurobiologically plausible spike sequences (Izhikevich, 2003). We set 100 neurons (90 excitatory and 10 inhibitory) on an asymmetric cyclic chain, where each neuron projects synaptic connections and sends signals to up to three clockwise neighboring neurons. The coupling strengths are drawn from uniform distributions over $[5,10]$ for the excitatory neurons and $[-20,-10]$ for the inhibitory neurons. The other parameters are set following the original work (Izhikevich, 2003).

We also consider the systems on sparse and dense random networks. For each pair $(i,j)$, except the self pairs, a connection from $j$ to $i$ is generated with a probability $q$. In the sparse case, the coupling strengths are drawn between 2 and 3 for the excitatory neurons and between $-6$ and $-4$ for the inhibitory neurons. In the dense cases, they are $[0.8/q,0.8/q+1]$ for the excitatory neurons and $[-2(0.8/q+1),-2(0.8/q)]$ for the inhibitory neurons. The other conditions, except for coupling strengths, are the same as those in the chain model.

The spike trains generated by this model on the cyclic chain are plotted in Figure 2a, and the coupling matrix is exhibited in Figure 2b, where the minimal time step is $\Delta t=1\,\mathrm{ms}$ and the time length used for the inference is $T=10^6\,\mathrm{ms}$. The gross mutual information $(M-1)\sum_{i\neq j}I_{\Delta\tau}(s_i(t+1);s_j(t))$ in equation 2.4 is plotted against $\Delta\tau$ as the red curve in Figure 2c, where the error bars show the standard deviations evaluated from five different simulations. The red curve has a nearly unimodal shape with a sharp peak at $\Delta\tau=5\,\mathrm{ms}$, indicating a unique optimal $\Delta\tau$. This nontrivial timescale may arise from the difference in the dynamics descriptions between the Izhikevich and Ising models: the Izhikevich model has a process that accumulates spikes from other neurons before emitting a spike, whereas the Ising model has no such process. Hence, an appropriate Ising description requires a certain timescale, presumably longer than the elementary scale of the Izhikevich model, which naturally explains the obtained $\Delta\tau$. We next create a binned spike raster with $\Delta\tau=5\,\mathrm{ms}$, in which only $0.23\%$ of the time bins contain more than one spike. We then apply the inverse formula and adopt only those $J_{ij}$ whose absolute values exceed the threshold, that is, the largest value among the estimates for the $L=1{,}000$ randomized time series. This corresponds to the criterion $p_{\mathrm{th}}=0.001$ and yields the relevant couplings shown in Figure 2d. The original asymmetric structure with the excitatory and inhibitory couplings is recovered well, despite the fact that the models used for generation and inference are different. The self-interactions tend to be inferred as inhibitory couplings, although they are absent in the original system; this may reflect the refractory property of the neurons. As a comparison, we apply the symmetric inference procedure to the same data; the results are shown by the blue curve in Figure 2c, which exhibits a larger optimal $\Delta\tau$, and by the matrix in Figure 2e.
To express the regularity of the asymmetric model with the symmetric model, successive bins must be merged so that events in neighboring time steps are described as simultaneous. This may be why the optimal bin size for the symmetric model is considerably larger than that for the asymmetric model. The couplings estimated with $\Delta\tau=15\,\mathrm{ms}$ in Figure 2e are localized around the diagonal line but distributed on both sides of it, while the true couplings used for the simulation are unidirectional. We also show the conditional ratios of correct inference under the conditions of existing, absent, excitatory, and inhibitory couplings in Figure 2f. Both models successfully identify the connections and their signs, although the symmetric inference model has larger false positive rates.
Figure 2:

Application to the Izhikevich model on the asymmetric cyclic chain. (a) Raster plot during $1\,\mathrm{s}$ with a point plotted for each spike. (b) Network matrix used to generate the spike data. (c) Gross mutual information $(M-1)\sum_{i\neq j}I_{\Delta\tau}(s_i(t+1);s_j(t))$ for the asymmetric inference (red) and $(M-1)\sum_{i\neq j}I_{\Delta\tau}(s_i(t);s_j(t))$ for the symmetric inference (blue). (d) Inferred couplings by the asymmetric inference with $\Delta\tau=5\,\mathrm{ms}$. (e) Inferred couplings by the symmetric inference with $\Delta\tau=15\,\mathrm{ms}$. (f) Conditional ratios of the correct inferences, excluding the self-interactions. In the coupling matrices of panels b, d, and e, the red and blue elements represent the excitatory and inhibitory couplings, respectively. In panels c and f, the error bars represent the standard deviations over five different simulations.

To scrutinize the conditions where our inference procedure works well, the inference methods are studied using sparse and dense random networks. The obtained optimal bin sizes are not so different from the ones in the chain model, so we adopt the same values. The conditional ratios of the correctness in the asymmetric and symmetric inferences are shown in Figure 3. The asymmetric model has much higher expressive power and, hence, higher performance. These results highlight the broad applicability of the proposed inference procedure.
Figure 3:

Application to the Izhikevich model on the asymmetric random network. (a) Sparse and (b) dense connectivities, where $q$ denotes the connection probability. Conditional ratios of the correctness, excluding the self-interactions, in the asymmetric and symmetric inferences. The error bars represent the standard deviations over five simulations.

### 3.2  Experimental Data: In Vitro Neuronal Networks

We study the cultured neuronal system introduced in Isomura et al. (2015) to demonstrate the applicability of our methods to real systems. Rat cortical neurons were cultured in a microwell plate so that they were likely to connect asymmetrically to clockwise neighboring cells, as shown in Figure 4. This provides a condition similar to that of the Izhikevich model on the chain (see section 3.1). Spontaneous spiking activity was recorded from 64 electrodes of a multielectrode array. The temporal resolution was $\Delta t=40\,\mu\mathrm{s}$, and the time length of the data used here was $120\,\mathrm{s}$. Spike sorting (Takekawa, Isomura, & Fukai, 2010) was subsequently applied to the recorded data, and 100 neurons were identified.
Figure 4:

Pictures of the cultured system. (a,b) Neurons were cultured in microchambers designed to guide the axons in one direction. The black dots denote microelectrodes. (c) The entire experimental setup, constructed so that the neurons and their synapses form a physically asymmetric chain network.

Generally, we do not know the underlying connectivity of real neuronal systems precisely, which makes it hard to assess the accuracy of inference methods. Here, however, the physical circular restriction on the synaptic connections provides a favorable situation that allows us to validate the inference methods easily.

The spike raster plot during $1\,\mathrm{s}$ is exhibited in Figure 5a. The gross mutual information for five different periods is plotted in Figure 5b. An almost unimodal shape is again observed, and the locations of the peaks are robust across periods. We set $\Delta\tau=4\,\mathrm{ms}$ (for which $1.81\%$ of the spike bins contain more than one spike), which almost maximizes the mutual information, and we then perform the inverse asymmetric Ising inference and pick up the relevant interactions, as for the Izhikevich models. A bold diagonal outline appears in Figure 5c, where the neuronal indices are ordered by the locations of the corresponding electrodes as in Isomura et al. (2015). This indicates a chain-like structure, which reproduces the experimentally designed neuronal network structure. Therefore, our statistical inference captures the underlying physiological couplings. The ratios of the estimated coupling types between pairs, excluding the absent pairs, are shown in Figure 5d. In this setting, the majority of the estimated connections are one-way paths with excitatory couplings, while some excitatory-excitatory couplings and inhibitory one-way paths are also estimated. These results reproduce the actual asymmetric structure and are consistent with the physiological properties of in vitro networks, such as the previously reported ratio of excitatory to inhibitory neurons (Marom & Shahaf, 2002).
Figure 5:

Application to a cultured-neuronal system. (a) Raster plot of the spontaneous activity during $1s$. (b) Gross mutual information in equation 2.4, where the results for five different $1s$ intervals are shown. (c) Inferred couplings by the asymmetric inference with $Δτ=4ms$. (d) Ratios of estimated coupling types, excluding the absence type. The error bars indicate the standard deviations over the five periods. (e–g) Corresponding panels for the symmetric inference method using the same data. In panel f the network is inferred with a time bin size of $Δτ=10ms$. In the estimated coupling matrices of panels c and f, the red and blue elements represent the excitatory and inhibitory couplings, respectively.

For comparison, we also plot the counterpart results obtained from the conventional symmetric inference methods in Figures 5e to 5g. A similar chain-like network structure is generated; however, the coupling plot in Figure 5f tends to be noisier, and we find that the finer coupling structures are missed. These results highlight the effectiveness of our methods with the use of the asymmetric model.

## 4  Discussion

### 4.1  Robustness of Bin-Size Determination: Comparison with Cross-Correlation

Our statistical study confirmed that the proposed determination of the optimal bin size based on equation 2.4 is adequately robust, as shown in Figures 2c, 5b, and 5e. Here, we examine cross-correlation as a comparison; it is another quantity commonly used for determining the characteristic timescale of neuronal data (Ostojic, Brunel, & Hakim, 2009).

The cross-correlations in the Izhikevich model and the cultured system are shown in Figure 6, which uses the same data as Figures 2 and 5, respectively. We show the correlations for the excitatory (Figure 6a), inhibitory (Figure 6b), and no-coupling (Figure 6c) pairs, and the mean of their absolute values in the Izhikevich model (Figure 6d). In the Izhikevich model, a characteristic timescale of approximately $5\,\mathrm{ms}$ appears for the excitatory and inhibitory couplings, but it is not clearly observed in the no-coupling case. This timescale is extremely close to that obtained by equation 2.4, which implies that both our method and the cross-correlation method can provide accurate inference in this case. Conversely, Figures 6e to 6h, for the cultured system, demonstrate no characteristic timescale other than the time resolution. However, as shown in Figure 5b, equation 2.4 can still detect a characteristic timescale effectively. Thus, we conclude that the applicability of our method is broader than that of the cross-correlation-based method.
Figure 6:

Cross-correlations between neuronal pairs using data from (a,b,c) the Izhikevich model, and (e,f,g) the in vitro neuronal system, and (d,h) the means of the absolute values of the cross-correlations. Cross-correlations for six randomly chosen pairs with (a) excitatory, (b) inhibitory, and (c) no couplings, where the black dotted lines represent the means over all the excitatory (270 pairs), inhibitory (30 pairs), and absent couplings, respectively. The inset in panel c represents the mean value on the different scale. (d) Mean of the absolute value of the cross-correlations over all pairs in the Izhikevich model, where the inset represents the standard deviations over all pairs. Correlations of six randomly chosen pairs with estimated (e) excitatory, (f) inhibitory, and (g) no couplings, where the black dotted lines represent the means as in the Izhikevich case, and the estimated excitatory and inhibitory pairs are 862 and 62, respectively. The inset in panel e represents the cross-correlation and the mean value on the different scale. (h) Absolute mean of all pairs in the cultured system, where the inset represents the standard deviations over all pairs.

Moreover, we demonstrate the inference results obtained with nonoptimal bin sizes for the Izhikevich and cultured systems in Figure 7. Figures 7a and 7b correspond to the result in Figure 2d for the Izhikevich model, and Figures 7c and 7d correspond to Figure 5c for the cultured system, where discretizations with shorter and longer nonoptimal sizes are used. The comparison implies that the optimal bin size yields clearer inference than smaller or larger nonoptimal bin sizes.
Figure 7:

Inferred coupling matrices with nonoptimal time bin sizes. The Izhikevich model using (a) $1ms$ and (b) $20ms$. The cultured system using (c) $0.04ms$ (time resolution) and (d) $100ms$. The other conditions are the same as those in Figures 2d and 5c.


### 4.2  Finite-Size Bias of Mutual Information

We used the gross mutual information of the binarized time series to determine the optimal time bin size for coupling inference in section 3. In the limit of large data size, we can obtain accurate estimates of the gross mutual information from the observed data. In real situations, however, the data size is finite, and hence it is important to assess the finite-size bias of this quantity. We therefore calculate the bias of the mutual information $I(s_i(t+1); s_j(t))$ induced by the finite data size using a resampling method.

Given the binarized data $\{s_i(t)\}_{i=1,\dots,N;\,t=1,\dots,T}$, we have the empirical one-step joint probabilities $\{p^{++}_{ij}, p^{+-}_{ij}, p^{-+}_{ij}, p^{--}_{ij}\}$, which also yield the marginal probabilities $\{p^{+}_{i\cdot}, p^{-}_{i\cdot}, p^{+}_{\cdot j}, p^{-}_{\cdot j}\}$ for all $(i,j)$ pairs. Using these probabilities, we generate binary sequences of finite sizes, where we randomly choose a previous state from the original binarized data. We calculate the gross mutual information for these resampled sequences, which quantifies the finite-size bias for the data. The results are shown by the red curves in Figure 8. Here we use the values of $\{p^{++}, p^{+-}, p^{-+}, p^{--}\}$ obtained from the Izhikevich model simulations in Figure 2 at $\Delta\tau = 1$, $5$, and $20\,\mathrm{ms}$. Although the gross mutual information tends to be positively biased at finite sample sizes, it smoothly approaches the dashed line corresponding to the infinite-size limit. The figure also suggests that the deviation from the infinite-size limit due to the finite sample is small enough in our present setting. This is exemplified by Figure 8c, where the sample size is $M = 5\times 10^4$, the lowest among the examined $\Delta\tau$ range. Therefore, our gross mutual information estimates and the optimal time bin sizes obtained in section 3 are considered accurate.
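The resampling idea can be sketched for a single neuron pair as follows. This is a minimal illustration under our own assumptions, not the authors' code: `mutual_info` is the plug-in estimator for a 2×2 one-step joint table, and `resampled_mi` redraws $M$ state pairs from the empirical table to expose the finite-size bias.

```python
import numpy as np

def mutual_info(p):
    """Plug-in mutual information (nats) of a 2x2 one-step joint probability table."""
    p = np.asarray(p, dtype=float)
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

def resampled_mi(p, M, rng):
    """MI of M state pairs drawn from the table p: a finite-size (biased) estimate."""
    counts = rng.multinomial(M, np.asarray(p).ravel()).reshape(2, 2)
    return mutual_info(counts / M)

rng = np.random.default_rng(0)
p_true = np.array([[0.4, 0.1],
                   [0.1, 0.4]])              # toy joint table of (s_j(t), s_i(t+1))
mi_inf = mutual_info(p_true)                  # the infinite-sample limit (dashed lines)
bias = np.mean([resampled_mi(p_true, 100, rng) for _ in range(200)]) - mi_inf
# The plug-in estimator is biased upward at small M and approaches mi_inf as M grows.
```

Sweeping `M` in `resampled_mi` traces out a bias curve analogous to the red curves in Figure 8.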
Figure 8:

Finite size bias of the gross mutual information estimated by the resampling method. The values of ${p++,p+-,p-+,p--}$ obtained from the Izhikevich model simulations at (a) $Δτ=1ms$, (b) $5ms$, and (c) $20ms$ are used. The points and error bars denote the average and standard deviation, respectively, over 20 independent simulations. The dashed lines represent the gross mutual information in the infinite sample limit.


### 4.3  Comparison to Other Regularization Methods

This letter proposes a method for screening significant couplings from the estimates. Several regularization methods to avoid overfitting have been proposed in other works (Zeng, Hertz, & Roudi, 2014; Decelle & Zhang, 2015; Bulso, Marsili, & Roudi, 2016). Among them, some sparsity-inducing regularizations also work as effective screening methods and are therefore of interest to compare with ours. We thus examine the $\ell_1$ regularization as a representative example of sparsity-inducing regularizations.

For the numerical experiment with the $\ell_1$ regularization, we employ the lassoglm package in Matlab, which estimates parameters by maximum likelihood in the generalized linear model. We obtain ROC curves by varying the significance level for the proposed method and the coefficient of the regularizer for the $\ell_1$ regularization; these are shown for several data sizes in Figure 9. For the proposed method, we use the semianalytical method to compute the values at very small FP and TP (Terada et al., 2018) to avoid the huge computational cost induced by the randomization operation. This result shows that our method reconstructs ground-truth couplings as accurately as the $\ell_1$ regularization does. The advantage of our method is that it requires a lower computational cost and does not require any specific prior assumptions.
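For readers without Matlab's lassoglm, a rough numpy analogue of $\ell_1$-regularized logistic regression can be written with proximal gradient descent (ISTA). Everything here (the function name, the toy coupling vector `J0`, the choice of `lam`) is our own illustrative assumption; the point is only that the $\ell_1$ penalty zeroes out weak couplings and thereby acts as a screener.

```python
import numpy as np

def l1_logistic(X, y, lam, lr=0.1, n_iter=2000):
    """l1-regularized logistic regression fit by proximal gradient (ISTA).

    X: (T, N) previous states in ±1; y: (T,) next states of one neuron in ±1.
    Couplings screened out by the l1 penalty come back as exact zeros.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        m = y * (X @ w)                                            # margins
        grad = -(X * (y / (1 + np.exp(m)))[:, None]).mean(axis=0)  # logistic-loss gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)     # soft threshold
    return w

rng = np.random.default_rng(1)
N, T = 5, 4000
J0 = np.array([0.0, 1.0, -1.0, 0.0, 0.0])   # toy incoming couplings of neuron 0
S = rng.choice([-1, 1], size=(T, N))        # previous states s(t)
H = S @ J0                                  # field on neuron 0
s_next = np.where(rng.random(T) < 1 / (1 + np.exp(-2 * H)), 1, -1)

w = l1_logistic(S, s_next, lam=0.02)
J_hat = w / 2   # with ±1 inputs, logistic weights map to Ising couplings as J = w/2
```

Sweeping `lam` plays the same role as sweeping the regularization coefficient when tracing the ROC curves in Figure 9.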
Figure 9:

ROC curves by the proposed and $ℓ1$ regularization methods for different data sizes in the Izhikevich model. The error bars represent the standard deviations over five simulations.


### 4.4  Goodness of Fit of the Kinetic Ising Model

We investigate the goodness of fit achieved by the kinetic Ising model. Although we used the mean-field (MF) approximation as the inference algorithm in section 3, here we use the maximum likelihood (ML) method to evaluate the expressive ability of the kinetic Ising model (the generalized linear model) itself and to assess the accuracy of the MF approximation. To perform the ML method, we use the glmfit package in Matlab. This package represents the spike train data as $\sigma_i \in \{0, 1\}$, where 1 indicates the presence of a spike and 0 its absence. We then rephrase the result in the Ising description $s_i = \pm 1$ through a simple transformation of the variables. To evaluate the statistics of the kinetic Ising model, we generate binary data using the transition probability conditioned on the original data at the previous time step.
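The rephrasing between the $\{0,1\}$ GLM parametrization and the $\pm 1$ Ising description can be made explicit. Writing the GLM transition as $P(\sigma_i' = 1) = \mathrm{sigmoid}(b_i + \sum_j W_{ij}\sigma_j)$ and the kinetic Ising transition as $P(s_i' = +1) = \mathrm{sigmoid}(2(\theta_i + \sum_j J_{ij} s_j))$, the substitution $\sigma = (1+s)/2$ gives $J_{ij} = W_{ij}/4$ and $\theta_i = b_i/2 + \sum_j W_{ij}/4$. The sketch below, our own illustration with random toy parameters, checks this identity numerically.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(2)
N = 4
b = rng.normal(size=N)          # GLM bias for sigma in {0, 1}
W = rng.normal(size=(N, N))     # GLM weights acting on sigma in {0, 1}

theta = b / 2 + W.sum(axis=1) / 4    # Ising fields
J = W / 4                            # Ising couplings

sigma = rng.integers(0, 2, size=N)   # a previous state in {0, 1}
s = 2 * sigma - 1                    # the same state in ±1

p_glm = sigmoid(b + W @ sigma)             # P(sigma_i(t+1) = 1) under the GLM
p_ising = sigmoid(2 * (theta + J @ s))     # P(s_i(t+1) = +1) under the kinetic Ising model
# The two transition probabilities coincide for every state.
```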

To study the goodness of fit of the model, we conduct cross-validations for the synthetic and experimental data. We separate the spike trains into two parts of equal length: one is used to infer the couplings and the other to evaluate the statistics generated by the model. Using the data coarse-grained with the optimal time bins in the Izhikevich model, we calculate the mean and the different-time correlation and compare them with those of the data generated by the kinetic Ising model. In Figures 10a and 10b, the red crosses show the means and correlations of the original data from the Izhikevich model versus those of the data generated by the kinetic Ising model with the ML parameters. Good consistency is observed between the two sets of statistics, which confirms that the kinetic Ising model can almost completely recover the data's one-step transition statistics when the ML method is used. The counterparts for the MF approximation are shown by the blue x points. Although the MF inference loses finer statistical properties compared to the ML method, especially in the correlations, the sign property is retained. We interpret this as a source of the accurate discrimination of the excitatory, inhibitory, and absent couplings demonstrated in the Results section.
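The evaluation protocol, drawing one-step model samples conditioned on the held-out data's previous states and comparing means and different-time correlations, can be sketched as follows. The helper names and toy parameters are hypothetical; real use would substitute the inferred $\theta$ and $J$ and the held-out binarized data.

```python
import numpy as np

def one_step_stats(S_prev, S_next):
    """Mean of the next states and the different-time correlation <s_i(t+1) s_j(t)>."""
    return S_next.mean(axis=0), (S_next.T @ S_prev) / len(S_prev)

def model_samples(S_prev, theta, J, rng):
    """Draw s(t+1) from the kinetic Ising transition conditioned on the data's s(t)."""
    H = theta + S_prev @ J.T                 # fields H_i(t) = theta_i + sum_j J_ij s_j(t)
    p_up = 1 / (1 + np.exp(-2 * H))          # P(s_i(t+1) = +1 | s(t))
    return np.where(rng.random(H.shape) < p_up, 1, -1)

rng = np.random.default_rng(3)
N, T = 3, 20000
theta = np.zeros(N)
J = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0, 0.0]])

S = rng.choice([-1, 1], size=(T, N))              # stand-in for held-out previous states
S_data_next = model_samples(S, theta, J, rng)     # stand-in for the observed next states
S_model_next = model_samples(S, theta, J, rng)    # samples from the fitted model

m_data, C_data = one_step_stats(S, S_data_next)
m_model, C_model = one_step_stats(S, S_model_next)
# Scattering m_data vs. m_model and C_data vs. C_model assesses the fit as in Figure 10.
```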
Figure 10:

Statistical reconstruction of the synthetic data using the kinetic Ising model. (a) Means and (b) different time correlations obtained from the original data in the Izhikevich model versus those from the data generated by the kinetic Ising model using parameters obtained from the maximum likelihood of the red cross points and the mean-field inverse formula of the blue x's.


Although the direct ML inference requires a much greater computational cost than the MF method, we can also apply our screening method to the ML estimator. The result is shown in Figure 11. We observe no significant difference between this and Figure 2d. Moreover, we quantify the accuracy and computational cost of the ML and MF inferences in Table 1. The computational cost is measured for the screening process with 1000 surrogate data sets. We conclude that the MF inference drastically reduces the computational cost while maintaining the inference accuracy relative to the ML inference.
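The screening step applied here can be sketched generically. The code below, our own illustration, uses a simple one-step cross-covariance as a stand-in estimator (any estimator, MF or ML, can be plugged in); each surrogate shuffles every neuron's binary sequence along the time direction, which preserves firing rates while destroying couplings, and only couplings whose magnitudes exceed the surrogate null are kept.

```python
import numpy as np

def estimator(S):
    """Stand-in coupling estimator: one-step cross-covariance D_ij ~ <s_i(t+1) s_j(t)>.
    Any estimator (mean-field, maximum likelihood, ...) can be substituted here."""
    A, B = S[1:].astype(float), S[:-1].astype(float)
    return (A - A.mean(0)).T @ (B - B.mean(0)) / (len(A) - 1)

def screen(S, n_surr=200, alpha=0.01, rng=None):
    """Keep J_ij only if |J_ij| exceeds the (1 - alpha) quantile of estimates from
    time-shuffled surrogates, which keep firing rates but destroy couplings."""
    if rng is None:
        rng = np.random.default_rng()
    J = estimator(S)
    null = np.empty((n_surr,) + J.shape)
    for k in range(n_surr):
        S_sh = np.column_stack([rng.permutation(S[:, i]) for i in range(S.shape[1])])
        null[k] = np.abs(estimator(S_sh))
    thresh = np.quantile(null, 1 - alpha, axis=0)
    return np.where(np.abs(J) > thresh, J, 0.0)

rng = np.random.default_rng(4)
N, T = 3, 5000
J_true = np.zeros((N, N)); J_true[0, 1] = 0.8   # one true excitatory coupling
S = np.empty((T, N), dtype=int); S[0] = 1
for t in range(T - 1):                           # kinetic Ising dynamics
    p_up = 1 / (1 + np.exp(-2 * (J_true @ S[t])))
    S[t + 1] = np.where(rng.random(N) < p_up, 1, -1)

# The true (0, 1) coupling survives clearly; null entries pass only at rate ~alpha.
J_screened = screen(S, rng=rng)
```

Increasing `n_surr` (the letter uses 1000 surrogates) sharpens the null quantile at a proportional computational cost.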
Figure 11:

Inferred couplings generated by the maximum likelihood method. The conditions are the same as those in Figure 2d, except for the algorithm.


Table 1:
Accuracy and Computational Cost of the Inference of the Mean-Field Approximation and Direct Maximization of the Likelihood in the Izhikevich Model.

| Methods | Correct ratio: Existence | Absence | Excitatory | Inhibitory | Time (s) |
| --- | --- | --- | --- | --- | --- |
| Mean field | $0.9993 \pm 0.0015$ (s.d.) | $0.9979 \pm 0.0006$ | $1 \pm 0$ | $0.9933 \pm 0.0149$ | 55,344 |
| Maximum likelihood | 0.9944 | | | | 529,772 |
Additionally, we evaluate the goodness of fit for the experimental data, shown in Figure 12. Although the finer statistics are not recovered, especially in the MF correlations, most of them retain the tendencies of the sign patterns, similar to what is observed in the Izhikevich model.
Figure 12:

Statistical reconstruction of the experimental data using the kinetic Ising model. (a) Means and (b) different time correlations from the original experimental data versus those from the data generated by the kinetic Ising model using parameters obtained from the maximum likelihood of the red cross points and the mean-field inverse formula of the blue x's.


### 4.5  Fundamental Limitation by Dominant Long-Range Collective Modes

Finally, we note that the presence of dominant long-range modes in a system can hide local statistics and degrade the inference accuracy. As recently reported in Das and Fiete (2019), most inference methods fail to reconstruct neuronal couplings in the presence of such long-range modes. Indeed, if such modes are present in the data and contaminate the statistics directly related to the couplings, it is very difficult to infer the connectivity. However, using simple statistical quantities, we can assess the presence or absence of these long-range modes, allowing us to avoid such situations in advance. We use a recurrent network model introduced by Das and Fiete (2019) to demonstrate this.

We consider an extreme case with strong couplings, corresponding to the $r = 0.025$ case in Das and Fiete (2019). We set the same model parameters as described there and use $T = 2000\,\mathrm{s}$ of data averaged over time to evaluate the statistical quantities. The spike trains during $500\,\mathrm{ms}$ in the stationary state are shown in Figure 13a. We observe a macroscopic order in the dynamics, induced by the Mexican-hat-like inhibitory couplings. Das and Fiete demonstrated that this type of dynamics masks the statistics generated by local couplings. Therefore, we would like to identify these situations in advance and avoid the possibility of unexpected failure. To detect possible failure, we examine the eigenvalue spectrum of the covariance matrix $C_{ij} = \langle s_i(t) s_j(t) \rangle - \langle s_i(t) \rangle \langle s_j(t) \rangle$, as in principal component analysis. After coarse-graining the spike trains with the optimal bin size $\Delta\tau_{\mathrm{opt}}$ ($10\,\mathrm{ms}$ for the Das-Fiete system), we calculate the eigenvalues of the covariance matrices for the Das-Fiete system, the Izhikevich chain system, the Izhikevich dense random network with $q = 0.9$, and the cultured system, which are shown in Figure 13b. The data used in the latter three cases are the same as above. In the Das-Fiete system, the maximum eigenvalue (the first principal component) is fairly large, $\sim O(10)$, whereas the Izhikevich and in vitro systems show reasonable $\sim O(1)$ values. The eigenvectors for the five largest eigenvalues are shown in Figure 13c. In the Das-Fiete system we observe long-range modes, whereas they are not present in the dominant modes of the other three cases. To characterize the eigenmode profiles, we introduce the inverse participation ratio (IPR), which is defined for each mode $i$ by
$$\mathrm{IPR}_i = \frac{\sum_{j=1}^{N} \left(v_j^i\right)^4}{\left[\sum_{j=1}^{N} \left(v_j^i\right)^2\right]^2}, \tag{4.1}$$
where $v_j^i$ denotes the $j$th element of the eigenvector of mode $i$. The IPR quantifies the profile of the eigenmode: a large (small) value implies a short-range (long-range) mode. The IPRs for the four cases are shown in Figure 13d. To discriminate whether the dominant modes have long-range features, we calculate the weighted IPR average as
$$\langle \mathrm{IPR} \rangle = \frac{\sum_{i=1}^{N} \lambda_i \, \mathrm{IPR}_i}{\sum_{i=1}^{N} \lambda_i}, \tag{4.2}$$
where $\lambda_i$ is the eigenvalue of mode $i$. The result is shown in Figure 14. It implies that the Das-Fiete system has dominant long-range modes as characterized by $\langle \mathrm{IPR} \rangle$, whereas the others, even with dense networks, do not. Therefore, $\langle \mathrm{IPR} \rangle$ can be used to discriminate situations unfavorable for coupling inference owing to dominant long-range modes.
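Equations 4.1 and 4.2 translate directly into code. The sketch below, our own illustration, computes the weighted IPR average from a covariance matrix and contrasts a purely local covariance with one carrying a dominant uniform (long-range) mode.

```python
import numpy as np

def weighted_ipr(C):
    """Weighted IPR average (equation 4.2) of a covariance matrix C.

    For binarized data S of shape (T, N), take C = np.cov(S, rowvar=False).
    Small values flag dominant long-range (delocalized) modes."""
    lam, V = np.linalg.eigh(C)                            # columns of V are eigenvectors
    ipr = (V**4).sum(axis=0) / (V**2).sum(axis=0) ** 2    # equation 4.1, one IPR per mode
    return float((lam * ipr).sum() / lam.sum())

N = 50
C_local = np.diag(np.linspace(1.0, 2.0, N))        # purely local modes: IPR = 1 for each
u = np.ones(N) / np.sqrt(N)
C_global = C_local + 10.0 * np.outer(u, u)         # adds a dominant uniform long-range mode
# The long-range mode drags the weighted IPR average down, as in the Das-Fiete system.
```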
Figure 13:

(a) Spike trains in the Das-Fiete system. The coupling strength is assumed to be large enough for macroscopic order to emerge. (b) Eigenvalue spectra of the covariance matrices for the Das-Fiete system, the Izhikevich system on the chain, the Izhikevich system on the dense random network with $q = 0.9$, and the cultured system. The inset shows the latter three cases. (c,d) Eigenvectors and IPRs from equation 4.1. (c) Eigenvectors for the five largest eigenvalues (top: the largest; bottom: the fifth largest) and (d) IPRs, where the indices are ordered so that the eigenvalues are descending.


Figure 14:

Weighted IPR averages from equation 4.2 for the Das-Fiete system, the Izhikevich systems on the chain and the dense network, and the cultured system. The error bars indicate the standard deviations of five independent simulations for the first three models and five different durations for the in vitro system.


## 5  Conclusion

In this letter, we proposed systematic methods, equipped with objective criteria, for processing spike time series data in the inverse problem of statistical inference models. The first method appropriately discretizes the raw data with time bins; it was shown to be more effective than the conventional cross-correlation method in section 4.1. The second method effectively screens the couplings obtained as the solution of the inverse problem; in section 4.3, we showed that it can be accelerated using the approximation of Terada, Obuchi, Isomura, and Kabashima (2018) while maintaining inference accuracy comparable to that of the $\ell_1$ regularization. We showed that these methods work well on simulated and in vitro neuronal networks. These results highlight that the proposed inference procedure, with the kinetic asymmetric Ising model, is quite effective and has the potential to infer actual connectivity, including neuronal excitatory and inhibitory properties. We stress that the pre- and postprocessing methods described here can be used with other types of generalized linear models.

We note that strong long-range modes may hide local statistics; in such cases, we cannot reconstruct synaptic connectivity from the observed spiking data, as reported in Das and Fiete (2019). However, as shown in section 4.5, we can detect such long-range modes from the statistics of the neuronal activity. The weighted average of the inverse participation ratios of the covariance matrix can be used to discriminate such conditions, and this check may be useful before applying an inference procedure to neural data.

We also note two potential directions for future work. First, it is important to develop inference methods that handle hidden neurons, since we often cannot access the entire relevant population (Battistin, Dunn, & Roudi, 2017). Statistical-mechanical analyses have been carried out for symmetric (Huang, 2015) and asymmetric coupling cases (Dunn & Roudi, 2013; Battistin, Hertz, Tyrcha, & Roudi, 2015), so it would be interesting to investigate whether our methods remain effective in those cases. Second, although our treatment ignores the time dependence of the external force, it becomes important in many experimental settings where neurons are exposed to nonstationary input (Tyrcha, Roudi, Marsili, & Hertz, 2013). Hence, it would be interesting to generalize the methods presented in this letter and apply them to such cases with nonstationary inputs.

## Acknowledgments

We thank Abhranil Das and Ila R. Fiete for providing us with a sample code of their work (Das & Fiete, 2019). We are grateful to Yasser Roudi and John Hertz for fruitful discussion. This work was supported by the Special Postdoctoral Research Program at RIKEN (Y.T.), MEXT KAKENHI grant numbers 17H00764 (Y.T., T.O., and Y.K.), 18K11463 (T.O.), 19H01812 (T.O.), and 19K20365 (Y.T.), and JST CREST grant number JPMJCR1912 (T.O. and Y.K.). T.O. is also supported by a Grant for Basic Science Research Projects from the Sumitomo Foundation.

## References

Aru, J., Aru, J., Priesemann, V., Wibral, M., Lana, L., Pipa, G., … Vicente, R. (2015). Untangling cross-frequency coupling in neuroscience. Current Opinion in Neurobiology, 31, 51–61.

Aurell, E., & Ekeberg, M. (2012). Inverse Ising inference using all the data. Physical Review Letters, 108, 090201.

Battistin, C., Dunn, B., & Roudi, Y. (2017). Learning with unknowns: Analyzing biological data in the presence of hidden variables. Current Opinion in Systems Biology, 1, 122–128.

Battistin, C., Hertz, J., Tyrcha, J., & Roudi, Y. (2015). Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins. Journal of Statistical Mechanics: Theory and Experiment, 2015, P05021.

Bialek, W., & van Steveninck, R. R. (2005). Features and dimensions: Motion estimation in fly vision. arXiv:q-bio/0505003.

Brown, E. N., Kass, R. E., & Mitra, P. P. (2004). Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience, 7, 456–461.

Bulso, N., Marsili, M., & Roudi, Y. (2016). Sparse model selection in the highly undersampled regime. Journal of Statistical Mechanics: Theory and Experiment, 2016(9), 093404.

Buzsáki, G. (2004). Large-scale recording of neuronal ensembles. Nature Neuroscience, 7, 446–451.

Capone, C., Filosa, C., Gigante, G., Ricci-Tersenghi, F., & Del Giudice, P. (2015). Inferring synaptic structure in presence of neural interaction time scales. PLOS One, 10(3), e0118412.

Cheng, A., Gonçalves, J. T., Golshani, P., Arisaka, K., & Portera-Cailliau, C. (2011). Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing. Nature Methods, 8, 139–142.

Churchland, M. M., Yu, B. M., Sahani, M., & Shenoy, K. V. (2007). Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology, 17, 609–618.

Cocco, S., Leibler, S., & Monasson, R. (2009). Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proceedings of the National Academy of Sciences, 106(33), 14058–14062.

Cox, D. R. (1958). The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological), 20(2), 215–242.

Cunningham, J. P., & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17, 1500–1509.

Dahmen, D., Grün, S., Diesmann, M., & Helias, M. (2019). Second type of criticality in the brain uncovers rich multiple-neuron dynamics. Proceedings of the National Academy of Sciences, 116(26), 13051–13060.

Das, A., & Fiete, I. R. (2019). Systematic errors in connectivity inferred from activity in strongly coupled recurrent circuits. bioRxiv:512053.

Decelle, A., & Zhang, P. (2015). Inference of the sparse kinetic Ising model using the decimation method. Physical Review E, 91(5), 052136.

Dunn, B., Mørreaunet, M., & Roudi, Y. (2015). Correlations and functional connections in a population of grid cells. PLOS Computational Biology, 11(2), e1004052.

Dunn, B., & Roudi, Y. (2013). Learning and inference in a nonequilibrium Ising model with hidden nodes. Physical Review E, 87, 022127.

Field, G. D., Gauthier, J. L., Sher, A., Greschner, M., Machado, T. A., Jepson, L. H., … Chichilnisky, E. J. (2010). Functional connectivity in the retina at the resolution of photoreceptors. Nature, 467, 673–677.

Grewe, B. F., Langer, D., Kasper, H., Kampa, B. M., & Helmchen, F. (2010). High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nature Methods, 7(5), 399–405.

Grosmark, A. D., & Buzsáki, G. (2016). Diversity in neural firing dynamics supports both rigid and learned hippocampal sequences. Science, 351(6280), 1440–1443.

Huang, H. (2015). Effects of hidden nodes on network structure inference. Journal of Physics A: Mathematical and Theoretical, 48, 355002.

Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, I., Ferster, D., & Yuste, R. (2004). Synfire chains and cortical songs: Temporal modules of cortical activity. Science, 304(5670), 559–564.

Isomura, T., Shimba, K., Takayama, Y., Takeuchi, A., Kotani, K., & Jimbo, Y. (2015). Signal transfer within a cultured asymmetric cortical neuron circuit. Journal of Neural Engineering, 12(6), 066023.

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572.

Kappen, H. J., & Rodríguez, F. d. B. (1998). Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10(5), 1137–1156.

Kobayashi, R., Kurita, S., Kurth, A., Kitano, K., Mizuseki, K., Diesmann, M., … Shinomoto, S. (2019). Reconstructing neuronal circuitry from parallel spike trains. Nature Communications, 10, 4468.

Kullback, S. (1997). Information theory and statistics. New York: Dover.

Lezon, T. R., Banavar, J. R., Cieplak, M., Maritan, A., & Fedoroff, N. V. (2006). Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns. Proceedings of the National Academy of Sciences, 103(50), 19033–19038.

Maass, W. (2016). Searching for principles of brain computations. Current Opinion in Behavioral Sciences, 11, 81–92.

Marom, S., & Shahaf, G. (2002). Development, learning and memory in large random networks of cortical neurons: Lessons beyond anatomy. Quarterly Reviews of Biophysics, 35(1), 63–87.

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115–133.

Mézard, M., & Sakellariou, J. (2011). Exact mean-field inference in asymmetric kinetic Ising systems. Journal of Statistical Mechanics: Theory and Experiment, 2011(7), L07001.

Ohiorhenuan, I. E., Mechler, F., Purpura, K. P., Schmid, A. M., Hu, Q., & Victor, J. D. (2010). Sparse coding and high-order correlations in fine-scale cortical networks. Nature, 466(7306), 617.

Okatan, M., Wilson, M. A., & Brown, E. N. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17, 1927–1961.

Ostojic, S., Brunel, N., & Hakim, V. (2009). How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. Journal of Neuroscience, 29(33), 10234–10253.

Paninski, L., & Cunningham, J. P. (2018). Neural data science: Accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Current Opinion in Neurobiology, 50, 232–241.

Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., & Simoncelli, E. P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454, 995–999.

Roudi, Y., Dunn, B., & Hertz, J. (2015). Multi-neuronal activity and functional connectivity in cell assemblies. Current Opinion in Neurobiology, 32, 38–44.

Roudi, Y., & Hertz, J. (2011). Mean field theory for nonequilibrium network reconstruction. Physical Review Letters, 106, 048702.

Roudi, Y., Tyrcha, J., & Hertz, J. (2009). Ising model for neural data: Model quality and approximate methods for extracting functional connectivity. Physical Review E, 79, 051915.

Sakellariou, J., Roudi, Y., Mézard, M., & Hertz, J. (2012). Effect of coupling asymmetry on mean-field solutions of the direct and inverse Sherrington–Kirkpatrick model. Philosophical Magazine, 92(1–3), 272–279.

Schneidman, E., Berry, M. J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007–1012.

Schwartz, O., Pillow, J. W., Rust, N. C., & Simoncelli, E. P. (2006). Spike-triggered neural characterization. Journal of Vision, 6(4), 13.

Sessak, V., & Monasson, R. (2009). Small-correlation expansions for the inverse Ising problem. Journal of Physics A: Mathematical and Theoretical, 42(5), 055001.

Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher, A., … Chichilnisky, E. J. (2006). The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26(32), 8254–8266.

Sokal, R. R., & Rohlf, F. J. (2011). Biometry: The principles and practice of statistics in biological research. San Francisco: Freeman.

Steinmetz, N. A., Koch, C., Harris, K. D., & Carandini, M. (2018). Challenges and opportunities for large-scale electrophysiology with Neuropixels probes. Current Opinion in Neurobiology, 50, 92–100.

Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M., & Harris, K. D. (2019). High-dimensional geometry of population responses in visual cortex. Nature, 571, 361–365.

Takekawa, T., Isomura, Y., & Fukai, T. (2010). Accurate spike sorting for multiunit recordings. European Journal of Neuroscience, 31(2), 263–272.

Tanaka, T. (1998). Mean-field theory of Boltzmann machine learning. Physical Review E, 58, 2302.

Tang, A., Jackson, D., Hobbs, J., Chen, W., Smith, J. L., Patel, H., … Beggs, J. M. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. Journal of Neuroscience, 28(2), 505–518.

Terada, Y., Obuchi, T., Isomura, T., & Kabashima, Y. (2018). Objective and efficient inference for couplings in neuronal networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems, 31 (pp. 4971–4980). Red Hook, NY: Curran.

Tyrcha, J., Roudi, Y., Marsili, M., & Hertz, J. (2013). The effect of nonstationarity on models inferred from neural data. Journal of Statistical Mechanics: Theory and Experiment, 2013(3), P03005.

Weigt, M., White, R. A., Szurmant, H., Hoch, J. A., & Hwa, T. (2009). Identification of direct residue contacts in protein–protein interaction by message passing. Proceedings of the National Academy of Sciences, 106(1), 67–72.

Xu, Y., Aurell, E., Corander, J., & Kabashima, Y. (2017). Statistical properties of interaction parameter estimates in direct coupling analysis. arXiv:1704.01459.

Xu, Y., Puranen, S., Corander, J., & Kabashima, Y. (2018). Inverse finite-size scaling for high-dimensional significance analysis. Physical Review E, 97, 062112.

Yuste, R. (2015). From the neuron doctrine to neural networks. Nature Reviews Neuroscience, 16(8), 487–497.

Zeng, H. L., Alava, M., Aurell, E., Hertz, J., & Roudi, Y. (2013). Maximum likelihood reconstruction for Ising models with asynchronous updates. Physical Review Letters, 110, 210601.

Zeng, H. L., Aurell, E., Alava, M., & Mahmoudi, H. (2011). Network inference using asynchronously updated kinetic Ising model. Physical Review E, 83, 041135.

Zeng, H. L., Hertz, J., & Roudi, Y. (2014). L1 regularization for reconstruction of a non-equilibrium Ising model. Physica Scripta, 89, 105002.