## Abstract

The investigation of neural interactions is crucial for understanding information processing in the brain. Recently an analysis method based on information geometry (IG) has gained increased attention, and the property of the pairwise IG measure has been studied extensively in relation to the two-neuron interaction. However, little is known about the property of IG measures involving more neuronal interactions. In this study, we systematically investigated the influence of external inputs and the asymmetry of connections on the IG measures in cases ranging from 1-neuron to 10-neuron interactions. First, the analytical relationship between the IG measures and external inputs was derived for a network of 10 neurons with uniform connections. Our results confirmed that the single and pairwise IG measures were good estimators of the mean background input and of the sum of the connection weights, respectively. For the IG measures involving 3 to 10 neuronal interactions, we found that the influence of external inputs was highly nonlinear. Second, by computer simulation, we extended our analytical results to asymmetric connections. For a network of 10 neurons, the simulation showed that the behavior of the IG measures in relation to external inputs was similar to the analytical solution obtained for a uniformly connected network. When the network size was increased to 1000 neurons, the influence of external inputs almost disappeared. This result suggests that all IG measures from 1-neuron to 10-neuron interactions are robust against the influence of external inputs. In addition, we investigated how the strength of asymmetry influenced the IG measures. Computer simulation of a 1000-neuron network showed that all the IG measures were robust against the modulation of the asymmetry of connections. Our results provide further support for an information-geometric approach and will provide useful insights when these IG measures are applied to real experimental spike data.

## 1. Introduction

The interaction between neurons plays a key role in information processing in the brain. A number of attempts at understanding the contribution of correlations to information processing have been made by studying pairwise and higher-order neural correlations (Gerstein & Perkel, 1969; Abeles & Gerstein, 1988; Aertsen, Gerstein, Habib, & Palm, 1989; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998; Panzeri & Schultz, 2001; Grün, Diesmann, & Aertsen, 2002a, 2002b; Brown, Kass, & Mitra, 2004; Fellous, Tiesinga, Thomas, & Sejnowski, 2004; Czanner, Grün, & Iyengar, 2005; Shimazaki & Shinomoto, 2007; Amari, 2009; Peyrache, Benchenane, Khamassi, Wiener, & Battaglia, 2009; Shimokawa & Shinomoto, 2009; Lopes-dos-Santos, Conde-Ocazionez, Nicolelis, Ribeiro, & Tort, 2011). Recently information geometry (IG) has provided an information-theoretic approach based on differential geometry and has been used as a powerful tool for analyzing neuronal activity patterns (Amari & Nagaoka, 2000; Amari, 2001; Nakahara & Amari, 2002; Amari, Nakahara, Wu, & Sakai, 2003; Tatsuno & Okada, 2004; Eleuteri, Tagliaferri, & Milano, 2005; Ikeda, 2005; Miura, Okada, & Amari, 2006; Nakahara, Amari, & Richmond, 2006; Tatsuno, Fellous, & Amari, 2009; Ince et al., 2010; Ohiorhenuan & Victor, 2011; Nie & Tatsuno, 2012; Shimazaki, Amari, Brown, & Grün, 2012). The advantages of the IG approach include an orthogonal decomposition of higher-order interactions (Amari, 2001; Nakahara & Amari, 2002; Amari, 2009) and the direct relationship between IG measures and connection weights (Tatsuno & Okada, 2004; Tatsuno et al., 2009; Nie & Tatsuno, 2012).

Many of the previous theoretical studies, including information geometry, have focused on the pairwise interaction or relatively low orders of interactions. However, since the brain may process information with highly coordinated neural activity, the development of a correlation measure that is capable of estimating interactions with more neurons is important. The IG measures are ideal for this purpose because they can be extended to higher-order interactions in a straightforward manner (Amari, 2001). However, a systematic investigation of the relationship between different orders of IG measures and their dependency on network parameters has not yet been conducted. In this study, we investigated how the IG measures derived from up to 10-neuronal interactions were influenced by a correlated input, a background input, and the asymmetry of connections. First, we derived the analytical relationship between the IG measures and external inputs using a network of 10 neurons that were connected by uniform weights. Second, we extended our investigation to an asymmetrically connected neural network by computer simulation. We investigated how the IG measures were influenced by external inputs and the level of asymmetry of connections.

This study is organized as follows. In section 2, we introduce information geometry, a model network, and a recursive formula for analytically calculating the IG measures. In section 3, we describe the analytical relationship between the IG measures and the external inputs for 10 neurons that are uniformly connected. In section 4, we show the numerical results for an asymmetrically connected neural network of up to 1000 neurons. Finally, in section 5, we summarize the results, discuss the limitations of this work, and propose directions for future studies.

## 2. Information Geometry, Model Network, and a Recursive Formula for Analytically Calculating the IG Measures

### 2.1. Information Geometry

Let *x*_{i} be a binary variable that represents the state of the *i*th neuron in cases where it is silent (*x*_{i}=0) or produces a spike (*x*_{i}=1). Let *p*(*x*_{1}, …, *x*_{N}) be the probability distribution of an *N*-neuron system, where we assume *p*>0. The full *N*th-order log-linear model (LLM) of an *N*-neuron system is given by

log *p*(*x*_{1}, …, *x*_{N}) = ∑_{i} θ_{i}*x*_{i} + ∑_{i<j} θ_{ij}*x*_{i}*x*_{j} + ∑_{i<j<k} θ_{ijk}*x*_{i}*x*_{j}*x*_{k} + ⋯ + θ_{12⋯N}*x*_{1}*x*_{2}⋯*x*_{N} − ψ,

where θ_{i₁⋯i_m}^{(N, N)} represents the *m*-neuron interaction and ψ^{(N, N)} is the normalizing factor so that the probabilities sum to 1 (Amari & Nagaoka, 2000). The first and second superscripts represent the order of LLM and the number of neurons in the system, respectively. We call θ_{i₁⋯i_m}^{(N, N)} the *m*-neuron IG measure of the fully expanded LLM (Tatsuno & Okada, 2004). For simplicity, we also refer to the one-neuron IG measure as the single IG measure and the two-neuron IG measure as the pairwise IG measure. The first few IG measures and the normalizing factor are expressed in terms of the probabilities of firing patterns (equation 2.2); for example,

θ_{i}^{(N, N)} = log(*p*_{i}/*p*_{0}),  θ_{ij}^{(N, N)} = log(*p*_{ij}*p*_{0}/(*p*_{i}*p*_{j})),  ψ^{(N, N)} = −log *p*_{0},

where *p*_{0} is the probability that all neurons are silent, *p*_{i} is the probability that only the *i*th neuron fires, and *p*_{ij} is the probability that only the *i*th and *j*th neurons fire.

In general terms, the partially expanded *k*th-order LLM of an *N*-neuron system is expressed by truncating the expansion at the *k*th-order terms and fitting it to the marginal distribution over *k* neurons (equation 2.4), where θ_{i₁⋯i_m}^{(k, N)} is the *m*-neuron IG measure of the partially expanded *k*th-order LLM. The first few terms and the normalizing factor take the same form as above (equation 2.5), with the marginalization taken over the (*N*−*k*) remaining neurons.

Both θ^{(N, N)} (the IG measure from the full model) and θ^{(k, N)} (the IG measure from the *k*th-order partial model) represent the *m*-neuron interactions. However, note the difference between them: θ^{(N, N)} is calculated from the full information of all *N* neurons. By contrast, θ^{(k, N)} is calculated from the partial information of *k* neurons by marginalizing over the (*N*−*k*) other neurons. It has been shown that θ^{(N, N)} is statistically orthogonal to any expectation parameter η_{i}, where η_{i} represents the expectation of *x*_{i}. On the other hand, θ^{(k, N)} is orthogonal to η_{i} for the *i* that are included in the *k* neurons (Amari, 2001; Nakahara & Amari, 2002).

The probability of a specific firing pattern can be written in terms of the marginal and coincident firing rates by inclusion–exclusion (equation 2.7), where an upper sign (lower sign) at the last term on the right-hand side is taken when *N* is an even (odd) number. Similarly, for the partially expanded *k*th-order IG measures, the corresponding formula (equation 2.8) holds, where an upper sign (lower sign) at the last term on the right-hand side is taken when *k* is an even (odd) number.

In summary, the probability of events *p*(*x*_{1}, …, *x*_{N}) can be calculated from the marginal and coincident firings using equation 2.7 for the full LLM and equation 2.8 for the partially expanded LLM. The IG measures for any order of neuronal interaction can then be calculated with equation 2.2 for the full LLM and with equation 2.5 for the partially expanded LLM. However, performing these calculations for large *N* is difficult. In addition, obtaining the relationship between the IG measures and network parameters such as external inputs for an arbitrary network structure is not straightforward. Therefore, in the analytical part of this study, we focused on a uniformly connected network of 10 neurons. Our goal was to obtain insight into how the IG measures derived from up to 10-neuron interactions were related to external inputs. We also expanded the study to include asymmetric connections and a network with more neurons through the use of computer simulation. In the next section, we describe the structure and dynamics of the neural network used in this study.
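As a concrete illustration of these definitions, the following Python sketch computes the IG measures of the fully expanded LLM by Möbius inversion of the log probabilities and the measures of a partially expanded model by first marginalizing over the (*N*−*k*) remaining neurons. This is a minimal sketch, not the paper's code: the function names and the toy three-neuron distribution are illustrative.

```python
import math
from itertools import product

def theta_full(p, subset, N):
    """IG measure of the fully expanded LLM for the neurons in `subset`,
    via Moebius inversion: sum over sub-subsets T of `subset` of
    (-1)^(|subset|-|T|) * log p(pattern with only T active)."""
    total = 0.0
    for mask in product([0, 1], repeat=len(subset)):
        active = [s for s, b in zip(subset, mask) if b]
        x = [0] * N
        for i in active:
            x[i] = 1
        total += (-1) ** (len(subset) - len(active)) * math.log(p[tuple(x)])
    return total

def theta_partial(p, subset, keep, N):
    """IG measure of the partially expanded kth-order LLM (k = len(keep)):
    marginalize the joint p over the N - k neurons outside `keep`,
    then apply the same inversion on the k-neuron marginal."""
    q = {}
    for x, px in p.items():
        key = tuple(x[i] for i in keep)
        q[key] = q.get(key, 0.0) + px
    pos = {i: j for j, i in enumerate(keep)}
    total = 0.0
    for mask in product([0, 1], repeat=len(subset)):
        active = [s for s, b in zip(subset, mask) if b]
        y = [0] * len(keep)
        for i in active:
            y[pos[i]] = 1
        total += (-1) ** (len(subset) - len(active)) * math.log(q[tuple(y)])
    return total

# Toy example: three independent neurons with firing probabilities r[i].
N = 3
r = [0.2, 0.3, 0.4]
p = {x: math.prod(r[i] if x[i] else 1 - r[i] for i in range(N))
     for x in product([0, 1], repeat=N)}

print(theta_full(p, [0], N))                # log(r0/(1-r0)); single IG measure
print(theta_full(p, [0, 1], N))             # 0 for independent neurons
print(theta_partial(p, [0, 1], [0, 1], N))  # pairwise measure of the k=2 model
```

For independent neurons, all interaction terms of order two and higher vanish in both the full and the partial models, which the example confirms.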

### 2.2. Model Network

#### 2.2.1. General Description

We consider a layer of *N* neurons, where a connection strength from a presynaptic neuron (*j*) to a postsynaptic neuron (*i*) is represented by *J*_{ij}. Each neuron in the layer receives a correlated input from a single upstream neuron *n*_{0} with a connection strength represented by *W*_{i0}. It also receives a background input *h*_{i}. The upstream neuron *n*_{0} receives a background input *h*_{0}. We assume that a background input is a random variable drawn from the normal distribution with mean *m* and variance σ². If we let *x*_{i}(*t*) be the state of the *i*th neuron at time *t*, the binary values 0 and 1 correspond to a quiescent and active state, respectively. Under these conditions, the total input to the *i*th neuron *n*_{i} in the layer and to the upstream neuron *n*_{0} is written as

*u*_{i}(*t*) = ∑_{j} *J*_{ij}*x*_{j}(*t*) + *W*_{i0}*x*_{0}(*t*) + *h*_{i},  *u*_{0}(*t*) = *h*_{0}.  (2.9)

The first term on the right-hand side of equation 2.9 represents inputs from the neurons in the same layer. The second and third terms on the right-hand side of equation 2.9 represent a correlated input from the upstream neuron *n*_{0} and uncorrelated background input, respectively.

The state of each neuron is updated stochastically according to its total input *u*_{i}. Following the work of Ginzburg and Sompolinsky (1994), we write the transition rate *w*_{i} between the binary states in terms of an activation function (equation 2.10), where τ is a microscopic characteristic time and *g*(*u*) is a monotonically increasing sigmoidal function whose value is bounded in the interval [0, 1]. The firing probability of a neuronal state variable is given by equation 2.12. Note that ⟨*x*_{i}(*t*)⟩ is the marginal probability of *x*_{i}(*t*) taking the value 1, while all the other *N*−1 neurons take arbitrary values (0 or 1). Similarly, the coincident firing of the *i*th and *j*th neurons, ⟨*x*_{i}(*t*)*x*_{j}(*t*)⟩, is expressed by equation 2.14, and the coincident firing of *N* neurons, ⟨*x*_{1}(*t*)⋯*x*_{N}(*t*)⟩, is written as equation 2.15. For mathematical clarity, we investigate neural interactions when the network is in the equilibrium state. Equations 2.12, 2.14, and 2.15 then reduce to their stationary forms; note that ⟨*x*_{i}⟩, ⟨*x*_{i}*x*_{j}⟩, and ⟨*x*_{1}⋯*x*_{N}⟩ do not depend on *t*.

#### 2.2.2. Simplified Network

Each neuron *n*_{i} in the layer receives a correlated input with a connection weight *W*_{i0} and a background input *h*_{i}. To help facilitate the analytical investigation, we set all recurrent connections to be equal (uniform): *J*_{ij} = *J*. In addition, for mathematical clarity, we assumed that a connection weight from the upstream neuron (*n*_{0}) to a neuron (*n*_{i}) is uniform (*W*_{i0} = *W*) and that the background input to a neuron (*n*_{i}) has the same mean (*h*) and variance (σ²). These assumptions simplify equation 2.9 as follows:

*u*_{i}(*t*) = *J* ∑_{j≠i} *x*_{j}(*t*) + *W* *x*_{0}(*t*) + *h*_{i},  *u*_{0}(*t*) = *h*_{0}.  (2.19)

For a sigmoidal activation function *g*(*u*), we used a logistic function (equation 2.20), where *m* is a parameter controlling the firing probability of a model neuron.

In the equilibrium limit, the influence of the background input is characterized by its mean value *h*. In the following section, we investigate how the strength of a correlated input (*W*) and the mean of a background input (*h*) influence the IG measures.
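The dynamics described above can be sketched as a short simulation. This is a minimal illustration under stated assumptions, not the paper's implementation: the exact logistic parameterization of *g*(*u*) and the asynchronous update scheme are assumed, and the transition-rate form of equation 2.10 is not reproduced.

```python
import math
import random

def g(u, m=1.0):
    # Assumed logistic activation; the paper's exact parameterization
    # (equation 2.20, with firing-probability parameter m) may differ.
    return 1.0 / (1.0 + math.exp(-(u - m)))

def simulate(N=10, J=0.1, W=0.0, h=0.0, h0=0.0, steps=100_000, seed=0):
    """Asynchronous stochastic simulation of the simplified network:
    uniform recurrent weights J, a correlated input W from a single
    upstream neuron x0, and a constant background input h (eq. 2.19)."""
    rng = random.Random(seed)
    x = [0] * N          # layer neurons
    x0 = 0               # upstream neuron
    counts = [0] * N     # accumulated spike counts per neuron
    for _ in range(steps):
        x0 = 1 if rng.random() < g(h0) else 0   # update the upstream neuron
        i = rng.randrange(N)                    # pick one layer neuron
        u = J * (sum(x) - x[i]) + W * x0 + h    # total input to neuron i
        x[i] = 1 if rng.random() < g(u) else 0
        for j, xj in enumerate(x):
            counts[j] += xj
    return [c / steps for c in counts]          # time-averaged firing rates

rates = simulate(W=0.5, h=0.1)
print(sum(rates) / len(rates))   # mean equilibrium firing probability
```

Time-averaged firing rates from such a run approximate the equilibrium marginals ⟨*x*_{i}⟩; coincident firings ⟨*x*_{i}*x*_{j}⟩ can be accumulated the same way.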

### 2.3. Derivation of System Equations in the Equilibrium Limit

#### 2.3.1. Two-Neuron System

Starting from the marginal firing of the upstream neuron, ⟨*x*_{0}⟩, we obtain equation 2.26. For the joint firings of two neurons, equation 2.17 yields equations 2.27 and 2.28. Considering the symmetry of the uniform network, equations 2.27 and 2.28 become identical. Therefore, equations 2.26, 2.27, and 2.28 reduce to two equations (2.29 and 2.30). For the coincident firing of three neurons, equation 2.18 translates to equation 2.31. Note that a property of binary variables (*x*² = *x*) was used from the first to the second lines on the right-hand side of the equation.

We now have five equations (2.24, 2.25, 2.29, 2.30, and 2.31) for five marginal and coincident firings. By solving these equations simultaneously, we represent the marginal and coincident firings in terms of the network parameters *J, W, h, h*_{0}, and *m*. We then use these quantities in equation 2.7 or 2.8 to obtain the probability of each firing pattern. Finally, the IG measures for the full LLM are calculated by using equation 2.2, and the IG measures for the partially expanded LLM are calculated by using equation 2.5.

In the following section, we use a simplified notation such as θ_{1}^{(k)} for θ_{i}^{(k, 10)} and θ_{12}^{(k, 10)} for θ_{ij}^{(k, 10)}, because all IG measures of the same order in the layer are identical due to the uniform connection assumption.

#### 2.3.2. Ten-Neuron System

The equations for a 10-neuron system can be obtained by expanding the procedure in the previous section. Therefore, we solved 21 equations simultaneously for the 21 marginal and coincident firings, from the first-order marginals up to the coincident firing of the upstream neuron and all 10 layer neurons. Since space does not allow us to write all 21 equations, we provide an equation for the first-order marginal as an example in the appendix.

For the uniformly connected network, equation 2.2 for the full LLM reduces to a sum over binomial coefficients, where *s* is an integer and the binomial coefficient counts the possible combinations of the first (2*s*+1) variables. Similarly, equation 2.5 for the partial LLM reduces to an analogous binomial form.

In the next section, we describe how *W* (the strength of a correlated input to neurons in the layer) and *h* (the mean of a background input to the neurons in the layer) influence the IG measures using a simplified 10-neuron network.

## 3. Analytical Study of IG Measures by Uniformly Connected Ten Neurons

In the analytical study in this section, we vary the strength of the correlated input *W* between 0 and 50*J*, where *J* is the strength of the intrinsic connection between neurons in a layer. *J* is set to 1/10 following the general scaling rule of *J*=1/*N*, where *N* is the number of neurons. The range of values is chosen to cover the strength of correlated inputs that could be observed in the brain. For example, the mossy fiber from the dentate gyrus to the CA3 region of the hippocampus is known to make very strong synaptic connections; this strength has been estimated to be 5- to 10-fold that of the intrinsic recurrent connections in CA3 (Urban, Henze, & Barrionuevo, 2001). Therefore, *W*=[0, 50*J*] is wide enough to cover the vast majority of correlated inputs that could be observed experimentally. The strength of the mean background input *h* is varied between 0 and 5*J*. The difference between the ranges of *W* and *h* comes from the different implementation of these inputs: while the correlated input *W* was modeled with an upstream neuron *n*_{0}, the background input *h* was implemented as a direct input to each neuron in a layer (see equation 2.19). This was done so that the model was consistent with previous studies (Ginzburg & Sompolinsky, 1994; Tatsuno & Okada, 2004; Tatsuno et al., 2009; Nie & Tatsuno, 2012). The parameter *m* that controls the firing probability of a model neuron in equation 2.24 was set to 1. This corresponds to a firing probability of approximately 0.15 when the network receives the weakest inputs (*W*=0, *h*=0) and approximately 0.64 when the network receives the maximum inputs (*W*=50*J*, *h*=5*J*).

In the following section, we summarize the results in four categories: the single IG measure θ_{1}^{(k)}, the pairwise IG measure θ_{12}^{(k, 10)}, the IG measures for 3- to 5-neuron interactions (θ_{123}^{(k, 10)}, θ_{1234}^{(k, 10)}, θ_{12345}^{(k, 10)}), and the IG measures for 6- to 10-neuron interactions (θ_{123456}^{(k, 10)} through θ_{12345678910}^{(k, 10)}).

### 3.1. The IG Measure for a Single Neuron Interaction, θ_{1}^{(k)}

In the absence of a correlated input (*W*=0), a previous study (Tatsuno et al., 2009) showed that θ_{1} can be related to an uncorrelated background input *h*_{i} (equation 3.1). Below, we investigate the influence of a correlated input *W* and of the mean of a background input *h* on θ_{1}^{(k)}, where *k* is systematically varied from 1 to 10. For a simplified 10-neuron network, θ_{i}^{(k, 10)} reduces to θ_{1}^{(k)}. Figure 2A shows how θ_{1}^{(k)} is influenced by a correlated input *W* in the absence of a background input *h* (data with a background input are not shown because the overall tendency is the same). The calculation shows that θ_{1}^{(k)} is initially linearly related to the strength of *W* but then becomes insensitive to it (asymptotic flat line). In addition, the influence of *W* decreased as the order of LLM increased. In contrast, we found that a background input *h* was linearly related to θ_{1}^{(k)} regardless of the existence of *W* (see Figure 2B for *W*=0; data for *W*>0 are not shown because the overall tendency was the same). Furthermore, the linear relationship between θ_{1}^{(k)} and *h* described in equation 3.1 holds more strongly for the higher-order LLM, the exact relationship being obtained at *k*=10 (full LLM; see Figure 2B). In summary, the analytical calculation shows that the single IG measure is not sensitive to the strength of a correlated input *W* but is linearly related to the strength of the background input *h*. In practice, this property could be useful for estimating the relative amount of background input that a neuron receives.

### 3.2. The IG Measure for a Two-neuron Interaction, θ_{12}^{(k, 10)}

In the absence of a correlated input (*W*=0), it has also been shown that the pairwise IG measure is directly related to the sum of connection weights, θ_{ij} ≈ *J*_{ij} + *J*_{ji} (Tatsuno et al., 2009). Furthermore, even under the influence of a correlated input *W*, it has been shown that the pairwise measure with the fourth- or fifth-order LLM is able to estimate the connection weight provided that the size of the network is sufficiently large (*N*=10^{3}−10^{4}) (Nie & Tatsuno, 2012).

We investigated the influence of a correlated input *W* and a background input *h* on θ_{12}^{(k, 10)}, where *k* was systematically varied from 2 to 10. When *W* was modified, θ_{12}^{(k, 10)} was affected, but to a lesser extent for higher-order *k* of the LLM (see Figure 2C). Interestingly, when a background input existed, θ_{12}^{(k, 10)} was less likely to be affected by the correlated input (data not shown). Note that 0.2 (2*J*=0.2) is the correct answer for estimating the sum of the connection weights (a horizontal dashed line). When *h* was modified, θ_{12}^{(k, 10)} was weakly affected when there was no correlated input (see Figure 2D). For the full LLM (*k*=10), θ_{12}^{(10, 10)} was completely independent of the modulation of *h*, providing the correct answer of 0.2 (2*J*=0.2, the horizontal dashed line). When a correlated input existed, the value of θ_{12}^{(k, 10)} was affected more severely, especially when the order of LLM *k* was low (data not shown).

In summary, the analysis shows that the pairwise IG measure is a good estimator of the sum of connection weights, even under the influence of both a correlated input *W* and a background input *h*. This is especially true if the order of LLM *k* is high. In practice, the pairwise measure with the highest-order LLM might not be easy to obtain because of the limited size of experimental data. However, as we previously discussed, the fourth- or fifth-order measure would provide a reasonable estimation of connection weights provided that the size of the network is large (e.g., *N*=1000; Nie & Tatsuno, 2012). Therefore, the pairwise IG measure could be a useful measure for estimating the sum of connection weights in electrophysiological recordings.
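For binned spike data, the second-order (*k*=2) pairwise measure can be estimated directly from the 2×2 contingency table of two binary spike trains, θ_{12} = log(*p*_{11}*p*_{00}/(*p*_{10}*p*_{01})). The sketch below is illustrative; the regularizing constant `eps` is an ad hoc choice to guard against empty cells, not a procedure from the paper.

```python
import math

def pairwise_theta(x1, x2, eps=0.5):
    """Estimate the pairwise IG measure of the second-order LLM (k=2)
    from two binary spike trains binned into the same time bins:
    theta_12 = log(n11 * n00 / (n10 * n01)), with an additive constant
    eps added to each cell to avoid division by zero."""
    n = {(a, b): eps for a in (0, 1) for b in (0, 1)}
    for a, b in zip(x1, x2):
        n[(a, b)] += 1
    return math.log(n[1, 1] * n[0, 0] / (n[1, 0] * n[0, 1]))

# Toy binned spike trains: mostly coincident firing gives a positive theta.
x1 = [1, 0, 1, 1, 0, 0, 1, 0]
x2 = [1, 0, 1, 0, 0, 1, 1, 0]
print(pairwise_theta(x1, x2))
```

Positive values indicate excess coincident firing relative to independence; negative values indicate anti-correlated firing.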

### 3.3. The IG Measures for Three- to Five-Neuron Interactions (θ_{123}^{(k, 10)}, θ_{1234}^{(k, 10)}, θ_{12345}^{(k, 10)})

Note that the IG measure for an *m*-neuron interaction is defined for LLM orders *k* = *m* to 10 and is indexed by its *m* variables.

The analytical results for θ_{123}^{(k, 10)}, θ_{1234}^{(k, 10)}, and θ_{12345}^{(k, 10)} are plotted in Figures 2E to 2J. When a correlated input *W* is 0, all the measures are zero regardless of the existence of a background input *h* (see Figures 2E, 2G, and 2I). Since the network reduces to a Hopfield-type network when *W*=0, this result is consistent with the finding that the energy function has terms only up to the second order. For *W*>0, the IG measures deviate from 0 because a nonzero *W* introduces higher-order interactions. The analytical calculation shows that θ_{123}^{(k, 10)} is affected monotonically by *W* (see Figure 2E), while θ_{1234}^{(k, 10)} and θ_{12345}^{(k, 10)} are influenced in a nonlinear manner (see Figures 2G and 2I). Interestingly, θ_{123}^{(k, 10)} was less affected by *W* if there was a background input *h* (data not shown), as was the case for the pairwise IG measure. This tendency was not obvious for the other two IG measures. For all the IG measures investigated here, the values approach zero when the order of LLM *k* increases. When a background input *h* is varied, the IG measures stay very close to 0 if there is no correlated input *W* (see Figures 2F, 2H, and 2J). However, when *W*>0, the IG measures are more strongly influenced (data not shown). The range of modulation for *W*>0 was approximately on the order of 0.1, almost 10 to 10^{3} times larger than when *W*=0. The values approach zero when the order of LLM *k* increases, suggesting that the IG measures calculated by higher-order LLM may be more robust to interference from a background input *h*.

In summary, the analysis shows that the IG measures for three- to five-neuron interactions are affected by a correlated input *W* in a highly nonlinear manner. The influence by a background input *h* was insignificantly small if the correlated input did not exist, but it increased significantly when the correlated input was present.

### 3.4. The IG Measures for Six- to Ten-Neuron Interactions (θ_{123456}^{(k, 10)}, θ_{1234567}^{(k, 10)}, θ_{12345678}^{(k, 10)}, θ_{123456789}^{(k, 10)}, θ_{12345678910}^{(k, 10)})

Note that the IG measure for a 9-neuron interaction has only two possible orders of LLM (*k*=9 and *k*=10), and the IG measure for a 10-neuron interaction has the full LLM (*k*=10) only.

The analytical results for θ_{123456}^{(k, 10)} through θ_{12345678910}^{(k, 10)} are shown in Figure 3. The general trend of the dependency of these measures on correlated and background inputs was similar to that of the three- to five-neuron measures. When *W*=0, all the measures are zero regardless of the existence of a background input *h* (see Figures 3A, 3C, 3E, 3G, and 3I). However, when *W*>0, especially when *W*>0.5 (=5*J*), the IG measures deviated from zero in a highly nonlinear manner. The values tended to approach zero when the order of LLM *k* increased, although the trend was less obvious than for the IG measures involving three to seven neurons. When a background input *h* is varied, the IG measures stay very close to zero if there is no correlated input *W* (see Figures 3B, 3D, 3F, 3H, and 3J). When *W*>0, the IG measures are more strongly influenced (data not shown). The range of modulation was almost 10^{3}- to 10^{5}-fold larger than when *W*=0. When the order of LLM *k* increased, the values became less variable.

In summary, this analysis shows that the IG measures for 6- to 10-neuron interactions are affected by a correlated input *W* in a highly nonlinear manner. The influence by a background input *h* was insignificantly small if *W*=0 but increased significantly for *W*>0.

## 4. Simulation Study of IG Measures with Asymmetric Connections

Although the analytical relationship between the IG measures and network parameters is useful, we had to apply a strong constraint of uniform connectivity between neurons. We also had to use a small network size of 10 neurons to obtain the analytical solutions. These constraints made it difficult to obtain further insights into a more general situation such as with asymmetric connections. Therefore, we extended our investigation using numerical computer simulation.

First, to demonstrate the accuracy of the simulation, we numerically calculated the IG measures for a uniformly connected 10-neuron network and compared them with the analytical results obtained in the previous section. Second, we extended the connections from uniform to asymmetric. We investigated how external inputs (correlated input *W* and background input *h*) influenced the IG measures and how the network size affected their relationship with network parameters. In addition, we investigated how the magnitude of the asymmetry of connection weights influenced the IG measures.

### 4.1. Comparison Between Computer Simulations and Analytical Results

We performed numerical simulations using ten uniformly connected Ginzburg and Sompolinsky (1994) neurons. We computed the IG measures from the 1-neuron interaction to the 10-neuron interaction with all possible LLM orders *k*, corresponding to Figures 2 and 3. We calculated the IG measures by sampling a correlated input *W* from 0 to 50*J* with an increment of 5*J*. We also calculated the IG measures by sampling a background input *h* from 0 to 5*J* with an increment of 0.5*J*. At each value of *W* and *h*, we performed 100 simulation trials, where each trial consisted of 10^{6} updates. The parameter *m* that controls the firing probability of a model neuron in equation 2.24 was set to 1. The results are reported as the mean ± SEM.

Figure 4 shows representative examples in which we compare the values of the numerical simulations and the corresponding analytical results. For clarity, we show the results only for the single IG measure (see Figures 4A and 4B), the pairwise IG measure (see Figures 4C and 4D), the 9-neuron IG measure (see Figures 4E and 4F), and the 10-neuron IG measure (see Figures 4G and 4H), and only for the lowest and highest LLM orders: *k*=1 and *k*=10 for the single IG measure, *k*=2 and *k*=10 for the pairwise IG measure, *k*=9 and *k*=10 for the 9-neuron IG measure, and *k*=10 for the 10-neuron IG measure. Figure 4 shows that the numerical simulations and analytical results strongly agree; all analytical results are included within the mean ± SEM of the values obtained with the numerical simulations. We also confirmed that the same relationship holds true for all the IG measures that were not included in Figure 4 and for all possible LLM orders. Taken together, these results demonstrate that the numerical simulation reproduces the analytical results accurately and that it can be used for investigating the relationship between the IG measures and network parameters in more general settings such as with asymmetric connections.

### 4.2. Relationship Between the IG Measures and External Inputs for Asymmetrically Connected Networks

In this section, we extended a uniformly connected neural network to an asymmetrically connected one. We numerically calculated the IG measures for up to 10-neuron interactions with *N*=10 and 1000 neurons. Asymmetric connections were drawn randomly at each simulation trial by perturbing the uniform weight 1/*N* with a random number drawn from a normal distribution with mean *m*=0. Without losing generality, we calculated the IG measures for a specific neuron group as follows. For the pairwise IG measure θ_{12}^{(k, N)}, we selected neurons 1 and 2 and set their connection weights so that the magnitude of their total connections was kept constant (*J*_{12}+*J*_{21}=2/*N*). We applied this constraint because we wanted to assess the robustness of the relationship θ_{12} ≈ *J*_{12}+*J*_{21}. The other connections were set randomly as described above. Similarly, for the three-neuron IG measure θ_{123}^{(k, N)}, we selected neurons 1, 2, and 3 and set their connection weights pairwise (*J*_{21}=2/*N*−*J*_{12}, *J*_{32}=2/*N*−*J*_{23}, and *J*_{13}=2/*N*−*J*_{31}). For clarity of the analysis, these constraints were applied to investigate the robustness of θ_{123}^{(k, N)}. The other connections were set randomly as described above. We used the same procedure for all the other IG measures with four or more neuronal interactions. The influence of a common input *W* was investigated in the range of [0, 50*J*]. Similarly, the influence of a background input *h*_{i}=*h* was investigated at 0 and 5*J*. As the theoretical calculation and simulation for uniform connections showed that the IG measures were strongly influenced by the common input *W* (see Figures 2 to 4), more data points were sampled for *W*. Each simulation trial consisted of 10^{6} updates, and 100 trials were performed at each *W* and *h* value. The parameter *m* that controls the firing probability of a model neuron in equation 2.24 was set to 1. The results are presented as the mean ± SEM.
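The weight-construction step above can be sketched as follows. This is a minimal sketch under stated assumptions: the noise scale `sigma`, the zeroed diagonal, and the form *J*_{ij} = 1/*N* + noise are illustrative choices, since the paper's exact variance is not reproduced here; only the pair constraint *J*_{12}+*J*_{21}=2/*N* is taken from the text.

```python
import random

def asymmetric_weights(N, sigma=0.01, seed=0):
    """Random asymmetric weight matrix J[i][j] = 1/N + Gaussian noise,
    with the selected pair (neurons 1 and 2, indices 0 and 1) constrained
    so that J_12 + J_21 = 2/N, as in the simulation setup."""
    rng = random.Random(seed)
    J = [[1.0 / N + rng.gauss(0.0, sigma) for _ in range(N)]
         for _ in range(N)]
    for i in range(N):
        J[i][i] = 0.0                    # no self-connection (assumption)
    J[0][1] = 1.0 / N + rng.gauss(0.0, sigma)
    J[1][0] = 2.0 / N - J[0][1]          # enforce J_12 + J_21 = 2/N
    return J

J = asymmetric_weights(10)
print(J[0][1] + J[1][0])   # 2/N = 0.2 for N = 10
```

The same pairwise constraint extends to triplets (and larger groups) by applying it to each ordered pair in the selected group.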

#### 4.2.1. The IG Measures for Single and Pairwise Interactions, θ_{1}^{(k, N)}, θ_{12}^{(k, N)}

Figures 5A and 5B show how the single IG measure is influenced by a common input *W* and a background input *h* for an asymmetric network of 10 neurons. For clarity, we show the results for the lowest and highest LLM orders only (*k*=1, dashed line; *k*=10, solid line), but we confirmed that the IG measures with *k*=2 to *k*=9 reside between those with *k*=1 and *k*=10. The simulation showed that the measure with the lowest LLM order (dashed line) was affected by both the common input and the background input. However, the measure with the highest LLM order (solid line) was robust against the common input and was related to the background input only. Note the similarity between the simulation results in Figure 5 and the analytical results for a uniformly connected network in Figure 2. When the size of the network was increased to *N*=1000, the influence of a common input became significantly smaller (see Figure 5C), even for the lowest LLM order (dashed line). Note that the values of the highest-order measures were more consistent between *N*=10 and *N*=1000 than those of the lowest-order measures, although the network sizes differed by a factor of 100 (see Figures 5A–5D). Furthermore, we confirmed that the values of the highest-order measures were close to the values predicted from equation 3.1, even under the influence of both the common input and the background input (data not shown). This result suggests that the single IG measure with the highest LLM order in this study was able to detect the background input correctly even under the influence of the common input.

For the pairwise IG measure θ_{12}^{(k, N)}, the results for an asymmetric network of 10 neurons are shown in Figures 5E and 5F. The desired property of the pairwise IG measure is to detect the two-neuron interaction correctly: θ_{12} ≈ *J*_{12}+*J*_{21} = 0.2 for *N*=10. We observed that the measure with the lowest LLM order was strongly influenced by a common input (see Figure 5E, dashed line), but the influence of a background input was much weaker (see Figure 5F, dashed line). In contrast, the influence of external inputs on the measure with the highest LLM order was much weaker, and this measure was able to estimate the connection weight almost correctly (see Figures 5E and 5F, solid lines). It is also important to note the similarity of these values to the corresponding analytical results for a uniform connection (see Figures 2C and 2D). When the size of the network was increased to *N*=1000, we observed a similar tendency (see Figures 5G and 5H); the highest-order measure (solid line) was more robust against the external inputs than the lowest-order measure (dashed line). Also note that the highest-order measure estimated the connection weight almost correctly: θ_{12} ≈ *J*_{12}+*J*_{21} = 2/*N*.

In summary, the numerical simulation demonstrated that the single IG measure and the pairwise IG measure with the highest LLM order were able to detect the background input and the sum of connection weights, respectively, for an asymmetrically connected network. We also found that the influence of external inputs became less significant for a larger network.