We examine the efficiency of information processing in a balanced excitatory and inhibitory (E-I) network during the developmental critical period, when network plasticity is heightened. A multimodule network composed of E-I neurons was defined, and its dynamics were examined by regulating the balance between their activities. When adjusting E-I activity, both transitive chaotic synchronization with a high Lyapunov dimension and conventional chaos with a low Lyapunov dimension were found. In between, the edge of high-dimensional chaos was observed. To quantify the efficiency of information processing, we applied a short-term memory task in reservoir computing to the dynamics of our network. We found that memory capacity was maximized when optimal E-I balance was realized, underscoring both its vital role and vulnerability during critical periods of brain development.
It is known that neuronal circuits in the brain are particularly influenced by environmental inputs when their plasticity is elevated during critical periods (CPs) of development (Hensch, 2004, 2005; Werker & Hensch, 2015).
One of the best known examples is the loss of binocular vision following monocular deprivation of kittens (Hubel & Wiesel, 1970), which is not observed in adult cats. Similar CPs have been discovered in various systems across species, including sound localization in barn owls, song learning in birds, and speech perception in human infants (Hensch, 2004).
Late maturation of neuronal circuits utilizing the inhibitory neurotransmitter γ-aminobutyric acid (GABA) dictates the opening of CPs (Hensch, 2005). In particular, the maturation of parvalbumin-positive (PV) basket cells, a subtype of inhibitory neuron, plays a pivotal role (Werker & Hensch, 2015; Reh et al., 2020). These PV cells extend lateral inhibition onto neighboring pyramidal cell bodies, which are excitatory neurons; therefore, the emergence of PV basket synapses realizes a balance between excitatory and inhibitory activity (E-I balance) at CP onset. Subsequent closure of the CP protects the circuits from further rewiring. Several factors actively promote stability, such as structural barriers like perineuronal nets or myelin-related signals, and the dampening of neuromodulatory systems (Werker & Hensch, 2015). Neuromodulatory systems, such as acetylcholine (ACh) and serotonin, act through another class of non-PV inhibitory interneurons, which ultimately form synaptic connections onto PV basket cells (Takesian et al., 2018). The emergence of molecules that dampen ACh receptors, like Lynx1, serves as a functional brake on plasticity. Removal of Lynx1 may restore a juvenile E-I balance to reopen CPs in adulthood (Morishita et al., 2010; Takesian et al., 2018).
Optimal E-I balance is thought to be homeostatically maintained in the brain to avoid runaway excitation or quiescence (Turrigiano & Nelson, 2004). Disrupted E-I balance commonly underlies developmental disorders such as autism or psychiatric disorders like schizophrenia (Eichler & Meier, 2008; Yizhar et al., 2011; Nelson & Valakh, 2015). In slices of ferret prefrontal or occipital cortex, the E-I balance generates self-sustaining activity that can be turned on and off by synaptic inputs (Shu, Hasenstaub, & McCormick, 2003). In the field of nonlinear dynamics, a theoretical model of a neural network with balanced E-I activity is known to yield deterministic chaos (van Vreeswijk & Sompolinsky, 1996). Other theoretical models composed of excitatory and inhibitory neurons show various synchronized dynamics, including fully synchronized, stochastically synchronized, and chaotically synchronized states (Brunel, 2000; Kanamaru & Sekine, 2005; Kanamaru & Aihara, 2008). Here, we examined the efficiency of information processing in a network with shifting E-I balance to understand the role of optimally balanced dynamics during the CP.
Specifically, we incorporated the framework of reservoir computing (Lukoševičius & Jaeger, 2009), wherein machine-learning tasks are solved using recurrent neural networks. During training, random connection weights inside the network are fixed, and only the weights in the output layer are trained. Some studies have found that networks show high performance when their parameters are set at the edge of chaos (Bertschinger & Natschläger, 2004; Legenstein & Maass, 2007; Boedecker et al., 2012). In this study, we applied a short-term memory task to our network and considered the relationship between E-I balanced dynamics and CPs. In the short-term memory task, a quantity called memory capacity is calculated to evaluate how long information on time-varying inputs is retained in the network (Kubota et al., 2018; Tanaka et al., 2020).
In section 2 of this paper, we define a cortical network model composed of excitatory pyramidal neurons in layers 2/3 and inhibitory interneurons in layer 4. First, we define a module model composed of excitatory and inhibitory neurons and then define a multimodule network by connecting the modules to each other. In section 3, we examine the dynamics of our model and find transitive chaotic synchronization and conventional chaos. In section 4, we analyze the chaotic dynamics in our model using the Lyapunov spectrum and Lyapunov dimensions. We found that the Lyapunov dimension of transitive chaotic synchronization is high and that of conventional chaos is low. Between them, we also found a new type of edge of chaos: the edge of high-dimensional chaos. In section 5, we quantify the degree of E-I balance using two E/I ratios: the functional E/I ratio proposed by Bruining et al. (2020) and the input E/I ratio, which is the proportion of excitatory to inhibitory input onto pyramidal neurons. Transitive chaotic synchronization and the edge of high-dimensional chaos were observed when both E/I ratios are close to 1. In section 6, we apply the short-term memory task in reservoir computing to the E-I balanced dynamics in our network. We observed that memory capacity was maximized when optimal E-I balance was realized. Conclusions are discussed in the final section of this paper.
2.1 Cortical Layers
To examine networks of excitatory and inhibitory neurons with shifting E-I balance, we considered ensembles of neurons in cortical layers as shown in Figure 1A (Kanamaru, Fujii, & Aihara, 2013). In particular, we constructed a network composed of excitatory pyramidal neurons (PYRs) in layers 2/3 and inhibitory interneurons (INs) in layer 4. We assumed that the interneurons in our model are parvalbumin-positive (PV), fast-spiking (FS) neurons because their maturation underlies an optimal E-I balance that opens the CP (Hensch, 2005; Reh et al., 2020). In Figure 1A, top-down glutamatergic and bottom-up pathways are shown as inputs to the network. When we examine the internal dynamics of the network, these inputs are omitted.
Notably, we also incorporated the effect of ascending projections from the nucleus basalis of Meynert (NBM) and their corticopetal acetylcholine (ACh) release when attention-demanding tasks are performed (Thiele & Bellgrove, 2018). In our model, ACh was projected onto INs bearing nicotinic acetylcholine receptors (nAChRs) in layer 1. These neurons in turn inhibit INs in layer 4 and, as a result, indirectly activate PYRs in layers 2/3 (Takesian et al., 2018). Morishita et al. (2010) found that an endogenous prototoxin, Lynx1, suppresses nAChR signaling and closes the CP, potentially by modulating E-I circuit balance (Takesian et al., 2018).
This model is not a full model of the cortical circuitry because it does not incorporate, for example, inhibitory neurons in layers 2/3 or excitatory neurons in layer 4. Rather, it is a minimal model of cortical circuitry that is relevant to plasticity during the CP (Werker & Hensch, 2015). This model allows us to focus on the particular E-I balance that appears to be important for maximizing learning during the CP in biological systems.
2.2 Module Model
We considered a multimodule network model of layers 2/3 and 4 of the cortex (see Figure 1B). Each module, shown as a rectangle with rounded corners, was composed of excitatory neurons and inhibitory neurons.
The internal states of the excitatory and inhibitory neurons were each described by a phase variable (Kanamaru & Sekine, 2005; Kanamaru & Aihara, 2008). This phase model is often used as a model of class I spiking neurons (Hodgkin, 1948; Izhikevich, 1999, 2000). Arbitrary class I neurons near their bifurcation points can be transformed into this phase model. Even when neurons are not close to their bifurcation points, similar results are expected in both models. Chaotic dynamics observed in networks of phase models have also been observed in networks of the class I Morris-Lecar neuron model (Kanamaru, 2006, 2007). Moreover, this model facilitates the analysis of its dynamics in the limit of an infinite number of neurons by numerically integrating its Fokker-Planck equations, as shown below.
The activity of a single neuron in the model without connections can be regulated by a constant input. When the input is below a threshold value, a stable equilibrium point exists, which can be considered a resting state. When the input exceeds the threshold, the resting state disappears and the phase starts to oscillate. The firing time is defined as the time at which the phase exceeds a fixed value. Each neuron without connections spontaneously fires owing to gaussian white noise added to its input.
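The class I phase dynamics described above can be illustrated with the canonical theta neuron, a standard normal form for class I excitability. The sketch below is a minimal Euler-Maruyama simulation; the parameter values, the noise strength, and the choice of θ = π as the firing threshold are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def simulate_theta_neuron(r=-0.025, noise_strength=0.1, dt=0.01, t_max=1000.0, seed=0):
    """Euler-Maruyama simulation of a single theta (phase) neuron:

        dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * (r + noise).

    For r < 0 a stable resting state exists; noise drives occasional
    spontaneous firings, registered when the phase crosses pi from below.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    theta = -np.arccos((1 + r) / (1 - r))  # stable equilibrium for r < 0
    spike_times = []
    for step in range(n_steps):
        drift = (1 - np.cos(theta)) + (1 + np.cos(theta)) * r
        diffusion = (1 + np.cos(theta)) * np.sqrt(2 * noise_strength)
        theta_new = theta + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
        if theta < np.pi <= theta_new:  # firing: phase crosses pi from below
            spike_times.append(step * dt)
        theta = np.mod(theta_new + np.pi, 2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.array(spike_times)
```

With the input slightly below threshold (r < 0), the firing times form an irregular, noise-driven point process, matching the description of spontaneous firings in the unconnected model.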
The connections among the neurons are global. We consider four types of intramodule connections: excitatory-to-excitatory, excitatory-to-inhibitory, inhibitory-to-excitatory, and inhibitory-to-inhibitory. Connections generating postsynaptic currents of an exponential form exist between all pairs of neurons, and diffusive connections by gap junctions exist between inhibitory neurons. Appendix A provides a detailed definition of the module.
When there are no intramodule connections, all neurons show only stochastic firings owing to the gaussian white noise. Various collective firings were observed when intramodule connections were introduced. For weak E-I connections, only asynchronous firings with low or high firing rates are observed, depending on the strength of the gaussian white noise. By adjusting the strengths of E-I connections, some correlations among the firings of different neurons appear, where ensemble-averaged firing rates of neurons show time-varying dynamics, such as periodic and chaotic dynamics (Kanamaru & Sekine, 2005; Kanamaru & Aihara, 2008). These dynamics are called intramodule synchronization.
The averaged dynamics of a single module in the limit of an infinite number of neurons can be analyzed using the Fokker-Planck equations (FPEs), as shown in appendix B. In this limit, the equations that govern our model change from stochastic to deterministic, and we can calculate the theoretical dynamics of the ensemble-averaged firing rates of the excitatory and inhibitory neurons. Therefore, in the following sections, we examine the dynamics of our model in this limit. When integrating the FPEs numerically, we transformed them into a set of ordinary differential equations (ODEs) using Fourier expansion, as shown in appendix C. By replacing the original FPEs of a module with 242-dimensional ODEs, we can conduct nonlinear analysis of intramodule synchronization (Kanamaru & Sekine, 2005; Kanamaru & Aihara, 2008). To our knowledge, such simple analyses are possible only when the dynamics of the neurons are described by phase variables, as shown in appendix A.
Temporal changes in the ensemble-averaged firing rates calculated from the FPEs are shown in Figure 2C. They approximate well the ensemble-averaged firing rates in a module with a finite number of neurons shown in Figure 2B. In Figures 2A and 2B, the 4:1 ratio between the numbers of excitatory and inhibitory neurons did not affect the dynamics of a module, except for the amount of fluctuation, because the connections are scaled by the number of neurons. Similarly, in the FPE-based analysis, this ratio disappears in the limit of an infinite number of neurons.
The intramodule synchronization in a module with small numbers of excitatory and inhibitory neurons is shown in Figures 2D and 2E. The interpeak intervals of the firing rates fluctuate strongly in this small module. Comparison between Figures 2B and 2E shows that the dynamics of the larger module in Figure 2B better approximate the result of the FPE-based analysis shown in Figure 2C, with smaller fluctuations.
This module is regarded as a model of a local E-I network, such as a pyramidal cell module (Rodriguez, Whitson, & Granger, 2004). Using these module models, we defined a network composed of multiple modules, as shown in Figure 1B. Previously, we had examined the pattern-retrieval dynamics of a multimodule network in which several patterns were embedded in the intermodule connections (Kanamaru, 2007; Kanamaru et al., 2013; Kanamaru & Aihara, 2019). Such networks can be interpreted as models of the adult brain, in which numerous memories are stored.
In contrast, in this study, we considered a model of the brain during development. Therefore, instead of embedding patterns in the intermodule connections, we randomly connected the modules. We treated two types of intermodule connections: those from excitatory neurons in one module to excitatory neurons in another module, and those from excitatory neurons in one module to inhibitory neurons in another module. Their strengths were set to fixed nonzero values with a probability of 0.1 and to 0 with a probability of 0.9. The intermodule connections were also global.
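The random intermodule coupling described above can be sketched as follows. The module count matches the network analyzed below, but the strength values `g_ee` and `g_ei` are placeholders for the actual values given in appendix D.

```python
import numpy as np

def make_intermodule_weights(n_modules=48, g_ee=1.0, g_ei=1.0, p=0.1, seed=0):
    """Random intermodule connection strengths: each directed pair of
    modules receives an E-to-E and an E-to-I connection independently
    with probability p (strength g_ee or g_ei) and 0 otherwise. The
    diagonal is zeroed because intramodule connections are defined
    separately."""
    rng = np.random.default_rng(seed)
    w_ee = g_ee * (rng.random((n_modules, n_modules)) < p)
    w_ei = g_ei * (rng.random((n_modules, n_modules)) < p)
    np.fill_diagonal(w_ee, 0.0)
    np.fill_diagonal(w_ei, 0.0)
    return w_ee, w_ei
```

Because patterns are not embedded, the two matrices are statistically independent sparse random matrices, which is the sense in which this network models an immature, pre-learning cortex.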
A detailed definition of intermodule connections is provided in appendix D. In the following sections, we analyze the dynamics of a network with 48 such modules. Note that we can analyze this network using the FPEs of the 48 modules, which are transformed into 48 × 242-dimensional ODEs.
3 Dynamics in a Network with Multiple Modules
In the following sections, we regulate the E-I balance and observe the dynamics in a network with 48 modules. There are several methods for regulating E-I balance: regulating the ratio between the numbers of excitatory and inhibitory neurons, regulating the activity of excitatory and inhibitory neurons, and regulating the strength of excitatory and inhibitory synapses. As stated in the previous section, the ratio between the numbers of excitatory and inhibitory neurons does not affect the dynamics of our network except for the amount of fluctuation. Therefore, we use the other two methods, that is, regulating the activity of neurons and regulating the strength of excitatory and inhibitory synapses.
In this section, we regulate the E-I balance with a parameter that controls the activity of the inhibitory neurons. The constant inputs to the excitatory and inhibitory neurons set their baseline activities, and regulating the input to the inhibitory neurons can be interpreted as the effect of input from layer 1 interneurons bearing nAChRs, as shown in Figures 1A and 1B. In the following, this parameter is referred to as the degree of inhibitory activity.
The dynamics of an E-I network with 48 modules at an intermediate degree of inhibitory activity are shown in Figure 3A. Methods for initializing this model in Figure 3 are explained in appendix E. The temporal changes in the ensemble-averaged firing rates of the excitatory neurons in each module were aligned vertically such that they did not overlap. Figure 3B shows an enlarged image of Figure 3A. The temporal changes in a single trace indicate that the firings of the excitatory neurons within a module are correlated; this phenomenon is intramodule synchronization. Correlations were also observed among the ensemble-averaged firing rates of different modules, indicating intermodule synchronization that was rearranged transitively.
The oscillation of the firing rate in each module was not periodic. In the next section, we show that the dynamics in Figure 3A are chaotic, that is, there are some positive Lyapunov exponents in the network. Therefore, we refer to the dynamics in Figure 3A as “transitive” chaotic synchronization.
The transitive chaotic synchronization observed with a different initial condition explained in appendix E is shown in Figure 3C. Even when starting from a different initial condition, the network converges to a similar transitive chaotic synchronization. Therefore, we assume that this dynamics is realized as some kind of attractor.
In Figure 4A, the short covariances of the firing rates in four successive short time windows are shown as color matrices. When a covariance value lies inside the 99% confidence interval for zero covariance, it is replaced by 0; therefore, only the significant values are colorized in Figure 4. When the short covariance between two modules takes positive (red) or negative (blue) values, their excitatory ensembles exhibit in-phase or antiphase synchronization, respectively. We refer to this as intermodule synchronization. Moreover, the synchronized pairs were rearranged from one short period to the next. We refer to such chaotic rearrangement of synchronized pairs as transitive chaotic synchronization.
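A significance-thresholded short covariance of this kind can be sketched as follows. The permutation-based null distribution and the surrogate count are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def short_covariance_matrix(rates, t0, window, n_surr=500, seed=0):
    """Pairwise covariances of module firing rates over a short window.

    rates : array of shape (n_modules, n_steps).
    Values lying inside a surrogate-based 99% confidence interval for
    zero covariance are replaced by 0, so only significant pairs remain.
    """
    rng = np.random.default_rng(seed)
    seg = rates[:, t0:t0 + window]
    seg = seg - seg.mean(axis=1, keepdims=True)
    cov = seg @ seg.T / window
    # Null distribution: permuting one trace in time destroys the phase
    # relation between a pair while keeping the amplitude distribution.
    null = np.empty(n_surr)
    for k in range(n_surr):
        i, j = rng.integers(0, seg.shape[0], size=2)
        null[k] = np.dot(seg[i], rng.permutation(seg[j])) / window
    lo, hi = np.quantile(null, [0.005, 0.995])  # 99% confidence interval
    return np.where((cov < lo) | (cov > hi), cov, 0.0)
```

Positive surviving entries correspond to in-phase pairs and negative ones to antiphase pairs; recomputing the matrix in successive windows reveals whether the synchronized pairs are fixed or transitively rearranged.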
Note that transitive chaotic synchronization is different from well-known transient chaos. Transient chaos is a phenomenon in which the system stays for a prolonged duration around a nonattracting chaotic set and then converges to another attractor, such as an equilibrium point or a periodic solution (Ott, 2002; Lai & Tel, 2011). Transient chaos is observed when the value of a parameter of the system is close to the bifurcation point at which the chaos becomes unstable. Similarly, transitive chaotic synchronization, which we define here for the first time, is different from topological transitivity, which is an essential property of deterministic chaos with a strange attractor (Robinson, 1995; Kolyada & Snoha, 2009). Because transitive chaotic synchronization is also chaotic with positive Lyapunov exponents, it is topologically transitive. We define transitive chaotic synchronization as the nonperiodic maintenance of rearrangements of synchronized pairs. Furthermore, the transitive chaotic synchronization considered here is similar to the Milnor attractor (Milnor, 1985), in which the system returns to a chaotic set with synchronized pairs after successively moving away from it, owing to the coexistence of attracting and repelling orbits in this chaotic set. In other words, transitive chaotic synchronization maintains successive transitions among chaotic sets and may be related to chaotic itinerancy (Kaneko & Tsuda, 2001).
In the following, we examine how variation in the degree of inhibitory activity affects transitive chaotic synchronization.
The dynamics of the network at a small degree of inhibitory activity are shown in Figure 3D, where the activity of the excitatory neurons is dominant. The short covariances for these dynamics are shown in Figure 4B. The number of synchronized modules increases, but transitive chaotic synchronization (chaotic rearrangement of synchronized pairs) still exists in this case.
The dynamics of the network at a large degree of inhibitory activity are shown in Figure 3E, where the activity of the inhibitory neurons is dominant. The short covariances for these dynamics are shown in Figure 4C. Here, the transitive chaotic synchronization disappeared, and the set of oscillating modules was fixed. Note that the horizontal range in Figure 3E is narrower than that in Figure 3D to clearly show each oscillation. As demonstrated later, the nontransitive oscillations in Figure 3E were also chaotic, which we refer to as conventional chaos.
The dynamics of the network at the same large degree of inhibitory activity but with a different initial condition (see appendix E) are shown in Figure 3F. The arrangement of oscillating modules in Figure 3F differs from that in Figure 3E. In the region of conventional chaos, at least several arrangements of oscillating modules coexist, and transitions among them are not observed. Although other arrangements can be found using different initial conditions, we do not examine them further in this study because we are interested in the properties of transitive chaotic synchronization.
4 Analysis with Lyapunov Spectrum
To definitively state that the dynamics in Figure 3 are chaotic, we must show that the network has at least one positive Lyapunov exponent, which means that there is at least one direction in which two trajectories separate exponentially from each other.
In general, an N-dimensional dynamical system has N Lyapunov exponents, and the set of Lyapunov exponents is called the Lyapunov spectrum. In this study, we analyzed our network using the Fokker-Planck equations, which are a set of partial differential equations in phase and time. Therefore, there is theoretically an infinite number of Lyapunov exponents in our model. When integrating the Fokker-Planck equations numerically, we transformed them into a set of ordinary differential equations using Fourier expansion, as shown in appendix C. The Lyapunov spectrum can then be calculated by integrating these ordinary differential equations (Shimada & Nagashima, 1979; Nagashima & Baba, 1999).
It is known that a continuous-time dynamical system always has one Lyapunov exponent with a value of 0. We aligned the remaining Lyapunov exponents in descending order. The dependence of the largest several Lyapunov exponents on the degree of inhibitory activity is shown in Figure 5A. Each point and error bar denote the mean and the 99% confidence interval calculated using the t-distribution over 10 trials, respectively. Methods for initializing the 10 trials are explained in appendix E. We calculated at most 70 Lyapunov exponents, depending on the degree of inhibitory activity, and confirmed that the sum of the calculated Lyapunov exponents is negative at each value, indicating that this system is dissipative.
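The Shimada-Nagashima procedure evolves a set of tangent vectors along a trajectory and re-orthonormalizes them with QR decompositions at each step. The sketch below applies it to the Lorenz system as a small stand-in for our high-dimensional ODEs; the step sizes and iteration counts are illustrative choices, and the tangent dynamics use a simple Euler step.

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, n_exp, dt=0.005, n_steps=40000, n_transient=2000):
    """Benettin/QR estimate of the n_exp largest Lyapunov exponents of
    dx/dt = f(x) (Shimada & Nagashima, 1979): tangent vectors are evolved
    alongside the trajectory and re-orthonormalized by QR at every step;
    the exponents are the time-averaged logs of the R diagonals."""
    x = np.asarray(x0, dtype=float)
    Q = np.linalg.qr(np.random.default_rng(0).normal(size=(x.size, n_exp)))[0]
    sums = np.zeros(n_exp)
    for step in range(n_transient + n_steps):
        # RK4 step for the state
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x_next = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        # Euler step for the tangent vectors, then re-orthonormalization
        Q, R = np.linalg.qr(Q + dt * (jac(x) @ Q))
        if step >= n_transient:
            sums += np.log(np.abs(np.diag(R)))
        x = x_next
    return sums / (n_steps * dt)

# Example: the Lorenz system, whose spectrum is approximately (0.9, 0, -14.6).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
f = lambda x: np.array([sigma * (x[1] - x[0]),
                        x[0] * (rho - x[2]) - x[1],
                        x[0] * x[1] - beta * x[2]])
jac = lambda x: np.array([[-sigma, sigma, 0.0],
                          [rho - x[2], -1.0, -x[0]],
                          [x[1], x[0], -beta]])
spec = lyapunov_spectrum(f, jac, np.array([1.0, 1.0, 1.0]), n_exp=3)
```

The same loop applies to any vector field with an available Jacobian; for the Fourier-expanded FPEs, `f` and `jac` would be the high-dimensional right-hand side and its Jacobian, and `n_exp` would be the number of exponents requested (up to 70 here).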
We found that positive Lyapunov exponents existed for all values of the degree of inhibitory activity. Therefore, chaotic dynamics robustly exist over a wide parameter range in our model. This fact is related to the chaotic dynamics with balanced inhibition proposed by van Vreeswijk and Sompolinsky (1996).
In the region of transitive chaotic synchronization, the error bars for the Lyapunov exponents in Figure 5A are so narrow that they are not visible at this scale. This property is consistent with the ergodic theorem, which guarantees that the Lyapunov exponents take the same values for almost all initial conditions on an attractor (Oseledets, 1968; Eckmann & Ruelle, 1985). In contrast, in the region of conventional chaos, the error bars are wide because the Lyapunov exponents of different attractors are calculated, as shown in Figures 3E and 3F.
As the degree of inhibitory activity was increased toward the transition point, the Lyapunov dimension monotonically decreased, and beyond it, the dependence of the Lyapunov dimension on this parameter became complicated. These results imply that transitive chaotic synchronization at small degrees of inhibitory activity and conventional chaos at large degrees are qualitatively different phenomena. As stated above, in the region of conventional chaos, several arrangements of the oscillating modules can coexist. In this region, the Lyapunov dimension depends on the number of largely oscillating modules (e.g., 13 modules in Figure 3E and 12 modules in Figure 3F). Therefore, in the region of conventional chaos, the Lyapunov dimensions take different values over the 10 trials.
Moreover, a transition from high-dimensional chaos to low-dimensional chaos took place at a particular value of the degree of inhibitory activity. Around this transition point, we observed intermittent switching between transitive chaotic synchronization and conventional chaos, as shown in Figure 6.
Such intermittent switching is often observed at the edge of chaos. The “edge of chaos” refers to the area between order and disorder in the field of cellular automata and is also used in the field of dynamical systems (Pierre & Hübler, 1994; Melby et al., 2000). The edge of chaos in dynamical systems is typically observed when the largest positive Lyapunov exponent of the system approaches zero. In this case, intermittent switching between chaotic and nonchaotic dynamics was observed. However, in our model, intermittent switching between high-dimensional chaotic dynamics and low-dimensional chaotic dynamics was observed, as shown in Figure 6. Therefore, it is more appropriate to use the term “edge of high-dimensional chaos” in our system rather than “edge of chaos,” but we will use the latter simpler term below to indicate this meaning.
To confirm the robustness of the observed phenomena, we regulated the strength of the connections instead of the input to the inhibitory neurons. First, the strengths of the intramodule and intermodule connections from the inhibitory neurons were scaled by a common factor, which was regulated. Similarly, the connections from excitatory neurons to inhibitory neurons were scaled by another common factor. The intramodule and intermodule connection strengths are defined in appendixes A and D. The first factor is related to the maturation of inhibitory neurons, and the second is related to the regulation of inhibitory neurons by excitatory neurons. The dependence of some Lyapunov exponents and Lyapunov dimensions on these two factors is shown in Figure 7; the other parameter values were the same as those used in Figure 3A. Each point and error bar denote the mean and the 99% confidence interval calculated using the t-distribution over 10 trials, respectively. Methods for initializing the 10 trials are explained in appendix E. Similar to Figure 5, the largest positive Lyapunov exponents and the Lyapunov dimensions become small when either factor is regulated; that is, a transition between transitive chaotic synchronization and conventional chaos occurs. Therefore, we regarded this transition as a robust phenomenon. Note that the Lyapunov dimension of the conventional chaos in this setting is larger than 10 because the number of largely oscillating modules tends to become large, as shown in Figure 7E, where 19 modules are oscillating.
Transitive chaotic synchronization is stable when the connections from inhibitory neurons are strong and the connections from excitatory to inhibitory neurons are weak. This opposite dependence on the two factors is somewhat difficult to interpret because the dynamics in our network are generated by reciprocal excitatory and inhibitory connections. However, regulating the input to the inhibitory neurons yields clearer results because it directly modulates their firing rates. Therefore, as described in the following sections, we subsequently used this input to regulate the E-I balance.
5 Measuring E-I Balance
In the previous sections, we identified several dynamics in our model by regulating the degree of inhibitory activity. At particular values of this parameter, an optimal E-I balance in the network can be realized.
To measure the degree of E-I balance, we use two E/I ratios: the functional E/I ratio proposed by Bruining et al. (2020) and the input E/I ratio, which is the proportion of excitatory to inhibitory input onto pyramidal neurons.
The functional E/I ratio is based on detrended fluctuation analysis (DFA) (Peng et al., 1995; Linkenkaer-Hansen et al., 2001; Hardstone et al., 2012), which analyzes the long-term correlations and scale-free properties of time series obtained from neural systems. In DFA, a time series is divided into bins of a given width, and the trend in each bin is removed. Then the standard deviation of the detrended data in each bin is averaged over the bins. When this average fluctuation scales as a power law of the bin width, the scaling exponent is called the DFA exponent. Typical values of the DFA exponent are 0.5 for white noise, 1 for 1/f fluctuations, and 1.5 for Brownian motion. When the DFA exponent takes values in the range (0.5, 1], the original time series has long-term correlations.
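A minimal DFA implementation follows, assuming first-order (linear) detrending in each bin; the bin sizes are illustrative.

```python
import numpy as np

def dfa_exponent(x, bin_sizes):
    """Detrended fluctuation analysis of a 1-D signal.

    The cumulative sum of the mean-removed signal is divided into bins;
    a linear trend is removed in each bin, and the rms of the residual
    is averaged over bins. The DFA exponent is the slope of log F(n)
    versus log n, where n is the bin size."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    fluct = []
    for n in bin_sizes:
        n_bins = len(profile) // n
        t = np.arange(n)
        F = 0.0
        for b in range(n_bins):
            seg = profile[b * n:(b + 1) * n]
            coef = np.polyfit(t, seg, 1)  # linear trend in this bin
            F += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(F / n_bins))
    slope, _ = np.polyfit(np.log(bin_sizes), np.log(fluct), 1)
    return slope
```

Applied to white noise this returns an exponent near 0.5, and applied to a random walk an exponent near 1.5, matching the typical values quoted above.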
The DFA exponent carries some disadvantages when measuring the degree of E-I balance. First, a long time series is required to calculate it. Second, even when the DFA exponent can be obtained, the ratio of excitatory and inhibitory activity cannot be uniquely determined from it. The functional E/I ratio was proposed to overcome these disadvantages (Bruining et al., 2020).
The functional E/I ratio takes a value of 1 when the DFA exponent reaches its maximum as the ratio of E-I activity is regulated. It is assumed that the long-term correlation is maximal when optimal E-I balance is realized and that the maximal value of the DFA exponent is in the range (0.5, 1]. When the DFA exponent increases with an increase in excitatory activity, the functional E/I ratio is defined to be smaller than 1; when the DFA exponent decreases with an increase in excitatory activity, it is defined to be larger than 1. The detailed method for calculating the functional E/I ratio is presented in appendix F. When calculating it, the ensemble-averaged firing rate of the excitatory neurons in each module was used, and the average and standard deviation over the 48 modules were calculated.
Although the definition of the input E/I ratio is easy to interpret, this ratio cannot be obtained from experimental data in many cases. Conversely, the functional E/I ratio can be calculated from time series data alone because it quantifies the long-term correlations and scale-free properties of the data. For example, Bruining et al. (2020) applied the functional E/I ratio to EEGs of healthy adults. We calculated both E/I ratios in our network for comparison. The functional E/I ratio becomes important when we compare our theoretical results with experimental ones.
It is observed that both E/I ratios approach 1 as the degree of inhibitory activity is increased. An input E/I ratio close to 1 means that the network shows neither runaway excitation nor a quiescent state (Turrigiano & Nelson, 2004). However, the properties of the dynamics are not clear when only the input E/I ratio is observed. In contrast, the functional E/I ratio is calculated from an analysis of the long-term correlations and scale-free properties of the data. Therefore, a functional E/I ratio close to 1 means that the transitive chaotic synchronization shown in Figures 3A, 3B, and 3C has a long-term correlation.
In the following section, we show that transitive chaotic synchronization with E/I ratios close to 1 is suitable for information processing.
6 Storing Randomly Varying Inputs via Reservoir Computing
Here, we demonstrate that transitive chaotic synchronization and the dynamics at the edge of chaos are suitable for information processing. Specifically, we consider the response of the network to time-varying inputs using the framework of reservoir computing (Lukoševičius & Jaeger, 2009) with respect to a short-term memory task. We demonstrate that a network showing transitive chaotic synchronization or dynamics at the edge of chaos displays a large memory capacity.
We defined input weights, one for each module, following a uniform distribution. The weighted input was assigned to the excitatory neurons in all modules by adding it to the degree of excitatory activity. The weights were fixed during the simulations.
Then the initial portion of the data was discarded as a transient, the next portion was used to train the output weights, and the remaining portion was used as test data. The target of each output was a past value of the input.
The dependence of the coefficient of determination on the time delay, at a parameter value where transitive chaotic synchronization is observed, is shown in Figure 9. The coefficient of determination for the test data is also shown; it was obtained by applying the output weights obtained from the training data to the test data. Whereas the coefficient of determination for the training data always takes positive values, that for the test data takes positive values only for small time delays.
The memory capacity MC is defined as the sum of the coefficients of determination over the time delays up to a maximum delay, whose value is fixed in this study.
The dependence of MC on the degree of inhibitory activity for the training data and the test data is shown in Figure 10. MC before training is also shown for comparison, where the output weights are set following a uniform distribution. All results were obtained by averaging over 10 experiments with different realizations.
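The short-term memory task can be sketched with a generic echo state reservoir standing in for our E-I network; the reservoir size, spectral radius, input scaling, ridge penalty, and maximum delay are illustrative assumptions.

```python
import numpy as np

def memory_capacity(states, inputs, d_max=20, n_train=1000, n_test=500, ridge=1e-6):
    """Short-term memory capacity of a reservoir: for each delay d, a
    linear readout is trained by ridge regression to reconstruct u(t-d)
    from the state x(t); MC is the sum over delays of the coefficient of
    determination on held-out data (negative values clipped to 0)."""
    mc = 0.0
    for d in range(1, d_max + 1):
        X = states[d_max:]                      # states x(t), t >= d_max
        y = inputs[d_max - d:len(inputs) - d]   # targets u(t - d)
        Xtr, ytr = X[:n_train], y[:n_train]
        Xte, yte = X[n_train:n_train + n_test], y[n_train:n_train + n_test]
        A = Xtr.T @ Xtr + ridge * np.eye(Xtr.shape[1])
        w = np.linalg.solve(A, Xtr.T @ ytr)
        pred = Xte @ w
        r2 = 1.0 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)
        mc += max(r2, 0.0)
    return mc

# Minimal echo state reservoir driven by a random input sequence.
rng = np.random.default_rng(0)
n_res, n_steps = 100, 2000
W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=n_res)
u = rng.uniform(-1, 1, size=n_steps)
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x
mc = memory_capacity(states, u)
```

In the paper's setting, `states` would instead hold the ensemble-averaged firing rates of the 48 modules, so MC probes how long the E-I balanced dynamics retain information about the time-varying input.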
Figure 10 shows that MC increases when the degree of inhibitory activity is close to the values at which transitive chaotic synchronization or dynamics at the edge of chaos are observed. For such values, both E/I ratios are close to 1; therefore, it can be concluded that MC increases when an optimal E-I balance is realized. Although the MC for the test data was relatively small, it was larger than that before training. To obtain a larger MC, all connections inside the network should be trained. Such training of the entire network is important for future comparisons to the actual brain.
7 Conclusions and Discussion
In the field of nonlinear dynamics, it is well known that network models with balanced excitation and inhibition yield deterministic chaos (van Vreeswijk & Sompolinsky, 1996). The novelty of our study lies in the analysis of the Lyapunov spectrum and Lyapunov dimension, and the finding of more detailed properties of the chaotic dynamics with E-I balance (van Vreeswijk & Sompolinsky, 1996), namely, transitive chaotic synchronization, conventional chaos, and their intermittent switching at the edge of high-dimensional chaos with respect to the CP.
To understand the role of optimal E-I balance during the CP, we examined the dynamics of a network composed of excitatory pyramidal neurons (PYRs) in layers 2/3 and inhibitory interneurons (INs) in layer 4. INs with nicotinic acetylcholine receptors (nAChRs) in layer 1 were further connected to INs in layer 4. The effect of ACh was incorporated into the regulation of a parameter of INs in layer 4.
In a randomly connected network of 48 such modules, transitive chaotic synchronization was observed in which modules that showed intermodule synchronization were rearranged transitively. When the activity of inhibitory neurons was dominant for a large , conventional chaos was observed, in which modules that showed intramodule synchronization were fixed.
The parameter can be interpreted as a constant input to inhibitory neurons, and we used it to regulate the E-I balance. Other methods for regulating E-I balance are available, such as regulating the ratio between the numbers of excitatory and inhibitory neurons and regulating the strength of excitatory and inhibitory synapses. The former method cannot be applied to our model because the interactions among neurons are scaled as ( or ), as shown in equation A.3. The latter method can be realized by regulating the strengths of the connections , , , , , and . In Figure 7, the transition between transitive chaotic synchronization and conventional chaos was also observed when the strengths of connections from inhibitory neurons ( and ) or from excitatory to inhibitory neurons ( and ) were regulated. Therefore, we regarded this transition as a robust phenomenon.
The Lyapunov dimension of transitive chaotic synchronization was high and that of conventional chaos was low. Around the boundary between these two regions, intermittent switching between transitive chaotic synchronization and conventional chaos was observed. Typical dynamics at the edge of chaos are intermittent switching between chaotic and nonchaotic dynamics (Pierre & Hübler, 1994; Melby et al., 2000). However, in our model, intermittent switching between high- and low-dimensional chaotic dynamics was observed. Therefore, a more suitable term than “edge of chaos” would be “edge of high-dimensional chaos.”
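The Lyapunov dimension referred to here is conventionally the Kaplan-Yorke dimension, obtained from the ordered Lyapunov spectrum. A minimal sketch (the example spectrum in the usage note is illustrative, not taken from our model):

```python
import numpy as np

def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]  # descending order
    csum = np.cumsum(lam)
    if csum[0] < 0:
        return 0.0                    # no expanding direction
    # largest j such that the sum of the first j exponents is nonnegative
    j = int(np.nonzero(csum >= 0)[0][-1]) + 1
    if j == len(lam):
        return float(j)               # total sum nonnegative: dimension = system size
    return j + csum[j - 1] / abs(lam[j])
```

For instance, a spectrum (0.5, 0.0, -1.0) gives a dimension of 2.5; a high-dimensional chaotic regime is one in which many positive exponents push this value far above the dimension of a single module.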
To further investigate the relationship between E-I balance and chaotic dynamics in our network, we calculated two E/I ratios: the functional E/I ratio proposed by Bruining et al. (2020) and the input E/I ratio () onto pyramidal neurons. Transitive chaotic synchronization and dynamics at the edge of high-dimensional chaos were observed when . Although indicates neither runaway excitation nor a quiescent network state (Turrigiano & Nelson, 2004), the nature of the dynamics cannot be determined from this ratio alone. In contrast, is calculated based on the analysis of long-term correlations and scale-free properties of the data. Thus, it can be concluded that transitive chaotic synchronization and the dynamics at the edge of high-dimensional chaos with show long-term correlations.
Finally, to examine the efficiency of information processing, a framework of reservoir computing (Lukoševičius & Jaeger, 2009) was introduced. A short-term memory task was performed by the network in which a time-varying random input was given, and the previous values of the input were obtained as the output of the network. It was found that memory capacity takes large values when transitive chaotic synchronization or dynamics at the edge of high-dimensional chaos were observed, where both and are close to 1. Therefore, it can be concluded that a network with optimal E-I balance offers large memory capacity. This result is similar to those of previous studies in which dynamics at the edge of chaos showed high performance in some reservoir computing tasks (Bertschinger & Natschläger, 2004; Legenstein & Maass, 2007; Boedecker et al., 2012). Whereas previous studies analyzed dynamics at the edge of low-dimensional chaos, our network shows high-dimensional chaos with a large Lyapunov dimension, realized by the network’s multiple-module structure. The edge of high-dimensional chaos is a novel phenomenon that exists between transitive and conventional chaos. The dimension of chaos is expected to increase as the number of modules increases. Such high-dimensional chaos may be effective for certain tasks in reservoir computing. Identifying such tasks is left for future studies.
In the brain, the maturation of perisomatic GABA circuits achieves the optimal E-I balance that opens the CP (Hensch, 2005; Reh et al., 2020). Our finding that a network with E-I balance holds a large memory capacity indirectly supports the heightened learning capacity during these windows of brain development. When training a network under the reservoir computing framework, all connections in the network were fixed except those to the output neuron. To more accurately model learning during the CP, all connections inside the network should be trained. A procedure called FORCE learning has been proposed to train recurrent neural networks, which successfully generates complex time-varying output patterns (Sussillo & Abbott, 2009). Such training of the entire network is important for future comparison to the actual brain.
Notably, CP timing is staggered across brain regions as each modality sequentially attains E-I balance (Reh et al., 2020). Mistimed or misaligned CP milestones may then amplify the progression toward cognitive disorders. In the literature on reservoir computing, various tasks, such as classification, parity check, and systems modeling, have been used as benchmarks (Bertschinger & Natschläger, 2004; Legenstein & Maass, 2007; Lukoševičius & Jaeger, 2009; Boedecker et al., 2012). As shown by Cramer et al. (2020) and Shi et al. (2022), the importance of “criticality” might depend on the complexity of the tasks. Similarly, the importance of the dynamics of the E-I balance might also depend on the properties of the tasks involving multiple brain areas. Analyses of such task dependence are planned for our future work.
Appendix A: Definition of a Single Module
The connections among the neurons are global. We consider four types of intramodule connections: , , , and . Connections generating postsynaptic currents of an exponential form between all pairs of neurons, as well as diffusive connections between inhibitory neurons, are present. These two types of connections model chemical and electrical synapses with gap junctions, respectively. The form of the connections of the gap junctions is sinusoidal, following the Kuramoto model (Kuramoto, 1984), because our network is described by phase variables. is the th firing time of the th neuron. , , , and are connection weights within and between the excitatory and inhibitory ensembles, and is the strength of the gap junctions. and are the membrane time constants, and and are the time constants of the synaptic currents.
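To illustrate the two coupling types described above (this is not the paper's exact equations; `g_gap`, `w_syn`, and `tau_s` are hypothetical parameter names), the sinusoidal gap-junction term and the exponential postsynaptic current for a globally coupled phase network can be sketched as:

```python
import numpy as np

def coupling_terms(theta, spike_times, t, g_gap=0.5, w_syn=1.0, tau_s=2.0):
    """Illustrative coupling terms for a globally coupled phase network.

    theta: phases of the N neurons; spike_times: past presynaptic
    firing times (flat array); t: current time.
    """
    N = len(theta)
    # sinusoidal (Kuramoto-type) diffusive coupling, modeling gap junctions;
    # gap[i] = (g_gap / N) * sum_j sin(theta_j - theta_i)
    gap = (g_gap / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    # exponential postsynaptic current summed over past firing times,
    # scaled by 1/N as in a globally connected network
    past = spike_times[spike_times <= t]
    syn = (w_syn / (N * tau_s)) * np.exp(-(t - past) / tau_s).sum()
    return gap, syn
```

With all phases equal, the diffusive term vanishes, which is why a fully synchronized initial condition must be avoided (see appendix E).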
We set the values of the parameters as , , , , , , , , , and .
Appendix B: The Fokker-Planck Equations
Using FPEs, we can obtain the theoretical dynamics of ensemble-averaged firing rates and , and various methods for nonlinear analysis can be used.
In this study, we examined the network in the limit of by analyzing FPEs B.1 and B.2, which describe the dynamics of globally connected E-I neurons governed by equations A.1 and A.2. The dynamics of randomly connected E-I neurons with probability is also described by FPEs B.1 and B.2. In such a network, in the denominator and the summation over in equation A.3 are replaced by and summation over a set of neurons, respectively. This equivalence between a global network and a random network is caused by the scaling factor in equation A.3 and the large limit. The infinitesimally small connections of order differ from the strong connections of order in previous studies (Renart et al., 2010; van Vreeswijk & Sompolinsky, 1996). Despite the different orders of connections, the E-I balance is important in all networks.
Appendix C: Numerical Integration of the Fokker-Planck Equations
Appendix D: Connections among Modules
In the definitions above, the intramodule connection weights and are decreased by subtracting and , respectively, to maintain the chaotic dynamics observed in a module.
Appendix E: Initialization of Models
In this appendix, we explain methods for initializing our models.
In Figures 2A, 2B, 2D, and 2E, the dynamics of a module model composed of excitatory neurons and inhibitory neurons defined by equations A.1 and A.2 is shown. In this model, and were initialized to random values obtained from a uniform distribution in . After discarding the transient data until , we obtained the results in Figures 2A, 2B, 2D, and 2E.
In Figure 3, the dynamics of a network with 48 modules is analyzed by FPEs. When all the vectors of 48 modules are initialized to , the simulation starts from a fully synchronized state of 48 modules. To avoid such a fully synchronized initial condition, we prepared 48 different vectors using the following method. First, we simulated a single module for with an initial condition until , and we denote the obtained vector as . Then we initialized the th module of 48 modules to (), and we simulated the network with 48 modules until . The obtained vector is denoted as , and it can be used as an initial vector for the network with 48 modules. In Figures 3A, 3B, 3D, and 3E, was used for initialization. In Figures 3C and 3F, was used for initialization. When the transient data until were discarded, the results in Figure 3 were obtained.
When calculating Lyapunov exponents as shown in Figures 5, 7A, 7B, 7C, and 7D, we need to compute trajectories, which we did for 10 trials. We initialized the network to () for the first trial and to () for the th () trial. After discarding transient data until , Lyapunov exponents were calculated from the dynamics until .
In Figure 7E, was used for initialization, and the transient data until were discarded.
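The staggered initialization of appendix E can be sketched generically: run one module for a long time, then seed module k with the state observed at a k-dependent time offset so that the modules do not start fully synchronized. Here `simulate_module` is a hypothetical integrator returning the state trajectory, and `offset` is an assumed spacing:

```python
import numpy as np

def staggered_initial_states(simulate_module, n_modules=48, offset=100):
    """Seed each module with a time-shifted state of a single-module run,
    avoiding a fully synchronized initial condition."""
    traj = simulate_module(offset * n_modules)   # hypothetical integrator call
    return [traj[offset * k] for k in range(n_modules)]
```

A dummy integrator such as `lambda T: np.arange(T)` suffices to check the indexing.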
Appendix F: Calculation of
Here, we introduce a method for calculating (Bruining et al., 2020).
Typically, a bandpass filter is applied to the data. Bruining et al. (2020) applied a bandpass filter with a frequency range of 8 to 13 Hz to electroencephalogram data. However, in this study, a bandpass filter was not applied because our data were obtained from numerical simulations.
The envelope of the data is obtained by applying the Hilbert transform. We denote the obtained envelope as () and denote its average over time as .
The cumulative sum of is calculated as follows (equation F.1).
The time axis is divided into bins of width , and is calculated, where is the average of in each bin.
A linear fit of in each bin is calculated, and is computed.
The standard deviation of in each bin is calculated.
The correlation coefficient of and is calculated. Subsequently, is defined as .
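The steps above can be sketched as follows. This is a minimal implementation assuming SciPy is available; the window width `win` and the per-bin normalization of the fluctuation by the bin amplitude follow the recipe published by Bruining et al. (2020) and may differ in detail from the garbled steps above:

```python
import numpy as np
from scipy.signal import hilbert

def fEI(x, win=500):
    """Functional E/I ratio in the style of Bruining et al. (2020): 1 - r."""
    amp = np.abs(hilbert(x))                  # amplitude envelope (Hilbert)
    profile = np.cumsum(amp - amp.mean())     # cumulative sum of mean-subtracted envelope
    n = len(x) // win
    w_amp, w_fluct = np.empty(n), np.empty(n)
    t = np.arange(win)
    for i in range(n):
        seg = profile[i * win:(i + 1) * win]
        a, b = np.polyfit(t, seg, 1)          # linear fit in each bin
        w_fluct[i] = np.std(seg - (a * t + b))  # std of the detrended profile
        w_amp[i] = amp[i * win:(i + 1) * win].mean()  # mean envelope per bin
    w_fluct /= w_amp                          # normalize fluctuation by bin amplitude
    r = np.corrcoef(w_amp, w_fluct)[0, 1]     # correlation across bins
    return 1.0 - r

val = fEI(np.random.default_rng(1).standard_normal(20000))
```

Because r lies in [-1, 1], the resulting value lies in [0, 2], with values near 1 for signals lacking long-term correlations.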
We thank the anonymous reviewers for their valuable comments, which have greatly improved the quality of this paper. This work was supported by the International Research Center for Neurointelligence (WPI-IRCN) at the University of Tokyo Institutes for Advanced Study (UTIAS), JST Moonshot R&D grant JPMJMS2021, AMED under grant JP22dm0307009, Institute of AI and Beyond of UTokyo, and JSPS KAKENHI grants JP20H05920 and JP20H05921.