## Abstract

The relationship between complex brain oscillations and the dynamics of individual neurons is poorly understood. Here we utilize maximum caliber, a dynamical inference principle, to build a minimal yet general model of the collective (mean field) dynamics of large populations of neurons. In agreement with previous experimental observations, we describe a simple, testable mechanism, involving only a single type of neuron, by which many of these complex oscillatory patterns may emerge. Our model predicts that the refractory period of neurons, which has often been neglected, is essential for these behaviors.

## 1 Introduction

A major interest in neuroscience is understanding how macroscopic brain functions, such as cognition and memory, are encoded at the microscale of neurons and their topological connectivities. One of the significant developments in this direction was the Wilson-Cowan (WC) model, describing the averaged behavior of large populations of simple excitatory and inhibitory neurons in terms of a set of coupled, mesoscale differential equations (Destexhe & Sejnowski, 2009; Wilson & Cowan, 1972, 1973). With only a few physical parameters, WC provided one of the first mechanisms for simple (single-frequency) oscillations across the brain, such as the hypersynchronized dynamics observed during epileptic seizures (Destexhe & Sejnowski, 2009; Shusterman & Troy, 2008). More recently, generalized WC-like models have been used to describe heterogeneous populations of neurons ranging in scale from single regions to networks of activities across the whole brain (Breskin, Soriano, Moses, & Tlusty, 2006; Deco, Jirsa, Robinson, Breakspear, & Friston, 2008; Destexhe & Sejnowski, 2009; Hopfield, 1982; Schneidman, Berry, Segev, & Bialek, 2006; Tkacik, Schneidman, Berry, & Bialek, 2006; Weistuch, Mujica-Parodi, Amgalan, & Dill, 2020).

But there remain important macroscopic brain behaviors that WC-like models fail to adequately explain (Chow & Karimipanah, 2019; Muir, 1979). One example is theta oscillations in the hippocampus, which have multiple superimposed frequencies and are thought to be critical for memory formation and storage (Buzsáki, 2002; Buzsáki & Draguhn, 2004; Colgin, 2013). They are believed to be generated through recurrent feedback involving only excitatory neurons (Buzsáki, 2002). Another example is gamma oscillations, high-frequency chaotic firing patterns associated with a wide range of complex brain activities (Buzsáki & Wang, 2012). They are believed to arise in networks of inhibitory neurons. While WC-like models can exhibit these complex patterns of brain activity, they require many different subtypes of neurons to do so (Goles-Chacc, Fogelman-Soulié, & Pellegrin, 1985; Keeley, Fenton, & Rinzel, 2017; Neves & Monteiro, 2016; Rajan & Abbott, 2006).

## 2 The Physics of the Model

We represent a generic network of $N$ neurons (labeled $i = 1, 2, \dots, N$) as a graph; nodes represent neurons, and edges are synaptic connections (see Figure 1). Each node $i$ also has a time-dependent state $S_i(t)$, representing the activity of a neuron. In particular, the nodes of our network can be in any one of three states: quiescent (Q), silent but able to fire; active (A), firing; or refractory (R), unable to fire. Additionally, the states of each node evolve stochastically over time through the cycle $Q \to A \to R \to Q$. The rate of each of these transitions is then chosen to reflect the biophysical dynamics of real neurons.
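As a concrete illustration, this three-state cycle can be simulated directly. The sketch below is a minimal discrete-time version assuming all-to-all connectivity and a logistic firing probability; the parameter values and the logistic form are illustrative assumptions, not quantities fit to data.

```python
import numpy as np

rng = np.random.default_rng(0)

# States: 0 = quiescent (Q), 1 = active (A), 2 = refractory (R)
N = 1000
state = np.zeros(N, dtype=int)
state[rng.random(N) < 0.1] = 1  # seed some initial activity

# Illustrative transition parameters (not fit to data)
p_AR, p_RQ = 0.8, 0.01  # per-step A -> R and R -> Q probabilities
h, J = -4.0, 8.0        # threshold and coupling in an assumed logistic firing rule

def step(state):
    """One synchronous update of the Q -> A -> R -> Q cycle (all-to-all graph)."""
    pi_A = np.mean(state == 1)                     # fraction of active neighbors
    p_QA = 1.0 / (1.0 + np.exp(-(h + J * pi_A)))  # assumed firing probability
    u = rng.random(N)
    new = state.copy()
    new[(state == 0) & (u < p_QA)] = 1  # Q -> A: fire
    new[(state == 1) & (u < p_AR)] = 2  # A -> R: become refractory
    new[(state == 2) & (u < p_RQ)] = 0  # R -> Q: recover
    return new

activity = []
for _ in range(2000):
    state = step(state)
    activity.append(float(np.mean(state == 1)))
```

Tracking `activity`, the fraction of firing neurons over time, is the microscopic counterpart of the population-level quantities analyzed below.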

We use the principle of maximum caliber (Max Cal) to infer these transitions directly from the data (Dixit et al., 2018; Ghosh et al., 2020; Pressé et al., 2013). Here Max Cal, the dynamical extension of maximum entropy, provides the simplest and least-biased model consistent with a few known dynamical constraints, such as average neuronal firing rates and correlations (Dixit et al., 2018; Ghosh et al., 2020; Pressé et al., 2013; Tkacik et al., 2006). This model takes the form of a probability distribution $P$ over the different dynamical trajectories $\Gamma$ of a population of neurons.

Using Max Cal, we model how the fraction of neurons in each state ($\pi_Q$, $\pi_A$, and $\pi_R$) evolves over time. While our approach is applicable to any number of neurons, we focus on the case when this number is large. We do so for two reasons. First, it presents an enormous simplification, as we can study the long-time behavior of our model using mean-field theory (Deco et al., 2008; Gerstner, 2000; Jirsa & Haken, 1997; Omurtag, Knight, & Sirovich, 2000). Second, it is often a reasonable approximation, as the collective behavior of a system converges to its mean when its number of components $N$ is large.

### 2.1 Obtaining the Stochastic Dynamics of the Model Using Maximum Caliber

Thus, $h$ and $J$ can alternatively be computed by fitting the shape of $p_{QA}$ for different values of $\pi_A$. Taken together, our model is a function of four parameters ($p_{AR}$, $p_{RQ}$, $h$, and $J$), each uniquely chosen to reproduce one of our four experimental constraints.

### 2.2 The Mean-Field Equations

In contrast to typical modeling approaches, we have made no assumptions in deriving these equations other than the fact that our experimentally observed constraints are reasonably descriptive of neural dynamics. Thus, we expect our model to be widely applicable, even when other previous models fail.

Also, each of our parameters has a clear biological interpretation. First, $p_{AR}$ and $p_{RQ}$ control the average amount of time neurons spend (respectively) active and refractory. Thus, when $p_{AR}$ is large (as might be expected of real neurons), nodes are only briefly active. On the other hand, $p_{RQ}$ might be expected to be small, as biological neural oscillation occurs at a relatively low frequency (an action potential lasts about 1 ms, whereas even the fastest oscillations have a period of about 10 ms). Reflecting these requirements, we fix $p_{AR}$ and $p_{RQ}$ at 0.8 and 0.01, respectively. Additionally, $h$, the unitless average firing threshold, controls the fraction of neurons that fire spontaneously. Thus, we should have $h < 0$, reflecting a low level of baseline activity. Finally, $J$ reflects feedback, or synaptic coupling, between neighboring neurons and can be either positive (excitatory) or negative (inhibitory).
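In the large-$N$ limit, these four parameters enter a deterministic map for the state fractions. The sketch below iterates a plausible discrete-time mean-field update consistent with the $Q \to A \to R \to Q$ cycle, assuming a logistic form for $p_{QA}$; it is a stand-in for the paper's equation 2.5, not a transcription of it, and the values of $h$ and $J$ are illustrative choices with the required signs.

```python
import numpy as np

# p_AR and p_RQ fixed at the values given in the text;
# h < 0 (low baseline activity) and J > 0 (excitatory) are illustrative
p_AR, p_RQ = 0.8, 0.01
h, J = -4.0, 8.0

def mean_field_step(pi_A, pi_R):
    """Advance the state fractions one time step (assumed logistic p_QA)."""
    pi_Q = 1.0 - pi_A - pi_R
    p_QA = 1.0 / (1.0 + np.exp(-(h + J * pi_A)))  # assumed firing probability
    return (pi_A * (1 - p_AR) + pi_Q * p_QA,  # still active + newly firing
            pi_R * (1 - p_RQ) + pi_A * p_AR)  # still refractory + newly refractory

pi_A, pi_R = 0.1, 0.0
traj = []
for _ in range(5000):
    pi_A, pi_R = mean_field_step(pi_A, pi_R)
    traj.append(pi_A)
```

Because $p_{RQ}$ is small, any oscillations this map produces have the relaxation character described below: brief bursts of activity followed by long refractory silences.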

We study two general classes of brain oscillations, corresponding to the network activities of excitatory ($J>0$) and inhibitory ($J<0$) neurons. Here, excitatory oscillations are characterized by high-amplitude waves of activity followed by long periods of silence during which most neurons are refractory (Buzsáki, 2002; Buzsáki & Draguhn, 2004). In contrast, networks of inhibitory neurons fire asynchronously, producing low-amplitude, high-frequency oscillations (Brunel & Hakim, 2008; Buzsáki & Wang, 2012; Korn & Faure, 2003). And unlike WC, both of these behaviors can be exhibited by our model using only a single population of neurons (Feldman & Cowan, 1975; Muir, 1979).

## 3 Model Properties

The formulation of WC-like models is based on quasi-steady-state dynamics and requires multiple populations of neurons to exhibit even single-frequency oscillations (see appendix C and Feldman & Cowan, 1975, and Muir, 1979). In contrast, the behaviors of real neurons are more complex, often involve few neural subtypes, and have been difficult to describe mechanistically (Buzsáki & Draguhn, 2004; Chow & Karimipanah, 2019). We next demonstrate the significant improvements of our model over these previous approaches.

And despite appearing to have almost noise-like dynamics, these chaotic firing patterns robustly store information in their probability distributions of amplitudes (see Figure 3C and inset). And thus, the asynchronous oscillations in real networks of inhibitory neurons can be seen as information transmission that is fast and robust to noise (Brunel & Hakim, 2008; Buzsáki & Wang, 2012). Also, hidden within the chaotic region are occasional windows of stable oscillations that are seen when $J$ is very negative (see Figure 3A). Whether pathological or strategic, this suggests that real networks of neurons may be able to flexibly switch between qualitatively different patterns of firing activity by only slightly changing their synaptic coupling (Korn & Faure, 2003).
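One way to probe such chaos numerically is to track the divergence of two nearby trajectories of the mean-field map at strongly negative $J$; a positive average log-divergence rate (largest Lyapunov exponent) indicates chaos. The map below again assumes a logistic $p_{QA}$ as a stand-in for equation 2.5, and the values of $h$ and $J$ are illustrative, not taken from the paper's phase diagram.

```python
import numpy as np

# Strongly inhibitory coupling; all values illustrative, not fit to data
p_AR, p_RQ = 0.8, 0.01
h, J = 2.0, -30.0

def step(pi_A, pi_R):
    """One step of the assumed logistic mean-field map."""
    pi_Q = 1.0 - pi_A - pi_R
    p_QA = 1.0 / (1.0 + np.exp(-(h + J * pi_A)))
    return pi_A * (1 - p_AR) + pi_Q * p_QA, pi_R * (1 - p_RQ) + pi_A * p_AR

# Benettin-style estimate: follow two nearby trajectories,
# renormalizing their separation back to d0 after every step
x = np.array([0.2, 0.1])
y = x + np.array([1e-8, 0.0])
d0 = 1e-8
for _ in range(500):  # discard transients
    x, y = np.array(step(*x)), np.array(step(*y))
    d = np.linalg.norm(y - x)
    y = x + (y - x) * (d0 / d)
total, n = 0.0, 5000
for _ in range(n):
    x, y = np.array(step(*x)), np.array(step(*y))
    d = np.linalg.norm(y - x)
    total += np.log(d / d0)  # accumulate per-step stretching
    y = x + (y - x) * (d0 / d)
lyap = total / n
print("estimated largest Lyapunov exponent:", lyap)
```

A positive estimate at a given $(h, J)$ marks chaotic firing; dips to negative values as $J$ is varied would correspond to the windows of stable oscillation noted above.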

Taken together, the general behavior of our model changes dramatically, in biologically expected ways, as its parameters are varied. These findings are summarized in Figure 4, illustrating how these behaviors change with $h$ and $J$ (with $p_{RQ}$ and $p_{AR}$ fixed at their previous, biologically plausible values). In particular, as long as $p_{RQ}$ and $p_{AR}$ are biologically appropriate, our model exhibits roughly three different behaviors (corresponding to the three colors in Figure 4): constant (equilibrium) activity and both excitatory and inhibitory oscillations (including chaos). In contrast, a single population of WC neurons exhibits only the first of these behaviors (see appendix C). And analysis of the locations and properties of each of these regimes can be easily performed using only standard techniques (see appendix D). Thus, our model explains a huge variety of complex, natural phenomena in a simple and practical way. In particular, $h$ (i.e., the mean firing threshold) can be manipulated experimentally by applying an external voltage to a group of neurons (Breskin et al., 2006; Eckmann et al., 2007). Also, synaptic activity ($J$) can be manipulated (Breskin et al., 2006). The predictions of our model (and even the phase diagram itself) can thus be easily tested experimentally.
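A rough numerical version of such a phase diagram can be sketched by iterating the mean-field map over a grid of $(h, J)$ values and classifying the long-time attractor by its amplitude. As before, the logistic form of $p_{QA}$ is an assumption standing in for equation 2.5, and this crude amplitude criterion lumps all oscillatory behavior (periodic or chaotic) into one class.

```python
import numpy as np

p_AR, p_RQ = 0.8, 0.01  # fixed at the biologically plausible values

def classify(h, J, burn=4000, keep=1000):
    """Iterate the assumed mean-field map and label the long-time behavior."""
    pi_A, pi_R = 0.1, 0.0
    lo, hi = 1.0, 0.0
    for t in range(burn + keep):
        pi_Q = 1.0 - pi_A - pi_R
        p_QA = 1.0 / (1.0 + np.exp(-(h + J * pi_A)))
        pi_A, pi_R = (pi_A * (1 - p_AR) + pi_Q * p_QA,
                      pi_R * (1 - p_RQ) + pi_A * p_AR)
        if t >= burn:  # measure amplitude only after transients decay
            lo, hi = min(lo, pi_A), max(hi, pi_A)
    return "oscillatory" if hi - lo > 1e-3 else "equilibrium"

# Coarse scan over the (h, J) plane
for h in (-6.0, -2.0, 2.0):
    for J in (-30.0, 0.0, 15.0):
        print(f"h={h:+.0f}, J={J:+.0f}: {classify(h, J)}")
```

Refining the grid and separating chaotic from periodic attractors (e.g., by a Lyapunov estimate) would recover a two-dimensional diagram of the kind shown in Figure 4.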

## 4 Discussion

Here we have presented a new treatment of collective neural dynamics that starts from only the most elementary biophysical properties of neurons and basic stochastic dynamics. We find a broad range of behaviors, even in the simplest case of only a single type of neuron (either excitatory or inhibitory).

Of course, many situations involve both types of neurons. Nevertheless, some situations involve only a single type. For example, theta-wave neuronal oscillations in the hippocampus are thought to play a considerable role in memory formation and spatial navigation (Buzsáki, 2002; Colgin, 2013). The currents driving these oscillations are believed to be primarily generated by recurrent excitatory-excitatory connections within the CA3 region of the hippocampus, whereby these neurons robustly synchronize using a “relaxation” mechanism akin to our model's predictions (Buzsáki, 2002; Buzsáki & Draguhn, 2004). Our model suggests how these neurons can easily toggle between and store the large number of complex oscillatory patterns required for their proper function (Buzsáki & Draguhn, 2004; Hutcheon & Yarom, 2000; Llinás, 1988).

Similarly, the emergence of chaotic neural dynamics has been seen experimentally and is believed to underlie high-frequency, gamma-band oscillations across the brain (Aihara, Takabe, & Toyoda, 1990; Brunel & Hakim, 2008; Korn & Faure, 2003). Our model generates these patterns with just inhibitory neurons (Buzsáki & Wang, 2012). And while chaotic dynamics might seem counterproductive for the brain, it has been theorized that these patterns are critical for information storage (Aihara et al., 1990; Brunel & Hakim, 2008; Jia et al., 2012; Korn & Faure, 2003). And perhaps fluctuations into the occasional window of stability within this chaos play a role in pathologies such as epilepsy (Sato et al., 2017).

It is also worth noting that our mean-field Max Cal, equation 2.5, resembles the model proposed in Goychuk and Goychuk (2015), but with two key differences: first, it makes far fewer assumptions and can be rigorously extended to small, noisy populations of neurons (see appendix B); and second, it defines time discretely and exhibits richer dynamics (e.g., chaos). At the same time, this link may provide a new avenue to examine the behaviors of more general neural networks (Pinto & Copelli, 2019; Goychuk & Goychuk, 2015).

Our model is also readily extended beyond a single type of neuron. In particular, WC provided the starting point for the Hopfield model of associative learning, which has itself been an essential starting point for much of the recent development in artificial neural networks (Destexhe & Sejnowski, 2009; Hopfield, 1982; Hopfield & Tank, 1986). In that case, each population of neurons is assigned its own learned coupling $J_{ij}$, representing the storage of unique patterns of activity. In a similar vein, the present approach may allow generalization beyond the Hopfield model to include more complex, dynamical features (Destexhe & Sejnowski, 2009).

## Appendix A: Maximizing Caliber for Markovian Processes

## Appendix B: Deriving the Mean-Field Model from Max Cal

## Appendix C: Deriving the Wilson-Cowan Model from Max Cal

## Appendix D: Bifurcation Analysis

We now have three cases to consider: when the unstable eigenvalue is $\lambda = 1$ (excitatory), when it is $\lambda = -1$ (inhibitory), and when it is complex with $|\lambda| = 1$ (either excitatory or inhibitory).

Thus, the three scenarios describe three sets of critical transitions between different types of oscillations. The first ($\lambda = 1$) and last ($\lambda$ complex) exclusively correspond to the emergence of excitatory oscillations. In contrast, the second case ($\lambda = -1$) corresponds to the emergence of inhibitory oscillations. Most important, the Lambert function $W(x)$ is real-valued only when $x \ge -e^{-1}$. And when $-e^{-1} < x < 0$, the Lambert function has two real branches (corresponding to the beginning and end of oscillatory behavior). Thus, we have found an analytical relationship between our model parameters and the emergence of qualitatively distinct biological patterns.
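These domain facts about $W$ can be checked numerically. The sketch below solves the defining relation $w e^w = x$ by Newton iteration; it is a generic root-finder illustrating the two real branches on $(-e^{-1}, 0)$, not the paper's specific parameter mapping.

```python
import numpy as np

def lambert_w(x, branch=0, iters=50):
    """Solve w * exp(w) = x by Newton iteration (real branches only)."""
    assert x >= -np.exp(-1.0), "W(x) is real only for x >= -1/e"
    # Start on either side of w = -1, where the two real branches meet
    w = -0.5 if branch == 0 else -2.0
    for _ in range(iters):
        w -= (w * np.exp(w) - x) / (np.exp(w) * (1.0 + w))
    return w

x = -0.2  # a point in (-1/e, 0): two real solutions exist
w0 = lambert_w(x, branch=0)    # principal branch, W_0(x) > -1
wm1 = lambert_w(x, branch=-1)  # lower branch, W_-1(x) < -1

# Both satisfy the defining relation w * exp(w) = x
assert abs(w0 * np.exp(w0) - x) < 1e-12
assert abs(wm1 * np.exp(wm1) - x) < 1e-12
```

As $x$ sweeps through $(-e^{-1}, 0)$, the two branches mark the onset and offset of the oscillatory regime identified in the text; for $x < -e^{-1}$ no real solution exists.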

## Acknowledgments

The research was funded by the W. M. Keck Foundation (L.M.P., K.D.), the NSF BRAIN Initiative (L.M.P., K.D.: ECCS-1533257, NCS-FR 1926781), and the Stony Brook University Laufer Center for Physical and Quantitative Biology (K.D.).

## Competing Interests

We declare no competing financial interests.
