## Abstract

We explore the stability and dynamic properties of a hierarchical, linearized, and analytic spectral graph model for neural oscillations that integrates the structural wiring of the brain. Previously, we have shown that this model can accurately capture the frequency spectra and the spatial patterns of the alpha and beta frequency bands obtained from magnetoencephalography recordings without regionally varying parameters. Here, we show that this macroscopic model based on long-range excitatory connections exhibits dynamic oscillations with a frequency in the alpha band even without any oscillations implemented at the mesoscopic level. We show that depending on the parameters, the model can exhibit combinations of damped oscillations, limit cycles, or unstable oscillations. We determined bounds on model parameters that ensure stability of the oscillations simulated by the model. Finally, we estimated time-varying model parameters to capture the temporal fluctuations in magnetoencephalography activity. We show that a dynamic spectral graph modeling framework with a parsimonious set of biophysically interpretable model parameters can thereby be employed to capture oscillatory fluctuations observed in electrophysiological data in various brain states and diseases.

## Author Summary

One of the open questions in the field of neuroscience is to understand how dynamic brain functional activity arises with the static brain structural wiring as a constraint. Here, we explore the properties of an analytic model that can accurately capture such a relationship, where the brain functional activity is captured by magnetoencephalography. We demonstrate the temporal and spectral richness of this model and how it is controlled by a small set of biophysically interpretable parameters. Finally, we show that these parameters can accurately capture the temporal fluctuations in magnetoencephalography activity—outlining a tractable strategy to capture oscillatory fluctuations observed in electrophysiological data in various brain states and diseases.

## INTRODUCTION

One of the prevailing questions in the field of neuroscience is to understand brain functional activity, its relationship with the structural wiring, and its dynamics within a brain state and among different brain states and diseases (Bassett & Bullmore, 2009; Cao et al., 2014; Fornito, Zalesky, & Breakspear, 2015; Suárez, Markello, Betzel, & Misic, 2020). These questions are being pursued using various noninvasive neuroimaging modalities to measure the functional activity and the anatomical structural wiring of the brain. Functional activity is measured using modalities such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG) that capture different spatial and temporal scales. The white matter structural wiring of the brain is estimated with MRI followed by diffusion tensor imaging (DTI) and tractography. Subsequently, data-driven and model-based approaches are used to understand how structural-functional (SC-FC) relationships arise in different brain states and diseases. Both of these approaches are largely based on transcribing the brain anatomy into a graph, where different brain regions are the nodes, connected to each other by edges representing white matter fiber tracts.

Data-driven approaches have been broadly based on calculating graph theoretic measures to derive SC-FC relationships (Abdelnour, Dayan, Devinsky, Thesen, & Raj, 2018; Abdelnour, Voss, & Raj, 2014; Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Bassett & Bullmore, 2006, 2009; Buckner et al., 2005; Bullmore & Sporns, 2009; Chatterjee & Sinha, 2007; Ghosh, Rho, McIntosh, Kötter, & Jirsa, 2008; Y. He, Chen, & Evans, 2008; Hermundstad et al., 2013; Park & Friston, 2013; Rubinov, Sporns, van Leeuwen, & Breakspear, 2009; Strogatz, 2001; van den Heuvel, Mandl, Kahn, & Hulshoff Pol, 2009). However, such approaches suffer from a lack of neural physiology (Mišić et al., 2015) and from their inability to infer mechanistic insights. Many modeling approaches have been primarily based either on extensive nonlinear models such as neural mass and mean field models (Breakspear, 2017; Cabral, Kringelbach, & Deco, 2014, 2017; David & Friston, 2003; Destexhe & Sejnowski, 2009; El Boustani & Destexhe, 2009; Honey et al., 2009; Muldoon et al., 2016; Siettos & Starke, 2016; Spiegler & Jirsa, 2013; Wilson & Cowan, 1973) that require time-consuming simulations, or on linear models (based on network control theory) that exclude the biophysical details of the excitatory and inhibitory neuron population ensembles (Gu et al., 2015, 2017; Stiso et al., 2019; Tang & Bassett, 2018). While there are many other linearized neural mass models, both at a regional level and for the entire brain network connected via the structural connectome (Abeysuriya & Robinson, 2016; Deco et al., 2014; Gabay, Babaie-Janvier, & Robinson, 2018; Moran et al., 2007), a thorough analytic/stability analysis of the brain network models, taking into account region-specific delays, has not been done before.
In this article, we focus on a modeling approach lying between the extensively nonlinear and the nonbiophysical linear modeling approaches, demonstrated by a linear biophysical model called the spectral graph theory model (SGM) that can accurately capture the wide-band static frequency spectra obtained from MEG. This model incorporates biophysics while maintaining parsimony and requiring minimal computation (Raj et al., 2020; Verma, Nagarajan, & Raj, 2022).

We investigate the properties of SGM and extend it to capture the temporal fluctuations in MEG activity, an emerging marker of brain function (Baker et al., 2014; Jiang et al., 2022; Liuzzi et al., 2019; Quinn et al., 2018; Sorrentino et al., 2021; Tait & Zhang, 2022; Tewarie et al., 2019a, 2019b; Vidaurre et al., 2018). We focus on SGM for various reasons. First, SGM yields a closed-form steady-state frequency response of the functional activity generated by fast brain oscillations. It is based on eigendecomposition of a graph Laplacian, drawn from the field of spectral graph theory (Auffarth, 2007; Kondor & Lafferty, 2002; Larsen, Nielsen, & Sporring, 2006; Ng, Jordan, & Weiss, 2001). It is a hierarchical model consisting of excitatory and inhibitory population ensembles at the mesoscopic level, and excitatory long-range connections at the macroscopic level. The key distinguishing factor of SGM is that it captures SC-FC using a parsimonious set of biophysically interpretable model parameters: the neural gains, time constants, conduction speed, and macroscopic coupling constant. Due to its parsimony and its ability to directly capture the wide-band frequency spectra, parameter inference is more tractable as compared to the nonlinear modeling approaches.

Despite the prior success of the SGM’s ability to fit wide-band regional power spectra using a closed-form analytical solution (Raj et al., 2020), its ability to accommodate more complex dynamics, including regimes of stability and instability, has not yet been explored. Other aspects of SGM’s biological relevance also remain unaddressed. For instance, it is not known whether a linear model like SGM can accommodate dynamic changes in model parameters that may then lead to dynamic complex behavior. Since the SGM was formulated in terms of steady-state frequency spectra, its transient behavior was not previously addressed. These are important questions, since a biologically realistic model requires sufficiently rich temporal dynamics at all timescales, and a stationary spectral response only tells part of the story.

In this article, we address these aspects. Using a series of analytical and numerical explorations, we show that SGM is capable of generating frequency-rich spectra and qualitatively different solution regimes in the time domain. In particular, we demonstrate that this model can exhibit combinations of different dynamical solutions: damped oscillations, limit cycles, and unstable oscillatory solutions, depending on the parameter values. We further demonstrate that the dominant alpha band behavior is independent of local oscillators and can arise purely from the macroscopic network. We then performed a stability analysis to identify stability boundaries separating these different dynamical regimes. In contrast to prevailing dynamic function studies based on noise-driven fluctuations around a bifurcation point of nonlinear SC-FC models, we employed SGM to capture temporal fluctuations in the frequency spectra. We show a novel approach to capture dynamic function with only a small set of time-varying model parameters: the neural gains and the macroscopic coupling constant.

## RESULTS

The model used here is a modified SGM we developed recently (Verma et al., 2022). We hierarchically model the local cortical mesoscopic and long-range macroscopic signals for each brain region, where the regions are obtained using the Desikan-Killiany atlas (Desikan et al., 2006). We then solve the model equations to obtain a closed-form solution in the Fourier frequency domain. This provides frequency-rich spectra that serve as an estimate of the source-reconstructed MEG spectra. At the mesoscopic level, we model the excitatory (*x*_{e}(*t*)) and inhibitory (*x*_{i}(*t*)) signals regulated by parameters *g*_{ee}, *g*_{ii}, and *g*_{ei}, which are the neural gain terms for the excitatory, inhibitory, and coupled excitatory and inhibitory populations, respectively, and parameters *τ*_{e} and *τ*_{i}, which are the excitatory and inhibitory characteristic time constants, respectively. Both excitatory and inhibitory mesoscopic models for all the regions receive a Gaussian white noise input *p*(*t*). At the macroscopic level, we model the long-range excitatory signals (*x*_{k}(*t*) for every brain region *k*) regulated by these parameters: macroscopic graph time constant *τ*_{G}, global coupling constant *α*, and conduction speed *v*. The macroscopic model for *x*_{k}(*t*) in every region receives the mesoscopic signals *x*_{e}(*t*) + *x*_{i}(*t*) as input.
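As a minimal illustration of how a linearized excitatory-inhibitory system of this kind produces a spectral peak, the sketch below computes the frequency response of a generic two-population linear system via its resolvent. The coupling layout and parameter values here are hypothetical choices for illustration, not the actual SGM equations (those are given in the Methods section).

```python
import numpy as np

# Hypothetical linearized E-I system (illustrative, not the SGM equations):
#   tau_e * dx_e/dt = -x_e - g_ei * x_i + p(t)
#   tau_i * dx_i/dt = g_ei * x_e - (1 + g_ii) * x_i
tau_e, tau_i = 0.01, 0.01        # example time constants (s)
g_ei, g_ii = 2.0, 0.5            # example neural gains

A = np.array([[-1.0 / tau_e, -g_ei / tau_e],
              [g_ei / tau_i, -(1.0 + g_ii) / tau_i]])
B = np.array([1.0 / tau_e, 0.0])  # white noise p(t) drives the E population

# Frequency response H(w) = (jwI - A)^{-1} B, evaluated on a grid in Hz
freqs = np.linspace(1.0, 60.0, 600)
H = np.array([np.linalg.solve(2j * np.pi * f * np.eye(2) - A, B)
              for f in freqs])
power = np.abs(H[:, 0]) ** 2      # excitatory power spectrum
peak_hz = float(freqs[np.argmax(power)])
```

For these example values, the eigenvalues of the state matrix are complex with negative real parts, so the system is stable but resonant, and `power` exhibits a single spectral peak whose location depends on the gains and time constants.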

### SGM Can Exhibit Oscillations That Are Damped, Limit Cycles, or Unstable

We explored the transient behavior of the model by simulating the model solution in the time domain. First, in order to obtain time simulations, we performed a numerical inverse of the Laplace transformed equations, as explained in the Methods section. We did this first for the local mesoscopic model alone and then for the complete macroscopic model, with an input term that can be either an impulse or Gaussian white noise. In the case of an impulse input, we replaced the white noise input term *p*(*t*) with an impulse input (*p*(*t*) ≠ 0 only when *t* = 0, and *p*(*t*) = 0 otherwise). The simulations of the mesoscopic model are shown in Figure 1. Simulations of the mesoscopic model with impulse input (mesoscopic model impulse response) are shown in Figure 1A. Three types of solutions are possible: damped oscillations, limit cycles, and unstable oscillatory solutions, depending on the values of the model parameters. Such damped oscillations and limit cycles have previously been observed in a comparable nonlinear neural mass model as well (Zhang & Saggar, 2020). With Gaussian white noise input in Figure 1B, damped oscillations and limit cycles are not distinguishable. However, oscillations are observed regardless. While this general trend can be observed for a range of model parameters (which we demonstrate later through a stability analysis), we chose representative values of *τ*_{e}, *τ*_{i}, and *g*_{ii} here and varied *g*_{ei} to demonstrate stable and unstable solutions.
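Time-domain solutions can in principle be obtained from any numerical inverse Laplace scheme. As a self-contained sketch (using the classical Gaver-Stehfest algorithm, not necessarily the specific method employed here), the following inverts a simple transform with a known time-domain solution. Consistent with the numerical issues noted in the Discussion, this class of real-axis methods degrades for strongly oscillatory solutions.

```python
import math

def stehfest_coeffs(N):
    # Gaver-Stehfest weights V_k for even N
    half = N // 2
    V = []
    for k in range(1, N + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            total += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        V.append((-1) ** (half + k) * total)
    return V

def invert_laplace(F, t, N=12):
    # Approximate f(t) from F(s), sampling F on the real axis only
    a = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# F(s) = 1/(s+1)^2 is the Laplace transform of f(t) = t * exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0) ** 2, t=1.0)
exact = math.exp(-1.0)  # t * exp(-t) at t = 1
```

For smooth, nonoscillatory transforms such as this one, the approximation agrees with the exact solution to several digits; oscillatory transforms require contour-based methods and higher precision.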

Time domain simulations of the macroscopic model were also obtained by taking the inverse Laplace transform. Figure 2 shows the simulations obtained for the macroscopic model impulse response (input *x*_{e}(*t*) + *x*_{i}(*t*) in the macroscopic model replaced with an impulse) in Figure 2A, the complete model with the mesoscopic model’s input *p*(*t*) as an impulse in Figure 2B, and complete simulations with *p*(*t*) as a Gaussian white noise input in Figure 2C. As seen in Figure 2A, depending on the parameter values, four types of solutions are possible. If *α* > 1, the mean value of the response keeps increasing with time. For certain combinations of *τ*_{G} and *α*, we observe damped oscillations, oscillations that blow up with time, and limit cycles. These regimes are clearly distinguishable when simulating the macroscopic impulse response alone in Figure 2A. When we include a stable mesoscopic model with impulse input in Figure 2B, stability of the complete model is not obvious. Moreover, the regimes are further obscured when noise is included in the model, shown in Figure 2C. We have only demonstrated simulations for initial time points up to around 0.3 seconds because of numerical instabilities encountered while performing the inverse Laplace transform for later time points, which we discuss in the Discussion section. This general trend can be observed for a range of model parameters (which we demonstrate later through a stability analysis), but we chose these default values for the simulations.

### SGM Generates Stable Damped Oscillatory Solutions for a Range of Parameters

Based on the stability analysis method described in the Methods section, we obtained the stability regimes for the macroscopic as well as the local mesoscopic model alone. The system is stable when the oscillations dampen over time, is a limit cycle if the amplitude of oscillations remains constant over time, and is unstable if the amplitude or mean of the oscillations increases with time. Stability is determined by calculating the poles of the transfer function of the mesoscopic model (see Methods section for details). We show the poles for a set of parameter values in Figure 3A. We see that a change in parameters *g*_{ei} or *g*_{ii} can shift the roots of the characteristic polynomial in Equation 28 (which correspond to the poles of the mesoscopic model transfer function) to the right of the imaginary axis, leading to instability. Since the poles that cross the imaginary axis have a nonzero imaginary component as well, the system exhibits oscillations that blow up with time. We also see that for the mesoscopic model, stability is largely controlled by the neural gain parameters *g*_{ei} and *g*_{ii}, as shown in Figure 3B. For higher values of *g*_{ei} and *g*_{ii}, the system becomes unstable and the oscillation amplitude increases with time, as shown in the rightmost column in Figure 1. The time constants *τ*_{e} and *τ*_{i} only shift the stability boundary marginally, as also seen in Supporting Information Figure S1. At the stability boundary, we observe limit cycle type solutions, as demonstrated in the middle column in Figure 1.
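The pole-based stability criterion reduces to checking whether all roots of the characteristic polynomial lie strictly to the left of the imaginary axis. A minimal sketch, using illustrative second-order polynomials rather than the actual coefficients of Equation 28:

```python
import numpy as np

def is_stable(coeffs):
    # coeffs: characteristic polynomial coefficients, highest degree first.
    # Stable iff every root (pole) has a strictly negative real part;
    # roots exactly on the imaginary axis correspond to limit cycles.
    return bool(np.all(np.roots(coeffs).real < 0))

# s^2 + 2s + 5 has roots -1 +/- 2j: damped oscillation (stable)
stable_example = is_stable([1.0, 2.0, 5.0])
# s^2 - 2s + 5 has roots +1 +/- 2j: oscillation that blows up (unstable)
unstable_example = is_stable([1.0, -2.0, 5.0])
```

Because the poles that cross the axis retain a nonzero imaginary part, the loss of stability manifests as growing oscillations rather than monotonic divergence.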

The mesoscopic model can also exhibit a variety of peak frequencies, demonstrated in Figure 3C and Supporting Information Figure S2. As shown in Figure 3C, a primary and a secondary peak in the frequency spectra can be observed, depending on the parameter values. For fixed *τ*_{e} and *τ*_{i} and varying *g*_{ei} and *g*_{ii}, a primary alpha and a secondary higher beta peak can be observed. For higher values of *τ*_{e} and *τ*_{i}, a primary theta or delta peak can be observed. The mesoscopic model’s spectra can also exhibit a primary peak in the gamma band, for very low values of *τ*_{e} and *τ*_{i}, as shown in the Supporting Information Figure S2.

The stability regime of the macroscopic model is demonstrated in Figure 4A, by replacing input *x*_{e}(*t*) + *x*_{i}(*t*) with an impulse. For the macroscopic model, all the parameters *τ*_{e}, *τ*_{G}, *α*, and *v* impact the stability. Time constant *τ*_{e} determines the boundary for stability of oscillations when *α* = 0, as demonstrated in the Methods section. Speed *v* determines the shape of the boundary for stability of oscillations, which is shown with a blue line. If speed becomes zero, the effect due to the connectivity matrix becomes zero, and the situation is the same as that for *α* = 0. In such a case, the boundary of stability will be a horizontal line at 2*τ*_{G} = *τ*_{e}. There is also a hard boundary of stability at *α* = 1, shown as a red line, as was demonstrated in the Methods section. For *α* > 1, the mean of the oscillations starts increasing with time, making the system unstable. Moreover, for sufficiently low values of *τ*_{G} and *α* > 1, both the mean and the amplitude of oscillations increase with time. These example regimes are labeled in Figure 4A and the corresponding macroscopic model’s impulse response is simulated in Figure 4B. As seen in the simulations shown in Figure 4B1, since *τ*_{G} is sufficiently high and *α* ≤ 1, the oscillations dampen over time. In Figure 4B2, since *τ*_{G} is below the stability boundary in blue but *α* ≤ 1, the amplitude of the oscillations increases over time even though the mean of the oscillations does not change. In Figure 4B3, since *α* > 1 even though *τ*_{G} is sufficiently high, the mean of the oscillations increases with time even though the oscillations dampen over time. In Figure 4B4, since *τ*_{G} is sufficiently low and *α* > 1, both the amplitude and mean of the oscillations increase with time.

### Macroscopic Model Alone Can Exhibit a Peak in the Alpha Frequency Band

We observed that the macroscopic model’s spectra can exhibit a single peak in the alpha frequency band even without the local mesoscopic oscillatory signals *x*_{e}(*t*) + *x*_{i}(*t*) as the input to the macroscopic model. To test this, we replaced the local mesoscopic model input of *x*_{e}(*t*) + *x*_{i}(*t*) to the macroscopic model with exp(−*t*). This is a simple damping term, and its Fourier transform is 1/(j*ω* + 1). Thus, if a peak in a frequency band is observed in this model’s spectra, it can only be generated by the macroscopic response. A peak in the alpha frequency band can be seen for a combination of parameters *τ*_{G} and *α* in the stable regime, as shown in Figure 4C. Note that this peak is dependent on *τ*_{e}, and altering *τ*_{e} will shift the peak frequency.
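The stated Fourier transform pair can be verified numerically. The sketch below (with assumed sampling settings) compares a discretized transform of exp(−*t*) against the analytic expression 1/(j*ω* + 1):

```python
import numpy as np

dt, T = 1e-3, 50.0               # assumed sampling step and duration (s)
t = np.arange(0.0, T, dt)
x = np.exp(-t)                   # the damping term replacing the mesoscopic input

X_num = np.fft.rfft(x) * dt      # rectangle-rule approximation of the integral
w = 2.0 * np.pi * np.fft.rfftfreq(len(t), d=dt)
X_ana = 1.0 / (1j * w + 1.0)     # analytic Fourier transform of exp(-t), t >= 0

err = float(np.max(np.abs(X_num - X_ana)))   # worst-case discretization error
```

The agreement is limited only by the discretization (error on the order of *dt*), and since 1/(j*ω* + 1) is monotonically decreasing in magnitude, any spectral peak observed with this input must come from the macroscopic response itself.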

The macroscopic model can also exhibit a variety of frequencies at which a peak in the spectra is observed, as shown in the supplementary frequency heat maps for different model parameters in Supporting Information Figure S3. The primary peak in the macroscopic model can also shift upon varying the time constants *τ*_{e} and *τ*_{G}. Note that a secondary peak is not observed in this macroscopic model alone. The secondary peak is exhibited by the mesoscopic model, shown in Figure 3C, and therefore is exhibited at the macroscopic level when the mesoscopic model is included as an input to the macroscopic model.

### Dynamics in Functional Activity Is Captured by Fluctuations of a Small Set of Parameters

Next, we used the time-frequency decomposition of MEG source-reconstructed time series to estimate model parameters over time, approximately every 5 seconds. We only varied *α*, *g*_{ei}, and *g*_{ii}. The dynamic model parameters are shown in Figure 5A, B, and C. All three parameters vary over time for many subjects. Interestingly, a sharp switch in *α* can be seen for many subjects. To capture this variation, we counted the number of times the difference between two consecutive values of *α* was greater than 0.5 over time for every subject. This is shown in the Supporting Information Figure S4. For multiple subjects, sharp switches occur. Parameter *α* captures the extent of global connectivity.
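The switch-counting procedure described above can be sketched as follows, assuming the threshold is applied to the absolute difference between consecutive estimates:

```python
import numpy as np

def count_switches(alpha, threshold=0.5):
    # Number of times consecutive estimates of alpha jump by more than
    # the threshold (0.5 in the text)
    return int(np.sum(np.abs(np.diff(np.asarray(alpha))) > threshold))

# Illustrative alpha trace with two sharp switches
alpha_trace = [0.2, 0.25, 0.9, 0.95, 0.3, 0.35]
n_switches = count_switches(alpha_trace)   # jumps 0.25 -> 0.9 and 0.95 -> 0.3
```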

For the previous model parameter estimation, we kept an upper bound of 1.0 on *α*, to ensure stability. We also estimated how the model parameters vary if the upper bound is relaxed. In Supporting Information Figure S5A, B, and C, we show how the model parameters vary when the upper limit of *α* has been increased to 3. We again see switches in *α*, which we show in Supporting Information Figure S6.

We also tested whether having static parameters instead would be equally accurate in capturing the dynamic spectra. For this, we calculated the Pearson’s correlation coefficient between the modeled spectra with static parameters and the empirical spectra at each of the time points, and compared these with the corresponding correlations obtained with dynamic parameters. Then, we performed a paired *t* test between the two sets of correlations after performing a Fisher’s z transform on them. Based on a one-sided paired *t* test, having dynamic parameters gave significantly higher Pearson’s *r* as compared to having static model parameters, with *p* < 0.0001.
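The statistical comparison can be sketched as follows, using synthetic correlation values purely for illustration (the empirical values are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-time-point correlations between modeled and empirical spectra
r_dynamic = rng.uniform(0.6, 0.9, size=40)              # dynamic-parameter fits
r_static = r_dynamic - rng.uniform(0.0, 0.2, size=40)   # static-parameter fits

# Fisher z-transform, then a one-sided paired t test (dynamic > static)
z_dyn, z_sta = np.arctanh(r_dynamic), np.arctanh(r_static)
t_stat, p_two = stats.ttest_rel(z_dyn, z_sta)
p_one = p_two / 2.0 if t_stat > 0 else 1.0 - p_two / 2.0
```

The Fisher z-transform makes the correlation values approximately normal before the paired test; the one-sided p value is obtained by halving the two-sided value when the test statistic has the hypothesized sign.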

A visual comparison of the empirical MEG spectra and the modeled spectra at different time points is shown in Supporting Information Figures S7 and S8 for two representative subjects, along with a spatial distribution of empirical and modeled alpha frequency bands. We see that SGM currently does not capture the dynamical changes in the spatial distribution, even though the patterns are broadly similar—we will address this in follow-up work.

### Dynamics in Functional Activity Is Captured by Fluctuations in Model Stability

Based on the estimated dynamic model parameters, we also calculated the stability at the respective time points. Note that the neural gain terms control the mesoscopic model’s stability, while *α* controls the macroscopic model’s stability. By keeping an upper bound of 1 on *α*, we ensured that the macroscopic system does not become unstable because of an increase in *α*. The dynamic stability patterns are shown in Figure 5E, F, and G. As shown in Figure 5E, the mesoscopic models’ stability varies with time for very few subjects. As shown in Figure 5F, *α* hits the stability boundary of *α* = 1 over time for some subjects too. To capture whether the mesoscopic and macroscopic models’ stability varied simultaneously, we show their combination in Figure 5G. The shade is based on four situations: (1) the mesoscopic model is stable and *α* < 1, (2) the mesoscopic model is stable but *α* = 1, (3) the mesoscopic model is unstable but *α* < 1, and (4) the mesoscopic model is unstable and *α* = 1. This plot shows that most changes in stability patterns occur because of *α* hitting the upper bound.

We repeated this analysis with an increased upper bound of *α* = 3. This is shown in Supporting Information Figure S5E, F, and G. Similar changes in the dynamics are observed here as well. In particular, *α* switches between stable and unstable regimes for many subjects. As a consequence, we see that the macroscopic model becomes unstable while the mesoscopic model remains stable at different time points, shown as the pink region in Supporting Information Figure S5G.

We note that the values of *τ*_{e} and *τ*_{G}, which are static in our estimation, also control the macroscopic model’s stability. This implies that the system is either constantly stable or constantly unstable, depending on their values. These values are shown in Supporting Information Figure S9. For all the subjects except two, the macroscopic system is stable. Constraining the time constants appropriately is beyond the scope of this work, but we will investigate it in the future.

## DISCUSSION

In this work, we demonstrated that a biophysical linearized spectral graph model can generate frequency-rich spectra. The key advantage of this model is that it is hierarchical, analytic, graph-based, and consists of a parsimonious set of biophysically interpretable global parameters. Although prior work has already demonstrated the SGM’s ability to fit wide-band regional power spectra, its ability to accommodate more complex dynamics, including regimes of stability and instability, was previously unknown. In this article, we focused on those aspects, along with dynamic changes in model parameters that may then lead to dynamic complex behavior. Using detailed analytical and numerical analyses, we were able to show that this model can exhibit oscillatory solutions that are damped, limit cycles, or unstable, which we demonstrated by calculating the inverse Laplace transform of the model responses. We also showed how the stability of both the mesoscopic (local circuits) and the macroscopic (whole-brain network) model is governed by the model parameters. Interestingly, the macroscopic model alone can exhibit a peak in the alpha frequency band even when the local mesoscopic model is replaced with a simple damping term, implying that the macroscopic alpha rhythm may not arise from local mesoscopic oscillators tuned to the alpha frequency, but may instead emerge from the modulatory effect of long-range network connectivity. In addition, a variety of frequency responses can be observed by varying the model parameters within physiological ranges, making this model suitable for inferring model parameters using MEG wide-band spectra directly, instead of using second-order metrics such as functional connectivity as previously employed in various nonlinear modeling approaches. This will be specifically helpful in capturing differences in frequency spectra observed in different diseases and brain states.
Lastly, we inferred dynamic model parameters using time-frequency decomposition of the source-reconstructed MEG data, outlining a novel model-based approach to directly infer dynamics in functional activity using a parsimonious set of biophysically interpretable model parameters.

### Relationship to Previous Works

Structure-function models can be broadly categorized into communication and control models, reviewed in detail by Srivastava et al. (2020). The dynamical communication models incorporate the biophysics of signal propagation and generation. While such models can be linear as well as nonlinear, many structure-function modeling approaches are based on nonlinear models—such models can exhibit a rich dynamical repertoire in their oscillatory behavior (Cabral et al., 2014, 2017; Siettos & Starke, 2016). Such behaviors are quantified in terms of bifurcations defining solution regimes that are fixed points, limit cycles, quasiperiodic, chaotic, or bistable. These models have been used extensively and applied to differentiate different brain states and allow transitions between them (Kringelbach & Deco, 2020). While SGM cannot exhibit such complex behavior, it can accurately capture the wide-band frequency spectra, in contrast to the other modeling approaches that infer model parameters using second-order statistics such as functional connectivity. It is also to be noted that even if the nonlinear models can exhibit diverse solutions, they may not be completely derived from biophysics; for example, some models are based on using the normal form of the Hopf bifurcation to represent the mesoscopic dynamics (Deco, Kringelbach, Jirsa, & Ritter, 2017; Senden, Reuter, van den Heuvel, Goebel, & Deco, 2017).

On the other hand, the control models are primarily linear time-invariant (LTI) systems. Control models have also been widely used to model state transitions and different neurological conditions with a view to estimating controllability in terms of the energy required to facilitate state transitions (Gu et al., 2015, 2017; Stiso et al., 2019; Tang & Bassett, 2018). Such models are limited in the kinds of solutions they can exhibit—exponential growth, exponential decay, and sinusoidal oscillations. Moreover, these solutions are primarily based on eigenvalues of the structural connectome matrix. While SGM can broadly exhibit the same kinds of solutions that the control models can, it can generate wide-band frequency spectra that accurately match empirical MEG spectra for a range of model parameters. Moreover, SGM is derived from the excitatory and inhibitory neuronal biophysics, unlike the network control models. Thus, SGM can be interpreted as an LTI network control system where the structural connectome is replaced by an eigendecomposition of the complex Laplacian along with the macroscopic excitatory frequency response, and the input control is replaced by the mesoscopic excitatory and inhibitory dynamics.

Numerous lines of evidence point toward the brain being multistable (Kelso, 2012), and several modeling approaches have indicated that brain functional activity exhibits multistability by operating close to a bifurcation point (Cabral et al., 2014; Deco & Jirsa, 2012; Deco et al., 2017; Freyer et al., 2011; Golos, Jirsa, & Daucé, 2016). In such cases, input noise can shift the nonlinear solution regime if it is close to a bifurcation point, which can yield the dynamical repertoire of simulated functional activity (Cabral et al., 2014; Deco & Jirsa, 2012; Ghosh, Rho, McIntosh, Kötter, & Jirsa, 2008; Golos et al., 2016). SGM cannot exhibit multistability in its current form—noise is simply shaped by a linear filter that determines the power spectrum. Instead, we focus on capturing fluctuations in functional activity by inferring SGM model parameters at different time points—an alternative to introducing nonlinearity that keeps parameter inference tractable and ensures estimation of wide-band frequency spectra instead of second-order statistics such as functional connectivity.

A key question then arises: Is SGM sufficient to capture brain macroscopic dynamics without multistability and other complex dynamics? Our modeling approach is focused on capturing the macroscopic spatial and frequency patterns, which can be largely identical across individuals (Freeman & Zhai, 2009; B. J. He, Zempel, Snyder, & Raichle, 2010; Robinson, Rennie, Rowe, O’Connor, & Gordon, 2005). It has been suggested that emergent long-range activity can be independent of microscopic local activity of individual neurons (Abdelnour et al., 2014; Destexhe & Sejnowski, 2009; Mišić et al., 2015; Mišić, Sporns, & McIntosh, 2014; Robinson et al., 2005; Shimizu & Haken, 1983), and that these long-range activities may be regulated by the long-range connectivity (Abdelnour, Raj, Dayan, Devinsky, & Thesen, 2015; Deco, Senden, & Jirsa, 2012; V. K. Jirsa, Jantzen, Fuchs, & Kelso, 2002; Nakagawa et al., 2014). Therefore, to capture such phenomena, it may be sufficient to undertake deterministic modeling approaches such as SGM. Indeed, it was already demonstrated that SGM outperforms a Wilson-Cowan neural mass model in fitting the empirical MEG spectra (Raj et al., 2020). In addition, a recent comparison showed that linear models outperformed nonlinear models in predicting resting-state fMRI time series; this was attributed to the linearizing effects of macroscopic neurodynamics and neuroimaging due to spatial and temporal averaging, observation noise, and high dimensionality (Nozari et al., 2020). These lines of evidence, in addition to our results on the dynamic model parameter estimation, suggest that it can be sufficient to use SGM to capture static as well as temporal fluctuations in the functional activity.

Previous MEG studies have reported MEG fluctuations on the order of seconds (O’Neill et al., 2018; Tewarie et al., 2019b) as well as on the much faster order of 100 ms (Baker et al., 2014). The model parameter variability results presented here were resolved at 5 s due to the chosen window length. Dynamics on faster timescales will require a finer Morlet wavelet time-frequency decomposition. New approaches that can detect non-evenly-spaced state switching (Baker et al., 2014; Jiang et al., 2022) may also be adapted in future iterations of our inference procedure.
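A minimal sketch of a Morlet wavelet time-frequency decomposition of this kind (a generic implementation with assumed settings, not the exact pipeline used here):

```python
import numpy as np

def morlet_power(x, fs, freqs, n_cycles=7.0):
    # Time-frequency power via convolution with complex Morlet wavelets
    power = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)            # temporal width
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
        wavelet = (np.exp(2j * np.pi * f * t)
                   * np.exp(-t ** 2 / (2.0 * sigma_t ** 2)))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
    return power

fs = 200.0                                  # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 10.0 * t)          # 10 Hz test signal
P = morlet_power(x, fs, np.array([4.0, 10.0, 20.0]))
dominant = int(np.argmax(P.mean(axis=1)))   # row with the most energy
```

The number of cycles trades temporal against spectral resolution; a shorter wavelet (fewer cycles) would resolve faster parameter fluctuations at the cost of frequency smearing.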

SGM is closely related to the Jansen-Rit model (Jansen & Rit, 1995; Sanz-Leon, Knock, Spiegler, & Jirsa, 2015). The key difference is that the relationship between the mesoscopic and the macroscopic level is unidirectional in the case of SGM, while the macroscopic level interacts with the mesoscopic level excitatory neurons in the Jansen-Rit model formulation. As a result, the extrinsic connections from all other brain regions are only at the macroscopic level. This can also be interpreted as having a time domain forward model at the macroscopic level, such as the Balloon model for the dynamics of blood flow and oxygenation in the brain (Buxton & Frank, 1997; Buxton, Wong, & Frank, 1998; Mandeville et al., 1999). In addition, the sigmoidal nonlinearities are dropped and replaced by neural gain terms, which is equivalent to linearizing the sigmoidal term in the Jansen-Rit (and similar) model equations. Lastly, we introduce the convolution ensemble terms that could have alternatively been expressed as coupled second-order ordinary differential equations (ODEs). The main goal of SGM was to obtain a closed-form solution in the Fourier domain so it can be used for tractable model parameter inference when fitting to empirical frequency spectra. Hence, it was not necessary to convert convolutions to ODEs; for a linear system the two are equivalent. The convolution/Fourier formulation has the additional benefit of being far more intuitive and easier to interpret by anyone familiar with signal processing. We introduced all these modifications to obtain a parsimonious model with a closed-form solution in the frequency domain, leading to tractable inference of model parameters when fitting to frequency-rich spectra obtained from MEG not just for a specific brain region, but for all the cortical regions together.

### Potential Applications and Future Work

This work can be extended to identify temporal state and stability changes in the functional activity of various brain states and diseases, particularly neuropsychiatric disorders. We will also examine the association between switching in the *α* coupling, which controls the coupling between remote populations, and the dynamics of segregation versus integration. Currently, in our model, macroscopic stability is ensured via an upper bound on the coupling term *α*; in the future, we will explicitly introduce automatic gain control for this purpose, for example as a mathematical correlate of neuromodulation (Shine, Aburn, Breakspear, & Poldrack, 2018).

The presented model-based inference of dynamic functional activity can provide additional insights into the biophysics because of the parsimony and the biophysical interpretability of the SGM parameters. For example, we recently applied the mesoscopic model to estimate regionally varying local model parameters for empirical static MEG spectra collected for healthy and Alzheimer’s disease subjects. Our results showed that the neural gains and the time constants were differentially distributed in the healthy versus the Alzheimer’s disease subjects, indicating an excitatory/inhibitory imbalance in Alzheimer’s disease (Ranasinghe et al., 2022). Lastly, one can also extend this work to find seizure onset regions, based on the stability of the regional model parameters, as has been done earlier by investigating bifurcation points with nonlinear modeling (V. K. Jirsa, Stacey, Quilichini, Ivanov, & Bernard, 2014).

### Limitations

SGM involves various limitations and assumptions that have been described previously. Here we list potential limitations of the current stability results. We employed an inverse Laplace transform approach to generate the model solution in the time domain, instead of solving the differential equations in the time domain directly. While this is an excellent feature enabled by having a closed-form solution in the Laplace domain, a limitation of this approach is that the inverse Laplace transform can lead to numerical instabilities. Hence, we were able to obtain reliable solutions only for a short time range. This is especially true when the solution is close to a limit cycle, exhibiting persistent oscillations, since such systems are harder to invert with numerical inverse Laplace methods (Graf, 2004). However, the time range used in this study (0–0.3 s) is sufficient to demonstrate the nature of the oscillations and whether they blow up or damp down with time.

We obtained the oscillation stability boundary (blue line in Figure 4A) as a function of *τ*_{G} and *α*, keeping all other parameters fixed. This boundary will shift upon varying the parameters *τ*_{e} and *v* as well, which we have not demonstrated here. Moreover, this is only an approximate boundary, since it was based on a numerical root-finding approach, which requires an initial guess close to the actual root. It is currently computationally infeasible to perform a symbolic manipulation of the 86 × 86 matrix in Equation 37 and to either apply the Routh-Hurwitz criteria or find the roots of the determinant of this matrix. With sufficient computational capability in the future, an accurate stability boundary may be obtained via the characteristic polynomial of Equation 37. Parameter estimation for two subjects gave *τ*_{G} ≪ *τ*_{e}, indicating that their inferred macroscopic system is unstable. In future work we will explore constraints to ensure that the estimated parameters lie in the stable regime.

In this study, we only considered 1-min MEG recordings; the quality of inferred dynamics might therefore benefit from longer recordings. However, we chose the most noise-free 1-min snippet out of the entire 5-min recording, ensuring that the fluctuations observed in model parameters are not because of measurement noise.

All the analyses described in this paper are based on source-reconstructed MEG data using a minimum-variance adaptive beamformer. This source reconstruction algorithm assumes the availability of a lead field that embodies the spatial transformations between source dipoles and sensor data, and it uses the sample spatial covariance across sensors to reconstruct source time courses (Sekihara & Nagarajan, 2015). Although adaptive beamformers can suffer signal cancellation for highly correlated sources (*r* > 0.9), for sources without such high correlation many studies have shown that this is a robust reconstruction algorithm for resting-state data (Sekihara, Nagarajan, Poeppel, & Marantz, 2002). Furthermore, in this study we reconstructed sources at a voxel resolution and averaged the reconstructed source time series at a regional spatial resolution in a standardized atlas; in our case, we used the Desikan-Killiany atlas (Desikan et al., 2006). It is important to note that the reconstruction algorithm we used is mainly a spatial reweighting of sensor data to obtain source time series; importantly, it does not impose any temporal distortions on the sensor data. Performing analyses in sensor space with the current SGM is difficult because the connectomes are defined in source space at a regional level—we would need a new SGM model that includes a lead field.

## THEORY AND METHODS

### Notation

All the vectors and matrices are written in boldface and the scalars are written in normal font. The frequency *f* of a signal is specified in Hertz (Hz), and the corresponding angular frequency *ω* = 2*πf* is used to obtain the Fourier transforms. The connectivity matrix is defined as **C** = [*c*_{jk}], where *c*_{jk} is the connectivity strength between regions *j* and *k*, normalized by the row degree.

### Mesoscopic Model

For every region *k* out of the total *N* regions, we model the local excitatory signal *x*_{e}(*t*), the local inhibitory signal *x*_{i}(*t*), as well as the long-range excitatory signal *x*_{k}(*t*) into which the global connections are incorporated. The local signals *x*_{e}(*t*) and *x*_{i}(*t*) are the same for every region *k*. They are modeled using an analytical and linearized form of neural mass equations. We write a set of differential equations for the evolution of *x*_{e}(*t*) and *x*_{i}(*t*), consisting of decay of the individual signals with a fixed neural gain, incoming signals from the coupled excitatory and inhibitory populations, and input white Gaussian noise. Letting *f*_{e}(*t*) and *f*_{i}(*t*) denote the ensemble average neural impulse response functions, *x*_{e}(*t*) and *x*_{i}(*t*) are modeled by Equations 1 and 2, in which *p*(*t*) is the input noise, the parameters *g*_{ee}, *g*_{ii}, and *g*_{ei} are neural gain terms, and the parameters *τ*_{e} and *τ*_{i} are characteristic time constants. These are global parameters and are the same for every region *k*. The ensemble average neural impulse response functions *f*_{e}(*t*) and *f*_{i}(*t*) are assumed to be Gamma-shaped and written as *f*_{e}(*t*) = (*t*/*τ*_{e}²) exp(−*t*/*τ*_{e}) and *f*_{i}(*t*) = (*t*/*τ*_{i}²) exp(−*t*/*τ*_{i}).

Note that in the second term of Equation 1 the excitatory response convolution is applied to the difference of the excitatory population signal and the inhibitory response-convolved inhibitory population signal, and vice versa for Equation 2. The signals *x*_{e}(*t*) and *x*_{i}(*t*) in this model are presynaptic ones. The key idea is that no neural element, whether excitatory or inhibitory, can influence another element’s input (i.e., change its derivative) unless it passes through the neural response impulse function (*f*_{e}(*t*) or *f*_{i}(*t*)). Similarly, the self-decay process of a neural element can only influence itself through the self-impulse response.
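The Gamma-shaped response functions above are consistent with the Laplace-domain expressions *F*_{e}(*s*) = *t*_{e}²/(*s* + *t*_{e})² used in the stability analysis below. A small numerical sketch (our own check, not code from the paper) verifies this by evaluating the Laplace integral of *f*_{e}(*t*) directly:

```python
import numpy as np

# Check that f_e(t) = (t / tau_e^2) exp(-t / tau_e) has Laplace transform
# F_e(s) = t_e^2 / (s + t_e)^2 with t_e = 1/tau_e, by trapezoidal
# quadrature of the defining integral at one sample point s.
tau_e = 0.012
t_e = 1.0 / tau_e
t = np.linspace(0.0, 0.5, 200001)
f = (t / tau_e**2) * np.exp(-t / tau_e)

s = 30.0                                           # sample point on the real axis
g = f * np.exp(-s * t)                             # integrand f(t) e^{-s t}
numeric = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))
exact = t_e**2 / (s + t_e)**2
print(numeric, exact)
```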

### Macroscopic Model

We model the long-range excitatory signal *x*_{k}(*t*) for every *k*^{th} region, accounting for long-range excitatory cortico-cortical connections of the pyramidal cells. The evolution of *x*_{k}(*t*) is assumed to be a sum of decay of the individual signals with a fixed excitatory neural gain, incoming signals from all other connected regions determined by the white matter connections, and the input signal *x*_{e}(*t*) + *x*_{i}(*t*) determined from Equations 1 and 2. Here, *τ*_{G} is the graph characteristic time constant, *α* is the global coupling constant, *c*_{jk} are elements of the connectivity matrix determined from DTI followed by tractography, $\tau_{jk}^{v}$ is the delay in signals reaching from the *j*^{th} to the *k*^{th} region, and *v* is the cortico-cortical fiber conduction speed with which the signals are transmitted. The delay $\tau_{jk}^{v}$ is calculated as *d*_{jk}/*v*, where *d*_{jk} is the distance between regions *j* and *k*.

This set of equations is parameterized by eight global parameters: excitatory time constant *τ*_{e}, inhibitory time constant *τ*_{i}, macroscopic graph time constant *τ*_{G}, excitatory neural gain *g*_{ee}, inhibitory neural gain *g*_{ii}, coupled population neural gain *g*_{ei}, global coupling constant *α*, and conduction speed *v*. The neural gain *g*_{ee} is kept as 1 to ensure parameter identifiability. We estimate the remaining seven global parameters using an optimization procedure described in the next section.

### Model Solution in the Fourier Domain

We obtain a closed-form solution of the model in the Fourier domain at angular frequency *ω* = 2*πf*, where *f* is the frequency in Hz. We take the Fourier transforms of Equations 1 and 2, where 𝓕(*x*_{e}(*t*)) = *X*_{e}(*ω*), 𝓕(*x*_{i}(*t*)) = *X*_{i}(*ω*), and j is the imaginary unit. Here, *P*(*ω*) is the Fourier transform of the input Gaussian noise *p*(*t*), which we assume to be identically distributed for the excitatory and inhibitory local populations of each region, and *F*_{e}(*ω*) and *F*_{i}(*ω*) are the Fourier transforms of the ensemble average neural response functions. Solving these equations for *X*_{e}(*ω*) and *X*_{i}(*ω*), the transfer functions *H*_{e}(*ω*) and *H*_{i}(*ω*) can be separated out, and the total local transfer function is *H*_{local}(*ω*) = *H*_{e}(*ω*) + *H*_{i}(*ω*).

Similarly, taking the Fourier transform of the macroscopic model, with 𝓕(**x**(*t*)) = **X**(*ω*), we define the frequency-dependent connectivity matrix **C**(*ω*) ≡ [*c*_{ij} exp(−j*ω*$\tau_{ij}^{v}$)]. Note that each element of this matrix is already normalized by the row degree. The resulting equation can be rearranged in terms of a complex Laplacian matrix 𝓛(*ω*), where **I** is the identity matrix of size *N* × *N*. The eigendecomposition of this complex Laplacian matrix yields the eigenmodes/eigenvectors **U**(*ω*) and the eigenvalues **Λ**(*ω*) = diag([*λ*_{1}(*ω*), …, *λ*_{N}(*ω*)]) at angular frequency *ω*. The macroscopic response **X**(*ω*) from Equation 14 can then be written as a sum over the eigenmodes **u**_{k}(*ω*) of **U**(*ω*), weighted by terms involving the eigenvalues *λ*_{k}(*ω*) of **Λ**(*ω*) obtained from the eigendecomposition of the complex Laplacian matrix 𝓛(*ω*) in Equation 16. Equation 18 is the closed-form steady-state solution of the macroscopic signals at a specific angular frequency *ω*. We use this modeled spectrum to compare against the empirical MEG spectra and subsequently estimate the model parameters.
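The construction of the frequency-dependent connectivity and its eigendecomposition can be sketched as follows. All inputs here (the toy connectome `C`, distance matrix `D`, and speed `v`) are illustrative assumptions, not the study's data:

```python
import numpy as np

# Toy row-normalized connectome C, toy distance matrix D (meters), and a
# conduction speed v, standing in for the DTI-derived inputs.
rng = np.random.default_rng(1)
N = 8
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
C = A / A.sum(axis=1, keepdims=True)        # row-degree normalization
D = rng.uniform(0.02, 0.15, (N, N))         # inter-region distances
v = 10.0                                    # conduction speed, m/s

def complex_laplacian(omega, alpha):
    # C(omega) = [c_jk * exp(-1j * omega * d_jk / v)]; L(omega) = I - alpha * C(omega)
    return np.eye(N) - alpha * C * np.exp(-1j * omega * D / v)

# Eigendecomposition at 10 Hz: eigenmodes U and eigenvalues lam
lam, U = np.linalg.eig(complex_laplacian(2 * np.pi * 10, 0.8))
print(lam.shape, U.shape)
```

At *ω* = 0 and *α* = 1 the smallest eigenvalue of this Laplacian is zero, which is the boundary exploited in the macroscopic stability analysis below.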

### Model Parameter Estimation

The dataset used for this work is based on the preprocessed publicly available dataset for the SGM work (Xie, Stanley, & Damasceno, 2020), and is also the same as the one we used for the modified SGM (Verma et al., 2022). For this dataset, MEG, anatomical MRI, and diffusion MRI were collected for 36 healthy adult subjects (23 males, 13 females; 26 left-handed, 10 right-handed; mean age 21.75 years, age range 7–51 years). The data collection procedure has been described previously (Raj et al., 2020). All study procedures were approved by the institutional review board at the University of California at San Francisco and were in accordance with the ethics standards of the Helsinki Declaration of 1975 as revised in 2008. MEG recordings were collected for 5 min while the subjects were resting with eyes closed. Out of the 5-min recording, the most noise-free 1-min snippet was chosen. MRI followed by tractography was used to generate the connectivity and distance matrices. The publicly available dataset consisted of processed connectivity and distance matrices, and power spectral density (PSD) for every subject. MEG recordings were downsampled to 600 Hz, followed by band-pass filtering of the signals between 2 and 45 Hz using firls in MATLAB (MATLAB version 9.8.0.1451342 (R2020a) Update 5, 2020) and generation of the static frequency spectra for every region of interest using the pmtm algorithm in MATLAB. For generating the time-frequency decomposition of the MEG time series, a Morlet wavelet algorithm in Python was used with the input parameter w set to 600, and the widths were calculated based on w for every frequency between 2 and 45 Hz.
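A minimal NumPy sketch of such a Morlet wavelet time-frequency decomposition is shown below. The function names and the small demo parameters are our own; the study used w = 600 at fs = 600 Hz, which this kernel convention (matching scipy's `morlet2`) maps to wide, several-second effective windows:

```python
import numpy as np

def morlet_kernel(m, s, w):
    """Complex Morlet wavelet of length m, scale s (samples), and center
    parameter w (same convention as scipy.signal.morlet2)."""
    x = (np.arange(m) - (m - 1) / 2) / s
    return np.exp(1j * w * x) * np.exp(-x**2 / 2) * np.pi**-0.25 / np.sqrt(s)

def time_frequency(signal, fs, freqs, w):
    """Wavelet power over time for each frequency; the scale is chosen so
    the wavelet's center frequency equals f."""
    tf = np.empty((len(freqs), len(signal)))
    for k, f in enumerate(freqs):
        s = w * fs / (2 * np.pi * f)            # scale (width) for frequency f
        m = int(2 * np.ceil(4 * s) + 1)         # kernel covers the Gaussian envelope
        tf[k] = np.abs(np.convolve(signal, morlet_kernel(m, s, w), mode="same"))**2
    return tf

# Demo on a synthetic 10-Hz sine (deliberately small w so the demo is fast)
fs = 200.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)
freqs = np.arange(5.0, 21.0)
tf = time_frequency(sig, fs, freqs, w=6.0)
print(freqs[np.argmax(tf[:, 200:600].mean(axis=1))])   # peak frequency
```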

Modeled spectra were converted into PSD by calculating the norm of the frequency response and converting it to the dB scale by taking 20log_{10}() of the norm. Pearson’s *r* between the modeled PSD and the MEG PSD was used as the goodness-of-fit metric for estimating model parameters. Pearson’s *r* was calculated for each region’s spectra and then averaged over all 68 cortical regions. This average correlation coefficient was the objective function used for estimating the model parameters. We used a dual annealing optimization procedure in Python for performing parameter optimization (Xiang, Sun, Fan, & Gong, 1997).
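The goodness-of-fit computation can be sketched in a few lines (function names are our own):

```python
import numpy as np

def psd_db(freq_response):
    """Convert a complex frequency response to PSD in dB: 20*log10(|X|)."""
    return 20 * np.log10(np.abs(freq_response))

def mean_pearson_r(modeled, empirical):
    """Average Pearson's r across regions.
    modeled, empirical: arrays of shape (n_regions, n_freqs), in dB."""
    rs = [np.corrcoef(m, e)[0, 1] for m, e in zip(modeled, empirical)]
    return float(np.mean(rs))
```

The negative of `mean_pearson_r` would then serve as the minimization objective for the dual annealing optimizer.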

Parameter initial guesses and bounds for estimating the static spectra are specified in Table 1. The bounds for the time constants, speed, and long-range connectivity coupling constant were the same as those used in previous SGM-related works (Raj et al., 2020; Verma et al., 2022). The ranges for the excitatory and inhibitory time constants were based on previous studies (Ferezou et al., 2007; Polack & Contreras, 2012; Roland, Hilgetag, & Deco, 2014). These time constants correspond to the average neural ensemble response functions that capture delays not just due to membrane capacitance, but also due to local circuit delays; thus, they were expected to be as long as 20 ms. Neural responses in the ferret V1 were reported 20 ms after a short visual stimulus (Roland et al., 2014). Moreover, cortical depolarization evoked by a brief deflection of a single-barrel whisker in the mouse was reported to spread to parts of sensorimotor cortex within tens of milliseconds (Ferezou et al., 2007; Polack & Contreras, 2012). A recent cortico-cortical evoked potentials and modeling-based study also reported excitatory/inhibitory synaptic time constants of approximately 5 and 7 ms, respectively (Lemaréchal et al., 2021). The parameter ranges for the neural gain terms were determined such that the gain terms are on the border of stability. This allowed the gain terms to shift between stable and unstable regimes over time.

**Table 1.**

| Name | Symbol | Initial value 1 | Initial value 2 | Initial value 3 | Lower/upper bound for optimization |
|---|---|---|---|---|---|
| Excitatory time constant | τ_{e} | 0.012 s | 0.018 s | 0.006 s | [0.005 s, 0.02 s] |
| Inhibitory time constant | τ_{i} | 0.003 s | 0.01 s | 0.018 s | [0.005 s, 0.02 s] |
| Long-range connectivity coupling constant | α | 1 | 0.5 | 0.1 | [0.1, 1] |
| Transmission speed | v | 5 m/s | 10 m/s | 18 m/s | [5 m/s, 20 m/s] |
| Alternating population gain | g_{ei} | 0.2 | 0.1 | 0.3 | [0.001, 0.8] |
| Inhibitory gain | g_{ii} | 1 | 1.5 | 0.5 | [1, 2.5] |
| Graph time constant | τ_{G} | 0.006 s | 0.01 s | 0.018 s | [0.005 s, 0.02 s] |
| Excitatory gain | g_{ee} | n/a | n/a | n/a | n/a |


Dynamic model parameters were estimated approximately every 5 s by fitting the modeled spectra to the windowed frequency spectra obtained from the time-frequency decomposition. Parameter initial guesses and bounds for estimating the dynamic spectra are specified in Table 2. In this case, the time constants and speed were kept the same as those estimated from the static spectra for every subject. We assume that the time constants and speed are biophysical constraints that do not have dynamics on the timescale of seconds. On the other hand, we assume that gain and coupling are biophysical parameters that capture computations and connections across neural populations that can change dynamically. Therefore, only *α*, *g*_{ei}, and *g*_{ii} were allowed to vary. The dual annealing optimization was performed for three different initial guesses, and the parameter set leading to the maximum Pearson’s *r* (averaged over all 68 cortical regions) was chosen for each subject. The dual annealing setting maxiter = 500 was used; all other settings were kept at their defaults.
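A minimal sketch of such a bounded, per-window fit with SciPy's dual annealing is shown below. The forward model here is a hypothetical linear stand-in (`toy_spectrum`) so the example runs standalone; only the bounds come from Table 2:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy forward model: 3 parameters -> 40 frequency bins, standing in for
# the SGM-modeled spectra of one 5-s window.
rng = np.random.default_rng(0)
basis = rng.random((3, 40))
true_theta = np.array([0.6, 0.4, 1.8])            # "ground truth" for the toy

def toy_spectrum(theta):
    return theta @ basis

empirical = toy_spectrum(true_theta)              # pretend windowed MEG PSD

def objective(theta):
    # least-squares misfit for the toy; the study used negative mean Pearson's r
    return np.sum((toy_spectrum(theta) - empirical)**2)

bounds = [(0.1, 1.0), (0.001, 0.8), (1.0, 2.5)]   # alpha, g_ei, g_ii (Table 2)
result = dual_annealing(objective, bounds, maxiter=100, seed=1)
print(np.round(result.x, 3))
```

With the toy model the optimizer recovers the generating parameters, since the misfit has a unique minimum inside the bounds.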

**Table 2.**

| Name | Symbol | Initial value 1 | Initial value 2 | Initial value 3 | Lower/upper bound for optimization |
|---|---|---|---|---|---|
| Long-range connectivity coupling constant | α | 1 | 0.5 | 0.1 | [0.1, 1] |
| Alternating population gain | g_{ei} | 0.2 | 0.1 | 0.3 | [0.001, 0.8] |
| Inhibitory gain | g_{ii} | 1 | 1.5 | 0.5 | [1, 2.5] |


### Time Domain Simulations

The closed-form model solution above is in the Fourier domain, at angular frequency *ω*. In order to obtain the transient behavior of the model, we performed an inverse Laplace transform of the model solution in the Laplace domain. The closed-form solution in the Laplace domain is the same as that in Equation 18, with j*ω* replaced by the Laplace variable *s*, giving *H*_{local}(*s*) = *H*_{e}(*s*) + *H*_{i}(*s*), where *P*(*s*) is the Laplace transform of the input noise term *p*(*t*).

We performed the inverse Laplace transform of these equations using the numerical inverse Laplace algorithm in Python’s mpmath library, based on the de Hoog, Knight, and Stokes method (De Hoog, Knight, & Stokes, 1982).
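The mpmath call can be sketched on a simple transfer function with a known inverse. Here we use a second-order pole F(s) = 1/(s + a)², whose exact inverse is t·e^{−at}, purely as an illustrative stand-in for the model's Laplace-domain solution:

```python
import mpmath as mp

# Numerically invert a Laplace-domain function with the de Hoog method.
a = mp.mpf(2)
F = lambda s: 1 / (s + a)**2                    # exact inverse: t * exp(-a t)

for t in (0.1, 0.2, 0.3):
    numeric = mp.invertlaplace(F, t, method='dehoog')
    exact = t * mp.e**(-a * t)
    print(t, numeric, exact)
```

As the Limitations section notes, this numerical inversion degrades for near-limit-cycle (persistently oscillating) solutions, which restricted the reliable time range in this study.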

### Stability Analysis of the Mesoscopic Model

We performed a stability analysis of the model and explored the regimes demonstrated in Figures 1 and 2. Firstly, we explored the stability of the mesoscopic model alone. Then, we investigated the stability of the macroscopic model. To perform the stability analysis, we obtained the set of equations in the Laplace domain. Then, we identified the poles of the impulse transfer function, that is, the roots of the denominator of the transfer function. If any of the poles lie to the right of the imaginary axis, the system is unstable.

Here, *P*(*s*) is the Laplace transform of the Gaussian white noise. Equations 20 and 21 can be written in matrix form in terms of **X**_{e,i}(*s*) ≡ [*X*_{e}(*s*), *X*_{i}(*s*)], yielding the characteristic determinant |*s***I** − **A**(*s*)|. Substituting *t*_{e} = 1/*τ*_{e} and *t*_{i} = 1/*τ*_{i}, so that *F*_{e}(*s*) = *t*_{e}²/(*s* + *t*_{e})² and *F*_{i}(*s*) = *t*_{i}²/(*s* + *t*_{i})², and multiplying through by (*s* + *t*_{e})⁴(*s* + *t*_{i})⁴, we obtain a polynomial in *s*.

This is the characteristic polynomial of the mesoscopic model. We numerically find the roots of this polynomial, which are also called the poles. If the real part of any root is positive, the system is unstable. Another way to check stability is the Routh-Hurwitz criterion: any sign change among the elements of the first column of the Routh-Hurwitz array implies that the system is unstable. We used both numerical root finding and the Routh-Hurwitz criterion to check stability. The roots of the polynomial (the poles of the transfer function) are plotted in Figure 3A. As seen in this figure, upon increasing the neural gains, a pair of poles crosses the imaginary axis and their real parts become greater than zero. This loss of stability was also confirmed by calculating the Routh-Hurwitz array.
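The root-based stability check reduces to a few lines. The polynomials below are toy illustrations, not the actual eighth-order mesoscopic characteristic polynomial:

```python
import numpy as np

def is_stable(coeffs):
    """Characteristic polynomial coefficients (highest degree first);
    the system is stable iff every root (pole) has negative real part."""
    roots = np.roots(coeffs)
    return bool(np.all(roots.real < 0))

# s^2 + 3s + 2 = (s + 1)(s + 2): both poles in the left half-plane
print(is_stable([1, 3, 2]))    # True
# s^2 - 1 = (s - 1)(s + 1): a pole at s = +1, unstable
print(is_stable([1, 0, -1]))   # False
```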

### Stability Analysis of the Macroscopic Model

Since the macroscopic model consists of 86 regions as defined by the Desikan-Killiany atlas (Desikan et al., 2006), it is not computationally feasible to obtain the characteristic polynomial of the determinant and subsequently find its roots to investigate stability. Here, we use different approaches to obtain the stability boundary of the macroscopic model, independent of the stability of the mesoscopic model. We investigate two scenarios, (i) *α* = 0 and (ii) *α* > 0, separately below.

#### Stability boundary when *α* = 0.

We first consider *α* = 0. With *α* = 0, the macroscopic model equations decouple, and each region *k* evolves independently; we take the Laplace transform of the decoupled equation. For stability, the coefficients *τ*_{e} and 1/(*τ*_{e}² *τ*_{G}) should be positive and the following condition should hold: 2*τ*_{G} > *τ*_{e}. The stability condition when *α* = 0 is therefore *τ*_{G} > *τ*_{e}/2.

We confirmed this condition by estimating the poles using numerical root finding as well. This provides a boundary for stability when *α* = 0. Next, we use both analytical and numerical approaches to estimate stability boundaries when *α* > 0.
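The numerical confirmation can be sketched as follows, assuming (as in the mesoscopic analysis) the transfer-function form implied by *F*_{e}(*s*) = *t*_{e}²/(*s* + *t*_{e})², which for the decoupled node gives the cubic denominator *s*(*s* + *t*_{e})² + *t*_{e}²/*τ*_{G}:

```python
import numpy as np

def macro_poles(tau_e, tau_G):
    """Poles of a decoupled (alpha = 0) macroscopic node, assuming the
    cubic denominator s (s + t_e)^2 + t_e^2 / tau_G with t_e = 1/tau_e
    (a sketch consistent with F_e(s) = t_e^2 / (s + t_e)^2)."""
    t_e = 1.0 / tau_e
    return np.roots([1.0, 2 * t_e, t_e**2, t_e**2 / tau_G])

# Stable when 2 * tau_G > tau_e, unstable otherwise:
print(np.max(macro_poles(0.012, 0.010).real) < 0)   # tau_G > tau_e / 2
print(np.max(macro_poles(0.012, 0.003).real) < 0)   # tau_G < tau_e / 2
```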

#### Stability boundary when *α* > 0.

When *α* > 0, the Laplace-transformed equation involves the delayed connectivity matrix **C**(*s*) ≡ [*c*_{ij} exp(−$\tau_{ij}^{v}$*s*)], and can be rewritten in terms of a matrix **M**(*s*) whose determinant governs stability. We check two scenarios: (i) *s* = 0 is a root of the determinant of **M**(*s*), and (ii) *s* = j*ω* is a root of the determinant of **M**(*s*) for some value of *ω*. Both *s* = 0 and *s* = j*ω* define the stability boundaries. We also use the intuition that the system will ultimately become unstable for high values of *α*, since *α* controls the signal input to the system. Therefore, we only need to find the upper bound on *α* to determine the stability boundary.

#### Case I) *s* = 0 is a root of *M*(*s*).

We first check when *s* = 0 is a root of the determinant of **M**(*s*). When *s* = 0, the determinant becomes det[**I** − *α***C**], where **C** is the real adjacency matrix normalized by the row degree. For this determinant to be zero, one of the eigenvalues *λ*_{i} of [**I** − *α***C**] should be zero. Since **C** is a normalized adjacency matrix, from graph theory we know that its largest eigenvalue is 1, so the determinant first vanishes at *α* = 1. Therefore, *α* = 1 is the stability boundary, and *α* ≥ 1 will make the system unstable, regardless of the other parameter values.
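This graph-theoretic fact is easy to verify numerically on a toy row-normalized adjacency matrix (an illustrative stand-in for the structural connectome):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy symmetric adjacency with zero diagonal, normalized by row degree
A = rng.random((6, 6)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
C = A / A.sum(axis=1, keepdims=True)

# A row-normalized (row-stochastic) matrix has largest eigenvalue 1,
# so det(I - alpha * C) first reaches zero exactly at alpha = 1.
lam_max = np.max(np.abs(np.linalg.eigvals(C)))
det_at_1 = np.linalg.det(np.eye(6) - 1.0 * C)
print(lam_max, det_at_1)
```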

#### Case II) *s* = j*ω* is a root of *M*(*s*) for some value of *ω*.

In this case, we need to find the values of *τ*_{G} and *α* for which *s* = j*ω* is a root of the determinant of **M**(*s*) for some value of *ω*. We used numerical root finding, giving different initial guesses for *τ*_{G} and *α*. In particular, we used the hybr method in Python’s SciPy root-finding library, with the settings xtol = 1e-12 and maxfev = 10000 (Moré, Garbow, & Hillstrom, 1980). Based on this, we generated a stability regime by varying *τ*_{G} and *α*. Note that all other parameters were fixed while finding these stability boundaries; the boundaries will shift when the parameters *τ*_{e} and *v* are varied. However, the upper bound of *α* = 1 will remain unaffected, since it was found analytically and does not depend on any other parameter value.
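The hybr root-finding call with these settings can be sketched on a toy two-equation system, standing in for the real and imaginary parts of the determinant condition (the system here is illustrative only):

```python
import numpy as np
from scipy.optimize import root

# Toy stand-in for [Re det M(j*omega), Im det M(j*omega)] = 0 as a
# function of two unknowns; the solution is (sqrt(2)/2, sqrt(2)/2).
def equations(z):
    x, y = z
    return [x**2 + y**2 - 1.0,    # e.g., Re det M = 0
            x - y]                # e.g., Im det M = 0

sol = root(equations, x0=[0.5, 0.5], method="hybr",
           options={"xtol": 1e-12, "maxfev": 10000})
print(sol.x)
```

Sweeping the initial guesses (here `x0`) over a grid of *τ*_{G} and *α* values and collecting the converged roots traces out the stability boundary.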

## ACKNOWLEDGMENTS

The template Human Connectome Project (HCP) connectome used in the preparation of this work was obtained from the MGH-USC HCP database (https://ida.loni.usc.edu/login.jsp). The HCP project’s MGH-USC Consortium (Principal Investigators: Bruce R. Rosen, Arthur W. Toga and Van Wedeen; U01MH093765) is supported by the NIH Blueprint Initiative for Neuroscience Research Grant; the National Institutes of Health grant P41EB015896; and the Instrumentation Grants S10RR023043, 1S10RR023401, and 1S10RR019307. Collectively, the HCP is the result of efforts of co-investigators from the University of Southern California, Martinos Center at Massachusetts General Hospital (MGH), Washington University, and the University of Minnesota.

## SUPPORTING INFORMATION

Supporting information for this article is available at https://doi.org/10.1162/netn_a_00263. The code and processed datasets for this work can be found in this GitHub repository: https://github.com/Raj-Lab-UCSF/spectrome-stability (Verma, 2021).

## AUTHOR CONTRIBUTIONS

Parul Verma: Conceptualization; Formal analysis; Investigation; Writing – original draft; Writing – review & editing. Srikantan Nagarajan: Conceptualization; Funding acquisition; Supervision; Writing – review & editing. Ashish Raj: Conceptualization; Funding acquisition; Supervision; Writing – review & editing.

## FUNDING INFORMATION

Ashish Raj: National Institutes of Health awards R01NS092802, R01NS183412, R01AG062196, R01AG072753. Srikantan Nagarajan: National Institutes of Health awards P50DC019900, R56DC019282, R01NS100440, R01DC017091, DoDCDMRP Grant W81XWH1810741, UCOP-MRP-17-454755, and an industry research contract from Ricoh MEG Inc.

## TECHNICAL TERMS

- Structural connectome:
A brain white matter connectivity matrix, where each entry of the matrix corresponds to the weighted connection between two brain regions.

- Spectral graph theory:
Theories focusing on eigendecomposition-based spectral representation of a graph, where a graph represents how different nodes/vertices are connected via different edges/links.

- Laplacian:
A matrix representation of a graph. Each entry of the matrix represents the connectivity between the two nodes/vertices/brain regions.

- Mesoscopic:
Excitatory and inhibitory neural population ensembles for a specific brain region.

- Macroscopic:
Long-range excitatory pyramidal neural population ensemble for a specific brain region.

- Limit cycles:
Oscillatory time series in which the amplitude of the oscillations remains constant.

- Alpha band:
Frequency in the range of 8–12 Hz.

- Impulse input:
An input to a system that is nonzero only at time = 0.

- Poles:
Roots of the denominator of a system’s transfer function (see below for definition). The signs of the real part of these roots determine the stability of the system.

- Transfer function:
Models the response of a linear system when the input to the system is an impulse.


## Author notes

Competing Interests: The authors have declared that no competing interests exist.

Handling Editor: Petra Vertes