Generative models of brain activity have been instrumental in testing hypothesized mechanisms underlying brain dynamics against experimental datasets. Beyond capturing the key mechanisms underlying spontaneous brain dynamics, these models hold exciting potential for understanding the mechanisms underlying the dynamics evoked by targeted brain stimulation techniques. This paper delves into this emerging application, using concepts from dynamical systems theory to argue that the stimulus-evoked dynamics in such experiments may be shaped by new types of mechanisms distinct from those that dominate spontaneous dynamics. We review and discuss (a) the targeted experimental techniques across spatial scales that can both perturb the brain to novel states and resolve its relaxation trajectory back to spontaneous dynamics and (b) how we can understand these dynamics in terms of mechanisms using physiological, phenomenological, and data-driven models. A tight integration of targeted stimulation experiments with generative quantitative modeling provides an important opportunity to uncover novel mechanisms of brain dynamics that are difficult to detect in spontaneous settings.

Generative models are important tools for testing hypothesized mechanisms of brain dynamics against experimental data. This review highlights an application of generative models in analyzing a form of brain activity evoked by emerging targeted stimulation techniques. We argue that analyzing targeted stimulation dynamics can uncover mechanisms that are hidden during commonly analyzed spontaneous dynamics, and we explore how integrating diverse targeted stimulation experiments with existing generative models offers a significant opportunity to uncover these novel mechanisms and thereby expand our mechanistic understanding of brain dynamics.

Statistical analyses of experimental neuroimaging data have long demonstrated that the brain supports a rich repertoire of spatiotemporal dynamics (Biswal, Yetkin, Haughton, & Hyde, 1995; Honey, Kötter, Breakspear, & Sporns, 2007; Lopes da Silva, van Rotterdam, Barts, van Heusden, & Burr, 1976; Vidaurre, Smith, & Woolrich, 2017). To better understand the intrinsic mechanisms that give rise to the brain’s dynamical properties, generative models of brain activity—also known as computational models, dynamic models, or mathematical models—can be used to test hypothesized mechanisms against properties of brain dynamics that they aim to explain or predict. While the term “generative model” can have a range of interpretations across fields, here, we use it as shorthand for “generative model of brain dynamics” to describe a model that can generate (through simulation) a set of time series of brain activity that can be compared with data measured experimentally (Ramezanian-Panahi, Abrevaya, Gagnon-Audet, Voleti, Rish, & Dumas, 2022). Existing generative models allow us to simulate brain activity based on a wide variety of encoded mechanisms across spatial scales (Acharya, Ruf, & Nozari, 2022; Breakspear, 2017; Cabral, Kringelbach, & Deco, 2017; Deco, Jirsa, Robinson, Breakspear, & Friston, 2008; Kim & Bassett, 2020; Ramezanian-Panahi et al., 2022), allowing the hypothesized mechanisms to be directly evaluated against the dynamics of activity accessed in experimental neural recordings.

A crucial advantage that generative models provide is the ability to rigorously test candidate mechanisms against datasets from different experimental settings. Early generative models were used to test mechanisms underlying dynamics of brain activity recorded during cognitive or motor tasks, or in response to sensory stimuli (Brunel & Wang, 2001; David, Kilner, & Friston, 2006; Jansen & Rit, 1995; Jirsa, Friedrich, Haken, & Kelso, 1994; Jirsa & Haken, 1996, 1997; Kerr, Rennie, & Robinson, 2008; Moran et al., 2007; Rennie, Robinson, & Wright, 2002; Zipser, 1991). Rich dynamical properties were later found even in spontaneous brain activity, that is, activity not attributable to any externally applied task or stimulus (Fox & Raichle, 2007). Many of these properties have hypothesized mechanistic explanations that have been tested through generative models (Abdelnour, Dayan, Devinsky, Thesen, & Raj, 2018; Breakspear et al., 2006; Deco, Kringelbach, Jirsa, & Ritter, 2017; Freyer et al., 2011; Honey et al., 2009; Nozari et al., 2024; Robinson, Loxley, O’Connor, & Rennie, 2001; Robinson, Rennie, & Rowe, 2002). As datasets from different experimental settings reveal novel spatiotemporal properties of brain dynamics, generative models present new opportunities to refine and hypothesize new underlying mechanistic accounts that can accurately capture them.

A new opportunity of growing interest to generative modeling is to test our understanding of the dynamics of brain activity evoked by targeted stimulation. Unlike traditionally used sensory stimulation paradigms, targeted stimulation techniques directly target neurons or neural populations using concentrated external inputs of energy. This approach bypasses the constraints imposed by the brain’s natural sensory pathways, offering significantly greater spatial precision in target selection and flexibility in stimulation strength. Commonly used targeted stimulation techniques include optogenetics (Deisseroth, 2015), electrode stimulation (Flesher et al., 2016; Kringelbach, Jenkinson, Owen, & Aziz, 2007), transcranial magnetic stimulation (TMS) (Rogasch & Fitzgerald, 2013), and chemogenetics (Roth, 2016), all of which are applicable in vivo and allow for simultaneous measurement of evoked responses with compatible measurement techniques (Bonnard et al., 2016; Luboeinski & Tchumatchenko, 2020; Markicevic et al., 2020; O’Shea et al., 2022; Sanzeni et al., 2023). This review focuses on the new opportunities that datasets of activity measured from targeted stimulation experiments present in validating and refining our generative models of brain activity and thereby better understanding the important mechanisms that underpin brain dynamics.

Why should one be interested in the mechanisms underlying the brain’s response to targeted stimulation? There are both theoretical and practical motivations. Theoretically, despite the high dimensionality of brain activity recordings, the dynamics of spontaneous brain activity are known to be constrained to a lower-dimensional manifold (Chaudhuri, Gerçek, Pandey, Peyrache, & Fiete, 2019; Churchland et al., 2012; Cunningham & Yu, 2014; Humphries, 2021; Shine, Breakspear, et al., 2019; Shine, Hearne, et al., 2019). This implies the existence of a vast space of unexplored states that the brain does not typically access unless perturbed with a sufficiently strong stimulus. Targeted stimulation techniques enable us to access states that are not only inaccessible during spontaneous activity but also difficult to access during sensory-evoked activity, hence initiating new forms of artificially evoked activity that transcend the brain’s natural activation patterns. Consequently, generative models present an important opportunity to test and calibrate mechanisms on datasets obtained through a unique experimental paradigm, potentially driving the discovery of new mechanisms of brain dynamics distinct from those that sufficiently capture spontaneous dynamics. Practically, there is growing interest in developing clinical neuromodulation technologies using targeted stimulation to address a wide range of neurodegenerative and neurological disorders (Kurtin et al., 2023). While trial-and-error methods for determining effective stimulation parameters have shown promising results for specific patients (Cole et al., 2020; Ramasubbu, Lang, & Kiss, 2018), a wider deployment of neuromodulation necessitates a systematic approach that can more efficiently explore the vast stimulation parameter space and tailor the technology to each patient’s needs (Capogrosso & Lempka, 2020; Wang, Hutchings, & Kaiser, 2015).
An enhanced mechanistic understanding of the brain’s response to the targeted stimulation techniques behind these technologies can therefore accelerate the development of more systematic personalized protocols by guiding the interpretation of individual responses to different stimulation parameter combinations.

In this review, we focus on the theoretically motivated opportunities in understanding the brain’s response to targeted stimulation (for reviews of the practically motivated opportunities mentioned above, see, for example, Acharya et al., 2022; Kurtin et al., 2023; Tang & Bassett, 2018; and Wang et al., 2015). With reference to a theoretical abstracted picture of brain dynamics, we argue for the existence of hidden nonlinear mechanisms of brain dynamics, which are challenging to detect from spontaneous dynamics. We highlight three available tools that we argue are essential to uncover these mechanisms: (a) targeted stimulation techniques to make precise perturbations that can transition brain activity away from spontaneous patterns, (b) measurement techniques to simultaneously record the brain’s response to the perturbation as it reverts to spontaneous patterns, and (c) generative models to test mechanisms that we hypothesize to capture the dynamics of recorded activity evoked by targeted stimulation. Anchored around this argument, we explore in detail the opportunities that integrating these tools—that is, integrating different targeted stimulation and measurement techniques with different physiological, phenomenological, and data-driven models—provides in expanding our understanding of the various mechanisms underlying brain dynamics.

To concretely establish our theoretical motivation for understanding the brain’s response to targeted stimulation, we present a geometric depiction of brain dynamics that abstracts the complexities of brain activity measurements across different spatial scales and species, which is shown in Figure 1. With reference to dynamical systems theory, we use this picture to argue that complex brain dynamics are likely underpinned by mechanisms that are not clearly apparent in the dynamics of commonly analyzed spontaneous activity and that targeted stimulation and measurement techniques are necessary tools to generate brain dynamics that manifest these hidden mechanisms. We also argue that the dynamics evoked by targeted stimulation are likely to be nonlinear and that rigorously testing and validating the hidden mechanisms underlying these dynamics necessitates generative modeling and simulation.

Figure 1.

An abstracted picture of brain dynamics that conceptualizes the dynamics measured from distinct experimental settings, while simplifying the complexities of brain activity measurements across other experimental conditions. (A) Brain activity dynamics can be represented as the evolution of a set of key state variables, such as (i) the membrane potentials of individual neurons in a network or (ii) activation of distributed modes of whole-brain activity. (B) In this abstracted picture, spontaneous brain dynamics can be viewed as stochastic fluctuations about an attractor (a manifold that attracts nearby trajectories). Common attractor classes include fixed points (Sip et al., 2023; van den Heuvel & Hulshoff Pol, 2010), labeled (i), and limit cycles (Shine, Breakspear, et al., 2019; Shine, Hearne, et al., 2019), labeled (ii). (C) In contrast to spontaneous dynamics, stimulus-evoked brain dynamics consists of a perturbation, labeled (i), to a point that is typically far from the attractor, followed by a dynamic relaxation, labeled (ii), back to the attractor. Stimulus-evoked dynamics can exhibit interesting nonlinearities, where the response (red) qualitatively differs with the strength of the perturbation (blue), as depicted at (iii).


In this abstracted picture, the brain’s activity is characterized as the evolution of a brain state over time. Brain states can be encapsulated by a set of time-varying state variables, {x1(t), x2(t), x3(t), …, xN(t)}, and brain dynamics thus correspond to trajectories, x(t) = [x1(t), x2(t), …, xN(t)], through the state space. A specific spatial scale of interest can be addressed through appropriate selection of the state variables, from the activities of individual neurons in a population (Buzsáki, 2004) (depicted in Figure 1A(i)) to the spatially distributed functional gradients or modes that describe patterns of population-scale activity over the whole brain (Huntenburg, Bazin, & Margulies, 2018; Pang et al., 2023) (depicted in Figure 1A(ii)).
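To make this abstraction concrete, the following sketch (a toy illustration only, with an arbitrary coupling matrix that does not model any real circuit) represents a brain state as a vector of state variables and Euler-integrates a two-variable linear system, yielding a trajectory x(t) through state space:

```python
import numpy as np

def simulate_trajectory(A, x0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = A @ x, returning the state trajectory.

    Each row of the returned array is the brain state x(t) at one time
    step; the columns are the state variables x_1, ..., x_N.
    """
    x = np.array(x0, dtype=float)
    trajectory = np.empty((steps + 1, x.size))
    trajectory[0] = x
    for t in range(steps):
        x = x + dt * (A @ x)
        trajectory[t + 1] = x
    return trajectory

# Toy 2-variable system with damped oscillatory dynamics (illustrative only).
A = np.array([[-0.5, -1.0],
              [ 1.0, -0.5]])
traj = simulate_trajectory(A, x0=[1.0, 0.0])
```

Selecting different state variables (single-neuron potentials, or amplitudes of whole-brain modes) changes only the interpretation of the coordinates, not the state-space formalism itself.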

By abstracting brain dynamics as trajectories through a state space, we can conceptualize the dynamics of brain activity measured from qualitatively distinct experimental settings. Consider the commonly examined setting of spontaneous activity, which is measured when the brain is at rest, in the absence of any explicit external stimulus. Measurements indicate that spontaneous brain states are concentrated on a subset of the state space of much lower dimension than the full space and do not deviate significantly from this subset (Chaudhuri et al., 2019; Churchland et al., 2012; Cunningham & Yu, 2014; Shine, Breakspear, et al., 2019; Shine, Hearne, et al., 2019). Consistent with these properties, spontaneous dynamics in our abstracted picture can be conceptualized as trajectories concentrated near attractors, which are low-dimensional subsets of the state space that attract neighboring trajectories (Strogatz, 2018). Additional sensory inputs and background biological processes can also cause trajectories to temporarily deviate from and return to the attractor, effectively yielding stochastic fluctuations about the attractor. Attractors, which define spontaneous dynamics across different experimental conditions, can be further categorized into classes with similar geometric features, such as stable fixed points (Sip et al., 2023; van den Heuvel & Hulshoff Pol, 2010) (depicted in Figure 1B(i)), or flows along a more complex low-dimensional manifold such as a limit cycle (Churchland et al., 2012; Shine, Breakspear, et al., 2019; Shine, Hearne, et al., 2019) (depicted in Figure 1B(ii)). Through classes of attractors, we can concisely capture the intrinsic dynamical properties of spontaneous brain activity in this abstracted picture while aggregating the impact of other experimental conditions.
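The notion of "stochastic fluctuations about an attractor" can be illustrated with a minimal sketch (a toy normal-form model, not fitted to brain data): a Stuart-Landau oscillator, whose deterministic attractor is a limit cycle of radius 1, is simulated with additive noise via the Euler-Maruyama scheme. A trajectory started far from the cycle is drawn onto it and then fluctuates around it:

```python
import numpy as np

def stuart_landau(x0, y0, mu=1.0, omega=2.0, sigma=0.05,
                  dt=0.001, steps=20000, seed=0):
    """Euler-Maruyama simulation of a noisy Stuart-Landau oscillator.

    The deterministic part has a stable limit cycle of radius sqrt(mu);
    additive noise of strength sigma yields stochastic fluctuations
    about that attractor. Returns the radius sqrt(x^2 + y^2) over time.
    """
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    radii = np.empty(steps)
    for t in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y   # radial attraction + rotation
        dy = (mu - r2) * y + omega * x
        x += dt * dx + sigma * np.sqrt(dt) * rng.standard_normal()
        y += dt * dy + sigma * np.sqrt(dt) * rng.standard_normal()
        radii[t] = np.sqrt(x * x + y * y)
    return radii

# Start near the origin, far from the limit cycle (radius 1).
radii = stuart_landau(x0=0.1, y0=0.0)
```

After an initial transient, the radius hovers near 1: the trajectory is confined to a noisy neighborhood of the low-dimensional attractor, mirroring the picture in Figure 1B(ii).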

In addition to the low-dimensional attractor where trajectories of spontaneous dynamics are concentrated, there are a myriad of possible trajectories of brain dynamics through states far from the attractor. Unfortunately, as brain states move further from the attractor, they become less likely to be observed in spontaneous settings. However, our abstracted picture can conceptualize an alternative stimulus-evoked setting that employs external intervention to reach these distant states, as shown in Figure 1C. Initially positioned near the attractor (depicted here as a fixed point), the brain is first perturbed with a sufficiently strong stimulus to a distant state (Figure 1C(i)) and subsequently relaxes back to the attractor following stimulus termination (Figure 1C(ii)). According to nonlinear dynamical systems theory, a central characteristic of stimulus-evoked dynamics is the nonlinear nature of the relaxation trajectory (Strogatz, 2018), whose shape is expected to be highly sensitive to the magnitude of the perturbation, as illustrated in Figure 1C(iii). However, while such nonlinearities can produce a diverse range of trajectories far from the attractor, their effects tend to vanish as the trajectory approaches the attractor governing spontaneous dynamics, implying that spontaneous dynamics can be well approximated by simpler linear models. Generative modeling studies have indeed shown that linear models can capture properties of macroscale spontaneous brain dynamics—such as the established resting-state functional connectivity benchmark—as effectively as nonlinear models (Hosaka, Hieda, Hayashi, Jimura, & Matsui, 2024; Messé, Rudrauf, Giron, & Marrelec, 2015; Nozari et al., 2024). These findings, supported by predictions of nonlinear dynamical systems theory, suggest that the nonlinear mechanisms shaping stimulus-evoked brain dynamics in this dynamical systems picture are difficult to detect in spontaneous settings.
Thus, to uncover and understand these hidden mechanisms of stimulus-evoked dynamics in practice (beyond this abstracted picture), we require an experimental paradigm that allows us to perturb the brain away from its spontaneous attractor (Figure 1C(i)) and resolve the nonlinear relaxation trajectory (Figure 1C(ii)), alongside a model that allows us to test and validate the nonlinear mechanisms that underpin these trajectories.
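The qualitative sensitivity of the relaxation trajectory to perturbation strength can be illustrated with a classic excitable system, the FitzHugh-Nagumo model (the parameter values below are standard textbook choices, not fitted to any dataset): a small perturbation from the fixed point decays passively back to rest, while a slightly larger one triggers a large spike-like excursion before returning to the same attractor—a qualitative difference that no linear model can reproduce:

```python
import numpy as np

def fhn_response(dv, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=12000):
    """Perturb the FitzHugh-Nagumo model from its resting fixed point
    by dv and Euler-integrate the relaxation back to the attractor."""
    v_star, w_star = -1.1994, -0.6243   # resting fixed point for I = 0
    v, w = v_star + dv, w_star          # instantaneous perturbation in v
    vs = np.empty(steps)
    for t in range(steps):
        v += dt * (v - v**3 / 3.0 - w)  # fast voltage-like variable
        w += dt * eps * (v + a - b * w)  # slow recovery variable
        vs[t] = v
    return vs

small = fhn_response(dv=0.3)  # subthreshold: direct decay to rest
large = fhn_response(dv=1.0)  # suprathreshold: large nonlinear excursion
```

Although the two inputs differ only in amplitude, the suprathreshold response overshoots to roughly v ≈ +2 before relaxing, while the subthreshold response never leaves the vicinity of the fixed point—precisely the kind of perturbation-dependent nonlinearity depicted in Figure 1C(iii).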

To this end, we bring attention to a practically feasible strategy that integrates targeted stimulation techniques and measurement techniques with generative models. Targeted stimulation techniques can perturb the brain from the attractor to distant states, with spatial coverage and precision that surpass traditional sensory stimuli (Deisseroth, 2015; Keller et al., 2014; Rogasch & Fitzgerald, 2013; Roth, 2016). High-precision selection of states is crucial, as the nonlinear dynamics of stimulus-evoked activity are highly sensitive to perturbation strength. Compatible measurement techniques play an indispensable role by simultaneously tracking the brain’s relaxation trajectory back to the attractor at sufficient spatial and temporal resolution. Finally, the simulation approach of generative models is essential for linking hypothesized mechanisms to the dynamics observed in measurements, as a mechanistic understanding of nonlinear properties is challenging to pursue from static statistical measures alone (John et al., 2022).

Targeted stimulation techniques, measurement techniques, and generative models are therefore three important tools which, when integrated, can uncover and provide understanding of new mechanisms of brain dynamics that are concealed in spontaneous settings. By conducting experiments with stimulation and measurement techniques, we can generate datasets of stimulus-evoked dynamics with potentially new forms of nonlinearities. By constraining generative models with datasets from these experiments, we can rigorously test and validate hypotheses about the novel mechanisms which drive their nonlinear dynamics. To explore the opportunities that emerge from this integration, we first delve into the different experiments that can be executed.

Through an abstracted picture of brain dynamics, illustrated in Figure 1, we have argued why targeted stimulation experiments play an important role in revealing novel mechanisms of brain dynamics in measured brain activity. This picture serves another purpose: It describes the fundamental dynamical properties of activity in any targeted stimulation experiment while abstracting away details of the specific techniques used. This interpretation bridges a wide body of stimulation and measurement techniques developed in disjoint fields, each with its own mechanism of action, spatial scale, and species compatibility. Here, we review a diverse range of key stimulation and measurement techniques that can be used in a targeted stimulation experiment, contextualizing their features within our unified picture.

In a targeted stimulation experiment, a stimulation technique perturbs the brain to a state distant from the attractor representing spontaneous dynamics (Figure 1C(i)), and a measurement technique captures the brain’s subsequent relaxation trajectory back to the attractor (Figure 1C(ii)). Key targeted stimulation and measurement techniques are illustrated in Figure 2. Based on the spatial scale at which brain states are defined, it is crucial to select an appropriate targeted stimulation technique to elicit a precise perturbation in this state space and an appropriate measurement technique to accurately capture the relaxation.

Figure 2.

Stimulus-evoked brain activity can be generated through a diverse range of targeted stimulation techniques and measured by a range of measurement techniques at different spatial scales. (A) Targeted stimulation techniques can perturb the brain to novel brain states at different degrees of spatial precision. From neuron-scale techniques, such as optogenetics; to mesoscale techniques, such as ICMS and DBS; and to macroscale techniques, such as chemogenetics and TMS. (B) Different measurement techniques can measure the brain’s response to targeted stimulation at different spatial resolutions. From neuron-resolution modalities, such as Neuropixels electrodes, patch clamps, and calcium imaging; to mesoscale modalities, such as iEEG systems; and to macroscale modalities, such as EEG and fMRI.


Targeted Stimulation Techniques

Different targeted stimulation techniques can be selected to elicit perturbations at a spatial precision of choice. Microscale techniques can precisely stimulate individual neurons (Deisseroth, 2015), while meso-to-macroscale techniques can stimulate neural populations ranging from millimeters to centimeters in size (Flesher et al., 2016; Kringelbach et al., 2007; Rogasch & Fitzgerald, 2013). Figure 2A illustrates key targeted stimulation techniques, ordered by degrees of spatial precision. We review these techniques below.

At the microscale, techniques can stimulate a single target neuron or a set of neurons. The most prevalent microscale stimulation technique is optogenetics, in which target neurons are genetically modified to express light-sensitive proteins called opsins and are later exposed to light of a specified wavelength to generate action potentials (Deisseroth, 2015). Two-photon microscopy allows the light beam to be made sufficiently narrow to be directed exclusively to the target neurons (Rickgauer & Tank, 2009; Shemesh et al., 2017; Tong et al., 2023).

At the mesoscale, neural populations can be similarly stimulated using optogenetics, or via electrical methods such as intracortical microstimulation (ICMS) and deep brain stimulation (DBS). A compelling benefit of using optogenetics at this scale is its ability to target specific cell types, cortical layers, and projection targets through viral-vector gene-delivery techniques (Yizhar, Fenno, Davidson, Mogri, & Deisseroth, 2011). The alternative methods, ICMS and DBS, are electrical techniques that stimulate local populations in the vicinity of an implanted electrode using electric fields (Flesher et al., 2016; Kringelbach et al., 2007). While these methods lack the forms of specificity that can be achieved with optogenetics, they can be applied in human brains, as they do not require genetic modification of the target neural tissue. Local populations near the cortical surface can be targeted by ICMS through a chronically implanted multielectrode array (Flesher et al., 2016), whereas populations at varying depths down to subcortical structures can be targeted by DBS through electrodes implanted deep in the brain (Kringelbach et al., 2007).

Finally, macroscale neural populations can be stimulated using TMS, an electromagnetic method, or chemogenetics, a genetic modification technique similar to optogenetics. TMS uses an external electromagnetic coil to generate a magnetic field, which then induces an electric field over the target population (Rogasch & Fitzgerald, 2013; Tremblay et al., 2019). TMS is a noninvasive technique, as the generated magnetic field passes across the scalp and skull; however, the difficulty in downsizing the coil has mainly restricted its use to primate brains (Alekseichuk et al., 2019). In smaller rodent brains, macroscale populations can be stimulated using chemogenetics, in which the target population is injected with engineered proteins called Designer Receptors Exclusively Activated by Designer Drugs (DREADDs) and consequently activated by an orally administered designer drug (Roth, 2016). Similar to optogenetics, chemogenetics benefits from the forms of specificity provided by viral-vector techniques, but it is able to stimulate much larger neural populations, as the designer drug diffuses throughout the brain when orally administered (Sternson & Roth, 2014). Chemogenetic stimulation also has the unique ability to modulate brain activity for sustained periods ranging from hours to weeks (Muir, Lopez, & Bagot, 2019).

Measurement Techniques

Following the perturbation elicited by the selected targeted stimulation technique, we can simultaneously measure the brain’s relaxation trajectory back to the attractor of spontaneous activity. Collections of such measurements form a dataset of stimulus-evoked activity, which, as we have argued, can reveal undiscovered mechanisms of brain dynamics upon further analysis. To capture the relaxation, the selected measurement technique must measure the exhibited brain activity at a matching spatial resolution. Microscale measurement techniques can resolve the responses of individual neurons, while mesoscale and macroscale measurement techniques capture the aggregated activity of neural populations. Figure 2B illustrates key measurement techniques, ordered by degrees of spatial resolution. We review these techniques below.

At a microscale resolution, single-unit recording methods capture the spiking activity of individual neurons. The classic single-unit recording method is whole-cell patch-clamp electrophysiology, which reads the intracellular potential from an electrode in a micropipette tightly suctioned to the target neuron’s membrane (Hill & Stephens, 2021). The spatial coverage of patch clamps is, however, limited to a few neurons. An alternate emerging modality is calcium imaging, which measures changes in intracellular calcium concentration as an indirect proxy for spiking activity (Grienberger & Konnerth, 2012). Modern microscopy techniques can measure the calcium levels of hundreds of neurons, recently capturing activity across all neurons in C. elegans in response to neuron-scale optogenetics (Randi, Sharma, Dvali, & Leifer, 2023). With emerging microelectrode arrays, such as the recently developed Neuropixels probes (Steinmetz et al., 2021), it is now possible to directly measure neuronal spiking with a spatial coverage comparable with that of calcium imaging.

The activity of a mesoscale population is conventionally measured by the local field potential (LFP) of its extracellular medium, which aggregates the spiking of its constituent neurons (Buzsáki, Anastassiou, & Koch, 2012). LFPs can be recorded by intracranial electroencephalographic (iEEG) electrodes penetrating the cortex. Commonly used iEEG electrode designs include subdural strip or grid electrodes, which can monitor LFPs from multiple positions from the cortical surface, or depth electrodes, which can measure at multiple depths at a single surface position (Lachaux, Rudrauf, & Kahane, 2003).

The activity of a macroscale neural population could be measured by simultaneously tracking mesoscale populations with multiple iEEG systems; however, this option can only be implemented in rare surgical scenarios (e.g., the Functional Brain Tractography project (Jedynak et al., 2023)), as implanting many electrodes becomes excessively invasive. Instead, there exist noninvasive functional neuroimaging techniques that make proxy measurements at this spatial scale. Electroencephalography (EEG) can measure electrical potentials from electrodes attached around the scalp at high temporal resolution, making it suitable for measuring transient responses to stimulation such as TMS (Rogasch & Fitzgerald, 2013). Complementing EEG is functional magnetic resonance imaging (fMRI), which measures the blood oxygen level dependent (BOLD) signal as an indirect proxy for underlying neural activity. Despite its limited temporal resolution, fMRI’s high spatial coverage makes it well suited to measuring responses to a macroscale modulatory stimulation method such as chemogenetics (Markicevic et al., 2020, 2023; Zerbi et al., 2019).

In summary, there is a range of sophisticated targeted stimulation techniques that can precisely perturb individual neurons or broad neural populations, and measurement techniques that can simultaneously resolve the subsequent relaxation of neuronal spiking activity or aggregated neural population activity. Thanks to these technological advancements, datasets of stimulus-evoked activity can be generated through targeted stimulation experiments at any spatial scale of interest: micro-, meso-, or macroscale. As argued through an abstracted picture of brain dynamics (Figure 1), stimulus-evoked brain activity may manifest highly nonlinear dynamics, driven by novel mechanisms. Integrating these experiments with generative models allows us to rigorously test hypotheses about these nonlinear mechanisms at any spatial scale. In the next section, we detail how different generative models can be utilized to harness this significant opportunity.

Generative models provide a mechanistic account of the spatiotemporal dynamics of brain activity observed under any experimental setting. Mechanisms are encoded as a set of dynamical rules, such as difference equations or differential equations, which govern the evolution of brain activity over time. By simulating these governing rules, we can generate a spatially distributed time series of brain activity, which can be compared against activity from experimental data. Mechanisms of generative models can also be designed to generate nonlinear time series, allowing us to capture the nonlinear dynamical properties of stimulus-evoked activity. Aside from the ability to generate neuronal dynamics, the hypothesized mechanisms of generative models can also be assessed and validated by integrating the model with a target experimental dataset. This process may involve model selection and model calibration, where the model’s parameters and possible structural characteristics are optimized to explain the dataset as accurately as possible (Ramezanian-Panahi et al., 2022). One can additionally assess the model’s predictions of the target dynamics through an uncertainty quantification phase, in which statistical methods quantify the reliability of a model’s parameter estimates and predictions (Jha, Hashemi, Vattikonda, Wang, & Jirsa, 2022; Penas, Hashemi, Jirsa, & Banga, 2024), and sensitivity analyses evaluate how robust the output dynamics are to small changes in the inferred parameters (Baldy, Breyton, Woodman, Jirsa, & Hashemi, 2024; Hashemi et al., 2024). Through the generation of nonlinear dynamics and the integration with targeted stimulation datasets, generative models are therefore indispensable tools in the endeavor to infer mechanisms of brain dynamics revealed in the activity evoked by targeted stimulation.
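The logic of model calibration can be sketched at toy scale (a deliberately simplified example with an invented decay parameter, far simpler than the inference pipelines cited above): synthetic "measurements" are generated from a one-parameter relaxation model with a known decay rate, and the rate is then recovered by minimizing the squared error between simulated and measured trajectories:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "experimental" relaxation: x(t) = x0 * exp(-a * t) + noise,
# standing in for a measured stimulus-evoked decay back to the attractor.
a_true, x0 = 0.8, 5.0
t = np.linspace(0.0, 4.0, 200)
data = x0 * np.exp(-a_true * t) + 0.01 * rng.standard_normal(t.size)

# Calibration: grid-search the decay rate whose simulated trajectory
# best matches the "measurements" in the least-squares sense.
candidates = np.linspace(0.1, 2.0, 400)
errors = [np.sum((x0 * np.exp(-a * t) - data) ** 2) for a in candidates]
a_hat = candidates[int(np.argmin(errors))]
```

Real calibration pipelines replace the grid search with gradient-based or Bayesian optimizers and add the uncertainty quantification and sensitivity analyses described above, but the core loop—simulate, compare with data, adjust parameters—is the same.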

Generative models encode various types of mechanisms and thereby provide different kinds of accounts of brain dynamics. Figure 3 illustrates a spectrum of key generative models, organized by the type of mechanisms they encode. By choosing and employing an appropriate model, we can test and validate the novel mechanisms that we hypothesize to drive the dynamics evoked by targeted stimulation. Physiological models, for example, use physically measurable quantities to quantitatively encapsulate complex physiological processes in the brain (Breakspear, 2017; Deco et al., 2008). In contrast, phenomenological models use mathematical abstractions to capture the core spatiotemporal properties of brain dynamics hypothesized by the mechanism (Cabral et al., 2017; Kim & Bassett, 2020). Finally, we distinguish data-driven models, which learn statistical associations from sample data to make accurate predictions of brain dynamics (Acharya et al., 2022; Ramezanian-Panahi et al., 2022). Despite their varying mechanisms, all generative models in Figure 3 can simulate both spontaneous and stimulus-evoked activity as abstracted by the dynamical systems picture in Figure 1. The relevant output variables construct a state space in which brain activity follows a trajectory of brain states (Figure 1A). Introducing stochastic noise as an input variable concentrates the trajectory around an attractor, reflecting the model’s depiction of spontaneous dynamics at the corresponding spatial scale (Figure 1B). By contrast, setting the input variable to a deterministic spatiotemporal profile aligned with the experimental stimulus perturbs the brain state away from the attractor before drawing it back via the relaxation trajectory (Figure 1C).
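
This abstracted picture can be made concrete with a toy model. The sketch below uses an illustrative two-dimensional linear system (not any specific model from the literature; all values are hypothetical) to show how a stochastic input concentrates the trajectory near a fixed-point attractor, while a brief deterministic pulse pushes the state far from the attractor before it relaxes back:

```python
import numpy as np

# Illustrative two-dimensional linear system dx/dt = A x + u(t); the origin
# is a stable fixed-point attractor. All values here are hypothetical.
rng = np.random.default_rng(0)
A = np.array([[-1.0, 0.5],
              [-0.5, -1.0]])  # eigenvalues have negative real part (stable)
dt, n_steps = 0.01, 5000

def simulate(input_fn, noise_sd=0.0):
    """Euler-Maruyama integration of dx = (A x + u) dt + noise."""
    x = np.zeros(2)
    traj = np.empty((n_steps, 2))
    for t in range(n_steps):
        u = input_fn(t * dt)
        x = x + dt * (A @ x + u) + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        traj[t] = x
    return traj

# Spontaneous dynamics: stochastic input keeps the trajectory near the attractor
spont = simulate(lambda t: np.zeros(2), noise_sd=0.2)

# Stimulus-evoked dynamics: a brief deterministic pulse perturbs the state
# far from the attractor, after which it relaxes back
pulse = lambda t: np.array([5.0, 0.0]) if 10.0 <= t < 10.5 else np.zeros(2)
evoked = simulate(pulse)

peak_dist = float(np.abs(evoked).max())         # far-from-attractor excursion
final_dist = float(np.linalg.norm(evoked[-1]))  # relaxed back near the attractor
```

The same qualitative picture holds for the nonlinear models discussed below; the linear system is simply the smallest example exhibiting both regimes.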

Figure 3.

Generative models come in a variety of flavors, and can be positioned along a spectrum between three extremes, depending on the type of mechanistic account of brain dynamics they provide. (A) Physiological models—such as biological neural networks, neural mass models, and neural field models—model physiological processes of brain activity with physically measurable biological quantities (Breakspear, 2017; Deco et al., 2008). (B) Phenomenological models—such as linear network models, coupled oscillator models, and conductance-based models—model abstracted mechanisms that can capture essential dynamical properties of brain activity with minimal physiological detail (Cabral et al., 2017; Kim & Bassett, 2020). (C) Data-driven models—such as models constructed through system identification or artificial neural networks—learn statistical associations of brain dynamics from sample data to make accurate predictions of brain dynamics (Acharya et al., 2022; Ramezanian-Panahi et al., 2022).


In the subsections below, we review how physiological, phenomenological, and data-driven models are constructed and how the specific types of mechanisms they encode can contribute to our understanding of the dynamics evoked by targeted stimulation. For each model category, we provide examples of modeling studies which have tested mechanisms underlying brain dynamics through an integration with datasets of stimulus-evoked activity, with a primary focus on emerging datasets of targeted stimulation. We also highlight how findings of some studies support our core argument presented through the abstracted picture in Figure 1: that stimulus-evoked dynamics could be driven by novel mechanisms of brain dynamics which play a more peripheral role in shaping spontaneous dynamics.

Physiological Models

In physiological models, variables and parameters represent physical quantities, such as cell-body potentials or synaptic coupling strengths, and interact in accordance with experimentally supported physiological processes (Breakspear, 2017; Deco et al., 2008; Ma & Tang, 2017). Physically measurable variables and parameters make it easier to interpret the functional significance of individual variables and the effects of changes in static parameters on the resultant dynamics. Crucially, the physiological formulation allows predictions of the model to be tested not only against the model’s ability to capture the predicted brain dynamics but also against physical measurements from physiological experiments. Physiological models can be used to study the dynamics of responses to a wide range of targeted stimulation techniques, with input variables quantifying each technique’s specific mechanics of action. For example, a physiological model could be formulated to directly compare responses to optogenetics, which directly depolarizes neuron membranes, with responses to electrode stimulation, which feeds a current into the extracellular medium. We can construct physiological models to test mechanisms of physiological processes at different spatial scales, ranging from the microscale to the whole-brain scale.

To capture microscale brain dynamics, we can use biological neural networks, which simulate the activity of interconnected neurons (Ma & Tang, 2017). Separate single-neuron models, such as the Hodgkin–Huxley model (Hodgkin & Huxley, 1952), are coupled to capture neuron–neuron interactions, reflecting underlying synaptic processes. By assigning separate models to each neuron, biological neural networks can simulate dynamics with realistic heterogeneity among neurons. Specific parameters can be assigned to heterogeneous classes of neurons, such as excitatory and inhibitory neurons (Arkhipov et al., 2018; Billeh et al., 2020; Gast, Solla, & Kennedy, 2024; Landau, Egger, Dercksen, Oberlaender, & Sompolinsky, 2016; Stefanescu & Jirsa, 2008), or classes of intricate morphological features (Aberra, Peterchev, & Grill, 2018; Aberra, Wang, Grill, & Peterchev, 2020). Ongoing projects to create expansive classification databases such as the Blue Brain Project (Markram, 2006) and the Allen Cell Types Database (Gouwens et al., 2018), alongside technological advances in computing power, suggest that future biological neural networks could simulate brain dynamics at neuronal resolution while spanning large spatial scales.
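
As an illustration of the basic construction, the sketch below couples a handful of leaky integrate-and-fire neurons, a simpler stand-in for the Hodgkin–Huxley dynamics discussed above; the network size, synaptic weights, and drive are arbitrary choices, not values from any of the cited models:

```python
import numpy as np

# Toy network of 10 leaky integrate-and-fire (LIF) neurons with random
# excitatory synapses. LIF units are a simpler stand-in for Hodgkin-Huxley
# dynamics; all parameter values are arbitrary illustrations.
rng = np.random.default_rng(1)
n_neurons = 10
dt, n_steps = 0.1, 2000                      # 0.1 ms steps, 200 ms total
tau, v_thresh, v_reset = 10.0, 1.0, 0.0      # membrane time constant (ms)
i_ext = 0.15                                 # constant external drive
W = 0.05 * rng.random((n_neurons, n_neurons))
np.fill_diagonal(W, 0.0)                     # no self-connections

v = np.zeros(n_neurons)
spikes = np.zeros((n_steps, n_neurons), dtype=bool)
for t in range(n_steps):
    # Synaptic input: weighted sum of spikes from the previous time step
    syn = W @ spikes[t - 1].astype(float) if t > 0 else np.zeros(n_neurons)
    v = v + dt * (-v + tau * (i_ext + syn)) / tau
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset                       # reset neurons that spiked

total_spikes = int(spikes.sum())
```

Replacing the LIF update with a full conductance-based single-neuron model, and the random weights with reconstructed connectivity, yields the large-scale biological neural networks described above.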

Zooming out from the microscale, mean-field population models instead track the mean activity of all neurons in a population, without tracking each constituent neuron (Breakspear, 2017; Deco et al., 2008). Crucially, this mean-field approach to model construction requires assumptions on the connectivities between populations in space. For example, there are neural field models (Jirsa & Haken, 1996, 1997), which assume isotropic connectivity between populations to allow for efficient computation of population-scale activity across continuous space (Deco et al., 2008; Robinson, 2005; Robinson et al., 2002; Robinson, Rennie, & Wright, 1997). Another class of mean-field models across populations are neural mass models or brain-network models (Breakspear, 2017; Pathak, Roy, & Banerjee, 2022), which coarse-grain neural populations into discrete spatial regions, and then couple these regions to one another in a network, using measured structural connectome data (Chaudhuri, Knoblauch, Gariel, Kennedy, & Wang, 2015; Deco, Jirsa, McIntosh, Sporns, & Kötter, 2009; Deco et al., 2021). Notwithstanding the limitations in accuracy and physical realism due to mean-field assumptions (Robinson, 2019), the significant reduction in the number of variables allows mean-field population models to be computed much more efficiently than the equivalent hypothetical biological neural network of the same spatial scale.
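
A minimal example of the mean-field approach is a Wilson–Cowan-style neural mass with one excitatory and one inhibitory population; the coupling weights, time constants, and drive below are illustrative, not fitted to data:

```python
import numpy as np

# Wilson-Cowan-style neural mass: mean firing rates of one excitatory (E)
# and one inhibitory (I) population. All parameter values are illustrative.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 8.0, 3.0   # population coupling weights
tau_e, tau_i = 10.0, 20.0                       # time constants (ms)
p_e = 1.5                                       # external drive to E
dt, n_steps = 0.1, 5000

E, I = 0.1, 0.1
traj = np.empty((n_steps, 2))
for t in range(n_steps):
    # Each population relaxes toward a sigmoidal function of its net input
    dE = (-E + sigmoid(w_ee * E - w_ei * I + p_e)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
    E, I = E + dt * dE, I + dt * dI
    traj[t] = E, I
```

Coupling many such masses through a measured structural connectome yields the brain-network models described above, at a small fraction of the cost of simulating every constituent neuron.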

By designing stimulation inputs with unique spatiotemporal profiles, physiological models integrated with targeted stimulation datasets can help investigate the mechanisms behind varying responses to different stimulation forms. For instance, a biological neural network of the rat somatosensory cortex revealed how morphological features like axon length shape neuron activation under ICMS and TMS across a range of stimulation parameters (Aberra et al., 2018). Another biological neural network, of the mouse and monkey somatosensory cortex, investigated recurrent mechanisms underpinning the difference in responses between optogenetic and visual sensory stimulation (Sanzeni et al., 2023). Additionally, physiological models can account for nonlinearities observed in experimental responses to targeted stimulation. For example, a biological neural network of the macaque motor cortex demonstrated potential nonlinear mechanisms underpinning the distortive effects of ICMS on the dynamics of ongoing task-related activity (O’Shea et al., 2022). At the macroscale, a neural field model with nonlinear synaptic mechanisms studied the calcium-dependent mechanisms underlying the nonlinear effects of TMS-induced plasticity (Fung & Robinson, 2013). A similar neural field model demonstrated significant differences in local synaptic strength parameters when fitted to spontaneous EEG versus evoked potentials (Kerr et al., 2008). This discrepancy was reconciled by a dynamic synaptic strength governed by a nonlinear gain modulation mechanism (Babaie-Janvier & Robinson, 2020).

Phenomenological Models

While physiological models offer realistic, biophysically constrained explanations of how brain activity stems from independently measurable physiological processes, it is also useful to avoid excessive physiological detail that inflates a model’s dimensionality and hampers human interpretability (Robinson, 2022). An alternative approach, taken by phenomenological models, is to replace a full physiological description with simpler canonical dynamical systems, which preserve the dynamics hypothesized to be exhibited by the underlying complex physiological processes (Cabral et al., 2017; Kim & Bassett, 2020). This abstraction is supported by the observation that many physiologically realistic models of stable brain dynamics exhibit canonical dynamical structures such as fixed points and limit cycles (Freyer et al., 2011; Ramezanian-Panahi et al., 2022; Siu, Müller, Zerbi, Aquino, & Fulcher, 2022). While the variables and parameters of phenomenological models are not physiologically interpretable (and thus not experimentally measurable), they reduce the model’s complexity and simplify the systems-level interpretation of the resulting simulated dynamics. Phenomenological models are thus valuable for investigating the core spatially distributed properties of the dynamics of networks of neurons or brain regions in response to targeted stimulation, without comprehensive detail of the local physiological mechanisms that underpin each neuron’s or brain region’s dynamics.

Phenomenological models vary in the sophistication of dynamics exhibited by their canonical structures. A simple example is a linear network model, which uses a linear system of equations to model the dynamics of a network of nodes representing neurons or neural populations (Kim & Bassett, 2020; Tang & Bassett, 2018). Each node’s dynamics are driven by a linear combination of activities from coupled nodes, and any external perturbation such as targeted brain stimulation. Linear models are a means to capture correlations between the activity of coupled neurons or neural populations with the simplest form of node-to-node coupling interactions (Abdelnour, Voss, & Raj, 2014; Raj, Kuceyeski, & Weiner, 2012). They may also allow for closed-form expressions for network responses to inputs, mathematically linking the system dynamics directly to its underlying coupling structure (Gu et al., 2017; Parkes et al., 2022).
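
To illustrate the closed-form property, the sketch below compares the analytical impulse response of a small linear network model with a forward numerical simulation; the 3-node chain coupling is a hypothetical example, not a measured connectome:

```python
import numpy as np

# Minimal linear network model dx/dt = A x, where A combines a structural
# coupling matrix with a leak term. Because the system is linear, the
# response to an impulse at node 0 has the closed form x(t) = exp(A t) x0,
# linking evoked dynamics directly to coupling structure.
C = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])      # structural coupling (a simple chain)
A = 0.5 * C - np.eye(3)              # coupled dynamics with leak (stable)

x0 = np.array([1.0, 0.0, 0.0])       # impulse stimulus delivered to node 0
t_final = 1.0

# Closed-form solution via eigendecomposition (A is symmetric here)
evals, V = np.linalg.eigh(A)
x_closed = V @ (np.exp(evals * t_final) * (V.T @ x0))

# The same trajectory by forward (Euler) numerical integration
n_euler = 10000
dt = t_final / n_euler
x = x0.copy()
for _ in range(n_euler):
    x = x + dt * (A @ x)
```

The spread of activity from node 0 to its neighbors is read off directly from the matrix exponential, which is the kind of structure-to-response link exploited in the network control studies cited above.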

Generalizing the linear setting, there are nonlinear network models governed by nonlinear systems of equations, which can capture more sophisticated forms of local dynamics. A simple example is the linear threshold model (Mišić et al., 2015), which extends linear models by applying a threshold to the linear combination of inputs. More sophisticated coupled oscillator models aim to capture oscillatory properties of brain dynamics, as observed in recordings of brain activity across spatial scales (Breakspear, Heitmann, & Daffertshofer, 2010; Deco, Kringelbach, et al., 2017). Commonly used oscillators include the Kuramoto oscillator, which oscillates with constant amplitude (Cabral, Hugues, Sporns, & Deco, 2011; Gollo, Roberts, & Cocchi, 2017), and the Stuart–Landau (Hopf) oscillator, which can also change in amplitude (Deco et al., 2018; Deco, Cabral, et al., 2017; Deco, Kringelbach, et al., 2017). A more physiologically inspired nonlinear model is the conductance-based model, which represents population-level activity through a simplified biological neural network (Breakspear, Terry, & Friston, 2003; Honey et al., 2009; Larter, Speelman, & Worth, 1999). Here, the aggregate activity of each population is tracked by the dynamics of a single neuron model, which assumes sufficiently strong coherence between the constituent neurons of each population (Breakspear, 2017). Despite this strong physiological assumption, conductance-based models are capable of exhibiting population-scale spatiotemporal dynamics with interesting properties that are commonly observed in biological neural networks.
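
As a concrete example of the simplest coupled oscillator model, the sketch below simulates a globally coupled Kuramoto network and computes its phase-synchrony order parameter; the frequencies, coupling strength, and duration are all illustrative values:

```python
import numpy as np

# Globally coupled Kuramoto model: n phase oscillators with constant
# amplitude. All parameter values are illustrative.
rng = np.random.default_rng(2)
n = 20
omega = rng.normal(10.0, 0.1, n)   # natural frequencies (rad/s)
K = 5.0                            # global coupling strength
dt, n_steps = 0.001, 20000         # 20 s of simulated time

theta = rng.uniform(0.0, 2.0 * np.pi, n)
for _ in range(n_steps):
    # d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * coupling)

# Order parameter r in [0, 1]: 1 means fully phase-synchronized
r = float(np.abs(np.exp(1j * theta).mean()))
```

With coupling well above the synchronization threshold, the oscillators phase-lock; replacing the all-to-all coupling with a structural connectome yields the network oscillator models discussed in the studies cited above.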

In comparison with physiological models, the abstractions incorporated in the local dynamics of phenomenological models limit their ability to distinguish between the dynamics evoked by different targeted stimulation techniques. However, phenomenological models can be used to more directly investigate how the brain’s underlying structure shapes the spatially distributed properties of its stimulus-evoked dynamics under an arbitrary stimulus input, as they abstract away the effects of complex local dynamics. For example, a coupled oscillator model integrated with mouse optogenetic stimulation data demonstrated how heterogeneous long-range structural connections allow stimulus-evoked activity to reveal a rich repertoire of novel dynamically responsive functional networks, extending the established repertoire of resting-state networks observed in spontaneous dynamics (Spiegler, Abadchi, Mohajerani, & Jirsa, 2016; Spiegler, Hansen, Bernard, McIntosh, & Jirsa, 2020). Phenomenological models can also investigate the extent to which deviations in spatiotemporal dynamics between stimulus-evoked and spontaneous activity are influenced by the stimulus position. For example, a coupled oscillator model demonstrated how TMS-induced changes in functional connectivity varied between stimulating highly connected hub regions and stimulating weakly connected peripheral regions (Gollo et al., 2017). Another coupled oscillator model studied how the influence of stimulus position varied between different global brain states, such as wakefulness and deep sleep (Deco et al., 2018).

Data-Driven Models

The governing rules of phenomenological and physiological models summarized above reflect qualitatively interpretable mechanisms underlying brain dynamics, such as the internal physiological processes of neurons in biological neural networks or the oscillatory behaviors of population-scale neural recordings in a whole-brain oscillator model. In contrast, data-driven models encode statistical mechanisms that describe the quantitative structure of the governing rules without a qualitative interpretation (Acharya et al., 2022; Ramezanian-Panahi et al., 2022). Through the use of statistical techniques, data-driven models can explore a wide range of possible governing rules reflecting the encoded statistical mechanisms, enabling highly accurate predictions of brain dynamics that are unconstrained by predefined physiological or phenomenological mechanisms. This flexibility is particularly advantageous when studying responses to targeted stimulation, where underlying qualitative mechanisms may be difficult to hypothesize in advance. At the same time, choosing rule structures requires careful judgment: Models need enough coefficients to predict complex dynamics, but not so many that underconstrained coefficients undermine the model’s ability to generalize.

The range of different possible statistical properties of brain activity that a data-driven model can capture varies with the range of governing rules that it can explore. For example, there are data-driven models that use the process of system identification to infer, from the dataset, a set of mathematical equations that govern the dynamics of brain activity (Acharya et al., 2022). While system identification algorithms are limited to exploring and discovering equation-based governing rules, these equations enable the interpretation of the learned statistical properties of individual time series and of pairwise dependencies between time series. Commonly used system identification procedures include linear time series models, which learn simple linear equation structures (Ljung, 1998), and procedures that can estimate nonlinear equation structures (Billings, 2013; Brunton, Proctor, & Kutz, 2016; Williams, Kevrekidis, & Rowley, 2015). Comparing different system identification procedures therefore provides an opportunity to detect nonlinearities in stimulus-evoked dynamics, by contrasting the capacities of linear and nonlinear equations in predicting responses (Acharya, Davis, & Nozari, 2024; Chang et al., 2012; Yang et al., 2021).
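
The basic logic of equation-based system identification can be sketched on synthetic data: simulate a known linear system, then recover its governing matrix by ordinary least squares. This is a minimal illustration of the idea, not a procedure from the cited studies:

```python
import numpy as np

# Generate data from a known stable linear system x[t+1] = A x[t] + noise,
# then recover A by ordinary least squares. The system is synthetic and
# its parameters are arbitrary.
rng = np.random.default_rng(3)
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.8]])
n_steps = 5000
X = np.zeros((n_steps, 2))
for t in range(n_steps - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(2)

# Least-squares fit of x[t+1] ~ A_hat x[t]
X_past, X_next = X[:-1], X[1:]
M, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
A_hat = M.T   # lstsq solves X_past @ M = X_next, so A_hat = M.T
```

Comparing the predictive error of this linear fit against that of a nonlinear equation structure, on the same held-out responses, is the kind of model comparison used to detect nonlinearities in stimulus-evoked dynamics.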

There are also more flexible data-driven models, such as artificial neural networks (not to be confused with biological neural networks), which can be used to solve the more general problem of learning a predictive mapping between input and output variables from a dataset of observations (Acharya et al., 2022). The majority of artificial neural network architectures, including the simplest feedforward architecture, learn nonlinear mappings between variables through a sequence of layers of units, comprising an input layer, an output layer, and hidden layers (Ljung, Andersson, Tiels, & Schön, 2020). Each unit receives inputs from units of the previous layer, passes their weighted combination through an activation function, and sends the result to units of the next layer. While less interpretable than the systems of equations discovered by system identification methods, artificial neural networks are more flexible in replicating a wider range of governing rule structures, including those with high orders of nonlinearity that would otherwise be computationally expensive to capture as a set of equations (Hornik, Stinchcombe, & White, 1989; Vyas, Golub, Sussillo, & Shenoy, 2020). This flexibility potentially enables artificial neural networks to create future personalized models that make accurate predictions of individual responses to targeted stimulation, adapting to important individual-specific structural and functional characteristics that are difficult to encode as physiological or abstracted mechanisms (Misra et al., 2021).
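
A minimal feedforward network illustrates the layered construction described above: a single tanh hidden layer trained by gradient descent on a toy nonlinear input-output mapping (here y = sin(x), standing in for a stimulus-response mapping; the architecture and training settings are arbitrary choices):

```python
import numpy as np

# One-hidden-layer feedforward network trained by full-batch gradient
# descent on a toy nonlinear mapping y = sin(x). All settings are
# illustrative, not tuned for any real dataset.
rng = np.random.default_rng(4)
x = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(x)

W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)   # input -> hidden
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)    # hidden -> output
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden layer with tanh activation
    return h, h @ W2 + b2         # linear output layer

_, y0 = forward(x)
mse_before = float(np.mean((y0 - y) ** 2))

for _ in range(3000):
    h, y_hat = forward(x)
    err = y_hat - y
    # Backpropagate the mean-squared-error gradient through both layers
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y_final = forward(x)
mse_after = float(np.mean((y_final - y) ** 2))
```

Each unit here does exactly what the text describes: it weights its inputs, applies an activation function, and passes the result forward; deeper or recurrent architectures extend the same pattern.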

A systematic comparison of data-driven models can be used to test whether hypothesized statistical mechanisms are necessary for capturing responses to targeted stimulation. For example, a comparative study of responses to DBS found significant differences in fitted linear model parameters between spontaneous and stimulus-evoked dynamics, thus suggesting the use of a switched linear model: a simple nonlinear model that switches between different parameters with and without stimulation (Acharya et al., 2024). Switched linear models and less interpretable feedforward neural networks were also found to have similar predictive power, demonstrating that switched linear models sufficiently explained responses to DBS without significantly trading off predictive capability (Acharya et al., 2024). Another avenue of systematic comparison involves an architecturally distinct family of recurrent neural networks, which use recurrent connections to accumulate memory of an indefinite number of past inputs and observed states (Ljung et al., 2020). While studies of recurrent neural networks have mostly focused on modeling dynamics during task stimuli (Mante, Sussillo, Shenoy, & Newsome, 2013; Vyas et al., 2020), they may also be used to investigate how dynamic working-memory mechanisms impact variations in time-dependent responses to sequences of targeted or sensory stimulation. For example, a comparison of recurrent neural networks with memoryless neural network architectures demonstrated the necessity of internal memory mechanisms in predicting responses to sequences of varying visual stimuli (Güçlü & van Gerven, 2017).
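
The switched linear idea can be sketched on synthetic data: simulate a system whose linear dynamics differ with and without stimulation, then fit each regime's matrix separately by least squares. This is an illustrative reconstruction of the modeling logic, not the procedure of Acharya et al. (2024):

```python
import numpy as np

# Switched-linear sketch: x[t+1] = A_on x[t] or A_off x[t] depending on a
# stimulation indicator, with each regime fitted separately by least
# squares. Data and matrices are synthetic.
rng = np.random.default_rng(5)
A_off = np.array([[0.9, 0.0], [0.0, 0.9]])
A_on = np.array([[0.5, 0.3], [-0.3, 0.5]])   # dynamics change under stimulation

n_steps = 4000
stim = (np.arange(n_steps) % 200) < 50       # periodic stimulation blocks
X = np.zeros((n_steps, 2))
X[0] = [1.0, 0.0]
for t in range(n_steps - 1):
    A = A_on if stim[t] else A_off
    X[t + 1] = A @ X[t] + 0.05 * rng.standard_normal(2)

def fit_regime(mask):
    """Least-squares fit of x[t+1] = A x[t] restricted to one regime."""
    idx = np.flatnonzero(mask[:-1])
    M, *_ = np.linalg.lstsq(X[idx], X[idx + 1], rcond=None)
    return M.T

A_on_hat = fit_regime(stim)
A_off_hat = fit_regime(~stim)
```

Recovering clearly different matrices for the two regimes is the statistical signature that a single linear model cannot capture the stimulus-evoked dynamics.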

In summary, in this section we have explored the variety of generative models, along with their different aims, advantages, and disadvantages for modeling dynamics evoked by targeted stimulation. Choosing an appropriate model is crucial for investigating different types of questions about this form of dynamics: Testing mechanisms related to the physiology of neurons would benefit from a biological neural network; exploring the role of the connectome in the spatially distributed response to TMS could benefit from a phenomenological linear network model; and a project to accurately predict the firing rates of single neurons in response to optogenetic stimulation could benefit from system identification or artificial neural network models. Integrated with datasets of activity from targeted stimulation experiments, generative models complete our toolkit for uncovering the mechanisms underlying the complex, high-dimensional dynamics measured from this unique experimental paradigm.

Emerging targeted stimulation techniques such as optogenetics and electrode stimulation, combined with measurement techniques such as calcium imaging and EEG, now allow us to measure the intricate spatiotemporal responses of neurons and neural populations to a range of perturbations with unprecedented precision (Deisseroth, 2015; Keller et al., 2014; Rogasch & Fitzgerald, 2013; Roth, 2016). Using an abstracted picture of brain dynamics, and drawing on results from dynamical systems theory, we presented an argument for why the dynamics from novel states accessed in these targeted stimulation experiments may be driven by new types of mechanisms, distinct from those underpinning the simpler low-dimensional dynamics of spontaneous activity. The mechanisms that govern dynamics evoked by targeted stimulation are also functionally significant, as they reflect how the brain recruits diverse neural populations while processing complex streams of sensory inputs. Fortunately, through an integration of targeted stimulation and measurement techniques with different generative models, we can uncover, analyze, and better understand these mechanisms at a chosen spatial scale. Targeted stimulation techniques allow us to precisely perturb the brain to a range of states distant from the attractor manifold on which spontaneous dynamics unfold. Measurement techniques can simultaneously capture the brain’s trajectory as it relaxes back to the attractor, generating datasets of activity that could reveal the novel (and likely highly nonlinear) mechanisms that govern the relaxation’s dynamics. Finally, by integrating datasets of activity evoked by targeted stimulation with generative models, it is possible to rigorously test and validate the mechanisms that we hypothesize to drive these dynamics.
Different generative modeling approaches are suited to testing different types of hypothesized mechanisms, from different physiological processes that can be incorporated in physiological models (Breakspear, 2017; Deco et al., 2008) to the abstracted dynamical mechanisms treated in phenomenological models (Cabral et al., 2017; Kim & Bassett, 2020) and the statistical mechanisms empirically learned using data-driven models (Ljung, 1998; Ljung et al., 2020). Our exploration touches on some of the many opportunities to uncover mechanistic insights into complex brain dynamics, ultimately guiding the practical implementation of emerging neuromodulation technologies in both research and clinical settings.

While our dynamical systems framework provides a foundation for interpreting the unique characteristics of brain dynamics evoked by targeted stimulation compared with spontaneous brain activity, there are alternative ways to conceptualize these dynamics. One such approach involves a separation of timescales (Kuehn, 2015). Some mechanisms may act too quickly to significantly influence the trajectories of fluctuations near the spontaneous activity attractor but become crucial in capturing the far-from-attractor dynamics evoked by targeted stimulation. Brain activity might therefore be modeled as a multitimescale system, with mechanisms operating over various timescales in response to targeted stimulation (Babaie-Janvier & Robinson, 2020; Chaudhuri et al., 2015). Targeted stimulation can also trigger new forms of nonequilibrium dynamics if multiple attractors exist in the state space, where specific perturbations can cause the relaxation trajectory to migrate to a different attractor. Each attractor may represent a unique form of spontaneous brain activity, such as the states of deep sleep versus wakefulness (Deco et al., 2019) or sequential discrete thoughts during cognition (Spivey & Dale, 2006). Generative models of targeted stimulation can therefore inform the development of neuromodulation treatments to restore healthy brain function, by designing optimal perturbations that can force brain states to transition from “diseased” to “healthy” attractor dynamics (Kringelbach & Deco, 2020; Muldoon et al., 2016; Perl et al., 2021).

As a novel experimental paradigm, targeted stimulation experiments generate datasets that allow us not only to test newly hypothesized mechanisms of brain dynamics but also to refine existing mechanisms that have been validated mostly on datasets of spontaneous activity. Integrating existing models with new datasets can potentially contribute toward resolving long-standing issues of degeneracy in computational neuroscience, where generative models founded on differing mechanisms can lead to similar dynamics of spontaneous activity (Bernard, 2023). It is possible that the relaxation dynamics measured in targeted stimulation experiments, given their potential nonlinearities, can be used to arbitrate between different generative quantitative models of brain dynamics, beyond the relatively simple statistical properties of spontaneous activity that such differing models are known to adequately capture, such as their pairwise linear correlation structure (as “functional connectivity”) (Messé et al., 2015; Nozari et al., 2024).

Looking at and beyond the targeted stimulation setting, we suggest that more synergistic interactions between experimentalists and theoreticians can accelerate progress in our mechanistic understanding of brain dynamics. As experimentalists forge more sophisticated paradigms that each generate unique forms of brain dynamics, theoreticians can build more robust generative models that can explain or predict the dynamics of brain activity from a wider range of experimental settings. Conversely, refined generative models motivate the development of further experimental designs tailored to test and validate new theoretical accounts and predictions. This reciprocal relationship between experimental precision and theoretical rigor can uncover novel hypotheses and validate theoretical predictions, fostering a deeper and more integrative comprehension of the intricate mechanisms underlying brain function.

The authors thank Nigel Rogasch for feedback and suggestions for the manuscript. R.M. would like to thank the Australian Government Research Training Program (RTP) Scholarship for financial support. B.D.F. would like to thank the Selby Scientific Foundation for financial support.

Rishikesan Maran: Conceptualization; Visualization; Writing – original draft; Writing – review & editing. Eli J. Müller: Writing – review & editing. Ben D. Fulcher: Conceptualization; Supervision; Writing – review & editing.

Targeted stimulation experiment:

An experiment that generates stimulus-evoked dynamics through targeted stimulation and measurement techniques.

Generative model of brain activity:

A quantitative model that generates brain dynamics as a time series, which can be compared with experimental data.

Spontaneous dynamics:

Brain dynamics in the absence of any external task or stimulus, which can be conceptualized as noisy deviations about an attractor.

Physiological model:

A generative model of brain activity that models physiological processes in the brain using physically measurable variables and parameters.

Phenomenological model:

A generative model of brain activity that captures core spatiotemporal properties of brain dynamics with abstracted dynamical mechanisms.

Data-driven model:

A generative model of brain activity that learns statistical associations of brain dynamics from experimental data.

Attractor:

A low-dimensional subset of the state space to which nearby trajectories are attracted.

Stimulus-evoked dynamics:

Brain dynamics resulting from an external stimulus, which can be conceptualized as a perturbation away from an attractor followed by a relaxation back to the attractor.

Abdelnour, F., Dayan, M., Devinsky, O., Thesen, T., & Raj, A. (2018). Functional brain connectivity is predictable from anatomic network’s Laplacian eigen-structure. NeuroImage, 172, 728–739.
Abdelnour, F., Voss, H. U., & Raj, A. (2014). Network diffusion accurately models the relationship between structural and functional brain connectivity networks. NeuroImage, 90, 335–347.
Aberra, A. S., Peterchev, A. V., & Grill, W. M. (2018). Biophysically realistic neuron models for simulation of cortical stimulation. Journal of Neural Engineering, 15(6), 066023.
Aberra, A. S., Wang, B., Grill, W. M., & Peterchev, A. V. (2020). Simulation of transcranial magnetic stimulation in head model with morphologically-realistic cortical neurons. Brain Stimulation, 13(1), 175–189.
Acharya, G., Davis, K. A., & Nozari, E. (2024). Predictive modeling of evoked intracranial EEG response to medial temporal lobe stimulation in patients with epilepsy. Communications Biology, 7(1), 1210.
Acharya, G., Ruf, S. F., & Nozari, E. (2022). Brain modeling for control: A review. Frontiers in Control Engineering, 3.
Alekseichuk, I., Mantell, K., Shirinpour, S., & Opitz, A. (2019). Comparative modeling of transcranial magnetic and electric stimulation in mouse, monkey, and human. NeuroImage, 194, 136–148.
Arkhipov, A., Gouwens, N. W., Billeh, Y. N., Gratiy, S., Iyer, R., Wei, Z., … Koch, C. (2018). Visual physiology of the layer 4 cortical circuit in silico. PLOS Computational Biology, 14(11), e1006535.
Babaie-Janvier, T., & Robinson, P. A. (2020). Neural field theory of evoked response potentials with attentional gain dynamics. Frontiers in Human Neuroscience, 14, 293.
Baldy, N., Breyton, M., Woodman, M. M., Jirsa, V. K., & Hashemi, M. (2024). Inference on the macroscopic dynamics of spiking neurons. Neural Computation, 36(10), 2030–2072.
Bernard, C. (2023). Brain’s best kept secret: Degeneracy. eNeuro, 10(11), ENEURO.0430-23.2023.
Billeh, Y. N., Cai, B., Gratiy, S. L., Dai, K., Iyer, R., Gouwens, N. W., … Arkhipov, A. (2020). Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron, 106(3), 388–403.
Billings, S. A. (2013). Nonlinear system identification: NARMAX methods in the time, frequency, and spatio-temporal domains. John Wiley & Sons.
Biswal, B., Yetkin, F. Z., Haughton, V. M., & Hyde, J. S. (1995). Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine, 34(4), 537–541.
Bonnard, M., Chen, S., Gaychet, J., Carrere, M., Woodman, M., Giusiano, B., & Jirsa, V. (2016). Resting state brain dynamics and its transients: A combined TMS-EEG study. Scientific Reports, 6, 31220.
Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3), 340–352.
Breakspear, M., Heitmann, S., & Daffertshofer, A. (2010). Generative models of cortical oscillations: Neurobiological implications of the Kuramoto model. Frontiers in Human Neuroscience, 4, 190.
Breakspear, M., Roberts, J. A., Terry, J. R., Rodrigues, S., Mahant, N., & Robinson, P. A. (2006). A unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis. Cerebral Cortex, 16(9), 1296–1313.
Breakspear, M., Terry, J. R., & Friston, K. J. (2003). Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a nonlinear model of neuronal dynamics. Neurocomputing, 52–54, 151–158.
Brunel, N., & Wang, X. J. (2001). Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. Journal of Computational Neuroscience, 11(1), 63–85.
Brunton, S. L., Proctor, J. L., & Kutz, J. N. (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15), 3932–3937.
Buzsáki, G. (2004). Large-scale recording of neuronal ensembles. Nature Neuroscience, 7(5), 446–451.
Buzsáki, G., Anastassiou, C. A., & Koch, C. (2012). The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience, 13(6), 407–420.
Cabral, J., Hugues, E., Sporns, O., & Deco, G. (2011). Role of local network oscillations in resting-state functional connectivity. NeuroImage, 57(1), 130–139.
Cabral, J., Kringelbach, M. L., & Deco, G. (2017). Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. NeuroImage, 160, 84–96.
Capogrosso, M., & Lempka, S. F. (2020). A computational outlook on neurostimulation. Bioelectronic Medicine, 6, 10.
Chang, J.-Y., Pigorini, A., Massimini, M., Tononi, G., Nobili, L., & Van Veen, B. D. (2012). Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain. Frontiers in Human Neuroscience, 6, 317.
Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., & Fiete, I. (2019). The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience, 22(9), 1512–1520.
Chaudhuri, R., Knoblauch, K., Gariel, M.-A., Kennedy, H., & Wang, X.-J. (2015). A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. Neuron, 88(2), 419–431.
Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487(7405), 51–56.
Cole, E. J., Stimpson, K. H., Bentzley, B. S., Gulser, M., Cherian, K., Tischler, C., … Williams, N. R. (2020). Stanford accelerated intelligent neuromodulation therapy for treatment-resistant depression. American Journal of Psychiatry, 177(8), 716–726.
Cunningham, J. P., & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11), 1500–1509.
David, O., Kilner, J. M., & Friston, K. J. (2006). Mechanisms of evoked and induced responses in MEG/EEG. NeuroImage, 31(4), 1580–1591.
Deco, G., Cabral, J., Saenger, V. M., Boly, M., Tagliazucchi, E., Laufs, H., … Kringelbach, M. L. (2018). Perturbation of whole-brain dynamics in silico reveals mechanistic differences between brain states. NeuroImage, 169, 46–56.
Deco, G., Cabral, J., Woolrich, M. W., Stevner, A. B. A., van Hartevelt, T. J., & Kringelbach, M. L. (2017). Single or multiple frequency generators in on-going brain activity: A mechanistic whole-brain model of empirical MEG data. NeuroImage, 152, 538–550.
Deco, G., Cruzat, J., Cabral, J., Tagliazucchi, E., Laufs, H., Logothetis, N. K., & Kringelbach, M. L. (2019). Awakening: Predicting external stimulation to force transitions between different brain states. Proceedings of the National Academy of Sciences, 116(36), 18088–18097.
Deco, G., Jirsa, V., McIntosh, A. R., Sporns, O., & Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences, 106(25), 10302–10307.
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., & Friston, K. (2008). The dynamic brain: From spiking neurons to neural masses and cortical fields. PLOS Computational Biology, 4(8), e1000092.
Deco, G., Kringelbach, M. L., Arnatkeviciute, A., Oldham, S., Sabaroedin, K., Rogasch, N. C., … Fornito, A. (2021). Dynamical consequences of regional heterogeneity in the brain’s transcriptional landscape. Science Advances, 7(29), eabf4752.
Deco, G., Kringelbach, M. L., Jirsa, V. K., & Ritter, P. (2017). The dynamics of resting fluctuations in the brain: Metastability and its dynamical cortical core. Scientific Reports, 7(1), 3095.
Deisseroth, K. (2015). Optogenetics: 10 years of microbial opsins in neuroscience. Nature Neuroscience, 18(9), 1213–1225.
Flesher, S. N., Collinger, J. L., Foldes, S. T., Weiss, J. M., Downey, J. E., Tyler-Kabara, E. C., … Gaunt, R. A. (2016). Intracortical microstimulation of human somatosensory cortex. Science Translational Medicine, 8(361), 361ra141.
Fox, M. D., & Raichle, M. E. (2007). Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nature Reviews Neuroscience, 8(9), 700–711.
Freyer, F., Roberts, J. A., Becker, R., Robinson, P. A., Ritter, P., & Breakspear, M. (2011). Biophysical mechanisms of multistability in resting-state cortical rhythms. Journal of Neuroscience, 31(17), 6353–6361.
Fung, P. K., & Robinson, P. A. (2013). Neural field theory of calcium dependent plasticity with applications to transcranial magnetic stimulation. Journal of Theoretical Biology, 324, 72–83.
Gast, R., Solla, S. A., & Kennedy, A. (2024). Neural heterogeneity controls computations in spiking neural networks. Proceedings of the National Academy of Sciences, 121(3), e2311885121.
Gollo, L. L., Roberts, J. A., & Cocchi, L. (2017). Mapping how local perturbations influence systems-level brain dynamics. NeuroImage, 160, 97–112.
Gouwens, N. W., Berg, J., Feng, D., Sorensen, S. A., Zeng, H., Hawrylycz, M. J., … Arkhipov, A. (2018). Systematic generation of biophysically detailed models for diverse cortical neuron types. Nature Communications, 9(1), 710.
Grienberger, C., & Konnerth, A. (2012). Imaging calcium in neurons. Neuron, 73(5), 862–885.
Gu, S., Betzel, R. F., Mattar, M. G., Cieslak, M., Delio, P. R., Grafton, S. T., … Bassett, D. S. (2017). Optimal trajectories of brain state transitions. NeuroImage, 148, 305–317.
Güçlü, U., & van Gerven, M. A. J. (2017). Modeling the dynamics of human brain activity with recurrent neural networks. Frontiers in Computational Neuroscience, 11, 7.
Hashemi, M., Ziaeemehr, A., Woodman, M. M., Fousek, J., Petkoski, S., & Jirsa, V. K. (2024). Simulation-based inference on virtual brain models of disorders. Machine Learning: Science and Technology, 5, 035019.
Hill, C. L., & Stephens, G. J. (2021). An introduction to patch clamp recording. In M. Dallas & D. Bell (Eds.), Patch clamp electrophysiology: Methods and protocols (pp. 1–19). New York, NY: Springer US.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500–544.
Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences, 104(24), 10240–10245.
Honey, C. J., Sporns, O., Cammoun, L., Gigandet, X., Thiran, J. P., Meuli, R., & Hagmann, P. (2009). Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences, 106(6), 2035–2040.
Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366.
Hosaka, Y., Hieda, T., Hayashi, K., Jimura, K., & Matsui, T. (2024). Linear models replicate the energy landscape and dynamics of resting-state brain activity. bioRxiv.
Humphries, M. D. (2021). Strong and weak principles of neural dimension reduction. Neurons, Behavior, Data Analysis, and Theory, 5(2), 1–28.
Huntenburg, J. M., Bazin, P.-L., & Margulies, D. S. (2018). Large-scale gradients in human cortical organization. Trends in Cognitive Sciences, 22(1), 21–31.
Jansen, B. H., & Rit, V. G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics, 73(4), 357–366.
Jedynak, M., Boyer, A., Chanteloup-Forêt, B., Bhattacharjee, M., Saubat, C., Tadel, F., … F-TRACT Consortium. (2023). Variability of single pulse electrical stimulation responses recorded with intracranial electroencephalography in epileptic patients. Brain Topography, 36(1), 119–127.
Jha, J., Hashemi, M., Vattikonda, A. N., Wang, H., & Jirsa, V. (2022). Fully Bayesian estimation of virtual brain parameters with self-tuning Hamiltonian Monte Carlo. Machine Learning: Science and Technology, 3, 035016.
Jirsa, V. K., Friedrich, R., Haken, H., & Kelso, J. A. S. (1994). A theoretical model of phase transitions in the human brain. Biological Cybernetics, 71(1), 27–35.
Jirsa, V. K., & Haken, H. (1996). Field theory of electromagnetic brain activity. Physical Review Letters, 77(5), 960–963.
Jirsa, V. K., & Haken, H. (1997). A derivation of a macroscopic field theory of the brain from the quasi-microscopic neural dynamics. Physica D: Nonlinear Phenomena, 99(4), 503–526.
John, Y. J., Sawyer, K. S., Srinivasan, K., Müller, E. J., Munn, B. R., & Shine, J. M. (2022). It’s about time: Linking dynamical systems with human neuroimaging to understand the brain. Network Neuroscience, 6(4), 960–979.
Keller, C. J., Honey, C. J., Entz, L., Bickel, S., Groppe, D. M., Toth, E., … Mehta, A. D. (2014). Corticocortical evoked potentials reveal projectors and integrators in human brain networks. Journal of Neuroscience, 34(27), 9152–9163.
Kerr, C. C., Rennie, C. J., & Robinson, P. A. (2008). Physiology-based modeling of cortical auditory evoked potentials. Biological Cybernetics, 98(2), 171–184.
Kim, J. Z., & Bassett, D. S. (2020). Linear dynamics and control of brain networks. In B. He (Ed.), Neural engineering (pp. 497–518). Cham: Springer International Publishing.
Kringelbach, M. L., & Deco, G. (2020). Brain states and transitions: Insights from computational neuroscience. Cell Reports, 32(10), 108128.
Kringelbach, M. L., Jenkinson, N., Owen, S. L. F., & Aziz, T. Z. (2007). Translational principles of deep brain stimulation. Nature Reviews Neuroscience, 8(8), 623–635.
Kuehn, C. (2015). Introduction. In C. Kuehn (Ed.), Multiple time scale dynamics (pp. 1–17). Cham: Springer International Publishing.
Kurtin, D. L., Giunchiglia, V., Vohryzek, J., Cabral, J., Skeldon, A. C., & Violante, I. R. (2023). Moving from phenomenological to predictive modelling: Progress and pitfalls of modelling brain stimulation in-silico. NeuroImage, 272, 120042.
Lachaux, J. P., Rudrauf, D., & Kahane, P. (2003). Intracranial EEG and human brain mapping. Journal of Physiology-Paris, 97(4–6), 613–628.
Landau, I. D., Egger, R., Dercksen, V. J., Oberlaender, M., & Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron, 92(5), 1106–1121.
Larter, R., Speelman, B., & Worth, R. M. (1999). A coupled ordinary differential equation lattice model for the simulation of epileptic seizures. Chaos, 9(3), 795–804.
Ljung, L. (1998). System identification. In A. Procházka, J. Uhlíř, P. W. J. Rayner, & N. G. Kingsbury (Eds.), Signal analysis and prediction (pp. 163–173). Boston, MA: Birkhäuser.
Ljung, L., Andersson, C., Tiels, K., & Schön, T. B. (2020). Deep learning and system identification. IFAC-PapersOnLine, 53(2), 1175–1181.
Lopes da Silva, F. H., van Rotterdam, A., Barts, P., van Heusden, E., & Burr, W. (1976). Models of neuronal populations: The basic mechanisms of rhythmicity. Progress in Brain Research, 45, 281–308.
Luboeinski, J., & Tchumatchenko, T. (2020). Nonlinear response characteristics of neural networks and single neurons undergoing optogenetic excitation. Network Neuroscience, 4(3), 852–870.
Ma, J., & Tang, J. (2017). A review for dynamics in neuron and neuronal network. Nonlinear Dynamics, 89, 1569–1578.
Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78–84.
Markicevic, M., Fulcher, B. D., Lewis, C., Helmchen, F., Rudin, M., Zerbi, V., & Wenderoth, N. (2020). Cortical excitation: Inhibition imbalance causes abnormal brain network dynamics as observed in neurodevelopmental disorders. Cerebral Cortex, 30(9), 4922–4937.
Markicevic, M., Sturman, O., Bohacek, J., Rudin, M., Zerbi, V., Fulcher, B. D., & Wenderoth, N. (2023). Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions. eLife, 12, e78620.
Markram, H. (2006). The blue brain project. Nature Reviews Neuroscience, 7(2), 153–160.
Messé, A., Rudrauf, D., Giron, A., & Marrelec, G. (2015). Predicting functional connectivity from structural connectivity via computational models using MRI: An extensive comparison study. NeuroImage, 111, 65–75.
Mišić, B., Betzel, R. F., Nematzadeh, A., Goñi, J., Griffa, A., Hagmann, P., … Sporns, O. (2015). Cooperative and competitive spreading dynamics on the human connectome. Neuron, 86(6), 1518–1529.
Misra, J., Surampudi, S. G., Venkatesh, M., Limbachia, C., Jaja, J., & Pessoa, L. (2021). Learning brain dynamics for decoding and predicting individual differences. PLOS Computational Biology, 17(9), e1008943.
Moran, R. J., Kiebel, S. J., Stephan, K. E., Reilly, R. B., Daunizeau, J., & Friston, K. J. (2007). A neural mass model of spectral responses in electrophysiology. NeuroImage, 37(3), 706–720.
Muir, J., Lopez, J., & Bagot, R. C. (2019). Wiring the depressed brain: Optogenetic and chemogenetic circuit interrogation in animal models of depression. Neuropsychopharmacology, 44(6), 1013–1026.
Muldoon, S. F., Pasqualetti, F., Gu, S., Cieslak, M., Grafton, S. T., Vettel, J. M., & Bassett, D. S. (2016). Stimulation-based control of dynamic brain networks. PLOS Computational Biology, 12(9), e1005076.
Nozari, E., Bertolero, M. A., Stiso, J., Caciagli, L., Cornblath, E. J., He, X., … Bassett, D. S. (2024). Macroscopic resting-state brain dynamics are best described by linear models. Nature Biomedical Engineering, 8(1), 68–84.
O’Shea, D. J., Duncker, L., Goo, W., Sun, X., Vyas, S., Trautmann, E. M., … Shenoy, K. V. (2022). Direct neural perturbations reveal a dynamical mechanism for robust computation. bioRxiv.
Pang, J. C., Aquino, K. M., Oldehinkel, M., Robinson, P. A., Fulcher, B. D., Breakspear, M., & Fornito, A. (2023). Geometric constraints on human brain function. Nature, 618(7965), 566–574.
Parkes, L., Kim, J. Z., Stiso, J., Calkins, M. E., Cieslak, M., Gur, R. E., … Bassett, D. S. (2022). Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome. Science Advances, 8(50), eadd2185.
Pathak, A., Roy, D., & Banerjee, A. (2022). Whole-brain network models: From physics to bedside. Frontiers in Computational Neuroscience, 16, 866517.
Penas, D. R., Hashemi, M., Jirsa, V. K., & Banga, J. R. (2024). Parameter estimation in a whole-brain network model of epilepsy: Comparison of parallel global optimization solvers. PLOS Computational Biology, 20(7), e1011642.
Perl, Y. S., Pallavicini, C., Ipiña, I. P., Demertzi, A., Bonhomme, V., Martial, C., … Tagliazucchi, E. (2021). Perturbations in dynamical models of whole-brain activity dissociate between the level and stability of consciousness. PLOS Computational Biology, 17(7), e1009139.
Raj, A., Kuceyeski, A., & Weiner, M. (2012). A network diffusion model of disease progression in dementia. Neuron, 73(6), 1204–1215.
Ramasubbu, R., Lang, S., & Kiss, Z. H. T. (2018). Dosing of electrical parameters in deep brain stimulation (DBS) for intractable depression: A review of clinical studies. Frontiers in Psychiatry, 9, 302.
Ramezanian-Panahi, M., Abrevaya, G., Gagnon-Audet, J.-C., Voleti, V., Rish, I., & Dumas, G. (2022). Generative models of brain dynamics. Frontiers in Artificial Intelligence, 5, 807406.
Randi, F., Sharma, A. K., Dvali, S., & Leifer, A. M. (2023). Neural signal propagation atlas of Caenorhabditis elegans. Nature, 623(7986), 406–414.
Rennie, C. J., Robinson, P. A., & Wright, J. J. (2002). Unified neurophysical model of EEG spectra and evoked potentials. Biological Cybernetics, 86(6), 457–471.
Rickgauer, J. P., & Tank, D. W. (2009). Two-photon excitation of channelrhodopsin-2 at saturation. Proceedings of the National Academy of Sciences, 106(35), 15025–15030.
Robinson, P. A. (2005). Propagator theory of brain dynamics. Physical Review E, 72(1), 011904.
Robinson, P. A. (2019). Physical brain connectomics. Physical Review E, 99(1), 012421.
Robinson, P. A. (2022). Ten rules for effective modeling. NeuroImage, 263, 119622.
Robinson, P. A., Loxley, P. N., O’Connor, S. C., & Rennie, C. J. (2001). Modal analysis of corticothalamic dynamics, electroencephalographic spectra, and evoked potentials. Physical Review E, 63(4), 041909.
Robinson, P. A., Rennie, C. J., & Rowe, D. L. (2002). Dynamics of large-scale brain activity in normal arousal states and epileptic seizures. Physical Review E, 65(4), 041924.
Robinson, P. A., Rennie, C. J., & Wright, J. J. (1997). Propagation and stability of waves of electrical activity in the cerebral cortex. Physical Review E, 56(1), 826.
Rogasch, N. C., & Fitzgerald, P. B. (2013). Assessing cortical network properties using TMS–EEG. Human Brain Mapping, 34(7), 1652–1669.
Roth, B. L. (2016). DREADDs for neuroscientists. Neuron, 89(4), 683–694.
Sanzeni, A., Palmigiano, A., Nguyen, T. H., Luo, J., Nassi, J. J., Reynolds, J. H., … Brunel, N. (2023). Mechanisms underlying reshuffling of visual responses by optogenetic stimulation in mice and monkeys. Neuron, 111(24), 4102–4115.
Shemesh, O. A., Tanese, D., Zampini, V., Linghu, C., Piatkevich, K., Ronzitti, E., … Emiliani, V. (2017). Temporally precise single-cell-resolution optogenetics. Nature Neuroscience, 20(12), 1796–1806.
Shine, J. M., Breakspear, M., Bell, P. T., Ehgoetz Martens, K. A., Shine, R., Koyejo, O., … Poldrack, R. A. (2019). Human cognition involves the dynamic integration of neural activity and neuromodulatory systems. Nature Neuroscience, 22(2), 289–296.
Shine, J. M., Hearne, L. J., Breakspear, M., Hwang, K., Müller, E. J., Sporns, O., … Cocchi, L. (2019). The low-dimensional neural architecture of cognitive complexity is related to activity in medial thalamic nuclei. Neuron, 104(5), 849–855.
Sip, V., Hashemi, M., Dickscheid, T., Amunts, K., Petkoski, S., & Jirsa, V. (2023). Characterization of regional differences in resting-state fMRI with a data-driven network model of brain dynamics. Science Advances, 9(11), eabq7547.
Siu, P. H., Müller, E., Zerbi, V., Aquino, K., & Fulcher, B. D. (2022). Extracting dynamical understanding from neural-mass models of mouse cortex. Frontiers in Computational Neuroscience, 16, 847336.
Spiegler, A., Abadchi, J. K., Mohajerani, M., & Jirsa, V. K. (2020). In silico exploration of mouse brain dynamics by focal stimulation reflects the organization of functional networks and sensory processing. Network Neuroscience, 4(3), 807–851.
Spiegler, A., Hansen, E. C. A., Bernard, C., McIntosh, A. R., & Jirsa, V. K. (2016). Selective activation of resting-state networks following focal stimulation in a connectome-based network model of the human brain. eNeuro, 3(5), ENEURO.0068-16.2016.
Spivey, M. J., & Dale, R. (2006). Continuous dynamics in real-time cognition. Current Directions in Psychological Science, 15(5), 207–211.
Stefanescu, R. A., & Jirsa, V. K. (2008). A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLOS Computational Biology, 4(11), e1000219.
Steinmetz, N. A., Aydin, C., Lebedeva, A., Okun, M., Pachitariu, M., Bauza, M., … Harris, T. D. (2021). Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. Science, 372(6539), eabf4588.
Sternson, S. M., & Roth, B. L. (2014). Chemogenetic tools to interrogate brain functions. Annual Review of Neuroscience, 37(1), 387–407.
Strogatz, S. H. (2018). Nonlinear dynamics and chaos with student solutions manual: With applications to physics, biology, chemistry, and engineering. CRC Press.
Tang, E., & Bassett, D. S. (2018). Colloquium: Control of dynamics in brain networks. Reviews of Modern Physics, 90(3), 031003.
Tong, L., Han, S., Xue, Y., Chen, M., Chen, F., Ke, W., … Grutzendler, J. (2023). Single cell in vivo optogenetic stimulation by two-photon excitation fluorescence transfer. iScience, 26(10), 107857.
Tremblay, S., Rogasch, N. C., Premoli, I., Blumberger, D. M., Casarotto, S., Chen, R., … Daskalakis, Z. J. (2019). Clinical utility and prospective of TMS–EEG. Clinical Neurophysiology, 130(5), 802–844.
van den Heuvel, M. P., & Hulshoff Pol, H. E. (2010). Exploring the brain network: A review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8), 519–534.
Vidaurre, D., Smith, S. M., & Woolrich, M. W. (2017). Brain network dynamics are hierarchically organized in time. Proceedings of the National Academy of Sciences, 114(48), 12827–12832.
Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. Annual Review of Neuroscience, 43(1), 249–275.
Wang, Y., Hutchings, F., & Kaiser, M. (2015). Computational modeling of neurostimulation in brain diseases. In S. Bestmann (Ed.), Progress in brain research (Vol. 222, pp. 191–228). Elsevier.
Williams, M. O., Kevrekidis, I. G., & Rowley, C. W. (2015). A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6), 1307–1346.
Yang, Y., Qiao, S., Sani, O. G., Sedillo, J. I., Ferrentino, B., Pesaran, B., & Shanechi, M. M. (2021). Modelling and prediction of the dynamic responses of large-scale brain networks during direct electrical stimulation. Nature Biomedical Engineering, 5(4), 324–345.
Yizhar, O., Fenno, L. E., Davidson, T. J., Mogri, M., & Deisseroth, K. (2011). Optogenetics in neural systems. Neuron, 71(1), 9–34.
Zerbi, V., Floriou-Servou, A., Markicevic, M., Vermeiren, Y., Sturman, O., Privitera, M., … Bohacek, J. (2019). Rapid reconfiguration of the functional connectome after chemogenetic locus coeruleus activation. Neuron, 103(4), 702–718.
Zipser, D. (1991). Recurrent network model of the neural mechanism of short-term active memory. Neural Computation, 3(2), 179–193.

Competing Interests

Competing Interests: The authors have declared that no competing interests exist.

Author notes

Handling Editor: Olaf Sporns

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.