Abstract
The human brain is a complex system with high metabolic demands and extensive connectivity that requires control to balance energy consumption and functional efficiency over time. How this control is manifested on a whole-brain scale is largely unexplored, particularly with regard to the associated costs. Using network control theory, we introduce a novel concept, time-averaged control energy (TCE), to quantify the cost of controlling human brain dynamics at rest, as measured from functional and diffusion MRI. Importantly, TCE spatially correlates with oxygen metabolism measures from positron emission tomography, providing insight into the bioenergetic basis of resting-state control. Examining the temporal dimension of control costs, we find that brain state transitions along the hierarchical axis from sensory to association areas are both cheaper in control costs and more frequent within hierarchical groups than between them. This inverse correlation between temporal control costs and state visits suggests a mechanism for maintaining functional diversity while minimizing energy expenditure. By unpacking the temporal dimension of control costs, we contribute to the neuroscientific understanding of how the brain governs its functionality while managing energy expenses.
Author Summary
Understanding how the brain balances functional efficiency with energy conservation is a central question in neuroscience. Network control theory (NCT) views this question from a network perspective in which the brain manages signal propagation along its structural connections to transition across desired activity states. Our study presents a novel framework based on NCT to analyze the costs associated with transitioning across resting states, revealing that regions with high control costs on average are also metabolically demanding in terms of oxygen use. Our findings further show that transitions between sensory and association states are infrequent due to high control costs, while transitions within these states are more common. This suggests that the brain employs a mechanism to preserve functional diversity while minimizing energy costs.
INTRODUCTION
The intricate nature of the human brain, characterized by its extensive connectivity (Bassett & Sporns, 2017; Bullmore & Sporns, 2012; Thiebaut de Schotten & Forkel, 2022) and correspondingly high metabolic demands (Padamsey & Rochefort, 2023; Raichle & Gusnard, 2002), has led to the assumption that its inherent organization represents a delicate balance between energy consumption and functional efficiency (Shine & Poldrack, 2018). This notion underscores the role of the nervous system as a regulatory entity (Friston, 2010), orchestrating its functions to fulfill cognitive demands while simultaneously managing energy-related constraints. While prior research has shed light on how the brain manages its energy expenditures at a cellular level (Gautron, Elmquist, & Williams, 2015; Levin, 2006; Padamsey & Rochefort, 2023), the broader exploration of how these mechanisms translate to the regulation of functional costs at a whole-brain scale remains relatively uncharted.
Drawing inspiration from engineering principles, network control theory (NCT) offers a novel perspective on this problem by conceptualizing the brain as a networked control system in order to explain its dynamics (Gu et al., 2015). In its most basic form, NCT considers brain dynamics as a composite outcome of a region’s connectivity profile and the necessary control inputs to guide a neural activity toward a desired state (Gu et al., 2015; Karrer et al., 2020; Parkes et al., 2024; Figure 1A). The former aspect delves into the constant anatomical interactions between various brain regions, while the latter presents an adaptable measure to optimally transition between states within the confines of energy constraints (Figure 1B). These constraints are quantified as control energy, which measures the costs entailed in controlling the brain across diverse states (Figure 1C).
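The linear dynamics at the heart of NCT can be sketched in a few lines. The toy connectome, the choice to control every region, and the normalization below are illustrative assumptions, not the paper's actual model parameters; the discrete-time form x(t+1) = A x(t) + B u(t) follows the general formulation in Gu et al. (2015).

```python
import numpy as np

# Minimal sketch of the linear dynamics underlying NCT on a hypothetical
# 5-node connectome. A is the (normalized) structural connectivity, B
# selects which regions receive control input, and u(t) are the control
# inputs that steer activity toward a target state.
rng = np.random.default_rng(0)
n_regions = 5

A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                               # symmetric structural connectivity
A = A / (1 + np.linalg.eigvalsh(A).max())       # scale for stability (a common convention)

B = np.eye(n_regions)                           # here, control inputs at every region

x = rng.standard_normal(n_regions)              # current brain state
u = rng.standard_normal(n_regions)              # control input at this time step

x_next = A @ x + B @ u                          # one step of the controlled dynamics
```

In the full framework, u(t) is not arbitrary but optimized so that the trajectory reaches the target state at minimal cost; the integral of the squared inputs is the control energy.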
Network Control Theory. (A) The brain as a network of connected regions fluctuates through different modes of activity across time. We can model these transitions across time using NCT. NCT is a framework that uses information about the anatomical connections in the brain, which are fixed, and regional control inputs, which are optimized, as bases to model the temporal evolution of brain activity (highlighted by the changing node color patterns on the right). In other words, the changes in brain activity of a particular region are modeled as depending on its connectivity profile and an “activation” that spreads out along this profile but also spreads in from other regions. Such activation is referred to as a control input and is fitted to represent an optimal bridge between the brain’s current and next state (Parkes et al., 2024). (B) Before employing a control model, we define what a brain state is, that is, how maps of brain activity are represented. In previous literature, states have been defined as functional intrinsic networks (Cornblath et al., 2020; Cui et al., 2020; He et al., 2022), cytoarchitectonic hierarchy levels (Parkes et al., 2022), or cognitive maps (Luppi, Singleton, et al., 2024), to name a few. These states are represented as single points in an N-dimensional state space, with N being the number of brain regions that we control. We seek to find the shortest possible trajectory between two points in this space and generate activations such that the brain is steered, or “controlled,” along a previously optimized trajectory. (C) We define the current and next state of brain activity to be related by the structural connectome of an individual (see the Methods section). The connectome imposes a resistance, or flow, onto the state space such that it is easier to make activity changes between states in one direction than another. Think of walking on a hill, where the walk uphill will be more strenuous than the downhill walk.
In analogy to this, it costs the brain more energy, called “control energy,” to control its activity between particular states than others. This asymmetry has been the subject of study in previous literature (Luppi, Singleton, et al., 2024; Parkes et al., 2022).
Control energy has been shown to be a powerful tool in various domains of neuroscience, providing an explanation of how the enhancement of executive functions is supported by structural changes across development (Cui et al., 2020), explaining the effects of psychedelics on brain dynamics (Singleton et al., 2022), or predicting cortical responses to electrical stimulation (Stiso et al., 2019). Nevertheless, studies employing NCT have traditionally focused on the control energy between single pairs of states without accounting for how such costs are organized in time. Furthermore, control energy remains a purely statistical notion (Srivastava et al., 2020), dissociated from the actual energetic currency employed by the brain (e.g., glucose and oxygen metabolism), with only recent efforts attempting to establish a connection between control costs and metabolism by showing that relative hemispheric differences in glucose uptake are mirrored by differences in control energy in patients with temporal lobe epilepsy (TLE) (He et al., 2022).
Despite these advances, the costs associated with controlling neurotypical resting-state activity, as well as their energetic signature, remain elusive. In light of these considerations, we propose a novel framework to examine the expenses incurred in regulating resting-state dynamics, that is, activity over time. Building upon the findings of previous reports (He et al., 2022), we further investigate whether theoretically derived control costs manifest in tangible biological metrics of energy expenditure by comparing them with normative measures of energy metabolism. Moreover, we extend our methodology to investigate organizational principles in brain dynamics through control costs. Ultimately, our efforts seek to establish a foundation for comprehending how the brain governs its functionality and the associated costs entailed in this intricate control.
RESULTS
Here, we present our methodology to study the control costs of human brain dynamics, that is, activity changes over time, using NCT. First, we describe our analysis pipeline and the modeling choices made to simulate the control costs of resting-state dynamics. Second, we ask how our new measure of control costs compares with measurements of energy metabolism based on positron emission tomography (PET). Finally, we conclude by showing how our framework can be leveraged to compute the transition costs between functional hierarchical levels in the brain. For all our main results, we parcellate our data into 400 cortical regions according to the delineation by Schaefer et al. (2018). Further, our analysis is primarily based on human anatomical and functional data from diffusion and functional magnetic resonance imaging (dMRI and fMRI) recordings of n = 327 participants from the Human Connectome Project Young Adult dataset (Van Essen et al., 2013). Further replications with other parcellations and datasets are presented at the end of the Results section.
Simulating the Control Costs of Resting-State Dynamics
We begin our analysis by defining brain states for our resting-state data so that we can later simulate the costs to transition between them. While the definition of a brain state is subject to debate (Kringelbach & Deco, 2020), brain states in the context of brain network control have been typically defined as task activations, for example, based on statistical maps of parameter estimates (Braun et al., 2021) or meta-analytic maps (Luppi, Singleton, et al., 2024). In resting-state studies, brain states have been defined using two different approaches. On the one hand, states have been defined according to a priori specified parcellations, where areas that fall into a given intrinsic network are represented with ones and all other areas with zeros (Cui et al., 2020; He et al., 2022). On the other, resting states have been derived from clustering techniques and then categorized based on their similarity to binary intrinsic network maps like in the previous approach (Cornblath et al., 2020; Singleton et al., 2022).
While both approaches offer a respective advantage, they also come with their drawbacks. The former approach profits from the interpretability of previously established brain networks with defined neurocognitive functions. However, such binary, discontinuous state representations are equally defined for all individuals and do not convey any temporal order of the state sequence, thereby losing unique information about the subject. The latter approach circumvents this by using data-driven clustering techniques to derive states with a temporal order from individual data. However, given the possibility of multiple solutions in clustering (Eickhoff, Thirion, Varoquaux, & Bzdok, 2015; Jain, 2010; Kleinberg, 2002), there is no guarantee that any representation of intrinsic networks are present, thereby complicating the interpretability and replicability of findings across studies. We thus propose a new approach that preserves the intelligibility of intrinsic networks as states, is nonbinary, and presents a temporal sequence tailored to the individual data.
We begin with denoised time series activity from 400 cortical regions defined in the Schaefer atlas (Schaefer et al., 2018), z-scored across time. Each region is uniquely assigned to one of seven canonical intrinsic networks as proposed by Yeo-Krienen (Yeo et al., 2011). We then average the regional time series per network, resulting in seven time series, one for each Yeo-Krienen intrinsic network (Figure 2A). Intrinsic networks are then compared at each time point based on their average activity, and the network with the highest activity is designated the dominating network of that time point. Note that in all steps hereafter, as well as the rest of our analyses, we work with the 400-region representation of the data and not its network aggregate. Moreover, to control for noise, we discard time points where no network activity surpasses a fixed threshold of half a standard deviation in the averaged activity. We redo our analysis with a lower threshold and also leave out the network modeling altogether to validate our approach (see the Sensitivity and Robustness Analysis section).
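The network-dominance labeling can be sketched as follows on synthetic data. The array shapes, random network assignment, and half-standard-deviation threshold mirror the description above, but all values are simulated for illustration only.

```python
import numpy as np

# Synthetic stand-in for denoised, z-scored BOLD data: (n_regions, n_timepoints).
rng = np.random.default_rng(1)
n_regions, n_tp, n_nets = 400, 100, 7

bold = rng.standard_normal((n_regions, n_tp))
bold = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)

# Hypothetical assignment of each region to one of 7 intrinsic networks.
labels = rng.integers(0, n_nets, n_regions)

# Average regional time series within each network -> (7, n_tp).
net_ts = np.vstack([bold[labels == k].mean(axis=0) for k in range(n_nets)])

# Dominant network per time point, keeping only time points whose peak
# network activity exceeds half a standard deviation of the averaged activity.
dominant = net_ts.argmax(axis=0)
peak = net_ts.max(axis=0)
keep = peak > 0.5 * net_ts.std()
dominance_seq = dominant[keep]
```

The resulting `dominance_seq` is the temporal sequence of winning networks used in the subsequent steps.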
Estimating the costs of brain dynamics. (A) We first average the activity of parcels into their respective Yeo-Krienen intrinsic networks (Yeo et al., 2011) in the Schaefer-400 parcellation (Schaefer et al., 2018), thereby aggregating the data from 400 to 7 time series (left). Then, we select the network with the highest amplitude at each time point, yielding a sequence of network dominance across time (right). (B) We proceed by averaging the original 400-parcel BOLD activity of each network across time points it dominated. (C) This yields seven representative network state maps, reflecting the average magnitude and sign of cortical activity during each network dominance. (D) Following the sequence of network dominance, we simulate the optimal control strategy in which the brain transitions between pairs of network states. Thus, each transition yields a map of momentary control costs to move between states. We then average all control maps across transitions. (E) The previous steps culminate in a map of TCE for each individual, where the value of each region represents the amount of control energy integration a region saw throughout the recording on average. VIS = visual network; SMN = somatomotor network; DAN = dorsal attention network; SAL = salience ventral network; LIM = limbic network; FPN = frontoparietal network; DMN = default mode network.
With the identified time points of dominance for each intrinsic network in hand, we proceed to compute the actual state representations of networks. The dominance labels serve as temporal markers to define which time points belong together. We average the activity within such markers to yield a centroid-like representation that resembles an intrinsic network, with a continuous magnitude and sign that is captured by the activity of each individual (Figure 2B). These state coefficient maps represent each intrinsic network in the a priori defined parcellation, with additional information about regions outside the network that cofluctuate with it (Figure 2C). Essentially, we are defining network representations for each individual by tailoring the network weights to their brain activity.
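The centroid step above amounts to a masked average per network. A minimal sketch on synthetic data (shapes and the random dominance sequence are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_tp, n_nets = 400, 100, 7

bold = rng.standard_normal((n_regions, n_tp))   # stand-in regional activity
dominance = rng.integers(0, n_nets, n_tp)       # dominant network per time point

# One centroid-like state map per network: the mean 400-region activity
# over the time points that network dominated. Regions outside a network
# that cofluctuate with it are naturally captured in these averages.
state_maps = np.vstack([bold[:, dominance == k].mean(axis=1)
                        for k in range(n_nets)])
```

Because the averages are taken from the individual's own data, the resulting state maps carry subject-specific weights rather than the binary network masks of earlier approaches.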
To simulate the control energy between each network coefficient map, we leverage structural information about the anatomical connections of each individual, derived from dMRI. As referenced earlier, network control methods require state maps to represent initial and target positions from which we simulate the control costs to move between them. Such costs are subject to anatomical constraints imposed by the individual’s connectome. In other words, we quantify the effort required in each region to change its current activity to a desired state, with other connected regions also contributing to its change (Gu et al., 2015; Karrer et al., 2020; Parkes et al., 2024). To maintain biological plausibility, we set our simulated path to follow an optimal trajectory that balances the energy required to enable a transition with the proximity to the desired target state (Gu et al., 2015). More simply, we simulate transitions that prioritize energy minimization while imposing constraints that prevent the trajectory from deviating into implausible, nonphysiological pathways that might otherwise provide a low-energy route.
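For intuition, the simpler minimum-energy variant of this computation (a cousin of the constrained optimal control used here, without the proximity penalty) can be written via the controllability Gramian. The 5-node system, horizon, and normalization below are toy assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch: minimum-energy cost of driving a linear system x' = Ax + Bu
# from x0 to xT over horizon T. Toy symmetric "connectome"; all values invented.
rng = np.random.default_rng(3)
n = 5
A = rng.random((n, n))
A = (A + A.T) / 2
A = A / (1 + np.abs(np.linalg.eigvalsh(A)).max()) - np.eye(n)  # shift to a stable system
B = np.eye(n)          # control inputs at every node
T = 1.0

# Controllability Gramian W = integral_0^T e^{At} B B' e^{A't} dt (trapezoid rule).
ts = np.linspace(0.0, T, 201)
mats = np.array([expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts])
dt = ts[1] - ts[0]
W = (mats[0] + mats[-1]) / 2 * dt + mats[1:-1].sum(axis=0) * dt

x0 = rng.standard_normal(n)   # initial state
xT = rng.standard_normal(n)   # target state
dx = xT - expm(A * T) @ x0
energy = dx @ np.linalg.solve(W, dx)   # minimum control energy (nonnegative scalar)
```

The optimal control formulation used in the paper additionally penalizes distance from the target along the trajectory, which keeps the simulated path physiologically plausible.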
The results of our modeling of pairwise state transitions manifest in an asymmetric matrix representing the energy cost required to transition the entire brain from one network state to another (Supporting Information Figure S2). The inherent asymmetry in these costs arises from the directional flow imposed by the structural connectivity within the state space. Intuitively, the states we have modeled can be thought of as lying on peaks and valleys, where access is significantly easier in one direction than the other, analogous to the contrast between ascending and descending a hill, with the former requiring more energy.
Having captured the control cost of the transition between all pairs of resting states, we now reconcile this information with the temporal sequence of the dominant networks. This means that we use the network sequence to simulate the energy required for the transitions between states as they occurred during the recording (Figure 2D). We aggregate the transition energy over all pairs of states in the recording sequence and then normalize the cumulative energy by dividing it by the number of transitions, that is, we average the overall energy across transitions. This normalization ensures that our measurement remains comparable across different recording durations. Ultimately, our approach culminates in the derivation of a metric we refer to as time-averaged control energy, or TCE (Figure 2E).
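The TCE aggregation reduces to indexing a precomputed cost lookup with the observed transition sequence and averaging. A sketch on invented data (the 7 × 7 × 400 cost array and the sequence are hypothetical placeholders for the simulated regional cost maps and the individual's dominance sequence):

```python
import numpy as np

rng = np.random.default_rng(4)
n_nets, n_regions = 7, 400

# Hypothetical per-transition regional cost maps: cost[i, j] is the
# 400-region control-energy map for the transition state i -> state j.
cost = rng.random((n_nets, n_nets, n_regions))

# Illustrative observed sequence of dominant networks.
seq = rng.integers(0, n_nets, 60)

# TCE: average the regional cost maps over the transitions that actually
# occurred, dividing by the number of transitions so that recordings of
# different durations stay comparable.
pairs = list(zip(seq[:-1], seq[1:]))
tce = sum(cost[i, j] for i, j in pairs) / len(pairs)
```

The result is one 400-region TCE map per individual, as in Figure 2E.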
TCE is an estimate of the temporal control costs associated with idiosyncratic intrinsic network activations across time (Figure 3A). It is a spatially heterogeneous measure, but we find that its distribution in the left and right hemispheres is statistically indistinguishable (Mann–Whitney U = 76,390, p = 0.27; Figure 3B). Moreover, previous evidence indicates that control energy is modulated by age (Cui et al., 2020; Tang et al., 2017). We find that stratifying by age (using bins of 22–25, 26–30, 31–35, and 36+, with 247, 527, 418, and 14 individuals, respectively) does not reveal any statistically significant group differences (Levene W = 108, p = 0.955; ANOVA (analysis of variance), F(3, 1596) = 0.133, p = 0.94; Figure 3C). We thus show that the control costs of human brain dynamics are region specific, consistent between hemispheres, and conserved across the ages studied here.
TCE. (A) Averaging all TCE maps across individuals, we observe that the costs to control resting-state dynamics are regionally heterogeneous. (B) Regions in the dorsal attention network require the most control, with limbic regions costing the least. In general, temporal control costs display no preference for a hemisphere, with comparable values evident in both the left and right hemispheres (Mann–Whitney U = 76,390, p = 0.27). (C) When stratifying our analysis by age, we see no statistical difference in temporal control costs across age groups (ANOVA F[3, 1596] = 0.133, p = 0.94). Each dot represents the subject-averaged value for a region of interest. Bars around the mean indicate the standard deviation of the data.
Recapitulating, our framework first begins by identifying the dominant intrinsic networks at each time point (Figure 2A). This information is used to estimate the average activity across time points where each network dominates (Figure 2B), resulting in state maps for each intrinsic network (Figure 2C). We then simulate the control energy to transition between each state in order to map these energetic costs to the sequence of dominating networks from our first step. This allows us to track the amount of control energy required to transition between time points (Figure 2D). Finally, we average across all transitions to determine the TCE of an individual, which is an estimate of the average costs to control their resting-state activity over time (Figure 2E). Our measure offers a new insight into the energy required to control resting-state activity, which we proceed to validate with actual measures of energetic costs, such as glucose and oxygen metabolism.
Grounding Control Costs in Metabolism
Previously, NCT metrics were shown to correlate with physiological markers of energy metabolism (He et al., 2022), thereby establishing an initial bridge between theoretical and tangible indicators of energy. As such, we contextualize our results with neurobiological maps to look for energetic correlates of temporal control costs. To accomplish this, we scrutinize the relationship between temporal control costs and two pivotal substrates of metabolic energy: oxygen and glucose metabolism (Camandola & Mattson, 2017; Raichle & Gusnard, 2002; Figure 4).
Relationship to metabolism. (A) Left: We compared how TCE spatially covaries with glucose metabolism from an internally acquired PET dataset (Castrillon et al., 2023). Glucose uptake has been previously shown to scale with momentary control energy (He et al., 2022). We, however, do not find a similar significant relationship to control energy when considering its temporal dimension (Spearman ρ = 0.144, PSMASH = 0.347). Shades around the regression line represent the 95% confidence interval from bootstrap samples. Right: Distribution of spatial nulls. Correlation results account for spatial autocorrelation in the data by using variogram-matched nulls (Burt et al., 2020; Markello & Misic, 2021; Váša & Mišić, 2022). (B) In contrast, TCE spatially correlates with cerebral oxygen metabolism (CMRO2; Vaishnavi et al., 2010), a relationship that remains significant after accounting for spatial autocorrelation with the same variogram-matched nulls (Spearman ρ = 0.35, PSMASH = 0.027).
Our initial investigation involved comparing TCE with the regional variations in cerebral glucose metabolism (CMRglc) across the cortex. To achieve this, we employed a group-average template derived from an internally acquired dataset (Castrillon et al., 2023) comprising n = 20 subjects who underwent PET scans, utilizing 18F-fluorodeoxyglucose (FDG) as a tracer to monitor their glucose uptake (see the Methods section).
Given the inherent spatial smoothness of our PET map, we employ rigorous measures to validate our results by contrasting them against 10,000 null maps that replicate the variogram of the data (Burt et al., 2020; see the Methods section). These null maps randomize the values in our PET map while accounting for the spatial autocorrelation inherent in the recordings, enabling us to distinguish true effects from spatially induced artifacts using a modified p value (PSMASH; Burt et al., 2020; Markello & Misic, 2021). Our results show a weak correlation between TCE and CMRglc that does not reach statistical significance once the spatial smoothness of the dataset is taken into account (Spearman ρ = 0.144, p = 0.003, PSMASH = 0.347).
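The null-based p value itself is a simple count. In the sketch below, the "surrogates" are plain random permutations purely for illustration; proper variogram-matched nulls would come from a dedicated tool such as the method of Burt et al. (2020). The maps are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_regions, n_nulls = 400, 500

tce = rng.random(n_regions)                     # stand-in TCE map
pet = tce + rng.normal(0, 1.0, n_regions)       # stand-in, weakly related PET map

rho_obs, _ = spearmanr(tce, pet)

# Null distribution of correlations from surrogate maps. Here: permutations
# for illustration; spatial-autocorrelation-preserving nulls in practice.
null_rhos = np.array([spearmanr(rng.permutation(tce), pet)[0]
                      for _ in range(n_nulls)])

# Two-sided permutation-style p value with the standard +1 correction.
p_null = (np.sum(np.abs(null_rhos) >= np.abs(rho_obs)) + 1) / (n_nulls + 1)
```

Because variogram-matched surrogates retain the map's spatial smoothness, their correlations are typically larger in magnitude than permutation nulls, which is why PSMASH can be nonsignificant even when the naive p value is small.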
Subsequently, we investigate the relationship between TCE and cerebral oxygen uptake within the cortex. Here, we employ a normative map of cerebral oxygen metabolism (CMRO2) derived from Vaishnavi et al. (2010). This map was computed from n = 33 participants who underwent PET scans with 15O-labeled oxygen to estimate their cerebral oxygen consumption (see the Methods section). We note that the map is not quantified in cerebral metabolic units; however, we retain the name CMRO2 for oxygen uptake, in accordance with the original authors’ nomenclature for their activity concentration.
Given the same technical limitations as with the CMRglc map, we generate 10,000 null maps that preserve the spatial autocorrelation of the original data using the same variogram-matching algorithm (Burt et al., 2020). Our results show that TCE significantly correlates with CMRO2 and that this relationship is significant even after accounting for the spatial autocorrelation in the data (Spearman ρ = 0.35, p < 0.001, PSMASH = 0.027). Altogether, we find that the regional cortical pattern of temporal control costs resembles the spatial pattern of oxygen uptake, establishing further evidence about the link between biological and control energy.
Temporal Organization of Control Costs
We now turn to a new perspective on brain states by reframing our analysis from the perspective of functional hierarchical processing as defined by Mesulam (1998). Previous work has shown that control energy differs across cortical hierarchies (Parkes et al., 2022). In this context, we examine the control costs of switching within and between hierarchical levels across time, ultimately looking at their dynamical organization in time.
We begin by mapping the sequence of dominant networks to their respective levels within the functional hierarchy of the brain (Figure 5A). Visual and sensorimotor networks are hereafter referred to as “unimodal,” a distinction that arises from their specialized processing of specific inputs (Margulies et al., 2016; Mesulam, 1998). In turn, we refer to all other networks as “heteromodal” (associative), as they integrate multiple functional streams (Mesulam, 1998). This bimodal mapping ensures consistency in our subsequent validation with a different parcellation that defines intrinsic networks differently (Supporting Information Figure S5). Moreover, we show that adopting this new perspective provides a fresh view of the dynamics of the brain, uncovering a relationship between TCE and state-switching behavior. We follow the same framework to compute TCE but now separately average transitions within unimodal networks, that is, from unimodal to unimodal, and within heteromodal networks, as well as transitions between unimodal and heteromodal levels.
Temporal organization of control costs. (A) We map our sequence of dominating networks to their respective hierarchical level. This means sensory networks are jointly labeled as unimodal networks (U), with the remaining networks defined as heteromodal, association networks (H). (B) Transitions within hierarchical levels represent the majority of transitions (86.5%) at rest. (C) Averaging control costs across the whole brain, we observed that transitions within a hierarchical level required significantly less TCE than between. Asterisks indicate high significance (Mann–Whitney U = 376,072, p < 0.001). Each dot per transition type represents a subject. (D) We relate whole-brain control costs to transition frequency using repeated measures correlation (Bakdash & Marusich, 2017) to assess common intra-individual relationships and find that the number of transitions per transition type is negatively correlated to the cost needed to perform such switch (rrm = −0.619, p < 0.001).
After mapping the sequence of dominating networks, we observe that, on average for individuals, transitions occur predominantly within a hierarchy rather than between hierarchies (Figure 5B). Correspondingly, the control costs to transition between hierarchies are more demanding than those within a hierarchical level (Mann–Whitney U = 376,072, p < 0.001; Figure 5C). Using repeated-measures correlation (Bakdash & Marusich, 2017; see the Methods section) to account for repeated measures of control costs for each participant as well as their individual traits, we examine the commonalities in the relationship between participants’ control costs and transition frequency across hierarchies. We find a robust negative correlation throughout the cohort (rrm = −0.619, p < 0.001), indicating that more frequent transitions within a hierarchy are associated with lower average control energy costs, in stark contrast to infrequent, costly transitions that bridge different hierarchical domains. We show that this relationship is not only an artifact of the network partition in our parcellation but is also evident when using alternative partitions (see the Sensitivity and Robustness Analysis section). Altogether, our findings suggest a dynamic temporal principle within the brain that strives to minimize its control costs by keeping expensive transitions sparse.
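The within- versus between-hierarchy averaging described above can be sketched as follows; the array shapes, label scheme, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hierarchical_tce(energies, labels):
    """Average per-transition control energy by transition type.

    energies : (P,) array, control energy of each of the P transitions
    labels   : length-(P+1) sequence of 'U' (unimodal) / 'H' (heteromodal)
               hierarchy labels for the dominant network at each time point
    (shapes and names are illustrative assumptions)
    """
    energies = np.asarray(energies, dtype=float)
    # Consecutive label pairs define the transition type
    pairs = list(zip(labels[:-1], labels[1:]))
    out = {}
    for kind in [("U", "U"), ("H", "H"), ("U", "H"), ("H", "U")]:
        mask = np.array([p == kind for p in pairs])
        out["->".join(kind)] = energies[mask].mean() if mask.any() else np.nan
    return out
```

Grouping the four resulting averages into "within" (U->U, H->H) and "between" (U->H, H->U) costs yields the quantities compared in Figure 5C.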
Sensitivity and Robustness Analysis
To ensure the robustness of our results, we replicate our analyses on an independent dataset (Castrillon et al., 2023). This additional dataset has lower spatial and temporal resolution and was subjected to different preprocessing protocols (see the Methods section). Even in the face of these differences, our results remain unchanged (Supporting Information Figure S6).
We then repeat our analysis using an alternative cortical parcellation (Gordon et al., 2016). This approach seeks to validate the reproducibility of our results with a different definition of intrinsic networks. We find that this approach leads to consistent results (Supporting Information Figure S5). Additionally, we investigate the impact of a coarser partition by dividing the cortex into 200 regions instead of the original 400 regions. We find that this modification does not affect our results (Supporting Information Figure S4).
Further, we test the impact of our state modeling step by leaving it out and instead simulating the control costs to transition between time points of raw activity; that is, our state maps are represented as the BOLD activity at each time point. We form a group-average template like in our main results and compare these results with the temporal signal-to-noise ratio (tSNR) of each region. tSNR is a metric to quantify the quality of a BOLD signal, calculated as the mean of the signal (prior to demeaning) divided by its standard deviation (Welvaert & Rosseel, 2013). We compute the tSNR for each participant run in the Human Connectome Project (HCP) dataset, rescale it from 0 to 1 using min-max normalization, and finally average it across runs.
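The tSNR computation just described reduces to a few lines; this sketch assumes a (time × region) array, which may differ from the authors' data layout.

```python
import numpy as np

def regional_tsnr(ts):
    """Temporal SNR per region: mean over time divided by std over time.

    ts : (T, R) array of BOLD time series (T time points, R regions),
         taken before demeaning, as described in the text.
    """
    tsnr = ts.mean(axis=0) / ts.std(axis=0)
    # Min-max rescale to [0, 1] across regions before averaging over runs
    return (tsnr - tsnr.min()) / (tsnr.max() - tsnr.min())
```

Averaging the rescaled maps across a participant's runs gives the per-subject tSNR map compared against TCE.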
We find that, when leaving out the state modeling step, regions with high TCE are also regions that exhibit a low tSNR (Spearman ρ = 0.36, Pspin = 0.042). In contrast, using our approach, we observe that the group-level results are unlikely related to the group-level regional tSNR (Spearman ρ = 0.234, PSMASH = 0.232). Following our main approach, we discard time points with an average activity below 0.5 but observe that, using a lower threshold of 0 instead, our group-level results again show no significant relationship to the regional signal quality (Spearman ρ = 0.274, PSMASH = 0.099). Moreover, we observed on an individual level that TCE maps from a larger share of participants were significantly related to their tSNR when leaving out our modeling approach (Supporting Information Figure S1). Taken together, this suggests that modeling states using our approach mitigates the influence of noise in our analysis.
We conclude by examining whether temporal control costs are related to the occurrence of resting states. This would mean that the patterning of individual TCE maps is a reflection of the most frequently occurring states. To test this, we create a representative map of the occurring intrinsic networks by counting how frequently, that is, for how many time points, each state occurs. Dividing this number by the total number of time points yields an estimate of how much each network was present in the recording, for example, the visual network dominated 20% of the time. We multiply this ratio by its corresponding state coefficient map, thereby creating time-weighted maps, and sum these together to create a map that represents the frequency of each network and its spatial distribution. Interestingly, we find no significant association between this map and our TCE map (Spearman ρ = 0.27, PSMASH = 0.092; Supporting Information Figure S3). Altogether, we conclude that our findings result from a convergence of characteristic sequences of resting states and individual anatomical constraints, which together form the basis to simulate the control costs of resting-state dynamics.
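The construction of the time-weighted frequency map can be sketched as below; the state-index encoding and array shapes are assumptions for illustration.

```python
import numpy as np

def frequency_weighted_map(state_seq, coef_maps):
    """Mix each network's spatial pattern by how often it dominates.

    state_seq : length-T sequence of state indices (dominant network per TR)
    coef_maps : (K, R) array, one coefficient map per network state
    (illustrative sketch of the procedure described in the text)
    """
    state_seq = np.asarray(state_seq)
    weighted = np.zeros(coef_maps.shape[1])
    for k in range(coef_maps.shape[0]):
        ratio = np.mean(state_seq == k)  # fraction of time points in state k
        weighted += ratio * coef_maps[k]
    return weighted
```

The resulting map is the one correlated (nonsignificantly) with TCE in Supporting Information Figure S3.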
DISCUSSION
In the present study, we introduce a metric for assessing the control costs inherent in human brain dynamics, referred to as TCE. TCE serves as a composite measure quantifying the average control exerted by an individual to sustain dynamic neural activity over time. Notably, our findings reveal a statistically significant spatial resemblance between the distribution of TCE and cortical oxygen metabolism, a major energetic substrate crucial for supporting neural activity (Harris, Jolivet, & Attwell, 2012). Our results further show that transitions within uni- or heteromodal networks entail distinct temporal control costs as opposed to transitions between modalities, thereby offering insight into the temporal organization of brain dynamics. Importantly, we provide open-access code to facilitate the replication and extension of our results in subsequent studies.
Temporal Control Is a Confluence of Anatomical Constraints and State Sequences
NCT is a powerful tool to investigate the emergence of dynamics from the structural scaffold of the brain (Parkes et al., 2024). Its application in neuroscience has yielded valuable insights, for example, into individual differences in psychiatric conditions (Luppi, Singleton, et al., 2024; Parkes et al., 2021) or developmental changes in youth (Cui et al., 2020; Tang et al., 2017). Notably, controllability is reported to increase asymptotically and plateau with adolescence (Tang et al., 2017), potentially explaining the absence of age effects observed in our results.
We find that the distribution of temporal control costs is spatially distinct, with certain brain regions requiring higher TCE than others (Figure 3). The extent of engagement in this control dynamic may be influenced by the manner in which control inputs propagate, reminiscent of the role of connectivity profiles in facilitating diverse spreading dynamics across brain regions (Mišić et al., 2015). This spreading process may parallel the flow of control throughout the connectome (Srivastava et al., 2020).
Control costs are further influenced by the specific states the brain traverses. Our results corroborate previous findings (Luppi, Singleton, et al., 2024; Parkes et al., 2022), indicating that control energy is dependent on both initial and goal states (Figure 5; Supporting Information Figure S2). This underscores the significance of our state-modeling approach that determines the sequence of network dominance based on intrinsic network activations, thereby establishing the order of state transitions.
Our methodology demonstrates robustness across different partitions of intrinsic networks (Supporting Information Figures S4 and S5), as well as separate datasets (Supporting Information Figure S6). Nonetheless, our state-modeling approach trades off robustness for temporal granularity, thereby losing the ability to capture temporal shifts in brain activity. Indeed, brain dynamics are hypothesized to be temporally organized such that high-amplitude modes of activity are counterbalanced by transient “off” states (Saggar, Shine, Liégeois, Dosenbach, & Fair, 2022; Vidaurre, Smith, & Woolrich, 2017; Zamani Esfahlani et al., 2020). We prompt future investigations to experiment with alternative state-modeling methods, like sliding-window approaches, for improved temporal precision. Appropriate tests should, however, be developed in tandem to distill real fluctuations from spurious ones (Leonardi & Van De Ville, 2015; Zalesky & Breakspear, 2015).
Finally, we note that our network control model is linear, therefore assuming that linear dynamics govern brain transitions (Karrer et al., 2020; Parkes et al., 2024). Previous studies demonstrated how local nonlinear fluctuations in the cortex support the emergence of intrinsic resting-state networks (Cabral, Hugues, Sporns, & Deco, 2011; Mantini, Perrucci, Del Gratta, Romani, & Corbetta, 2007) and, as such, denoted the utility of nonlinear models to simulate neural dynamics (Breakspear, 2017). However, recent evidence has shown that models based on first-order linear approximations can approximate macroscale activity (Nozari et al., 2023). Indeed, previous work showed that linear control model predictions align with nonlinear biophysical model simulations for personalized stimulation protocols (Muldoon et al., 2016). Moreover, compared with nonlinear control models, a desirable feature of linear control models is that they are more interpretable, which can facilitate incorporation of additional biological properties, such as receptor availability (Singleton et al., 2022, 2023) or cortical abnormalities in patient populations (Luppi, Singleton, et al., 2024). Consequently, we see the potential for future efforts to expand on our work in order to investigate how temporal control costs are modulated by incorporating additional biological insights or clinical distinctions.
Control Costs Reflect Metabolic Features of the Brain
Our results show that temporal control costs colocalize with oxygen metabolism, but not glucose metabolism (Figure 4). A previous study from He et al. (2022) showed that control energy and glucose uptake are similarly imbalanced between hemispheres in temporal lobe epilepsy (TLE) patients. The authors, however, only show this relationship in the limbic system, and, moreover, their methodology does not consider the costs of temporal switching but seeks to link relative hemispheric asymmetries in momentary control costs and glucose uptake to explain structural changes in TLE patients. In contrast, our study relates control and metabolism on an absolute whole-brain basis and considers the temporal dimension of control costs.
Moreover, while both glucose and oxygen constitute fundamental components for meeting the energetic demands of neural activity (Camandola & Mattson, 2017; Harris et al., 2012; Li & Sheng, 2022; Mergenthaler, Lindauer, Dienel, & Meisel, 2013; Vaishnavi et al., 2010; Watts, Pocock, & Claudianos, 2018), they exhibit distinct spatial patterns. The predominant energy-producing pathways in the brain encompass glycolysis and oxidative metabolism, where glycolysis relies on glucose and oxidative metabolism relies on glucose-derived pyruvate and oxygen (Camandola & Mattson, 2017; Dienel, 2012). The disparate spatial distribution of glucose and oxygen metabolism is attributed to their dissimilar upscaling capacities for energy generation (Díaz-García et al., 2017; Dienel, 2019; Zala et al., 2013). Specifically, certain regions are posited to favor glycolysis beyond basal levels of oxygen and glucose consumption at an equable ratio in order to afford greater flexibility in energy demand necessary for supporting human cognition (Luppi, Rosas, et al., 2024; Vaishnavi et al., 2010). In light of these considerations, our association of temporal control costs with the spatial distribution of oxygen, rather than glucose metabolism, supports the interpretation that the control of human brain dynamics is grounded in the baseline metabolic activity constituent of oxidative energy generation.
We note, however, that our conclusion is based on group-level comparisons of control costs and metabolic maps. Further research is warranted to investigate the nuanced relationship between control costs and energy metabolism at an individual level. Advances in MRI now enable the simultaneous acquisition of BOLD and oxygen metabolism data from the same subjects (Epp et al., 2024), providing an avenue for an in-depth exploration of this relationship.
State Switching Is Grounded by Control Costs
We observed that the costs required to transition within and between hierarchical levels, that is, unimodal or heteromodal, were significantly different (Figures 5C and 5D). Similarly, this phenomenon was mirrored in the frequency with which the brain either transitions out or persists in a hierarchical level (Figures 5B and 5D). This means that costly transitions between hierarchical levels were rare, with more efficient transitions within hierarchical levels happening more frequently.
In order to enable information flow, the human brain is posited to switch between modes of integration and segregation over time, where segregation refers to regions clustering together to form functionally distinct modules and integration represents a form of global intercommunication between modules (Deco, Tononi, Boly, & Kringelbach, 2015; Finc et al., 2020; Shine & Poldrack, 2018). Building upon this, we propose that the transitions between functional hierarchical levels may reflect an integration-segregation dynamic. In this sense, switches between hierarchical levels might represent modes of communication among functional hierarchies, counterbalanced by switches within hierarchical levels as modes of segregated activity. As previous studies have shown the disruption of integration-segregation dynamics by pathology or altered states of consciousness (Lord, Stevner, Deco, & Kringelbach, 2017; Luppi et al., 2019), future works could investigate whether such conditions equally disrupt state-switching dynamics.
Furthermore, we hypothesize that the inverse correlation between control costs and state visits represents a mechanism where the brain seeks to maintain functional diversity, while minimizing its energy expenditure. This is similar to the hypothesis put forward by Zalesky, Fornito, Cocchi, Gollo, and Breakspear (2014), where the authors posit that the brain transitions through costly, intermittent states in order to facilitate global integration while keeping energy demand minimal. As such, we encourage future efforts to further elucidate our understanding of the intricacies governing state-switching dynamics and control costs.
In conclusion, our work presents a new perspective on control energy as a measure of the costs to control human brain dynamics. We are able to show that temporal control costs are spatially related to oxygen metabolism as an energy substrate and uncover an organizational principle that relates temporal dynamics to their costs in the brain. As such, we envision that our work can help advance research on the costs of regulating brain activity.
METHODS
All preprocessed data are available at https://osf.io/nw9zt. The code and additional data used to perform the analyses are available at https://github.com/NeuroenergeticsLab/control_costs.
Data
HCP.
The main data used for this study consisted of resting-state time series from fMRI and structural connectomes from dMRI taken from the Human Connectome Project S900 Young Adult release (Van Essen et al., 2013). Scans from 327 unrelated participants (mean age = 28.6 ± 3.73 years, 55% females) were used to ensure that familial factors do not confound our analysis (Miranda-Dominguez et al., 2018). Informed consent was obtained for all subjects (the protocol was approved by the Washington University Institutional Review Board as part of the HCP). The participants were scanned in the HCP’s custom Siemens 3T “Connectome Skyra” scanner, and the acquisition protocol included four 15-min resting-state fMRI sessions and a high angular resolution diffusion imaging (HARDI) sequence. The resting-state fMRI data were acquired using a gradient-echo EPI sequence (TR = 720 ms; TE = 33.1 ms; FOV = 208 × 180 mm2; voxel size = 2 mm3; number of slices = 72; and number of volumes = 1,200). The dMRI data were acquired with a spin-echo EPI sequence (TR = 5,520 ms; TE = 89.5 ms; FOV = 210 × 180 mm2; voxel size = 1.25 mm3; b value = three different shells, i.e., 1,000, 2,000, and 3,000 s/mm2; number of diffusion directions = 270; and number of b0 images = 18). Additional information regarding the acquisition protocol can be found under Van Essen et al. (2013).
To process the functional data, each run of each subject’s resting-state fMRI recording was preprocessed in terms of gradient distortion correction, motion correction, and spatial normalization according to S. M. Smith et al. (2013) and Glasser et al. (2013). Artifacts were then removed using ICA-FIX (Salimi-Khorshidi et al., 2014). Intersubject registration of the cerebral cortex was carried out using areal feature-based alignment and the multimodal surface matching algorithm (Robinson et al., 2014). The clean time series were then parcellated into 400 and 200 cortical region time series according to the Schaefer functional atlas (Schaefer et al., 2018).
To process the diffusion data, structural connectomes were reconstructed from the dMRI data using the MRtrix3 package (Tournier et al., 2019). Gray matter was parcellated into 400 and 200 cortical regions according to the Schaefer functional atlas (Schaefer et al., 2018), and fiber orientation distributions were generated using a multishell, multitissue constrained spherical deconvolution algorithm (Dhollander, Raffelt, & Connelly, 2016; Jeurissen, Tournier, Dhollander, Connelly, & Sijbers, 2014). The initial tractogram was generated with 40 million streamlines, with a maximum tract length of 250 and a fractional anisotropy cutoff of 0.06. Spherical-deconvolution informed filtering of tractograms (SIFT2) was used to reconstruct whole-brain streamlines weighted by cross-section multipliers (R. E. Smith, Tournier, Calamante, & Connelly, 2015). More information regarding the individual network reconstructions is available in Park et al. (2021).
TU Munich dataset.
The replication data consisted of simultaneously measured FDG-PET and fMRI, with subsequent dMRI recordings, all while the participants kept their eyes open. Scans from 20 participants (mean age = 34.2 ± 5.99 years, 50% females) were used. Informed consent was obtained for all subjects (the protocol was approved by the Technical University of Munich Review Board). The participants were scanned in an integrated PET/MR (3T) Siemens Biograph mMR scanner (Siemens, Erlangen, Germany) and used a 12-channel phase-array head coil for the MRI acquisition. The PET data were collected in a list-mode format with an average intravenous bolus injection of 184 MBq (SD = 12 MBq) of [18F]FDG. In parallel to the PET measurement, automatic arterial blood samples were taken from the radial artery every second to measure blood radioactivity using a Twilite blood sampler (Swisstrace, Zurich, Switzerland).
The fMRI data were acquired during a 10-min time interval using a single-shot EPI sequence (300 volumes; 35 slices; repetition time, TR = 2,000 ms; echo time, TE = 30 ms; flip angle [FA] = 90°; FOV = 192 × 192 mm2; matrix size = 64 × 64; voxel size = 3 × 3 × 3.6 mm3). Diffusion-weighted images were acquired using a single-shot EPI sequence (60 slices; 30 noncolinear gradient directions; b value = 800 s/mm2 and one b = 0 s/mm2 image; TR = 10,800 ms, TE = 82 ms; FA = 90°; FOV = 260 × 264 mm2; matrix size = 130 × 132; voxel size = 2 × 2 × 2 mm3). Additional information regarding the acquisition protocol can be found under Castrillon et al. (2023).
To process the functional data, each subject’s resting-state fMRI recording was preprocessed using the Configurable Pipeline for the Analysis of Connectomes (CPAC v1.4.0; Craddock et al., 2013). This included slice-timing correction, motion correction, spatial normalization, quadratic and linear detrending of scanner drift, anatomical CompCor regression of white matter and cerebrospinal fluid activity (Behzadi, Restom, Liau, & Liu, 2007), and subsequent bandpass filtering (0.01–0.1 Hz). More details can be found in Castrillon et al. (2023).
To process the diffusion data, structural connectomes were generated using the MRtrix3_connectome BIDS App (Gorgolewski et al., 2017), which operates principally using tools provided in the MRtrix3 package (Tournier et al., 2019). This included denoising (Veraart et al., 2016), Gibbs ringing removal (Kellner, Dhital, Kiselev, & Reisert, 2016), preprocessing (Andersson, Skare, & Ashburner, 2003; Andersson & Sotiropoulos, 2016), bias field correction (Tustison et al., 2010), intermodal registration (Bhushan et al., 2015), brain extraction (S. M. Smith, 2002), T1 tissue segmentation (Patenaude, Smith, Kennedy, & Jenkinson, 2011; R. E. Smith, Tournier, Calamante, & Connelly, 2012; S. M. Smith, 2002; Zhang, Brady, & Smith, 2001), spherical deconvolution (Jeurissen et al., 2014; Tournier, Calamante, Gadian, & Connelly, 2004), and probabilistic tractography (Tournier, Calamante, & Connelly, 2011) utilizing Anatomically Constrained Tractography (R. E. Smith et al., 2012) and dynamic seeding (R. E. Smith et al., 2015). The resulting fiber track files were subsequently converted into streamline counts by counting the number of streamlines that passed through one of the 400 cortical regions according to the Schaefer functional atlas (Schaefer et al., 2018). In order to compensate for the bias toward longer fibers inherent in the tractography procedure, as well as differences in region size, we normalized the streamline count by the average length of the streamlines and average surface area of the two regions (Hagmann et al., 2008).
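The streamline-count normalization described above amounts to dividing by the mean streamline length and the mean surface area of the two connected regions; the function below is a hypothetical helper illustrating that correction.

```python
def normalized_weight(count, mean_length, area_i, area_j):
    """Streamline count corrected for the tractography bias toward long
    fibers and for differences in region size (cf. Hagmann et al., 2008).

    count       : raw number of streamlines between regions i and j
    mean_length : average length of those streamlines
    area_i/j    : surface areas of the two regions
    (illustrative sketch; not the authors' exact code)
    """
    return count / (mean_length * (area_i + area_j) / 2)
```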
To process the PET data, the first 45 min of the PET acquisition were reconstructed offline using the NiftyPET library (Markiewicz et al., 2018) based on the ordered subsets expectation maximization (OSEM) algorithm with 14 subsets, four iterations, and divided into 33 dynamic frames: 10 × 12 s, 8 × 30 s, 8 × 60 s, 2 × 180 s, and 5 × 300 s. The attenuation correction was based on the T1-derived pseudo-CT images (Burgos et al., 2014). All reconstructed PET images were motion corrected and spatially smoothed (Gaussian filter, full width at half maximum [FWHM] = 6 mm). The net uptake rate constant (Ki) was calculated using the Patlak plot model (Patlak & Blasberg, 1985) based on the last five frames of the preprocessed PET images (frames between 20 and 45 min) and the arterial input function derived from the preprocessed arterial blood samples. The cerebral metabolic rate of glucose (CMRglc) was calculated by multiplying the Ki map with the concentration of glucose in the plasma of every participant, divided by a lumped constant of 0.65 (Wu et al., 2003). Finally, the CMRglc maps were partial volume corrected using the gray matter, white matter, and cerebrospinal fluid masks derived from the T1 images using the iterative Yang method (Yang et al., 2017) and finally registered to the MNI152NLin6ASym 3 mm template through the anatomical image.
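The Patlak graphical analysis cited above estimates Ki as the slope of a linearized uptake curve; this is a generic sketch of that model, not the NiftyPET pipeline.

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_start):
    """Patlak graphical analysis (Patlak & Blasberg, 1985):
    after equilibration (t >= t_start), c_tissue/c_plasma is linear in
    int_0^t c_plasma dt' / c_plasma, with slope equal to the net uptake
    rate constant Ki. (Generic sketch with synthetic inputs.)
    """
    # Cumulative trapezoidal integral of the plasma input function
    integ = np.concatenate(([0.0], np.cumsum(
        np.diff(t) * (c_plasma[1:] + c_plasma[:-1]) / 2)))
    xs = integ / c_plasma
    ys = c_tissue / c_plasma
    m = t >= t_start
    ki, _ = np.polyfit(xs[m], ys[m], 1)  # slope = Ki
    return ki
```

CMRglc then follows as Ki multiplied by the plasma glucose concentration and divided by the lumped constant of 0.65, as stated in the text.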
Additional data.
In addition to the metabolic map of glucose uptake from the TUM dataset, we complemented our analysis with a normative, group-average map of oxygen metabolism (CMRO2) from Vaishnavi et al. (2010), acquired through neuromaps (Markello et al., 2022). The brain map was derived from 33 healthy, right-handed, neurologically normal participants (mean age = 25.4 ± 2.6 years, 58% females) who were recruited from the Washington University community. The recording was performed using a Siemens model 961 ECAT EXACT HR 47 PET scanner (Siemens/CTI) with 47 slices encompassing an axial field of view of 15 cm. Transverse resolution was 3.8–5.0 mm FWHM, and axial resolution was 4.7–5.7 mm FWHM. Attenuation data were obtained using 68Ge/68Ga rotating rod sources to enable quantitative reconstruction of subsequent emission scans. Emission data were obtained in 2D mode (interslice septa extended). The PET data were reconstructed using a ramp filter (6-mm FWHM) and then blurred to a 12-mm FWHM. Distribution of CMRO2 was measured with a 40-s emission scan (derived from a 120-s dynamic scan) after brief inhalation of 60 mCi of [15O]oxygen in room air. More details on the recording parameters and processing can be found in Vaishnavi et al. (2010).
Network Control Theory (NCT)
Network control theory (NCT) is a framework for computing metrics that characterize the control of brain activity.
Fundamental to NCT is the definition of the brain as a networked system with nodes connected by edges and defined by an adjacency matrix A. Biologically, the matrix A represents the thick bundles of myelinated axonal fibers that run through large-scale connections of brain regions (Sporns, 2011). These fibers are thought to play a critical role in coupling the activity of distant brain regions (Avena-Koenigsberger et al., 2017).
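In the standard NCT formulation, the linear dynamical model referred to below as Equation 1 takes the form shown here; the exact normalization of A is an assumption on our part, as it is not reproduced in this section:

```latex
\dot{x}(t) = A\,x(t) + B\,u(t)
```

where x(t) is the vector of regional activity states, A the (typically scaled) structural adjacency matrix, B the matrix selecting the controlled regions, and u(t) the control input.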
Regarding the assumptions made, Equation 1 imposes that the temporal evolution of brain activity is a linear function of the brain's connectivity and its current state. In addition to linearity, it is important to note that this equation assumes that A does not change over time, that is, the brain is a time-invariant system, and that brain dynamics are noise free (Karrer et al., 2020; Srivastava et al., 2020).
Optimal control energy.
Optimal control energy provides a measure of the controllability of a system in terms of (a) the state trajectory traversed and (b) the work required to reach a state. It further specifies a time horizon T during which the control input u(t) is effective in moving the system from the initial state x(0) = x0 to the goal state x(T) = xT.
The optimal energy E* was computed for each spatial unit—in this case, regions from a parcellation. These values therefore represent the quadratic input across model time to optimally move toward a goal state given an initial state.
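As a self-contained illustration of the quantity involved, the sketch below computes a simpler minimum-energy variant for an Euler-discretized linear system; the paper's optimal control formulation additionally penalizes the distance of the trajectory from the goal state, so this is a stand-in, not the authors' method.

```python
import numpy as np

def minimum_control_energy(A, B, x0, xT, T=1.0, n_steps=100):
    """Minimum-energy control of x[k+1] = (I + A*dt) x[k] + (B*dt) u[k].

    Returns the smallest sum of squared inputs driving x0 to xT in
    n_steps steps (a simplified, discrete analogue of the optimal
    control energy E* described in the text).
    """
    n = A.shape[0]
    dt = T / n_steps
    Ad = np.eye(n) + A * dt          # Euler-discretized dynamics
    Bd = B * dt
    # Reachability map: x_T = Ad^K x0 + sum_k Ad^{K-1-k} Bd u_k
    blocks = []
    Ak = np.eye(n)
    for _ in range(n_steps):
        blocks.append(Ak @ Bd)
        Ak = Ak @ Ad                 # after the loop, Ak = Ad^K
    C = np.hstack(blocks[::-1])      # columns ordered u_0 ... u_{K-1}
    diff = xT - Ak @ x0              # part of xT not reached for free
    u = np.linalg.pinv(C) @ diff     # minimum-norm input sequence
    return float(u @ u)
```

Driving the system to a state it would reach anyway costs zero energy, while states far from the free evolution require proportionally more input, which is the intuition behind regionally heterogeneous control costs.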
Readers interested in a more in-depth discussion of the mechanisms of NCT are invited to read the reviews by Srivastava et al. (2020) and Parkes et al. (2024). For more discussion of the parameter sensitivities of the model, interested readers are referred to the exhaustive study by Karrer et al. (2020).
Time-averaged control energy.
Here, E*0→1 is defined as the optimal control energy to transition from time point 0 to time point 1, and P is the total number of transitions, which is equivalent to the total number of time points in a recording minus one, that is, P = Trecording − 1. The resulting TCE value represents the average control energy exerted onto a region across a resting-state recording. As we show in our results, each region sees a different demand for control energy, resulting in a spatially heterogeneous map of control costs for each individual. Finally, we average each map of TCE across individuals to have a group representation of TCE, which we can statistically compare with other brain maps.
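Given the per-transition optimal energies, TCE is a simple average over transitions; this sketch assumes the energies are stacked into a (transitions × regions) array.

```python
import numpy as np

def time_averaged_control_energy(transition_energies):
    """TCE per region: mean optimal control energy across all consecutive
    state transitions of a recording.

    transition_energies : (P, R) array; row p holds the regional optimal
        control energies for the transition from time point p to p + 1,
        with P = T_recording - 1 (layout is an illustrative assumption).
    """
    return np.asarray(transition_energies).mean(axis=0)
```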
Spatial Autocorrelation Preserving Null Model
For all of our hypothesis tests that relied on comparing two brain maps, we employed BrainSMASH (Burt et al., 2020) as implemented in the neuromaps (Markello et al., 2022) library. Briefly, the methodology of BrainSMASH entails two steps: (a) random permutation of values within a designated brain map and (b) smoothing and rescaling to restore the spatial autocorrelation characteristic of the original data.
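The two-step logic of the surrogate procedure can be illustrated with a toy version; note that BrainSMASH itself matches the empirical variogram rather than using the fixed exponential kernel assumed here, so this is only a sketch of the idea.

```python
import numpy as np

def sa_preserving_surrogate(values, dist, rho=1.0, rng=None):
    """Toy spatial-autocorrelation-preserving surrogate in the spirit of
    BrainSMASH: (a) permute the map, (b) smooth with a distance-decaying
    kernel to reinstate spatial autocorrelation, then rescale to the
    original mean and standard deviation. (Simplified sketch; the real
    method fits the empirical variogram.)
    """
    rng = np.random.default_rng(rng)
    perm = rng.permutation(values)
    K = np.exp(-dist / rho)               # smoothing kernel
    K /= K.sum(axis=1, keepdims=True)     # row-normalize the weights
    smoothed = K @ perm
    z = (smoothed - smoothed.mean()) / smoothed.std()
    return z * values.std() + values.mean()
```

Repeating this many times yields a null distribution of correlations against which the empirical map-to-map correlation can be compared.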
Repeated-Measures Correlation
Repeated-measures correlation (rrm) is a statistical technique developed by Bakdash and Marusich (2017) to examine the common association between variables within each subject, focusing on accounting for within-individual associations in paired measures. This method is particularly valuable when dealing with repeated measures on the same participants—in our case four control cost estimates for each hierarchical transition. Because this method only assesses the common slope across participants, we can investigate whether there is a consistent relationship trend throughout measurements while automatically correcting for individual confounding factors such as the participant’s gender or age.
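The common within-subject slope can be estimated by centering each participant's paired measurements and correlating the centered values; this sketch omits the degrees-of-freedom adjustment used for the published method's p-value.

```python
import numpy as np

def rm_corr(subjects, x, y):
    """Repeated-measures correlation coefficient (cf. Bakdash & Marusich,
    2017): Pearson correlation of x and y after removing each subject's
    mean, capturing the common within-subject association.
    (Sketch only; the published method also adjusts degrees of freedom
    when computing significance.)
    """
    subjects, x, y = map(np.asarray, (subjects, x, y))
    xc, yc = x.astype(float), y.astype(float)
    for s in np.unique(subjects):
        m = subjects == s
        xc[m] = xc[m] - xc[m].mean()   # center x within subject
        yc[m] = yc[m] - yc[m].mean()   # center y within subject
    return float(np.corrcoef(xc, yc)[0, 1])
```

In this study, x and y would be the four per-transition-type control cost estimates and transition frequencies of each participant.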
ACKNOWLEDGMENTS
We thank Filip Milisav, Justine Hansen, Vincent Bazinet, Asa Farahani, Zhen-Qi Liu, Moohebat Pourmajidian, Laura Suarez, and Markus Ploner for their helpful discussion. E.G.C. acknowledges support from the Molson Foundation, the IFI fellowship of the German Academic Exchange Service, and the Elite Network of Bavaria. A.I.L. was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC; funding reference number 202209BPF-489453-401636; Banting Postdoctoral Fellowship). V.R. has been funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC Starting Grant, ID 759659).
SUPPORTING INFORMATION
Supporting information for this article is available at https://doi.org/10.1162/netn_a_00425.
AUTHOR CONTRIBUTIONS
Eric G. Ceballos: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing – original draft; Writing – review & editing. Andrea I. Luppi: Conceptualization; Investigation; Methodology; Supervision; Validation; Writing – original draft; Writing – review & editing. Gabriel Castrillon: Conceptualization; Data curation; Software; Validation. Manish Saggar: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Supervision; Validation; Writing – review & editing. Bratislav Misic: Conceptualization; Data curation; Funding acquisition; Investigation; Methodology; Project administration; Supervision; Validation; Visualization; Writing – review & editing. Valentin Riedl: Conceptualization; Data curation; Funding acquisition; Investigation; Methodology; Project administration; Supervision; Validation; Visualization; Writing – original draft; Writing – review & editing.
FUNDING INFORMATION
Valentin Riedl, HORIZON EUROPE European Research Council (https://dx.doi.org/10.13039/100019180), Award ID: 759659.
TECHNICAL TERMS
- Network control theory:
Engineering framework used in neuroscience to depict the brain as a networked dynamical system to model its intrinsic activity.
- Brain state:
Pattern of momentary activity that the brain can express.
- Transition:
Switch from one pattern of momentary activity to another.
- Control energy:
Energy to transition from one brain state to another, modeled according to the network control theory.
- Time-averaged control energy:
Average energy required to transition across a sequence of physiological states, in this case resting states.
- Unimodal:
Processing of a single modality stream, for example, visual inputs.
- Heteromodal:
Processing of multiple modality streams like combined visual, auditory, and proprioceptive cues.
- Controllability:
Theoretical measure of a node’s capacity to spread an input throughout its network vicinity.
- Intrinsic networks:
Patterns of co-activation in the cortex that pertain to resting-state activity.
REFERENCES
Competing Interests
Competing Interests: The authors have declared that no competing interests exist.
Author notes
Handling Editor: Claus Hilgetag