At the inception of human brain mapping, two principles of functional anatomy underwrote most conceptions—and analyses—of distributed brain responses: namely, functional segregation and integration. There are currently two main approaches to characterizing functional integration. The first is a mechanistic modeling of connectomics in terms of directed effective connectivity that mediates neuronal message passing and dynamics on neuronal circuits. The second phenomenological approach usually characterizes undirected functional connectivity (i.e., measurable correlations), in terms of intrinsic brain networks, self-organized criticality, dynamical instability, and so on. This paper describes a treatment of effective connectivity that speaks to the emergence of intrinsic brain networks and critical dynamics. It is predicated on the notion of Markov blankets that play a fundamental role in the self-organization of far from equilibrium systems. Using the apparatus of the renormalization group, we show that much of the phenomenology found in network neuroscience is an emergent property of a particular partition of neuronal states, over progressively coarser scales. As such, it offers a way of linking dynamics on directed graphs to the phenomenology of intrinsic brain networks.

A persistent theme in systems neuroscience, especially neuroimaging, is the search for principles that underlie the functional anatomy of distributed neuronal processes. These principles are usually articulated in terms of functional segregation (or differentiation) and integration, which inherit from centuries of neuroanatomical, neurophysiological, and neuropsychological study (Zeki & Shipp, 1988). In recent thinking about functional integration, people have turned to formal accounts of (predictive) processing in the brain (e.g., Bastos et al., 2012; Keller & Mrsic-Flogel, 2018; Parr & Friston, 2018; Rao & Ballard, 1999; Spratling, 2008) to understand the nature of (neuronal) message passing on graphs, where edges correspond to connectivity and nodes correspond to neuronal populations. Crucially, this characterization rests upon the asymmetric and directed connectivity that defines cortical and subcortical hierarchies (e.g., Bastos et al., 2012; Crick & Koch, 1998; Felleman & Van Essen, 1991; K. J. Friston, Parr, & de Vries, 2017; Keller & Mrsic-Flogel, 2018; N. Markov et al., 2013; N. T. Markov et al., 2014; Mesulam, 1998; Stachenfeld, Botvinick, & Gershman, 2017; Zeki & Shipp, 1988). Usually, these asymmetries are expressed in terms of things like laminar specificity that distinguish between forward and backward connections (Buffalo, Fries, Landman, Buschman, & Desimone, 2011; Grossberg, 2007; Haeusler & Maass, 2007; Hilgetag, O’Neill, & Young, 2000; Thomson & Bannister, 2003; Trojanowski & Jacobson, 1977). More recently, asymmetries in spectral content have become an emerging theme (Arnal & Giraud, 2012; Bastos et al., 2015; Buffalo et al., 2011; Giraud & Poeppel, 2012; Hovsepyan, Olasagasti, & Giraud, 2018; Self, van Kerkoerle, Goebel, & Roelfsema, 2019; Singer, Sejnowski, & Rakic, 2019; van Kerkoerle et al., 2014).

In contrast, analyses of functional connectivity have focused on distributed patterns of coherent fluctuations in neuronal activity and phenomenological descriptions of the implicit dynamics (Bassett & Sporns, 2017; Biswal, Van Kylen, & Hyde, 1997; Bullmore & Sporns, 2009; Gilson, Moreno-Bote, Ponce-Alvarez, Ritter, & Deco, 2016; Gu et al., 2018; Lynall et al., 2010; van den Heuvel & Sporns, 2013). This phenomenology ranges from intrinsic brain networks—which are conserved over subjects in resting-state functional magnetic resonance imaging—to the dependence of neuronal dynamics on cortical excitability (Freyer, Roberts, Ritter, & Breakspear, 2012; Roy et al., 2014). The principles that are brought to bear on this kind of characterization could be seen as ascribing neuronal dynamics to various universality classes, such as self-organized criticality (Bak, Tang, & Wiesenfeld, 1988; Breakspear, Heitmann, & Daffertshofer, 2010; Cocchi, Gollo, Zalesky, & Breakspear, 2017; Deco & Jirsa, 2012; Haimovici, Tagliazucchi, Balenzuela, & Chialvo, 2013; Kitzbichler, Smith, Christensen, & Bullmore, 2009; Lopez, Litvak, Espinosa, Friston, & Barnes, 2014; Shin & Kim, 2006). (Note: Although we have subsumed criticality and dynamic instability under phenomenological approaches, criticality can refer to the dynamics of neurons and neural assemblies—as opposed to the statistical properties of data leveraged by functional connectivity. A simple example of criticality is a branching process, an inherently directed process. Neurobiological models of these kinds of processes have been derived from causal neural field models with directed (corticothalamic) interactions [Freyer et al., 2011] and with coupled oscillators [Deco & Jirsa, 2012]. In this setting, criticality acquires a more mechanistic aspect, as we will see below.) This dual-pronged approach to functional integration invites an obvious question: Is there a way of linking the two?

Practically, the study of context-sensitive, directed coupling between the nodes of neuronal networks calls for an estimate of effective connectivity, under some model of how measured brain signals are generated. One then has to resolve the ill-posed problem of recovering the underlying (connectivity) parameters of the model, usually using Bayesian inference. The best example here is dynamic causal modeling (K. J. Friston, Harrison, & Penny, 2003; K. J. Friston, Kahan, Biswal, & Razi, 2014; Razi & Friston, 2016). The complementary approach—based upon functional connectivity—borrows ideas from network science and graph theory. This entails specifying an adjacency matrix, usually formed by thresholding a functional connectivity matrix summarizing dependencies among nodes, where the nodes are generally defined in terms of some parcellation scheme (Bassett & Sporns, 2017; Bullmore & Sporns, 2009).

In what follows, we will consider a particular parcellation scheme based upon effective connectivity and ask whether it leads to the same phenomenology seen in network neuroscience. In doing so, we can, in principle, explain and quantify the emergence of large-scale intrinsic brain networks and their characteristic dynamics. A crucial aspect of the particular parcellation or partition employed in this work is that it can be applied recursively, in the spirit of the renormalization group (Schwabl, 2002). This means that there is a formal way of quantifying the dynamics at various spatiotemporal scales. Our hypothesis was that the spatiotemporal dynamics of coarser scales would evince both the functional anatomy of intrinsic brain networks and the emergence of (self-organized) criticality—as assessed in terms of dynamical instability.

Although this work is framed as addressing issues in network neuroscience (Bassett & Sporns, 2017), it was originally conceived as a parcellation scheme for multiscale analyses of neuroimaging time series. In other words, it was intended as a first-principles approach to dimension reduction and decomposition, as a prelude to subsequent graph theoretic or dynamic causal modeling (K. J. Friston, Kahan, et al., 2014; Razi, Kahan, Rees, & Friston, 2015; Razi et al., 2017; Zhou et al., 2018). However, the theoretical foundations—and uniqueness of the partition—proved too involved to support a simple and practical procedure. Instead, what follows is offered as a case study of emergence in coupled dynamical systems, using the brain as a paradigm example.

This paper comprises five sections. In the first, we review the notion of Markov blankets and how recursive applications of a partition or parcellation of states into Markov blankets allows one to express dynamics at increasing scales. We will use the notion of the renormalization group (RG) to motivate this recursive parcellation because there are some formal constructs (in terms of RG scaling) that furnish an insight into how dynamics change as we move from one scale to the next. The second section describes a simple (dynamic causal modeling) analysis of directed effective connectivity at the finest spatial scale, as summarized with a Jacobian. This plays the role of a directed adjacency matrix, which is all that is needed for successive renormalization to higher scales. The renormalization group is illustrated with an exemplar dataset, to show what the ensuing parcellation scheme looks like. This section concludes with a brief consideration of sparse coupling at the finest scale, in terms of excitatory and inhibitory connections. The subsequent sections consider dynamics at different scales of parcellation, in terms of intrinsic (within parcel) and extrinsic (between parcel) connectivity. Our focus here is on the progressive slowing of intrinsic dynamics as we move from one scale to the next—a slowing that organizes the dynamics at coarser (higher) scales towards critical regimes of instability and slowly fluctuating dynamical modes. The third section illustrates the emergence of autonomous dynamics, in terms of characteristic frequencies associated with intrinsic connectivity, and in terms of positive Lyapunov exponents that speak to transcritical bifurcations at, and only at, coarser scales. The fourth section focuses on extrinsic connectivity and the coupling between (complex) modes or patterns of activity and how this relates to functional connectivity and intrinsic brain networks (Fox et al., 2005). The final section reviews the dynamical phenomenology at hand from the point of view of statistical physics, with a special focus on dissipative dynamics and detailed balance at nonequilibrium steady state. We conclude with a brief discussion and qualification of this particular (sic) approach to functional integration.

The last section concluded with reference to a particular partition. The use of the word “particular” has a double entendre here. It is predicated on a more fundamental (or perhaps foundational) analysis of coupled dynamical systems that considers the emergence of “particles.” Full details of this treatment can be found in K. Friston (2019). From the current perspective, we just need to know how to define Markov blankets (Clark, 2017; Kirchhoff, Parr, Palacios, Friston, & Kiverstein, 2018; Pearl, 1988; Pellet & Elisseeff, 2008) and how Markov blankets engender particles and particular partitions (K. Friston, 2019). For readers interested in Markov blankets for dynamical systems, fairly comprehensive discussions can be found in K. Friston, Da Costa, and Parr (2020) and Parr, Da Costa, and Friston (2020).

In brief, a Markov blanket allows one to distinguish a collection of vector states (hereafter, simply states) that belong to a particle from states that do not. This provides an operational definition of a particle that, in the present setting, can be regarded as a region of interest or parcel of brain states. This means that a particular partition becomes a parcellation scheme, in terms of functional anatomy. The particular partition refers to a partition of a (potentially large) set of states into a smaller number of particles, where each particle is distinguished from other particles, in virtue of possessing a Markov blanket. A Markov blanket is simply a set of states that separate or insulate—in a statistical sense—states that are internal to the blanket and states that are on the outside; namely, external states. Technically, this means that internal states are conditionally independent of external states, given their blanket states (Pearl, 2009).

In a particular partition, all external states are assigned to particles, to create an ensemble of particles that are constituted by their blanket states and the internal states within or beneath the blanket. The crucial aspect of this partition is that we only need the blanket states to understand coupling between particles. This follows from the conditional independence between internal and external states, where the external states “that matter” are the blanket states of other particles. In short, the particular partition is a principled way of dividing states into particles or parcels that is defined in terms of statistical dependencies or coupling among states. In more complete treatments, one can divide the blanket states into active states and sensory states, according to the following rules: Sensory states are not influenced by internal states, while active states are not influenced by external states. Indeed, it is the absence of these influences that enables us to identify the Markov blanket of any given set of internal states. Please see the Supporting Information for a formal definition of Markov blankets in this dynamical context.
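To make this construction concrete, the following is a minimal sketch—not the authors' implementation—of how a Markov blanket can be read off the sparsity structure of a Jacobian. It assumes coupling is summarized by an adjacency matrix A with A[i, j] ≠ 0 when state j influences state i; the blanket of a set of internal states then comprises its parents, its children, and the other parents of its children. The further split of the blanket into active and sensory states is omitted for brevity.

```python
import numpy as np

def markov_blanket(A, internal):
    """Markov blanket of a set of internal states.

    A        : (n, n) adjacency matrix with A[i, j] != 0 when state j
               influences state i (the sparsity pattern of the Jacobian).
    internal : indices of the internal states.

    Returns the blanket: parents, children, and parents of children,
    excluding the internal states themselves.
    """
    A = (np.asarray(A) != 0)
    mu = np.zeros(A.shape[0], dtype=bool)
    mu[list(internal)] = True

    parents = A[mu].any(axis=0)              # states influencing the internal set
    children = A[:, mu].any(axis=1)          # states influenced by the internal set
    parents_of_children = A[children].any(axis=0)

    blanket = (parents | children | parents_of_children) & ~mu
    return np.flatnonzero(blanket)

# Toy example: a chain 0 -> 1 -> 2 -> 3 -> 4. The blanket of {2} is its
# parent (1) and its child (3); states 0 and 4 are external.
A = np.zeros((5, 5))
for j in range(4):
    A[j + 1, j] = 1.0                        # state j influences state j + 1
print(markov_blanket(A, [2]))                # -> [1 3]
```

Conditioned on states 1 and 3, state 2 carries no further information about states 0 and 4—the conditional independence that defines the blanket above.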

As noted above, we are dealing with vector states (not scalar variables). So, what is a vector state? A vector state is the multidimensional state of a particle, for example, the principal eigenstates of its Markov blanket. However, we have just said that a particle arises from a partition of states—and now we are saying that a state is an eigenstate (i.e., a linear mixture) of the blanket states of a particle. So, is a particle a collection of states or is a state the attribute of a particle (i.e., its blanket states)? The answer is both, because we have particles at multiple levels.

This is where the renormalization group comes in, via a recursive application of the particular partition. In other words, if we start with some states at any level, we can partition these states into a set of particles, based upon how the states are coupled to each other. We can then take the principal eigenstates of each particle’s blanket states to form new states at the scale above, and start again. This recursive application of a grouping or partition operator (G)—followed by a dimension reduction (R)—leads to the renormalization group based upon two operators, R and G. In theoretical physics, the renormalization group (RG) refers to a transformation that characterizes a system when measured at different scales (Cardy, 2015; Schwabl, 2002). A working definition of renormalization rests on three things (Lin, Tegmark, & Rolnick, 2017): vectors of random variables, a coarse-graining operation, and a requirement that the operation does not change the functional form of the Lagrangian to within a multiplicative constant. For example, under a transformation of position and velocity variables x and ẋ given by x → ax and ẋ → bẋ, the corresponding Lagrangian λ transforms (if scale-invariant) according to λ(x, ẋ) → λ(ax, bẋ) = cλ(x, ẋ), where a, b, and c are constants (Landau & Lifshitz, 1976). Equivalently, a scale-invariant system’s equation of motion must remain perfectly unchanged under the rescaling operation. This can readily be seen by applying the Euler-Lagrange equation to the scaled Lagrangian:
$$\frac{d}{dt}\frac{\partial (c\lambda)}{\partial \dot{x}} - \frac{\partial (c\lambda)}{\partial x} = c\left(\frac{d}{dt}\frac{\partial \lambda}{\partial \dot{x}} - \frac{\partial \lambda}{\partial x}\right) = 0 \tag{1}$$
Here, the rescaling constant c cancels, leaving the original equation of motion. (Note: In what follows, instead of dealing with real positions and velocities, we will deal with complex variables that have real and imaginary parts.)

In our case, the random variables are states; the coarse-graining operation corresponds to the grouping into a particular partition (G) and a dimension reduction (R) inherent in retaining the principal eigenstates of particular blanket states. The dimension-reduction operator (R) has two parts. First, we can eliminate the internal states because they do not contribute to coupling between particles. Second, we can eliminate the eigenstates that dissipate very quickly; namely, those whose eigenvalues have large negative real parts. These are the fast or stable modes of a dynamical system (Carr, 1981; Haken, 1983). This leaves us with the slow, unstable eigenstates picked out by the dimension reduction, which we can now see as an adiabatic approximation. Please note that in quantum mechanics, the adiabatic approximation refers to those solutions to the Schrödinger equation that make use of a timescale separation between fast and slow degrees of freedom.
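As an illustration of the reduction operator R, the sketch below—under the simplifying assumption that the blanket Jacobian is available as an explicit matrix—eigendecomposes the blanket Jacobian and retains only the slow eigenmodes (those whose eigenvalues have real parts above a decay-rate threshold), which is the adiabatic approximation described above. The threshold and names are illustrative.

```python
import numpy as np

def adiabatic_reduction(Jb, tol=1.0):
    """Keep the slow (least dissipative) eigenmodes of a blanket Jacobian.

    Jb  : (m, m) Jacobian restricted to a particle's blanket states.
    tol : modes decaying faster than tol (in Hz) are discarded; i.e.,
          eigenvalues are retained when Re(lambda) > -tol.

    Returns the retained (complex) eigenvalues and eigenvectors,
    ordered from slowest to fastest.
    """
    lam, xi = np.linalg.eig(Jb)
    order = np.argsort(-lam.real)            # slowest (least negative) first
    lam, xi = lam[order], xi[:, order]
    keep = lam.real > -tol                   # adiabatic approximation
    if not keep.any():
        keep[0] = True                       # always retain the slowest mode
    return lam[keep], xi[:, keep]

# A stable Jacobian with one slow and two fast directions: only the slow
# mode (eigenvalue -0.1) survives the reduction.
lam, xi = adiabatic_reduction(np.diag([-0.1, -5.0, -20.0]))
print(lam)                                    # -> [-0.1]
```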

Formally, we can express the coarse-graining or blocking transformation RG as a composition of a particular partition and adiabatic reduction applied to any random dynamical system (at scale i) that can be characterized as coupled subsets of states. The n-th subset xn(i) ⊂ x(i) constitutes the vector state of a particle, subject to random fluctuations, ωn(i):
$$\dot{x}_n^{(i)} = f_n^{(i)}\big(x^{(i)}\big) + \omega_n^{(i)} \approx \lambda_{nn}^{(i)}\, x_n^{(i)} + \sum_{m \neq n} \lambda_{nm}^{(i)}\, x_m^{(i)} + \omega_n^{(i)} \tag{2}$$
These equations of motion for the states of the n-th particle comprise intrinsic and extrinsic components, determined by the states of the particle in question and other particles, respectively. In this form, the diagonal elements of the Jacobian or coupling matrix, λnn(i) ∈ ℂ, determine the frequency and decay of oscillatory responses to extrinsic perturbations and random fluctuations. The grouping operator (G) groups states into particles, where particles comprise blanket and internal states: πj(i) = {bj(i), μj(i)}. The blocking transformation (R) then reduces the number of states, by eliminating internal states at the lower level and retaining slow eigenstates using the principal eigenvectors ξn(i) = eig(J(bn(i), bn(i))) of the Jacobian of blanket states bn(i). These eigenstates then become the vector states at the next scale:
$$x_n^{(i+1)} = \xi_n^{(i)\dagger}\, b_n^{(i)}, \qquad \xi_n^{(i)} = \operatorname{eig}\!\left(J\big(b_n^{(i)}, b_n^{(i)}\big)\right) \tag{3}$$
Here, the parameters of the Lagrangian are taken to be the coupling parameters λnm(i), whose changes from one scale to the next are described by a beta function that is said to induce a renormalization group flow (or RG flow). The key aspect of this flow rests upon the adiabatic reduction, which renders the dynamics progressively slower at successive macroscopic scales. This follows because, by construction, only slow eigenstates are retained, where the intrinsic coupling among these eigenstates is a diagonal matrix of (negative) eigenvalues, which determine how quickly the eigenstates decay:
$$\lambda_{nn}^{(i+1)} = \xi_n^{(i)\dagger}\, J\big(b_n^{(i)}, b_n^{(i)}\big)\, \xi_n^{(i)} = \operatorname{diag}(\lambda_1, \ldots, \lambda_K), \qquad \operatorname{Re}(\lambda_k) \le 0 \tag{4}$$
The RG flow speaks to a progressive move from dynamics with high amplitude, fast fluctuations (e.g., quantum mechanics) through to deterministic systems that are dominated by slow dynamics (e.g., classical mechanics). In deterministic systems, the real parts of λnn(i) play the role of Lyapunov exponents (cf. critical exponents), which quantify the rate of separation of infinitesimally close trajectories (Lyapunov & Fuller, 1992; Pyragas, 1997). This suggests that as we move from one scale to the next, there is a concomitant increase in the tendency to critical slowing and dynamic itinerancy (Cessac, Blanchard, & Krüger, 2001; Pavlos, Karakatsanis, & Xenakis, 2012).

In this (RG) setting, a relevant variable is said to describe the macroscopic behavior of the system. From our perspective, the relevant variables in question correspond to the slow eigenstates. In short, we can reduce many states to a small number of eigenstates that summarize the dynamics “that matter.” These eigenstates are the relevant variables that underwrite critical slowing. Figures 1 and 2 provide a graphical illustration of this recursive partitioning and reduction based upon an adiabatic approximation (i.e., eliminating fast eigenstates and approximating dynamics with the remaining slow eigenstates). This adiabatic reduction is commonplace in physics, where it plays a central role in synergetics through the enslaving principle (Haken, 1983) and, in a related form, in the center manifold theorem (Carr, 1981).
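Putting the two operators together, one application of the blocking transformation R∘G can be sketched as follows, reusing the markov_blanket and adiabatic_reduction helpers above. The grouping heuristic here (seeding each particle on the most densely coupled unassigned state) is a deliberate simplification of the procedure in Figure 2—not the authors' scheme—but it reproduces the key behavior: at each scale the system shrinks, and the surviving diagonal couplings, whose real parts play the role of Lyapunov exponents, drift towards zero from below.

```python
import numpy as np

def rg_step(J, tol=1.0):
    """One simplified blocking transformation (R o G) on a Jacobian J."""
    A = (J != 0)
    unassigned = np.ones(J.shape[0], dtype=bool)
    particles = []                            # internal + blanket states per particle
    while unassigned.any():
        degree = np.where(unassigned, A.sum(0) + A.sum(1), -1)
        seed = int(np.argmax(degree))         # G: most densely coupled free state
        members = [seed] + [b for b in markov_blanket(A, [seed]) if unassigned[b]]
        unassigned[members] = False
        particles.append(np.asarray(members))

    # R: slow eigenstates of each particle become the next-scale states, and
    # the Jacobian is projected onto the retained eigenvectors.
    modes = [adiabatic_reduction(J[np.ix_(b, b)], tol)[1] for b in particles]
    return np.block([[xi_n.conj().T @ J[np.ix_(bn, bm)] @ xi_m
                      for bm, xi_m in zip(particles, modes)]
                     for bn, xi_n in zip(particles, modes)])

# Repeated application yields smaller and slower systems: print the size and
# the slowest intrinsic decay rate at each scale.
rng = np.random.default_rng(1)
J = -5 * np.eye(64) + (rng.random((64, 64)) < 0.1) * rng.normal(0, 0.5, (64, 64))
for _ in range(4):
    if J.shape[0] <= 1:
        break
    J = rg_step(J)
    print(J.shape[0], J.diagonal().real.max().round(3))
```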

Figure 1. 

Blankets of blankets. This schematic illustrates the recursive procedure by which successively coarser scale (and slower) dynamics arise from subordinate levels. At the bottom of the figure (lower panel), we start with an ensemble of vector states (here nine). The conditional dependencies among these vector states (i.e., eigenstates) define a particular partition into particles (upper panels). Crucially, this partition equips each particle with a bipartition into blanket and internal states, where blanket states comprise active (red) and sensory (magenta) states. The behavior of each particle can now be summarized in terms of (slow) eigenstates or mixtures of its blanket states to produce states at the next level or scale. These constitute an ensemble of vector states and the process starts again. Formally, one can understand this in terms of coarse-graining the dynamics of a system via two operators. The first uses the particular partition to group subsets of states (G), while the second uses the eigenstates of the resulting blanket states to reduce dimensionality (R). The upper panels illustrate the bipartition for a single particle (left panel) and an ensemble of particles, that is, the particular partition per se (right panel). The insets on top illustrate the implicit self-similarity of particular dependencies pictorially, in moving from one scale to the next. Please see the main text for a definition of the variables used in this figure.

Figure 2. 

The particular partition. This schematic illustrates a partition of eigenstates (small colored balls) into particles (comprising nine vectors), where each particle has six blanket states (red and magenta for active and sensory states, respectively) and three internal states (cyan). The upper panel summarizes the operators used to create a particular partition. We start by forming an adjacency matrix that characterizes the coupling between different vectors’ states. This is based upon the Jacobian and implicitly the flow of vector states. The resulting adjacency matrix defines a Markov blanket–forming matrix (B), which identifies the children, parents, and parents of the children. The same adjacency matrix is used to form a graph Laplacian (G) that is used to define neighboring (i.e., coupled) internal states. One first identifies a set of internal states using the graph Laplacian. Here, the j-th subset of internal states at level i is chosen, based upon dense coupling with the vector state with the largest graph Laplacian. Coupled internal states are then selected from the columns of the graph Laplacian that exceed some threshold. In practice, the examples used later specify the number of internal states desired for each level of the hierarchical decomposition. Having identified a new set of internal states (that are not members of any particle that has been identified so far), its Markov blanket is recovered using the Markov blanket–forming matrix. The internal and blanket states then constitute a new particle, which is added to the list of particles identified. This procedure is repeated until all vector states have been accounted for. Usually, towards the end of this procedure, candidate internal states are exhausted because all remaining unassigned vector states belong to the Markov blanket of the particles identified previously. In this instance, the next particle can be an active or sensory state, depending upon whether there is a subset (of active states) that is not influenced by another. In the example here, we have already identified four particles and the procedure adds a fifth (top) particle to the list of particles, thereby accounting for nine of the remaining vector states.

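The two matrices described in the caption can be written down directly. The sketch below assumes the blanket-forming matrix takes the form B = A + Aᵀ + AᵀA used in related treatments of particular partitions (K. Friston, 2019)—so that B[i, j] ≠ 0 picks out the parents, children, and parents of children of state i—and uses the diagonal of a graph Laplacian to seed each new set of internal states. Both the form of B and the seeding rule are our illustrative assumptions, not the authors' code.

```python
import numpy as np

def blanket_forming_matrix(A):
    """B[i, j] != 0 when j is a parent, child, or parent-of-a-child of i,
    assuming A[i, j] != 0 when state j influences state i."""
    A = (np.asarray(A) != 0).astype(float)
    return A + A.T + A.T @ A

def next_internal_states(A, free, k=3):
    """Seed k densely coupled, still-unassigned internal states.

    Seeds on the free state with the largest diagonal of the graph
    Laplacian L = D - W (W the symmetrized adjacency), then adds its most
    strongly coupled free neighbours, mirroring the procedure in Figure 2.
    """
    W = ((A != 0) | (A.T != 0)).astype(float)
    np.fill_diagonal(W, 0)
    degree = W.sum(1)                         # diagonal of the graph Laplacian
    free_idx = np.flatnonzero(free)
    seed = int(free_idx[np.argmax(degree[free_idx])])
    coupling = W[seed] * free                 # coupled and unassigned
    coupling[seed] = 0
    neighbours = np.argsort(-coupling)[: k - 1]
    return [seed] + [int(j) for j in neighbours if coupling[j] > 0]

# The blanket of the chosen internal states mu is then read off B: all j
# with B[m, j] != 0 for some m in mu, excluding mu itself.
rng = np.random.default_rng(0)
A = (rng.random((12, 12)) < 0.2).astype(float)
B = blanket_forming_matrix(A)
mu = next_internal_states(A, np.ones(12, dtype=bool))
blanket = sorted(set(np.flatnonzero(B[mu].any(axis=0))) - set(mu))
print(mu, blanket)
```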

We now have at hand a principled procedure to repeatedly coarse-grain a system of loosely coupled particles (e.g., nonlinear neuronal oscillators) at successively coarser spatiotemporal scales. One can see that, by construction, as we ascend scales, things will get coarser and slower. It is this progressive slowing towards criticality that is the primary focus of the examples pursued below. However, before we can apply the particular partition to some empirical data, we first need to quantify the coupling among particles at a suitably fine or small scale. Having characterized this coupling in terms of some dynamical system or state space model, we can then use the Jacobian to identify a particular partition, compute the Jacobian of the blanket states, and then take the ensuing eigenstates to the next level, as described above. This furnishes a description of dynamics in terms of the intrinsic (within particle) coupling (i.e., eigenvalues) of any particle λnn(i) and their extrinsic (between particle) coupling λnm(i). We will unpack the meaning of these terms later using numerical examples and analysis. At present, we will focus on estimating the coupling among a large number of particles at the smallest scale.

To use the machinery of Markov blankets, in the setting of loosely coupled dynamical systems, we need to specify the coupling among vector states (that we can associate with the eigenstates of the smallest particles under consideration). To do this, one can use a simplified form of dynamic causal modeling that can be applied to hundreds or thousands of neuronal states. This is easier than it might sound, provided one commits to low (first) order approximations (e.g., Frassle et al., 2017). Consider the state space model describing the coupling among a large number of states, where the flow is subject to random fluctuations (dropping superscripts for clarity):
$$\dot{x}(t) = f(x(t)) + \omega_x(t), \qquad y(t) = k \otimes x(t) + \omega_y(t) \tag{5}$$
Notice that we have introduced a convolution operator that converts latent (neuronal) states to some observable measurement (e.g., hemodynamic signals from functional magnetic resonance imaging). Here, y(t) is a linear convolution (with kernel k) of some states x(t) subject to observation and system noise ωy and ωx, respectively. We have also assumed that there is an observation for each relevant state. Linearizing this state space model, where J = ∂xf(x) and † denotes conjugate transpose, we have the following:
$$Dx = xJ^{\dagger} + \omega_x, \qquad y = Kx + \omega_y \tag{6}$$
Here, the states have been arranged into a matrix, with one row for each point in time and a column for each dimension. This means we can replace the derivative and convolution operators in Equation 5 with the matrix operators in Equation 6 that commute, that is, KD = DK. (Note: This is due to the linearity of the convolution operator and is true regardless of whether the temporal derivative is in matrix form. Intuitively, a linear combination of velocities is equivalent to the rate of change of a linear combination of positions.) In turn, this means we can approximate the system with a general linear model:
$$Dy = DKx + D\omega_y = K\big(xJ^{\dagger} + \omega_x\big) + D\omega_y = yJ^{\dagger} + \varepsilon, \qquad \varepsilon = K\omega_x + D\omega_y - \omega_y J^{\dagger} \tag{7}$$
This approximation assumes that JJ† ∝ I, so that the final noise term ωyJ† can be absorbed into a covariance component. This assumption is licensed by the fact that the Jacobian of relevant states will be dominated by large negative leading diagonals (that underwrite the stability of each state). Equation 7 is a straightforward general linear model, with random fluctuations whose distinct covariance components depend upon the form of the (e.g., hemodynamic) convolution kernel and the amplitude of state and observation noise. If K is specified in terms of the basis set of convolution kernels, then the covariance components of the linearized system can be expressed as the following:
$$\operatorname{cov}(\varepsilon) = \gamma_1 KK^{\dagger} + \gamma_2 DD^{\dagger} + \gamma_3 I, \qquad K = \sum_i \kappa_i K_i \implies KK^{\dagger} = \sum_{ij} \kappa_i \kappa_j K_i K_j^{\dagger} \tag{8}$$
such that κiκj replaces the hyperparameter γ1 above.
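To see the linearization at work, here is a self-contained numerical sketch, in which ordinary least squares stands in for the variational Laplace scheme used below, and a generic exponential kernel stands in for the hemodynamic response; all names and settings are ours. Circulant matrices make the commutativity KD = DK exact, and regressing Dy on y recovers the Jacobian up to noise:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
T, n, dt = 512, 8, 0.1

# Temporal derivative and convolution as commuting (circulant) matrix operators.
d = np.zeros(T); d[1], d[-1] = 1.0, -1.0
D = circulant(d).T / (2 * dt)                 # central-difference derivative
h = np.exp(-np.arange(T) / 4.0); h /= h.sum() # stand-in convolution kernel
K = circulant(h)                              # y = K x
assert np.allclose(K @ D, D @ K)              # KD = DK, as in Equation 6

# Simulate a stable linear system Dx = x J' + noise (rows of x index time).
J = -0.5 * np.eye(n) + 0.1 * rng.normal(size=(n, n))
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (x[t] @ J.T + rng.normal(size=n))
y = K @ x + 0.01 * rng.normal(size=(T, n))

# General linear model (Equation 7): Dy ~ y J', solved by least squares.
J_hat = np.linalg.lstsq(y, D @ y, rcond=None)[0].T
print(np.round(np.abs(J_hat - J).max(), 2))   # rough recovery of J, up to
                                              # noise and discretization bias
```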

This linearized system can now be solved using standard (variational Laplace) schemes for parametric empirical Bayesian (PEB) models, to provide (approximate) Gaussian posteriors over the unknown elements of the Jacobian—and the unknown covariance parameters encoding the amplitude of various random effects (K. Friston, Mattout, Trujillo-Barreto, Ashburner, & Penny, 2007). This Bayesian model inversion requires priors on the parameters and hyperparameters (i.e., covariance component parameters), specified as Gaussian shrinkage priors. For nonnegative hyperparameters, Gaussian shrinkage priors are generally applied to log-transformed hyperparameters (i.e., a lognormal prior over nonnegative scale parameters).

Equipped with posterior densities over the coupling parameters—or elements of the Jacobian—we can now use Bayesian model reduction to eliminate redundant parameters (K. J. Friston et al., 2016); namely, parameters whose shrinkage to zero increases model evidence by removing redundancy or complexity. As described elsewhere (K. Friston & Penny, 2011), this can be done very efficiently, because we know the form of the posteriors, before and after reducing the model. Furthermore, we can apply other prior constraints to eliminate redundant coupling parameters.

In the examples below, we performed Bayesian model reduction to enforce reciprocal coupling among states, given that extrinsic connections in the brain are almost universally recurrent (Felleman & Van Essen, 1991; N. Markov et al., 2013; N. T. Markov et al., 2014). This was implemented by combining the changes in variational free energy—or log model evidence—when removing connections between two states in both directions. If model evidence increased by three natural units (i.e., a log odds ratio of exp(3):1 or 20:1), both connections were removed; otherwise, both were retained. In addition, we precluded long-range coupling (beyond 32 mm) and used Bayesian model reduction to identify the most likely upper bound on the spatial reach of coupling between nonhomologous particles (i.e., particles that did not occupy homologous positions in each hemisphere). These empirical connectivity priors were based upon a body of empirical work, suggesting that the density of axonal projections—from one area to another—declines exponentially as a function of anatomical separation (Ercsey-Ravasz et al., 2013; Finlay, 2016; Horvát et al., 2016; Wang & Kennedy, 2016). We will later examine the evidence for this kind of distance rule, based upon coupling among particles at the finest scale.
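For completeness, the Gaussian identity behind Bayesian model reduction can be written in a few lines. The sketch below scores the change in log evidence when the shrinkage prior on a reciprocal pair of connections is collapsed to (almost) zero variance, removing the pair only when the evidence for the reduced model exceeds the 3-nat threshold mentioned above. The numbers are illustrative, not taken from the paper's data.

```python
import numpy as np

def log_evidence_change(mu, P, mu0, P0, mu0_r, P0_r):
    """Change in log evidence when replacing prior N(mu0, inv(P0)) with a
    reduced prior N(mu0_r, inv(P0_r)), given posterior N(mu, inv(P)).

    This is the standard Gaussian Bayesian-model-reduction identity
    (cf. K. Friston & Penny, 2011); P denotes a precision (inverse covariance).
    """
    P_r = P + P0_r - P0                           # reduced posterior precision
    mu_r = np.linalg.solve(P_r, P @ mu + P0_r @ mu0_r - P0 @ mu0)
    dF = 0.5 * (np.linalg.slogdet(P)[1] + np.linalg.slogdet(P0_r)[1]
                - np.linalg.slogdet(P_r)[1] - np.linalg.slogdet(P0)[1])
    dF -= 0.5 * (mu.T @ P @ mu + mu0_r.T @ P0_r @ mu0_r
                 - mu0.T @ P0 @ mu0 - mu_r.T @ P_r @ mu_r)
    return dF

# Prune a reciprocal pair of connections: shrink both priors to (near) zero
# variance and remove the pair only if the evidence gain exceeds 3 nats.
mu0 = np.zeros(2); P0 = np.eye(2)                 # full shrinkage priors
mu = np.array([0.02, -0.01]); P = 50 * np.eye(2)  # posterior over the pair
P0_r = 1e8 * np.eye(2)                            # reduced prior: pinned at 0
dF = log_evidence_change(mu, P, mu0, P0, mu0, P0_r)
print(dF > 3)                                     # True => remove both connections
```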

In summary, Equation 7 is used to evaluate the effective connectivity of a dynamic causal model based upon the Jacobian of a stochastic differential equation under some simplifying assumptions. In brief, we start with a linear state space model, in which the response variable y (the multivariate BOLD time series) is a linear convolution (K) of some hidden states x subject to observation and system noise ωy and ωx, respectively (cf. Equation 5). We can linearize this state space model (cf. Equation 6) such that it can be written in matrix form as a general linear model (cf. Equation 7). This linearized system is then solved using standard variational Laplace for parametric empirical Bayes (PEB) that provides the (Gaussian) posterior estimates for the system parameters (elements of the Jacobian) and hyperparameters (the unknown covariance of the observation and system noise). Since this scheme uses PEB for model inversion, it is automatically protected against becoming trapped in local minima. PEB uses the formal apparatus of variational Laplace, which optimizes a free energy lower bound on log evidence that balances model complexity and accuracy. Given the posterior distributions over the system parameters (and hyperparameters) and the model evidence, one can then use Bayesian model reduction procedures to prune away any redundant couplings. This constitutes a computationally efficient inversion scheme that can invert very large systems on a standard laptop: at the finest scale, the Jacobian had 1,024 × 1,024 elements and took about 45 min to infer.

Computationally, the benefit of linearizing the system in this way means that one can evaluate the posterior coupling parameters or elements of the Jacobian region by region (cf. Frassle et al., 2017). This means that, provided one is prepared to wait long enough, one can invert large systems with thousands of regions or parcels. On a personal computer, it takes about an hour to evaluate the Jacobian for coupling among 1,024 states. To illustrate the renormalization group procedure practically, we applied it to the fMRI time series from a single subject. These time series are the same data used to illustrate previous developments in dynamic causal modeling. Time series data were acquired from a normal subject at 2 Tesla using a Magnetom VISION (Siemens, Erlangen) whole-body MRI system. Contiguous multislice images were acquired with a gradient echo-planar sequence (TE = 40 ms; TR = 3.22 s; matrix size = 64 × 64 × 32, voxel size 3 × 3 × 3 mm). Four consecutive 100-scan sessions were acquired, comprising a sequence of 10-scan blocks under five conditions. The first was a dummy condition to allow for magnetic saturation effects. In the second, Fixation, the subject viewed a fixation point at the center of the screen. In an Attention condition, the subject viewed 250 dots moving radially from the center at 4.7 degrees per second and was asked to detect changes in radial velocity. In No attention, the subject was asked to look at the moving dots. In the last condition, the subject viewed stationary dots. The order of the conditions alternated between Fixation and photic stimulation. The subject fixated on the center of the screen in all conditions. No overt response was required in any condition, and there were no actual speed changes. In contradistinction to normal procedures in functional connectivity fMRI analyses, the time series were not smoothed (other than adjusting for ultraslow scanner drifts). This is because the random fluctuations at all timescales play a material role in the decomposition at hand. Informed consent from the subject was obtained and the study was approved by the Human Ethics Review Committee of University College London.

In the exemplar analyses below, we started at a scale where each particle can be plausibly summarized with a single state. This single state was the principal eigenstate following a principal components analysis of voxels that lay within about 4 mm of each other. This can be thought of as reducing the dynamics to a single mode of the Markov blanket of this small collection of voxels. Practically, this simply involved taking all voxels within a fixed radius of the voxel showing the largest variance, performing a singular value decomposition, and recording the first eigenvariate. These voxels were then removed, and the procedure repeated until the entire multivariate time series was reduced to 1,024 eigenstates, where each eigenstate corresponds to a simple particle. See Figure 3. Clearly, we could have summarized the dynamics of each collection of voxels with two or more eigenstates; however, for simplicity we will assume that the eigenstate with the greatest variance is a sufficient summary of the slow, nondissipative dynamics of this smallest scale. Interestingly, this variance is proportional to the characteristic time constant of systemic dynamics; namely, the negative inverse of the eigenvalues of the underlying Jacobian (see the final section). In other words, as the (negative) principal eigenvalue of effective connectivity approaches zero from below, the principal eigenvalue of functional connectivity (i.e., variance) increases; see Equation 9 in Lopez et al. (2014).
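The greedy eigenvariate extraction just described is easy to sketch. The code below is a schematic stand-in (synthetic data, plain SVD) for the actual preprocessing; the variable names and the synthetic example are ours.

```python
import numpy as np

def greedy_eigenvariates(Y, xyz, radius=4.0, n_particles=1024):
    """Reduce a voxel time series to particle eigenstates, as a sketch.

    Y   : (T, V) time series, one column per voxel.
    xyz : (V, 3) voxel coordinates in mm.
    Repeatedly seeds on the highest-variance unassigned voxel, pools all
    unassigned voxels within `radius`, and keeps the first eigenvariate
    (principal component) of that pool as the particle's state.
    """
    Y = Y - Y.mean(0)
    free = np.ones(Y.shape[1], dtype=bool)
    states, supports = [], []
    for _ in range(n_particles):
        if not free.any():
            break
        var = np.where(free, Y.var(0), -1.0)
        seed = int(np.argmax(var))
        near = free & (np.linalg.norm(xyz - xyz[seed], axis=1) <= radius)
        U, S, _ = np.linalg.svd(Y[:, near], full_matrices=False)
        states.append(U[:, 0] * S[0])         # first eigenvariate
        supports.append(np.flatnonzero(near))
        free[near] = False
    return np.column_stack(states), supports

# Synthetic check: 200 voxels on a line, driven by two latent sources.
rng = np.random.default_rng(2)
xyz = np.c_[np.arange(200.0), np.zeros((200, 2))]
src = rng.normal(size=(360, 2))
Y = src @ rng.normal(size=(2, 200)) + 0.5 * rng.normal(size=(360, 200))
X, supports = greedy_eigenvariates(Y, xyz, radius=4.0, n_particles=16)
print(X.shape, len(supports))                 # -> (360, 16) particles
```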

Figure 3. 

Distributed variance. This figure illustrates the variance explained by particles at the first level. The upper panel is a maximum intensity projection of the variance of the fMRI time series, for a single subject over 360 scans (with a repetition time of 3.22 s) in voxel space. One can see that visual (i.e., striate) and extrastriate regions have been preferentially engaged; however, there is distributed activity throughout the brain. The upper right panel shows the corresponding variance in terms of the eigenmodes of 1,024 particles. As in subsequent figures, these projections involve weighting the absolute value of each eigenmode by the quantity in question; here, the variance. This maximum intensity projection shows that the particles furnish a reasonably faithful summary of voxel-specific variance. The lower right panel shows the same variance assigned to the spatial support of each eigenmode, to illustrate the coarse-graining when assembling particles from voxels. These characterizations of fluctuations over time have been framed in terms of variance. We will see later that variance can be interpreted as a dissipative time constant. In other words, in this example, visual areas show the least dissipation, with dynamics that decay slowly. The lower left panel shows the Euclidean distance between the centers of pairs of particles. The center of each particle was defined as the expected anatomical location, where the probability density over location was taken to be a softmax function of the absolute value of the eigenmode over voxels. In this and subsequent figures, Euclidean distances are evaluated after projecting centers across the sagittal plane—that is, superimposing homologous particles in the right and left hemispheres—so that each parcel lies close to its homologue in the opposite hemisphere. This reflects our prior beliefs about interhemispheric coupling—which brings homologous regions close together—in terms of path lengths.


Following Bayesian model reduction, we now have a sparse Jacobian or directed, weighted adjacency matrix describing the dynamical coupling between univariate states of 1,024 particles (see Figure 4). This Jacobian can now be subject to a particular partition to identify the most connected internal states and their Markov blanket, following the procedures summarized in Figure 2. This grouping process (i.e., the G operator) was repeated until all 1,024 states were accounted for: in this example, grouping 1,024 states into 57 particles. For simplicity, and for consistency with the first level, each particle was assigned a single internal state. The ensuing partition was then subject to an adiabatic reduction (i.e., the R operator) by taking the eigenvectors of the Jacobian describing the intrinsic dynamics of each particle’s blanket states.

Figure 4. 

Sparse connectivity. This figure illustrates the sparsity of effective connectivity using Bayesian model reduction. The left panel shows the log evidence for a series of models that precluded connections beyond a certain distance or radius. This log evidence has been normalized to the log evidence of the model with the least marginal likelihood (i.e., coupling over less than 10 mm). These results show that a model with local connectivity (about 18 mm) has the greatest evidence. In other words, effective connections beyond this distance are redundant, in the sense that they add more complexity than is licensed by the attendant increase in accuracy. The middle panel shows the ensuing sparse coupling (within the upper bound of 32 mm) as an adjacency matrix, where particles have been ordered using a nearest neighbor scheme in voxel space. The blue dots indicate connections that have been removed by Bayesian model reduction. In this instance, nearly 60% of estimated connections were redundant. The right panel zooms in on the first 32 particles, to show some local connections that were retained (red) or removed (blue).


The eight principal eigenstates were retained, provided the real parts of their eigenvalues were less than 1 Hz in magnitude. This is the adiabatic approximation that dispenses with modes that dissipate quickly—here, within a second. This reduces the intrinsic coupling to a diagonal matrix λnn(i), corresponding to the eigenvalues of the intrinsic Jacobian ∂xnfn(i). The extrinsic coupling λnm(i) contains complex elements that couple the eigenstates of one particle to the eigenstates of another. We will return to how these Jacobians manifest in terms of connectivity later.

In short, we now have a summary of dynamics at the scale above in terms of the eigenstates of a particle that, by construction, have been orthogonalized. These constitute the vector states for the next application of the RG operator to produce a description of dynamics at subsequent scales. See Figure 5 through Figure 8. These examples show that by the fourth scale, we have reduced the dynamics to a single particle, shown in a maximum intensity projection format in Figure 8. We can project particles onto anatomical space because each state that constitutes a particle at any scale is a mixture of states that, ultimately, can be associated with a particular location in voxel space. In other words, particles inherit a spatial location from the scale below, enabling one to visualize (and quantify) the spatial scale of particles at successively higher scales. We will refer to this characterization of an eigenstate as an eigenmode; namely, a pattern in voxel space whose amplitude is determined by the corresponding eigenstate. One can express the eigenmodes in terms of the eigenvectors at each scale as follows:
$$u_{n,j}^{(i)} = \xi^{(1)}\, \xi^{(2)} \cdots \xi^{(i-1)}\, \xi_{n,j}^{(i)} \tag{9}$$
Here, ξ(k) denotes the (zero-padded) matrix of eigenvectors over all particles at scale k, and ξn,j(i) is the eigenvector of the j-th eigenstate of the n-th particle at the final scale.
This gives the eigenmode of the j-th eigenstate of the n-th particle at the i-th scale.
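Equation 9's back-projection is just a chain of matrix products. A toy sketch follows, with arbitrary shapes and random matrices standing in for the stored eigenvectors at each scale:

```python
import numpy as np

def eigenmode(voxel_modes, xis, i, j):
    """Voxel-space pattern of the j-th eigenstate at scale i (Equation 9).

    voxel_modes : (V, n1) eigenmodes of the first-level particles in voxel
                  space (one column per first-level eigenstate).
    xis         : xis[k] maps eigenstates at scale k + 2 onto those at
                  scale k + 1 (columns are zero-padded eigenvectors).
    """
    mode = voxel_modes                       # scale 1 -> voxels
    for k in range(i - 1):
        mode = mode @ xis[k]                 # compose eigenvectors upwards
    return mode[:, j]

# Toy composition: 8 voxel patterns -> 4 scale-2 eigenstates -> 2 at scale 3.
rng = np.random.default_rng(3)
V1 = rng.normal(size=(100, 8))               # first-level eigenmodes
xis = [rng.normal(size=(8, 4)), rng.normal(size=(4, 2))]
print(eigenmode(V1, xis, i=3, j=0).shape)    # -> (100,) voxel pattern
```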
Figure 5. 

Brain particles. This figure illustrates the partition of states at the first level. The format of this figure is replicated in subsequent figures that detail a particular decomposition at increasing scales. The upper left panel shows all the constituent particles as a maximum intensity projection, where the spatial support of each particle has been color-coded according to the variance explained by its eigenmode. One can see that nearly the entire brain volume has been effectively tiled by 1,024 particles. The upper middle panel shows the corresponding adjacency matrix or coupling among particles. The colored circles encode the identity of each particle. In this instance, the particles have been arranged in order of descending dissipation (i.e., the real part of the eigenvalue of each particle’s Jacobian). The upper right panel shows these eigenvalues above the corresponding particle (encoded by colored dots) in terms of time constants (i.e., the negative inverse of the eigenvalues). The remaining panels show the first 12 particles as maximum intensity projections. The color of the background corresponds to the color that designates each particle. In this first level, each particle has a single eigenstate. The numbers in parentheses above each maximum intensity projection correspond to the number of internal, active, and sensory states, respectively, where the active and sensory states comprise blanket states. At this finest level, every eigenstate is a sensory state because it can influence—and be influenced by—the eigenstates of other particles. At this scale, one can see that the particles are small, with a standard deviation of about 3 mm (based on the softmax function of the absolute value of each particle’s eigenmodes). Here, the characteristic time constants of these particles are about 1 s. This should be compared with the equivalent distribution in the upper right panel of the next figure.

Figure 6. 

Particular partition at the second scale. This figure uses the same format as the previous figure; however, here, we are looking at particles at the next scale. In other words, aggregates of the eigenstates of the blankets of first-level particles. Here, there were 1,024 such eigenstates that have been partitioned into 57 particles. Each particle has one or more eigenstates; here, a total of 296. At this level, time constants have started to increase, including some particles that evince slow dynamics of about 10 s. Note that the particles now have a greater spatial scale and have, in most instances, a symmetric spatial deployment across hemispheres. This reflects the fact that the Jacobian includes transcallosal or interhemispheric coupling. For example, the first particle has one internal state (by design), 29 active states, and 44 sensory states. These different states are color-coded with white, light gray, and dark gray, respectively, to illustrate the characteristic “fried egg” arrangement in which internal states (white) are surrounded by blanket (i.e., active and sensory) states (in gray). The eigenmodes of this particle cover voxels in primary visual and extrastriate cortex. The second particle sits across the bilateral superior parietal cortices, while the third particle encompasses anterior (i.e., polar) temporal regions—and so on. The spatial scale of these particles corresponds roughly to a cytoarchitectonic parcellation. The ensuing (57) particles collectively comprise 296 eigenstates that are partitioned into five particles at the next level, corresponding roughly to lobar neuroanatomy.

Figure 7. 

Particular partition at the third level. This follows the same format as the previous figures. Here, the 57 particles from the previous scale are now partitioned into five particles that, collectively, possess 32 eigenstates. Here, the adjacency matrix is shown in image format, in terms of the real part of each (complex) Jacobian. At this scale, dynamics of each particle are becoming increasingly slow, with typical time constants between 5 and 10 s. The negative time constant reflects a positive eigenvalue that denotes an exponential divergence of trajectories that underwrites stochastic chaos. The five particles retain a degree of interhemispheric symmetry: the first particle has six internal states, 71 active states, and 87 sensory states. This particle covers a large dorsal portion of cortex, including parietal cortex and frontal eye fields. Conversely, the second particle covers large regions of frontal cortex, with the eight active states located in orbitofrontal regions. The third particle is located in posterior visual and inferotemporal regions, while the fourth particle subsumes anterior temporal and ventral regions. Interestingly, there is one small particle (with a single sensory state) in the anterior medial prefrontal cortex. These five lobe-like particles (with 32 eigenstates) now contribute to a single particle at the final (fourth) level.

Figure 8. 

Particular partition at the fourth scale. The five particles of the previous level have now been partitioned into a single particle with eight internal states (in rostral regions) and 24 sensory states (in caudal regions), in white and light gray, respectively. This particle possesses eight eigenstates, the first of which has a positive eigenvalue—or negative time constant (denoting stochastic chaos). At this scale, all of the eigenstates have protracted dynamics, with time constants in the order of 10 s. Note that there is no coupling among the eigenstates in the Jacobian. This means the dynamics are completely characterized by the leading diagonal terms, that is, the complex eigenvalues of the eight constituent eigenstates.


Note that it would have been possible to reevaluate the Jacobian using another dynamic causal model of the eigenstates at any particular level and then use Bayesian model reduction to eliminate redundant coupling parameters. This is an interesting alternative to using the estimates of the Jacobian based upon the first-order approximation at the smallest scale. We will explore the impact of reevaluating the Jacobian in subsequent work. For the purposes of the current illustration, we will retain the linear solutions at higher scales—based upon the finest scale—to illustrate that one can still reproduce the emergent phenomena of interest described below. These dynamical phenomena are therefore directly attributable to local linear coupling with a particular sparsity structure that is sufficient to produce interesting self-organized dynamics at higher scales. Before taking a closer look at dynamics over scales, this section concludes with a brief characterization of coupling at the smallest scale.

The Jacobian from the above analysis summarizes the effective connectivity at the smallest scale, where each particle (i.e., node) has a reasonably precise anatomical designation. This means that we can interpret the elements of the Jacobian in terms of directed (effective) connectivity. We had anticipated that this would mirror the exponential distance rule established through anatomical tracing studies (Finlay, 2016; Horvát et al., 2016; Wang & Kennedy, 2016). However, it did not. Instead, this (linear) characterization of effective connectivity was better explained with a power law that, interestingly, was quantitatively distinct for inhibitory (i.e., negative) and excitatory (i.e., positive) connections (i.e., elements of the Jacobian).

Figure 9 summarizes the statistical characteristics of coupling among particles at the first level. The upper left panel shows each connection in terms of the real part of the corresponding Jacobian in hertz, against the distance spanned by the connection (i.e., the Euclidean distance between the centers of the two particles). Two things are evident from this scatterplot: first, positive (excitatory, red dots) connections dominate in the short range (around 8 mm), while negative (inhibitory, blue dots) connections dominate around 16 mm; second, although there is variability, the dependency of coupling strength on distance shows lawful behavior that is disclosed by plotting log-coupling (real part) against log-distance (upper right panel). The circles are the averages within bins (discrete ranges) of the dots in the upper left panel. A linear regression suggests a scaling exponent of −1.14 for excitatory coupling and a smaller scaling exponent of −0.52 for inhibitory coupling. This dissociation is consistent with a Mexican hat–like coupling kernel, with short-range excitation and an inhibitory penumbra. This kind of architecture predominates in neural field models of cortical and subcortical coupling (e.g., Coombes, 2005; Itskov, Curto, Pastalkova, & Buzsaki, 2011; Moran, Pinotsis, & Friston, 2013).
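For readers who want to reproduce this kind of analysis, a minimal sketch follows. It assumes only two flattened arrays, the real parts of the coupling elements (in hertz) and the corresponding inter-particle distances (in millimeters); the function name and binning scheme are illustrative rather than those of the original pipeline:

```python
import numpy as np

def scaling_exponent(coupling, distance, n_bins=12):
    """Estimate a power-law exponent by regressing bin-averaged
    log-coupling on log-distance (cf. the upper right panel of Figure 9)."""
    log_d, log_c = np.log(distance), np.log(np.abs(coupling))
    edges = np.linspace(log_d.min(), log_d.max(), n_bins + 1)
    idx = np.digitize(log_d, edges[1:-1])   # bin membership, 0..n_bins-1
    xb = [log_d[idx == b].mean() for b in range(n_bins) if (idx == b).any()]
    yb = [log_c[idx == b].mean() for b in range(n_bins) if (idx == b).any()]
    slope, _ = np.polyfit(xb, yb, 1)        # slope is the scaling exponent
    return slope

# hypothetical usage, with J_real the real part of the Jacobian and D the
# matrix of Euclidean distances between particle centers:
# alpha_exc = scaling_exponent(J_real[J_real > 0], D[J_real > 0])  # ~ -1.14
# alpha_inh = scaling_exponent(J_real[J_real < 0], D[J_real < 0])  # ~ -0.52
```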

Figure 9. 

Local connectivity. This figure reports some of the statistics of dynamical coupling among particles at the first level. The upper left panel plots each connection in terms of the real part of the Jacobian in hertz, against the distance spanned by the connection. Two things are evident from this scatterplot: first, positive (excitatory, red dots) connections dominate in the short range (around 8 mm), while negative (inhibitory, blue dots) dominate around 16 mm. The upper right panel plots the log-coupling (real part) against log-distance, where circles report the averages in bins (discrete ranges) of the dots in the left panel. A linear regression gives a scaling exponent of −1.14 for excitatory coupling and a scaling exponent of −0.52 for inhibitory coupling. The lower panel plots the strength of reciprocal connections against each other, to illustrate the relative proportions of recurrent excitatory and inhibitory coupling; here 65% and 31%, respectively.


The lower panel plots the strength of reciprocal connections against each other, to illustrate the relative proportions of recurrent excitatory and inhibitory coupling, here 65% and 31%, respectively. Only about 4% of connections show an antisymmetry, that is, excitatory in one direction and inhibitory in the other. The rarefied region in the center of this scatterplot reflects the fact that connections with small coupling strengths were eliminated during Bayesian model reduction (see Figure 4). In the next section, we will see how this sparse local coupling engenders progressively more structured and itinerant dynamics at increasing spatial and temporal scales.

This section focuses on the intrinsic dynamics of each particle at different scales by associating the Jacobian of each particle with Lyapunov exponents. For people not familiar with dynamical systems theory, the Lyapunov exponents score the average exponential rate of divergence or convergence of trajectories in state space (Lyapunov & Fuller, 1992; Pavlos et al., 2012; Yuan, Ma, Yuan, & Ao, 2011). Because we are dealing with a linearized system, the Lyapunov exponents are the same as the eigenvalues of the Jacobian describing intrinsic coupling. By construction, this is a leading diagonal matrix containing intrinsic eigenvalues whose real parts are close to zero. In terms of a linear stability analysis, the real part of these eigenvalues (i.e., self-induced decay) corresponds to the rate of decay. This means that as the eigenvalue approaches zero from below, the pattern of activity encoded by this eigenstate decays more and more slowly. This is the essence of critical slowing and means that, from the point of view of dynamical stability, this eigenstate has become unstable (Haken, 1983; Jirsa, Friedrich, Haken, & Kelso, 1994; Mantegna & Stanley, 1995; Pavlos et al., 2012). The complement of critical instability is a stable fast eigenstate that decays very quickly, that is, an eigenstate whose eigenvalue has a large negative real part.

The imaginary part of the eigenvalue describes the characteristic frequency at which this decay is manifest. If the imaginary part is zero, the system decays monotonically. However, complex values mean that the intrinsic dynamics acquire a sinusoidal aspect. Because each particle has a number of eigenstates, an ensemble of particles can be construed as loosely coupled phase oscillators (Breakspear & Stam, 2005; De Monte, d’Ovidio, & Mosekilde, 2003; Kaluza & Meyer-Ortmanns, 2010; Kayser & Ermentrout, 2010), featuring multiple frequencies. The associated dynamics of a single particle can be visualized by plotting its eigenvalues in the complex number plane. The closer the eigenvalue to the vertical axis, the slower the dynamics, such that as the real eigenvalue approaches zero (i.e., from the left), the particle approaches a transcritical bifurcation (at zero) and displays a simple form of critical slowing.
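In code, this characterization reduces to an eigen-decomposition; the sketch below (illustrative names, toy Jacobian) recovers time constants and characteristic frequencies from the complex eigenvalues:

```python
import numpy as np

def intrinsic_timescales(J):
    """Summarize intrinsic dynamics from a particle's Jacobian. Under the
    local linear approximation, the eigenvalues are the Lyapunov exponents:
    Re(lam) sets the decay rate (critical slowing as Re(lam) -> 0 from
    below) and Im(lam) sets the oscillation frequency."""
    lam = np.linalg.eigvals(J)
    tau = -1.0 / lam.real           # time constants (s); negative if unstable
    freq = lam.imag / (2 * np.pi)   # characteristic frequencies (Hz)
    return lam, tau, freq

# a toy, weakly damped oscillator: eigenvalues -0.05 +/- 0.5j, giving a
# 20-s time constant and a frequency of about 0.08 Hz
J = np.array([[-0.05, -0.5],
              [ 0.5,  -0.05]])
lam, tau, freq = intrinsic_timescales(J)
```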

This characterization of intrinsic dynamics—at different scales—is illustrated in the right panels of Figure 10. Note that the complex values are symmetrically paired (dots of the same color). The key thing to observe here is that when we look at the eigenvalues of particles at higher scales, there are some eigenstates that approach criticality and start to show intrinsic oscillatory behavior. This is one of the key observations from the current renormalization scheme; namely, there is a necessary slowing as one progresses from one scale to the scale above. Furthermore, at higher scales intrinsic dynamics start to appear with progressively slower frequencies.

Figure 10. 

Transfer functions. This figure characterizes the dynamics at successive scales in terms of transfer functions, as quantified by the complex eigenvalues (cf. a pole-zero map). The left column shows the transfer functions (as functions of frequency) for all particles with a complex eigenvalue, at successive scales. These eigenvalues are shown in the right column in the complex number plane. As we ascend from one scale to the next, the real part of the eigenvalue approaches zero from the left—and the number of eigenvalues (i.e., number of particles) falls with the coarse-graining. The complex part of an eigenvalue corresponds to the peak frequency of the associated transfer function, while the dispersion around this peak decreases as the real part approaches zero. The emergence of spectral peaks in the transfer functions inherits from the complex parts of the eigenvalues, which emerge under asymmetric coupling with solenoidal flow. The next figure addresses the following question: Do these solenoidal dynamics vary in a systematic way over the brain?


Another way of characterizing temporal dynamics is to use linear systems theory to map the eigenvalues to the spectral density of the time series that would be measured empirically. This rests upon standard transforms and the convolution theorem, which enable us to express the system's first-order kernels as a function of the Jacobian (Lopez et al., 2014). In frequency space, these kernels correspond to transfer functions and describe the spectral power that is transferred from the random fluctuations to the macroscopic dynamics of each eigenstate. The left panels of Figure 10 show the transfer functions of the eigenstates of each particle at different scales. At the finest scale, power is spread over a large range of frequencies. At progressively higher scales, the power becomes more concentrated in the lower frequencies, with a transfer function that has a characteristic Lorentzian form. Crucially, the frequencies at the highest scale correspond to the characteristic ultraslow frequencies studied in resting-state fMRI; namely, < 0.01 Hz. This is an interesting observation, which suggests that one can explain ultraslow fluctuations in resting-state fMRI purely in terms of local directed coupling among small particles of brain tissue. Note that this explanation does not involve any hemodynamics, because the Jacobian that gives rise to these slow oscillations pertains to the neuronal states (prior to hemodynamic convolution). In other words, this is not an artifact of removing fast frequencies from the measured fMRI signals.
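Under the same first-order assumptions, this mapping can be sketched in a few lines; the function names are illustrative, and white fluctuations with isotropic amplitude γ are assumed:

```python
import numpy as np

def transfer_function(J, omega):
    """First-order transfer functions H(w) = (i*w*I - J)^-1 that map
    random fluctuations to eigenstate dynamics (convolution theorem)."""
    n = J.shape[0]
    return np.array([np.linalg.inv(1j * w * np.eye(n) - J) for w in omega])

def spectral_density(J, omega, gamma=1.0):
    """Spectral power transferred from white fluctuations with amplitude
    gamma * I to the states: S(w) = H(w) @ Gamma @ H(w)^H."""
    H = transfer_function(J, omega)
    return gamma * np.einsum('wij,wkj->wik', H, H.conj())

# hypothetical usage over the ultraslow band studied in resting-state fMRI:
# omega = 2 * np.pi * np.linspace(1e-3, 0.1, 200)
# S = spectral_density(J, omega)   # Lorentzian peaks near Im(lam) / 2pi
```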

One might ask whether there is any systematic variation of these ultraslow frequencies across the brain. Figure 11 reports the implicit intrinsic timescales at intermediate scales (second scale, upper rows; third scale, lower rows). The left column shows the eigenmodes in terms of their principal frequency, that is, the largest complex eigenvalue (divided by 2π). The right column shows the corresponding eigenmodes in terms of their principal time constants, that is, the reciprocal of the largest negative real part. These two characterizations—principal frequency and time constant—speak to different aspects of intrinsic timescales, both of which contribute to the shape of an eigenstate’s transfer function. The first quantifies the frequency of solenoidal flow, while the second reflects the rate of decay associated with the dissipative flow. We will unpack solenoidal and dissipative flows in the last section, in terms of density dynamics. In brief, the flows of a random dynamical system, at nonequilibrium steady state, can be decomposed into two orthogonal components. The first (dissipative) component is a gradient flow, in the direction of increasing density. This flow counters the dispersive effect of random fluctuations. The solenoidal (non-dissipative) component circulates on the isoprobability contours, thereby having no effect on the nonequilibrium steady-state density. Heuristically, if one imagines water flowing down a plughole, the dissipative part is the flow out of the basin, while the solenoidal part accounts for the circular motion of water. Generally speaking, when the random fluctuations are small, the gradient flows disappear (at steady state), leaving the solenoidal flow, that is, the classical mechanics that would apply to heavenly bodies. However, when random fluctuations are in play, both solenoidal and dissipative flows characterize the dynamics at steady state.

Figure 11. 

Intrinsic timescales in the brain. This figure reports intrinsic timescales at intermediate scales (second scale, upper rows; third scale, lower rows). The left column shows the eigenmodes in terms of their principal frequency, that is, the largest complex eigenvalue (divided by 2π). The right column shows the corresponding eigenmodes in terms of their principal time constants, that is, the reciprocal of the largest negative real part.


It is clear from these results that caudal (i.e., posterior) regions have faster intrinsic frequencies, relative to rostral (i.e., anterior) regions. Interestingly, in this example, the inferotemporal and ventral eigenmodes also show a relatively high frequency. At the third scale, this caudal-rostral gradient is more evident, suggesting that faster solenoidal dynamics dominate in posterior parts of the brain. This is consistent with both theoretical and empirical findings suggestive of a gradient of timescales—as one moves from the back to the front of the brain and, implicitly, from hierarchically lower areas to higher areas (Cocchi et al., 2016; Hasson, Yang, Vallines, Heeger, & Rubin, 2008; Kiebel, Daunizeau, & Friston, 2008; Liegeois et al., 2019; Murray et al., 2014; Wang & Kennedy, 2016). Note that the frequencies in question here are very slow; namely, about 0.01 Hz or below. These are the ultraslow frequencies typically characterized in resting-state fMRI (Liegeois et al., 2019). In the present setting, these ultraslow frequencies are an emergent property of scale-invariant behavior, which arises when one moves to spatiotemporal scales suitable for describing lobar dynamics or large intrinsic brain networks.

The eigenvalues in Figure 10 take positive real values at higher (second and third) scales. This means that they have crossed the zero threshold to engender a transcritical bifurcation. Strictly speaking, these produce solutions that cannot be realized because of an exponential divergence of trajectories. This reflects the first-order approximation that we are using to summarize the dynamics. Although this linear approximation precludes stochastic chaos, positive real values speak to the notion that some particles at higher scales become excitable for short periods of time. This means that we are moving away from a loosely coupled oscillator model—that has a fixed point or limit cycle attractor—towards what physicists call active or excitable matter (Keber et al., 2014; Ramaswamy, 2010). This is a nice metaphor for the brain and means that if the particles that constitute active (gray) matter are considered in isolation, they can show autonomous dynamics that can be characterized as stochastic chaos or itinerancy.

At this point we can return to the renormalization group and RG scaling behavior. This scaling behavior depends upon the link between various parameters of the system's Lagrangian (or equivalent characterization of dynamics) between successively higher levels. Consider the following RG flow or beta function as an instance of Equation 4:
\[
\beta_\tau = \ln \tau^{(i+1)} - \ln \tau^{(i)} \;\geq\; 0
\]
This says that as we move from one scale to the next, the timescale increases by e^{β_τ} ≥ 1. Invoking the same beta function for spatial scale induces a relationship between temporal and spatial scales in the form of a power law with a scaling exponent α. This exponent corresponds to the ratio of the time and spatial exponents of the beta functions:
\[
\tau^{(i)} = e^{i\beta_\tau}\,\tau^{(0)},
\qquad
s^{(i)} = e^{i\beta_s}\,s^{(0)}
\;\;\Rightarrow\;\;
\tau \propto s^{\alpha}
\tag{10}
\]
\[
\alpha = \beta_\tau / \beta_s
\]
The last equality in the first line follows by eliminating the scale i from the pair of beta functions. Intuitively, this scaling behavior means that as we move from one scale to the next, things get slower and bigger—but at different geometric rates. This difference gives rise to a scaling exponent that links the increases in spatial scale to increases in temporal scale. We evaluated the characteristic time and spatial constants for each scale by taking the mean of the real eigenvalues and the spatial dispersion of the corresponding eigenmodes associated with all particles at each scale:
\[
\tau^{(i)} = -\big\langle \operatorname{Re}\,\lambda^{(i)}_k \big\rangle_k^{-1},
\qquad
s^{(i)} = \big\langle \sigma^{(i)}_k \big\rangle_k
\tag{11}
\]
Plotting the logarithms of these values against each other allows one to estimate the scaling exponent using linear regression. Figure 12 shows the results of this analysis across all scales. The scaling exponent here was 1.14. This is not dissimilar to the value of 1.47 obtained with a similar analysis of murine calcium imaging data (Fagerholm et al., 2019), where coarse-graining was implemented by averaging over spatial blocks. To put this value into perspective, the scaling exponent for Kepler's laws of motion is 1.5. This scaling exponent reflects the disparity in spatial and temporal constants, where the temporal constant increases by a factor of 2.37 from one scale to the next, while the spatial support increases by 2.13.
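A minimal sketch of this regression follows, assuming the per-scale characteristic constants of Equation 11 have already been computed; the numerical values are illustrative, chosen to reproduce the geometric factors reported above:

```python
import numpy as np

def scale_exponent(tau, s):
    """Regress log time constants on log spatial constants across scales;
    the slope is the scaling exponent alpha (cf. Figure 12)."""
    alpha, _ = np.polyfit(np.log(s), np.log(tau), 1)
    return alpha

# illustrative constants growing geometrically by 2.37 (time) and 2.13
# (space) per scale, as estimated in the text:
tau = 0.4 * 2.37 ** np.arange(4)   # seconds
s = 2.0 * 2.13 ** np.arange(4)     # millimeters
alpha = scale_exponent(tau, s)     # log(2.37)/log(2.13) ~ 1.14
```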
Figure 12. 

Scale invariance. This figure illustrates scaling behavior across the scales of the particular decomposition. The upper panel plots the real part of the eigenvalues of each particle against its spatial scale; namely the caliper width of each particle’s eigenmode. This is replicated for each of the four scales, denoted by the different colors (green, pink, cyan, and puce, respectively). The expected values are shown as encircled large dots. The lower left panel plots the logarithms of these temporal and spatial expectations against each other. The resulting regression slope corresponds to the scaling exponent; here, 1.14. The light gray circles correspond to what would have been seen at higher and lower scales. The lower right plot shows the same regression in terms of the implicit time constant, as a function of spatial scale expressed in millimeters. The red lines correspond to the coarse scales (of i = 6 and i = 8) depicted in the left panel—suggesting characteristic time constants in the order of 60 and 360 s. This scaling behavior suggests that as we increase the spatial scale or coarse-graining, dynamics become slower, as the real parts of particular eigenvalues approach zero from below.


This scale-free behavior means that we can evaluate the time constants at scales that we have not characterized empirically. Table 1 lists these extrapolated or projected timescales, right down to the scale of dendritic spines and up to higher scales at which it would be appropriate to talk about networks of brains, such as social communities or institutions (a minimal numerical sketch of this extrapolation follows Table 1). Note that the extrapolation of scaling to conferences (Table 1) should not be taken too seriously. There would be good reason to estimate the scaling exponent from a larger database, since small errors in the estimation of scaling exponents can amplify quickly in extrapolation.

Table 1. 
Spatiotemporal scales and examples

Scale | Spatial scale | Timescale | Example
−8 | 4.38 μm | 380 μs | Dendritic spines occur at a density of up to 5 spines per μm of dendrite. Spines contain fast voltage-gated ion channels with time constants in the order of 1 ms.
−4 | 89.3 μm | 11.9 ms | A cortical minicolumn: a minicolumn measures on the order of 40–50 μm in transverse diameter, with 80-μm spacing (Peters & Yilmaz, 1993). The membrane time constant of a typical cat layer III pyramidal cell is about 20 ms.
0 | 1.82 mm | 374 ms | A cortical hypercolumn (e.g., a 1-mm expanse of V1 containing ocular dominance and orientation columns for a particular region in visual space; Mountcastle, 1997). Typical duration of evoked responses in the order of 1 to 300 ms (cf. the cognitive moment).
4 | 37.2 mm | 11.8 s | The cerebellum is about 50 mm in diameter, corresponding to the size of cortical lobes. Sympathetic unit activity associated with Mayer waves with a frequency of 0.1 Hz (period of 10 s).
8 | 758 mm | 6.15 min | A dyadic interaction (e.g., a visit to your doctor).
12 | 15.5 m | 3.22 hr | A dinner party for six guests, lasting for several hours.
16 | 0.31 km | 4.21 days | An international scientific conference (pre-coronavirus).
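As a worked illustration of how these projections arise, the sketch below extrapolates from the hypercolumn row of Table 1 (scale 0: 1.82 mm, 374 ms), assuming the per-scale geometric factors of roughly 2.13 (space) and 2.37 (time) estimated above; small rounding differences from the tabulated values are expected:

```python
def extrapolate(i, s0=1.82e-3, tau0=0.374, rho_s=2.13, rho_t=2.37):
    """Project spatial (m) and temporal (s) constants to scale i, under
    the scale-free assumption; anchors and factors are illustrative."""
    return s0 * rho_s ** i, tau0 * rho_t ** i

s, tau = extrapolate(16)   # ~ 0.33 km and ~ 4.3 days (cf. the last row)
```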

It may help to distinguish between scalable and scale-free systems. A scalable system is one in which performing a scale transformation on a system's state gives a new state that, crucially, is also a solution of the governing equation of motion. This is what Landau refers to as “mechanical similarity” in the context of Lagrangian mechanics. For example, orbital trajectories are scalable, since increasing the scale of the system gives new (larger) orbits with different (slower) periods. Crucially, however, these new orbits also follow Newton's second law regardless of size—hence qualifying as “scalable.” On the other hand, a scale-free system has no characteristic length scale, and images taken at different resolutions are statistically unchanging.
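To make mechanical similarity concrete, the standard worked case for an inverse-square (Kepler) potential is sketched below; this is textbook material, included only to unpack the orbital example:

```latex
% Rescale lengths and times: r -> \sigma r, t -> \sigma^{3/2} t.
% Acceleration then scales as \sigma/\sigma^{3} = \sigma^{-2}, exactly
% matching the inverse-square force, so the rescaled orbit still solves
% Newton's second law:
\[
\ddot{r} \mapsto \sigma^{-2}\,\ddot{r},
\qquad
-\frac{GM}{r^{2}}\,\hat{r} \mapsto -\sigma^{-2}\,\frac{GM}{r^{2}}\,\hat{r}
\quad\Rightarrow\quad
T \propto R^{3/2}
\]
% Eliminating \sigma gives the period-radius power law with exponent 1.5,
% the Kepler value quoted alongside the empirical exponent of 1.14 above.
```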

This completes our discussion of scale invariance and associated dynamics, where we have taken a special interest in the temporal scaling behavior that emerges from local connectivity at smaller scales of analysis. In the next section, we turn to the coupling between particles and see what this has to say in terms of how intrinsic brain networks influence each other.

In this section, we consider the off-diagonal elements of the Jacobian at the successive scales afforded by the renormalization group. By construction, these terms couple different particles. The ij-th element of the nm-th block of the Jacobian couples the j-th eigenstate of the m-th particle to the i-th eigenstate of the n-th particle.
\[
\dot{v}^{(n)}_i \;=\; \sum_{m,j} J^{nm}_{ij}\, v^{(m)}_j
\tag{12}
\]
This directed coupling is generally complex. The complex part can be thought of as inducing a phase shift or delay in the influence of one eigenstate on another. The real part is of more interest here and corresponds to a rate constant, much like the real parts of the Lyapunov exponents of the intrinsic coupling describe the rate of decay. However, here, we are talking about the rate at which an eigenstate of one particle responds to the eigenstate of another. This means that large positive or negative real extrinsic coupling becomes interesting (previously, we had been discarding eigenstates with large negative intrinsic eigenvalues because they dissipate almost immediately). Figure 13 illustrates this extrinsic (between particle) coupling at the penultimate scale (scale three) in the form of a connectogram.
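Reading off this block structure is straightforward in practice; the sketch below assumes the scale's Jacobian is stored densely, with a hypothetical list of index arrays assigning eigenstates to particles:

```python
import numpy as np

def extrinsic_coupling(J, blocks, n, m):
    """Extract the block coupling the eigenstates of particle m (source)
    to those of particle n (target); element [i, j] couples the j-th
    eigenstate of m to the i-th eigenstate of n (Equation 12)."""
    B = J[np.ix_(blocks[n], blocks[m])]
    return B.real, np.angle(B)   # rate constants (Hz) and phase shifts

# hypothetical usage at the third scale (five particles, 32 eigenstates):
# rate, phase = extrinsic_coupling(J3, blocks3, n=0, m=3)
```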
Figure 13. 

Extrinsic connectivity. This figure illustrates asymmetric extrinsic (between particle) coupling at the penultimate scale (scale three). The upper panels reproduce the results in Figure 7, while the lower panel is a connectogram illustrating the coupling among the (32) eigenstates that constitute the five particles at this scale. The width of each connector reflects the strength of the coupling, after dividing the strength into five bins and eliminating the lowest bin. The color of the dots corresponds to the color of the particle in the upper right panel. The color of the connectors corresponds to the source of the strongest (reciprocal) connection. In this example, the largest afferent connection is from eigenstate 24 to eigenstate 3. This corresponds to an influence of the first eigenstate of the fourth (cyan) particle on the third eigenstate of the first (green) particle. The coupling strength corresponds to the real part of the Jacobian, in hertz. The fact that coupling is mediated by complex coupling coefficients means that the influence of one eigenstate on another can show profound asymmetries in time. This is illustrated in the next figure, which examines the largest connection above in more detail.


The implications of complex extrinsic coupling can be understood in terms of cross-covariance functions of time that characterize delayed or lagged dependencies. Please note that the cross-covariance functions can be evaluated in a straightforward manner from the complex transfer functions shown in Figure 10. In other words, they can be derived directly from the Jacobian, under first-order assumptions. For example, Figure 14 characterizes these dependencies between the two eigenstates with the strongest coupling at this (third) scale. The implicit coupling is mediated by the corresponding element of the (complex) Jacobian—circled in red in the upper middle panel. The flanking panels on the left and right show the associated eigenmodes in voxel space. The middle row shows the autocovariance functions of the two eigenstates, illustrating serial correlations that can last for many seconds. The interesting part of this figure is in the lower panels: These report the cross-covariance function between the two eigenstates, over 256 s (lower left panel) and 32 s (lower right panel), respectively. The key thing to observe here is that the peak cross-covariance emerges at an 8-s lag from the 24th to the third eigenstate. This asymmetrical cross-covariance (and, implicitly, cross-correlation) function reflects the solenoidal coupling and implicit breaking of detailed balance accommodated by the particular decomposition (see the next section). Note that the (zero lag) correlation is almost zero. This speaks to the potential importance of using cross-covariance functions (or complex cross-spectra in frequency space) when characterizing functional connectivity in distributed brain responses (K. J. Friston, Bastos, et al., 2014; Mohanty, Sethares, Nair, & Prabhakaran, 2020). This brief treatment of extrinsic coupling has made much of the complex nature of dynamical coupling and how it manifests in terms of functional connectivity. In the final section, we revisit this kind of coupling in terms of nonequilibrium steady states.
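A minimal sketch of this computation follows, assuming a real-valued Jacobian at the scale in question and glossing normalization conventions; the function name is illustrative:

```python
import numpy as np

def cross_covariance(J, i, j, gamma=1.0, n=2048, dt=0.25):
    """Cross-covariance between states i and j, derived from the Jacobian
    under first-order assumptions: form the cross-spectral density
    S(w) = [H(w) Gamma H(w)^H]_ij with H(w) = (iwI - J)^-1, then inverse
    Fourier transform (Wiener-Khinchin). Asymmetry about zero lag
    reflects solenoidal flow and broken detailed balance."""
    m = J.shape[0]
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    S = np.empty(n, dtype=complex)
    for k, w in enumerate(omega):
        H = np.linalg.inv(1j * w * np.eye(m) - J)
        S[k] = gamma * (H @ H.conj().T)[i, j]
    ccf = np.fft.fftshift(np.fft.ifft(S).real) / dt  # real for a real J
    lags = (np.arange(n) - n // 2) * dt              # lag, in seconds
    return lags, ccf
```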

Figure 14. 

Dynamic coupling. This figure characterizes the coupling between the two eigenstates of the previous figure with the strongest coupling at the third scale. This coupling is mediated by the corresponding element of the (complex) Jacobian, circled in red in the upper middle panel. The flanking panels on the left and right show the corresponding eigenmodes in voxel space. The middle row shows the autocovariance functions of the two eigenstates, illustrating serial correlations that can last for many seconds. The lower two panels report the cross-covariance function between the two eigenstates, over 256 s (lower left panel) and 32 s (lower right panel). The red line indicates the peak cross covariance at about 8-s lag.


Above, we have referred to solenoidal and dissipative flows, in relation to the complex and real parts of intrinsic eigenvalues—and how they manifest in terms of intrinsic brain networks. This section unpacks this relationship by applying the statistical physics of nonequilibrium steady states to neuronal fluctuations. Our focus will be on the relationship between conventional characterizations of functional connectivity and the more general formulation afforded by a particular decomposition. Specifically, we will see that conventional formulations assume a special case that discounts solenoidal flow—and implicitly assumes neuronal dynamics attain steady state at statistical equilibrium.

In the previous section, we examined extrinsic coupling among particles in terms of their covariance. Here, we return to coupling and dynamics that are intrinsic to a particle; namely, the final particle at the last level. In this example, the particle has eight eigenstates, whose complex eigenvalues imply a loss of detailed balance and an implicit steady state that is far from equilibrium. To understand the link between detailed balance and equilibria versus nonequilibrium steady states, it is useful to consider the eigen-decomposition of the final particle in relation to standard analyses of functional connectivity (e.g., singular value decomposition or principal component analysis of covariance matrices). In what follows, we first rehearse the relationship between flow and steady-state distributions over states afforded by the Helmholtz decomposition. We then look at what this implies under the assumption of symmetric coupling—and how this leads to equilibrium mechanics and a simple relationship between the Jacobian and covariances among their respective eigenstates. We then revisit these relationships while reinstating solenoidal flow, to clarify the differences between summarizing dynamics in terms of the eigenvectors of the Jacobian and the eigenvectors of the functional connectivity matrix.

In general, one can express the flow at steady state in terms of a Helmholtz decomposition of the solution to density dynamics (as described by the Fokker-Planck equation). This is an important expression that underwrites much of physics and related treatments of self-organization in the biological sciences. (Note: It is also known as the fundamental theorem of vector calculus. This decomposition is at the heart of most formulations of nonequilibrium steady state in nonlinear systems, ranging from molecular interactions to evolution. See Ao, 2004, 2005; Qian & Beard, 2005; Zhang, Xu, Zhang, Wang, & Wang, 2012. For a concise derivation of Equation 13, under simplifying assumptions, please see Lemma D.1 in K. Friston & Ao, 2012.) Starting with a Langevin formulation of neuronal dynamics, we can express the flow of states in Equation 2 as follows:
\[
\dot{x} = f(x) + \omega,
\qquad
f(x) = (Q - \Gamma)\,\nabla \Im(x)
\tag{13}
\]
Here, ℑ(x) = −ln p(x) is a potential energy that quantifies the surprise at finding the brain in any state. The positive definite matrix Γ ∝ I plays the role of a diffusion tensor describing the amplitude of random fluctuations, ω (assumed to be a Wiener process), while the antisymmetric matrix Q = −Q^T mediates solenoidal flow. Equation 13 says that the expected flow at any point in state space has two components: a dissipative gradient flow, −Γ∇ℑ, on the logarithm of the steady-state density, and a solenoidal flow, Q∇ℑ, that circulates on the isocontours of this density. In brief, the gradient flow counters the dispersive effects of random fluctuations, thereby rendering the probability density stationary. Please note here that we have omitted (correction) terms that generalize this (Helmholtz) decomposition, because we are assuming that the amplitude of random fluctuations and the solenoidal terms change slowly over state space. We have also dropped the scale superscripts for clarity. On differentiating the Helmholtz decomposition with respect to the states, we have, ∀x:
\[
J(x) = \nabla f(x) = (Q - \Gamma)\,\Pi(x)
\tag{14}
\]
Here, the Jacobian J = ∇f(x) and Hessian Π(x) = ∇²ℑ(x) are functions of states. Because the Hessian matrix is symmetrical, there are linear constraints on the solenoidal coupling (Qian & Beard, 2005):
\[
J\,Q + Q\,J^{\mathrm{T}} = J^{\mathrm{T}}\Gamma - \Gamma J
\]
These constraints mean that in the absence of solenoidal coupling—when random fluctuations have the same amplitude everywhere—the Jacobian has to be symmetric: Q = 0 ⇒ ΓJ(x)^T = J(x)Γ. In other words, symmetric coupling guarantees detailed balance (i.e., an absence of solenoidal flow).
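These relationships are easy to verify numerically; the construction below builds a Jacobian via the Helmholtz form of Equation 14 (random symmetric Hessian, isotropic Γ as assumed in the text, antisymmetric Q) and confirms the constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Pi = A @ A.T + n * np.eye(n)    # symmetric positive-definite Hessian
Gamma = 0.5 * np.eye(n)         # isotropic fluctuation amplitude
Q = rng.standard_normal((n, n))
Q = Q - Q.T                     # antisymmetric solenoidal term: Q = -Q^T
J = (Q - Gamma) @ Pi            # Helmholtz form of the Jacobian

# the linear constraint on solenoidal coupling holds by construction:
assert np.allclose(J @ Q + Q @ J.T, J.T @ Gamma - Gamma @ J)
# and with Q = 0 the Jacobian is symmetric (detailed balance): J = -Gamma @ Pi
```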
So how does this help us connect conventional analyses of functional connectivity to the eigenvectors of the Jacobian? First, if we make the simplifying assumption that effective connectivity is symmetric, we can ignore solenoidal flow. If we make the further assumption that the steady state is Gaussian, the Hessian can be interpreted as a precision matrix (i.e., inverse covariance or functional connectivity matrix). Note that this Gaussian assumption is usually motivated in terms of a first-order approximation to the flow (in terms of the Jacobian) around the maxima of the steady-state density. To the extent that the steady-state density approximates a Gaussian, this local linear approximation becomes global. Under these simplifying assumptions, the Jacobian becomes a scaled version of the precision: setting Q = 0 in Equation 14 gives the following:
\[
Q = 0 \;\Rightarrow\; J(x) = -\Gamma\,\Pi(x)
\;\Leftrightarrow\;
\Pi(x) = -\Gamma^{-1} J(x)
\tag{15}
\]
This means that the eigenvalues of the Jacobian, which reflect the rate of dissipation of each mode, are inversely related to the eigenvalues of the precision matrix. In other words, if we were to perform a principal component analysis of the covariance matrix Σ = Π^{-1}, the principal eigenvalues would be interpreted as explaining the most variance in the eigenstates. However, this is exactly the same as identifying the eigenstates whose flow has the smallest rate constant. In other words, the principal components are just the slow, unstable modes that do not dissipate quickly.
\[
\Sigma = \Pi^{-1} = -J^{-1}\Gamma
\;\Rightarrow\;
\sigma_k = -\,\gamma / \lambda_k
\tag{16}
\]
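This equivalence can likewise be checked in a few lines; the example is illustrative, with Γ = γI, and shows that principal components of the covariance correspond exactly to the slowest eigenstates of the Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
Pi = A @ A.T + n * np.eye(n)     # precision (inverse functional connectivity)
gamma = 0.5
J = -gamma * Pi                  # Equation 15 with Gamma = gamma * I
Sigma = np.linalg.inv(Pi)        # covariance (functional connectivity)

lam_J = np.linalg.eigvalsh(J)    # real, since J is symmetric here
lam_S = np.linalg.eigvalsh(Sigma)
# the largest-variance mode corresponds to the eigenvalue of J closest
# to zero (the slowest mode), per Equation 16:
assert np.allclose(np.sort(-gamma / lam_S), np.sort(lam_J))
```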
One can quantify the dissipative aspect of the eigenmodes in terms of the expected dispersion of the flow:
\[
\mathbb{E}\big[f\,f^{\mathrm{T}}\big] = \Gamma\,\Pi\,\Gamma = \gamma^{2}\,\Sigma^{-1}
\tag{17}
\]
This expression shows that the uncertainty about the flow—over state space at steady state—is inversely proportional to the corresponding uncertainty about the state (i.e., variance). This is Heisenberg’s uncertainty principle. The connection to the uncertainty principle can be made explicit by associating the amplitude of random fluctuations with inverse mass (K. Friston, 2019), where the constant of proportionality is Planck’s constant. Equation 17 can then be expressed as the following:
\[
\Gamma = \frac{\hbar}{2m}\,I
\;\Rightarrow\;
\operatorname{Var}(x)\operatorname{Var}(p) = \Big(\frac{\hbar}{2}\Big)^{2},
\qquad p = m\,f
\tag{18}
\]
This can be interpreted as follows: If we are fairly certain about the state of a system, we will be very unsure about its flow, and vice versa. This follows from the fact that, at steady state, systems with predictable, slow flows become dispersed over state space, by virtue of the random fluctuations. Conversely, if a system can “gather its states up” and locate them in a small regime of state space, the requisite flows must be fast and varied.

In summary, if we assume detailed balance (i.e., discount solenoidal flow), we are assuming an equilibrium steady state of the sort studied in quantum and statistical mechanics. In this special case, there is a direct relationship between (the eigenvectors of) the Jacobian and the Hessian (i.e., precision, or inverse functional connectivity) matrix. Furthermore, there is also a direct relationship between the Jacobian and the variance of the expected flow or dynamics. The assumption of detailed balance is licensed in many situations; in particular, if we are dealing with ensembles of states or particles that are exchangeable (e.g., an idealized gas). This renders the Jacobian symmetrical and ensures detailed balance. The Jacobian is symmetrical because the influence of one particle on a second is the same as the influence of the second on the first. However, this symmetry cannot be assumed in biological systems that break detailed balance, especially the brain. We now rehearse the above analysis, retaining the symmetry-breaking solenoidal flow that underwrites nonequilibrium steady-state dynamics.

In the presence of solenoidal flow, the eigenvectors of the Jacobian and Hessian are no longer the same. So, which is the best summary of dynamics? Clearly, there is no definitive answer to this question; however, if we are interested in quantities “that matter,” we are specifically interested in non-dissipative, slow, unstable dynamics. By construction, this is what the particular decomposition “picks out,” by discarding fast fluctuations at each successive scale. This means that the eigenstates of the final particle should have identified slow, unstable, or critical dynamics. In contrast, had we just taken the principal components of the covariance matrix of the data (i.e., functional connectivity), we may not have identified the slow modes.

This raises the question: To what extent do solenoidal dynamics contribute to the intrinsic dynamics of the final particle? One can evaluate the relative contribution of dissipative gradient flows and non-dissipative solenoidal flow in terms of their expected dispersion:
\[
\mathbb{E}\big[f^{\mathrm{T}} f\big]
= \underbrace{\operatorname{tr}\!\big(\Gamma\,\Pi\,\Gamma\big)}_{\text{dissipative}}
+ \underbrace{\operatorname{tr}\!\big(Q\,\Pi\,Q^{\mathrm{T}}\big)}_{\text{solenoidal}}
\tag{19}
\]
Clearly, to do this, we need estimates of the amplitude of intrinsic fluctuations and the solenoidal term. However, under local linear (i.e., Gaussian) assumptions, these two quantities must satisfy JQ + QJ^T = J^TΓ − ΓJ or, in terms of eigenstates, λQ + Qλ† = λΓ − Γλ. We can use this constraint to decompose the kinetic energy of the flow in terms of, and only of, the eigenvalues of the Jacobian, where κ = −Re(λ):
\[
\tfrac{1}{2}\,\mathbb{E}\big[f_k^{\dagger} f_k\big]
\;\propto\;
\underbrace{\kappa_k}_{\text{dissipative}}
+ \underbrace{\frac{\operatorname{Im}(\lambda_k)^{2}}{\kappa_k}}_{\text{solenoidal}},
\qquad
\kappa_k = -\operatorname{Re}(\lambda_k)
\tag{20}
\]
This decomposition follows from Equation 19, on using the constraint above to express the fluctuation and solenoidal terms in terms of the eigenvalues of the Jacobian. The use of kinetic energy here appeals to Equation 18, in which the amplitude of random fluctuations is associated with inverse mass. This equality says that the dissipative part of flow is determined by the real part of an eigenstate's eigenvalue, while the solenoidal contribution is the imaginary part squared, divided by the real part. Intuitively, this would be like decomposing the kinetic energy of the Earth into a solenoidal component corresponding to its orbital velocity—and a dissipative component, as it is drawn towards the sun. This speaks to an increase in kinetic energy with the frequency of (e.g., neuronal) oscillations, which is not unrelated to the Planck-Einstein and de Broglie relations in physics.
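Under these assumptions, Equation 20 is straightforward to operationalize; the sketch below (illustrative naming, constants absorbed into the proportionality) splits each eigenstate's energy into dissipative and solenoidal parts:

```python
import numpy as np

def energy_decomposition(lam):
    """Split each eigenstate's (kinetic) energy into a dissipative part,
    set by the decay rate kappa = -Re(lam), and a solenoidal part,
    Im(lam)^2 / kappa (Equation 20, up to constant factors). Unstable
    eigenstates (Re(lam) > 0) yield negative values, as in Figure 15."""
    kappa = -lam.real
    return kappa, lam.imag ** 2 / kappa   # dissipative, solenoidal

# hypothetical usage with the final particle's eight complex eigenvalues:
# dissipative, solenoidal = energy_decomposition(np.linalg.eigvals(J4))
```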

Note that when working with eigenstates, the solenoidal terms are encoded by Q, which is a leading diagonal matrix of imaginary values. Similarly, the dissipative terms are encoded by Γ, which is a leading diagonal matrix of real values. In other words, nonequilibrium steady state—as defined by the prevalence of solenoidal flow—manifests as the imaginary parts of the Helmholtz decomposition, when the system is projected onto the eigenvectors of the Jacobian.

Figure 15 shows the dissipative and solenoidal (kinetic) energy of the eigenstates at the final scale. (Note: Kinetic energy is normally positive because it involves a squared quantity—momentum—usually assumed to be purely real; see Equation 20. However, when we relax this assumption and allow complex momenta-like quantities, squaring no longer guarantees positivity. Hence kinetic energy can take negative values [see Figure 15, top left panel] in this complex state space.) The corresponding eigenmodes are shown in the subsequent panels as maximum intensity projections (of their absolute values). In terms of dissipative dynamics, the first eigenmode has the smallest dissipative energy. In other words, it features the slowest, most unstable mode of macroscopic (intrinsic) dynamics. Eigenmodes 2 and 3 are a conjugate pair, with complex parts—and, implicitly, a solenoidal contribution to their (kinetic) energy. These modes are most pronounced in dorsal mid-prefrontal regions. Note that the kinetic energy of the first eigenstate is negative. This may seem counterintuitive; however, it is a simple reflection of the fact that the principal eigenvalue has a real part that is greater than zero. Clearly, the implicit exponential divergence of trajectories cannot persist globally. In a more complete analysis, the (stochastic) trajectory would quickly enter regimes of dissipation, such that the average real part (cf. Lyapunov exponent) was less than zero. One might ask where the dissipative energy comes from. It is effectively driven by intrinsic fluctuations that, at the finest level, include the fluctuations in active states, which play the role of experimental or sensory inputs. This raises an interesting question: At what scale do experimental inputs manifest?

Figure 15. 

Dissipative and solenoidal dynamics. This figure unpacks the intrinsic coupling at the final (fourth, single particle) level. At this level, there can be no coupling between particles and—by construction—the dynamics are completely characterized in terms of the eigenstates that comprise the particle. In turn, these are completely characterized by their complex eigenvalues; namely, the intrinsic complex coupling. The upper panels show the dissipative and solenoidal (kinetic) energy of the eight eigenstates that comprise the particle. The corresponding eigenmodes are shown in the subsequent panels as maximum intensity projections (of their absolute values). In terms of dissipative dynamics, the first eigenmode has the smallest dissipative energy. In other words, it is the slowest, most unstable mode of this particle. Eigenmodes 2 and 3 are a conjugate pair, with complex parts—and, implicitly, a solenoidal contribution to their (kinetic) energy. These modes are most pronounced in dorsal mid-prefrontal regions, with some expression in posterior parietal regions. The dissipative energy is, effectively, driven by intrinsic fluctuations that, at the finest level, include the fluctuations in active states, which play the role of experimental or sensory inputs.


An intuitive way of thinking about the distinction between dissipative and solenoidal dynamics is in terms of the fluctuations in bath water when perturbed (e.g., when the tap or faucet is running), as opposed to the ripples and waves that persist after the perturbation is removed (e.g., when the tap or faucet is turned off). In the first case, the water is trying to find its free energy minimum; in the second, solenoidal, divergence-free flow is more like the complicated swinging of a frictionless pendulum that neither consumes nor creates energy. In this view, it becomes interesting to characterize the response of the system to perturbation—here, the exogenous inputs provided by the experimental design. Conceptually, we can regard experimental inputs (such as visual afferents to the lateral geniculate) as (active states of) external particles that influence (but are not influenced by) the sensory states of particles at the finest level. Practically, these experimental inputs were included in the estimation of the coupling parameters that subtend the Jacobian at the finest scale.

Figure 16 characterizes the influence of exogenous, condition-specific effects at different scales in terms of correlations between fluctuations in the eigenstates that can be explained by any of the three experimental inputs (i.e., visual, motion, and attention). This analysis suggests that the effect of exogenous inputs can be detected at all scales. For example, at the third scale, eigenmode 17 shows an extremely significant effect of sensory perturbation, dominated by visual motion. The associated eigenmode picks out primary visual cortex and extrastriate areas, encroaching upon motion-sensitive regions. Note that eigenmode 17 has a negative time constant, and hence the real part of its eigenvalue is positive. This suggests that during part of its orbit, it becomes transiently unstable, while following the temporal structure of exogenous fluctuations.

Figure 16. 

Induced responses. This figure illustrates the expression of experimental or condition-specific effects at different scales of the particular decomposition. The top panel is an unusual form of statistical parametric map; namely, an image of the F statistic, testing for the significance of an effect of any of the three exogenous inputs (i.e., visual, motion, and attention). Each row of the F-statistic map corresponds to a scale—and comprises the F statistic for each successive particle at that scale. This map shows that the effect of (some linear mixture of) exogenous inputs can be detected at all scales, as evidenced by the dark bars in all four rows. For example, at the third scale, eigenmode 17 shows an extremely significant effect of inputs with an F statistic of over 150 and an exceedingly small p value of less than 0.0001. This eigenmode is shown on the lower left in voxel space. Its expression over time—in terms of its real value—is depicted in the middle panel (blue line), with the best fitting prediction based upon exogenous input (green line). This prediction is a contrast (i.e., linear mixture) of the input functions shown in the design matrix on the lower right. The coefficients of this contrast are shown below the design matrix, demonstrating that the largest contribution is from the second (motion) input. The last column of the design matrix is simply a column of ones. The associated eigenmode picks out primary visual cortex and extrastriate areas, encroaching upon motion-sensitive regions in its lateral extremity. The next figure provides a complementary perspective on the effects of inputs, in terms of their first-order kernels or impulse response functions throughout the brain, and over extended periods of time.


Figure 17 provides a complementary and revealing perspective on the effects of sensory perturbation. This figure characterizes induced responses in terms of first-order Volterra kernels—that is, impulse response functions—of particles at the first (finest) scale of coarse-graining. These kernels are based upon the Jacobian and quantify the effects of changing an input on each eigenmode over time (based on the parameters mediating the influence of experimental inputs on the motion of states at the first scale). The maximum intensity projections on the left report the variance attributable to each input, based upon the sum of squared kernels over time (i.e., the autocovariance function at zero lag under each input). Note that this is a fundamentally different characterization of brain “activation” because it is modeling the variance induced by an input that is distributed in space and time through recurrent coupling among brain regions.
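A minimal sketch of such kernels follows, assuming a Jacobian J and an input matrix B whose columns encode the three experimental inputs; the names and time grid are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def volterra_kernel(J, B, t):
    """First-order (impulse response) kernels of the linearized system
    dx/dt = J x + B u, namely K(t) = expm(J t) @ B."""
    return np.stack([expm(J * s) @ B for s in t])

def induced_variance(J, B, t):
    """Variance attributable to each input: the squared kernel summed
    over time (the autocovariance at zero lag under each input), as
    mapped in Figure 17."""
    K = volterra_kernel(J, B, t)          # shape: (time, states, inputs)
    return (np.abs(K) ** 2).sum(axis=0) * (t[1] - t[0])

# hypothetical usage over a 100-s window, beyond which kernels decay:
# t = np.linspace(0.0, 100.0, 400)
# V = induced_variance(J1, B1, t)         # states x inputs
```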

Figure 17. 

Induced responses over space and time. This figure characterizes induced responses in terms of first-order Volterra kernels—that is, impulse response functions—of particles at the first (finest) scale of coarse-graining. The rows correspond to the three inputs considered (i.e., visual, motion, and attention effects). The left column shows the expression of these inputs over particles (weighted by the absolute value of their eigenmodes). This effect is the variance attributable to each input (i.e., the square of the corresponding kernel, summed over time), shown in the left column. These kernels are shown for the 32 particles with the greatest (absolute) magnitude. The key thing to take from these results is that motion influences the dynamics of visual and extrastriate (presumably, motion-sensitive) regions, while attention has protracted influences on parietal, prefrontal, and medial temporal regions, including the frontal eye fields and intraparietal sulcus. Interestingly, visual input per se seems to be expressed preferentially in subcortical systems, including the lateral geniculate but also other subcortical and medial temporal regions. In addition, visual input appears to selectively engage posterior superior temporal regions in the left hemisphere—often associated with biological motion processing. The more telling aspect of this characterization is the protracted nature of the kernels, which decay to small values after 100 s or so. Notice that these dynamics are supposedly neuronal in nature because we have accommodated hemodynamic convolution at the point of estimating the Jacobian.

In this example, motion induces responses in visual and extrastriate eigenmodes—presumably, motion-sensitive regions—while attention has protracted influences on parietal, prefrontal, and medial temporal regions, including the frontal eye fields and intraparietal sulcus. Visual input per se seems to be expressed preferentially in subcortical systems, including the lateral geniculate but also other subcortical and medial temporal regions. In addition, it appears to selectively engage posterior superior temporal regions in the left hemisphere, often associated with biological motion processing. The interesting aspect of this characterization is the protracted nature of the kernels, which decay to small values after 100 s or so. In effect, this means that although induced responses may be expressed in a regionally specific way almost instantaneously, there are enduring effects that can last for a minute or so following any exogenous perturbation. Clearly, these effects will be overwritten by ongoing sensory input; however, this suggests that brain systems—and accompanying distributed neuronal responses—have a greater memory than might have been anticipated. Heuristically, this means that I should be able to tell you whether you have “seen something” in the past minute or so by examining your brain activity at this moment in time.
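To unpack how these kernels follow from the Jacobian, here is a minimal sketch under the linearization used in this paper: for dx/dt = J x + B u, the first-order kernel is K(t) = exp(Jt) B, and the variance attributable to an input is the squared kernel summed over time. The matrices J and B below are hypothetical stand-ins for the estimated flow and input parameters.

    import numpy as np
    from scipy.linalg import expm

    def first_order_kernels(J, B, dt=1.0, n_lags=128):
        """First-order Volterra kernels K[k] = expm(J*k*dt) @ B for dx/dt = J x + B u."""
        n = J.shape[0]
        P = expm(J * dt)                      # one-step propagator
        K = np.empty((n_lags, n, B.shape[1]))
        M = np.eye(n)
        for k in range(n_lags):
            K[k] = M @ B                      # response at lag k*dt to an impulse at t = 0
            M = P @ M
        return K

    def variance_by_input(K, dt=1.0):
        """Variance attributable to each input: squared kernels summed over time
        (the autocovariance at zero lag under unit-variance white input)."""
        return (K ** 2).sum(axis=0) * dt      # (states) x (inputs)

    # Hypothetical two-state example: a slow eigenvalue (-0.01 per second) yields ~100 s memory
    J = np.array([[-0.01, 0.02], [0.0, -0.5]])
    B = np.array([[1.0], [0.5]])
    K = first_order_kernels(J, B, dt=1.0, n_lags=256)

In this scheme, the protracted (roughly 100 s) kernels correspond to eigenvalues of the Jacobian whose real parts are small in magnitude; namely, the slow, barely dissipative eigenmodes that survive coarse-graining.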

In summary, we have introduced a particular partition that plays the role of a functional parcellation scheme for a system of loosely coupled nonlinear oscillators, such as neuronal populations in the brain. The key aspect of this parcellation scheme is that it can be applied recursively, in the spirit of the renormalization group. This enables one to examine the scale-invariant behavior of the ensuing spatiotemporal dynamics in a principled way. The numerical analyses above confirm the analytic intuition that, as we move from one scale to the next, there is a progressive slowing and loss of stability of the eigenstates associated with each parcel or particle. This manifests as a form of self-organized criticality, in the sense that slow, unstable (non-dissipative) eigenmodes supervene on lower scales. Quantitatively speaking, the spatial scale of a particle and its characteristic frequencies and Lyapunov exponents all fit nicely with empirical observations of (dynamic) functional connectivity within and among large-scale intrinsic brain networks (Liegeois et al., 2019; Northoff, Wainio-Theberge, & Evers, 2019). The use of an eigendecomposition of this sort is particularly interesting in relation to recent eigenmode decompositions of structural connectivity (Atasoy, Donnelly, & Pearson, 2016), cortical geometry (Tokariev et al., 2019), and spatially embedded neural fields (Robinson et al., 2016). These related applications are slightly simpler than the current analysis because they can assume symmetric coupling—of one sort or another—and therefore need only deal with real variables and symmetric modes. For a complementary approach that coarse-grains fMRI data and then examines cross-scale covariance functions (including time-asymmetric decompositions), please see Breakspear, Bullmore, Aquino, Das, and Williams (2006), especially Figures 10 and 11. It is also worth noting that our use of the graph Laplacian (G) to define neighboring (i.e., coupled) internal states bears a similarity to the graph signal processing (GSP) scheme (Huang et al., 2018). GSP is used to analyze and integrate structural and functional connectomes, where the (functional) brain activity at the graph nodes is studied on the underlying (anatomical) graph structure. In the GSP framework, the eigendecomposition of the graph Laplacian (constructed from the structural connectome) is used to identify eigenmodes of low and high frequency, which together define the graph Fourier transform, as sketched below.
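To illustrate the correspondence with GSP, the following minimal sketch builds a combinatorial graph Laplacian from a symmetric adjacency matrix and expresses a signal in its eigenmodes (i.e., takes its graph Fourier transform). Note the assumption of symmetric coupling; the particular partition above deals with asymmetric coupling and therefore with complex modes.

    import numpy as np

    def graph_fourier_transform(A, x):
        """Graph Fourier transform of a signal x on a graph with symmetric adjacency A."""
        L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian: degree minus adjacency
        lam, V = np.linalg.eigh(L)       # eigenvalues play the role of graph frequencies
        return lam, V, V.T @ x           # x expressed in low-to-high frequency eigenmodes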

Because this paper is a technical (foundational) description of the procedures entailed by the existence of Markov blankets, we have focused on the simplest implementation. This means that we started off with linearization assumptions and propagated this approximation to higher levels. Clearly, it would be nice to revisit the particular partition using higher order approximations that retain nonlinearity in the equations of motion; for example, Equation 14. This would require a more careful analysis of the Lyapunov exponents, involving integrating the system and averaging the eigenvalues over the ensuing state-dependent Jacobian: the Jacobian becomes a function of the states when one includes nonlinearities in the equations of motion. This raises the interesting issue of how to identify the adjacency matrix used to define the Markov blankets. In other words, we need to establish conditional independences in terms of zero entries in the Jacobian. However, if the Jacobian fluctuates over time, along an orbit in state space, then a given element may be zero (i.e., no coupling) at some times and nonzero at others. Related numerical analyses of nonlinear systems (K. Friston, 2013) usually require that a Jacobian element remains zero over a suitably long period of time before the corresponding entry of the adjacency matrix in Figure 2 is set to zero. Clearly, this would involve evaluating the Jacobian over all solutions to the trajectory in state space. This may be a time-consuming but otherwise interesting exercise, sketched below.
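As a sketch of this exercise (and only a sketch: a rigorous Lyapunov spectrum would require evolving tangent vectors with repeated QR reorthogonalization, rather than averaging instantaneous eigenvalues), one could integrate the system, evaluate the state-dependent Jacobian along the orbit, and declare a coupling absent only if the corresponding element stays negligibly small throughout. The flow f and Jacobian jac below are hypothetical user-supplied functions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def orbit_averaged_coupling(f, jac, x0, t_span=(0.0, 100.0), n_samples=200, tol=1e-6):
        """Average a state-dependent Jacobian over an orbit and form an adjacency matrix.
        f(t, x): equations of motion; jac(x): Jacobian of the flow at state x."""
        t_eval = np.linspace(*t_span, n_samples)
        sol = solve_ivp(f, t_span, x0, t_eval=t_eval, rtol=1e-8)
        Js = np.stack([jac(x) for x in sol.y.T])         # Jacobian at each sampled state
        J_bar = Js.mean(axis=0)                          # time-averaged Jacobian
        # Crude local exponents: sorted real parts of instantaneous eigenvalues, averaged
        exps = np.mean([np.sort(np.linalg.eigvals(J).real) for J in Js], axis=0)
        A = (np.abs(Js).max(axis=0) > tol).astype(int)   # edge only if coupling is ever non-negligible
        return J_bar, exps, A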

We have already mentioned some limitations and extensions. These include starting off with multivariate characterizations of intrinsic dynamics at the finest level. As noted above, this is easy to implement by using the first few principal eigenstates, following a singular value decomposition of the smallest particles (see the sketch below). Another extension is repeating the dynamic causal modeling at each scale, to reevaluate the Jacobian with suitable high-order (i.e., nonlinear) approximations to the equations of motion.
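A minimal sketch of the first extension, assuming Y is a (time points) by (states) matrix for one of the smallest particles:

    import numpy as np

    def principal_eigenstates(Y, k=3):
        """First k principal eigenstates of a particle via singular value decomposition."""
        U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
        return Vt[:k], U[:, :k] * s[:k]   # spatial modes and their temporal expression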

The nonuniqueness of the particular partition is a key practical issue: there is no pretense that there is any unique particular partition, and there are a vast number of particular partitions for any given coupled dynamical system. In other words, by simply starting with different internal states—or indeed a different number of internal states per particle—we would obtain a different particular partition. Furthermore, the thresholds used in the elimination of fast dissipative eigenmodes will also change the nature of the partition, leading to more or less inclusive dynamics at the scales above. This latter aspect is probably the more defensible, in terms of summarizing multiscale behavior, in the sense that we can easily motivate the adiabatic approximation in terms of the relative stability of eigenmodes at a particular level. However, the number of internal states to consider—and how to pick them—introduces a more severe form of nonuniqueness. In this paper, we used the state that was maximally coupled to other states as the internal state of the next particle, based upon the graph Laplacian of the adjacency matrix at the appropriate scale (see the sketch after this paragraph). This is a sensible but somewhat arbitrary definition of an internal state and speaks to the point that there are a multitude of particular partitions—and implicit Markov blankets—that could be used. There are two ways that one could handle this nonuniqueness. (Note: As noted by one of our reviewers, it might be useful to fix the [initial] centroids of the particles to the centroids of brain atlases [for example, Glasser et al., 2016; Schaefer et al., 2018]. This might ease the interpretation of the findings, as well as comparisons between the current and complementary brain network studies.) One would be to embrace it: focus on the statistics of characterizations over different particular partitions and look for scaling behaviors that are conserved over partitions. The alternative is to think about a unique particular partition and how it would be identified. This is an outstanding issue; namely, what is the “best” particular partition—and, indeed, is the notion of a best partition even appropriate?
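For concreteness, here is a sketch of the degree-based selection criterion just described (the selection step only; assigning blanket and external states to complete each particle is a separate step in the scheme):

    import numpy as np

    def most_coupled_states(A, n):
        """Return the n states most strongly coupled to the rest, as candidate internal states.
        The diagonal of the graph Laplacian L = D - A is the (weighted) node degree."""
        L = np.diag(A.sum(axis=1)) - A
        return np.argsort(np.diag(L))[::-1][:n]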

Another important caveat is that we have predicated the illustrative analyses in this paper on a single-subject dataset acquired under an experimental activation paradigm. We chose this dataset because it has been used to illustrate previous developments of dynamic causal modeling. Conceptually, this means that the particular partition is specific to this subject and the subject’s responses to the attentional paradigm (summarized in Figure 17). Because this paradigm introduced context-sensitive or condition-specific changes in effective connectivity, it was designed to change the Jacobian over different periods of stimulation (e.g., attentional modulation of coupling between visual motion areas and early visual cortex). We did not attempt to model these effects here; this would require the nonlinear modeling mentioned above. If this modeling were taken to second order, we would end up with a bilinear form for Equation 2, which is the basis of most DCM analyses of fMRI data. This speaks to the fact that the parcellation scheme may not produce the same results when applied to a different paradigm. In turn, this means that there is further work to be done in finding a particular partition that accommodates variability in functional anatomy. In theory, this would probably be best addressed using a generative model; in other words, assuming one underlying sparse Jacobian at any given scale and then adding random effects, so that the model could explain multiple paradigms or subjects. Having said this, the current analyses can be taken as proof of principle that this sort of multiscale decomposition can be applied to empirical neuroimaging time series and reproduces the phenomenology reported in the functional connectivity literature.
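For completeness, the bilinear form alluded to above (the basis of most DCM analyses of fMRI data; Friston, Harrison, & Penny, 2003) can be written as

    \dot{x} = \Big( A + \sum_j u_j B^{(j)} \Big) x + C u

where A encodes average effective connectivity, each B^{(j)} encodes the change in connectivity induced by the j-th input (e.g., attentional modulation), and C mediates the direct influence of inputs on states. The state-dependent Jacobian is then A + \sum_j u_j B^{(j)}, which changes over different periods of stimulation, as described above.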

The software producing the figures in this paper is available as part of the academic software SPM. It can be accessed by invoking the DEM graphical user interface and selecting the “DCM and blankets” button (DEMO_DCM_MB.m). Please see https://www.fil.ion.ucl.ac.uk/spm/.

Supporting information for this article is available at https://doi.org/10.1162/netn_a_00175.

Karl J. Friston: Conceptualization; Methodology; Visualization; Writing – original draft. Erik D. Fagerholm: Methodology; Writing – review & editing. Tahereh S. Zarghami: Writing – review & editing. Thomas Parr: Methodology; Writing – review & editing. Inês Hipólito: Writing – review & editing. Loïc Magrou: Writing – review & editing. Adeel Razi: Conceptualization; Methodology; Writing – review & editing.

Karl J. Friston, Wellcome Trust (http://dx.doi.org/10.13039/100004440), Award ID: 088130/Z/09/Z. Erik D. Fagerholm, King’s College London Prize Fellowship. Tahereh S. Zarghami, Cognitive Sciences and Technologies Council of Iran international research visit. Adeel Razi, Australian Research Council, Award ID: DE170100128. Adeel Razi, Australian Research Council, Award ID: DP200100757. Adeel Razi, National Health and Medical Research Council, Award ID: 1194910.

Functional connectivity:

An (undirected) measure of the statistical dependencies between spatially remote neurophysiological events.

Effective connectivity:

A measure of the directed (causal) influence of one neural system over another, based on a model of neuronal interactions.

Dynamic causal modeling:

A Bayesian framework that is used to infer causal interactions among coupled or distributed neuronal systems.

Markov blanket:

A set of states that renders the states inside a particle conditionally independent of those outside, allowing one to distinguish the states that belong to a particle from those that do not.

Jacobian:

A matrix that contains the first-order partial derivatives of a vector-valued function.

Lyapunov exponent:

The rate of exponential divergence of trajectories following a perturbation of initial conditions.

Bayesian model reduction:

A Bayesian inversion and comparison of models that are reduced (i.e., sparser) forms of a full (or parent) model.

Eigenmode:

A stable state (i.e., mode) of a dynamical system in which all parts of the system oscillate at the same frequency.

Graph Laplacian:

A matrix representation of a graph that combines node adjacency and node degree; it is a central object in spectral graph theory.

Generative model:

A model for randomly generating observable data values, typically given some hidden parameters.

Ao, P. (2004). Potential in stochastic differential equations: Novel construction. Journal of Physics A: Mathematical and General, 37(3), L25–L30.
Ao, P. (2005). Laws in Darwinian evolutionary theory. Physics of Life Reviews, 2(2), 117–156.
Arnal, L. H., & Giraud, A. L. (2012). Cortical oscillations and sensory predictions. Trends in Cognitive Sciences, 16(7), 390–398.
Atasoy, S., Donnelly, I., & Pearson, J. (2016). Human brain networks function in connectome-specific harmonic waves. Nature Communications, 7, 10340.
Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A: General Physics, 38(1), 364–374.
Bassett, D. S., & Sporns, O. (2017). Network neuroscience. Nature Neuroscience, 20(3), 353–364.
Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4), 695–711.
Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J. M., Oostenveld, R., Dowdall, J. R., … Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390–401.
Biswal, B. B., Van Kylen, J., & Hyde, J. S. (1997). Simultaneous assessment of flow and BOLD signals in resting-state functional connectivity maps. NMR in Biomedicine, 10(4–5), 165–170.
Breakspear, M., Bullmore, E. T., Aquino, K., Das, P., & Williams, L. M. (2006). The multiscale character of evoked cortical activity. NeuroImage, 30(4), 1230–1242.
Breakspear, M., Heitmann, S., & Daffertshofer, A. (2010). Generative models of cortical oscillations: Neurobiological implications of the Kuramoto model. Frontiers in Human Neuroscience, 4(190).
Breakspear, M., & Stam, C. J. (2005). Dynamics of a neural system with a multiscale architecture. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360(1457), 1051–1074.
Buffalo, E. A., Fries, P., Landman, R., Buschman, T. J., & Desimone, R. (2011). Laminar differences in gamma and alpha coherence in the ventral stream. Proceedings of the National Academy of Sciences, 108(27), 11262–11267.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198.
Cardy, J. L. (2015). Scaling and renormalization in statistical physics.
Carr, J. (1981). Applications of centre manifold theory. Berlin, Germany: Springer-Verlag.
Cessac, B., Blanchard, P., & Krüger, T. (2001). Lyapunov exponents and transport in the Zhang model of self-organized criticality. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 64(1 Pt. 2), 016133.
Clark, A. (2017). How to knit your own Markov blanket. In T. K. Metzinger & W. Wiese (Eds.), Philosophy and Predictive Processing. Frankfurt am Main, Germany: MIND Group.
Cocchi, L., Gollo, L. L., Zalesky, A., & Breakspear, M. (2017). Criticality in the brain: A synthesis of neurobiology, models and cognition. Progress in Neurobiology, 158, 132–152.
Cocchi, L., Sale, M. V., Gollo, L. L., Bell, P. T., Nguyen, V. T., Zalesky, A., … Mattingley, J. B. (2016). A hierarchy of timescales explains distinct effects of local inhibition of primary visual cortex and frontal eye fields. eLife, 5.
Coombes, S. (2005). Waves, bumps, and patterns in neural field theories. Biological Cybernetics, 93(2), 91–108.
Crick, F., & Koch, C. (1998). Constraints on cortical and thalamic projections: The no-strong-loops hypothesis. Nature, 391(6664), 245–250.
De Monte, S., d’Ovidio, F., & Mosekilde, E. (2003). Coherent regimes of globally coupled dynamical systems. Physical Review Letters, 90(5), 054102.
Deco, G., & Jirsa, V. K. (2012). Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. Journal of Neuroscience, 32(10), 3366–3375.
Ercsey-Ravasz, M., Markov, N. T., Lamy, C., Van Essen, D. C., Knoblauch, K., Toroczkai, Z., & Kennedy, H. (2013). A predictive network model of cerebral cortical connectivity based on a distance rule. Neuron, 80(1), 184–197.
Fagerholm, E. D., Foulkes, W. M. C., Gallero-Salas, Y., Helmchen, F., Friston, K. J., Leech, R., & Moran, R. J. (2019). Network constraints in scale free dynamical systems. arXiv:1908.06678.
Felleman, D., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
Finlay, B. L. (2016). Principles of network architecture emerging from comparisons of the cerebral cortex in large and small brains. PLoS Biology, 14(9), e1002556.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences, 102(27), 9673–9678.
Frassle, S., Lomakina, E. I., Razi, A., Friston, K. J., Buhmann, J. M., & Stephan, K. E. (2017). Regression DCM for fMRI. NeuroImage, 155, 406–421.
Freyer, F., Roberts, J. A., Becker, R., Robinson, P. A., Ritter, P., & Breakspear, M. (2011). Biophysical mechanisms of multistability in resting-state cortical rhythms. Journal of Neuroscience, 31(17), 6353–6361.
Freyer, F., Roberts, J. A., Ritter, P., & Breakspear, M. (2012). A canonical model of multistability and scale-invariance in biological systems. PLoS Computational Biology, 8(8), e1002634.
Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface, 10(86), 20130475.
Friston, K. (2019). A free energy principle for a particular physics. arXiv:1906.10184.
Friston, K., & Ao, P. (2012). Free energy, value, and attractors. Computational and Mathematical Methods in Medicine, 2012, 937860.
Friston, K., Da Costa, L., & Parr, T. (2020). Some interesting observations on the free energy principle. arXiv:2002.04501.
Friston, K., Mattout, J., Trujillo-Barreto, N., Ashburner, J., & Penny, W. (2007). Variational free energy and the Laplace approximation. NeuroImage, 34(1), 220–234.
Friston, K., & Penny, W. (2011). Post hoc Bayesian model selection. NeuroImage, 56(4), 2089–2099.
Friston, K. J., Bastos, A. M., Oswal, A., van Wijk, B., Richter, C., & Litvak, V. (2014). Granger causality revisited. NeuroImage, 101, 796–808.
Friston, K. J., Harrison, L., & Penny, W. (2003). Dynamic causal modelling. NeuroImage, 19, 1273–1302.
Friston, K. J., Kahan, J., Biswal, B., & Razi, A. (2014). A DCM for resting state fMRI. NeuroImage, 94, 396–407.
Friston, K. J., Litvak, V., Oswal, A., Razi, A., Stephan, K. E., van Wijk, B. C. M., … Zeidman, P. (2016). Bayesian model reduction and empirical Bayes for group (DCM) studies. NeuroImage, 128, 413–431.
Friston, K. J., Parr, T., & de Vries, B. (2017). The graphical brain: Belief propagation and active inference. Network Neuroscience, 1(4), 381–414.
Gilson, M., Moreno-Bote, R., Ponce-Alvarez, A., Ritter, P., & Deco, G. (2016). Estimation of directed effective connectivity from fMRI functional connectivity hints at asymmetries of cortical connectome. PLoS Computational Biology, 12(3), e1004762.
Giraud, A. L., & Poeppel, D. (2012). Cortical oscillations and speech processing: Emerging computational principles and operations. Nature Neuroscience, 15(4), 511–517.
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., … Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178.
Grossberg, S. (2007). Towards a unified theory of neocortex: Laminar cortical circuits for vision and cognition. Progress in Brain Research, 165, 79–104.
Gu, S., Cieslak, M., Baird, B., Muldoon, S. F., Grafton, S. T., Pasqualetti, F., & Bassett, D. S. (2018). The energy landscape of neurophysiological activity implicit in brain network structure. Scientific Reports, 8(1), 2507.
Haeusler, S., & Maass, W. (2007). A statistical analysis of information-processing properties of lamina-specific cortical microcircuit models. Cerebral Cortex, 17(1), 149–162.
Haimovici, A., Tagliazucchi, E., Balenzuela, P., & Chialvo, D. R. (2013). Brain organization into resting state networks emerges at criticality on a model of the human connectome. Physical Review Letters, 110(17), 178101.
Haken, H. (1983). Synergetics: An introduction. Non-equilibrium phase transition and self-organisation in physics, chemistry and biology. Berlin, Germany: Springer Verlag.
Hasson, U., Yang, E., Vallines, I., Heeger, D. J., & Rubin, N. (2008). A hierarchy of temporal receptive windows in human cortex. Journal of Neuroscience, 28(10), 2539–2550.
Hilgetag, C. C., O’Neill, M. A., & Young, M. P. (2000). Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor. Philosophical Transactions of the Royal Society B: Biological Sciences, 355(1393), 71–89.
Horvát, S., Gămănuţ, R., Ercsey-Ravasz, M., Magrou, L., Gămănuţ, B., Van Essen, D. C., … Kennedy, H. (2016). Spatial embedding and wiring cost constrain the functional layout of the cortical network of rodents and primates. PLoS Biology, 14(7), e1002512.
Hovsepyan, S., Olasagasti, I., & Giraud, A.-L. (2018). Combining predictive coding with neural oscillations optimizes on-line speech processing. bioRxiv:477588.
Huang, W. Y., Bolton, T. A. W., Medaglia, J. D., Bassett, D. S., Ribeiro, A., & Van De Ville, D. (2018). A graph signal processing perspective on functional brain imaging. Proceedings of the IEEE, 106(5), 868–885.
Itskov, V., Curto, C., Pastalkova, E., & Buzsaki, G. (2011). Cell assembly sequences arising from spike threshold adaptation keep track of time in the hippocampus. Journal of Neuroscience, 31(8), 2828–2834.
Jirsa, V. K., Friedrich, R., Haken, H., & Kelso, J. A. (1994). A theoretical model of phase transitions in the human brain. Biological Cybernetics, 71(1), 27–35.
Kaluza, P., & Meyer-Ortmanns, H. (2010). On the role of frustration in excitable systems. Chaos, 20(4), 043111.
Kayser, C., & Ermentrout, B. (2010). Complex times for earthquakes, stocks, and the brain’s activity. Neuron, 66(3), 329–331.
Keber, F. C., Loiseau, E., Sanchez, T., DeCamp, S. J., Giomi, L., Bowick, M. J., … Bausch, A. R. (2014). Topology and dynamics of active nematic vesicles. Science, 345(6201), 1135–1139.
Keller, G. B., & Mrsic-Flogel, T. D. (2018). Predictive processing: A canonical cortical computation. Neuron, 100(2), 424–435.
Kiebel, S. J., Daunizeau, J., & Friston, K. (2008). A hierarchy of time-scales and the brain. PLoS Computational Biology, 4, e1000209.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of the Royal Society Interface, 15(138).
Kitzbichler, M. G., Smith, M. L., Christensen, S. R., & Bullmore, E. (2009). Broadband criticality of human brain network synchronization. PLoS Computational Biology, 5(3), e1000314.
Landau, L. D., & Lifshitz, E. M. (1976). Mechanics (3rd ed.). Vol. 1 of Course of Theoretical Physics. Oxford, United Kingdom: Pergamon Press.
Liegeois, R., Li, J., Kong, R., Orban, C., Van De Ville, D., Ge, T., … Yeo, B. T. T. (2019). Resting brain dynamics at different timescales capture distinct aspects of human behavior. Nature Communications, 10(1), 2317.
Lin, H. W., Tegmark, M., & Rolnick, D. (2017). Why does deep and cheap learning work so well? Journal of Statistical Physics, 168, 1223–1247.
Lopez, J. D., Litvak, V., Espinosa, J. J., Friston, K., & Barnes, G. R. (2014). Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM. NeuroImage, 84, 476–487.
Lyapunov, A. M., & Fuller, A. T. (1992). The general problem of the stability of motion. London, United Kingdom: Taylor & Francis.
Lynall, M. E., Bassett, D. S., Kerwin, R., McKenna, P. J., Kitzbichler, M., Muller, U., & Bullmore, E. (2010). Functional connectivity and brain networks in schizophrenia. Journal of Neuroscience, 30(28), 9477–9487.
Mantegna, R. N., & Stanley, H. E. (1995). Scaling behavior in the dynamics of an economic index. Nature, 376(6535), 46–49.
Markov, N., Ercsey-Ravasz, M., Van Essen, D., Knoblauch, K., Toroczkai, Z., & Kennedy, H. (2013). Cortical high-density counterstream architectures. Science, 342(6158), 1238406.
Markov, N. T., Vezoli, J., Chameau, P., Falchier, A., Quilodran, R., Huissoud, C., … Kennedy, H. (2014). Anatomy of hierarchy: Feedforward and feedback pathways in macaque visual cortex. Journal of Comparative Neurology, 522(1), 225–259.
Mesulam, M. M. (1998). From sensation to cognition. Brain, 121, 1013–1052.
Mohanty, R., Sethares, W. A., Nair, V. A., & Prabhakaran, V. (2020). Rethinking measures of functional connectivity via feature extraction. Scientific Reports, 10(1), 1298.
Moran, R., Pinotsis, D. A., & Friston, K. (2013). Neural masses and fields in dynamic causal modeling. Frontiers in Computational Neuroscience, 7, 57.
Mountcastle, V. B. (1997). The columnar organization of the neocortex. Brain, 120(Pt. 4), 701–722.
Murray, J. D., Bernacchia, A., Freedman, D. J., Romo, R., Wallis, J. D., Cai, X., … Wang, X. J. (2014). A hierarchy of intrinsic timescales across primate cortex. Nature Neuroscience, 17(12), 1661–1663.
Northoff, G., Wainio-Theberge, S., & Evers, K. (2019). Is temporo-spatial dynamics the “common currency” of brain and mind? In quest of “spatiotemporal neuroscience.” Physics of Life Reviews.
Parr, T., Da Costa, L., & Friston, K. (2020). Markov blankets, information geometry and stochastic thermodynamics. Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences, 378(2164), 20190159.
Parr, T., & Friston, K. J. (2018). The anatomy of inference: Generative models and brain structure. Frontiers in Computational Neuroscience, 12(90).
Pavlos, G. P., Karakatsanis, L. P., & Xenakis, M. N. (2012). Tsallis non-extensive statistics, intermittent turbulence, SOC and chaos in the solar plasma, Part one: Sunspot dynamics. Physica A: Statistical Mechanics and Its Applications, 391(24), 6287–6319.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Francisco, CA: Morgan Kaufmann.
Pearl, J. (2009). Causality. Cambridge, United Kingdom: Cambridge University Press.
Pellet, J. P., & Elisseeff, A. (2008). Using Markov blankets for causal structure learning. Journal of Machine Learning Research, 9, 1295–1342.
Peters, A., & Yilmaz, E. (1993). Neuronal organization in area 17 of cat visual cortex. Cerebral Cortex, 3(1), 49–68.
Pyragas, K. (1997). Conditional Lyapunov exponents from time series. Physical Review E, 56(5), 5183–5187.
Qian, H., & Beard, D. A. (2005). Thermodynamics of stoichiometric biochemical networks in living systems far from equilibrium. Biophysical Chemistry, 114(2–3), 213–220.
Ramaswamy, S. (2010). The mechanics and statistics of active matter. Annual Review of Condensed Matter Physics, 1, 323–345.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.
Razi, A., & Friston, K. J. (2016). The connected brain: Causality, models, and intrinsic dynamics. IEEE Signal Processing Magazine, 33(3), 14–35.
Razi, A., Kahan, J., Rees, G., & Friston, K. J. (2015). Construct validation of a DCM for resting state fMRI. NeuroImage, 106(1–4), 222–241.
Razi, A., Seghier, M. L., Zhou, Y., McColgan, P., Zeidman, P., Park, H. J., … Friston, K. J. (2017). Large-scale DCMs for resting-state fMRI. Network Neuroscience, 1(3), 222–241.
Robinson, P. A., Zhao, X., Aquino, K. M., Griffiths, J. D., Sarkar, S., & Mehta-Pandejee, G. (2016). Eigenmodes of brain activity: Neural field theory predictions and comparison with experiment. NeuroImage, 142, 79–98.
Roy, D., Sigala, R., Breakspear, M., McIntosh, A. R., Jirsa, V. K., Deco, G., & Ritter, P. (2014). Using the virtual brain to reveal the role of oscillations and plasticity in shaping brain’s dynamical landscape. Brain Connectivity, 4(10), 791–811.
Schaefer, A., Kong, R., Gordon, E. M., Laumann, T. O., Zuo, X. N., Holmes, A. J., … Yeo, B. T. T. (2018). Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI. Cerebral Cortex, 28(9), 3095–3114.
Schwabl, F. (2002). Phase transitions, scale invariance, renormalization group theory, and percolation. In Statistical mechanics (pp. 327–404). Berlin, Germany: Springer.
Self, M. W., van Kerkoerle, T., Goebel, R., & Roelfsema, P. R. (2019). Benchmarking laminar fMRI: Neuronal spiking and synaptic activity during top-down and bottom-up processing in the different layers of cortex. NeuroImage, 197, 806–817.
Shin, C. W., & Kim, S. (2006). Self-organized criticality and scale-free properties in emergent functional neural networks. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 74(4 Pt. 2), 45101.
Singer, W., Sejnowski, T. J., & Rakic, P. (2019). The neocortex.
Spratling, M. W. (2008). Predictive-coding as a model of biased competition in visual attention. Vision Research, 48, 1391–1408.
Stachenfeld, K. L., Botvinick, M. M., & Gershman, S. J. (2017). The hippocampus as a predictive map. Nature Neuroscience, 20(11), 1643–1653.
Thomson, A. M., & Bannister, A. P. (2003). Interlaminar connections in the neocortex. Cerebral Cortex, 13(1), 5–14.
Tokariev, A., Roberts, J. A., Zalesky, A., Zhao, X., Vanhatalo, S., Breakspear, M., & Cocchi, L. (2019). Large-scale brain modes reorganize between infant sleep states and carry prognostic information for preterms. Nature Communications, 10(1), 2619.
Trojanowski, J., & Jacobson, S. (1977). The morphology and laminar distribution of cortico-pulvinar neurons in the rhesus monkey. Experimental Brain Research, 28, 51–62.
van den Heuvel, M. P., & Sporns, O. (2013). Network hubs in the human brain. Trends in Cognitive Sciences, 17(12), 683–696.
van Kerkoerle, T., Self, M. W., Dagnino, B., Gariel-Mathis, M. A., Poort, J., van der Togt, C., & Roelfsema, P. R. (2014). Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proceedings of the National Academy of Sciences, 111(40), 14332–14341.
Wang, X.-J., & Kennedy, H. (2016). Brain structure and dynamics across scales: In search of rules. Current Opinion in Neurobiology, 37, 92–98.
Yuan, R., Ma, Y., Yuan, B., & Ao, P. (2011). Potential function in dynamical systems and the relation with Lyapunov function. Paper presented at the Proceedings of the 30th Chinese Control Conference.
Zeki, S., & Shipp, S. (1988). The functional logic of cortical connections. Nature, 335, 311–317.
Zhang, F., Xu, L., Zhang, K., Wang, E., & Wang, J. (2012). The potential and flux landscape theory of evolution. Journal of Chemical Physics, 137(6), 065102.
Zhou, Y., Friston, K. J., Zeidman, P., Chen, J., Li, S., & Razi, A. (2018). The hierarchical organization of the default, dorsal attention and salience networks in adolescents and young adults. Cerebral Cortex, 28(2), 726–737.

Author notes

Competing Interests: The authors have declared that no competing interests exist.

Handling Editor: Randy McIntosh

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
