Structure-informed functional connectivity driven by identifiable and state-specific control regions

Describing how the brain's anatomical wiring contributes to the emergence of coordinated neural activity underlying complex behavior remains challenging. Indeed, patterns of remote coactivations that adjust to the ongoing task demand do not systematically match direct, static anatomical links. Here, we propose that observed coactivation patterns, known as functional connectivity (FC), can be explained by controllable linear diffusion dynamics defined on the brain architecture. Our model, termed structure-informed FC, is based on the hypothesis that different sets of brain regions controlling the information flow on the anatomical wiring produce state-specific functional patterns. We thus introduce a principled framework for the identification of potential control centers in the brain. We find that well-defined, sparse, and robust sets of control regions, partially overlapping across several tasks and resting state, produce FC patterns comparable to empirical ones. Our findings suggest that controllability is a fundamental feature that allows the brain to reach different states.


Note 1: Linear models to map structure and function in brain networks
We use a linear time-invariant model of the form x(k+1) = A x(k) + B u(k) (System 1), where x(k) describes the level of neurophysiological activity at time k in each node of a brain network. The model parameters are the system matrix A, the input matrix B, and the input signals u.
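As a toy illustration, the dynamics of System 1 can be simulated with white driving noise; the matrices below are random stand-ins, not connectome data (a minimal sketch):

```python
import numpy as np

# Hypothetical sketch of the discrete linear time-invariant model
# x(k+1) = A x(k) + B u(k); sizes and matrices are illustrative only.
rng = np.random.default_rng(0)
n = 5                                   # number of brain regions (toy size)

A = rng.random((n, n))
A = A / (np.linalg.norm(A, 2) + 1.0)    # shrink the spectral radius below 1 (stable)
B = np.eye(n)                           # input matrix (identity: all nodes driven)

def simulate(A, B, x0, steps, rng):
    """Propagate x(k+1) = A x(k) + B u(k) with white-noise inputs u."""
    x = x0.copy()
    traj = [x]
    for _ in range(steps):
        u = rng.standard_normal(B.shape[1])   # white driving noise
        x = A @ x + B @ u
        traj.append(x)
    return np.array(traj)

traj = simulate(A, B, np.zeros(n), steps=100, rng=rng)
```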
Roberto Galán proposed such a model based on the linearization and discretization of a general Wilson-Cowan model [1]. The author obtained A as a linear transformation of the structural connectivity matrix (i.e., the connectome), fixed B as the identity matrix, modelled u as white-noise signals, and derived an analytical expression for the state covariance matrix of the nodal dynamics. This model was further explored by Honey and colleagues [2]. Building on this work, Gu et al. leveraged the established theory of linear systems to investigate the controllability of brain networks [3]. By choosing B as the i-th canonical vector, they derived quantitative control properties of each node i through the computation of the controllability Gramian. Later, Gu et al. focused on optimal trajectories of brain state transitions [4] and computed the input signals u minimizing the energy of transitions between predefined brain states. To this end, they took a hypothesis-driven approach and fixed the input matrix B based on a priori knowledge. The authors pointed to the theoretical and empirical motivations for considering a set of control regions (multipoint control) instead of a single input node. However, the question of how to identify the control set associated with a given brain state from empirical data remains open.
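The single-node controllability analysis can be sketched as follows. The finite-horizon approximation of the Gramian, the toy system matrix, and the trace-based nodal score are illustrative choices, not the exact computation of [3]:

```python
import numpy as np

# Toy illustration (not the authors' code): controllability Gramian
# W_i = sum_{k>=0} A^k b_i b_i^T (A^T)^k when B is the i-th canonical vector.
rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n))
A = A / (np.linalg.norm(A, 2) + 1.0)    # ensure stability so the sum converges

def gramian_single_node(A, i, horizon=200):
    """Finite-horizon approximation of the controllability Gramian for node i."""
    n = A.shape[0]
    b = np.zeros((n, 1))
    b[i, 0] = 1.0                       # i-th canonical vector as input matrix
    W = np.zeros((n, n))
    Ak_b = b
    for _ in range(horizon):
        W += Ak_b @ Ak_b.T              # accumulate A^k b b^T (A^T)^k
        Ak_b = A @ Ak_b
    return W

W0 = gramian_single_node(A, i=0)
score = np.trace(W0)                    # a common nodal controllability score
```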
In parallel, recent studies have investigated how information propagates through the white-matter wiring to generate the observed patterns of functional activity [5]. A spectrum of communication strategies has been investigated [6], among which are several models based on information diffusion [7,8,9,10]. These models account for the autonomous dynamics of System 1 through modifications of the system matrix A [11], in the absence of external stimulation (B = 0).
Here, we propose a principled method to identify state-specific sets of control regions from empirical data. To this end, we use System 1 with A describing a Laplacian diffusion dynamics [7], although the general framework is valid for other dynamics. We derive a model of correlation-based functional connectivity that is linked to the controllability Gramian, and we find the input matrix B such that the similarity between modelled and empirical functional connectivity is maximized, assuming white driving-noise signals u. In our analysis, we weighted the structural connection between two regions of the connectome as the number of streamlines reconstructed by the tractography algorithm between the two regions, normalized by the volume of the regions. This normalization was proposed by Hagmann and colleagues [12] to mitigate the bias due to the variable size of ROIs, larger ROIs receiving more reconstructed streamlines. In Supplementary Figure S2, we observe that some regions, such as the cerebellum, are particularly affected by this normalization. Alternative weightings of structural connections exist in the literature, based for instance on the length of reconstructed streamlines, their fractional anisotropy, or the apparent diffusion coefficient. Choosing the edge-weighting scheme remains an open question in diffusion MRI connectomics [13], and our results should be interpreted with respect to this choice, since nodal strength has been shown to be linked to the chance of an ROI being selected as an input node.
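The modelling pipeline above can be sketched on a toy connectome; this is a hedged illustration, not the authors' implementation. The symmetric matrix S stands in for volume-normalized streamline counts, the parameters alpha and rho are illustrative, and the control set in B is fixed by hand rather than optimized:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy symmetric connectome (stand-in for volume-normalized streamline counts)
rng = np.random.default_rng(2)
n = 8
S = rng.random((n, n))
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)

L = np.diag(S.sum(axis=1)) - S            # graph Laplacian of the connectome
alpha = 0.5 / np.linalg.eigvalsh(L).max() # illustrative diffusion rate
rho = 0.95                                # leak keeping the spectral radius < 1
A = rho * (np.eye(n) - alpha * L)         # one-step Laplacian diffusion dynamics

B = np.eye(n)[:, :3]                      # hypothetical control set: nodes 0, 1, 2

# Stationary covariance of the noise-driven system:
# solves A Sigma A^T - Sigma + B B^T = 0, i.e. Sigma = sum_k A^k B B^T (A^T)^k
Sigma = solve_discrete_lyapunov(A, B @ B.T)

# Structure-informed FC: rescale the covariance to a correlation matrix.
d = np.sqrt(np.diag(Sigma))
FC_SI = Sigma / np.outer(d, d)
```

In the actual framework the input matrix B is optimized so that the similarity between the modelled and empirical FC is maximal; here it is fixed for illustration only.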

Note 3: Impact of Global Signal Regression
Global Signal Regression (GSR) is a highly debated preprocessing operation [14,15,16,17] because the global signal is thought to include components of both neuronal and non-neuronal origin. Here, we can anticipate that GSR will have a negative impact on the correlation score between structure-informed and empirical functional connectivity. On the one hand, the solution Σ of the Lyapunov equation used in our model is written Σ = ∑_{k=0}^{∞} A^k B Bᵀ (Aᵀ)^k and cannot have negative entries with our choice for A and B. On the other hand, GSR has been shown to introduce negative correlations in empirical FC [18]. Therefore, our model is expected to show lower performance when GSR is applied to the fMRI data, because it cannot predict the negative correlations that are introduced. To illustrate this, we reproduced Figures 2A, 2B, 3A and 3B of the manuscript using fMRI data that underwent GSR. In Supplementary Figure S13A, we observe that correlation scores are indeed lower for all states, with the relational processing task being the most affected. Compared to the results of the main text, fewer control ROIs are selected for all states except the resting state (Supplementary Figure S13B). Our previous observation that subcortical areas are consistently selected across states remains valid (Supplementary Figure S13C and D for the motor task and the resting state, respectively; results not shown for the other tasks).
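GSR itself can be sketched on synthetic data as the least-squares removal of the mean time series from every region; this is a minimal illustration of the operation, not the preprocessing code used here. Note that the residuals sum to zero at each time point, which forces negative entries into the resulting FC matrix:

```python
import numpy as np

# Minimal sketch of Global Signal Regression on a (time x regions) BOLD array.
# The data are synthetic: independent noise plus a shared global component.
rng = np.random.default_rng(3)
T, n = 200, 10
bold = rng.standard_normal((T, n)) + 2.0 * rng.standard_normal((T, 1))

def global_signal_regression(X):
    """Regress the mean time series out of every region's time series."""
    g = X.mean(axis=1, keepdims=True)           # global signal
    g = g - g.mean()                            # center in time
    Xc = X - X.mean(axis=0, keepdims=True)      # center each region
    beta = (Xc.T @ g) / (g.T @ g)               # per-region regression weights
    return Xc - g @ beta.T                      # residuals orthogonal to g

clean = global_signal_regression(bold)
fc = np.corrcoef(clean.T)                       # empirical FC after GSR
```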

Note 4: Structure-function correlation in open and closed subsystems
In order to compute the structure-function correlation score for a subsystem, one can consider two options: (i) comparing the edges with both extremities belonging to the subsystem ('closed' subsystem) or (ii) comparing the edges with at least one extremity belonging to the subsystem ('open' subsystem).
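The two edge-selection rules can be sketched with boolean masks over the upper-triangular edges; the subsystem labels and FC matrices below are synthetic stand-ins:

```python
import numpy as np

# Hedged sketch of the 'closed' vs 'open' subsystem scores; only the masking
# logic matters, the matrices and labels are synthetic.
rng = np.random.default_rng(4)
n = 12
labels = np.array([0] * 4 + [1] * 4 + [2] * 4)   # hypothetical subsystem assignment

FC_emp = rng.random((n, n)); FC_emp = (FC_emp + FC_emp.T) / 2
FC_model = FC_emp + 0.1 * rng.standard_normal((n, n))
FC_model = (FC_model + FC_model.T) / 2

def subsystem_score(FC_a, FC_b, labels, s, mode="open"):
    """Pearson correlation over a subsystem's edges.
    'closed': both endpoints in s; 'open': at least one endpoint in s."""
    iu = np.triu_indices_from(FC_a, k=1)
    in_s = labels == s
    if mode == "closed":
        mask = in_s[iu[0]] & in_s[iu[1]]
    else:
        mask = in_s[iu[0]] | in_s[iu[1]]
    return np.corrcoef(FC_a[iu][mask], FC_b[iu][mask])[0, 1]

open_score = subsystem_score(FC_emp, FC_model, labels, s=0, mode="open")
closed_score = subsystem_score(FC_emp, FC_model, labels, s=0, mode="closed")
```

The open score is computed over strictly more edges (within-subsystem plus all cross-subsystem edges), which is why the two options can yield different gradients.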
The first option ignores all edges linking two subsystems, while the second includes these edges in the computation of the correlation score for both subsystems. In the manuscript, Figure 3D is obtained with open subsystems and shows a gradient of structure-function coupling in resting state, from high correlation in primary sensory areas to low correlation in regions associated with higher-order cognitive functions [19]. In Supplementary Figure S8, we show the result of the same analysis with closed subsystems. This result is similar to that reported by Tipnis et al. [20], and the gradient observed in resting state differs from that of the manuscript. We also observe that the visual subsystem is the most affected by taking into account the edges linking different subsystems, indicating that the connections between visual areas and other systems show a closer structure-function relationship than the intrinsic connections of the visual subsystem.

Figure S4: Robustness with respect to consensus input set thresholds. A) (resp. B, C, D) Correlation score between structure-informed and empirical functional connectivity with respect to the number of input nodes allowed U (group-level). F_SI is obtained using the consensus input set formed by ROIs selected at least 10 (resp. 15, 20, 30) times over 30 optimization runs. E) (resp. F, G, H) Size of the corresponding consensus input set with respect to the number of input nodes allowed U (group-level). The grey line denotes the identity function y = x.