Abstract
The application of graph theory to model the complex structure and function of the brain has shed new light on its organization, prompting the emergence of network neuroscience. Despite the tremendous progress that has been achieved in this field, still relatively few methods exploit the topology of brain networks to analyze brain activity. Recent attempts in this direction have leveraged, on the one hand, graph spectral analysis (to decompose brain connectivity into eigenmodes or gradients) and, on the other hand, graph signal processing (to decompose brain activity "coupled to" an underlying network into graph Fourier modes). These studies have used a variety of imaging techniques (e.g., fMRI, electroencephalography, diffusion-weighted and myelin-sensitive imaging) and connectivity estimators to model brain networks. Results are promising in terms of interpretability and functional relevance, but methodologies and terminology are variable. The goals of this paper are twofold. First, we summarize recent contributions related to connectivity gradients and graph signal processing, and attempt a clarification of the terminology and methods used in the field, while pointing out current methodological limitations. Second, we discuss the perspective that the functional relevance of connectivity gradients could be fruitfully exploited by considering them as graph Fourier bases of brain activity.
INTRODUCTION
Modern attempts at understanding brain function have leveraged graph theory to grasp complex properties of neuronal networks, giving rise to the field of network neuroscience (Bassett & Sporns, 2017; Sporns, 2017). Modeling brain organization using graphs has led to fascinating results, such as the brain's hypothetical rich-club organization (van den Heuvel & Sporns, 2011), the cortical organization in functionally relevant modules (Sporns & Betzel, 2016), as well as common wiring principles across species (Goulas, Majka, Rosa, & Hilgetag, 2019). Despite the tremendous progress that has been achieved in network neuroscience, surprisingly few methods, such as graph signal processing (GSP; Shuman, Narang, Frossard, Ortega, & Vandergheynst, 2013), exploit brain connectivity (i.e., take into account the topology of brain networks) to characterize brain activity (Ju & Bassett, 2020).
First steps in the direction of exploiting connectivity graphs in the analysis of brain signals have been made using spectral graph theory (Chung, 1997; Mhaskar, 2016). The underlying idea of this theory is to interpret the eigenvectors of graph Laplacians as harmonic components. Increasing evidence for the functional relevance of these spectral components of brain networks (i.e., connectivity gradients, eigenmodes, or harmonics; Atasoy, Donnelly, & Pearson, 2016; Belkin & Niyogi, 2003; W. Huang et al., 2016; see Table 1) has recently been provided by a variety of approaches (Atasoy, Deco, Kringelbach, & Pearson, 2017; Margulies, Ghosh, Goulas, Falkiewicz, & Huntenburg, 2016; M. B. Wang, Owen, Mukherjee, & Raj, 2017).
Table 1.

| Term | Approach | Toolbox | Key references |
| --- | --- | --- | --- |
| Connectome Harmonics, Harmonic Brain Modes | Laplacian eigenvectors u_i of large-scale brain networks estimated from DWI and anatomical MRI. They are interpreted as spectral components of spatiotemporal neural activity and compared with resting-state networks and oscillatory patterns (neural field model). | | Atasoy, Deco, et al., 2017; Atasoy et al., 2016; Atasoy et al., 2017 |
| Brain Gradients | Eigenvectors u_i obtained by applying diffusion map embedding to large-scale functional (Margulies et al., 2016), microstructural (Paquola et al., 2019), or spontaneous oscillation (Mahjoory et al., 2019) networks estimated from resting-state fMRI data, myelin-sensitive MRI data, or MEG signals, respectively. They reveal macroscale axes of cortical organization with functional and neurodevelopmental relevance. | BrainSpace | Hong et al., 2019; Margulies et al., 2016; Mckeown et al., 2020; Paquola et al., 2020; Paquola et al., 2019 |
| Connectopies | Laplacian eigenvectors u_i of the graph obtained by computing the correlation between voxels within a selected ROI and the rest of the gray matter voxels. The approach reveals the fine-grained topographic organization of a brain region's connectivity (e.g., primary motor or visual cortex). | Congrads | Haak, Marquand, & Beckmann, 2018; Schröder et al., 2015 |
| Brain Activity Eigenmodes | Excitatory or inhibitory neural activity expanded in terms of spatial eigenmodes of the cortical mesh, obtained by solving corticothalamic neural field theory equations. These brain eigenmodes show high similarity with spherical harmonics (for cortical folding = 0) and with DWI connectivity eigenvectors (graph Laplacian). | | Robinson et al., 2016 |
| Fourier/Harmonic Modes | Graph Fourier modes obtained by applying the graph Fourier transform to a signal (e.g., fMRI, W. Huang et al., 2016; Preti, Bolton, & Van De Ville, 2017; or EEG, Glomb et al., 2020) defined on a graph (e.g., a structural connectivity graph estimated from DWI). This analysis reveals low- (high-) frequency modes that are aligned with (liberal with respect to) the underlying graph structure. | PyGSP | Glomb et al., 2020; B. W. Huang et al., 2018; Medaglia et al., 2017; Preti & Van De Ville, 2019 |
| Graph Neural Fields | Excitatory or inhibitory neural activity expressed as stochastic neural field equations on the human connectome graph. This approach combines Wilson-Cowan neural field equations and graph signal processing to model and analyze whole-brain activity. | Code available† | Aqil et al., 2020 |

† https://github.com/marcoaqil/Graph-Stochastic-Wilson-Cowan-Model
GSP, by modeling attributes of network nodes as signals on a graph (Ortega, Frossard, Kovacevic, Moura, & Vandergheynst, 2018), takes a step further, as it allows a joint analysis of brain activity and connectivity. The emergence of GSP is mostly due to the elegant and powerful analogy between graph Laplacian eigenvectors and classical Fourier analysis (Girault, 2015) and to the possibility of decomposing a signal "living on a graph" as a combination of spatial harmonics. Recent works have exploited GSP to decompose brain activity into graph Fourier modes, with encouraging results (Glomb, Queralt, Pascucci, & Tourbier, 2020; B. W. Huang et al., 2018; Medaglia et al., 2017; Preti & Van De Ville, 2019).
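To make the graph Fourier analogy concrete, the following minimal sketch (Python/NumPy, on a toy random connectivity matrix rather than real neuroimaging data) builds the combinatorial Laplacian of a connectivity matrix, takes its eigendecomposition as the graph Fourier basis, and projects a signal defined on the nodes onto that basis.

```python
import numpy as np

def graph_fourier_basis(W):
    """Graph Fourier basis: eigendecomposition of the combinatorial Laplacian L = D - W.

    W is a symmetric (n_nodes x n_nodes) connectivity matrix with non-negative
    weights and zero diagonal; eigh returns eigenvalues in ascending order
    (i.e., increasing graph frequency) and an orthonormal eigenvector matrix U.
    """
    L = np.diag(W.sum(axis=1)) - W
    eigvals, U = np.linalg.eigh(L)
    return eigvals, U

def gft(U, x):
    """Graph Fourier transform: spectral coefficients of the graph signal x."""
    return U.T @ x

def igft(U, x_hat):
    """Inverse graph Fourier transform: back to the node domain."""
    return U @ x_hat

# Toy example: a random symmetric "connectivity" matrix over 68 regions and one
# activity pattern (one time point) defined on its nodes.
rng = np.random.default_rng(0)
A = rng.random((68, 68))
W = np.triu(A, 1) + np.triu(A, 1).T
x = rng.standard_normal(68)

eigvals, U = graph_fourier_basis(W)
x_hat = gft(U, x)                       # decomposition into graph Fourier modes
assert np.allclose(igft(U, x_hat), x)   # the basis is orthonormal, so reconstruction is exact
```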
The gradients/GSP framework is complementary to the classical approach of mapping cortical area functions (brain parcellations) and discrete networks associated with a particular condition or task. By exploiting the topology of brain networks, this framework allows brain activity or structure to be decomposed as a continuum of spectral components, better describing subregional heterogeneity and multiplicity than parcellation approaches, which consider uniform brain regions (nodes or parcels; Haak & Beckmann, 2020). This novel approach has been successfully applied to the analysis of the healthy or pathological brain in an increasing number of studies (Glomb et al., 2020; Medaglia et al., 2017; Mortaheb et al., 2019; Preti & Van De Ville, 2019). However, readers approaching the recent literature for the first time are often confronted with different notations, terminology, and methods (e.g., Laplacian embedding, diffusion maps, graph Fourier modes; see Table 2) that are not always consistent between the GSP and "gradients" communities and may be difficult to unravel.
Table 2.

| Approach | Notation | Method | Key references |
| --- | --- | --- | --- |
| Laplacian Embedding or Eigenmaps | Operator: L = D − W (Laplacian); L_n = D^(−1/2) L D^(−1/2) (normalized Laplacian); L_rw = D^(−1) L (random walk Laplacian). Low-dimensional representation: G = [u_1, u_2, u_3, …] | Spectral decomposition of a graph G in eigenvectors of the graph Laplacian L. It is the discrete counterpart (on graphs) of the Laplace-Beltrami operator on continuous manifolds. The Laplacian eigenvectors associated with the lowest eigenvalues provide a dimensionality-reduction mapping that preserves locality. | Haak et al., 2018; Schröder et al., 2015; Belkin & Niyogi, 2003 |
| Diffusion Maps | Operator: W^(α) = D^(−α) W D^(−α); P^(α) = (D^(α))^(−1) W^(α) (transition probability). Low-dimensional representation: G = [u_1, u_2, …] | Diffusion map embedding treats the graph G as the basis of a diffusion process. Diffusion maps are a family of graph Laplacians that depend on a diffusion parameter α. They can be employed to embed the data into a Euclidean space where the probability of transition between nodes defines the Euclidean distance between the corresponding points in the embedding space. | Hong et al., 2019; Huntenburg et al., 2018; Langs et al., 2016; Margulies et al., 2016; Mckeown et al., 2020; Coifman & Lafon, 2006; Coifman et al., 2005 |
| Graph Signal Processing | Laplacian eigendecomposition: L = UΛU^T. Graph Fourier transform of X: X̂ = U^T X | Expansion of a signal (or a stochastic function) X in terms of the eigenvectors of the underlying graph Laplacian L. Laplacian eigenvectors carry a notion of spatial frequency: eigenvectors corresponding to low eigenvalues vary smoothly across the graph; those corresponding to large eigenvalues have higher spatial frequencies (i.e., are more likely to take different values across adjacent nodes). | Aqil et al., 2020; Glomb et al., 2020; B. W. Huang et al., 2018; Medaglia et al., 2017; Preti & Van De Ville, 2019; Ortega et al., 2018; Shuman et al., 2013 |
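For readers who want to relate the notation of Table 2 to an implementation, the following sketch computes a diffusion map embedding with plain NumPy, using the anisotropic normalization of Coifman and Lafon (2006); the affinity matrix, the value of α, and the diffusion time are placeholders, and practical analyses typically rely on dedicated toolboxes such as BrainSpace.

```python
import numpy as np

def diffusion_map(W, alpha=0.5, n_components=5, diffusion_time=1):
    """Minimal diffusion map embedding of a symmetric, non-negative affinity matrix W.

    Steps (Coifman & Lafon, 2006): anisotropic normalization by node strengths,
    row normalization into a transition matrix P, eigendecomposition of P, and
    scaling of the nontrivial eigenvectors by their eigenvalues raised to the
    diffusion time. Kernel choice and sparsification, which matter in practice,
    are omitted here.
    """
    d = W.sum(axis=1)
    W_alpha = W / np.outer(d ** alpha, d ** alpha)     # W^(alpha) = D^-alpha W D^-alpha
    P = W_alpha / W_alpha.sum(axis=1, keepdims=True)   # P^(alpha): row-stochastic transition matrix
    eigvals, eigvecs = np.linalg.eig(P)                # real spectrum (P is similar to a symmetric matrix)
    order = np.argsort(-eigvals.real)
    eigvals = eigvals.real[order]
    eigvecs = eigvecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1); the remaining columns
    # are the "gradients", ordered by decreasing eigenvalue.
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1] ** diffusion_time

# Toy affinity matrix standing in for, e.g., a thresholded functional connectivity matrix.
rng = np.random.default_rng(0)
A = rng.random((100, 100))
W = np.triu(A, 1) + np.triu(A, 1).T
gradients = diffusion_map(W)            # gradients[:, 0] is the principal gradient
```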
The goal of this paper is first to summarize recent contributions that have used connectivity gradients and GSP for neuroimaging. In particular, we aim to clarify terminology, compare different methodologies, and provide resources (see Box 1) and key references (Tables 1 and 2). The second goal of this paper is to make a link between the two frameworks by discussing gradients as graph Fourier bases. We argue that using GSP for the analysis of multimodal neuroimaging data will pave the way to more interpretable analysis methods.
GRADIENTS OF BRAIN CONNECTIVITY
Connectivity gradients are obtained from the graph spectral decomposition of the connectivity matrix. As described in Box 2, they correspond to Laplacian eigenvectors u of connectivity graphs (the reader can also refer to Bajada et al. [2020] and Vos de Wael et al. [2020] for a detailed description and step-by-step tutorials of connectivity gradient analysis). Connectivity gradients provide a representation of cortical organization as a continuum of spatial harmonics that can overlap in space (graph nodes) in the absence of hard boundaries. This approach is therefore complementary to a brain atlas that maps cortical areas onto a set of discontinuous functional or structural regions (brain parcellations; Eickhoff, Yeo, & Genon, 2018; Glasser et al., 2017; Thomas Yeo et al., 2011). A varied (and sometimes inconsistent) terminology has been used in relation to connectivity gradients, even though the underlying methods are grounded in the same fundamental approach. While a comprehensive review of the growing gradients literature is outside the scope of this paper, Tables 1 and 2 aim to clarify terminology and describe the corresponding methodologies, while providing some key references.
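In practice, most gradient studies rely on dedicated toolboxes rather than hand-rolled decompositions. The sketch below illustrates what such an analysis might look like with the BrainSpace toolbox listed in Table 1 (Vos de Wael et al., 2020); the file name is a placeholder, and the parameter names reflect our reading of the BrainSpace GradientMaps interface, so the current documentation should be checked before reuse.

```python
import numpy as np
from brainspace.gradient import GradientMaps  # BrainSpace toolbox (Table 1)

# Placeholder input: a region-by-region functional connectivity matrix.
fc = np.load("functional_connectivity.npy")

# Diffusion-map gradients of the affinity derived from the FC matrix.
gm = GradientMaps(
    n_components=10,            # number of gradients to retain
    approach="dm",              # "dm" = diffusion map embedding, "le" = Laplacian eigenmaps
    kernel="normalized_angle",  # affinity kernel applied to the connectivity matrix
    random_state=0,
)
gm.fit(fc)

gradients = gm.gradients_       # (n_regions, n_components) gradient maps
eigenvalues = gm.lambdas_       # eigenvalues associated with each gradient
```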
Connectivity gradients estimated from functional (Margulies et al., 2016), structural (Robineau et al., 2017), or microstructural (Paquola et al., 2019) brain data have been applied to study the hierarchical organization of brain structure and function. The pioneering work of Margulies and colleagues (Huntenburg, Bazin, & Margulies, 2018; Margulies et al., 2016) introduced the concept of gradients to indicate the eigenvectors obtained by applying diffusion map embedding to a resting-state fMRI connectivity matrix. Interestingly, the first nontrivial eigenvector of the spectral decomposition (corresponding to eigenvalue λ1) revealed a macroscale cortical organization spanning from unimodal sensory areas to transmodal association areas. Atasoy and colleagues (Atasoy et al., 2016) obtained connectome harmonics through the Laplacian decomposition of diffusion-weighted imaging (DWI) connectivity and showed that they can predict resting-state functional network activity. Results indicated that visual, sensory-motor, and limbic networks more closely matched low-frequency harmonics, while higher cognitive function networks spanned a broader range of brain modes. Another recent work on the brain structure-function relation (Park et al., 2021) compared structural connectome gradients and functional states estimated from fMRI using hidden Markov autoregressive models. Results from Park and colleagues revealed a strong structure-function coupling for functional states anchored in sensorimotor areas, whereas functional states associated with transmodal areas relied more heavily on rich-club nodes and long-range connectivity. In Paquola et al. (2019), functional gradients were shown to only partially align with microstructural gradients obtained from histology and high-field MRI: while both axes of variance originate in primary sensory areas, the functional gradient identified in Margulies et al. (2016) arches towards default mode and frontoparietal networks, while its microstructural counterpart extends to limbic cortices.
Earlier work applied Laplacian decomposition to the cortical surface mesh (instead of connectivity graphs) to extract anatomically relevant features. In Germanaud et al. (2012), Laplacian-based spectral analysis was applied to the cortical mesh to investigate gyrification complexity. Using a segmentation of the cortex based on Laplacian spectral decomposition, the authors were able to identify developmentally relevant features such as primary, secondary, and tertiary folds. Spatial eigenmodes of the curved cortex were also used in Robinson et al. (2016) to solve corticothalamic neural field equations and estimate so-called activity eigenmodes of the brain (Deco, Jirsa, Robinson, Breakspear, & Friston, 2008). The authors also showed that excitatory or inhibitory states can be reconstructed through a finite number of eigenfunctions, and that these largely overlap with the Laplacian eigenvectors of a connectivity matrix estimated from DWI. Interestingly, and in line with our attempt at giving a unified view of the gradients and GSP methodologies, this approach linking neural field equations and Laplacian decomposition has been revisited from a GSP perspective in a recent publication (Aqil, Atasoy, Kringelbach, & Hindriks, 2020) introducing the graph neural fields framework (see Table 1).
In contrast to the whole-brain approaches described so far, Haak and colleagues (Haak, Marquand, & Beckmann, 2018; Schröder, Haak, Jimenez, Beckmann, & Doeller, 2015) proposed a framework to map regional connection topography (i.e., connectopies) using Laplacian eigenmaps (Belkin & Niyogi, 2003). For every voxel in the selected region, a connectivity fingerprint was obtained by computing the correlation with the fMRI time series of all the other gray matter voxels; nonlinear manifold learning based on the Laplacian decomposition of the connectivity matrix was then applied to extract the corresponding eigenvectors, named connectopies (i.e., connection topographies). The authors underline the utility of using nonlinear dimensionality reduction based on connectivity (instead of linear approaches such as PCA and ICA) to identify more biologically plausible maps of functional organization. Following this perspective, recent work has investigated the relationship between connectivity gradients and cognitive processes, showing that gradients are altered depending on the ongoing cognitive experience or psychological state (Atasoy, Deco, et al., 2017; Ito, Hearne, & Cole, 2020; Lanzoni et al., 2020; Mckeown et al., 2020; X. Wang, Margulies, Smallwood, & Jefferies, 2020). In Mckeown et al. (2020), for example, macroscale gradients of functional connectivity at rest are related to ongoing thoughts, and different gradient profiles are associated with different cognitive tasks (e.g., problem-solving vs. past-related thoughts).
Taken together, these studies underline the potential of data-driven dimensionality reduction based on brain networks to reveal principles of large-scale cortical organization and to identify gradual changes in functional, white matter, and cytoarchitectonic architecture across conditions. They also suggest that connectivity gradients carry meaningful functional information, and thus that it might be particularly sensible to use them as a basis to analyze brain activity using GSP.
GSP FOR NEUROIMAGING
The fundamental difference between GSP and graph theory is that while the latter provides tools to analyze and manipulate graphs, GSP focuses on analyzing signals on graphs. In other words, GSP leverages concepts developed in spectral graph theory to translate Fourier analysis to signals on graphs. A general overview of the GSP framework illustrating how to perform operations on graphs (spectral analysis, convolution, filtering) is presented in Ortega et al. (2018) and Shuman et al. (2013), and a first review of GSP for neuroimaging was recently proposed (W. Huang et al., 2017). Using a graph estimated from the white matter tracts of the brain, the authors detail how to apply the graph Fourier transform to analyze brain activity and how to exploit the corresponding graph frequency bands to interpret the data. Specifically, low frequencies correspond to brain activity that follows closely (i.e., is "aligned with") the underlying white matter connectivity, while high frequencies are characterized by brain activity patterns that can be seen as "liberal" with respect to the network structure.
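A minimal version of this aligned/liberal decomposition can be sketched as follows (NumPy, toy data). The cutoff n_low between low and high graph frequencies is an arbitrary choice here, whereas published studies select it in a principled way, for example by splitting the spectral energy into two equal halves (Preti & Van De Ville, 2019).

```python
import numpy as np

def aligned_liberal_split(W, X, n_low):
    """Split graph signals X (n_regions x n_timepoints) into a component "aligned"
    with the graph W (low graph frequencies) and a "liberal" component
    (high graph frequencies), via ideal low-/high-pass graph filters.
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian of the structural graph
    _, U = np.linalg.eigh(L)                # columns ordered by increasing graph frequency
    U_low, U_high = U[:, :n_low], U[:, n_low:]
    X_aligned = U_low @ (U_low.T @ X)       # low-pass graph filtering
    X_liberal = U_high @ (U_high.T @ X)     # high-pass graph filtering
    return X_aligned, X_liberal

# Toy structural graph and fMRI-like time series.
rng = np.random.default_rng(0)
A = rng.random((68, 68))
W = np.triu(A, 1) + np.triu(A, 1).T
X = rng.standard_normal((68, 200))

X_aligned, X_liberal = aligned_liberal_split(W, X, n_low=10)
# A node-wise decoupling index can then be defined, e.g., as the ratio of the
# liberal to the aligned signal norms over time.
decoupling = np.linalg.norm(X_liberal, axis=1) / np.linalg.norm(X_aligned, axis=1)
```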
This aligned/liberal decomposition provides a simple and interpretable framework, which has been successfully applied to study the decoupling between brain structure and function in several recent studies (Glomb et al., 2020; Medaglia et al., 2017; Preti & Van De Ville, 2019; Sihag et al., 2020). In Medaglia et al. (2017), the application of GSP to model fMRI activity on a DWI connectivity graph allowed a novel analysis of brain functional-structural coupling. The authors investigated to what extent fMRI time series are constrained by the underlying structure. Results indicate that aligned signals were concentrated within default mode and frontoparietal systems, while the subcortical system included both aligned and liberal modes. Interestingly, these findings are interpreted in terms of the flexibility of brain dynamics and linked to cognitive performance, showing that GSP can discriminate behaviorally relevant signals. Preti and Van De Ville (2019) also decomposed resting-state fMRI time series using the Laplacian eigenvectors of structural connectivity, revealing a gradient of large-scale cortical organization spanning from sensory-motor areas (with high functional-structural alignment) to higher cognitive function areas (whose activity is more decoupled from the underlying structure). In seminal work by Glomb and colleagues (Glomb et al., 2020), the GSP framework allowed for a sparser representation of source EEG data than the conventional region-by-region analysis. A few structural connectivity harmonics were shown to capture EEG task dynamics and, more importantly, revealed significant patterns of activation involving the entire cortex that were missed by the classical region-by-region analysis. Together with a high-density EEG study in patients with consciousness disorders (Mortaheb et al., 2019), this is one of the only works applying GSP to the analysis of EEG signals, and it indicates that network harmonics also have functional significance, as they can be considered an orthogonal basis of large-scale EEG dynamics.
Going beyond metrics and inference-based approaches, other studies have combined GSP and machine learning to derive features from the graph Fourier transform (Brahim & Farrugia, 2020; Ménoret, Farrugia, Pasdeloup, & Gripon, 2017; Pilavci & Farrugia, 2019; Xu, Li, Zhu, & Wu, 2019). For instance, in Brahim and Farrugia (2020), a combination of GSP and machine learning was proposed for autism spectrum disorder classification. More specifically, the authors showed that the analysis of fMRI data could be enriched by projecting resting-state fMRI (rs-fMRI) time series on a structural brain graph, as demonstrated by substantial classification performance gains. In Pilavci and Farrugia (2019), the authors presented a feature extraction approach based on machine learning and spectral wavelets on brain graphs. In Zhang, Tetrel, Thirion, and Bellec (2020), deep neural networks (graph convolutional networks) built on a functional graph Laplacian embedding are used to classify task fMRI time series, in a joint GSP-deep learning framework. Finally, other approaches have taken advantage of graphs to denoise brain signals, such as in Kim et al. (2019), where the authors simultaneously clean brain signals and learn the associated graph.
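As an illustration of this general strategy (and not of any specific published pipeline), the sketch below uses the energy of graph Fourier coefficients as features for a standard classifier; the graph, time series, and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic placeholders: a structural graph, rs-fMRI-like time series, and labels.
rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 40, 68, 200
A = rng.random((n_regions, n_regions))
W = np.triu(A, 1) + np.triu(A, 1).T
ts = rng.standard_normal((n_subjects, n_regions, n_timepoints))
y = rng.integers(0, 2, n_subjects)

# Graph Fourier basis of the structural graph.
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)

# Project each subject's time series on the graph Fourier basis, then summarize
# each graph frequency by its mean temporal energy.
ts_hat = np.einsum("kn,snt->skt", U.T, ts)   # spectral coefficients: subjects x frequencies x time
features = (ts_hat ** 2).mean(axis=2)        # one energy value per graph frequency

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, features, y, cv=5).mean())   # chance level here, since the data are random
```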
DISCUSSION AND PERSPECTIVES
In this paper we provided an overview of work applying Laplacian spectral decomposition and GSP to the analysis of brain signals. We attempted to elucidate the terminology and related approaches used in the "brain gradients" and GSP communities, systematically describing recent promising results. In this section we discuss the potential of an integrated gradients/GSP framework to reveal a spectral basis of brain activity grounded in the topology of brain connectivity. We also bring up open questions and methodological challenges raised by this novel approach.
Connectivity Gradients as a Fourier Basis of Brain Activity
Recent work applying Laplacian-based spectral decomposition to brain networks has revealed functionally, developmentally, and anatomically relevant organizational hierarchies of the brain that have also been related to cognitive performance. Connectivity gradients obtained by such graph spectral decomposition have not only been shown to represent relevant axes of brain organization, but can also be seen, within the GSP framework, as a Fourier basis on which to decompose brain activity. In this sense they provide a new spatial-frequency language to characterize patterns of neural activity and a novel perspective for probing brain dynamics.
A few studies have investigated how the principal (Ito et al., 2020; Lanzoni et al., 2020; X. Wang et al., 2020) or second (Mckeown et al., 2020) gradient of rs-fMRI connectivity relates to various tasks, suggesting that the cortical organization encoded in connectivity gradients supports specific cognitive or semantic functions. The GSP framework takes the analysis further by using the whole spectrum of brain gradients as a Fourier basis to decompose brain activity. For instance, the large-scale fMRI connectivity gradients identified in Margulies et al. (2016) could be exploited to disentangle brain activity measured during a complex cognitive task into unimodal (e.g., related to sensory processing) and transmodal (e.g., related to ongoing thoughts) patterns. This analysis would extend the work of Ito et al. (2020) relating local and distributed processes to the organization of the cortex in unimodal and transmodal areas. Moreover, by decomposing brain signals as a function of the eigenvectors (or gradients) of the underlying connectivity, the GSP framework uses the information encoded in higher order connectivity gradients, whose functional relevance has scarcely been explored in the literature. For instance, in Preti and Van De Ville (2019), the spectrum of structural connectivity eigenvectors is split into low- and high-frequency components to define a binary decoupling index. Low frequencies correspond to signals coupled to structural connectivity, while high-frequency components are considered to be decoupled. The potential of the GSP framework could be further developed by considering the whole set of connectivity gradients, instead of a partition into low versus high frequencies, similar to the way classical Fourier analysis is used to decompose signals in the time domain. Continuing the analogy with the classical frequency domain, fundamental operations such as filtering and denoising can be generalized to brain signals on graphs by taking into account the full graph spectrum. For instance, artifactual components of brain activity (e.g., the ballistocardiogram artifact in simultaneous EEG-fMRI acquisitions; Allen, Polizzi, Krakow, Fish, & Lemieux, 1998; Lioi et al., 2020) could be reduced by filtering out the graph frequency component or band that best represents the artifact (i.e., that maximally correlates with the electrocardiogram signal).
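As a purely illustrative sketch of this idea (a hypothetical graph-domain notch filter, not a validated artifact-removal method), one could project channel- or region-level signals on the graph Fourier basis, identify the modes whose temporal coefficients correlate most with a reference artifact time course such as the ECG, and zero them out before reconstruction:

```python
import numpy as np

def remove_artifact_graph_modes(W, X, reference, n_remove=1):
    """Hypothetical graph-domain notch filter.

    X: signals on the graph nodes (n_nodes x n_timepoints);
    reference: artifact time course (e.g., ECG) of length n_timepoints.
    The graph Fourier modes whose temporal coefficients correlate most with the
    reference are zeroed out before inverse transformation.
    """
    L = np.diag(W.sum(axis=1)) - W
    _, U = np.linalg.eigh(L)
    X_hat = U.T @ X                                            # graph spectral coefficients over time
    corr = np.array([abs(np.corrcoef(c, reference)[0, 1]) for c in X_hat])
    X_hat[np.argsort(-corr)[:n_remove]] = 0.0                  # notch out the most artifact-related modes
    return U @ X_hat                                           # back to the node domain

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
A = rng.random((64, 64))
W = np.triu(A, 1) + np.triu(A, 1).T
eeg = rng.standard_normal((64, 1000))
ecg = rng.standard_normal(1000)
cleaned = remove_artifact_graph_modes(W, eeg, ecg, n_remove=2)
```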
Using a continuous set of dimensions (graph Fourier modes) for the analysis of brain dynamics is an approach that complements (rather than excludes) the more classical hard-boundary parcellations. Some processes may be best characterized in terms of nonoverlapping fixed regions, others in terms of delocalized, overlapping eigenmodes. In this sense, this graph modal approach may be more appropriate than modular analysis in describing complex cognitive states that depend on multiple overlapping phenomena. Neural patterns of ongoing activity could be seen as locations in a multidimensional state space constructed from large-scale brain gradients (Margulies et al., 2016). The "biological" validity of this approach can also be found in the intrinsic organization of brain tissues. Brain structure (and function) is organized into overlapping hierarchical components (Betzel & Bassett, 2017). It is well known, for instance, that the visual and auditory cortices are organized into topographic maps that reflect how sensory information is processed (i.e., retinotopic or tonotopic mapping; Silver, Ress, & Heeger, 2005). As a result, multiple and heterogeneous modes coexist in the same cortical area. Moreover, while the cytoarchitecture of a region can be considered uniform, the same region can be heterogeneous in terms of function, gene expression profile, or axonal projections (i.e., connectivity topography; Haak et al., 2018). This intrinsic complexity may be better represented by a continuum of functions than by a mosaic of brain areas (Haak & Beckmann, 2020).
The GSP approach also allows the efficient integration of different neuroimaging techniques (EEG, fMRI, DWI, MEG), thus exploiting complementary measurements of brain properties. Only a few studies have analyzed fMRI and EEG signals at rest using graph Fourier modes of the underlying structural graph (i.e., estimated from DWI). In the future, this framework could be applied to jointly decompose electrophysiological and functional time series using underlying structural topology, thus integrating different temporal and spatial scales in a multimodal analysis, with potential to shed new light on the complex interplay between brain function and structure. In addition, GSP could be extended to the analysis of brain signals during different tasks and (pathological) conditions, holding promise for developing more sensitive markers of disease.
Methodological Challenges and Future Directions
As described in Table 2 and discussed in Bajada et al. (2020), even if they are grounded in the same fundamental approach, various algorithms and similarity measures have been applied for graph inference and spectral analysis. These methodologies have different properties, and it is not clear which one should be used for which specific applications. For instance, different Laplacian matrices can be used for the spectral decomposition. The normalized Laplacian has the useful property that its spectrum is limited to the [0, 2] interval. However, its first eigenvector (associated with λ0 = 0) is not constant, unlike that of the combinatorial Laplacian, which makes the interpretation of eigenvectors in terms of spatial frequency less intuitive (Shuman et al., 2013). Similarly, some influential studies have used diffusion map embedding (Margulies et al., 2016; Mckeown et al., 2020; Paquola et al., 2019) with specific parameter choices (diffusion time and anisotropic diffusion parameter α) because of a series of properties that are advantageous for brain connectivity analysis. Future work should explore how different choices affect results and which metrics are best adapted to the analysis of specific graph structures or brain signals.
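The differences between Laplacian variants mentioned above are easy to verify numerically; the toy check below (random, dense, hence connected graph) shows that the normalized spectrum stays within [0, 2] and that, unlike in the combinatorial case, the first normalized eigenvector is proportional to the square root of the node strengths rather than constant.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((30, 30))
W = np.triu(A, 1) + np.triu(A, 1).T                 # toy dense (hence connected) graph

d = W.sum(axis=1)
L = np.diag(d) - W                                  # combinatorial Laplacian
Ln = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)    # normalized Laplacian

evals, evecs = np.linalg.eigh(L)
evals_n, evecs_n = np.linalg.eigh(Ln)

print(evals_n.min(), evals_n.max())                 # normalized spectrum is bounded by [0, 2]
print(np.ptp(evecs[:, 0]))                          # ~0: first combinatorial eigenvector is constant
print(np.ptp(evecs_n[:, 0] / np.sqrt(d)))           # ~0: first normalized eigenvector proportional to sqrt(strengths)
```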
Another point to clarify is the impact of the choice of parcellation on the connectivity gradient topography. In the pioneering work of Margulies et al. (2016), high-resolution rs-fMRI connectivity was computed between fMRI voxels, while in other work (Mckeown et al., 2020; Preti & Van De Ville, 2019; D. Wang, Yu, & Zou, 2020) different atlases were used to identify brain ROIs and then compute functional connectivity between them. It would be interesting to assess how the use of different parcellations affects the computation of connectivity gradients and whether it introduces bias in the characterization of connectivity topography, which is supposed to describe a continuous pattern of organization.
One important limitation of the studies we have reviewed is that temporal dependencies are not accounted for in the graph model. Indeed, most of the literature considers "constant" spatial dependencies between brain regions (i.e., structural graphs built using white matter tracts, or functional graphs estimated using statistics of brain activity). Several theoretical breakthroughs have been made to address dynamic properties (i.e., time-varying graphs), and their application to neuroimaging data could advance the understanding of the dynamics of cognitive processes. Promising frameworks to model time-varying aspects in graphs include graph Slepians (Van De Ville, Demesmaeker, & Preti, 2017), sparseness of temporal variation (Yamada, Tanaka, & Ortega, 2019), and the lapped Fourier transform (Lu & Ortega, 2019). Another avenue to model spatiotemporal dynamics could be to use deep learning models adapted to sequence modeling, combined with graph convolutional networks (J. Chen, Xu, Wu, & Zheng, 2018).
Finally, GSP is an active research field, and several recent theoretical proposals in GSP have not yet been applied to neuroimaging data but could potentially bring interesting breakthroughs. Considering generalized signal processing operations on graphs such as graph filters (Segarra, Mateos, Marques, & Ribeiro, 2017), graph wavelets (Hammond, Vandergheynst, & Gribonval, 2011; Pilavci & Farrugia, 2019), multiscale graphs (Tremblay & Borgnat, 2014), graph Slepians (W. Huang et al., 2017), graph sampling (Marques, Segarra, Leus, & Ribeiro, 2016; Puy, Tremblay, Gribonval, & Vandergheynst, 2016), or locating and decomposing signals on graphs (S. Chen, Yang, Moura, & Kovačević, 2016) could allow richer interpretations, and potentially a unified perspective on graph signals. GSP also includes novel methods for graph inference that are centered on signal representations on graphs (Dong, Thanou, Rabbat, & Frossard, 2019; Giannakis, Shen, & Karanikolas, 2018; Mateos, Segarra, Marques, & Ribeiro, 2019; Pasdeloup, Gripon, Mercier, Pastor, & Rabbat, 2018). GSP-based methods aim to infer graphs from measurements while enforcing specific properties of these signals on the learned graph (e.g., spectral distribution, smoothness), with potential to uncover specific structural (Hu et al., 2015) and functional (Shen, Giannakis, & Baingana, 2019) interactions between brain areas.
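Many of these operations reduce to applying a spectral kernel to the Laplacian eigenvalues. The sketch below shows this generic pattern with a heat kernel acting as a low-pass graph filter (toy data); graph wavelets and more elaborate filter banks follow the same template, and libraries such as PyGSP provide ready-made implementations.

```python
import numpy as np

def spectral_graph_filter(W, x, kernel):
    """Apply a spectral graph filter: x_out = U g(Lambda) U^T x.

    `kernel` is any function of the Laplacian eigenvalues; graph filters, heat
    kernels, and graph wavelets all fit this template (with different kernels).
    """
    L = np.diag(W.sum(axis=1)) - W
    eigvals, U = np.linalg.eigh(L)
    return U @ (kernel(eigvals) * (U.T @ x))

# Example: heat-kernel (low-pass) smoothing of a noisy signal on a toy graph.
rng = np.random.default_rng(0)
A = rng.random((68, 68))
W = np.triu(A, 1) + np.triu(A, 1).T
x = rng.standard_normal(68)
x_smooth = spectral_graph_filter(W, x, lambda lam: np.exp(-0.5 * lam))
```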
In conclusion, this work reviewed recent studies applying connectivity gradients and GSP to the analysis of brain signals, clarified terminology and methods, and related these two approaches, both grounded in the eigenvector decomposition of connectivity matrices. We point out that, given the increasing relevance of connectivity gradients for the understanding of macroscale brain organization, the application of GSP to neuroimaging is an exciting avenue towards a deeper understanding of brain organization. We also identify methodological challenges and suggest that future work should address multimodal and time-varying modeling and further explore the use of different metrics.
AUTHOR CONTRIBUTIONS
Giulia Lioi: Conceptualization; Resources; Writing – original draft. Vincent Gripon: Supervision; Writing – review & editing. Abdelbasset Brahim: Conceptualization; Writing – review & editing. François Rousseau: Supervision; Writing – review & editing. Nicolas Farrugia: Conceptualization; Supervision; Writing – original draft; Writing – review & editing.
FUNDING INFORMATION
Nicolas Farrugia, Région Bretagne (http://dx.doi.org/10.13039/501100011697), Award ID: SAD-2019.
TECHNICAL TERMS
- Graph signal processing:
Framework that allows studying graph matrices together with graph signals (i.e., vectors associated with graph nodes).
- Connectivity gradients:
Eigenvectors obtained from spectral decomposition of a connectivity matrix estimated from brain signals.
- Adjacency matrix:
A square matrix W describing the graph, whose entries w_ij are nonzero if there is an edge connecting nodes i and j, and zero otherwise.
- Strength matrix:
A diagonal matrix D whose ith diagonal entry d_i is the strength (or weighted degree) of node i.
- Laplacian matrix:
A square matrix L, a function of the strength and/or adjacency matrices of a graph, that is key to describing the graph's spectral properties.
- Graph Fourier transform:
Projection of a signal defined on a graph onto the basis defined by the eigenvectors of the graph Laplacian (the equivalent of the Fourier transform in the graph domain).
REFERENCES
Author notes
Competing Interests: The authors have declared that no competing interests exist.
Handling Editor: Olaf Sporns