Group-level analyses have typically linked behavioral signatures to a constrained set of brain areas. Here, we show that two behavioral metrics—reaction time (RT) and confidence—can be decoded across the cortex when each individual is considered separately. Subjects (N = 50) completed a perceptual decision-making task with confidence. We built models decoding trial-level RT and confidence separately for each subject using the activation patterns in one brain area at a time after splitting the entire cortex into 200 regions of interest (ROIs). First, we developed a simple test to determine the robustness of decoding performance, which showed that several hundred trials per subject are required for robust decoding. We then examined the decoding performance at the group and subject levels. At the group level, we replicated previous results by showing that both RT and confidence could be decoded from a small number of ROIs (12.0% and 3.5%, respectively). Critically, at the subject level, both RT and confidence could be decoded from most brain regions even after Bonferroni correction (90.0% and 72.5%, respectively). Surprisingly, we observed that many brain regions exhibited opposite brain-behavior relationships across individuals, such that, for example, higher activations predicted fast RTs in some subjects but slow RTs in others. All results were replicated in a second dataset. These findings show that behavioral signatures can be decoded from a much broader range of cortical areas than previously recognized and suggest the need to study the brain-behavior relationship at both the group and subject levels.

An enduring goal of cognitive neuroscience is to establish the relationship between brain and behavior. For example, studies have linked brain measures to healthy aging (Dosenbach et al., 2010; Geerligs et al., 2015), personality (Yarkoni, 2015), intelligence (Finn et al., 2015; Heuvel et al., 2009), and mood (Smith et al., 2015). Additionally, there are considerable efforts to use brain measures for diagnosis (Arbabshirani et al., 2013; Yahata et al., 2016), treatment selection (Williams et al., 2015), and prediction of patient outcomes in clinical settings (Whelan et al., 2014).

What is the right level at which to study the brain-behavior relationship? It is increasingly evident that this relationship reflects a complex interplay between group-level factors, which are shared among individuals, and individual-level factors, which manifest as unique characteristics (Dubois & Adolphs, 2016; Gratton et al., 2018; Nakuci, Yeon, Xue, et al., 2023). Variation in brain activity among individuals has long been recognized for its potential to explain individual differences in behavior (Miller et al., 2012; van Horn et al., 2008). Understanding how the individual differs from the group is critical since individual factors may be crucial for diagnosing and treating pathology (Gratton et al., 2020; Lebreton et al., 2019). Consequently, it is imperative to understand the complex individual factors shaping the brain's role in behavior.

Group-level analyses using both mass univariate statistics and multivariate pattern analysis with searchlight have been used to identify brain-behavior signatures across the cortex (Friston et al., 1999; Kriegeskorte et al., 2008). These analyses have typically uncovered behavioral signatures within constrained sets of brain areas. However, in the presence of substantial individual variability, one may expect that many brain areas that are not predictive of behavior at the group level may, nonetheless, predict behavior in certain individuals. It remains unknown how widely across the cortex one can decode behavioral signatures in individual subjects. The problem is partly statistical: one requires both high power within an individual (e.g., a large number of trials) and a large enough sample of individuals for individual differences to manifest.

Differences across individuals manifest themselves in multiple ways. Previous work has found differences in the spatial organization of brain regions across individuals (Williams et al., 2015). Moreover, there are substantial differences in large-scale organization among individuals (Braga & Buckner, 2017; Dworetsky et al., 2024). Crucially, these differences, particularly in spatial organization, mean that decoding across individuals is generally much poorer than decoding within an individual (Haxby et al., 2011).

Here, we investigate the extent to which two behavioral signatures—reaction time (RT) and confidence—can be decoded from across the cortex. Each subject (N = 50) completed 700 trials of a perceptual decision-making task—a number that substantially surpasses standard practice. The high number of trials allowed us to robustly estimate the brain-behavior relationship within individuals. To anticipate, we find that when individual differences are factored in, RT and confidence can be decoded from brain activity across the cortex, in stark contrast to group-level analyses. We replicated these results in a second dataset in which subjects (N = 36) completed 804 trials of a different perceptual decision-making task. These results demonstrate that behavior can be predicted from a wider set of brain areas than would be suggested by standard group analyses.

2.1 Dataset 1 subjects and task

Dataset 1 has been previously published (Nakuci, Yeon, Xue, et al., 2023). All details can be found in the original publication. Briefly, subjects (N = 50; 25 females; mean age = 26; age range = 19–40) completed a task in which they judged which set of colored dots (red vs. blue) was more frequent in a cloud of dots. The stimulus was presented for 500 ms, and subjects made untimed perceptual and confidence decisions using separate button presses. Most subjects performed six runs of 128 trials (a total of 768 trials). Three subjects completed only half of the sixth run and another three subjects completed only the first five runs due to time constraints. All subjects were screened for neurological disorders and MRI contraindications. The study was approved by the Ulsan National Institute of Science and Technology Review Board, and all subjects gave written consent.

2.2 Dataset 1 MRI recording

The MRI data were collected on a 3T MRI system (Magnetom Prisma; Siemens) with a 64-channel head coil. Whole-brain functional data were acquired using T2*-weighted multi-band accelerated imaging (FoV = 200 mm; TR = 2,000 ms; TE = 35 ms; multiband acceleration factor = 3; in-plane acceleration factor = 2; 72 interleaved slices; flip angle = 90°; voxel size = 2.0 × 2.0 × 2.0 mm3). High-resolution anatomical MP-RAGE data were acquired using T1-weighted imaging (FoV = 256 mm; TR = 2,300 ms; TE = 2.28 ms; 192 slices; flip angle = 8°; voxel size = 1.0 × 1.0 × 1.0 mm3).

2.3 Dataset 2 subjects and task

Dataset 2 has been previously published as Experiment 2 in Yeon et al. (2020). All details can be found in the original publication. Briefly, subjects (N = 36; 23 females; mean age = 21.5; age range = 18–28) completed a task in which they indicated whether a moving-dots stimulus had overall coherent motion (always in the downward direction) or not. The stimulus was presented for 500 ms, and subjects made an untimed decision immediately after the stimulus. All subjects completed six runs of 144 trials (a total of 864 trials). In the first half of the experiment (runs 1–3), subjects performed the task without providing confidence ratings. In the second half of the experiment (runs 4–6), subjects reported their confidence level with a separate, untimed button press immediately after making their perceptual decision. All subjects were screened for neurological disorders and MRI contraindications. The study was approved by the Georgia Tech Institutional Review Board, and all subjects gave written consent.

2.4 Dataset 2 MRI recording

The MRI data were collected on 3T MRI systems using a 32-channel head coil. Anatomical images were acquired using a T1-weighted MEMPRAGE sequence (FoV = 256 mm; TR = 2,530 ms; TE = 1.69 ms; 176 slices; flip angle = 7°; voxel size = 1.0 × 1.0 × 1.0 mm3). Functional images were acquired using a T2*-weighted gradient echo-planar imaging sequence (FoV = 220 mm; TR = 1,200 ms; TE = 30 ms; 51 slices; flip angle = 65°; voxel size = 2.5 × 2.5 × 2.5 mm3).

2.5 MRI preprocessing

MRI data were preprocessed with SPM12 (Wellcome Department of Imaging Neuroscience, London, UK). All images were first converted from DICOM to NIFTI, and the first three volumes were removed to allow for scanner equilibration. The functional images were then preprocessed with the following steps: de-spiking, slice-timing correction, realignment, segmentation, coregistration, normalization, and spatial smoothing with a 10 mm full-width-at-half-maximum kernel. De-spiking was done using the 3dDespike function in AFNI. The preprocessing of the T1-weighted structural images involved skull removal, normalization into MNI anatomical standard space, and segmentation into gray matter, white matter, cerebrospinal fluid, soft tissues, and air and background.

2.6 Single-trial beta estimation

Single-trial beta responses were estimated with a general linear model (GLM) using GLMsingle, a Matlab toolbox for single-trial analyses (Prince et al., 2022). The hemodynamic response function was estimated for each voxel, and nuisance regressors were estimated in the same manner as described in Allen et al. (2022). Additionally, regressors for the global signal and for six motion parameters (three translation and three rotation) were included. Because the trials in our study were temporally close together, the single-trial betas were estimated in three batches, with each batch containing every third trial. Also, trials that were within 20 seconds of the end of a run were removed due to the lack of sufficient signal from which to estimate the trial-specific hemodynamic response function. The beta estimates from GLMsingle have been validated in our previous work, and the results are comparable to those of standard GLM analyses in SPM12 (Nakuci, Yeon, Kim, et al., 2023). Further, to reduce any differences in model training and fitting that might arise from differences in the number of trials between subjects, when possible, we opted for a uniform number of trials per subject. In total, the analysis was based on 700 trials for Dataset 1 and 804 trials for Dataset 2; for subjects who had more trials than these, we simply removed trials from the end of the experiment. (Note that six subjects in Dataset 1 and two subjects in Dataset 2 had fewer total trials because they did not complete all six runs.)
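The every-third-trial batching can be sketched as follows. This is a hypothetical illustration of the assignment scheme, not GLMsingle's internal code; the trial count of 700 is the post-trimming number for Dataset 1.

```python
import numpy as np

# Illustrative sketch (not GLMsingle code): split trials into three
# interleaved batches so that trials estimated together are three trial
# positions apart in time, mitigating temporal overlap between trials.
n_trials = 700  # assumption: Dataset 1 after trimming to a uniform count
trial_idx = np.arange(n_trials)
batches = [trial_idx[start::3] for start in range(3)]  # every third trial
```

Each trial lands in exactly one batch, and within a batch consecutive trials are separated by two intervening trials from the other batches.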

2.7 Subject-level brain-behavior decoding analysis

For each trial, the activation within each of the 200 regions of interest (ROIs) from the Schaefer atlas was estimated by averaging all voxels in the ROI (Schaefer et al., 2018). Individual trials were randomly separated into training and testing bins, each containing 350 trials in Dataset 1 and 402 trials in Dataset 2. (For the confidence analyses in Dataset 2, the training was done on 301 trials and testing on the remaining 101 trials.) For each ROI, a linear model was trained on the training bin and used to predict behavioral performance on the testing bin. The linear model was trained using fitlm.m in Matlab. Additionally, we utilized a more advanced model based on Support Vector Regression (SVR) to determine whether decoding could be improved relative to the simple linear model. The SVR model was trained using fitrsvm.m in Matlab with default parameters.
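The per-ROI decoding scheme can be sketched as follows. The original analysis used Matlab's fitlm.m; this is a minimal numpy re-implementation on synthetic data, where the signal placed in the first ROI is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for single-trial ROI betas and behavior (illustrative
# assumption: ROI 0 is given a genuine linear relation to RT).
n_trials, n_rois = 700, 200
betas = rng.standard_normal((n_trials, n_rois))
rt = 0.5 * betas[:, 0] + rng.standard_normal(n_trials)

# 50:50 split into training and testing bins (350 trials each in Dataset 1)
perm = rng.permutation(n_trials)
train, test = perm[:350], perm[350:]

decoding_r = np.empty(n_rois)
for roi in range(n_rois):
    # Fit intercept + slope on the training bin (analogue of fitlm.m)
    design = np.column_stack([np.ones(train.size), betas[train, roi]])
    coef, *_ = np.linalg.lstsq(design, rt[train], rcond=None)
    pred = coef[0] + coef[1] * betas[test, roi]
    # Decoding performance: correlation of predicted and empirical behavior
    decoding_r[roi] = np.corrcoef(pred, rt[test])[0, 1]
```

On this toy data, only the signal-carrying ROI should decode well; the remaining ROIs hover near zero, mirroring the per-ROI logic described above.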

Decoding performance for a given ROI was determined by correlating the empirical and predicted RT and confidence. The analysis was repeated 25x to ensure that model performance was not dependent on the initial division of trials. The decoding performance values from each of the 25 iterations were first transformed to z-values using the Fisher transformation, then averaged, and finally converted back to r-values. Significance was determined by converting r-values to t-values.
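The iteration-averaging step can be illustrated as below. The per-iteration r-values are made up for the example, and the r-to-t conversion assumes the usual df = n − 2 for a correlation, since the text does not state the degrees of freedom used.

```python
import numpy as np

# Toy per-iteration decoding correlations (made-up values for illustration)
r_iters = np.array([0.30, 0.25, 0.35, 0.28])

z_mean = np.arctanh(r_iters).mean()  # Fisher z-transform, then average
r_mean = np.tanh(z_mean)             # convert the average back to an r-value

# Convert r to t, assuming df = n_test - 2 with 350 testing trials
n_test = 350
t_val = r_mean * np.sqrt((n_test - 2) / (1 - r_mean**2))
```

Averaging in z-space avoids the bias introduced by averaging bounded r-values directly.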

Decoding analysis was conducted for each individual subject separately and compared to a permuted null model. A brain region was deemed to significantly decode behavioral performance if the correlation between predicted and empirical RT or confidence exceeded a null model based on permuting RT or confidence 1000x at P < 0.05, uncorrected for multiple comparisons (Puncor).
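A minimal sketch of the permutation test, using synthetic predicted and empirical values: the 1,000-permutation count follows the text, while the simple proportion-based p-value is an assumption about how exceedance of the null was quantified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy predicted and empirical RTs with a genuine linear relation (assumption)
n_test = 350
pred = rng.standard_normal(n_test)
rt = 0.4 * pred + rng.standard_normal(n_test)
r_obs = np.corrcoef(pred, rt)[0, 1]

# Null distribution: correlation after shuffling the behavioral values 1000x
null_r = np.array([np.corrcoef(pred, rng.permutation(rt))[0, 1]
                   for _ in range(1000)])

# One-tailed uncorrected p-value: fraction of null r-values >= observed r
p_uncor = (null_r >= r_obs).mean()
significant = p_uncor < 0.05  # Puncor threshold from the text
```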

However, when conducting an analysis at the individual level, the multiple comparisons problem is particularly acute because tests are performed independently across many subjects (50 in Dataset 1 and 36 in Dataset 2) and across 200 regions within a subject. Therefore, a multiple comparison correction is needed. In Dataset 1, subject-level decoding performance was evaluated at P < 0.05 uncorrected and with three Bonferroni corrections: for 50 tests (equal to the number of subjects), 200 tests (equal to the number of ROIs), and 10,000 tests (the number of subjects times the number of ROIs). Similarly, in Dataset 2, subject-level decoding performance was evaluated at P < 0.05 uncorrected and with Bonferroni corrections for 36, 200, and 7,200 tests.

2.8 Group-level brain-behavior decoding analysis

Group-level decoding performance was estimated by averaging the individual subject decoding performance for each ROI. Group-level decoding was Bonferroni corrected for 200 tests (equal to the number of ROIs).

2.9 Decoding performance and trial number

We developed a test to determine how many trials are needed to obtain robust individual-level brain-behavior relationships. The test relies on estimating decoding performance over a range of training and testing bin sizes. Specifically, for each subject, a whole-brain multilinear regression model was trained on a subset of trials that ranged from 5% to 95% of all trials and tested on the remaining trials. The regression was repeated 25 times, each time training on a different subset of the data. Decoding performance was estimated by averaging across the 25 iterations, and the variance of the decoding performance was estimated by calculating the standard deviation across the 25 iterations.
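The trial-number test can be sketched as follows on synthetic data. The whole-brain multilinear model is approximated with a least-squares fit over all 200 ROI regressors, and the ground-truth signal placed in the first five ROIs is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic whole-brain data: 700 trials x 200 ROIs, with behavior driven by
# the first five ROIs plus noise (illustrative assumption, not real data)
n_trials, n_rois = 700, 200
X = rng.standard_normal((n_trials, n_rois))
y = X[:, :5].sum(axis=1) + rng.standard_normal(n_trials)

fractions = np.linspace(0.05, 0.95, 19)  # 5% to 95% of trials for training
mean_r, sd_r = [], []
for frac in fractions:
    n_train = int(round(frac * n_trials))
    rs = []
    for _ in range(25):  # 25 random train/test splits per fraction
        perm = rng.permutation(n_trials)
        tr, te = perm[:n_train], perm[n_train:]
        coef, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        rs.append(np.corrcoef(X[te] @ coef, y[te])[0, 1])
    mean_r.append(np.mean(rs))   # average decoding performance
    sd_r.append(np.std(rs))      # decoding variance across iterations
```

Plotting `mean_r` against `sd_r` over the training fractions reproduces the trade-off examined in the Results: performance rises with more training trials while stability across splits eventually degrades as the test bin shrinks.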

3.1 Analytical framework for behavior decoding

We investigated the brain-behavior relationship in two perceptual decision-making datasets. In the first dataset, each subject (N = 50) completed over 700 trials of a perceptual task with confidence. In the second dataset (N = 36), each subject completed 804 trials but only half of the trials included confidence ratings. We recorded subjects’ reaction time (RT) and confidence and investigated how well each could be decoded from different parts of the brain. We utilized a recently developed method, GLMsingle, to estimate the voxel-wise activation on a given trial (i.e., single-trial beta) (Prince et al., 2022). For each subject, we performed the decoding analysis on the average activation within each of the 200 regions of interest (ROIs) from the Schaefer atlas (Schaefer et al., 2018). Specifically, for each trial, we averaged the beta values from all voxels within a brain region to obtain a single beta value for each ROI. Individual trials were then randomly separated into training and testing bins (Fig. 1A). A prediction model was trained on half of the trials (Fig. 1B, left) and used to predict behavioral performance on the remaining trials from the activation (beta values) separately for each ROI (Fig. 1B, middle). Decoding performance for a given ROI and subject was determined by correlating the empirical and predicted RT and confidence (Fig. 1B, right). The analysis was repeated 25x to ensure that model performance did not depend on the initial trial split (Fig. 1C).

Fig. 1.

Analytical framework for behavior decoding analysis. (A) For each brain region, individual trials were randomly separated into training (red) and testing (green) bins. (B) The trials in the training bin were used to train a prediction model (left). The model was used to predict the behavioral performance of the trials in the testing bin using the observed brain activation (middle). Model performance was estimated using the Pearson correlation between the empirical and predicted behavioral performance (right). (C) The analysis was repeated 25x to ensure that model performance was robust and did not depend on the initial trial split.

3.2 Robust decoding performance requires several hundred trials per subject

Our Datasets 1 and 2 included 700 and 804 trials per subject, respectively. We first investigated how the number of trials used for training and testing affects decoding performance. We developed a simple test to determine how many trials are needed to obtain robust individual-level decoding.

The test relies on estimating decoding performance using a range of trials for both training and testing. For robustness, we used the data from all 200 ROIs to decode both RT and confidence. We trained a linear regression model on subsets of trials that ranged from 5% to 95% of all trials and tested on the remaining trials. For each percentage of trials used, we randomly selected trials for training and testing 25 times. We then computed both the average decoding performance across the 25 iterations and the variance in decoding performance. We then determined the percentage of trials that should be devoted to training the decoding model to minimize the variance in the decoding performance.

For Dataset 1, the minimum decoding variance was obtained when the training and testing bins each contained 350 trials (50% of trials) for both RT (Fig. 2A) and confidence (Fig. 2B), while average decoding performance increased with a higher number of training trials. Critically, using fewer than 350 trials for training resulted in poor decoding performance. On the other hand, using more than 350 trials for training improved performance, but the variance in performance across iterations increased. The maximum difference between decoding performance and its variance was obtained when training was performed on 525 trials (75% of all trials). We found similar results for Dataset 2: the minimum decoding variance was obtained when training contained 402 trials (50% of trials) for RT (Fig. 2C) and 301 trials (75% of trials) for confidence (Fig. 2D), while decoding performance again increased with a higher number of training trials. These results suggest that a 50:50 split between training and testing bins may produce more consistent results than an 80:20 or 90:10 (5- or 10-fold) division of trials. All subsequent analyses are therefore based on a 50:50 split between training and testing bins, which exhibited optimal performance while minimizing the variance across iterations compared to other division ratios. Additionally, we repeated the analysis using a fixed number of trials during testing, 35 (5%), to equate performance across the different training sets. The results indicated that decreasing the number of trials used in training increased the decoding variance across iterations (Fig. S1). Importantly, these results suggest the need for several hundred trials to robustly train a decoding model for RT or confidence, implying that many previous studies estimating the brain-behavior relationship at the level of the individual may have been underpowered.

Fig. 2.

Robust decoding of RT and confidence requires several hundred trials. In Dataset 1, the variance in decoding performance was minimized when using 350 trials (corresponding to 50% of all trials) for both (A) RT and (B) confidence. The black dots show the average decoding performance across subjects after 25 repeats for each subject (left axis). The red dots show the average subject-level decoding variance across the 25 repeats for each subject (right axis). The green boxes indicate the number of trials that minimize the variance in the decoding performance. The orange boxes indicate the number of trials that maximize the difference between decoding performance and the decoding variance. Error bars show SEM. (C-D) Same as Panels A-B but for Dataset 2. Note that in Dataset 2, confidence was measured on only half the trials (402) and correspondingly the decoding variance was minimized for a higher percentage of trials (75%).

3.3 RT and confidence can be predicted from across the brain

Having examined how the number of trials used for training and testing affects decoding performance, we proceeded to investigate how widely across the brain we can decode RT and confidence. We first performed group-level analyses to decode RT across all subjects for each of the 200 ROIs. To identify brain regions from which RT and confidence could be significantly decoded at the group level, the decoding performance values across individual subjects were aggregated and tested against zero. We found that RT could be significantly decoded in 12.0% of all ROIs after correcting for multiple comparisons (P < 0.05, Bonferroni-corrected, one-tailed one-sample t-tests; Fig. 3A). These standard group-level analyses appear to suggest that RT can be decoded from only a handful of ROIs in the brain and that the remaining ROIs cannot be used to decode RT. We found similar results when we attempted to decode confidence instead of RT. At the group level, confidence could be decoded from only 3.5% of ROIs after correcting for 200 comparisons (one for each ROI; Fig. 3B). Moreover, we repeated our analysis using a support vector regression (SVR) model and found similar results for both RT (Fig. 3C) and confidence (Fig. 3D), suggesting that more advanced models may only marginally improve decoding. Subsequent analyses focus on the results obtained with the linear model because of challenges associated with interpreting beta values from non-linear decoding models (Kriegeskorte et al., 2008).

Fig. 3.

Group-level RT and confidence decoding in Dataset 1. Brain regions from which (A) RT and (B) confidence could be significantly decoded at the group level after correcting for 200 comparisons. (C, D) Same as Panels A and B, but using a support vector regression (SVR) model to decode RT and confidence, respectively. * indicates brain regions with significant decoding (P < 0.05, Bonferroni corrected). FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.

Critically, we asked whether this conclusion holds at the level of the individual subject. To address this question, we determined for how many ROIs there was at least one subject for whom RT could be decoded. We found that with no correction for multiple comparisons, decoding was significant for at least one subject in every single one of the 200 ROIs (Fig. 4A-B). Moreover, decoding performance at the individual level exceeded group-level performance. Additionally, there was substantial individual variability in the decoding performance of ROIs across subjects (Fig. 4B).

Fig. 4.

RT can be predicted from across the brain in Dataset 1. (A) Brain regions from which RT could be significantly decoded in each subject and region of interest. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 50 (P50), 200 (P200), and 10,000 (P10K) comparisons. NS, not significant. Colors indicate the most conservative threshold at which one can significantly decode RT from a given region (see color legend on the right). (B) Decoding performance for each subject and brain region. Performance was estimated using the Pearson correlation between empirical and predicted RT. (C) Percentage of ROIs where RT could be significantly decoded for at least one subject for four levels of multiple comparison correction. Group-level results have been added for comparison. (D) Percentage of ROIs per subject where RT could be significantly decoded at four levels of multiple comparison correction. (E) Brain maps plotting percentage of subjects for whom RT could be significantly decoded for four levels of multiple comparison correction. (F) Brain maps plotting ROIs for which RT could be significantly decoded at the group level after multiple comparison correction. FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.

Next, we examined the decoding accuracy for each ROI separately and applied Bonferroni correction for the presence of 50 tests (equal to the number of subjects). We found that RT could be significantly decoded in at least one subject from 90% of all ROIs (180 of 200; Fig. 4C). These ROIs spanned all seven major brain networks of the Schaefer atlas. Further, when using a more stringent Bonferroni correction for 200 tests (equal to the number of ROIs), RT could be significantly decoded in at least one subject from 78.5% of all ROIs (157 of 200; Fig. 4C). Even with the most stringent Bonferroni correction for 10,000 tests (the number of subjects times the number of ROIs), RT could be significantly decoded in at least one subject from 50.5% of all ROIs (101 of 200; Fig. 4C). (Note that we show all four levels of correction because there is no “right” level of correction, and no single level communicates all relevant information.)

Importantly, even though most ROIs permitted the decoding of RT in at least one subject, decoding was significant in only a few subjects per ROI. Specifically, the average ROI could be used to decode RT in 20.2% of subjects without multiple comparison correction, and in 6.6%, 4.5%, and 1.9% of subjects after correcting for 50, 200, and 10,000 comparisons (Fig. 4D). In all four cases, the most predictive ROIs were interspersed across much of the cortical surface, whereas group-level decoding was limited to a subset of regions (Fig. 4E-F). These results suggest the presence of strong individual differences, such that each ROI is predictive of RT in only a handful of subjects.

We found similar results when we attempted to decode confidence. As with RT, at the individual level many more ROIs could be used to significantly decode confidence in at least one subject than at the group level. Specifically, confidence could be decoded in 99.5% of all ROIs without multiple comparison correction, and in 72.5%, 59.5%, and 26.5% of ROIs after correcting for 50, 200, and 10,000 comparisons (Fig. 5). Note that confidence was overall less decodable than RT. Overall, these decoding results demonstrate that RT and confidence can be decoded from across the cortex when individual differences are considered.

Fig. 5.

Confidence can be predicted from across the brain in Dataset 1. (A) Brain regions from which confidence could be significantly decoded in each subject and region of interest. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 50 (P50), 200 (P200), and 10,000 (P10K) comparisons. NS, not significant. Colors indicate the most conservative threshold at which one can significantly decode confidence from a given region (see color legend on the right). (B) Decoding performance for each subject and brain region. Performance was estimated using the Pearson correlation between empirical and predicted confidence. (C) Percentage of ROIs where confidence could be significantly decoded for at least one subject for four levels of multiple comparison correction. Group-level results have been added for comparison. (D) Percentage of ROIs per subject where confidence could be significantly decoded at four levels of multiple comparison correction. (E) Brain maps plotting percentage of subjects for whom confidence could be significantly decoded for four levels of multiple comparison correction. (F) Brain maps plotting ROIs for which confidence could be significantly decoded at the group level after multiple comparison correction. FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.


3.4 Opposite brain-behavior relationship among subjects for the same brain region

Critically, for many ROIs, the relationship between brain activity and both RT and confidence often went in opposite directions for different subjects. For example, one set of subjects would exhibit higher RT or confidence with higher ROI activation, whereas a different set of subjects would exhibit lower RT or confidence with higher ROI activation (Fig. 6A-B). Indeed, we found that without multiple comparison corrections, high brain activity predicted higher RT in at least one subject and lower RT in at least one subject in 67.0% of all ROIs (Fig. 6C). This percentage decreased to 16.5%, 8.0%, and 1.0% after correcting for 50, 200, and 10,000 comparisons, respectively. Interestingly, the dorsal attention network contained the most regions with an opposite relationship between brain activity and RT among subjects, with the default mode and visual networks as close second and third, respectively (Fig. 6D). We found similar results for confidence, with 75.5% of all ROIs showing significant decoding in different directions for at least two subjects, and 15.0%, 7.5%, and 0% after correcting for 50, 200, and 10,000 comparisons, respectively (Fig. 6E-G). The ROIs from which RT and confidence could be decoded at the group level were the ROIs from which behavioral performance could be decoded in most subjects. These ROIs exhibited opposite brain-behavior relationships in 69.56% of subjects for RT and in 85.71% of subjects for confidence. Further, the default mode network contained the most regions with an opposite relationship between brain activity and confidence among subjects (Fig. 6H). In contrast to the findings for RT, which lacked large clusters where decoding was consistently in the same direction, we found that a subset of ROIs associated with the somatomotor network exhibited a consistent relationship between brain activity and confidence across subjects.
Except for this relatively small cluster, our results demonstrate that the relationship between brain activity and behavioral outcomes is not universal and that it frequently goes in opposite directions for different subjects.
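The opposite-sign analysis reduces to a simple per-ROI check: is there at least one subject with a significantly positive slope and at least one with a significantly negative slope? A minimal sketch, with toy numbers and a Bonferroni-style threshold as assumptions:

```python
import numpy as np

def opposite_sign_rois(betas, pvals, alpha=0.05, n_comparisons=1):
    """Flag ROIs whose brain-behavior slope is significantly positive
    in at least one subject AND significantly negative in at least one
    other. betas, pvals: (n_subjects, n_rois) arrays of per-subject
    slopes and their p-values. n_comparisons applies a Bonferroni
    threshold. Returns a boolean mask over ROIs."""
    sig = pvals < alpha / n_comparisons
    return ((betas > 0) & sig).any(axis=0) & ((betas < 0) & sig).any(axis=0)

# Toy example: 3 subjects x 4 ROIs; only ROI 0 mixes significant signs
betas = np.array([[ 0.8, 0.5, -0.2, 0.9],
                  [-0.7, 0.6,  0.1, 0.8],
                  [ 0.1, 0.4, -0.6, 0.7]])
pvals = np.array([[0.001, 0.01, 0.50, 0.001],
                  [0.002, 0.02, 0.70, 0.004],
                  [0.600, 0.03, 0.01, 0.002]])
print(opposite_sign_rois(betas, pvals))
```

Raising `n_comparisons` tightens the threshold, which is why the percentage of flagged ROIs drops under stricter corrections.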

Fig. 6.

Opposite relationship between brain activity and both RT and confidence among subjects for the same brain region in Dataset 1. (A) Beta values for each subject and region of interest. Red and blue dots reflect subjects for whom RT could be significantly decoded from a given ROI (P < 0.05, uncorrected for multiple comparisons). Gray dots show subjects for whom RT could not be significantly decoded from a given ROI. (B) Brain maps showing the ROIs that contained at least one subject with both positive and negative beta values for RT. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 50 (P50), 200 (P200), and 10,000 (P10K) comparisons. (C) Percentage of ROIs that contained at least one subject with both positive and negative beta values for RT at each of four levels of multiple comparison correction. (D) Percentage of ROIs within each brain network that contained at least one subject with both positive and negative beta values for RT after correcting for 50 comparisons. (E-H) Same as Panels A-D but for confidence.


3.5 RT and confidence can be decoded from across the brain in Dataset 2

We replicated these results in a second dataset where subjects completed a different perceptual decision-making task with confidence (Yeon et al., 2020). Subjects (N = 36) completed 804 trials but only half of them included confidence ratings, thus substantially decreasing the power for the confidence analyses.

Similar to Dataset 1, we found that at the group level, RT could be significantly decoded in 13.0% of all ROIs after correcting for 200 comparisons in the linear model (P < 0.05, Bonferroni-corrected, one-tailed one-sample t-tests; Fig. 7A). Crucially, we determined for how many ROIs there was at least one subject for whom RT could be decoded. At the individual level, RT could be decoded in 100% of all ROIs without multiple comparison correction, and in 87.0%, 72.0%, and 43.5% of ROIs after correcting for 36 (number of subjects), 200 (number of ROIs), and 7,200 (subjects times ROIs) comparisons (Fig. 8). Given the lower number of trials that contained confidence ratings, we found that with the linear model, confidence could be significantly decoded in only 1.5% of all ROIs at the group level after correcting for multiple comparisons (P < 0.05, Bonferroni-corrected, one-tailed one-sample t-tests; Fig. 7B). Additionally, comparable results were obtained when using the SVR model for both RT and confidence (Fig. 7C and D). Despite the lower power, at the individual level, confidence could still be decoded in 76.5% of all ROIs without multiple comparison correction, and in 47.0%, 32.5%, and 22.5% of ROIs after correcting for 36, 200, and 7,200 comparisons (Fig. 9). Overall, the results from Dataset 2 further support the notion that both RT and confidence can be decoded from across the cortex when individual differences are considered.
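As a rough illustration of comparing the two decoder families mentioned above (a linear model and SVR), the snippet below fits both to the same synthetic data with cross-validation; the data, hyperparameters, and scoring here are assumptions rather than the study's actual settings.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

# Synthetic data: behavior driven by the first five voxels
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 40))
y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=400)

# Decode with both model families and compare cross-validated fits
for name, model in [("linear", LinearRegression()),
                    ("SVR", SVR(kernel="linear", C=1.0))]:
    y_hat = cross_val_predict(model, X, y, cv=5)
    r, _ = pearsonr(y, y_hat)
    print(f"{name}: r = {r:.2f}")
```

When the brain-behavior mapping is close to linear, as in this toy example, the two families give comparable performance, which is consistent with the pattern reported for Figure 7.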

Fig. 7.

Group-level RT and confidence decoding in Dataset 2. Brain regions from which RT could be decoded using (A) a linear model and (B) a support vector regression (SVR) model at the group level after correcting for 200 comparisons. (C, D) Same as Panels A and B, but for confidence. * indicates a brain region with significant decoding, P < 0.05, Bonferroni corrected. FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.

Fig. 8.

RT can be decoded from across the brain in Dataset 2. (A) Brain regions from which RT could be significantly decoded in each subject and region of interest. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 36 (P36), 200 (P200), and 7,200 (P7.2K) comparisons. NS, not significant. Colors indicate the most conservative threshold at which one can significantly decode RT from a given region (see color legend on the right). (B) Decoding performance for each subject and brain region. Performance was estimated using the Pearson correlation between empirical and predicted RT. (C) Percentage of ROIs where RT could be significantly decoded for at least one subject at four levels of multiple comparison correction. Group-level results have been added for comparison. (D) Percentage of ROIs per subject where RT could be significantly decoded at four levels of multiple comparison correction. (E) Brain maps plotting the percentage of subjects for whom RT could be significantly decoded at four levels of multiple comparison correction. (F) Brain maps plotting ROIs for which RT could be significantly decoded at the group level after multiple comparison correction. FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.

Fig. 9.

Confidence can be decoded from across the brain in Dataset 2. (A) Brain regions from which confidence could be significantly decoded in each subject and region of interest. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 36 (P36), 200 (P200), and 7,200 (P7.2K) comparisons. NS, not significant. Colors indicate the most conservative threshold at which one can significantly decode confidence from a given region (see color legend on the right). (B) Decoding performance for each subject and brain region. Performance was estimated using the Pearson correlation between empirical and predicted confidence. (C) Percentage of ROIs where confidence could be significantly decoded for at least one subject at four levels of multiple comparison correction. Group-level results have been added for comparison. (D) Percentage of ROIs per subject where confidence could be significantly decoded at four levels of multiple comparison correction. (E) Brain maps plotting the percentage of subjects for whom confidence could be significantly decoded at four levels of multiple comparison correction. (F) Brain maps plotting ROIs for which confidence could be significantly decoded at the group level after multiple comparison correction. FPN, Frontal Parietal Network; DMN, Default Mode Network; DAN, Dorsal Attention Network; LIM, Limbic Network; VAN, Ventral Attention Network; SOM, Somatomotor Network; VIS, Visual Network.


3.6 Opposite brain-behavior relationship among subjects for the same brain region in Dataset 2

In Dataset 1, we found that for many ROIs, the relationship between brain activity and both RT and confidence often went in opposite directions for different subjects. Here, we show that the same effects occur in Dataset 2. Similar to Dataset 1, we found that without multiple comparison corrections, high brain activity predicted higher RT in at least one subject and lower RT in at least one subject in 50.0% of all ROIs, and in 16.5%, 6.5%, and 0.5% of ROIs after correcting for 36, 200, and 7,200 comparisons, respectively (Fig. 10A-D). Reflecting the lower number of trials per subject, confidence showed weaker effects, with 12.0% of all ROIs showing significant decoding in different directions for at least two subjects, and 3.0%, 1.5%, and 0% after correcting for 36, 200, and 7,200 comparisons, respectively (Fig. 10E-H). Interestingly, Datasets 1 and 2 differed in which network contained the most regions with an opposite relationship between brain activity and behavior. These results demonstrate that the relationship between brain activity and behavioral outcomes is not universal, and that this opposite relationship is not limited to a specific set of regions but instead depends on the individual subject.

Fig. 10.

Opposite relationship between brain activity and both RT and confidence among subjects for the same brain region in Dataset 2. (A) Beta values for each subject and region of interest. Red and blue dots reflect subjects for whom RT could be significantly decoded from a given ROI (P < 0.05, uncorrected for multiple comparisons). Gray dots show subjects for whom RT could not be significantly decoded from a given ROI. (B) Brain maps showing the ROIs that contained at least one subject with both positive and negative beta values for RT. Significance is shown at four levels of correction: without any correction (Puncor), and after correcting for 36 (P36), 200 (P200), and 7,200 (P7.2K) comparisons. (C) Percentage of ROIs that contained at least one subject with both positive and negative beta values for RT at each of four levels of multiple comparison correction. (D) Percentage of ROIs within each brain network that contained at least one subject with both positive and negative beta values for RT after correcting for 50 comparisons. (E-H) Same as Panels A-D but for confidence.


3.7 No relationship between decoding performance and frame displacement

To confirm that these results were not driven by motion artifacts, we tested whether there was an association between decoding performance and frame displacement (FD). We performed a regression analysis in which a subject's average FD was used to predict that subject's average decoding performance. We found no significant association between FD and decoding performance in either Dataset 1 (RT: R2 = 0.02, P = 0.34; Confidence: R2 = 0.002, P = 0.72; Fig. S2A-B) or Dataset 2 (RT: R2 = 0.05, P = 0.20; Confidence: R2 = 0.04, P = 0.22; Fig. S2C-D). These results suggest that decoding performance was not driven by motion artifacts.
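This control analysis amounts to a single regression per dataset and metric. A sketch with hypothetical per-subject values (the numbers below are simulated, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject summaries: mean frame displacement (mm)
# and mean decoding performance (Pearson r averaged across ROIs).
rng = np.random.default_rng(0)
mean_fd = rng.uniform(0.05, 0.30, size=50)      # 50 subjects
mean_perf = 0.20 + 0.05 * rng.normal(size=50)   # generated independently of FD

# Regress decoding performance on FD and report R^2 and P
res = stats.linregress(mean_fd, mean_perf)
print(f"R2 = {res.rvalue**2:.3f}, P = {res.pvalue:.2f}")
```

A non-significant P with a near-zero R2, as in the reported results, indicates that head motion does not explain the between-subject differences in decoding performance.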

Traditionally, the brain-behavior relationship has been examined at the group level to identify commonalities among individuals. Such group-level analyses have typically associated behavioral signatures with a constrained set of brain areas. Here, in contrast to the traditional approach, we focused on the brain-behavior relationship within each individual. We tested how well trial-level RT and confidence can be decoded from the activation in each of 200 cortical regions of interest obtained using the Schaefer atlas. We showed that RT and confidence can be significantly decoded for at least one subject from brain activity across most of the cortex. Additionally, we were able to identify differences in the brain-behavior relationship among individuals even with the strictest multiple comparison corrections, indicating that these differences are robust. These results demonstrate that behavior can be predicted from a wider set of brain areas than standard group analyses would suggest.

These findings indicate that individual variability is a major factor in the ability to decode behavioral signatures. In fact, we found that most brain regions could predict RT and confidence in only a small percentage of subjects. Therefore, individual-level analyses might be more sensitive since group-level analyses aggregate over the large degree of within-subject variability (Fisher et al., 2018; Lebreton et al., 2019).

How meaningful is it to find that an ROI predicts behavior in only a small percentage of subjects? One may, for example, only be interested in brain regions from which behavior can be decoded in most participants. However, whether a brain area can predict behavior in at least one subject and whether it does so in the majority of subjects are simply different questions. The latter concerns what is common across subjects, which has traditionally been the goal of most of cognitive neuroscience. While this is an important goal, here we focus specifically on individual differences. In other words, we ask how some subjects may differ in meaningful ways from the group. To answer this type of question, one cannot rely on the traditional approach of examining group data. Instead, the idea is that robust decoding in an ROI for even one subject is interesting and meaningful, even if that ROI cannot be used for decoding in any other subject.

A natural question that might arise is whether these individual differences would disappear if the analysis used subject-specific parcellation (Kong et al., 2021). As observed in Kong et al., using subject-specific parcellation can improve brain-behavior decoding. However, in their study, subject-specific parcellation only marginally improved group-level decoding, suggesting that the low decoding values do not simply reflect parcellation differences between subjects.

Individual variability in brain-behavior relationship may have many sources. Such variation could emerge from functional degeneracy, the ability of different brain regions to perform the same computation (Price & Friston, 2002; Sajid et al., 2020; Tononi et al., 1999). Functional degeneracy could arise from differences in structural connectivity among individuals (Bansal et al., 2018; Muldoon et al., 2016), since structural connectivity has been associated with differences in behavioral performance (Kanai & Rees, 2011).

Despite the heterogeneity among individuals, a few brain regions showed relatively high consistency across subjects. The lateral frontal cortex and the visual cortex were the most consistent regions from which RT could be decoded. Similarly, for confidence, the somatomotor system was the most consistent area across subjects from which confidence could be decoded, suggesting an association between action-related brain signals and confidence (Fleming et al., 2015; Kiani & Shadlen, 2009).

In a seminal paper, Marek et al. (2022) showed that thousands of subjects are required to robustly estimate the brain-behavior relationship. Our results suggest that beyond the number of subjects, an arguably even more important factor is the high degree of between-subject variability: even if a brain region can be used to decode behavioral performance, the underlying relationship may not be consistent across subjects, underscoring the role of individual differences and the limitations of group-level analyses. Specifically, if one brain region predicts behavior in one direction in some people but in the opposite direction in others, then no matter how many people a model is trained on, it will always fail to capture many individual subjects. It has been suggested that beta values, particularly those from non-linear decoding models using multiple voxels, cannot be straightforwardly interpreted because they may reflect noise reduction rather than signal (Kriegeskorte et al., 2008). This may well be true in some cases, but where opposite beta values reflect meaningful differences between individuals, individual variability becomes a crucial limitation in our understanding of how behavior arises from brain activity. Most brain-behavior computational models assume a uniform relationship between a given region and behavior across all individuals, but this may simply not be true. Instead, we may need to build more flexible models that can accommodate opposite beta values for different subjects.

Finally, we developed a simple test that future studies can use to determine the optimal number of trials in the training and testing bins. This is a crucial step in all decoding analyses, but one that has received little attention. The test relies on training the model on a subset of trials that ranges from 5 to 95% of all trials, with testing performed on the remaining trials. In the context of the current study, we found that several hundred (~300) trials per person are needed to robustly decode the brain-behavior relationship at the individual level. These results suggest that many previous studies estimating brain-behavior relationship at the individual level may be underpowered. We suggest that the optimal number of trials used in training a decoding model could be based on: (1) minimizing the variance in the decoding performance; or (2) maximizing the difference between decoding performance and the variance in the performance. In the current study, we opted to minimize the variance in the decoding performance even though it came at the cost of a lower decoding performance.
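The trial-count test described here can be sketched as follows; the synthetic data, number of repetitions, and the variance-minimizing selection rule are illustrative assumptions rather than the study's exact implementation:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

def decoding_vs_training_size(X, y, fractions, n_reps=20, seed=0):
    """For each training fraction (e.g., 0.05 to 0.95), repeatedly
    train on a random subset and test on the held-out trials.
    Returns the mean and variance of decoding performance (Pearson r)
    at each fraction, from which an 'optimal' trial count can be
    chosen, e.g., by minimizing the variance."""
    rng = np.random.default_rng(seed)
    n = len(y)
    means, variances = [], []
    for f in fractions:
        rs = []
        for _ in range(n_reps):
            idx = rng.permutation(n)
            k = max(2, int(f * n))
            train, test = idx[:k], idx[k:]
            model = LinearRegression().fit(X[train], y[train])
            r, _ = pearsonr(y[test], model.predict(X[test]))
            rs.append(r)
        means.append(np.mean(rs))
        variances.append(np.var(rs))
    return np.array(means), np.array(variances)

# Synthetic example: performance stabilizes as the training set grows
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 30))
y = X[:, 0] + 0.5 * rng.normal(size=600)
fracs = np.arange(0.05, 1.0, 0.10)
mean_r, var_r = decoding_vs_training_size(X, y, fracs)
best_frac = fracs[np.argmin(var_r)]  # fraction minimizing variance
```

Plotting `mean_r` and `var_r` against the training fraction shows where performance plateaus, which is how a minimum trial count for robust decoding can be read off.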

In conclusion, our findings show that behavioral signatures can be decoded from a much broader range of cortical areas than previously recognized. These results highlight that the brain-behavior relationship should be studied at both the group and the individual level.

Data and scripts necessary to perform the analysis are available at https://osf.io/dc9pa/.

J.N. and D.R. designed the analysis; J.Y. and D.R. designed the study; J.Y., J-H.K., and S-P.K. acquired the data; J.N. and J.Y. pre-processed the data; J.N. analyzed the data; and J.N. and D.R. wrote the first draft of the paper.

The authors declare no competing interests.

This work was supported by the National Institutes of Health (award: R01MH119189) and the Office of Naval Research (award: N00014-20-1-2622).

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00359.

Allen
,
E. J.
,
St-Yves
,
G.
,
Wu
,
Y.
,
Breedlove
,
J. L.
,
Prince
,
J. S.
,
Dowdle
,
L. T.
,
Nau
,
M.
,
Caron
,
B.
,
Pestilli
,
F.
,
Charest
,
I.
,
Hutchinson
,
J. B.
,
Naselaris
,
T.
, &
Kay
,
K.
(
2022
).
A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence
.
Nature Neuroscience
,
25
,
116
126
. https://doi.org/10.1038/s41593-021-00962-x
Arbabshirani
,
M.
,
Kiehl
,
K.
,
Pearlson
,
G.
, &
Calhoun
,
V.
(
2013
).
Classification of schizophrenia patients based on resting-state functional network connectivity
.
Frontiers in Neuroscience
,
7
,
133
. https://doi.org/10.3389/fnins.2013.00133
Bansal
,
K.
,
Nakuci
,
J.
, &
Muldoon
,
S. F.
(
2018
).
Personalized brain network models for assessing structure-function relationships
.
Current Opinion in Neurobiology
,
52
,
1
13
. https://doi.org/10.1016/J.CONB.2018.04.014
Braga
,
R. M.
, &
Buckner
,
R. L.
(
2017
).
Parallel interdigitated distributed networks within the individual estimated by intrinsic functional connectivity
.
Neuron
,
95
,
457
471
. https://doi.org/10.1016/j.neuron.2017.06.038
Dosenbach
,
N. U. F.
,
Nardos
,
B.
,
Cohen
,
A. L.
,
Fair
,
D. A.
,
Power
,
J. D.
,
Church
,
J. A.
,
Nelson
,
S. M.
,
Wig
,
G. S.
,
Vogel
,
A. C.
,
Lessov-Schlaggar
,
C. N.
,
Barnes
,
K. A.
,
Dubis
,
J. W.
,
Feczko
,
E.
,
Coalson
,
R. S.
,
Pruett
,
J. R.
,
Barch
,
D. M.
,
Petersen
,
S. E.
, &
Schlaggar
,
B. L.
(
2010
).
Prediction of individual brain maturity using fMRI
.
Science
,
329
,
1358
1361
. https://doi.org/10.1126/science.1194144
Dubois
,
J.
, &
Adolphs
,
R.
(
2016
).
Building a science of individual differences from fMRI
.
Trends in Cognitive Sciences
,
20
,
425
443
. https://doi.org/10.1016/j.tics.2016.03.014
Dworetsky
,
A.
,
Seitzman
,
B. A.
,
Adeyemo
,
B.
,
Nielsen
,
A. N.
,
Hatoum
,
A. S.
,
Smith
,
D. M.
,
Nichols
,
T. E.
,
Neta
,
M.
,
Petersen
,
S. E.
, &
Gratton
,
C.
(
2024
).
Two common and distinct forms of variation in human functional brain networks
.
Nature Neuroscience
,
27
,
1187
1198
. https://doi.org/10.1038/s41593-024-01618-2
Finn
,
E. S.
,
Shen
,
X.
,
Scheinost
,
D.
,
Rosenberg
,
M. D.
,
Huang
,
J.
,
Chun
,
M. M.
,
Papademetris
,
X.
, &
Todd Constable
,
R.
(
2015
).
Functional connectome fingerprinting: Identifying individuals based on patterns of brain connectivity HHS Public Access
.
Nature Neuroscience
,
18
,
1664
1671
. https://doi.org/10.1038/nn.4135
Fisher
,
A. J.
,
Medaglia
,
J. D.
, &
Jeronimus
,
B. F.
(
2018
).
Lack of group-to-individual generalizability is a threat to human subjects research
.
Proceedings of the National Academy of Sciences of the United States of America
,
115
,
6106
6115
. https://doi.org/10.1073/pnas.1711978115
Fleming
,
S. M.
,
Maniscalco
,
B.
,
Ko
,
Y.
,
Amendi
,
N.
,
Ro
,
T.
, &
Lau
,
H.
(
2015
).
Action-specific disruption of perceptual confidence
.
Psychological Science
,
26
,
89
98
. https://doi.org/10.1177/0956797614557697
Friston
,
K. J.
,
Holmes
,
A. P.
,
Price
,
C. J.
,
Büchel
,
C.
, &
Worsley
,
K. J.
(
1999
).
Multisubject fMRI studies and conjunction analyses
.
NeuroImage
,
10
,
385
396
. https://doi.org/10.1006/nimg.1999.0484
Geerligs
,
L.
,
Rubinov
,
M.
,
Tyler
,
L. K.
,
Brayne
,
C.
,
Bullmore
,
E. T.
,
Calder
,
A. C.
,
Cusack
,
R.
,
Dalgleish
,
T.
,
Duncan
,
J.
,
Henson
,
R. N.
,
Matthews
,
F. E.
,
Marslen-Wilson
,
W. D.
,
Rowe
,
J. B.
,
Shafto
,
M. A.
,
Campbell
,
K.
,
Cheung
,
T.
,
Davis
,
S.
,
Geerligs
,
L.
,
Kievit
,
R.
, …
Henson
,
R. N.
(
2015
).
State and trait components of functional connectivity: Individual differences vary with mental state
.
Journal of Neuroscience
,
35
,
13949
13961
. https://doi.org/10.1523/JNEUROSCI.1324-15.2015
Gratton, C., Kraus, B. T., Greene, D. J., Gordon, E. M., Laumann, T. O., Nelson, S. M., Dosenbach, N. U. F., & Petersen, S. E. (2020). Defining individual-specific functional neuroanatomy for precision psychiatry. Biological Psychiatry, 88, 28–39. https://doi.org/10.1016/j.biopsych.2019.10.026
Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., Nelson, S. M., Coalson, R. S., Snyder, A. Z., Schlaggar, B. L., Dosenbach, N. U. F., & Petersen, S. E. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98, 439–452. https://doi.org/10.1016/j.neuron.2018.03.035
Haxby, J. V., Guntupalli, J. S., Connolly, A. C., Halchenko, Y. O., Conroy, B. R., Gobbini, M. I., Hanke, M., & Ramadge, P. J. (2011). A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron, 72, 404–416. https://doi.org/10.1016/j.neuron.2011.08.026
Heuvel, M. P. van den, Stam, C. J., Kahn, R. S., & Pol, H. E. H. (2009). Efficiency of functional brain networks and intellectual performance. The Journal of Neuroscience, 29, 7619. https://doi.org/10.1523/JNEUROSCI.1443-09.2009
Kanai, R., & Rees, G. (2011). The structural basis of inter-individual differences in human behaviour and cognition. Nature Reviews Neuroscience, 12, 231–242. https://doi.org/10.1038/nrn3000
Kiani, R., & Shadlen, M. N. (2009). Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324, 759–764. https://doi.org/10.1126/science.1169405
Kong, R., Yang, Q., Gordon, E., Xue, A., Yan, X., Orban, C., Zuo, X.-N., Spreng, N., Ge, T., Holmes, A., Eickhoff, S., & Yeo, B. T. T. (2021). Individual-specific areal-level parcellations improve functional connectivity prediction of behavior. Cerebral Cortex, 31, 4477–4500. https://doi.org/10.1093/cercor/bhab101
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4. https://doi.org/10.3389/neuro.06.004.2008
Lebreton, M., Bavard, S., Daunizeau, J., & Palminteri, S. (2019). Assessing inter-individual differences with task-related functional neuroimaging. Nature Human Behaviour, 3, 897–905. https://doi.org/10.1038/s41562-019-0681-8
Marek, S., Tervo-Clemmens, B., Calabro, F. J., Montez, D. F., Kay, B. P., Hatoum, A. S., Donohue, M. R., Foran, W., Miller, R. L., Hendrickson, T. J., Malone, S. M., Kandala, S., Feczko, E., Miranda-Dominguez, O., Graham, A. M., Earl, E. A., Perrone, A. J., Cordova, M., Doyle, O., … Dosenbach, N. U. F. (2022). Reproducible brain-wide association studies require thousands of individuals. Nature, 603, 654–660. https://doi.org/10.1038/s41586-022-04492-9
Miller, M. B., Donovan, C. L., Bennett, C. M., Aminoff, E. M., & Mayer, R. E. (2012). Individual differences in cognitive style and strategy predict similarities in the patterns of brain activity between individuals. NeuroImage, 59, 83–93. https://doi.org/10.1016/j.neuroimage.2011.05.060
Muldoon, S. F., Pasqualetti, F., Gu, S., Cieslak, M., Grafton, S. T., Vettel, J. M., & Bassett, D. S. (2016). Stimulation-based control of dynamic brain networks. PLoS Computational Biology, 12, e1005076. https://doi.org/10.1371/journal.pcbi.1005076
Nakuci, J., Yeon, J., Kim, J.-H., Kim, S.-P., & Rahnev, D. (2023). Multiple brain activation patterns for the same task. bioRxiv, 2023.04.08.536107. https://doi.org/10.1101/2023.04.08.536107
Nakuci, J., Yeon, J., Xue, K., Kim, J.-H., Kim, S.-P., & Rahnev, D. (2023). Quantifying the contribution of subject and group factors in brain activation. Cerebral Cortex, 33, 11092–11101. https://doi.org/10.1093/cercor/bhad348
Price, C. J., & Friston, K. J. (2002). Degeneracy and cognitive anatomy. Trends in Cognitive Sciences, 6, 416–421. https://doi.org/10.1016/S1364-6613(02)01976-9
Prince, J. S., Charest, I., Kurzawski, J. W., Pyles, J. A., Tarr, M. J., & Kay, K. N. (2022). Improving the accuracy of single-trial fMRI response estimates using GLMsingle. eLife, 11, e77599. https://doi.org/10.7554/eLife.77599
Sajid, N., Parr, T., Hope, T. M., Price, C. J., & Friston, K. J. (2020). Degeneracy and redundancy in active inference. Cerebral Cortex, 30, 5750–5766. https://doi.org/10.1093/cercor/bhaa148
Schaefer, A., Kong, R., Gordon, E. M., Laumann, T. O., Zuo, X.-N., Holmes, A. J., Eickhoff, S. B., & Yeo, B. T. T. (2018). Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI. Cerebral Cortex, 28, 3095–3114. https://doi.org/10.1093/cercor/bhx179
Smith, S. M., Nichols, T. E., Vidaurre, D., Winkler, A. M., Behrens, T. E. J., Glasser, M. F., Ugurbil, K., Barch, D. M., Van Essen, D. C., & Miller, K. L. (2015). A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nature Neuroscience, 18, 1565–1567. https://doi.org/10.1038/nn.4125
Tononi, G., Sporns, O., & Edelman, G. M. (1999). Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences of the United States of America, 96, 3257–3262. https://doi.org/10.1073/pnas.96.6.3257
van Horn, J. D., Grafton, S. T., & Miller, M. B. (2008). Individual variability in brain activity: A nuisance or an opportunity? Brain Imaging and Behavior, 2, 327–334. https://doi.org/10.1007/s11682-008-9049-9
Whelan, R., Watts, R., Orr, C. A., Althoff, R. R., Artiges, E., Banaschewski, T., Barker, G. J., Bokde, A. L. W., Büchel, C., Carvalho, F. M., Conrod, P. J., Flor, H., Fauth-Bühler, M., Frouin, V., Gallinat, J., Gan, G., Gowland, P., Heinz, A., Ittermann, B., … IMAGEN Consortium. (2014). Neuropsychosocial profiles of current and future adolescent alcohol misusers. Nature, 512, 185–189. https://doi.org/10.1038/nature13402
Williams, L. M., Korgaonkar, M. S., Song, Y. C., Paton, R., Eagles, S., Goldstein-Piekarski, A., Grieve, S. M., Harris, A. W. F., Usherwood, T., & Etkin, A. (2015). Amygdala reactivity to emotional faces in the prediction of general and medication-specific responses to antidepressant treatment in the randomized iSPOT-D trial. Neuropsychopharmacology, 40, 2398–2408. https://doi.org/10.1038/npp.2015.89
Yahata, N., Morimoto, J., Hashimoto, R., Lisi, G., Shibata, K., Kawakubo, Y., Kuwabara, H., Kuroda, M., Yamada, T., Megumi, F., Imamizu, H., Náñez Sr, J. E., Takahashi, H., Okamoto, Y., Kasai, K., Kato, N., Sasaki, Y., Watanabe, T., & Kawato, M. (2016). A small number of abnormal brain connections predicts adult autism spectrum disorder. Nature Communications, 7, 11254. https://doi.org/10.1038/ncomms11254
Yarkoni, T. (2015). Neurobiological substrates of personality: A critical overview. In APA handbook of personality and social psychology, Volume 4: Personality processes and individual differences (pp. 61–83). APA Handbooks in Psychology®. American Psychological Association, Washington, DC. https://doi.org/10.1037/14343-003
Yeon, J., Shekhar, M., & Rahnev, D. (2020). Overlapping and unique neural circuits are activated during perceptual decision making and confidence. Scientific Reports, 10. https://doi.org/10.1038/s41598-020-77820-6
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.

Supplementary data