Abstract

Response inhibition is a widely studied aspect of cognitive control that is particularly interesting because of its applications to clinical populations. Although individual differences are integral to cognitive control, so too is our ability to aggregate information across a group of individuals, so that we can powerfully generalize and characterize the group's behavior. Hence, an examination of response inhibition would ideally involve an accurate estimation of both group- and individual-level effects. Hierarchical Bayesian analyses account for individual differences by simultaneously estimating group and individual factors and compensate for sparse data by pooling information across participants. Hierarchical Bayesian models are thus an ideal tool for studying response inhibition, especially when analyzing neural data. We construct hierarchical Bayesian models of the fMRI neural time series that assume hierarchies across conditions, participants, and ROIs. Here, we demonstrate the advantages of our models over a conventional generalized linear model in accurately separating signal from noise. We then apply our models to go/no-go and stop signal data from 11 participants. We find strong evidence for individual differences in neural responses to going, not going, and stopping and in functional connectivity across the two tasks and demonstrate how hierarchical Bayesian models can effectively compensate for these individual differences while providing group-level summaries. Finally, we validate the reliability of our findings using a larger go/no-go data set consisting of 179 participants. In conclusion, hierarchical Bayesian models not only account for individual differences but allow us to better understand the cognitive dynamics of response inhibition.

INTRODUCTION

Cognitive control is composed of a wide range of processes involving executive functioning. One widely studied component of cognitive control is response inhibition, which requires the suppression of a response (often a motor response) after a specified cue. This suppression often involves either withholding a response (“not going”) or canceling an already initiated response (“stopping”). Two paradigms often used to study response inhibition are the go/no-go task (measuring not going) and the stop signal task (measuring stopping). Response inhibition is particularly interesting because of its various clinical applications pertaining to attention deficit hyperactivity disorder (Nigg, 2001; Schachar & Logan, 1990), schizophrenia (Hughes, Fulham, Johnston, & Michie, 2012), obsessive compulsive disorder (Penadés et al., 2007; Bannon, Gonsalvez, Croft, & Boyce, 2002), and substance use disorders (Nigg et al., 2006; Monterosso, Aron, Cordova, Xu, & London, 2005).

Understanding response inhibition has been important at two seemingly different levels. First, response inhibition is often used to systematically examine differences in executive function between populations, such as comparing the stop signal RTs between children and adults or comparing neural activation during stopping processes between clinical populations and healthy controls. Second, response inhibition is often used as a diagnostic tool for assessing individuals, such as when making diagnoses about individual patients. Hence, an ideal framework for investigating response inhibition would address these two objectives simultaneously. However, given the wide variability across individuals performing cognitive control tasks (Miyake & Friedman, 2012), it has so far been difficult to accurately assess individual differences while also justifying generalization across individuals within a group.

Hierarchical (Bayesian) models are ideal inferential tools for response inhibition because estimates of group- and individual-level effects are obtained simultaneously (Turner, Sederberg, Brown, & Steyvers, 2013; Ahn, Krawitz, Kim, Busemeyer, & Brown, 2011; Lee, 2008; Shiffrin, Lee, Kim, & Wagenmakers, 2008; Rouder & Lu, 2005). In a hierarchical model, lower level parameters detailing individuals are estimated conditionally and are informed by higher level parameters detailing properties of the group. The hierarchical structure allows the information at the individual level to propagate up to a higher level and “pool,” where the pooled information can be used to perform group-level inferences. Reciprocally, the pooled information also conveys information to the lower level estimates by exerting a group-informed constraint on the individual-level model parameters. This “top–down” statistical pooling is especially helpful when data at the individual level are sparse or missing entirely. Although hierarchical models can be fitted to data in a frequentist inferential framework, Bayesian statistics offer some compelling advantages, including computational conveniences (e.g., Gibbs sampling), making them the chosen framework within this article (Lee & Wagenmakers, 2013; Lee, 2008; Shiffrin et al., 2008).
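
To make the idea of hierarchical pooling concrete, the following sketch (our illustration, not part of any published analysis) simulates a simple normal-normal hierarchy in Python; the group mean, between-participant spread, trial noise, and sample sizes are all stand-in values. With the group-level quantities treated as known, the partially pooled estimate for each participant is a precision-weighted average of that participant's sample mean and the group mean, which is the direction of shrinkage a full hierarchical model produces when those quantities are themselves estimated from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group-level truth: each participant's mean effect is drawn
# from a normal group distribution with mean mu_group and SD tau.
mu_group, tau = 1.0, 0.5      # group mean and between-participant SD
sigma = 2.0                   # within-participant (trial-level) noise SD
n_participants, n_trials = 11, 25

true_effects = rng.normal(mu_group, tau, size=n_participants)
data = rng.normal(true_effects[:, None], sigma, size=(n_participants, n_trials))

# No pooling: estimate each participant from their own data only.
no_pool = data.mean(axis=1)

# Partial pooling (normal-normal conjugacy with known variances): each
# participant's estimate is pulled toward the group mean, with the amount
# of shrinkage determined by the relative precisions of data and group.
prec_data = n_trials / sigma**2
prec_group = 1.0 / tau**2
partial_pool = (prec_data * no_pool + prec_group * mu_group) / (prec_data + prec_group)

rmse = lambda est: np.sqrt(np.mean((est - true_effects) ** 2))
print(f"RMSE, no pooling:      {rmse(no_pool):.3f}")
print(f"RMSE, partial pooling: {rmse(partial_pool):.3f}")
```

In the models described below, the group-level quantities are estimated jointly rather than fixed, but the qualitative effect is the same: individual estimates are constrained toward the group, most strongly when individual data are sparse.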

As many scientists wish to understand the neural basis of response inhibition, fMRI has become a popular tool for studying cognitive control. Although fMRI data provide good spatial resolution, they are notoriously noisy. This makes a systematic extraction of the inhibition signal an elusive endeavor in terms of both experimental design and statistical inference. Moreover, imaging studies have logistical constraints that are not present in experiments outside the scanner (e.g., safety protocol, structural scans, and hemodynamic lag), so there are often strict practical limitations on the number of trials experimenters can obtain for each individual participant. Bayesian analyses (including hierarchical Bayesian analyses) have been applied to fMRI data in many ways, including spatial priors, adaptive priors, and modeling effective connectivity (see Zhang, Guindani, & Vannucci, 2015, for a review). Here, we extend the generalized linear model (GLM) hierarchically by adding participant- and region-specific effects. Unlike standard GLM analyses, the models we develop produce single-trial estimates and measures of coactivation, so that we can compare patterns of activation across individuals. Beyond the practical benefits, as we show below, hierarchical Bayesian models are better able to accurately recover individual differences compared with a nonhierarchical Bayesian GLM.

The models we present below not only improve estimates of neural activity but also provide more detailed information than a standard analysis to further aid in understanding response inhibition. First, the models can allow for some temporal dynamics by estimating the neural response corresponding to every stimulus. Although fMRI has lower temporal resolution than other measures, single-trial estimates can still account for interactions occurring over time that may affect the neural response. For example, a stop trial after a long series of go trials may produce a different neural response compared with a stop trial immediately after another stop trial, due to the sequential dependency in brain dynamics. Second, the model's estimation of a coactivation matrix allows for investigations into the functional connectivity of key ROIs at both group and individual levels. We should note that using the coactivation matrix as a measure of functional connectivity has a slightly different interpretation from other measures of functional connectivity, so this should be considered before making direct comparisons to other methods. We estimate functional connectivity as correlations between coactivation, whereas it is often calculated by correlating the raw time series data. However, our method of analyses allows us to directly compare “group” and “individual” functional connectivity measures by using two different models. We believe this particular advantage is quite compelling, as it gives us an opportunity to preserve stable individual-specific characteristics of functional connectivity that are abstracted away from idiosyncratic details of the experimental design (Swick, Ashley, & Turken, 2011). For example, Gratton et al. (2018) demonstrated that functional connectivity measures remain stable across individuals and are significantly less variable compared with task and session factors. Hence, modeling individual-specific patterns of coactivation should provide a way of assessing individual variation relative to the group.

In this article, we aim to demonstrate the utility of hierarchical Bayesian models by using them to characterize individual differences in response inhibition. First, we detail the experimental methods and specify the set of models under investigation. Second, in a simulation study, we show that hierarchical Bayesian models are preferred to conventional GLMs (C-GLMs), particularly when the true data possess individual variation. Third, we apply the hierarchical Bayesian models to fMRI data from go/no-go and stop signal experiments. Here, we compare individual and task differences across neural activation and functional connectivity. We also provide evidence that variability in the tasks is a result of individual differences as opposed to run-to-run variability, sample size, or method of analysis. We conclude with a discussion comparing our work to previous findings, while emphasizing the prevalence and importance of individual variation in cognitive neuroscience.

METHODS

Participants

The 11 participants analyzed in this study were part of a larger study involving multiple cognitive tasks and well-being inventories (Gaut et al., 2019; Molloy et al., 2018). These 11 participants, in contrast to the rest of the group from the first scan, were recruited to take part in a second experiment. Whereas the first session involved a go/no-go task, the second session involved only a stop signal task. All participants were recruited from The Ohio State University and the surrounding community, and each provided informed consent. The study was approved by the institutional review board of the university. Among the 11 participants (mean age = 24.6 years, ranging from 18 to 48 years) included in the analysis, there were five women and six men.

Stimuli

All stimuli were programmed in MATLAB using Psychtoolbox extensions (psychtoolbox.org) on a Windows PC. The participants lay supine on the scanner bed and viewed the visual stimuli back-projected onto a screen through a mirror attached to the head coil. In the go/no-go task, participants were instructed to press a button when they viewed an A, B, C, D, or E and to not press any button when they viewed an X, Y, or Z. The button response was collected using an MRI-compatible fiber-optic response pad (https://www.curdes.com/). The transistor–transistor logic output from the fiber-optic response pad was fed into the RTBox (Li, Liang, Kleiner, & Lu, 2010) to measure RT with high accuracy. The stop signal task contained both “go” and “no-go” trials; in addition, on some trials a go signal was presented but, after a delay, a stop signal (a square around the letter) appeared on the screen. The go/no-go task consisted of 75 “go” and 25 “no-go” trials, for a total of 100 trials. The stop signal task consisted of 64 “go” trials, 16 “no-go” trials, and 80 “stop” trials of three different delays (individually fit for each participant, based on RT distributions in pilot testing). There were 160 trials per run, and each participant completed three runs of the stop signal task, resulting in 480 trials total. In this study, our analysis focused on just the first run from both tasks. Figure 1 shows example trials for both the go/no-go and stop signal tasks. The jitter in each trial was designed in such a way that the trial duration ranged from 3 to 7 sec, with an increment of 1 sec. Trial durations were optimized using optseq (https://surfer.nmr.mgh.harvard.edu/optseq/).

Figure 1. 

Example trials. In each panel, an illustrative diagram shows example stimuli within a trial for the go/no-go and stop signal tasks. The top shows the stimuli within the go/no-go task (one go trial and one no-go trial), and the bottom shows the stimuli within the stop signal task (one go trial, one no-go trial, and one stop trial). For a stop trial, a square around the letter appears after a variable amount of time to indicate that a response should be inhibited.

MRI Data Acquisition

MRI recording was performed using a 12-channel head coil in a Siemens 3T Trio Magnetic Resonance Imaging System with TIM, housed in the Center for Cognitive and Behavioral Brain Imaging at The Ohio State University. BOLD functional activations were measured with a T2*-weighted EPI sequence (repetition time = 2000 msec, echo time = 28 msec, flip angle = 72 deg, field of view = 222 × 222 mm, in-plane resolution = 74 × 74 pixels or 3 × 3 mm, and 38 axial slices with 3-mm thickness to cover the entire cerebral cortex and most of the cerebellum). In addition, the anatomical structure of the brain was acquired with the three-dimensional MPRAGE sequence (1 × 1 × 1 mm3 resolution, inversion time = 950 msec, repetition time = 1950 msec, echo time = 4.44 msec, flip angle = 12 deg, matrix size = 256 × 224, 176 sagittal slices per slab; scan time = 7.5 min) for each participant.

Image Preprocessing and Analysis

The fMRI preprocessing was carried out using FEAT (fMRI Expert Analysis Tool; Woolrich, Ripley, Brady, & Smith, 2001) in FSL (FMRIB Software Library, Version 5.0.8; Smith et al., 2004). The first six volumes were discarded to allow for T1 equilibrium. The remaining images were then realigned to correct head motion. Data were spatially smoothed using a 6-mm FWHM Gaussian kernel. The data were filtered in the temporal domain using a nonlinear high-pass filter with a 90-sec cutoff. A two-step registration procedure was used whereby EPI images were first registered to the MPRAGE structural image and then into the standard (MNI) space using affine transformations. Registration from the MPRAGE structural image to the standard space was further refined using FNIRT nonlinear registration.

After the neural data were preprocessed, the time series from 24 ROIs were extracted. The selection of ROIs was based on related literature (Dunovan, Lynch, Molesworth, & Verstynen, 2015). Table 1 shows information about ROIs and their corresponding indices used in later figures. The MNI coordinates in the table defined the center of each ROI, and the radius of a sphere ROI was estimated from the number of voxels provided in Dunovan et al. (2015).
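
As an illustration of the kind of computation involved (the actual extraction was performed with FSL as described above; the array shapes, coordinates, and voxel size below are placeholders), a spherical ROI time series can be obtained by deriving a radius from the reported voxel count and averaging the preprocessed BOLD signal within that sphere at every time point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a preprocessed 4-D BOLD array (x, y, z, time) in
# voxel space, an ROI center in voxel coordinates, and the voxel count
# reported for that ROI (e.g., 208 for the callosum ROI in Table 1).
bold = rng.standard_normal((64, 64, 38, 150))   # stand-in data
center = np.array([32, 30, 20])                 # stand-in center (voxels)
n_vox, voxel_size = 208, 3.0                    # voxels and mm per side

# Radius (in mm) of a sphere with the same volume as n_vox voxels,
# then converted to voxel units: V = (4/3) * pi * r^3.
radius_mm = ((3.0 * n_vox * voxel_size**3) / (4.0 * np.pi)) ** (1.0 / 3.0)
radius_vox = radius_mm / voxel_size

# Build a spherical mask around the center and average within it at
# every time point to obtain a single ROI time series.
xx, yy, zz = np.indices(bold.shape[:3])
dist = np.sqrt((xx - center[0])**2 + (yy - center[1])**2 + (zz - center[2])**2)
mask = dist <= radius_vox

roi_timeseries = bold[mask].mean(axis=0)        # shape: (n_timepoints,)
print(roi_timeseries.shape)
```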

Table 1. 
ROIs

Number  Name                          MNI [x y z]     nVox
1       Callosum                      [3 −23 29]      208
2       Posterior cingulate cortex    [−2 −56 22]     957
3       preSMA                        [4 21 47]       1952
4       Left angular gyrus            [−44 −72 30]    328
5       Left fusiform gyrus           [−43 −60 −17]   84
6       Left IFG-1                    [−37 18 −4]     912
7       Left IFG-2                    [−44 9 29]      426
8       Left inferior parietal lobe   [−34 −52 46]    459
9       Left inferior temporal gyrus  [−56 −10 −20]   44
10      Left insula                   [−39 −3 7]      41
11      Left middle frontal gyrus     [−3 50 −9]      477
12      Left putamen                  [−27 −13 7]     48
13      Left superior frontal gyrus   [−9 57 35]      128
14      Left thalamus                 [−6 −16 −2]     72
15      Left ventral striatum         [−1 16 −9]      100
16      Right caudate                 [13 10 6]       55
17      Right IFG                     [43 20 12]      2830
18      Right inferior parietal lobe  [48 −44 43]     1400
19      Right middle frontal gyrus    [38 48 −10]     83
20      Right middle temporal gyrus   [49 −66 26]     60
21      Right precuneus               [12 −67 42]     83
22      Right putamen                 [31 −11 4]      44
23      Right superior frontal gyrus  [21 49 31]      45
24      Right thalamus                [9 −16 3]       154

The table shows the number index, name of each ROI, MNI coordinates, and the number of voxels (nVox). MNI = Montreal Neurological Institute.

MODEL SPECIFICATION

In addition to a standard GLM, we developed two models for the time series of the BOLD response by building in two types of individual variation. Figure 2 shows a graphical diagram of the two hierarchical models (Model 1 [M1] and Model 2 [M2]) and the C-GLM (right). Each node represents a parameter in the model, where shaded nodes are observed data and empty nodes are latent or unobserved parameters. Arrows represent conditional dependencies between parameters, and plates represent replications or loops across dimensions (such as conditions or participants). Both M1 and M2 provide estimates of the neural response to each stimulus. These single-stimulus parameter estimates provide temporal information that is lacking in a C-GLM analysis, as indicated by the plate notation in Figure 2. Although the temporal resolution of fMRI is lower than that of other modalities such as EEG, single-stimulus estimates can still provide valuable information about activity across a run. In addition, the models we developed contain both individual- and group-level responses to a condition (i.e., stop, go, or no-go) that correspond to the neural activation (β) estimates from a standard GLM. However, unlike GLMs, the group and individual responses are estimated simultaneously. Finally, measures of functional connectivity between ROI pairs are built into both M1 and M2 through the variance–covariance matrix Σ. The difference between M1 and M2 is the assumption about these coactivation matrices. Specifically, whereas M1 contains only a single variance–covariance matrix across all participants (i.e., no individual variation), M2 contains a variance–covariance matrix for each participant (i.e., full individual variation). By comparing the single variance–covariance matrix in M1 to the set of variance–covariance matrices in M2, we can gain a better appreciation of the degree to which individual variation plays a role in assessing response inhibition.

Figure 2. 

Graphical diagrams of the models. Each panel illustrates a graphical diagram for each model used in our analysis. Each node represents a variable in the model, where the filled nodes represent the observed neural time series from the experiment, and empty nodes represent latent variables. The design matrix (i.e., information about stimulus condition and onset time) was not included in this diagram for visual clarity. Arrows represent relationships between variables and plates represent replications across dimensions (e.g., conditions or participants). Models 1 and 2 construct a hierarchical component across conditions, participants, and ROIs. Model 1 assumes a common covariance matrix for the entire group of participants, whereas Model 2 assumes one covariance matrix for each participant.

Although both models are complex relative to the C-GLM, we previously found that these particular hierarchical levels allow better constraint and generalizability. In Molloy et al. (2018), we built five increasingly complex models, all with single-stimulus estimates, of the neural time series of the stop signal task. The simplest model had no hierarchical component. The next models constructed hierarchies across conditions; across conditions and participants; and finally across conditions, participants, and ROIs (i.e., M1 and M2). The models were compared in terms of fit to data, parameter constraint, and generalizability to other experimental runs within the same participant. We found that constructing a hierarchy across (at least) conditions and participants provided the best balance between fit, constraint, and generalizability. We chose the models that also include a hierarchy over ROIs for this analysis to simultaneously estimate coactivation and understand individual differences within functional connectivity. In the previous analyses, M1 and M2 performed similarly well in terms of fit, constraint, and generalizability. Our rationale for focusing on M1 and M2 in this article is that they are particularly well suited to assess the magnitude of individual differences, especially with respect to differences in functional coactivation.

Conceptual Overview

Here, we conceptually describe the models under consideration while providing an explanation of assumptions motivating each component. Explicit specifications of the likelihood and priors of all three models can be found in the Appendix. To begin, let Ni,k,j,r,t denote the observed BOLD response for the ith stimulus, kth condition, jth participant, rth ROI, at each time point t. We will use this notation for each effect across all model parameters to ensure consistency. Because the C-GLM considers ROIs and participants independently, the neural data node in Figure 2 is simply Nk,t. Although Ni,k,j,r,t is the only filled/observed node in Figure 2, the design matrix (not included for visual clarity) containing information about the stimulus condition and onset time is also observed and used within the model. Excluding the design matrix and neural time series, every other component of each model is latent or unobserved and hence will need to be estimated.

The three latent parameters that are directly related to the neural data Ni,k,j,r,t are βj,r0, σj,r, and βi,j,k,r. These components are analogous to the parameters that would be estimated in a C-GLM analysis, where βj,r0 is the baseline activation (see β0 in the C-GLM), σj,r is the noise term (σ in the C-GLM), and βi,j,k,r is the neural activation (βk in the C-GLM). The major difference between these parameters and their GLM counterparts is they are estimated within a hierarchical framework, and thus, information from higher levels in the model is used to inform estimates on lower levels. Additionally, the neural activation is predicted on a single-stimulus level, as discussed previously.

Across the three models, the neural activation, baseline activation, and noise terms are all devised with specific assumptions in place. First, in our models, we assume that each participant will have distinct levels of baseline activation βj,r0 for each ROI. Additionally, this baseline activation is informed by a hyperparameter μr0, which we assume may vary from ROI to ROI, in a way that is consistent across individuals. Second, we assume that the noise components σj,r may differ across ROIs and individuals.

Finally, the single-stimulus neural activity parameter βi,j,k,r is the most theoretically relevant to our research questions and the most complex compared with the other parameters. The neural activity parameter βi,j,k,r is informed by two parameters: σr0, a noise term, which informs the standard deviation of β, and δj,k,r, which informs the mean. The parameter σr0 allows the noise level for the single-stimulus β estimates to vary across ROIs. The parameter δj,k,r is a condition- and participant-level hyperparameter of neural activation for each ROI. We will use this hyperparameter as an indicator of an individual's neural response in a particular ROI to going, not going, or stopping.

The individual conditional neural response variable δj,k,r is also informed by two different hyperparameters: μk,r, which informs the mean, and Σ, which informs the standard deviation. The parameter μk,r is interpreted as the group neural response to a condition in a particular ROI, and Σ denotes the covariance matrix. Σ is a 24 × 24 matrix (where the dimensions correspond to the 24 ROIs) that estimates the pairwise correlations of coactivation between ROIs over time and is the only difference between M1 and M2. Specifically, M1 assumes that all individuals have the same coactivation matrix Σ, whereas M2 assumes a separate Σ for every individual. By constraining Σ to be equivalent or separate across individuals, our modeling analyses allow us to explore whether individual differences are also present in the interactions between different ROIs.
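
To summarize the generative structure just described, the following sketch simulates data from an M2-like hierarchy in Python. It is purely illustrative: the numerical values, the way the Σj matrices are generated, and the design are stand-ins, and the exact priors and likelihood are given in the Appendix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_roi, n_cond, n_subj, n_stim = 24, 3, 11, 100

# Group-level condition effects per ROI (mu_{k,r}); values are stand-ins.
mu = rng.normal(0.0, 1.0, size=(n_cond, n_roi))

# Per-participant coactivation matrices Sigma_j (as in M2); generated here
# by a simple positive-definite construction purely for illustration.
def random_cov(d, rng):
    a = rng.normal(size=(d, d + 2))
    return a @ a.T / (d + 2)

sigma_j = [random_cov(n_roi, rng) for _ in range(n_subj)]

# Individual condition-level responses delta_{j,k,r}: for each participant
# and condition, a draw across ROIs centered on the group effect, with the
# participant's Sigma_j inducing correlated (co-activated) deviations.
delta = np.empty((n_subj, n_cond, n_roi))
for j in range(n_subj):
    for k in range(n_cond):
        delta[j, k] = rng.multivariate_normal(mu[k], sigma_j[j])

# Single-stimulus activations beta_{i,j,k,r}: one draw per stimulus around
# the participant- and condition-level response, with ROI-specific spread.
sigma_beta = np.abs(rng.normal(0.3, 0.1, size=n_roi))    # plays the role of sigma^0_r
cond_of_stim = rng.integers(0, n_cond, size=n_stim)      # design: condition of each stimulus
beta = rng.normal(delta[:, cond_of_stim, :],             # shape: (n_subj, n_stim, n_roi)
                  sigma_beta[None, None, :])

print(beta.shape)  # (11, 100, 24)
```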

Fitting Details

All models were fit using Just Another Gibbs Sampler (JAGS; Plummer, 2003). Each model was fit using a standard pipeline procedure involving an initialization stage, a burn-in stage, and a sampling stage. In the initialization stage, three chains were placed in the parameter space and then underwent an adaptation period where the tuning parameters of the algorithms used by JAGS were adjusted for the particular parameters of a given model. In the burn-in stage, chains migrated to the highest density areas of the parameter space. Technically, the movement of the chains in this stage provides information about the target posterior, but these stochastic transitions are not considered true samples from the posterior because the chains have not yet converged to the region of the parameter space that most accurately defines the posterior's shape. Hence, these samples are not used when forming our estimate of the posterior distribution. In the sampling stage, the stochastic movement of the chains within the parameter space constitutes a series of draws from the target posterior distribution when collapsed across iterations. The collection of draws through iterations of the algorithm is what is used to estimate our posterior distribution, and this posterior is what we used to interpret the results of our parameters of interest.

In the simulation study, the C-GLM and M2 models were fit. For the C-GLM, model initialization ran for 2000 adaptations, followed by a burn-in period of 4000 iterations. The posterior sampling then ran for 6000 iterations. Across the three chains, this yielded 18,000 samples for each parameter. For M2, model initialization ran for 1000 adaptations, followed by a burn-in period of 2000 iterations. The posterior sampling then ran for 3000 iterations. Hence, 9000 samples (3000 per chain) were used to estimate the joint posterior distribution of the model parameters. In the real data analyses, both M1 and M2 were fit to the data. For both models, model initialization ran for 1000 adaptations, followed by a burn-in period of 2000 iterations. The posterior sampling then ran for 3000 iterations. Hence, 9000 samples were used to estimate the joint posterior distribution for each parameter. In the large sample analyses, the C-GLM and M2 were fit in the same way as for the simulation study and real data analyses. For the C-GLM, model initialization ran for 2000 adaptations, followed by a burn-in period of 4000 iterations, and sampled for 6000 iterations. For M2, model initialization ran for 1000 adaptations, followed by a burn-in period of 2000 iterations, and sampled for 3000 iterations. For all models, the chains were plotted and visually checked for convergence.
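
The three stages can be illustrated with a toy sampler. The sketch below uses a simple random-walk Metropolis algorithm on a stand-in target (JAGS uses its own samplers, so this is only an analogy for the adaptation, burn-in, and sampling stages), with stage lengths matching those used for M2.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta):
    # Stand-in log posterior: a standard normal target.
    return -0.5 * theta**2

def run_chain(n_adapt, n_burn, n_sample, step=1.0):
    theta, samples = 0.0, []
    for i in range(n_adapt + n_burn + n_sample):
        prop = theta + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta, accepted = prop, True
        else:
            accepted = False
        # Adaptation stage: tune the proposal step size.
        if i < n_adapt:
            step *= 1.01 if accepted else 0.99
        # Sampling stage: only draws after adaptation and burn-in are kept.
        elif i >= n_adapt + n_burn:
            samples.append(theta)
    return np.array(samples)

# Three chains, mirroring the adaptation/burn-in/sampling counts used for M2.
chains = [run_chain(n_adapt=1000, n_burn=2000, n_sample=3000) for _ in range(3)]
pooled = np.concatenate(chains)   # 9000 retained draws
print(pooled.mean(), pooled.std())
```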

ADVANTAGES OF MODELING INDIVIDUAL DIFFERENCES: A SIMULATION STUDY

The multiple levels of hierarchy cause Models 1 and 2 to be much more complex than a C-GLM analysis, allowing for extraction of additional information from the data. For example, while the C-GLM estimates just one β value per condition, our hierarchical models estimate this conditional information, as well as a β estimate in response to every stimulus, providing information about temporal dynamics. Additionally, our hierarchical models show relationships between pairs of ROIs (through estimation of Σ), providing insight into functional connectivity. However, this additional complexity may come at a cost. In addition to the computational burden, hierarchical models could overfit the data, mistaking noise for signal. Hence, it is important to first test our hierarchical models on the basis of how well they recover the true state of the world. Unfortunately, when using real experimental data, there is no way of knowing whether or not latent parameter estimates are actually correct. By simulating data based on chosen parameter values, we can assess how well a model recovers the known generating values. The accuracy and precision of the recovered posteriors can then be used to infer whether a model will provide accurate estimates when fit to real data. In this section, we compare the more complex M2 to the simpler C-GLM in terms of recovering both signal and noise.

Methods

Data sets were generated using both the C-GLM and M2. To ensure that the generated data were realistic, the parameters and experimental design of the generated data were based on the fits and design of the real data. The data sets consisted of time series data for 24 ROIs and 11 participants to keep the dimensionality the same as the dimensionality of our real data. To choose reasonable “true” values, we fit the C-GLM and M2 to the real data and used the means of the posteriors from the β0, conditional βs (for C-GLM: β and for M2: δ), and σ terms as the true values. The design matrix (i.e., presentation onset and condition order) was identical to the design matrix used in the experimental data. To test recoverability, we fit both models to both sets of generated data. In other words, the time series data generated by the C-GLM were fit by both the C-GLM and M2 models, and likewise, the time series data generated by M2 were fit by both the C-GLM and M2 models.
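
As a sketch of how a synthetic time series of this kind might be generated (the exact hemodynamic model used in our analyses is specified in the Appendix; the double-gamma response, onsets, and parameter values below are illustrative assumptions), single-stimulus activations can be placed at their onset times, convolved with a hemodynamic response function, and combined with a baseline and Gaussian noise:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
tr, n_scans = 2.0, 210                     # TR in seconds; volumes per run
t = np.arange(n_scans) * tr

# A common double-gamma HRF approximation (an assumption for illustration).
hrf_t = np.arange(0, 30, tr)
hrf = gamma.pdf(hrf_t, 6) - 1.0 / 6.0 * gamma.pdf(hrf_t, 16)
hrf /= hrf.max()

# Hypothetical design: stimulus onsets and per-stimulus "true" activations.
onsets = np.arange(5, 400, 5.0)            # seconds
true_betas = rng.normal(1.0, 0.5, size=onsets.size)

# Build a stick function at the scan resolution, scale each stick by its
# single-stimulus beta, convolve with the HRF, and add baseline plus noise.
sticks = np.zeros(n_scans)
for onset, b in zip(onsets, true_betas):
    sticks[int(round(onset / tr))] += b

beta0, sigma = 100.0, 2.0                  # baseline and observation noise (stand-ins)
signal = np.convolve(sticks, hrf)[:n_scans]
bold = beta0 + signal + rng.normal(0.0, sigma, size=n_scans)
print(bold.shape)
```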

Recovering Signal

We begin by comparing the ability of each model to accurately recover the “signal” present in the simulated data. Here, we directly compared the C-GLM and M2's ability to recover the neural activation in response to a go, no-go, or stop stimulus. Figure 3 shows the recovery of the condition-level β estimates where the top row shows the recovery for data generated by the C-GLM, and the bottom row shows the recovery for data generated by M2. The y-axis shows the true values, and the x-axis shows the recovered values from the C-GLM (blue) and M2 (red). Each point corresponds to a particular participant and ROI combination. The black line shows where the true and recovered values are equivalent; points that lie above the line underestimate the true values, and points that lie below the line overestimate the true values.

Figure 3. 

Signal recovery. Recovered conditional-level β estimates for data simulated by the conventional general linear model (C-GLM; top row) and the hierarchical M2 (bottom row). Each column corresponds to a condition: go, no-go, or stop. The estimates recovered (x-axis) by the C-GLM are denoted by blue points, and the estimates recovered by M2 are denoted by red points. The y-axis shows the true values, and the black line shows where the recovered and true βs are equal.

For the data simulated by the C-GLM, both the C-GLM and M2 can accurately recover the signal. However, for the stop condition, the C-GLM tends to overestimate smaller β values. Importantly, the added complexity in M2 does not hurt estimation of the signal parameters. Even when the C-GLM is the true data generating model (e.g., no stimulus-to-stimulus variability, no constraint based on other participants or ROIs), the more complex hierarchical model is still able to make reasonable estimates. The differences between the C-GLM and M2 are more discernible when fit to the data generated by the more complex model M2 (Turner, Wang, & Merkle, 2017). Here, both models can accurately recover β, but at the extremes, the C-GLM tends to underestimate small (negative) βs and overestimate large βs. Overall, both models can recover the signal, but when assuming the presence of individual differences (i.e., when data are generated by M2), M2 can better predict the extreme values of neural activity.

Recovering and Differentiating Noise

The ability to discriminate between noise and the underlying signal in neural data is essential. Here, we compare the abilities of the C-GLM and M2 to accurately recover the noise term σ. In addition to the common (i.e., present in both C-GLM and M2) observation noise term σ, M2 estimates the variability from stimulus to stimulus through the term σβ. By comparing the estimated σβ term across M2 when fit to data either with or without stimulus-to-stimulus variability, we can assess the degree to which (1) M2 accurately estimates σβ and (2) the estimates of the C-GLM are distorted by this additional, unaccounted-for variance term. Figure 4 shows the recovery of observation noise (σ; left column) and the estimation of stimulus-to-stimulus variability (σβ; right column). The results for the data generated by the C-GLM are in the first row, whereas the results for data generated by M2 are in the second row. The top left shows that both the C-GLM (blue) and M2 (red) are able to accurately recover σ. The top right shows M2's estimates of σβ when fit to data generated by the C-GLM. Recall that the C-GLM assumes that there is no variability between trials and does not have a σβ term, so the recoverability of the C-GLM and M2 cannot be directly compared. Instead, Figure 4 shows the posterior estimate of σβ from M2 as box plots for each ROI. The blue horizontal line is at zero, which can be thought of as the true σβ for the C-GLM. The estimates are not near zero and, as we will see next, are much larger than the estimates recovered from the data generated by M2. The large variability of the estimated posterior distributions is likely a result of the data not providing enough constraint to the posteriors, so the high means and wide spreads of the posteriors resemble the prior distributions.

Figure 4. 

Noise recovery. The recovery of the noise term (σ; left column) and estimation of stimulus-to-stimulus variability (σβ; right column) is compared between the conventional general linear model (C-GLM) and the hierarchical M2. The top row shows the results of the data generated by the C-GLM, and the bottom row shows the results of the data generated by M2. In the left column, the true σ value is on the y-axis, whereas the recovered σ value is on the x-axis. Estimates recovered by the C-GLM are colored blue, whereas estimates recovered by M2 are colored red. The estimation of the stimulus-to-stimulus variability term (σβ) is shown in the right column. For the data generated by the C-GLM, no stimulus-to-stimulus variability is assumed, so the “true” values for each estimate is zero (pictured as the blue horizontal line). Box plots of the posteriors of estimated σβs from M2 are displayed for each ROI (x-axis). For the data generated by M2, we can compare the true σβs (y-axis) to the mean of the posterior of the recovered σβs (x-axis). Within each panel, the black line indicates perfect recovery of the estimated parameters.

The second row of Figure 4 shows the model-fitting results when the data were generated by M2. The bottom left shows that, unlike the results for the data generated by the C-GLM, there is a clear difference in ability to recover σ between the C-GLM and M2. The C-GLM underestimates σ for almost every participant and ROI. Conversely, M2 closely recovers σ, although there is a slight tendency to overestimate it, especially as the true value of σ increases. The C-GLM's consistent underestimation of σ indicates that it is unable to isolate the effects of signal from the effects of noise. The bottom right shows the recovered σβs from M2. M2 slightly overestimates the true σβ, but the range of these estimates is starkly different from the σβ estimates of the data generated by the C-GLM in the top right. In conclusion, the simulation study shows that M2 not only provides additional information for differentiating variability but also accurately recovers both signal and noise. By contrast, when the C-GLM is misspecified (i.e., when a more complex model generates the data), the estimated parameters are severely biased in ways that might affect our conclusions, as the bias in the estimated noise could impact statistical significance.

UNDERSTANDING RESPONSE INHIBITION THROUGH MODELING INDIVIDUAL DIFFERENCES

In the simulation study, we demonstrated that M2 is preferable to the C-GLM in situations where additional variability such as stimulus-to-stimulus variability or individual differences are present in the data. In this section, we apply our hierarchical models (M1 and M2) to the experimental data from go/no-go and stop signal tasks. First, we analyze neural activation in response to different conditions. Here, we discuss key patterns in both the go/no-go and stop signal tasks but also examine how individual variability differs across ROIs and conditions. Second, we investigate functional connectivity in the go/no-go and stop signal tasks on both group and individual levels. We aim to uncover any consistent patterns of coactivation between individuals for the two tasks. Finally, we explore the role of individual differences in response inhibition. Specifically, we examine whether it is necessary to assume that individual factors affect coactivation in the brain (as we do in M2).

Neural Activation When Inhibiting a Response

Go/No-go

As expected, go and no-go stimuli evoke different responses in the brain. Figure 5A shows the mean group results of the δ distributions by condition in the go/no-go task. Each numbered dot corresponds to an ROI (location is approximated for visualization). The color of the dot denotes neural activation, where cooler colors represent a smaller or more negative activation and warmer colors represent a larger activation. Importantly, the individual variability in neural activation differs across ROI and condition. Figure 5B shows the mean of each individual's δ estimate (measured by the neural activation; y-axis) for each ROI (x-axis). Conditions are represented by semitransparent rectangles, with green rectangles for the go condition and blue rectangles for the no-go condition. The length of the semitransparent rectangles denotes the 95% credible interval of the δj,k,r posteriors. The green and blue Xs represent the group mean for go and no-go, respectively, and are thus equivalent to the neural activation patterns in Figure 5A.

Figure 5. 

Go/no-go ROI activation by condition. M2 parameter estimates for δ in the go/no-go task. (A) Aggregated group results of the mean for δ across participants in each condition (with δGo on the top and δNo-go on the bottom) as a colored dot showing neural activation for each ROI. Cooler colors represent smaller levels of activation, whereas warmer colors represent larger levels of activation. (B) Individual results. The semitransparent rectangles (green for go and blue for no-go) represent 95% credible intervals of δ (y-axis) for each participant, and Xs represent the group means for each ROI (x-axis).

In our discussion, we focus on the major areas thought to be involved in response inhibition: the inferior frontal gyrus (IFG), the preSMA, and the BG—particularly, the subthalamic nucleus (Aron, Robbins, & Poldrack, 2014). In our analyses, the IFG consists of three ROIs: the right IFG (ROI 17), and two ROIs comprising the left IFG (left IFG-1 and left IFG-2; ROI 6 and ROI 7). One ROI comprises the bilateral preSMA (ROI 3). Finally, three ROIs correspond to BG structures found by Dunovan et al. (2015) to be involved in response inhibition: the right caudate (ROI 16), the left thalamus (ROI 14), and the right thalamus (ROI 24). In the go/no-go task, only one area, the right caudate, showed higher mean neural activation in response to go stimuli than in response to no-go stimuli (ROI 16, μGo − μNo-go = 0.16 [−0.26, 0.57]). All of the other areas of interest, including the IFG (left IFG-1, ROI 6: μGo − μNo-go = −0.20 [−0.61, 0.20], left IFG-2, ROI 7: μGo − μNo-go = −0.29 [−0.76, 0.18], and right IFG, ROI 17: μGo − μNo-go = −0.23 [−0.58, 0.12]), the preSMA (ROI 3, μGo − μNo-go = −0.14 [−0.55, 0.26]), and the thalamus (left, ROI 14: μGo − μNo-go = −0.17 [−0.58, 0.21] and right, ROI 24: μGo − μNo-go = −0.15 [−0.53, 0.24]), showed more activation in response to no-go stimuli than to go stimuli. The bracketed values indicate the 95% credible intervals, calculated by taking the 2.5% and 97.5% quantiles of the μGo − μNo-go posterior. Note that all of these 95% credible intervals contain zero.

Stop Signal

Figure 6A shows aggregated group results for each ROI in the stop signal task. Each row corresponds to a condition, where go is in the top row, no-go is in the middle row, and stop is on the bottom row. In Figure 6B, conditions are represented by semitransparent rectangles, where green rectangles are for the go condition, blue rectangles are for the no-go condition, and red rectangles are for the stop condition. Again, the length of the semitransparent rectangles denotes the 95% credible interval of the δj,k,r posteriors. The layout of this plot is otherwise identical to the layout of Figure 5. The model fit to the stop signal task also shows clear conditional differences in average activation, as well as differences in variability from ROI to ROI. The most striking result in Figure 6 is the neural deactivation across the brain in response to a stop signal. Mean activation was higher in response to go stimuli than in response to no-go stimuli for every area of interest discussed in the go/no-go task: bilateral IFG (left IFG-1, ROI 6: μGo − μNo-go = 0.29 [−0.13, 0.70]; left IFG-2, ROI 7: μGo − μNo-go = 0.27 [−0.21, 0.74]; right IFG, ROI 17: μGo − μNo-go = 0.29 [−0.11, 0.67]); preSMA (ROI 3: μGo − μNo-go = 0.24 [−0.19, 0.66]), right caudate (ROI 16, μGo − μNo-go = 0.36 [−0.11, 0.67]), and bilateral thalamus (left thalamus, ROI 14: μGo − μNo-go = 0.021 [−0.40, 0.43]; right thalamus, ROI 24: μGo − μNo-go = 0.19 [−0.24, 0.62]). This result is nearly opposite to the results of the go/no-go task, where only the right caudate displayed a pattern of mean higher activation in go than no-go. Again, the bracketed values indicate the 95% credible intervals of the μGo − μNo-go posterior. Note that all of these intervals contain 0, suggesting that, again, on a group level, there is not a strong difference between the go and no-go conditions within these key ROIs. Additionally, for these seven areas of interest, there was higher mean activation in response to go stimuli than in response to stop signals (μGo − μStop: left IFG-1, ROI 6 = 0.45 [−0.065, 0.92], left IFG-2, ROI 7 = 0.67 [0.19, 1.20]*, right IFG, ROI 17 = 0.18 [−0.24,0.60], preSMA, ROI 3 = 0.87 [0.38, 1.40]*, right caudate, ROI 16 = 0.66 [0.13, 1.20]*, left thalamus, ROI 14 = 0.30 [−0.18, 0.79], and right thalamus, ROI 24 = 0.45 [−0.05, 0.92]). Here, three areas (left IFG-2, preSMA, and right caudate) had 95% credible intervals that were positive and did not contain zero, suggesting strong deactivation within the stop condition when compared with go. Furthermore, for six of seven of these areas of interest, there was higher activation in mean response to no-go stimuli than in response to stop signals (μNo-go − μStop: left IFG-1, ROI 6 = 0.16 [−0.37, 0.66], left IFG-2, ROI 7 = 0.40 [−0.16, 0.97], preSMA, ROI 3 = 0.63 [0.11, 1.10]*, right caudate, ROI 16 = 0.30 [−0.26, 0.85], left thalamus, ROI 14 = 0.28 [−0.19, 0.77], and right thalamus, ROI 24 = 0.26 [−0.24, 0.76]). Here, only the credible interval for the preSMA did not contain zero, again suggesting strong deactivation in the stop condition, when compared with no-go as well. The mean for the right IFG was slightly negative, although the 95% credible interval still included 0 (μNo-go − μStop: right IFG, ROI 17 = −0.10 [−0.53, 0.34]). Although both not going and stopping measure a type of response inhibition, there was a clear difference in their neural responses.

Figure 6. 

Stop signal ROI activation by condition. M2 parameter estimates for δ in the stop signal task. (A) Aggregated group results for the estimated mean of δ across participants in each condition: δGo (top row), δNo-go (middle row), and δStop (bottom row). Activation for each ROI is represented according to the legend on the right-hand side, where cooler colors represent a smaller neural activation and warmer colors represent larger neural activation. (B) Individual results. The semitransparent rectangles (green for go, blue for no-go, and red for stop) represent the 95% credible intervals of δ (y-axis) for each participant, and Xs represent the group means for each ROI (x-axis).

Functional Connectivity Within Response Inhibition Tasks

Go/No-go

Both models estimate Σ as a covariance matrix, but for interpretability, each posterior sample of Σ (from each chain) was converted into a correlation matrix, and the resulting matrices were averaged. Figure 7A shows a plot of the Σ matrix estimated from M1 fit to go/no-go data and four plots of Σj estimated from M2 fit to go/no-go data from four representative participants. In all five matrices, the diagonal components were removed to avoid distorting the scale, as the diagonal is always equal to 1.0. All five plots are colored according to the same scale, where warmer colors show a higher correlation and cooler colors show a smaller or more negative correlation.
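
For reference, the conversion and averaging step can be written compactly. The sketch below assumes hypothetical posterior draws of Σ stored as a three-dimensional array and is not the code used in the original analysis.

```python
import numpy as np

def cov_to_corr(cov):
    """Convert a covariance matrix to a correlation matrix."""
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

# Hypothetical posterior draws of Sigma: (n_samples, n_roi, n_roi),
# e.g., 9000 retained draws over 24 ROIs (stand-in positive-definite draws).
rng = np.random.default_rng(4)
n_samples, n_roi = 9000, 24
draws = np.empty((n_samples, n_roi, n_roi))
for s in range(n_samples):
    a = rng.normal(size=(n_roi, n_roi + 2))
    draws[s] = a @ a.T

# Convert each draw to a correlation matrix, then average across draws.
mean_corr = np.mean([cov_to_corr(c) for c in draws], axis=0)

# Mask the diagonal (always 1.0) before plotting, as in Figure 7.
np.fill_diagonal(mean_corr, np.nan)
print(np.nanmin(mean_corr), np.nanmax(mean_corr))
```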

Figure 7. 

Go/no-go correlation matrices. (A) Pairwise correlation matrices exhibiting coactivation in the go/no-go task across the 24 ROIs. The correlation matrix on the left is the group correlation matrix from M1. The four correlation matrices on the right are representative individual-level matrices obtained from M2. Each correlation value is color-coded according to the legend on the right-hand side, where cooler colors show negative correlations and warmer colors show more positive correlations. The diagonal entries were removed for visual clarity. (B) Box plots of the distribution of M1 correlations minus M2 correlations for a given participant (x-axis). Positive values denote that an estimate was higher for M1 than for a given participant in M2.

Overall, there were no consistent patterns of coactivation. First, there were no consistent trends on an individual level (shown by the lack of similarities between the individual Σ estimates from M2). Second, the group-level matrix of M1 did not resemble any of the individual matrices. Although this may be somewhat apparent from Figure 7A, to provide more quantitative evidence, we calculated the differences between the pairwise correlations of M1 and the pairwise correlations of M2 for each individual. Figure 7B displays the box plots of these differences, where each box plot represents a different participant. Positive values (above the horizontal line) indicate that M1 estimated higher correlations than M2 for a given participant. The mean coactivation for M1 was higher than that of M2 for every participant. To further corroborate this difference between M1 and M2, we calculated the region of practical equivalence (ROPE; Kruschke & Liddell, 2018). We defined the ROPE as ranging from −0.05 to 0.05. If a large proportion of the differences are within this ROPE, we would conclude that M1 and M2's estimates of functional connectivity are essentially the same. However, we found that only a minority of the samples (13.98%) fell within the ROPE. Taken together, these results suggest that individual differences in functional connectivity are a major source of variation in the go/no-go task.
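
The ROPE computation itself is straightforward; the sketch below illustrates it on hypothetical difference values (the participant and ROI counts match the study, but the numbers are simulated).

```python
import numpy as np

def rope_proportion(diffs, low=-0.05, high=0.05):
    """Proportion of values falling inside the region of practical equivalence."""
    diffs = np.asarray(diffs)
    return np.mean((diffs >= low) & (diffs <= high))

# Hypothetical differences: for each participant, the off-diagonal
# correlations from the group matrix (M1) minus the corresponding
# correlations from that participant's matrix (M2).
rng = np.random.default_rng(5)
diffs_per_subject = [rng.normal(0.15, 0.2, size=24 * 23 // 2) for _ in range(11)]

all_diffs = np.concatenate(diffs_per_subject)
print(f"Proportion within ROPE [-0.05, 0.05]: {rope_proportion(all_diffs):.2%}")
```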

Stop Signal

Figure 8A shows the group correlation matrix from M1 and representative participants' correlation matrices from M2. The color scale in this figure reflects the range of the stop signal Σs and is not equivalent to the scale in Figure 7A, but otherwise, the layout of the figures is the same. Similar to the go/no-go task, the group matrix did not resemble the individual matrices. To quantify these dissimilarities, we again constructed box plots of the differences in coactivation estimates between M1 and M2, shown in Figure 8B. As observed previously in the go/no-go task, the mean differences were all positive, meaning that M1 estimated higher coactivations than M2 across all 11 participants. Furthermore, the ROPE analysis (with the same region) showed that only 7.80% of the differences were within the ROPE. This is a smaller proportion than that calculated for the go/no-go task, suggesting that the differences between M1 and M2 are even more pronounced in these data. Additionally, the individual matrices show no recurrent patterns. Thus, we can conclude that individual differences in functional connectivity also arise in the stop signal task. Moreover, these stop signal matrices, on both group and individual levels, did not resemble the go/no-go matrices, although there is some similarity for Participants 4 and 11.

Figure 8. 

Stop signal correlation matrices. (A) Pairwise correlation matrices exhibiting coactivation in the stop signal task between the 24 ROIs. The correlation matrix on the left is the group correlation matrix from M1. The four correlation matrices on the right are representative individual-level matrices from M2. Each correlation value is color-coded according to the legend on the right-hand side, where cooler colors show negative correlations and warmer colors show more positive correlations. The diagonal entries were removed for visual clarity. (B) Box plots of the distribution of M1 correlations minus M2 correlations for a given participant (x-axis). Positive values denote that an estimate was higher for M1 than for a given participant in M2.

The model estimates of Σ and δ demonstrate widespread differences between tasks and individuals. In both go/no-go and stop signal tasks, the degree of individual differences in these neural responses differed from ROI to ROI (Figures 5B and 6B). Comparisons of Σ further support the claim that individual differences are integral to (but not similar within) both go/no-go and stop signal tasks (Figures 7 and 8).

DISTINGUISHING BETWEEN INDIVIDUAL DIFFERENCES AND ADDITIONAL FACTORS

We argue that the variability we observed in neural activation and functional connectivity is the result of individual differences. However, there are many other factors that could produce this variability. First, we used only 11 participants in our analyses, and this relatively small sample size could allow a more extreme (atypical) participant to unduly influence the hyperparameters of the hierarchical model, skewing the estimates for every individual. Second, hierarchical Bayesian models are much more complicated than standard GLM analyses. Although our simulation study provided evidence that the hierarchical approach is preferable, these advantages could be present only in smaller samples. Third, the variation observed may actually be a result of run-to-run differences. It is plausible that what we perceive as variation caused by individual differences may actually just be a result of another phenomenon, such as practice effects. In this section, we aim to validate that the variability we observed is actually a result of individual differences, not these aforementioned possible confounds.

Sample Size and Analysis Variability

In all of the above analyses, we chose to focus on 11 participants who completed both response inhibition tasks so that we could directly compare the tasks and types of inhibition while accounting for distinct individual differences. These 11 participants, as noted, were part of a larger pool that included 168 additional participants who also completed one run of the go/no-go task. The task details and model fitting procedures (for both M2 and the C-GLM) are identical to those described in the methods. Fitting the same models to this larger sample from the same task allows us to decouple any confounds that may have arisen from using a smaller sample size. In this section, we aim to demonstrate that the results are consistent across small and large sample sizes: first, that the results reported above are not greatly influenced by using more participants to inform the hierarchical hyperparameters, and second, that hierarchical Bayesian models remain advantageous over the C-GLM even in larger data sets.

Figure 9 compares the neural activation estimates of going and not going (δ) from M2 and the C-GLM in small (n = 11) and large (n = 179) data sets. In each panel, a point represents a δ estimate for one ROI and one participant in either the go or no-go condition, denoted by green or blue points, respectively. The diagonal line signifies equivalence between the methods and sample sizes being compared, and the printed value in the top left corner is the correlation. The left plot compares the hierarchical estimates for the 11 participants between the small (x-axis) and large (y-axis) samples. The two sets of estimates are highly correlated, showing that the hyperparameters in the larger sample do not greatly skew the individual estimates. This suggests that the results we observed for the smaller sample size are consistent regardless of sample size. The next two panels of Figure 9 compare the methods of analysis in the small sample (middle plot) and the large sample (right plot). In both cases, the estimates of M2 and the C-GLM are highly correlated, though the smallest and largest values are more extreme in the C-GLM than in M2 for both sample sizes. The no-go values are especially more extreme for the C-GLM because there are fewer no-go trials than go trials, so the advantages of pooling data in M2 are most noticeable there. Thus, the shrinkage imposed by the hierarchy is consistent regardless of sample size. Additionally, this pattern of shrinkage parallels the results we observed in the simulation study (see Figure 3). From these comparisons, we can be more confident that the variability we observed is a result of individual differences as opposed to confounding factors from sample size or method of analysis.

Figure 9. 

Large sample go/no-go. Comparison of conditional neural activation estimates between the hierarchical Bayesian M2 and the C-GLM in small (n = 11) and large (n = 179) data sets. In all three plots, each point corresponds to the mean of the posterior for δ for a given ROI and participant, colored by condition, where green is go and blue is no-go. The correlation is printed in the top left of each subplot, and the diagonal line denotes equivalence. The left plot compares mean δ estimates for the overlapping 11 participants in smaller (x-axis) and larger (y-axis) hierarchical models. The middle plot compares mean δ estimates for the overlapping 11 participants in smaller (x-axis) and larger (y-axis) C-GLMs. The right plot compares mean δ estimates for all 179 participants in the large C-GLM (x-axis) and the large M2 (y-axis).

Run-to-run Variability

Using additional data from the go/no-go task, we demonstrated that sample size and method of analysis are distinct from the individual differences we observed. Now, we use data from the additional runs of the stop signal task completed by the original 11 participants to distinguish run-to-run differences from individual differences. First, we compare how functional connectivity varies across participants and across runs. If individual differences are more important than run-to-run differences, we would expect the coactivation matrices estimated for each run to be more highly correlated within a particular participant than between participants. In this case, coactivation would be more similar within a participant, even across runs, than between participants. Second, to further ensure that individual differences are key, we compare the fit of models that include information from other runs to the fit of models that do not. If individual differences are more important, then model fit will improve when considering connectivity from that participant in a previous run. If run-to-run differences are the source of variability, then including information from a different run would not improve model fit statistics.

Functional Connectivity between and across Participants

This section comprises two analyses, both based on M2 because it allows individual differences to be present in coactivation. The first analysis examines differences between individuals' coactivation when all three runs are considered; here, we took the pairwise correlations between each participant's coactivation matrices. The second analysis tests how an individual varies from one run to another; here, we look at the correlations of the coactivation from one run to the next within a single participant.

First, Figure 10A shows correlations between Σ matrices across different participants. It is an 11 × 11 matrix giving the correlation between the coactivation matrix of one participant and that of each of the other participants, with each column/row representing an individual participant. The correlations span all three runs and were calculated by concatenating the three coactivation matrices output from the model fits to each run. Second, we wanted to explore whether or not the coactivation matrices were similar when compared on a run-to-run basis within an individual. Figure 10B shows eleven 3 × 3 matrices giving the correlations between the coactivation matrices of each pair of runs (x and y axes) for each participant (panels). The legend on the right applies to both A and B.
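To make these two computations concrete, the following Python sketch assumes that each fit of M2 yields a posterior-mean 24 × 24 coactivation matrix per participant and run; the matrices below are placeholders rather than our estimates.

import numpy as np

# Placeholder posterior-mean coactivation matrices from M2:
# one 24 x 24 matrix per participant per run of the stop signal task.
n_sub, n_run, n_roi = 11, 3, 24
rng = np.random.default_rng(1)
sigma = rng.uniform(-1, 1, size=(n_sub, n_run, n_roi, n_roi))

lower = np.tril_indices(n_roi, k=-1)

# (A) Between-participant correlations: concatenate the lower triangles of
# all three runs for each participant, then correlate participants pairwise.
concat = np.stack([np.concatenate([sigma[s, r][lower] for r in range(n_run)])
                   for s in range(n_sub)])
between = np.corrcoef(concat)               # 11 x 11 matrix (cf. Figure 10A)

# (B) Within-participant correlations: correlate the lower triangles of the
# three runs separately for each participant.
within = np.stack([np.corrcoef(np.stack([sigma[s, r][lower] for r in range(n_run)]))
                   for s in range(n_sub)])  # eleven 3 x 3 matrices (cf. Figure 10B)

print("mean between-participant correlation:",
      between[np.tril_indices(n_sub, k=-1)].mean())
print("mean within-participant run-to-run correlation:",
      np.mean([w[np.tril_indices(n_run, k=-1)].mean() for w in within]))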

Figure 10. 

Coactivation correlations in the stop signal task. (A) 11 × 11 plot of the correlations of coactivation matrices concatenated across the three runs of the stop signal task for pairs of participants (x and y axes). (B) Eleven 3 × 3 plots of the correlations of the coactivation matrices between each run (x and y axes) for each participant. The legend applies to both panels, where higher correlations are orange–red and lower correlations are blue–green. The diagonal elements of each matrix are all equal to 1 and are removed for visual clarity.

In Figure 10A, the correlations between participants were overall close to zero or slightly positive (mean of 0.061). This supports our claim that modeling and reporting individual differences in this task is important. However, some pairs of participants had relatively larger correlations; specifically, Participants 2 and 4 have the highest between-participant correlation (0.397). For comparison, the highest average within-participant correlation across runs (Figure 10B) belongs to Participant 11, at 0.267. Thus, the correlation between those two participants was higher than any participant's average correlation with their own coactivation matrices between runs. Participants also vary considerably in Figure 10B. Overall, however, participants have higher correlations from run to run in their own data than with other participants across runs. To summarize, although there were differences from run to run within the same participant, the differences between participants were greater overall. The next step, therefore, is to provide further evidence for the importance of including individual differences by evaluating model fit.

Constraining Connectivity Priors on an Individual Level

One way to test whether including individual differences in coactivation is important is to compare the fit of models that assume the coactivation matrix varies across individuals, not runs, with the fit of models that do not make this assumption. In other words, we compare models that have informed priors on Σ (containing information from other runs) to models that have uninformed priors on Σ. To do this, we compared model fit across nine different model/data combinations. The deviance information criterion (DIC; Spiegelhalter, Best, Carlin, & van der Linde, 2002) was used as a measure of model fit. DIC measures model fit while penalizing for complexity, but because the models all have the same level of complexity, differences in DIC reflect only differences in fit.

We first fit three “uninformed models.” Here, M2 was fit to each run separately with an uninformed prior on the covariance matrix (i.e., setting the scale matrix of the inverse Wishart distribution to an identity matrix). Then, using the covariance matrices estimated by those three models, we obtained a more informed estimate of the scale matrix. The expectation of an inverse Wishart distribution is the product of the degrees of freedom and the scale matrix. Thus, to approximate the scale matrices used in the “informed” model fits, the covariance matrices estimated by the “uninformed” models were averaged and then divided by a fixed degrees of freedom (24). Approximate scale matrices were obtained for all three runs. The other six model fits are the informed fits (e.g., Run 1 informed by Run 2, and Run 1 informed by Run 3). The DIC values were obtained within JAGS. Table 2 shows the differences between the informed and uninformed fits, along with sample standard errors. A positive difference means that the informed model fits better than the uninformed model. For Runs 1 and 2, the model fits are much better when the model is informed by other runs. For Run 3, the differences are small and the standard errors are large, so data from Runs 1 and 2 neither significantly improved nor hurt the model fit. Overall, including individualized run data improved model fits to other runs, providing further evidence for the importance of individual differences.
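The construction of the informed scale matrices, and the sign convention used in Table 2, can be sketched as follows. The posterior samples and DIC values are placeholders; the sketch simply follows the approximation described above (average the estimated covariance matrices and divide by the fixed degrees of freedom), and the reading that a positive difference is uninformed minus informed DIC is our interpretation of the table.

import numpy as np

# Placeholder posterior samples of the coactivation matrix Sigma from an
# "uninformed" fit of M2 to one run (n_samples x 24 x 24, positive definite).
n_samples, n_roi, df = 4000, 24, 24
rng = np.random.default_rng(2)
a = rng.standard_normal((n_samples, n_roi, n_roi))
sigma_samples = a @ a.transpose(0, 2, 1) / n_roi

# Average the estimated covariance matrices, then divide by the fixed
# degrees of freedom (24) to approximate the scale matrix used in the
# "informed" inverse Wishart prior for a different run.
informed_scale = sigma_samples.mean(axis=0) / df

# Lower DIC indicates better fit, so a positive (uninformed - informed)
# difference favors the informed model.
dic_uninformed, dic_informed = 10500.0, 10420.0   # illustrative values only
print("DIC difference:", dic_uninformed - dic_informed)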

Table 2. 
Model Fits

                                      Informed by
Fit to                         Run 1         Run 2         Run 3
Run 1   Difference               —          78.31265      26.4543
        Sample standard error    —           7.409526      6.270958
Run 2   Difference            19.38235         —          19.11392
        Sample standard error  6.645165        —           6.431498
Run 3   Difference            −8.990306     −5.408381        —
        Sample standard error  6.88419       7.771539        —

The table shows the differences in DIC fit statistics, and their sample standard errors, between M2 fit with an informed prior on Σ and M2 fit with an uninformed prior on Σ. Positive values indicate a better fit in models informed by an individual's coactivation matrices from another run.

DISCUSSION

Through the application of hierarchical Bayesian models, we demonstrated the importance of individual differences in response inhibition tasks. First, through a simulation study, we showed that our hierarchical Bayesian model outperformed the C-GLM in terms of recovering and differentiating between signal and sources of noise, especially in contexts where individual differences are present. Second, we observed individual differences in condition-wise activation in both the go/no-go and stop signal tasks. Additionally, we found task differences characterized by a brain-wide deactivation following a stop signal (but not following a no-go cue). Third, the coactivation matrices demonstrated both task-wise and strong individual differences in the go/no-go and stop signal tasks. Fourth, we distinguished the individual variability we observed from run-to-run variability, sample size, and method of analysis. In this discussion, we further explore whether the variability observed is actually a result of individual differences, relate our findings to the response inhibition literature, and discuss limitations and further directions.

We argue that the variability observed across individuals in ROI activation and functional connectivity is a result of individual differences. However, variation could arise from a variety of sources, and this is a reasonable concern, especially given findings of significant day-to-day or session-to-session variability (Noble et al., 2017; Pannunzi et al., 2017). Gratton et al. (2018), however, found that daily or session-by-session factors accounted for only a small proportion of variability, given sufficient data across runs and participants. Additionally, they found that these functional networks (analogous, though as noted in the Introduction, not identical, to our coactivation matrices) were stable across individuals. Stable individual differences in functional connectivity have been found across a variety of domains, both at rest and (to a lesser degree) during different tasks (Gordon et al., 2017; Finn et al., 2015). To explore this possible issue within the response inhibition data presented above, we examined run-to-run differences in functional connectivity and model fit statistics. We found higher correlations within individual participants across runs than between different participants. Additionally, model fit improved when accounting for individual features of the correlation matrices in the other runs. Taken together with previous findings, this suggests that the primary source of the observed variability is individual differences.

Although individual differences are known to be influential in cognitive control (Miyake & Friedman, 2012), not all of our findings are in line with major theories of response inhibition. Some aspects of our analyses corroborate major findings, whereas other aspects contradict them, so we discuss some of these relationships and propose possible explanations for the discrepancies. First, we noted differences between the go/no-go and stop signal tasks in terms of both conditional differences and ROI coactivations. This coincides with evidence from meta-analyses of versions of go/no-go and stop signal tasks that found different neural correlates and suggested that different systems may be involved in the two tasks (Swick et al., 2011; Rubia et al., 2001). Second, we did not observe increased activation in the right IFG in response to a stop signal. The right IFG is hypothesized to act as a brake within a network for outright stopping that also involves the preSMA and subthalamic nucleus, a robust result observed in fMRI, EEG, animal, and lesion studies (Aron et al., 2014). Although aspects of this hypothesis are still debated, especially regarding lateralization of the IFG (Swick, Ashley, & Turken, 2008) and the involvement of entire networks (Hampshire & Sharp, 2015), there may be other factors that contributed to the observed deactivation of the right IFG in our analyses. One possibility is that our stop signal task does not follow the convention of having a minority of stop signal trials; Aron et al. (2014) argue that increases in this proportion can turn the task into a decision-making task rather than a response inhibition task. Another factor could be the size of the right IFG region in our analyses; at 2830 voxels, the right IFG was the largest ROI in our analyses. Given evidence that a subregion of the right IFG (pars opercularis) may be responsible for stopping, our ROI may be too large to detect activation in this subregion (Levy & Wagner, 2011).

A possible limitation of our models is the stability of the coactivation matrices across conditions. In a meta-analysis, Swick et al. (2011) found that different networks may be used during go/no-go and stop signal tasks. Because our stop signal task combines components of both not going and stopping, these studies suggest that connectivity measures may differ from condition to condition, as different networks appear to correspond to each process. The focus of our analyses was individual differences across the task as a whole, but a next step would be to apply these models to more (or different) data to see how, for example, connectivity differs between inhibiting and initiating a response. In our analyses, having only 16 no-go trials per person provided insufficient data to estimate a coactivation matrix. We might also expect to see no differences across these matrices (if fit to sufficient data), as Gratton et al. (2018) found that functional networks varied mostly by individual and only to a small degree by task. However, the analyses presented by Gratton et al. collapsed across the time series of the BOLD response within a task, so the degree to which different coactivation matrices are needed remains an open question.

A major limitation of our models is that they have no behavioral component. For example, the models treat unsuccessful and successful response inhibition the same. Although there is some evidence to suggest that some components of the neural response to a stop signal are the same regardless of successful inhibition (Aron & Poldrack, 2006), we believe this to be a strong assumption. Furthermore, these models provide no mechanistic explanation of the cognitive processes behind going, not going, and stopping. Numerous models have been proposed to represent the cognitive processes in the stop signal task (Logan, Van Zandt, Verbruggen, & Wagenmakers, 2014; Matzke, Dolan, Logan, Brown, & Wagenmakers, 2013; Logan & Cowan, 1984). However, only a few of these models incorporate neural data (Logan, Yamaguchi, Schall, & Palmeri, 2015; Boucher, Palmeri, Logan, & Schall, 2007). By linking the neural data to the behavioral data (Turner, Forstmann, & Steyvers, 2018; Turner, Forstmann, Love, Palmeri, & Van Maanen, 2017; Turner, Rodriguez, Norcia, McClure, & Steyvers, 2016; Turner, Van Maanen, & Forstmann, 2015; Turner, Forstmann, et al., 2013), the set of extant cognitive models can be further constrained, potentially providing an opportunity to better understand how response inhibition is carried out in the brain from a mechanistic perspective.

Conclusions

Here, hierarchical Bayesian models revealed the ubiquity of individual differences in the neural processes underlying response inhibition. The models we constructed outperformed a standard analysis in separating signal from noise in fMRI data, especially when accounting for individual and trial-to-trial variability. The simultaneous group and individual estimates revealed the different dynamics in going, not going, and stopping on a group level while preserving individuality. Finally, analyses of coactivation between ROIs estimated by the models demonstrated the prevalence of individual differences within functional connectivity.

APPENDIX: MODEL SPECIFICATION

The conceptual and theoretical assumptions of the two models were presented above. Here, we provide the equations for the likelihood and priors used. To begin, the neural likelihood is defined as:
N_{j,r}(t) = \beta^0_{j,r} + \sum_{i=1}^{R} \beta_{i,j,k,r} \, h_{0,i}(t) + \epsilon(t)
where the neural data N_{j,r}(t) are observed at every time point t (and thus span every stimulus i and condition k) for every participant j and ROI r. In the neural likelihood, \beta^0_{j,r} denotes an intercept term for baseline activation for each participant and ROI, and \epsilon(t) is an error term described below. These terms are added to the hemodynamic response functions convolved across the number of stimulus presentations, R. Here, we assume the hemodynamic response function h_{0,i} is a double-gamma model (Glover, 1999; Boynton, Engel, Glover, & Heeger, 1996), with fixed shape parameters a_1 = 6, a_2 = 16, b_1 = 1, b_2 = 1, and c = 1/6.
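For concreteness, the double-gamma response with these fixed parameters, and the resulting mean of the neural likelihood for a single participant and ROI, can be sketched in Python as below; the repetition time, onsets, and beta values are illustrative assumptions rather than values from our design.

import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, a1=6, a2=16, b1=1, b2=1, c=1/6):
    """Double-gamma HRF with the fixed shape parameters given above.
    gamma.pdf is zero for negative times, so the response begins at t = 0."""
    return gamma.pdf(t, a1, scale=1 / b1) - c * gamma.pdf(t, a2, scale=1 / b2)

# Mean of the neural likelihood for one participant j and ROI r: the baseline
# intercept plus beta-weighted HRFs shifted to each of the R stimulus onsets.
tr, n_scans = 2.0, 200                    # assumed repetition time and run length
t = np.arange(n_scans) * tr
onsets = np.array([10.0, 34.0, 58.0])     # hypothetical stimulus onsets (s)
betas = np.array([0.8, 1.1, 0.5])         # hypothetical beta_{i,j,k,r} values
beta0 = 100.0                             # hypothetical baseline beta^0_{j,r}

mean_signal = beta0 + sum(b * double_gamma_hrf(t - onset)
                          for b, onset in zip(betas, onsets))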
The error term is defined by
\epsilon(t) \sim N(0, \sigma_{j,r})
where \sigma_{j,r} is set to
\sigma^2_{j,r} \sim \text{InvGamma}(0.001, 0.001)
and the intercept term is defined by
\beta^0_{j,r} \sim N(\mu^0_r, 1000)
where
\mu^0_r \sim N(0, 1000)
The neural likelihood can also be written in a distributional format:
N \sim N(\beta^0_{j,r} + X\beta, \sigma_{j,r})
where X is the design matrix of conditions and onsets and \beta is normally distributed:
\beta_{i,j,k,r} \sim N(\delta_{j,k,r}, \sigma^\beta_r)
with mean \delta_{j,k,r} and standard deviation \sigma^\beta_r, which varies across ROIs with the vague prior:
(\sigma^\beta_r)^2 \sim \text{InvGamma}(0.001, 0.001)
The mean for \beta is
\delta_{j,k,1:R} \sim N_{24}(\mu_{k,1:R}, \Sigma)
Here, N_p(a, b) denotes a p-dimensional multivariate normal distribution with mean vector a and variance–covariance matrix b. \mu_{k,1:R} refers to the kth row of \mu, where
\mu_{k,1:R} = (\mu_{k,1}, \mu_{k,2}, \ldots, \mu_{k,24})^T
This notation is also used for \delta. The hyperprior for \mu_{k,1:R} is again a 24-dimensional multivariate normal distribution
\mu_{k,1:R} \sim N_{24}(\phi_0, s_0)
where \phi_0 is a 24-dimensional vector of zeros and s_0 is a (24 × 24) identity matrix. The variance of \delta is governed by \Sigma, a 24 × 24 variance–covariance matrix that captures patterns of pairwise coactivation between the 24 ROIs. In M1, we assumed these patterns to be similar across all participants. The prior for \Sigma follows an inverse Wishart distribution
\Sigma \sim W^{-1}(I_0, n_0)
where I_0 is a (24 × 24) identity matrix and n_0 = 24 is the degrees of freedom. An inverse Wishart prior was chosen because it is a conjugate prior for \Sigma and is thus computationally convenient.
For M2, all of the above priors are identical with the exception of Σ and δ. In M2, we assume that individual differences exist in the patterns of coactivation, and thus, Σ is estimated for every participant j, so
\Sigma_j \sim W^{-1}(I_0, n_0)
where I_0 and n_0 are defined the same way as in M1. Additionally, M2 uses \Sigma_j instead of \Sigma in the prior for \delta:
\delta_{j,k,1:R} \sim N_{24}(\mu_{k,1:R}, \Sigma_j)
where μ is defined the same way as in M1.
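To illustrate how M1 and M2 differ only in whether the coactivation matrix is shared or participant specific, the sketch below draws \delta from the priors above using SciPy; the number of conditions and the random draws are placeholders, and the dimensions mirror our design (24 ROIs, 11 participants).

import numpy as np
from scipy.stats import invwishart, multivariate_normal

n_roi, n_sub, n_cond = 24, 11, 2
I0, n0 = np.eye(n_roi), n_roi             # identity scale matrix, degrees of freedom
rng = np.random.default_rng(3)

# Hyperprior: mu_{k,1:R} ~ N_24(phi_0, s_0) for each condition k.
mu = multivariate_normal.rvs(mean=np.zeros(n_roi), cov=np.eye(n_roi),
                             size=n_cond, random_state=rng)

# M1: one coactivation matrix Sigma shared by all participants.
sigma_m1 = invwishart.rvs(df=n0, scale=I0, random_state=rng)
delta_m1 = np.array([[multivariate_normal.rvs(mean=mu[k], cov=sigma_m1, random_state=rng)
                      for k in range(n_cond)] for _ in range(n_sub)])

# M2: a separate coactivation matrix Sigma_j for every participant j.
sigma_m2 = invwishart.rvs(df=n0, scale=I0, size=n_sub, random_state=rng)
delta_m2 = np.array([[multivariate_normal.rvs(mean=mu[k], cov=sigma_m2[j], random_state=rng)
                      for k in range(n_cond)] for j in range(n_sub)])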
The C-GLM is a simplified case of the above models with the following priors:
\sigma^2 \sim \text{InvGamma}(0.001, 0.001)
\beta^0 \sim N(0, 0.001)
\beta_k \sim N(0, 0.001)
where k indexes the four conditions.

Reprint requests should be sent to Brandon M. Turner, Department of Psychology, The Ohio State University, 1827 Neil Avenue, Columbus, OH 43210-1132, or via e-mail: turner.826@gmail.com.

REFERENCES

Ahn, W.-Y., Krawitz, A., Kim, W., Busmeyer, J. R., & Brown, J. W. (2011). A model-based fMRI analysis with hierarchical Bayesian parameter estimation. Journal of Neuroscience, Psychology, and Economics, 4, 95–110.
Aron, A. R., & Poldrack, R. A. (2006). Cortical and subcortical contributions to stop signal response inhibition: Role of the subthalamic nucleus. Journal of Neuroscience, 26, 2424–2433.
Aron, A. R., Robbins, T. W., & Poldrack, R. A. (2014). Inhibition and the right inferior frontal cortex: One decade on. Trends in Cognitive Sciences, 18, 177–185.
Bannon, S., Gonsalvez, C. J., Croft, R. J., & Boyce, P. M. (2002). Response inhibition deficits in obsessive–compulsive disorder. Psychiatry Research, 110, 165–174.
Boucher, L., Palmeri, T. J., Logan, G. D., & Schall, J. D. (2007). Inhibitory control in mind and brain: An interactive race model of countermanding saccades. Psychological Review, 114, 376–397.
Boynton, G. M., Engel, S. A., Glover, G. H., & Heeger, D. J. (1996). Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience, 16, 4207–4221.
Dunovan, K., Lynch, B., Molesworth, T., & Verstynen, T. (2015). Competing basal ganglia pathways determine the difference between stopping and deciding not to go. eLife, 4, e08723.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., et al. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18, 1664–1671.
Gaut, G., Turner, B., Lu, Z.-L., Li, X., Cunningham, W. A., & Steyvers, M. (2019). Predicting task and subject differences with functional connectivity and blood-oxygen-level-dependent variability. Brain Connectivity, 9, 451–463.
Glover, G. H. (1999). Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage, 9, 416–429.
Gordon, E. M., Laumann, T. O., Adeyemo, B., Gilmore, A. W., Nelson, S. M., Dosenbach, N. U. F., et al. (2017). Individual-specific features of brain systems identified with resting state functional correlations. Neuroimage, 146, 918–939.
Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., et al. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98, 439–452.
Hampshire, A., & Sharp, D. J. (2015). Contrasting network and modular perspectives on inhibitory control. Trends in Cognitive Sciences, 19, 445–452.
Hughes, M. E., Fulham, W. R., Johnston, P. J., & Michie, P. T. (2012). Stop signal response inhibition in schizophrenia: Behavioural, event-related potential and functional neuroimaging data. Biological Psychology, 89, 220–231.
Kruschke, J. K., & Liddell, T. M. (2018). The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25, 178–206.
Lee, M. D. (2008). Three case studies in the Bayesian analysis of cognitive models. Psychonomic Bulletin & Review, 15, 1–15.
Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. New York: Cambridge University Press.
Levy, B. J., & Wagner, A. D. (2011). Cognitive control and right ventrolateral prefrontal cortex: Reflexive reorienting, motor inhibition, and action updating. Annals of the New York Academy of Sciences, 1224, 40–62.
Li, X., Liang, Z., Kleiner, M., & Lu, Z.-L. (2010). RTbox: A device for highly accurate response time measurements. Behavior Research Methods, 42, 212–225.
Logan, G. D., & Cowan, W. B. (1984). On the ability to inhibit thought and action: A theory of an act of control. Psychological Review, 91, 295–327.
Logan, G. D., Van Zandt, T., Verbruggen, F., & Wagenmakers, E.-J. (2014). On the ability to inhibit thought and action: General and special theories of an act of control. Psychological Review, 121, 66–95.
Logan, G. D., Yamaguchi, M., Schall, J. D., & Palmeri, T. J. (2015). Inhibitory control in mind and brain 2.0: Blocked-input models of saccadic countermanding. Psychological Review, 122, 115–147.
Matzke, D., Dolan, C. V., Logan, G. D., Brown, S. D., & Wagenmakers, E.-J. (2013). Bayesian parametric estimation of stop signal reaction time distributions. Journal of Experimental Psychology: General, 142, 1047–1073.
Miyake, A., & Friedman, N. P. (2012). The nature and organization of individual differences in executive functions: Four general conclusions. Current Directions in Psychological Science, 21, 8–14.
Molloy, M. F., Bahg, G., Li, X., Steyvers, M., Lu, Z.-L., & Turner, B. M. (2018). Hierarchical Bayesian analyses for modeling BOLD time series data. Computational Brain & Behavior, 1, 184–213.
Monterosso, J. R., Aron, A. R., Cordova, X., Xu, J., & London, E. D. (2005). Deficits in response inhibition associated with chronic methamphetamine abuse. Drug and Alcohol Dependence, 79, 273–277.
Nigg, J. T. (2001). Is ADHD a disinhibitory disorder? Psychological Bulletin, 127, 571–598.
Nigg, J. T., Wong, M. M., Martel, M. M., Jester, J. M., Puttler, L. I., Glass, J. M., et al. (2006). Poor response inhibition as a predictor of problem drinking and illicit drug use in adolescents at risk for alcoholism and other substance use disorders. Journal of the American Academy of Child and Adolescent Psychiatry, 45, 468–475.
Noble, S., Spann, M. N., Tokoglu, F., Shen, X., Constable, R. T., & Scheinost, D. (2017). Influences on the test–retest reliability of functional connectivity MRI and its relationship with behavioral utility. Cerebral Cortex, 27, 5415–5429.
Pannunzi, M., Hindriks, R., Bettinardi, R. G., Wenger, E., Lisofsky, N., Martensson, J., et al. (2017). Resting-state fMRI correlations: From link-wise unreliability to whole brain stability. Neuroimage, 157, 250–262.
Penadés, R., Catalán, R., Rubia, K., Andrés, S., Salamero, M., & Gastó, C. (2007). Impaired response inhibition in obsessive compulsive disorder. European Psychiatry, 22, 404–410.
Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In K. Hornik, F. Leisch, & A. Zeileis (Eds.), Proceedings of the 3rd International Workshop on Distributed Statistical Computing (pp. 20–22). Vienna: Technische Universitaet Wien.
Rouder, J. N., & Lu, J. (2005). An introduction to Bayesian hierarchical models with an application in the theory of signal detection. Psychonomic Bulletin & Review, 12, 573–604.
Rubia, K., Russell, T., Overmeyer, S., Brammer, M. J., Bullmore, E. T., Sharma, T., et al. (2001). Mapping motor inhibition: Conjunctive brain activations across different versions of go/no-go and stop tasks. Neuroimage, 13, 250–261.
Schachar, R., & Logan, G. D. (1990). Impulsivity and inhibitory control in normal development and childhood psychopathology. Developmental Psychology, 26, 710–720.
Shiffrin, R. M., Lee, M. D., Kim, W., & Wagenmakers, E.-J. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32, 1248–1284.
Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E. J., Johansen-Berg, H., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage, 23(Suppl. 1), S208–S219.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 64, 583–639.
Swick, D., Ashley, V., & Turken, U. (2008). Left inferior frontal gyrus is critical for response inhibition. BMC Neuroscience, 9, 102.
Swick, D., Ashley, V., & Turken, U. (2011). Are the neural correlates of stopping and not going identical? Quantitative meta-analysis of two response inhibition tasks. Neuroimage, 56, 1655–1665.
Turner, B. M., Forstmann, B. U., Love, B. C., Palmeri, T. J., & Van Maanen, L. (2017). Approaches to analysis in model-based cognitive neuroscience. Journal of Mathematical Psychology, 76, 65–79.
Turner, B. M., Forstmann, B. U., & Steyvers, M. (2018). Computational approaches to cognition and perception. In A. H. Criss (Ed.), Simultaneous modeling of neural and behavioral data. Switzerland: Springer.
Turner, B. M., Forstmann, B. U., Wagenmakers, E.-J., Brown, S. D., Sederberg, P. B., & Steyvers, M. (2013). A Bayesian framework for simultaneously modeling neural and behavioral data. Neuroimage, 72, 193–206.
Turner, B. M., Rodriguez, C. A., Norcia, T. M., McClure, S. M., & Steyvers, M. (2016). Why more is better: Simultaneous modeling of EEG, fMRI, and behavioral data. Neuroimage, 128, 96–115.
Turner, B. M., Sederberg, P. B., Brown, S. D., & Steyvers, M. (2013). A method for efficiently sampling from distributions with correlated dimensions. Psychological Methods, 18, 368–384.
Turner, B. M., Van Maanen, L., & Forstmann, B. U. (2015). Informing cognitive abstractions through neuroimaging: The neural drift diffusion model. Psychological Review, 122, 312–336.
Turner, B. M., Wang, T., & Merkle, E. C. (2017). Factor analysis linking functions for simultaneously modeling neural and behavioral data. Neuroimage, 153, 28–48.
Woolrich, M. W., Ripley, B. D., Brady, M., & Smith, S. M. (2001). Temporal autocorrelation in univariate linear modeling of fMRI data. Neuroimage, 14, 1370–1386.
Zhang, L., Guindani, M., & Vannucci, M. (2015). Bayesian models for functional magnetic resonance imaging data analysis. WIREs Computational Statistics, 7, 21–41.