The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behavior. Here, we used magnetoencephalography to assess the temporal dynamics of information processing and linked neural responses with goal-directed behavior by analyzing how they changed on behavioral error. Participants performed a difficult stimulus–response task using two stimulus–response mapping rules. We used time-resolved multivariate pattern analysis to characterize the progression of information coding from perceptual information about the stimulus, through cue and rule coding, to the motor response. Response-aligned analyses revealed a ramping up of perceptual information before a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated toward a representation of the “incorrect” stimulus. This suggests that the patterns recorded at later time points reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behavior.

A primary function of the human brain is to flexibly respond to relevant perceptual information in accordance with current context and task goals. The sound of a phone ringing, for example, should prompt a different response if it is your phone than if it belongs to someone else. This set of complex processes, termed cognitive control, involves interpreting incoming information given the current context to determine an appropriate action (Posner & Presti, 1987; Posner & Snyder, 1975). Cognitive control involves dynamic information exchange at different levels of processing, from perceptual information processing to decision-making and response selection. Understanding how these different processes unfold could provide a great deal of insight into how the brain achieves goal-directed behavior.

A large body of neuroimaging research implicates frontoparietal brain regions in goal-directed behavior, which form a distributed network responsible for cognitive control (Duncan, 2010). This multiple-demand (MD) network (Duncan, 2010), elsewhere referred to as the cognitive control network (Cole & Schneider, 2007), frontoparietal control system (Vincent, Kahn, Snyder, Raichle, & Buckner, 2008), or task-positive network (Fox et al., 2005), appears to flexibly represent different types of information depending on task context. For example, activity in these regions encodes task rules (e.g., Crittenden, Mitchell, & Duncan, 2016; Woolgar, Afshar, Williams & Rich, 2015; Woolgar, Hampshire, Thompson, & Duncan, 2011) and auditory, visual, and tactile stimulus features (Long & Kuhl, 2018; Bracci, Daniels, & Op de Beeck, 2017; Jackson, Rich, Williams, & Woolgar, 2017; Woolgar & Zopf, 2017; for a review, see Woolgar, Jackson, & Duncan, 2016). These regions particularly encode task elements that are demanding (Woolgar, Afshar, et al., 2015; Woolgar et al., 2011) or at the focus of attention (Jackson & Woolgar, 2018; Jackson et al., 2017; Woolgar, Williams, et al., 2015). Activity in some of these regions has also been causally implicated in selectively facilitating coding of task-relevant information (Jackson, Feredoes, Rich, Lindner, & Woolgar, 2021). This lends support to the possibility that flexible responses within the MD regions play a causal role in goal-directed behavior (e.g., Duncan, Assem, & Shashidhara, 2020; Woolgar, Dermody, Afshar, Williams, & Rich, 2019; Woolgar, Duncan, Manes, & Fedorenko, 2018; Woolgar et al., 2010).

A characteristic feature of cognitive control is that it dynamically changes in response to task-relevant information. Research using fMRI has yielded insight into the brain networks involved in goal-directed behavior, but the slow nature of the blood-oxygen-level-dependent (BOLD) response has limited the exploration of the corresponding dynamics. Time-resolved neuroimaging methods such as magnetoencephalography (MEG) and EEG have been more fruitful in understanding how cognitive control unfolds over time (for a review, see Gratton, Cooper, Fabiani, Carter, & Karayanidis, 2017). For example, conflict-related processing involving incongruent task features elicits a larger evoked response than a congruent condition approximately 200–400 msec after stimulus onset, which has been linked to activity within the anterior cingulate (Folstein & Van Petten, 2008), and task-switching involves a larger parietal positivity around 300 msec relative to task repeats (Karayanidis et al., 2010). The newer method of multivariate pattern analysis (MVPA) in conjunction with MEG/EEG allows further insight into processing dynamics underlying cognitive control. MVPA uses pattern classification approaches applied to neuroimaging data to show what information is being coded within the brain (Haxby, 2012; Haxby et al., 2001). Time-resolved MVPA has been used to characterize how information coding changes over time (Hebart & Baker, 2018; Carlson, Hogendoorn, Kanai, Mesik, & Turret, 2011). For example, Hebart, Bankson, Harel, Baker, and Cichy (2018) had participants perform different tasks on visual object stimuli while measuring MEG and showed that task-relevant object features were enhanced at late stages of processing, more than 500 msec after the stimulus was presented. Other work has shown clear progression of task-relevant information during complex tasks, with different dynamics for features such as stimulus, task, and response (Kikumoto & Mayr, 2020; Hubbard, Kikumoto, & Mayr, 2019; Wen, Duncan, & Mitchell, 2019). This line of research has also highlighted the importance of combining relevant task information for successful behavior (Kikumoto & Mayr, 2020). These MVPA studies provided great insight into the neural dynamics of goal-directed behavior, but all used designs where the task cue was presented before the target, allowing participants to prepare for the task in advance. In addition, these studies focused on stimulus-aligned neural responses. It seems likely that tracking the dynamic coding of relevant task features relative to both stimulus onset and response, using a task that induces more flexible behavior, might elucidate stronger links between dynamic neural responses and goal-directed behavior.

Decades of neuroimaging research have focused on the neural correlates of behavior, but assessing whether particular patterns of brain activity are necessary for behavior has presented a challenge. In MVPA, a classifier algorithm is trained to distinguish between conditions using patterns of neural data from multiple trials of each condition. If a classifier can predict the conditions of new neural data better than chance, this demonstrates that the patterns of activity in the data must contain, or represent, information about the different conditions. However, the conclusion that decodable patterns represent information has been questioned on theoretical grounds (de Wit, Alexander, Ekroll, & Wagemans, 2016): Just because information is decodable using machine learning does not necessarily mean it is used by the brain to generate behavior. This awareness has led researchers to push for more explicit links between MVPA patterns and behavior, for example, comparing details of patterns with RTs or accumulation rates in models of behavior (Grootswagers, Cichy, & Carlson, 2018; Ritchie & Carlson, 2016; Ritchie, Tovar, & Carlson, 2015).

Exploring how information coding changes when participants make errors is another way to establish how behaviorally meaningful patterns of activation are. For example, Williams, Dang, and Kanwisher (2007) demonstrated that multivariate fMRI patterns in lateral occipital cortex, but not those in early visual regions, reduced to chance when participants made errors on a shape discrimination task, indicating that patterns in early visual cortex were not directly read out in behavior. In another study, participants performed a scene classification task (Walther, Caddigan, Fei-Fei, & Beck, 2009), and classifier prediction error patterns correlated with the types of errors in behavior within high-level object and scene-specific brain regions, but not within early visual cortex. Using MEG, we have recently shown that this logic can even be used to predict behavioral errors before they occur (Karimi-Rouzbahani, Woolgar, & Rich, 2021).

A stronger requirement for a behaviorally meaningful pattern of activity is that it should not only change on error but also change to something that predicts the particular error to be made (Woolgar et al., 2019). We tested this in fMRI and found that patterns of activation in frontoparietal cortex indeed reversed on error, such that patterns of activation on error trials represented information that was not presented to the participant, in a manner that was diagnostic of the particular behavioral error they made (Woolgar et al., 2019). In that study, participants performed a difficult response-mapping task. When participants made a rule error, MD patterns of activity reflected the incorrect rule, and when participants made other errors, MD patterns of activity reflected the incorrect stimulus (Woolgar et al., 2019). Within visual cortex, in contrast, there was no evidence of relevant information (correct or incorrect) during errors. Thus, some multivariate patterns appear to be more directly relatable to behavior than others, and there is a tight link between frontoparietal activity patterns and behavioral outcome.

In the current study, we used MEG and MVPA to examine the dynamics of this effect, asking whether information coding through the course of a trial was equally associated with behavior. We aimed to (1) characterize the neural dynamics of multiple types of task-relevant information and (2) examine their relationship to behavior over time. Participants performed a difficult response-mapping task that required different responses to a target stimulus depending on the current rule. To determine what aspects of this representation could be directly linked to behavior, we examined information coding on incorrect trials: when the wrong rule was applied or when there were errors in perception. We found a clear progression in onset of information coding, such that stimulus features were evident shortly after stimulus onset, followed by abstract rule coding and then the response, with the information about each task feature accumulating up to the time of response. When participants made stimulus errors, stimulus information was initially coded veridically but later accumulated in the opposite direction, toward a representation of the incorrect stimulus. By contrast, stimulus information was encoded correctly when participants made rule errors. The data reveal the dynamics with which information coding in the brain can be tightly linked to participant behavior.

All code and materials are available on the Open Science Framework at https://osf.io/2nwhr/.

Participants

Participants were 22 healthy adults (14 women, eight men; age range: 18–38 years) with normal or corrected-to-normal vision recruited from Macquarie University. This study was approved by the Macquarie University ethics committee, and informed consent was obtained from all participants. Participants took part in two sessions: a 1-hr behavioral session and a 2-hr MEG session, on separate days. They were compensated $15 for the behavioral session and $40 for the MEG session. For two participants, initial photodiode inaccuracies meant that the timing for two and five trials, respectively, was not adequately marked, so these trials were excluded from analyses. Data from an additional two participants (two men) were collected and excluded: Both participants had very few stimulus position errors during the MEG session (<10), and for one of the participants, there was a recording error such that MEG data were recorded for only 680 of 800 trials.

Design and Procedure

Participants learned to apply two difficult response-mapping rules regarding the position of a target stimulus. The target was a gray square approximately 2° × 2° of visual angle that appeared in one of four positions. All positions were equidistant from fixation at an eccentricity of 4° of visual angle. Within the left and right visual fields, the two possible target locations overlapped by 60% horizontally and 65% vertically to create a high degree of position uncertainty (Figure 1A). Participants had to respond to the position of the stimulus using two possible response-mapping rules (Figure 1A). The two rules each comprised four unique position transformations and were mirror images of each other. The color of a central fixation square acted as a cue for the rule. There were two cues per rule, to dissociate neural responses to cues from the neural responses to rules (e.g., blue and yellow = Rule 1, pink and green = Rule 2; counterbalanced across participants). Participants responded by pressing one of four response keys with their right hand. The stimuli were presented using PsychToolbox in MATLAB (The MathWorks).

Figure 1.

Experimental design. (A) Response mapping rules. Participants had to indicate the position of a target stimulus that appeared in one of four possible locations. There were two cues per rule, designated by blue/yellow and green/pink squares at fixation. The button press associated with each position is indicated by the specific rule. (B) Trial timeline. After a fixation screen, the target stimulus and colored fixation cue appeared simultaneously, and participants had to apply the correct response-mapping rule using a button press. (C) Behavioral response types. In this example, the stimulus was in Position 1 and the rule was Rule 1 (blue cue), so the correct response was Button 4. A rule error occurred if the rule was mistaken for Rule 2, leading to a response of Button 3. A stimulus position error occurred if the position was mistaken to be Position 2, leading to a response of Button 1. (D) Depiction of MEG data collation: data aligned to stimulus onset (left) or RT (right). The temporal dynamics of stimulus-related and decision-related neural responses vary across trials, with different processes aligned with onset and response. Aligning the MEG data to stimulus onset versus response highlights different neural stages on the aggregate of all trials, although the content of each trial is identical.

The stimulus–response rule mappings were designed to distinguish correct responses and specific types of errors (Figure 1C). An error was considered a “rule error” when the button press response reflected the combination of the correct stimulus position with the wrong rule. In contrast, a “stimulus error” was defined as a button press response consistent with the combination of the adjacent perceptually confusable position with the correct rule. For example, under Rule 1, if the stimulus appeared on the far left, the correct response would be Button 4, a rule error (i.e., using Rule 2 applied to the correct position) would lead to a Button 3 response, a stimulus error (i.e., using Rule 1 correctly but confusing the stimulus with the other left position) would lead to a Button 1 response, and confusing both the rule and the stimulus would lead to a Button 2 response.
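
To make the error taxonomy concrete, the sketch below (in Python) classifies a button press as correct, a rule error, a stimulus error, or another error, following the definitions above. The full position-to-button mappings are not spelled out in the text, so the dictionaries are illustrative: only the Rule 1, Position 1 entries come directly from the Figure 1C example, and the remaining entries assume the two rules are mirror images of each other.

    # Illustrative classification of a behavioral response. RULE_MAP is a placeholder
    # mapping (rule -> position -> button): the Rule 1 / Position 1 entries match the
    # example in Figure 1C; the remaining entries assume Rule 2 mirrors Rule 1.
    RULE_MAP = {
        1: {1: 4, 2: 1, 3: 2, 4: 3},   # Rule 1 (Position 1 -> Button 4, as in the text)
        2: {1: 3, 2: 2, 3: 1, 4: 4},   # Rule 2, assumed mirror image of Rule 1
    }
    # Each position is perceptually confusable with the other position in its hemifield.
    CONFUSABLE = {1: 2, 2: 1, 3: 4, 4: 3}

    def classify_response(rule, position, button):
        """Label a trial given the cued rule, the presented position, and the button pressed."""
        other_rule = 2 if rule == 1 else 1
        if button == RULE_MAP[rule][position]:
            return "correct"
        if button == RULE_MAP[other_rule][position]:
            return "rule error"          # correct position combined with the wrong rule
        if button == RULE_MAP[rule][CONFUSABLE[position]]:
            return "stimulus error"      # confusable position combined with the correct rule
        return "other error"             # both confused, or an unrelated button

    print(classify_response(rule=1, position=1, button=3))   # -> "rule error" (Figure 1C example)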

Training Session

Participants learned each rule in a separate session outside the MEG. They were trained to perform the task using increasingly difficult blocks of trials (see below). Feedback was given on every trial. For every incorrect response, participants were shown the correct response.

Initially, stimuli were presented in nonoverlapping positions (i.e., further apart than the final paradigm) so there was no position uncertainty. Stimuli were presented on the screen until a response was made (i.e., not time limited). In the first block, participants learned the first rule (Rule 1 or Rule 2, counterbalanced across participants). Each stimulus position was shown with its associated response four times (16 trials), and participants had to press the appropriate button for each stimulus. In each trial, cue color was chosen randomly from the two possible cues for that rule. The second block followed the same protocol, but for the other rule (Rule 1/Rule 2). In the third block, participants had to perform the task by implementing both rules, but still with well-separated stimuli. In the fourth block, the stimuli were presented in their final, overlapping experimental positions. Finally, in the fifth block, the stimuli were presented with the same procedure as the final experimental paradigm: The stimuli were overlapping and were presented for only 500 msec. Blocks 3–5 contained 32 trials each, consisting of two repeats of each cue and stimulus position, randomly ordered. In all blocks, participants had to perform at 60% accuracy or above to progress to the next block type. Blocks were repeated if participants did not reach this threshold. On average, participants completed 8.61 training blocks (SD = 2.46). Block 3 was most often repeated (M = 3.09 repeats).

Experimental Session

In the second session, participants performed the task while their neural activity was recorded using MEG. At the start of each block, participants were shown a graphical depiction of the rules for at least 2 sec. When they were ready, they pressed a button to begin the block. In each trial, participants were shown a gray fixation marker for 500 msec, and then, the square target stimulus and colored rule cue were presented for 500 msec (Figure 1B). The participants were instructed to respond as quickly as possible without sacrificing accuracy. After they responded, there was an intertrial interval of 1000 msec before the next trial started. There were 10 blocks of 80 trials, each containing five trials per stimulus and cue color combination. Within each block, the order of the trials was randomized. Unlike in training, feedback was not given on every trial; instead, participants were given feedback about their mean accuracy and RTs at the end of each block.

MEG Acquisition

MEG data were collected at Macquarie University in the KIT-MQ MEG facility with a whole-head supine Yokogawa system containing 160 gradiometers (Kado et al., 1999). The participant's head was fitted with a cap containing five marker coils. The head shape and the positions of the marker coils were recorded using a Polhemus digitization system. Once the participant was inside the MEG, the positions of the marker coils were measured to ensure that the MEG sensors had good coverage of the participant's head. Marker position measurements were repeated halfway through the experiment and at the end of the session. Raw MEG data were collected at 1000 Hz with online 0.03-Hz high-pass and 200-Hz low-pass filters.

Stimuli were projected onto the ceiling of the magnetically shielded room. Stimulus timing was measured using a photodiode placed on the projection mirror and marked in an additional channel in the MEG recording. Participants indicated their response using a four-button fiber optic response pad (Current Designs). Response timing was marked in the MEG recording using a parallel port trigger.

MEG Data Analysis

MEG data were analyzed using multivariate decoding, which is very sensitive to reliable effects in the data and resistant to artifacts such as eye blinks that are not consistent across time and condition (Carlson, Grootswagers, & Robinson, 2020; Grootswagers, Wardle, & Carlson, 2017). Because of the robustness of decoding to such artifacts, data were minimally preprocessed using EEGLAB (Delorme & Makeig, 2004). Data were filtered using a Hamming window finite impulse response filter (default EEGLAB filter pop_eegfiltnew) with high pass of 0.1 Hz and low pass of 100 Hz and then downsampled to 200 Hz before epoching. For separate analyses, trials were epoched relative to stimulus onset, marked by the photodiode, and to the button press response, marked by the parallel port trigger.
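
The preprocessing above was carried out in EEGLAB; purely as an illustration, a roughly equivalent pipeline in MNE-Python might look like the sketch below. The file name, trigger-channel names, and epoch windows are assumptions, not taken from the original analysis.

    # Minimal preprocessing sketch in MNE-Python (the original pipeline used EEGLAB).
    # File name, stim-channel names, and epoch windows are placeholders.
    import mne

    raw = mne.io.read_raw_kit("participant01.con", preload=True)   # 160-sensor KIT/Yokogawa system
    raw.filter(l_freq=0.1, h_freq=100.0)    # band-pass 0.1-100 Hz
    raw.resample(200)                       # downsample from 1000 Hz to 200 Hz before epoching

    # Stimulus onsets were marked by a photodiode and responses by a parallel-port
    # trigger; here both are treated as generic trigger channels.
    stim_events = mne.find_events(raw, stim_channel="STI 014")
    resp_events = mne.find_events(raw, stim_channel="STI 015")
    epochs_stim = mne.Epochs(raw, stim_events, tmin=-0.1, tmax=1.5,
                             baseline=None, preload=True)   # stimulus-aligned epochs
    epochs_resp = mne.Epochs(raw, resp_events, tmin=-1.2, tmax=0.7,
                             baseline=None, preload=True)   # response-aligned epochs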

Data were analyzed using time-resolved classification methods (e.g., Carlson et al., 2020; Robinson, Grootswagers, & Carlson, 2019; Grootswagers et al., 2017) and implemented using the CoSMoMVPA toolbox (Oosterhof, Connolly, & Haxby, 2016). For each time point, data were pooled across all 160 MEG sensors, and we tested the ability of a linear discriminant analysis classifier to discriminate between the patterns of neural responses associated with the different conditions. Trials were divided according to their associated behavioral responses: correct trials, rule errors, and stimulus position errors (Figure 1C). The classifiers were always trained on correct trials. To ensure that there were equal numbers of trials for each condition, correct trials were subsampled to be equal for each position and rule combination for each block per participant. To ensure adequate trial numbers for each of the analyses, blocks with fewer than two trials per Rule × Position combination were excluded; this amounted to nine excluded blocks in total across eight participants, with the remaining 14 participants having all blocks included. The total number of selected trials per participant was M = 437.09 (minimum = 280, maximum = 600).
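
As a sketch of the time-resolved classification scheme (the original analyses used the linear discriminant analysis classifier in CoSMoMVPA, in MATLAB), the Python function below decodes a binary label at each time point from the pattern across all sensors, with leave-one-block-out cross-validation. The array layout (trials × sensors × time points) and variable names are assumptions.

    # Time-resolved decoding sketch with scikit-learn (not the authors' CoSMoMVPA code).
    # data:   (n_trials, n_sensors, n_times) array of epoched MEG data
    # labels: (n_trials,) binary condition labels (e.g., inner vs. outer position)
    # blocks: (n_trials,) block index, used for leave-one-block-out cross-validation
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    def time_resolved_decoding(data, labels, blocks):
        n_trials, n_sensors, n_times = data.shape
        accuracy = np.zeros(n_times)
        for t in range(n_times):
            X = data[:, :, t]                             # pool all sensors at this time point
            scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels,
                                     groups=blocks, cv=LeaveOneGroupOut())
            accuracy[t] = scores.mean()                   # chance = 50% for balanced binary classes
        return accuracy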

Temporal Dynamics of Stimulus, Cue, Rule, and Response Coding

We performed pattern classification analyses to determine the time points at which stimulus position, cue, rule, and response representations emerge in the brain. First, we decoded stimulus position by comparing neural representations of the inner two stimulus positions (Positions 2 and 3) with those of the outer two stimulus positions (Positions 1 and 4). Separating position in this manner meant that motor responses and rule were balanced across the two position conditions and could not drive the classification results, ensuring that we were detecting information related to stimulus position.
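
As a small illustration of how the class labels for this analysis can be derived (variable names and example values are placeholders), inner versus outer labels follow directly from the per-trial stimulus position:

    # Build binary labels for the position analysis: 1 = inner (Positions 2, 3),
    # 0 = outer (Positions 1, 4). By design, rule and button press are balanced
    # within each class, so they cannot drive this classification.
    import numpy as np

    positions = np.array([1, 2, 3, 4, 2, 1])              # per-trial positions (example values)
    inner_vs_outer = np.isin(positions, [2, 3]).astype(int)
    # accuracy = time_resolved_decoding(data, inner_vs_outer, blocks)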

Next, we assessed the time course of rule coding by training a classifier to distinguish between Rule 1 and Rule 2. Because there were two color cues per rule, this analysis targeted rule coding over and above the physical properties of the cues (Rule 1 [blue and yellow cues] vs. Rule 2 [pink and green cues]).

Cue information could, however, also be decoded separately from rule information. To assess how cue decoding differed from rule decoding, we decoded between the two cues for each rule (i.e., blue vs. yellow color cue for Rule 1 and pink vs. green for Rule 2). Cue coding was quantified as the mean of the two pairwise analyses.

As a final analysis, we decoded motor response by comparing the inner two button presses with the outer two button presses. This comparison ensured that stimulus position and rule were balanced within each class, so that the classifier would be driven by the motor response alone.

For each decoding analysis, classification analyses were performed using a leave-one-block-out cross-validation approach. This resulted in 10-fold cross-validation for participants with no excluded blocks (n = 14). The remaining participants used ninefold (n = 7) and eightfold cross-validation (n = 1). For all decoding analyses, chance performance was 50%.
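
For clarity, the fold structure of this leave-one-block-out scheme is sketched below (a plain illustration, not the original implementation): each fold trains on all blocks but one and tests on the held-out block, yielding 10 folds when all 10 blocks are retained.

    # Explicit leave-one-block-out folds: hold out one block per fold for testing.
    import numpy as np

    def leave_one_block_out_folds(blocks):
        for held_out in np.unique(blocks):
            train_idx = np.where(blocks != held_out)[0]
            test_idx = np.where(blocks == held_out)[0]
            yield train_idx, test_idx

    # Example with 2 trials in each of 3 blocks -> 3 folds
    blocks = np.array([1, 1, 2, 2, 3, 3])
    for train_idx, test_idx in leave_one_block_out_folds(blocks):
        print("train:", train_idx, "test:", test_idx)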

Error Representations

The next set of analyses focused on decoding neural activity when participants made errors, to explore the relationship between patterns of activity and behavior. To investigate the representation of rule and stimulus errors, we trained the classifier on the correct trials and tested on the error trials. This allowed us to decode what information was present in the patterns of response across sensors when participants made different kinds of mistakes. Specifically, the analyses assessed whether the error patterns resembled the “correct” stimulus and rule patterns (above-chance decoding) or the neural patterns associated with the “incorrect” stimulus and rule (below-chance decoding). Note that, in this approach, below-chance classification is meaningful: It indicates the representation of the pattern that is instantiated when the other (incorrect in this case) rule or stimulus position is encoded.

We performed error decoding for stimulus position and rule information. In a comparable procedure to the correct trial analysis, we used leave-one-block-out cross-decoding analyses. In each fold, the classifier was trained on “correct” trials from all but one block and tested on all “error” trials across the whole session. This ensured the same training data (and thus decoding models) were used as in the correct trial analyses but allowed well-characterized results for the relatively small number of error trials. Participants made an average of 5.71% rule errors and 9.82% stimulus position errors (Figure 2A). Table 1 shows the mean number of trials used for stimulus and rule decoding per trial response type (correct, stimulus error, rule error).
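
A sketch of this cross-decoding scheme is given below (array names and shapes are assumptions, as before): the classifier is trained on correct trials from all but one block and tested on every error trial of a given type, and accuracy systematically below 50% indicates that the error-trial patterns resemble the alternative (incorrect) stimulus or rule rather than the one presented.

    # Train on correct trials (leave-one-block-out), test on all error trials.
    # labels hold the condition actually presented, so below-chance accuracy means
    # the error-trial patterns look like the "incorrect" stimulus or rule.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def error_cross_decoding(data, labels, blocks, is_correct, is_error):
        """data: (n_trials, n_sensors, n_times); is_correct/is_error: boolean trial masks."""
        n_times = data.shape[2]
        accuracy = np.zeros(n_times)
        for t in range(n_times):
            fold_scores = []
            for held_out in np.unique(blocks):
                train = is_correct & (blocks != held_out)   # correct trials, one block left out
                clf = LinearDiscriminantAnalysis().fit(data[train, :, t], labels[train])
                fold_scores.append(clf.score(data[is_error, :, t], labels[is_error]))
            accuracy[t] = np.mean(fold_scores)              # same test trials in every fold
        return accuracy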

Figure 2.

Behavioral results in the MEG session (n = 22). (A) Proportion of trials and (B) median RT for correct, rule errors, stimulus errors, and other errors. Gray lines denote individual participants, and black markers denote group means. Error bars are 95% confidence intervals across participants.

Table 1.

Mean Number of Trials Used for Stimulus and Rule Decoding Analyses per Participant

                     Correct          Stimulus Error   Rule Error
    Inner positions  218.55 (46.00)   30.00 (29.35)    17.55 (9.14)
    Outer positions  218.55 (46.00)   48.50 (26.05)    28.14 (17.94)
    Rule 1           218.55 (46.00)   38.14 (16.09)    22.18 (13.29)
    Rule 2           218.55 (46.00)   40.36 (21.44)    23.50 (12.19)

Group means (standard deviation) for trial numbers are presented according to the two different stimulus classes (inner or outer positions; regardless of rule) and two rules (regardless of stimulus position). Trials are split by behavior: correct response, stimulus error, and rule error. Decoding models were always trained using correct trials.

Exploratory Searchlight Analysis

To illustrate the spatial extent of the MEG signal containing relevant task-related information, we applied searchlight decoding. For each sensor in turn, we defined a searchlight consisting of that sensor and its immediate neighbors (mean = 5.6 sensors per searchlight). We then ran the same decoding schemes as on the whole head (above) in each of these searchlights. Decoding accuracy for each searchlight was plotted on the central sensor, resulting in a head map of decoding accuracies, showing the regions containing task-related information at a given time. For each task feature and condition (e.g., Rule 1 vs. Rule 2 on stimulus-aligned correct trials), searchlight results were plotted using 20-msec time windows of interest centered around representative time points: 150 and 1000 msec for the onset-aligned analyses, and −600 and −200 msec for the response-aligned analyses.
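
A sketch of this sensor-space searchlight is shown below. Here neighborhoods are defined by a simple distance threshold on the 3-D sensor positions, which is an assumption; the original analysis used each sensor's immediate neighbors (about 5.6 sensors per searchlight), and any adjacency definition could be substituted.

    # Sensor searchlight sketch: decode from each sensor plus its spatial neighbors,
    # averaging the data over a 20-msec window, and assign the accuracy to the
    # central sensor to build a head map.
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    def sensor_searchlight(data, labels, blocks, sensor_pos, window_idx, radius=0.04):
        """data: (n_trials, n_sensors, n_times); sensor_pos: (n_sensors, 3) positions in meters."""
        dist = cdist(sensor_pos, sensor_pos)
        X_win = data[:, :, window_idx].mean(axis=2)          # average over the time window
        acc_map = np.zeros(sensor_pos.shape[0])
        for s in range(sensor_pos.shape[0]):
            neighborhood = np.where(dist[s] <= radius)[0]    # central sensor and its neighbors
            scores = cross_val_score(LinearDiscriminantAnalysis(),
                                     X_win[:, neighborhood], labels,
                                     groups=blocks, cv=LeaveOneGroupOut())
            acc_map[s] = scores.mean()
        return acc_map                                       # one accuracy value per central sensor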

Statistical Testing

To assess performance of the classifier, we used Bayesian statistics to determine the evidence that decoding performance was different from chance (Teichmann, Moerel, Baker, & Grootswagers, 2021; Dienes, 2011, 2016; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, 2007; Jeffreys, 1961). A Bayes factor (BF) is the probability of the data under the alternative hypothesis relative to the null hypothesis. In all analyses, the alternative hypotheses of above- and below-chance (50%) decoding were tested using the BayesFactor package in R (Morey et al., 2018). BFs were calculated using a JZS prior, centered around chance decoding of 50% (Rouder et al., 2009) with a default scale factor of 0.707, meaning that for the alternative hypotheses of above- and below-chance decoding, we expected to see 50% of parameter values falling between −0.707 and 0.707 SDs from chance (Wetzels & Wagenmakers, 2012; Rouder et al., 2009; Zellner & Siow, 1980; Jeffreys, 1961). A null interval was specified as a range of effect sizes between −0.5 and 0.5.
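
The BF computation itself was run in R with the BayesFactor package; as a rough Python approximation (using pingouin's JZS Bayes factor, which shares the 0.707 Cauchy scale but does not implement the directional tests or the ±0.5 null interval used here), the per-time-point calculation could look like the following sketch. Variable names are assumptions.

    # Approximate per-time-point Bayes factors against chance-level decoding.
    # This is a simplified stand-in for the R BayesFactor analysis: two-sided JZS
    # BF with scale r = 0.707, without the null interval used in the original.
    import numpy as np
    from scipy import stats
    import pingouin as pg

    def decoding_bayes_factors(accuracies, chance=0.5):
        """accuracies: (n_participants, n_times) array of decoding accuracies."""
        n_participants, n_times = accuracies.shape
        bfs = np.zeros(n_times)
        for t in range(n_times):
            res = stats.ttest_1samp(accuracies[:, t], popmean=chance)
            bfs[t] = float(pg.bayesfactor_ttest(res.statistic, nx=n_participants, r=0.707))
        return bfs   # BF > 3: evidence that decoding differs from chance; BF < 1/3: evidence for chance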

In accordance with the BF literature, we did not make a correction for multiple comparisons for the large number of time points tested (Teichmann et al., 2021; Świątkowski & Carrier, 2020; Dienes, 2011, 2016). BFs assess the strength of evidence for the null hypothesis versus the alternative hypothesis; here, they are used to directly test the evidence for above-chance (or below-chance) decoding and the evidence that decoding is equivalent to chance. Thus, we assess the magnitude of evidence in either direction, rather than a probability of the observed data occurring by chance as in traditional null hypothesis testing. BF > 3 is typically considered evidence for the alternative hypothesis, and BF < 1/3 evidence in favor of the null hypothesis (Wetzels et al., 2011; Jeffreys, 1961). When the magnitude of BFs is interpreted at face value, rather than applying a threshold for “significance,” there is no need to correct for the number of tests (time points; Teichmann et al., 2021). Additional tests provide additional evidence and can be interpreted as such without correcting for multiple comparisons (Dienes, 2011, 2016). Accordingly, we do not interpret high BFs at isolated time points as evidence for decoding; rather, we assess the pattern of evidence through time in support of the null or alternative hypotheses. Single time points are not considered to provide substantial evidence if neighboring time points support the opposite hypothesis. We will refer to periods with sustained evidence for the alternative hypothesis as times when information could be decoded, indicating information was represented in the brain.

Behavioral Results

All participants performed above 60% on the final block of the response-mapping task in the training session and therefore participated in the experimental MEG session. In the MEG session, participants performed well above chance (M = 81.92%) but still made both rule errors (M = 5.71%) and stimulus position errors (M = 9.82%; Figure 2A). There were very few other errors (M = 1.43%) or trials with no response (M = 1.13%). RTs were slower for stimulus error trials than rule error and correct trials (Figure 2B).

Temporal Dynamics of Goal-directed Behavior

First, we investigated neural coding during correct trials by decoding different task-related information from the MEG signal when each trial was aligned to stimulus onset (analogous to classic event-related analyses). We then realigned the MEG signal of each trial to the response and performed the same decoding analyses (see Figure 1D for a depiction of realignment). This gave us unique insight into the time course of the processing stages during goal-directed behavior.
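
In this study the response-aligned data were epoched directly around the button-press trigger; purely to illustrate what realignment does to single-trial data (Figure 1D), the sketch below re-slices stimulus-locked epochs so that time zero falls at each trial's RT. The sampling rate, window, and array layout are assumptions.

    # Illustration of response alignment: shift each stimulus-locked trial so that
    # time zero corresponds to the button press (not the authors' implementation,
    # which epoched around the response trigger directly).
    import numpy as np

    def realign_to_response(data, rts_ms, sfreq=200, window_ms=(-1000, 600)):
        """data: (n_trials, n_sensors, n_times), sample 0 at stimulus onset; rts_ms: per-trial RTs."""
        pre = int(round(window_ms[0] * sfreq / 1000))
        post = int(round(window_ms[1] * sfreq / 1000))
        n_trials, n_sensors, n_times = data.shape
        realigned = np.full((n_trials, n_sensors, post - pre), np.nan)
        for i, rt in enumerate(rts_ms):
            rt_sample = int(round(rt * sfreq / 1000))        # sample index of the response
            start, stop = rt_sample + pre, rt_sample + post
            if start >= 0 and stop <= n_times:               # keep only fully covered trials
                realigned[i] = data[i, :, start:stop]
        return realigned                                     # time axis now spans -1000 to +600 msec around the RT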

Time-resolved decoding performed relative to stimulus onset revealed a progression of relevant information over time (Figure 3A). Stimulus position information was represented in the neural signal from approximately 75 msec after the stimulus appeared, with a double-peak response. Cue information could be decoded from 170 msec, and the dynamics were similar for the blue versus yellow (53.34% at 170 msec) and green versus pink (51.86% at 170 msec) cue color decoding (mean presented in Figure 3A), indicating the decoding reflects general cue information rather than being specific to one set of cues. The timing of stimulus and cue information was thus consistent with early visual stages of retinotopic position (Battistoni, Kaiser, Hickey, & Peelen, 2020; Carlson et al., 2011) and color processing (Teichmann, Grootswagers, Carlson, & Rich, 2019), respectively.

Figure 3.

The temporal dynamics of correct stimulus position, cue color, rule, and response information coding. (A) Decoding analyses conducted relative to stimulus onset. (B) Decoding analyses conducted relative to RT. Shaded areas show standard error across participants (n = 22). Decoding accuracy is smoothed with a 20-msec window for visualization. Line at the top of Plot A marks the mean and 95% confidence interval of median RTs per participant. BFs for above-chance decoding are displayed below the x axes for every time point using a log scale and color coded according to the evidence for above-chance decoding (see inset). Head maps depict sensor searchlight decoding results for each task-related feature at representative periods. The color bar indicates decoding accuracy per sensor searchlight, calculated as the mean of a 20-msec time window centered on the time of interest.

Rule information was briefly evident at about the same time as the cues (around 150 msec), but also prominent from approximately 400 to 1200 msec, likely coinciding with higher-level cognitive stages of processing. Rule decoding was quite low in general. One possibility is that there was a carryover effect such that the rule type on a given trial subsequently affected rule coding on the next trial. In an additional exploratory analysis, we assessed rule decoding separately for rule switch and rule repeat trials by training a classifier on Rule 1 versus Rule 2 using correct trials (as in the original analysis) and testing on repeat and switch trials separately. We found that rule decoding exhibited similar dynamics regardless of the rule on the previous trial, with slightly higher decoding on switch trials (see OSF repository). Thus, inertia from the previous trial did not appear to cause a switch cost in the representation of rule information in the brain.

Finally, we assessed the temporal dynamics of response information. The button press response was represented from about 715 msec, corresponding with RTs on some of the faster trials. Overall, decoding accuracy peaked at 125 msec for stimulus position, 175 msec for cue color, 600 msec for rule, and 1055 msec for response button, showing a clear progression in information processing through time.

Searchlight decoding results showed that position, cue, and rule information at 150 msec localized to posterior regions of the brain (Figure 3A, bottom). At a later period, 1000 msec, position and response information were more frontal and diffuse across the MEG sensors. Rule information was much lower in general but again showed a dissociation for the early versus late time points, with frontal and central topography at the later period.

As is typical in difficult tasks, there was a wide variation in RTs across trials and participants, indicating that the dynamics of high-level task-related processes such as decision-making vary trial to trial with respect to stimulus onset. Time-resolved decoding relies on processes occurring at the same time across trials, so this temporal jitter can mask results (Vidaurre, Myers, Stokes, Nobre, & Woolrich, 2019). To capture processes that are more closely aligned with the response, we next realigned the MEG data to the RT (Figure 1D) and performed the same decoding analyses. The temporal dynamics of relevant task-related information were markedly different compared with onset-aligned results (Figure 3B). Notably, cue information could no longer be reliably detected, presumably because cue color representations were transient and tightly stimulus locked, given that, by design, the cue distinctions were irrelevant as soon as rule information could be extracted from them. In contrast, stimulus position coding was evident more than 1000 msec before the response, and rule coding was evident more than 600 msec before the response, with evidence for stimulus processing earlier than rule processing. Motor response coding was sporadically present from more than 1000 msec before the response but was sustained from around 485 msec before the response was made. Response coding peaked after the response was given, potentially reflecting the contribution of somatosensory feedback from the different button presses. Response coding 200 msec before the response was associated with the highest decoding over central-left sensors, which would be consistent with motor and somatosensory cortex activity associated with a right-hand response. Interestingly, the representation of stimulus position and response information appeared to ramp up before the response, plausibly reflecting the accumulation of evidence leading to a decision.

Error Representations

We were particularly interested in understanding whether and how the task-related information we can decode with MVPA is related to participant performance. Specifically, we investigated how information coding changes when an error is made. Recall that our design explicitly allows us to identify the likely source of the error based on the behavioral response (Figure 1C). We assessed stimulus and rule information in the neural signal when participants made stimulus errors versus rule errors. On the basis of our previous work with fMRI (Woolgar et al., 2019), we hypothesized that the brain would represent the incorrect stimulus before a stimulus error and the incorrect rule before a rule error. Classifiers were trained to classify stimulus position and rule using correct trials and tested on incorrect trials. Therefore, for each time point on each error trial, the analysis reveals whether activation patterns were more similar to the usual patterns for the presented rule and stimulus (correct rule and stimulus) or the alternate one (incorrect) corresponding to the participant's decision (as shown by the behavior response).

Stimulus Decoding—Aligned to Stimulus Onset

In this analysis, we looked at how stimulus position was coded on error trials. We found that, when participants made rule errors, in which behavior suggested that the stimulus was encoded correctly but the incorrect rule had been used, there was sustained stimulus decoding with similar dynamics to that on correct trials (Figure 4A, blue line). Stimulus coding on stimulus errors, however, was present only transiently, around 335 msec, after which coding attenuated (Figure 4A, green line). After 495 msec, there was substantial evidence that stimulus coding was higher when participants made errors based on applying the wrong rule than when the response suggested they had misperceived the stimulus. This indicates that, when participants made stimulus errors, correct stimulus information was lost.

Figure 4.

Stimulus position decoding on rule and stimulus error trials. (A) Decoding analyses conducted relative to stimulus onset revealed initial stimulus coding on both rule and stimulus error trials, with sustained stimulus coding on rule error trials (similar to correct trials; Figure 3), but no stimulus coding at later time points on stimulus error trials. The interaction, shown by BF difference (pink), confirmed that at later time points, there was more evidence for stimulus coding on rule errors than stimulus errors. (B) Decoding analyses conducted relative to RT revealed evidence for correct stimulus coding on rule error trials and evidence for “incorrect” stimulus coding on stimulus error trials, evident as below-chance decoding accuracy. Decoding accuracy is smoothed with a 20-msec window for visualization. BFs are shown on a log scale and color coded according to amount of evidence.

Stimulus Decoding—Aligned to RT

Next, we asked the same question but with the data realigned to the RT. On rule errors, there was a gradual ramping up of stimulus coding in the lead up to the response (Figure 4B, blue line), as we had observed for correct trials (Figure 3B). In contrast, on stimulus errors, activation patterns ramped toward the patterns encoding the incorrect stimulus, as indexed by below-chance decoding from approximately 355 msec before the response (Figure 4B, green line). Given that the correct stimulus had been encoded in the early part of these same trials (Figure 4A), this suggests an evolution of information coding toward the incorrect stimulus decision. Stimulus decoding accuracy on rule errors was higher than that on stimulus errors for the bulk of the epoch, particularly from about 795 msec before the response to 600 msec after the response. Together, this finding shows that stimulus coding in the latter part of the trial reflected the decision ultimately made by participants, rather than the stimulus presented.

Rule Decoding

Next, we asked whether the representation of task rule in the correct trials would also generalize to error trials. However, there were only very brief periods of evidence for rule information coding on rule errors and stimulus errors, whether we aligned the MEG data to the stimulus onset (Figure 5A) or response (Figure 5B). There was also no difference in rule coding between error types. For onset-aligned analyses, BFs indicated evidence for above-chance decoding on rule and stimulus errors for some time points, but it was not sustained. There were also some brief periods of below-chance decoding on rule errors for response-aligned analyses, which would indicate coding of the incorrect rule, consistent with behavior, but this did not meet our criterion for interpretation (BF > 3 at two consecutive time points). Overall, rule information that had been (weakly) present on correct trials was largely absent on both types of behavioral error. Moreover, the “reversal” in coding—in this case, coding of the incorrect rule—was not evident as it was for stimulus coding.

Figure 5.

Rule decoding on rule and stimulus error trials. (A) Decoding analyses conducted relative to stimulus onset. (B) Decoding analyses conducted relative to RT. Decoding accuracy is smoothed with a 20-msec window for visualization. There was substantial evidence for the null (i.e., that rule could not be decoded) on both stimulus error and rule error trials, indicating the rule coding did not reverse for either type of error.

Taken together, the MEG decoding results show that, on correct trials, all task-relevant aspects (stimulus position, cue color, rule, and response) could be decoded. The dynamics of coding varied such that analyses revealed stimulus-locked coding of perceptual features (stimulus position, cue) early in the time course, and analyses aligned to the response revealed coding of relevant task aspects (position, rule) and the resulting motor response ramping up before the response was given. Strikingly, the error decoding analysis showed that the increased stimulus position coding in the latter part of the epoch reflected the participants' decision about the stimulus more closely than the physical stimulus presented to them, providing strong evidence for the connection between specific neural responses decoded with MVPA and behavior.

In this study, we used MVPA with MEG to characterize the neural dynamics of stimulus, cue, rule, and response coding in a difficult response-mapping task as well as the link between these codes and behavior. Our results showed a clear and orderly progression of task-relevant information coding after the stimulus was presented, whereas analyses aligned to the RT revealed that information coding for the stimulus and motoric response ramped up over the ∼1 sec before the response was given, in a manner reminiscent of evidence accumulation (e.g., Tagliabue et al., 2019; Pisauro, Fouragnan, Retzler, & Philiastides, 2017). Strikingly, for trials on which participants made an error in the stimulus position, information coding initially corresponded to physical stimulation, but later accumulated in the opposite direction, so that activity patterns at later time points resembled those encoding the “incorrect” stimulus. This provides a crucial demonstration that patterns of neural activity recorded and classified in this way can be predictive of behavior. These findings give insight into the dynamics of processes underlying cognitive control and provide a clear link between neural responses and behavior.

The difficult response-mapping task implemented in this study required complex processing for successful performance. The task involved processing different types of perceptual information (cue, stimulus), conversion of the cue into the appropriate rule, application of the relevant rule to the stimulus position, and selection of the correct button-press response. Using MEG decoding with a carefully balanced experimental design, we were able to investigate the coding of each of these types of relevant information over time and observe the succession of the different task-related features. We summarize and consider the findings below.

Information Coding After Stimulus Onset: Correct Trials

Our results demonstrated different dynamics for perceptual, rule-related, and motor processes for analyses aligned to stimulus onset (Figure 3A). Stimulus position was represented early in the time course (<100 msec after stimulus onset), consistent with early retinotopic visual processes (Im, Gururajan, Zhang, Chen, & He, 2007; Di Russo et al., 2005). The cue was represented shortly thereafter at a time that is consistent with general color processing and in line with previous work that found color decoding was most evident from 135–155 msec after image onset (Teichmann et al., 2019, 2020). Rule information, by contrast, was most evident at around 600 msec and maintained for longer than cue information, perhaps reflecting the ongoing process of combining the stimulus and rule information to derive the response. Motor responses emerged last and exhibited a broad, shallow peak, perhaps reflective of the wide range of RTs in the task. Our data thus emphasize the progression of coding for different types of task-relevant information over time, despite the relevant sensory information (stimulus and cue) being presented simultaneously.

These onset-aligned analyses are consistent with previous time-resolved multivariate analyses that showed progression of task-related information after stimulus presentation (Kikumoto & Mayr, 2020; Hubbard et al., 2019; Wen et al., 2019; Hebart et al., 2018). For example, Hebart et al. (2018) showed that task-related information was evident in the MEG signal shortly after a task cue was presented but ramped up again after the target object was presented. In a more complex design, Hubbard et al. (2019) used cued task-switching with EEG, which allowed them to look at coding of multiple task-related aspects over time using oscillatory power in the neural signal. Like our cue and rule results, they showed cue information preceded task rule information, and task decoding was prolonged throughout the trial period. Relevant and irrelevant stimulus information was evident after the stimuli were presented, and response information was present later in the signal. Here, we show that a similar cascade of information arises when the cue and target stimulus are presented simultaneously. Our results and this previous work show that different task features are represented in the brain with different temporal dynamics, but there are periods when multiple types of features are represented, potentially giving the neural correlates for the information integration needed on these tasks.

The cue color decoding we observed is indicative of transient cue processing before the relevant rule was selected, at which point the color distinctions (e.g., blue vs. yellow, when both indicate Rule 1) become irrelevant. In contrast, the prolonged coding of stimulus position and rule (far exceeding the stimulus presentation time of 500 msec) likely reflects position and rule information being maintained, accumulated, and/or manipulated as it is combined to achieve the correct response. Previous fMRI research has shown that the MD regions in frontoparietal cortex represent a range of information including details of stimuli and task rules (Woolgar et al., 2016), with a particular emphasis on information that is task relevant (Jackson et al., 2017; Woolgar, Afshar, et al., 2015), perceptually confusable (Woolgar et al., 2011), or difficult, like our task rules (Woolgar, Afshar, et al., 2015). The representations of stimulus position and rules we observed at late stages of processing (>500 msec after stimulus onset) would be theoretically consistent with processing within higher-level frontoparietal regions. For example, previous combined MEG–fMRI research has shown task-related enhancement of relevant features occurred after 500 msec following stimulus presentation, and task coding in posterior parietal cortex and lateral pFC seemed to peak from 500 msec (Hebart et al., 2018). Moreover, attention enhances task-relevant information in frontoparietal regions from 500 msec (Moerel, Rich, & Woolgar, 2021). On the other hand, task-relevant stimulus information also seems to persist in occipital regions until these late time points (Moerel et al., 2021; Hebart et al., 2018; Goddard, Carlson, Dermody, & Woolgar, 2016). Our exploratory sensor searchlight analyses suggested that early position, cue, and rule information was evident in occipital sensors, whereas later position information was more diffuse across the brain, and later rule information was more frontal. Future work could address the spatial nature of these processes with more precision, perhaps using computational methods to combine fMRI and MEG data such as similarity-based fusion (Cichy, Pantazis, & Oliva, 2014, 2016).

Information Coding Before the Response: Correct Trials

The analyses aligned to the RT provided rich additional information about the temporal dynamics of stimulus, cue, rule, and response coding (Figure 3B). We expected that these response-aligned analyses would emphasize higher-level decision-related processes required for behavior, which might not be so salient in data aligned to stimulus onset because of variability in their timing (Vidaurre et al., 2019). Previous EEG work has shown neural signals ramp up during perceptual decision-making, which has been described as evidence accumulation (e.g., Tagliabue et al., 2019; Pisauro et al., 2017), but these effects could be related to a general decision-making process rather than involving information about the stimulus of interest and could be confounded with preparatory motor activity. Here, using decoding, we were able to assess the dynamics of different types of task-related information, separate from and in addition to response information, that was represented in the brain before the response. The results revealed an increase in stimulus information from approximately 1000 msec before the response that peaked around the RT, a pattern that was noticeably absent in the onset-locked analyses. Response coding, by contrast, showed a later, sharper ramping in information that peaked just after the response was made. The ramping of stimulus position and response coding was, for the most part, when the stimulus and cue were no longer visible; the stimulus and cue were only presented for 500 msec, and the median RT was over 1400 msec, so on a typical trial, there was no stimulus presented in the 900 msec before the response. Therefore, instead of perceptual accumulation, these pre-response representations appear to be internally generated codes that reflect the system moving toward different end states as the person arrives at his or her decision.

The results revealed concurrent coding of position, rule, and response information before the response, which might reflect the need to combine position and rule information to select the appropriate response. Kikumoto and Mayr (2020) recently investigated the temporal dynamics of action selection using EEG in a cued rule selection task and provided evidence that conjunctions between task-relevant features are necessary for action selection. In addition to the succession of individual task features, they found rule stimulus–response conjunctive representations could be decoded using stimulus-aligned EEG, and the strength of the conjunctive information was associated with faster responses, providing a link with behavior. Other work has used temporal decomposition of EEG data and concluded that stimulus–response bindings have different temporal profiles to stimulus information, with gradual activation and decay over time (Takacs, Mückschel, Roessner, & Beste, 2020). Our exploratory searchlight decoding results showed that position and rule information before the response was mostly lateral and diffuse across the sensors but with similar spatial patterns (although lower for rule decoding), potentially reflecting the integration of task-relevant information within brain regions. We did not explicitly set out to look at conjunctive representations, but our results certainly fit with this account. During goal-directed behavior, it seems that multiple task-relevant features are represented concurrently, presumably reflecting the need for this information to be maintained, and are then combined over time.

Our onset-aligned and response-aligned analyses revealed complementary aspects of the data. The pattern of results suggests that onset-aligned analyses may be most sensitive to perceptual responses, whereas response-aligned analyses may capture processes whose timing varies relative to stimulus onset but is more closely yoked to the time of response, such as higher-level decision and motor preparation processes. Specifically, we found that stimulus position and cue color had sharp initial decoding when aligned to stimulus onset, which was not visible after realignment to the response. However, neural representations of stimulus and response exhibited a ramping accumulation before the button was pressed, which was not visible in onset-aligned data. This highlights the utility of including both approaches, perhaps particularly for difficult tasks with substantial RT variability, to yield additional information about the dynamics underlying successful task performance.

Information Coding Leading to Incorrect Behavior: Error Trials

To test whether the neural coding of task-relevant information detected with MVPA reflects activity necessary to successfully perform a task, we examined how these codes changed when participants made errors. We focused on decoding stimulus and rule information during stimulus errors and rule errors, situations in which the decision made could be dissociated from the stimulus and rule cue presented. Stimulus errors consisted of trials on which participants correctly applied the rule but confused the stimulus. Despite the behavioral evidence for correct rule use on these trials, there was only some evidence of rule coding, perhaps reflecting weak rule coding in general (on correct trials) and the limited number of error trials. However, stimulus position coding on stimulus error trials revealed a striking result: Initial stimulus coding showed some fleeting evidence of the correct stimulus neural pattern, but before the response, stimulus coding became consistent with the incorrect stimulus. Thus, onset-aligned analyses and neural responses at early time points reflected perception, whereas response-aligned analyses and coding at later time points reflected behavior. Recall that, in this paradigm, stimulus position was designed to be confusable, and a stimulus position error, by definition, means that participants confused two (of four) stimulus positions. When the stimulus was presented, participants presumably saw it, consistent with the brief veridical stimulus position decoding, but this information was evidently not maintained well enough to support accurate behavior: Participants could not localize the stimulus precisely, which led to a decision in favor of the wrong stimulus. It is this internal decision-related process that seems to be reflected in below-chance (incorrect stimulus) decoding before the response. Previous work has shown that higher, but not early, perceptual regions reflect behavior in terms of accuracy (Walther et al., 2009; Williams et al., 2007) and RT (Grootswagers et al., 2018), although none of these studies revealed the code reversal needed for a strong link with behavior. Here, we used the temporal domain to show "what" was coded on error trials at different stages of processing. There was a dissociation between the coding of early perceptual information and the stimulus decision used to generate the behavioral response.
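The logic of this error-trial test can be sketched as follows: train a classifier on correct trials and test it on error trials labeled by the stimulus that was actually presented, so that below-chance accuracy indicates a pattern more consistent with the other (incorrect) stimulus. The code below is a schematic illustration with placeholder data, not the decoding pipeline used in the study.

```python
# Sketch of the train-on-correct, test-on-error logic for a single time point.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_correct, n_error, n_features = 300, 40, 160
X_correct = rng.standard_normal((n_correct, n_features))   # correct-trial patterns (placeholder)
y_correct = rng.integers(0, 2, n_correct)                  # stimulus presented on correct trials
X_error = rng.standard_normal((n_error, n_features))       # stimulus-error-trial patterns
y_presented = rng.integers(0, 2, n_error)                  # stimulus actually shown on error trials

clf = LinearDiscriminantAnalysis().fit(X_correct, y_correct)
acc_presented = clf.score(X_error, y_presented)

# Accuracy reliably below 0.5 at late time points would mean the classifier
# systematically selects the stimulus the participant responded to, rather than
# the stimulus that was shown.
print(f"Decoding of the presented stimulus on error trials: {acc_presented:.2f}")
```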

Rule errors consisted of trials on which participants appeared (in their behavior) to apply the wrong rule to the correct stimulus. Accordingly, for stimulus position coding, the classifier trained on correct trials could successfully classify the stimulus position after onset and before the response on rule error trials. This indicates that the stimulus coding reversal seen above was diagnostic of the particular type of behavioral error, rather than reflective of errors in general, pointing to a tight link with the specific decision made and reflected in behavior.

For rule coding, we again found little evidence for rule coding on rule error trials. A couple of time points just before the response showed patterns of activity consistent with the incorrect rule, as we had predicted for a full double dissociation, but the effect was so transient that it is difficult to interpret with confidence. This may reflect the very small number of rule error trials and/or the relatively weak coding of rule information in general in our data (potentially attributable to more variability in timing of this task aspect and/or relatively poor signal from frontal regions that are further from the sensors in our supine MEG system). This limitation means that, in contrast to the stimulus information, we cannot conclude with confidence whether or not the rule patterns we decoded were closely linked to behavior.

Our research contributes to the growing literature drawing links between neural responses and behavior using MVPA. In both spatial and temporal neuroimaging, classifier prediction errors and distances from the classifier boundary have been shown to correlate with behavioral error patterns and RTs (e.g., González-García, Formica, Wisniewski, & Brass, 2021; Grootswagers et al., 2018; Carlson, Ritchie, Kriegeskorte, Durvasula, & Ma, 2014; Walther et al., 2009). Here, we argue that a tighter link between brain and behavior can be found by testing "what" is represented on error trials when an incorrect decision is made. The results parallel fMRI work showing that frontoparietal MD regions represent the correct stimulus but wrong rule during a rule error as well as the correct rule but wrong stimulus during other errors (Woolgar et al., 2019). The current study extends this work by elucidating the dynamics with which the incorrect representations evolve over time, with early representations reflecting the stimuli presented, followed by a late, gradual accumulation toward the opposite stimulus at time points just before the behavioral response. We also show here that there is a dissociation between the perceptual coding (indexed by onset-aligned analyses) and high-level decision coding (indexed by response-aligned analyses) on error trials. Specifically, on stimulus errors, after a transient representation of the veridical stimulus, activity accumulated toward a pattern state reflecting the opposite and incorrect stimulus, apparently reflecting internally generated accumulation toward the wrong decision. This pattern was specifically diagnostic of behavioral errors attributable to stimulus misperception, as position information was coded correctly on other types of behavioral errors.
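For comparison, the distance-to-bound approach mentioned above (e.g., Carlson et al., 2014; Grootswagers et al., 2018) relates single-trial classifier evidence to RTs. The sketch below illustrates that logic with placeholder data and an assumed linear classifier; in practice, distances would be taken from held-out trials within a cross-validation scheme.

```python
# Sketch of relating distance from the classifier boundary to RTs.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_trials, n_features = 300, 160
X = rng.standard_normal((n_trials, n_features))   # single-trial patterns at one time point (placeholder)
y = rng.integers(0, 2, n_trials)                  # stimulus category
rts = rng.uniform(0.8, 2.0, n_trials)             # RTs in seconds (placeholder)

clf = LinearDiscriminantAnalysis().fit(X, y)
distances = np.abs(clf.decision_function(X))      # unsigned distance from the decision boundary

# The prediction is a negative correlation: trials farther from the boundary should be
# categorized faster. (With random placeholder data, rho will be near zero.)
rho, p = spearmanr(distances, rts)
```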

The results of this study provide new insights into how task-relevant information is processed in the human brain to allow successful goal-directed behavior. There was a clear progression of the onset of task-relevant information in the brain, from stimulus position and cue, to rule and then response information. Complementary response-aligned analyses, which highlight later high-level processes aligned in time to behavior, additionally revealed dynamics of information coding resembling an accumulation of multiple types of task-relevant information. Moreover, when participants made behavioral errors, the direction of accumulation was reversed. Under these conditions, the trajectory of representation moved in the opposite direction such that the neural pattern increasingly represented the incorrect stimulus (which had not been shown) in a manner diagnostic of the subsequent behavioral choice. The data highlight the orderly but overlapping dynamics with which several task elements can be represented in brain activity. Our findings emphasize a particular role for the trajectory of information coding at later time points in determining behavioral success or failure and also demonstrate the utility of aligning neural data differently to examine high-level complex cognitive processes.

We thank Christopher Whyte for assistance in data collection and Dr. Tijl Grootswagers for helpful discussions. We acknowledge the Sydney Informatics Hub and the University of Sydney's high-performance computing cluster Artemis for providing high-performance computing resources that contributed to these research results.

Reprint requests should be sent to Amanda K. Robinson, School of Psychology, The University of Sydney, Camperdown, NSW 2006, Australia, or via e-mail: [email protected].

This work was funded by an Australian Research Council Discovery Project (https://dx.doi.org/10.13039/501100000923), grant number: 170101840; an Australian Research Council Future Fellowship (https://dx.doi.org/10.13039/501100000923), grant number: FT170100105; Medical Research Council (UK) intramural funding, grant number: SUAG/052/G101400; and an Australian Research Council Discovery Early Career Researcher Award (https://dx.doi.org/10.13039/501100000923), grant number: DE200101159.

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .536, W/M = .145, M/W = .130, and W/W = .188.

References

Battistoni, E., Kaiser, D., Hickey, C., & Peelen, M. V. (2020). The time course of spatial attention during naturalistic visual search. Cortex, 122, 225–234.
Bracci, S., Daniels, N., & Op de Beeck, H. (2017). Task context overrules object- and category-related representational content in the human parietal cortex. Cerebral Cortex, 27, 310–321.
Carlson, T. A., Grootswagers, T., & Robinson, A. K. (2020). An introduction to time-resolved decoding analysis for M/EEG. In M. G. Gazzaniga, G. R. Mangun, & D. Poeppel (Eds.), The cognitive neurosciences. Cambridge, MA: MIT Press.
Carlson, T. A., Hogendoorn, H., Kanai, R., Mesik, J., & Turret, J. (2011). High temporal resolution decoding of object position and category. Journal of Vision, 11, 1–17.
Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. (2014). Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132–142.
Cichy, R. M., Pantazis, D., & Oliva, A. (2014). Resolving human object recognition in space and time. Nature Neuroscience, 17, 455–462.
Cichy, R. M., Pantazis, D., & Oliva, A. (2016). Similarity-based fusion of MEG and fMRI reveals spatio-temporal dynamics in human cortex during visual object recognition. Cerebral Cortex, 26, 3563–3579.
Cole, M. W., & Schneider, W. (2007). The cognitive control network: Integrated cortical regions with dissociable functions. Neuroimage, 37, 343–360.
Crittenden, B. M., Mitchell, D. J., & Duncan, J. (2016). Task encoding across the multiple demand cortex is consistent with a frontoparietal and cingulo-opercular dual networks distinction. Journal of Neuroscience, 36, 6147–6155.
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21.
de Wit, L., Alexander, D., Ekroll, V., & Wagemans, J. (2016). Is neuroimaging measuring information in the brain? Psychonomic Bulletin & Review, 23, 1415–1428.
Di Russo, F., Pitzalis, S., Spitoni, G., Aprile, T., Patria, F., Spinelli, D., et al. (2005). Identification of the neural sources of the pattern-reversal VEP. Neuroimage, 24, 874–886.
Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? Perspectives on Psychological Science, 6, 274–290.
Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78–89.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behavior. Trends in Cognitive Sciences, 14, 172–179.
Duncan, J., Assem, M., & Shashidhara, S. (2020). Integrated intelligence from distributed brain activity. Trends in Cognitive Sciences, 24, 838–852.
Folstein, J. R., & Van Petten, C. (2008). Influence of cognitive control and mismatch on the N2 component of the ERP: A review. Psychophysiology, 45, 152–170.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences, U.S.A., 102, 9673–9678.
Goddard, E., Carlson, T. A., Dermody, N., & Woolgar, A. (2016). Representational dynamics of object recognition: Feedforward and feedback information flows. Neuroimage, 128, 385–397.
González-García, C., Formica, S., Wisniewski, D., & Brass, M. (2021). Frontoparietal action-oriented codes support novel instruction implementation. Neuroimage, 226, 117608.
Gratton, G., Cooper, P., Fabiani, M., Carter, C. S., & Karayanidis, F. (2017). Dynamics of cognitive control: Theoretical bases, paradigms, and a view for the future. Psychophysiology, 55, e13016.
Grootswagers, T., Cichy, R. M., & Carlson, T. A. (2018). Finding decodable information that can be read out in behavior. Neuroimage, 179, 252–262.
Grootswagers, T., Wardle, S. G., & Carlson, T. A. (2017). Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time series neuroimaging data. Journal of Cognitive Neuroscience, 29, 677–697.
Haxby, J. V. (2012). Multivariate pattern analysis of fMRI: The early beginnings. Neuroimage, 62, 852–855.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Hebart, M. N., & Baker, C. I. (2018). Deconstructing multivariate decoding for the study of brain function. Neuroimage, 180, 4–18.
Hebart, M. N., Bankson, B. B., Harel, A., Baker, C. I., & Cichy, R. M. (2018). The representational dynamics of task and object processing in humans. eLife, 7, 1–21.
Hubbard, J., Kikumoto, A., & Mayr, U. (2019). EEG decoding reveals the strength and temporal dynamics of goal-relevant representations. Scientific Reports, 9, 9051.
Im, C.-H., Gururajan, A., Zhang, N., Chen, W., & He, B. (2007). Spatial resolution of EEG cortical source imaging revealed by localization of retinotopic organization in human primary visual cortex. Journal of Neuroscience Methods, 161, 142–154.
Jackson, J. B., Feredoes, E., Rich, A. N., Lindner, M., & Woolgar, A. (2021). Concurrent neuroimaging and neurostimulation reveals a causal role for dlPFC in coding of task-relevant information. Communications Biology, 4, 1–16.
Jackson, J., Rich, A. N., Williams, M. A., & Woolgar, A. (2017). Feature-selective attention in frontoparietal cortex: Multivoxel codes adjust to prioritize task-relevant information. Journal of Cognitive Neuroscience, 29, 310–321.
Jackson, J. B., & Woolgar, A. (2018). Adaptive coding in the human brain: Distinct object features are encoded by overlapping voxels in frontoparietal cortex. Cortex, 108, 25–34.
Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford: Oxford University Press.
Kado, H., Higuchi, M., Shimogawara, M., Haruta, Y., Adachi, Y., Kawai, J., et al. (1999). Magnetoencephalogram systems developed at KIT. IEEE Transactions on Applied Superconductivity, 9, 4057–4062.
Karayanidis, F., Jamadar, S., Ruge, H., Phillips, N., Heathcote, A., & Forstmann, B. U. (2010). Advance preparation in task-switching: Converging evidence from behavioral, brain activation, and model-based approaches. Frontiers in Psychology, 1, 25.
Karimi-Rouzbahani, H., Woolgar, A., & Rich, A. N. (2021). Neural signatures of vigilance decrements predict behavioral errors before they occur. eLife, 10, e60563.
Kikumoto, A., & Mayr, U. (2020). Conjunctive representations that integrate stimuli, responses, and rules are critical for action selection. Proceedings of the National Academy of Sciences, U.S.A., 117, 10603–10608.
Long, N. M., & Kuhl, B. A. (2018). Bottom–up and top–down factors differentially influence stimulus representations across large-scale attentional networks. Journal of Neuroscience, 38, 2495–2504.
Moerel, D., Rich, A. N., & Woolgar, A. (2021). Selective attention and decision-making have separable neural bases in space and time [preprint]. BioRxiv, 2021.02.28.433294.
Morey, R. D., Rouder, J. N., Jamil, T., Urbanek, S., Forner, K., & Ly, A. (2018). Package "BayesFactor". https://cran.r-project.org/web/packages/BayesFactor/BayesFactor.pdf
Oosterhof, N. N., Connolly, A. C., & Haxby, J. V. (2016). CoSMoMVPA: Multi-modal multivariate pattern analysis of neuroimaging data in MATLAB/GNU Octave. Frontiers in Neuroinformatics, 10, 27.
Pisauro, M. A., Fouragnan, E., Retzler, C., & Philiastides, M. G. (2017). Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG–fMRI. Nature Communications, 8, 15808.
Posner, M. I., & Presti, D. E. (1987). Selective attention and cognitive control. Trends in Neurosciences, 10, 13–17.
Posner, M. I., & Snyder, C. R. R. (1975). Attention and cognitive control. In R. L. Solso (Ed.), Information processing and cognition: The Loyola symposium. Erlbaum.
Ritchie, J. B., & Carlson, T. A. (2016). Neural decoding and "inner" psychophysics: A distance-to-bound approach for linking mind, brain, and behavior. Frontiers in Neuroscience, 10, 1–8.
Ritchie, J. B., Tovar, D. A., & Carlson, T. A. (2015). Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316.
Robinson, A. K., Grootswagers, T., & Carlson, T. A. (2019). The influence of image masking on object representations during rapid serial visual presentation. Neuroimage, 197, 224–231.
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237.
Świątkowski, W., & Carrier, A. (2020). There is nothing magical about Bayesian statistics: An introduction to epistemic probabilities in data analysis for psychology starters. Basic and Applied Social Psychology, 42, 387–412.
Tagliabue, C. F., Veniero, D., Benwell, C. S. Y., Cecere, R., Savazzi, S., & Thut, G. (2019). The EEG signature of sensory evidence accumulation during decision formation closely tracks subjective perceptual experience. Scientific Reports, 9, 4949.
Takacs, A., Mückschel, M., Roessner, V., & Beste, C. (2020). Decoding stimulus–response representations and their stability using EEG-based multivariate pattern analysis. Cerebral Cortex Communications, 1, tgaa016.
Teichmann, L., Grootswagers, T., Carlson, T. A., & Rich, A. N. (2019). Seeing versus knowing: The temporal dynamics of real and implied color processing in the human brain. Neuroimage, 200, 373–381.
Teichmann, L., Moerel, D., Baker, C., & Grootswagers, T. (2021). An empirically-driven guide on using Bayes factors for M/EEG decoding. BioRxiv, 2021.06.23.449663.
Teichmann, L., Quek, G. L., Robinson, A. K., Grootswagers, T., Carlson, T. A., & Rich, A. N. (2020). The influence of object-color knowledge on emerging object representations in the brain. Journal of Neuroscience, 40, 6779–6789.
Vidaurre, D., Myers, N. E., Stokes, M., Nobre, A. C., & Woolrich, M. W. (2019). Temporally unconstrained decoding reveals consistent but time-varying stages of stimulus processing. Cerebral Cortex, 29, 863–874.
Vincent, J. L., Kahn, I., Snyder, A. Z., Raichle, M. E., & Buckner, R. L. (2008). Evidence for a frontoparietal control system revealed by intrinsic functional connectivity. Journal of Neurophysiology, 100, 3328–3342.
Wagenmakers, E. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14, 779–804.
Walther, D. B., Caddigan, E., Fei-Fei, L., & Beck, D. M. (2009). Natural scene categories revealed in distributed patterns of activity in the human brain. Journal of Neuroscience, 29, 10573–10581.
Wen, T., Duncan, J., & Mitchell, D. J. (2019). The time-course of component processes of selective attention. Neuroimage, 199, 396–407.
Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E. J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6, 291–298.
Wetzels, R., & Wagenmakers, E.-J. (2012). A default Bayesian hypothesis test for correlations and partial correlations. Psychonomic Bulletin & Review, 19, 1057–1064.
Williams, M. A., Dang, S., & Kanwisher, N. G. (2007). Only some spatial patterns of fMRI response are read out in task performance. Nature Neuroscience, 10, 685–686.
Woolgar, A., Afshar, S., Williams, M. A., & Rich, A. N. (2015). Flexible coding of task rules in frontoparietal cortex: An adaptive system for flexible cognitive control. Journal of Cognitive Neuroscience, 27, 1895–1911.
Woolgar, A., Dermody, N., Afshar, S., Williams, M. A., & Rich, A. N. (2019). Meaningful patterns of information in the brain revealed through analysis of errors [preprint]. BioRxiv, 673681.
Woolgar, A., Duncan, J., Manes, F., & Fedorenko, E. (2018). Fluid intelligence is supported by the multiple-demand system not the language system. Nature Human Behaviour, 2, 200–204.
Woolgar, A., Hampshire, A., Thompson, R., & Duncan, J. (2011). Adaptive coding of task-relevant information in human frontoparietal cortex. Journal of Neuroscience, 31, 14592–14599.
Woolgar, A., Jackson, J., & Duncan, J. (2016). Coding of visual, auditory, rule, and response information in the brain: 10 years of multivoxel pattern analysis. Journal of Cognitive Neuroscience, 28, 1433–1454.
Woolgar, A., Parr, A., Cusack, R., Thompson, R., Nimmo-Smith, I., Torralva, T., et al. (2010). Fluid intelligence loss linked to restricted regions of damage within frontal and parietal cortex. Proceedings of the National Academy of Sciences, U.S.A., 107, 14899–14902.
Woolgar, A., Williams, M. A., & Rich, A. N. (2015). Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices. Neuroimage, 109, 429–437.
Woolgar, A., & Zopf, R. (2017). Multi-sensory coding in the multiple-demand regions: Vibrotactile task information is coded in frontoparietal cortex. Journal of Neurophysiology, 118, 703–716.
Zellner, A., & Siow, A. (1980). Posterior odds ratios for selected regression hypotheses. Trabajos de Estadistica y de Investigacion Operativa, 31, 585–603.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.