Abstract

How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general “core” with the capacity to code many different aspects of a task.

INTRODUCTION

Multivoxel pattern analysis (MVPA) of fMRI data is a powerful and increasingly popular technique used to examine information coding in the human brain. In MVPA, information coding is inferred when the pattern of activation across voxels can reliably discriminate between two or more events such as different stimuli, task rules, or participant responses (e.g., Haynes & Rees, 2006; Haxby et al., 2001). For example, if, in a certain brain region, the patterns of activation elicited in response to viewing red objects are more similar to each other than to the patterns elicited by green objects (and vice versa), we conclude that there is information in the patterns that discriminates between red and green objects and therefore codes for color. This allows inference beyond traditional univariate brain mapping (e.g., this region is more active for colored objects than black and white ones) to examine the particular discriminations that the region is able to make (e.g., the region carries specific information about what color was presented). Information coding may be tested by comparing the correlation of patterns within object classes to correlations between object classes (e.g., Haxby et al., 2001), or using a machine learning algorithm such as a pattern classifier. For example, if a classifier can be trained to discriminate between red and green objects, such that it can predict object color on an independent set of data, we conclude that the pattern of activation can be used reliably to discriminate between red and green objects. The technique has also been generalized to incorporate multiple classes to test more complex representational models (e.g., representational similarity analysis; Kriegeskorte, Mur, & Bandettini, 2008). It has been used to examine neural coding of a wide range of different task events including aspects of stimuli, task rules, participant responses, rewards, emotion, and language (e.g., McNamee, Rangel, & O'Doherty, 2013; Herrmann, Obleser, Kalberlah, Haynes, & Friederici, 2012; Woolgar, Thompson, Bor, & Duncan, 2011; Peelen & Vuilleumier, 2010; Haxby et al., 2001).
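
To make the classifier logic concrete, the following is a minimal sketch in Python with scikit-learn. The data, labels, and dimensions are all hypothetical (random numbers standing in for single-trial activation patterns), and published studies differ in their choice of classifier and cross-validation scheme.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
X = rng.normal(size=(n_trials, n_voxels))   # one multivoxel pattern per trial
y = np.repeat([0, 1], n_trials // 2)        # e.g., 0 = red object, 1 = green object

# Train and test on independent folds: reliably above-chance accuracy implies
# that the local activation pattern discriminates the two classes.
accuracy = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"Mean cross-validated accuracy: {accuracy:.2f}")  # ~.50 for random data
```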

Using a “searchlight,” MVPA can be used to map out the brain regions that code for each particular type of information (Kriegeskorte, Goebel, & Bandettini, 2006). For each brain voxel in turn, pattern analysis is applied to the pattern of activation across voxels in the local neighborhood (e.g., in a sphere of a fixed radius centered on the current voxel of interest), and the resulting metric, which summarizes the strength of information coding in the local neighborhood, is given to the center voxel. The resulting whole-brain map indicates where in the brain a particular distinction is coded. This technique allows for exploratory analyses that are free from a priori hypotheses about where local patterns will be discriminative, and opens the door for unpredicted findings.
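
Structurally, the searchlight procedure amounts to a loop over brain voxels with a spherical neighborhood around each. The sketch below (Python/NumPy) illustrates that structure only; it assumes a hypothetical `decode(patterns, labels)` function, defined elsewhere, that returns the summary statistic (e.g., classification accuracy) for one neighborhood.

```python
import numpy as np

def searchlight_map(data, labels, mask, radius, decode):
    """data: (x, y, z, n_trials) array; mask: boolean (x, y, z) brain mask;
    radius in voxels. Slow but transparent illustration of the procedure."""
    # Precompute voxel offsets falling within a sphere of the given radius.
    r = int(np.ceil(radius))
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    sphere = grid[np.linalg.norm(grid, axis=1) <= radius]

    out = np.zeros(mask.shape)
    for center in np.argwhere(mask):
        neigh = center + sphere
        # Keep neighbors inside the volume and inside the brain mask.
        ok = np.all((neigh >= 0) & (neigh < mask.shape), axis=1)
        neigh = neigh[ok]
        neigh = neigh[mask[neigh[:, 0], neigh[:, 1], neigh[:, 2]]]
        patterns = data[neigh[:, 0], neigh[:, 1], neigh[:, 2], :].T  # trials x voxels
        # The summary statistic is assigned to the center voxel, building a
        # whole-brain map of local information coding.
        out[tuple(center)] = decode(patterns, labels)
    return out
```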

After several years of searchlight MVPA, we now have an unprecedented opportunity to summarize our knowledge of information coding in the brain. This is the aim of the current paper. In the literature, most cognitive tasks comprise visual and/or auditory input, task rules, and motor output, so we focus our analysis on coding of these task features. We examine the frequency of information coding reported in five brain networks: the visual, auditory, and motor networks; the frontoparietal multiple-demand (MD; Duncan, 2006, 2010) or “task-positive” (Fox et al., 2005) network; and a “task-negative” (Fox et al., 2005) or “default mode” (Raichle et al., 2001) network (DMN).

Although traditional accounts of brain organization emphasized modularity of function, several recent proposals highlight the flexibility of many brain regions (e.g., Yeo et al., 2015; Dehaene & Naccache, 2001; Duncan, 2001). For example, one of the most consistent findings in human neuroimaging is a characteristic pattern of activation in the frontoparietal MD network across a wide range of different cognitive tasks (e.g., Yeo et al., 2015; Fedorenko, Duncan, & Kanwisher, 2013; Niendam et al., 2012; Dosenbach et al., 2006; Naghavi & Nyberg, 2005; Owen, McMillan, Laird, & Bullmore, 2005; Duncan & Owen, 2000). This common activity may reflect the common need for cognitive control, one aspect of which is proposed to be the adaptive representation of task-relevant information (Duncan, 2001, 2010). Accordingly, the suggestion is that single neurons in the MD regions adjust their pattern of firing to encode the specific information currently relevant for the task, including stimuli, cues, rules, responses, etc.

The result of our review is a balanced and highly structured picture of brain organization. According to the MVPA data published in the last decade, auditory, visual, and motor networks predominantly code information from their own domain, whereas the frontoparietal MD network is characterized by domain generality, coding all four task features (visual, auditory, motor, and rule information) more frequently than other brain areas. After correcting for network area and the number of studies examining each feature, the contribution of the DMN and cortex elsewhere is minor. Although sensory and motor networks are relatively specialized for information in their own domain, the MD network appears to act as a domain-general core with the capacity to code different aspects of a task as needed for behavior.

METHODS

Paper Selection

We identified peer-reviewed papers published up until the end of December 2014 by searching the PubMed, Scopus, Web of Science, HighWire, JSTOR, Oxford University Press Journals, and ScienceDirect databases with the following search terms: “MVPA searchlight,” “multivariate analysis searchlight,” “multivoxel analysis searchlight,” and “MVPA spotlight” in any field. We additionally retrieved all the studies listed by Google Scholar as citing Kriegeskorte et al. (2006), in which the procedure for searchlight MVPA was first described. This yielded 537 empirical papers (excluding reviews, comments, methods papers, and conference abstracts). Of these, we included studies that performed volumetric searchlight analysis (Kriegeskorte et al., 2006) across the whole brain of healthy adults and reported a complete list of the coordinates of peak decoding in template (MNI or TAL) space.1 Because most tasks comprise visual or auditory input, task rules, and motor output, we focused on these task features. From each of the papers, we identified independent analyses that isolated the multivoxel representation of a single one of these task features. To ensure independence, if a paper reported two or more nonindependent analyses (e.g., analyses of overlapping aspects of the same data), only one analysis was included. We excluded any analyses in which sensory and motor responses were confounded (e.g., if the same visual stimulus was associated with the same motor response). This procedure yielded a total of 100 independent analyses from 57 papers.

Characterization of Task Features

We categorized each of the 100 analyses according to the task feature it examined: multivoxel discrimination between two or more visual stimuli, two or more auditory stimuli, two or more task rules, or two or more motor responses (Table 1). This categorization was done twice, first as inclusively as possible and second using stricter criteria (Table 1, second column). For the strict categorization, we excluded analyses in which the multivoxel discrimination pertained both to an aspect of the physical stimulus and to a higher-level stimulus attribute such as emotion or semantic category. Analyses focusing on linguistic stimuli (e.g., written or spoken words) were not included, on the basis that representation of these stimuli would be likely to load on language-related processing more than on visual and/or auditory information processing.

Table 1. 

Multivoxel Decoding Analyses Included in This Study

Category | Included in Strict Categorization? | Study | Decoding Analysis | Searchlight Size | Threshold at Which Results Were Reported | Number of Participants
Visual | Yes | Bode and Haynes (2009) | Target stimuli (dynamic color patterns) | 4 voxel radius | p < .001 uncorr | 12
Visual | Yes | Mayhew et al. (2010) | Radial vs. concentric glass pattern stimuli (young adults) | 9 mm radius (av. 98 voxels) | p < .05, k = 5 mm² | 10
Visual | Yes | Mayhew et al. (2010) | Radial vs. concentric glass pattern stimuli (older adults) | 9 mm radius (av. 98 voxels) | p < .05, k = 5 mm² | 10
Visual | Yes | Bogler, Bode, and Haynes (2011) | Most salient visual quadrant of grayscale pictures of natural scenes | 6 voxel radius | p < .05, FWE | 21
Visual | Yes | Carlin, Calder, Kriegeskorte, Nili, and Rowe (2011) | View-invariant gaze direction | 5 mm radius | p < .05 FWE | 18
Visual | Yes | Kahnt et al. (2011) | Stimulus orientation (low contrast Gabor in upper right visual field) | 4 voxel radius | p < .0001, k = 20, cluster level corr p < .001 | 20
Visual | Yes | Kalberlah et al. (2011) | Which of 4 spatial locations participant is attending and responding to | 12 mm radius | p < 10e−5 at vertex level, p < .005 at cluster level | 18
Visual | Yes | Vickery et al. (2011) | Visual stimulus (coin showing heads vs. tails side) | 27 voxel cube | p < .001 uncorr, k = 10 | 17
Visual | Yes | Woolgar, Thompson, et al. (2011) | Stimulus position | 5 mm radius | p < .001 uncorr | 17
Visual | Yes | Woolgar, Thompson, et al. (2011) | Background color of screen (within rule) | 5 mm radius | p < .001 uncorr | 17
Visual | Yes | Guo, Preston, Das, Giesbrecht, and Eckstein (2012) | Target present vs. absent in full color natural scenes | 9 mm radius (153 voxels volume) | p < .005, uncorr | 12
Visual | Yes | Hebart, Donner, and Haynes (2012) | Direction of motion (dynamic random dot patterns) | 10 mm radius | p < .00001, uncorr, k = 30 | 22
Visual | Yes | Peelen and Caramazza (2012) | Perceptual similarity of 12 familiar objects | 8 mm radius | p < .05 Bonferroni, k = 5 | 15
Visual | Yes | Peelen and Caramazza (2012) | Pixelwise similarity of 12 familiar objects | 8 mm radius | p < .05 Bonferroni, k = 5 | 15
Visual | Yes | Reverberi et al. (2012a) | Visual cue (two visually unrelated abstract shapes coding for the same rule) | 4 voxel radius (3 × 3 × 3.75 mm voxels) | p < .05 cluster corr | 13
Visual | Yes | Billington, Furlan, and Wann (2013) | Congruency in depth information (congruent looming and binocular disparity cues vs. incongruent looming and binocular disparity cues) | 6 mm radius | p < .01, Bonferroni corr | 16
Visual | Yes | Bode, Bogler, and Haynes (2013) | Black and white photograph (piano vs. chair) | 4 voxel radius | p < .05, FWE cluster corr | 15
Visual | Yes | Mayhew and Kourtzi (2013) | Radial vs. concentric glass pattern stimuli (young adults) | 9 mm radius (av. 98 voxels volume) | p < .05, cluster threshold 5 mm² | 10
Visual | Yes | Mayhew and Kourtzi (2013) | Radial vs. concentric glass pattern stimuli (older adults) | 9 mm radius (av. 98 voxels volume) | p < .05, cluster threshold 5 mm² | 10
Visual | Yes | Clarke and Tyler (2014) | Low-level visual features (early visual cortex model) | 7 mm radius | p < .05, FDR, k = 20 | 16
Visual | Yes | Pollmann et al. (2014) | Gabor patches differing in both color (red/green) and spatial frequency | 3 voxels (10.5 mm) radius (123 voxels volume) | p < .001 | 15
Visual | No | Clithero et al. (2011) | Image of face vs. currency (currency influenced participant payment), within-participants analysis | 12 mm radius (max 123 voxels) | p < .05 Bonferroni correction | 16
Visual | No | Vickery et al. (2011) | Visual stimulus (photograph of hand making rock/paper/scissor action) | 27 voxel cube | p < .001 uncorr | 22
Visual | No | Christophel, Hebart, and Haynes (2012) | Complex artificial stimuli consisting of multicolored random fields (STM storage of visual stimuli during delay phase) | 4 voxel radius | p < .05 FWE, k = 20 | 17
Visual | No | Bode, Bogler, Soon, and Haynes (2012) | Category of visual stimulus (piano/chair/noise), high-visibility condition | 4 voxel radius | p < .0001 uncorr | 14
Visual | No | Carlin et al. (2012) | Left vs. right head turn (silent video clips) | 5 mm radius | p < .05 FWE | 17
Visual | No | Gilbert, Swencionis, and Amodio (2012) | Black vs. white faces (color photographs, collapsed over trait and friendship judgment tasks) | 3 voxel radius | p < .05 FWE | 16
Visual | No | Kaplan and Meyer (2012) | Discrimination between five 5-sec video clips showing manipulation of different objects (plant, tennis ball, skein of yarn, light bulb, set of keys), within-subject analysis | 8 mm radius (average 75 voxels) | >maximum value given by a permutation test in an occipital spherical ROI for each subject |
Visual | No | Linden, Oosterhof, Klein, and Downing (2012) | Visual category (face/body/scene/flower), encoding phase | 100 voxels volume | p < .05, voxelwise uncorr, cluster correction using Monte Carlo simulation | 18
Visual | No | Linden et al. (2012) | Visual category (face/body/scene/flower), delay 1 phase | 100 voxels volume | p < .05, voxelwise uncorr, cluster correction using Monte Carlo simulation | 18
Visual | No | Murawski, Harris, Bode, Dominguez, and Egan (2012) | Subliminal presentation of Apple logo vs. neutral cup | 3 voxel radius | p < .05 FWE cluster corr | 13
Visual | No | Peelen and Caramazza (2012) | Conceptual level information about 12 familiar objects: kitchen vs. garage, and rotate vs. squeeze | 8 mm radius | p < .05 Bonferroni, k = 5 | 15
Visual | No | Weygandt, Schaefer, Schienle, and Haynes (2012) | Food vs. neutral images (normal weight control group) | 3 voxel radius | p < .001 uncorr k = 10 and p < .05 FWE | 19
Visual | No | McNamee et al. (2013) | Visual category (food/money/trinkets) | 20 mm radius | p < .05 FDR k = 10 | 13
Visual | No | Clarke and Tyler (2014) | Semantic features of visual stimuli | 7 mm radius | p < .05, FDR, k = 20 | 16
Visual | No | Clarke and Tyler (2014) | Category of visual stimuli (animals vs. fruits vs. vegetables vs. vehicles vs. tools vs. musical instruments) | 7 mm radius | p < .05, FDR, k = 20 | 16
Visual | No | Clarke and Tyler (2014) | Animals and plants vs. nonbiological visual stimuli | 7 mm radius | p < .05, FDR, k = 20 | 16
Visual | No | Clarke and Tyler (2014) | Animal visual stimuli (a model in which patterns for animal stimuli are similar to one another and all other stimuli are dissimilar from animals and from each other) | 7 mm radius | p < .05, FDR, k = 20 | 16
Visual | No | Simanova et al. (2014) | Category of visual stimulus (animal vs. tool), photographs | 2.5 voxels, 8.75 mm radius (33 voxel volume) | p < .001, FDR | 14
Visual | No | FitzGerald, Schwartenbeck, and Dolan (2014) | Discriminate between two abstract visual stimuli when attending to visual stimuli (stimuli differentially associated with financial reward) | 6 mm radius (31 voxels volume) | p < .05 FWE-corrected | 25
Visual | No | FitzGerald et al. (2014) | Discriminate between two abstract visual stimuli when attending to concurrently presented auditory stimuli (visual stimuli differentially associated with financial reward) | 6 mm radius (31 voxels volume) | p < .05 FWE-corrected | 25
Auditory | Yes | Alink, Euler, Kriegeskorte, Singer, and Kohler (2012) | Direction of auditory motion (rightwards vs. leftwards) | 1.05 cm radius (72 voxel volume) | T > 4.0, k > 4 | 19
Auditory | Yes | Lee, Janata, Frost, Hanke, and Granger (2011) | Ascending vs. descending melodic sequences | 2 neighboring voxels (max 33 voxels) | p < .05 cluster corr | 12
Auditory | Yes | Linke, Vicente-Grabovetsky, and Cusack (2011) | Frequency-specific coding (discriminate pure tones in four frequency ranges) | 10 mm radius | p < .005 FDR | 16 (9 with two sessions, 7 with one session)
Auditory | Yes | Giordano et al. (2013) | Pitch (median) | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | Yes | Giordano et al. (2013) | Loudness (median) | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | Yes | Giordano et al. (2013) | Spectral centroid (interquartile range) | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | Yes | Giordano et al. (2013) | Harmonicity (median) | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | Yes | Giordano et al. (2013) | Loudness (cross-correlation) | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | Yes | Lee, Turkeltaub, Granger, and Raizada (2012) | /ba/ vs. /da/ speech category (3 voxel searchlight) | 3 voxel radius | p < .001 voxelwise uncorr and p < .05 clusterwise-corrected | 13
Auditory | Yes | Lee et al. (2012) | /ba/ vs. /da/ speech category (reanalysis of a previous data set) | 3 voxel radius | p < .001 voxelwise uncorr and p < .05 clusterwise-corrected | 12
Auditory | Yes | Merrill et al. (2012) | Hummed speech prosody (rhythm + pitch) vs. speech rhythm | 6 mm radius | p < .05 cluster size-corrected | 21
Auditory | Yes | Merrill et al. (2012) | Hummed song melody (pitch + rhythm) vs. musical rhythm | 6 mm radius | p < .05 cluster size-corrected | 21
Auditory | Yes | Jiang, Stecker, and Fine (2014) | Apparent direction of unambiguous auditory motion (left vs. right), 50% coherence, sighted subjects | 2 mm radius (33 voxels volume) | p < .001 corrected |
Auditory | Yes | Klein and Zatorre (2015) | Musical category (minor third vs. major third) | 3 voxels radius (max 123 voxels volume) | p < .001 (uncorrected) | 10
Auditory | No | Giordano et al. (2013) | Human similar | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | No | Giordano et al. (2013) | Living similar | 6.25 mm radius | p < .0001, k = 20 | 20
Auditory | No | Kotz et al. (2013) | Emotion of vocal expression (angry, happy, sad, surprised, or neutral) | 9 mm radius (3 × 3 × 3 mm voxels) | p < .0001 uncorr | 20
Auditory | No | Merrill et al. (2012) | Spoken sentences (words + rhythm + pitch) vs. hummed speech prosody (rhythm + pitch) | 6 mm radius | p < .05 cluster size-corrected | 21
Auditory | No | Merrill et al. (2012) | Sung sentences (words + pitch + rhythm) vs. hummed song melody (pitch + rhythm) | 6 mm radius | p < .05 cluster size-corrected | 21
Auditory | No | Simanova et al. (2014) | Category of sound (animal vs. tool) | 2.5 voxels, 8.75 mm radius (33 voxel volume) | p < .001, FDR | 14
Auditory | No | Boets et al. (2013) | Speech sounds (sounds differing on both consonant and vowel vs. neither), typical readers | 9 mm radius (123 voxels volume) | p < .001 voxelwise uncorr and p < .05 FWE clusterwise | 22
Auditory | No | Zheng et al. (2013) | Speech stimuli vs. noise during passive listening | 4 mm radius | p < .05, FWE | 20
Auditory | No | Jiang et al. (2014) | Reported apparent direction of ambiguous auditory motion (left vs. right), 0% coherence, sighted subjects | 2 mm radius (33 voxels volume) | p < .001 corrected |
Rule | Yes | Haynes et al. (2007) | Intended task: addition vs. subtraction | 3 voxel radius | p < .005 uncorr |
Rule | Yes | Bode and Haynes (2009) | Stimulus–response mapping rule | 4 voxel radius | p < .001 uncorr | 12
Rule | Yes | Greenberg, Esterman, Wilson, Serences, and Yantis (2010) | Type of attentional shift to make (shift attention to alternate location vs. shift attention to alternate color) | 27 voxel cube | p < .05 cluster correction |
Rule | Yes | Vickery et al. (2011) | Participant's upcoming choice (switch or stay relative to last choice) | 27 voxel cube | p < .001 uncorr | 22
Rule | Yes | Woolgar, Thompson, et al. (2011) | Stimulus–response mapping rule | 5 mm radius | p < .001 uncorr | 17
Rule | Yes | Hebart et al. (2012) | Stimulus–response mapping rule | 10 mm radius | p < .00001, uncorr, k = 30 |
Rule | Yes | Momennejad and Haynes (2012) | Task participants are intending to perform (parity or magnitude task), during maintenance phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12
Rule | Yes | Momennejad and Haynes (2012) | Task participants are performing (parity or magnitude task), during retrieval phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12
Rule | Yes | Momennejad and Haynes (2012) | Time delay after which participants should self-initiate a task switch (15, 20, or 25 sec), during maintenance phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12
Rule | Yes | Nee and Brown (2012) | Higher level task context (first delay period) | 10 mm radius | p < .05 cluster-corrected (p < .005 with k = 66; extent threshold given using simulations in AlphaSim) | 21
Rule | Yes | Nee and Brown (2012) | Higher and lower level task context (second delay period) | 10 mm radius | p < .05 cluster-corrected (p < .005 with k = 66; extent threshold given using simulations in AlphaSim) | 21
Rule | Yes | Reverberi et al. (2012a) | Rule representation (e.g., if there is a house, press left) | 4 voxel radius (3 × 3 × 3.75 mm voxels) | p < .05 cluster corr | 13
Rule | Yes | Reverberi et al. (2012b) | Stimulus–response mapping “rule identity” (if furniture then A, if transport then B vs. if furniture then B, if transport then A) | 4 voxel radius | p < .05 corrected | 14
Rule | Yes | Reverberi et al. (2012b) | Order in which rules are to be applied | 4 voxel radius | p < .05 corrected | 14
Rule | Yes | Soon et al. (2013) | Task (search for house/face/car/bird) during preparatory period | 20 mm radius | p < .05 Bonferroni-corrected against 55%, 25 mm cluster threshold | 15
Rule | Yes | Zhang et al. (2013) | Current rule: attend to direction of motion, color, or size of dots in a random dot kinematogram | 2 voxel radius | p < .001 uncorr, k = 35 | 20
Rule | Yes | Helfinstein et al. (2014) | Safe vs. risky choice to be taken by the participant on the following trial | Not reported | >60%, whole-brain cluster-corrected p < .05 via comparison with 1,000 random permutations | 108
Rule | Yes | Jiang and Egner (2014) | Congruency of face/word compound stimulus (congruent vs. incongruent) in a face gender categorization task | 3 voxel radius (3 × 3 × 3 mm voxels) | p < .05 corrected | 21
Rule | Yes | Jiang and Egner (2014) | Congruency of decision with side of response (congruent vs. incongruent) | 3 voxel radius (3 × 3 × 3 mm voxels) | p < .05 corrected | 21
Rule | Yes | Wisniewski, Reverberi, Tusche, and Haynes (2015) | Stimulus–response mapping rule (selected by participant) | 4 voxel radius | p < .001 FWE corr | 14
Rule | No | Gilbert (2011) | Dual task (2-back + prospective memory task) vs. single task (2-back) | 3 voxel radius | p < .05 FWE, k = 10 | 32
Rule | No | Ekman, Derrfuss, Tittgemeyer, and Fiebach (2012) | Prepare for upcoming color or motion task vs. neutral condition in which upcoming task was not known (no preparation possible) | 8 mm radius | nonparametric permutation test, FDR threshold at q = 0.05, peaks more than 24 mm apart |
Rule | No | Li and Yang (2012) | Task set: categorize glass patterns as radial or concentric when stimuli vary on angle vs. categorize glass patterns as radial or concentric when they vary on signal level | 9 mm radius (voxels 3 × 3 × 3 mm) | p < .01, cluster correction using Monte Carlo simulation | 20
Motor | Yes | Soon et al. (2008) | Left vs. right button press intention (before conscious decision indicated) | 3 voxel radius | p < .05 FWE | 14
Motor | Yes | Soon et al. (2008) | Left vs. right button press (execution) | 3 voxel radius | p < .05 FWE | 14
Motor | Yes | Soon et al. (2008) | Left vs. right response preparation (cued when to make choice) | 3 voxel radius | p < .001 uncorr | 14
Motor | Yes | Bode and Haynes (2009) | Motor response (leftward vs. rightward joystick movement) | 4 voxel radius | p < .05 FWE | 12
Motor | Yes | Woolgar, Thompson, et al. (2011) | Button press response (index vs. middle finger) | 5 mm radius | p < .001 uncorr | 17
Motor | Yes | Bode et al. (2012) | Button press response (index/middle/ring finger of right hand) | 4 voxel radius | p < .0001 uncorr | 14
Motor | Yes | Bode et al. (2013) | Button press (right index vs. middle finger) | 4 voxel radius | p < .05, FWE cluster corr | 15
Motor | No | Carp et al. (2011) | Left vs. right index finger tapping | 12 mm radius | p < 1e−7, k = 50 | 37
Motor | No | Colas and Hsieh (2014) | Motor bias (whether participant will later press left or right button, operated with index finger of each hand, prestimulus display) | 5 voxel sided cube | p < .025 | 14
Motor | No | Colas and Hsieh (2014) | Left vs. right button press (index finger of each hand) | 5 voxel sided cube | p < .01, k = 9 | 14
Motor | No | Huang et al. (2014) | Left vs. right key press (left vs. right hand) | 5 voxels (15 mm) cube | p < .01 cluster-corrected | 14

corr = corrected; uncorr = uncorrected; FDR = false discovery rate correction; FWE = family-wise error correction; k = cluster extent threshold.

Analyses pertaining to the discrimination of visual stimuli included discrimination of stimulus orientation, position, color, and form. Additional analyses pertaining to the semantic category of the visual stimulus (e.g., animals vs. tools; Simanova, Hagoort, Oostenveld, & van Gerven, 2014) and stimuli that were consistently associated with different rewards (e.g., face vs. currency, where a picture of currency indicated a monetary reward; Clithero, Smith, Carter, & Huettel, 2011) were included in our lenient categorization but excluded from the strict categorization. In our strict categorization, we also excluded two further studies in which there was a possibility that the visual stimulus could evoke representation of motor actions. These were videos of head turns (Carlin, Rowe, Kriegeskorte, Thompson, & Calder, 2012) and photos of hands in rock/paper/scissor pose (Vickery, Chun, & Lee, 2011).

Analyses pertaining to the coding of auditory information included discrimination of the direction of auditory motion, pitch, loudness, and melody. Analyses pertaining to the semantic category of sound (e.g., animals vs. tools; Simanova et al., 2014) or emotion of vocal expression (Kotz, Kalberlah, Bahlmann, Friederici, & Haynes, 2013) were also included in our lenient categorization and excluded from the strict categorization.

Analyses pertaining to the discrimination of task rules included discrimination of different stimulus–response mappings (e.g., Bode & Haynes, 2009), intended tasks (e.g., addition vs. subtraction; Haynes et al., 2007), and task sets (e.g., attend to motion vs. color vs. size; Zhang, Kriegeskorte, Carlin, & Rowe, 2013). Two analyses were included in our lenient categorization but excluded from the strict categorization. One discriminated a dual task from a single task (Gilbert, 2011) and was excluded from the strict categorization because of the obvious confound with difficulty (for discussion, see Woolgar, Golland, & Bode, 2014; Todd, Nystrom, & Cohen, 2013); the other pertained to discrimination of task sets in which the stimuli were very similar, but not identical, between the two tasks (Li & Yang, 2012).

Analyses pertaining to the discrimination of motor responses included discrimination of different button presses and of the direction of joystick movement during response preparation and execution. One analysis that discriminated between left and right finger tapping (Carp, Park, Hebrank, Park, & Polk, 2011) was also excluded from the strict categorization because it was not clear whether the side to tap was confounded with a visual cue. Two further studies were excluded from our stricter analysis because it was unclear which of two possible motor responses was modeled (Colas & Hsieh, 2014; Huang, Soon, Mullette-Gillman, & Hsieh, 2014).

Analyses

Our first analysis quantified the prevalence of visual, auditory, rule, and motor information coding in different brain networks. We focused on Visual, Auditory, and Motor networks (capitalized to distinguish them from visual, auditory, and motor task features), the frontoparietal MD network (Fedorenko et al., 2013; Fox et al., 2005; Duncan & Owen, 2000), and the DMN (Fox et al., 2005; Raichle et al., 2001). Our definition of the MD network was taken from the average activation map of Fedorenko et al. (2013), which is freely available online at imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem. This map indicates the average activation for high relative to low demand versions of seven tasks, including arithmetic, spatial and verbal working memory, flanker, and Stroop tasks. Thus, the MD network definition is activation based: It indexes regions that show a demand-related univariate increase in activity across tasks. The map is symmetrical about the midline because data from the two hemispheres were averaged together in the original paper (Fedorenko et al., 2013). We used the parcellated map provided online, in which the original average activation map was thresholded at t > 1.5 and then split into anatomical subregions (imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem). This map includes restricted regions of frontal, parietal, and occipitotemporal cortices as well as a number of small subcortical regions; we included only the frontal and parietal regions. The resulting 13 MD regions were located in and around the left and right anterior inferior frontal sulcus (aIFS; center of mass [COM] ±35 47 19, 5.0 cm³), left and right posterior inferior frontal sulcus (pIFS; COM ±40 32 27, 5.7 cm³), left and right anterior insula/frontal operculum (AI/FO; COM ±34 19 2, 7.9 cm³), left and right inferior frontal junction (IFJ; COM ±44 4 32, 10.1 cm³), left and right premotor cortex (PM; COM ±28 −2 56, 9.0 cm³), bilateral ACC/pre-SMA (COM 0 15 46, 18.6 cm³), and left and right intraparietal sulcus (IPS; COM ±29 −56 46, 34.0 cm³). Visual, Auditory, Motor, and DMN networks were taken from the whole-brain map provided by Power et al. (2011), which partitions the brain into networks based on resting-state connectivity. The Visual network consisted of a large cluster of 182.6 cm³ covering the inferior, middle, and superior occipital, calcarine, lingual, and fusiform gyri and the cuneus (BA 17, 18, 19, 37), with COM at MNI coordinates 1 −79 6, plus small clusters in left BA 37 (0.22 cm³, COM −54 −65 −21) and right inferior parietal lobe (0.17 cm³, COM 26 −55 55, BA 7). The Auditory network comprised two large clusters in left and right superior temporal gyrus and rolandic operculum (23.4 cm³ in each hemisphere, with COM at −51 −22 12 and 52 −19 10, BA 22, 42). The Motor network comprised a large cluster over the precentral and postcentral gyri, paracentral lobule, and SMA (107.7 cm³, COM 1 −25 60, BA 4, 5, 6), and small clusters in the SMA at the midline (0.04 cm³, COM 3 7 72) and left and right middle temporal gyrus (0.07 cm³ with COM −48 −64 11 and 0.02 cm³ with COM 55 −60 6).
The DMN comprised six main clusters: around the precuneus (extending to mid-cingulate cortex; 43.9 cm³, COM −1 −51 31, BA 7, 23), ventral ACC and orbital frontal cortex extending dorsally along the medial part of the superior frontal gyrus (107.2 cm³, COM −2 42 24, BA 9, 10, 11, 32), left and right angular gyrus (12.2 cm³, COM −43 −66 34; 10.6 cm³, COM 47 −62 32; BA 39), and left and right middle temporal lobe (18.7 cm³, COM −58 −17 −13; 15.0 cm³, COM 58 −11 −17; BA 21, 20). To ensure that the networks did not overlap, the MD network was masked with each of the other networks; our definition of the MD network therefore pertained to voxels that were not part of the Visual, Auditory, Motor, or DMN networks. To serve as a comparison with our five principal networks, all other voxels in the voxelwise map of Power et al. (2011), which corresponds to the automated anatomical labeling (AAL) atlas (Tzourio-Mazoyer et al., 2002) and excludes the cerebellum, ventricles, and large white matter tracts, were considered as a residual “Other” network. Definitions of the five principal networks are depicted in Figure 1.
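
As an illustration of this masking logic, the following sketch (Python with nibabel) shows how non-overlapping network definitions of this kind can be assembled. The file names are hypothetical placeholders, not the names of the published maps, and the masks are assumed to be binary NIfTI images in a common MNI-space grid.

```python
import numpy as np
import nibabel as nib

# Load hypothetical binary masks for the five principal networks.
masks = {name: nib.load(f"{name}_mask.nii").get_fdata() > 0
         for name in ["Visual", "Auditory", "Motor", "DMN", "MD"]}
brain = nib.load("AAL_brain_mask.nii").get_fdata() > 0  # AAL-based whole-brain mask

# Remove overlap: MD voxels that fall in any other network are excluded,
# so "MD" refers only to voxels outside Visual, Auditory, Motor, and DMN.
for other in ["Visual", "Auditory", "Motor", "DMN"]:
    masks["MD"] &= ~masks[other]

# Everything inside the brain mask but in none of the five networks is "Other".
masks["Other"] = brain & ~np.any(list(masks.values()), axis=0)
```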

Figure 1. 

Number of significant decoding points reported in each network, after correcting for the number of analyses examining coding of each task feature and network volume. Asterisks indicate significance of chi-square or exact binomial goodness of fit tests examining whether there was more coding in each principal network compared with Other for all points (above bars) or for each task feature separately (asterisks on colored bar segments). Statistical testing was carried out for the strict categorization data only. *p < .05, **p < .01, ***p < .00001.

For each of our task features, we counted the number of decoding peaks that were reported in each of our six networks, including Other (any decoding peaks reported in TAL coordinates were converted to MNI152 space using tal2mni; imaging.mrc-cbu.cam.ac.uk/downloads/MNI2tal/tal2mni.m). To visualize these data, for each task feature and network, we divided the relevant tally by the number of reported analyses for that task feature and by the volume of the network, and plotted the results on a stacked bar chart. We visualized the data from the lenient and strict categorizations separately. Using data from the strict categorization, we then carried out a series of chi-square analyses to test for statistical differences in the distribution of information coding across the networks. First, we carried out a one-way chi-square analysis on the total number of decoding peaks in each network. For this, the observed values were the raw numbers of decoding peaks (across all task features) reported in each network, and the expected values were set proportional to the volume of each network. This analysis tests whether the distribution of information coding between the networks is predicted by network volume. Second, we carried out a chi-square test of independence to assess whether the distribution of information about each task feature (visual, auditory, motor, and rule decoding points) was independent of network (MD, Visual, Auditory, Motor, DMN, and Other). Finally, where significant effects were found in these first two analyses, we carried out a series of post hoc analyses considering each task feature and region separately to clarify the basis for the effect. For each task feature separately, we compared the distribution of observed coding (tally of decoding points in each network) to that predicted by the relative volumes of the six networks. This was done using chi-square tests (visual and rule information) or the equivalent exact multinomial goodness-of-fit test for situations where >20% of expected values were <5 (motor and auditory information; implemented in R version 3.2.2 [R Core Team, 2015] using the XNomial package [Engels, 2014]). Finally, we asked whether the tally of observed coding in each of the five principal networks separately was greater than that in Other, using a one-way chi-square test or a one-tailed exact binomial test where any expected value was <5.
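
As an illustration, the volume-corrected goodness-of-fit test can be computed as below (Python with SciPy). The counts and volumes shown are invented for illustration and are not the values reported in this paper.

```python
import numpy as np
from scipy.stats import chisquare

networks = ["MD", "Visual", "Auditory", "Motor", "DMN", "Other"]
peaks = np.array([90, 80, 20, 25, 20, 130])                 # hypothetical peak tallies
volume = np.array([110.0, 183.0, 47.0, 108.0, 208.0, 600.0])  # hypothetical volumes (cm^3)

# Under the null hypothesis, peaks land in networks in proportion to volume.
expected = peaks.sum() * volume / volume.sum()
stat, p = chisquare(f_obs=peaks, f_exp=expected)
print(f"chi2({len(networks) - 1}) = {stat:.2f}, p = {p:.2g}")
```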

Our second analysis concerned subdivisions within the MD network. Although the observation of the MD activation pattern in response to many kinds of demand emphasizes the similarity of these regions' responses, we nonetheless expect some functional differences between them (e.g., Fedorenko et al., 2013). To explore this, we first carried out a one-way chi-square test comparing the total number of decoding peaks reported in the seven MD regions (aIFS, pIFS, AI/FO, IFJ, PM, ACC/pre-SMA, IPS; data pooled over hemispheres). Next, we divided the MD regions into two subnetworks: a frontoparietal (FP) subnetwork, comprising the IPS, IFJ, and pIFS MD regions, and a cingulo-opercular (CO) subnetwork, comprising the ACC/pre-SMA, AI/FO, and aIFS MD regions (Power & Petersen, 2013; Power et al., 2011; Dosenbach et al., 2007). We carried out a one-way chi-square test comparing the total number of decoding peaks reported in the two subnetworks to each other and to coding in Other. We again used chi-square or the equivalent exact test (Freeman & Halton, 1951) to test for independence between subnetwork and task feature and to compare coding of each feature between the two subnetworks. Statistical testing was again carried out for the strict categorization data only.
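
Where expected counts are small, the one-tailed comparisons against Other referred to above can be computed with an exact binomial test, as in the following sketch. Again, all numbers are invented for illustration.

```python
from scipy.stats import binomtest

fp_peaks, other_peaks = 60, 118             # hypothetical peak tallies
fp_vol, other_vol = 55.0, 600.0             # hypothetical volumes (cm^3)
p_expected = fp_vol / (fp_vol + other_vol)  # chance a peak lands in FP by volume alone

# One-tailed test: is coding in FP more frequent than volume alone predicts?
result = binomtest(fp_peaks, n=fp_peaks + other_peaks, p=p_expected,
                   alternative="greater")
print(f"one-tailed exact binomial p = {result.pvalue:.3g}")
```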

RESULTS

We summarized 100 independent decoding analyses, reported in 57 published papers, that isolated the multivoxel representation of a single one of the following task features: visual or auditory stimuli, task rules, or motor output. First, we compared information coding in each of our five a priori networks of interest, with Other included as a baseline. The data, shown in Figure 1, suggest a highly structured distribution. For data from the strict categorization (Figure 1B), we used a series of chi-square analyses and exact tests to examine the statistical differences between networks. First, we asked whether there was more decoding in some networks compared with others, over and above the differences expected due to variation in network volume (see Methods). Indeed, the total number of decoding peaks varied significantly between the six networks even after network volume was accounted for (χ²(5, n = 365) = 157.16, p < .00001). Second, we asked whether there was a relationship between the distribution of coding of the different task features and the different brain networks. This chi-square test of independence was also highly significant (χ²(15, n = 365) = 172.34, p < .00001), indicating a significant relationship between task feature and brain network. We carried out a series of post hoc analyses to clarify the basis for these effects. For this, we considered each task feature separately and compared the number of reported points to the number that would be expected based on the relative volumes of the six networks. For all four task features separately, coding differed significantly between networks (visual information: χ²(5, n = 153) = 188.37, p < .00001; auditory information: exact test, p < .00001; rule information: χ²(5, n = 151) = 29.47, p = .00002; motor information: exact test, p < .00001). For visual information, compared with expectations based on network volume, coding in the Visual (χ²(1, n = 84) = 140.71, p < .00001), Motor (exact test, p = .015), and MD (χ²(1, n = 77) = 119.65, p < .00001) networks was significantly more frequent than coding in Other. No such difference was seen for visual information coding in the DMN and Auditory networks (ps > .13). Auditory information coding was reported more frequently in the Auditory (exact test, p < .00001) and MD (exact test, p = .043) networks compared with Other (for DMN, Motor, and Visual networks compared with Other, ps > .68). Rule information coding was reported more frequently in the MD (χ²(1, n = 99) = 21.06, p < .00001) and Visual (χ²(1, n = 89) = 5.02, p = .03) networks compared with Other (equivalent tests for DMN, Auditory, and Motor networks, ps > .09). Motor information was coded more frequently in the Motor (exact test, p < .00001), MD (exact test, p = .008), and DMN (exact test, p = .019) networks compared with Other (equivalent tests for Visual and Auditory networks, ps > .61). Therefore, relative to Other, the MD network showed more coding of all four task features (visual, auditory, rule, and motor), the DMN showed more coding of motor information, the Motor network showed more coding of motor and visual information, the Visual network showed more coding of visual and rule information, and the Auditory network showed more coding of auditory information.

Our second series of analyses concerned subdivisions within the MD network, again using data from the strict categorization. First, we examined the total number of decoding peaks in each region, combining across task features (visual, auditory, motor, rule). There was no evidence for a difference between the seven MD regions compared with expectations based on region volume (data collapsed over hemisphere, χ²(6, n = 93) = 5.77, p = .45). Second, we asked whether there were differences in the reported representational content of two putative subnetworks: an FP subnetwork (IPS, IFJ, and pIFS), proposed to support transient control processes, and a CO subnetwork (ACC/pre-SMA, AI/FO, and aIFS), proposed to support sustained control processes (Dosenbach et al., 2007). The data are shown in Figure 2. There was no evidence for a difference in the frequency of information coding between these two subnetworks (χ²(1, n = 84) = 2.62, p = .11), with coding in both subnetworks more frequent than coding in Other (FP: χ²(1, n = 178) = 124.28, p < .00001; CO: χ²(1, n = 132) = 23.99, p < .00001). Interestingly, however, there was a significant relationship between subnetwork and information type (Freeman–Halton extension of Fisher's exact test, p = .002), suggesting that the two subnetworks had different representational profiles. The dissociation was driven by more coding of visual information in FP than in CO (χ²(1, n = 41) = 6.65, p = .010) and more coding of motor information in CO than in FP (two-tailed exact binomial test: 0% of motor points fell in FP, fewer than the 69.2% predicted from the two subnetwork volumes, p = .009). Visual points were reported in all FP regions as well as in ACC/pre-SMA and AI/FO, whereas motor points were reported only in ACC/pre-SMA and aIFS. There was no difference in coding between the subnetworks for rule or auditory information (ps > .48). The pattern of results did not change if ROIs were restricted to gray matter or if coordinates reported in TAL were converted to MNI using the tal2icbm_spm routine provided with GingerALE (www.brainmap.org/icbm2tal/) instead of tal2mni.

Figure 2. 

Number of significant decoding points reported in each MD subnetwork after correcting for the number of analyses examining coding of each task feature and subnetwork volume. Asterisks indicate significance of chi-square or exact binomial goodness of fit tests examining whether there was more coding in each subnetwork compared with Other for all points (above bars) or for each task feature separately (asterisks on colored bar segments) and comparing coding of each task feature between the two subnetworks (asterisks above colored horizontal lines). Statistical testing was carried out for the strict categorization data only. *p < .05, **p < .01, ***p < .00001.

To aid the reader in visualizing the data, we generated a whole-brain decoding map from the lenient categorization. For this, the peak decoding coordinates reported in each analysis were projected onto a single template brain, smoothed (15 mm FWHM Gaussian kernel), and thresholded (≥3 times the height of a single smoothed peak). The resulting map indicates the regions most commonly identified as making task-relevant distinctions in the literature. As can be seen in Figure 3, regions of maximum reported decoding corresponded well with our a priori networks. Information coding was frequently reported in the MD network (bilateral ACC/pre-SMA, right AI/FO, left IFJ, left and right aIFS, right pIFS, left PM, and left and right IPS), the Visual network (BA 18/19) extending to inferior temporal cortex, the Auditory network (left and right superior temporal gyrus), and the Motor network (left and right precentral and postcentral gyri). Additional small regions of frequent decoding were found in the dorsal part of the right middle frontal gyrus (BA 9/8), the ventral part of the right inferior frontal gyrus (BA 45/47), a ventral part of the left precuneus (BA 30), and the right temporoparietal junction (BA 21). We similarly generated whole-brain decoding maps for each task feature separately (using a lower threshold of 1.2 times the height of a single smoothed peak to account for the smaller number of data points in this visualization). As can be seen in Figure 4, the result was a reassuring picture in which visual information was predominantly encoded in the visual cortex, with some additional contribution from the frontal and parietal lobes; auditory information was predominantly reported in the auditory cortex; and motor information was primarily coded in the motor cortices. Rule was the most diffusely coded task feature, represented in frontal, parietal, and occipitotemporal cortices. These maps did not change markedly if the strict categorization data were used instead.
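
As a rough illustration of this map construction, the sketch below places unit peaks on a voxel grid, smooths them with a Gaussian kernel, and thresholds at three times the height of a single smoothed peak. The grid dimensions, peak coordinates, and assumed 3 mm isotropic voxel size are placeholders, not the parameters used for our figures.

    # Convert a 15 mm FWHM kernel to a standard deviation in voxel units,
    # assuming (for illustration only) 3 mm isotropic voxels
    fwhm_mm <- 15
    sigma   <- (fwhm_mm / 3) / (2 * sqrt(2 * log(2)))
    dims    <- c(40, 48, 40)   # voxel grid (placeholder)

    # Unnormalized 3-D Gaussian bump centred on one peak (separable product)
    gauss_bump <- function(dims, centre, sigma) {
      g1 <- function(n, c) exp(-((seq_len(n) - c)^2) / (2 * sigma^2))
      outer(outer(g1(dims[1], centre[1]), g1(dims[2], centre[2])),
            g1(dims[3], centre[3]))
    }

    # Summing a bump at each reported peak is equivalent to projecting unit
    # peaks onto the template and then smoothing
    peaks_vox <- rbind(c(10, 20, 15), c(30, 25, 18), c(12, 22, 16))  # placeholders
    map <- Reduce(`+`, lapply(seq_len(nrow(peaks_vox)),
                              function(i) gauss_bump(dims, peaks_vox[i, ], sigma)))

    # Threshold at 3x the height of one smoothed peak (unit height here)
    map_thresholded <- map * (map >= 3)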

Figure 3. 

Brain regions where significant decoding of visual, auditory, rule, and motor information was most frequently reported in the literature. Areas of maximal decoding are shown rendered on left and right hemisphere and on the medial surface (x = −4). To create this visualization, all the decoding peaks were projected onto a single template brain, smoothed, and summed, and the resulting image was thresholded at 3 times the maximum height of a single smoothed peak.

Figure 4. 

Brain regions where significant decoding of (A) visual, (B) auditory, (C) rule, and (D) motor information was most frequently reported in the literature. To create this visualization, the decoding peaks for each task feature (lenient categorization) were projected onto a single template brain, smoothed, and summed, and the resulting image was thresholded at 1.2 times the maximum height of a single smoothed peak. (E) Maps from A to D flattened and overlaid at 50% transparency.

DISCUSSION

The human brain is a massively parallel complex system. In the past three decades, PET and fMRI technologies have allowed us to probe the function of different parts of this system by assessing what regions are active in different tasks. In the last decade, MVPA has taken this endeavor to a new level, enabling us to study what aspects of stimuli, rules, and responses are discriminated in the local pattern of multivoxel activation in different brain regions. In this paper, we summarized the current state of the literature, drawing on 100 independent analyses, reported in 57 published papers, to describe the distribution of visual, auditory, rule, and motor information processing in the brain. The result is a balanced view of brain modularity and flexibility. Sensory and motor networks predominantly coded information from their own domain, whereas the frontoparietal MD network coded all the different task features we examined. The contribution of the DMN and voxels elsewhere was minor.

The observation that the MD network codes information from multiple domains fits well with an adaptive view of this system. Consistent with the observation of similar frontoparietal activity across many tasks (e.g., Yeo et al., 2015; Fedorenko et al., 2013; Duncan & Owen, 2000; Dosenbach et al., 2006), the proposal is that these regions adapt their function as needed for the task in hand (Duncan, 2001, 2010). To support goal-directed behavior in different circumstances, they are proposed to be capable of encoding a range of different types of information, including the details of auditory and visual stimuli that are relevant to the current cognitive operation (Duncan, 2010). Support comes from single unit recordings, in which the firing rates of prefrontal and parietal cells have been shown to code task rules (e.g., Sigala, Kusunoki, Nimmo-Smith, Gaffan, & Duncan, 2008; Wallis, Anderson, & Miller, 2001; White & Wise, 1999), behavioral responses (e.g., Asaad, Rainer, & Miller, 1998; Niki & Watanabe, 1976), auditory stimuli (e.g., Romanski, 2007; Azuma & Suzuki, 1984), and visual stimuli (e.g., Freedman & Assad, 2006; Freedman, Riesenhuber, Poggio, & Miller, 2001; Hoshi, Shima, & Tanji, 1998; Rao, Rainer, & Miller, 1997). Further support for an adaptive view of this system comes from the observation that the responses of single units in prefrontal and parietal regions adjust to code different information over the course of single trials (Kadohisa et al., 2013; Stokes et al., 2013; Rao et al., 1997) and make different stimulus distinctions in different task contexts (Freedman & Assad, 2006; Freedman et al., 2001). Accordingly, in human functional imaging, the strength of multivoxel codes in the MD system has been found to adjust according to task requirements, with perceptual discrimination increasing under conditions of high perceptual demand (Woolgar, Williams, & Rich, 2015; Woolgar, Hampshire, Thompson, & Duncan, 2011), rule discrimination increasing when rules are more complex (Woolgar, Afshar, Williams, & Rich, 2015), and representation of visual objects increasing when those objects are at the focus of attention (Woolgar, Williams, et al., 2015). These regions are also thought to make qualitatively different distinctions between visual stimuli in different task contexts (Harel, Kravitz, & Baker, 2014). The data presented here emphasize the extent of flexibility in these regions, suggesting that they are capable of representing task-relevant information from visual, auditory, rule, and motor domains.

Although each of the individual MD regions is known to respond to a wide range of cognitive demands (e.g., Fedorenko et al., 2013), it nonetheless seems likely that the different regions support somewhat different cognitive functions. Several organizational schemes have been proposed for the pFC, including a rostrocaudal axis along which different regions support progressively more abstract control processes (Badre & D'Esposito, 2007; Koechlin & Summerfield, 2007), ventral and dorsal segregation based on the modality of the information being processed (Goldman-Rakic, 1998), different types of attentional orienting (Corbetta & Shulman, 2002) or what the information will be used for (O'Reilly, 2010), and a medial/lateral segregation based on conflict monitoring and task set implementation (Botvinick, 2008), although some of these accounts have been challenged experimentally (Crittenden & Duncan, 2014; Grinband et al., 2011). One prominent subdivision of the MD system draws a distinction between an FP subnetwork comprising the MD regions on the dorsolateral prefrontal surface and the IPS, and a CO subnetwork comprising cortex around ACC/pre-SMA, AI/FO, and aIFS. This distinction is borne out in analyses of resting state connectivity (Power & Petersen, 2013; Power et al., 2011), and the two subnetworks have been ascribed various functions, for example, supporting transient versus sustained control processes (Power & Petersen, 2013; Dosenbach et al., 2007), “executive” versus “salience” systems (Seeley et al., 2007), and transformation versus maintenance of information (Hampshire, Highfield, Parkin, & Owen, 2012). In our data, there was no evidence for differences in the frequency with which information coding was reported in the seven (bilateral) MD regions separately. Subdividing the MD system into FP and CO subnetworks also resulted in comparable levels of coding overall in each subnetwork. However, there was a significant difference in the profile of task features coded by these two subnetworks, with more coding of visual information in FP than in CO and more coding of motor information in CO than in FP. In CO, motor points were reported both in the ACC/pre-SMA region known to support motor function and also in the aIFS. Clarification of the basis of the subnetwork coding difference, and how we should interpret it, will require further work.

Visual, auditory, and motor regions principally coded information from their own domain. However, the visual and motor networks also showed some domain generality, with coding of other task features. Particularly salient was the overlap between the maps for visual and rule information in the visual cortex (Figure 4E). In our review, it was difficult to rule out confounds between domains completely. For example, task rules were usually cued visually, meaning that the visual properties of the cues, as much as representation of the abstract rules per se, could drive discrimination between rules. However, there are some cases of rule coding in the visual cortex where this explanation is not sufficient. For example, we previously reported that discrimination between two stimulus–response mapping rules in the visual cortex generalizes over the two visual stimuli used to cue each rule (Woolgar, Thompson, et al., 2011). Similarly, Zhang et al. (2013) found that rule discrimination in the calcarine sulcus generalized over externally cued and internally chosen rules, and Soon, Namburi, and Chee (2013) reported rule discrimination in the visual cortex when rules were cued with an auditory cue. In some cases, rule discrimination in the visual cortex may reflect different preparatory signals, for example, if the two rules direct attention to different visual features (e.g., Zhang et al., 2013) or object categories (e.g., Soon et al., 2013), but this is not always the case: the two rules of Woolgar, Thompson, et al. (2011) required attention to the same features of identical visual stimuli. Intriguingly, both rule and response coding have previously been reported in the firing rates of single units in V4 of the macaque visual cortex (Mirabella et al., 2007).

In the motor cortex, the majority of reported coding was for discrimination between motor movements, but this region also showed appreciable coding of visual stimuli. Interestingly, population level responses in the primary motor cortex of the macaque have been reported to encode visual stimuli and stimulus–response mapping rules (e.g., Riehle, Kornblum, & Requin, 1994, 1997; Zhang, Riehle, Requin, & Kornblum, 1997). In the MVPA papers we studied, it was often difficult to say precisely what aspects of a stimulus underpinned a given multivoxel discrimination. For example, visual presentation of a familiar object might evoke representation of its associated properties in other sensory domains (e.g., implied somatosensory properties when watching manual exploration of objects; Kaplan & Meyer, 2012). We excluded any papers in which there were obvious associations between our task features, and in our stricter analysis, we also excluded any studies in which higher-level features such as semantic category differed between decoded items, or cases where items might evoke representations of associated motor actions. The remaining points of visual discrimination in the motor cortex were for discrimination between Gabor patches differing in color and spatial frequency (Pollmann, Zinke, Baumgartner, Geringswald, & Hanke, 2014), between spatial locations of a target (Kalberlah, Chen, Heinzle, & Haynes, 2011), between radial and concentric Glass patterns (Mayhew & Kourtzi, 2013; Mayhew, Li, Storrar, Tsvetanov, & Kourtzi, 2010), and between two abstract shapes cuing the same rule (Reverberi, Gorgen, & Haynes, 2012a). In one study, radial and concentric patterns had been associated with differential button presses during training, although during scanning, participants performed an unrelated task (Mayhew et al., 2010). In all other cases, any button press responses given by participants were orthogonal (Mayhew & Kourtzi, 2013) or unrelated (Pollmann et al., 2014; Reverberi et al., 2012a; Kalberlah et al., 2011; Mayhew et al., 2010) to the visual discrimination.

A few of the studies we included reported multivoxel coding in the DMN. In some cases, the reported discrimination in the DMN reflected participant intentions, such as coding of internally selected task choices (Momennejad & Haynes, 2012; Vickery et al., 2011; Haynes et al., 2007) or externally instructed task rules (Soon et al., 2013; Nee & Brown, 2012) during preparatory periods, the time delay after which participants will self-initiate a switch (Momennejad & Haynes, 2012), and the button that the participant intends to press (Soon, Brass, Heinze, & Haynes, 2008). In other cases, it reflected aspects of active tasks including the current rule (Zhang et al., 2013; Reverberi et al., 2012a; Reverberi, Gorgen, & Haynes, 2012b) and the stimulus (e.g., orientation of a Gabor [Kahnt, Grueschow, Speck, & Haynes, 2011], concentric versus radial Glass patterns [Mayhew & Kourtzi, 2013], and harmonicity of a sound [Giordano, McAdams, Zatorre, Kriegeskorte, & Belin, 2013]). Interestingly, this network has recently been reported to show activation during task switching and multivoxel discrimination between the tasks being switched to (Crittenden, Mitchell, & Duncan, 2015). Additionally, we recently reported multivoxel discrimination between stimulus–response mapping rules in the precuneus, overlapping a major node of the DMN, during an active stimulus–response task (Woolgar, Afshar, et al., 2015). Those data suggest a role for the DMN that is qualitatively different from the internally driven activities, such as mind wandering and introspection, with which this network is more typically associated (e.g., Buckner, Andrews-Hanna, & Schacter, 2008).

There was more coding of motor information in the DMN than in Other, but all five DMN motor coding points came from a single study (Soon et al., 2008). Four of these points corresponded to discriminatory activation in preparation for a left versus right button press at a time point before the participant had indicated their conscious intention to press a button, and the remaining point was for response preparation when participants were cued to make a choice. There were no motor coding points in the DMN during button press execution.

An important challenge for MVPA is to account for variables that differ between conditions on an individual participant basis, such as differences in RT (Woolgar et al., 2014; Todd et al., 2013). Because MVPA is usually carried out at the level of individual participants, with a directionless summary statistic (e.g., classification accuracy) taken to the second level, any effect of difficulty, effort, attention, time on task, trial order (etc.) will not average out at the group level. This may be a particular concern in regions such as the MD and DMN networks, which are known to show different overall activity levels according to task demand. It is difficult to estimate the extent to which these factors have contributed to the data analyzed here. Some of the included studies matched their conditions for difficulty (e.g., Zhang et al., 2013), explicitly accounted for differences in RT in their analysis (e.g., Woolgar, Thompson, et al., 2011), or used designs in which difficulty was unlikely to artifactually drive coding (e.g., passive viewing, Kaplan & Meyer, 2012), but many did not. Other studies sought to account for univariate effects of difficulty that could drive multivariate results, for example, by normalizing the multivoxel patterns to remove overall activation differences between conditions at the level of individual participants (e.g., Gilbert, 2011). However, because the effect of difficulty would not necessarily manifest as an overall activation difference, this could still fail to remove the effect of difficulty on decoding. In our stricter analysis, we excluded analyses in which there was an obvious difference in difficulty between discriminated conditions, but most studies did not report whether there were any differences between conditions on an individual participant basis. Note, though, that we have previously examined the extent to which trial by trial differences in RT contribute to decoding in empirical data and found the contribution to be minor (Crittenden et al., 2015; Erez & Duncan, 2015; Woolgar et al., 2014).
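
One of the normalization strategies mentioned above, removing each pattern's overall activation level before classification, can be sketched in a few lines of R; the data here are random placeholders.

    # Toy data: 20 trials x 100 voxels
    set.seed(1)
    X <- matrix(rnorm(20 * 100), nrow = 20)

    # Subtract each trial's mean across voxels so that a uniform activation
    # difference between conditions cannot, by itself, drive classification
    X_centred <- sweep(X, 1, rowMeans(X))

    # Caveat (as noted above): difficulty effects that alter the pattern across
    # voxels, rather than just its mean, would survive this normalization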

We summarized 100 independent analyses, reported in 57 published papers, that isolated the multivoxel representation of visual and auditory sensory input, task rules, or motor output. The results confirm the power of the MVPA method, with predominant coding of visual, auditory, and response distinctions in the expected sensory and motor regions. Outside sensory and motor areas, the results were also structured, with a specific network of frontal and parietal regions involved in coding several different types of information. Consistent with the observation of similar frontoparietal activity across many tasks and the suggestion that neurons in these regions adapt their function as needed for current behavior (Duncan, 2001), frontoparietal cortex codes information from across sensory and task domains.

Acknowledgments

This work was supported by the Australian Research Council's (ARC) Discovery Projects funding scheme (grant no. DP12102835 to A. W. and J. D.). A. W. is a recipient of an ARC Fellowship (Discovery Early Career Researcher Award, DECRA, grant no. DE120100898), J. J. is a recipient of an International Macquarie University Research Excellence Scholarship, and J. D. is supported by the Medical Research Council (United Kingdom) intramural program (grant no. MC-A060-5PQ10). The authors thank Jonathan Power for providing the canonical partition of resting state networks.

Reprint requests should be sent to Alexandra Woolgar, Perception in Action Research Centre and Department of Cognitive Science, Macquarie University, Sydney, New South Wales 2109, Australia, or via e-mail: alexandra.woolgar@mq.edu.au.

Note

1. In two cases, coordinates were not reported, but a list of peaks was sent by e-mail to AW.

REFERENCES

Alink, A., Euler, F., Kriegeskorte, N., Singer, W., & Kohler, A. (2012). Auditory motion direction encoding in auditory cortex and high-level visual cortex. Human Brain Mapping, 33, 969–978.
Asaad, W. F., Rainer, G., & Miller, E. K. (1998). Neural activity in the primate prefrontal cortex during associative learning. Neuron, 21, 1399–1407.
Azuma, M., & Suzuki, H. (1984). Properties and distribution of auditory neurons in the dorsolateral prefrontal cortex of the alert monkey. Brain Research, 298, 343–346.
Badre, D., & D'Esposito, M. (2007). Functional magnetic resonance imaging evidence for a hierarchical organization of the prefrontal cortex. Journal of Cognitive Neuroscience, 19, 2082–2099.
Billington, J., Furlan, M., & Wann, J. (2013). Cortical responses to congruent and incongruent stereo cues for objects on a collision path with the observer. Displays, 34, 114–119.
Bode, S., Bogler, C., & Haynes, J. D. (2013). Similar neural mechanisms for perceptual guesses and free decisions. Neuroimage, 65, 456–465.
Bode, S., Bogler, C., Soon, C. S., & Haynes, J. D. (2012). The neural encoding of guesses in the human brain. Neuroimage, 59, 1924–1931.
Bode, S., & Haynes, J. D. (2009). Decoding sequential stages of task preparation in the human brain. Neuroimage, 45, 606–613.
Boets, B., Op de Beeck, H. P., Vandermosten, M., Scott, S. K., Gillebert, C. R., Mantini, D., et al. (2013). Intact but less accessible phonetic representations in adults with dyslexia. Science, 342, 1251–1254.
Bogler, C., Bode, S., & Haynes, J. D. (2011). Decoding successive computational stages of saliency processing. Current Biology, 21, 1667–1671.
Botvinick, M. M. (2008). Hierarchical models of behavior and prefrontal function. Trends in Cognitive Sciences, 12, 201–208.
Buckner, R. L., Andrews-Hanna, J. R., & Schacter, D. L. (2008). The brain's default network: Anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124, 1–38.
Carlin, J. D., Calder, A. J., Kriegeskorte, N., Nili, H., & Rowe, J. B. (2011). A head view-invariant representation of gaze direction in anterior superior temporal sulcus. Current Biology, 21, 1817–1821.
Carlin, J. D., Rowe, J. B., Kriegeskorte, N., Thompson, R., & Calder, A. J. (2012). Direction-sensitive codes for observed head turns in human superior temporal sulcus. Cerebral Cortex, 22, 735–744.
Carp, J., Park, J., Hebrank, A., Park, D. C., & Polk, T. A. (2011). Age-related neural differentiation in the motor system. PLoS One, 6, 1–6.
Christophel, T. B., Hebart, M. N., & Haynes, J. D. (2012). Decoding the contents of visual short-term memory from human visual and parietal cortex. Journal of Neuroscience, 32, 12983–12989.
Clarke, A., & Tyler, L. K. (2014). Object-specific semantic coding in human perirhinal cortex. Journal of Neuroscience, 34, 4766–4775.
Clithero, J. A., Smith, D. V., Carter, R. M., & Huettel, S. A. (2011). Within- and cross-participant classifiers reveal different neural coding of information. Neuroimage, 56, 699–708.
Colas, J. T., & Hsieh, P. J. (2014). Pre-existing brain states predict aesthetic judgments. Human Brain Mapping, 35, 2924–2934.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
Crittenden, B. M., & Duncan, J. (2014). Task difficulty manipulation reveals multiple demand activity but no frontal lobe hierarchy. Cerebral Cortex, 24, 532–540.
Crittenden, B. M., Mitchell, D. J., & Duncan, J. (2015). Recruitment of the default mode network during a demanding act of executive control. Elife, 4, 1–12.
Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.
Dosenbach, N. U., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A., et al. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, U.S.A., 104, 11073–11078.
Dosenbach, N. U., Visscher, K. M., Palmer, E. D., Miezin, F. M., Wenger, K. K., Kang, H. C., et al. (2006). A core system for the implementation of task sets. Neuron, 50, 799–812.
Duncan, J. (2001). An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820–829.
Duncan, J. (2006). EPS Mid-Career Award 2004: Brain mechanisms of attention. Quarterly Journal of Experimental Psychology (Hove), 59, 2–27.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14, 172–179.
Duncan, J., & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23, 475–483.
Ekman, M., Derrfuss, J., Tittgemeyer, M., & Fiebach, C. J. (2012). Predicting errors from reconfiguration patterns in human brain networks. Proceedings of the National Academy of Sciences, U.S.A., 109, 16714–16719.
Engels, B. (2014). XNomial: Exact goodness-of-fit test for multinomial data with fixed probabilities. R package version 1.0.1. CRAN.R-project.org/package=XNomial.
Erez, Y., & Duncan, J. (2015). Discrimination of visual categories based on behavioral relevance in widespread regions of frontoparietal cortex. Journal of Neuroscience, 35, 12383–12393.
Fedorenko, E., Duncan, J., & Kanwisher, N. (2013). Broad domain generality in focal regions of frontal and parietal cortex. Proceedings of the National Academy of Sciences, U.S.A., 110, 16616–16621.
FitzGerald, T. H., Schwartenbeck, P., & Dolan, R. J. (2014). Reward-related activity in ventral striatum is action contingent and modulated by behavioral relevance. Journal of Neuroscience, 34, 1271–1279.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences, U.S.A., 102, 9673–9678.
Freedman, D. J., & Assad, J. A. (2006). Experience-dependent representation of visual categories in parietal cortex. Nature, 443, 85–88.
Freedman, D. J., Riesenhuber, M., Poggio, T., & Miller, E. K. (2001). Categorical representation of visual stimuli in the primate prefrontal cortex. Science, 291, 312–316.
Freeman, G. H., & Halton, J. H. (1951). Note on exact treatment of contingency, goodness of fit and other problems of significance. Biometrika, 38, 141–149.
Gilbert, S. J. (2011). Decoding the content of delayed intentions. Journal of Neuroscience, 31, 2888–2894.
Gilbert, S. J., Swencionis, J. K., & Amodio, D. M. (2012). Evaluative vs. trait representation in intergroup social judgments: Distinct roles of anterior temporal lobe and prefrontal cortex. Neuropsychologia, 50, 3600–3611.
Giordano, B. L., McAdams, S., Zatorre, R. J., Kriegeskorte, N., & Belin, P. (2013). Abstract encoding of auditory objects in cortical activity patterns. Cerebral Cortex, 23, 2025–2037.
Goldman-Rakic, P. S. (1998). The prefrontal landscape: Implications of functional architecture for understanding human mentation and the central executive. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 351, 1445–1453.
Greenberg, A. S., Esterman, M., Wilson, D., Serences, J. T., & Yantis, S. (2010). Control of spatial and feature-based attention in frontoparietal cortex. Journal of Neuroscience, 30, 14330–14339.
Grinband, J., Savitskaya, J., Wager, T. D., Teichert, T., Ferrera, V. P., & Hirsch, J. (2011). The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood. Neuroimage, 57, 303–311.
Guo, F., Preston, T. J., Das, K., Giesbrecht, B., & Eckstein, M. P. (2012). Feature-independent neural coding of target detection during search of natural scenes. Journal of Neuroscience, 32, 9499–9510.
Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating human intelligence. Neuron, 76, 1225–1237.
Harel, A., Kravitz, D. J., & Baker, C. I. (2014). Task context impacts visual object processing differentially across the cortex. Proceedings of the National Academy of Sciences, U.S.A., 111, E962–E971.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Haynes, J. D., & Rees, G. (2006). Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7, 523–534.
Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading hidden intentions in the human brain. Current Biology, 17, 323–328.
Hebart, M. N., Donner, T. H., & Haynes, J.-D. (2012). Human visual and parietal cortex encode visual choices independent of motor plans. Neuroimage, 63, 1393–1403.
Helfinstein, S. M., Schonberg, T., Congdon, E., Karlsgodt, K. H., Mumford, J. A., Sabb, F. W., et al. (2014). Predicting risky choices from brain activity patterns. Proceedings of the National Academy of Sciences, U.S.A., 111, 2470–2475.
Herrmann, B., Obleser, J., Kalberlah, C., Haynes, J. D., & Friederici, A. D. (2012). Dissociable neural imprints of perception and grammar in auditory functional imaging. Human Brain Mapping, 33, 584–595.
Hoshi, E., Shima, K., & Tanji, J. (1998). Task-dependent selectivity of movement-related neuronal activity in the primate prefrontal cortex. Journal of Neurophysiology, 80, 3392–3397.
Huang, Y. F., Soon, C. S., Mullette-Gillman, O. A., & Hsieh, P. J. (2014). Pre-existing brain states predict risky choices. Neuroimage, 101, 466–472.
Jiang, F., Stecker, G. C., & Fine, I. (2014). Auditory motion processing after early blindness. Journal of Vision, 14, 4.
Jiang, J., & Egner, T. (2014). Using neural pattern classifiers to quantify the modularity of conflict-control mechanisms in the human brain. Cerebral Cortex, 24, 1793–1805.
Kadohisa, M., Petrov, P., Stokes, M., Sigala, N., Buckley, M., Gaffan, D., et al. (2013). Dynamic construction of a coherent attentional state in a prefrontal cell population. Neuron, 80, 235–246.
Kahnt, T., Grueschow, M., Speck, O., & Haynes, J. D. (2011). Perceptual learning and decision-making in human medial frontal cortex. Neuron, 70, 549–559.
Kalberlah, C., Chen, Y., Heinzle, J., & Haynes, J. D. (2011). Beyond topographic representation: Decoding visuospatial attention from local activity patterns in the human frontal cortex. International Journal of Imaging Systems and Technology, 21, 201–210.
Kaplan, J. T., & Meyer, K. (2012). Multivariate pattern analysis reveals common neural patterns across individuals during touch observation. Neuroimage, 60, 204–212.
Klein, M. E., & Zatorre, R. J. (2015). Representations of invariant musical categories are decodable by pattern analysis of locally distributed BOLD responses in superior temporal and intraparietal sulci. Cerebral Cortex, 25, 1947–1957.
Koechlin, E., & Summerfield, C. (2007). An information theoretical approach to prefrontal executive function. Trends in Cognitive Sciences, 11, 229–235.
Kotz, S. A., Kalberlah, C., Bahlmann, J., Friederici, A. D., & Haynes, J. D. (2013). Predicting vocal emotion expressions from the human brain. Human Brain Mapping, 34, 1971–1981.
Kriegeskorte, N., Goebel, R., & Bandettini, P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences, U.S.A., 103, 3863–3868.
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis—Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4.
Lee, Y., Janata, P., Frost, C., Hanke, M., & Granger, R. (2011). Investigation of melodic contour processing in the brain using multivariate pattern-based fMRI. Neuroimage, 57, 292–300.
Lee, Y. S., Turkeltaub, P., Granger, R., & Raizada, R. D. (2012). Categorical speech processing in Broca's area: An fMRI study using multivariate pattern-based analysis. Journal of Neuroscience, 32, 3942–3948.
Li, S., & Yang, F. (2012). Task-dependent uncertainty modulation of perceptual decisions in the human brain. European Journal of Neuroscience, 36, 3732–3739.
Linden, D. E., Oosterhof, N. N., Klein, C., & Downing, P. E. (2012). Mapping brain activation and information during category-specific visual working memory. Journal of Neurophysiology, 107, 628–639.
Linke, A. C., Vicente-Grabovetsky, A., & Cusack, R. (2011). Stimulus-specific suppression preserves information in auditory short-term memory. Proceedings of the National Academy of Sciences, U.S.A., 108, 12961–12966.
Mayhew, S. D., & Kourtzi, Z. (2013). Dissociable circuits for visual shape learning in the young and aging human brain. Frontiers in Human Neuroscience, 7, 75.
Mayhew, S. D., Li, S., Storrar, J. K., Tsvetanov, K. A., & Kourtzi, Z. (2010). Learning shapes the representation of visual categories in the aging human brain. Journal of Cognitive Neuroscience, 22, 2899–2912.
McNamee, D., Rangel, A., & O'Doherty, J. P. (2013). Category-dependent and category-independent goal-value codes in human ventromedial prefrontal cortex. Nature Neuroscience, 16, 479–485.
Merrill, J., Sammler, D., Bangert, M., Goldhahn, D., Lohmann, G., Turner, R., et al. (2012). Perception of words and pitch patterns in song and speech. Frontiers in Psychology, 3, 76.
Mirabella, G., Bertini, G., Samengo, I., Kilavik, B. E., Frilli, D., Della Libera, C., et al. (2007). Neurons in area V4 of the macaque translate attended visual features into behaviorally relevant categories. Neuron, 54, 303–318.
Momennejad, I., & Haynes, J. D. (2012). Human anterior prefrontal cortex encodes the “what” and “when” of future intentions. Neuroimage, 61, 139–148.
Murawski, C., Harris, P. G., Bode, S., Dominguez, D. J., & Egan, G. F. (2012). Led into temptation? Rewarding brand logos bias the neural encoding of incidental economic decisions. PLoS One, 7, e34155.
Naghavi, H. R., & Nyberg, L. (2005). Common fronto-parietal activity in attention, memory, and consciousness: Shared demands on integration? Consciousness and Cognition, 14, 390–425.
Nee, D. E., & Brown, J. W. (2012). Rostral-caudal gradients of abstraction revealed by multi-variate pattern analysis of working memory. Neuroimage, 63, 1285–1294.
Niendam, T. A., Laird, A. R., Ray, K. L., Dean, Y. M., Glahn, D. C., & Carter, C. S. (2012). Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cognitive, Affective & Behavioral Neuroscience, 12, 241–268.
Niki, H., & Watanabe, M. (1976). Prefrontal unit activity and delayed response: Relation to cue location versus direction of response. Brain Research, 105, 79–88.
O'Reilly, R. C. (2010). The what and how of prefrontal cortical organization. Trends in Neurosciences, 33, 355–361.
Owen, A. M., McMillan, K. M., Laird, A. R., & Bullmore, E. (2005). n-Back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human Brain Mapping, 25, 46–59.
Peelen, M. V., & Vuilleumier, P. (2010). Supramodal representations of perceived emotions in the human brain. Journal of Neuroscience, 30, 10127–10134.
Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. Journal of Neuroscience, 32, 15728–15736.
Pollmann, S., Zinke, W., Baumgartner, F., Geringswald, F., & Hanke, M. (2014). The right temporo-parietal junction contributes to visual feature binding. Neuroimage, 101, 289–297.
Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., et al. (2011). Functional network organization of the human brain. Neuron, 72, 665–678.
Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23, 223–228.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, U.S.A., 98, 676–682.
Rao, S. C., Rainer, G., & Miller, E. K. (1997). Integration of what and where in the primate prefrontal cortex. Science, 276, 821–824.
Reverberi, C., Gorgen, K., & Haynes, J. D. (2012a). Compositionality of rule representations in human prefrontal cortex. Cerebral Cortex, 22, 1237–1246.
Reverberi, C., Gorgen, K., & Haynes, J. D. (2012b). Distributed representations of rule identity and rule order in human frontal cortex and striatum. Journal of Neuroscience, 32, 17420–17430.
Riehle, A., Kornblum, S., & Requin, J. (1994). Neuronal coding of stimulus–response association rules in the motor cortex. NeuroReport, 5, 2462–2464.
Riehle, A., Kornblum, S., & Requin, J. (1997). Neuronal correlates of sensorimotor association in stimulus–response compatibility. Journal of Experimental Psychology: Human Perception and Performance, 23, 1708–1726.
Romanski, L. M. (2007). Representation and integration of auditory and visual stimuli in the primate ventral lateral prefrontal cortex. Cerebral Cortex, 17(Suppl. 1), i61–i69.
Seeley, W. W., Menon, V., Schatzberg, A. F., Keller, J., Glover, G. H., Kenna, H., et al. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. Journal of Neuroscience, 27, 2349–2356.
Sigala, N., Kusunoki, M., Nimmo-Smith, I., Gaffan, D., & Duncan, J. (2008). Hierarchical coding for sequential task events in the monkey prefrontal cortex. Proceedings of the National Academy of Sciences, U.S.A., 105, 11969–11974.
Simanova, I., Hagoort, P., Oostenveld, R., & van Gerven, M. A. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426–434.
Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–545.
Soon, C. S., Namburi, P., & Chee, M. W. L. (2013). Preparatory patterns of neural activity predict visual category search speed. Neuroimage, 66, 215–222.
Stokes, M. G., Kusunoki, M., Sigala, N., Nili, H., Gaffan, D., & Duncan, J. (2013). Dynamic coding for cognitive control in prefrontal cortex. Neuron, 78, 364–375.
R Core Team. (2015). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
Todd, M. T., Nystrom, L. E., & Cohen, J. D. (2013). Confounds in multivariate pattern analysis: Theory and rule representation case study. Neuroimage, 77, 157–165.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15, 273–289.
Vickery, T. J., Chun, M. M., & Lee, D. (2011). Ubiquity and specificity of reinforcement signals throughout the human brain. Neuron, 72, 166–177.
Wallis, J. D., Anderson, K. C., & Miller, E. K. (2001). Single neurons in prefrontal cortex encode abstract rules. Nature, 411, 953–956.
Weygandt, M., Schaefer, A., Schienle, A., & Haynes, J. D. (2012). Diagnosing different binge-eating disorders based on reward-related brain activation patterns. Human Brain Mapping, 33, 2135–2146.
White, I. M., & Wise, S. P. (1999). Rule-dependent neuronal activity in the prefrontal cortex. Experimental Brain Research, 126, 315–335.
Wisniewski, D., Reverberi, C., Tusche, A., & Haynes, J.-D. (2015). The neural representation of voluntary task-set selection in dynamic environments. Cerebral Cortex, 25, 4715–4726.
Woolgar, A., Afshar, S., Williams, M. A., & Rich, A. N. (2015). Flexible coding of task rules in frontoparietal cortex: An adaptive system for flexible cognitive control. Journal of Cognitive Neuroscience, 27, 1895–1911.
Woolgar, A., Golland, P., & Bode, S. (2014). Coping with confounds in multivoxel pattern analysis: What should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013. Neuroimage, 98, 506–512.
Woolgar, A., Hampshire, A., Thompson, R., & Duncan, J. (2011). Adaptive coding of task-relevant information in human frontoparietal cortex. Journal of Neuroscience, 31, 14592–14599.
Woolgar, A., Thompson, R., Bor, D., & Duncan, J. (2011). Multivoxel coding of stimuli, rules, and responses in human frontoparietal cortex. Neuroimage, 56, 744–752.
Woolgar, A., Williams, M. A., & Rich, A. N. (2015). Attention enhances multivoxel representation of novel objects in frontal, parietal and visual cortices. Neuroimage, 109, 429–437.
Yeo, B. T., Krienen, F. M., Eickhoff, S. B., Yaakub, S. N., Fox, P. T., Buckner, R. L., et al. (2015). Functional specialization and flexibility in human association cortex. Cerebral Cortex, 25, 3654–3672.
Zhang, J., Kriegeskorte, N., Carlin, J. D., & Rowe, J. B. (2013). Choosing the rules: Distinct and overlapping frontoparietal representations of task rules for perceptual decisions. Journal of Neuroscience, 33, 11852–11862.
Zhang, J., Riehle, A., Requin, J., & Kornblum, S. (1997). Dynamics of single neuron activity in monkey primary motor cortex related to sensorimotor transformation. Journal of Neuroscience, 17, 2227–2246.
Zheng, Z. Z., Vicente-Grabovetsky, A., MacDonald, E. N., Munhall, K. G., Cusack, R., & Johnsrude, I. S. (2013). Multivoxel patterns reveal functionally differentiated networks underlying auditory feedback processing of speech. Journal of Neuroscience, 33, 4339–4348.