Abstract
How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general “core” with the capacity to code many different aspects of a task.
INTRODUCTION
Multivoxel pattern analysis (MVPA) of fMRI data is a powerful and increasingly popular technique used to examine information coding in the human brain. In MVPA, information coding is inferred when the pattern of activation across voxels can reliably discriminate between two or more events such as different stimuli, task rules, or participant responses (e.g., Haynes & Rees, 2006; Haxby et al., 2001). For example, if, in a certain brain region, the patterns of activation elicited in response to viewing red objects are more similar to each other than to the patterns elicited by green objects (and vice versa), we conclude that there is information in the patterns that discriminates between red and green objects and therefore codes for color. This allows inference beyond traditional univariate brain mapping (e.g., this region is more active for colored objects than black and white ones) to examine the particular discriminations that the region is able to make (e.g., the region carries specific information about what color was presented). Information coding may be tested by comparing the correlation of patterns within object classes to correlations between object classes (e.g., Haxby et al., 2001), or using a machine learning algorithm such as a pattern classifier. For example, if a classifier can be trained to discriminate between red and green objects, such that it can predict object color on an independent set of data, we conclude that the pattern of activation can be used reliably to discriminate between red and green objects. The technique has also been generalized to incorporate multiple classes to test more complex representational models (e.g., representational similarity analysis; Kriegeskorte, Mur, & Bandettini, 2008). It has been used to examine neural coding of a wide range of different task events including aspects of stimuli, task rules, participant responses, rewards, emotion, and language (e.g., McNamee, Rangel, & O'Doherty, 2013; Herrmann, Obleser, Kalberlah, Haynes, & Friederici, 2012; Woolgar, Thompson, Bor, & Duncan, 2011; Peelen & Vuilleumier, 2010; Haxby et al., 2001).
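To make the two inference styles described above concrete, the sketch below illustrates split-half pattern correlations and a cross-validated classifier on synthetic data. This is a minimal Python illustration (NumPy and scikit-learn) under assumed data dimensions, not the pipeline of any study reviewed here.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: 40 trials x 50 voxels, two conditions (e.g., red vs. green objects).
# A weak condition-specific pattern is added to noise so that decoding is possible.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)
condition_patterns = rng.normal(size=(2, n_voxels))
data = rng.normal(size=(n_trials, n_voxels)) + 0.7 * condition_patterns[labels]

# (1) Correlation-based MVPA (Haxby-style): correlate split-half condition means.
# Information is inferred when within-condition correlations exceed between-condition ones.
half1, half2 = data[::2], data[1::2]
lab1, lab2 = labels[::2], labels[1::2]
mean1 = np.array([half1[lab1 == c].mean(axis=0) for c in (0, 1)])
mean2 = np.array([half2[lab2 == c].mean(axis=0) for c in (0, 1)])
corr = np.corrcoef(mean1, mean2)[:2, 2:]          # 2 x 2: rows = half 1, cols = half 2
within_minus_between = np.mean(np.diag(corr)) - np.mean(corr[~np.eye(2, dtype=bool)])
print(f"within - between correlation: {within_minus_between:.3f}")

# (2) Classifier-based MVPA: train and test a linear classifier with cross-validation.
# Above-chance accuracy on held-out data implies the pattern discriminates the conditions.
acc = cross_val_score(LinearSVC(), data, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")
```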
Using a “searchlight,” MVPA can be used to map out the brain regions that code for each particular type of information (Kriegeskorte, Goebel, & Bandettini, 2006). For each brain voxel in turn, pattern analysis is applied to the pattern of activation across voxels in the local neighborhood (e.g., in a sphere of a fixed radius centered on the current voxel of interest), and the resulting metric, which summarizes the strength of information coding in the local neighborhood, is given to the center voxel. The resulting whole-brain map indicates where in the brain a particular distinction is coded. This technique allows for exploratory analyses that are free from a priori hypotheses about where local patterns will be discriminative, and opens the door for unpredicted findings.
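A minimal searchlight implementation, under the same synthetic-data assumptions as above, might look like the following: for each voxel, decoding is run on the sphere of voxels around it and the resulting accuracy is written back to the centre voxel. A real analysis would read 4-D fMRI images and a brain mask (e.g., with nibabel or nilearn), which is omitted here for brevity.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight_map(data_4d, labels, radius=2, cv=5):
    """Assign to each voxel the cross-validated decoding accuracy of its local sphere.

    data_4d: array (x, y, z, n_trials); labels: array (n_trials,); radius in voxels.
    """
    dims = data_4d.shape[:3]
    acc_map = np.zeros(dims)
    # Precompute the sphere of voxel offsets once.
    r = int(np.ceil(radius))
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    offsets = grid[np.linalg.norm(grid, axis=1) <= radius]
    for centre in np.ndindex(dims):
        neigh = offsets + np.array(centre)
        # Keep only neighbours that fall inside the volume.
        ok = np.all((neigh >= 0) & (neigh < np.array(dims)), axis=1)
        nx, ny, nz = neigh[ok].T
        patterns = data_4d[nx, ny, nz, :].T       # trials x voxels-in-sphere
        acc_map[centre] = cross_val_score(LinearSVC(), patterns, labels, cv=cv).mean()
    return acc_map

# Tiny synthetic volume: 6 x 6 x 6 voxels, 20 trials, two conditions.
rng = np.random.default_rng(1)
labels = np.tile([0, 1], 10)
data_4d = rng.normal(size=(6, 6, 6, 20))
accuracy_image = searchlight_map(data_4d, labels)
print(accuracy_image.shape)    # (6, 6, 6): one accuracy value per centre voxel
```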
After several years of searchlight MVPA, we now have an unprecedented opportunity to summarize our knowledge of information coding in the brain. This is the aim of the current paper. In the literature, most cognitive tasks comprise visual and/or auditory input, task rules, and motor output, so we focus our analysis on coding of these task features. We examine the frequency of information coding reported in five brain networks: the visual, auditory, and motor networks; the frontoparietal multiple demand (MD; Duncan, 2006, 2010) or “task-positive” (Fox et al., 2005) network; and a “task-negative” (Fox et al., 2005) or “default mode” (Raichle et al., 2001) network (DMN).
Although traditional accounts of brain organization emphasized modularity of function, several recent proposals highlight the flexibility of many brain regions (e.g., Yeo et al., 2015; Dehaene & Naccache, 2001; Duncan, 2001). For example, one of the most consistent findings in human neuroimaging is a characteristic pattern of activation in the frontoparietal MD network across a wide range of different cognitive tasks (e.g., Yeo et al., 2015; Fedorenko, Duncan, & Kanwisher, 2013; Niendam et al., 2012; Dosenbach et al., 2006; Naghavi & Nyberg, 2005; Owen, McMillan, Laird, & Bullmore, 2005; Duncan & Owen, 2000). This common activity may reflect the common need for cognitive control, one aspect of which is proposed to be the adaptive representation of task-relevant information (Duncan, 2001, 2010). Accordingly, the suggestion is that single neurons in the MD regions adjust their pattern of firing to encode the specific information currently relevant for the task, including stimuli, cues, rules, responses, etc.
The result of our review is a balanced and highly structured picture of brain organization. According to the MVPA data published in the last decade, auditory, visual, and motor networks predominantly code information from their own domain, whereas the frontoparietal MD network is characterized by domain generality, coding all four task features (visual, auditory, motor, and rule information) more frequently than other brain areas. After correcting for network area and the number of studies examining each feature, the contribution of the DMN and cortex elsewhere is minor. Although sensory and motor networks are relatively specialized for information in their own domain, the MD network appears to act as a domain-general core with the capacity to code different aspects of a task as needed for behavior.
METHODS
Paper Selection
We identified peer-reviewed papers published up until the end of December 2014 by searching PubMed, Scopus, Web of Science, HighWire, JSTOR, Oxford University Press Journals, and ScienceDirect databases with the following search terms: “MVPA searchlight,” “multivariate analysis searchlight,” “multivoxel analysis searchlight,” and “MVPA spotlight” in any field. We additionally retrieved all the studies listed by Google Scholar as citing Kriegeskorte et al. (2006), in which the procedure for searchlight MVPA was first described. This yielded 537 empirical papers (excluding reviews, comments, methods papers, or conference abstracts). Of these, we included studies that performed volumetric searchlight analysis (Kriegeskorte et al., 2006) across the whole brain of healthy adults and reported a complete list of the coordinates of peak decoding in template (MNI or TAL) space.1 Because most tasks comprise visual or auditory input, task rules, and motor output, we focused on these task features. From each of the papers, we identified independent analyses that isolated the multivoxel representation of a single one of these task features. To achieve this, if a paper reported two or more nonindependent analyses (e.g., analyzed overlapping aspects of the same data), only one analysis was included. We excluded any analyses in which sensory and motor responses were confounded (e.g., if the same visual stimulus was associated with the same motor response). This procedure yielded a total of 100 independent analyses from 57 papers.
Characterization of Task Features
We categorized each of the 100 analyses according to what task feature they examined, namely, whether they examined the multivoxel discrimination between two or more visual stimuli, two or more auditory stimuli, two or more task rules, or two or more motor responses (Table 1). This categorization was done twice, the first time being as inclusive as possible, and the second time using stricter criteria (Table 1, second column). For the strict categorization, we excluded analyses in which the multivoxel discrimination pertained to both an aspect of the physical stimulus and a higher-level stimulus attribute such as emotion or semantic category. Analyses focusing on linguistic stimuli (e.g., written or spoken words) were not included, on the basis that representation of these stimuli would be likely to load on language-related processing more than visual and/or auditory information processing.
Table 1. Multivoxel Decoding Analyses Included in This Study
Category | Included in Strict Categorization? | Study | Decoding Analysis | Searchlight Size | Threshold at Which Results Were Reported | Number of Participants |
---|---|---|---|---|---|---|
Visual | Yes | Bode and Haynes (2009) | Target stimuli (dynamic color patterns) | 4 voxel radius | p < .001 uncorr | 12 |
Visual | Yes | Mayhew et al. (2010) | Radial vs. concentric glass pattern stimuli (young adults) | 9 mm radius (av. 98 voxels) | p < .05, k = 5 mm2 | 10 |
Visual | Yes | Mayhew et al. (2010) | Radial vs. concentric glass pattern stimuli (older adults) | 9 mm radius (av. 98 voxels) | p < .05, k = 5 mm2 | 10 |
Visual | Yes | Bogler, Bode, and Haynes (2011) | Most salient visual quadrant of grayscale pictures of natural scenes | 6 voxel radius | p < .05, FWE | 21 |
Visual | Yes | Carlin, Calder, Kriegeskorte, Nili, and Rowe (2011) | View-invariant gaze direction | 5 mm radius | p < .05 FWE | 18 |
Visual | Yes | Kahnt et al. (2011) | Stimulus orientation (low contrast Gabor in upper right visual field) | 4 voxel radius | p < .0001, k = 20, cluster level corr p < .001 | 20 |
Visual | Yes | Kalberlah et al. (2011) | Which of 4 spatial locations participant is attending and responding to | 12 mm radius | p < 10e−5 at vertex level, p < .005 at cluster level | 18 |
Visual | Yes | Vickery et al. (2011) | Visual stimulus (coin showing heads vs. tails side) | 27 voxel cube | p < .001 uncorr, k = 10 | 17 |
Visual | Yes | Woolgar, Thompson, et al. (2011) | Stimulus position | 5 mm radius | p < .001 uncorr | 17 |
Visual | Yes | Woolgar, Thompson, et al. (2011) | Background color of screen (within rule) | 5 mm radius | p < .001 uncorr | 17 |
Visual | Yes | Guo, Preston, Das, Giesbrecht, and Eckstein (2012) | Target present vs. absent in full color natural scenes | 9 mm radius (153 voxels volume) | p < .005, uncorr | 12 |
Visual | Yes | Hebart, Donner, and Haynes (2012) | Direction of motion (dynamic random dot patterns) | 10 mm radius | p < .00001, uncorr, k = 30 | 22 |
Visual | Yes | Peelen and Caramazza (2012) | Perceptual similarity of 12 familiar objects | 8 mm radius | p < .05 Bonferroni, k = 5 | 15 |
Visual | Yes | Peelen and Caramazza (2012) | Pixelwise similarity of 12 familiar objects | 8 mm radius | p < .05 Bonferroni, k = 5 | 15 |
Visual | Yes | Reverberi et al. (2012a) | Visual cue (two visually unrelated abstract shapes coding for the same rule) | 4 voxel radius (3 × 3 × 3.75 mm voxels) | p < .05 cluster corr | 13 |
Visual | Yes | Billington, Furlan, and Wann (2013) | Congruency in depth information (congruent looming and binocular disparity cues vs. incongruent looming and binocular disparity cues) | 6 mm radius | p < .01, Bonferroni corr | 16 |
Visual | Yes | Bode, Bogler, and Haynes (2013) | Black and white photograph (piano vs. chair) | 4 voxel radius | p < .05, FWE cluster corr | 15 |
Visual | Yes | Mayhew and Kourtzi (2013) | Radial vs. concentric glass pattern stimuli (young adults) | 9 mm radius, av. 98 voxels volume | p < .05, cluster threshold 5 mm2 | 10 |
Visual | Yes | Mayhew and Kourtzi (2013) | Radial vs. concentric glass pattern stimuli (older adults) | 9 mm radius, av. 98 voxels volume | p < .05, cluster threshold 5 mm2 | 10 |
Visual | Yes | Clarke and Tyler (2014) | Low-level visual features (early visual cortex model) | 7 mm radius | p < .05, FDR, k = 20 | 16 |
Visual | Yes | Pollmann et al. (2014) | Gabor patches differing in both color (red/green) and spatial frequency | 3 voxels (10.5 mm) radius (123 voxels volume) | p < .001 | 15 |
Visual | No | Clithero et al. (2011) | Image of face vs. currency (currency influenced participant payment), within participants analysis | 12 mm radius (max 123 voxels) | p < .05 Bonferroni correction | 16 |
Visual | No | Vickery et al. (2011) | Visual stimulus (photograph of hand making rock/paper/scissor action) | 27 voxel cube | p < .001 uncorr | 22 |
Visual | No | Christophel, Hebart, and Haynes (2012) | Complex artificial stimuli consisting of multicolored random fields (STM storage of visual stimuli during delay phase) | 4 voxel radius | p < .05 FWE, k = 20 | 17 |
Visual | No | Bode, Bogler, Soon, and Haynes (2012) | Category of visual stimulus (piano/chair/noise), high-visibility condition | 4 voxel radius | p < .0001 uncorr | 14 |
Visual | No | Carlin et al. (2012) | Left vs. right head turn (silent video clips) | 5 mm radius | p < .05 FWE | 17 |
Visual | No | Gilbert, Swencionis, and Amodio (2012) | Black vs. white faces (color photographs, collapsed over trait and friendship judgment tasks) | 3 voxel radius | p < .05 FWE | 16 |
Visual | No | Kaplan and Meyer (2012) | Discrimination between five 5-sec video clips showing manipulation of different objects (plant, tennis ball, skein of yarn, light bulb, set of keys), within-subject analysis | 8 mm radius (average 75 voxels) | >maximum value given by a permutation test in an occipital spherical ROI for each subject | 8 |
Visual | No | Linden, Oosterhof, Klein, and Downing (2012) | Visual category (face/body/scene/flower) (encoding phase) | 100 voxels volume | p < .05, voxelwise uncorr, cluster correction using Monte Carlo simulation | 18 |
Visual | No | Linden et al. (2012) | Visual category (face/body/scene/flower) (delay 1 phase) | 100 voxels volume | p < .05, voxelwise uncorr, cluster correction using Monte Carlo simulation | 18 |
Visual | No | Murawski, Harris, Bode, Dominguez, and Egan (2012) | Subliminal presentation of apple logo vs. neutral cup | 3 voxel radius | p < .05 FWE cluster corr | 13 |
Visual | No | Peelen and Caramazza (2012) | Conceptual level information about 12 familiar objects: kitchen vs. garage, and rotate vs. squeeze | 8 mm radius | p < .05 Bonferroni, k = 5 | 15 |
Visual | No | Weygandt, Schaefer, Schienle, and Haynes (2012) | Food vs. neutral images (normal weight control group) | 3 voxel radius | p < .001 uncorr k = 10 and p < .05 FWE | 19 |
Visual | No | McNamee et al. (2013) | Visual category (food/money/trinkets) | 20 mm radius | p < .05 FDR k = 10 | 13 |
Visual | No | Clarke and Tyler (2014) | Semantic features of visual stimuli | 7 mm radius | p < .05, FDR, k = 20 | 16 |
Visual | No | Clarke and Tyler (2014) | Category of visual stimuli (animals vs. fruits vs. vegetables vs. vehicles vs. tools vs. musical instruments) | 7 mm radius | p < .05, FDR, k = 20 | 16 |
Visual | No | Clarke and Tyler (2014) | Animal and plants vs. nonbiological visual stimuli | 7 mm radius | p < .05, FDR, k = 20 | 16 |
Visual | No | Clarke and Tyler (2014) | Animal visual stimuli (a model in which patterns for animal stimuli are similar to one another and all other stimuli are dissimilar from animals and from each other) | 7 mm radius | p < .05, FDR, k = 20 | 16 |
Visual | No | Simanova et al. (2014) | Category of visual stimulus (animal vs. tool) – photographs | 2.5 voxels, 8.75 mm radius (33 voxel volume) | p < .001, FDR | 14 |
Visual | No | FitzGerald, Schwartenbeck, and Dolan (2014) | Discriminate between two abstract visual stimuli when attending to visual stimuli (stimuli differentially associated with financial reward) | 6 mm radius (31 voxels volume) | p < .05 FWE-corrected | 25 |
Visual | No | FitzGerald et al. (2014) | Discriminate between two abstract visual stimuli when attending to concurrently presented auditory stimuli (visual stimuli differentially associated with financial reward) | 6 mm radius (31 voxels volume) | p < .05 FWE-corrected | 25 |
Auditory | Yes | Alink, Euler, Kriegeskorte, Singer, and Kohler (2012) | Direction of auditory motion (rightwards vs. leftwards) | 1.05 cm radius (72 voxel volume) | T > 4.0, k > 4 | 19 |
Auditory | Yes | Lee, Janata, Frost, Hanke, and Granger (2011) | Ascending vs. descending melodic sequences | 2 neighboring voxels (max 33 voxels) | p < .05 cluster corr | 12 |
Auditory | Yes | Linke, Vicente-Grabovetsky, and Cusack (2011) | Frequency-specific coding (discriminate pure tones in four frequency ranges) | 10 mm radius | p < .005 FDR | 16 (9 with two sessions, 7 with one session) |
Auditory | Yes | Giordano et al. (2013) | Pitch (median) | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | Yes | Giordano et al. (2013) | Loudness (median) | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | Yes | Giordano et al. (2013) | Spectral centroid (interquartile range) | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | Yes | Giordano et al. (2013) | Harmonicity (median) | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | Yes | Giordano et al. (2013) | Loudness (cross-correlation) | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | Yes | Lee, Turkeltaub, Granger, and Raizada (2012) | /ba/ vs. /da/ speech category (3 voxel searchlight) | 3 voxel radius | p < .001 voxelwise uncorr and p < .05 clusterwise-corrected | 13 |
Auditory | Yes | Lee et al. (2012) | /ba/ vs. /da/ speech category (reanalysis of a previous data set) | 3 voxel radius | p < .001 voxelwise uncorr and p < .05 clusterwise-corrected | 12 |
Auditory | Yes | Merrill et al. (2012) | Hummed speech prosody (rhythm + pitch) vs. speech rhythm | 6 mm radius | p < .05 cluster size-corrected | 21 |
Auditory | Yes | Merrill et al. (2012) | Hummed song melody (pitch + rhythm) vs. musical rhythm | 6 mm radius | p < .05 cluster size-corrected | 21 |
Auditory | Yes | Jiang, Stecker, and Fine (2014) | Apparent direction of unambiguous auditory motion (left vs. right), 50% coherence, sighted subjects | 2 mm radius (33 voxels volume) | p < .001 corrected | 7 |
Auditory | Yes | Klein and Zatorre (2015) | Musical category (minor third vs. major third) | 3 voxels radius (max 123 voxels volume) | p < .001 (uncorrected) | 10 |
Auditory | No | Giordano et al. (2013) | Human similar | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | No | Giordano et al. (2013) | Living similar | 6.25 mm radius | p < .0001, k = 20 | 20 |
Auditory | No | Kotz et al. (2013) | Emotion of vocal expression (angry, happy, sad, surprised or neutral) | 9 mm radius (3 × 3 × 3 mm voxels) | p < .0001 uncorr | 20 |
Auditory | No | Merrill et al. (2012) | Spoken sentences (words + rhythm + pitch) vs. hummed speech prosody (rhythm + pitch) | 6 mm radius | p < .05 cluster size-corrected | 21 |
Auditory | No | Merrill et al. (2012) | Sung sentences (words + pitch + rhythm) vs. hummed song melody (pitch + rhythm) | 6 mm radius | p < .05 cluster size-corrected | 21 |
Auditory | No | Simanova et al. (2014) | Category of sound (animal vs. tool) | 2.5 voxels, 8.75 mm radius (33 voxel volume) | p < .001, FDR | 14 |
Auditory | No | Boets et al. (2013) | Speech sounds (sounds differing on both consonant and vowel vs. neither), typical readers | 9 mm radius (123 voxels volume) | p < .001 voxelwise uncorr and p < .05 FWE clusterwise | 22 |
Auditory | No | Zheng et al. (2013) | Speech stimuli vs. noise during passive listening | 4 mm radius | p < .05, FWE | 20 |
Auditory | No | Jiang et al. (2014) | Reported apparent direction of ambiguous auditory motion (left vs. right), 0% coherence, sighted subjects | 2 mm radius (33 voxels volume) | p < .001 corrected | 7 |
Rule | Yes | Haynes et al. (2007) | Intended task: addition vs. subtraction | 3 voxel radius | p < .005 uncorr | 8 |
Rule | Yes | Bode and Haynes (2009) | Stimulus–response mapping rule | 4 voxel radius | p < .001 uncorr | 12 |
Rule | Yes | Greenberg, Esterman, Wilson, Serences, and Yantis (2010) | Type of attentional shift to make (shift attention to alternate location vs. shift attention to alternate color) | 27 voxel cube | p < .05 cluster correction | 8 |
Rule | Yes | Vickery et al. (2011) | Participant's upcoming choice (switch or stay relative to last choice) | 27 voxel cube | p < .001 uncorr | 22 |
Rule | Yes | Woolgar, Thompson, et al. (2011) | Stimulus–response mapping rule | 5 mm radius | p < .001 uncorr | 17 |
Rule | Yes | Hebart et al. (2012) | Stimulus–response mapping rule | 10 mm radius | p < .00001, uncorr, k = 30 | 2 |
Rule | Yes | Momennejad and Haynes (2012) | Task participants are intending to perform (parity or magnitude task) during maintenance phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12 |
Rule | Yes | Momennejad and Haynes (2012) | Task participants are performing (parity or magnitude task) during retrieval phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12 |
Rule | Yes | Momennejad and Haynes (2012) | Time delay after which participants should self-initiate a task switch (15, 20 or 25 sec), during maintenance phase | 4 voxel radius | p < .0001 uncorr, k = 0 | 12 |
Rule | Yes | Nee and Brown (2012) | Higher level task context (first delay period) | 10 mm radius | p < .05 cluster-corrected (p < .005 with k = 66 – extent threshold given using simulations in AlphaSim) | 21 |
Rule | Yes | Nee and Brown (2012) | Higher and lower level task context (second delay period) | 10 mm radius | p < .05 cluster-corrected (p < .005 with k = 66 – extent threshold given using simulations in AlphaSim) | 21 |
Rule | Yes | Reverberi et al. (2012a) | Rule representation (e.g., if there is a house press left) | 4 voxel radius (3 × 3 × 3.75 mm voxels) | p < .05 cluster corr | 13 |
Rule | Yes | Reverberi et al. (2012b) | Stimulus response mapping “rule identity” (if furniture then A, if transport then B vs. if furniture then B, if transport then A) | 4 voxel radius | p < .05 corrected | 14 |
Rule | Yes | Reverberi et al. (2012b) | Order in which rules are to be applied | 4 voxel radius | p < .05 corrected | 14 |
Rule | Yes | Soon et al. (2013) | Task (search for house/face/car/bird) during preparatory period | 20 mm radius | p < .05 Bonferroni-corrected against 55%, 25 mm cluster threshold | 15 |
Rule | Yes | Zhang et al. (2013) | Current rule: attend to direction of motion, color or size of dots in a random dot kinematogram | 2 voxel radius | p < .001 uncorr k = 35 | 20 |
Rule | Yes | Helfinstein et al. (2014) | Safe vs. risky choice to be taken by the participant on the following trial | Not reported | >60%, whole-brain cluster-corrected p < .05 via comparison with 1,000 random permutations | 108 |
Rule | Yes | Jiang and Egner (2014) | Congruency of face/word compound stimulus (congruent vs. incongruent) in a face gender categorization task | 3 voxel radius (3 × 3 × 3 mm voxels) | p < .05 corrected | 21 |
Rule | Yes | Jiang and Egner (2014) | Congruency of decision with side of response (congruent vs. incongruent) | 3 voxel radius (3 × 3 × 3 mm voxels) | p < .05 corrected | 21 |
Rule | Yes | Wisniewski, Reverberi, Tusche, and Haynes (2015) | Stimulus–response mapping rule (selected by participant) | 4 voxel radius | p < .001 FWE corr | 14 |
Rule | No | Gilbert (2011) | Dual task (2-back + prospective memory task) vs. single task (2-back) | 3 voxel radius | p < .05 FWE, k = 10 | 32 |
Rule | No | Ekman, Derrfuss, Tittgemeyer, and Fiebach (2012) | Prepare for upcoming color or motion task vs. neutral condition in which upcoming task was not known (no preparation possible) | 8 mm radius | nonparametric permutation test, FDR threshold at q = 0.05, peaks more than 24 mm apart | 9 |
Rule | No | Li and Yang (2012) | Task set: categorize glass patterns as radial or concentric when stimuli vary on angle vs. categorize glass patterns as radial or concentric when they vary on signal level | 9 mm radius (voxels 3 × 3 × 3 mm) | p < .01, cluster correction using Monte Carlo simulation | 20 |
Motor | Yes | Soon et al. (2008) | Left vs. right button press intention (before conscious decision indicated) | 3 voxel radius | p < .05 FWE | 14 |
Motor | Yes | Soon et al. (2008) | Left vs. right button press (execution) | 3 voxel radius | p < .05 FWE | 14 |
Motor | Yes | Soon et al. (2008) | Left vs. right response preparation (cued when to make choice) | 3 voxel radius | p < .001 uncorr | 14 |
Motor | Yes | Bode and Haynes (2009) | Motor response (leftward vs. rightward joystick movement) | 4 voxel radius | p < .05 FWE | 12 |
Motor | Yes | Woolgar, Thompson, et al. (2011) | Button press response (index vs. middle finger) | 5 mm radius | p < .001 uncorr | 17 |
Motor | Yes | Bode et al. (2012) | Button press response (index/middle/ring finger of right hand) | 4 voxel radius | p < .0001 uncorr | 14 |
Motor | Yes | Bode et al. (2013) | Button press (right index vs. middle finger) | 4 voxel radius | p < .05, FWE cluster corr | 15 |
Motor | No | Carp et al. (2011) | Left vs. right index finger tapping | 12 mm radius | p < 1e−7, k = 50 | 37 |
Motor | No | Colas and Hsieh (2014) | Motor bias (whether participant will later press left or right button, operated with index finger of each hand, prestimulus display) | 5 voxel sided cube | p < .025 | 14 |
Motor | No | Colas and Hsieh (2014) | Left vs. right button press (index finger of each hand) | 5 voxel sided cube | p < .01, k = 9 | 14 |
Motor | No | Huang et al. (2014) | Left vs. right key press (left vs. right hand) | 5 voxels (15 mm) cube | p < .01 cluster-corrected | 14 |
corr = corrected; uncorr = uncorrected; FDR = false discovery rate correction; FWE = family-wise error correction; k = cluster extent threshold.
Analyses pertaining to the discrimination of visual stimuli included discrimination of stimulus orientation, position, color, and form. Additional analyses pertaining to the semantic category of the visual stimulus (e.g., animals vs. tools; Simanova, Hagoort, Oostenveld, & van Gerven, 2014) and stimuli that were consistently associated with different rewards (e.g., face vs. currency, where a picture of currency indicated a monetary reward; Clithero, Smith, Carter, & Huettel, 2011) were included in our lenient categorization but excluded from the strict categorization. In our strict categorization, we also excluded two further studies in which there was a possibility that the visual stimulus could evoke representation of motor actions. These were videos of head turns (Carlin, Rowe, Kriegeskorte, Thompson, & Calder, 2012) and photos of hands in rock/paper/scissor pose (Vickery, Chun, & Lee, 2011).
Analyses pertaining to the coding of auditory information included discrimination of the direction of auditory motion, pitch, loudness, and melody. Analyses pertaining to the semantic category of sound (e.g., animals vs. tools; Simanova et al., 2014) or emotion of vocal expression (Kotz, Kalberlah, Bahlmann, Friederici, & Haynes, 2013) were also included in our lenient categorization and excluded from the strict categorization.
Analyses pertaining to the discrimination of task rules included discrimination of different stimulus–response mappings (e.g., Bode & Haynes, 2009), intended tasks (e.g., addition vs. subtraction; Haynes et al., 2007) and task set (e.g., attend to motion vs. color vs. size; Zhang, Kriegeskorte, Carlin, & Rowe, 2013). Two analyses were included in our lenient categorization and excluded from the strict categorization. One was an analysis that discriminated a dual task from a single task (Gilbert, 2011), which was excluded from the strict categorization because of the obvious confound with difficulty (for discussion, see Woolgar, Golland, & Bode, 2014; Todd, Nystrom, & Cohen, 2013), and the other pertained to discrimination of task set where the stimuli were very similar but not identical between the two tasks (Li & Yang, 2012).
Analyses pertaining to the discrimination of motor responses included discrimination of different button presses and the direction of joystick movement during response preparation and execution. One analysis that discriminated between left and right finger tapping (Carp, Park, Hebrank, Park, & Polk, 2011) was also excluded from the strict categorization, because it was not clear whether the side to tap was confounded with a visual cue. Two further studies were excluded from our stricter analysis, because it was unclear which of two possible motor responses was modeled (Colas & Hsieh, 2014; Huang, Soon, Mullette-Gillman, & Hsieh, 2014).
Analyses
Our first analysis quantified the prevalence of visual, auditory, rule, and motor information coding in different brain networks. We focused on Visual, Auditory, and Motor networks (capitalized to distinguish from visual, auditory, and motor task features), the frontoparietal MD network (Fedorenko et al., 2013; Fox et al., 2005; Duncan & Owen, 2000), and the DMN (Fox et al., 2005; Raichle et al., 2001). Our definition of the MD network was taken from the average activation map of Fedorenko et al. (2013), which is freely available online at imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem. This map indicates the average activation for high relative to low demand versions of seven tasks including arithmetic, spatial and verbal working memory, flanker, and Stroop tasks. Thus, the MD network definition is activation based: It indexes regions that show a demand-related univariate increase in activity across tasks. The map is symmetrical about the midline because data from the two hemispheres were averaged together in the original paper (Fedorenko et al., 2013). We used the parcellated map provided online in which the original average activation map was thresholded at t > 1.5 and then split into anatomical subregions (imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem). This map includes restricted regions of frontal, parietal, and occipitotemporal cortices as well as a number of small subcortical regions. We included only frontal and parietal regions. The resulting 13 MD regions were located in and around the left and right anterior inferior frontal sulcus (aIFS; center of mass [COM] +/−35 47 19, 5.0 cm3), left and right posterior inferior frontal sulcus (pIFS; COM +/−40 32 27, 5.7 cm3), left and right anterior insula/frontal operculum (AI/FO; COM +/−34 19 2, 7.9 cm3), left and right inferior frontal junction (IFJ; COM +/−44 4 32, 10.1 cm3), left and right premotor cortex (PM; COM +/−28 −2 56, 9.0 cm3), bilateral ACC/pre-SMA (COM 0 15 46, 18.6 cm3), and left and right intraparietal sulcus (IPS; COM +/−29 −56 46, 34.0 cm3). Visual, Auditory, Motor, and DMN networks were taken from the whole-brain map provided by Power et al. (2011), which partitions the brain into networks based on resting state connectivity. The Visual network consisted of a large cluster of 182.6 cm3 covering the inferior, middle, and superior occipital, calcarine, lingual and fusiform gyri, and the cuneus (BA 17, 18, 19, 37), with COM at MNI coordinates 1 −79 6, plus small clusters in left BA 37 (0.22 cm3, COM −54 −65 −21) and right inferior parietal lobe (0.17 cm3, COM 26 −55 55, BA 7). The Auditory network comprised two large clusters in left and right superior temporal gyrus and rolandic operculum (23.4 cm3 in each hemisphere, with COM at −51 −22 12 and 52 −19 10, BA 22, 42). The Motor network comprised a large cluster over the precentral and postcentral gyri, paracentral lobule and SMA (107.7 cm3, COM 1 −25 60, BA 4, 5, 6), and small clusters in the SMA at the midline (0.04 cm3, COM 3 7 72) and left and right middle temporal gyrus (0.07 cm3 with COM −48 −64 11 and 0.02 cm3 with COM 55 −60 6).
The DMN comprised six main clusters around the precuneus (extending to mid cingulate cortex, 43.9 cm3, COM −1 −51 31, BA 7, 23), ventral ACC, and orbital frontal cortex extending dorsally along the medial part of the superior frontal gyrus (107.2 cm3, COM −2 42 24, BA 9, 10, 11, 32), left and right angular gyrus (12.2 cm3, COM −43 −66 34; 10.6 cm3, COM 47 −62 32; BA 39), and left and right middle temporal lobe (18.7 cm3, COM −58 −17 −13; 15.0 cm3, COM 58 −11 −17, BA 21, 20). To ensure that the networks did not overlap, the MD network was masked with each of the other networks. Therefore, our definition of the MD network pertained to voxels that were not part of the Visual, Auditory, Motor, or DMN networks. To serve as a comparison with our five principal networks, all other voxels in the voxelwise map of Power et al. (2011), which corresponds to the automated anatomical labeling (AAL) atlas (Tzourio-Mazoyer et al., 2002) and excludes the cerebellum, ventricles, and large white matter tracts, were considered as a residual Other network. Definitions of the five principal networks are depicted in Figure 1.
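The assignment of reported coordinates to networks can be sketched as follows. Assuming each network is available as a binary mask image in MNI space (the file names below are hypothetical placeholders, not the actual files used), each peak in millimetre coordinates is converted to voxel indices via the image affine and tested for membership, with the MD mask first masked by the other networks so that the definitions do not overlap.

```python
import numpy as np
import nibabel as nib

# Hypothetical mask file names; the real network definitions come from
# Fedorenko et al. (2013) and Power et al. (2011), as described in the text.
mask_files = {"Visual": "visual_mask.nii.gz", "Auditory": "auditory_mask.nii.gz",
              "Motor": "motor_mask.nii.gz", "DMN": "dmn_mask.nii.gz",
              "MD": "md_frontoparietal_mask.nii.gz"}

imgs = {name: nib.load(path) for name, path in mask_files.items()}
masks = {name: img.get_fdata() > 0 for name, img in imgs.items()}

# Remove any overlap: MD voxels also belonging to another network are excluded.
for other in ("Visual", "Auditory", "Motor", "DMN"):
    masks["MD"] &= ~masks[other]

affine_inv = np.linalg.inv(imgs["MD"].affine)   # assumes all masks share one grid

def network_of_peak(xyz_mm):
    """Return the network containing an MNI peak (in mm), or 'Other' if none does."""
    ijk = np.rint(affine_inv @ np.append(xyz_mm, 1))[:3].astype(int)
    for name, mask in masks.items():
        if mask[tuple(ijk)]:
            return name
    return "Other"

# Example: a peak near the calcarine sulcus should fall in the Visual network.
print(network_of_peak([2, -80, 5]))
```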
Number of significant decoding points reported in each network, after correcting for the number of analyses examining coding of each task feature and network volume. Asterisks indicate significance of chi-square or exact binomial goodness of fit tests examining whether there was more coding in each principal network compared with Other for all points (above bars) or for each task feature separately (asterisks on colored bar segments). Statistical testing was carried out for the strict categorization data only. *p < .05, **p < .01, ***p < .00001.
For each of our task features, we counted the number of decoding peaks that were reported in each of our six networks, including Other (any decoding peaks reported using TAL coordinates were converted to MNI152 space using tal2mni; imaging.mrc-cbu.cam.ac.uk/downloads/MNI2tal/tal2mni.m). To visualize these data, for each task feature and network, we divided the relevant tally by the number of reported analyses for that task feature and the volume of the network and plotted them on a stacked bar chart. We visualized the data from the lenient and strict categorization separately. Using data from the strict categorization, we then carried out a series of chi-square analyses to test for statistical differences in the distribution of information coding across the networks. First, we carried out a one-way chi-square analysis on the total number of decoding peaks in each network. For this, the observed values were the raw numbers of decoding peaks (across all task features) reported in each network, and the expected values were set proportional to the volume of each network. This analysis tests whether the distribution of information coding between the networks is predicted by network volume. Second, we carried out a chi-square test of independence to assess whether the distribution of information about each task feature (visual, auditory, motor, and rule decoding points) was independent of network (MD, Visual, Auditory, Motor, DMN, and Other). Finally, where significant effects were found in these first two analyses, we carried out a series of post hoc analyses considering each task feature and region separately to clarify the basis for the effect. For each task feature separately, we compared the distribution of observed coding (tally of decoding points in each network) to that predicted by the relative volumes of the six networks. This was done using chi-square (visual and rule information) or the equivalent exact multinomial goodness-of-fit test for situations where >20% of expected values were <5 (motor and auditory information; implemented in R version 3.2.2 (R Core Team, 2015) using the XNomial package (Engels, 2014)). Lastly, we asked whether the tally of observed coding in each of the five principal networks separately was greater than that in Other, using a one-way chi-square test or a one-tailed exact binomial test where any expected value was <5.
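The counting and chi-square logic described above can be illustrated with SciPy. The per-feature analysis counts below are read off the strict rows of Table 1, but the network volumes and the peak tally are invented placeholders, not the values from this review; the goodness-of-fit test uses expected counts proportional to network volume, and the test of independence asks whether the feature-by-network distribution departs from what the margins predict.

```python
import numpy as np
from scipy.stats import chisquare, chi2_contingency

networks = ["MD", "Visual", "Auditory", "Motor", "DMN", "Other"]
features = ["visual", "auditory", "rule", "motor"]

# Strict-categorization analysis counts per feature, read off Table 1.
n_analyses = np.array([21, 14, 20, 7])

# Network volumes (cm3) and the peak tally (rows = features, cols = networks)
# are ILLUSTRATIVE placeholders, not the values reported in this review.
volumes_cm3 = np.array([160.0, 183.0, 46.8, 107.8, 207.6, 600.0])
tally = np.array([[30, 45,  1,  6, 2, 10],    # visual peaks
                  [ 5,  1, 20,  1, 1,  3],    # auditory peaks
                  [35,  9,  1,  3, 4, 12],    # rule peaks
                  [ 6,  1,  0, 15, 3,  2]])   # motor peaks

# Normalisation used for the stacked bar chart: per analysis, per cm3 of network.
normalised = tally / n_analyses[:, None] / volumes_cm3[None, :]

# One-way goodness of fit: total peaks per network vs. expectation from volume alone.
observed = tally.sum(axis=0)
expected = observed.sum() * volumes_cm3 / volumes_cm3.sum()
chi2, p = chisquare(observed, f_exp=expected)
print(f"goodness of fit vs. volume: chi2 = {chi2:.1f}, p = {p:.2g}")

# Test of independence: is the spread over networks the same for every task feature?
chi2_ind, p_ind, dof, _ = chi2_contingency(tally)
print(f"feature x network independence: chi2 = {chi2_ind:.1f}, dof = {dof}, p = {p_ind:.2g}")
```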
Our second analysis concerned subdivisions within the MD network. Although the observation of the MD activation pattern in response to many kinds of demand emphasizes the similarity of response across these regions, we do expect some functional differences between them (e.g., Fedorenko et al., 2013). To explore this, we first carried out a one-way chi-square test comparing the total number of decoding peaks reported in the seven different MD regions (aIFS, pIFS, AI/FO, IFJ, PM, ACC/pre-SMA, IPS; data pooled over hemispheres). Next, we divided the MD regions into two subnetworks: a frontoparietal (FP) subnetwork, comprising the IPS, IFJ, and pIFS MD regions, and a cingulo-opercular (CO) subnetwork, comprising the ACC/pre-SMA, AI/FO, and aIFS MD regions (Power & Petersen, 2013; Power et al., 2011; Dosenbach et al., 2007). We carried out a one-way chi-square test comparing the total number of decoding peaks reported in the two subnetworks to each other and to coding in Other. We again used chi-square or the equivalent exact test (Freeman & Halton, 1951) to test for independence between subnetwork and task feature and to compare coding of each feature between the two subnetworks. Statistical testing was again carried out for the “strict” categorization data only.
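Where expected counts were small, exact tests replaced the chi-square approximation. A hedged sketch of the one-tailed exact binomial variant is given below; it borrows the 69.2% volume proportion quoted later in the Results, but the peak counts are invented for illustration only.

```python
from scipy.stats import binomtest

# Illustrative counts only: suppose 5 motor peaks fall in the FP + CO subnetworks combined,
# none of them in FP, while FP accounts for 69.2% of the combined subnetwork volume.
k_in_fp, n_total, p_fp_by_volume = 0, 5, 0.692

# One-tailed exact test: does FP code motor information less often than volume predicts?
result = binomtest(k_in_fp, n_total, p_fp_by_volume, alternative="less")
print(f"exact binomial p = {result.pvalue:.4f}")
```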
RESULTS
We summarized 100 independent decoding analyses, reported in 57 published papers, that isolated the multivoxel representation of a single one of the following task features: visual or auditory stimuli, task rules, or motor output. First, we compared information coding in each of our five a priori networks of interest, with Other included as a baseline. The data, shown in Figure 1, suggest a highly structured distribution. For data from the strict categorization (Figure 1B), we used a series of chi-square analyses and exact tests to examine the statistical differences between networks. First we asked whether there was more decoding in some networks compared with others, over and above the differences expected due to variation in network volume (see Methods). Indeed, the total number of decoding peaks varied significantly between the six networks even after network volume was accounted for (χ2 (5, n = 365) = 157.16, p < .00001). Second, we asked whether there was a relationship between the distribution of coding of the different task features and the different brain networks. This chi-square test of independence was also highly significant (χ2 (15, n = 365) = 172.34, p < .00001), indicating a significant relationship between task feature and brain network. We carried out a series of post hoc analyses to clarify the basis for these effects. For this, we considered each task feature separately and compared the number of reported points to the number that would be expected based on the relative volumes of the six networks. For all four task features separately, coding differed significantly between networks (visual information: χ2 (5, n = 153) = 188.37, p < .00001; auditory information: exact test p < .00001; rule information: χ2 (5, n = 151) = 29.47, p = .00002; motor information: exact test p < .00001). For visual information, compared with expectations based on network volume, coding in the Visual (χ2 (1, n = 84) = 140.71, p < .00001), Motor (exact test, p = .015), and MD (χ2 (1, n = 77) = 119.65, p < .00001) networks was significantly more frequent than coding in Other. No such difference was seen for visual information coding in the DMN and Auditory networks (ps > .13). Auditory information coding was reported more frequently in the Auditory (exact test, p < .00001) and MD (exact test, p = .043) networks compared with Other (for DMN, Motor, and Visual networks compared with Other, ps > .68). Rule information coding was reported more frequently in the MD (χ2 (1, n = 99) = 21.06, p < .00001) and Visual (χ2 (1, n = 89) = 5.02, p = .03) networks compared with Other (equivalent tests for DMN, Auditory and Motor networks, ps > .09). Motor information was coded more frequently in the Motor (exact test, p < .00001), MD (exact test, p = .008), and DMN (exact test, p = .019) networks compared with Other (equivalent tests for Visual and Auditory networks, ps > .61). Therefore, relative to Other, the MD network showed more coding of all four task features (visual, auditory, rule, and motor), the DMN showed more coding of motor information, the Motor network showed more coding of motor and visual information, the Visual network showed more coding of visual and rule information, and the Auditory network showed more coding of auditory information.
Our second series of analyses concerned subdivisions within the MD network, again using data from the strict categorization. First, we examined the total number of decoding peaks in each region, combining across task feature (visual, auditory, motor, rule). There was no evidence for a difference between the seven MD regions compared with expectations based on region volume (data collapsed over hemisphere, χ2 (6, n = 93) = 5.77, p = .45). Second, we asked whether there were differences in the reported representational content of two putative subnetworks, an FP subnetwork (IPS, IFJ, and pIFS), proposed to support transient control processes, and a CO network (ACC/pre-SMA, AI/FO, and aIFS), proposed to support sustained control processes (Dosenbach et al., 2007). The data are shown in Figure 2. There was no evidence for a difference in the frequency of information coding in these two subnetworks (χ2 (1, n = 84) = 2.62, p = .11), with encoding in both subnetworks more frequent than encoding in Other (FP: χ2 (1, n = 178) = 124.28, p < .00001; CO: χ2 (1, n = 132) = 23.99, p < .00001). Interestingly, however, there was a significant relationship between subnetwork and information type (Freeman–Halton extension of Fisher's exact test, p = .002), suggesting that the two networks had different representational profiles. The dissociation was driven by more coding of visual information in FP than CO (χ2 (1, n = 41) = 6.65, p = .010) and more coding of motor information in CO than in FP (two-tailed binomial exact test, 0% of motor points in FP was less than the 69.2% predicted based on the two subnetwork volumes, p = .009). Visual points were reported in all FP regions as well as in ACC–pre-SMA and AI/FO, whereas motor points were only reported in ACC/pre-SMA and aIFS. There was no difference in coding between the subnetworks for rule or auditory information, ps > .48. The pattern of results did not change if ROIs were restricted to gray matter or if coordinates reported in TAL were converted to MNI using the tal2icbm_spm routine provided with GingerALE (www.brainmap.org/icbm2tal/) instead of tal2mni.
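The motor-coding comparison between the subnetworks used a two-tailed exact binomial test against the volume-based expectation that 69.2% of points should fall in FP. A minimal sketch follows; the total number of motor points is a hypothetical placeholder chosen purely for illustration.

```python
# Hedged illustration: exact binomial test of the FP share of motor-coding points
# against the volume-based expectation. The count of points is hypothetical.
from scipy.stats import binomtest

n_motor_points = 4        # hypothetical total motor points across FP + CO
k_in_fp = 0               # motor points observed in the FP subnetwork
p_fp_by_volume = 0.692    # FP share of combined subnetwork volume (from the text)

result = binomtest(k_in_fp, n=n_motor_points, p=p_fp_by_volume, alternative="two-sided")
print(f"p = {result.pvalue:.3f}")
```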
Number of significant decoding points reported in each MD subnetwork after correcting for the number of analyses examining coding of each task feature and subnetwork volume. Asterisks indicate significance of chi-square or exact binomial goodness of fit tests examining whether there was more coding in each subnetwork compared with Other for all points (above bars) or for each task feature separately (asterisks on colored bar segments) and comparing coding of each task feature between the two subnetworks (asterisks above colored horizontal lines). Statistical testing was carried out for the strict categorization data only. *p < .05, **p < .01, ***p < .00001.
To aid the reader in visualizing the data, we generated a whole-brain decoding map from the lenient categorization. For this, the peak decoding coordinates reported in each analysis were projected onto a single template brain, smoothed (15 mm FWHM Gaussian kernel), and thresholded (≥3 times the height of a single peak). The resulting map indicates the regions most commonly identified as making task-relevant distinctions in the literature. As can be seen in Figure 3, regions of maximum reported decoding corresponded well with our a priori networks. Information coding was frequently reported in the MD network (bilateral ACC/pre-SMA, right AI/FO, left IFJ, left and right aIFS, right pIFS, left PM, and left and right IPS), the Visual network (BA 18/19) extending to inferior temporal cortex, the Auditory network (left and right superior temporal gyrus), and the Motor network (left and right precentral and postcentral gyri). Additional small regions of frequent decoding were found in the dorsal part of the right middle frontal gyrus (BA 9/8), the ventral part of the right inferior frontal gyrus (BA 45/47), a ventral part of the left precuneus (BA 30), and the right temporoparietal junction (BA 21). We similarly generated whole-brain decoding maps for each task feature separately (using a lower threshold of 1.2 times the height of a single peak to account for the smaller number of data points in this visualization). As can be seen in Figure 4, the result was a reassuring picture in which visual information was predominantly encoded in the visual cortex, with some additional contribution from the frontal and parietal lobes, auditory information was predominantly reported in the auditory cortex, and motor information was primarily coded in the motor cortices. Rule was the most diffusely coded task feature, represented in frontal, parietal, and occipitotemporal cortices. These maps did not change markedly if the strict categorization data were used instead.
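A minimal sketch of this map-construction procedure is given below. It is not the authors' actual pipeline: the grid dimensions, voxel size, and peak coordinates are placeholders, but the steps (stamp each peak into an empty volume, smooth with a Gaussian kernel, threshold relative to the height of a single smoothed peak) follow the description above.

```python
# Hedged illustration of the peak-frequency map: placeholder grid and peak coordinates.
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_size_mm = 2.0
fwhm_mm = 15.0
sigma_vox = fwhm_mm / voxel_size_mm / 2.355        # convert FWHM (mm) to Gaussian sigma (voxels)

volume = np.zeros((91, 109, 91))                   # MNI-like grid (placeholder dimensions)
peaks_vox = [(30, 40, 50), (60, 70, 35), (45, 55, 60)]  # hypothetical peak coordinates (voxel space)
for x, y, z in peaks_vox:
    volume[x, y, z] += 1.0                         # stamp each reported peak into the volume

smoothed = gaussian_filter(volume, sigma=sigma_vox)

# Height of one isolated smoothed peak, used to set the threshold.
single_peak = np.zeros((61, 61, 61))
single_peak[30, 30, 30] = 1.0
single_peak_height = gaussian_filter(single_peak, sigma=sigma_vox).max()

# Keep only regions where roughly three or more smoothed peaks overlap.
frequency_map = np.where(smoothed >= 3 * single_peak_height, smoothed, 0.0)
```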
Brain regions where significant decoding of visual, auditory, rule, and motor information was most frequently reported in the literature. Areas of maximal decoding are shown rendered on left and right hemisphere and on the medial surface (x = −4). To create this visualization, all the decoding peaks were projected onto a single template brain, smoothed, and summed, and the resulting image was thresholded at 3 times the maximum height of a single smoothed peak.
Brain regions where significant decoding of (A) visual, (B) auditory, (C) rule, and (D) motor information was most frequently reported in the literature. To create this visualization, the decoding peaks for each task feature (lenient categorization) were projected onto a single template brain, smoothed, and summed, and the resulting image was thresholded at 1.2 times the maximum height of a single smoothed peak. (E) Maps from A to D flattened and overlaid at 50% transparency.
DISCUSSION
The human brain is a massively parallel complex system. In the past three decades, PET and fMRI technologies have allowed us to probe the function of different parts of this system by assessing what regions are active in different tasks. In the last decade, MVPA has taken this endeavor to a new level, enabling us to study what aspects of stimuli, rules, and responses are discriminated in the local pattern of multivoxel activation in different brain regions. In this paper, we summarized the current state of the literature, drawing on 100 independent analyses, reported in 57 published papers, to describe the distribution of visual, auditory, rule, and motor information processing in the brain. The result is a balanced view of brain modularity and flexibility. Sensory and motor networks predominantly coded information from their own domain, whereas the frontoparietal MD network coded all the different task features we examined. The contribution of the DMN and voxels elsewhere was minor.
The observation that the MD network codes information from multiple domains fits well with an adaptive view of this system. Consistent with the observation of similar frontoparietal activity across many tasks (e.g., Yeo et al., 2015; Fedorenko et al., 2013; Duncan & Owen, 2000; Dosenbach et al., 2006), the proposal is that these regions adapt their function as needed for the task in hand (Duncan, 2001, 2010). To support goal-directed behavior in different circumstances, they are proposed to be capable of encoding a range of different types of information, including the details of auditory and visual stimuli that are relevant to the current cognitive operation (Duncan, 2010). Support comes from single-unit recordings, in which the firing rates of prefrontal and parietal cells have been shown to code task rules (e.g., Sigala, Kusunoki, Nimmo-Smith, Gaffan, & Duncan, 2008; Wallis, Anderson, & Miller, 2001; White & Wise, 1999), behavioral responses (e.g., Asaad, Rainer, & Miller, 1998; Niki & Watanabe, 1976), auditory stimuli (e.g., Romanski, 2007; Azuma & Suzuki, 1984), and visual stimuli (e.g., Freedman & Assad, 2006; Freedman, Riesenhuber, Poggio, & Miller, 2001; Hoshi, Shima, & Tanji, 1998; Rao, Rainer, & Miller, 1997). Further support for an adaptive view of this system comes from the observation that the responses of single units in prefrontal and parietal regions adjust to code different information over the course of single trials (Kadohisa et al., 2013; Stokes et al., 2013; Rao et al., 1997) and make different stimulus distinctions in different task contexts (Freedman & Assad, 2006; Freedman et al., 2001). Accordingly, in human functional imaging, the strength of multivoxel codes in the MD system has been found to adjust according to task requirements, with perceptual discrimination increasing under conditions of high perceptual demand (Woolgar, Williams, & Rich, 2015; Woolgar, Hampshire, Thompson, & Duncan, 2011), rule discrimination increasing when rules are more complex (Woolgar, Afshar, Williams, & Rich, 2015), and representation of visual objects increasing when those objects are at the focus of attention (Woolgar, Williams, et al., 2015). These regions are also thought to make qualitatively different distinctions between visual stimuli in different task contexts (Harel, Kravitz, & Baker, 2014). The data presented here emphasize the extent of flexibility in these regions, suggesting that they are capable of representing task-relevant information from the visual, auditory, rule, and motor domains.
Although each of the individual MD regions is known to respond to a wide range of cognitive demands (e.g., Fedorenko et al., 2013), it nonetheless seems likely that the different regions support somewhat different cognitive functions. Several organizational schemes have been proposed for the pFC, including a rostrocaudal axis along which different regions support progressively more abstract control processes (Badre & D'Esposito, 2007; Koechlin & Summerfield, 2007), ventral and dorsal segregation based on the modality of the information being processed (Goldman-Rakic, 1998), on different types of attentional orienting (Corbetta & Shulman, 2002), or on what the information will be used for (O'Reilly, 2010), and a medial/lateral segregation based on conflict monitoring and task set implementation (Botvinick, 2008), although some of these accounts have been challenged experimentally (Crittenden & Duncan, 2014; Grinband et al., 2011). One prominent subdivision of the MD system draws a distinction between an FP subnetwork, comprising the MD regions on the dorsolateral prefrontal surface and the IPS, and a CO subnetwork, comprising cortex around ACC/pre-SMA, AI/FO, and aIFS. This distinction is borne out in analyses of resting-state connectivity (Power & Petersen, 2013; Power et al., 2011), and the two subnetworks have been ascribed various functions, for example, supporting transient versus sustained control processes (Power & Petersen, 2013; Dosenbach et al., 2007), “executive” versus “salience” systems (Seeley et al., 2007), and transformation versus maintenance of information (Hampshire, Highfield, Parkin, & Owen, 2012). In our data, there was no evidence for differences in the frequency with which information coding was reported in the seven (bilateral) MD regions considered separately. Subdividing the MD system into FP and CO subnetworks also yielded comparable levels of coding overall in each subnetwork. However, there was a significant difference in the profile of task features coded by these two subnetworks, with more coding of visual information in FP than in CO and more coding of motor information in CO than in FP. In CO, motor points were reported both in the ACC/pre-SMA region, which is known to support motor function, and in the aIFS. Clarification of the basis of the subnetwork coding difference, and how we should interpret it, will require further work.
Visual, auditory, and motor regions principally coded information from their own domain. However, the visual and motor networks also showed some domain generality, with coding of other task features. Particularly salient was the overlap between the maps for visual and rule information in the visual cortex (Figure 4E). In our review, it was difficult to completely rule out confounds between domains. For example, task rules were usually cued visually, meaning that the visual properties of the cues, as much as representation of the abstract rules per se, could drive discrimination between rules. However, there are some cases of rule coding in the visual cortex where this explanation is not sufficient. For example, we previously reported that discrimination between two stimulus–response mapping rules in the visual cortex generalizes over the two visual stimuli used to cue each rule (Woolgar, Thompson, et al., 2011). Similarly, Zhang et al. (2013) found that rule discrimination in the calcarine sulcus generalized over externally cued and internally chosen rules, and Soon, Namburi, and Chee (2013) reported rule discrimination in the visual cortex when rules were cued with an auditory cue. In some cases, rule discrimination in the visual cortex may reflect different preparatory signals, for example, if the two rules direct attention to different visual features (e.g., Zhang et al., 2013) or object categories (e.g., Soon et al., 2013), but this is not always the case: the two rules of Woolgar, Thompson, et al. (2011) required attention to the same features of identical visual stimuli. Intriguingly, both rule and response coding has previously been reported in the firing rates of single units in V4 of the macaque visual cortex (Mirabella et al., 2007).
In the motor cortex, the majority of reported coding was for discrimination between motor movements, but this region also showed appreciable coding of visual stimuli. Interestingly, population level responses in the primary motor cortex of the macaque have been reported to encode visual stimuli and stimulus–response mapping rules (e.g., Riehle, Kornblum, & Requin, 1994, 1997; Zhang, Riehle, Requin, & Kornblum, 1997). In the MVPA papers we studied, it was often difficult to say precisely what aspects of a stimulus underpinned a given multivoxel discrimination. For example, visual presentation of a familiar object might evoke representation of its associated properties in other sensory domains (e.g., implied somatosensory properties when watching manual exploration of objects; Kaplan & Meyer, 2012). We excluded any papers in which there were obvious associations between our task features, and in our stricter analysis, we also excluded any studies in which higher-level features such as semantic category differed between decoded items, or cases where items might evoke representations of associated motor actions. The remaining points of visual discrimination in the motor cortex were for discrimination between Gabor patches differing in color and spatial frequency (Pollmann, Zinke, Baumgartner, Geringswald, & Hanke, 2014), the spatial location of a target (Kalberlah, Chen, Heinzle, & Haynes, 2011), radial versus concentric glass patterns (Mayhew & Kourtzi, 2013; Mayhew, Li, Storrar, Tsvetanov, & Kourtzi, 2010), and between two abstract shapes cuing the same rule (Reverberi, Gorgen, & Haynes, 2012a). In one study, radial and concentric patterns had been associated with differential button presses during training, although during scanning, participants performed an unrelated task (Mayhew et al., 2010). In all other cases, any button press responses given by participants were orthogonal (Mayhew & Kourtzi, 2013) or unrelated (Pollmann et al., 2014; Reverberi et al., 2012a; Kalberlah et al., 2011; Mayhew et al., 2010) to the visual discrimination.
A few of the studies we included reported multivoxel coding in the DMN. In some cases, the reported discrimination in the DMN reflected participant intentions, such as coding of internally selected task choices (Momennejad & Haynes, 2012; Vickery et al., 2011; Haynes et al., 2007) or externally instructed task rules (Soon et al., 2013; Nee & Brown, 2012) during preparatory periods, the time delay after which participants will self-initiate a switch (Momennejad & Haynes, 2012), and the button which the participant intends to press (Soon, Brass, Heinze, & Haynes, 2008). In other cases, it reflected aspects of active tasks including current rule (Zhang et al., 2013; Reverberi et al., 2012a; Reverberi, Gorgen, & Haynes, 2012b) and stimulus (e.g., orientation of a Gabor [Kahnt, Grueschow, Speck, & Haynes, 2011], concentric versus radial glass patterns [Mayhew & Kourtzi, 2013], and harmonicity of a sound [Giordano, McAdams, Zatorre, Kriegeskorte, & Belin, 2013]). Interestingly, this network has recently been reported to show activation during task switching and multivoxel discrimination between the tasks being switched to (Crittenden, Mitchell, & Duncan, 2015). Additionally, we recently reported multivoxel discrimination between stimulus–response mapping rules in the precuneus, overlapping a major node of the DMN, during an active stimulus–response task (Woolgar, Afshar, et al., 2015). Those data suggest a role for DMN that is qualitatively different from the internally driven activities such as mind wandering and introspection with which this network is more typically associated (e.g., Buckner, Andrews-Hanna, & Schacter, 2008).
There was more coding of motor information in the DMN than in Other, but all five DMN motor-coding points came from a single study (Soon et al., 2008). Four of these points corresponded to discriminatory activation in preparation for a left versus right button press, at a time point before the participant had indicated their conscious intention to press a button, and the remaining point was for response preparation when participants were cued to make a choice. There were no motor-coding points in the DMN during button press execution.
An important challenge for MVPA is to account for variables that differ between conditions on an individual participant basis, such as differences in RT (Woolgar et al., 2014; Todd et al., 2013). Because MVPA is usually carried out at the level of individual participants, with a directionless summary statistic (e.g., classification accuracy) taken to the second level, any effect of difficulty, effort, attention, time on task, trial order, and so on will not average out at the group level. This may be a particular concern in regions such as the MD and DMN networks, which are known to show different overall activity levels according to task demand. It is difficult to estimate the extent to which these factors have contributed to the data analyzed here. Some of the included studies matched their conditions for difficulty (e.g., Zhang et al., 2013), explicitly accounted for differences in RT in their analysis (e.g., Woolgar, Thompson, et al., 2011), or used designs in which difficulty was unlikely to artifactually drive coding (e.g., passive viewing; Kaplan & Meyer, 2012), but many did not. Other studies sought to account for univariate effects of difficulty that could drive multivariate results, for example, by normalizing the multivoxel patterns to remove overall activation differences between conditions at the level of individual participants (e.g., Gilbert, 2011). However, because the effect of difficulty would not necessarily manifest as an overall activation difference, this approach could still fail to remove the effect of difficulty on decoding. In our stricter analysis, we excluded analyses in which there was an obvious difference in difficulty between discriminated conditions, but most studies did not report whether there were any differences between conditions on an individual participant basis. Note, though, that we have previously examined the extent to which trial-by-trial differences in RT contribute to decoding in empirical data and found the contribution to be minor (Crittenden et al., 2015; Erez & Duncan, 2015; Woolgar et al., 2014).
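One form that the pattern normalization mentioned above can take is removal of each trial's mean activation before classification, so that a purely univariate difference between conditions cannot, on its own, drive decoding. The sketch below illustrates that single step on hypothetical data; it is not necessarily the procedure used in the cited studies.

```python
# Hedged illustration: remove each trial's mean activation from its multivoxel pattern
# before decoding. Data are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(40, 200))    # 40 trials x 200 voxels for one participant (hypothetical)

# Subtract the across-voxel mean of each trial, removing overall activation differences.
normalized = patterns - patterns.mean(axis=1, keepdims=True)
# 'normalized' would then enter the usual cross-validated decoding pipeline.
```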
We summarized 100 independent analyses, reported in 57 published papers, that isolated the multivoxel representation of visual and auditory sensory input, task rules, or motor output. The results confirm the power of the MVPA method, with predominant coding of visual, auditory, and response distinctions in the expected sensory and motor regions. Outside sensory and motor areas, the results were also structured, with a specific network of frontal and parietal regions involved in coding several different types of information. Consistent with the observation of similar frontoparietal activity across many tasks and the suggestion that neurons in these regions adapt their function as needed for current behavior (Duncan, 2001), frontoparietal cortex codes information from across sensory and task domains.
Acknowledgments
This work was supported by the Australian Research Council's (ARC) Discovery Projects funding scheme (grant no. DP12102835 to A. W. and J. D.). A. W. is a recipient of an ARC Fellowship (Discovery Early Career Researcher Award, DECRA, grant no. DE120100898), J. J. is a recipient of an International Macquarie University Research Excellence Scholarship, and J. D. is supported by the Medical Research Council (United Kingdom) intramural program (grant no. MC-A060-5PQ10). The authors thank Jonathan Power for providing the canonical partition of resting state networks.
Reprint requests should be sent to Alexandra Woolgar, Perception in Action Research Centre and Department of Cognitive Science, Macquarie University, Sydney, New South Wales 2109, Australia, or via e-mail: alexandra.woolgar@mq.edu.au.
Note
In two cases, coordinates were not reported, but a list of peaks was sent by e-mail to A. W.