Abstract

Most contemporary theories of semantic memory assume that concepts are formed from the distillation of information arising in distinct sensory and verbal modalities. The neural basis of this distillation or convergence of information was the focus of this study. Specifically, we explored two commonly posed hypotheses: (a) that the human middle temporal gyrus (MTG) provides a crucial semantic interface because it is interposed between auditory and visual processing streams and (b) that the anterior temporal region—especially its ventral surface (vATL)—provides a critical region for the multimodal integration of information. By utilizing distortion-corrected fMRI and an established semantic association assessment (commonly used in neuropsychological investigations), we compared the activation patterns observed for the verbal and nonverbal versions of the same task. The results are consistent with both hypotheses simultaneously: The MTG and the vATL are activated in common for word and picture semantic processing. Additional planned ROI analyses show that this result follows from two principal axes of convergence in the temporal lobe: lateral (toward MTG) and longitudinal (toward the anterior temporal lobe).

INTRODUCTION

Classical models of conceptualization took the view that concepts were coded as distributed representations across multiple, modality-specific association regions, reflecting the particular sensorimotor and verbal experience associated with each concept (Eggert, 1977). Most contemporary theories of semantic memory adopt a very similar notion but assume that, in addition, there are cortical regions that are crucial for the integration, convergence or distillation of the sensorimotor–verbal “raw ingredients” into coherent, transmodal semantic representations (Lambon Ralph, Sage, Jones, & Mayberry, 2010; Patterson, Nestor, & Rogers, 2007; Rogers et al., 2004; Tranel, Damasio, & Damasio, 1997). The modern literature tends to focus on one of two regions as being a crucial interface area: the middle temporal gyrus (MTG) or the anterior temporal lobe (ATL) region (especially its ventral surface: vATL). Despite the prominence of both hypotheses, the nature of semantic processing in the two areas has rarely, if ever, been probed simultaneously—and therefore, this was the principal aim of the current study.

One major motivation for considering the MTG as core to information convergence is the fact that it is interposed between auditory and visual processing streams (Binder, Desai, Graves, & Conant, 2009). Indeed, an important seed for this hypothesis arises from comparative neurology, which has demonstrated that the primate STS is responsive to multiple sensory inputs, reflecting its multimodal connectivity, and is assumed to be the homologue of the human MTG (Binder et al., 2009; Bruce, Desimone, & Gross, 1981; Seltzer & Pandya, 1978). Consistent with this hypothesis, past lesion analysis of aphasic patients has indicated that verbal-only comprehension impairment is associated with pSTG (superior temporal gyrus) lesions, whereas combined verbal and nonverbal comprehension deficits are found when the lesion extends to include pMTG (as well as other regions, including inferior parietal cortex; e.g., Chertkow, Bub, Deaudon, & Whitehead, 1997—a study that used semantic assessments similar to those adopted in the current fMRI investigation). Finally, a recent repetitive TMS (rTMS) study also found that stimulation of pMTG generated a selective slowing for both verbal and nonverbal versions of a semantic association test (Hoffman, Pobric, Drakesmith, & Lambon Ralph, 2011). Again, this specific study is highly relevant to the present fMRI investigation because it used both the same semantic assessment and control tasks.

There is now a growing consensus that, in addition to modality-specific regions (i.e., those that code information and statistics for a single sensory modality or stimulus type [e.g., verbal vs. nonverbal]), areas within the ATL also contribute critically to semantic memory (Patterson et al., 2007). This reflects convergent evidence from functional neuroimaging (Binney, Embleton, Jefferies, Parker, & Lambon Ralph, 2010; Visser, Jefferies, & Lambon Ralph, 2009; Hauk, Davis, Ford, Pulvermüller, & Marslen-Wilson, 2006; Marinkovic et al., 2003), neuropsychological studies of semantic dementia (Lambon Ralph et al., 2010; Patterson et al., 2007), and rTMS investigations (Lambon Ralph, Pobric, & Jefferies, 2009; Pobric, Jefferies, & Lambon Ralph, 2007). Each of these methodologies has implicated ATL regions in multimodal semantic processing. For example, patients with semantic dementia present with receptive and expressive semantic impairments across all modalities (Piwnica-Worms, Omar, Hailstone, & Warren, 2010; Goll et al., 2009; Luzzi et al., 2007; Coccia, Bartolini, Luzzi, Provinciali, & Lambon Ralph, 2004; Bozeat, Lambon Ralph, Patterson, Garrard, & Hodges, 2000) in the context of bilateral atrophy focused in the ATL (Hodges & Patterson, 2007). Likewise, rTMS investigations have been able to demonstrate not only that left and right ATL regions are implicated in both verbal and nonverbal semantic processing (Pobric, Jefferies, & Lambon Ralph, 2010c) but also that the ATL forms the proposed transmodal representation hub by which various specific sources of information are drawn together (Pobric, Jefferies, & Lambon Ralph, 2010b).

Given the recent acceleration of interest in the ATL and its role in semantic representation, the second aim of the current study was to explore the contribution that each ATL subregion makes to verbal versus nonverbal semantic processing. In a previous study (Visser & Lambon Ralph, 2011), we varied the sensory modality of the materials (auditory vs. visual) and observed multimodal semantic processing in the ventrolateral surface of the ATL but auditory-only activation in anterior STS (aSTS). In the present investigation, we explored a different type of modality variation, namely verbal versus nonverbal stimuli. The processing of pictures requires visual decoding and semantic activation. In contrast, the processing of written words involves not only visual decoding but, for semitransparent orthographies such as English at least, automatic activation of acoustic–phonological regions, which can then also activate semantic information (Spitsyna, Warren, Scott, Turkheimer, & Wise, 2006).

The ATL consists of several anatomically distinct regions (including the fusiform gyrus [FG], temporal pole, inferior temporal gyrus [ITG], MTG, and STG), and it is not clear whether they all contribute in the same way. Consequently, the current study investigated each specific ATL region with regard to the activation induced by a semantic task that varied stimulus type (verbal vs. nonverbal). We achieved this through a set of a priori ROIs (the gyral divisions of the ATL) and contrasted them with the same set of regions in the posterior temporal lobe. The neuropsychological literature indicates that posterior temporal lesions result in stimulus-specific impairments (i.e., where recognition and comprehension are impaired for one stimulus type but not others, as in visual agnosia or word deafness, for example), whereas damage to the ATL is associated with pan-modal semantic impairments (Karnath, Ruter, Mandler, & Himmelbach, 2009; Patterson et al., 2007; Griffiths, 2002).

Closely related ideas can be found in computational models of semantic representation. Specifically, a number of models include a representational hub enabling cross-modal translation between modality-specific regions and extraction of modality-invariant statistical structures (the hub-and-spoke model; Rogers et al., 2004; Plaut, 2002). The Plaut (2002) framework incorporated the notion that there is a gradient of convergence across this representational layer. If this is true, then the BOLD signal in ATL hub regions should respond equally to all stimulus types and sensory modalities; in contrast, areas further away (e.g., in the posterior temporal lobe) should show more stimulus- and modality-specific responses. The ROI comparisons of activation in various subregions of posterior versus ATL allowed us to test the neural basis of this hypothesis and to investigate, simultaneously, the processing role of the MTG in this context.

METHODS

The core questions to be explored in this study rely on the ability to detect signal in anterior as well as posterior temporal regions. Past neuroimaging studies have varied considerably in the likelihood that semantic processing generates signal in the ATL. A recent meta-analysis of the neuroimaging literature on semantic memory found that this variability reflected a number of methodological and technical issues (Visser, Jefferies, & Lambon Ralph, 2009). These included a reduced field of view (which, when aligned to include the top of the brain, tends to miss the ATL and, in particular, the basal region that is activated by semantic tasks; e.g., Visser & Lambon Ralph, 2011; Binney et al., 2010); the baseline contrast (very low-level baselines such as “rest” are less likely to demonstrate ATL-related activations, perhaps because during “rest” participants engage in silent speech and other language-semantic mental processes; McKiernan, D'Angelo, Kaufman, & Binder, 2006; Binder et al., 1999); and the imaging modality (PET studies were significantly more likely than fMRI to generate ATL activations). This latter aspect most likely reflects EPI signal distortion and loss due to varying magnetic susceptibility, which is most pronounced in the anterior, inferior, and polar aspects of the temporal lobes and in other regions including OFC (Weiskopf, Hutton, Josephs, & Deichmann, 2006).

As a consequence, this study was designed to avoid or reduce these challenges so that we could be more confident of observing activation in ATL areas and thus probe the nature and function of regions within the ATL more systematically. Specifically, we employed a full field of view, an active baseline task, and a spatial distortion correction based on spin-echo EPI (Embleton, Haroon, Lambon Ralph, Morris, & Parker, 2010). Critically for the present investigation, this imaging “recipe” (in the context of modern MR scanners with parallel head coils, improved shimming, etc.) has allowed us to observe reliable signal even in the most demanding regions (inferior, anterior temporal areas) for a number of different semantic tasks (Visser & Lambon Ralph, 2011; Binney et al., 2010).

In addition, in an attempt to strengthen the comparability between fMRI, neuropsychological investigations, and rTMS explorations, we selected established neuropsychological assessments as the basis for the active semantic tasks in the present fMRI study (in a recent investigation, we found that this approach can generate important, convergent results; Binney et al., 2010). These tests (and their matched, nonsemantic control tasks—for subtraction) have been used successfully to explore verbal and nonverbal semantic processing both in semantic dementia and stroke aphasia (Bozeat et al., 2000; Chertkow et al., 1997) and in rTMS explorations of the lateral ATL and pMTG (Hoffman et al., 2011; Pobric et al., 2010c). A similar task was also used in an early seminal PET-imaging study of semantic processing (Vandenberghe, Price, Wise, Josephs, & Frackowiak, 1996), which highlighted multimodal semantic processing throughout the temporal lobe. Given the success and utility of this semantic task across neuropsychology, rTMS, and PET imaging, we adopted the same assessment and used the higher spatial resolution of distortion-corrected fMRI to probe the relative contribution of specific anterior and posterior temporal lobe regions to the verbal and nonverbal versions of this task.

Task and Stimuli

Fifteen participants (right-handed; five men) were asked to perform the word and picture versions of the Camel and Cactus task (CCT) and the Pyramids and Palm Trees test (PPT; Bozeat et al., 2000; Howard & Patterson, 1992). These are established neuropsychological tests that permit an assessment of verbal or nonverbal comprehension (by varying whether the concepts are represented as pictures or their written names; Jefferies & Lambon Ralph, 2006; Bozeat et al., 2000). In these tests of semantic association, participants are required to match a probe concept (e.g., pyramid) to an associated item (choices: palm tree or fir tree). Target choices are presented alongside foils, which come from the same semantic category but are not associated with the probe item. In this imaging study, the probe item was presented at the top of the screen with the choices displayed below. Participants were asked to decide, via a button press, which of the bottom pictures/words was more associated in meaning with the top picture/word.

Performance on these semantic tests was contrasted with difficulty-matched nonsemantic control tasks (also used in the parallel investigation of ATL rTMS: Hoffman et al., 2011; Pobric, Jefferies, & Lambon Ralph, 2010a). These captured the same basic perceptual, motor, and decision requirements found in the main task but did not require any semantic processes. Stimuli were visually scrambled versions of the pictures/words from the semantic task. Like the main task, a probe stimulus was presented at the top of the screen with two choices presented below. One of these was an inverted copy of the probe picture. Participants were asked to indicate, via button press, which stimulus matched the probe item. We adopted the inversion of the probe item because our pilot data indicated that a simple identity match was too easy and resulted in significantly faster decision times in the control than the semantic task. By asking participants to match an inverted copy of the probe, decision times were matched to the semantic task (the equivalency between the semantic and control tasks was replicated in the parallel rTMS studies; Hoffman et al., 2011; Pobric et al., 2010a). These previous results were replicated by the behavioral data collected in the present fMRI study. Analyses of these data showed that there were no significant differences between the word and picture semantic tasks [pictures: mean RT = 2391 msec (SD = 349.7 msec); mean accuracy = 84% (SD = 5%); words: mean RT = 2368 msec (SD = 327.5 msec); mean accuracy = 88% (SD = 5%); all paired t tests, ns]. Likewise, the control tasks were at least as demanding as the target semantic tasks [pictures: mean RT = 2617 msec (SD = 248.0); mean accuracy = 90% (SD = 10%); words: mean RT = 2333 msec (SD = 315.1); mean accuracy = 90% (SD = 3%)].
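The behavioral comparisons above are paired-samples t tests across the 15 participants. As a minimal sketch of this analysis (with synthetic per-participant RTs standing in for the real subject data, which are not reported here), the test statistic can be computed as:

```python
import math
import random

# Hypothetical per-participant mean RTs (msec) for 15 participants,
# loosely matched to the reported group means (pictures ~2391 msec,
# words ~2368 msec). The real analysis used the actual subject data.
random.seed(0)
rt_pictures = [random.gauss(2391, 350) for _ in range(15)]
rt_words = [random.gauss(2368, 328) for _ in range(15)]

def paired_t(a, b):
    """Paired-samples t statistic and degrees of freedom (n - 1)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

t, df = paired_t(rt_pictures, rt_words)
print(f"t({df}) = {t:.2f}")
```

With n = 15 pairs, the test has 14 degrees of freedom, matching the t(14) statistics reported in the Results.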

Experimental Design

We used a blocked design, which included 120 blocks of 21 sec each. The CCT-derived elements included 11 blocks for each stimulus type (i.e., pictures, words, scrambled pictures, and scrambled words), resulting in 44 blocks. The PPT elements allowed us to generate four blocks for each stimulus type, resulting in 16 blocks. In addition, each task block was preceded by a rest block in which participants focused on a fixation cross, resulting in 60 rest blocks. We used pilot data to guide the trial time for each task. Because the CCT is harder than the PPT, the CCT blocks included three trials of 7 sec each, with a stimulus presentation time of 6 sec and a fixation point of 1 sec. The PPT blocks included four trials with a stimulus presentation of 4750 msec and a fixation point of 500 msec.
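These design parameters are internally consistent, which a few lines of arithmetic confirm: both trial schedules sum to the stated 21-sec block length, and the task and rest blocks sum to 120.

```python
# Cross-check of the block-design arithmetic described above:
# 4 stimulus types per test, one rest block per task block.
cct_blocks = 11 * 4          # pictures, words, scrambled pictures, scrambled words
ppt_blocks = 4 * 4
task_blocks = cct_blocks + ppt_blocks      # 44 + 16 = 60
rest_blocks = task_blocks                  # one fixation block before each task block
total_blocks = task_blocks + rest_blocks   # 120 blocks in total

cct_block_ms = 3 * (6000 + 1000)   # 3 trials x (6 sec stimulus + 1 sec fixation)
ppt_block_ms = 4 * (4750 + 500)    # 4 trials x (4750 msec stimulus + 500 msec fixation)

print(total_blocks, cct_block_ms, ppt_block_ms)
```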

Image Acquisition

All imaging was performed on a 3T Philips Achieva scanner using an eight-element SENSE head coil with a SENSE factor of 2.5. The single-shot SE-EPI fMRI sequence included 42 slices acquired in ascending order (echo time = 70 msec, repetition time = 4075 msec, acquisition matrix = 96 × 96, reconstructed resolution = 2.5 × 2.5 mm, and slice thickness = 3 mm). The experiment included three separate runs, in each of which two fMRI acquisitions were made. To compute a spatial remapping matrix, a prescan was obtained with interleaved dual-direction phase encoding while the participant was at rest (20 image volumes acquired: 10 with left-to-right phase encoding (KL) and the same number with the opposite right-to-left (KR) phase encoding). This was followed by the main fMRI sequence of 208 time points with a single phase-encoding direction (KL), during which the functional task was performed. A high-resolution T2-weighted turbo spin-echo scan with in-plane resolution of 0.938 mm and slice thickness of 2.1 mm was also obtained as a structural reference to provide a qualitative indication of distortion-correction accuracy.

Distortion Correction

The spatial remapping correction was computed using the method developed and applied elsewhere (Embleton et al., 2010). In brief, mean KL and KR images were produced from the 10 postreconstruction KL and 10 KR direction images acquired in the prescan. During the correction process, a spatial transformation matrix for transforming the mean KL image into corrected space was obtained at intervals of 0.1 pixels in the phase-encoding direction, resulting in a shift matrix of size 96 × 960 × 42. The 208 time points in the functional acquisition were then corrected by first registering each 3-D volume to the original distorted mean KL volume using a six-degrees-of-freedom translation and rotation algorithm (FLIRT, FMRIB Software Library, Oxford) and then applying the matrix of pixel-shift values to the registered images. This resulted in three distortion-corrected data sets of 208 volumes each, maintaining the original temporal spacing and repetition time of 4075 msec.
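At its core, this correction step resamples each phase-encoding column of the distorted images according to the precomputed pixel-shift map. The following numpy sketch illustrates that idea only, assuming simple 1-D linear interpolation on a toy volume; the actual method of Embleton et al. (2010) differs in detail (e.g., the 0.1-pixel shift grid and associated intensity handling).

```python
import numpy as np

def apply_phase_shift(volume, shift_map):
    """Resample each phase-encoding column of a distorted EPI volume.

    volume:    (nx, ny, nz) distorted image (ny = phase-encoding axis).
    shift_map: (nx, ny, nz) per-voxel shift, in pixels, giving where in
               the distorted column each corrected voxel originates.
    Illustrative 1-D linear-interpolation resampling, not the exact
    correction pipeline of Embleton et al. (2010).
    """
    nx, ny, nz = volume.shape
    corrected = np.empty_like(volume, dtype=float)
    y = np.arange(ny, dtype=float)
    for i in range(nx):
        for k in range(nz):
            src = y + shift_map[i, :, k]      # distorted-space sample points
            corrected[i, :, k] = np.interp(src, y, volume[i, :, k])
    return corrected

# Toy example: a 4 x 8 x 2 volume with a uniform 1-pixel shift.
vol = np.tile(np.arange(8.0), (4, 1))[:, :, None].repeat(2, axis=2)
shifts = np.ones_like(vol)
out = apply_phase_shift(vol, shifts)
```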

SPM Analyses

Image analysis was carried out with SPM5 software (Wellcome Department of Imaging Neuroscience, London, UK; www.fil.ion.ucl.ac.uk/spm). Preprocessing of the functional MR images included movement correction, slice-time correction, coregistration with the anatomical data, and smoothing with an 8-mm FWHM Gaussian filter. For the statistical analyses, conditions of interest corresponding to pictures, words, scrambled pictures, and scrambled words were modeled using a box-car function convolved with the canonical hemodynamic response function. Low-frequency drifts were removed using a temporal high-pass filter (default cutoff of 128 sec). Six additional covariates, corresponding to the movement parameters obtained during realignment of the functional scans, were included to regress out movement effects. After estimation of the model parameters for each subject, t tests were used for the second-level analyses, and the conjunction analysis was tested using a full factorial design. The CCT and PPT tasks were combined for analysis because no significant differences were found when contrasting the two tasks. Subtraction analyses were used to examine activation for semantic processing of words and/or pictures over the matched control tasks. The resultant contrast maps were thresholded at p < .001 with an extent threshold of 30 voxels, and results are presented at cluster-level inference. As well as comparing activation between specific conditions, we also investigated which brain regions were common to both verbal and nonverbal semantic processing (all semantic conditions > baseline tasks). This global contrast can reflect common processing but can also result from strong activation in only a subset of the conditions (Nichols, Brett, Andersson, Wager, & Poline, 2005).
We therefore conducted an additional conjunction analysis, which included only voxels activated for both the word and the picture task (Friston, Penny, & Glaser, 2005; Nichols et al., 2005). The threshold for this analysis was set at p < .001, uncorrected.
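The logic of this kind of conjunction can be sketched simply: unlike the global contrast, a voxel survives only if its statistic exceeds threshold in both the word and the picture contrast (the minimum-statistic conjunction of Nichols et al., 2005). This toy numpy example uses illustrative t values and threshold, not the study's data:

```python
import numpy as np

def conjunction_mask(t_words, t_pictures, t_crit):
    """Minimum-statistic conjunction: a voxel survives only if BOTH
    contrasts exceed threshold, i.e. min(t_words, t_pictures) > t_crit."""
    return np.minimum(t_words, t_pictures) > t_crit

# Toy t-maps for 5 voxels with an illustrative threshold of 3.1.
# Voxels 2 and 4 are strong in only one contrast, so they could pass a
# combined (global) contrast yet fail the conjunction.
t_w = np.array([4.0, 3.5, 1.0, 5.0, 2.0])
t_p = np.array([3.9, 1.2, 4.0, 3.3, 2.5])
mask = conjunction_mask(t_w, t_p, 3.1)
print(mask)  # [ True False False  True False]
```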

A key aim for this study was to explore in detail how different anatomical subdivisions of the temporal lobe contribute to semantic processing. In addition to the whole-brain analyses, therefore, we also used an ROI-based analysis of subregions within the anterior versus posterior temporal lobe. This ROI analysis was conducted using Marsbar (marsbar.sourceforge.net/). This approach mitigates the multiple-comparisons problem and is well suited to testing specific regional hypotheses (Brett, Anton, Valabregue, & Poline, 2002). These ROIs are presented in Figure 2. Following a previous fMRI–patient–rTMS comparative study (Binney et al., 2010), the ROIs were defined using a combination of lesion and neuroanatomical maps. The term ATL does not have a consistent or precise anatomical definition and, in the context of semantic memory, is sometimes used to refer to the collection of temporal lobe regions that are commonly affected in semantic dementia (Binney et al., 2010). Following this observation, Binney et al. utilized the semantic dementia (SD) hypometabolism map from Nestor, Fryer, and Hodges (2006) as a working definition of the semantically related ATL. This was then subdivided into four subregions using the AAL map provided in the Wake Forest University PickAtlas toolbox, namely the fusiform, inferior, middle, and superior temporal gyri (Maldjian, Laurienti, Kraft, & Burdette, 2003; Tzourio-Mazoyer et al., 2002). These same ATL ROIs were used in this study.

Following Plaut's (2002) computational model of graded semantic representations arising from differential connectivity (see above), our hypothesis was that some parts of the temporal lobe would be dominantly involved in stimulus-specific processes whereas others would underlie stimulus-invariant processes. More precisely, regions far from stimulus-specific cortex (such as the ATL) should underlie amodal processes and respond equally to words and pictures. In contrast, regions near the more posterior, stimulus-specific areas should be dominated by semantic processing of that particular stimulus type. To investigate and compare the pattern of activation in the anterior and posterior temporal lobes, we created a posterior temporal mask to cover the regions that are commonly activated by verbal or visual shape processing (Okada & Hickok, 2006a, 2006b; Martin, Wiggs, Ungerleider, & Haxby, 1996), using the AAL (automated anatomical labeling; Tzourio-Mazoyer et al., 2002) temporal lobe mask provided in the Wake Forest University PickAtlas toolbox (Maldjian et al., 2003). Four specific posterior temporal ROIs were generated (shown in Figure 2): fusiform (y = −24 to −60), inferior (y = −24 to −58), middle (y = −21 to −56), and superior regions (y = −19 to −51).

Finally, we generated ROIs to compare the fMRI data to the results of a recent rTMS investigation of the lateral ATL that utilized the same tasks as this fMRI study. ATL rTMS generated a selective slowing of both verbal and nonverbal versions of the semantic tasks but had no effect on the same visual-matching control task (see Pobric et al., 2010a). Accordingly, we used a second set of ROI analyses (based on a 2-cm sphere centered on the mean rTMS stimulation coordinates in Montreal Neurological Institute [MNI] space: 52, 2, −28 and −53, 4, −32) to compare the rTMS results directly against the fMRI results (given that the same semantic and control tasks were used in both).
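Marsbar constructs such spherical ROIs internally; as an illustrative sketch only (not the toolbox code), a sphere ROI over MNI voxel coordinates can be built as below. Note the assumption here that "2-cm sphere" denotes a 20-mm radius; the grid spacing and box size are likewise arbitrary toy values.

```python
import numpy as np

def sphere_roi(center_mm, radius_mm, coords_mm):
    """Boolean mask selecting voxels whose centers lie within
    radius_mm of center_mm (coords_mm: n_voxels x 3, MNI mm)."""
    dist = np.linalg.norm(coords_mm - np.asarray(center_mm, float), axis=1)
    return dist <= radius_mm

def axis(center):
    # Toy 2-mm grid spanning +/- 30 mm around a coordinate.
    return np.arange(center - 30.0, center + 31.0, 2.0)

# Voxel grid in a box around the left ATL stimulation site (-53, 4, -32).
xs, ys, zs = np.meshgrid(axis(-53), axis(4), axis(-32), indexing="ij")
grid = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
mask = sphere_roi((-53, 4, -32), 20.0, grid)
```

The per-condition BOLD effect sizes would then be averaged over the voxels selected by `mask`.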

RESULTS

Peak activations for the different contrasts are summarized in Table 1. The results are presented at cluster-level inference and corrected for multiple comparisons. First, we examined areas involved in common semantic processing of both pictures and words. Activation in the left inferior frontal lobe, left posterior temporal lobe, and the bilateral ATL (aMTG and basal surface) was found when all semantic tasks were combined and contrasted with the control tasks (all semantics > control; see Figure 1A). The relative importance of word versus picture semantic processing is, perhaps, most clearly demonstrated by plotting the beta maps for each condition (see Figure 1B). These show that the core of the peri-sylvian region (in the midst of the middle cerebral artery territory and the center of lesions in patients with aphasia after stroke) loads positively yet selectively for words > baseline (blue overlay), whereas a posterior inferior temporal region exhibits the complementary loading, for pictures > baseline only (red overlay). The common areas (pink overlay) are found, principally, in the same areas that reach statistical threshold (Figure 1A)—covering (i) the anterior to posterior MTG, extending to the angular gyrus (AG); (ii) the basal ATLs bilaterally; (iii) inferior pFC, extending to more dorsolateral regions; and (iv) medial frontal regions. It is striking that the very same set of anatomical regions was highlighted in a large-scale meta-analysis of the imaging literature (Binder et al., 2009), which applied stringent selection criteria for the included studies. Like the present imaging results, this meta-analysis highlighted the importance of inferior pFC and a large arc of temporo-parietal cortex including the entire MTG and AG.
The only area of difference is in the basal temporal regions, which exhibited considerable activation in this and other recent distortion-corrected fMRI investigations (Visser & Lambon Ralph, 2011; Binney et al., 2010; Visser, Embleton, Jefferies, Parker, & Lambon Ralph, 2010) but which played a much more minor role in the meta-analysis of Binder et al. This is not especially surprising, however, given the range of technical issues associated with the likelihood of observing activation in this area (Visser, Jefferies, & Lambon Ralph, 2009). Indeed the inherent problems associated with gradient-echo EPI are such that even with a powerful task, a high-level baseline and a large number of participants, there is still very poor signal-to-noise in this region (see, e.g., Figure 6 of Binder et al., 2010). Furthermore, in the current and associated spin-echo EPI-based studies, it is this same vATL region that requires the greatest amount of correction (e.g., see Figure 2B of Visser et al., 2010).

Table 1. 

Activated Clusters during Semantic Processing for Words and/or Pictures

Contrast   Brain Region (COG)   Voxels   p   MNI Coordinates (x, y, z)
Semantic > control L. middle temporal lobe 380 <.001 −57 −42 −3 
R. ATL 35 .029 42 21 −33 
L. inferior frontal gyrus 137 <.001 −54 27 
L. ATL 103 <.001 −57 −15 −24 
Picture > control L. middle temporal lobe 80 <.001 −57 −45 −3 
R. cerebellum 30 .065 18 −78 −30 
Word > control L. middle temporal lobe 194 <.001 −51 −30 −3 
L. inferior frontal gyrus 217 <.001 −54 33 
R. ATL 30 .064 39 21 −33 
L. ATL 54 <.001 −45 −33 
L. FG 135 .004 −39 −39 −24 
L. AG 46 .010 −42 −69 36 
L. ATL 49 .007 −63 −21 −21 
Picture > word L. occipital lobe 797 <.001 39 −84 15 
R. occipital lobe 354 <.001 −39 −81 
R. inferior parietal gyrus 42 .021 24 −66 39 
Word > picture L. occipital lobe 143 <.001 −9 −78 12 
L. posterior cingulate 111 <.001 −15 −45 18 
R. posterior cingulate 157 <.001 21 −42 18 
L. middle temporal lobe 62 .003 −54 −30 −3 
L. superior frontal gyrus 39 .28 −6 69 
L. pulvinar 101 <.001 −12 −39 

MNI coordinates are the center of gravity of each cluster. p values are presented at cluster-level inference and corrected for multiple comparisons. COG = center of gravity; L. = left; R. = right.

Figure 1. 

(A) Activated regions for the contrast of all semantic tasks > control tasks at different statistical thresholds (voxel extent of 30 in all cases). (B) Maps of the positive beta values (thresholded at >0.1) for pictures > baseline (red) and words > baseline (blue). Areas in common show up as pink when the two overlays are combined. (C) A figure based on the independent meta-analysis of semantic tasks conducted by Binder et al. (2009; elements reproduced with permission). The parallel between the current neuroimaging results [both the statistical maps (A) and the areas in common (B)] and the meta-analysis is clearly apparent, with multimodal activation in AG, the entire length of MTG, inferior pFC, and the basal ATL bilaterally. Note that the greater activation of basal ATL in the current study over that noted in the meta-analysis is very likely to reflect the technical challenges associated with using fMRI in this region (see main text and Visser et al., 2009).


A conjunction analysis showed the same activated clusters (although they were smaller), indicating that these correspond to multimodal or modality-invariant semantic processing areas. The additional voxels found in the all semantics > control contrast but not in the more stringent conjunction analysis might reflect strong activation in one modality but not others. Alternatively, the differences might reflect differential sensitivity to the same underlying activation, given that this form of conjunction analysis is much more stringent than a simple contrast that combines the semantic conditions. To examine these possibilities, we extracted the clusters from the all semantics > control contrast and, within these, directly compared pictures > words and words > pictures. No significant differences between the word and picture versions of the semantic tasks were found, consistent with the idea that these regions correspond to multimodal or stimulus-invariant semantic processing areas.

For the second part of the whole-brain analyses, we investigated which areas showed greater activation for pictures or words (see Table 1). The pictures > words contrast resulted in greater activation in posterior occipital and inferior temporal regions bilaterally. Greater activation for words > pictures was found in left posterior middle and superior temporal areas. It should be noted that this differential pMTG activation overlaps with the pMTG area found in common for words and pictures. This suggests that the pMTG area exhibits stimulus variation but is not stimulus specific: Semantic processing in this region is common to both verbal and nonverbal input but relatively greater for word input. In contrast, processing in occipital and posterior, inferior temporal areas may be relatively specific to nonverbal/picture input. This fits with the observation that damage to this region results in visual agnosia and other recognition deficits (Luzzatti, Rumiati, & Ghirardi, 1998; Humphreys & Riddoch, 1987). In addition, other neuroimaging studies have associated the inferior temporal and fusiform areas with visual form processing (Hocking & Price, 2009; Ishai, Ungerleider, Martin, Schouten, & Haxby, 1999; Martin et al., 1996).

As noted in the Methods, the second phase of our analyses utilized an ROI-based approach to contrast anterior and posterior temporal regions. This ROI analysis also permitted a comparison between these fMRI results and the temporal lobe regions implicated in semantic dementia and rTMS semantic studies. The results for these (bilateral) ROI analyses at the group level are summarized in Table 2 and Figure 2. First, the posterior temporal lobe was predominantly involved in stimulus-specific processes: as noted above, the BOLD response in the inferior temporal regions (i.e., inferior temporal lobe and FG) was higher for pictures than words, whereas the reverse pattern was observed in the superior and middle temporal lobe. Second, this pattern was repeated in the ATL subregions but was less extreme (see Figure 2), consistent with convergence in a rostral direction. Specifically, as expected, stimulus-invariant regions (responding equally to words and pictures) were found in the ATL, namely the inferior and middle temporal gyri. Third, the aSTG (words > pictures) and the anterior FG (pictures > words) showed a stimulus-specific pattern similar to that of the corresponding posterior regions (see Figure 2). We compared the effect sizes for the anterior and posterior ROIs in the STG and the FG separately. A paired-samples t test for the 15 participants confirmed that the effect sizes did not differ significantly between these particular posterior and anterior regions (t(14) = 1.21, p > .24 for the fusiform and t(14) = 1.30, p > .21 for the STG). In contrast, the ITG responded in a stimulus-specific manner in the posterior temporal lobe but became stimulus invariant in a rostral direction (t(14) = 4.57, p < .001). Finally, a comparison of the effect sizes for posterior versus anterior MTG found no significant difference (t(14) = 1.07, p > .30).
This is perhaps not surprising because, although the pMTG was relatively more yoked to verbal processing, its activity was not selective to verbal materials alone (see above). Instead, the shift from posterior to anterior MTG is more graded and subtle: the pMTG responds to semantic processing for both words and pictures (with greater activation for words than pictures), but the region becomes firmly and equivalently responsive to words and pictures in the aMTG—a pattern that, as noted above, aligns with a recent meta-analysis suggesting that a multimodal interface is situated along the entire MTG (Binder et al., 2009).
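For readers who want to see the arithmetic behind these anterior-versus-posterior comparisons, the paired-samples t test on ROI effect sizes can be sketched in plain Python. The per-participant effect sizes below are hypothetical placeholders, not the study's data:

```python
import math

def paired_t(a, b):
    """Paired-samples t test: t = mean(d) / (sd(d) / sqrt(n)),
    where d holds the within-participant differences."""
    assert len(a) == len(b)
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n)), n - 1  # t value, degrees of freedom

# Hypothetical [picture - word] effect sizes for posterior vs. anterior ITG
# in five illustrative participants (invented numbers, NOT the study's data).
posterior_itg = [0.42, 0.35, 0.50, 0.28, 0.45]
anterior_itg  = [0.05, -0.02, 0.10, 0.01, 0.06]

t, df = paired_t(posterior_itg, anterior_itg)
print(f"t({df}) = {t:.2f}")
```

A large t here would mirror the paper's ITG result: the modality difference shrinks from posterior to anterior, so the within-participant differences are consistently non-zero.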

Table 2. 

Pattern of Common versus Modality-specific Activation across Anterior versus Posterior Temporal Regions

ROI | Common Semantic > Control | Picture > Word | Word > Picture

ATL
Fusiform | t = 2.36, p = .02, η² = 0.67 | t = 2.27, p = .02, η² = 0.42 | ns
ITG | t = 3.41, p < .01, η² = 0.78 | ns | ns
MTG | t = 3.57, p < .01, η² = 0.63 | ns | ns
STG | ns | ns | t = 1.77, p = .05, η² = 0.29
rTMS right ATL | t = 2.31, p = .02, η² = 0.62 | ns | ns
rTMS left ATL | t = 3.02, p < .001, η² = 0.96 | ns | ns

Posterior Temporal Lobe
Fusiform | t = 3.64, p < .01, η² = 0.60 | t = 4.37, p < .001, η² = 0.53 | ns
ITG | t = 4.08, p < .001, η² = 0.49 | t = 4.09, p < .001, η² = 0.33 | ns
MTG | t = 5.70, p < .001, η² = 0.70 | ns | t = 2.41, p = .01, η² = 0.12
STG | ns | ns | t = 3.89, p < .001, η² = 0.24

Values corrected for multiple comparisons within MarsBaR.

Figure 2. 

(A) ROIs in the posterior and anterior parts of the temporal lobe, referring to superior (yellow), middle (green), inferior (purple), and fusiform (cyan) regions. These regions were defined for the MarsBaR analysis (see the Methods section for their definition). (B) Plots of the effect size of the [picture minus word] contrast for each subregion (pairing anterior vs. posterior ROIs). Positive values indicate that a subregion was relatively more active for the picture than the word semantic task, whereas negative values denote the reverse pattern (see Figure 1 and Table 1 for the overall semantic task results). An asterisk highlights contrasts that are significantly different from zero. A plus sign indicates a significant difference within an anterior versus posterior pairing. The dual directions of convergence can be observed in these results: (i) verbal versus nonverbal modality differences are more apparent in the posterior than the anterior ROIs (consistent with a caudal → rostral convergence) and (ii) the STG and FG exhibit the greatest (opposite) contrast between words and pictures, while little difference is found in the MTG and ITG (consistent with a lateral convergence: STG → ITG/MTG ← FG).

These four observations lead us to suggest two orthogonal gradients of convergence in the temporal lobe. Within this view, the posterior temporal regions underpin stimulus-oriented semantic processes, aligning with the current literature, whereas the anterior inferior and middle temporal regions are core to transmodal semantic processes. As such, the current results show a caudal-to-rostral gradient of convergence, which is particularly pronounced in the ITG and is present but somewhat weaker in the MTG, STG, and fusiform. The second direction of convergence is orthogonal to the first (coronally oriented), running from superolateral (auditory–verbal) and inferomedial (visual) stimulus-specific processing toward ITG/MTG stimulus invariance. We tested and confirmed this second gradient of convergence by comparing the effect sizes for the anterior ITG and MTG against the anterior fusiform (t(14) = 3.01, p < .01) and the aSTG (t(14) = −2.52, p = .024), respectively.

The stimulus invariance of the anterior ITG and MTG in the current results aligns well with SD patient and TMS data (Lambon Ralph et al., 2009, 2010; Patterson et al., 2007; Pobric et al., 2007). Specifically, in these ATL regions, ITG and MTG responded equally to the word and picture semantic conditions. The same is true for the current fMRI activations in the rTMS ROI (overlapping with the MTG and ITG ROIs). In keeping with these neuroimaging results, the previous rTMS study found that stimulation to either left or right anterolateral areas produced a selective slowing of the picture and word versions of the same task (Pobric et al., 2010a). Furthermore, the atrophy in SD patients is especially pronounced in these inferolateral ATL areas (Nestor et al., 2006; Mummery et al., 2000).

The FG and STG demonstrated a relative rather than absolute difference between words and pictures. The FG exhibited greater activation for the picture task. In line with Plaut's differential connection strength hypothesis, this result fits with clear evidence that there is very strong visual input to the fusiform and other relatively medial, anterior temporal areas (Chao, Haxby, & Martin, 1999; Moore & Price, 1999). In contrast, activation in the STG was higher during the word than the picture task. The aSTG region implicated in this study has been associated with auditory word, written word, and sentence processing in other neuroimaging studies (Spitsyna et al., 2006; Scott & Johnsrude, 2003; Scott, Leff, & Wise, 2003; Scott, Blank, Rosen, & Wise, 2000), as well as with high-order/semantic processing of nonverbal sounds (Visser & Lambon Ralph, 2011; Griffiths & Warren, 2004). On the basis of the current result and these literatures, we suggest that these regions behave in a modality-specific fashion (as opposed to the aITG and aMTG, which respond to all modalities). The STG was the only region that was not significantly activated in the allsemantics > control contrast (see Table 2), suggesting that this region is particularly specialized toward the verbal modality. This may also fit directly with Brodmann's original observation that there is a very strong cytoarchitectural distinction between BA 22 (STG) and the rest of the temporal lobe; in contrast, he noted no absolute boundaries of the same degree between the other anterior regions (BA 21, BA 20, and BA 38; Brodmann, 2005).

These results are consistent with a previous lesion overlap study of verbal versus multimodal comprehension impairments in stroke aphasia (Chertkow et al., 1997). That study found that patients with verbal-only comprehension deficits had lesions confined to the pSTG area, whereas patients with verbal and nonverbal deficits (as measured by a semantic association task similar to the one utilized in this fMRI study) had larger lesions that encroached upon the pMTG (as well as extending further into the parietal lobe [AG]).

DISCUSSION

The neural regions underlying semantic memory have been widely investigated in different research fields, including neuropsychology, neuroimaging, and computational modeling. By combining the theories arising from these different literatures, we can improve our understanding of the role that various temporal lobe regions play in semantic representation. The current study bridged these research fields by utilizing distortion-corrected fMRI and a semantic decision task used in many neuropsychological investigations, recent rTMS studies, and a seminal PET-imaging investigation (Hoffman et al., 2011; Pobric et al., 2010a; Jefferies & Lambon Ralph, 2006; Bozeat et al., 2000; Vandenberghe et al., 1996). The study set out to assess two hypotheses with regard to the integration of different information sources that underpins semantic representation: (a) that the MTG integrates auditory and visual information given that it interposes between the visual and auditory processing streams (and is the human homologue of the primate STS; Binder et al., 2009) and (b) that the ATL region—through its multiple connections to various modality-specific sources (Gloor, 1997; Morán, Mufson, & Mesulam, 1987)—generates transmodal representations and serves as the core "hub" for coherent concepts (Lambon Ralph et al., 2010; Pobric et al., 2010b; Patterson et al., 2007; Rogers et al., 2004). The results clearly support both hypotheses and further suggest that there are two principal directions of information convergence in the temporal lobes: (i) longitudinal (caudal → rostral) and (ii) lateral (STG → ITG/MTG ← FG).

These results relate directly to computational models of semantic memory that suggest (1) the semantic system includes a modality-invariant hub, which extracts pan-modal statistical structure from modality-specific regions (Lambon Ralph et al., 2010; Pobric et al., 2010b; Patterson et al., 2007; Rogers et al., 2004), and (2) the degree of functional specialization depends on the location within this system (Rogers & McClelland, 2004; Plaut, 2002). More specifically, Plaut's (2002) computational model suggested a functional specialization within the semantic system that is driven by the relative distance/connection strength to modality-specific regions. According to this view, regions near modality-specific areas are functionally more specialized for the corresponding input, whereas processing in regions that are equidistant from all inputs becomes modality invariant. We extended this purely computational idea and tested the hypothesis that there is a gradient of convergence along the caudo-rostral axis of the temporal lobe, such that modality-specific information is processed in relatively specialized parts of the posterior temporal lobe, whereas the anterior regions are more modality invariant. The current fMRI results support and extend this joint neuroanatomical-computational hypothesis. The anterior ITG and MTG are characterized by multimodal semantic processing, consistent with the fact that semantic dementia patients (who have multimodal semantic impairment) have considerable atrophy in this same ATL region (Binney et al., 2010; Galton et al., 2001).
In contrast, two ATL subregions exhibited a degree of variation across modalities: (i) the anterior fusiform was relatively more activated by the pictorial task (in keeping with the significant visual input to this region; Chao, Weisberg, & Martin, 2002; Chao et al., 1999; Moore & Price, 1999); (ii) the anterior STG exhibited activation for the verbal but not the pictorial semantic task (aligning with a previous study that observed auditory but not visual semantic processing in this same aSTS area; Visser & Lambon Ralph, 2011). Although the words were presented in orthographic form, it seems most likely that reading utilizes preexisting phonological representations that are primarily formed from auditory experience, and thus the STG is commonly associated with reading (Jobard, Vigneau, Mazoyer, & Tzourio-Mazoyer, 2007; Spitsyna et al., 2006; Harm & Seidenberg, 2004; Patterson & Lambon Ralph, 1999; Just, Carpenter, Keller, Eddy, & Thulborn, 1996). In contrast to the ATL, all posterior temporal regions demonstrated stimulus differences, albeit with graded variation. Thus, the inferior posterior regions were relatively yoked to the pictorial task and the posterior STG to the verbal modality. The posterior MTG was the most graded in its response: it was activated more by verbal than pictorial information, yet it responded significantly to both stimulus types. This observation aligns with the proposal that a multimodal interface is situated along the entire MTG (Binder et al., 2009), although the function of its most posterior aspect is somewhat more yoked to verbal than pictorial information.

These results are consistent with the neuropsychological, TMS, and neuroimaging literatures, which associate the posterior regions with stimulus-specific processes in contrast to the stimulus-invariant semantic role for the ATL. For example, ATL damage in SD patients results in semantic impairments in all modalities, including spoken and written words, pictures, sounds, touch, and smell (Coccia et al., 2004; Rogers et al., 2004; Lambon Ralph & Howard, 2000; Lambon Ralph, Howard, Nightingale, & Ellis, 1998). Likewise, a series of recent rTMS studies indicate that left and right ATL regions are implicated in verbal and nonverbal semantic processing (including one study that used the same semantic and control tasks as the present fMRI study; Pobric et al., 2010a; Lambon Ralph et al., 2009). In contrast, damage to the posterior ITG results in visual agnosia (Karnath et al., 2009; James, Culham, Humphrey, Milner, & Goodale, 2003) whereas word deafness/auditory agnosia is caused by bilateral lesions in the posterior STG (Griffiths, 2002). Likewise, the superior versus ventral contrast in the temporal lobe is mirrored by past neuroimaging studies that have associated the pSTG and pITG with phonological and visual form processing, respectively (Okada & Hickok, 2006a, 2006b; Chao et al., 1999, 2002; Scott et al., 2000; Moore & Price, 1999; Martin et al., 1996; Demonet et al., 1992). Finally, as noted previously, the more general processing of the pMTG found in the current study replicates a recent rTMS study of the same region using the same tasks (Hoffman et al., 2011) as well as studies of patients with multimodal semantic impairment following stroke (Noonan, Jefferies, Corbett, & Lambon Ralph, 2010; Jefferies & Lambon Ralph, 2006; Chertkow et al., 1997).

We should note that, in addition to the ATL and MTG, the current study, like most other semantic investigations (see Figure 1C; Binder et al., 2009), also found significant multimodal activation in the AG and inferior pFC. Past neuropsychological and TMS studies suggest that these regions do not code semantic representations per se but are crucial for "semantic control"—that is, they underpin the executive processes that manipulate and gate our very rich database of semantic knowledge to generate time- and context-appropriate behavior (Corbett, Jefferies, & Lambon Ralph, 2011; Noonan et al., 2010; Jefferies & Lambon Ralph, 2006). The notion that LIPFC involvement might reflect controlled semantic processing was first posed by various fMRI studies (Wagner, Maril, Bjork, & Schacter, 2001; Thompson-Schill, D'Esposito, Aguirre, & Farah, 1997). Although not their focus, these investigations also reported activation in posterior temporal and inferior parietal areas. The importance of prefrontal and posterior temporo-parietal regions in controlled multimodal semantic processing has been underlined by a series of studies of semantic aphasia (Jefferies & Lambon Ralph, 2006; Head, 1926)—patients with multimodal semantic impairment following stroke. Various detailed neuropsychological investigations of verbal and nonverbal tasks indicate that these patients do not have a degradation of semantic representations, as observed in other neurological conditions such as semantic dementia; rather, their impaired performance reflects deficits in the executive control processes that underpin semantic processing (Corbett et al., 2011; Noonan et al., 2010).
These neuroimaging and neuropsychological studies have been supported, more recently, by a series of rTMS investigations of neurologically-intact participants, which have demonstrated that both inferior prefrontal areas (centered on the anterior pars triangularis and pars orbitalis) and posterior temporo-parietal areas underpin semantic control (Whitney, Kirk, O'Sullivan, Lambon Ralph, & Jefferies, 2011; Hoffman et al., 2010).

We finish by turning our attention from neuroanatomical considerations to cognitive models and theories of semantic representation. Specifically, why is it necessary to have any form of transmodal hub for the formation of semantic memory? Classic and some contemporary models of semantic memory (or conceptualisation) assume that modality-specific regions interact directly to give rise to multimodal concepts (for review, see Lambon Ralph et al., 2010; Lambon Ralph & Patterson, 2008). The hub-and-spoke theory of semantic representation builds upon these classic notions by suggesting that, in addition to the multiple, modality-specific sources of information, an ATL hub adds a transmodal representation coding for pan-modal statistical structure (Lambon Ralph et al., 2010; Pobric et al., 2010b; Rogers & McClelland, 2004). The model is best described as “hub-and-spoke” in the sense that conceptual knowledge requires a combination of the transmodal and modality-specific representations (Pobric et al., 2010b; Patterson et al., 2007). Modality-specific information is a crucial source for building concepts but is insufficient by itself. This is because attributes or features combine in complex and nonlinear ways when forming a concept (Lambon Ralph et al., 2010; Wittgenstein, 2001). A transmodal hub provides two crucial “semantic” functions: (a) it allows for the correct, sometimes nonlinear mappings to be learned between modality-specific “attributes” and the relevant concept and (b) these transmodal representations provide the foundation for making semantic generalisations on the basis of conceptual rather than superficial similarities (arguably the core function of semantics: Lambon Ralph et al., 2010; Lambon Ralph & Patterson, 2008). 
Building on these ideas, two recent studies have demonstrated that, when this representational hub breaks down, as it does in semantic dementia, patients make both overgeneralization errors (e.g., accepting a wolf as a dog) and undergeneralization errors (e.g., incorrectly rejecting atypical exemplars, such as a Chihuahua, as a dog) for the same concept simultaneously (Mayberry, Sage, & Lambon Ralph, 2011; Lambon Ralph et al., 2010).
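The claim that attributes combine nonlinearly, and hence that direct associations among modality-specific features cannot capture every concept, can be made concrete with a toy sketch. The XOR-style concept below is a hypothetical example, not drawn from the paper: no linear readout of two binary attributes can compute membership, but adding a single hub-like conjunctive feature makes it trivial.

```python
from itertools import product

# Toy concept (hypothetical): an item belongs to the concept when exactly one
# of two modality-specific attributes is present (an XOR membership rule).
items = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [a ^ b for a, b in items]

def separable(points, labels, grid=range(-3, 4)):
    """Brute-force search for a linear threshold unit (small integer weights
    and bias) that reproduces the labels exactly."""
    n = len(points[0])
    for params in product(grid, repeat=n + 1):
        *w, bias = params
        preds = [int(sum(wi * xi for wi, xi in zip(w, p)) + bias > 0)
                 for p in points]
        if preds == labels:
            return True
    return False

# A direct linear readout of the two attributes cannot express the concept...
print(separable(items, labels))

# ...but adding a conjunctive, hub-like feature (a AND b) makes it separable.
expanded = [(a, b, a & b) for a, b in items]
print(separable(expanded, labels))
```

This is, of course, only an analogy for the computational argument: an intermediate representation that recodes combinations of inputs supports mappings that no direct feature-to-concept association can.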

Acknowledgments

This study was supported by MRC program grants (G0501632 and MR/J004146/1) and an MRC pathfinder grant (G0300952). We thank Jeroen Visser for his help with the preparation of the stimuli.

Reprint requests should be sent to Prof. Matthew A. Lambon Ralph, Neuroscience and Aphasia Research Unit (NARU), Zochonis Building, School of Psychological Sciences, University of Manchester, Oxford Road, Manchester, M13 9PL, UK, or via e-mail: matt.lambon-ralph@manchester.ac.uk.

REFERENCES

Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.
Binder, J. R., Frost, J. A., Hammeke, T. A., Bellgowan, P. S. F., Rao, S. M., & Cox, R. W. (1999). Conceptual processing during the conscious resting state: A functional MRI study. Journal of Cognitive Neuroscience, 11, 80–93.
Binder, J. R., Gross, W. L., Allendorfer, J. B., Bonilha, L., Chapin, J., Edwards, J. C., et al. (2010). Mapping anterior temporal lobe language areas with fMRI: A multicenter normative study. Neuroimage, 54, 1465–1475.
Binney, R. J., Embleton, K. V., Jefferies, E., Parker, G. J. M., & Lambon Ralph, M. A. (2010). The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: Evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cerebral Cortex, 20, 2728–2738.
Bozeat, S., Lambon Ralph, M. A., Patterson, K., Garrard, P., & Hodges, J. R. (2000). Non-verbal semantic impairment in semantic dementia. Neuropsychologia, 38, 1207–1215.
Brett, M., Anton, J. L., Valabregue, R., & Poline, J. B. (2002). Region of interest analysis using an SPM toolbox. Neuroimage, 16, 1140–1141.
Brodmann, K. (2005). Brodmann's "Localisation in the cerebral cortex". New York: Springer.
Bruce, C., Desimone, R., & Gross, C. G. (1981). Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology, 46, 369–384.
Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.
Chao, L. L., Weisberg, J., & Martin, A. (2002). Experience-dependent modulation of category-related cortical activity. Cerebral Cortex, 12, 545–551.
Chertkow, H., Bub, D., Deaudon, C., & Whitehead, V. (1997). On the status of object concepts in aphasia. Brain and Language, 58, 203–232.
Coccia, M., Bartolini, M., Luzzi, S., Provinciali, L., & Lambon Ralph, M. A. (2004). Semantic memory is an amodal, dynamic system: Evidence from the interaction of naming and object use in semantic dementia. Cognitive Neuropsychology, 21, 513–527.
Corbett, F., Jefferies, E., & Lambon Ralph, M. A. (2011). Deregulated semantic cognition follows prefrontal and temporo-parietal damage: Evidence from the impact of task constraint on nonverbal object use. Journal of Cognitive Neuroscience, 23, 1125–1135.
Demonet, J. F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous, J. L., Wise, R., et al. (1992). The anatomy of phonological and semantic processing in normal subjects. Brain, 115, 1753–1768.
Eggert, G. H. (1977). Wernicke's works on aphasia: A source-book and review. The Hague: Mouton.
Embleton, K., Haroon, H., Lambon Ralph, M. A., Morris, D., & Parker, G. J. M. (2010). Distortion correction for diffusion weighted MRI tractography and fMRI in the temporal lobes. Human Brain Mapping, 31, 1570–1587.
Friston, K. J., Penny, W. D., & Glaser, D. E. (2005). Conjunction revisited. Neuroimage, 25, 661–667.
Galton, C. J., Patterson, K., Graham, K., Lambon Ralph, M. A., Williams, G., Antoun, N., et al. (2001). Differing patterns of temporal atrophy in Alzheimer's disease and semantic dementia. Neurology, 57, 216–225.
Gloor, P. (1997). The temporal lobe and the limbic system. Oxford: Oxford University Press.
Goll, J. C., Crutch, S. J., Loo, J. H. Y., Rohrer, J. D., Frost, C., Bamiou, D.-E., et al. (2009). Non-verbal sound processing in the primary progressive aphasias. Brain, 133, 272–285.
Griffiths, T. D. (2002). Central auditory processing disorders. Current Opinion in Neurology, 15, 31–33.
Griffiths, T. D., & Warren, J. D. (2004). What is an auditory object? Nature Reviews Neuroscience, 5, 887–892.
Harm, M. W., & Seidenberg, M. S. (2004). Computing the meanings of words in reading: Cooperative division of labor between visual and phonological processes. Psychological Review, 111, 662–720.
Hauk, O., Davis, M. H., Ford, M., Pulvermüller, F., & Marslen-Wilson, W. D. (2006). The time course of visual word recognition as revealed by linear regression analysis of ERP data. Neuroimage, 30, 1383–1400.
Head, H. (1926). Aphasia and kindred disorders of speech. London: Cambridge University Press.
Hocking, J., & Price, C. J. (2009). Dissociating verbal and nonverbal audiovisual object processing. Brain and Language, 108, 89–96.
Hodges, J. R., & Patterson, K. (2007). Semantic dementia: A unique clinicopathological syndrome. The Lancet Neurology, 6, 1004–1014.
Hoffman, P., Jefferies, E., & Lambon Ralph, M. A. (2010). Ventrolateral prefrontal cortex plays an executive regulation role in comprehension of abstract words: Convergent neuropsychological and repetitive TMS evidence. Journal of Neuroscience, 30, 15450–15456.
Hoffman, P., Pobric, G., Drakesmith, M., & Lambon Ralph, M. A. (2011). Posterior middle temporal gyrus is necessary for verbal and non-verbal semantic cognition: Evidence from rTMS. Aphasiology. doi:10.1080/02687038.2011.608838
Howard, D., & Patterson, K. (1992). Pyramids and Palm Trees: A test of semantic access from pictures and words. Bury St. Edmunds, UK: Thames Valley Test Company.
Humphreys, G. W., & Riddoch, M. J. (1987). On telling your fruit from your vegetables—A consideration of category-specific deficits after brain-damage. Trends in Neurosciences, 10, 145–148.
Ishai, A., Ungerleider, L. G., Martin, A., Schouten, H. L., & Haxby, J. V. (1999). Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences, U.S.A., 96, 9379–9384.
James, T. W., Culham, J., Humphrey, G. K., Milner, A. D., & Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: An fMRI study. Brain, 126, 2463–2475.
Jefferies, E., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia versus semantic dementia: A case-series comparison. Brain, 129, 2132–2147.
Jobard, G., Vigneau, B., Mazoyer, B., & Tzourio-Mazoyer, N. (2007). Impact of modality and linguistic complexity during reading and listening tasks. Neuroimage, 34, 784–800.
Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., & Thulborn, K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114–117.
Karnath, H. O., Ruter, J., Mandler, A., & Himmelbach, M. (2009). The anatomy of object recognition-visual form agnosia caused by medial occipitotemporal stroke. Journal of Neuroscience, 29, 5854–5862.
Lambon Ralph, M. A., & Howard, D. (2000). Gogi aphasia or semantic dementia? Simulating and assessing poor verbal comprehension in a case of progressive fluent aphasia. Cognitive Neuropsychology, 17, 437–465.
Lambon Ralph, M. A., Howard, D., Nightingale, G., & Ellis, A. W. (1998). Are living and non-living category-specific deficits causally linked to impaired perceptual or associative knowledge? Evidence from a category-specific double dissociation. Neurocase, 4, 311–338.
Lambon Ralph, M. A., & Patterson, K. (2008). Generalization and differentiation in semantic memory: Insights from semantic dementia. Annals of the New York Academy of Sciences, 1124, 61–76.
Lambon Ralph, M. A., Pobric, G., & Jefferies, E. (2009). Conceptual knowledge is underpinned by the temporal pole bilaterally: Convergent evidence from rTMS. Cerebral Cortex, 19, 832–838.
Lambon Ralph, M. A., Sage, K., Jones, R. W., & Mayberry, E. J. (2010). Coherent concepts are computed in the anterior temporal lobes. Proceedings of the National Academy of Sciences, U.S.A., 107, 2717–2722.
Luzzatti, C., Rumiati, R., & Ghirardi, G. (1998). A functional model of visuo-verbal disconnection and the neuroanatomical constraints of optic aphasia. Neurocase, 4, 71–87.
Luzzi, S., Snowden, J. S., Neary, D., Coccia, M., Provinciali, L., & Lambon Ralph, M. A. (2007). Distinct patterns of olfactory impairment in Alzheimer's disease, semantic dementia, frontotemporal dementia, and corticobasal degeneration. Neuropsychologia, 45, 1823–1831.
Maldjian, J. A., Laurienti, P. J., Kraft, R. A., & Burdette, J. H. (2003). An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage, 19, 1233–1239.
Marinkovic, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., & Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron, 38, 487–497.
Martin, A., Wiggs, C. L., Ungerleider, L. G., & Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
Mayberry, E. J., Sage, K., & Lambon Ralph, M. A. (2011). At the edge of semantic space: The breakdown of coherent concepts in semantic dementia is constrained by typicality and severity but not modality. Journal of Cognitive Neuroscience, 23, 2240–2251.
McKiernan, K. A., D'Angelo, B. R., Kaufman, J. N., & Binder, J. R. (2006). Interrupting the "stream of consciousness": An fMRI investigation. Neuroimage, 29, 1185–1191.
Moore, C. J., & Price, C. J. (1999). A functional neuroimaging study of the variables that generate category-specific object processing differences. Brain, 122, 943–962.
Morán, M. A., Mufson, E. J., & Mesulam, M. M. (1987). Neural inputs into the temporopolar cortex of the rhesus monkey. The Journal of Comparative Neurology, 256, 88–103.
Mummery, C. J., Patterson, K., Price, C. J., Ashburner, J., Frackowiak, R. S. J., & Hodges, J. R. (2000). A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Annals of Neurology, 47, 36–45.
Nestor, P. J., Fryer, T. D., & Hodges, J. R. (2006). Declarative memory impairments in Alzheimer's disease and semantic dementia. Neuroimage, 30, 1010–1020.
Nichols, T., Brett, M., Andersson, J., Wager, T., & Poline, J.-B. (2005). Valid conjunction inference with the minimum statistic. Neuroimage, 25, 653–660.
Noonan, K. A., Jefferies, E., Corbett, F., & Lambon Ralph, M. A. (2010). Elucidating the nature of deregulated semantic cognition in semantic aphasia: Evidence for the roles of prefrontal and temporoparietal cortices. Journal of Cognitive Neuroscience, 22, 1597–1613.
Okada, K., & Hickok, G. (2006a). Left posterior auditory-related cortices participate both in speech perception and speech production: Neural overlap revealed by fMRI. Brain and Language, 98, 112–117.
Okada, K., & Hickok, G. (2006b). Identification of lexical-phonological networks in the superior temporal sulcus using functional magnetic resonance imaging. NeuroReport, 17, 1293–1296.
Patterson, K., & Lambon Ralph, M. A. (1999). Selective disorders of reading? Current Opinion in Neurobiology, 9, 235–239.
Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.
Piwnica-Worms, K. E., Omar, R., Hailstone, J. C., & Warren, J. D. (2010). Flavour processing in semantic dementia. Cortex, 46, 761–768.
Plaut, D. C. (2002). Graded modality-specific specialisation in semantics: A computational account of optic aphasia. Cognitive Neuropsychology, 19, 603–639.
Pobric, G., Jefferies, E., & Lambon Ralph, M. A. (2007). Anterior temporal lobes mediate semantic representation: Mimicking semantic dementia by using rTMS in normal participants. Proceedings of the National Academy of Sciences, U.S.A., 104, 20137–20141.
Pobric, G., Jefferies, E., & Lambon Ralph, M. A. (2010a). Amodal semantic representations depend on both left and right anterior temporal lobes: New rTMS evidence. Neuropsychologia, 48, 1336–1342.
Pobric, G.,
Jefferies
,
E.
, &
Lambon Ralph
,
M. A.
(
2010b
).
Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation.
Current Biology
,
20
,
964
968
.
Pobric
,
G.
,
Jefferies
,
E.
, &
Lambon Ralph
,
M. A.
(
2010c
).
Amodal semantic representations depend on both anterior temporal lobes: Evidence from repetitive transcranial magnetic stimulation.
Neuropsychologia
,
48
,
1336
1342
.
Rogers
,
T.
, &
McClelland
,
J. L.
(
2004
).
Semantic cognition: A parallel distributed processing approach.
Cambridge, MA
:
MIT Press
.
Rogers
,
T. T.
,
Lambon Ralph
,
M. A.
,
Garrard
,
P.
,
Bozeat
,
S.
,
McClelland
,
J. L.
,
Hodges
,
J. R.
,
et al
(
2004
).
Structure and deterioration of semantic memory: A neuropsychological and computational investigation.
Psychological Review
,
111
,
205
235
.
Scott
,
S. K.
,
Blank
,
C. C.
,
Rosen
,
S.
, &
Wise
,
R. J. S.
(
2000
).
Identification of a pathway for intelligible speech in the left temporal lobe.
Brain
,
123
,
2400
2406
.
Scott
,
S. K.
, &
Johnsrude
,
I. S.
(
2003
).
The neuroanatomical and functional organization of speech perception.
Trends in Neurosciences
,
26
,
100
107
.
Scott
,
S. K.
,
Leff
,
A. P.
, &
Wise
,
R. J. S.
(
2003
).
Going beyond the information given: A neural system supporting semantic interpretation.
Neuroimage
,
19
,
870
876
.
Seltzer
,
B.
, &
Pandya
,
D. N.
(
1978
).
Afferent cortical connections and architectonics of superior temporal sulcus and surrounding cortex in rhesus-monkey.
Brain Research
,
149
,
1
24
.
Spitsyna
,
G.
,
Warren
,
J. E.
,
Scott
,
S. K.
,
Turkheimer
,
F. E.
, &
Wise
,
R. J. S.
(
2006
).
Converging language streams in the human temporal lobe.
Journal of Neuroscience
,
26
,
7328
7336
.
Thompson-Schill
,
S. L.
,
D'Esposito
,
M.
,
Aguirre
,
G. K.
, &
Farah
,
M. J.
(
1997
).
Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation.
Proceedings of the National Academy of Sciences, U.S.A.
,
94
,
14792
14797
.
Tranel
,
D.
,
Damasio
,
H.
, &
Damasio
,
A. R.
(
1997
).
A neural basis for the retrieval of conceptual knowledge.
Neuropsychologia
,
35
,
1319
1327
.
Tzourio-Mazoyer
,
N.
,
Landeau
,
B.
,
Papathanassiou
,
D.
,
Crivello
,
F.
,
Etard
,
O.
,
Delcroix
,
N.
,
et al
(
2002
).
Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain.
Neuroimage
,
15
,
273
289
.
Vandenberghe
,
R.
,
Price
,
C.
,
Wise
,
R.
,
Josephs
,
O.
, &
Frackowiak
,
R. S. J.
(
1996
).
Functional anatomy of a common semantic system for words and pictures.
Nature
,
383
,
254
256
.
Visser
,
M.
,
Embleton
,
K. V.
,
Jefferies
,
E.
,
Parker
,
G. J.
, &
Lambon Ralph
,
M. A.
(
2010
).
The inferior, anterior temporal lobes and semantic memory clarified: Novel evidence from distortion-corrected fMRI.
Neuropsychologia
,
48
,
1689
1696
.
Visser
,
M.
,
Jefferies
,
E.
, &
Lambon Ralph
,
M. A.
(
2009
).
Semantic processing in the anterior temporal lobes: A meta-analysis of the functional neuroimaging literature.
Journal of Cognitive Neuroscience
,
22
,
1083
1094
.
Visser
,
M.
, &
Lambon Ralph
,
M. A.
(
2011
).
Differential contributions of the ventral anterior temporal lobe and the anterior superior temporal gyrus to semantic processes.
Journal of Cognitive Neuroscience
,
23
,
3121
3131
.
Wagner
,
A. D.
,
Maril
,
A.
,
Bjork
,
R. A.
, &
Schacter
,
D. L.
(
2001
).
Prefrontal contributions to executive control: fMRI evidence for functional distinctions within lateral prefrontal cortex.
Neuroimage
,
14
,
1337
1347
.
Weiskopf
,
N.
,
Hutton
,
C.
,
Josephs
,
O.
, &
Deichmann
,
R.
(
2006
).
Optimal EPI parameters for reduction of susceptibility-induced BOLD sensitivity losses: A whole-brain analysis at 3 T and 1.5 T.
Neuroimage
,
33
,
493
504
.
Whitney
,
C.
,
Kirk
,
M.
,
O'Sullivan
,
J.
,
Lambon Ralph
,
M. A.
, &
Jefferies
,
E.
(
2011
).
The neural organization of semantic control: TMS evidence for a distributed network in left inferior frontal and posterior middle temporal gyrus.
Cerebral Cortex
,
21
,
1066
1075
.
Wittgenstein
,
L.
(
2001
).
Philosophical investigations: The German text, with a revised English translation 50th anniversary commemorative edition.
Oxford
:
Wiley-Blackwell
.