Abstract

Studies of semantic dementia and repetitive TMS have suggested that the bilateral anterior temporal lobes (ATLs) underpin a modality-invariant representational hub within the semantic system. However, it is not clear whether all ATL subregions contribute in the same way. We utilized distortion-corrected fMRI to investigate the pattern of activation in the left and right ATL when participants performed a semantic decision task on auditory words, environmental sounds, or pictures. This showed that the ATL is not functionally homogeneous but is more graded. Both left and right ventral ATL (vATL) responded to all modalities, in keeping with the notion that this region underpins multimodal semantic processing. In addition, there were graded differences across the hemispheres. Semantic processing of both picture and environmental sound stimuli was associated with equivalent bilateral vATL activation, whereas auditory words generated greater activation in left than right vATL. This graded specialization for auditory stimuli appears to reflect the input from the left superior ATL, which responded solely to semantic decisions on spoken words and environmental sounds, suggesting that this region is specialized for auditory stimuli. A final noteworthy result was that these regions were activated by domain-level decisions to singly presented stimuli, which appears to be incompatible with the hypotheses that the ATL is dedicated (a) to the representation of specific entities or (b) to combinatorial semantic processes.

INTRODUCTION

Semantic knowledge enables us to recognize an object in different contexts, to identify its relatedness to other concepts and to generalize semantic information. Recent work has generated a model in which an anterior temporal lobe (ATL) modality- and time-invariant hub provides a core substrate for the formation of coherent semantic representations (Lambon Ralph, Sage, Jones, & Mayberry, 2010; Lambon Ralph, Pobric, & Jefferies, 2009; Patterson, Nestor, & Rogers, 2007; Rogers et al., 2004). In this model, multiple distributed brain regions process and represent modality-specific sources of perceptual and verbal information. In addition, the ATL hub provides cross-modal translation between these information sources and, in doing so, licenses the formation of modality-invariant concepts (Lambon Ralph et al., 2010; Pobric, Jefferies, & Lambon Ralph, 2010). In other words, irrespective of the input modality (e.g., written words, pictures, auditory words, or sounds) or changes in surface features, this ATL system enables activation of the same semantic representation and the ability to generalize appropriately across different exemplars of the same concept. Previously, it has been suggested that these semantic processes emerge from direct communication between modality-specific regions and do not require an amodal semantic hub (Martin, 2007). However, computational models have shown that such complex representations can only be achieved by the addition of intermediate processing units (Rogers et al., 2004; Plaut, 2002). Anatomically, the ATL is an ideal location for this role because it is connected with secondary perceptual and motor cortices (Gloor, 1997).
In addition, damage to anterior parts of the temporal lobe in semantic dementia (SD) results in multimodal semantic impairments, with preserved episodic memory, perceptual processing, and syntax, as well as other aspects of higher order cognition (Lambon Ralph & Patterson, 2008; Patterson et al., 2007; Snowden, 2002; Hodges, Patterson, Oxbury, & Funnell, 1992). For example, comprehension is impaired for spoken and written words, pictures, sounds, smell, taste, and touch (Piwnica-Worms, Omar, Hailstone, & Warren, 2010; Luzzi et al., 2007; Coccia, Bartolini, Luzzi, Provinciali, & Lambon Ralph, 2004; Bozeat, Lambon Ralph, Patterson, Garrard, & Hodges, 2000; Lambon Ralph & Howard, 2000; Lambon Ralph, Howard, Nightingale, & Ellis, 1998). Furthermore, SD patients are impaired in production tasks such as picture naming, tactile naming, object drawing, object use, and verbal descriptions of objects (Coccia et al., 2004; Lambon Ralph, McClelland, Patterson, Galton, & Hodges, 2001; Bozeat et al., 2000; Lambon Ralph, Graham, Patterson, & Hodges, 1999; Lambon Ralph, Graham, Ellis, & Hodges, 1998). These selective, multimodal semantic impairments argue for a key role of the damaged area, bilaterally, in conceptualization. Indeed, it has been shown that SD patients have difficulty computing accurate generalizations on the basis of conceptual rather than superficial similarity (Lambon Ralph et al., 2010; Lambon Ralph & Patterson, 2008). Repetitive TMS studies of the left or right lateral ATL in neurologically intact participants also indicate an important role of this bilateral region in amodal semantic processing (Pobric et al., 2010; Lambon Ralph et al., 2009; Pobric, Jefferies, & Lambon Ralph, 2007).
Additional convergent evidence for the role of the ATL in semantic processing arises from studies using intracortical recordings in neurosurgical patients (Halgren, Baudena, Heit, Clarke, & Marinkovic, 1994), functional neuroimaging studies of comprehension using PET (Devlin et al., 2000; Perani et al., 1999; Vandenberghe, Price, Wise, Josephs, & Frackowiak, 1996), and magnetoencephalography (MEG) (Marinkovic et al., 2003). Although there is now substantial, multimethod convergent evidence in favor of a semantic role for the bilateral ATL, much less is known about which specific regions contribute and what their roles are. In SD, the area of atrophy is large, covering anatomically distinct subregions including the fusiform gyrus and inferior, middle, and superior temporal lobe, reflecting multiple cytoarchitectural areas (Ding, Van Hoesen, Cassell, & Poremba, 2009; Brodmann, 2005), and it is not clear whether each of these subregions contributes in the same way. The current literature suggests that the superior part of the left ATL might be functionally specialized for verbal information. This region has been associated with verbal information (Hocking & Price, 2009), speech-like stimuli (Overath, Kumar, von Kriegstein, & Griffiths, 2008), and intelligible speech (Spitsyna, Warren, Scott, Turkheimer, & Wise, 2006; Sharp, Scott, & Wise, 2004; Scott, Blank, Rosen, & Wise, 2000). In contrast, the bilateral ventral ATL (vATL), of which the left is sometimes referred to as the “basal temporal language area” (BTLA; Spitsyna et al., 2006; Sharp et al., 2004), might have a somewhat different semantic function. This area exhibits maximal damage in SD patients, indicating that this region might contribute foremost to the patients' semantic impairments (Galton et al., 2001; Mummery et al., 2000), a hypothesis reinforced by two recent studies. 
The first found that accuracy on semantic tasks in SD was correlated with the resting FDG-PET signal in this specific bilateral ventral region (Mion et al., 2010), and a second study revealed an almost identical peak of fMRI activation for written synonym judgment performance in neurologically intact participants (Binney, Embleton, Jefferies, Parker, & Lambon Ralph, 2010). This region has also been associated with visually invariant category processing, which is a vital feature of perception, enabling recognition regardless of changes in location and orientation (Liu, Agam, Madsen, & Kreiman, 2009). Furthermore, other studies have found that the anterior superior temporal gyrus (aSTG) and the vATL are often coactivated, suggesting that these regions cooperate to accomplish auditory comprehension (Spitsyna et al., 2006; Sharp et al., 2004). The above findings suggest, therefore, a cross-modal nature to the representations hosted in the bilateral vATL.

As well as providing the first systematic investigation of different areas within the entire ATL region, this study was also concerned with the contribution of the left versus right ATL to semantic cognition. Patients with clear semantic representational deficits often present with bilateral ATL damage (e.g., SD, herpes simplex virus encephalitis, etc.), and rTMS to left or right lateral ATL generates verbal and nonverbal semantic impairment (Pobric et al., 2010; Lambon Ralph et al., 2009; Lambon Ralph, Lowe, & Rogers, 2007). This suggests that both hemispheres contribute to semantic processing across the same modalities. Within-patient group comparisons find, however, that exact levels of performance are dependent on the degree of left versus right atrophy. SD patients with more left than right ATL atrophy are relatively more anomic and have greater problems with verbal than nonverbal comprehension (with the opposite pattern for SD patients with right-biased ATL atrophy; Mion et al., 2010; Snowden, Thompson, & Neary, 2004; Lambon Ralph et al., 2001). There have been at least two interpretations of such findings. The first is that there is a single functional yet bilaterally represented semantic system and that differential task performance reflects the impact of greater connectivity of the left ATL region to left-dominant language systems. The second is that there is an inbuilt, strict functional division in which the left ATL is specialized for verbal semantic processing and the right for nonverbal semantic processing. These two alternative hypotheses are hard to differentiate using SD data given that patients always have some degree of bilateral damage and hypometabolism (Mion et al., 2010) and, although clearly favoring the first hypothesis, ATL rTMS studies do not generate the same effect size as observed in patient studies.
Thus, probing left versus right ATL activation across different stimulus modalities in neurologically intact participants should allow a clearer adjudication between these hypotheses.

The vATL is often omitted from neuroanatomical models of verbal and nonverbal semantics (Martin, 2007; Catani & Ffytche, 2005). However, it would appear that this relates to an absence of evidence rather than evidence of absence. Models based on stroke aphasic patients are unlikely to include the inferior ATL because the territory of the middle cerebral artery (MCA) rarely includes this region (Schwartz et al., 2009; Wise, 2003). In addition, the fact that the majority of semantic neuroimaging studies do not implicate the bilateral vATL (Visser, Jefferies, & Lambon Ralph, 2010) seems to be related to a series of technical and methodological issues. The fMRI signal is distorted in this region because of varying magnetic susceptibility (Visser, Jefferies, et al., 2010; Devlin et al., 2000). In addition, many previous fMRI and PET studies have not included a sufficient field of view to cover the whole brain, thereby often omitting the vATL if the dorsal surface of the brain is included in the acquisition (Visser, Jefferies, et al., 2010). Thus, both stroke aphasia and past neuroimaging studies have, in effect, been silent on the bilateral vATL, because it has not been consistently sampled.

To investigate the contribution of the various ATL regions in semantic processing, we used distortion-corrected fMRI and a semantic paradigm based on auditory words, pictures, or environmental sounds. Previous studies have used this new distortion correction technique and have successfully found bilateral vATL activation during semantic decisions to visually presented words or pictures (Visser, Embleton, Jefferies, Parker, & Lambon Ralph, 2010, submitted; Binney et al., 2010). If the bilateral vATL is a core region for the amodal semantic hub, then we would expect to observe activation in response to all three types of stimuli. In contrast, if the left superior ATL is more specialized, then it might only respond to auditory–verbal input. Given its proximity and connectivity to primary auditory and association cortices (Rauschecker & Scott, 2009; Scott et al., 2000), it is possible that the left superior ATL plays a more general role in higher-order auditory recognition. However, some neuroimaging studies of semantic processing have found activation in the left superior ATL when using other stimuli, such as visually presented words and objects (for a review, see Visser, Jefferies, et al., 2010; Amedi, von Kriegstein, van Atteveldt, Beauchamp, & Naumer, 2005). It is not clear, therefore, whether this region processes purely verbal information or whether it is involved in multimodal processing. To date, no previous study has directly compared the same semantic decision across various auditory and visual stimuli to adjudicate between these possibilities.

A few studies have investigated the neural correlates of semantic processing of nonverbal auditory information (i.e., environmental sounds), yet they have generated inconsistent results, variously associating it with activation in the inferior ATL, the posterior fusiform gyrus, or the posterior STG (Hocking & Price, 2008, 2009; Tranel, Grabowski, Lyon, & Damasio, 2005). This inconsistency might arise because some studies used cross-modality (visual–auditory) matching, making it more difficult to isolate the neural regions that specifically underpin nonverbal auditory semantic processing. We adopted, therefore, a behavioral paradigm in which each stimulus type (environmental sounds, spoken names, and pictures) was presented in isolation. This allowed us to probe directly and independently which ATL subregions respond to each stimulus type.

A further key aim of this study was to test the hypothesis that the anterior temporal regions are specialized for the representation of specific entities (Tranel, Damasio, & Damasio, 1997; Damasio, Grabowski, Tranel, Hichwa, & Damasio, 1996). The alternative is that the strength of response in this region is graded by specificity but is present for all levels of conceptual detail. For example, in SD, although performance is better on domain-level than specific-level distinctions, this effect is relative rather than absolute. Indeed, the same graded performance difference emerges in an implemented computational model of semantic memory as an emergent property of semantic processing per se, rather than from a specificity effect programmed directly into the model (Rogers et al., 2004). In this model, only partial activation is required to separate living from nonliving entities, whereas distinguishing specific-level concepts requires much greater and more precise semantic activation. If this graded hypothesis is correct, then with sufficient power and sensitive imaging methods we should find ATL activation even during domain-level (animal vs. manmade object) decisions.

A final aim of the study was to test an alternative notion about the ATL region, namely, that it is required for “combinatorial semantics” (Hickok & Poeppel, 2007). In this theory, activation of meaning for single stimuli is achieved in posterior regions and the ATL is only required when a meaning is built up over a series of stimuli (e.g., a sentence). An alternative hypothesis is that the ATL might exhibit increased activation for these kinds of multielement stimuli, simply because there are many more elements to be processed (Visser, Embleton, et al., 2010). These alternatives can be difficult to adjudicate between on the basis of previous imaging data because semantic tasks often involve comparison or selection between multiple stimuli. The use of singly presented stimuli in the present study, however, licensed a direct test of the two hypotheses; if the ATL has a more general semantic role even for single stimuli (as suggested by SD patients' poor single word comprehension and naming performance; Rogers et al., 2004; Lambon Ralph et al., 2001), then we should observe ATL activation.

In summary, we had five different research questions: (1) Which ATL subregions contribute to semantic processing? (2) Does their involvement vary in graded or absolute ways across input modality? (3) Are there differences in the contribution of left versus right ATL regions to semantic processing? (4) Is the ATL involved even for superordinate semantic distinctions? (5) Is the ATL involved in semantic processing even for singly presented items that require no combination of meaning across multiple stimuli?

METHODS

Task and Stimuli

Twenty right-handed participants were presented with blocks of pictures, auditory words, and environmental sounds while in the scanner. They were asked to indicate with a button press whether the item was living or nonliving. In addition, we included two nonsemantic control tasks, which captured the same basic visual or auditory processes and motor requirements. For the visual control task, we included scrambled versions of the picture stimuli. These stimuli were created by scrambling the pictures into 80 pieces and rearranging them in a random fashion (using the Java Runtime Environment; www.SunMicrosystems.com).

For the auditory control task, we presented pink or brown noise bursts. Participants were asked to indicate with a button press whether the item was high or low. The auditory stimuli were high- or low-sounding noise bursts (i.e., pink vs. brown noise), whereas the scrambled items were presented either high or low on the screen. To match the semantic and control visual tasks as much as possible, the pictures were also presented high or low on the screen during the semantic decision trials. To ensure that the noise bursts were recognized as either high or low, we performed a pilot study in which participants rated the sounds. Only the six sounds that were consistently recognized as high or low were included in the current experiment. Before the fMRI experiment, participants practiced the task for 15 min to ensure that they were confident about their performance. NordicNeuroLab headphones were used to present the auditory stimuli (www.nordicneurolab.com).
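The perceived high/low distinction between the two noise types comes from their spectral slopes: pink noise has power falling off as 1/f, whereas brown noise falls off as 1/f² and therefore sounds lower. A minimal NumPy sketch of how such bursts could be synthesized (the sampling rate, duration, and FFT-based shaping are our illustrative assumptions, not the authors' stimulus-generation procedure):

```python
import numpy as np

def shaped_noise(n_samples, exponent, fs=44100, seed=0):
    """Generate noise with a power spectrum ~ 1/f**exponent.

    exponent = 1 -> pink noise; exponent = 2 -> brown (red) noise.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                        # avoid division by zero at DC
    spectrum *= freqs ** (-exponent / 2.0)     # amplitude scaling = sqrt(power)
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))       # normalize to [-1, 1]

pink = shaped_noise(22050, exponent=1)    # 500-msec burst at 44.1 kHz
brown = shaped_noise(22050, exponent=2)   # relatively more low-frequency energy
```

With the same white-noise seed, the brown burst always has a lower spectral centroid than the pink one, which is what makes the "low" versus "high" judgment possible.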

Experimental Design

Each condition contained 90 items, except for the auditory control task, which contained six (see previous section). Half of the semantic items were living, and the other half were nonliving. Furthermore, half of the semantic and control pictures were presented high on the screen, and the other half were low. This variation of screen position was the basis for the visual control task (i.e., was the item high or low on the screen?). The position of the visual semantic stimuli also varied across the same screen positions so as to avoid potential differences in brain activation because of variation of position or associated eye movements. Different items were used in the experimental conditions so as to avoid repetition effects. We used a blocked design and sampled each condition nine times. Blocks were randomized, and the design matrix was manually adjusted to ensure optimal design efficiency (Henson, 2006). Each block lasted 15 sec and contained 10 items. Each item was presented for 500 msec with an ISI of 1 sec.
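The block timing is internally consistent: 10 items at 500 msec each plus a 1-sec ISI fills exactly the 15-sec block, and nine blocks of 10 items account for the 90 items per condition. A quick arithmetic check (variable names are ours):

```python
stim_dur = 0.5         # stimulus duration in sec
isi = 1.0              # interstimulus interval in sec
items_per_block = 10
blocks_per_condition = 9

block_dur = items_per_block * (stim_dur + isi)
assert block_dur == 15.0                 # matches the stated block length

items_per_condition = blocks_per_condition * items_per_block
assert items_per_condition == 90         # matches the stated item count

# Within-block item onsets (sec): stimulus first, then ISI
onsets = [i * (stim_dur + isi) for i in range(items_per_block)]
```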

Image Acquisition

All imaging was performed on a 3 T Philips Achieva scanner using an eight-element SENSE head coil with a SENSE factor of 2.5. The SE-EPI fMRI sequence included 42 slices with echo time (TE) = 75 msec, repetition time (TR) = 4075 msec, acquisition matrix = 96 × 96, reconstructed resolution = 2.5 × 2.5 mm, and slice thickness = 3 mm. To compute a spatial remapping matrix, a prescan was obtained with interleaved dual direction phase encoding and the participant at rest (20 image volumes acquired, 10 for left-to-right phase encoding [KL], and the same number for the opposite right-to-left [KR] phase encoding). This was followed by the main fMRI image sequence of 310 time points with a single-phase encoding direction (KL), during which the functional task was performed. A high-resolution T2-weighted turbo spin echo scan with in-plane resolution of 0.938 mm and slice thickness of 2.1 mm was also obtained as a structural reference to provide a qualitative indication of distortion correction accuracy.

Distortion Correction

The spatial remapping correction was computed using the method developed and applied elsewhere (Embleton, Haroon, Lambon Ralph, Morris, & Parker, 2010; Visser, Embleton, et al., 2010). In brief, mean KL and KR images were produced from the 10 KL and 10 KR direction images acquired in the prescan. During the correction process, a spatial transformation matrix for transforming the mean KL image into corrected space was computed at intervals of 0.1 pixels in the phase encoding direction, resulting in a shift matrix of size 96 × 960 × 42. The 310 time points in the functional acquisition were then corrected by first registering each 3-D volume to the original distorted mean KL volume using a six degrees of freedom translation and rotation algorithm (FLIRT, FSL, Oxford) and then applying the matrix of pixel shift values to the registered images. This resulted in a distortion-corrected data set of 310 volumes, maintaining the original temporal spacing and TR of 4075 msec.
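The final step, applying precomputed pixel shifts along the phase-encoding axis to each registered volume, amounts to a 1-D resampling of every image column. A toy NumPy illustration of that principle (not the authors' implementation; the real shift matrix is stored at 0.1-pixel resolution, here simplified to one shift per voxel):

```python
import numpy as np

def apply_pixel_shifts(volume, shift_map):
    """Illustrative unwarping along the phase-encoding (y) axis.

    volume:    (nx, ny, nz) distorted EPI volume
    shift_map: (nx, ny, nz) per-voxel displacement in pixels along y
               (a simplified stand-in for the 96 x 960 x 42 shift matrix)
    """
    nx, ny, nz = volume.shape
    corrected = np.empty_like(volume, dtype=float)
    y = np.arange(ny)
    for ix in range(nx):
        for iz in range(nz):
            # Sample each distorted column at the shifted positions
            # (linear interpolation, clamped at the column ends).
            corrected[ix, :, iz] = np.interp(
                y + shift_map[ix, :, iz], y, volume[ix, :, iz])
    return corrected
```

A zero shift map returns the volume unchanged; a uniform shift of +1 pixel translates every column by one voxel, which is the behavior the correction relies on.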

SPM Analyses

Image analysis was carried out with SPM5 software (Wellcome Department of Imaging Neuroscience, London, UK; www.fil.ion.ucl.ac.uk/spm). Preprocessing of functional MRIs included movement correction, slice time correction, coregistration with the anatomical data, and smoothing using a Gaussian filter with 10-mm FWHM. Subtraction analyses were carried out to examine activation of the semantic tasks versus their matching control tasks. In addition, all semantic tasks were collapsed and contrasted against the control tasks. The threshold was set at p < .005 with an extent threshold of 15 voxels. Nichols, Brett, Andersson, Wager, and Poline (2005) noted that significantly activated voxels can arise from a large effect in one condition (e.g., words) co-occurring with a small (insignificant) effect in the other condition (e.g., pictures). We ran a conjunction analysis, therefore, which included only voxels that were activated for all three input modalities (Friston, Penny, & Glaser, 2005; Nichols et al., 2005). Threshold was set at p < .005, uncorrected.
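The logic of this valid-conjunction test is simply the intersection of the independently thresholded maps, so a strong effect in one modality cannot carry a null effect in another. A schematic sketch with toy p-maps (the random arrays are stand-ins for the real SPM contrast outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy voxelwise p-maps for the three semantic > control contrasts.
p_words, p_pictures, p_sounds = rng.random((3, 1000))

alpha = 0.005
# A voxel survives only if it is significant in every contrast, which is
# the concern Nichols et al. (2005) raised about pooled analyses.
conjunction = (p_words < alpha) & (p_pictures < alpha) & (p_sounds < alpha)
```

By construction, the conjunction map can never contain more voxels than the sparsest of the three single-contrast maps.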

In addition, we were particularly interested in the functional role of the bilateral vATL and the left aSTG, identified in previous independent studies (see below). We investigated these areas by using a ROI analysis. We defined three ROIs. The first ROI was based on the results of a PET study that found a clear relationship between the left superior ATL and processing of intelligible speech (Scott et al., 2000). To mirror the form of this aSTG/STS region, we created a lozenge-shaped ROI from three sequential 7-mm spheres. Two spheres were placed at the peaks reported in the original study, and an additional, intermediate sphere was added to make a continuous, single ROI (centers at −54 6 −16, −60 −12 −15, and −66 −12 −12). We used two complementary ROIs for the vATL on the basis of two separate, independent literatures. For the first vATL ROI, we created a 5-mm sphere around the location of peak activation (−38 −18 −32) from a previous language-related study (Sharp et al., 2004). On the basis of these results and the distribution of atrophy in SD patients, our working hypothesis is that the semantic function of this ventral region is bilateral, and therefore, we mirrored this ROI in the right hemisphere. Our final ROI was based on a study that utilized intracranial electrode recordings (Liu et al., 2009). As noted in the Introduction, this region was associated with visually invariant, category processing. We drew a 5-mm sphere around the position of the intracranial electrode positioned at (−27 −12 −35). Despite the fact that these two previous studies were based on verbal or visual input modalities and used very different methods, it is intriguing that both identified a similar locus of activity within the vATL (see Introduction and Binney et al., 2010). We analyzed the average activation in these ROIs for each contrast using Marsbar (marsbar.sourceforge.net/).
This method overcomes the multiple comparison problem and is ideal for testing specific regional hypotheses (Brett, Anton, Valabregue, & Poline, 2002).
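The sphere and lozenge ROIs described above can be reproduced on any MNI-space voxel grid by thresholding the Euclidean distance from each voxel to the sphere centers. A NumPy sketch (the 2-mm grid resolution and bounding box are our illustrative assumptions; the coordinates and radii are those given in the text, and Marsbar would operate on the actual image grid):

```python
import numpy as np

def sphere_mask(coords, center, radius):
    """Boolean mask of voxels within `radius` mm of `center` (MNI mm)."""
    d2 = ((coords - np.asarray(center)) ** 2).sum(axis=-1)
    return d2 <= radius ** 2

# Toy MNI-space grid at 2-mm resolution.
xs, ys, zs = np.meshgrid(np.arange(-90, 92, 2),
                         np.arange(-126, 92, 2),
                         np.arange(-72, 110, 2), indexing='ij')
coords = np.stack([xs, ys, zs], axis=-1)

# 5-mm vATL spheres around the Sharp et al. (2004) peak and its
# right-hemisphere mirror (sign of x flipped).
vatl_left = sphere_mask(coords, (-38, -18, -32), 5)
vatl_right = sphere_mask(coords, (38, -18, -32), 5)

# Lozenge-shaped aSTG ROI: union of three 7-mm spheres (the two
# Scott et al., 2000 peaks plus the intermediate center).
astg = (sphere_mask(coords, (-54, 6, -16), 7)
        | sphere_mask(coords, (-60, -12, -15), 7)
        | sphere_mask(coords, (-66, -12, -12), 7))
```

The ROI-mean analysis then reduces to averaging each contrast image over the `True` voxels of each mask.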

RESULTS

Activated regions for the different contrasts in the whole brain analysis are shown in Figure 1. First, we examined areas involved in common semantic processing of pictures, auditory words, and environmental sounds (i.e., all semantic tasks combined and contrasted with their corresponding control tasks). This all semantics > control contrast resulted in three bilateral clusters in inferior frontal gyrus, STG, and inferior temporal gyrus (see Figure 1A). The inferior temporal cluster extended from the occipital lobe anteriorly until y = 4 (in MNI space) in both hemispheres. The cluster in the superior temporal lobe extended anteriorly until y = 7 in the left hemisphere and y = 14 in the right hemisphere. The local peak maxima of each cluster that survived correction for multiple comparisons at p < .05 are summarized in Table 1.

Figure 1. 

Rendered images showing the regions activated by domain-level decisions. (A) Activated regions for the contrast "all semantic > control" (red) and for the conjunction analysis (purple). (B–D) Activated clusters for the contrasts "pictures > control" (blue), "auditory words > control" (cyan), and "environmental sounds > control" (yellow). The threshold is set at p < .005, with a voxel extent of 15, and for the conjunction analysis at p < .005, uncorrected.


Table 1. 

Activated Clusters for the Contrast all semantic tasks > control

Brain Region (COG)            Voxels   Max Z   MNI Coordinates (x, y, z)
L. superior temporal gyrus    488      6.93    −54  −24
                                       5.55    −51  −36
                                       5.03    −57  −6   −6
L. inferior temporal gyrus    1530     6.6     −39  −60  −15
                                       6.57    −39  −45  −18
                                       5.63    −36  −9   −36
R. superior temporal gyrus    687      6.18    60   −18
                                       5.94    54   −30
                                       5.73    60   −3   −5
R. inferior temporal gyrus    1287     5.75    45   −39  −24
                                       5.74    42   −45  −18
                                       5.16    35   −5   −36
R. inferior frontal cortex    492      4.98    36   33
                                       4.97    27   21   −3
                                       4.56    48   24
L. inferior frontal cortex    223      4.25    −51  30
                                       4.07    −30  21
                                       4.07    −33  30
R. anterior cingulate gyrus   131      4.18    12   21   33
                                       3.74    15   45
L. occipital lobe             127      4.17    −93  −12
                                       3.85    −5   −102
L. putamen                    105      4.01    −30  −15  12
                                       3.94    −24  −5

MNI coordinates of the local peak maxima of each activated cluster that survived correction for multiple comparison at p < .05. L. = left, R. = right.

Next, we examined the activation in each condition contrasted against the matching control task (see Figure 1B–D) and compared the results with the contrast all semantics > control. The activation patterns for environmental sounds > control and auditory words > control were very similar to each other and to the overall pattern found for all semantics > control (as summarized above). The pattern for the contrast pictures > control also identified bilateral inferior temporal gyrus (ITG) and inferior frontal gyrus (IFG) activations (although the IFG peak activations were in a slightly different location) but deviated from the other two conditions in that no activation was observed in the STG. The differences across conditions were confirmed by the stringent conjunction analysis, which did not identify any common STG activation but highlighted significant common activation for all conditions along the ventral temporal lobe, bilaterally (see Table 2 and Figure 1). The conjunction analysis did not show any common IFG activation because the location of the IFG peaks differed slightly across the three single contrasts, leading to a null conjunction result.

Table 2. 

Peak Coordinates of the Conjunction Analysis of Pictures, Auditory Words, and Environmental Sounds

Contrast      Brain Region (COG)            Max Z   MNI Coordinates (x, y, z)
Conjunction   L. inferior temporal gyrus    5.18    −45  −57  −15
              L. inferior temporal gyrus    4.48    −39  −18  −30
              R. inferior temporal gyrus    3.45    45   −36  −24
              R. inferior temporal gyrus    3.38    33   −3   −42
              R. inferior temporal gyrus    2.73    36   −9   −30
              R. cerebellum                 2.95    42   −81  −30

The threshold was set at p < .005, uncorrected. L. = left, R. = right.

We conducted ROI analyses to focus on the key ATL targets in more detail (see Introduction). The results for the aSTG and vATL ROIs are represented in Figure 2 and summarized in Table 3. Consistent with the hypotheses from Binney et al. (2010) and Sharp et al. (2004) and the multimodal impairments observed in SD (Bozeat et al., 2000), the vATL ROI was found to be significantly activated, bilaterally, in all three conditions. Intriguingly, the effect sizes for the pictures > control and environmental sounds > control contrasts were similar in left and right vATL, whereas there was an asymmetric pattern (left > right) of significant activations for the spoken word condition (see Figure 2B and C). This would appear to mirror the graded results found in SD (see Introduction and Discussion).

Figure 2. 

Location and summary of the ROI analyses that compare the [semantic > control] contrast across three specific ATL targets. Comparative activation for each stimulus type is shown for the aSTG ROI (blue, A), left vATL (red, B), and right vATL (green, C). See text for more detailed descriptions.


Table 3. 

Summary of the ROI Analysis to Probe the Differential Contribution of Inferior and Superior ATL Regions to Semantic Processing

Region of Interest               Common Semantic > Control   Picture > Control   Auditory Word > Control   Environmental Sounds > Control
aSTG (Scott et al., 2000)        p = .064                    ns                  p = .018                  p = .024
                                 t = 1.58                                        t = 2.23                  t = 2.1
                                 η² = 0.11                                       η² = 0.08                 η² = 0.07
Left vATL (Sharp et al., 2004)   p < .001                    p < .001            p < .001                  p < .001
                                 t = 7.43                    t = 4.0             t = 6.33                  t = 5.72
                                 η² = 0.75                   η² = 0.19           η² = 0.34                 η² = 0.22
Right vATL (Sharp et al., 2004)  p = .002                    p < .001            p = .02                   p = .013
                                 t = 4.2                     t = 4.12            t = 2.18                  t = 2.4
                                 η² = 0.50                   η² = 0.21           η² = 0.15                 η² = 0.14
Left vATL (Liu et al., 2009)     p = .006                    p = .076            p = .004                  p = .027
                                 t = 2.76                    t = 1.48            t = 2.95                  t = 2.03
                                 η² = 0.36                   η² = 0.06           η² = 0.23                 η² = 0.08

The ROIs were based on independent imaging studies. The results show that the superior part of the ATL (aSTG; from Scott et al., 2000) is specialized for auditory stimuli. In contrast, the vATL (defined in terms of peaks from Liu et al., 2009 or Sharp et al., 2004) is associated with multimodal semantic processes.
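As an aside for readers less familiar with this kind of analysis, the per-cell statistics in Table 3 reduce to a one-sample t-test on participants' mean contrast values (semantic > control) extracted from each ROI, with η² derivable from the t value and degrees of freedom. The sketch below is purely illustrative: the data values are invented, and the actual analysis followed the authors' own pipeline (ROIs defined from independent studies via the SPM toolbox of Brett et al., 2002).

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean contrast values (semantic > control)
# extracted from a single ROI; these numbers are invented for illustration.
roi_contrast = np.array([0.8, 1.2, 0.3, 0.9, 1.5, 0.6, 1.1, 0.4, 0.7, 1.0])

# One-sample t-test against zero: is the ROI reliably activated?
t, p_two_tailed = stats.ttest_1samp(roi_contrast, popmean=0.0)
p_one_tailed = p_two_tailed / 2  # directional hypothesis (activation > 0)

# Effect size eta-squared from t and degrees of freedom:
# eta^2 = t^2 / (t^2 + df)
df = len(roi_contrast) - 1
eta_squared = t**2 / (t**2 + df)
```

This mirrors the form of the reported values (p, t, η² per ROI per contrast) rather than reproducing the study's exact numbers.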

Similar results were found for the vATL region defined on the basis of the intracranial electrode study reported by Liu et al. (2009; see Table 3 for details). In contrast, the pattern observed for the left aSTG ROI (Figure 2A) mirrored that observed in the whole-brain analysis. This left superior ATL subregion was significantly and equivalently activated by both auditory semantic conditions but not by the picture-based task. Indeed, this ROI analysis indicated that there was not even partial activation for the picture-based task in the left aSTG, given that the pictures > control contrast resulted in a negative value.

DISCUSSION

Previous studies have suggested that the ATLs act as a representational hub which, in combination with modality-specific information, forms coherent, amodal semantic representations (Lambon Ralph et al., 2009, 2010; Patterson et al., 2007; Rogers et al., 2004). This theory was originally based on the multimodal impairments observed in SD patients, associated with the underlying bilateral atrophy of the ATL. Although these patient studies associate the ATL with semantic processes, the atrophy covers a large area, and it is not clear, therefore, whether all ATL regions contribute in the same way. Accordingly, it is important to supplement the patient results with neuroimaging data. The current distortion-corrected fMRI study investigated five specific questions: (1) Which ATL subregions contribute to semantic processing? (2) Does their involvement vary in graded or absolute ways across input modality? (3) Are there differences in the contribution of left versus right ATL regions to semantic processing? (4) Is the ATL involved even for superordinate semantic distinctions? (5) Is the ATL involved in semantic processing even for singly presented items that require no combination of meaning across multiple stimuli? To investigate these issues, we used a domain-level (living vs. manmade) semantic decision about single stimuli, presented from different input modalities (i.e., auditory words, pictures, and environmental sounds). The results are summarized and discussed below.

The current fMRI results show that the left superior part of the ATL is specialized for auditory processing (both auditory sounds and words), whereas the ventral bilateral ATL/BTLA underlies semantic processing for auditory words, environmental sounds, and pictures. This amodal response characteristic of the vATL provides direct support for the notion that this region is core to the integration of multiple modality-specific sources of information and to the formation of modality-invariant, coherent representations (Binney et al., 2010; Lambon Ralph et al., 2009, 2010; Patterson et al., 2007; Spitsyna et al., 2006; Sharp et al., 2004).

Whether the semantic system requires a modality-invariant representational hub is still under debate. For example, on the basis of patient studies, the classical view proposed by Wernicke and Meynert (see Eggert, 1977) suggested that modality-specific regions interact directly to generate conceptual information. Indeed, perhaps the most popular view arising from neuroimaging studies is that concepts are represented in distributed systems across the brain (for a review, see Martin & Chao, 2001). These theories do not posit the need for an amodal semantic hub or sometimes argue directly against it (Martin, 2007). However, Lambon Ralph et al. (2010) suggested that a "distributed only" system is not ideal for the complex computations required to generate coherent semantic representations (see also Rogers & McClelland, 2004). A key, perhaps defining, function of coherent conceptual representations is the ability to fuse the many modality-specific sources of information together accurately and then to be able to compute generalizations on the basis of semantic rather than superficial similarities (an idea that can be found in philosophy and cognitive science, as well as in contemporary neuropsychology; Lambon Ralph et al., 2010; Wittgenstein, 2001; Smith & Medin, 1981). It has been suggested that, because this involves complex nonlinear mappings, such computations require the addition of an intermediate ("hidden") layer mediating between the modality-specific source regions (Lambon Ralph et al., 2010; Rogers et al., 2004; Plaut, 2002). As such, the vATL is an ideal neural location for such a modality-invariant representational hub, given that it is widely connected with secondary perceptual and motor cortices (Gloor, 1997).

Simmons and Martin (2009) argued that, although there may be an amodal semantic hub, it is not likely that the anterior temporal region underlies such a system as this area is not well represented in the neuroimaging literature. However, in a recent meta-analysis, we showed that the lack of imaging evidence associating the ATL with semantic processes can be explained by various technical and design issues (see Introduction; Visser, Jefferies, et al., 2010). By using distortion-corrected fMRI, we have obtained convergent data from three studies showing vATL activation during various semantic tasks on the basis of visually presented stimuli (Visser, Embleton, et al., 2010, submitted; Binney et al., 2010). By adding auditory stimuli to the current study, we have been able to demonstrate that the bilateral vATL/BTLA is activated during semantic processing for all input modalities. This is consistent not only with the hypothesis that the bilateral vATL underpins the modality-invariant hub within the hub-and-spoke semantic framework (Lambon Ralph et al., 2009, 2010; Patterson et al., 2007; Rogers et al., 2004) but is also consistent with the fact that this is a maximal area of atrophy in SD (Visser, Embleton, et al., 2010, submitted; Binney et al., 2010; Galton et al., 2001; Mummery et al., 2000).

Although the vATL region responded significantly and bilaterally to all three types of stimuli, we also found an intriguing graded difference across left and right vATL. Specifically, we found equivalent degrees of activation for the picture-based and environmental sound conditions but left > right activation for the spoken words. As noted in the Results section, this finding mirrors the pattern found when SD patients are divided by their distribution of atrophy. Specifically, patients with more left > right ATL atrophy demonstrate worse performance for verbal than picture-based materials, whereas the reverse trend is true for SD patients with right > left atrophy (Mion et al., 2010; Snowden et al., 2004; Lambon Ralph et al., 2001). The current imaging results suggest that the differences observed in the SD patients actually reflect a graded division of labor across the intact left and right vATL. Previous computational models have shown that this kind of graded difference emerges automatically when the nature of the input or the pattern of connectivity is taken into account (Lambon Ralph et al., 2001). Following this idea, our working assumption is that the left > right difference for spoken stimuli reflects the stronger connectivity of left STG auditory regions to the left vATL. As such, although both left and right vATL contribute to the processing of all stimuli, the left vATL may become somewhat more important for auditory–verbal stimuli simply on the basis of connectivity and the quality of the auditory input required for semantic decoding of spoken words (Rauschecker & Scott, 2009). Overall, these results suggest that the bilateral vATL forms the hub of a single semantic system, processing the meaning of all stimulus types. In addition, there are some limited, graded differences for verbal stimuli in the left vATL, which might reflect its pattern of connectivity to auditory- or language-related areas (Lambon Ralph et al., 2001).

The results from the current study show that the left aSTG is predominantly involved in aspects of high-order auditory processing that, perhaps jointly with the vATL (Sharp et al., 2004), permits semantic processing of auditory stimuli. To our knowledge, this is the first study to show a similar pattern of BOLD response for verbal and nonverbal auditory processing in the aSTG. Other imaging studies that included auditory–verbal and/or nonverbal information were not designed to examine this issue (Hocking & Price, 2008, 2009; Tranel et al., 2005). However, the left aSTG has been consistently associated with auditory–verbal semantic processes, namely, with the processing of verbal information (Hocking & Price, 2009), intelligible speech (Scott et al., 2000), or speech-like stimuli (Overath et al., 2008). In addition, a separate line of investigation has suggested that the anterior superior temporal lobe is involved in invariant processing of auditory information, enabling auditory object recognition regardless of irrelevant changes (Overath et al., 2008; Warren, Scott, Price, & Griffiths, 2006; Griffiths & Warren, 2004). The current results suggest that the left aSTG is involved in the processing of both auditory words and environmental sounds. Although the exact acoustic properties that are relevant for the processing of environmental sounds and auditory words differ, the same mechanism might underlie the extraction of an invariant auditory "object" that supports, or forms part of, the extraction of meaning for verbal and nonverbal auditory input (Griffiths & Warren, 2004).

A further aim was to investigate whether the ATL is specialized for specific entities. Previous imaging studies have shown ATL activation during semantic processing at the specific or basic level but not at the domain level (Rogers et al., 2004; Tyler et al., 2004). In addition, it has been noted that the semantic impairments observed in SD patients exhibit a specificity gradient with better performance at the domain level (Rogers & McClelland, 2004; Tyler et al., 2004; Warrington, 1975). These and other studies (Tranel et al., 1997; Damasio et al., 1996) could be taken to associate the ATL solely with semantic processes at the specific level. However, the current results show that the ATL is also activated for domain-level decisions. These results fit with computational models of semantic memory in which specific versus domain-level differences reflect graded rather than absolute variations (see Introduction; Rogers et al., 2004). If correct, this would predict that semantic areas should be activated for all levels of specificity but less so for domain-level judgments, because it takes the semantic system less computational effort to activate the minimal level of information required to make these relatively easy decisions.

Finally, we note that the current study found ATL activation for semantic decisions made to singly presented stimuli, which is consistent with the fact that SD patients have poor single-word comprehension and naming performance (Rogers et al., 2004; Lambon Ralph et al., 2001). These observations appear to be incompatible with the hypothesis that the ATL region supports “combinatorial semantics” (Hickok & Poeppel, 2007). Instead, the observation that ATL regions are reliably activated by sentences and other multielement stimuli might simply follow from the fact that there are many more elements to be processed (Visser, Embleton, et al., 2010).

Acknowledgments

This study was supported by an MRC program grant (G0501632) and MRC pathfinder grant (G0300952).

Reprint requests should be sent to Prof. M. A. Lambon Ralph, Neuroscience and Aphasia Research Unit, Zochonis Building, School of Psychological Sciences, University of Manchester, Oxford Road, Manchester, M13 9PL, UK, or via e-mail: matt.lambon-ralph@manchester.ac.uk.

REFERENCES

Amedi, A., von Kriegstein, K., van Atteveldt, N. M., Beauchamp, M. S., & Naumer, M. J. (2005). Functional imaging of human crossmodal identification and object recognition. Experimental Brain Research, 166, 559–571.

Binney, R. J., Embleton, K. V., Jefferies, E., Parker, G. J. M., & Lambon Ralph, M. A. (2010). The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: Evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cerebral Cortex, 20, 2728–2738.

Bozeat, S., Lambon Ralph, M. A., Patterson, K., Garrard, P., & Hodges, J. R. (2000). Non-verbal semantic impairment in semantic dementia. Neuropsychologia, 38, 1207–1215.

Brett, M., Anton, J. L., Valabregue, R., & Poline, J. B. (2002). Region of interest analysis using an SPM toolbox. Neuroimage, 16, 497.

Brodmann, K. (2005). Brodmann's "localisation in the cerebral cortex" (L. Garey, Trans.). New York: Springer.

Catani, M., & Ffytche, D. H. (2005). The rises and falls of disconnection syndromes. Brain, 128, 2224–2239.

Coccia, M., Bartolini, M., Luzzi, S., Provinciali, L., & Lambon Ralph, M. A. (2004). Semantic memory is an amodal, dynamic system: Evidence from the interaction of naming and object use in semantic dementia. Cognitive Neuropsychology, 21, 513–527.

Damasio, H., Grabowski, T. J., Tranel, D., Hichwa, R. D., & Damasio, A. R. (1996). A neural basis for lexical retrieval. Nature, 380, 499–505.

Devlin, J. T., Russell, R. P., Davis, M. H., Price, C. J., Wilson, J., Moss, H. E., et al. (2000). Susceptibility-induced loss of signal: Comparing PET and fMRI on a semantic task. Neuroimage, 11, 589–600.

Ding, S., Van Hoesen, G. W., Cassell, M. D., & Poremba, A. (2009). Parcellation of human temporal polar cortex: A combined analysis of multiple cytoarchitectonic, chemoarchitectonic, and pathological markers. Journal of Comparative Neurology, 514, 595–623.

Eggert, G. H. (1977). Wernicke's work on aphasia: A source-book and review. The Hague: Mouton.

Embleton, K., Haroon, H., Lambon Ralph, M. A., Morris, D., & Parker, G. J. M. (2010). Distortion correction for diffusion weighted MRI tractography and fMRI in the temporal lobes. Human Brain Mapping, 31, 1570–1587.

Friston, K. J., Penny, W. D., & Glaser, D. E. (2005). Conjunction revisited. Neuroimage, 25, 661–667.

Galton, C. J., Patterson, K., Graham, K., Lambon Ralph, M. A., Williams, G., Antoun, N., et al. (2001). Differing patterns of temporal atrophy in Alzheimer's disease and semantic dementia. Neurology, 57, 216–225.

Gloor, P. (1997). The temporal lobe and the limbic system. Oxford, UK: Oxford University Press.

Griffiths, T. D., & Warren, J. D. (2004). What is an auditory object? Nature Reviews Neuroscience, 5, 887–892.

Halgren, E., Baudena, P., Heit, G., Clarke, M., & Marinkovic, K. (1994). Spatiotemporal stages in face and word-processing: 1. Depth recorded potentials in the human occipital and parietal lobes. Journal of Physiology-Paris, 88, 1–50.

Henson, R. N. A. (2006). Efficient experimental design for fMRI. In K. J. Friston, J. T. Ashburner, S. J. Kiebel, T. E. Nichols, & W. D. Penny (Eds.), Statistical parametric mapping: The analysis of functional brain images (pp. 193–210). London: Academic Press.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.

Hocking, J., & Price, C. J. (2008). The role of the posterior superior temporal sulcus in audiovisual processing. Cerebral Cortex, 18, 2439–2449.

Hocking, J., & Price, C. J. (2009). Dissociating verbal and nonverbal audiovisual object processing. Brain and Language, 108, 89–96.

Hodges, J. R., Patterson, K., Oxbury, S., & Funnell, E. (1992). Semantic dementia: Progressive fluent aphasia with temporal-lobe atrophy. Brain, 115, 1783–1806.

Lambon Ralph, M. A., Graham, K. S., Ellis, A. W., & Hodges, J. R. (1998). Naming in semantic dementia – What matters? Neuropsychologia, 36, 775–784.

Lambon Ralph, M. A., Graham, K. S., Patterson, K., & Hodges, J. R. (1999). Is a picture worth a thousand words? Evidence from concept definitions by patients with semantic dementia. Brain and Language, 70, 309–335.

Lambon Ralph, M. A., & Howard, D. (2000). Gogi aphasia or semantic dementia? Simulating and assessing poor verbal comprehension in a case of progressive fluent aphasia. Cognitive Neuropsychology, 17, 437–465.

Lambon Ralph, M. A., Howard, D., Nightingale, G., & Ellis, A. W. (1998). Are living and non-living category-specific deficits causally linked to impaired perceptual or associative knowledge? Evidence from a category-specific double dissociation. Neurocase, 4, 311–338.

Lambon Ralph, M. A., Lowe, C., & Rogers, T. T. (2007). Neural basis of category-specific semantic deficits for living things: Evidence from semantic dementia, HSVE and a neural network model. Brain, 130, 1127–1137.

Lambon Ralph, M. A., McClelland, J. L., Patterson, K., Galton, C. J., & Hodges, J. R. (2001). No right to speak? The relationship between object naming and semantic impairment: Neuropsychological evidence and a computational model. Journal of Cognitive Neuroscience, 13, 341–356.

Lambon Ralph, M. A., & Patterson, K. (2008). Generalization and differentiation in semantic memory: Insights from semantic dementia. Annals of the New York Academy of Sciences, 1124, 61–76.

Lambon Ralph, M. A., Pobric, G., & Jefferies, E. (2009). Conceptual knowledge is underpinned by the temporal pole bilaterally: Convergent evidence from rTMS. Cerebral Cortex, 19, 832–838.

Lambon Ralph, M. A., Sage, K., Jones, R. W., & Mayberry, E. J. (2010). Coherent concepts are computed in the anterior temporal lobes. Proceedings of the National Academy of Sciences, U.S.A., 107, 2717–2722.

Liu, H. S., Agam, Y., Madsen, J. R., & Kreiman, G. (2009). Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex. Neuron, 62, 281–290.

Luzzi, S., Snowden, J. S., Neary, D., Coccia, M., Provinciali, L., & Lambon Ralph, M. A. (2007). Distinct patterns of olfactory impairment in Alzheimer's disease, semantic dementia, frontotemporal dementia, and corticobasal degeneration. Neuropsychologia, 45, 1823–1831.

Marinkovic, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., & Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron, 38, 487–497.

Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45.

Martin, A., & Chao, L. L. (2001). Semantic memory and the brain: Structure and processes. Current Opinion in Neurobiology, 11, 194–201.

Mion, M., Patterson, K., Acosta-Cabronero, J., Pengas, G., Izquierdo-Garcia, D., Hong, Y. T., et al. (2010). What the left and right anterior fusiform gyri tell us about semantic memory. Brain, 133, 3256–3268.

Mummery, C. J., Patterson, K., Price, C. J., Ashburner, J., Frackowiak, R. S. J., & Hodges, J. R. (2000). A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Annals of Neurology, 47, 36–45.

Nichols, T., Brett, M., Andersson, J., Wager, T., & Poline, J.-B. (2005). Valid conjunction inference with the minimum statistic. Neuroimage, 25, 653–660.

Overath, T., Kumar, S., von Kriegstein, K., & Griffiths, T. D. (2008). Encoding of spectral correlation over time in auditory cortex. Journal of Neuroscience, 28, 13268–13273.

Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.

Perani, D., Schnur, T., Tettamanti, C., Gorno-Tempini, M., Cappa, S. F., & Fazio, F. (1999). Word and picture matching: A PET study of semantic category effects. Neuropsychologia, 37, 293–306.

Piwnica-Worms, K. E., Omar, R., Hailstone, J. C., & Warren, J. D. (2010). Flavour processing in semantic dementia. Cortex, 46, 761–768.

Plaut, D. C. (2002). Graded modality-specific specialization in semantics: A computational account of optic aphasia. Cognitive Neuropsychology, 19, 603–639.

Pobric, G., Jefferies, E., & Lambon Ralph, M. A. (2007). Anterior temporal lobes mediate semantic representation: Mimicking semantic dementia by using rTMS in normal participants. Proceedings of the National Academy of Sciences, U.S.A., 104, 20137–20141.

Pobric, G., Jefferies, E., & Lambon Ralph, M. A. (2010). Amodal semantic representations depend on both left and right anterior temporal lobes: New rTMS evidence. Neuropsychologia, 48, 1336–1342.

Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12, 718–724.

Rogers, T., & McClelland, J. L. (2004). Semantic cognition: A parallel distributed processing approach. Cambridge, MA: MIT Press.

Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L., Hodges, J. R., et al. (2004). Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review, 111, 205–235.

Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Faseyitan, O., Brecher, A., Dell, G. D., et al. (2009). Anterior temporal involvement in semantic word retrieval: Voxel-based lesion-symptom mapping evidence from aphasia. Brain, 132, 3411–3427.

Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406.

Sharp, D. J., Scott, S. K., & Wise, R. J. S. (2004). Retrieving meaning after temporal lobe infarction: The role of the basal language area. Annals of Neurology, 56, 836–846.

Simmons, W. K., & Martin, A. (2009). The anterior temporal lobes and the functional architecture of semantic memory. Journal of the International Neuropsychological Society, 15, 645–649.

Smith, E. E., & Medin, D. L. (1981). Categories and concepts. Cambridge, MA: Harvard University Press.

Snowden, J. (2002). Disorders of semantic memory. In A. Baddeley, B. Wilson, & M. Kopelman (Eds.), Handbook of memory disorders (2nd ed., pp. 293–314). Chichester: Wiley.

Snowden, J. S., Thompson, J. C., & Neary, D. (2004). Knowledge of famous faces and names in semantic dementia. Brain, 127, 860–872.

Spitsyna, G., Warren, J. E., Scott, S. K., Turkheimer, F. E., & Wise, R. J. S. (2006). Converging language streams in the human temporal lobe. Journal of Neuroscience, 26, 7328–7336.

Tranel, D., Damasio, H., & Damasio, A. R. (1997). A neural basis for the retrieval of conceptual knowledge. Neuropsychologia, 35, 1319–1327.

Tranel, D., Grabowski, T. J., Lyon, J., & Damasio, H. (2005). Naming the same entities from visual or from auditory stimulation engages similar regions of left inferotemporal cortices. Journal of Cognitive Neuroscience, 17, 1293–1305.

Tyler, L. K., Stamatakis, E. A., Bright, P., Acres, K., Abdallah, S., Rodd, J. M., et al. (2004). Processing objects at different levels of specificity. Journal of Cognitive Neuroscience, 16, 351–362.

Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R. S. J. (1996). Functional anatomy of a common semantic system for words and pictures. Nature, 383, 254–256.

Visser, M., Embleton, K. V., Jefferies, E., Parker, G. J., & Lambon Ralph, M. A. (2010). The inferior, anterior temporal lobes and semantic memory clarified: Novel evidence from distortion-corrected fMRI. Neuropsychologia, 48, 1689–1696.

Visser, M., Embleton, K. V., Jefferies, E., Parker, G. J., & Lambon Ralph, M. A. (submitted). Evidence for a rostral gradient of convergence in the temporal lobes: An fMRI study of verbal and non-verbal semantic processing.

Visser, M., Jefferies, E., & Lambon Ralph, M. A. (2010). Semantic processing in the anterior temporal lobes: A meta-analysis of the functional neuroimaging literature. Journal of Cognitive Neuroscience, 22, 1083–1094.

Warren, J. D., Scott, S. K., Price, C. J., & Griffiths, T. D. (2006). Human brain mechanisms for the early analysis of voices. Neuroimage, 31, 1389–1397.

Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27, 635–657.

Wise, R. J. S. (2003). Language systems in normal and aphasic human subjects: Functional imaging studies and inferences from animal studies. British Medical Bulletin, 65, 95–119.

Wittgenstein, L. (2001). Philosophical investigations: The German text, with a revised English translation (50th anniversary commemorative ed.). Oxford, UK: Wiley-Blackwell.