Abstract

We carried out an fMRI study with a twofold purpose: to investigate the relationship between networks dedicated to semantic and visual processing and to address the issue of whether semantic memory is subserved by a unique network or by different subsystems, according to semantic category or feature type. To achieve our goals, we administered a word–picture matching task, with within-category foils, to 15 healthy subjects during scanning. Semantic distance between the target and the foil and semantic domain of the target–foil pairs were varied orthogonally. Our results suggest that an amodal, undifferentiated network for the semantic processing of living things and artifacts is located in the anterolateral aspects of the temporal lobes; in fact, activity in this substrate was driven by semantic distance, not by semantic category. By contrast, activity in ventral occipito-temporal cortex was driven by category, not by semantic distance. We interpret the latter finding as the effect exerted by systematic differences between living things and artifacts at the level of their structural representations and possibly of their lower-level visual features. Finally, we attempt to reconcile contrasting data in the neuropsychological and functional imaging literature on semantic substrate and category specificity.

INTRODUCTION

Since the seminal paper by Warrington (1975) on the “selective impairment of semantic memory,” a great deal of neuropsychological evidence has accumulated supporting the view that semantic memory represents, to some extent, an isolable system at both the functional and the neuroanatomical level. In the early 1990s, major support for this view came from research on semantic dementia. This disease is characterized by a typical neuropsychological and lesion profile: a selective impairment of conceptual knowledge together with atrophic changes involving the anterior temporal lobe structures, frequently more pronounced in the left hemisphere (Garrard & Hodges, 2000; Hodges, Patterson, Oxbury, & Funnell, 1992).

Within the functional system subserving conceptual knowledge, deficits are sometimes specific to items referring to particular semantic categories. Usually, in these cases, categories belonging to the living domain (such as animals, fruits, and vegetables) are more impaired, whereas nonliving categories (e.g., vehicles, tools, and furniture) are relatively spared. However, the reverse pattern of impairment (i.e., better performance on living than on nonliving things) has also been observed (see Capitani, Laiacona, Mahon, & Caramazza, 2003; Gainotti, 2000, for reviews). The finding that semantic impairments due to brain damage are sometimes category-specific suggests that semantic memory has an internal organization. According to the domain-specific hypothesis (DSH; Caramazza & Shelton, 1998), dedicated neural substrates might have evolved for effectively recognizing members of categories that are relevant for survival (i.e., conspecifics, plant life, animals, and possibly tools). According to a concurrent view (Warrington & Shallice, 1984), the first-order organizing principle of semantic memory is not category, but rather, type of knowledge. In keeping with the so-called sensory/functional theory (SFT), dedicated neural substrates store conceptual knowledge according to feature type. Perceptual knowledge, which refers to what things look like (e.g., that bananas are yellow), is segregated from functional knowledge, which refers to what things are used for (e.g., that knives are used for cutting). Based on the assumption that perceptual knowledge is more important for distinguishing living things from one another and functional knowledge for distinguishing nonliving things, selective damage to either the perceptual or the functional neural substrate is expected to have an unequal effect on items referring to the two semantic domains.

According to a different approach to category specificity (e.g., Funnell & Sheridan, 1992), a relatively more severe impairment for living things might arise because items referring to this domain pose higher demands on an undifferentiated semantic substrate, perhaps due to a cross-domain imbalance in several confounding factors, such as concept familiarity and word frequency (living things are, on average, less familiar and are designated by less frequently used words). More recently, some authors have proposed that living things may be more error prone because they show, on average, a smaller semantic distance (i.e., a higher degree of featural overlap) between category members than nonliving items (Zannino, Perri, Pasqualetti, Caltagirone, & Carlesimo, 2006a; Cree & McRae, 2003).

The assumption that separable neural substrates are crucially involved in impairments of living and nonliving things is a common tenet of both DSH and SFT; however, approaches to category specificity based on an imbalance of processing demands (IPD) do not share this assumption, arguing for an amodal (i.e., not organized according to feature types) and domain-general semantic system. Unfortunately, to date, the large neuropsychological literature devoted to this topic has not provided a clear-cut picture of the neurological correlates of living things impairments as opposed to nonliving things impairments. Thus, the distinction proposed by Gainotti (2000) between fronto-parietal regions and temporal regions subserving artifacts and living things semantic knowledge, respectively, is not, in our opinion, fully supported by the empirical data. In fact, no obvious semantic deficit for either artifacts or living things is associated with fronto-parietal lesions (Chertkow, Bub, Deaudon, & Whitehead, 1997; Hart & Gordon, 1990), whereas in patients suffering from semantic dementia, the observed anterior temporal lobe atrophy is usually associated with a comparable semantic deficit across domains (Garrard, Lambon Ralph, & Hodges, 2002).

By the late 1990s, progress in functional neuroimaging (mainly PET and fMRI techniques) provided new sources of evidence for adjudicating between different hypotheses on the neural correlates of semantic memory and category specificity. However, the evidence collected by means of these new methodologies is not easy to reconcile with the lesion literature. A major source of inconsistency regards the pattern of activity typically associated with semantic tasks in neuroimaging studies, which does not overlap the lesion pattern typically observed in brain-damaged patients showing semantic impairments (Brambati et al., 2006; Rogers et al., 2006). In particular, the anterior aspects of the (left) temporal lobe, indicated by the lesion literature as a critical substrate for semantic memory, are very inconsistently active in neuroimaging studies. By contrast, activations in left ventrolateral prefrontal cortex and in ventral occipito-temporal cortex are nearly ubiquitous in the neuroimaging literature on semantics, whereas patients with brain damage confined to these sites do not show any evident semantic impairment (Thompson-Schill, D'Esposito, Aguirre, & Farah, 1997), thus raising the question of the true nature of the cognitive processes instantiated in those regions.

Ventrolateral prefrontal cortex (VLPFC) encompasses the inferior frontal gyrus, comprising approximately BA 44/45 and part of BA 47 (Badre & Wagner, 2007). In the left hemisphere, this region was found to be activated whenever more demanding conditions were contrasted with less demanding ones, not only when subjects performed tasks requiring the semantic processing of words and pictures but also when they were engaged in a variety of nonsemantic tasks (Duncan & Owen, 2000). Accordingly, some authors have suggested that left VLPFC may house a general-purpose device involved in the execution of different cognitive tasks without contributing specifically to instantiating semantic memory (see Thompson-Schill, 2003, for a review of this topic).

Ventral occipito-temporal activations are perhaps the most thoroughly investigated finding in the neuroimaging of semantic processing. As this substrate houses the visual ventral stream, that is, a system devoted to the visual recognition of objects, first described in nonhuman primates (Ungerleider & Haxby, 1994), its activation during semantic tasks suggests that “information about object-specific features may be stored within the same neural systems that are active during perception” (Martin & Chao, 2001, p. 194).1 In other words, the activation of the ventral stream during the execution of semantic tasks has been interpreted as supporting the view that semantic knowledge is grounded in modality-specific sensorimotor systems rather than being an amodal representational device (Barsalou, Kyle Simmons, Barbey, & Wilson, 2003). In this framework, the finding of cross-domain differential patterns of activity in cortical regions dedicated to perceptual processing has been interpreted as lending support to the SFT, which assumes a differential distribution of functional and perceptual features across domains. The major problem with this claim is that these differential activations might be located at a purely visual processing level and not at the level of a “perceptual symbol system” (Barsalou et al., 2003). There is little doubt that perception encompasses multiple hierarchically organized steps moving toward a growing abstraction from the perceived stimuli (Grill-Spector, Kourtzi, & Kanwisher, 2001). At the more abstract end of the processing axis, a structural description system specifies long-term memories of the visual properties shared by members of particular object classes (Humphreys, Riddoch, & Quinlan, 1988). A major issue when modeling the relationship between semantics and vision is which level of visual processing interacts with semantics (or is even identical with semantic knowledge). 
The passage by Martin and Chao (2001) quoted above is neutral on the issue of whether perceptual systems supporting semantic knowledge are dedicated to high- or low-level sensory processing. By contrast, Riddoch and Humphreys (2003) clearly state that semantics and vision interact at the level of the structural representation system. In fact, according to these authors, the structural representation system is the device we use for recognizing visually perceived stimuli and also for answering verbal questions such as “does a dog have legs?”. Thus, there is no difference between features specifying the structural representation and visual semantic features. However, in this view, semantic knowledge also encompasses functional and encyclopedic features that are not encoded in structural representations (see Capitani et al., 2003, p. 216 and Coltheart et al., 1998, p. 365, for further discussions on this point). An interesting contribution to this issue was recently made by Patterson, Nestor, and Rogers (2007) and Rogers et al. (2004, 2006). These authors suggest that semantic knowledge is sustained both by modality-specific sensory–motor and language systems and by a supramodal semantic hub. According to this view, visual and verbal modalities (and other sensory–motor representations) contribute to a more abstract level of semantic representation housed in anterior temporal cortex (the semantic hub). We will return to this issue in the Discussion, where we propose a partial revision of the latter account.

The third source of inconsistency between the neuroimaging and the lesion literature regards the role of the anterolateral aspects of the temporal lobes. Although this substrate is consistently affected in semantic dementia, it has been shown to be associated with semantic processing in only a few neuroimaging studies. To explain this unexpected lack of anterolateral temporal activity associated with semantic processing, most studies have invoked imaging artifacts due to air–tissue interfaces near these regions (Devlin et al., 2000). A further problem, however, lies in the recourse to purportedly nonsemantic control tasks, which might themselves involve uncontrolled semantic processing (Binder et al., 1999). Instead of contrasting semantic tasks with nonsemantic baselines, we suggest that contrasts involving the controlled modulation of single semantic factors might be better suited to drive the activity into anterolateral aspects of the temporal lobes. This issue is one focus of the present article and will be taken up in the Discussion.

The purpose of the present study was twofold. First, we aimed to address the issue of whether semantic memory is grounded in perceptual systems or whether it represents an amodal symbol system. Although the former view prevails, particularly in the functional neuroimaging literature, the relationships between vision and semantics are far from being well understood and the hypothesis that semantic knowledge is based on an amodal representational device (see Barsalou et al., 2003 for discussion), perhaps interacting with modality-specific networks (Patterson et al., 2007), deserves further consideration. The second issue we wanted to address is whether there are separable semantic systems, which are critically involved in processing living things and artifacts (as proposed by DSH and SFT), or whether semantic memory is subserved by an undifferentiated substrate for processing concepts across domains (as proposed by IPD-based approaches to category specificity). Note that the two issues are partially independent as DSH and SFT are, in principle, neutral for adjudicating between amodal and perceptual models of conceptual knowledge. In fact, boundaries based on semantic categories or feature types are theoretically possible in both kinds of representations (i.e., amodal and perceptually grounded).

To investigate this issue, we had normal control subjects perform a word–picture matching task in which semantic distance (SemD) between target and foil (low SemD vs. high SemD) and semantic domain (living things vs. artifacts) were varied orthogonally. We also included a nonsemantic baseline task in which subjects were asked to judge whether two scrambled line drawings were identical or different. Using a 2 × 2 design paradigm, we contrasted activity on living versus nonliving items and activity on items with low versus high SemD. By using this paradigm, we were able to dissociate the effects exerted by our semantic factor (i.e., SemD) from the effect of domain and from the effects of two additional factors that are known to differ systematically across domains, namely, knowledge type (see SFT) and attributes at the structural description level (Humphreys et al., 1988).2 Consequently, we were able to make two clear-cut predictions. First, if ventral occipito-temporal cortex subserves vision, but not semantics, then activity in this region should be driven by the factor domain (due to systematic differences in the structural representations of living things and artifacts) and it should not be sensitive to SemD. Second, if anterolateral aspects of the temporal lobes house an undifferentiated substrate for semantic processing across categories, then activity in these regions should be sensitive to SemD and insensitive to domain.

As for left VLPFC, on the assumption that this substrate does not contribute critically to semantic memory but more generally to the execution of several cognitive tasks, we expected that the only factor driving activity in this region would be task difficulty. In other words, we expected that if a significant difference was found in terms of accuracy or reaction times according to the two manipulated factors, the contrasts hard versus easy semantic distance and hard versus easy semantic domain would yield analogous activation patterns in this cortical region.

METHODS

Participants

Fifteen subjects (5 men and 10 women) with a mean age of 28 years (range = 21–33 years), all right-handed according to the Salmaso and Longoni (1985) inventory, participated in the study. All were drug-free, with no history of neurological or psychiatric symptoms. After the procedures were explained to them, the subjects gave their written informed consent in a protocol approved by the Joint Ethics Committee of the Fondazione Santa Lucia.

Stimuli and Procedure

Subjects were administered a word–picture matching task (semantic condition) and a control task (nonsemantic baseline). The stimuli in the semantic task consisted of two line drawings, arranged vertically on a screen, depicting the target and a within-category foil (see Figure 1A). The name of the target was printed centrally, between the two drawings. Subjects had to press a key on a two-key button box with their right index or middle finger, according to the position of the target drawing on the screen (either top or bottom). Reaction times and accuracy were recorded for each item. Stimuli were constructed using a corpus of 64 concrete nouns and related drawings. The 64 concepts that comprised this corpus were drawn from four semantic categories, two of which referred to the living domain (fruits and animals) and two to the nonliving domain (furniture and vehicles). Most neuroimaging studies include as nonliving items objects belonging to the category of tools. Because we relied on a previous feature listing study for computation of the SemD index (see below), the choice of semantic categories to include in the present study was constrained by the categories covered in that study. However, as we were more interested in cross-domain differences at the level of visual structural representations rather than in possible differences resulting from tools being associated with particular action patterns, the exclusion of items from this category does not represent a priori a shortcoming of the present investigation.

Figure 1. 

Examples of the stimuli used during the word–picture matching (A) and the control task (B).


Living things and artifacts included in the present study were equated for the following variables: age of acquisition, prototypicality, word frequency, and visual complexity of the drawings. Familiarity was significantly lower in the living domain, whereas name agreement was lower in the nonliving domain. A full description of this material can be found in Zannino, Perri, Pasqualetti, Caltagirone, and Carlesimo (2006b).

For each item of the word–picture matching task, we computed a measure of SemD between target and foil. Based on data collected by means of a previous feature listing task (Zannino et al., 2006a; the complete database is available free online at www.hsantalucia.it), we were able to represent each concept to be used in the word–picture matching task as a counting vector which had as many positions as the overall amount of unique features produced by the subjects enrolled in the feature listing study (e.g., <has legs> or <used for making cakes>). The value of each position was the number of subjects who listed that feature for that concept (this index, termed dominance, ranged from 0 to 30, as 30 subjects were enrolled in the study); for example, 6 of the 30 subjects listed <found in zoo> for elephant. To obtain semantic distance values relative to each target–foil pair, following Cree and McRae (2003) and Tversky (1977), χ2 distances were computed between the two corresponding vectors. This index was used to vary orthogonally the factors SemD (low vs. high) and domain (living vs. nonliving) in a 2 × 2 design paradigm.
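The distance computation described above can be sketched as follows. This is a minimal illustration assuming the common histogram form of the χ2 distance (the exact variant used in the original study may differ), and the toy dominance vectors are invented for illustration only.

```python
import numpy as np

def chi_square_distance(u, v):
    """Chi-square distance between two feature-count vectors.

    Each position holds a dominance value, i.e., how many of the 30
    feature-listing subjects produced that feature for the concept.
    Positions where both counts are zero are skipped to avoid
    division by zero.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    mask = (u + v) > 0
    return float(np.sum((u[mask] - v[mask]) ** 2 / (u[mask] + v[mask])))

# Hypothetical dominance vectors over four features, e.g.
# <has legs>, <found in zoo>, <is sweet>, <has a tail>:
elephant = [28, 6, 0, 25]
giraffe = [27, 9, 0, 26]   # high featural overlap -> small distance
cherry = [0, 0, 22, 0]     # little overlap -> large distance
```

On this metric, within-category pairs with many shared features (e.g., apricot–cherry) yield small distances and thus count as low-SemD items, whereas pairs with little featural overlap (e.g., walnut–lemon) yield large distances.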

For constructing the word–picture matching task, we created 240 stimuli (equally distributed in the four semantic categories) according to the following four conditions: living things with high SemD (e.g., walnut–lemon or fox–giraffe); living things with low SemD (e.g., apricot–cherry, or squirrel–mouse); nonliving things with high SemD (e.g., couch–refrigerator or tricycle–ambulance); and nonliving things with low SemD (e.g., dresser–wardrobe or aeroplane–helicopter). SemD between stimuli in the high and low conditions was significantly different [F(1, 239) = 513.65, p < .001]; on the contrary, it was balanced across domains [F(1, 239) = 0.00, p = .954]. Also, within the high SemD and low SemD conditions, no cross-domain differences were observable [high living vs. high nonliving: F(1, 119) = 0.274, p = .602; low living vs. low nonliving: F(1, 119) = 0.129, p = .721]. As 240 stimuli were constructed based on 64 concepts, each concept occurred in more than one stimulus (range = 4–11; mean = 7.5, SD = 1.75). However, the following constraints were observed: (i) the same pair of pictures could not occur in two different stimuli; (ii) each concept was used approximately the same number of times as a target (mean = 3.75, SD = 1.30) and as a foil (mean = 3.75, SD = 1.4); (iii) the average number of times the same concept appeared as a target or a foil was approximately the same across domains. Mean (SD) for target in the living and nonliving domains, respectively: 3.75 (1.34) and 3.75 (1.27). Mean (SD) for foil in the living and nonliving domains, respectively: 3.75 (1.46) and 3.75 (1.39).

In the baseline condition, subjects were administered a same/different judgment task with scrambled drawings (see Figure 1B). Scrambling was obtained by randomly rearranging portions of the same line drawings used in the semantic task. For this task, we created 120 stimuli. The stimuli in this condition consisted of two scrambled drawings arranged vertically on a screen and separated by an array of “Xs.” Subjects had to judge whether the scrambles at the top and the bottom of the screen were the same or different. They responded using a button box, as previously described.

fMRI Acquisition

For the fMRI experiment, the 240 stimuli of the word–picture matching task and the 120 stimuli of the control task were divided into four sets of 90 stimuli each. Each set was administered in a separate fMRI run, with the order of presentation of the four sets randomized across subjects. The 90 stimuli in each set were balanced for semantic category and SemD. Each fMRI run consisted of three word–picture matching blocks (20 trials/block) alternated with control blocks (10 trials/block). In the word–picture matching blocks, no two consecutive items were drawn from the same semantic category. Each trial started with the presentation of a central fixation cross (400 msec) followed by the task-relevant visual stimulus, which remained on the screen for 500 msec. Keypress responses were recorded for a maximum of 1500 msec after the stimulus presentation. The interstimulus interval was uniformly distributed over 3 to 4 sec. Each fMRI run lasted approximately 6 min.
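For concreteness, the block and trial structure of one run could be generated as in the following sketch. The alternation scheme and the per-trial timing follow the description above; the random seed, the exact jitter realization, and the bookkeeping of onsets are assumptions added for illustration.

```python
import random

def build_run_schedule(seed=0):
    """Schematic trial list for one fMRI run: three word-picture
    matching blocks (20 trials each) alternating with three control
    blocks (10 trials each), 90 trials in total. Each trial
    accumulates a 400 msec fixation, a 500 msec stimulus, and an
    interstimulus interval drawn uniformly from 3-4 sec."""
    rng = random.Random(seed)
    trials, t = [], 0.0
    for _ in range(3):
        for kind, n in (("word_picture", 20), ("control", 10)):
            for _ in range(n):
                trials.append({"onset": round(t, 2), "type": kind})
                t += 0.4 + 0.5 + rng.uniform(3.0, 4.0)
    return trials

run = build_run_schedule()
print(len(run))  # 90 trials, spanning roughly 6-7 min
```

With these timings a run comes out at roughly the 6 min stated above (90 trials at about 4.4 sec each).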

Imaging was carried out in a 3-T Siemens Allegra head scanner (Siemens, Erlangen, Germany). BOLD contrast was obtained using echo-planar T2*-weighted imaging (EPI). Each functional image comprised 32 transverse slices (64 × 64 matrix, TE = 30 msec, acquisition time 65 msec per slice) and covered the whole cerebral cortex. One hundred sixty brain volumes were acquired consecutively for each fMRI run. Repetition time was 2.08 sec, and in-plane resolution was 3 × 3 mm; slice thickness and gap were 2.5 and 1.25 mm, respectively.

fMRI Analysis

Data were analyzed using SPM2 (Wellcome Department of Cognitive Neurology, www.fil.ion.ucl.ac.uk) implemented in MATLAB 6.5 (The MathWorks Inc., Natick, MA) for data preprocessing and statistical analyses. The first four volumes of each fMRI run were discarded to allow for stabilization of longitudinal magnetization (leaving a total of 624 volumes for each subject). Preprocessing included rigid-body transformation to correct for head movement (realignment). Slice acquisition delays were corrected using the middle slice as reference (slice timing). All images were normalized to the standard SPM2 EPI template in the MNI space (Collins, Neelin, Peters, & Evans, 1994) using the mean of the functional volumes; then, they were spatially smoothed using an isotropic Gaussian kernel (full width at half maximum = 8 mm) to increase the signal-to-noise ratio and to facilitate group analyses.

Statistical inference was based on a random effect approach (Penny & Holmes, 2004). This comprised two steps. In the first step, each subject's data were best fitted at every voxel using a linear combination of predictors (general linear model). These were the timings of 10 event types, convolved with the SPM2 standard hemodynamic response function. The 10 event types included the following: animals with low SemD, animals with high SemD, fruits with low SemD, fruits with high SemD, vehicles with low SemD, vehicles with high SemD, furniture with low SemD, furniture with high SemD; and “same” and “different” trials for the control task. Only correctly performed trials were included in the analysis. For each subject, linear compounds (contrasts) were used to determine: (i) the average effect of all word–picture matching trials versus control trials (one contrast image per subject, averaging the four fMRI runs); (ii) the responses for the four conditions of interest, given by the factorial crossing SemD (high/low) × Domain (living/nonliving) and resulting in four contrast images per subject (each averaging animals/fruits or vehicles/furniture, and fMRI runs).
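The logic of this first-level analysis can be illustrated with a simplified sketch: event onsets are convolved with a canonical hemodynamic response function to build regressors, a general linear model is fit voxel-wise, and contrasts are linear compounds of the estimated betas. This is not the SPM2 implementation; the double-gamma HRF parameters and the plain ordinary-least-squares fit are simplifying assumptions.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.08  # repetition time in seconds (from the acquisition above)

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (an SPM-like shape; the
    exact parameters here are assumptions, not SPM2's defaults)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus undershoot
    return h / h.max()

def make_regressor(onsets_sec, n_scans, tr=TR):
    """Stick function at the event onsets, convolved with the HRF."""
    sticks = np.zeros(n_scans)
    for onset in onsets_sec:
        i = int(round(onset / tr))
        if i < n_scans:
            sticks[i] = 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

def fit_glm(Y, X):
    """Voxel-wise OLS estimate of beta in Y = X @ beta + noise."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta
```

In the design described above there would be ten such regressors (the eight semantic cells plus the two control trial types) and a contrast such as low minus high SemD would be computed as c @ beta with, e.g., c = [1, -1, 1, -1, ..., 0].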

In the second step of the fMRI analyses (random effects, group-level analyses), we tested for the overall effect of word–picture matching versus control task using a one-sample t test. The four contrast images relating to the condition-specific effects of interest were assessed using a within-subject ANOVA with the following factors: SemD (high/low) and category (living/nonliving). For the ANOVA, correction for nonsphericity (Friston et al., 2002) was used to account for possible differences in error variance across conditions and non-independent error terms for the repeated measures. Statistical thresholds were set to p ≤ .05, corrected for multiple comparisons (family-wise error) at the cluster level (cluster size estimated at p-uncorrected = .001).

RESULTS

Behavioral Data

The semantic task proved less demanding than the control task in terms of accuracy. In fact, the mean response accuracy across the 15 experimental subjects was 87% and 80% in the semantic and in the control conditions, respectively [F(1, 14) = 11.57, p = .004]. However, mean reaction times for the correct responses showed a trend in the reverse direction. In fact, RTs on scrambles (948 msec) were shorter than RTs on the word–picture matching items (981 msec) at a nearly significant level [F(1, 14) = 3.46, p = .084], thus suggesting a speed–accuracy tradeoff.

As for the semantic condition, both domain and SemD affected subjects' performances. A two-way ANOVA on accuracy yielded significant main effects of domain [F(1, 14) = 6.51, p = .023] and SemD [F(1, 14) = 37.43, p < .001]. The mean percentage of correct responses was significantly higher on nonliving items (90%, SD = 8) than on living things (86%, SD = 11) and on items with high SemD (92%, SD = 6) than on items with low SemD (83%, SD = 11). The Domain × SemD interaction was far from significant [F(1, 14) = 0.07, p = .794]. An analogous ANOVA on reaction times (on correct responses) yielded a similar pattern of results. A main effect of domain was found [F(1, 14) = 94.67, p < .001] due to shorter RTs on nonliving (951 msec, SD = 101) than on living items (1015 msec, SD = 97). Also, the main effect of SemD was highly significant [F(1, 14) = 56.57, p < .001] due to shorter RTs on items with high SemD (960 msec, SD = 103) than on items with low SemD (1005 msec, SD = 100). Also, in this case, the Domain × SemD interaction was far from significant [F(1, 14) = 0.03, p = .861].

In summary, accuracy and reaction time data showed that items with low SemD are harder to process than those with high SemD, irrespective of the semantic domain they refer to. On the other hand, living things are harder than nonliving things at both levels of SemD. Finally, the semantic task turned out to be slightly easier than the control task.

fMRI Data

We will start our presentation of the imaging results by describing the overall effect of word–picture matching versus the control task. Then, we will report the results that refer to the effect of domain and SemD. When describing each effect, we will focus on the activity in a few regions critical for our predictions. In particular, we were interested in the activation patterns of ventral occipito-temporal cortex and of the anterolateral aspects of the temporal lobes which, according to our hypothesis, subserve visual and semantic processing, respectively.

Effect of Word–Picture Matching versus Control Task

Performing the word–picture matching task relative to performing a same/different judgment task on scrambled figures increased activity in a number of regions, including left VLPFC and ventral occipito-temporal cortex bilaterally (see Table 1 and Figure 2). Enhanced activity was also observed in the left thalamus and bilaterally in dorsal parieto-occipital cortex, in the precuneus, and in the posterior cingulate. Note that no differential activation was found in the anterolateral aspects of the temporal lobes. As for the activations in the left frontal lobe, two distinct foci were observed. One more dorsal cluster was located in left premotor cortex, within the middle frontal gyrus (BA 6), whereas a more ventral focus of activation was observed in the left inferior frontal gyrus (BA 44). In the ventral occipito-temporal regions, performing the word–picture matching relative to the control task was associated mainly with left-sided, increased activity encompassing the fusiform gyri (BA 19/36/37) and extending anteriorly to parahippocampal cortices (BA 28/35) and laterally to the posterior inferior temporal gyri (BA 37) and the inferior occipital gyri (BA 19).

Table 1. 

Semantic Task Relative to Control Task

Anatomical Regions   Coordinates (x, y, z)   Z   Cluster Size   p
Word–Picture > Control Task 
Frontal cortex 
 L Middle frontal gyrus, BA 6 −28, 4, 64 3.97 197 .007 
 L Inferior frontal gyrus, BA 44 −36, 16, 18 4.96 877 <.001 
Ventral occipito-temporal cortex 
 L Fusiform gyrus, BA 37 −42, −50, −28 5.52 5490 <.001 
 L Inferior occipital gyrus/fusiform, BA 19 −46, −72, −18 5.21 
 L Inferior temporal gyrus/fusiform, BA 37 −58, −56, −12 5.50 
 L Fusiform/Parahippocampal gyrus, BA 36/35 −32, −36, −22 5.39 
 L Parahippocampal gyrus, BA 36/28 −26, −24, −20 4.39 
 R Fusiform gyrus, BA 19 46, −74, −12 5.30 1134 <.001 
 R Inferior occipital gyrus, BA 19 44, −82, −12 5.08 
 R Inferior temporal gyrus/fusiform, BA 20/36 40, −36, −28 4.68 
Other 
 L Middle occipital gyrus, BA 19a −38, −84, 30 5.31 
 R Angular gyrus, BA 39 48, −74, 28 4.14 145 .028 
 L Precuneus, BA 7 −4, −52, 40 5.00 581 <.001 
 Precuneus, BA 7 0, −58, 50 4.68 
 L Posterior cingulate, BA 30 −4, −52, 12 3.84 185 .009 
 R Posterior cingulate, BA 30 4, −56, 10 3.75 
 L Thalamusa −12, −14, 18 4.48 
 L Thalamusa −4, −18, 8 4.04 

Anatomical and Brodmann's areas, MNI coordinates of the maxima within each cluster, Z values, cluster size, and p values corrected at cluster level of the regions that showed a main effect of the considered variables. Coordinates are in millimeters: x = distance to right (+) or left (−) of the mid-sagittal plane; y = distance anterior (+) or posterior (−) to vertical plane through anterior commissure; z = distance above (+) or below (−) intercommissural (AC–PC) line.

(a) These loci fall within the cluster with its main peak at −42, −50, −28 (cluster size 5490).

Figure 2. 

Activation foci for word–picture matching relative to the control task. All clusters are rendered on the surface of the MNI brain template. The transverse section shows the activations at the level of ventral occipito-temporal cortex. For display purposes, SPM thresholds are set to p-uncorrected = .001.

Domain Effect

Table 2 shows significant differential activations related to the main effect of living things (living > nonliving) and artifacts (nonliving > living). Living things, relative to nonliving things, were associated with a bilateral increase of activity in prefrontal cortex and in the lateral aspects of ventral occipito-temporal regions (see Figure 3). Additional loci of increased activity were found bilaterally in the anterior cingulate, in the inferior parietal lobule, and in the thalamus. No differential activation was found in the anterolateral aspects of the temporal lobes. As for the frontal regions, several bilateral symmetrical peaks of increased activity were observed. Moving in the dorsoventral direction, the first two foci were found in left and right premotor cortex (BA 6), within the middle frontal gyrus. The left-sided focus broadly overlapped with the activation observed also in the previous contrast (i.e., word–picture > control task). Two more ventral symmetrical foci were located in VLPFC at the level of the inferior frontal gyri (BA 44); also, in this case, a corresponding left-sided peak of activity was observed when analyzing the overall effect of word–picture matching relative to the control task. Finally, two large activation clusters were found bilaterally extending from the insulae to VLPFC (BA 45, inferior frontal gyri). Within ventral occipito-temporal cortex, a large region of bilateral increased activity was found that encompassed the fusiform gyri and extended laterally to the inferior occipital gyrus and to the posterior aspects of the inferior temporal gyrus. This region broadly overlapped with the more lateral aspects of the activation pattern associated with the effect of word–picture matching relative to the control task (see Figure 2 and Figure 4A).

Table 2. 

Differential Activation Sites According to Domain

Anatomical Regions | Coordinates (x, y, z) | Z | Cluster Size | p
Living > Nonliving 
Frontal cortex 
 L Middle frontal gyrus, BA 6 −32, −4, 60 4.65 714 <.001 
 L Inferior frontal gyrus, BA 44 −50, 6, 28 4.78 493 <.001 
 L Insula −30, 24, 2 5.30 997 <.001 
 L Inferior frontal gyrus, BA 45 −38, 34, 20 5.13 
 R Middle frontal gyrus, BA 6 28, −2, 56 4.73 367 <.001 
 R Inferior frontal gyrus, BA 44 48, 6, 24 4.78 429 <.001 
 R Insula 28, 26, −4 6.26 1341 <.001 
 R Inferior frontal gyrus, BA 45 48, 36, 22 4.16 
Ventral occipito-temporal cortex 
 L Fusiform gyrus, BA 37 −46, −54, −20 5.89 840 <.001 
 L Fusiform gyrus, BA 19 −44, −66, −14 4.89 
 L Inferior occipital gyrus, BA 18 −50, −78, −6 4.51 
 R Fusiform gyrus, BA 37 50, −56, −20 6.23 1381 <.001 
 R Inferior temporal gyrus, BA 37 52, −64, −16 6.16 
 R Inferior occipital gyrus, BA 18 48, −80, −8 5.89 
Other 
 L Anterior cingulate, BA 32 −6, 8, 48 5.11 1172 <.001 
 R Anterior cingulate, BA 32 8, 30, 36 4.14 
 L Inferior parietal lobule, BA 40 −26, −60, 48 5.76 1235 <.001 
 L Inferior parietal lobule, BA 40 −48, −42, 48 4.89 
 R Inferior parietal lobule, BA 40 42, −36, 40 5.14 1238 <.001 
 R Inferior parietal lobule, BA 40 30, −54, 42 5.11 
 L Thalamus −10, −20, 6 4.68 253 .005 
 R Putamen 30, −18, 8 4.30 650 <.001 
 R Thalamus 12, −14, 6 4.24 
 
Nonliving > Living 
Ventral occipito-temporal cortex 
 L Fusiform gyrus, BA 37 −30, −50, −14 >8 8061 <.001 
 L Lingual gyrus, BA 18 −10, −58, 2 5.63 
 L Lingual gyrus, BA 19 −8, −75, −10 5.31 
 R Parahippocampal gyrus, BA 36 26, −36, −22 7.54 
 R Fusiform gyrus, BA 37 32, −44, −16 7.25 
 R Lingual gyrus, BA 19 18, −56, −10 5.65 
 R Lingual gyrus, BA 18 16, −86, −12 6.22 
Other 
 L Middle occipital gyrus (a) −28, −88, 8 4.84 
 R Middle occipital gyrus, BA 18 36, −88, 18 5.14 234 .007 

Anatomical and Brodmann's areas, MNI coordinates of the maxima within each cluster, Z values, cluster size, and p values corrected at cluster level of the regions that showed a main effect of the considered variables. Coordinates are in millimeters: x = distance to right (+) or left (−) of the mid-sagittal plane; y = distance anterior (+) or posterior (−) to vertical plane through anterior commissure; z = distance above (+) or below (−) intercommissural (AC–PC) line.

(a) This locus falls within the cluster with its main peak at −30, −50, −14 (cluster size 8061).

Figure 3. 

Cortical activations for living things relative to nonliving things. All clusters are rendered on the surface of the MNI brain template. For display purposes, SPM thresholds are set to p-uncorrected = .001.

Figure 4. 

(A) Transverse section showing activations for living things (red) and nonliving things (green) in ventral occipito-temporal regions. (B) and (C) show the effect sizes for the four experimental conditions at the peak of activation for living things (B) and nonliving things (C). The level of activity is plotted in arbitrary units (a.u., ±SEM) and is mean-adjusted to zero. For display purposes, SPM thresholds are set to p-uncorrected = .001.

At variance with the main effect of living things, the main effect of nonliving things failed to reveal any peak of differential activation in the frontal regions. However, in keeping with the previous contrast (i.e., living > nonliving), no differential activation was found in the anterolateral aspects of the temporal lobes. Besides the activation pattern within the ventral occipito-temporal region, which will be described next, the only significant peaks of activation were found bilaterally on the dorsal aspects of the occipital lobes. As for the ventral occipito-temporal regions, a large cluster of enhanced activity was observed bilaterally that encompassed the fusiform and the lingual gyri and extended anteriorly to the right parahippocampal gyrus. These regions broadly overlapped with the more medial aspects of the activation pattern associated with the effect of word–picture matching relative to the control task (see Figure 2). Thus, as can be seen in Figure 4A, activations associated with artifacts were distributed more medially, whereas activations associated with living things clustered more laterally in ventral occipito-temporal cortex.

As can be seen in Figure 4, the main effects of living things (Figure 4B) and artifacts (Figure 4C) in ventral occipito-temporal cortex were not driven by a subset of items with either low or high SemD. In fact, an inspection of the effect sizes of the four stimulus conditions (living low SemD, living high SemD, nonliving low SemD, nonliving high SemD) at the peaks of the domain effects revealed that, in these regions, the effect sizes for living things and artifacts were very similar across the two SemD conditions; this was confirmed by the lack of any significant Domain × SemD interaction.
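The logic of this check can be sketched numerically: in a 2 × 2 (Domain × SemD) design, a null interaction means the per-subject interaction contrast averages near zero even when the domain main effect does not. The sketch below is purely illustrative; the subject count matches the study (n = 15), but the effect sizes are simulated values, not the study's data.

```python
import numpy as np
from itertools import product

# Simulated per-subject effect sizes (arbitrary units) at a ventral
# occipito-temporal peak for the four conditions: Domain (living/nonliving)
# crossed with semantic distance (low/high SemD). All values hypothetical.
rng = np.random.default_rng(0)
n_subjects = 15
domain_mean = {"living": 0.8, "nonliving": 0.3}   # assumed domain effect (a.u.)
cells = {
    (dom, semd): domain_mean[dom] + rng.normal(0.0, 0.1, n_subjects)
    for dom, semd in product(("living", "nonliving"), ("low", "high"))
}

# Per-subject interaction contrast: the domain effect at low SemD minus
# the domain effect at high SemD. A mean near zero mirrors the reported
# null Domain x SemD interaction.
interaction = (cells[("living", "low")] - cells[("nonliving", "low")]) \
            - (cells[("living", "high")] - cells[("nonliving", "high")])

# Domain main effect, averaged across SemD levels.
domain_main = np.mean(
    cells[("living", "low")] + cells[("living", "high")]
    - cells[("nonliving", "low")] - cells[("nonliving", "high")]
) / 2

print(f"domain effect ~ {domain_main:.2f} a.u., "
      f"interaction ~ {interaction.mean():.2f} a.u.")
```

A full analysis would submit these contrasts to a repeated-measures test; the point here is only that a sizable main effect can coexist with a near-zero interaction, which is the pattern shown in Figure 4B and C.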

Semantic Distance Effect

Items with low SemD between target and foil, as compared to items with high SemD, were associated with several foci of enhanced activity located mainly in the frontal lobes (see Table 3 and Figure 5). As can be seen by comparing Figure 5 with Figure 3, the pattern of activations in prefrontal cortex has many similarities with that associated with living items (living > nonliving). In particular, two foci in the inferior frontal gyrus, anterior to premotor cortex (BA 44/45), were observed bilaterally in both contrasts. The activation patterns also corresponded bilaterally at the level of the insula and, in a more dorsal focus, in the middle frontal gyrus (BA 6). The latter bilateral focus, as well as the more ventral of the two foci in the inferior frontal gyri (BA 44), was also evident in the overall effect of the experimental task relative to the control task (see Figure 2). Outside the frontal lobes, an additional overlap between the activation patterns associated with low SemD and living things was observed at the level of the anterior cingulate (BA 32). Finally, a peak of activation was observed in the left superior parietal lobule (BA 7). Note that no differential activation pattern associated with low SemD was found in either ventral occipito-temporal cortex or the anterolateral aspects of the temporal lobes.

Table 3. 

Differential Activation Sites According to Semantic Distance

Anatomical Regions | Coordinates (x, y, z) | Z | Cluster Size | p
Low SemD > High SemD 
Frontal cortex 
 L Middle frontal gyrus, BA 6 −22, −4, 54 4.03 227 .008 
 L Inferior frontal gyrus, BA 44 −50, 16, 22 4.69 970 <.001 
 L Inferior frontal gyrus, BA 45 −50, 32, 24 3.95 
 L Insula −34, 22, 0 5.17 442 <.001 
 R Middle frontal gyrus, BA 6 30, 4, 58 4.54 365 <.001 
 R Middle frontal gyrus, BA 46 50, 38, 24 4.51 453 <.001 
 R Inferior frontal gyrus, BA 44 46, 10, 28 4.10 
 R Insula 30, 26, −2 5.82 520 <.001 
Other 
 L Superior parietal lobule, BA 7 −20, −64, 48 4.86 800 <.001 
 R Anterior cingulate, BA 32 4, 16, 48 5.68 1260 <.001 
 Anterior cingulate, BA 32 0, 32, 44 4.44 
 
High SemD > Low SemD 
Frontal cortex 
 L Superior frontal gyrus, BA 9 −22, 24, 38 5.15 519 <.001 
 L Middle frontal gyrus, BA 9 −20, 42, 36 3.24 
 R Superior frontal gyrus, BA 10 10, 66, 10 6.03 2722 <.001 
 L Superior frontal gyrus, BA 10 −10, 46, −6 5.01 
Anterior temporal cortex 
 L Middle temporal gyrus, BA 21 −62, −16, −16 5.28 316 .001 
 R Middle temporal gyrus, BA 21 62, −8, −22 4.37 247 .005 
Other 
 L Middle cingulate, BA 23 −2, −18, 40 5.05 2771 <.001 
 L Posterior cingulate, BA 30 −12, −62, 14 4.88 
 L Posterior cingulate, BA 23 −4, −60, 30 4.81 
 L Angular gyrus, BA 39 −48, −72, 36 4.05 191 .018 

Anatomical and Brodmann's areas, MNI coordinates of the maxima within each cluster, Z values, cluster size, and p values corrected at cluster level of the regions that showed a main effect of the considered variables. Coordinates are in millimeters: x = distance to right (+) or left (−) of the mid-sagittal plane; y = distance anterior (+) or posterior (−) to vertical plane through anterior commissure; z = distance above (+) or below (−) intercommissural (AC–PC) line.

Figure 5. 

Cortical areas showing activations for low SemD relative to high SemD. All clusters are rendered on the surface of the MNI brain template. For display purposes, SPM thresholds are set to p-uncorrected = .01.

As can be seen in Figure 6A, relative to items with low SemD, items with high SemD between target and foil were associated with bilateral enhanced activity at the level of the frontal pole (BA 10). Within frontal cortex, an additional peak was located in the left superior frontal gyrus (BA 9). More interestingly, two bilateral symmetrical peaks of activation were found in the anterolateral aspects of the temporal lobes (BA 21). The cluster of enhanced activity, particularly on the left, extended anteriorly to the pole and to the inferior aspects of the temporal lobe. To further confirm the sensitivity of this temporal region to increasing SemD, we performed an additional analysis, weighting the expected fMRI activation for each trial according to the semantic distance between target and foil (parametric analysis). This revealed a significant effect in anterolateral temporal cortex bilaterally (x, y, z = −62, −6, −24, Z-score = 5.05; and x, y, z = 60, −4, −20, Z-score = 3.60), where activity increased linearly with increasing SemD between target and foil. The peaks of this analysis were found in regions very near to those activated by the categorical comparison “high SemD > low SemD” (cf. Table 3).
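In SPM terms, the parametric analysis just described amounts to adding a mean-centered parametric modulator, one weight per trial, to the design matrix. The sketch below illustrates the idea; the TR, trial onsets, SemD values, and single-gamma HRF are placeholder assumptions, not the study's actual parameters.

```python
import numpy as np

TR = 2.0                                  # assumed repetition time (s)
n_scans = 100
onsets = np.array([6.0, 22.0, 38.0, 54.0, 70.0, 86.0])   # hypothetical onsets (s)
semd = np.array([0.20, 0.85, 0.45, 0.90, 0.30, 0.65])    # hypothetical SemD per trial

def hrf(t):
    """Crude single-gamma haemodynamic response (peak ~5 s); sketch only."""
    t = np.clip(t, 0.0, None)             # no response before the event
    return (t ** 5) * np.exp(-t) / (5 ** 5 * np.exp(-5))

# Mean-center SemD so the modulator is decorrelated from the main task
# regressor, as SPM does when building parametric modulators.
modulator = semd - semd.mean()

scan_times = np.arange(n_scans) * TR
regressor = np.zeros(n_scans)
for onset, weight in zip(onsets, modulator):
    regressor += weight * hrf(scan_times - onset)

# Voxels whose time course fits this regressor are those where activity
# increases linearly with SemD, as found in anterolateral temporal cortex.
```

Fitting this column in the general linear model and testing its coefficient against zero is what yields peaks such as the ones reported at (−62, −6, −24) and (60, −4, −20).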

Figure 6. 

(A) Cortical activations for high SemD relative to low SemD. All clusters are rendered on the surface of the MNI brain template. (B) The effect sizes for the four experimental conditions at the peak of activation for high SemD items, within the anterolateral aspects of the temporal lobes. The level of activity is plotted in arbitrary units (a.u., ±SEM) and is mean-adjusted to zero. For display purposes, SPM thresholds are set to p-uncorrected = .01.

Outside these regions, enhanced activity associated with high SemD was found in the middle and posterior cingulate bilaterally and in the left angular gyrus. Note that no differential activation pattern was found in ventral occipito-temporal cortex. The lack of a significant SemD effect in ventral occipito-temporal cortex (i.e., in those regions proven to be sensitive to domain) was consistent with the effect-size plots shown in Figure 4.

As reported above, activity in the anterolateral aspects of the temporal lobes was not significantly driven by domain, and no significant Domain × SemD interaction was observed; the only significant effect in these regions was that of SemD (high SemD > low SemD). As before, we were also interested in ascertaining whether domain exerted an effect in this substrate, even at a subthreshold level. We therefore inspected the effect sizes of the four stimulus conditions (living low SemD, living high SemD, nonliving low SemD, and nonliving high SemD) at the two peaks of the SemD effects in lateral temporal cortex. As can be observed in Figure 6B, the SemD effects were not driven by a subset of items in the semantic domains of living things or artifacts.

DISCUSSION

We will discuss our results with reference to the two main issues we raised in the Introduction: (i) the relationship between vision and semantics at the neuroanatomical level and (ii) the putative existence of separable substrates within the semantic system. When discussing both of these issues, we will also take into account data from the lesion literature on semantic memory and category specificity in an attempt to reconcile currently available functional and lesion data.

Reconciling Lesion and Functional Literature on the Involvement of Anterolateral Temporal Cortices and Left Prefrontal Cortex in Semantic Processing

The prevailing view in the functional neuroimaging literature is that semantic and perceptual processing of objects and words share the same neural substrates (Martin & Chao, 2001). The most thoroughly investigated of these purportedly common substrates of perception and semantics is ventral occipito-temporal cortex, which houses the ventral stream, that is, the functionally specialized processing pathway for visual object identification first recognized in nonhuman primates. At variance with this view, lesions of this substrate in humans do not cause a dramatic loss of semantic knowledge; rather, they cause modality-specific visual recognition disorders, namely, agnosia (Feinberg, Schindler, Ochoa, Kwan, & Farah, 1994). By contrast, according to most of the neuropsychological evidence, a severe cross-modal semantic knowledge impairment is associated with (left-skewed) damage to the anterolateral aspects of the temporal lobes, which is the lesional hallmark of patients suffering from semantic dementia (Garrard & Hodges, 2000).

First, we will discuss why functional neuroimaging may have failed to provide consistent evidence supporting the role of the anterolateral aspects of (left) temporal cortex in semantic processing. When a semantic task is contrasted with nonsemantic baseline tasks (or simple rest conditions), both PET and fMRI studies have only occasionally found a differential activation (semantic > baseline/rest) in anterior temporal cortex (see, e.g., Giraud & Price, 2001; Mummery, Patterson, Hodges, & Price, 1998; Vandenberghe, Price, Wise, Josephs, & Frackowiak, 1996); indeed, in the vast majority of cases, no differential activation could be detected in this substrate (see, e.g., Hauk, Johnsrude, & Pulvermüller, 2004; Whatmough, Chertkow, Murtha, & Hanratty, 2002; Okada et al., 2000; Chao & Martin, 1999; Perani et al., 1999; Ricci et al., 1999; Cappa, Perani, Schnur, Tettamanti, & Fazio, 1998; Martin, Wiggs, Ungerleider, & Haxby, 1996). According to some authors (Rogers et al., 2006; Devlin et al., 2000, 2002), in fMRI studies, an artifact due to the nearby air–tissue interfaces may be responsible for the lack of differential activation in the anterior aspects of temporal cortex. However, this null result has also often been found in PET studies (see, e.g., Whatmough et al., 2002; Chao & Martin, 1999; Perani et al., 1999; Ricci et al., 1999; Cappa et al., 1998; Martin et al., 1996), which do not have this limitation (see Brambati et al., 2006, for a similar argument). We believe that the currently used baseline conditions are ill-suited to inhibit semantic processing. In fact, because conceptual knowledge is likely involved every time conscious mental activity is required, the major problem with purported nonsemantic baseline conditions is that they represent uncontrolled semantic activity.
In other words, we cannot prevent subjects from thinking during baseline, but we are not aware of their thoughts (see Binder et al., 1999, for similar considerations; see also Price, Devlin, Moore, Morton, & Laird, 2005; Giraud & Price, 2001). In keeping with this view, a differential activation in the anterior aspects of the temporal lobes has often been observed when semantic processing was modulated in a more systematic and controlled fashion. Thus, for example, in several imaging studies using the semantic priming paradigm, decreased activity in anterior temporal cortex for related words relative to unrelated words was found (Copland et al., 2003; Rissman, Eliassen, & Blumstein, 2003). Furthermore, the decrease in the amplitude of the N400 potential usually associated with priming effects in electroencephalographic studies has been attributed to a modulation of the neural activity in the same neural substrate (Rossell, Price, & Nobre, 2003; Silva-Pereyra et al., 2003). Recently, in a PET study using a paradigm more similar to the present one, Rogers et al. (2006) did not find any differential activation in the anterior temporal lobes when they contrasted semantic tasks with baseline scans; however, they did find a differential activation in these regions when they contrasted scans referring to different levels of semantic categorization, that is, different word–picture matching tasks that used as category labels either specific names (e.g., Labrador) or intermediate names (e.g., dog) or general names (e.g., animal). Similarly, in a PET study using a naming task, Whatmough et al. (2002) found more activation for pictures of familiar objects than for pictures of unfamiliar objects in the left anterior middle temporal gyrus. However, they found no differential activation in the anterior aspects of the temporal lobes when they contrasted the pictures to be named with the purported nonsemantic baseline. 
In the present fMRI study, the subtraction between word–picture matching and the control task did not reveal any differential activation in anterior temporal cortex; conversely, the subtraction between target–foil pairs with high SemD and low SemD was associated with a bilateral activation of anterolateral temporal cortex. Notably, at variance with the results of most of the abovementioned studies, in our study and in the one by Whatmough et al., the less demanding semantic condition (i.e., stimuli with high SemD between target and foil and more familiar items, respectively) was associated with greater activity in lateral temporal cortex. The finding of increased activation in the high SemD condition suggests that feature-based semantic representations might be stored in this substrate and that the extent of its activation is proportional to the number of recruited features. In fact, in high SemD items, the two depicted concepts overlap less than in low SemD items. Thus, on the assumption that both concepts (i.e., target and foil) are processed when subjects perform the word–picture matching task, the overall number of active features is, on average, higher in the high SemD condition. In the study quoted above, Rogers et al. argued that anterolateral temporal cortex is most strongly taxed when subjects are requested to carry out specific, relative to more general, classification tasks; in fact, this conclusion is in keeping with the one we put forward based on the number of active features because specific (i.e., subordinate) concepts such as “dog” have more features in their semantic representations than more general (i.e., superordinate) concepts such as “animal.” In fact, superordinate representations only contain features shared across their subordinate concepts (e.g., <eat> for animal), and they lack features identifying the members of subordinate categories (e.g., <bark> for dog).

Because it is likely that exemplars of semantically far concepts are visually less similar to each other than exemplars of semantically near concepts, it is critical that the SemD effect discussed above not be considered an epiphenomenon of a visual similarity effect. We believe that there are some good reasons for ruling out this possible confound. First, consider that SemD was computed based on all feature types (encyclopedic, functional, and superordinate as well as visual and nonvisual sensory features), and visual features are only a minority of them. Second (see Rogers et al., 2004), visual features described with the same verbal label in a feature listing task (e.g., <has a head>) might refer to very different visual attributes of the corresponding concept exemplars (e.g., a horse and a cat). Finally, although it seems likely that an increasing verbal feature overlap between target and foil might correlate with an increasing visual feature overlap, this is an undemonstrated assumption. By contrast, Rogers et al. (2004) demonstrated that verbal norms, like those we used to compute SemD, and visual norms may yield nonoverlapping similarity structures, particularly in the fruit category (which represents a quarter of our test items).
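To make the notion of feature-based SemD concrete, one simple way to quantify it is as the proportion of non-shared features between the target and foil feature sets (1 − Jaccard overlap). The feature listings below are invented for illustration; the study's actual norms and distance formula are not reproduced here.

```python
# Hypothetical feature listings; real feature norms contain encyclopedic,
# functional, superordinate, visual, and nonvisual sensory features.
features = {
    "dog": {"has a head", "has four legs", "barks", "is a pet", "eats"},
    "cat": {"has a head", "has four legs", "meows", "is a pet", "eats"},
    "hammer": {"has a handle", "is made of metal", "used for nails"},
}

def semantic_distance(a, b):
    """1 - Jaccard overlap of two concepts' feature sets:
    0 = identical feature sets, 1 = no shared features."""
    fa, fb = features[a], features[b]
    return 1 - len(fa & fb) / len(fa | fb)

# A within-category foil (cat) is semantically nearer to the target (dog)
# than a cross-category one (hammer):
print(round(semantic_distance("dog", "cat"), 2),
      round(semantic_distance("dog", "hammer"), 2))  # → 0.33 1.0
```

On this kind of measure, a low SemD target–foil pair shares most of its features, so discriminating the pair requires distinguishing the few features that differ.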

Another source of disagreement between the neuropsychological and the functional neuroimaging literature regards the role of left VLPFC in semantic processing. Although this region has almost always proved to be active when subjects are involved in semantic tasks, lesions to this site do not produce any clear-cut semantic impairments. Furthermore, several factors that are not directly semantic have been shown to drive the activity of this region. According to Duncan and Owen (2000), response conflict (e.g., the Stroop effect), task novelty (initial learning of an unfamiliar cognitive task vs. well-practiced performance), perceptual difficulty (e.g., object recognition in conventional vs. unconventional viewpoints), and, finally, manipulation of the number of elements and of the delay in working memory tasks have all been shown to modulate activity in left VLPFC. Proponents of the semantic specialization of left VLPFC would argue that all these tasks modulate VLPFC because they all entail some semantic processing; this is certainly true if, as we claimed above, semantic processing is likely involved every time conscious mental activity is required; however, this does not help much in attempting to delineate the functional role of left VLPFC. The point here is whether VLPFC is sensitive to the properties of semantic representations per se or whether this substrate is driven by the nature of the cognitive operation one is required to carry out on these semantic representations. Certainly, the role of executive control traditionally ascribed to the frontal lobes by the neuropsychological literature is more in keeping with the latter hypothesis. Recent work by Thompson-Schill, D'Esposito, and Kan (1999) also supports this point of view. These authors found that activity in left VLPFC was driven by manipulating the demand for selection of semantic knowledge among competing alternatives and that activity in the temporal lobe was driven by factors affecting semantic retrieval.
We think that this distinction substantially parallels the one we made above between operations carried out on semantic representations and semantic representations per se. Although this is not the main focus of the present investigation, we think that our data may provide further support for this view. In fact, we found that two different subtractions, namely, living things > nonliving things and low SemD > high SemD, yielded the same pattern of differential activation in left VLPFC (BA 44/45). In both cases, behavioral data (i.e., reaction times and accuracy) proved that increased activity in this substrate was associated with the more demanding condition. Because the factors SemD and domain were varied orthogonally, it is very unlikely that some properties of the semantic representations of the stimuli were responsible for this effect. In fact, in the more demanding condition in the subtraction involving domain (i.e., living things), half of the items had low SemD and half had high SemD. Conversely, in the more demanding condition in the subtraction involving SemD (i.e., low SemD), half of the items were from the living domain and half from the nonliving domain. Consequently, the more parsimonious account is that activity in left VLPFC was driven by task difficulty rather than by properties pertaining to long-term stored memories (such as semantic representations or structural descriptions).

Visual versus Semantic Processing and the Internal Organization of Semantic Memory

Thus far, we have suggested that when semantic factors are properly controlled, functional neuroimaging data are, in fact, in keeping with neuropsychological data as to the critical role of the anterolateral aspects of temporal cortex in semantic processing; the same conclusion was reached recently by Rogers et al. (2006). We will now attempt to demonstrate that the ventral occipito-temporal regions of the ventral stream are not critically involved in semantic processing. This claim is clearly at variance with the abovementioned view that vision (and more generally perception) and semantics share the same neural substrates; however, it is in keeping with data drawn from the neuropsychological literature suggesting that anterolateral temporal cortex lesions (as they occur in semantic dementia) give rise to an incomparably more severe loss of semantic knowledge than lesions of the ventral stream, which are usually associated with modality-specific visual recognition deficits (Feinberg et al., 1994).

The most clear-cut result of the present study is that SemD and domain modulate the activity of two different networks housed, respectively, in anterolateral temporal cortices and in ventral occipito-temporal cortices. As for the effects observed in ventral occipito-temporal cortex, our main finding was a differential pattern of activation for the two semantic domains considered. Indeed, the word–picture matching for artifacts was associated with increased activity in a more medial region than that active during the word–picture matching for living things. This finding replicates data from Chao, Weisberg, and Martin (2002) and Chao, Haxby, and Martin (1999), but is at variance with most of the functional imaging literature, which found only a living things effect in lateral occipito-temporal cortex (Rogers, Hocking, Mechelli, Patterson, & Price, 2005). More interesting for our purposes, however, is the lack of any SemD effect in this region. This finding is particularly robust because it is supported by the lack of any Domain × SemD interaction, indicating that the size of the domain effect was comparable for items with high and low SemD; furthermore, inspection of the effect sizes at the peaks of the main effect of domain confirmed that these substrates were not modulated by SemD. Based on the lack of any SemD effect, we argue that ventral occipito-temporal cortex is not involved in semantic processing. By contrast, we hold that the finding of a significant category effect in the same cortical region was likely due to systematic differences in the visual characteristics of living things and artifacts. As we used pictures in the semantic task, we were unable to separate the visual effects due to the properties of the pictorial stimuli from the effects due to higher levels of visual processing, such as the retrieval of stored structural representations. 
The only point we wish to make here is that if the ventral stream processes semantics, then it should be sensitive to a clearly semantic factor such as the size of featural overlap between target and foil (i.e., SemD), and that our results are clearly at variance with this prediction.
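For readers unfamiliar with feature-based similarity measures, the notion of SemD as featural overlap can be made concrete with a toy computation in the spirit of Tversky (1977). The feature sets and the overlap function below are illustrative assumptions invented for this sketch; the study itself derived SemD from normative feature-listing data, not from hand-picked features like these.

```python
# Toy illustration of semantic distance as featural overlap (cf. Tversky, 1977).
# The feature sets are invented for illustration only.

def overlap(a: set, b: set) -> float:
    """Proportion of shared features between two feature sets (Jaccard index)."""
    return len(a & b) / len(a | b)

horse = {"has legs", "has tail", "has mane", "is ridden", "eats grass"}
zebra = {"has legs", "has tail", "has mane", "has stripes", "eats grass"}
camel = {"has legs", "has tail", "has hump", "lives in desert"}

# A target-foil pair with high featural overlap is a LOW-SemD pair (the foil is
# hard to reject); low overlap corresponds to HIGH SemD (the foil is easy to
# reject). Here the horse-zebra pair is the more demanding, low-SemD case.
print(overlap(horse, zebra) > overlap(horse, camel))
```

On this toy measure, horse and zebra share 4 of 6 distinct features while horse and camel share only 2 of 7, so a horse target with a zebra foil would count as a low-SemD (demanding) trial.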

We are aware that conclusions based on null results are not very convincing. As argued in the Introduction, however, our hypothesis yielded a more complex pattern of predictions, which was confirmed by our experimental data. Not only did we expect a significant domain effect and a null SemD effect in the ventral stream, but we also expected the reverse pattern of results in anterolateral temporal cortices; that is, SemD was expected to drive activity in the anterolateral aspects of temporal cortices but, as SemD was equated across domains, no significant domain effect was expected in this substrate.

The prediction that the factor domain would not modulate activity in anterolateral temporal cortex (provided that SemD was equated across domains) is also critical to the issue of a possible internal organization of semantic memory according to feature type or semantic domain per se. If we admit that the anterolateral aspects of the temporal lobes house semantic memory, then, according to the DSH, this substrate should be sensitive to the factor domain because of the supposed internal specialization of the semantic network according to this factor. The same prediction is made by the SFT on the assumption that semantic memory is divided according to feature type and that living things' and artifacts' semantic representations differ in a systematic way as to the distribution of sensory and functional features. The only approach to category specificity which is in keeping with our finding of a null result for the domain effect in a substrate devoted to semantic processing is the one based on the IPD. In fact, this view predicts that there will be no domain effect in this cortical area provided that, as in the present study, any imbalance in the processing demands across domains is avoided by controlling for semantic distance.

The simultaneous finding of a significant effect of SemD and a null effect of domain in the anterolateral temporal cortices strongly argues for an amodal and domain-general semantic system housed in this substrate; this conclusion is in keeping with the semantic hub hypothesis proposed by Patterson et al. (2007) and Rogers et al. (2004, 2006). These authors reject the assumption that a distributed network of sensory–motor systems represents the entire neural basis of semantics and propose a “distributed plus hub view,” which holds that the neural substrate underpinning semantics encompasses a distributed modality-specific network (grounded in sensory-, motor-, and language-specific systems) and an amodal “hub” housed in anterolateral temporal cortex. Our view shares many features with the semantic hub hypothesis; we also believe that semantic memory is amodal (i.e., not organized according to feature types) and domain-general (i.e., not organized according to semantic domains). However, we disagree with one important point of the semantic hub hypothesis. Indeed, we believe that language-based similarity structures should not be considered at the same hierarchical level as sensory–motor modality-specific representations and, further, that they directly reflect the structure of the amodal semantic system. If we assume that this system is housed in anterolateral temporal cortex, the finding that a language-based measure of similarity (which SemD clearly is) drives the activity in this substrate argues in favor of our hypothesis.

Contribution of Our Study to the Issues of Functional and Neural Locus of Category Specificity

We have attempted to show how our data speak to the problem of reconciling the lesion and the functional literature regarding the neural substrate of semantics and vision. The final section of the discussion is devoted to situating our results within the framework of current debates on the substrate of category specificity.

Based on a review of over 60 cases of category-specific impairments following brain damage, Gainotti (2000) concluded that, in keeping with the SFT, “the fronto-parietal areas of the left hemisphere are dedicated to ‘functional’ information, crucially contributing to the semantic representation of man-made objects, just as the anteromedial and inferior parts of the temporal lobes are dedicated to the processing of sensory attributes that are essential in the semantic representation of living entities” (p. 553). In our opinion, such a clear-cut picture is not fully supported by the empirical evidence. In fact, as Gainotti acknowledged, five out of the six patients who showed a category-specific semantic deficit for artifacts suffered from a lesion encompassing the left temporal lobe (Gainotti, 2000, p. 550), thus undermining the dichotomy between fronto-parietal damage in nonliving impairments versus temporal damage in living impairments. By contrast, lesions limited to fronto-parietal regions (i.e., sparing the temporal lobes) are usually not associated with impairment of conceptual knowledge.

Recently, some researchers (Noppeney et al., 2007; Lambon Ralph, Patterson, Garrard, & Hodges, 2003; Devlin et al., 2002; Garrard et al., 2002) drew attention to the distinction between anterolateral and anteromedial structures within the temporal lobes. According to these authors, there are some hints in the current literature that medial regions might be critical in deficits for living things. Noppeney et al. (2007) and Devlin et al. (2002) based this conclusion on a comparison between patients suffering from herpes simplex encephalitis (HSE) and patients suffering from semantic dementia. These authors pointed out that whereas HSE causes medial temporal damage and disproportionate deficits for living things, semantic dementia causes more lateral temporal damage usually associated with a comparable impairment across domains. On the other hand, Lambon Ralph et al. (2003) and Garrard et al. (2002) acknowledged that the degree of the disproportionate living impairment in semantic dementia is rather unimpressive compared with the dissociations exhibited by HSE patients. However, they also pointed out that there is a trend toward a living things impairment among patients suffering from semantic dementia. More importantly, these authors suggest that patients showing a more clear-cut living things impairment are characterized by major involvement of the medial temporal structures and atypical right-skewed atrophy.

As we noted in the Introduction, in recent years, IPD-based approaches to category specificity have increasingly gained credit. In keeping with this general view, the role attributed by Devlin et al. (2002) and Noppeney et al. (2007) to the anteromedial temporal lobe structures (particularly to perirhinal cortex) is one of “differentiating between concepts that are ‘tightly packed’ in semantic space such as living things” (Noppeney et al., 2007, p. 1138). The experimental data collected in the present study allow us to propose a further refinement of this view. As discussed above, we maintain that visual and semantic processing represent two different levels of information processing. According to our data, the substrate for semantics is likely in the anterolateral aspects of the temporal lobes. This substrate may have evolved at the expense of the visual ventral stream, which in monkeys extends to dorsolateral temporal cortex. In fact, TE, which is considered the final purely visual stage along the ventral visual pathway, extends into BA 21 in the monkey (Tamura & Tanaka, 2001). This hypothesis was first advanced by Feinberg et al. (1994) to account for the finding that medial temporal lesions, rather than lateral ones, cause visual associative agnosia in humans. As these authors claim, “the occurrence of severe objects recognition disorders in our patients with medial occipito-temporal lesions suggests that in humans the areas critical for object perception and recognition occupy a more ventral medial position when compared with the monkey” (Feinberg et al., 1994, p. 408). To date, there is broad consensus that living things pose higher demands at both of these levels of processing, that is, at the semantic level (as suggested by IPD-based approaches) and at the visual structural representation level, due to the effect of a greater structural similarity among living things relative to artifacts, as proposed by Humphreys et al.
(1988; see also Tranel, Logan, Frank, & Damasio, 1997). The behavioral results of the present study are in keeping with this assumption; in fact, longer reaction times and lower accuracy were associated with living items even though SemD was equated across domains (note, however, that living items were also less familiar relative to nonliving items). Moreover, as Coltheart et al. (1998) pointed out, category specificity may sometimes arise as a summation effect of a semantic and a visual category effect. We argue that precisely this summation effect may be responsible for the straightforward dissociations observed in HSE patients. Indeed, in these patients, a category-specific impairment penalizing living things might be simultaneously present at two different processing levels: at the semantic level (due to involvement of more lateral aspects of temporal cortex), and at the visual level (due to involvement of the more medial aspects of the temporal lobes). Lambon Ralph, Lowe, and Rogers (2007) recently put forward an alternative explanation to account for the different degree of category specificity exhibited by patients suffering from HSE and semantic dementia. These authors argue that, although semantic dementia causes a somewhat more lateral involvement of the temporal lobes relative to HSE, regions of damage are highly overlapping in the two diseases. Thus, according to their view, different profiles of category specificity across diseases are better accounted for by assuming different forms of damage to the same neuroanatomical substrate rather than the involvement of different neuroanatomical loci.

Reprint requests should be sent to Gian Daniele Zannino, Laboratorio di Neurologia Clinica e Comportamentale, I.R.C.C.S. S. Lucia, Via Ardeatina, 306, 00179 Rome, Italy, or via e-mail: g.zannino@hsantalucia.it.

Notes

1. Another network (encompassing the left posterior temporal gyrus, the inferior anterior parietal lobe, and a ventral premotor area in the frontal lobe), assumed to represent visuomotor features of manipulable objects (Devlin et al., 2002), was found to be activated specifically by tools, probably because of the salience of their visuomotor features. As the present study did not include items referring to this semantic category, we will not discuss the literature devoted to this substrate.

2. Note that both low SemD items and high SemD items were half living and half nonliving.

REFERENCES

Badre, D., & Wagner, A. D. (2007). Left ventrolateral prefrontal cortex and the cognitive control of memory. Neuropsychologia, 45, 2883–2901.
Barsalou, L. W., Simmons, W. K., Barbey, A. K., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7, 84–91.
Binder, J. R., Frost, J. A., Hammeke, T. A., Bellgowan, P. S. F., Rao, S. M., & Cox, R. W. (1999). Conceptual processing during the conscious resting state: A functional MRI study. Journal of Cognitive Neuroscience, 11, 80–93.
Brambati, S. M., Myers, D., Wilson, A., Rankin, K. P., Allison, S. C., Rosen, H. J., et al. (2006). The anatomy of category-specific object naming in neurodegenerative diseases. Journal of Cognitive Neuroscience, 18, 1644–1653.
Capitani, E., Laiacona, M., Mahon, B., & Caramazza, A. (2003). What are the facts of semantic category-specific deficits? A critical review of the clinical evidence. Cognitive Neuropsychology, 20, 213–261.
Cappa, S. F., Perani, D., Schnur, T., Tettamanti, M., & Fazio, F. (1998). The effects of semantic category and knowledge type on lexical–semantic access: A PET study. Neuroimage, 8, 350–359.
Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the brain: The animate–inanimate distinction. Journal of Cognitive Neuroscience, 10, 1–34.
Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.
Chao, L. L., & Martin, A. (1999). Cortical regions associated with perceiving, naming, and knowing about colors. Journal of Cognitive Neuroscience, 11, 25–35.
Chao, L. L., Weisberg, J., & Martin, A. (2002). Experience-dependent modulation of category-related cortical activity. Cerebral Cortex, 12, 545–551.
Chertkow, H., Bub, D., Deaudon, C., & Whitehead, V. (1997). On the status of object concepts in aphasia. Brain and Language, 58, 203–232.
Collins, D. L., Neelin, P., Peters, T. M., & Evans, A. C. (1994). Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography, 18, 192–205.
Coltheart, M., Inglis, L., Cupples, L., Michie, P., Bates, A., & Budd, B. (1998). A semantic subsystem of visual attributes. Neurocase, 4, 353–370.
Copland, D. A., de Zubicaray, G. I., McMahon, K., Wilson, S. J., Eastburn, M., & Chenery, H. J. (2003). Brain activity during automatic semantic priming revealed by event-related functional magnetic resonance imaging. Neuroimage, 20, 302–310.
Cree, G. S., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132, 163–201.
Devlin, J. T., Moore, C. J., Mummery, C. J., Gorno-Tempini, M. L., Phillips, J. A., Noppeney, U., et al. (2002). Anatomic constraints on cognitive theories of category specificity. Neuroimage, 15, 675–685.
Devlin, J. T., Russell, R. P., Davis, M. H., Price, C. J., Wilson, J., Moss, H. E., et al. (2000). Susceptibility-induced loss of signal: Comparing PET and fMRI on a semantic task. Neuroimage, 11, 589–600.
Duncan, J., & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23, 475–483.
Feinberg, T. E., Schindler, R. J., Ochoa, E., Kwan, P. C., & Farah, M. J. (1994). Associative visual agnosia and alexia without prosopagnosia. Cortex, 30, 395–412.
Friston, K. J., Glaser, D. E., Henson, R. N., Kiebel, S., Phillips, C., & Ashburner, J. (2002). Classical and Bayesian inference in neuroimaging: Applications. Neuroimage, 16, 484–512.
Funnell, E., & Sheridan, J. (1992). Categories of knowledge: Unfamiliar aspects of living and nonliving things. Cognitive Neuropsychology, 9, 135–153.
Gainotti, G. (2000). What the locus of brain lesion tells us about the nature of the cognitive defect underlying category-specific disorders: A review. Cortex, 36, 539–559.
Garrard, P., & Hodges, J. R. (2000). Semantic dementia: Clinical, radiological and pathological perspectives. Journal of Neurology, 247, 409–422.
Garrard, P., Lambon Ralph, M. A., & Hodges, J. R. (2002). Semantic dementia: A category-specific paradox. In E. M. E. Forde & G. W. Humphreys (Eds.), Category specificity in brain and mind (pp. 149–180). New York: Psychology Press.
Giraud, A. L., & Price, C. J. (2001). The constraints functional neuroimaging places on models of auditory word processing. Journal of Cognitive Neuroscience, 13, 754–765.
Grill-Spector, K., Kourtzi, Z., & Kanwisher, N. (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41, 1409–1422.
Hart, J., & Gordon, B. (1990). Delineation of single-word semantic comprehension deficits in aphasia, with anatomical correlation. Annals of Neurology, 27, 226–231.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301–307.
Hodges, J. R., Patterson, K., Oxbury, S., & Funnell, E. (1992). Semantic dementia: Progressive fluent aphasia with temporal lobe atrophy. Brain, 115, 1783–1806.
Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5, 67–103.
Lambon Ralph, M. A., Lowe, C., & Rogers, T. T. (2007). Neural basis of category-specific semantic deficits for living things: Evidence from semantic dementia, HSVE and a neural network model. Brain, 130, 1127–1137.
Lambon Ralph, M. A., Patterson, K., Garrard, P., & Hodges, J. R. (2003). Semantic dementia with category specificity: A comparative case-series study. Cognitive Neuropsychology, 20, 307–326.
Martin, A., & Chao, L. L. (2001). Semantic memory and the brain: Structure and processes. Current Opinion in Neurobiology, 11, 194–201.
Martin, A., Wiggs, C. L., Ungerleider, L. G., & Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
Mummery, C. J., Patterson, K., Hodges, J. R., & Price, C. J. (1998). Functional neuroanatomy of the semantic system: Divisible by what? Journal of Cognitive Neuroscience, 10, 766–777.
Noppeney, U., Patterson, K., Tyler, L. K., Moss, H., Stamatakis, E. A., Bright, P., et al. (2007). Temporal lobe lesions and semantic impairment: A comparison of herpes simplex virus encephalitis and semantic dementia. Brain, 130, 1138–1147.
Okada, T., Tanaka, S., Nakai, T., Nishizawa, S., Inui, T., Sadato, N., et al. (2000). Naming of animals and tools: A functional magnetic resonance imaging study of categorical differences in the human brain areas commonly used for naming visually presented objects. Neuroscience Letters, 296, 33–36.
Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.
Penny, W., & Holmes, A. (2004). Random effects analysis. In R. S. J. Frackowiak, K. J. Friston, C. D. Frith, R. J. Dolan, C. J. Price, S. Zeki, et al. (Eds.), Human brain function (pp. 843–851). San Diego: Elsevier.
Perani, D., Schnur, T., Tettamanti, M., Gorno-Tempini, M., Cappa, S. F., & Fazio, F. (1999). Word and picture matching: A PET study of semantic category effects. Neuropsychologia, 37, 293–306.
Price, C. J., Devlin, J. T., Moore, C. J., Morton, C., & Laird, A. R. (2005). Meta-analyses of object naming: Effect of baseline. Human Brain Mapping, 25, 70–82.
Ricci, P. T., Zelkowicz, B. J., Nebes, R. D., Meltzer, C. C., Mintun, M. A., & Becker, J. T. (1999). Functional neuroanatomy of semantic memory: Recognition of semantic associations. Neuroimage, 9, 88–96.
Riddoch, M. J., & Humphreys, G. W. (2003). Visual agnosia. Neurologic Clinics of North America, 21, 501–520.
Rissman, J., Eliassen, J. C., & Blumstein, S. E. (2003). An event-related fMRI investigation of implicit semantic priming. Journal of Cognitive Neuroscience, 15, 1160–1175.
Rogers, T. T., Hocking, J., Mechelli, A., Patterson, K., & Price, C. J. (2005). Fusiform activation to animals is driven by the process, not the stimulus. Journal of Cognitive Neuroscience, 17, 434–445.
Rogers, T. T., Hocking, J., Noppeney, U., Mechelli, A., Gorno-Tempini, M. L., Patterson, K., et al. (2006). Anterior temporal cortex and semantic memory: Reconciling findings from neuropsychology and functional imaging. Cognitive, Affective, & Behavioral Neuroscience, 6, 201–213.
Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L., Hodges, J. R., et al. (2004). Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review, 111, 205–235.
Rossell, S. L., Price, C. J., & Nobre, A. C. (2003). The anatomy and time course of semantic priming investigated by fMRI and ERPs. Neuropsychologia, 41, 550–564.
Salmaso, D., & Longoni, A. M. (1985). Problems in the assessment of hand preference. Cortex, 21, 533–549.
Silva-Pereyra, J., Rivera-Gaxiola, M., Aubert, E., Bosch, J., Galán, L., & Salazar, A. (2003). N400 during lexical decision tasks: A current source localization study. Clinical Neurophysiology, 114, 2469–2486.
Tamura, H., & Tanaka, K. (2001). Visual response properties of cells in the ventral and dorsal parts of the macaque inferotemporal cortex. Cerebral Cortex, 11, 384–399.
Thompson-Schill, S. L. (2003). Neuroimaging studies of semantic memory: Inferring “how” from “where”. Neuropsychologia, 41, 280–292.
Thompson-Schill, S. L., D'Esposito, M., Aguirre, G. K., & Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences, U.S.A., 94, 14792–14797.
Thompson-Schill, S. L., D'Esposito, M., & Kan, I. P. (1999). Effects of repetition and competition on activity in left prefrontal cortex during word generation. Neuron, 23, 513–522.
Tranel, D., Logan, C. G., Frank, R. J., & Damasio, A. R. (1997). Explaining category-related effects in the retrieval of conceptual and lexical knowledge for concrete entities: Operationalization and analysis of factors. Neuropsychologia, 35, 1329–1339.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–352.
Ungerleider, L. G., & Haxby, J. V. (1994). “What” and “where” in the human brain. Current Opinion in Neurobiology, 4, 157–165.
Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R. S. J. (1996). Functional anatomy of a common semantic system for words and pictures. Nature, 383, 254–256.
Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27, 635–657.
Warrington, E. K., & Shallice, T. (1984). Category-specific semantic impairments. Brain, 107, 829–859.
Whatmough, C., Chertkow, H., Murtha, S., & Hanratty, K. (2002). Dissociable brain regions process object meaning and object structure during picture naming. Neuropsychologia, 40, 174–186.
Zannino, G. D., Perri, R., Pasqualetti, P., Caltagirone, C., & Carlesimo, G. A. (2006a). Analysis of the semantic representations of living and nonliving concepts: A normative study. Cognitive Neuropsychology, 23, 515–540.
Zannino, G. D., Perri, R., Pasqualetti, P., Caltagirone, C., & Carlesimo, G. A. (2006b). (Category-specific) semantic deficit in Alzheimer's patients: The role of semantic distance. Neuropsychologia, 44, 52–61.