Abstract

Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

INTRODUCTION

Humans can easily recognize common objects by touch (Klatzky, Lederman, & Metzger, 1985). Despite this proficiency, little is known about the cortical mechanisms that underlie haptic object recognition. Recent neuroimaging studies indicate that, like vision, haptics recruits the ventral object recognition pathway (Zhang, Weisser, Stilla, Prather, & Sathian, 2004; James et al., 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). Moreover, lesions involving the occipital and temporal lobes can cause haptic and visual agnosia, a deficit in object recognition that cannot be explained by elementary sensory or general cognitive disorders (Ohtake et al., 2001; Feinberg, Rothi, & Heilman, 1986). These results indicate that the occipito-temporal area plays an important role in haptic object representation.

However, the functional organization that underlies the relationship between haptics and vision within this region remains unclear. Pattern classification methods applied to fMRI data (e.g., multivoxel pattern analysis [MVPA]) reveal that visual object categories can be represented by distinct distributed response patterns in the ventral pathway (Spiridon & Kanwisher, 2002; Haxby et al., 2001). In a subsequent study, MVPA was used to compare activation patterns between visual and haptic recognition of common inanimate objects and human faces (Pietrini et al., 2004). Patterns characterizing visual and haptic recognition were similar for nonbiological common objects, but not for human faces. This result suggests that the representation of biological objects may not be cross-modally shared by a widely distributed network. However, MVPA does not examine the functional specialization of individual brain regions; thus, whether any such area is functionally specialized for both haptic and visual recognition of biological stimuli (e.g., faces and other body parts) remains an open question.

In this article, we investigate whether haptic and visual recognition of biological objects depends on common brain structures by examining blood oxygen level dependent (BOLD) responses in regions that are preferentially activated by specific object categories. We describe a response within a region as being sensitive to a biological object category if activation is higher to objects in this category than to nonbiological control objects, regardless of activation in the other biological categories. The ventral pathway is known to contain several category-sensitive regions for visually presented biological objects: The fusiform face area (FFA) most strongly responds to human faces (Kanwisher, McDermott, & Chun, 1997), whereas the extrastriate body area (EBA) in the lateral occipito-temporal region most strongly responds to all body parts except the whole face (Downing, Jiang, Shuman, & Kanwisher, 2001). There has been considerable interest in these visual category-sensitive regions, as they may form unique functional modules for biological objects (e.g., Kanwisher, 2000). Whether these same areas are also involved in haptic recognition of biological objects is currently unknown.

Our previous work suggests that the occipito-temporal region is also involved in haptic face recognition. fMRI revealed that haptic face recognition tasks performed by highly trained subjects elicited activation in the fusiform gyrus (Kilgour, Kitada, Servos, James, & Lederman, 2005). Moreover, a prosopagnosic individual with damage to the ventral occipito-temporal region (including the fusiform gyrus) could not differentiate rigid 3-D molds of upright faces by haptics or vision, but was successful at differentiating upright teapots (Kilgour, de Gelder, & Lederman, 2004). Although these results provide initial evidence that the ventral occipito-temporal region is necessary for haptic face recognition, they did not consider whether the same brain regions within this large area mediate both haptic and visual face recognition. Moreover, to our knowledge, no study has more generally determined the neural substrates for haptic recognition of body parts.

We hypothesized that the fusiform gyrus and the lateral occipito-temporal cortex would demonstrate their characteristic category sensitivity regardless of sensory modality, but that corresponding response patterns for haptics and vision might differ within these structures anatomically and functionally. We initially conducted a group analysis to determine whether haptic and visual identification of faces and other body parts (hands and feet) activate the same network of cortical regions within the whole brain. We then conducted a functional region-of-interest (fROI) analysis, which is better suited for determining category-sensitive responses in the occipito-temporal cortex (Saxe, Brett, & Kanwisher, 2006). We defined four fROIs within each participant by identifying the peak voxel in the fusiform gyrus when contrasting either haptic or visual identification of faces with that of bottles, and the peak voxel in the lateral occipito-temporal cortex when contrasting either haptic or visual identification of other body parts with that of bottles. We call these fROIs the “haptic face region” (HFR), the visual face area (“fusiform face area,” FFA), the “haptic body region” (HBR), and the visual body area (“extrastriate body area,” EBA), respectively.

These category-sensitive regions were examined in terms of anatomical location and activity patterns across object categories (faces, other body parts) and sensory modalities. We predicted that haptics and vision would demonstrate the same regional sensitivity to a given object category. Any qualitative difference in the activity patterns between regions sensitive to the same object category (i.e., a statistically significant region-by-condition interaction) would reveal these regions to be differentially involved in haptic and visual object recognition, or functionally distinct (Henson, 2006). Because visual imagery can activate the occipital cortex during haptic object perception (e.g., Zhang et al., 2004), three supplemental measures were used to determine whether visual imagery could account for category-sensitive activation during the current haptic task.

METHODS

Subjects

Sixteen healthy right-handed (Oldfield, 1971) volunteers (12 men and 4 women; 18–35 years) participated after giving written informed consent. The study was cleared by the Health Sciences Research Ethics Board of Queen's University, Canada. None of the volunteers had a history of symptoms requiring neurological, psychological, or other medical care.

Stimuli

We used four object categories: clay casts of faces, hands, feet, and bottles (Figures 1A and 2A). Bottles were chosen as inanimate control objects because they were similar to objects in the other categories in terms of familiarity and size. Each object category contained three different exemplars. (In pilot testing, we used seven exemplars per category, but more intensive training would have been required for subjects to identify more than three exemplars successfully.) The biological objects were cast from human models; the original bottles were approximately 700 ml in volume. All objects were similar in size and spatial proportion to the real objects, with most features preserved. The three exemplars of each object category were assigned numeric labels (i.e., 1, 2, and 3) to equate exemplar names among object categories.

Figure 1. 

Haptic object identification task. (A) Four different categories of object were used for the experiment: a clay face mask, a hand cast, a foot cast, and a bottle. Each object category contained three different exemplars (see also Figure 2A). The three exemplars of each object category were given a numeric code (1, 2, and 3). (B) Each exemplar was mounted on a Plexiglas sheet that moved on a Plexiglas slider. (C) Task schedule of a single run. (D) In each task block, the subject explored the presented object with one hand. The subject was asked to start exploring the object as soon as a white box appeared on the screen. After the 10-sec exploration period, the subject was asked to stop exploring when the white cross reappeared on the screen and to respond by pressing, with the other hand, the button corresponding to the numeric code of the object. The neural activity during the task block was modeled with a boxcar function for each object category. The regressor shown in the figure was convolved with a canonical hemodynamic response function.

Figure 2. 

Visual object identification task. (A) Visual stimuli. Two black-and-white photographic images of each exemplar were used for the task. (B) Task schedule of a single run. (C) The subjects were asked to identify the three exemplars of each object category by pressing one of the three buttons of the response pad. In the baseline condition, the subjects were asked to fixate a white cross. The neural activity during the task block was modeled with a boxcar function for each object category.

fMRI Data Acquisition

Functional magnetic resonance images were acquired on a 3-Tesla whole-body scanner (Trio; Siemens, Erlangen, Germany). Standard sequence parameters were used to obtain the functional images as follows: gradient-echo EPI; repetition time (TR) = 2000 msec; echo time (TE) = 30 msec; flip angle = 78°; 32 axial slices of 3-mm thickness with 25% slice gap; field of view = 192 × 192 mm; and in-plane resolution = 3.0 × 3.0 mm. A single volume covered approximately the whole brain, except for the bottom of the orbito-frontal cortex and the cerebellum in larger subjects. Before the acquisition of functional images, T1-weighted high-resolution anatomical images were obtained (voxel size = 0.9 × 0.9 × 1 mm).

The entire experiment was conducted over three separate days: training for the haptic object identification task on the first day, the haptic experiment itself on the second day, and the visual object identification task and supplementary visual imagery task on the third day. The total time was 4 to 5 hr per subject.

Haptic Object Identification Task

This task was designed to examine haptic sensitivity to human faces and other body parts. In order to discourage subjects from imagining the objects visually during the task, the haptic identification task was performed before the visual object identification task. The subjects were not allowed to see the object until the final run of the haptic object identification task was over.

Haptic Stimulus Presentation

The subjects lay supine on a bed with their eyes open and their ears plugged, and were instructed to relax. The subjects were asked to fixate a white cross on the screen, which they viewed through a mirror over the head coil. The mirror was oriented toward the half-transparent screen placed at the back of the scanner bore. The subjects could not see their hands or the presented objects. A Plexiglas table was placed over the lower half of the body with the front edge at about the level of the abdomen. The stimuli were presented to the subject on the Plexiglas table with a sliding platform (Figure 1B). The orientation of the presented objects was constrained by the physical limitations of scanning; the subjects' hands were somewhat restricted and the scanner bore had limited space (a maximum height of 19 cm from the surface of the Plexiglas table to the shell of the bore); thus, the orientation of each object category was adjusted so that the subject could comfortably explore the object using one hand (Figure 1A). One hand at a time was used in order to minimize movement artifacts, and the hands were tested separately to examine whether activation was affected by the hand used to explore objects. While one hand explored an object, the other hand was used to identify each exemplar with a response pad.

Each subject completed four runs (336 sec per run) of the haptic object identification task with each hand (TR = 2 sec, 168 volumes were collected for each run; Figure 1C). A single run consisted of four repetitions of the 80-sec task period. A 10-sec baseline period preceded the first task period and a 6-sec baseline followed the last task period. A single task period consisted of four 10-sec task blocks, each alternated with a 10-sec baseline condition. In each task block of a single task period, one exemplar from the four object categories was presented (Figure 1D). The order of the presentation of the object categories was counterbalanced across task periods. Hand order was counterbalanced across subjects and the runs for one hand were implemented after all runs for the other hand were completed. A software package (Presentation; Neurobehavioral Systems, Albany, CA) was used to present visual stimuli to the subject and to present auditory cues to the experimenter through headphones. The auditory cues were necessary for presentation of objects with precise timing during the haptic task.

Task

Before the fMRI experiment, blindfolded subjects were trained to use each hand to identify the exemplars within 10 sec and with >85% accuracy. During this training, the subjects were asked to give both the basic-level category name and the assigned number (e.g., Face 1) of the object to ensure that subjects were identifying each object at the subordinate level. Participants identified objects with ∼93% accuracy in about 7.8 sec (on average) during the last training test. The hand movements used to explore objects were comparable across object categories, consisting mainly of enclosure (i.e., grasp) and contour following (i.e., edge following) exploratory procedures. The training took less than 1 hr.

A single fMRI run contained four task blocks for each object category, so four exemplars from each category were presented per run: the three exemplars plus a repetition of one of them. The repeated exemplar was pseudorandomly selected for each run, but within a run the repeated exemplars had the same numeric code across object categories. The order of exemplar presentation within each object category was pseudorandomized such that the same exemplar was not presented twice in succession.

In each task block, the subject explored the presented object with one hand (Figure 1D). Subjects were asked to start exploring the object as soon as a white box (viewing angle of 0.9° × 0.9°) appeared on the screen. Subjects were asked to cease exploration when the white cross reappeared on the screen (i.e., after 10 sec), and then to respond as soon as possible by pressing a button with the other hand. To match the sensorimotor components between object categories, subjects who had identified the object before the 10 sec had elapsed were instructed to continue exploring it in order to confirm their answer. The subjects were also instructed to keep their speed of exploration constant across object categories. Subjects rested the exploring hand on their chest during the baseline period.

Visual Object Identification Task

We endeavored to make this task as similar as possible to the haptic task, except for the modality of presentation. Changes to the design were only introduced when we deemed it necessary, and we employed similar task schedules as those used in previous studies of visual object recognition (e.g., Peelen & Downing, 2005a).

Visual Stimulus Presentation

Two monochromatic photographic images of each exemplar were used for the task (2 images × 3 exemplars = 6 images per object category; Figure 2A). We used multiple photos of each exemplar to make task difficulty more comparable to that of the haptic object identification task. The two photos were taken from different angles. Differences in size and perceived brightness among the photographs were minimized using photo-editing software (Photoshop; Adobe Systems, San Jose, CA). The borders of the hand and foot casts were not shown in the photographs. Visual stimuli were back-projected via an LCD projector (LT 265; NEC Viewtechnology, Tokyo, Japan) onto a translucent screen located at the rear of the scanner. Visual stimuli were generated on a Windows laptop using the Presentation software. The stimuli and the white fixation cross subtended visual angles of approximately 8.0° and 0.9°, respectively. Subjects did not touch any object during this task.

Task

The visual identification task consisted of four runs, each lasting 288 sec (TR = 2 sec, 144 volumes/run; Figure 2B). Each run consisted of four 14-sec baseline periods, four 54-sec task periods, and one 16-sec baseline period. Each task period consisted of four 12-sec task blocks, each alternated with a 2-sec baseline. Within each 12-sec task block, six different images appeared for 0.75 sec each, with an interstimulus interval of 1.25 sec (Figure 2C). Subjects were asked to identify the three exemplars of each object category by pressing one of the three buttons on the response pad. Only one hand was used to respond; the responding hand alternated across subjects. In the baseline condition, subjects were asked to fixate a white cross. The fMRI experiment was conducted after ∼30 min of training, which lasted until subjects reached an accuracy of ≥85% outside the scanner.

Data Processing

Image processing and statistical analyses were performed using the Statistical Parametric Mapping package (SPM99; Wellcome Department of Cognitive Neurology, London, UK) implemented in MATLAB (MathWorks, Sherborn, MA, USA; Friston, Ashburner, Frith, Heather, & Frackowiak, 1995; Friston, Holmes, et al., 1995). Five additional volumes were acquired at the onset of each run to allow the MR signal to reach an equilibrium state; these volumes were discarded and not used for further analysis. Functional images from each run were realigned to the first scan. All functional images and the T1-weighted anatomical images were then coregistered to the first scan of the haptic identification task. Each coregistered T1-weighted anatomical image was normalized to a standard T1 template image (ICBM 152) defining Montreal Neurological Institute (MNI) space. The parameters from this normalization process were then applied to the functional images, which were resampled to a final resolution of 2 × 2 × 2 mm3.

Statistical Analysis

First, individual contrasts for the haptic and visual object identification tasks were incorporated into a random effects model (Friston, Holmes, & Worsley, 1999). This analysis was used to evaluate the network of cortical areas recruited during haptic and visual object identification in the whole brain. The normalized functional images were filtered using a Gaussian kernel of 8 mm full width at half maximum (FWHM) in the x-, y-, and z-axes for the random effects group analysis.

We then conducted an fROI analysis (Saxe et al., 2006) to test the hypothesis that areas that are sensitive to perception of objects in one sensory domain (visual or haptic) will be sensitive to the same category of objects in the other sensory domain in the occipito-temporal cortex. The fROI analysis was conducted with functional images filtered using a smaller Gaussian kernel of 4 mm FWHM.

Random Effects Group Analysis

Initial individual analysis

We fitted a general linear model to the fMRI data from each subject (Worsley & Friston, 1995; Friston, Jezzard, & Turner, 1994). The time series for each voxel was high-pass filtered at 1/200 Hz and low-pass filtered with a canonical hemodynamic response function. Regardless of the task, the BOLD response during the task blocks was modeled with a boxcar function convolved with a canonical hemodynamic response function. Each run included four task-related boxcar regressors, one for each object category. Two design matrices were prepared for each subject: one comprising the four runs of the visual identification task and one comprising the eight runs (four right-hand and four left-hand runs) of the haptic identification task. Motion-related artifacts were minimized by incorporating six parameters (three displacements and three rotations) from the rigid-body realignment stage into each model. In the first-level individual analysis, the parameter estimates for each condition in each individual were compared using linear contrasts. Because recognition of faces and of other body parts appears to activate different occipito-temporal regions (Downing et al., 2001; Kanwisher et al., 1997), the contrast between face identification and bottle identification and the contrast between identification of hands/feet and bottle identification were evaluated separately. This procedure was repeated for the visual identification conditions.
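As an illustration of this modeling step, the sketch below (in Python; the actual analysis was carried out in SPM99 under MATLAB, so this is not the authors' code) builds a single boxcar regressor for one object category and convolves it with a double-gamma approximation of the canonical hemodynamic response function. The block onsets and HRF parameters are illustrative only.

import numpy as np
from scipy.stats import gamma

TR = 2.0                               # repetition time (sec), as in the study
n_scans = 168                          # volumes per haptic run, as in the study
frame_times = np.arange(n_scans) * TR

def canonical_hrf(t):
    # Double-gamma approximation: positive peak minus a small undershoot
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Boxcar for one object category: 10-sec blocks at illustrative onsets (sec)
onsets = [10.0, 90.0, 170.0, 250.0]
boxcar = np.zeros(n_scans)
for onset in onsets:
    boxcar[(frame_times >= onset) & (frame_times < onset + 10.0)] = 1.0

hrf = canonical_hrf(np.arange(0.0, 32.0, TR))
regressor = np.convolve(boxcar, hrf)[:n_scans]   # predicted BOLD time course

One such regressor per object category, together with the six realignment parameters, would enter the design matrix for a run.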

Subsequent group analysis

The weighted sum of the parameter estimates from the individual analysis constituted a contrast image, and these images were then used for the group analysis. The contrast images obtained from the individual analyses represent the normalized task-related increment of the MR signal for each subject. For each contrast, a one-sample t test was performed for every voxel in the brain to obtain population inferences. The resulting set of voxel values for each contrast constituted the SPM{t}, which was transformed to normal distribution units [SPM{Z}]. The height threshold for the SPM{Z} was set at Z > 3.09. The statistical threshold for the spatial extent test on the clusters was set at p < .05, corrected for multiple comparisons over the search volume (Friston, Holmes, Poline, Price, & Frith, 1996). Because we had an a priori hypothesis that faces would activate the middle fusiform gyrus more strongly than control objects, we limited our search volume to the fusiform gyrus. The anterior and posterior borders were set at y = −30 and y = −70 in MNI space (search volume: 3405 voxels, 27,240 mm3) according to previous studies (Kilgour et al., 2005; Kanwisher et al., 1997). Similarly, for the contrast of nonface body parts versus bottles, the search volume was limited to the bilateral middle and inferior temporal gyri and the middle and inferior occipital gyri. The anterior and posterior borders of this search volume were set at y = −50 and y = −80 (search volume: 12,321 voxels, 98,568 mm3) based on previous studies (Spiridon, Fischl, & Kanwisher, 2006; Astafiev, Stanley, Shulman, & Corbetta, 2004; Downing et al., 2001). The search volumes and the locations of activation foci were determined using the probabilistic atlas of Shattuck et al. (2008). The search volume for activation in other brain regions was the whole brain.
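The transformation of SPM{t} values into normal distribution units can be illustrated with the following minimal sketch (Python; a generic probability-matching transform with toy t values, not the authors' code). Each t statistic is mapped to the Z score with the same one-tailed tail probability, after which the height threshold of Z > 3.09 is applied.

import numpy as np
from scipy import stats

def t_to_z(t_values, dof):
    # Map each t statistic to the Z score with the same one-tailed p value
    p = stats.t.sf(t_values, dof)
    return stats.norm.isf(p)

t_map = np.array([2.5, 3.4, 5.1])       # toy voxel t values
z_map = t_to_z(t_map, dof=15)           # 16 subjects -> 15 degrees of freedom
suprathreshold = z_map > 3.09           # height threshold used in the group analysis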

Functional Region of Interest Analysis

After examining activation across the whole brain in response to haptic and visual object identification, we conducted an fROI analysis, a procedure that localizes category-sensitive responses in each individual. The group-average analysis does not necessarily reflect the category-sensitive activation of individuals, because the locations of such activations can vary across subjects even in a standard stereotaxic space (Saxe et al., 2006). Thus, this individual analysis is a more accurate measure than the group analysis for testing our hypothesis. The analysis was conducted with functional images filtered using a smaller Gaussian kernel of 4 mm FWHM. Because we had a priori hypotheses about the locations of activation, we used a liberal threshold of Z > 2.33 (equivalent to p < .01, uncorrected for multiple comparisons).

We localized face-sensitive regions in the middle fusiform gyrus and its adjacent sulci because this region can contain face-related regions for haptics (Kilgour et al., 2005) as well as vision (Kanwisher et al., 1997). The anterior and posterior borders of the middle fusiform gyrus were the same as those used for the group analysis. Face-sensitive activation was individually defined by the contrast between faces and control objects (bottles) (e.g., Peelen & Downing, 2005a) in this region. We took the most significantly activated voxel found in the fusiform gyrus and adjacent sulci for haptic, and for visual identification of faces versus bottles as the center of the HFR and the FFA, respectively.

To localize activity in response to nonface body parts, we searched within the middle and inferior temporal gyrus, and the middle and inferior occipital gyrus. The anterior and posterior borders of the search region were the same as in the group analysis. We took the most significantly activated voxel found in this region for haptic, and for visual identification of hands/feet versus bottles as the center of the HBR and the EBA, respectively.

After we identified the four peak voxels characterizing the centers of the four functional regions in each individual, raw data were extracted from 4-mm-radius spheres centered on these peaks (4 mm being the size of the spatial smoothing kernel applied to these data). Rather than fitting a canonical hemodynamic response, we calculated the percent signal change for each object category as 100 × [(signal value during identification of that object category − signal value for the baseline condition)/signal value for the baseline condition]. The signal value for the baseline was calculated by averaging the signal over the initial and final 6-sec baseline periods in each run; the haptic and visual experiments were conducted with different timings, and these periods were the only baseline periods common to the two tasks.1 Hemodynamic lag and mixing of effects within the task block were accounted for by excluding the first two scans of each task block from the calculations. Thus, the final estimate of the percent signal change is based on a steady-state response between 4 and 10 sec into each haptic task block, and between 4 and 12 sec into each visual task block. These values were used to calculate the signal change between a biological object and the control object (relative percent signal change).
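A minimal sketch of this computation is given below (Python; the function and variable names are hypothetical, not the authors' code). It averages the sphere's signal over the common baseline scans, discards the first two scans of each task block, and returns the percent signal change for one object category.

import numpy as np

def percent_signal_change(roi_signal, block_onsets, block_len, baseline_scans, skip=2):
    """roi_signal: mean signal over the 4-mm sphere at each scan (1-D array).
    block_onsets: scan indices where blocks of one object category begin.
    block_len: block length in scans (5 for haptic blocks, 6 for visual blocks).
    baseline_scans: scan indices of the baseline periods common to both tasks."""
    baseline = roi_signal[baseline_scans].mean()
    block_means = [roi_signal[onset + skip:onset + block_len].mean()
                   for onset in block_onsets]
    return 100.0 * (np.mean(block_means) - baseline) / baseline

With a 2-sec TR, skipping two scans restricts the estimate to 4 to 10 sec of each 10-sec haptic block and 4 to 12 sec of each 12-sec visual block, as described above.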

RESULTS

Task Performance

Haptic Object Identification Task

Accuracy and response times were similar regardless of object category or hand used (Figure 3A). Two-way analyses of variance (ANOVA) (2 hands × 4 object categories) showed no significant effect for either measure (ps > .7 for accuracy; ps > .1 for response time).

Figure 3. 

Behavioral results. These data are presented as the mean ± SEM. n = number of subjects.

Visual Object Identification Task

Accuracy was similar across object categories (Figure 3B); a one-way ANOVA showed no significant difference (p > .1) among the four categories. A similar ANOVA on response times showed a significant effect of object category [F(3, 45) = 8.04, p < .001]. Pairwise comparisons with Sidak–Bonferroni corrections showed that the hand condition produced significantly longer response times (980 ± 29 msec) than either the face (907 ± 23 msec, p < .05) or the bottle (904 ± 25 msec, p < .01) condition. However, this difference in response time is unlikely to have affected the results: Response times for the foot condition were not significantly different from those for the bottle and face conditions, and the patterns of activity elicited by hands and feet were highly similar in both sensory modalities. Data from the two nonface body-part conditions (hand and foot) were therefore combined in all fMRI analyses (see below).

fMRI Results

A group-average analysis was initially conducted to identify regions involved in haptic and visual identification of faces and other body parts (hand and foot) in the whole brain.

Collapsing over Hand Used in Haptic Conditions

Each hand was tested separately to determine whether activation sensitive to faces and other body parts in the occipito-temporal cortex was affected by the hand used for exploration. We examined the effect of hand on activity for each body part by comparing the contrast of each biological object versus the control objects between the two hands. As we found no significant hand-specific activation in the occipito-temporal cortex, the means of the right- and left-hand conditions were used in subsequent analyses.

Whole-brain Group-average Analysis

We conducted random effects group analyses to examine whether haptic and visual identification of faces and other body parts activate the same network of cortical areas in the whole brain. We mapped the activation elicited by haptic and visual identification of (i) faces and (ii) hands and feet, each relative to identification of the nonbiological control objects (bottles). Table 1 shows the coordinates of the foci observed for the four contrasts (2 modalities × 2 biological object types). The contrast between haptic identification of faces and of the control object revealed three significant clusters of activation: in the right inferior temporal gyrus, in the left superior parietal lobe, and in the right inferolateral frontal cortex (Figure 4). The same contrast for the visual task revealed four significant clusters of activation. The largest cluster extended over the right fusiform gyrus and the middle and inferior occipital gyri. The other three clusters were located in the left middle and inferior occipital gyri, in the right lateral prefrontal cortex, and in the angular gyrus. The haptic and visual activations revealed by the contrast of faces versus the control object (bottles) overlapped in the right prefrontal cortex (center of mass of overlap: x = 52.1, y = 26.4, z = 20.5; volume of overlap: 528 mm3). Although the haptic and visual activations for this contrast were adjacent in the occipito-temporal region, no overlap was observed (at the significance threshold employed here).

Table 1. 

Random Effect Group-average Analyses


Volume (mm3)   Anatomical Region   Hem   x   y   z   Z Value
Face vs. Control Object 
Haptics 2304 Inferior temporal gyrus 50 −62 −8 4.38 
1880 Superior parietal lobe −36 −44 56 3.93 
7352 Precentral gyrus 42 −2 34 4.61 
Inferior frontal gyrus 46 28 14 4.07 
Middle frontal gyrus 36 28 22 3.29 
Vision 5160 Fusiform gyrus 46 −58 −18 5.78 
Inferior occipital gyrus 32 −88 −10 5.36 
Middle occipital gyrus 22 −96 −2 5.28 
2408 Middle occipital gyrus −40 −86 −8 4.66 
Inferior occipital gyrus −30 −94 −10 4.45 
1520 Middle frontal gyrus 48 26 26 4.20 
Inferior frontal gyrus 56 26 26 3.88 
2552 Angular gyrus 56 −48 16 4.16 
 
Hand and Foot vs. Control Object 
Haptics 880 Inferior temporal gyrus 56 −64 −8 3.96 
1448 Superior parietal lobe 28 −54 68 4.37 
3624 Precentral gyrus 34 −8 60 3.84 
Supramarginal gyrus 44 −30 46 3.56 
2640 Precentral gyrus 48 28 3.55 
Inferior frontal gyrus 56 28 24 3.47 
Vision 18704 Middle occipital gyrus 54 −70 5.49 
Inferior occipital gyrus 16 −92 −8 5.14 
Inferior temporal gyrus 50 −64 −10 5.19 
Lingual gyrus 14 −94 −4 5.09 
Middle temporal gyrus 56 −62 5.04 
Angular gyrus 56 −62 22 5.14 
1808 Lingual gyrus −12 −58 3.70 
Superior parietal lobe −12 −68 42 3.63 
7472 Middle occipital gyrus −48 −78 5.49 
Inferior occipital gyrus −42 −82 −10 4.76 
Middle temporal gyrus −54 −62 3.50 
1808 Middle occipital gyrus −34 −78 32 3.91 
8712 Superior occipital gyrus 30 −80 28 4.77 
Cingulate gyrus −40 22 4.01 
1568 Superior parietal lobe 22 −52 58 4.06 
Supramarginal gyrus 46 −28 42 3.38 

Cluster size was thresholded at p < .05, corrected for multiple comparisons, with the height threshold set at Z > 3.09.

Hem = hemisphere; R = right; L = left.

Figure 4. 

Brain regions activated by identification of human body parts in the random effect group-average analysis. (A, C) Statistical parametric map of the average neural activity within the group during the face identification compared with that of the control object and during identification of nonface body parts compared with that of the control object. The comparison was made in each sensory modality. The 3-D information was collapsed into two-dimensional sagittal, coronal, and transverse images (i.e., maximum-intensity projections viewed from the right, back, and top of the brain). (B, D) The activation patterns during identification of faces and other body parts compared to the control object were superimposed on the coronal, sagittal, and transverse planes of T1-weighted, high-resolution MRI averaged across the subjects.

Haptic identification of nonface body parts (hand and foot) relative to nonbiological control objects activated the right inferior temporal gyrus, the superior parietal lobe, the precentral gyrus, the supramarginal gyrus, and the inferior frontal gyrus. The cluster of activation in the precentral gyrus covered the right central sulcus. The same contrast in the visual domain activated the middle and inferior occipital gyri, the lingual gyrus, the middle temporal gyrus and the superior parietal lobe bilaterally, the right inferior temporal gyrus, the right inferior parietal lobe, the right superior occipital gyrus, the right cingulate gyrus, and the left middle occipital gyrus.2 The haptic and visual networks overlapped in the right posterior inferior temporal gyrus (x = 53.7, y = −60.9, z = −10.0; 528 mm3) and in the right supramarginal gyrus (x = 44.7, y = −29.0, z = 42.3; 96 mm3).

Collectively, the group analyses demonstrate that haptic and visual object identification activate largely disjoint networks. However, the group-average analysis does not necessarily reflect activation of individuals in the occipito-temporal cortex. Category-sensitive responses in this region can be spatially limited (Spiridon et al., 2006) and their locations can vary across subjects, even in a standard stereotaxic space (Saxe et al., 2006). The next section presents fROI analyses where we investigated in individual subjects whether regions demonstrated to be sensitive to a particular category of biological objects in one sensory modality (vision or haptics) are sensitive to the same category of objects in the other modality.

Functional Region-of-interest Analysis

We began by identifying regions that were most sensitive to faces and body parts (compared to control objects) in both the visual and haptic domains. We focused our analysis on the right hemisphere because the group-average analysis showed significant activation during haptic and visual identification of faces and other body parts in the right occipito-temporal cortex. Table 2 gives the coordinates of the four functionally defined regions in each participant. In general, activity in the haptic conditions (Z values > 2.4) was weaker than in the visual conditions (Z values > 5.6, equivalent to p < .01, corrected for multiple comparisons), although the haptic face-sensitive region (HFR) and the haptic body-part-sensitive region (HBR) could still be identified in 15 of the 16 participants. To confirm the reliability of the HFR and the HBR, we split the data into halves (odd and even scans) and analyzed each half separately. Local peaks of activation were located, on average, within less than 2 mm of the coordinates shown in Table 2, reassuring us that we were not merely observing false positives.

Table 2. 

MNI Coordinates of Category-sensitive Areas in fROI Analysis

Subject   Haptically Defined Regions: x, y, z (MNI), Z Value   Visually Defined Regions: x, y, z (MNI), Z Value   Distance (mm)
Face-sensitive Area 
 HFR FFA 
s01 42 −50 −26 2.68 42 −50 −18 Inf 8.0 
s02 52 −44 −14 2.49 48 −52 −12 Inf 9.2 
s03 44 −56 −12 2.59 36 −52 −16 Inf 9.8 
s04 48 −52 −18 4.62 40 −66 −12 Inf 17.2 
s05 44 −58 −12 3.31 44 −58 −12 Inf 
s06 40 −66 −14 3.97 44 −48 −24 Inf 21.0 
s07 48 −48 −18 4.19 48 −54 −24 5.66 8.5 
s08 30 −68 −8 2.44 40 −62 −12 Inf 12.3 
s09 46 −68 −12 3.90 48 −66 −16 6.59 4.9 
s10 52 −46 −14 4.16 46 −62 −18 7.68 17.5 
s11 48 −58 −16 6.71 42 −54 −14 Inf 7.5 
s12 42 −58 −22 4.21 42 −54 −20 Inf 4.5 
s13 40 −30 −20 3.34 46 −64 −18 7.75 34.6 
s14 46 −54 −8 2.42 46 −62 −14 6.81 10.0 
s15 ns 46 −66 −20 7.33 
s16 40 −54 −8 4.30 42 −30 −16 6.81 25.4 
Mean 44.1 −54 −14.8  43.8 −56.3 −16.6  12.7 
SEM 1.4 2.6 1.4  0.9 2.3 1.0 
Maximum 52 −30 −8  48 −30 −12 
Minimum 30 −68 −26  36 −66 −24 
 
Nonface Body-part-sensitive Area 
 HBR EBA 
s01 54 −64 −4 4.5 50 −68 −2 Inf 6.0 
s02 56 −58 4.87 56 −64 12 Inf 13.4 
s03 52 −64 −8 2.82 48 −66 −2 6.15 7.5 
s04 66 −56 4.77 46 −72 −2 Inf 26.8 
s05 66 −52 −2 4.46 58 −60 −4 Inf 11.5 
s06 62 −50 3.43 54 −68 Inf 19.8 
s07 50 −72 6.37 54 −66 −2 Inf 8.2 
s08 54 −64 −6 4.88 58 −66 7.34 7.5 
s09 54 −70 −12 2.43 54 −64 −10 7.74 6.3 
s10 48 −62 12 4.83 58 −68 −2 7.3 18.2 
s11 40 −68 12 7.08 52 −56 −2 Inf 22.0 
s12 48 −56 −6 5.95 50 −68 10 7.55 20.1 
s13 58 −58 5.18 56 −66 Inf 10.2 
s14 46 −62 −2 7.42 46 −54 −8 6.07 10.0 
s15 ns 52 −72 −6 7.8 
s16 48 −70 −4 5.39 50 −72 6.52 4.9 
Mean 53.5 −61.7 −0.5  52.6 −65.6 −0.6  12.8 
SEM 1.9 1.7 1.8  1.0 1.3 1.5 
Maximum 66 −50 12  58 −54 12 
Minimum 40 −72 −12  46 −72 −10 

Face-sensitive and body-part-sensitive regions were searched for only in the fusiform region and the posterior lateral occipito-temporal region, respectively. Inf = Z > 8.0; x, y, and z are stereotaxic coordinates (mm); Distance = Euclidean distance between the peak activation coordinates of the HFR and the FFA, and of the HBR and the EBA. The statistical threshold for the SPM{t} was set at Z > 2.33, equivalent to p < .01, uncorrected for multiple comparisons. ns = no significant activation.

The locations of the foci of the category-sensitive regions differed substantially among subjects within the search region (maximum intersubject Euclidean distance of 41 mm). This is consistent with previous studies, which revealed substantial individual differences in the location of category-sensitive regions; for example, the maximum intersubject Euclidean distance was 31 mm in the study by Kanwisher et al. (1997). Despite this variability, active regions could still overlap among subjects, depending on their spatial extent.

Is There Colocalization between the FFA and HFR, and between the EBA and HBR?

If the peak coordinates for the visual face-sensitive region (FFA) and the HFR were similar within subjects, this would constitute evidence that haptically and visually perceived faces activated the same region; the same logic applies to the visual body-part-sensitive area (EBA) and the HBR. The Euclidean distances between the FFA and HFR peaks, and between the EBA and HBR peaks, within participants, are shown in Table 2. These distances vary widely among participants but average approximately 13 mm for both the face-sensitive and the body-part-sensitive regions. Given the spatial smoothing applied to the individual data (i.e., 4-mm FWHM, 7-mm effective spatial resolution), these distances are large enough for the centers of mass of the functionally defined regions to fall in distinguishable voxels.
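The within-subject distances in Table 2 are simply Euclidean distances between peak MNI coordinates, as in the following minimal sketch (Python, using the values reported for subject s01):

import numpy as np

hfr_peak = np.array([42, -50, -26])              # HFR peak for s01 (mm, MNI)
ffa_peak = np.array([42, -50, -18])              # FFA peak for s01 (mm, MNI)
distance = np.linalg.norm(hfr_peak - ffa_peak)   # 8.0 mm, as listed in Table 2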

We then looked for any consistency in the spatial relationship between the FFA and the HFR and between the EBA and the HBR by conducting t tests on the differences in location (in x, y, and z); this would reveal whether one area was consistently medial, anterior, or inferior to the other area. Paired t tests for three dimensions were not significant for the face-sensitive areas (HFR vs. FFA) (ps > .2) or for the body-part-sensitive areas (HBR vs. EBA) (ps > .1). This also suggests that, although the peak activity for haptically and visually identified faces (or body parts) may be in somewhat different locations, there is substantial variability across subjects and no apparent consistency in the spatial relationship between the two face (or body-part) regions.

Activation Profiles in Each of the Four Category-sensitive Regions

Another way to determine whether the regions most activated by haptic and visual identification of faces (or body parts) are really different is to examine their functional profiles. If the two “face regions” respond in different ways (i.e., demonstrate a Region × Condition interaction), this would provide strong evidence that they are functionally different (Henson, 2006). In order to evaluate the response pattern across stimuli and sensory modalities in each of the four identified regions, the relative percent signal change was calculated. “Face sensitivity” (FS) was calculated as the difference between the percent signal change for (haptic or visual) perception of faces and that for (haptic or visual) perception of the control stimuli (bottles). “Body-part sensitivity” (BS) was similarly calculated as the difference between the mean percent signal change for the two nonface body parts (in either the haptic or the visual domain) and the percent signal change for the control object.
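The sensitivity scores and the one-tailed, one-sample t tests reported below can be sketched as follows (Python; the per-subject arrays are hypothetical placeholders, not the study's data):

import numpy as np
from scipy import stats

# Relative percent signal change per subject for each condition (toy values)
rng = np.random.default_rng(0)
psc_face, psc_hand, psc_foot, psc_bottle = (rng.normal(0.3, 0.2, 15) for _ in range(4))

fs = psc_face - psc_bottle                        # face sensitivity (FS)
bs = (psc_hand + psc_foot) / 2.0 - psc_bottle     # body-part sensitivity (BS)

t_fs, p_two_tailed = stats.ttest_1samp(fs, 0.0)   # test FS against zero
p_one_tailed = p_two_tailed / 2.0 if t_fs > 0 else 1.0 - p_two_tailed / 2.0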

The Functional Profiles of the Fusiform Regions Localized by Haptic and Visual Face Identification

Figures 5A and 6 show signal change evoked by faces and other body parts relative to the control object (FS and BS) in the HFR and FFA. We began by asking whether these two regions are sensitive to faces presented in the other (nondefining) modality, and other body parts presented in either modality. Next we compared patterns of sensitivity to haptically and visually identified faces and other body parts (FS and BS) between the two regions in order to evaluate whether they are functionally different.

Figure 5. 

fROI analysis. The bar graphs indicate the signal change of face and the other body parts relative to the control object (FS and BS). The gray bar indicates the condition used to define the region. (A) Face-sensitive regions (HFR and FFA); (B) Nonface body parts regions (HBR and EBA). Data are presented as the mean ± SEM. n = the number of subjects. Asterisks and ns above each bar indicate the results of one-sample t tests on the sensitivity score (FS and BS). Asterisks above a pair of bars show the result of post hoc pairwise comparison.

Figure 6. 

Time series of relative % signal change. Each data point was calculated as 100 × [(signal value of a biological object category − signal value of control objects)/baseline]. Gray area indicates the task block. The signal value at each time point was extracted from the functional image scanned for the next 2 sec (e.g., the signal collected between 0 and 2 sec is shown at time zero). Data are presented as the mean ± SEM.

Sensitivity to faces and other body parts in the HFR and the FFA

The HFR was significantly more active for visually presented faces than for visually presented control stimuli (one-tailed, one-sample t tests on FS score; p < .05), and for hands and feet more than for control stimuli when these were presented either haptically or visually (one-tailed one-sample t tests on BS scores; p = .05 for haptics and p < .01 for vision). These results confirm that the HFR is sensitive to visually presented faces, and to both haptically and visually presented nonface body parts.

Similarly, the FFA was significantly more activated by haptically identified faces than control objects (one-tailed, one-sample t tests on FS score; p < .001). The FFA was also significantly activated by visually identified hands and feet relative to visually identified control objects (one-tailed, one-sample t test on BS score; p < .001 for vision), but not by haptically identified hands and feet (p > .4). These results demonstrate that the FFA is sensitive to haptically identified faces, and to visually, but not haptically, perceived nonface body parts.

Activation patterns of faces and other body parts in the HFR and the FFA

In order to compare response patterns across these two regions, we conducted a three-way ANOVA (2 regions × 2 sensory modalities × 2 object categories: face and nonface body parts) on the relative percent signal change values. This produced significant main effects of region [F(1, 14) = 7.2, p < .05], with the FFA yielding higher signal change than the HFR; of sensory modality [F(1, 14) = 9.1, p < .01], with vision producing higher signal change than haptics; and of object category [F(1, 14) = 22.4, p < .01], with faces producing higher signal change than other body parts. In addition, we observed significant interactions among all three factors [F(1, 14) = 28.3, p < .001] and between region and object category [F(1, 14) = 40.7, p < .001]. These significant interactions involving region and object category are important because they demonstrate that the two regions, which were localized by different sensory modalities, are indeed functionally different (Henson, 2006).
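A sketch of this kind of repeated-measures analysis in Python is shown below, using statsmodels; the long-format data file and the column names are hypothetical, and the original analysis was not necessarily performed with this tool.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per subject x region x modality x category cell,
# with the relative percent signal change in 'signal_change' (assumed file name).
df = pd.read_csv("roi_signal_change.csv")

model = AnovaRM(df, depvar="signal_change", subject="subject",
                within=["region", "modality", "category"])
results = model.fit()
print(results.anova_table)   # F and p values for main effects and interactions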

Pairwise comparisons with Sidak–Bonferroni correction revealed no significant differences between activation to faces and to other body parts in the HFR, regardless of sensory modality (ps > .05). Thus, the HFR appears equally sensitive to faces and other body parts in either modality. The FFA, on the other hand, was more strongly activated by identification of faces than of other body parts in both sensory modalities (ps < .001). Thus, although the FFA appears somewhat sensitive to visually presented body parts (see previous section), it is more sensitive to faces than to nonface body parts presented in either modality.

The Functional Profiles in Lateral Occipito-temporal Regions Localized by Haptic and Visual Identification of Nonface Body Parts

Figures 5B and 6 show the signal change evoked by faces and other body parts relative to control objects (FS and BS, respectively) in the HBR and EBA. The aforementioned analyses performed for the HFR and the FFA were also conducted for these regions. More specifically, we began by asking whether these two regions are sensitive to nonface body parts presented in the other (nondefining) modality, and faces presented in either modality. Next, we compared patterns of sensitivity to haptically and visually identified faces and other body parts (FS and BS) between the two regions in order to evaluate whether they are functionally different.

Sensitivity to faces and other body parts in the HBR and the EBA

The HBR was significantly activated by visual identification of nonface body parts versus control objects (one-tailed, one-sample t tests on BS score; p < .001). Whereas haptically identified faces produced more signal than haptically identified control objects (one-tailed, one-sample t test on FS score; p < .001), visually identified faces did not (p > .1). In other words, the HBR was sensitive to visually presented nonface body parts; it was also sensitive to haptically, but not visually, identified faces.

Similarly, the EBA was more strongly activated by haptic identification of nonface body parts than control objects (one-tailed, one-sample t test on BS score; p < .01). However, unlike the HBR, the EBA was sensitive to both haptically and visually identified faces (one-tailed, one-sample t tests on FS score; ps < .05).

Activation patterns of faces and other body parts in the HBR and the EBA

A three-way ANOVA [(2 category-sensitive regions: HBR and EBA) × (2 sensory modalities) × (2 object categories: face and nonface body parts)] on the relative percent signal change was conducted. This analysis revealed significant main effects of region [F(1, 14) = 10.0, p < .01], with the EBA yielding higher signal change than the HBR, and of object category [F(1, 14) = 107.8, p < .001], with nonface body parts producing higher signal change than faces. Critically, the analysis also showed a significant interaction among all three factors [F(1, 14) = 7.1, p < .05]. This result confirms that the EBA and the HBR are functionally different.

Post hoc pairwise comparisons (with Sidak–Bonferroni correction) showed that the HBR was more strongly activated by the identification of nonface body parts as opposed to faces, regardless of sensory modality (ps < .01). Thus, the HBR was more sensitive to nonface body parts than faces. The EBA was also more strongly activated by visual identification of nonface body parts than faces (p < .001), whereas activation by haptic identification of faces and other body parts in the EBA was not significantly different (p > .3). This result shows that the EBA is equally sensitive to haptically identified faces and nonface body parts.

Finally, we confirmed that patterns of object sensitivity were different between the HFR and the HBR and between the FFA and the EBA. A three-way ANOVA [(2 regions: HFR and HBR) × (2 sensory modalities) × (2 object categories)] of the relative percent signal change showed a significant interaction between region and object category [F(1, 14) = 26.4, p < .001]. The same ANOVA for the FFA and the EBA also showed significant three-way [F(1, 15) = 73.6, p < .001] and two-way [Region × Object category, F(1, 15) = 111.6, p < .001] interactions. These results confirm that the two regions localized by each sensory modality (i.e., HFR and HBR, FFA and EBA) were functionally different.

Sensorimotor Activation of the Central Sulcus during the Haptic Task

A recent study showed that the EBA can be activated not only by recognition of body parts but also by pointing with the hand to the location of a visual target (Astafiev et al., 2004). It is therefore possible that the HBR and the EBA were activated during haptic identification of nonface body parts because subjects may have explored nonface body parts more intensively than the control objects. The group-average analysis showed that the anterior part of the right central sulcus was activated by haptic identification of hands and feet relative to the control objects, whereas no significant activation was observed for the other haptic contrast (Figure 4). Hence, we conducted an additional fROI analysis to examine the functional relationship between the body-part-sensitive areas and the right central sulcus.

Nine individuals exhibited significant activity in the hand area of the right central sulcus for the contrast of hands and feet versus control objects. The mean coordinates of peak activation in the right central sulcus were x = 38.7 ± 1.6, y = −17.8 ± 1.2, and z = 56.9 ± 2.0. However, there was no clear positive correlation across subjects between signal change in this central sulcus region and the EBA, or between this region and the HBR (r values < .3). This result suggests that the body-part-sensitive activation in the right central sulcus was not related to the body-part-sensitive activation in the EBA and the HBR.
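As a minimal sketch of such an across-subject correlation (assuming per-subject BS scores for the central sulcus region and the EBA; the arrays below are simulated stand-ins, not the study's data):

```python
# Sketch of the across-subject Pearson correlation between body-part-sensitive
# signal change in the right central sulcus fROI and in the EBA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
central_sulcus_bs = rng.normal(0.5, 0.2, 9)  # placeholder BS scores, central sulcus (9 subjects)
eba_bs = rng.normal(0.6, 0.2, 9)             # placeholder BS scores, EBA (same subjects)

r, p = stats.pearsonr(central_sulcus_bs, eba_bs)
# A weak correlation (r < .3) would argue against a shared sensorimotor origin
# of the body-part-sensitive responses in the two regions.
print(f"r = {r:.2f}, p = {p:.3f}")
```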

Response Time and Activation of the Category-sensitive Areas

Visual identification of hands yielded significantly longer response times than visual identification of the control objects. Because such a difference could itself produce body-part-sensitive activation, we examined whether the relative percent signal change for visual identification of nonface body parts (BS) was influenced by the response time for nonface body parts relative to the control objects. However, there was no significant correlation between the relative response time and BS scores in any category-sensitive area (rs < .5, ps > .07).3

Supplemental Investigation into the Use of a Visual-mediation Heuristic

We showed that regions in the occipito-temporal area respond to faces and other body parts, regardless of the sensory modality. It is unlikely that haptic activation in the occipito-temporal area can be explained by visual imagery, because the response patterns differed for haptics and vision. Nevertheless, as a subsidiary investigation, we explored the extent to which visual imagery might account for category-sensitive activation during the haptic task in three complementary ways.

Correlation between VVIQ and Activation during Haptic Object Identification

Based on a previous study (Zhang et al., 2004), we first examined the extent to which activation was explained by scores on the Vividness of Visual Imagery Questionnaire (VVIQ; Marks, 1973). A linear regression analysis showed that the VVIQ score accounted for only a limited portion of the variance in the signal change between faces and the control objects in the HFR and the FFA, and between nonface body parts and the control objects in the HBR and the EBA (absolute r values < .4). This result suggests that the VVIQ does not strongly predict the category-sensitive signal increase in these category-sensitive areas.
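For illustration, a simple regression of this kind (signal change regressed on VVIQ score across subjects) could be sketched as follows; the scores and variable names are simulated assumptions, not the study's data.

```python
# Sketch of regressing a category-sensitive signal change (e.g., FS in the FFA
# during the haptic task) on per-subject VVIQ scores. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
vviq = rng.uniform(1, 5, 16)               # hypothetical per-subject VVIQ scores
fs_ffa_haptic = rng.normal(0.3, 0.2, 16)   # hypothetical FS scores in the FFA

fit = stats.linregress(vviq, fs_ffa_haptic)
# rvalue**2 is the proportion of variance in signal change explained by VVIQ;
# a small absolute r (< .4) would indicate limited explanatory power.
print(f"r = {fit.rvalue:.2f}, r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")
```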

Comparison of Haptic Activation in Subjects Who Reported Using vs. Not Using Visual Imagery

The BOLD signal during haptic perception was compared between subjects who reported using a visual-mediation heuristic (8 of 16 subjects) and those who denied using one (8 subjects) (Zangaladze, Epstein, Grafton, & Sathian, 1999). There was no clear evidence that the group reporting visual mediation showed stronger category sensitivity than the other group. Two-way ANOVAs [(2 groups) × (2 object categories: faces and nonface body parts)] on the relative percent signal change, conducted separately in each of the four functionally defined regions, showed no significant effect of group in any region (ps > .3). Moreover, there was no significant interaction between group and object category in any region (ps > .3). This result suggests that the subjective report of a visual-mediation heuristic was not associated with signal levels in the visually or haptically localized category-sensitive areas.
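Because group is a between-subjects factor and object category a within-subject factor, this is a mixed-design ANOVA. As an illustrative sketch only (the study does not specify its statistical software), such an analysis could be run with the third-party pingouin package on simulated placeholder data:

```python
# Sketch of a mixed ANOVA: between-subjects group (imagery report) x
# within-subject object category, on relative percent signal change in one ROI.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(7)
rows = []
for s in range(16):
    group = "visual-mediation" if s < 8 else "no-mediation"   # 8 subjects per group
    for category in ("face", "body"):
        rows.append({"subject": s, "group": group, "category": category,
                     "signal": rng.normal(0.5, 0.2)})          # placeholder values
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="signal", within="category",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])   # group effect, category effect, interaction
```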

Parallel Visual Imagery Task

Finally, we measured activation in the category-sensitive regions while subjects visually imagined exemplars of the object categories. We reasoned that if the category-sensitive areas were activated by visual imagery during the haptic task, then activation during visual imagery should be as strong as activation during haptic object identification. A visual imagery task (Amedi et al., 2001) was performed by 15 of the 16 subjects immediately after the visual identification task was completed. We adopted the same task schedule as in the haptic identification task except for the task instruction (Figure 1C and D); instead of exploring an object during each task block, subjects were asked to visually imagine exemplars of an object category. No visual stimulus was presented during the task blocks, and subjects did not touch any exemplar during this experiment.

The auditory cue was presented binaurally through headphones. Subjects' eyes were kept open to match the conditions of the haptic and visual imagery tasks. The name of the object category was given 2 sec before the task block, and one of the three numerical codes was presented every 3.3 sec within the 10-sec task block. Subjects were asked to press the button corresponding to the numerical code once they had formed a visual image as vividly as they could. Subjects completed four runs.

Subjects reported that they successfully imagined most of the objects as vividly as possible (95.5 ± 1.6%). A one-way repeated-measures ANOVA (four object categories) showed no significant effect of object category on either the percentage of reports of successful visual imagery (p > .7) or response time (p > .06). We did not observe clear object-sensitive activation (Figure 7). One-tailed, one-sample t tests showed that neither faces nor other body parts elicited significantly higher activity than the control objects in any of the four functionally defined regions (ps > .05). Collectively, the three supplementary measures suggest that it is unlikely that visual imagery accounts for haptic object-sensitive activation.

Figure 7. 

Supplementary results of the visual imagery task. The bar graphs indicate the signal change for faces and the other body parts relative to the control objects (FS and BS). Unlike the haptic and visual object identification tasks, visual imagery of faces and other body parts did not produce significantly higher signal than imagery of the control objects. ns indicates nonsignificant results of one-sample t tests on the relative percent signal change (FS and BS). The number of subjects was 14 for the HFR and HBR, and 15 for the FFA and EBA.


DISCUSSION

The present study examined the relationship between the functional architectures supporting haptic and visual object identification. The group-average analysis revealed that haptic and visual object identification appear to activate largely separate neural networks. Subsequent individual analyses (fROI) further showed that whether haptics or vision was used, subareas within the fusiform region were sensitive to faces, whereas subareas within the lateral occipito-temporal region were sensitive to body parts. This convergence was incomplete, however, as regions most sensitive to the functional localizer contrasts (faces vs. control objects in the fusiform gyrus; nonface body parts vs. control objects in the lateral occipito-temporal cortex) differed for haptics and vision both in terms of precise anatomical location and functional specialization (i.e., functional profiles across conditions). It is unlikely that visual imagery can explain haptic activation in the occipito-temporal region because the response patterns were anatomically and functionally different for haptics and vision. Hence, we conclude that within the ventral visual pathway, the functional architectures for haptic and visual identification of human body parts are different; nevertheless, it is important to note that at a more coarse anatomical level, category sensitivity is shared by haptics and vision within both the fusiform gyrus and the lateral occipito-temporal cortex.

Task Design

To minimize visualization during the haptic object identification task, participants performed the visual task in a separate scanning session a day later. The mean distance between haptically and visually defined category-sensitive areas exceeded 10 mm for both faces and other body parts. We note that the locations of visually defined category-sensitive areas (including the FFA and EBA) change very little over time (mean 2.9 mm), even when sessions are separated by 3 weeks (Peelen & Downing, 2005b). Hence, it is unlikely that the difference in location of the haptically and visually defined category-sensitive areas arose because the corresponding sessions were one day apart. Likewise, it is unlikely that the reliable differences observed across subjects between haptically and visually evoked activation were caused by intersession differences, which ought to be random from subject to subject.

The major difference in task design between the two sensory modalities was the task schedule (Figures 1 and 2). It is possible that one sensory modality showed more activation than the other because the degrees of freedom in each individual analysis differed between modalities. However, it is unlikely that the difference in task schedules for haptics and vision produced different response patterns in the category-sensitive areas (a Region × Condition interaction), because such a difference would be expected to manifest as a main effect of modality rather than as an interaction.

Face-sensitive Regions in the Fusiform Gyrus

We documented that the FFA was most strongly activated by haptically identified faces among all object categories tested. Although Pietrini et al. (2004) showed that activation patterns in the ventral visual pathway for haptic and visual face recognition were uncorrelated, whether any brain region is functionally specialized for both haptic and visual face recognition was unknown. Our earlier lesion study showed that the intact occipito-temporal region is necessary for haptic face perception (Kilgour et al., 2004). Our fMRI study further revealed that haptic face recognition tasks performed by highly trained neurologically intact subjects elicit activation in the fusiform gyrus (Kilgour et al., 2005). However, that study did not directly ask whether the FFA, a small subregion within the fusiform gyrus defined on the basis of activation to visually presented faces, is also involved in haptic face recognition. Thus, the current article extends our previous findings by demonstrating that the FFA is also involved in haptic face identification. This result is consistent with the view that the FFA represents a cortical functional module for face processing (Grill-Spector, Knouf, & Kanwisher, 2004; Hasson, Hendler, Ben Bashat, & Malach, 2001; Tong, Nakayama, Vaughan, & Kanwisher, 1998; Kanwisher et al., 1997).

To localize the HFR, we used methods analogous to those used to localize the visual FFA. The distance between the HFR and the FFA within subjects was highly variable but averaged greater than 10 mm. Because the effective spatial resolution of these data was approximately 7 mm, we concluded that these peaks occurred in somewhat different locations, despite an inconsistent spatial relationship between the regions across subjects. Furthermore, the activation profiles across object categories within these two regions were significantly different. The significantly different functional profiles of the FFA and the HFR suggest that these are indeed discrete, distinguishable subareas (Henson, 2006). We conclude, therefore, that although subareas within the fusiform gyrus are sensitive to faces regardless of presentation modality, the subarea that is most sensitive to haptically presented faces is functionally distinct from the subarea that is most sensitive to visually presented faces. It is reasonable to speculate that both visual and haptic face recognition recruit distributed regions within the fusiform gyrus (including the HFR and FFA), but that the degree to which these areas are recruited depends on the modality.

It has also been proposed that the FFA provides a mechanism for distinguishing visually similar exemplars of any object category for which the viewer has substantial expertise (Gauthier, Skudlarski, Gore, & Anderson, 2000; Tarr & Gauthier, 2000). In addition, James, Servos, Kilgour, Huh, and Lederman (2006) and Kilgour et al. (2005) have shown left-hemisphere activation of the fusiform gyrus in haptic face recognition tasks for which participants were highly trained. In contrast, the current study provided considerably less training and did not reveal significant left FFA activity in the group analysis.

Body-part-sensitive Regions in the Lateral Occipito-temporal Cortex

EBA activation was higher during haptic identification of human body parts than of nonbiological control objects. To our knowledge, this result offers the first evidence that the EBA is also sensitive to haptically identified body parts. Our finding extends previous results that show the EBA plays an important role in visually processing static images of the human body (Urgesi, Berlucchi, & Aglioti, 2004; Downing et al., 2001) by revealing that this region may also be critical for haptic recognition of human body parts.

One intriguing difference between sensory modalities was that haptically, but not visually, identified faces produced as strong a signal change as nonface body parts (Figure 5). Downing et al. (2001) showed that the EBA responds strongly not only to the visual presentation of body parts but also to faces when only part of a face (e.g., the lips) is visually presented. Haptically acquired information is typically piecemeal and sequential, especially when the object is larger than the palm (Lederman & Klatzky, 1990). Indeed, 75% of our subjects reported that they identified an individual face mask by focusing on parts of the face. Accordingly, we suggest that the difference in sensitivity to haptically and visually identified faces in this region reflects the modality-specific manner in which face information is acquired.

We localized the HBR using procedures similar to those used to localize the visual EBA. The mean distance (across subjects) between the HBR and the EBA was greater than 10 mm. Given an effective spatial resolution of approximately 7 mm, this result suggests that these peaks occurred in somewhat different locations, although there was no consistent spatial relationship between the regions across subjects. This result is consistent with a previous finding that visually evoked body-part-sensitive activation can extend more than 10 mm from its peak (Spiridon et al., 2006). In contrast, strong visual face sensitivity was more spatially focused within the fusiform gyrus. This difference may also explain why the group-average analysis showed overlapping activation for haptic and visual identification of body parts in the inferior temporal gyrus, whereas no such overlap was found for faces (Figure 4).

Although the overall response patterns were similar for the EBA and the HBR, the response profiles across object categories in the two regions were significantly different. This result suggests that the subregion that is most sensitive to haptically presented body parts is functionally distinct from the subregion that is most sensitive to visually presented body parts. Hence, as previously argued with respect to face-sensitive regions in the fusiform gyrus, both visual and haptic body-part recognition may recruit distributed regions in the lateral occipito-temporal cortex, including the HBR and the EBA, but the degree to which different areas are recruited may be modality-dependent.

Is Haptic Object Identification Visually Mediated?

Might haptic object processing recruit the occipito-temporal areas because haptic inputs are translated into a visual image? This visual-mediation heuristic has been shown to improve haptic recognition of 2-D objects (Lederman, Klatzky, Chataway, & Summers, 1990). Hence, the category sensitivity we observe might result from visual imagery facilitating haptic recognition of human body parts.

We think this is unlikely for several reasons. First, visual mediation is not always necessary inasmuch as congenitally blind individuals can haptically recognize faces with no visual experience (Pietrini et al., 2004). Second, the activation peaks for haptics and vision were in different locations; moreover, the response patterns differed across conditions. Third, our supplemental data consistently suggest that a visual-mediation heuristic can at best account for only a limited portion of the activation during haptic object identification.

Alternatively, we speculate that the category-sensitive areas may constitute haptic, as well as visual, functional modules for the recognition of human body parts. In other words, these visually defined category-sensitive areas may contain neuronal populations that are more sensitive to haptic than to visual presentation of faces and other body parts. This view is in accord with previous studies that appear to demonstrate multisensory activation in the lateral occipital complex. For instance, Amedi et al. (2001) showed overlapping activation in the lateral occipital complex for haptic and visual identification of nonbiological 3-D common objects (e.g., a fork), but not of 2-D stimuli (e.g., sandpaper). Because haptic activation was higher than activation during a corresponding visual imagery task, the authors suggested that neuronal populations in the occipito-temporal cortex may constitute a multisensory object-related network (see also James et al., 2002).

Individual Differences in the Location of Category-sensitive Regions

We searched for category-sensitive regions within relatively large areas defined by visible anatomical landmarks, and the coordinate locations of category-sensitive regions differed substantially among subjects (Table 2). This is consistent with previous reports: For example, the location of FFA was observed to vary among individuals within the fusiform gyrus by as much as 31 mm (Euclidean distance) according to Kanwisher et al. (1997). Furthermore, others have suggested substantial variability in location of activation foci among subjects in response to visual presentation of body parts relative to inanimate objects (Spiridon et al., 2006; Astafiev et al., 2004). Hence, we were not surprised to observe relatively large individual differences in the location of the foci of category-sensitive regions in this study.

It is difficult to define anatomical areas within which category-sensitive regions can be expected to lie, because the anatomical organization of the occipito-temporal region is not well understood in humans. For the moment, functional definitions, if carefully applied, may be the best approach to characterizing this large and presumably functionally heterogeneous region. Future research is necessary to establish whether this variability is due to variability in anatomical organization across individuals, or to cognitive differences among subjects that manifest as different networks of activity during recognition of human faces and body parts.

In summary, the current results extend the sparse haptic object recognition literature by furthering our understanding of the neural mechanisms that subserve haptic recognition of human body parts. In conjunction with our previous human lesion study (Kilgour et al., 2004), these results indicate that the occipito-temporal region plays an important role in haptic, as well as visual, recognition of human faces and other body parts (hands and feet). We offer initial evidence that the FFA and the EBA are also involved in the haptic recognition of faces and body parts. Both haptically and visually defined areas showed modality-independent category-sensitive activity. However, this claim must be viewed within the context defined by additional observations, which revealed that subregions most sensitive to haptic input were both anatomically and functionally distinguishable from those most sensitive to visual input. It seems unlikely that the activity observed within category-sensitive regions is the result of visual mediation during haptic object identification.

Acknowledgments

This study was supported by a JSPS Postdoctoral Fellowship for Research Abroad to R. K., and grants from the Natural Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research to S. L. We thank S. David for assistance with neuroimaging; S. Aziz and R. Gomez for technical advice regarding stimulus production; R. Eves, Y. Nawa, T. Koyama, A. Miura, K. Ito, E. Rennert-May, E. Direnfeld, and C. Hamilton for technical assistance; and K. Sathian for valuable comments on an earlier manuscript.

Reprint requests should be sent to Ryo Kitada, National Institute for Physiological Sciences, Okazaki, Aichi, 444-8585, Japan, or via e-mail: kitada@nips.ac.jp.

Notes

1. 

We also examined the results of the fROI analysis by calculating baselines in two other ways: (1) the mean of the two long baseline periods (10 sec for haptics and 14 sec for vision) preceding and following each task block, and (2) the mean of all baseline periods across each run. The first two scans of each baseline period were excluded from the calculation. The statistical results remained the same as in the original analysis, regardless of which alternative baseline measure was used.
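The baseline logic described in this note can be sketched as follows for a single task block of a single fROI time course; the time course, scan indices, and block timings are simulated assumptions, not the study's acquisition parameters.

```python
# Sketch of percent signal change computed against two alternative baselines,
# discarding the first two scans of each baseline period as described above.
import numpy as np

rng = np.random.default_rng(5)
ts = rng.normal(100.0, 1.0, 60)     # hypothetical fROI time course (arbitrary units)
task = ts[26:31]                    # scans within one task block (placeholder indices)
pre, post = ts[20:25], ts[33:38]    # baseline periods flanking that block

# (1) Baseline = mean of the two flanking baseline periods, first two scans dropped.
flanking = np.concatenate([pre[2:], post[2:]])
psc_flanking = 100.0 * (task.mean() - flanking.mean()) / flanking.mean()

# (2) Baseline = mean of all baseline periods across the run; with more blocks,
# every baseline period in the run would be pooled here instead of just these two.
run_baseline = flanking.mean()
psc_run = 100.0 * (task.mean() - run_baseline) / run_baseline
print(round(psc_flanking, 3), round(psc_run, 3))
```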

2. 

We also examined brain activation produced by the contrast of nonface body parts versus control objects (bottles) with the difference in response time modeled as a covariate. This contrast produced two significant clusters of activation: one in the right middle occipital and temporal gyri and the other in the left middle occipital gyrus. The right-hemisphere cluster also overlapped with activation produced by the haptic contrast of nonface body parts versus control objects (center of mass x = 52.0, y = −63.8, z = −7.5; volume 96 mm³). This result is consistent with our original result (RTs not included as covariates), namely, that the contrast activated the occipito-temporal regions bilaterally.

3. 

We further examined the results of the fROI analysis when RT differences in the visual task were modeled as covariates. The significant RT difference in the visual task did not affect our main results. More specifically, we observed haptic face sensitivity in the FFA, visual face sensitivity in the HFR, haptic body-part sensitivity in the EBA, and visual body-part sensitivity in the HBR. We also observed different patterns of activation between the HFR and the FFA and between the HBR and the EBA.
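As an illustrative sketch of modeling an RT difference as a covariate at the fROI level (simulated data; variable names and values are assumptions, not the study's regressors):

```python
# Sketch of testing category sensitivity in one fROI with the RT difference
# (body parts minus control objects) entered as a centred covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 16
rt_diff = rng.normal(0.2, 0.1, n)   # hypothetical per-subject RT difference (s)
bs = rng.normal(0.5, 0.2, n)        # hypothetical BS scores in one fROI

X = sm.add_constant(rt_diff - rt_diff.mean())   # intercept + centred RT covariate
fit = sm.OLS(bs, X).fit()
# With the covariate centred, the intercept estimates the mean BS adjusted for
# RT; the covariate coefficient tests whether RT predicts the signal change.
print(fit.params, fit.pvalues)
```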

REFERENCES

Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330.
Astafiev, S. V., Stanley, C. M., Shulman, G. L., & Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nature Neuroscience, 7, 542–548.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470–2473.
Feinberg, T. E., Rothi, L. J., & Heilman, K. M. (1986). Multimodal agnosia after unilateral left hemisphere lesion. Neurology, 36, 864–867.
Friston, K. J., Ashburner, J., Frith, C. D., Heather, J. D., & Frackowiak, R. S. J. (1995). Spatial registration and normalization of images. Human Brain Mapping, 2, 165–189.
Friston, K. J., Holmes, A., Poline, J. B., Price, C. J., & Frith, C. D. (1996). Detecting activations in PET and fMRI: Levels of inference and power. Neuroimage, 4, 223–235.
Friston, K. J., Holmes, A. P., & Worsley, K. J. (1999). How many subjects constitute a study? Neuroimage, 10, 1–5.
Friston, K. J., Holmes, A. P., Worsley, K. J., Poline, J. B., Frith, C. D., & Frackowiak, R. S. J. (1995). Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping, 2, 189–210.
Friston, K. J., Jezzard, P., & Turner, R. (1994). Analysis of functional MRI time-series. Human Brain Mapping, 1, 153–171.
Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197.
Grill-Spector, K., Knouf, N., & Kanwisher, N. (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7, 555–562.
Hasson, U., Hendler, T., Ben Bashat, D., & Malach, R. (2001). Vase or face? A neural correlate of shape-selective grouping processes in the human brain. Journal of Cognitive Neuroscience, 13, 744–753.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Henson, R. (2006). Forward inference using functional neuroimaging: Dissociations versus associations. Trends in Cognitive Sciences, 10, 64–69.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
James, T. W., Servos, P., Kilgour, A. R., Huh, E., & Lederman, S. J. (2006). The influence of familiarity on brain activation during haptic exploration of 3-D facemasks. Neuroscience Letters, 397, 269–273.
Kanwisher, N. (2000). Domain specificity in face perception. Nature Neuroscience, 3, 759–763.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kilgour, A. R., de Gelder, B., & Lederman, S. J. (2004). Haptic face recognition and prosopagnosia. Neuropsychologia, 42, 707–712.
Kilgour, A. R., Kitada, R., Servos, P., James, T. W., & Lederman, S. J. (2005). Haptic face identification activates ventral occipital and temporal areas: An fMRI study. Brain and Cognition, 59, 246–257.
Klatzky, R. L., Lederman, S. J., & Metzger, V. A. (1985). Identifying objects by touch: An "expert system". Perception & Psychophysics, 37, 299–302.
Lederman, S. J., & Klatzky, R. L. (1990). Haptic identification of common objects: Knowledge-driven exploration. Cognitive Psychology, 22, 421–459.
Lederman, S. J., Klatzky, R. L., Chataway, C., & Summers, C. (1990). Visual mediation and the haptic recognition of two-dimensional pictures of common objects. Perception & Psychophysics, 47, 54–64.
Marks, D. (1973). Visual imagery differences in the recall of pictures. British Journal of Psychology, 64, 17–24.
Ohtake, H., Fujii, T., Yamadori, A., Fujimori, M., Hayakawa, Y., & Suzuki, K. (2001). The influence of misnaming on object recognition: A case of multimodal agnosia. Cortex, 37, 175–186.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Peelen, M. V., & Downing, P. E. (2005a). Selectivity for the human body in the fusiform gyrus. Journal of Neurophysiology, 93, 603–608.
Peelen, M. V., & Downing, P. E. (2005b). Within-subject reproducibility of category-specific visual activation with functional MRI. Human Brain Mapping, 25, 402–408.
Pietrini, P., Furey, M. L., Ricciardi, E., Gobbini, M. I., Wu, W. H., Cohen, L., et al. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences, U.S.A., 101, 5658–5663.
Saxe, R., Brett, M., & Kanwisher, N. (2006). Divide and conquer: A defense of functional localizers. Neuroimage, 30, 1088–1096.
Shattuck, D. W., Mirza, M., Adisetiyo, V., Hojatkashani, C., Salamon, G., Narr, K. L., et al. (2008). Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage, 39, 1064–1080.
Spiridon, M., Fischl, B., & Kanwisher, N. (2006). Location and spatial profile of category-selective regions in human extrastriate cortex. Human Brain Mapping, 27, 77–89.
Spiridon, M., & Kanwisher, N. (2002). How distributed is visual category information in human occipito-temporal cortex? An fMRI study. Neuron, 35, 1157–1165.
Tarr, M. J., & Gauthier, I. (2000). FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. Nature Neuroscience, 3, 764–769.
Tong, F., Nakayama, K., Vaughan, J. T., & Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron, 21, 753–759.
Urgesi, C., Berlucchi, G., & Aglioti, S. M. (2004). Magnetic stimulation of extrastriate body area impairs visual processing of nonfacial body parts. Current Biology, 14, 2130–2134.
Worsley, K. J., & Friston, K. J. (1995). Analysis of fMRI time-series revisited—Again. Neuroimage, 2, 173–181.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
Zhang, M., Weisser, V. D., Stilla, R., Prather, S. C., & Sathian, K. (2004). Multisensory cortical processing of object shape and its relation to mental imagery. Cognitive, Affective, & Behavioral Neuroscience, 4, 251–259.

Author notes

* Now at National Institute for Physiological Sciences, Okazaki, Japan.