Abstract

Previous studies have provided evidence for a tool-selective region in left lateral occipitotemporal cortex (LOTC). This region responds selectively to pictures of tools and to characteristic visual tool motion. The present human fMRI study tested whether visual experience is required for the development of tool-selective responses in left LOTC. Words referring to tools, animals, and nonmanipulable objects were presented auditorily to 14 congenitally blind and 16 sighted participants. Sighted participants additionally viewed pictures of these objects. In whole-brain group analyses, sighted participants showed tool-selective activity in left LOTC in both visual and auditory tasks. Importantly, virtually identical tool-selective LOTC activity was found in the congenitally blind group performing the auditory task. Furthermore, both groups showed equally strong tool-selective activity for auditory stimuli in a tool-selective LOTC region defined by the picture-viewing task in the sighted group. Detailed analyses in individual participants showed significant tool-selective LOTC activity in 13 of 14 blind participants and 14 of 16 sighted participants. The strength and anatomical location of this activity were indistinguishable across groups. Finally, both blind and sighted groups showed significant resting state functional connectivity between left LOTC and a bilateral frontoparietal network. Together, these results indicate that tool-selective activity in left LOTC develops without ever having seen a tool or its motion. This finding puts constraints on the possible role that this region could have in tool processing and, more generally, provides new insights into the principles shaping the functional organization of OTC.

INTRODUCTION

Functional neuroimaging studies have shown that viewing pictures of tools, relative to other object categories, activates a region in the left lateral occipitotemporal cortex (LOTC tool, also labeled pMTG; Bracci, Cavina-Pratesi, Ietswaart, Caramazza, & Peelen, 2012; Chao, Haxby, & Martin, 1999). LOTC tool is located just anterior to motion-selective area hMT+ (Bracci et al., 2012; Valyear & Culham, 2010; Beauchamp, Lee, Haxby, & Martin, 2002) and closely overlaps a hand-selective region (Bracci et al., 2012). LOTC tool responds preferentially to visual motion that is characteristic of hand tool actions (Beauchamp, Lee, Haxby, & Martin, 2003; Beauchamp et al., 2002), suggesting that it may store visual motion properties of tools. An open question is whether LOTC tool is a purely visual region or whether it might also contribute to tool knowledge in the absence of vision. To address this question, we tested whether selectivity for tools in LOTC develops in individuals who have had no visual experience.

A large number of studies have reported tool-selective fMRI activity in left LOTC without presenting visual displays of tools or of tool motion. For example, tool-selective activity was observed when participants heard the names of tools or the sound of tool actions, read words denoting tools, imagined tools, or pantomimed tool actions (Lewis, 2006). These results show that LOTC tool does not require visual input to activate selectively. However, they leave open the possibility that prior visual experience with tools is necessary for such selectivity to develop. For example, hearing the sound of a tool action may activate a visual representation of tool motion through visual imagery (O'Craven & Kanwisher, 2000). It has been proposed that accessing such visual representations may be a central component of object knowledge, recruited both in visual and nonvisual tasks (Barsalou, 2008; Kan, Barsalou, Solomon, Minor, & Thompson-Schill, 2003).

An elegant way of testing the role of visual experience in shaping category selectivity in the “visual” cortex is to study individuals who have had no visual experience (Mahon, Anzellotti, Schwarzbach, Zampini, & Caramazza, 2009; Pietrini et al., 2004; Buchel, Price, & Friston, 1998). For example, studies using this approach have shown that the “visual word form area” in the left posterior fusiform gyrus responds selectively when congenitally blind participants read braille (Reich, Szwed, Cohen, & Amedi, 2011), suggesting that this region performs reading-specific computations on input from multiple modalities (Pascual-Leone & Hamilton, 2001). A proposed explanation for the identical anatomical location of this “reading center” in blind and sighted individuals is that the functional properties of OTC regions are partly determined by innate connectivity patterns between OTC and functionally specific higher-order networks (Mahon & Caramazza, 2011).

In this study, we measured fMRI responses in congenitally blind and sighted participants while they listened to names of objects of three different categories: tools, animals, and nonmanipulable objects. The sighted group additionally viewed pictures of these objects. We found virtually identical tool-selective LOTC activity in blind and sighted individuals. Moreover, resting state functional connectivity analysis showed that the tool-selective LOTC was connected to the same bilateral frontoparietal network in blind and sighted groups. These results show that tool selectivity in LOTC develops without visual experience.

METHODS

Participants

The object category experiment included 16 congenitally blind and 17 sighted participants, all right-handed. All blind participants reported having had no visual experience; they had been blind since birth because of retinal damage (n = 10) or unknown pathology (n = 6). Seven blind participants had faint light perception but could not recognize patterns or shapes. Two blind participants were excluded because brain lesions were discovered on structural MRI scans, and one sighted participant was excluded because of excessive head motion. The data of the remaining 14 blind (7 women; mean age = 45 years, range = 26–60 years) and 16 sighted (7 women; mean age = 38 years, range = 18–60 years) participants were analyzed.

The same group of congenitally blind participants that took part in the object category study also completed the resting state fMRI scan. One blind participant was excluded from this analysis because of excessive head motion during the resting state scan, leaving 13 blind participants. A new group of 34 right-handed sighted participants (20 women; mean age = 22.5 years, range = 20–26 years) took part in the resting state scan. All participants were native Mandarin Chinese speakers with no history of neurological or psychiatric disorders, and all gave informed consent. The study was approved by the institutional review board of the Beijing Normal University Imaging Center for Brain Research.

Stimuli

The study consisted of auditory and visual experiments in which 30 objects from each of three categories were presented: tools, animals, and nonmanipulable objects (Table 1). Tools included kitchen utensils, farm implements, and common household tools; animals included mammals, birds, insects, and reptiles; nonmanipulable objects included furniture, appliances, cars, buildings, and other common large nonmanipulable objects. All stimuli were disyllabic Chinese words and were matched across conditions on word frequency, familiarity, and imageability, as assessed in a separate group of college students (n = 16). In the auditory experiments, the objects were referred to by auditorily presented words. In the visual experiments, black and white photographs (400 × 400 pixels, visual angle 10.55° × 10.55°) of the same objects were presented.

Table 1. 

Complete List of Stimuli (Translated from Chinese)

Tools | Animals | Nonmanipulable
Abacus | Bat | Arbor
Axe | Bear | Barrier
Bayonet | Bee | Bathtub
Broom | Butterfly | Blackboard
Brush | Camel | Bookshelf
Button | Cock | Castle
Chopstick | Crab | Chaise longue
Dagger | Dairy cow | Chimney
Drill | Dinosaur | Computer
Fan | Donkey | Elevator
Fork | Duck | Fence
Heavy hammer | Eagle | Flag pole
Hoe | Elephant | Hammock
Key | Frog | Pagoda
Keyboard | Hedgehog | Refrigerator
Kitchen knife | Lobster | Sailboat
Knife | Monkey | Sink
Light hammer | Mosquito | Slide
Long spear | Mouse | Sofa
Match | Panda | Speaker
Pencil | Peacock | Statue
Pistol | Pigeon | Stool
Rake | Rabbit | Stove
Rope | Sheep | Table
Spade | Snail | Tank
Spoon | Sparrow | Television
Stick | Spider | Tent
Walking stick | Toad | Toaster
Whip | Turtle | Tombstone
Zip | Wild goose | Truck

Task and Design of Auditory Experiments

Both blind and sighted participants performed two auditory experiments. These experiments included the same objects but differed in the task performed on them. To maximize statistical power, the data of these experiments were collapsed and analyzed together.

The first task was a size judgment task, similar to that used in Mahon et al. (2009). Stimuli were presented in blocks of five words, all from the same category. Participants were instructed to compare the size of each of the objects to the size of the first object. If all objects had roughly the same size, participants responded by pressing a button with the index finger of the left hand, whereas if at least one of the last four objects was different in size from the first one, participants pressed a button with the right index finger. A response cue (auditory tone, duration 200 msec) was presented after the offset of the last item of the block, at which time participants responded. Each of the five trials in a block lasted 2 sec, and the last trial was followed by a 4-sec silent period for response. Thus, each block lasted 14 sec. Blocks were followed by a 14-sec period of silence. Participants performed four runs, each consisting of 10 object blocks. The first block of each run was excluded from data analysis, leaving a total of 36 blocks (12 repetitions of each of the three categories). The order of blocks was pseudorandomized with the restriction that no two consecutive blocks were from the same category.
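
The paper does not report how the pseudorandomized block orders were generated; the sketch below (Python, with hypothetical function and variable names) shows one simple way to produce an order in which no two consecutive blocks share a category, as required by the design.

```python
import random

def pseudorandom_block_order(categories, blocks_per_category, max_tries=10000):
    """Shuffle until no two consecutive blocks come from the same category."""
    blocks = [c for c in categories for _ in range(blocks_per_category)]
    for _ in range(max_tries):
        random.shuffle(blocks)
        if all(a != b for a, b in zip(blocks, blocks[1:])):
            return blocks
    raise RuntimeError("no valid order found; increase max_tries")

# Example: 12 analyzed blocks per category, as in the size judgment task
order = pseudorandom_block_order(["tools", "animals", "nonmanipulable"], 12)
```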

The second task was a semantic similarity judgment task. In addition to the three object categories of interest (tools, animals, nonmanipulable objects), three additional conditions were included in this experiment: abstract verbs, concrete verbs, and abstract nouns. These conditions were not of interest to the current study, and their data are not presented here. Each trial consisted of two objects, presented sequentially, from the same category. Participants were asked to judge the semantic similarity of the two objects on a scale from 1 (highly unrelated) to 4 (highly related). They responded by pressing four buttons corresponding to the index and middle fingers of the two hands: the value 1 was assigned to the left middle finger and progressively larger values were assigned to the other fingers moving from left to right, so that the value 4 was assigned to the right middle finger. Each pair lasted 2 sec, and participants had an additional 2 sec to respond. Five pairs from the same category made up one block, which lasted 20 sec. Blocks were followed by a 14-sec period of silence. Participants performed four runs, each consisting of 10 blocks. The first block of each run was excluded from data analysis, leaving a total of 36 blocks (six repetitions of each of the six conditions). The order of blocks was pseudorandomized with the restriction that no two consecutive blocks were from the same category.

Task and Design of Visual Experiments

All 16 sighted participants completed one run of a passive picture viewing task in which they viewed the object photographs through a mirror attached to the head coil, which allowed foveal viewing of a back-projected screen (refresh rate: 60 Hz; resolution: 1024 × 768 pixels). The pictures were presented sequentially (667 msec; ISI = 0) in blocks of 30 items, all from the same category. Each block lasted 20 sec and was followed by 20 sec of fixation. Each category block was repeated four times in pseudorandomized order, with the restriction that no two consecutive blocks were from the same category. Ten of the 16 sighted participants performed two additional runs of the picture viewing experiment, in which they detected immediate repetitions of the images (1-back task). For these participants, the data of the two visual experiments were combined. In these additional runs, objects were presented sequentially in blocks of 16 pictures (800 msec; ISI = 200 msec), all from the same category. Blocks lasted 16 sec each and were separated by 10 sec of fixation. Participants pressed a button with the left index finger whenever the exact same picture was presented twice in a row, which happened zero, one, or two times per block (with equal probability). Each category block was repeated six times within a run, and the order of blocks was counterbalanced using the Latin square method. Each run started and ended with a 10-sec fixation block.
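
The specific Latin square construction was not described; a minimal sketch of one common approach, a cyclic Latin square in which each row is a rotation of the condition list (function and variable names are ours), is given below.

```python
def cyclic_latin_square(conditions):
    """Cyclic Latin square: each row is the condition list rotated by one position,
    so every condition appears once in every serial position across rows."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Rows could then be assigned to runs (or participants) to counterbalance block order
square = cyclic_latin_square(["tools", "animals", "nonmanipulable"])
```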

Resting State Scan

Participants were instructed to close their eyes and not to fall asleep. The resting state scan lasted 6 min and 40 sec for the congenitally blind group and 8 min for the sighted group (data from Wei et al., 2012).

Image Acquisition and Data Preprocessing

All functional and structural MRI data were collected with a 3T Siemens Tim Trio scanner at the MRI center of Beijing Normal University. Structural images were acquired with an MPRAGE sequence, with 1.3 × 1 × 1.3 mm resolution. Functional images were acquired with EPI sequences. For the object category experiments, scanning parameters were as follows: repetition time (TR) = 2000 msec, echo time (TE) = 30 msec, flip angle = 90°, matrix size = 64 × 64, field of view (FOV) = 200 × 200 mm, 33 axial slices of 4 mm, 0.6 mm interslice gap. For the resting state scan in blind participants, scanning parameters were as follows: TR = 2000 msec, TE = 33 msec, flip angle = 73°, matrix size = 64 × 64, FOV = 200 × 200 mm, 32 axial slices of 4 mm, 0.8 mm interslice gap. For the resting state scan in sighted participants, scanning parameters were as follows: TR = 2000 msec, TE = 33 msec, flip angle = 90°, matrix size = 64 × 64, FOV = 200 × 200 mm, 33 axial slices of 3 mm, 0.6 mm interslice gap.

Data preprocessing and analysis for the auditory and visual experiments were performed using BrainVoyager QX (version 2.30; Brain Innovation, Maastricht, The Netherlands) and MATLAB. Preprocessing of the functional data included three-dimensional head motion correction, linear trend removal, high-pass temporal filtering (cutoff 1 cycle per time course) and spatial smoothing (6-mm FWHM isotropic Gaussian kernel). For each participant, functional images were coregistered to the T1 anatomical images. Subsequently, anatomical images were transformed into Talairach stereotaxic space, and this transformation was applied to the aligned functional data, which was resampled to 3 × 3 × 3 mm. The resting state data were preprocessed and analyzed using Statistical Parametric Mapping software (SPM8; www.fil.ion.ucl.ac.uk/spm) and the Data Processing Assistant for Resting State fMRI toolbox (www.restfmri.net). Functional data were resampled to 3 × 3 × 3 mm voxels. The first 10 volumes of the functional images were discarded. Preprocessing of the functional data included slice time correction, three-dimensional head motion correction, spatial normalization to the Montreal Neurological Institute space using the unified segment algorithm (SPM 8), spatial smoothing (6-mm FWHM isotropic Gaussian kernel), linear trend removal, and band-pass filtering (0.01–0.1 Hz). Six head motion parameters, global mean signals, white matter, and cerebrospinal fluid signals were regressed out as nuisance covariates.
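
The resting state preprocessing was performed with SPM8 and the DPARSF toolbox; purely as an illustration of the last two steps (nuisance regression and band-pass filtering), a minimal Python sketch is shown below. Function names, array shapes, and the Butterworth filter choice are our assumptions, not the toolbox implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def regress_out(timeseries, confounds):
    """Remove nuisance signals (6 motion parameters, global mean, WM, CSF)
    from each voxel time course by ordinary least squares."""
    X = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(X, timeseries, rcond=None)
    return timeseries - X @ beta

def bandpass(timeseries, tr=2.0, low=0.01, high=0.1, order=2):
    """Zero-phase Butterworth band-pass filter (0.01-0.1 Hz) along the time axis."""
    nyquist = 0.5 / tr
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, timeseries, axis=0)

# timeseries: (n_timepoints, n_voxels); confounds: (n_timepoints, n_confounds)
```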

Data Analysis

Functional data of the auditory and visual experiments were analyzed using the general linear model (GLM). For each participant and each experiment, the GLM included regressors for the experimental conditions included in the design and the six motion correction parameters (x, y, z for translation and for rotation). Data from the two auditory experiments were combined in a single GLM. Similarly, for those participants who took part in both visual experiments, these data were combined in a single GLM. Predictors' time courses were modeled with a linear model of hemodynamic response using the default BrainVoyager QX “two-gamma” function.
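
The GLMs were estimated in BrainVoyager QX; as a rough illustration of this modeling step, the sketch below builds a boxcar predictor for one condition, convolves it with a canonical double-gamma HRF, and estimates betas by least squares. The HRF parameters and helper names are assumptions and do not reproduce BrainVoyager's exact "two-gamma" function.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr=2.0, duration=32.0):
    """Canonical double-gamma HRF sampled at the TR (SPM-like parameters, assumed)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # positive peak minus late undershoot
    return hrf / hrf.sum()

def condition_predictor(onsets_sec, block_dur_sec, n_scans, tr=2.0):
    """Boxcar for one condition convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets_sec:
        start = int(onset / tr)
        boxcar[start:start + int(block_dur_sec / tr)] = 1.0
    return np.convolve(boxcar, double_gamma_hrf(tr))[:n_scans]

def fit_glm(Y, X):
    """Ordinary least-squares betas; Y is (n_scans, n_voxels), X is (n_scans, n_regressors)."""
    return np.linalg.pinv(X) @ Y
```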

Whole-brain random-effects group analyses were performed on the data from the auditory and visual experiments, separately for the blind and sighted groups. Statistical activation maps were thresholded at p < .001 (uncorrected for multiple comparisons) and a minimum cluster size of 30 voxels (810 mm3).

In the ROI analysis, a tool-selective ROI in LOTC was defined using the group-averaged data of the visual experiment in sighted participants (dotted outline in Figure 1A). Tools were contrasted with the averaged activity to animals and nonmanipulable objects, thresholded at p < .001. The ROI was restricted to significant voxels within a cube of 20 mm width centered on the activation peak. Parameter estimates from the auditory experiment data were then extracted for each participant and tested using an ANOVA with Category (tools, animals, nonmanipulable objects) as within-subject factor and Group (blind, sighted) as between-subject factor.
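
As a sketch of this ROI definition and extraction step (helper names, the 3-mm voxel size, and the t threshold approximating p < .001 are our assumptions):

```python
import numpy as np

def cube_roi(tmap, peak_ijk, voxel_mm=3.0, width_mm=20.0, t_thresh=3.3):
    """Boolean mask of suprathreshold voxels within a 20-mm cube around the peak.

    tmap: 3-D t map; peak_ijk: peak voxel indices; t_thresh approximates p < .001 (assumed).
    """
    half = int(round(width_mm / 2.0 / voxel_mm))
    mask = np.zeros(tmap.shape, dtype=bool)
    i, j, k = peak_ijk
    mask[max(0, i - half):i + half + 1,
         max(0, j - half):j + half + 1,
         max(0, k - half):k + half + 1] = True
    return mask & (tmap > t_thresh)

def mean_beta(beta_map, roi_mask):
    """Average parameter estimate within the ROI for one condition and participant."""
    return float(beta_map[roi_mask].mean())
```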

Figure 1. 

Tool-selective activity in blind and sighted participants. (A) Group-averaged activity (tools vs. animals and nonmanipulable objects) is shown for the blind group (auditory: blue color-coded) and the sighted group (auditory: red color-coded; visual: dotted line). Activation maps are thresholded at p < .001 (cluster size > 30 voxels). (B) Individual-participant LOTC tool is shown for four blind participants (auditory: blue color-coded) and four sighted participants (auditory: red color-coded; visual: orange color-coded). LH = left hemisphere.

To compare the anatomical location and strength of tool-selective LOTC activity in the blind and sighted participants performing the auditory tasks, we contrasted tools with the averaged activity to animals and nonmanipulable objects in each individual participant, at p < .01 (uncorrected). A more lenient threshold was used for this individual-subject analysis to ensure that the ROI could be localized in most participants. The peak Talairach coordinates (x, y, z) were recorded for each participant, and the Euclidean distance between these coordinates and the average Talairach coordinates was computed within and across groups. For example, for Blind Participant 1, the Euclidean distance was computed between the Talairach coordinates of this participant and the average Talairach coordinates of the blind group (leaving Participant 1 out of the average to avoid circularity). Similarly, the Euclidean distance was computed between the Talairach coordinates of Blind Participant 1 and the average Talairach coordinates of the sighted group. This procedure resulted in two distance values for each participant, one reflecting within-group distance (e.g., blind–blind) and one reflecting between-group distance (e.g., blind–sighted). These values were then compared using ANOVA with Distance (within-group, between-group) as within-subject factor and Group (blind, sighted) as between-subject factor. The same analysis was also performed on the (absolute) distance for each of the coordinates (x, y, z) separately.
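
A minimal sketch of the leave-one-out distance computation described above (array names are ours; peaks are Talairach x, y, z coordinates):

```python
import numpy as np

def peak_distances(own_group, other_group):
    """Within- and between-group Euclidean distances of activation peaks.

    own_group, other_group: (n_subjects, 3) arrays of peak coordinates.
    Returns one (within, between) distance pair per participant in own_group.
    """
    within, between = [], []
    for s in range(len(own_group)):
        loo_mean = np.delete(own_group, s, axis=0).mean(axis=0)  # leave-one-out group mean
        within.append(np.linalg.norm(own_group[s] - loo_mean))
        between.append(np.linalg.norm(own_group[s] - other_group.mean(axis=0)))
    return np.array(within), np.array(between)
```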

Resting state functional connectivity analysis for the left LOTC tool region was performed for the congenitally blind participants that had participated in the auditory category experiments as well as for a new group of sighted participants (Wei et al., 2012). (The group of sighted participants that participated in the category experiments did not undergo a resting state scan). The seed ROI was a 6-mm sphere centered on the average peak Talairach coordinates of LOTC tool (xyz: −50, −60, −5), defined in individual blind and sighted participants using the auditory experiment data (Figure 3A). For each participant, the mean time series of the seed ROI was correlated with the time series of all other voxels in the brain to generate a whole-brain functional connectivity map. Resulting correlation values were Fisher transformed. A gray matter functional mask (36,272 voxels) was applied before conducting random-effects group analyses. One-sample t tests were used to identify voxels showing significant functional connectivity with the LOTC tool seed ROI. Resulting whole-brain group-averaged statistical maps were thresholded at p < .001 (uncorrected for multiple comparisons) and a minimum cluster size of 30 voxels (810 mm3).
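
For illustration only, the core of the seed-based analysis (mean seed time course correlated with every voxel, then Fisher z-transformed) can be sketched as follows; variable names and shapes are our assumptions. Group maps would then be obtained by one-sample t tests across participants' Fisher-z maps.

```python
import numpy as np

def seed_connectivity(data, seed_mask):
    """Fisher-z connectivity map for a seed ROI.

    data: (n_timepoints, n_voxels) preprocessed time series;
    seed_mask: boolean (n_voxels,) marking the 6-mm seed sphere voxels.
    """
    seed_ts = data[:, seed_mask].mean(axis=1)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    data_z = (data - data.mean(axis=0)) / data.std(axis=0)
    r = (data_z * seed_z[:, None]).mean(axis=0)   # Pearson r with every voxel
    return np.arctanh(r)                          # Fisher z transform
```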

RESULTS

Behavioral Results

In the size judgment task, participants compared the real-world size of the first object of a block with the sizes of the four subsequently presented objects, indicating whether or not all four were roughly the same size as the reference object. The proportion of "same" responses was on average 31% (tools: 30%, animals: 32%, nonmanipulable objects: 31%) and did not differ between blind and sighted groups (t < 1). In the semantic similarity judgment task, participants judged the semantic similarity of two objects on a scale from 1 (highly unrelated) to 4 (highly related). The average similarity rating was 1.93 (tools: 1.97, animals: 1.99, nonmanipulable objects: 1.83) and did not differ between blind and sighted groups, F(1, 28) = 1.1, p = .30.

Whole-brain Analyses

To test for tool-selective brain activity, the response to tools was contrasted with the average response to animals and nonmanipulable objects in whole-brain random-effects group analyses (thresholded at p < .001, uncorrected). In the sighted group performing the auditory tasks, this contrast revealed activity in left LOTC (xyz: −51, −57, −2; peak t = 7.26; p < .0001). Interestingly, nearly identical LOTC activity was observed in congenitally blind participants performing the auditory tasks (xyz: −50, −52, −3; peak t = 8.74; p < .0001). No other regions were activated in either of the groups. As can be seen in Figure 1A, there was a close overlap between tool-selective LOTC activity in the blind and sighted groups and between tool-selective LOTC activity for auditory and visual experiments in the sighted group. Figure 1B shows tool-selective LOTC responses in several individual participants (see also Figure 3A).

ROI Analysis

To confirm the overlap between tool-selective LOTC activity in the sighted group viewing object pictures and both sighted and blind groups hearing object names (Figure 1A), auditory responses were investigated within the tool-selective region activated in the visual task (dotted outline in Figure 1A). For each participant, parameter estimates for the auditory conditions were extracted from the visually defined tool-selective ROI (see Methods) and tested in an ANOVA with Category (tools, animals, nonmanipulable objects) as within-subject factor and Group (blind, sighted) as between-subject factor. This analysis revealed a highly significant main effect of Category, F(2, 56) = 18.05; p < .0001, with tools eliciting significantly stronger responses than both animals, t(29) = 7.38, p < .0001, and nonmanipulable objects, t(29) = 4.09, p < .001. There was also a significant main effect of Group, F(1, 28) = 4.21, p < .05, reflecting overall stronger LOTC responses to auditory input in the blind than the sighted group. Strong auditory responses in LOTC have previously been observed in early blind participants relative to late blind or sighted participants (Bedny, Konkle, Pelphrey, Saxe, & Pascual-Leone, 2010). Importantly, the interaction between Category and Group was not significant, F(2, 56) = 0.26, p = .77, indicating similar tool selectivity in blind and sighted groups (Figure 2).

Figure 2. 

Functional profile of visually defined LOTC tool. Mean parameter estimates (Beta) for each stimulus category (tools, animals, nonmanipulable objects) in the auditory experiment were extracted from the group-average LOTC tool (tools vs. animals and nonmanipulable objects) defined using data from the visual experiment in sighted participants (dotted outline in Figure 1A). Error bars indicate SEM.

Distance between Tool-selective Activation Peaks

To further investigate whether tool-selective LOTC activation in congenitally blind individuals is distinguishable from tool-selective LOTC activation in sighted individuals, we compared the location and peak t value of individually localized tool-selective regions across groups. LOTC tool (tools vs. animals and nonmanipulable objects; p < .01, uncorrected) was defined in individual participants using the auditory task data. LOTC tool could be localized in 13 of 14 blind participants and 14 of 16 sighted participants (Figure 3A). The average peak t value did not differ between the two groups (blind: 4.2; sighted: 4.2; t(25) = 0.01, p = .98).

Figure 3. 

Individual-participant activation peaks. (A) Individual-participant activation peaks for the contrast [tools vs. animals and nonmanipulable objects] are shown for the blind (blue color-coded) and sighted (red color-coded) participants, using data from the auditory experiment. (B) For each Talairach coordinate (x, y, z), the within-group distance (e.g., single-subject blind minus group-averaged blind) and the between-group distance (e.g., single-subject blind minus group-averaged sighted) are shown, separately for the blind (left) and the sighted (right) groups. SS = single subject.

To test whether the anatomical location of LOTC tool differed as a function of visual experience, the Euclidean distances between each participant's peak voxel and the group-averaged coordinates of both groups (see Methods) were tested in an ANOVA with Distance (within-group, between-group) as within-subject factor and Group (blind, sighted) as between-subject factor. There was no main effect of Distance (F < 1) or Group (F < 1) and no significant interaction between Distance and Group (F < 1). These results indicate that for both blind and sighted groups the average within-group distance between activation peaks (blind–blind: 10.4 mm, sighted–sighted: 10.7 mm) was equal to the average between-group distance between activation peaks (blind–sighted: 10.6 mm, sighted–blind: 11.3 mm). In other words, the anatomical location of LOTC tool in a given blind participant was equally distant from LOTC tool in the blind group as it was from LOTC tool in the sighted group. The same distance analysis was performed for each of the three coordinates (x, y, z) separately, as illustrated in Figure 3B. For the y and z coordinates, there were no significant main effects or interactions (p > .05 for all tests). For the x coordinate, there was a significant main effect of Distance, F(1, 25) = 7.05, p = .014, reflecting a smaller between-group distance than within-group distance. Thus, if anything, the location of LOTC tool was more similar across groups than within groups. No other effects were significant.

These analyses on individual participant data show that the strength and anatomical location of tool-selective LOTC activity was remarkably independent of visual experience.

Resting State Functional Connectivity Analysis

All blind participants underwent a resting state scan to measure functional connectivity between LOTC tool (seed region) and all other voxels in the brain (see Methods). A new group of 34 sighted participants served as a control group (see Methods). In both blind and sighted groups, LOTC tool was functionally connected to a bilateral frontoparietal network, with considerable overlap between groups (Figure 4A; Table 2). The sighted group showed generally larger clusters than the blind group, possibly because of the larger number of participants in the sighted group (sighted: n = 34, blind: n = 13). These results show that the functional connectivity between tool-selective LOTC and higher-order networks is qualitatively similar in blind and sighted individuals.

Figure 4. 

Resting state functional connectivity analysis. (A) Statistical maps of functional connectivity for the left LOTC tool seed region (xyz: −50, −60, −5; black dot) are presented for the blind and the sighted group. Statistical maps are thresholded at p < .001 (cluster size > 30 voxels). Table 2 gives the details of the significant clusters. (B) Functional connectivity maps for blind (yellow color-coded) and sighted (blue color-coded) are superimposed to show the degree of correspondence between the two groups (orange color-coded). LH = left hemisphere.

Table 2. 

Resting State Functional Connectivity


Region | Blind Group (n = 13): x, y, z, T, mm3 | Sighted Group (n = 34): x, y, z, T, mm3
L LOTC −54 −63 −3 33.39 8775 −54 −63 −3 38.47 15228 
L ATL      −30 −39 6.93 2025 
L Parietal −63 −30 39 6.56 2349 −63 −21 33 10.03 36099 
L IPS −30 −45 48 5.55 837 (confluent with L Parietal cluster) 
L SPL −27 −45 66 11.34 2997 (confluent with L Parietal cluster) 
L IFG −48 18 6.22 1809 −54 21 8.19 10854 
L IFG      −45 33 12 6.67 4590 
L MFG      −27 −3 57 6.97 4320 
L PHG      −21 −21 6.61 891 
R LOTC 63 −54 −3 9.59 4455 54 −57 −9 16.27 9477 
R Parietal 30 −51 51 7.21 9153 24 −66 48 9.12 33372 
R IFG 54 21 6.59 1512 51 27 7.81 12096 
R IFG      45 39 6.27 2646 
R MFG 30 −6 57 8.3 1647 27 51 7.05 4428 


Table gives Talairach coordinates, t values, and volume (mm3) of clusters that showed significant (p < .001, uncorrected) functional connectivity to a left LOTC seed region (xyz: −50, −60, −5), separately for blind and sighted groups. ATL, anterior temporal lobe; IPS, intraparietal sulcus; SPL, superior parietal lobule; IFG, inferior frontal gyrus; MFG, middle frontal gyrus; PHG, parahippocampal gyrus.

DISCUSSION

The present results show strikingly similar left LOTC activity in congenitally blind and sighted participants when hearing words referring to tools, relative to words referring to animals and nonmanipulable objects. Whole-brain group analyses revealed strong and selective activity to tools in left LOTC of both groups. This region overlapped with the tool-selective region that was active during picture viewing in the sighted group. Indeed, the visually defined tool-selective region (in sighted) showed equally strong tool selectivity in both blind and sighted participants performing the auditory task. Tool-selective activity could be reliably observed in almost all individual participants as well, with again no difference between blind and sighted participants in terms of anatomical location or degree of selectivity. Finally, resting state functional connectivity analysis showed that, in both blind and sighted groups, the tool-selective left LOTC region was functionally connected to right LOTC as well as to a bilateral network consisting of inferior and superior parietal cortex and inferior frontal cortex. Together, these results provide evidence that tool selectivity in left LOTC is largely independent of visual experience. Below we will discuss the implications of these findings for the role of left LOTC in tool processing and, more generally, for the principles that shape the functional organization of OTC.

The finding that tool-selective LOTC develops in the absence of visual experience excludes a purely visual role for this region in tool processing. Which nonvisual tool properties might LOTC tool represent? Our data are consistent with several possibilities. One possibility is that LOTC tool stores perceptual properties of tools that are typically conveyed by the visual modality, such as tool shape or tool motion, but that can also be conveyed by other modalities. LOTC tool might store such properties in a relatively abstract multimodal code, perhaps integrating information from multiple modalities (Beauchamp, Lee, Argall, & Martin, 2004). Indeed, there is evidence that regions in LOTC are involved in extracting object shape from tactile input (Peelen, Rogers, Wing, Downing, & Bracewell, 2010; Amedi, von Kriegstein, van Atteveldt, Beauchamp, & Naumer, 2005; Reed, Shoham, & Halgren, 2004; Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). Furthermore, motion-selective area hMT+, located just posterior to LOTC tool, responds to auditory motion, particularly in early blind individuals (Bedny et al., 2010; Saenz, Lewis, Huth, Fine, & Koch, 2008; Ricciardi et al., 2007; Poirier et al., 2005, 2006). These results are consistent with the idea that "visual" regions in sighted individuals may perform similar computations (e.g., motion processing) on input from nonvisual modalities in congenitally blind individuals (Pascual-Leone & Hamilton, 2001).

Rather than representing perceptual tool properties, another possibility is that LOTC tool stores action-related tool properties, such as the hand movements or hand postures associated with specific tools. Several recent studies support such an account. For example, the LOTC tool response to static pictures of hands is at least as strong as its response to pictures of tools (Bracci et al., 2012). Furthermore, multivoxel activity patterns in LOTC are able to distinguish between specific hand actions (e.g., punch vs. lift) across visual (seeing the action) and motor (performing the action) domains (Oosterhof, Wiggett, Diedrichsen, Tipper, & Downing, 2010). In separate experiments, similar LOTC clusters were found to discriminate meaningless (xyz: −53, −56, 3) and object-directed hand actions (xyz: −49, −61, 2), both located near the average coordinates of tool-selective LOTC in the current study (xyz: −50, −60, −5). Other studies have similarly shown that performing (unseen) hand movements activates LOTC (Peelen & Downing, 2005; xyz: −46, −65, −1) and modulates limb-selective LOTC regions (Orlov, Makin, & Zohary, 2010; Astafiev, Stanley, Shulman, & Corbetta, 2004). Interestingly, a study in early blind participants found increased responses in left LOTC (xyz: −54, −66, 3) when participants listened to words referring to hand actions (e.g., "tapping," "knit," "tickle") relative to words describing body motion, object shapes, or sounds (Noppeney, Friston, & Price, 2003). Taken together, these studies raise the possibility that LOTC tool represents tool-associated hand actions or (sequences of) hand postures. Our current results and those of previous studies (Oosterhof et al., 2010; Orlov et al., 2010) indicate that these representations can be driven by visual input but may additionally be driven by regions involved in action planning, such as regions in frontoparietal cortex to which LOTC tool is functionally connected (Bracci et al., 2012; Simmons & Martin, 2012).

Finally, another possibility consistent with our current results is that LOTC tool stores more general functional properties of tools, such as what a tool is for and/or the objects that a tool is typically applied to (Bach, Peelen, & Tipper, 2010; Goldenberg & Spatt, 2009). Retrieval of such semantic knowledge would be expected to be largely independent of whether it is cued by a picture of a tool or a word referring to a tool. However, although LOTC tool may store some of these properties, patient and fMRI studies have implicated anterior temporal cortex in storing more general semantic knowledge of tools, such as where a tool is typically found and how it is typically used (Peelen & Caramazza, 2012; Hodges, Bozeat, Lambon Ralph, Patterson, & Spatt, 2000).

The present results complement a series of recent findings showing that aspects of the functional organization of occipitotemporal cortex develop without visual experience. For example, Mahon and colleagues (2009) showed that the ventral stream distinction between different object domains is preserved in congenitally blind participants. These and other findings (Striem-Amit, Dakwar, Reich, & Amedi, 2012; Reich et al., 2011; Wolbers, Klatzky, Loomis, Wutte, & Giudice, 2011; Pietrini et al., 2004; Noppeney et al., 2003; Buchel et al., 1998; Sadato et al., 1996) indicate that aspects of the object domain organization of occipitotemporal cortex do not require visual input to develop and that such an organization can thus not be fully explained by visual characteristics of objects. The finding that the organization is so similar for blind and sighted, in terms of the anatomical location of domain-selective regions, might be explained by taking into account innately specified connectivity patterns between OTC and functionally specific higher-order networks (Mahon & Caramazza, 2011). For example, connectivity between the left “visual word form area” and left-hemisphere language regions may make this region optimally suited for reading-related computations (Striem-Amit et al., 2012; Reich et al., 2011). This account is supported by the present resting state functional connectivity findings, showing intrinsic functional connectivity between LOTC and frontoparietal regions in both blind and sighted groups. These frontoparietal regions have previously been implicated in the execution of hand–arm actions in both sighted (Grafton & Hamilton, 2007; Culham, Cavina-Pratesi, & Singhal, 2006; Johnson, Ferraina, Bianchi, & Caminiti, 1996; Lewis, 2006) and congenitally blind participants (Lingnau et al., 2012). An interesting topic for future research is to investigate which aspects of the organization of visual cortex depend on visual experience and how this relates to connectivity with functionally specific higher order networks, such as those involved in language, action understanding, or social cognition (Simmons & Martin, 2012).

In summary, we found virtually identical tool-selective LOTC responses—in terms of strength and anatomical location—in blind and sighted participants performing the same auditory task. Resting state functional connectivity showed that in both groups tool-selective LOTC was functionally connected to a bilateral frontoparietal network. Our results rule out a purely visual role for tool-selective LOTC, such as an exclusive involvement in representing the visual consequences of tool actions. Its consistent anatomical location across blind and sighted participants might be explained by functional connectivity patterns between LOTC and frontoparietal regions implicated in action execution.

Acknowledgments

We thank Zaizhu Han and Nan Lin for discussions and help in data collection. This work was supported by the 973 Program (2013CB837300) and NSFC (31171073; 31222024; 31271115). M. V. P. and A. C. were supported in part by the Fondazione Cassa di Risparmio di Trento e Rovereto.

Reprint requests should be sent to Marius V. Peelen, Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy, or via e-mail: marius.peelen@unitn.it or Yanchao Bi, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China, or via e-mail: ybi@bnu.edu.cn.

REFERENCES

Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12, 1202–1212.

Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330.

Amedi, A., von Kriegstein, K., van Atteveldt, N. M., Beauchamp, M. S., & Naumer, M. J. (2005). Functional imaging of human crossmodal identification and object recognition. Experimental Brain Research, 166, 559–571.

Astafiev, S. V., Stanley, C. M., Shulman, G. L., & Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nature Neuroscience, 7, 542–548.

Bach, P., Peelen, M. V., & Tipper, S. P. (2010). On the role of object information in action observation: An fMRI study. Cerebral Cortex, 20, 2798–2809.

Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.

Beauchamp, M. S., Lee, K. E., Argall, B. D., & Martin, A. (2004). Integration of auditory and visual information about objects in superior temporal sulcus. Neuron, 41, 809–823.

Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.

Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2003). fMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15, 991–1001.

Bedny, M., Konkle, T., Pelphrey, K., Saxe, R., & Pascual-Leone, A. (2010). Sensitive period for a multimodal response in human visual motion area MT/MST. Current Biology, 20, 1900–1906.

Bracci, S., Cavina-Pratesi, C., Ietswaart, M., Caramazza, A., & Peelen, M. V. (2012). Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of Neurophysiology, 107, 1443–1456.

Buchel, C., Price, C., & Friston, K. (1998). A multimodal language region in the ventral visual pathway. Nature, 394, 274–277.

Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.

Culham, J. C., Cavina-Pratesi, C., & Singhal, A. (2006). The role of parietal cortex in visuomotor control: What have we learned from neuroimaging? Neuropsychologia, 44, 2668–2684.

Goldenberg, G., & Spatt, J. (2009). The neural basis of tool use. Brain, 132, 1645–1655.

Grafton, S. T., & Hamilton, A. F. (2007). Evidence for a distributed hierarchy of action representation in the brain. Human Movement Science, 26, 590–616.

Hodges, J. R., Bozeat, S., Lambon Ralph, M. A., Patterson, K., & Spatt, J. (2000). The role of conceptual knowledge in object use evidence from semantic dementia. Brain, 123, 1913–1925.

Johnson, P. B., Ferraina, S., Bianchi, L., & Caminiti, R. (1996). Cortical networks for visual reaching: Physiological and anatomical organization of frontal and parietal lobe arm regions. Cerebral Cortex, 6, 102–119.

Kan, I. P., Barsalou, L. W., Solomon, K. O., Minor, J. K., & Thompson-Schill, S. L. (2003). Role of mental imagery in a property verification task: fMRI evidence for perceptual representations of conceptual knowledge. Cognitive Neuropsychology, 20, 525–540.

Lewis, J. W. (2006). Cortical networks related to human use of tools. Neuroscientist, 12, 211–231.

Lingnau, A., Strnad, L., He, C., Fabbri, S., Han, Z., Bi, Y., et al. (2012). Cross-modal plasticity preserves functional specialization in posterior parietal cortex. Cerebral Cortex.

Mahon, B. Z., Anzellotti, S., Schwarzbach, J., Zampini, M., & Caramazza, A. (2009). Category-specific organization in the human brain does not require visual experience. Neuron, 63, 397–405.

Mahon, B. Z., & Caramazza, A. (2011). What drives the organization of object knowledge in the brain? Trends in Cognitive Sciences, 15, 97–103.

Noppeney, U., Friston, K. J., & Price, C. J. (2003). Effects of visual deprivation on the organization of the semantic system. Brain, 126, 1620–1627.

O'Craven, K. M., & Kanwisher, N. (2000). Mental imagery of faces and places activates corresponding stimulus-specific brain regions. Journal of Cognitive Neuroscience, 12, 1013–1023.

Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., Tipper, S. P., & Downing, P. E. (2010). Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex. Journal of Neurophysiology, 104, 1077–1089.

Orlov, T., Makin, T. R., & Zohary, E. (2010). Topographic representation of the human body in the occipitotemporal cortex. Neuron, 68, 586–600.

Pascual-Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.

Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. Journal of Neuroscience, 32, 15728–15736.

Peelen, M. V., & Downing, P. E. (2005). Is the extrastriate body area involved in motor actions? Nature Neuroscience, 8, 125; author reply 125–126.

Peelen, M. V., Rogers, J., Wing, A. M., Downing, P. E., & Bracewell, R. M. (2010). Unitary haptic perception: Integrating moving tactile inputs from anatomically adjacent and non-adjacent digits. Experimental Brain Research, 204, 457–464.

Pietrini, P., Furey, M. L., Ricciardi, E., Gobbini, M. I., Wu, W. H., Cohen, L., et al. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences, U.S.A., 101, 5658–5663.

Poirier, C., Collignon, O., Devolder, A. G., Renier, L., Vanlierde, A., Tranduy, D., et al. (2005). Specific activation of the V5 brain area by auditory motion processing: An fMRI study. Brain Research, Cognitive Brain Research, 25, 650–658.

Poirier, C., Collignon, O., Scheiber, C., Renier, L., Vanlierde, A., Tranduy, D., et al. (2006). Auditory motion perception activates visual motion areas in early blind subjects. Neuroimage, 31, 279–285.

Reed, C. L., Shoham, S., & Halgren, E. (2004). Neural substrates of tactile object recognition: An fMRI study. Human Brain Mapping, 21, 236–246.

Reich, L., Szwed, M., Cohen, L., & Amedi, A. (2011). A ventral visual stream reading center independent of visual experience. Current Biology, 21, 363–368.

Ricciardi, E., Vanello, N., Sani, L., Gentili, C., Scilingo, E. P., Landini, L., et al. (2007). The effect of visual experience on the development of functional architecture in hMT+. Cerebral Cortex, 17, 2933–2939.

Sadato, N., Pascual-Leone, A., Grafman, J., Ibanez, V., Deiber, M. P., Dold, G., et al. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528.

Saenz, M., Lewis, L. B., Huth, A. G., Fine, I., & Koch, C. (2008). Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. Journal of Neuroscience, 28, 5141–5148.

Simmons, W. K., & Martin, A. (2012). Spontaneous resting state BOLD fluctuations reveal persistent domain-specific neural networks. Social Cognitive and Affective Neuroscience, 7, 467–475.

Striem-Amit, E., Dakwar, O., Reich, L., & Amedi, A. (2012). The large-scale organization of "visual" streams emerges without visual experience. Cerebral Cortex, 22, 1698–1709.

Valyear, K. F., & Culham, J. C. (2010). Observing learned object-specific functional grasps preferentially activates the ventral stream. Journal of Cognitive Neuroscience, 22, 970–984.

Wei, T., Liang, X., He, Y., Zang, Y., Han, Z., Caramazza, A., et al. (2012). Predicting conceptual processing capacity from spontaneous neuronal activity of the left middle temporal gyrus. Journal of Neuroscience, 32, 481–489.

Wolbers, T., Klatzky, R. L., Loomis, J. M., Wutte, M. G., & Giudice, N. A. (2011). Modality-independent coding of spatial layout in the human brain. Current Biology, 21, 984–989.