Abstract

If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. At first, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early-blind people instead showed stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing a higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.

INTRODUCTION

Embodied theories of knowledge suggest that conceptual retrieval is partly based on the simulation of sensorimotor experience (Binder & Desai, 2011; Barsalou, 1999). Part of the evidence supporting this claim comes from studies showing that visual areas in the occipital cortex are activated when people process the meaning of words referring to concrete entities (Borghesani et al., 2016; Fernandino et al., 2015; Saygin, McCullough, Alac, & Emmorey, 2010). For instance, the size of different animals (e.g., elephant vs. mouse) is encoded in the posterior occipital cortex (Borghesani et al., 2016); whether or not an object has a typical color is reflected in the activation of the color-sensitive area V4 (Fernandino et al., 2015); and sentences referring to motion activate the motion-sensitive area V5 (Saygin et al., 2010). However, it is still debated whether the activity in visual regions during conceptual retrieval reflects the simulation of perceptual experience (Barsalou, 2016) or, instead, abstract representations of semantic features (e.g., size, color, movement) that are largely innate (Leshinskaya & Caramazza, 2016) and that are active both during retrieval and perception (Stasenko, Garcea, Dombovy, & Mahon, 2014).

Congenital blindness offers an ideal model to test these hypotheses: If conceptual processing is somehow grounded in experience, blind people, who experience the world in a different way, should also show a different neurobiology of concepts, at least concerning vision-related features (Casasanto, 2011). Several studies, however, seem to provide evidence against this idea (Bedny & Saxe, 2012). For instance, when sighted and blind individuals were asked to retrieve information about highly visual entities, knowledge about small and manipulable objects (Peelen et al., 2013) activated the lateral temporal-occipital complex; thinking about big nonmanipulable objects activated the parahippocampal place area (He et al., 2013); and processing action verbs (compared with nouns) activated the left posterior middle temporal gyrus (lpMTG; Bedny, Caramazza, Pascual-Leone, & Saxe, 2012) in both groups. These results seem to suggest that blindness leaves the neurobiology of conceptual retrieval largely unchanged and that activity in putatively visual areas does not necessarily encode visual simulations but rather more abstract representations of actions, movements, places, and so forth (Bedny & Saxe, 2012; Noppeney, Friston, & Price, 2003). It is crucial to note, however, that activation in the same areas does not necessarily mean identical processing between blind and sighted individuals, because there is evidence that several visual areas in early-blind individuals enhance their response to nonvisual senses (such as haptic or auditory processing; Kupers & Ptito, 2014; Frasnelli, Collignon, Voss, & Lepore, 2011), even if they may keep similar functional specialization (Dormal & Collignon, 2011).
Thus, areas that support visual simulations of conceptual features in sighted individuals may support haptic or auditory simulations of the same features in blind individuals (Reich, Szwed, Cohen, & Amedi, 2011; Ricciardi et al., 2007), with the possible exception of uniquely visual features that cannot be easily remapped onto other senses (Bi, Wang, & Caramazza, 2016). Nevertheless, from the point of view of embodied and situated cognition, it is still problematic that no study has so far reported a higher involvement of visual areas in sighted than blind individuals during conceptual retrieval.

Crucially, most of the previous studies relied on paradigms that investigated categorical knowledge that only partially depends on visual experience. For instance, different categories (e.g., animals vs. tools) can be discriminated based on touch, audition, taste, affordances, or the reward that they bring (Peelen & Downing, 2017; Bi et al., 2016). These are all features that are equally accessible to blind individuals. Moreover, previous tasks mostly involved the discrimination of concepts belonging to distinct superordinate categories rather than the analysis of concepts as individual entities (Handjaras et al., 2017) based on their idiosyncratic perceptual features. Therefore, it may not be surprising that associative regions (e.g., pMTG) or even regions of the anterior part of the ventral occipital-temporal cortex (VOTC) that are not strictly visual but receive multiple inputs from different sensory and nonsensory systems (such as the emotional, language, or navigation network) might show preferential responses to some specific superordinate categories independently of sensory inputs or even sensory (visual) experience (Mattioni et al., 2019; Handjaras et al., 2017; Peelen & Downing, 2017; van den Hurk, Van Baelen, & Op de Beeck, 2017).

On the other hand, we should expect greater differences between sighted and blind individuals in brain regions that are more strictly visual and encode lower level visual similarity of objects independently of their category. Such visual features are usually encoded in the posterior portion of the occipital cortex (BA 18–BA 19; including the lingual gyrus, cuneus, V4 and V5) during visual perception of objects and scenes (Bracci & Op de Beeck, 2016; Connolly et al., 2012; Naselaris, Prenger, Kay, Oliver, & Gallant, 2009). Moreover, some of these posterior occipital regions seem to encode perceptual information (e.g., size, color, motion) also during conceptual retrieval from words (Borghesani et al., 2016; Fernandino et al., 2015; Saygin et al., 2010).

Interestingly, for our purposes, it has been shown that the anterior part of the VOTC, known for its category specificity (Bracci & Op de Beeck, 2016; Grill-Spector & Weiner, 2014), is highly multimodal and presents a similar functional and connectivity fingerprint in sighted and blind individuals (Wang et al., 2015), whereas this similarity decreases in more posterior occipital regions (Wang et al., 2015) that are highly modality specific (i.e., visual). Accordingly, several studies have shown that regions of the occipital lobe, in early-blind individuals, engage in high-level cognitive processing (e.g., memory, language, math; Van Ackeren, Barbero, Mattioni, Bottini, & Collignon, 2018; Bedny, 2017; Amedi, Raz, Pianka, Malach, & Zohary, 2003; Büchel, 2003), suggesting that the same regions that encode visual perceptual features in sighted individuals may encode different perceptual or cognitive dimensions in blind people.

Building on these previous findings, we set out to investigate which brain areas encode the perceptual similarity of concepts (actions and colors) in sighted and early-blind people. In our experiment, we asked sighted and early-blind participants to rate pairs of concepts based on their perceptual similarity and analyzed the data using a repetition suppression framework (Barron, Garvert, & Behrens, 2016; Grill-Spector, Henson, & Martin, 2006; Wheatley, Weisberg, Beauchamp, & Martin, 2005). Repetition suppression is a useful tool to investigate whether and how perceptual distance is represented in the brain. For instance, items that are perceptually close should elicit adaptation in areas that encode the relevant perceptual features (Barron et al., 2016; Grill-Spector et al., 2006). We expected to see adaptation in posterior occipital regions for perceptually similar actions and colors in sighted, but not blind, individuals, because those regions now involve different computations. On the other hand, a categorical contrast between action and color trials (independently of perceptual similarity) should replicate previous results showing similar category-specific activations in multimodal areas of the brain in both sighted and blind individuals. Importantly, however, the similarity across groups in multimodal regions should be stronger for conceptual categories that can be perceptually experienced by both sighted and blind individuals. Therefore, in our experiment, we chose stimulus exemplars from the action and color categories. Including colors, which can be experienced through vision only, allowed us to test whether their different epistemological status (concrete in sighted vs. abstract in blind individuals) would influence their representation even in multimodal areas of the brain that usually show resilience to visual deprivation (Striem-Amit, Wang, Bi, & Caramazza, 2018).

METHODS

Participants

Thirty-six individuals took part in this experiment: 18 early-blind individuals (EB; eight women) and 18 sighted controls (SC; eight women). Participants were matched pairwise for sex, age (SC: mean = 36.11 years, SD = 7.52 years; EB: mean = 36.56 years, SD = 8.48 years), and years of education (SC: mean = 16.67 years, SD = 2.72 years; EB: mean = 15.39 years, SD = 3.48 years; see Table 1).

Table 1. 
Demographic Information of Sighted and Blind Participants
Sight Status | Age (years) | Sex | Years of Formal Education | Sight Status | Age (years) | Sex | Years of Formal Education
EB01 32 18 SC01 37 18 
EB02 52 18 SC02 46 18 
EB03 30 13 SC03 34 18 
EB04 29 13 SC04 26 16 
EB05 36 18 SC05 33 18 
EB06 27 18 SC06 27 18 
EB07 39 18 SC07 40 18 
EB08 48 SC08 51 
EB09 42 13 SC09 41 18 
EB10 36 18 SC10 40 18 
EB11 49 18 SC11 43 18 
EB12 36 18 SC12 39 18 
EB13 45 18 SC13 42 18 
EB14 25 13 SC14 25 16 
EB15 29 16 SC15 30 18 
EB16 30 13 SC16 26 13 
EB17 28 18 SC17 31 18 
EB18 45 SC18 39 13 
M 36.56   15.39   36.11   16.67 
SD 8.48   3.48   7.52   2.72 

M = male; F = female; EB = early blind; SC = sighted control; M = mean; SD = standard deviation.

All the blind participants lost sight at birth or before 3 years of age, and all of them reported not having visual memories (Table 2). All participants were blindfolded during the task. The ethics committee of the Besta Neurological Institute approved this study (Protocol fMRI_BP_001), and participants gave their informed consent before participation.

Table 2. 
Early-Blind Participants, Additional Information
Participant | Age of Blindness Onset | Cause of Blindness
EB01 Optic nerve hypoplasia 
EB02 Retinoblastoma 
EB03 Congenital retinal dystrophy 
EB04 Retinoblastoma 
EB05 Congenital microphtalmia 
EB06 Congenital microphtalmia 
EB07 Retrolental fibroplasia 
EB08 Optic nerve atrophy 
EB09 Congenital retinal dystrophy 
EB10 Retrolental fibroplasia 
EB11 Retrolental fibroplasia 
EB12 Retrolental fibroplasia 
EB13 Retrolental fibroplasia 
EB14 Leber's congenital amaurosis 
EB15 Retinitis pigmentosa 
EB16 Retrolental fibroplasia 
EB17 Agenesis of the optic nerve 
EB18 Glaucoma 

Stimuli

We selected six Italian color words (rosso/red, giallo/yellow, arancio/orange, verde/green, azzurro/blue, viola/purple) and six Italian action words (pugno/punch, graffio/scratch, schiaffo/slap, calcio/kick, salto/jump, corsa/run). All words were highly familiar nouns and were matched across categories (color, action) by number of letters (color: mean = 5.83, SD = 0.98; action: mean = 6, SD = 1.23), frequency (Zipf scale; color: mean = 4.02, SD = 0.61; action: mean = 4.18, SD = 0.4), and orthographic neighbors (Coltheart's N; color: mean = 14, SD = 9.12; action: mean = 15.33, SD = 12.42).
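As a sanity check, the reported matching by word length can be reproduced directly from the stimulus lists above; a minimal sketch (standard library only):

```python
# Recompute the reported mean word lengths for the two stimulus categories.
color_words = ["rosso", "giallo", "arancio", "verde", "azzurro", "viola"]
action_words = ["pugno", "graffio", "schiaffo", "calcio", "salto", "corsa"]

color_len = [len(w) for w in color_words]    # [5, 6, 7, 5, 7, 5]
action_len = [len(w) for w in action_words]  # [5, 7, 8, 6, 5, 5]

mean_color = sum(color_len) / len(color_len)
mean_action = sum(action_len) / len(action_len)
print(round(mean_color, 2), mean_action)  # 5.83 6.0
```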

Auditory files were made using a voice synthesizer (Talk To Me) with a female voice and edited into separate audio files with the same auditory properties (44100 Hz, 32 bit, mono, 78 dB of intensity). The original duration of each audio file (range = 356–464 msec) was extended or compressed to 400 msec using the PSOLA (pitch synchronous overlap and add) algorithm and the sound-editing software Praat (Boersma & Weenink, 2018). All the resulting audio files were highly intelligible.

Experimental Design

We designed a fast event-related fMRI paradigm during which participants listened to pairs of color and action words. In each trial, the two words were played one after the other with an SOA of 2000 msec.

The intertrial interval ranged between 4000 and 16000 msec. Participants were asked to judge the similarity of the two colors or the two actions from 1 to 5 (1 = very different, 5 = very similar). Responses were collected via an ergonomic hand-shaped response box with five keys (Resonance Technology, Inc.). All participants used their right hand to provide responses (thumb = very different, pinky = very similar). Participants were told that they had about 4 sec to provide a response after the onset of the second word of the pair, and they were encouraged to use the whole scale (1–5). Furthermore, the instruction was to judge the similarity of colors and actions based on their perceptual properties (avoiding reference to emotion, valence, or other nonperceptual characteristics). Blind participants were told to judge color pairs on the basis of their knowledge about the perceptual similarity between colors.

Color and action words were presented in all possible within-category combinations (15 color pairs, 15 action pairs). Each pair was presented twice in each run, in the two possible orders (e.g., red–yellow, yellow–red). Thus, there were 60 trials in each run, and the experiment consisted of five runs of 7 min. Stimuli were pseudorandomized using optseq2 to optimize the sequence of presentation of the different conditions. Three different optimized lists of trials were used across runs. List order was counterbalanced across participants.
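The trial structure described above (15 unordered pairs per category, each presented in both orders, for 60 trials per run) can be generated with a short sketch; the word lists are the stimuli reported in the Stimuli section:

```python
from itertools import combinations

color_words = ["rosso", "giallo", "arancio", "verde", "azzurro", "viola"]
action_words = ["pugno", "graffio", "schiaffo", "calcio", "salto", "corsa"]

def ordered_pairs(words):
    # 15 unordered pairs from 6 words, each presented in both orders
    pairs = list(combinations(words, 2))
    return pairs + [(b, a) for (a, b) in pairs]

run_trials = ordered_pairs(color_words) + ordered_pairs(action_words)
print(len(ordered_pairs(color_words)), len(run_trials))  # 30 60
```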

One early-blind participant was excluded from the analyses because, due to sleepiness, the participant responded to fewer than 70% of the trials throughout the experiment. One run of one sighted participant was excluded from the analysis because of a technical error during the acquisition, and two other runs (one in a sighted participant, one in a blind participant) were excluded because the participant responded to fewer than 70% of the trials in that specific run.

Conceptual Similarity Ratings

To perform the adaptation analysis, we divided the trials into similar pairs (e.g., red–orange) and different pairs (e.g., red–blue). We did so based on the participants' subjective ratings. For each participant, we took the average rating for each of the 15 word pairs in the action and color categories. Then, we automatically divided the 15 pairs into five intervals (four quantiles) of nearly equal size. This subdivision was performed using the function quantile in R (R Core Team, 2017), which divides a probability distribution into contiguous intervals of equal probabilities (i.e., 20%). The pairs in the first two intervals were the different pairs (low similarity ratings), the pairs in the third interval were the medium pairs, and the pairs in the fourth and fifth intervals were the similar pairs (see Figure 2B). However, in some cases, rating distributions were slightly unbalanced because some participants tended to find more “very different” pairs than “very similar” pairs. In these cases (eight participants for action ratings [three EB]; four participants for color ratings [one EB]), the automatic split into five equal intervals was not possible. Thus, we set the boundary between the second and third intervals at the rating average (for that given participant) and set the number of items in the third interval (not analyzed) to the minimum (one or two, depending on the case) to balance, as much as possible, the number of pairs in the Different and Similar groups. This procedure ensured that, in these special cases (as well as in all the others), the rating values of different pairs were always below the mean and those of similar pairs were always above the mean.
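The quantile-based split can be sketched as follows. This is a simplified Python analogue of the R `quantile` procedure described above (the function name is ours, and boundary handling may differ slightly from R's defaults):

```python
import numpy as np

def split_by_similarity(pair_ratings):
    """Assign each word pair to different/medium/similar based on five
    equal-probability intervals of a participant's mean ratings."""
    vals = np.array(list(pair_ratings.values()))
    # four cut points at the 20th/40th/60th/80th percentiles
    cuts = np.quantile(vals, [0.2, 0.4, 0.6, 0.8])
    labels = {}
    for pair, rating in pair_ratings.items():
        if rating <= cuts[1]:      # first two intervals: low similarity
            labels[pair] = "different"
        elif rating <= cuts[2]:    # third interval: not analyzed
            labels[pair] = "medium"
        else:                      # fourth and fifth intervals
            labels[pair] = "similar"
    return labels
```

With 15 evenly spread ratings, this yields roughly balanced Different and Similar groups with a small Medium group in between.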

MRI Data Acquisition

Brain images were acquired at the Neurological Institute Carlo Besta in Milano on a 3-T scanner with a 32-channel head coil (Achieva TX; Philips Healthcare) and gradient EPI sequences.

In the event-related experiment, we acquired 35 slices (voxel size 3 × 3 × 3.5) with no gap. The data in-plane matrix size was 64 × 64, with field of view = 220 mm × 220 mm, repetition time = 2 sec, flip angle = 90°, and echo time = 30 msec. In all, 1210 whole-brain images were collected during the experimental sequence. The first four images of each run were excluded from the analysis for steady-state magnetization. Each participant performed five runs, with 242 volumes per run.

Anatomical data were acquired using a T1-weighted 3D-TFE sequence with the following parameters: voxel size = 1 × 1 × 1 mm, matrix size = 240 × 240, repetition time = 2300 msec, echo time = 2.91 msec, inversion time = 900 msec, field of view = 240 mm, 185 slices.

MRI Data Analysis

We analyzed the fMRI data using SPM12 (www.fil.ion.ucl.ac.uk/spm/software/spm12/) and MATLAB R2014b (The MathWorks, Inc.).

Preprocessing

Preprocessing included slice timing correction of the functional time series (Sladky et al., 2011), realignment of functional time series, coregistration of functional and anatomical data, spatial normalization to an echoplanar imaging template conforming to the Montreal Neurological Institute (MNI) space, and spatial smoothing (Gaussian kernel, 6 mm FWHM). Serial autocorrelation, assuming a first-order autoregressive model, was estimated using the pooled active voxels with a restricted maximum likelihood procedure, and the estimates were used to whiten the data and design matrices.

Data Analysis

Following preprocessing steps, the analysis of fMRI data, based on a mixed-effects model, was conducted in two serial steps accounting, respectively, for fixed and random effects. In all the analyses, the regressors for the conditions of interest consisted of an event-related boxcar function convolved with the canonical hemodynamic response function according to a variable epoch model (Grinband, Wager, Lindquist, Ferrera, & Hirsch, 2008). Movement parameters derived from realignment of the functional volumes (translations in x, y, and z directions and rotations around x, y, and z axes) and a constant vector were also included as covariates of no interest. We used a high-pass filter with a discrete cosine basis function and a cutoff period of 128 sec to remove artifactual low-frequency trends.
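The variable-epoch regressor construction can be sketched as follows, using a simplified double-gamma HRF in place of SPM's canonical function (the HRF parameters and time grid are illustrative assumptions, not SPM's exact implementation):

```python
import math

def hrf(t):
    # Simplified canonical double-gamma HRF (peak ~5 s, undershoot ~15 s)
    if t <= 0:
        return 0.0
    return (t ** 5 * math.exp(-t) / math.gamma(6)
            - (1 / 6.0) * t ** 15 * math.exp(-t) / math.gamma(16))

def variable_epoch_regressor(onsets, durations, n_scans, tr=2.0, dt=0.1):
    """Boxcar from each event onset to onset + duration (here, the RT),
    convolved with the HRF and sampled at every TR."""
    n_fine = int(round(n_scans * tr / dt))
    boxcar = [0.0] * n_fine
    for onset, dur in zip(onsets, durations):
        start = int(round(onset / dt))
        stop = min(int(round((onset + dur) / dt)), n_fine)
        for i in range(start, stop):
            boxcar[i] = 1.0
    kernel = [hrf(i * dt) for i in range(int(round(32 / dt)))]  # 32-s kernel
    regressor = []
    for s in range(n_scans):
        i = int(round(s * tr / dt))
        acc = 0.0
        for j in range(min(len(kernel), i + 1)):
            acc += boxcar[i - j] * kernel[j] * dt
        regressor.append(acc)
    return regressor
```

A single trial with onset at 4 s and a 2.5-s response time produces a regressor that peaks roughly 5–6 s after the event, as expected from the hemodynamic lag.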

Adaptation analysis.

For each participant, the general linear model included six regressors corresponding to the three levels of similarity (different, medium, similar) in each condition (color, action). Color and action pairs in the medium condition were modeled as regressors of no interest.

At the first level of analysis, linear contrasts tested for repetition suppression (Different > Similar) collapsing across categories (action, color). The same contrasts were then repeated within each category (Color Different > Color Similar, Action Different > Action Similar). Finally, we tested for Similarity × Category interactions, testing whether the adaptation was stronger in one category compared with the other (e.g., [Color Different > Color Similar] > [Action Different > Action Similar]).
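These contrasts can be written as weight vectors over the six first-level regressors; the regressor ordering below is an assumption for illustration:

```python
import numpy as np

# Assumed regressor order:
# [action_different, action_medium, action_similar,
#  color_different,  color_medium,  color_similar]
adapt_all = np.array([1, 0, -1, 1, 0, -1])    # Different > Similar, both categories
adapt_action = np.array([1, 0, -1, 0, 0, 0])  # Action Different > Action Similar
adapt_color = np.array([0, 0, 0, 1, 0, -1])   # Color Different > Color Similar

# Similarity x Category interaction:
# (Color Different > Color Similar) > (Action Different > Action Similar)
interaction = adapt_color - adapt_action
print(interaction.tolist())  # [-1, 0, 1, 1, 0, -1]
```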

These linear contrasts generated statistical parametric maps, SPM(T). The resulting contrast images were then further spatially smoothed (Gaussian kernel 5 mm FWHM) and entered in a second-level analysis (RFX), corresponding to a random-effects model, accounting for intersubject variance. One-sample t tests were run on each group separately. Two-sample t tests were then performed to compare these effects between groups (blind vs. sighted).

Univariate analysis.

For each participant, changes in regional brain responses were estimated through a general linear model including two regressors corresponding to the two categories, action and color. The onset of each event was set at the beginning of the first word of the pair; the offset was determined by the participant's response, thus including RT (Grinband et al., 2008). Linear contrasts tested for action-specific (Action > Color) and color-specific (Color > Action) BOLD activity.

These linear contrasts generated statistical parametric maps, SPM(T). The resulting contrast images were then further spatially smoothed (Gaussian kernel 5 mm FWHM) and entered in a second-level analysis, corresponding to a random-effects model, accounting for intersubject variance. One-sample t tests were run on each group separately. Two-sample t tests were then performed to compare these effects between groups (blind vs. sighted), and conjunction analyses were performed to assess whether the two groups presented similar activation networks for the two contrasts of interest.

ROI Definition

The V4 and V5 ROIs were drawn from the literature, considering both perceptual localizers and evidence from semantic/conceptual tasks. These ROIs were restricted to the left hemisphere because several previous studies consistently showed unique or relatively stronger left-lateralized activation for sensorimotor simulation of color and movement during language processing (Fernandino et al., 2015; Saygin et al., 2010). We selected three peak coordinates for area V5: the first [−47 −78 −2] from a highly cited study contrasting the perception of visual motion versus static images (Dumoulin et al., 2000); the second [−44 −74 2] from a study (Saygin et al., 2010) showing V5 sensitivity to motion sentences (e.g., “The wild horse crossed the barren field”); and the third from a search on the online meta-analysis tool Neurosynth (neurosynth.org/) for the topic “action.” In Neurosynth, the area in the occipital cortex with the highest action-related activation was indeed V5 (peak coordinates: −50 −72 2). To avoid ROI proliferation, we averaged these three peak coordinates to obtain a single peak (average peak: −47 −75 1).

As for V4, we selected the color-sensitive occipital ROI considering perceptual localizers, as well as evidence of color-specific activity from semantic/conceptual tasks. Fernandino et al. (2015) reported a color-sensitive area in the left posterior collateral sulcus (at the intersection between the lingual and the fusiform gyrus; MNI peak coordinates: −16 −71 −12) associated with color-related words. This peak is close to the posterior V4 localization by Beauchamp and colleagues (peak coordinates: −22 −82 −16) in an MRI version of the Farnsworth-Munsell 100-Hue Test (Beauchamp, Haxby, Jennings, & DeYoe, 1999). A search in Neurosynth with the keyword “color” also highlighted a left posterior color-sensitive region along the collateral sulcus (peak coordinates: −24 −90 −10). We averaged these three peaks to find the center of our ROI (average peak: −21 −81 −13).

Only a few studies before us adopted a repetition-suppression paradigm using words (either visually or auditorily presented). In all these cases, semantic relatedness was tested, and results showed repetition suppression for semantically related words in the posterior lateral-temporal cortex (PLTC). Bedny, McGill, and Thompson-Schill (2008) observed increased neural adaptation in the PLTC (peak coordinates: 57 −36 21) for repeated words (fan–fan) when the words were presented in a similar context (summer–fan, ceiling–fan) compared with when different contexts triggered different meanings (e.g., admirer–fan, ceiling–fan). This result conceptually replicated previous studies (Wible et al., 2006; Kotz, Cappa, von Cramon, & Friederici, 2002) showing semantic adaptation in the bilateral PLTC for related (e.g., dog–cat) versus unrelated (e.g., dog–apple) word pairs (peak coordinates: −42 −27 9 and −51 −22 8). These three peaks were averaged to find the center of our ROI in both hemispheres (average peak: ±50 −28 13).
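The peak averaging used to center the three ROIs reduces to a coordinate-wise mean; a short sketch reproducing the reported centers (the function name is ours):

```python
def avg_peak(peaks):
    # Coordinate-wise mean of literature peaks (MNI, mm), rounded
    return tuple(round(sum(coord) / len(peaks)) for coord in zip(*peaks))

v5 = avg_peak([(-47, -78, -2), (-44, -74, 2), (-50, -72, 2)])
v4 = avg_peak([(-16, -71, -12), (-22, -82, -16), (-24, -90, -10)])
# PLTC: x coordinates taken as absolute values, since the ROI is
# mirrored in both hemispheres (x = +/-50)
pltc = avg_peak([(57, -36, 21), (42, -27, 9), (51, -22, 8)])
print(v5, v4, pltc)  # (-47, -75, 1) (-21, -81, -13) (50, -28, 13)
```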

Statistical Analysis

At the whole-brain level, statistical inference was made at a corrected cluster level of p < .05 family-wise error (FWE; with a standard voxel-level threshold of p < .001 uncorrected).

All ROI analyses were performed using small volume correction using spheres with an 8-mm radius centered around the ROI peak coordinates (see previous section). Within the ROI, results were considered significant at a threshold of p < .05, FWE-corrected voxel-wise. Here, and throughout the article, brain coordinates are reported in MNI space.
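The small volume used for correction can be sketched as a mask of voxel centers within 8 mm of the ROI peak; the 2-mm grid spacing below is an assumption for illustration, not a parameter reported above:

```python
def sphere_voxels(center, radius=8.0, voxel=2.0):
    """Voxel centers (mm, MNI) within a sphere around an ROI peak,
    as used to build a small-volume-correction mask."""
    cx, cy, cz = center
    n = int(radius // voxel) + 1
    voxels = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                x, y, z = cx + i * voxel, cy + j * voxel, cz + k * voxel
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2:
                    voxels.append((x, y, z))
    return voxels
```

For example, an 8-mm sphere centered on the V5 peak reported above contains 257 voxel centers on a 2-mm grid, including the peak itself.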

RESULTS

Brain Regions Active in Sighted and Blind Individuals when Contrasting Action and Color Categories

At first, we ran classic univariate analysis, comparing items across categories, to find category-specific activations across sighted and blind individuals (Bedny et al., 2012; Noppeney et al., 2003).

Behavioral Analysis

RT analysis using a mixed ANOVA, with Category (action, color) as within-subject factor and Group (sighted, blind) as between-subject factor, showed no difference between categories, F(1, 33) = 2.37, p > .05, η2 = .07; between groups, F(1, 33) = 0.074, p > .05, η2 = .002; and no Category × Group interaction, F(1, 33) = .69, p > .05, η2 = .02.

fMRI Analysis

The contrast Action > Color did not reveal any significant difference between groups, suggesting a comparable categorical representation of action concepts across sighted and blind individuals (see Figure 1A and B and Table 3 for details). Indeed, a conjunction analysis between groups showed a common significant activation in the lpMTG (peak: −54 −61 5; Figure 1E).

Figure 1. 

Contrasts between categories. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A, B) Suprathreshold clusters (p < .05 FWE corrected) for the contrast Action > Color in sighted and blind participants, respectively. (C–D) Suprathreshold clusters (p < .05 FWE corrected) for the contrast Color > Action in sighted and blind participants, respectively. (E) Suprathreshold clusters (p < .05 FWE corrected) showing common activity in the lpMTG for the contrast Action > Color in both sighted and early-blind participants (conj. = conjunction analysis). (F) Suprathreshold voxels (p < .001 uncorrected, only for illustration purposes) showing common activity in the right precuneus for the contrast Color > Action in both sighted and early blind participants (conj. = conjunction analysis). (G) Suprathreshold clusters (p < .05 FWE corrected) showing greater activity in the rIPS, in sighted compared with early-blind participants, for the contrast Color > Action.


Table 3. 
Regional Responses for the Comparison between Action and Color Trials across Sighted and Early Blind Participants
Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE
Sighted, Action > Color 
 L middle temporal gyrus 1134 −57 −61 5.80 <.001 
 L inferior frontal gyrus S.C. −48 29 −1 5.31   
 L middle temporal gyrus S.C. −60 −43 29 5.15   
 R middle temporal gyrus 474 60 −10 −7 5.14 <.001 
S.C. 57 −13 4.96   
 R superior temporal gyrus S.C. 51 −28 −1 4.74   
  
Sighted, Color > Action 
 R orbital gyrus 65 27 35 −13 4.98 .009a 
 L orbital gyrus 71 −30 35 −13 4.90 .012a 
 precuneus 91 −61 29 4.11 .06b 
  
Blind, Action > Color 
 L middle temporal gyrus 315 −54 −61 4.69 <.001 
S.C. −42 −67 17 4.46   
S.C. −45 −52 14 3.49   
 R calcarine/posterior cingulate 325 −67 4.63 <.001 
S.C. −76 11 4.07   
 L calcarine S.C. −9 −82 11 4.07   
 R middle temporal gyrus 203 57 −55 4.40 .003 
S.C. 42 −55 3.94   
S.C. 63 −43 3.63   
 R inferior frontal gyrus 256 42 20 4.21 <.001 
 R superior temporal pole S.C. 48 11 −19 3.97   
 R inferior frontal gyrus S.C. 51 17 −1 3.80   
  
Blind, Color > Action 
 R precuneus 109 −52 20 4.72 .036 
 L medial frontal gyrus 174 −9 59 −7 4.20 .008 
S.C. 53 −10 3.85   
  
Sighted ∩ Blind, Action > Color 
 L middle temporal gyrus 155 −54 −61 4.69 .01 
  
Sighted > Blind, Color > Action 
 R intraparietal sulcus 146 33 −43 35 4.33 .013 

Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.

a Brain activations significant after FWE correction at voxel level over the whole brain.

b Indicates marginally significant clusters (pFWE < .1).

On the other hand, a conjunction analysis between groups for Color > Action did not reveal any common activation between sighted and blind individuals after correction for multiple comparisons at the whole-brain level. However, when displaying the conjunction results at a more lenient threshold (p < .001 uncorrected; Figure 1F), we observed common activity for color concepts in the right precuneus (peak: 6 −55 26). Accordingly, within-group analyses showed significant precuneus activity in blind individuals (peak: 6 −52 20, p = .04) and marginally significant activity in sighted individuals (peak: 0 −61 29, p = .06), with no significant difference between groups (Table 3; Figure 1C, D).

More importantly, a further analysis of the contrast Color > Action revealed a cluster in the right parietal cortex, in and around the right intraparietal sulcus (rIPS), showing higher activity for color concepts in sighted compared with blind participants (peak: 33 −43 35; Figure 1G).

Altogether, these results show both similar and different patterns of activity during conceptual processing in sighted and blind individuals when different categories are contrasted against each other (independently of perceptual similarity). As in previous studies (Bedny et al., 2012; Noppeney et al., 2003), categorical preference was found outside the modality-specific visual cortex, in areas considered highly multimodal, such as the lpMTG, the precuneus, and the IPS (Binder, Desai, Graves, & Conant, 2009). The network of activations elicited by action words (contrasted with color words) was highly similar across groups, with a common peak of activity in the lpMTG, replicating previous results (Bedny et al., 2012; Noppeney et al., 2003). On the other hand, the regions activated specifically by color words (compared with action words) partially differed between sighted and blind individuals. In particular, the rIPS was activated by color knowledge more in sighted than in blind individuals. The greater group difference for colors, compared with actions, can be explained by the fact that colors lack a perceptual referent without vision, thus acquiring a different epistemological status in blind and sighted individuals (abstract vs. concrete).

Perceptual Similarity Is Encoded in Occipital Areas in the Sighted but Not in the Blind

The rationale behind the adaptation analyses was that the direct contrast between pairs with high versus low perceptual differences would reveal neural adaptation (Barron et al., 2016; Grill-Spector et al., 2006; Wheatley et al., 2005), thereby probing regions that are specifically sensitive to the "perceptual distance" between concepts.
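
The logic of this contrast can be illustrated with a toy numerical sketch (purely simulated values, not the study's data): in a region that adapts, the response to perceptually similar pairs is suppressed relative to different pairs, so the Different > Similar difference comes out positive.

```python
import random
import statistics

random.seed(0)

# Simulated per-pair response amplitudes (arbitrary units) in a hypothetical
# region sensitive to perceptual distance: perceptually similar pairs evoke a
# suppressed (adapted) response; different pairs are released from adaptation.
similar_pairs = [random.gauss(0.8, 0.2) for _ in range(100)]
different_pairs = [random.gauss(1.2, 0.2) for _ in range(100)]

# The repetition-suppression (adaptation) effect is the Different > Similar
# contrast; a reliably positive value flags the region as distance-sensitive.
adaptation_effect = statistics.mean(different_pairs) - statistics.mean(similar_pairs)
print(adaptation_effect > 0)  # True
```

The actual analysis tests this difference voxel-wise in a GLM rather than on raw averages, but the sign logic is the same.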

Behavioral Analysis

Similarity ratings were highly correlated between sighted and blind individuals, both for action (r = .99) and color concepts (r = .93; Figure 2A). To perform the adaptation analysis, we divided the trials into similar pairs (e.g., red–orange) and different pairs (e.g., red–blue) based on each participant's subjective ratings. Rating distributions for each participant and category (color, action) were divided into five intervals with a similar number of items (see Methods section for details). Stimulus pairs in the first two intervals were labeled as different (low similarity ratings), the third interval contained medium pairs, and the fourth and fifth intervals contained similar pairs (high similarity ratings; Figure 2B). Overall, the average number of "different" trials was slightly larger than that of "similar" ones (126 vs. 115), F(1, 33) = 8.41, p = .007, η2 = .20 (Figure 2C). However, there was no Similarity × Group interaction, F(1, 33) = 0.18, p = .67, η2 = .004, indicating that this imbalance (which reflected personal judgments of similarity) was the same across SC and EB (Figure 2C).1 An analysis of RTs showed that medium pairs (not analyzed in fMRI) had on average longer latencies than similar and different ones (main effect of similarity: F(2, 66) = 21.07, p < .001, η2 = .38). This was expected because pairs that are neither clearly similar nor clearly different require longer and more difficult judgments. Crucially, there was no difference in RTs between different (mean = 1.80 sec, SD = 0.39) and similar pairs (mean = 1.79 sec, SD = 0.37), F(1, 33) = 0.09, p = .76, η2 = .003, and no interaction between similarity and group, F(1, 33) = 0.04, p = .84, η2 = .001 (Figure 2D).
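
The quintile-based labeling described above can be sketched as follows (an illustrative reconstruction: the helper `label_pairs` and its tie-handling are assumptions, not the paper's exact code):

```python
import statistics

def label_pairs(ratings):
    """Split one participant's similarity ratings (one category) into five
    intervals with a similar number of items: the two lowest intervals map to
    'different', the middle interval to 'medium', the two highest to
    'similar'. Hypothetical sketch of the procedure described in the text."""
    cuts = statistics.quantiles(ratings, n=5)  # 4 interior quintile cut points
    names = ["different", "different", "medium", "similar", "similar"]
    # A rating's bin index (0..4) is the number of cut points it exceeds.
    return [names[sum(r > c for c in cuts)] for r in ratings]

# 100 word pairs with ratings 1..100: expect ~40/20/40 different/medium/similar.
labels = label_pairs(list(range(1, 101)))
counts = {k: labels.count(k) for k in ("different", "medium", "similar")}
print(counts)  # {'different': 40, 'medium': 20, 'similar': 40}
```

With real (tied, skewed) ratings, the bins are only approximately equal in size, consistent with the slight "different" versus "similar" imbalance reported above.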

Figure 2. 

Adaptation, behavioral analysis. (A) Similarity judgments were highly correlated across groups both for actions and color. (B) Conceptual schema of the division of word pairs in “different” and “similar” based on subjective similarity ratings. (C) Bar plot depicting the average number of items in the “different,” “medium,” and “similar” categories. The number of items in the “different” and “similar” categories is very similar across groups (number of trials ± SEM). (D) Bar plot depicting the average RT in the “different,” “medium,” and “similar” categories. The average RTs of the “different” and “similar” categories is very similar across groups (seconds ± SEM).

fMRI Analysis

To find brain areas that showed adaptation based on conceptual similarity, we looked at the contrast Different Pairs > Similar Pairs (medium pairs were modeled as a regressor of no interest). In sighted individuals, similar colors elicited repetition suppression in a circumscribed region at the border between the left fusiform gyrus and the left lingual gyrus, along the posterior bank of the collateral sulcus (peak: −21 −82 −7; Figure 3A). Interestingly, the posterior collateral sulcus (pCoS) is the color-sensitive area originally defined as V4 by Zeki et al. (1991), later identified as the posterior patch of the V4 complex (Lafer-Sousa, Conway, & Kanwisher, 2016; Beauchamp et al., 1999), and recently found to encode color-related knowledge during semantic processing (Fernandino et al., 2015). Still in sighted individuals, similar actions elicited repetition suppression in several regions of the posterior occipital cortex, including V4, V5, the posterior parahippocampal gyrus, as well as the middle occipital gyrus (see Table 4 and Figure 3B). Pooling together action and color, thus looking at which brain regions encoded visual similarity independently of category, we found a significant cluster (whole-brain analysis) in and around the left lingual gyrus (peak coordinates: −24 −70 −7; Figure 4A). Whole-brain analysis in sighted individuals also showed that differences between the two categories (action and color) in terms of repetition suppression did not reach significance. However, planned ROI analysis in the motion-sensitive region V5 showed stronger repetition suppression for actions compared with colors (peak: −51 −76 8), t(33) = 3.04, p = .02.

Figure 3. 

Adaptation, fMRI results. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A) Suprathreshold voxels showing neural adaptation for similar color words in the left V4 of sighted participants. (B) Suprathreshold voxels showing neural adaptation for similar action words in the occipital cortex of sighted participants. (C) Suprathreshold voxels showing neural adaptation for similar color words in the temporal and precentral regions of blind participants. (D) Suprathreshold voxels showing neural adaptation for similar action words in the temporal and precentral regions of blind participants. Voxels thresholded at p < .005 uncorrected, for illustration only.

Table 4. 
Regional Responses for Adaptation Analysis (Repetition Suppression) across Sighted and Early-Blind Participants for Color Word Pairs (Figure 3)
Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE
Sighted, Different > Similar 
 L V4/pCoS   −21 −82 −7 3.23 .013a 
  
Blind, Different > Similar 
 R superior temporal gyrus 67 54 −1 −16 4.61 .038b 
 R superior temporal gyrus 132 60 −28 −5 4.11 .022 
S.C. 51 −22 −1 3.46   
 R middle temporal gyrus S.C. 48 −34 −4 3.46   
 L superior temporal gyrus 104 −57 −10 −10 4.23 .04 
 R precentral gyrus 249 21 −28 59 4.44 .001 
S.C. 39 −13 50 4.33   
S.C. 21 −19 74 4.14   
  
Sighted > Blind, Different > Similar 
 L lingual gyrus   −21 −82 −7 3.40 .008a 
  
Blind > Sighted, Different > Similar 
 L superior temporal gyrus   −48 −31 20 2.85 .036c 
 R superior temporal gyrus   51 −28 20 3.11 .028c 

Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.

a Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V4.

b Brain activation significant after FWE correction at voxel level over the whole brain.

c Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.

Figure 4. 

Adaptation, fMRI results. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A) Suprathreshold voxels showing neural adaptation for similar word pairs (both color and action) in the occipital cortices of sighted participants. (B) Suprathreshold voxels showing neural adaptation for similar word pairs (both color and action) in the temporal and somatosensory-motor cortices of early-blind participants. Suprathreshold voxels showing neural adaptation for similar word pairs (both color and action) in (C) sighted compared with blind participants and (D) blind compared with sighted participants. Voxels thresholded at p < .005 uncorrected, for illustration only; planned ROIs are indicated in transparent green on the inflated cortex. Average signal change (arbitrary units) extracted at group maxima, within the planned ROI, as a result of small volume correction: from (E) left V4, (F) left V5, and (G) left PLTC, for illustration only. Notice that the same statistical results are obtained when, instead of using small volume correction, we compare the average voxel activity within the planned ROI: (i) left V4, Conceptual Similarity × Group interaction, F = 14.20, p < .001; (ii) left V5, Conceptual Similarity × Category × Group interaction, F = 4.84, p = .035; (iii) left PLTC, Conceptual Similarity × Group interaction, F = 5.60, p = .024; right PLTC, Conceptual Similarity × Group interaction, F = 8.51, p = .006.

Repetition suppression analysis in blind individuals provided very different results. Similar colors elicited repetition suppression in the superior and middle temporal gyri, bilaterally, with a peak in the right anterior superior temporal gyrus (peak: 54 −1 −16; see Table 4 and Figure 3C). Moreover, significant repetition suppression emerged in the bilateral precentral gyrus. Similar actions also induced repetition suppression in the posterior superior temporal gyrus bilaterally, although results were relatively weaker (compared with color) and significance was reached only with ROI analysis (left peak: −54 −34 11, t(33) = 3.33, p = .03; right peak: 48 −28 8, t(33) = 3.71, p = .008; Table 5). At the whole-brain level, there was no significant difference between the two categories (action, color) in terms of adaptation effect, and pooling the two categories together, results showed that perceptual similarity in blind individuals was encoded in the bilateral middle and superior temporal gyri (Figure 4B).

Table 5. 
Regional Responses for Adaptation Analysis (Repetition Suppression) across Sighted and Early-Blind Participants for Action Word Pairs
Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE
Sighted, Different > Similar 
 R parahippocampal gyrus 208 15 −40 −7 4.64 .007 
 R lingual gyrus S.C. 24 −55 −10 3.93   
S.C. 15 −67 −4 3.67   
 R middle occipital gyrus 566 27 −88 11 4.44 .001 
S.C. 39 −76 −1 4.24   
 R superior occipital gyrus S.C. 21 −88 23 3.76   
 L superior occipital gyrus 207 −24 −88 23 4.17 .007 
S.C. −15 −94 23 3.92   
 L Lingual gyrus 156 −18 −67 −7 3.91 .02 
  
Blind, Different > Similar 
 L superior temporal gyrus   −54 −34 11 2.88 .030a 
 R superior temporal gyrus   48 −28 3.36 .008a 
  
Sighted > Blind, Different > Similar 
 R middle occipital gyrus 442 27 −85 11 4.49 .001 
S.C. 36 −76 4.11   
 R posterior fusiform gyrus S.C. 39 −70 −13 3.85   
 L V4/pCoS   −27 −79 −10 2.98 .024b 
 L V5   −45 −79 3.36 .008c 

Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.

a Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.

b Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V4.

c Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V5.

Between-group Comparisons

Between-group statistical comparisons confirmed the picture emerging from the within-group analyses. Pooling together color and action trials, sighted participants showed greater adaptation for perceptually similar concepts in several occipital areas, with peaks in the left superior occipital gyrus (−24 −91 26), the left lingual gyrus (−24 −70 −7), and the right middle occipital gyrus (27 −85 11). The contrast Blind > Sighted did not show significant results at the whole-brain level. However, planned ROI analysis in PLTC revealed significantly greater adaptation for similar concepts in blind than in sighted individuals, Conceptual Similarity × Group interaction: left PLTC = −48 −31 20, t(33) = 2.75, p = .04; right PLTC = 45 −28 11, t(33) = 3.13, p = .015.
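
The between-group ROI test can be sketched as follows (a hypothetical reconstruction, not the study's pipeline: `group_adaptation_t` and the simulated betas are illustrative; here we simulate the occipital pattern, where sighted participants adapt and blind participants do not):

```python
import math
import random
import statistics

def group_adaptation_t(sighted, blind):
    """Each subject contributes one adaptation index (mean ROI response to
    'different' minus 'similar' pairs); groups are then compared with an
    independent-samples t-test (pooled variance). Inputs are lists of
    (beta_different, beta_similar) tuples, one per subject."""
    a = [d - s for d, s in sighted]  # per-subject adaptation index, group 1
    b = [d - s for d, s in blind]    # per-subject adaptation index, group 2
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

random.seed(1)
# Hypothetical group sizes of 20 sighted and 15 blind give df = 33, matching
# the t(33) statistics reported in the text.
sighted = [(random.gauss(1.0, 0.3), random.gauss(0.6, 0.3)) for _ in range(20)]
blind = [(random.gauss(0.8, 0.3), random.gauss(0.8, 0.3)) for _ in range(15)]
t, df = group_adaptation_t(sighted, blind)
print(df)  # 33
```

The reported PLTC effect has the opposite direction (Blind > Sighted), which in this sketch simply flips the sign of t.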

Category-specific results were also in line with what was previously observed. Perceptually similar colors induced stronger adaptation in V4 for sighted people and in bilateral PLTC for blind people (Figure 4E–G; Table 4). On the other hand, perceptually similar actions showed stronger adaptation in several occipital areas in sighted individuals (e.g., V4, V5, right posterior fusiform), but no significantly greater adaptation was found in blind individuals (Figure 4E–G; Table 5), suggesting that the adaptation effect in left/right PLTC was driven mostly by color words (although the Conceptual Similarity × Group × Category interaction was not significant in this ROI; see Figure 4G and Table 6).

Table 6. 
Regional Responses for Adaptation Analysis (Repetition Suppression) across Sighted and Early Blind, Considering Both Color and Action Trials
Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE
Sighted, Different > Similar 
 L lingual gyrus 180 −24 −70 −7 4.33 .013 
S.C. −15 −70 −10 4.32   
  
Blind, Different > Similar 
 L middle temporal gyrus 119 −60 −10 −7 4.20 .049 
S.C. −63 −22 3.90   
 L superior temporal gyrus S.C. −51 −13 −1 3.39   
 R superior temporal gyrus 323 57 −28 4.23 .001 
S.C. 48 −37 20 3.92   
S.C. 54 −22 3.92   
 L postcentral gyrus 105 −48 −13 38 4.12 .07a 
 R precentral gyrus 121 27 −25 56 3.71 .02 
S.C. 36 −22 62 3.70   
S.C. 36 −16 44 3.45   
  
Sighted > Blind, Different > Similar 
 L superior occipital gyrus 141 −24 −91 26 4.35 .03 
S.C. −9 −97 20 3.77   
 L middle occipital gyrus S.C. −15 −100 14 3.75   
 R middle occipital gyrus 250 27 −85 11 3.97 .003 
S.C. 36 −79 3.67   
 R superior occipital gyrus S.C. 27 −79 23 3.79   
 L lingual gyrus 165 −24 −70 −7 3.91 .017 
 L middle occipital gyrus S.C. −39 −73 3.48   
 L middle occipital gyrus S.C. −27 −94 3.25   
  
Blind > Sighted, Different > Similar 
 L superior temporal gyrus   −48 −31 20 2.75 .040b 
 R superior temporal gyrus   45 −28 11 3.13 .015b 

Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.

a Indicates marginally significant clusters (pFWE < .1).

b Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.

DISCUSSION

Embodied approaches to conceptual knowledge suggest that concepts are partly grounded in our sensory and motor experience of the world (Binder & Desai, 2011; Barsalou, 1999). A straightforward hypothesis emerging from these theories is that people who perceive the world in a different way, such as blind people, should also have different conceptual representations (Casasanto, 2011).

In our study, we tested this hypothesis by characterizing the brain activity of sighted and early-blind individuals while they rated the perceptual similarity of action and color concepts during fMRI. In particular, we investigated which brain regions encode the perceptual similarity of retrieved concepts using an adaptation paradigm. Results in the sighted group showed that word pairs referring to similar colors induced repetition suppression in a circumscribed region along the posterior segment of the left collateral sulcus (Figure 3A). This region is part of the V4 complex, which is involved in the perception of color (Lafer-Sousa et al., 2016; Beauchamp et al., 1999; Zeki et al., 1991), and it has been shown to be particularly sensitive to the retrieval of color knowledge (Fernandino et al., 2015). On the other hand, word pairs referring to similar actions led to adaptation in several regions in the occipital lobe, including V4, V5, the posterior parahippocampal gyrus, and the middle occipital gyrus (Figure 3B). This larger pattern of activation may reflect the multifaceted nature of action representations, which are based on the integration of several visual domains, such as body, shape, movement, and scene. The fact that both colors and actions elicit repetition suppression in the V4 region is not surprising, given that V4 has also been implicated in shape/object representations (Chang, Bao, & Tsao, 2017; Fernandino et al., 2015).

Crucially, early-blind participants did not show repetition suppression in posterior occipital areas, disclosing a strikingly different neurobiology of concepts as a function of visual deprivation. Posterior occipital regions are known to encode visual features in sighted individuals and to be sensitive to visual similarity independently of categorical membership (Borghesani et al., 2016; Bracci & Op de Beeck, 2016; Proklova, Kaiser, & Peelen, 2016; Fernandino et al., 2015; Connolly et al., 2012; Naselaris et al., 2009; Kriegeskorte et al., 2008). Our results are thus in line with the prediction that this level of representation (i.e., low-level visual similarity) is not available to blind people and is highly sensitive to visual deprivation, contrary to a categorical level of representation that is only partially determined by visual appearance (Peelen & Downing, 2017; Bracci & Op de Beeck, 2016). Indeed, previous studies showed that the anterior part of the VOTC, known for its category-specific parcellation (Grill-Spector & Weiner, 2014), presents a similar functional and connectivity fingerprint in sighted and blind individuals (van den Hurk et al., 2017; Wang et al., 2015), whereas this similarity decreases in more posterior occipital regions (Wang et al., 2015). We suggest that this is due, at least in part, to the fact that these posterior occipital regions represent perceptual (visual) features during conceptual retrieval in sighted, but not in blind, individuals.

On the other hand, perceptually similar colors and actions induced greater adaptation in the PLTC of blind individuals. Several previous studies using our adaptation methodology with sighted participants, but with shorter SOAs and semantically related words (e.g., dog–leash), elicited adaptation in the same region (Bedny et al., 2008; Wible et al., 2006; Kotz et al., 2002), pointing to its role in coding semantic relatedness (Bedny et al., 2008). These results suggest that blind individuals may rely (more than sighted people) on a semantic code to retrieve perceptual similarities. This hypothesis is in line with studies showing greater use of verbal knowledge (instead of a visuospatial code) in the blind population when retrieving information from memory (Bottini, Mattioni, & Collignon, 2016; Cattaneo et al., 2008). It is important to note that the activation of a semantic code, instead of a visual one, can account for the fact that similarity ratings of actions and colors were highly comparable between sighted and blind individuals, despite their different brain activation patterns. This is particularly interesting in the case of color knowledge, which cannot be derived directly from nonvisual experience. It has been suggested that a purely amodal semantic system can contain information about the perceptual characteristics of the world (Lewis, Zettersten, & Lupyan, 2019; Louwerse & Connell, 2011) and that congenitally blind people can show good knowledge of the similarity of basic color terms (Barilari, de Heering, Crollen, Collignon, & Bottini, 2018; Saysani, Corballis, & Corballis, 2018; Marmor, 1978). Although the indirect remapping of color knowledge onto nonvisual perceptual properties of objects (e.g., temperature) may play a role in color conceptualization in the blind, there is also evidence that verbal knowledge of color similarity can be enough to provide reliable information about a few basic color categories, like the ones used in our task.
However, when finer grained perceptual knowledge is evaluated, early-blind and sighted people show behavioral differences in their representation of color concepts (Kim, Elli, & Bedny, 2019).

As predicted, contrasting different categories (action vs. color), independently of within-category perceptual similarity, showed some commonalities in brain activity across the two groups (Peelen, He, Han, Caramazza, & Bi, 2014; Bedny et al., 2012; Noppeney et al., 2003), but also some differences (Striem-Amit et al., 2018). Both sighted and blind individuals engaged the lpMTG during action processing. This area is located outside the visual cortex, in a multimodal region that typically displays category specificity in a format that is at least partially independent from perceptual appearance (Peelen & Downing, 2017; Bracci & Op de Beeck, 2016) and typically resilient to visual deprivation (Dormal et al., 2017; Wang et al., 2015; Bedny et al., 2012; Noppeney et al., 2003).2

On the other hand, the contrast Color > Action did not reveal strong commonalities between groups.3 Instead, the posterior portion of the rIPS showed a stronger preference for color trials in sighted compared with blind individuals. The IPS is known to be involved in the perception of color (Cheadle & Zeki, 2014; Zeki & Stutters, 2013; Beauchamp et al., 1999) as well as other visual features (Swisher, Halko, Merabet, McMains, & Somers, 2007; Xu, 2007; Grill-Spector, 2003). Its anatomical position makes it a good candidate to work at the interface between perceptual and conceptual representations (Cheadle & Zeki, 2014) and, thus, to be sensitive to the lack of perceptual referent during conceptual retrieval. Interestingly, the peak of color-specific activity that we found (peak coordinates: 33 −43 35) is very close to the color-specific rIPS area found by Cheadle and Zeki (2014; peak coordinates: 30 −39 45). The lack of visual input in blind people prevents the formation of perceptually driven color representations, which may limit the contribution of the IPS during the retrieval of color knowledge. This is not the case for action representation, which has a direct perceptual referent also in blind individuals (via the spared senses).

What is the role of modality-specific simulations, like the one elicited in visual cortices in the sighted, during conceptual retrieval? It has been suggested that modality-specific representations may not always be necessary during conceptual retrieval and could instead be related to processing demands (Ostarek & Huettig, 2017; Bottini, Bucur, & Crepaldi, 2016). For instance, visual representations of low-level characteristics of objects may not be activated when a shallow processing such as a lexical decision (Ostarek & Huettig, 2017) or an orthogonal conceptual task (e.g., judging semantic relatedness; Martin, Douglas, Newsome, Man, & Barense, 2018) is required. In contrast, our task required a relatively deep conceptual discrimination based on perceptual features, which may have therefore triggered the reenactment of perceptual-like processing in the occipital cortex of sighted people.

From a broader theoretical point of view, the results of this study are in line with a hierarchical model of conceptual representations based on progressive levels of abstraction (Barsalou, 2016; Binder, 2016; Binder et al., 2016; Fernandino et al., 2015; Martin, 2015). At the top of the hierarchy, multimodal representations may coexist with purely symbolic ones organized in an explicit propositional code (Mahon & Caramazza, 2008) or a linguistic code based on word–word co-occurrence statistics (Landauer & Dumais, 1997). It is possible that a large number of conceptual processes can take place involving only the higher levels of representation, supported by conceptual hubs such as pMTG (Fernandino et al., 2015; Binder & Desai, 2011; Binder et al., 2009), as well as category-sensitive regions in anterior VOTC (Peelen & Downing, 2017; Binder & Desai, 2011) or language regions in temporal cortices (Anderson et al., 2017; Anderson, Bruni, Lopopolo, Poesio, & Baroni, 2015). Moreover, multimodal and abstract representations may be ideal to interact with lexical entries in the context of compositional (Binder, 2016), highly automatic (Bottini, Bucur, et al., 2016), or shallow (Ostarek & Huettig, 2017) semantic representations that are continuously required by natural language processing (Binder, 2016).

Overall, our results show that early visual deprivation does partially change the neural bases of conceptual retrieval, but only at specific levels of representation: Whereas the category specificity of concepts is largely retained, the fine-grained visual features of objects, properties, and events, as they are represented in modality-specific occipital areas, are not available to blind individuals. People who perceive the world in a different way have, at least in part, different grounded conceptual representations.

Acknowledgments

This work was supported by a European Research Council starting grant (MADVIS Grant 337573) attributed to O. C. O. C. is a research associate at the Fond National de Recherche Scientifique de Belgique. We wish to extend our gratitude to Michela Picchetti, Mattia Verri, and Alberto Redolfi for technical support during fMRI acquisition. We are also extremely thankful to our blind participants; the Unione Ciechi e Ipovedenti in Trento, Milano, Savona, and Trieste; and the Blind Institute of Milano. We also thank Yanchao Bi and Xiaoying Wang for sharing brain maps from their previously published data. R. B. and O. C. designed the research; R. B., S. F., A. N., and V. C. performed the research; R. B. analyzed the data in interaction with O. C.; R. B. and O. C. drafted the article; all authors revised and edited the draft and agreed on the final version of the article.

Reprint requests should be sent to Roberto Bottini, University of Trento (CIMeC), Via delle regole 101, 38123, Mattarello, Italy, or via e-mail: bottini.r@gmail.com or Olivier Collignon, University of Louvain (UCL), 10, Place du Cardinal Mercier; 1348 Louvain-La-Neuve, Belgium, or via e-mail: roberto.bottini@unitn.it.

Notes

1. Results remained identical when, in a control analysis suggested by one of the reviewers, we adjusted the number of “different” and “similar” trials in each subject.

2. It is, however, important to note that activation in the same areas does not mean identical representation across blind and sighted people. Indeed, overlap of activity in VOTC of blind and sighted people may partially relate to the simulation of different sensory information (i.e., visual representation in sighted individuals and tactile representation in blind individuals), due to the well-known crossmodal plasticity observed in congenitally blind people (Frasnelli et al., 2011).

3. At a lower threshold, however, a unique common activation in the right precuneus emerged for the contrast Color > Action.

REFERENCES

Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early “visual” cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6, 758–766.

Anderson, A. J., Binder, J. R., Fernandino, L., Humphries, C. J., Conant, L. L., Aguilar, M., et al. (2017). Predicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation. Cerebral Cortex, 27, 4379–4395.

Anderson, A. J., Bruni, E., Lopopolo, A., Poesio, M., & Baroni, M. (2015). Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text. Neuroimage, 120, 309–322.

Barilari, M., de Heering, A., Crollen, V., Collignon, O., & Bottini, R. (2018). Is red heavier than yellow even for blind? i-Perception, 9, 2041669518759123.

Barron, H. C., Garvert, M. M., & Behrens, T. E. J. (2016). Repetition suppression: A means to index neural representations using BOLD? Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 371, 20150355.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.

Barsalou, L. W. (2016). On staying grounded and avoiding quixotic dead ends. Psychonomic Bulletin & Review, 23, 1122–1142.

Beauchamp, M. S., Haxby, J. V., Jennings, J. E., & DeYoe, E. A. (1999). An fMRI version of the Farnsworth–Munsell 100-hue test reveals multiple color-selective areas in human ventral occipitotemporal cortex. Cerebral Cortex, 9, 257–263.

Bedny, M. (2017). Evidence from blindness for a cognitively pluripotent cortex. Trends in Cognitive Sciences, 21, 637–648.

Bedny, M., Caramazza, A., Pascual-Leone, A., & Saxe, R. (2012). Typical neural representations of action verbs develop without vision. Cerebral Cortex, 22, 286–293.

Bedny, M., McGill, M., & Thompson-Schill, S. L. (2008). Semantic adaptation and competition during word comprehension. Cerebral Cortex, 18, 2574–2585.

Bedny, M., & Saxe, R. (2012). Insights into the origins of knowledge from the cognitive neuroscience of blindness. Cognitive Neuropsychology, 29, 56–84.

Bi, Y., Wang, X., & Caramazza, A. (2016). Object domain and modality in the ventral visual pathway. Trends in Cognitive Sciences, 20, 282–290.

Binder, J. R. (2016). In defense of abstract conceptual representations. Psychonomic Bulletin & Review, 23, 1096–1108.

Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L., Simons, S. B., Aguilar, M., et al. (2016). Toward a brain-based componential semantic representation. Cognitive Neuropsychology, 33, 130–174.

Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15, 527–536.

Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.

Boersma, P., & Weenink, D. (2018). Praat: Doing phonetics by computer. Retrieved from www.praat.org/.

Borghesani, V., Pedregosa, F., Buiatti, M., Amadon, A., Eger, E., & Piazza, M. (2016). Word meaning in the ventral visual path: A perceptual to conceptual gradient of semantic coding. Neuroimage, 143, 128–140.

Bottini, R., Bucur, M., & Crepaldi, D. (2016). The nature of semantic priming by subliminal spatial words. Embodied or disembodied? Journal of Experimental Psychology: General, 145, 1160–1176.

Bottini, R., Mattioni, S., & Collignon, O. (2016). Early blindness alters the spatial organization of verbal working memory. Cortex, 83, 271–279.

Bracci, S., & Op de Beeck, H. (2016). Dissociations and associations between shape and category representations in the two visual pathways. Journal of Neuroscience, 36, 432–444.

Büchel, C. (2003). Cortical hierarchy turned on its head. Nature Neuroscience, 6, 657–658.

Casasanto, D. (2011). Different bodies, different minds: The body specificity of language and thought. Current Directions in Psychological Science, 20, 378–383.

Cattaneo, Z., Vecchi, T., Cornoldi, C., Mammarella, I., Bonino, D., Ricciardi, E., et al. (2008). Imagery and spatial processes in blindness and visual impairment. Neuroscience and Biobehavioral Reviews, 32, 1346–1360.

Chang, L., Bao, P., & Tsao, D. Y. (2017). The representation of colored objects in macaque color patches. Nature Communications, 8, 2064.

Cheadle, S. W., & Zeki, S. (2014). The role of parietal cortex in the formation of color and motion based concepts. Frontiers in Human Neuroscience, 8, 535.

Connolly, A. C., Guntupalli, J. S., Gors, J., Hanke, M., Halchenko, Y. O., Wu, Y. C., et al. (2012). The representation of biological classes in the human brain. Journal of Neuroscience, 32, 2608–2618.

Dormal, G., & Collignon, O. (2011). Functional selectivity in sensory-deprived cortices. Journal of Neurophysiology, 105, 2627–2630.

Dormal, G., Pelland, M., Rezk, M., Yakobov, E., Lepore, F., & Collignon, O. (2017). Functional preference for object sounds and voices in the brain of early blind and sighted individuals. Journal of Cognitive Neuroscience, 30, 86–106.

Dumoulin, S. O., Bittar, R. G., Kabani, N. J., Baker, C. L., Le Goualher, G., Bruce Pike, G., et al. (2000). A new anatomical landmark for reliable identification of human area V5/MT: A quantitative analysis of sulcal patterning. Cerebral Cortex, 10, 454–463.

Fernandino, L., Binder, J. R., Desai, R. H., Pendl, S. L., Humphries, C. J., Gross, W. L., et al. (2015). Concept representation reflects multimodal abstraction: A framework for embodied semantics. Cerebral Cortex, 26, 2018–2034.

Frasnelli, J., Collignon, O., Voss, P., & Lepore, F. (2011). Crossmodal plasticity in sensory loss. Progress in Brain Research, 191, 233–249.

Grill-Spector, K. (2003). The neural basis of object perception. Current Opinion in Neurobiology, 13, 159–166.

Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10, 14–23.

Grill-Spector, K., & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15, 536–548.

Grinband, J., Wager, T. D., Lindquist, M., Ferrera, V. P., & Hirsch, J. (2008). Detection of time-varying signals in event-related fMRI designs. Neuroimage, 43, 509–520.

Handjaras, G., Leo, A., Cecchetti, L., Papale, P., Lenci, A., Marotta, G., et al. (2017). Modality-independent encoding of individual concepts in the left parietal cortex. Neuropsychologia, 105, 39–49.

He, C., Peelen, M. V., Han, Z., Lin, N., Caramazza, A., & Bi, Y. (2013). Selectivity for large nonmanipulable objects in scene-selective visual cortex does not require visual experience. Neuroimage, 79, 1–9.

Kim, J. S., Elli, G. V., & Bedny, M. (2019). Knowledge of animal appearance among sighted and blind adults. Proceedings of the National Academy of Sciences, U.S.A., 116, 11213–11222.

Kotz, S. A., Cappa, S. F., von Cramon, D. Y., & Friederici, A. D. (2002). Modulation of the lexical-semantic network by auditory semantic priming: An event-related functional MRI study. Neuroimage, 17, 1761–1772.

Kriegeskorte, N., Mur, M., Ruff, D. A., Kiani, R., Bodurka, J., Esteky, H., et al. (2008). Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60, 1126–1141.

Kupers, R., & Ptito, M. (2014). Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neuroscience and Biobehavioral Reviews, 41, 36–52.

Lafer-Sousa, R., Conway, B. R., & Kanwisher, N. G. (2016). Color-biased regions of the ventral visual pathway lie between face- and place-selective regions in humans, as in macaques. Journal of Neuroscience, 36, 1682–1697.

Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211–240.

Leshinskaya, A., & Caramazza, A. (2016). For a cognitive neuroscience of concepts: Moving beyond the grounding issue. Psychonomic Bulletin & Review, 23, 991–1001.

Lewis, M., Zettersten, M., & Lupyan, G. (2019). Distributional semantics as a source of visual knowledge. Proceedings of the National Academy of Sciences, U.S.A., 116, 19237–19238.

Louwerse, M., & Connell, L. (2011). A taste of words: Linguistic context and perceptual simulation predict the modality of words. Cognitive Science, 35, 381–398.

Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59–70.

Marmor, G. (1978). Age at onset of blindness and the development of the semantics of color names. Journal of Experimental Child Psychology, 278, 344–345.

Martin, A. (2015). GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. Psychonomic Bulletin & Review, 979–990.

Martin, C. B., Douglas, D., Newsome, R. N., Man, L. L., & Barense, M. D. (2018). Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream. eLife, 7, e31873.

Mattioni, S., Rezk, M., Battal, C., Bottini, R., Mendoza, K. E. C., Oosterhof, N. N., et al. (2019). Similar categorical representation from sound and sight in the occipito-temporal cortex of sighted and blind. bioRxiv, 719690.

Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63, 902–915.

Noppeney, U., Friston, K. J., & Price, C. J. (2003). Effects of visual deprivation on the organization of the semantic system. Brain, 126, 1620–1627.

Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 1215–1224.

Peelen, M. V., Bracci, S., Lu, X., Chenxi, H., Caramazza, A., & Bi, Y. (2013). Tool selectivity in left occipitotemporal cortex develops without vision. Journal of Cognitive Neuroscience, 25, 1225–1234.

Peelen, M. V., & Downing, P. E. (2017). Category selectivity in human visual cortex: Beyond visual object recognition. Neuropsychologia, 105, 177–183.

Peelen, M. V., He, C., Han, Z., Caramazza, A., & Bi, Y. (2014). Nonvisual and visual object shape representations in occipitotemporal cortex: Evidence from congenitally blind and sighted adults. Journal of Neuroscience, 34, 163–170.

Proklova, D., Kaiser, D., & Peelen, M. V. (2016). Disentangling representations of object shape and object category in human visual cortex: The animate–inanimate distinction. Journal of Cognitive Neuroscience, 28, 680–692.

R Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria. https://www.Rproject.org/.

Reich, L., Szwed, M., Cohen, L., & Amedi, A. (2011). A ventral visual stream reading center independent of visual experience. Current Biology, 21, 363–368.

Ricciardi, E., Vanello, N., Sani, L., Gentili, C., Scilingo, E. P., Landini, L., et al. (2007). The effect of visual experience on the development of functional architecture in hMT+. Cerebral Cortex, 17, 2933–2939.

Saygin, A. P., McCullough, S., Alac, M., & Emmorey, K. (2010). Modulation of BOLD response in motion-sensitive lateral temporal cortex by real and fictive motion sentences. Journal of Cognitive Neuroscience, 22, 2480–2490.

Saysani, A., Corballis, M. C., & Corballis, P. M. (2018). Colour envisioned: Concepts of colour in the blind and sighted. Visual Cognition, 26, 382–392.

Sladky, R., Friston, K. J., Tröstl, J., Cunnington, R., Moser, E., & Windischberger, C. (2011). Slice-timing effects and their correction in functional MRI. Neuroimage, 58, 588–594.

Stasenko, A., Garcea, F. E., Dombovy, M., & Mahon, B. Z. (2014). When concepts lose their color: A case of object-color knowledge impairment. Cortex, 58, 217–238.

Striem-Amit, E., Wang, X., Bi, Y., & Caramazza, A. (2018). Neural representation of visual concepts in people born blind. Nature Communications, 9, 5250.

Swisher, J. D., Halko, M. A., Merabet, L. B., McMains, S. A., & Somers, D. C. (2007). Visual topography of human intraparietal sulcus. Journal of Neuroscience, 27, 5326–5337.

Van Ackeren, M. J., Barbero, F., Mattioni, S., Bottini, R., & Collignon, O. (2018). Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech. eLife, 7, e31640.

van den Hurk, J., Van Baelen, M., & Op de Beeck, H. P. (2017). Development of visual category selectivity in ventral visual cortex does not require visual experience. Proceedings of the National Academy of Sciences, U.S.A., 201612862.

Wang, X., Peelen, M. V., Han, Z., He, C., Caramazza, A., & Bi, Y. (2015). How visual is the visual cortex? Comparing connectional and functional fingerprints between congenitally blind and sighted individuals. Journal of Neuroscience, 35, 12545–12559.

Wheatley, T., Weisberg, J., Beauchamp, M. S., & Martin, A. (2005). Automatic priming of semantically related words reduces activity in the fusiform gyrus. Journal of Cognitive Neuroscience, 17, 1871–1885.

Wible, C. G., Han, S. D., Spencer, M. H., Kubicki, M., Niznikiewicz, M. H., Jolesz, F. A., et al. (2006). Connectivity among semantic associates: An fMRI study of semantic priming. Brain and Language, 97, 294–305.

Xu, Y. (2007). The role of the superior intraparietal sulcus in supporting visual short-term memory for multifeature objects. Journal of Neuroscience, 27, 11676–11686.

Zeki, S., Watson, J. D., Lueck, C. J., Friston, K. J., Kennard, C., & Frackowiak, R. S. (1991). A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11, 641–649.

Zeki, S., & Stutters, J. (2013). Functional specialization and generalization for grouping of stimuli based on colour and motion. Neuroimage, 73, 156–166.