Abstract
If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. At first, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early-blind people instead showed a stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.
INTRODUCTION
Embodied theories of knowledge suggest that conceptual retrieval is partly based on the simulation of sensorimotor experience (Binder & Desai, 2011; Barsalou, 1999). Part of the evidence supporting this claim comes from studies showing that visual areas in the occipital cortex are activated when people process the meaning of words referring to concrete entities (Borghesani et al., 2016; Fernandino et al., 2015; Saygin, McCullough, Alac, & Emmorey, 2010). For instance, the size of different animals (e.g., elephant vs. mouse) is encoded in the posterior occipital cortex (Borghesani et al., 2016); whether or not an object has a typical color is reflected in the activation of the color-sensitive area V4 (Fernandino et al., 2015); and sentences referring to motion activate the motion-sensitive area V5 (Saygin et al., 2010). However, it is still debated whether the activity in visual regions during conceptual retrieval reflects the simulation of perceptual experience (Barsalou, 2016) or, instead, abstract representations of semantic features (e.g., size, color, movement) that are largely innate (Leshinskaya & Caramazza, 2016) and that are active both during retrieval and perception (Stasenko, Garcea, Dombovy, & Mahon, 2014).
Congenital blindness offers an ideal model to test these hypotheses: If conceptual processing is somehow grounded in experience, blind people, who experience the world in a different way, should also show a different neurobiology of concepts, at least concerning vision-related features (Casasanto, 2011). Several studies, however, seem to provide evidence against this idea (Bedny & Saxe, 2012). For instance, when sighted and blind individuals were asked to retrieve information about highly visual entities, knowledge about small and manipulable objects (Peelen et al., 2013) activated the lateral temporal-occipital complex; thinking about big nonmanipulable objects activated the parahippocampal place area (He et al., 2013); and processing action verbs (compared with nouns) activated the left posterior middle temporal gyrus (lpMTG; Bedny, Caramazza, Pascual-Leone, & Saxe, 2012) in both groups. These results seem to suggest that blindness leaves the neurobiology of conceptual retrieval largely unchanged and that activity in putatively visual areas does not necessarily encode visual simulations, but rather more abstract representations of actions, movements, places, and so forth (Bedny & Saxe, 2012; Noppeney, Friston, & Price, 2003). It is crucial to note, however, that activation in the same areas does not necessarily mean identical processing between blind and sighted individuals, because there is evidence that several visual areas in early-blind individuals enhance their response to nonvisual senses (such as haptic or auditory processing; Kupers & Ptito, 2014; Frasnelli, Collignon, Voss, & Lepore, 2011), even if they may keep similar functional specialization (Dormal & Collignon, 2011). Thus, areas that support visual simulations of conceptual features in sighted individuals may support haptic or auditory simulations of the same features in blind individuals (Reich, Szwed, Cohen, & Amedi, 2011; Ricciardi et al., 2007), with the possible exception of uniquely visual features that cannot be easily remapped onto other senses (Bi, Wang, & Caramazza, 2016). Nevertheless, from the point of view of embodied and situated cognition, it is still problematic that, so far, no study has reported a higher involvement of visual areas in sighted than in blind individuals during conceptual retrieval.
Crucially, most of the previous studies relied on paradigms that investigated categorical knowledge that only partially depends on visual experience. For instance, different categories (e.g., animals vs. tools) can be discriminated based on touch, audition, taste, affordances, or the reward that they bring (Peelen & Downing, 2017; Bi et al., 2016). These are all features that are equally accessible to blind individuals. Moreover, previous tasks mostly involved the discrimination of concepts belonging to distinct superordinate categories rather than the analysis of concepts as individual entities (Handjaras et al., 2017) based on their idiosyncratic perceptual features. Therefore, it may not be surprising that associative regions (e.g., pMTG) or even regions of the anterior part of the ventral occipital-temporal cortex (VOTC) that are not strictly visual but receive multiple inputs from different sensory and nonsensory systems (such as the emotional, language, or navigation networks) might show preferential responses to some specific superordinate categories independently of sensory inputs or even sensory (visual) experience (Mattioni et al., 2019; Handjaras et al., 2017; Peelen & Downing, 2017; van den Hurk, Van Baelen, & Op de Beeck, 2017).
On the other hand, we should expect greater differences between sighted and blind individuals in brain regions that are more strictly visual and encode lower level visual similarity of objects independently of their category. Such visual features are usually encoded in the posterior portion of the occipital cortex (BA 18–BA 19; including the lingual gyrus, cuneus, V4 and V5) during visual perception of objects and scenes (Bracci & Op de Beeck, 2016; Connolly et al., 2012; Naselaris, Prenger, Kay, Oliver, & Gallant, 2009). Moreover, some of these posterior occipital regions seem to encode perceptual information (e.g., size, color, motion) also during conceptual retrieval from words (Borghesani et al., 2016; Fernandino et al., 2015; Saygin et al., 2010).
Interestingly, for our purposes, it has been shown that the anterior part of the VOTC, known for its category specificity (Bracci & Op de Beeck, 2016; Grill-Spector & Weiner, 2014), is highly multimodal and presents a similar functional and connectivity fingerprint in sighted and blind individuals (Wang et al., 2015), whereas this similarity decreases in more posterior occipital regions (Wang et al., 2015) that are highly modality specific (i.e., visual). Accordingly, several studies have shown that regions of the occipital lobe, in early-blind individuals, engage in high-level cognitive processing (e.g., memory, language, math; Van Ackeren, Barbero, Mattioni, Bottini, & Collignon, 2018; Bedny, 2017; Amedi, Raz, Pianka, Malach, & Zohary, 2003; Büchel, 2003), suggesting that the same regions that encode visual perceptual features in sighted individuals may encode different perceptual or cognitive dimensions in blind people.
Building on these previous findings, we set out to investigate which brain areas encode the perceptual similarity of concepts (actions and colors) in sighted and early-blind people. In our experiment, we asked sighted and early-blind participants to rate pairs of concepts based on their perceptual similarity and analyzed the data using a repetition suppression framework (Barron, Garvert, & Behrens, 2016; Grill-Spector, Henson, & Martin, 2006; Wheatley, Weisberg, Beauchamp, & Martin, 2005). Repetition suppression is a useful tool to investigate whether and how perceptual distance is represented in the brain: items that are perceptually close should elicit adaptation in areas that encode the relevant perceptual features (Barron et al., 2016; Grill-Spector et al., 2006). We expected to see adaptation in posterior occipital regions for perceptually similar actions and colors in sighted, but not in blind, individuals, because in the blind those regions are thought to take on different computations. On the other hand, a categorical contrast between action and color trials (independently of perceptual similarity) should replicate previous results showing similar category-specific activations in multimodal areas of the brain in both sighted and blind individuals. Importantly, however, the similarity across groups in multimodal regions should be stronger for conceptual categories that can be perceptually experienced by both sighted and blind individuals. Therefore, in our experiment, we chose stimuli from both the action and the color categories. Including colors, which can be experienced through vision only, allowed us to test whether their different epistemological status (concrete in the sighted vs. abstract in the blind) would influence their representation even in multimodal areas of the brain that usually show resilience to visual deprivation (Striem-Amit, Wang, Bi, & Caramazza, 2018).
METHODS
Participants
Thirty-six individuals took part in this experiment: 18 early-blind individuals (EB; eight women) and 18 sighted controls (SC; eight women). Participants were matched pairwise for sex, age (SC: mean = 36.11 years, SD = 7.52 years; EB: mean = 36.56 years, SD = 8.48 years), and years of education (SC: mean = 16.67 years, SD = 2.72 years; EB: mean = 15.39 years, SD = 3.48 years; see Table 1).
| Sight Status | Age (years) | Sex | Years of Formal Education | Sight Status | Age (years) | Sex | Years of Formal Education |
|---|---|---|---|---|---|---|---|
| EB01 | 32 | M | 18 | SC01 | 37 | M | 18 |
| EB02 | 52 | F | 18 | SC02 | 46 | F | 18 |
| EB03 | 30 | M | 13 | SC03 | 34 | M | 18 |
| EB04 | 29 | F | 13 | SC04 | 26 | F | 16 |
| EB05 | 36 | F | 18 | SC05 | 33 | F | 18 |
| EB06 | 27 | F | 18 | SC06 | 27 | F | 18 |
| EB07 | 39 | M | 18 | SC07 | 40 | M | 18 |
| EB08 | 48 | M | 8 | SC08 | 51 | M | 8 |
| EB09 | 42 | M | 13 | SC09 | 41 | M | 18 |
| EB10 | 36 | M | 18 | SC10 | 40 | M | 18 |
| EB11 | 49 | M | 18 | SC11 | 43 | M | 18 |
| EB12 | 36 | F | 18 | SC12 | 39 | F | 18 |
| EB13 | 45 | M | 18 | SC13 | 42 | M | 18 |
| EB14 | 25 | F | 13 | SC14 | 25 | F | 16 |
| EB15 | 29 | F | 16 | SC15 | 30 | F | 18 |
| EB16 | 30 | M | 13 | SC16 | 26 | M | 13 |
| EB17 | 28 | F | 18 | SC17 | 31 | F | 18 |
| EB18 | 45 | M | 8 | SC18 | 39 | M | 13 |
| M | 36.56 | | 15.39 | | 36.11 | | 16.67 |
| SD | 8.48 | | 3.48 | | 7.52 | | 2.72 |
M = male; F = female; EB = early blind; SC = sighted control; M = mean; SD = standard deviation.
All the blind participants lost sight at birth or before 3 years of age, and all of them reported not having visual memories (Table 2). All participants were blindfolded during the task. The ethics committee of the Besta Neurological Institute approved this study (Protocol fMRI_BP_001), and participants gave their informed consent before participation.
| Participant | Age at Blindness Onset (years) | Cause of Blindness |
|---|---|---|
| EB01 | 0 | Optic nerve hypoplasia |
| EB02 | 1 | Retinoblastoma |
| EB03 | 0 | Congenital retinal dystrophy |
| EB04 | 2 | Retinoblastoma |
| EB05 | 0 | Congenital microphthalmia |
| EB06 | 0 | Congenital microphthalmia |
| EB07 | 0 | Retrolental fibroplasia |
| EB08 | 0 | Optic nerve atrophy |
| EB09 | 0 | Congenital retinal dystrophy |
| EB10 | 0 | Retrolental fibroplasia |
| EB11 | 0 | Retrolental fibroplasia |
| EB12 | 0 | Retrolental fibroplasia |
| EB13 | 0 | Retrolental fibroplasia |
| EB14 | 0 | Leber's congenital amaurosis |
| EB15 | 3 | Retinitis pigmentosa |
| EB16 | 0 | Retrolental fibroplasia |
| EB17 | 0 | Agenesis of the optic nerve |
| EB18 | 0 | Glaucoma |
Stimuli
We selected six Italian color words (rosso/red, giallo/yellow, arancio/orange, verde/green, azzurro/blue, viola/purple) and six Italian action words (pugno/punch, graffio/scratch, schiaffo/slap, calcio/kick, salto/jump, corsa/run). Words were all highly familiar nouns and were matched across categories (color, action) by number of letters (color: mean = 5.83, SD = 0.98; action: mean = 6, SD = 1.23), frequency (Zipf scale; color: mean = 4.02, SD = 0.61; action: mean = 4.18, SD = 0.4), and orthographic neighbors (Coltheart's N; color: mean = 14, SD = 9.12; action: mean = 15.33, SD = 12.42).
Auditory files were made using a voice synthesizer (Talk To Me), with a female voice, and edited into separate audio files with the same auditory properties (44100 Hz, 32 bit, mono, 78 dB of intensity). The original duration of each audio file (range = 356–464 msec) was extended or compressed to 400 msec using the PSOLA (pitch synchronous overlap and add) algorithm and the sound-editing software Praat (Boersma & Weenink, 2018). All the resulting audio files were highly intelligible.
Experimental Design
We designed a fast event-related fMRI paradigm during which participants listened to pairs of color and action words. In each trial, the two words were played one after the other with a stimulus onset asynchrony (SOA) of 2000 msec.
The intertrial interval ranged between 4000 and 16000 msec. Participants were asked to judge the similarity of the two colors or the two actions on a scale from 1 to 5 (1 = very different, 5 = very similar). Responses were collected via an ergonomic hand-shaped response box with five keys (Resonance Technology, Inc.). All participants used their right hand to respond (thumb = very different, pinky = very similar). Participants were told that they had about 4 sec to respond after the onset of the second word of the pair, and they were encouraged to use the whole scale (1–5). Furthermore, the instruction was to judge the similarity of colors and actions based on their perceptual properties (avoiding reference to emotion, valence, or other nonperceptual characteristics). Blind participants were told to judge color pairs on the basis of their knowledge about the perceptual similarity between colors.
Color and action words were presented in all possible within-category combinations (15 color pairs, 15 action pairs). Each pair was presented twice in each run, in the two possible orders (e.g., red–yellow, yellow–red). Thus, there were 60 trials in each run, and the experiment consisted of five runs of 7 min each. Stimuli were pseudorandomized using optseq2 to optimize the sequence of presentation of the different conditions. Three different optimized lists of trials were used across runs. List order was counterbalanced across participants. A minimal R sketch of this trial enumeration is shown below.
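For concreteness, this is how the 60 within-category trials per run can be enumerated; the word lists are those given above, and the optseq2 randomization of trial order is not reproduced here:

```r
# A minimal sketch of the trial structure: all unordered within-category
# pairs (15 per category for 6 words), each presented in both orders.
colors  <- c("rosso", "giallo", "arancio", "verde", "azzurro", "viola")
actions <- c("pugno", "graffio", "schiaffo", "calcio", "salto", "corsa")

make_trials <- function(words, category) {
  pairs <- t(combn(words, 2))            # 15 unordered pairs
  data.frame(
    category = category,
    first  = c(pairs[, 1], pairs[, 2]),  # each pair in both orders
    second = c(pairs[, 2], pairs[, 1])
  )
}

trials <- rbind(make_trials(colors, "color"), make_trials(actions, "action"))
nrow(trials)  # 60 trials per run (30 color, 30 action)
```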
One early-blind participant was excluded from the analyses for responding to fewer than 70% of the trials across the experiment because of sleepiness. One run of one sighted participant was excluded from the analysis because of a technical error during acquisition, and two other runs (one in a sighted participant, one in a blind participant) were excluded because the participant responded to fewer than 70% of the trials in that specific run.
Conceptual Similarity Ratings
To perform the adaptation analysis, we divided the trials into similar pairs (e.g., red–orange) and different pairs (e.g., red–blue). We did so based on the participants' subjective ratings. For each participant, we took the average rating for each of the 15 word pairs in the action and color categories. Then, we automatically divided the 15 pairs into five intervals (defined by four quantile cut points) of nearly equal size. This subdivision was performed using the function quantile in R (R Core Team, 2017), which divides a probability distribution into contiguous intervals of equal probability (i.e., 20%). The pairs in the first two intervals were the different pairs (low similarity ratings), the pairs in the third interval were the medium pairs, and the pairs in the fourth and fifth intervals were the similar pairs (see Figure 2B). However, in some cases, rating distributions were slightly unbalanced, due to the tendency of some participants to find more "very different" pairs than "very similar" pairs. In these cases (eight participants for action ratings [three EB]; four participants for color ratings [one EB]), the automatic split into five equal intervals was not possible. Thus, we set the boundary between the second and third intervals at the ratings average (for that given participant) and set to the minimum (one or two, depending on the case) the number of items in the third interval (not analyzed) to balance, as much as possible, the number of pairs in the Different and Similar groups. This procedure ensured that, in these special cases (as well as in all the others), the rating values of different pairs were always below the mean and the values of similar pairs were always above the mean.
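A minimal R sketch of the default (quantile-based) split, for one participant and one category; the `quantile` call mirrors the one described above, while the ratings themselves are simulated:

```r
# `ratings` holds one participant's mean similarity rating (1-5) for each
# of the 15 word pairs in one category.
split_pairs <- function(ratings) {
  # cut points dividing the distribution into 5 intervals of ~20% each;
  # with heavily tied ratings the breaks can collide, which is where the
  # mean-based fallback described in the text was used instead
  cuts <- quantile(ratings, probs = seq(0, 1, by = 0.2))
  idx  <- cut(ratings, breaks = cuts, include.lowest = TRUE, labels = FALSE)
  c("different", "different", "medium", "similar", "similar")[idx]
}

set.seed(1)
ratings <- runif(15, min = 1, max = 5)  # simulated ratings for 15 pairs
table(split_pairs(ratings))             # with distinct ratings: 6 / 3 / 6
```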
MRI Data Acquisition
Brain images were acquired at the Neurological Institute Carlo Besta in Milano on a 3-T scanner with a 32-channel head coil (Achieva TX; Philips Healthcare) and gradient EPI sequences.
In the event-related experiment, we acquired 35 slices (voxel size = 3 × 3 × 3.5 mm) with no gap. The in-plane matrix size was 64 × 64, with field of view = 220 mm × 220 mm, repetition time = 2 sec, flip angle = 90°, and echo time = 30 msec. Each participant performed five runs of 242 volumes each, for a total of 1210 whole-brain images. The first four images of each run were excluded from the analysis to allow for steady-state magnetization.
Anatomical data were acquired using a T1-weighted 3D-TFE sequence with the following parameters: voxel size = 1 × 1 × 1 mm, matrix size = 240 × 240, repetition time = 2300 msec, echo time = 2.91 msec, inversion time = 900 msec, field of view = 240 mm, 185 slices.
MRI Data Analysis
We analyzed the fMRI data using SPM12 (www.fil.ion.ucl.ac.uk/spm/software/spm12/) and MATLAB R2014b (The MathWorks, Inc.).
Preprocessing
Preprocessing included slice timing correction of the functional time series (Sladky et al., 2011), realignment of functional time series, coregistration of functional and anatomical data, spatial normalization to an echoplanar imaging template conforming to the Montreal Neurological Institute (MNI) space, and spatial smoothing (Gaussian kernel, 6 mm FWHM). Serial autocorrelation, assuming a first-order autoregressive model, was estimated using the pooled active voxels with a restricted maximum likelihood procedure, and the estimates were used to whiten the data and design matrices.
Data Analysis
Following preprocessing steps, the analysis of fMRI data, based on a mixed-effects model, was conducted in two serial steps accounting, respectively, for fixed and random effects. In all the analyses, the regressors for the conditions of interest consisted of an event-related boxcar function convolved with the canonical hemodynamic response function according to a variable epoch model (Grinband, Wager, Lindquist, Ferrera, & Hirsch, 2008). Movement parameters derived from realignment of the functional volumes (translations in x, y, and z directions and rotations around x, y, and z axes) and a constant vector were also included as covariates of no interest. We used a high-pass filter with a discrete cosine basis function and a cutoff period of 128 sec to remove artifactual low-frequency trends.
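As an illustration, here is a minimal sketch in plain R (not SPM code) of such a variable-epoch regressor: each trial is a boxcar lasting from stimulus onset until the response, convolved with a double-gamma HRF. The HRF parameterization is a common approximation rather than SPM's exact canonical basis, and the onsets and durations are invented:

```r
dt  <- 0.1                                  # temporal resolution (sec)
th  <- seq(0, 30, by = dt)
hrf <- dgamma(th, shape = 6) - dgamma(th, shape = 16) / 6  # double-gamma HRF

onsets    <- c(2, 14, 28)                   # hypothetical trial onsets (sec)
durations <- c(4.1, 3.6, 4.4)               # hypothetical onset-to-response epochs

n <- 400                                    # run length in samples (40 sec)
boxcar <- numeric(n)
for (i in seq_along(onsets)) {
  idx <- seq(round(onsets[i] / dt) + 1, round((onsets[i] + durations[i]) / dt))
  boxcar[idx] <- 1                          # boxcar spans onset-to-response
}
regressor <- convolve(boxcar, rev(hrf), type = "open")[1:n]  # predicted BOLD
```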
Adaptation analysis.
For each participant, the general linear model included six regressors corresponding to the three levels of similarity (different, medium, similar) in each condition (color, action). Color and action pairs in the medium condition were modeled as regressors of no interest.
At the first level of analysis, linear contrasts tested for repetition suppression (Different > Similar) collapsing across categories (action, color). The same contrasts were then repeated within each category (Color Different > Color Similar, Action Different > Action Similar). Finally, we tested for Similarity × Category interactions, testing whether the adaptation was stronger in one category compared with the other (e.g., [Color Different > Color Similar] > [Action Different > Action Similar]).
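In terms of contrast weights over the four regressors of interest, and assuming the column order color-different, color-similar, action-different, action-similar (the two "medium" regressors of no interest get weight 0 and are omitted), these tests correspond to:

```r
c_adaptation  <- c(1, -1,  1, -1)  # Different > Similar, across categories
c_color       <- c(1, -1,  0,  0)  # Color Different > Color Similar
c_action      <- c(0,  0,  1, -1)  # Action Different > Action Similar
c_interaction <- c(1, -1, -1,  1)  # (Color Diff > Sim) > (Action Diff > Sim)
```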
These linear contrasts generated statistical parametric maps, SPM(T). The resulting contrast images were then further spatially smoothed (Gaussian kernel 5 mm FWHM) and entered in a second-level analysis (RFX), corresponding to a random-effects model, accounting for intersubject variance. One-sample t tests were run on each group separately. Two-sample t tests were then performed to compare these effects between groups (blind vs. sighted).
Univariate analysis.
For each participant, changes in regional brain responses were estimated through a general linear model including two regressors corresponding to the two categories, action and color. The onset of each event was set at the beginning of the first word of the pair; the offset was determined by the participant response, thus including RT (Grinband et al., 2008). Linear contrasts tested for action-specific (Action > Color) and color-specific (Color > Action) BOLD activity.
These linear contrasts generated statistical parametric maps, SPM(T). The resulting contrast images were then further spatially smoothed (Gaussian kernel 5 mm FWHM) and entered in a second-level analysis, corresponding to a random-effects model, accounting for intersubject variance. One-sample t tests were run on each group separately. Two-sample t tests were then performed to compare these effects between groups (blind vs. sighted) and to perform conjunction analyses to observe if the two groups presented similar activated networks for the two contrasts of interests.
ROI Definition
The V4 and V5 ROIs were drawn from the literature, considering both perceptual localizers and evidence from semantic/conceptual tasks. These ROIs were restricted to the left hemisphere because several previous studies consistently showed unique or relatively stronger left-lateralized activation for sensorimotor simulation of color and movement during language processing (Fernandino et al., 2015; Saygin et al., 2010). We selected three peak coordinates for area V5. The first ([−47 −78 −2]) came from a highly cited study contrasting the perception of visual motion versus static images (Dumoulin et al., 2000). The second ([−44 −74 2]) came from a study (Saygin et al., 2010) showing V5 sensitivity to motion sentences (e.g., "The wild horse crossed the barren field"). The third came from a search for the topic "action" on the online meta-analysis tool Neurosynth (neurosynth.org/); the area in the occipital cortex with the highest action-related activation was indeed V5 (peak coordinates: −50 −72 2). To avoid ROI proliferation, we averaged these three peak coordinates to obtain a single peak (average peak: −47 −75 1).
As for V4, we selected the color-sensitive occipital ROI considering perceptual localizers, as well as evidence of color-specific activity from semantic/conceptual tasks. Fernandino et al. (2015) reported a color-sensitive area in the left posterior collateral sulcus (at the intersection between the lingual and the fusiform gyrus; MNI peak coordinates: −16 −71 −12) associated with color-related words. This peak is close to the posterior V4 localization by Beauchamp and colleagues (peak coordinates: −22 −82 −16) in an fMRI version of the Farnsworth-Munsell 100-Hue Test (Beauchamp, Haxby, Jennings, & DeYoe, 1999). A search in Neurosynth with the keyword "color" also highlighted a left posterior color-sensitive region along the collateral sulcus (peak coordinates: −24 −90 −10). We averaged these three peaks to find the center of our ROI (average peak: −21 −81 −13).
Only a few studies before ours adopted a repetition suppression paradigm using words (either visually or auditorily presented). In all these cases, semantic relatedness was tested, and results showed repetition suppression for semantically related words in the posterior lateral-temporal cortex (PLTC). Bedny, McGill, and Thompson-Schill (2008) observed increased neural adaptation in the PLTC (peak coordinates: 57 −36 21) for repeated words (fan–fan) when the words were presented in similar contexts (summer–fan, ceiling–fan) compared with when different contexts triggered different meanings (e.g., admirer–fan, ceiling–fan). This result conceptually replicated previous studies (Wible et al., 2006; Kotz, Cappa, von Cramon, & Friederici, 2002) showing semantic adaptation in the bilateral PLTC for related (e.g., dog–cat) versus unrelated (e.g., dog–apple) word pairs (peak coordinates: −42 −27 9 and −51 −22 8). These three peaks were averaged to find the center of our ROI in both hemispheres (average peak: ±50 −28 13).
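A minimal R sketch of this coordinate averaging, applied to the three sets of literature peaks reported above (the attribution of the two bilateral PLTC peaks to Wible et al. and Kotz et al. individually is not specified in the text):

```r
# Each ROI center is the coordinate-wise mean of three literature peaks,
# rounded to integer MNI coordinates.
avg_peak <- function(peaks) round(colMeans(peaks))

v5 <- rbind(c(-47, -78, -2),    # Dumoulin et al. (2000)
            c(-44, -74,  2),    # Saygin et al. (2010)
            c(-50, -72,  2))    # Neurosynth, topic "action"
v4 <- rbind(c(-16, -71, -12),   # Fernandino et al. (2015)
            c(-22, -82, -16),   # Beauchamp et al. (1999)
            c(-24, -90, -10))   # Neurosynth, keyword "color"
pltc <- rbind(c( 57, -36, 21),  # Bedny et al. (2008)
              c(-42, -27,  9),  # Wible et al. (2006) / Kotz et al. (2002)
              c(-51, -22,  8))
pltc[, 1] <- abs(pltc[, 1])     # fold left/right peaks onto one hemisphere

avg_peak(v5)    # -47 -75   1
avg_peak(v4)    # -21 -81 -13
avg_peak(pltc)  #  50 -28  13, used bilaterally as +/-50 -28 13
```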
Statistical Analysis
At the whole-brain level, statistical inference was made at a corrected cluster level of p < .05 family-wise error (FWE; with a standard voxel-level threshold of p < .001 uncorrected).
All ROI analyses were performed using small volume correction with spheres of 8-mm radius centered on the ROI peak coordinates (see previous section). Within the ROI, results were considered significant at a threshold of p < .05, FWE-corrected voxel-wise. Here, and throughout the article, brain coordinates are reported in MNI space.
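For intuition, a minimal sketch in plain R (not the SPM implementation) of what such a small-volume sphere amounts to: the set of voxel centers within 8 mm of the ROI peak, here on a 3-mm grid assumed (for simplicity) to be aligned with the peak:

```r
sphere_mask <- function(peak, radius = 8, vox = 3) {
  # enumerate candidate voxel centers in a bounding cube around the peak
  offs <- seq(-ceiling(radius / vox) * vox, ceiling(radius / vox) * vox, by = vox)
  grid <- expand.grid(x = peak[1] + offs, y = peak[2] + offs, z = peak[3] + offs)
  # keep voxels whose centers fall within the sphere
  d <- sqrt((grid$x - peak[1])^2 + (grid$y - peak[2])^2 + (grid$z - peak[3])^2)
  grid[d <= radius, ]
}

nrow(sphere_mask(c(-21, -81, -13)))  # number of voxels in the V4 sphere
```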
RESULTS
Brain Regions Active in Sighted and Blind Individuals when Contrasting Action and Color Categories
Behavioral Analysis
RT analysis using a mixed ANOVA, with Category (action, color) as within-subject factor and Group (sighted, blind) as between-subject factor, showed no difference between categories, F(1, 33) = 2.37, p > .05, η2 = .07; no difference between groups, F(1, 33) = 0.074, p > .05, η2 = .002; and no Category × Group interaction, F(1, 33) = 0.69, p > .05, η2 = .02.
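A minimal R sketch of this mixed ANOVA, run on simulated data with the same structure (35 participants after exclusion, Category within subject, Group between subjects):

```r
set.seed(42)
d <- expand.grid(subject = factor(1:35), category = c("action", "color"))
d$group <- ifelse(as.integer(d$subject) <= 17, "blind", "sighted")  # hypothetical split
d$rt    <- rnorm(nrow(d), mean = 1.8, sd = 0.4)                     # simulated RTs

# Group is tested in the between-subject stratum, Category and the
# Category x Group interaction in the within-subject stratum.
fit <- aov(rt ~ category * group + Error(subject / category), data = d)
summary(fit)
```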
fMRI Analysis
The contrast Action > Color did not reveal any significant difference between groups, suggesting a comparable categorical representation of action concepts across sighted and blind individuals (see Figure 1A, B and Table 3 for details). Indeed, a conjunction analysis between groups showed a common significant activation in the lpMTG (peak: −54 −61 5; Figure 1E).
Figure 1. Contrasts between categories. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A, B) Suprathreshold clusters (p < .05 FWE corrected) for the contrast Action > Color in sighted and blind participants, respectively. (C, D) Suprathreshold clusters (p < .05 FWE corrected) for the contrast Color > Action in sighted and blind participants, respectively. (E) Suprathreshold clusters (p < .05 FWE corrected) showing common activity in the lpMTG for the contrast Action > Color in both sighted and early-blind participants (conj. = conjunction analysis). (F) Suprathreshold voxels (p < .001 uncorrected, for illustration purposes only) showing common activity in the right precuneus for the contrast Color > Action in both sighted and early-blind participants. (G) Suprathreshold clusters (p < .05 FWE corrected) showing greater activity in the rIPS, in sighted compared with early-blind participants, for the contrast Color > Action.
| Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE |
|---|---|---|---|---|---|---|
| Sighted, Action > Color | | | | | | |
| L middle temporal gyrus | 1134 | −57 | −61 | 8 | 5.80 | <.001 |
| L inferior frontal gyrus | S.C. | −48 | 29 | −1 | 5.31 | |
| L middle temporal gyrus | S.C. | −60 | −43 | 29 | 5.15 | |
| R middle temporal gyrus | 474 | 60 | −10 | −7 | 5.14 | <.001 |
| | S.C. | 57 | 2 | −13 | 4.96 | |
| R superior temporal gyrus | S.C. | 51 | −28 | −1 | 4.74 | |
| Sighted, Color > Action | | | | | | |
| R orbital gyrus | 65 | 27 | 35 | −13 | 4.98 | .009a |
| L orbital gyrus | 71 | −30 | 35 | −13 | 4.90 | .012a |
| Precuneus | 91 | 0 | −61 | 29 | 4.11 | .06b |
| Blind, Action > Color | | | | | | |
| L middle temporal gyrus | 315 | −54 | −61 | 5 | 4.69 | <.001 |
| | S.C. | −42 | −67 | 17 | 4.46 | |
| | S.C. | −45 | −52 | 14 | 3.49 | |
| R calcarine/posterior cingulate | 325 | 9 | −67 | 8 | 4.63 | <.001 |
| | S.C. | 0 | −76 | 11 | 4.07 | |
| L calcarine | S.C. | −9 | −82 | 11 | 4.07 | |
| R middle temporal gyrus | 203 | 57 | −55 | 5 | 4.40 | .003 |
| | S.C. | 42 | −55 | 8 | 3.94 | |
| | S.C. | 63 | −43 | 8 | 3.63 | |
| R inferior frontal gyrus | 256 | 42 | 20 | 5 | 4.21 | <.001 |
| R superior temporal pole | S.C. | 48 | 11 | −19 | 3.97 | |
| R inferior frontal gyrus | S.C. | 51 | 17 | −1 | 3.80 | |
| Blind, Color > Action | | | | | | |
| R precuneus | 109 | 6 | −52 | 20 | 4.72 | .036 |
| L medial frontal gyrus | 174 | −9 | 59 | −7 | 4.20 | .008 |
| | S.C. | 6 | 53 | −10 | 3.85 | |
| Sighted ∩ Blind, Action > Color | | | | | | |
| L middle temporal gyrus | 155 | −54 | −61 | 5 | 4.69 | .01 |
| Sighted > Blind, Color > Action | | | | | | |
| R intraparietal sulcus | 146 | 33 | −43 | 35 | 4.33 | .013 |
Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.
a. Brain activations significant after FWE correction at the voxel level over the whole brain.
b. Indicates marginally significant clusters (pFWE < .1).
On the other hand, a conjunction analysis between groups for Color > Action did not reveal any common activation between sighted and blind individuals after correction for multiple comparisons at the whole-brain level. However, when the conjunction results were displayed at a more lenient threshold (p < .001 uncorrected; Figure 1F), common activity for color concepts emerged in the right precuneus (peak: 6 −55 26). Accordingly, within-group analyses showed significant precuneus activity in blind individuals (peak: 6 −52 20, p = .04) and marginally significant activity in sighted individuals (peak: 0 −61 29, p = .06), with no significant difference between groups (Table 3; Figure 1C, D).
More importantly, further analysis of the contrast Color > Action revealed a cluster in the right parietal cortex, in and around the right intraparietal sulcus (rIPS), showing higher activity for color concepts in sighted compared with blind individuals (peak: 33 −43 35; Figure 1G).
Altogether, these results show both similar and different patterns of activity during conceptual processing in sighted and blind individuals when different categories are contrasted against each other (independently of perceptual similarity). As in previous studies (Bedny et al., 2012; Noppeney et al., 2003), categorical preference was found outside the modality-specific visual cortex, in areas that are considered highly multimodal, such as the lpMTG, the precuneus, and the IPS (Binder, Desai, Graves, & Conant, 2009). The network of activations elicited by action words (contrasted with color words) was highly similar across groups, with a common peak of activity in the lpMTG, replicating previous results (Bedny et al., 2012; Noppeney et al., 2003). On the other hand, the regions activated specifically by color words (compared with action words) were partially different across sighted and blind individuals. In particular, the rIPS was activated by color knowledge more in sighted than in blind individuals. The greater difference between groups in the case of colors, compared with actions, can be explained by the fact that colors lack a perceptual referent without vision, thus acquiring a different epistemological status in blind and sighted individuals (abstract vs. concrete).
Perceptual Similarity Is Encoded in Occipital Areas in the Sighted but Not in the Blind
The rationale behind the adaptation analyses was that the direct contrast between perceptually different and perceptually similar pairs would reveal neural adaptation (Barron et al., 2016; Grill-Spector et al., 2006; Wheatley et al., 2005), thereby probing regions that are specifically sensitive to the "perceptual distance" between concepts.
Behavioral Analysis
Similarity ratings were highly correlated between sighted and blind individuals, both for action (r = .99) and color concepts (r = .93; Figure 2A). To perform the adaptation analysis, we divided the trials into similar pairs (e.g., red–orange) and different pairs (e.g., red–blue) based on each participant's subjective ratings. Rating distributions for each participant and category (color, action) were divided into five intervals with a similar number of items (see Methods section for details). Stimulus pairs in the first two intervals were labeled as different (low similarity ratings), the third interval contained medium pairs, and the fourth and fifth intervals contained similar pairs (high similarity ratings; Figure 2B). Overall, the average number of "different" trials was slightly larger than that of "similar" trials (126 vs. 115), F(1, 33) = 8.41, p = .007, η2 = .20 (Figure 2C). However, there was no Similarity × Group interaction, F(1, 33) = 0.18, p = .67, η2 = .004, indicating that this imbalance (which reflected personal judgments of similarity) was the same across SC and EB (Figure 2C).1 An analysis of RTs showed that medium pairs (not analyzed in fMRI) had on average longer latencies than similar and different ones (main effect of Similarity: F(2, 66) = 21.07, p < .001, η2 = .38). This was expected because pairs that are neither clearly similar nor clearly different require longer and more difficult judgments. Crucially, there was no difference in RTs between different (mean = 1.80 sec, SD = 0.39) and similar pairs (mean = 1.79 sec, SD = 0.37), F(1, 33) = 0.09, p = .76, η2 = .003, and no Similarity × Group interaction, F(1, 33) = 0.04, p = .84, η2 = .001 (Figure 2D).
Figure 2. Adaptation, behavioral analysis. (A) Similarity judgments were highly correlated across groups, both for actions and colors. (B) Conceptual schema of the division of word pairs into "different" and "similar" based on subjective similarity ratings. (C) Bar plot depicting the average number of items in the "different," "medium," and "similar" categories. The number of items in the "different" and "similar" categories is very similar across groups (number of trials ± SEM). (D) Bar plot depicting the average RT in the "different," "medium," and "similar" categories. The average RTs of the "different" and "similar" categories are very similar across groups (seconds ± SEM).
fMRI Analysis
To find brain areas showing adaptation as a function of conceptual similarity, we looked at the contrast Different Pairs > Similar Pairs (medium pairs were modeled as a regressor of no interest). In sighted individuals, similar colors elicited repetition suppression in a circumscribed region at the border between the left fusiform gyrus and the left lingual gyrus, along the posterior bank of the collateral sulcus (peak: −21 −82 −7; Figure 3A). Interestingly, the posterior collateral sulcus (pCoS) is the color-sensitive area originally defined as V4 by Zeki et al. (1991), later identified as the posterior patch of the V4 complex (Lafer-Sousa, Conway, & Kanwisher, 2016; Beauchamp et al., 1999), and recently found to encode color-related knowledge during semantic processing (Fernandino et al., 2015). Still in sighted individuals, similar actions elicited repetition suppression in several regions of the posterior occipital cortex, including V4, V5, the posterior parahippocampal gyrus, and the middle occipital gyrus (see Table 5 and Figure 3B). Pooling together action and color trials, thus asking which brain regions encoded visual similarity independently of category, we found a significant cluster (whole-brain analysis) in and around the left lingual gyrus (peak coordinates: −24 −70 −7; Figure 4A). Whole-brain analysis in sighted individuals also showed that differences between the two categories (action and color) in terms of repetition suppression did not reach significance. However, planned ROI analysis in the motion-sensitive region V5 showed stronger repetition suppression for actions compared with colors (peak: −51 −76 8), t(33) = 3.04, p = .02.
Figure 3. Adaptation, fMRI results. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A) Suprathreshold voxels showing neural adaptation for similar color words in the left V4 of sighted participants. (B) Suprathreshold voxels showing neural adaptation for similar action words in the occipital cortex of sighted participants. (C) Suprathreshold voxels showing neural adaptation for similar color words in the temporal and precentral regions of blind participants. (D) Suprathreshold voxels showing neural adaptation for similar action words in the temporal and precentral regions of blind participants. Voxels thresholded at p < .005 uncorrected, for illustration only.
| Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE |
|---|---|---|---|---|---|---|
| Sighted, Different > Similar | | | | | | |
| L V4/pCoS | | −21 | −82 | −7 | 3.23 | .013a |
| Blind, Different > Similar | | | | | | |
| R superior temporal gyrus | 67 | 54 | −1 | −16 | 4.61 | .038b |
| R superior temporal gyrus | 132 | 60 | −28 | −5 | 4.11 | .022 |
| | S.C. | 51 | −22 | −1 | 3.46 | |
| R middle temporal gyrus | S.C. | 48 | −34 | −4 | 3.46 | |
| L superior temporal gyrus | 104 | −57 | −10 | −10 | 4.23 | .04 |
| R precentral gyrus | 249 | 21 | −28 | 59 | 4.44 | .001 |
| | S.C. | 39 | −13 | 50 | 4.33 | |
| | S.C. | 21 | −19 | 74 | 4.14 | |
| Sighted > Blind, Different > Similar | | | | | | |
| L lingual gyrus | | −21 | −82 | −7 | 3.40 | .008a |
| Blind > Sighted, Different > Similar | | | | | | |
| L superior temporal gyrus | | −48 | −31 | 20 | 2.85 | .036c |
| R superior temporal gyrus | | 51 | −28 | 20 | 3.11 | .028c |
Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.
a. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V4.
b. Brain activation significant after FWE correction at the voxel level over the whole brain.
c. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.
Figure 4. Adaptation, fMRI results. Regional BOLD responses are rendered over Conte-69 average midthickness surfaces. (A) Suprathreshold voxels showing neural adaptation for similar word pairs (both colors and actions) in the occipital cortices of sighted participants. (B) Suprathreshold voxels showing neural adaptation for similar word pairs (both colors and actions) in the temporal and somatosensory-motor cortices of early-blind participants. Suprathreshold voxels showing neural adaptation for similar word pairs (both colors and actions) in (C) sighted compared with blind participants and (D) blind compared with sighted participants. Voxels thresholded at p < .005 uncorrected, for illustration only; planned ROIs are indicated in transparent green on the inflated cortex. Average signal change (arbitrary units) extracted at group maxima within the planned ROIs, as a result of small volume correction, from (E) left V4, (F) left V5, and (G) left PLTC, for illustration only. Notice that the same statistical results are obtained when, instead of using small volume correction, we compare the average voxel activity within the planned ROIs: (i) left V4, Conceptual Similarity × Group interaction, F = 14.20, p < .001; (ii) left V5, Conceptual Similarity × Category × Group interaction, F = 4.84, p = .035; (iii) left PLTC, Conceptual Similarity × Group interaction, F = 5.60, p = .024; right PLTC, Conceptual Similarity × Group interaction, F = 8.51, p = .006.
Repetition suppression analysis in blind individuals provided very different results. Similar colors elicited repetition suppression in the superior and middle temporal gyri bilaterally, with a peak in the right anterior superior temporal gyrus (peak: 54 −1 −16; see Table 4 and Figure 3C). Moreover, significant repetition suppression emerged in the bilateral precentral gyrus. Similar actions also induced repetition suppression in the posterior superior temporal gyrus bilaterally, although results were relatively weaker (compared with color) and reached significance only in the ROI analysis (left peak: −54 −34 11, t(33) = 3.33, p = .03; right peak: 48 −28 8, t(33) = 3.71, p = .008; Table 5). At the whole-brain level, there was no significant difference between the two categories (action, color) in terms of adaptation effect; pooling the two categories, results showed that perceptual similarity in blind individuals was encoded in the bilateral middle and superior temporal gyri (Figure 4B).
| Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE |
|---|---|---|---|---|---|---|
| Sighted, Different > Similar | | | | | | |
| R parahippocampal gyrus | 208 | 15 | −40 | −7 | 4.64 | .007 |
| R lingual gyrus | S.C. | 24 | −55 | −10 | 3.93 | |
| | S.C. | 15 | −67 | −4 | 3.67 | |
| R middle occipital gyrus | 566 | 27 | −88 | 11 | 4.44 | .001 |
| | S.C. | 39 | −76 | −1 | 4.24 | |
| R superior occipital gyrus | S.C. | 21 | −88 | 23 | 3.76 | |
| L superior occipital gyrus | 207 | −24 | −88 | 23 | 4.17 | .007 |
| | S.C. | −15 | −94 | 23 | 3.92 | |
| L lingual gyrus | 156 | −18 | −67 | −7 | 3.91 | .02 |
| Blind, Different > Similar | | | | | | |
| L superior temporal gyrus | | −54 | −34 | 11 | 2.88 | .030a |
| R superior temporal gyrus | | 48 | −28 | 8 | 3.36 | .008a |
| Sighted > Blind, Different > Similar | | | | | | |
| R middle occipital gyrus | 442 | 27 | −85 | 11 | 4.49 | .001 |
| | S.C. | 36 | −76 | 1 | 4.11 | |
| R posterior fusiform gyrus | S.C. | 39 | −70 | −13 | 3.85 | |
| L V4/pCoS | | −27 | −79 | −10 | 2.98 | .024b |
| L V5 | | −45 | −79 | 2 | 3.36 | .008c |
Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.
a. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.
b. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V4.
c. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) in V5.
Between-group Comparisons
Between-group statistical comparisons confirmed the picture emerging from the within-group analyses. Pooling color and action trials, sighted participants showed greater adaptation for perceptually similar concepts in several occipital areas, with peaks in the left superior occipital gyrus (−24 −91 26), the left lingual gyrus (−24 −70 −7), and the right middle occipital gyrus (27 −85 11). The contrast Blind > Sighted did not show significant results at the whole-brain level. However, planned ROI analysis in the PLTC revealed significantly greater adaptation for similar concepts in blind than in sighted individuals (Conceptual Similarity × Group interaction: left PLTC = −48 −31 20, t(33) = 2.75, p = .04; right PLTC = 45 −28 11, t(33) = 3.13, p = .015).
Category-specific results were also in line with what was previously observed. Perceptually similar colors induced stronger adaptation in V4 in sighted people and in the bilateral PLTC in blind people (Figure 4E–G; Table 4). On the other hand, perceptually similar actions showed stronger adaptation in several occipital areas in sighted individuals (e.g., V4, V5, right posterior fusiform), whereas no significantly greater adaptation was found in blind individuals (Figure 4E–G; Table 5), suggesting that the adaptation effect in the left/right PLTC was driven mostly by color words (although the Conceptual Similarity × Group × Category interaction was not significant in this ROI; see Figure 4G and Table 6).
| Area | k | x (mm) | y (mm) | z (mm) | Z | pFWE |
|---|---|---|---|---|---|---|
| Sighted, Different > Similar | | | | | | |
| L lingual gyrus | 180 | −24 | −70 | −7 | 4.33 | .013 |
| | S.C. | −15 | −70 | −10 | 4.32 | |
| Blind, Different > Similar | | | | | | |
| L middle temporal gyrus | 119 | −60 | −10 | −7 | 4.20 | .049 |
| | S.C. | −63 | −22 | 2 | 3.90 | |
| L superior temporal gyrus | S.C. | −51 | −13 | −1 | 3.39 | |
| R superior temporal gyrus | 323 | 57 | −28 | 8 | 4.23 | .001 |
| | S.C. | 48 | −37 | 20 | 3.92 | |
| | S.C. | 54 | −22 | 2 | 3.92 | |
| L postcentral gyrus | 105 | −48 | −13 | 38 | 4.12 | .07a |
| R precentral gyrus | 121 | 27 | −25 | 56 | 3.71 | .02 |
| | S.C. | 36 | −22 | 62 | 3.70 | |
| | S.C. | 36 | −16 | 44 | 3.45 | |
| Sighted > Blind, Different > Similar | | | | | | |
| L superior occipital gyrus | 141 | −24 | −91 | 26 | 4.35 | .03 |
| | S.C. | −9 | −97 | 20 | 3.77 | |
| L middle occipital gyrus | S.C. | −15 | −100 | 14 | 3.75 | |
| R middle occipital gyrus | 250 | 27 | −85 | 11 | 3.97 | .003 |
| | S.C. | 36 | −79 | 2 | 3.67 | |
| R superior occipital gyrus | S.C. | 27 | −79 | 23 | 3.79 | |
| L lingual gyrus | 165 | −24 | −70 | −7 | 3.91 | .017 |
| L middle occipital gyrus | S.C. | −39 | −73 | 2 | 3.48 | |
| L middle occipital gyrus | S.C. | −27 | −94 | 2 | 3.25 | |
| Blind > Sighted, Different > Similar | | | | | | |
| L superior temporal gyrus | | −48 | −31 | 20 | 2.75 | .040b |
| R superior temporal gyrus | | 45 | −28 | 11 | 3.13 | .015b |
Significance corrections are reported at the cluster level, unless otherwise specified; cluster size threshold = 50. L = left; R = right; S.C. = same cluster.
a. Indicates marginally significant clusters (pFWE < .1).
b. Brain activation significant after FWE voxel correction over a small spherical volume (8-mm radius) at peak coordinates for right/left PLTC.
DISCUSSION
Embodied approaches to conceptual knowledge suggest that concepts are partly grounded in our sensory and motor experience of the world (Binder & Desai, 2011; Barsalou, 1999). A straightforward hypothesis emerging from these theories is that people who perceive the world in a different way, such as blind people, should also have different conceptual representations (Casasanto, 2011).
In our study, we tested this hypothesis by characterizing the brain activity of sighted and early-blind individuals while they rated the perceptual similarity of action and color concepts during fMRI. In particular, we investigated which brain regions encode the perceptual similarity of retrieved concepts using an adaptation paradigm. Results in the sighted group showed that word pairs referring to similar colors induced repetition suppression in a circumscribed region along the posterior segment of the left collateral sulcus (Figure 2E). This region is part of the V4 complex, which is involved in the perception of color (Lafer-Sousa et al., 2016; Beauchamp et al., 1999; Zeki et al., 1991) and has been shown to be particularly sensitive to the retrieval of color knowledge (Fernandino et al., 2015). On the other hand, word pairs referring to similar actions led to adaptation in several regions of the occipital lobe, including V4, V5, the posterior parahippocampal gyrus, and the middle occipital gyrus (Figure 3B). This broader pattern of adaptation may reflect the multifaceted nature of action representations, which integrate several visual domains, such as body, shape, movement, and scene. That both colors and actions elicit repetition suppression in the V4 region is not surprising, because V4 has also been implicated in shape/object representations (Chang, Bao, & Tsao, 2017; Fernandino et al., 2015).
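To make the adaptation logic concrete, the following sketch shows how a Different > Similar contrast could be estimated for a single run with nilearn. The file names, onsets, and TR are hypothetical placeholders; the published analysis used its own preprocessing and GLM pipeline.

```python
# Minimal sketch: word pairs rated "similar" should elicit a weaker BOLD
# response than "different" pairs in regions encoding the relevant perceptual
# dimension, so the contrast Different > Similar indexes repetition suppression.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Events table: one row per word pair, labelled by the subjective rating.
events = pd.DataFrame({
    "onset":      [12.0, 30.0, 48.0, 66.0],   # seconds (hypothetical)
    "duration":   [4.0, 4.0, 4.0, 4.0],
    "trial_type": ["similar", "different", "different", "similar"],
})

glm = FirstLevelModel(t_r=2.0, noise_model="ar1",
                      hrf_model="spm", smoothing_fwhm=6)
glm = glm.fit("sub-01_task-adaptation_bold.nii.gz", events=events)

# Positive values = adaptation: less activity for perceptually similar pairs.
zmap = glm.compute_contrast("different - similar", output_type="z_score")
zmap.to_filename("sub-01_diff_gt_sim_zmap.nii.gz")
```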
Crucially, early-blind participants did not show repetition suppression in posterior occipital areas, revealing a strikingly different neurobiology of concepts as a function of visual deprivation. Posterior occipital regions are known to encode visual features in sighted individuals and to be sensitive to visual similarity independently of categorical membership (Borghesani et al., 2016; Bracci & Op de Beeck, 2016; Proklova, Kaiser, & Peelen, 2016; Fernandino et al., 2015; Connolly et al., 2012; Naselaris et al., 2009; Kriegeskorte et al., 2008). Our results are thus in line with the prediction that this level of representation (i.e., low-level visual similarity) is not available to blind people and is highly sensitive to visual deprivation, in contrast to a categorical level of representation that is only partially determined by visual appearance (Peelen & Downing, 2017; Bracci & Op de Beeck, 2016). Indeed, previous studies showed that the anterior part of the ventral occipitotemporal cortex (VOTC), known for its category-specific parcellation (Grill-Spector & Weiner, 2014), presents a similar functional and connectivity fingerprint in sighted and blind individuals (van den Hurk et al., 2017; Wang et al., 2015), whereas this similarity decreases in more posterior occipital regions (Wang et al., 2015). We suggest that this is due, at least in part, to the fact that these posterior occipital regions represent perceptual (visual) features during conceptual retrieval in sighted, but not in blind, individuals.
On the other hand, perceptually similar colors and actions induced greater adaptation in the posterior lateral temporal cortex (PLTC) of blind individuals. Several previous studies using our adaptation methodology with sighted participants, but with shorter SOAs and semantically related words (e.g., dog–leash), elicited adaptation in the same region (Bedny et al., 2008; Wible et al., 2006; Kotz et al., 2002), supporting its role in coding semantic relatedness (Bedny et al., 2008). These results suggest that blind individuals may rely, more than sighted people, on a semantic code to retrieve perceptual similarities. This hypothesis is in line with studies showing greater reliance on verbal knowledge (instead of a visuospatial code) in the blind population when retrieving information from memory (Bottini, Mattioni, & Collignon, 2016; Cattaneo et al., 2008). Importantly, the activation of a semantic code, instead of a visual one, can account for the fact that similarity ratings of actions and colors were highly comparable between sighted and blind individuals, despite their different brain activation patterns. This is particularly interesting in the case of color knowledge, which cannot be derived directly from nonvisual experience. It has been suggested that a purely amodal semantic system can contain information about the perceptual characteristics of the world (Lewis, Zettersten, & Lupyan, 2019; Louwerse & Connell, 2011) and that congenitally blind people show good knowledge of the similarities among basic color terms (Barilari, de Heering, Crollen, Collignon, & Bottini, 2018; Saysani, Corballis, & Corballis, 2018; Marmor, 1978). Although the indirect remapping of color knowledge onto nonvisual perceptual properties of objects (e.g., temperature) may play a role in color conceptualization in the blind, there is also evidence that verbal knowledge of color similarity can be enough to provide reliable information about a few basic color categories, like the ones used in our task. However, when finer-grained perceptual knowledge is evaluated, early-blind and sighted people show behavioral differences in their representation of color concepts (Kim, Elli, & Bedny, 2019).
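The claim that similarity ratings were comparable across groups can be quantified by correlating the group-averaged pairwise rating matrices, as in the minimal sketch below. The rating values are invented placeholders; only the comparison logic is intended.

```python
# Minimal sketch: correlate group-averaged pairwise similarity ratings,
# counting each unordered word pair once (upper triangle of the matrix).
import numpy as np
from scipy.stats import spearmanr

n_words = 8  # e.g., 8 color words -> 28 unique pairs
rng = np.random.default_rng(0)
ratings_sighted = rng.uniform(1, 7, size=(n_words, n_words))   # hypothetical
ratings_blind = ratings_sighted + rng.normal(0, 0.5, size=(n_words, n_words))

iu = np.triu_indices(n_words, k=1)          # indices of unique pairs
rho, p = spearmanr(ratings_sighted[iu], ratings_blind[iu])
print(f"between-group rating correlation: rho = {rho:.2f}, p = {p:.3f}")
```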
As predicted, contrasting the two categories (action vs. color), independently of within-category perceptual similarity, showed some commonalities in brain activity across the two groups (Peelen, He, Han, Caramazza, & Bi, 2014; Bedny et al., 2012; Noppeney et al., 2003), but also some differences (Striem-Amit et al., 2018). Both sighted and blind individuals engaged the lpMTG during action processing. This area is located outside the visual cortex, in a multimodal region that typically displays category specificity in a format at least partially independent of perceptual appearance (Peelen & Downing, 2017; Bracci & Op de Beeck, 2016) and that is typically resilient to visual deprivation (Dormal et al., 2017; Wang et al., 2015; Bedny et al., 2012; Noppeney et al., 2003).2
On the other hand, the contrast Color > Action did not reveal strong commonalities between groups.3 Instead, the posterior portion of the right intraparietal sulcus (rIPS) showed a stronger preference for color trials in sighted compared with blind individuals. The IPS is known to be involved in the perception of color (Cheadle & Zeki, 2014; Zeki & Stutters, 2013; Beauchamp et al., 1999) as well as of other visual features (Swisher, Halko, Merabet, McMains, & Somers, 2007; Xu, 2007; Grill-Spector, 2003). Its anatomical position makes it a good candidate for working at the interface between perceptual and conceptual representations (Cheadle & Zeki, 2014) and, thus, for being sensitive to the lack of a perceptual referent during conceptual retrieval. Interestingly, the peak of color-specific activity that we found (peak coordinates: 33 −43 35) lies only about 11 mm from the color-specific rIPS area reported by Cheadle and Zeki (2014; peak coordinates: 30 −39 45). The lack of visual input in blind people prevents the formation of perceptually driven color representations, which may limit the contribution of the IPS during the retrieval of color knowledge. This is not the case for action representation, which has a direct perceptual referent in blind individuals as well (via the spared senses).
What is the role of modality-specific simulations, such as those elicited in the visual cortices of sighted people, during conceptual retrieval? It has been suggested that modality-specific representations may not always be necessary during conceptual retrieval and could instead be related to processing demands (Ostarek & Huettig, 2017; Bottini, Bucur, & Crepaldi, 2016). For instance, visual representations of low-level characteristics of objects may not be activated when the task requires only shallow processing, such as a lexical decision (Ostarek & Huettig, 2017) or an orthogonal conceptual judgment (e.g., judging semantic relatedness; Martin, Douglas, Newsome, Man, & Barense, 2018). In contrast, our task required a relatively deep conceptual discrimination based on perceptual features, which may have triggered the reenactment of perceptual-like processing in the occipital cortex of sighted people.
From a broader theoretical point of view, the results of this study are in line with a hierarchical model of conceptual representation based on progressive levels of abstraction (Barsalou, 2016; Binder, 2016; Binder et al., 2016; Fernandino et al., 2015; Martin, 2015). At the top of the hierarchy, multimodal representations may coexist with purely symbolic ones, organized in an explicit propositional code (Mahon & Caramazza, 2008) or in a linguistic code based on word–word co-occurrence statistics (Landauer & Dumais, 1997). Many conceptual processes may involve only these higher levels of representation, supported by conceptual hubs such as the pMTG (Fernandino et al., 2015; Binder & Desai, 2011; Binder et al., 2009), as well as by category-sensitive regions in the anterior VOTC (Peelen & Downing, 2017; Binder & Desai, 2011) or language regions in temporal cortices (Anderson et al., 2017; Anderson, Bruni, Lopopolo, Poesio, & Baroni, 2015). Moreover, multimodal and abstract representations may be ideally suited to interact with lexical entries during the compositional (Binder, 2016), highly automatic (Bottini, Bucur, et al., 2016), or shallow (Ostarek & Huettig, 2017) semantic processing that is continuously required by natural language (Binder, 2016).
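As a toy illustration of the word–word co-occurrence code referenced above (Landauer & Dumais, 1997), the sketch below derives LSA-style word embeddings from a small, invented co-occurrence matrix and reads conceptual similarity off the embedding geometry. The words, counts, and embedding dimensionality are all hypothetical.

```python
# Toy sketch of the distributional idea behind LSA: factorize a word-by-context
# co-occurrence matrix and treat vector proximity as semantic similarity.
import numpy as np

words = ["red", "crimson", "green", "run"]
# Hypothetical co-occurrence counts with four context terms.
counts = np.array([[10, 8, 1, 0],     # red
                   [9, 7, 1, 0],      # crimson
                   [1, 1, 9, 0],      # green
                   [0, 0, 1, 12]])    # run

# Truncated SVD of the (log-damped) counts gives the low-rank semantic space.
U, S, Vt = np.linalg.svd(np.log1p(counts), full_matrices=False)
emb = U[:, :2] * S[:2]                 # 2-dimensional word embeddings

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb[0], emb[1]))   # red vs. crimson: high similarity
print(cosine(emb[0], emb[3]))   # red vs. run: low similarity
```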
Overall, our results show that early visual deprivation does partially change the neural bases of conceptual retrieval, but only at specific levels of representation: Whereas the category specificity of concepts is largely retained, the fine-grained visual features of objects, properties, and events, as represented in modality-specific occipital areas, are not available to blind individuals. People who perceive the world in a different way have, at least in part, different grounded conceptual representations.
Acknowledgments
This work was supported by a European Research Council starting grant (MADVIS Grant 337573) attributed to O. C. O. C. is a research associate at the Fond National de Recherche Scientifique de Belgique. We wish to extend our gratitude to Michela Picchetti, Mattia Verri, and Alberto Redolfi for their technical support during fMRI acquisition. We are also extremely thankful to our blind participants; the Unione Ciechi e Ipovedenti in Trento, Milano, Savona, and Trieste; and the Blind Institute of Milano. We also thank Yanchao Bi and Xiaoying Wang for sharing brain maps from their previously published data. R. B. and O. C. designed the research; R. B., S. F., A. N., and V. C. performed the research; R. B. analyzed the data in interaction with O. C.; R. B. and O. C. drafted the article; all authors revised and edited the draft and agreed on the final version of the article.
Reprint requests should be sent to Roberto Bottini, University of Trento (CIMeC), Via delle Regole 101, 38123 Mattarello, Italy (e-mail: bottini.r@gmail.com or roberto.bottini@unitn.it), or to Olivier Collignon, University of Louvain (UCL), 10 Place du Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium.
Notes
1. Results remained identical when, in a control analysis suggested by one of the reviewers, we adjusted the number of "different" and "similar" trials in each subject.
2. It is, however, important to note that activation in the same areas does not mean identical representation across blind and sighted people. Indeed, the overlap of activity in the VOTC of blind and sighted people may partially relate to the simulation of different sensory information (i.e., visual representations in sighted individuals and tactile representations in blind individuals), owing to the well-known crossmodal plasticity observed in congenitally blind people (Frasnelli et al., 2011).
3. However, at a lower threshold, a unique common activation in the right precuneus emerged for the contrast Color > Action.