Seeing or hearing manual actions activates the mirror neuron system, that is, specialized neurons within motor areas which fire not only when an action is performed but also when it is passively perceived. Using TMS, it was shown that the motor cortex of typically developed subjects becomes facilitated not only by seeing others' actions, but also by merely hearing action-related sounds. In the present study, TMS was used for the first time to explore the “auditory” and “visual” responsiveness of motor cortex in individuals with congenital blindness or deafness. TMS was applied over left primary motor cortex (M1) to measure cortico-motor facilitation while subjects passively perceived manual actions (either visually or aurally). Contrary to expectation, congenitally blind or deaf subjects displayed substantially lower resonant motor facilitation upon action perception compared to seeing/hearing control subjects. Moreover, muscle-specific changes in cortico-motor excitability within M1 appeared to be absent in individuals with profound blindness or deafness. Overall, these findings strongly argue against the hypothesis that an increased reliance on the remaining sensory modality in blind or deaf subjects is accompanied by an increased responsiveness of the “auditory” or “visual” perceptual–motor “mirror” system, respectively. Moreover, the apparent lack of resonant motor facilitation in the blind and deaf subjects may challenge the hypothesis of a unitary mirror system underlying human action recognition and may suggest that action perception in these subjects engages a mode of action processing that is different from the human action recognition system recruited in typically developed subjects.
In everyday life, the integration of information from different sensory modalities is essential for providing a unified perception and recognition of the behavior and actions of other people. It is believed that this form of perception–action coupling relies on parieto-frontal circuits, particularly on a specific class of motor neurons which have the striking property of discharging not only when an action is performed but also when the same action is observed (Di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992). These so-called mirror neurons were first discovered in monkeys and are thought to play an important role in action recognition by matching visual representations of observed actions to motor plans (Rizzolatti & Craighero, 2004). In the past decade, a number of neurophysiological and neuroimaging studies have provided mounting evidence that a similar “mirror system” also exists in the human brain (Buccino et al., 2001; Strafella & Paus, 2000; Iacoboni et al., 1999; Hari et al., 1998; Rizzolatti et al., 1996). For example, using TMS, Fadiga, Fogassi, Pavesi, and Rizzolatti (1995) showed that the motor system of the observer “resonates” with perceived movements in such a way that the muscles involved in a certain action become facilitated by the mere observation of this action. Interestingly, however, visual perception is not the only input shown to activate the mirror system. Kohler et al. (2002) reported that a fraction of monkey mirror neurons also discharges from merely listening to action-related sounds, such as the breaking of a peanut. Similarly, in humans, sensitivity of the mirror system to aurally presented action sounds has been reported (Lahav, Saltzman, & Schlaug, 2007; Gazzola, Aziz-Zadeh, & Keysers, 2006; Aziz-Zadeh, Iacoboni, Zaidel, Wilson, & Mazziotta, 2004).
Moreover, results from a recent TMS study indicated that perception-induced motor facilitation was substantially higher when congruent auditory and visual information was presented simultaneously than when auditory or visual input was presented in isolation (Alaerts, Swinnen, & Wenderoth, 2009). Also in monkeys, approximately half of the explored audiovisual mirror neurons (8 out of 22) were shown to exhibit more vigorous responses to congruent multimodal input (seeing and hearing the action event) compared to unimodal input (only seeing or hearing the action event) (Keysers et al., 2003). Together, these observations suggest that perception-induced “action retrieval” may benefit substantially from the simultaneous input of vision and sound describing the same action event. However, considering the apparent multimodal nature of movement perception, questions emerge concerning the recruitment of perceptual–motor “mirror” pathways in individuals who lack a sensory modality from birth (either sight or audition).
Here, we addressed this question by measuring cortico-motor facilitation with TMS in primary motor cortex (M1) during action perception (either aurally or visually presented) in congenitally blind and deaf subjects. Specifically, we determined whether the extent of perception-induced motor facilitation was similar for “unimodally developed” (UD) individuals compared to “typically developed” (TD) control subjects.
We hypothesized that an increased reliance on the remaining sensory modality to process human actions (in the UD group) may increase functional recruitment of audiomotor mapping pathways in blind individuals or visuomotor mapping pathways in deaf individuals. Overall, this “adaptation hypothesis” predicts substantially higher perception-induced M1 facilitation during (unimodal) action perception in the UD subjects compared to the TD controls. Alternatively, however, assuming that the mirror system is essentially multimodal in nature, the possibility cannot be ruled out that a congenital lack of sight or audition may interfere with the typical development of a perceptual–motor mirror system. According to the latter hypothesis, overall motor facilitation during action perception is expected to be substantially reduced in the UD subjects compared to the TD subjects.
Participants
Five subjects with congenital blindness (mean age ± SD: 33.1 ± 10.4 years; 2 women, 3 men) and five sighted control subjects (mean age ± SD: 30.6 ± 7.8 years; 2 women, 3 men) were recruited. All blind subjects reported being congenitally blind, with a total loss of sight from birth. Causes of blindness were Leber congenital amaurosis (n = 3) and congenital glaucoma (n = 2).
Eight subjects with congenital and profound deafness (mean age ± SD: 36.2 ± 9.2 years; 3 women, 5 men) and eight hearing control subjects (mean age ± SD: 34.8 ± 9.7 years; 3 women, 5 men) were recruited. Profound deafness was established by means of an audiogram (hearing loss >90 dB in all subjects), and all subjects confirmed the congenital nature of their deafness. The cause of congenital deafness was unknown. All deaf subjects had used Flemish Sign Language (FSL) as their primary language of communication from birth.
All participants, except two deaf subjects, were right-handed, as assessed with the Edinburgh Handedness Questionnaire (Oldfield, 1971), and were naive about the purpose of the experiment. Written informed consent was obtained before the experiment, and participants were screened for potential risk of adverse effects during TMS. The experimental procedure, as well as the informed consent, was approved by the local Ethics Committee for Biomedical Research at the Katholieke Universiteit Leuven in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) (Rickham, 1964).
EMG and TMS
Surface electromyography (EMG) was performed with Ag–AgCl electrodes (Blue Sensor SP) placed over the muscle belly and aligned with the longitudinal axis of the muscle. EMG activity was recorded simultaneously from the right opponens pollicis (OP) thumb muscle and the wrist flexor carpi radialis (FCR) muscle. Only the OP muscle, not the FCR muscle, was shown to be substantially involved in the aurally or visually presented action (i.e., crushing of a small plastic bottle by abducting the right thumb) (Alaerts et al., 2009). Focal TMS was performed by means of a 70-mm figure-of-eight coil connected to a Magstim 200 stimulator (Magstim, Whitland, Dyfed, UK). The coil was positioned over the left hemisphere, tangentially to the scalp with the handle pointing backward and laterally at 45° away from the mid-sagittal line, such that the induced current flow was in a posterior–anterior direction, that is, approximately perpendicular to the central sulcus. The optimal scalp position was defined as the position from which motor evoked potentials (MEPs) with maximal amplitude were recorded in the right OP muscle. The resting motor threshold (rMT) was defined as the lowest stimulus intensity evoking MEPs in the OP with an amplitude of at least 50 μV in 5 out of 10 consecutive stimuli (Rossini et al., 1994). Subjects' rMT, expressed as a percentage of the maximum stimulator output, varied from 28% to 59% [blind subjects: 32% to 53%; sighted control subjects: 28% to 48%; deaf subjects: 32% to 48%; hearing control subjects: 31% to 59%]. For all experimental trials, stimulation intensity was set at 130% of the subject's rMT. Parameter-setting procedures were prioritized for the OP muscle, but MEPs were simultaneously obtained for the FCR muscle. FCR stimulation parameters were assumed to be sufficiently similar, given the overlapping cortical representations of finger and forearm flexor muscles (Schieber, 1990).
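As a concrete illustration, the rMT criterion above (the lowest stimulator intensity at which at least 5 of 10 consecutive stimuli evoke an MEP of ≥50 μV) can be sketched in a few lines of Python. The function name, data layout, and example intensities below are purely illustrative and are not part of the study's actual analysis software.

```python
def resting_motor_threshold(mep_trials):
    """Return the lowest stimulator intensity (% of maximum output) at
    which at least 5 of 10 stimuli evoked an MEP of >= 50 microvolts
    (Rossini et al., 1994 criterion), or None if never reached.

    mep_trials: dict mapping intensity (% max output) to a list of 10
    peak-to-peak MEP amplitudes in microvolts (hypothetical layout).
    """
    for intensity in sorted(mep_trials):
        if sum(amp >= 50 for amp in mep_trials[intensity]) >= 5:
            return intensity
    return None

# Hypothetical measurements in which MEPs become reliable at 40% output:
trials = {
    35: [10, 20, 60, 15, 30, 55, 25, 40, 20, 10],    # 2/10 above 50 uV
    40: [60, 70, 45, 80, 55, 90, 30, 65, 40, 75],    # 7/10 above 50 uV
    45: [90, 110, 95, 120, 100, 85, 130, 90, 105, 95],
}
rmt = resting_motor_threshold(trials)
# Experimental stimulation intensity was set at 130% of rMT:
stim_intensity = 1.30 * rmt if rmt is not None else None
```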
EMG recordings were sampled at 5000 Hz (CED Power 1401; Cambridge Electronic Design, UK), amplified, band-pass filtered (30–1500 Hz), and stored on a PC for off-line analysis. Signal Software (2.02 Version; Cambridge Electronic Design, UK) was used for TMS triggering and EMG recordings.
Participants were seated in a comfortable chair in front of a Dell P992 monitor (resolution: 1024 × 768 pixels; refresh rate: 60 Hz) with two audio speakers, one on each side of the screen. Audio/visual video clips (Audio–Video Interleaved [AVI]) were displayed at a frame rate of 25 frames per second. Video presentation timing was controlled by Blaxton Video Capture software (South Yorkshire, UK). Before the experimental session, auditory or visual stimuli were presented to the subjects to familiarize them with the experimental stimuli and to avoid potential “startling effects” during the actual MEP measurements (blind and sighted controls only heard the stimuli; deaf and hearing controls only saw the stimuli). In addition, to specifically familiarize subjects with the aurally or visually presented action (i.e., crushing of a small plastic bottle by abducting the right thumb), subjects were asked to actively execute the action several times. The sighted control subjects were instructed to keep their eyes closed during the entire experimental session.
During the experimental session, participants were instructed to keep their hands and forearms as relaxed as possible and to pay full attention to the audio/video presented. Vision of their own hand and forearm was never allowed. Muscle relaxation was monitored, and, whenever increased EMG activity became apparent during data collection, the trial was discarded and repeated.
The experimental stimulus (action) consisted of a video clip of a simple hand action which was aurally or visually presented to the subjects. A single-object manipulation was used, that is, the crushing of a small plastic drinking bottle by abducting the thumb of the right hand (Figure 1A). This action was chosen because it is part of the motor repertoire of all subjects and, moreover, produces a very distinctive sound with which the blind and sighted controls were easily familiarized (i.e., by actively executing the action several times). A control video displaying a small stream of flowing water in a natural environment (Figure 1B) was aurally or visually presented to measure baseline cortico-motor activity in response to a sensory stimulus not related to actions (control). Additionally, cortico-motor excitability was measured when a video was presented showing only an empty white background without any sound (noStim). All videos were only aurally presented to the blind and sighted control subjects (control subjects sat with eyes closed). The videos were visually presented with muted sound to the deaf and hearing control subjects.
Sounds were controlled for intensity (recorded at similar amplitudes) and played at a volume which was comfortable for the subject. Action and baseline videos were presented 24 times in blocks of four videos, with the order of the blocks randomized within and across subjects. Within each block, the same video was repeated four times. Between the blocked videos, a black screen was shown for an interval of 3 sec. The interval between two blocks of videos was approximately 2 min. During the presentation of each video, a single TMS pulse was delivered randomly within the interval of 4.1–4.8 sec after video onset (i.e., during the actual contraction phase of the right OP muscle). By choosing this timing of TMS stimulation, MEPs were recorded during the time interval at which the “crushing” sound of the action became apparent (3.5–5.5 sec after video onset) (Figure 1). Within a given block, the interstimulus interval lasted for 8–9 sec. In total, 72 MEPs were recorded for each subject and each muscle (OP and FCR).
From the EMG data, peak-to-peak amplitudes of the MEPs were determined. Because background EMG is known to modulate the MEP amplitude (Devanne, Lavoie, & Capaday, 1997; Hess, Mills, & Murray, 1987), prestimulation EMG was assessed for all subjects by calculating root-mean-square error (RMSE) scores across a 50-msec interval prior to TMS stimulation. For each subject and for each muscle separately, the mean and standard deviation of the background EMG scores were computed over all trials. Trials with background EMG deviating from the mean by more than 2.5 standard deviations were removed from further analysis. Finally, extreme peak-to-peak MEP amplitude values were considered outliers and were removed from the analysis when they exceeded Q3 + 1.5 × (Q3 − Q1), with Q1 the first quartile and Q3 the third quartile computed over the whole set of trials for each subject (Electronic Statistics Textbook, StatSoft, Inc., Tulsa, 2007). Following these criteria, 7% (34 out of 480) of all recorded MEPs were discarded from the analyses for the blind group, 5% (24 out of 480) for the sighted control group, 3.5% (26 out of 768) for the deaf group, and 6% (45 out of 768) for the hearing control group.
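For illustration, the two rejection rules can be sketched as follows. This is a minimal Python sketch: all names are hypothetical, and the stdlib quartile interpolation may differ slightly from the statistics package the authors used.

```python
import statistics

def reject_high_background(trials, k=2.5):
    """Drop trials whose prestimulus background EMG (RMSE over the
    50-msec window before TMS) deviates from the across-trial mean
    by more than k standard deviations."""
    rms = [t["rms"] for t in trials]
    mean, sd = statistics.mean(rms), statistics.stdev(rms)
    return [t for t in trials if abs(t["rms"] - mean) <= k * sd]

def reject_mep_outliers(amplitudes):
    """Drop peak-to-peak MEP amplitudes above the Tukey upper fence
    Q3 + 1.5 * (Q3 - Q1), computed over a subject's full trial set."""
    q1, _, q3 = statistics.quantiles(amplitudes, n=4)
    fence = q3 + 1.5 * (q3 - q1)
    return [a for a in amplitudes if a <= fence]
```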
The mean MEP score from the control condition was used to normalize MEP amplitudes recorded from the action condition (for each muscle separately) (MEP/MEPCONTROL × 100) in order to make them comparable across subjects. RMSE scores of the background EMG were normalized accordingly (RMSE/RMSECONTROL × 100).
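Expressed in code, the normalization is a one-line transformation; in this sketch, `control_mean` stands for the mean MEP amplitude from the control condition for the given muscle, and the example values are made up.

```python
def normalize_to_control(amplitudes, control_mean):
    """Express MEP amplitudes as a percentage of the mean MEP amplitude
    recorded in the control condition (MEP / MEP_CONTROL * 100)."""
    return [a / control_mean * 100 for a in amplitudes]

# Hypothetical action-condition MEPs of 1.0 and 3.0 mV against a
# control-condition mean of 2.0 mV:
normalized = normalize_to_control([1.0, 3.0], 2.0)  # -> [50.0, 150.0]
```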
To explore potential differences in perception-induced MEP responses measured from “unimodally developed” (UD) subjects (blind and deaf subjects) and “typically developed” (TD) subjects (sighted controls and hearing controls), a two-way factorial analysis of variance (ANOVA) was conducted on the normalized MEP amplitude scores recorded from the OP (target) muscle. Between-subjects factors were “group” (UD subjects, TD subjects) and “sensory input” (vision, audition). With this analysis, we can specifically test whether and how the loss of a sensory modality affects perception-induced motor facilitation: (i) The absence of any ANOVA effect would indicate that the loss of a modality has no substantial effect on perception-induced motor facilitation. (ii) The finding of a “group effect” would indicate that the loss of a modality results either in (iiA) a general enhancement or (iiB) a general decrement of perception-induced motor facilitation (depending on the direction of the effect). (iii) Finally, the finding of an interaction effect would indicate that the blind and deaf subject groups display mutually different patterns of perception-induced motor facilitation. In addition, for each subject group (unimodal blind, unimodal deaf, sighted controls, hearing controls), a one-way repeated measures ANOVA with the within-subjects factor “muscle” (OP, FCR) was performed to specifically test whether perception-induced MEP amplitude scores differed between the OP and the FCR muscle. To test whether mean MEP responses differed significantly from baseline (=0), one-sample t tests were performed for each group (UD and TD) and muscle (OP and FCR) separately (Bonferroni corrected).
To address whether peak-to-peak MEP amplitude scores were confounded by modulations in background muscle activity, the background EMG data (normalized RMSE-scores) were subjected to analogous statistical analyses as the MEP data.
Statistics were calculated with Statistica 7.0 (StatSoft Inc., Tulsa, USA). The level of significance was set to α = 0.05. All tests were two-tailed.
Results
MEP responses evoked from the OP muscle during action perception were significantly smaller in individuals with a loss of a sensory modality (i.e., the UD subject group) compared to the matched TD controls (Figure 2A). This finding was revealed by a strong main effect of “group” from the two-way ANOVA [F(1, 22) = 11.85, p < .01]. No interaction [F(1, 22) = .38, p = .55] or main effect of “sensory input” was found [F(1, 22) = .06, p = .81], which indicates that the group effect held irrespective of the type of modality lost (i.e., both in the blind and deaf groups, substantially lower MEP responses were evoked compared to their matched control groups; Figure 2B). Furthermore, within-group comparisons revealed that only in the control groups did action perception (either aurally or visually) result in selectively higher MEPs in the OP (target) muscle compared to the FCR (control) muscle [both F > 8.45, p < .05] (Figure 3B). No such muscle-specific facilitation was observed in the blind [F(1, 4) = 2.22, p = .21] or deaf subject group [F(1, 7) = .15, p = .71] (Figure 3A).
For each group (UD and TD) and each muscle (OP and FCR), an additional one-sample t test was applied to determine whether mean MEP responses differed significantly from baseline (=0). For the TD group, OP excitability [mean ± SEM: 11.7 ± 3.8] was significantly increased compared to baseline [t(12) = 3.04, p = .04], whereas no such difference was revealed for the FCR [−7.4 ± 4.9] [t(12) = −1.49, p = .64]. For the UD group, overall excitability tended to be lower than baseline [OP: −11.5 ± 4.7; FCR: −9.9 ± 3.9]; however, this difference failed to reach significance for both muscles [OP: t(12) = −2.44, p = .12; FCR: t(12) = −2.57, p = .10].
Additionally, it was checked whether the difference in mean MEP response between the control and noStim conditions (MEPNOSTIM − MEPCONTROL) (recorded from the OP (target) muscle) was close to zero (no difference) [blind: −0.02 ± 0.10; sighted: 0.03 ± 0.03; deaf: −0.07 ± 0.11; hearing: 0.004 ± 0.04] and did not differ between groups [one-way ANOVA with the factor group: F(3, 22) = 0.29, p = .83]. Additional analyses using noStim for normalization revealed an analogous pattern of results from the two-way ANOVA design [i.e., only a significant group effect, F(1, 22) = 7.05, p = .01, and no interaction, F(1, 22) = .05, p = .82, or main effect of sensory input, F(1, 22) = .92, p = .35]. Thus, the observed group differences were specific for action-related stimuli and did not result from differential responses to the control condition.
The background EMG was generally small and condition-induced modulations were minimal, such that statistics on the RMSE scores did not reveal any significant effects [all F < 1.8, p > .22]. This indicates that the MEP peak-to-peak amplitude scores were not confounded by modulations in background EMG.
Overall, these results indicate that the loss of a modality from birth (either vision or audition) results in a general decrement of perception-induced (muscle-specific) motor facilitation.
Discussion
The most striking result of the present study was the apparent lack of resonant motor facilitation for UD subjects (blind and deaf subjects) when perceiving hand movements (either aurally or visually). Specifically, action perception selectively enhanced motor output of the muscles mostly involved in the movement only for the TD control group, whereas this muscle-specific modulation was absent in the UD group.
The human mirror system is considered to provide a representation of actions which allows observers to map perceived actions onto their own motor system, in order to understand or interpret the actions made by others (Rizzolatti & Fabbri-Destro, 2008; Iacoboni et al., 2005; Gallese, Fadiga, Fogassi, & Rizzolatti, 1996). In the past, TMS has been used intensively to study excitability changes of primary motor cortex (M1) during the perception of actions performed by other individuals (Fadiga et al., 1995). Although M1 is not considered an actual part of the human mirror system, excitability modulations in M1 are broadly assumed to be indicative of mirror system activity as a consequence of the strong reciprocal cortico-cortical connections between M1 and the frontal node of the mirror system (ventral premotor area F5) (Dum & Strick, 2005; Shimazu, Maier, Cerri, Kirkwood, & Lemon, 2004; Matelli, Camarda, Glickstein, & Rizzolatti, 1986). In this respect, the present results from the TD subjects replicate previous findings on a perceptual–motor mirror system in humans by showing that both aurally and visually presented action stimuli evoke a muscle-specific increase in cortico-motor excitability within M1.
Perception-induced M1 facilitation was found to be substantially lower in the UD group, compared to the TD group, and no muscle-specific changes in M1 excitability were observed in the congenitally blind and deaf individuals. These findings strongly argue against the notion that an increased reliance on the remaining sensory modality in “unimodal” subjects is accompanied by an increased “auditory-induced” motor facilitation in blind or “visual-induced” motor facilitation in deaf subjects. On the contrary, the present data provide indications that the lack of sight or audition from birth may have important implications for the typical development of a functional perceptual–motor mirror system.
Within the UD group, a (nonsignificant) tendency toward a perception-induced decrease in M1 excitability compared to the baseline measurement was noted. Although highly speculative, the tentative reduction in MEP responses compared to baseline in UD subjects may have resulted from differences in the extent of intracortical inhibition (ICI) within M1. In the past, two studies used paired-pulse TMS designs to explore ICI within M1 during movement observation, and both reported, for typically developed subjects, that increased MEP responses were accompanied by a selective reduction of ICI within the agonist muscle (Patuzzo, Fiaschi, & Manganotti, 2003; Strafella & Paus, 2000). Future studies adopting similar paired-pulse paradigms should be conducted in blind and deaf volunteers to explore the possibility of increased ICI in M1 during movement perception.
Action Perception in Congenitally Deaf Individuals
The finding of aberrant motor resonance, both in the blind and deaf subjects, may indicate that lifelong experience with combined “multimodal” auditory and visual action-related sensory input is a prerequisite for a typically developing perceptual–motor mirror system. This hypothesis therefore suggests that perceptual–motor mapping pathways are essentially multimodal in nature and that the atypical motor responsiveness, as found both in the deaf and blind subjects, is directly associated with the lack of any “integrated” audiovisual experience from birth. Alternatively, however, it has been hypothesized that for subjects with profound deafness from birth, not the lack of auditory experience per se, but rather a lifelong experience with a visuo-manual sign language, may significantly alter the processing of nonlinguistic human actions (Corina & Knapp, 2008). In the present study, all participating deaf subjects had Flemish Sign Language (FSL) as their primary language of communication from birth, and this altered experience with manual communication may have modified the neurophysiological system that underlies action perception in native deaf signers.
To date, only two other groups have directly explored the responsiveness of the mirror system during perception of manual actions in native deaf signers, using PET imaging (Corina et al., 2007) and fMRI (Emmorey, Xu, Gannon, Goldin-Meadow, & Braun, 2010). Consistent with the present findings, both studies reported that deaf signers did not engage a fronto-parietal mirror network during the passive viewing of nonlinguistic manual actions (self-directed or object-directed) (Corina et al., 2007) or pantomime actions (Emmorey et al., 2010). Hearing nonsigners, on the other hand, exhibited robust activation of the mirror network in response to the same action stimuli. Moreover, Corina et al. (2007) also revealed that deaf signers, as compared to hearing nonsigners, rely substantially more on extrastriate association areas during the processing of noncommunicative actions. In this context, the authors suggested that signers may use these extrastriate regions to quickly identify and filter nonlinguistic gestures from linguistic ones (Corina et al., 2007).
Although an entirely different experimental approach was adopted in the present study, convergent results were revealed in terms of atypical motor resonance upon action perception in deaf signers. Taken together, these and previous results may challenge the hypothesis of a unitary mirror system underlying human action recognition and suggest that, in the case of deaf signers, the on-line differentiation of human actions as linguistic or nonlinguistic may induce a mode of action processing that is different from the typical human action recognition system recruited in nonsigning subjects (Knapp & Corina, 2010; Corina & Knapp, 2008). Future studies exploring perception-induced motor resonance in “orally” deaf individuals (i.e., communicating predominantly with oral language instead of sign language) are needed to determine whether auditory deprivation or sign language acquisition is the key factor that reduces resonant motor responses to action-related visual input.
Action Perception in Congenitally Blind Individuals
As mentioned previously, the finding of aberrant motor resonance, both in blind and deaf subjects, may indicate that lifelong experience with combined “multimodal” auditory and visual action-related sensory input is a prerequisite for a perceptual–motor mirror system to develop and function in a typical way. However, it has been argued that sensory–motor associations arise primarily through the correlated experience of executing and visually perceiving the same actions (Heyes, Bird, Johnson, & Haggard, 2005; Heyes, 2003). This might suggest that “visual” action-related experience (compared to auditory experience) is far more crucial for a typical development of the mirror system. In this respect, although previous reports have suggested that the mirror system is also responsive to aurally presented action stimuli (Alaerts et al., 2009; Lahav et al., 2007; Gazzola et al., 2006; Aziz-Zadeh et al., 2004), the possibility cannot be ruled out that nonvisual recruitment of mirror areas is largely a consequence of visually based motor imagery (i.e., elicited by the lifelong co-occurrence of action-related vision and sound). The lack of any visual experience, and thus of visually based motor imagery, in congenitally blind individuals may therefore be an alternative explanation for the apparent lack of resonant motor facilitation during action-sound listening.
To date, only one other group directly explored the responsiveness of the mirror system during auditory action perception in individuals with congenital blindness (Ricciardi et al., 2009). In this study, fMRI was used to assess brain activity in congenitally blind individuals and sighted volunteers during the auditory presentation of manual actions and environmental sounds. Overall, results indicated that both the blind and sighted groups activated a temporo-parietal-premotor cortical network which overlapped with the traditional mirror system. However, a direct group comparison indicated that “the extent of recruitment of bilateral inferior frontal cortex (IF) (part of the frontal node of the mirror network) was smaller in blind as compared to sighted individuals both for familiar and unfamiliar action sound recognition” (Ricciardi et al., 2009). As such, although a fairly similar network of mirror areas may still be recruited in blind and sighted individuals, that study and the present one provide the first indications that the “extent” of mirror system responsiveness (at least for its frontal part) may be substantially smaller in blind compared to sighted individuals. However, future brain imaging studies directly exploring group differences are necessary to address this hypothesis further.
Finally, it may be interesting to note that atypical perception-induced motor facilitation of motor cortex has also been described in individuals with autism spectrum disorder (Theoret et al., 2005). In this respect, although highly speculative, it can be hypothesized that the strikingly common co-occurrence of congenital blindness and autistic-like features may, in part, be associated with changes in the neurophysiological basis of action perception in congenitally blind individuals (Hobson & Bishop, 2003; Hobson, Lee, & Brown, 1999; Brown, Hobson, Lee, & Stevenson, 1997). Further characterization of the neurophysiological mechanism by which congenital blindness predisposes to autism may provide important insights into the developmental pathways that lead to autistic syndromes in sighted individuals (Hobson & Bishop, 2003).
Results from the present study provide indications that the loss of a modality from birth (either vision or audition) results in a general decrement of perception-induced (muscle-specific) motor facilitation. Two distinct interpretations may account for these findings. On the one hand, multimodality may be a necessary prerequisite for perceptual–motor matching pathways to develop and function in a typical way. On the other hand, it is possible that congenitally blind or deaf individuals exhibit a similar “motor resonance” deficit but for different reasons. The process of perceptual–motor mapping may be essentially visuomotor in nature, such that the absence of any visual information results in atypical motor resonant responses observed in congenitally blind subjects. For congenitally deaf subjects, on the other hand, it has been suggested that not the lack of auditory experience per se but rather a lifelong experience with a visuo-manual based sign language may lie at the basis of altered motor resonant responses to “nonlinguistic” action perception.
Support for this study was provided through grants from the Flanders Fund for Scientific Research (Projects G.0292.05, G.0577.06, and G.0749.09). This work was also supported by Grant P6/29 from the Interuniversity Attraction Poles program of the Belgian federal government.
Reprint requests should be sent to Kaat Alaerts, Motor Control Laboratory, Research Centre of Movement Control and Neuroplasticity, Department of Biomedical Kinesiology, Group Biomedical Sciences, Katholieke Universiteit Leuven, Tervuursevest 101, B-3001 Heverlee, Belgium, or via e-mail: Kaat.Alaerts@faber.kuleuven.be.