Abstract

All it takes is a face-to-face conversation in a noisy environment to realize that viewing a speaker's lip movements contributes to speech comprehension. What are the processes underlying the perception and interpretation of visual speech? Brain areas that control speech production are also recruited during lipreading. This finding raises the possibility that lipreading may be supported, at least to some extent, by a covert unconscious imitation of the observed speech movements in the observer's own speech motor system—a motor simulation. However, whether, and if so to what extent, motor simulation contributes to visual speech interpretation remains unclear. In two experiments, we found that several participants with congenital facial paralysis were as good at lipreading as the control population and performed the lipreading tasks in a way that was qualitatively similar to the controls despite severely reduced or even completely absent lip motor representations. Although it remains an open question whether this conclusion generalizes to other experimental conditions and to typically developed participants, these findings considerably narrow the space of hypotheses for a role of motor simulation in lipreading. Beyond its theoretical significance in the field of speech perception, this finding also calls for a re-examination of the more general hypothesis, developed in the frameworks of the motor simulation and mirror neuron theories, that motor simulation underlies action perception and interpretation.

INTRODUCTION

In face-to-face conversations, the movement, shape, and position of a speaker's lips provide cues about the vowels and consonants that they pronounce. Accordingly, “lipreading” enhances speech perception in adverse auditory conditions (Tye-Murray, Sommers, & Spehar, 2007; Bernstein, Auer, & Takayanagi, 2004; MacLeod & Summerfield, 1987; Sumby & Pollack, 1954). A fundamental issue addressed here concerns the kind of representations and processes that underlie efficient visual speech perception and interpretation.

In the last 20 years, a series of studies have reported that participants asked to silently lipread recruit not only parts of their visual system but also the inferior frontal gyrus and the premotor cortex typically involved during the execution of the same facial movements (Sato, Buccino, Gentilucci, & Cattaneo, 2010; Okada & Hickok, 2009; Skipper, Nusbaum, & Small, 2005; Callan et al., 2003; Calvert & Campbell, 2003; Watkins, Strafella, & Paus, 2003). This finding has raised the possibility that the interpretation of visual speech may be supported, at least to some extent, by a covert unconscious imitation of the observed speech movements in the observer's motor system—a motor simulation of the observed speech gestures (Barnaud, Bessière, Diard, & Schwartz, 2018; Chu et al., 2013; Tye-Murray, Spehar, Myerson, Hale, & Sommers, 2013; Skipper, Goldin-Meadow, Nusbaum, & Small, 2007; Skipper, van Wassenhove, Nusbaum, & Small, 2007; Callan et al., 2003, 2004).

However, whether, and if so to what extent, motor simulation contributes to visual speech interpretation remains unclear. Indeed, the activation of the motor system observed during lipreading could be a consequence, rather than a cause, of the identification of the visual syllables. In line with this possibility, previous studies reported that lipreading abilities precede speech production abilities during development. For instance, 2- to 5-month-old infants who have not yet mastered articulated speech look longer at a face executing articulatory gestures matching a simultaneously presented sound than at a face that does not (Patterson & Werker, 1999, 2003; Kuhl & Meltzoff, 1982). These studies demonstrate that it is possible to develop at least some level of lipreading capability without motor simulation. Nonetheless, they leave open a large space for a possible contribution of motor simulation to visual speech interpretation. For instance, it could be that motor simulation improves lipreading without being necessary.

The research reported here was designed to explore this possibility. We compared word-level lipreading abilities (in Experiment 1) and the strength of the influence of visual speech on the interpretation of auditory speech information (in Experiment 2) in typically developed participants and in 11 individuals born with congenitally reduced or completely absent lip movements in the context of Moebius syndrome (individuals with Moebius syndrome [IMS]; see Table 1)—an extremely rare congenital disorder characterized, among other things, by a nonprogressive facial paralysis caused by an altered development of the facial (VII) cranial nerve (Verzijl, van der Zwaag, Cruysberg, & Padberg, 2003).

Table 1. 
Demographic and Clinical Data of the IMS Participants

Participant | Age (Years) | Midlevel Perception^a | Inferior Lip | Superior Lip
IMS1 | 37 | 0.9 | No movements | No movements
IMS2 | 36 | −3.3 | Slight movements | No movements
IMS3 | 19 | −0.3 | Slight movements | No movements
IMS4 | 43 | −2.9 | Slight movements | No movements
IMS5 | 31 | −0.3 | Slight movements | Slight movements
IMS6 | 19 | 0.1 | Mild movements | No movements
IMS7 | 31 | 0.5 | No movements | No movements
IMS8 | 21 | 0.5 | Slight movements | No movements
IMS9 | 15 | −3.3 | Mild movements | No movements
IMS10 | 20 | −0.3 | No movements | No movements
IMS11 | 33 | −4.2 | No movements | No movements

^a Participants' modified t test (Crawford & Howell, 1998) on the Leuven Perceptual Organization Screening Test (Torfs, Vancleef, Lafosse, Wagemans, & de-Wit, 2014).

Three main arguments support the assumption that a congenital lip paralysis prevents simulating observed lip movements to help lipreading. First, extant evidence suggests that the motor cortex does not contain representations of congenitally absent or deefferented limbs (e.g., Reilly & Sirigu, 2011). Rather, the specific parts of the somatosensory and motor cortices that would normally represent the “absent” or deefferented limbs are allocated to the representation of adjacent body parts (Striem-Amit, Vannuscorps, & Caramazza, 2018; Makin, Scholz, Henderson Slater, Johansen-Berg, & Tracey, 2015; Stoeckel, Seitz, & Buetefisch, 2009; Funk et al., 2008; Kaas, 2000; Kaas, Merzenich, & Killackey, 1983). Beyond this evidence, in any event, it is unclear how a motor representation of lip movement could be formed in individuals who have never had the ability to execute any lip movement, and we are not aware of any attempt to describe how such a mechanism would operate. Furthermore, it is unclear in what sense such a representation would be a “motor representation.” Second, merely “having” lip motor representations would not be sufficient to simulate an observed lip movement. Motor simulation is not based merely on motor representations of body parts (e.g., of the lips) but on representations of the movements previously executed with these body parts. Hence, previous motor experience with observed body movements is critical for motor simulation to occur (e.g., Swaminathan et al., 2013; Turella, Wurm, Tucciarelli, & Lingnau, 2013; Calvo-Merino, Grèzes, Glaser, Passingham, & Haggard, 2006), and lipreading efficiency is assumed to depend on the similarity between the observed lip movements and those used by the viewer (Tye-Murray, Spehar, Myerson, Hale, & Sommers, 2015; Tye-Murray et al., 2013). Because IMS individuals have never executed lip movements, it is unclear how they could motorically simulate observed lip movements, such as a movement of the lower lip to contact the upper teeth, rapidly followed by an anteriorization, opening, and rounding of the two lips, involved in the articulation of the syllable /fa/. Third, even if IMS participants could somehow simulate observed lip movements, this would not, according to motor simulation theories, be sufficient to support lipreading. In these theories, the role of motor simulation derives from the fact that it is supposed to help retrieve information about the observed movements acquired through previous motor experience. Motor simulation, in this view, supports lipreading because simulating a given observed lip movement allows observers to “retrieve” the sound or syllable that they produce when they carry out that particular motor program. Because IMS individuals have never themselves generated the lip movements probed in this study, motor simulation could not be regarded as a possible support for their lipreading.

In light of these considerations, the investigation of the lipreading abilities of IMS provides the opportunity to constrain hypotheses about the functional role of motor simulation in visual speech perception: If motor simulation contributes to lipreading, then individuals deprived of lip motor representations should not be as efficient as controls at lipreading. We tested this prediction in two experiments assessing participants' lipreading ability explicitly (Experiment 1) and implicitly (Experiment 2). This allowed testing for the possibility that some IMS could perform at a good level in an explicit task (Experiment 1) by mobilizing additional cognitive resources. If this were the case, then these individuals' lipreading abilities should be hampered in an implicit task (Experiment 2), in which lipreading is not instructed, not encouraged, and in fact worsens performance.

It is important to note, however, that Moebius syndrome typically impacts not only the individuals' sensorimotor system but also their visual, perceptual, cognitive, and social abilities to various extents (Vannuscorps, Andres, & Caramazza, 2020; Bate, Cook, Mole, & Cole, 2013; Carta, Mora, Neri, Favilla, & Sadun, 2011). This has significant interpretative and methodological implications for the current study. Given the complexity of the disorder, it is not unexpected that at least some IMS would show relatively poor lipreading performance. Determining the cause of the lipreading deficit in these individuals is not a straightforward matter: Candidate causes include not only their production disorder but other impaired functionally separate processes, such as visuo-perceptual processing, which co-occur to varying degrees in these individuals. This situation creates an asymmetry in the evidentiary value of good versus poor lipreading performance: Normotypical performance on these tasks indicates that motor simulation is not necessary for lipreading, whereas poor performance is indeterminate on the role of motor simulation in lipreading. This interpretative asymmetry implies that the appropriate methodology in this study is the use of single-case analyses, because this approach allows us to determine unambiguously whether the inability to carry out the relevant motor simulation necessarily adversely affects lipreading performance.

METHODS

The experimental investigations were carried out from October 2015 to September 2019 in sessions lasting between 60 and 90 min. The study was approved by the biomedical ethics committee of the Cliniques Universitaires Saint-Luc, Brussels, Belgium, and was performed in accordance with relevant guidelines and regulations. All participants gave written informed consent before the study.

The experiments were controlled by the online testable.org interface (www.testable.org; Rezlescu, Danaila, Miron, & Amariei, 2020), which allows precise spatiotemporal control of online experiments. Control participants were tested on the 15.6-in. antiglare screen (set at 1366 × 768 pixels and 60 Hz) of a Dell Latitude E5530 laptop operated by Windows 10. The IMS participants were tested remotely on their own computer under the supervision of the experimenter through a videoconferencing system. At the beginning of each experiment, the participant was instructed to set the browsing window of the computer to full screen, minimize possible distractions (TV, phone, etc.), and remain positioned at arm's length from the monitor for the duration of the experiment. A calibration procedure then ensured homogeneous presentation size and timing across computer screens, after which participants started the experiment.

Participants

Eleven individuals with congenitally reduced or completely absent lip movements in the context of Moebius syndrome (IMS; see Table 1 and Vannuscorps et al., 2020) and 25 typically developed, highly educated young adults (15 women; three left-handed; all college students or graduates; mean age ± SD: 28.6 ± 6.5 years) participated in Experiment 1. Eight of the IMS participants (IMS1, 3, 4, 5, 8, 9, 10, and 11) and 20 new typically developed, highly educated young adults (13 women; two left-handed; all college students or graduates without any history of psychiatric or neurological disorder; mean age ± SD: 22.7 ± 2.3 years) participated in Experiment 2. None of the participants reported any hearing loss. Self-report of the presence or absence of hearing loss has been shown to have a negative predictive value of 82% for mild hearing loss, 98% for moderate, and 100% for marked hearing loss (Sindhusake et al., 2001).

The participants with Moebius syndrome included in this study presented with congenital bilateral facial paralyses of different degrees of severity. As indicated in Table 1, lip movements were completely absent in IMS1, 7, 10, and 11, very severely reduced in IMS2, 3, and 4, and severely reduced in IMS5, 6, 8, and 9. IMS2 could only very slightly pull her lower lip downward. IMS3 could very slightly pull the corners of his mouth down. This movement was systematically accompanied by a slight increase of mouth opening and a wrinkling of the surface of the skin of the neck, suggesting a slight contraction of the platysma muscle. IMS4 could slightly contract her cheeks (buccinators), resulting in a slight upward movement and stretching of the lips. In all these participants, there was no mouth closing and no movement of the superior lip, and none of these individuals were able to move their lips into a position corresponding to bilabial (/p/, /b/, /m/) or labiodental (/f/, /v/) French consonants or to rounded (/ɔ/, /ɑ̃/, /œ/, /o/, /y/, /Ø/, /ɔ̃/) and stretched (/i/, /e/) French vowels. IMS5, 6, 8, and 9 were able to open and close the mouth. IMS5 could also slightly pull the angles of the mouth backward by contracting the cheeks, slightly pull her lower lip downward, and very slightly contract the right side of her upper lip. IMS6 could also pull the angles of the mouth backward by contracting the cheeks. IMS8 was able to execute a mild combined backward/upward movement of the right angle of the mouth and a slight backward movement of the left angle of the mouth. IMS9 could pull the right angle of the mouth backward normally by contracting the right cheek and slightly pull the left angle of the mouth backward by contracting the left cheek. IMS5, 6, 8, and 9 were thus able to move their lips into a position corresponding to bilabial French consonants and to stretched French vowels but not into a position corresponding to labiodental consonants and rounded vowels.

It is important to note that, in addition to these motor symptoms, IMS individuals typically also present with a heterogeneous spectrum of visuo-perceptual disorders, including various patterns of ocular motility alteration, various degrees of visual acuity impairment and of lagophthalmos, an absence of stereopsis, and frequent mid- and high-level visual perception problems (Bate et al., 2013; Verzijl et al., 2003; Calder, Keane, Cole, Campbell, & Young, 2000). The performance of the IMS participants included in this study on a midlevel perception screening test, for example, ranged from typical to severely impaired (see Table 1).

Stimuli and Procedure

Experiment 1: Viseme Discrimination Task

This task assessed participants' ability to use visual speech to discriminate vowels differing in terms of lip aperture and/or shape and consonants differing in place of articulation. Stimuli were 60 silent video clips (∼3 sec, 30 frames/sec, 854 × 480 pixels) showing one of two actresses articulating a word in French (see list in Appendix). Only the face of the actress (approximately 6° × 10° of visual angle) was visible. Each video started with the actress in a neutral posture, followed by the articulation, and ended with a return to the neutral posture. Each video was associated with a target word and either two (for consonants, n = 37) or three (for vowels, n = 23) distractors. For 23 of the video clips, three distractors differing from the target word in terms of a single vowel articulated with a different shape and/or aperture of the mouth were selected. For instance, the words “plan” (/plɑ̃/, i.e., an open, rounded vowel), “plot” (/plo/, i.e., a midopen, rounded vowel), and “pli” (/pli/, i.e., a closed, nonrounded vowel) were used as response alternatives for the target word “plat” (/pla/, i.e., an open, nonrounded vowel). For the remaining 37 video clips, two distractors differing from the target word in terms of the place of articulation of a single consonant were selected. For instance, the words “fente” (/fɑ̃t/, i.e., a labiodental consonant) and “chante” (/ʃɑ̃t/, i.e., a postalveolar consonant) were selected as response alternatives for the target word “menthe” (/mɑ̃t/, i.e., a bilabial consonant).

Each of the 60 trials started with the presentation of three or four words on the computer screen for 5 sec, followed by the presentation of a video clip (∼3 sec) of an actress articulating one of the words and then by three or four response buttons. Participants were asked to read the words carefully, observe the video carefully, and then identify the phoneme articulated by the actress by clicking on the corresponding word. There was no time constraint for responding, but participants were asked not to respond before the end of the video clip. This design allowed testing visual speech recognition in a task that minimizes the influence of several cognitive abilities—including working memory for the observed lip movements and vocabulary size—that are likely to differ between the two groups.

Experiment 2: Audiovisual Integration

Stimuli were 32 video clips (1.5 sec, 50 frames/sec, 960 × 544 pixels) showing one of four actors (two men, two women) articulating twice in a row one of six syllables paired with a congruent audio (“pa,” “ta,” “ka,” “ba,” “da,” and “ga”) or one of two syllables paired with an incongruent audio (visual “ga” and “ka” paired with the audio /ba/ and /pa/, respectively), plus eight similar video clips in which a small pink dot appeared at a random place on the face of the actor (one per condition, two per actor). Only the shoulders and face of the actors were visible. The face subtended approximately 4° × 6° of visual angle. Each video started with the actor in a neutral posture, followed by the articulation, and ended with a return to the neutral posture. The actors maintained an even intonation, tempo, and vocal intensity while producing the syllables.

During the experiment, participants were first presented with an auditory-only stimulus and asked to set the volume of their computer at a clearly audible, comfortable level. Then, they received the following instruction (translated from French): “In this experiment, we will test your ability to do two tasks simultaneously. You will see video-clips of actors articulating twice the same syllable. On some of these video-clips, a small pink dot will appear somewhere on the actor's face. After each video, we ask you to click on the response button ‘pink dot’ if you have seen a small pink dot. If no small pink dot has appeared, then simply report the syllable that you heard by clicking on the corresponding syllable on the computer screen.” After the instructions, participants saw, in a pseudorandom order, a series of 128 video clips comprising three repetitions of each actor articulating twice the same syllable paired with a congruent audio (3 repetitions × 4 actors × 6 congruent stimuli = 72 video clips), six repetitions of each actor articulating twice the same syllable paired with an incongruent audio (6 repetitions × 4 actors × 2 incongruent stimuli = 48 video clips), and two videos of each actor articulating twice the same syllable paired with a congruent audio in which a small pink dot appeared on the actor's face. After each video clip, the participant was asked to indicate whether they had seen a pink dot and, if not, to click on the syllable they had heard among six alternatives (“pa,” “ta,” “ka,” “ba,” “da,” and “ga”). There was no time constraint for responding. The dual-task design allowed us to instruct participants to pay attention to the visual stimulus during the task and to identify those who did not follow this instruction.
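As a concrete illustration of how these counts yield the 128-trial list, the sketch below assembles such a list in Python. The actor labels, the tuple format, and the use of the random module are our own illustrative assumptions, not the testable.org implementation.

```python
import itertools
import random

# Hypothetical labels; the actual stimulus files are not specified here.
actors = ["m1", "m2", "f1", "f2"]                 # two men, two women
congruent_syllables = ["pa", "ta", "ka", "ba", "da", "ga"]
incongruent_pairs = [("ga", "ba"), ("ka", "pa")]  # (visual, auditory)

trials = []
# 3 repetitions x 4 actors x 6 congruent syllables = 72 clips
trials += [("congruent", actor, syll)
           for actor, syll in itertools.product(actors, congruent_syllables)] * 3
# 6 repetitions x 4 actors x 2 incongruent pairings = 48 clips
trials += [("incongruent", actor, f"V:{vis}/A:{aud}")
           for actor, (vis, aud) in itertools.product(actors, incongruent_pairs)] * 6
# 2 congruent pink-dot clips per actor = 8 clips
trials += [("pink_dot", actor, syll)
           for actor in actors for syll in random.sample(congruent_syllables, 2)]

random.shuffle(trials)   # pseudorandom presentation order
assert len(trials) == 128
```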

RESULTS

The data that support the findings of this study are openly available (Vannuscorps, 2020). In Experiment 1, we counted the number of correct responses of each participant (see Figure 1A), excluded one control participant whose score fell more than 2 SDs below the mean of the controls (see Figure 1), and then conducted four series of analyses. We first conducted a Shapiro–Wilk test to verify that the control data were normally distributed. This was the case (W = 0.93, p > .05). In a second step, we turned to our main question and investigated whether the IMS participants performed at a typical level of efficiency in the experiment. We used Crawford and Garthwaite's (2007) Bayesian approach to compare the performance of each IMS participant to that of the control group. To minimize the likelihood of false negatives, that is, the risk of erroneously concluding that an IMS participant achieves a “normotypical level” of efficiency, we set the threshold for “efficient” performance at 0.85 SD below the control mean (as in Vannuscorps et al., 2020). Unsurprisingly, given the visuo-perceptual symptoms commonly associated with Moebius syndrome, four IMS participants (2, 3, 9, and 11) performed below this threshold. Nevertheless, and more interestingly, the one-sided lower 95% credible limit of the performance of five other IMS participants was consistently (i.e., when all the items, the consonants, and the vowels were considered) above the threshold for efficient performance despite their severely reduced (IMS5, 8), very severely reduced (IMS4), or even completely absent (IMS1, 10) lip motor representations. In addition, IMS6 performed above the threshold for efficient performance when all the items and the vowels were considered and only slightly below when the consonants were considered (95% one-sided lower credible limit = −1.48) despite severely reduced lip motor representations, and IMS7 performed above the threshold when all the items and the consonants were considered and only slightly below when the vowels were considered (95% one-sided lower credible limit = −1.47) despite completely absent lip motor representations.
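For readers who wish to reproduce this kind of single-case comparison, the sketch below implements the frequentist analogue of this approach, the Crawford and Howell (1998) modified t test, which Crawford and Garthwaite (2007) showed to yield point estimates equivalent to their Bayesian test in the standard case. The function name and illustrative scores are ours, and the simple point-estimate check against the −0.85 SD criterion stands in for the paper's one-sided lower 95% credible limit.

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Single-case comparison to a small control sample (Crawford &
    Howell, 1998): a t test treating the case as a sample of one.
    Returns the t value, the one-tailed p (the estimated proportion
    of the control population scoring below the case), and the
    case's z score relative to the controls."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    mean, sd = controls.mean(), controls.std(ddof=1)
    t = (case_score - mean) / (sd * np.sqrt((n + 1) / n))
    p = stats.t.cdf(t, df=n - 1)
    return t, p, (case_score - mean) / sd

# Example with simulated control scores: flag performance falling
# more than 0.85 SD below the control mean as non-efficient.
controls = np.random.default_rng(0).normal(50, 5, 24)
t, p, z = crawford_howell(41, controls)
print(f"t = {t:.2f}, one-tailed p = {p:.3f}, z = {z:.2f}, efficient: {z > -0.85}")
```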

Figure 1. 

Results of Experiments 1 (A) and 2 (B) by group and by individual participant. The numbers refer to the IMS individuals reported in Table 1. The proportion of fusion responses reported in B corresponds to the percentage of trials in which incongruent pairings between visual velar and auditory bilabial consonants (n = 48) produced the percept of intermediate dental consonants (/da/ and /ta/).


Third, we focused on the seven IMS participants with normotypical performance (on the whole set of items) and assessed whether any discrepancy between their performance and that of the controls was larger in lipreading than in three control tasks used with the same participants in a previous study (Vannuscorps et al., 2020): the Cambridge Face Memory Test (Duchaine & Nakayama, 2006) and the Cambridge Face Perception Test (Duchaine, Germine, & Nakayama, 2007), which assess participants' face perception abilities, and the Leuven Perceptual Organization Screening Test, an online test to assess midlevel visual perception (Torfs et al., 2014). The aim of these analyses was to seek evidence that, despite their normotypical performance in lipreading, these seven IMS individuals might nevertheless be comparatively worse at lipreading than at other tasks not assumed to rely on motor simulation. To this end, we computed Crawford and Garthwaite's (2007) Bayesian Standardized Difference Test (BSDT). The BSDT estimates the percentage of the control population exhibiting a more extreme discrepancy between two tasks than a given individual. We performed three BSDTs for each of the seven IMS participants (comparing the lipreading task with each of the three control tasks). The comparisons were either clearly not significant (6/21 comparisons, all BSDTs > 0.5) or indicated a comparatively better performance in the lipreading experiment than in the control tasks (15/21 comparisons). Thus, there was no evidence that these seven IMS individuals performed the lipreading task less accurately than the facial identity recognition or midlevel perceptual tasks.

Fourth, we conducted qualitative analyses of the performance of these seven IMS individuals. The aim of these analyses was to investigate the possibility that IMS individuals who achieve efficient lipreading nevertheless do so by completing the task somewhat differently than the control participants, for instance, by using different diagnostic features to discriminate the different visemes. Any such processing differences would likely result in different patterns of behavioral responses to different items and, in particular, in different patterns of errors. To explore this possibility, we first carried out correlation analyses over the percentage of correct responses for each item in both groups. The correlation was highly significant, r(56) = .67, p < .001. A second analysis focused on the nature of the errors. To this end, we first computed the two groups' response matrices for the consonants (a 3 × 3 matrix crossing the bilabial, labiodental, and postalveolar places of articulation) and the vowels (a 7 × 7 matrix crossing the open nonrounded vowel with the closed, midclosed, and midopen rounded and nonrounded vowels). Then, to examine the similarity between the controls' and the IMS participants' matrices, we vectorized the matrices and correlated them with each other. The resulting coefficients indicated statistically significant correlations between the two groups' matrices, both when all the responses were considered (consonants: r(9) = .99; vowels: r(49) = .99; both ps < .001) and when only errors were considered (consonants: r(6) = .75; vowels: r(42) = .33; both ps < .05). These analyses suggest that lipreading in the two groups was not only similar in efficiency but also qualitatively similar.
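As an illustration of this vectorize-and-correlate step, here is a minimal sketch of how two groups' confusion matrices can be compared; the function name and arguments are hypothetical, and the paper does not specify its exact implementation.

```python
import numpy as np

def confusion_similarity(controls_matrix, ims_matrix, errors_only=False):
    """Pearson correlation between two groups' vectorized confusion
    matrices (rows = presented viseme category, columns = response
    category). With errors_only=True, the diagonal cells (correct
    responses) are excluded so that only error patterns are compared."""
    a = np.asarray(controls_matrix, dtype=float)
    b = np.asarray(ims_matrix, dtype=float)
    if errors_only:
        mask = ~np.eye(a.shape[0], dtype=bool)  # off-diagonal cells only
        a, b = a[mask], b[mask]
    else:
        a, b = a.ravel(), b.ravel()
    return np.corrcoef(a, b)[0, 1]
```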

Experiment 2 tested whether some IMS individuals would show clear signs of efficient lipreading in an implicit task. In this task, inefficient lipreading would result in an absent or weaker influence of visual speech on auditory speech perception, indexed by a small proportion of trials in which incongruent pairings between visual velar and auditory bilabial consonants produced the percept of intermediate dental consonants (/da/ and /ta/), that is, “fusion responses” (MacDonald & McGurk, 1978; McGurk & MacDonald, 1976). Before testing whether this was the case, we first checked that all participants performed the task while looking at the visual stimuli by counting each participant's number of correct identifications of the “pink dot.” This was the case: No participant missed more than one of the eight pink dots. Then, we checked participants' performance in the congruent condition to ensure that auditory perception in congruent trials was intact. Both groups performed at a similarly high level (control mean ± SD = 95.8 ± 3.1%; IMS mean ± SD = 97 ± 3.5%). Next, we turned to our main analysis. We computed the proportion of fusion responses of each participant (Figure 1B) as an index of the contribution of the visual signal to the perception of the auditory syllables and used Crawford and Garthwaite's (2007) Bayesian approach to compare the percentage of fusion responses of each IMS participant to that of the control group. We set the threshold for a typically strong influence of visual upon auditory speech perception at 0.85 SD below the control mean (Vannuscorps et al., 2020). The one-sided lower 95% credible limit of the percentage of fusion responses of four IMS individuals was above that threshold despite their (very) severely reduced (IMS4, 5, 8) or even completely absent (IMS1) lip motor representations. Despite completely absent lip motor representations, for instance, IMS1 gave 92% fusion responses, a proportion larger than that of all but one of the control participants.
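A minimal sketch of this fusion-rate index, assuming a hypothetical trial log with one row per video clip and columns named condition and response (neither is specified in the paper):

```python
import pandas as pd

def fusion_rate(trials: pd.DataFrame) -> float:
    """Proportion of incongruent trials (visual /ga/ or /ka/ dubbed
    with auditory /ba/ or /pa/) answered with the fused dental
    percepts 'da' or 'ta' (McGurk & MacDonald, 1976)."""
    incongruent = trials[trials["condition"] == "incongruent"]
    return incongruent["response"].isin(["da", "ta"]).mean()

# Hypothetical trial log with one row per video clip.
log = pd.DataFrame({
    "condition": ["incongruent", "incongruent", "congruent"],
    "response":  ["da",          "ba",          "pa"],
})
print(fusion_rate(log))  # 0.5
```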

DISCUSSION

To constrain the space of hypotheses for a role of motor simulation in lipreading, we compared word-level lipreading abilities (Experiment 1) and the strength of the influence of visual speech on the interpretation of auditory speech information (Experiment 2) in typically developed participants and in 11 individuals born with congenitally reduced or completely absent lip movements, who cannot covertly imitate observed lip movements. As expected, in both experiments, some IMS individuals performed significantly below the control participants. Such an association of deficits is interesting but difficult to interpret because the co-occurrence of motor, visual, and perceptual deficits in the IMS individuals we tested makes it difficult to establish unambiguously the (possibly multiple) origins of these difficulties. However, what is important for the conclusion of this study is that these individuals' marked difficulties in lipreading cannot be explained by their motor disorder because, as reported here, other individuals with an equal or even more severe motor disorder achieved a normal level of performance in these two tasks. In contrast, the finding that three of the four IMS participants who performed significantly less accurately than the controls in Experiment 1 also performed significantly below control participants in a visual perceptual screening test (see Table 1) suggests that their difficulties are likely the consequence of a visuo-perceptual deficit. More interestingly, several other IMS participants were at least as good as the controls. In Experiment 1, the one-sided lower 95% credible limit of the performance of seven of the 11 IMS participants was above 0.85 SD below the control mean, and these individuals performed the lipreading task as well as other visual tasks and in a way that was qualitatively similar to the controls. In Experiment 2, three IMS participants showed an influence of visual upon auditory speech perception that was stronger than that of 80% of the control participants. This demonstrates that motor simulation is not necessary to achieve typically efficient lipreading in these experiments. This conclusion is compatible with two possible hypotheses regarding the role of motor simulation in these experiments: (1) Motor simulation does not contribute to lipreading in these tasks at all, or (2) motor simulation contributes so marginally to lipreading in these tasks that we simply failed to detect this small effect. Nevertheless, there are two main reasons to prefer the first hypothesis. First, the finding that these IMS participants were as good at, and often even better at, lipreading than at other visual tasks makes the second interpretation unlikely. Second, and more importantly, there seems to be currently no reason to favor the second, less parsimonious, interpretation.

Of course, it is possible that motor simulation may aid the processing of observed lip movements in other conditions or circumstances, for instance, in tasks that involve both perceptual processing and other task-specific cognitive processes such as working memory. Indeed, there is evidence that efficient visual working memory for body movements and postures is supported partly by motor simulation (Galvez-Pol, Forster, & Calvo-Merino, 2018; Vannuscorps & Caramazza, 2016c; Gao, Bentin, & Shen, 2015; Moreau, 2013). There is also evidence that interpreting stimuli under degraded conditions may be supported by working memory maintenance of the raw signal (e.g., Mattys, Davis, Bradlow, & Scott, 2012). Thus, one possibility is that motor simulation plays a role in lipreading under visually degraded conditions by contributing to the maintenance of the observed lip movements and thereby extending the processing time window available to interpret them. In line with this possibility, although previous studies have shown that motor simulation is not necessary to perceive and interpret body postures and movements per se (Vannuscorps & Caramazza, 2016b; Vannuscorps, Dricot, & Pillon, 2016; Negri et al., 2007), other studies have reported results suggesting that motor simulation may contribute to the ability to interpret actions perceived under adverse perceptual conditions (van Kemenade, Muggleton, Walsh, & Saygin, 2012; Arrighi, Cartocci, & Burr, 2011; Serino et al., 2010). We have previously reported, for instance, that an individual born without upper limbs perceived and interpreted body movements as quickly and accurately as control participants but was nevertheless selectively impaired in recognizing upper-limb (but not lower-limb) actions presented as point-light displays (Vannuscorps, Andres, & Pillon, 2013).

Although our findings leave open these possibilities, they nevertheless considerably narrow down the hypothesis space for a role of motor simulation in lipreading. Previous studies had demonstrated that it is possible to develop some level of lipreading capability without motor simulation (Patterson & Werker, 1999, 2003; Kuhl & Meltzoff, 1982). Our findings additionally demonstrate that it is possible to reach typical lipreading efficiency without motor simulation, at least in tasks such as those used in this study. As such, our findings support the hypothesis that lipreading is a property of the visuo-perceptual system unaided by the motor system (Bernstein & Liebenthal, 2014; Matchin, Groulx, & Hickok, 2014). According to this view, lipreading requires a visuo-perceptual analysis of the actor's configural and dynamic facial features to provide access to stored visual descriptions of the facial postures and movements corresponding to different linguistic units. Once this stored visual representation is accessed, it may be integrated with auditory information in multisensory integration sites to support audiovisual speech comprehension (Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004).

Admittedly, it is possible that lipreading in IMS individuals relies on atypical mechanisms, and therefore, it is an open question whether our conclusion generalizes to typically developed participants. Future studies are needed to elucidate this question with the help of neuropsychological studies of patients suffering from brain damage that affects their ability to imitate lip movements covertly. Nevertheless, there seems to be currently no compelling empirical reason to favor the less parsimonious motor simulation hypothesis. Hence, our findings at the very least emphasize the need for a shift in the burden of proof relative to the question of the role of motor simulation in lipreading. This conclusion converges with that of previous reports of IMS participants who achieved normal levels of performance in facial expression recognition despite their congenital facial paralysis (Vannuscorps et al., 2020; Bate et al., 2013; Bogart & Matsumoto, 2010; Calder et al., 2000) and of individuals congenitally deprived of hand motor representations who nonetheless perceived and comprehended hand actions as efficiently and with the same biases and brain networks as typically developed participants (Vannuscorps & Caramazza, 2015, 2016a, 2016b, 2017, 2019; Vannuscorps, Wurm, Striem-Amit, & Caramazza, 2019; Vannuscorps et al., 2013; Vannuscorps, Pillon, & Andres, 2012). Together, these results challenge the hypothesis that body movement perception and comprehension rely on motor simulation (Rizzolatti & Sinigaglia, 2010).

APPENDIX: ORTHOGRAPHIC AND PHONETIC TRANSCRIPTION OF THE TARGET WORDS ARTICULATED BY THE ACTRESSES IN EXPERIMENT 1 AND THE ASSOCIATED DISTRACTOR STIMULI

Target | D1 | D2 | D3
plat /pla/ plan /plɑ̃/ plot /plo/ pli /pli/ 
fin /fɛ̃/ faon /fɑ̃/ feu /fø/ fée /fe/ 
fin /fɛ̃/ faon /fɑ̃/ feu /fø/ fée /fe/ 
sel /sεl/ sol /sɔl/ saoul /sul/ cil /sil/ 
basse /bas/ bosse /bɔs/ bus /bys/ bis /bis/ 
jet /ʒε/ gens /ʒɑ̃/ jeu /ʒø/ gît /ʒi/ 
mâle /mɑl/ molle /mɔl/ meule /møl/ mille /mil/ 
vin /vɛ̃/ vent /vɑ̃/ vue /vy/ vie /vi/ 
basse /bas/ bosse /bɔs/ bus /bys/ bis /bis/ 
basse /bas/ beurre /bəʁ/ bon /bõ/ bée /be/ 
va /va/ vent /vɑ̃/ voeu /vø/ vie /vi/ 
math /mat/ motte /mɔt/ monte /mõt/ mite /mit/ 
mer /mεr/ meurt /mər/ mur /myr/ mire /mir/ 
cet /sεt/ sotte /sɔt/ soute /sut/ site /sit/ 
laid /lε/ lent /lɑ̃/ lu /ly/ lit /li/ 
chat /ʃa/ chant /ʃɑ̃/ chaud /ʃo/ chez /ʃe/ 
jet /ʒε/ gens /ʒɑ̃/ jeu /ʒø/ gît /ʒi/ 
tas /ta/ tôt /tɔ/ thé /te/ temps /tɑ̃/ 
matin /matɛ̃/ matant /matɑ̃/ matheux /matø/ maté /mate/ 
va /va/ vent /vɑ̃/ voeu /vø/ vie /vi/ 
fin /fɛ̃/ faon /fɑ̃/ fou /fu/ fée /fe/ 
laid /lε/ lent /lɑ̃/ lu /ly/ lit /li/ 
chat /ʃa/ chant /ʃɑ̃/ chaud /ʃo/ chez /ʃe/ 
femme /fam/ fève /fεv/ fâche /faʃ/     
main /mɛ̃/ fa /fa/ chat /ʃa/     
pile /pil/ fil /fil/ gilles /ʒil/     
menthe /mɑ̃t/ fente /fɑ̃t/ chante /ʃɑ̃t/     
peu /pø/ voeu /vø/ jeu /ʒø/     
bée /be/ fée /fe/ chez /ʃe/     
mot /mɔ/ veau /vo/ chaud /ʃo/     
bock /bɔk/ phoque /fɔk/ choc /ʃɔk/     
loupe /lyp/ louve /lyv/ louche /lyʃ/     
manger /mɑ̃ʒe/ venger /vɑ̃ʒe/ changer /ʃɑ̃ʒe/     
habit /abi/ avis /avi/ hachis /aʃi/     
percer /pεrse/ verser /vεrse/ gercer /ʒεrse/     
menu /məny/ venu /vəny/ chenu /ʃəny/     
amant /amɑ̃/ avant /avɑ̃/ agent /aʒɑ̃/     
ballon /balõ/ vallon /valõ/ jalon /ʒalõ/     
laper /lape/ laver /lave/ lâcher /laʃe/     
peiner /pεne/ veiner /vεne/ gêner /ʒεne/     
palet /palε/ valet /valε/ chalet /ʃalε/     
bain /bɛ̃/ fin /fɛ̃/ chat /ʃa/     
pie /pi/ vie /vi/ j'ai /ʒe/     
pou /pu/ vous /vu/ chou /ʃu/     
mie /mi/ vie /vi/ j'ai /ʒe/     
pou /pu/ vous /vu/ chou /ʃu/     
pente /pɑ̃t/ fente /fɑ̃t/ jante /ʒɑ̃t/     
pou /pu/ fou /fu/ chou /ʃu/     
pain /pɛ̃/ vin /vɛ̃/ chat /ʃa/     
beau /bo/ faux /fo/ chaud /ʃo/     
banc /bɑ̃/ vent /vɑ̃/ chant /ʃɑ̃/     
mauve /mov/ fauve /fov/ chauve /ʃov/     
banc /bɑ̃/ vent /vɑ̃/ chant /ʃɑ̃/     
pain /pɛ̃/ vin /vɛ̃/ chat /ʃa/     
banc /bɑ̃/ vent /vɑ̃/ chant /ʃɑ̃/     
bu /by/ vu /vy/ jus /ʒy/     
mou /mu/ vous /vu/ chou /ʃu/     
peu /pø/ voeu /vø/ jeu /ʒø/     
banc /bɑ̃/ vent /vɑ̃/ chant /ʃɑ̃/     
fusain /fyzɛ̃/ fusant /fyzɑ̃/ fusil /fyzi/     

Author Contributions

Gilles Vannuscorps: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Writing – original draft. Michael Andres: Conceptualization; Methodology; Writing – original draft. Sarah Carneiro: Methodology. Elise Rombaux: Methodology. Alfonso Caramazza: Conceptualization; Funding acquisition; Writing – original draft.

Diversity in Citation Practices

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .485, W/M = .243, M/W = .097, and W/W = .175.

Acknowledgments

This research was supported by the Mind, Brain and Behavior Interfaculty Initiative provostial funds. M. A. is a Research Associate at the Fonds National de la Recherche Scientifique (Belgium). We would like to thank Charlotte Hourmant and Eloïse Trouve for recording the video clips used in Experiment 1 and the Association Syndrome Moebius France for its help throughout this project.

Reprint requests should be sent to Gilles Vannuscorps, Psychological Sciences Research Institute, Université catholique de Louvain, 10 Place Cardinal Mercier, Louvain-la-Neuve, Belgium, 1348, or via e-mail: gilles.vannuscorps@uclouvain.be.

REFERENCES

Arrighi, R., Cartocci, G., & Burr, D. (2011). Reduced perceptual sensitivity for biological motion in paraplegia patients. Current Biology, 21, R910–R911.
Barnaud, M.-L., Bessière, P., Diard, J., & Schwartz, J.-L. (2018). Reanalyzing neurocognitive data on the role of the motor system in speech perception within COSMO, a Bayesian perceptuo-motor model of speech communication. Brain and Language, 187, 19–32.
Bate, S., Cook, S. J., Mole, J., & Cole, J. (2013). First report of generalized face processing difficulties in Möbius sequence. PLoS One, 8, e62656.
Beauchamp, M. S., Argall, B. D., Bodurka, J., Duyn, J. H., & Martin, A. (2004). Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nature Neuroscience, 7, 1190–1192.
Bernstein, L. E., Auer, E. T., Jr., & Takayanagi, S. (2004). Auditory speech detection in noise enhanced by lipreading. Speech Communication, 44, 5–18.
Bernstein, L. E., & Liebenthal, E. (2014). Neural pathways for visual speech perception. Frontiers in Neuroscience, 8, 386.
Bogart, K. R., & Matsumoto, D. (2010). Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome. Social Neuroscience, 5, 241–251.
Calder, A. J., Keane, J., Cole, J., Campbell, R., & Young, A. W. (2000). Facial expression recognition by people with Möbius syndrome. Cognitive Neuropsychology, 17, 73–87.
Callan, D. E., Jones, J. A., Munhall, K., Callan, A. M., Kroos, C., & Vatikiotis-Bateson, E. (2003). Neural processes underlying perceptual enhancement by visual speech gestures. NeuroReport, 14, 2213–2218.
Callan, D. E., Jones, J. A., Munhall, K., Kroos, C., Callan, A. M., & Vatikiotis-Bateson, E. (2004). Multisensory integration sites identified by perception of spatial wavelet filtered visual speech gesture information. Journal of Cognitive Neuroscience, 16, 805–816.
Calvert, G. A., & Campbell, R. (2003). Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience, 15, 57–70.
Calvo-Merino, B., Grèzes, J., Glaser, D. E., Passingham, R. E., & Haggard, P. (2006). Seeing or doing? Influence of visual and motor familiarity in action observation. Current Biology, 16, 1905–1910.
Carta, A., Mora, P., Neri, A., Favilla, S., & Sadun, A. A. (2011). Ophthalmologic and systemic features in Möbius syndrome: An Italian case series. Ophthalmology, 118, 1518–1523.
Chu, Y.-H., Lin, F.-H., Chou, Y.-J., Tsai, K. W.-K., Kuo, W.-J., & Jääskeläinen, I. P. (2013). Effective cerebral connectivity during silent speech reading revealed by functional magnetic resonance imaging. PLoS One, 8, e80265.
Crawford, J. R., & Garthwaite, P. H. (2007). Comparison of a single case to a control or normative sample in neuropsychology: Development of a Bayesian approach. Cognitive Neuropsychology, 24, 343–372.
Crawford, J. R., & Howell, D. C. (1998). Comparing an individual's test score against norms derived from small samples. Clinical Neuropsychologist, 12, 482–486.
Duchaine, B., Germine, L., & Nakayama, K. (2007). Family resemblance: Ten family members with prosopagnosia and within-class object agnosia. Cognitive Neuropsychology, 24, 419–430.
Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44, 576–585.
Funk, M., Lutz, K., Hotz-Boendermaker, S., Roos, M., Summers, P., Brugger, P., et al. (2008). Sensorimotor tongue representation in individuals with unilateral upper limb amelia. Neuroimage, 43, 121–127.
Galvez-Pol, A., Forster, B., & Calvo-Merino, B. (2018). Modulation of motor cortex activity in a visual working memory task of hand images. Neuropsychologia, 117, 75–83.
Gao, Z., Bentin, S., & Shen, M. (2015). Rehearsing biological motion in working memory: An EEG study. Journal of Cognitive Neuroscience, 27, 198–209.
Kaas, J. H. (2000). The reorganization of somatosensory and motor cortex after peripheral nerve or spinal cord injury in primates. Progress in Brain Research, 128, 173–179.
Kaas, J. H., Merzenich, M. M., & Killackey, H. P. (1983). The reorganization of somatosensory cortex following peripheral nerve damage in adult and developing mammals. Annual Review of Neuroscience, 6, 325–356.
Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141.
MacDonald, J., & McGurk, H. (1978). Visual influences on speech perception processes. Perception & Psychophysics, 24, 253–257.
MacLeod, A., & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology, 21, 131–141.
Makin, T. R., Scholz, J., Henderson Slater, D., Johansen-Berg, H., & Tracey, I. (2015). Reassessing cortical reorganization in the primary sensorimotor cortex following arm amputation. Brain, 138, 2140–2146.
Matchin, W., Groulx, K., & Hickok, G. (2014). Audiovisual speech integration does not rely on the motor system: Evidence from articulatory suppression, the McGurk effect, and fMRI. Journal of Cognitive Neuroscience, 26, 606–620.
Mattys, S. L., Davis, M. H., Bradlow, A. R., & Scott, S. K. (2012). Speech recognition in adverse conditions: A review. Language and Cognitive Processes, 27, 953–978.
McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
Moreau, D. (2013). Motor expertise modulates movement processing in working memory. Acta Psychologica, 142, 356–361.
Negri, G. A. L., Rumiati, R., Zadini, A., Ukmar, M., Mahon, B. Z., & Caramazza, A. (2007). What is the role of motor simulation in action and object recognition? Evidence from apraxia. Cognitive Neuropsychology, 24, 795–816.
Okada, K., & Hickok, G. (2009). Two cortical mechanisms support the integration of visual and auditory speech: A hypothesis and preliminary data. Neuroscience Letters, 452, 219–223.
Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behavior and Development, 22, 237–247.
Patterson, M. L., & Werker, J. F. (2003). Two-month-old infants match phonetic information in lips and voice. Developmental Science, 6, 191–196.
Reilly, K. T., & Sirigu, A. (2011). Motor cortex representation of the upper-limb in individuals born without a hand. PLoS One, 6, e18100.
Rezlescu, C., Danaila, I., Miron, A., & Amariei, C. (2020). More time for science: Using Testable to create and share behavioral experiments faster, recruit better participants, and engage students in hands-on research. Progress in Brain Research, 253, 243–262.
Rizzolatti, G., & Sinigaglia, C. (2010). The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. Nature Reviews Neuroscience, 11, 264–274.
Sato, M., Buccino, G., Gentilucci, M., & Cattaneo, L. (2010). On the tip of the tongue: Modulation of the primary motor cortex during audiovisual speech perception. Speech Communication, 52, 533–541.
Serino, A., De Filippo, L., Casavecchia, C., Coccia, M., Shiffrar, M., & Làdavas, E. (2010). Lesions to the motor system affect action perception. Journal of Cognitive Neuroscience, 22, 413–426.
Sindhusake, D., Mitchell, P., Smith, W., Golding, M., Newall, P., Hartley, D., et al. (2001). Validation of self-reported hearing loss: The Blue Mountains hearing study. International Journal of Epidemiology, 30, 1371–1378.
Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. (2007). Speech-associated gestures, Broca's area, and the human mirror system. Brain and Language, 101, 260–277.
Skipper, J. I., Nusbaum, H. C., & Small, S. L. (2005). Listening to talking faces: Motor cortical activation during speech perception. Neuroimage, 25, 76–89.
Skipper, J. I., van Wassenhove, V., Nusbaum, H. C., & Small, S. L. (2007). Hearing lips and seeing voices: How cortical areas supporting speech production mediate audiovisual speech perception. Cerebral Cortex, 17, 2387–2399.
Stoeckel, M. C., Seitz, R. J., & Buetefisch, C. M. (2009). Congenitally altered motor experience alters somatotopic organization of human primary motor cortex. Proceedings of the National Academy of Sciences, U.S.A., 106, 2395–2400.
Striem-Amit, E., Vannuscorps, G., & Caramazza, A. (2018). Plasticity based on compensatory effector use in the association but not primary sensorimotor cortex of people born without hands. Proceedings of the National Academy of Sciences, U.S.A., 115, 7801–7806.
Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215.
Swaminathan, S., MacSweeney, M., Boyles, R., Waters, D., Watkins, K. E., & Möttönen, R. (2013). Motor excitability during visual perception of known and unknown spoken languages. Brain and Language, 126, 1–7.
Torfs, K., Vancleef, K., Lafosse, C., Wagemans, J., & de-Wit, L. (2014). The Leuven Perceptual Organization Screening Test (L-POST), an online test to assess mid-level visual perception. Behavior Research Methods, 46, 472–487.
Turella, L., Wurm, M. F., Tucciarelli, R., & Lingnau, A. (2013). Expertise in action observation: Recent neuroimaging findings and future perspectives. Frontiers in Human Neuroscience, 7, 637.
Tye-Murray, N., Sommers, M. S., & Spehar, B. (2007). Audiovisual integration and lipreading abilities of older adults with normal and impaired hearing. Ear and Hearing, 28, 656–668.
Tye-Murray, N., Spehar, B. P., Myerson, J., Hale, S., & Sommers, M. S. (2013). Reading your own lips: Common-coding theory and visual speech perception. Psychonomic Bulletin & Review, 20, 115–119.
Tye-Murray, N., Spehar, B. P., Myerson, J., Hale, S., & Sommers, M. S. (2015). The self-advantage in visual speech processing enhances audiovisual speech recognition in noise. Psychonomic Bulletin & Review, 22, 1048–1053.
van Kemenade, B. M., Muggleton, N., Walsh, V., & Saygin, A. P. (2012). Effects of TMS over premotor and superior temporal cortices on biological motion perception. Journal of Cognitive Neuroscience, 24, 896–904.
Vannuscorps, G. (2020). Typically efficient lipreading without motor simulation. Mendeley Data.
Vannuscorps, G., Andres, M., & Caramazza, A. (2020). Efficient recognition of facial expressions does not require motor simulation. eLife, 9, e54687.
Vannuscorps, G., Andres, M., & Pillon, A. (2013). When does action comprehension need motor involvement? Evidence from upper limb aplasia. Cognitive Neuropsychology, 30, 253–283.
Vannuscorps, G., & Caramazza, A. (2015). Typical biomechanical bias in the perception of congenitally absent hands. Cortex, 67, 147–150.
Vannuscorps, G., & Caramazza, A. (2016a). The origin of the biomechanical bias in apparent body movement perception. Neuropsychologia, 89, 281–286.
Vannuscorps, G., & Caramazza, A. (2016b). Typical action perception and interpretation without motor simulation. Proceedings of the National Academy of Sciences, U.S.A., 113, 86–91.
Vannuscorps, G., & Caramazza, A. (2016c). Impaired short-term memory for hand postures in individuals born without hands. Cortex, 83, 136–138.
Vannuscorps, G., & Caramazza, A. (2017). Typical predictive eye movements during action observation without effector-specific motor simulation. Psychonomic Bulletin & Review, 24, 1152–1157.
Vannuscorps, G., & Caramazza, A. (2019). Conceptual processing of action verbs with and without motor representations. Cognitive Neuropsychology, 36, 301–312.
Vannuscorps, G., Dricot, L., & Pillon, A. (2016). Persistent sparing of action conceptual processing in spite of increasing disorders of action production: A case against motor embodiment of action concepts. Cognitive Neuropsychology, 33, 191–219.
Vannuscorps, G., Pillon, A., & Andres, M. (2012). Effect of biomechanical constraints in the hand laterality judgment task: Where does it come from? Frontiers in Human Neuroscience, 6, 299.
Vannuscorps, G., Wurm, M. F., Striem-Amit, E., & Caramazza, A. (2019). Large-scale organization of the hand action observation network in individuals born without hands. Cerebral Cortex, 29, 3434–3444.
Verzijl, H. T. F. M., van der Zwaag, B., Cruysberg, J. R. M., & Padberg, G. W. (2003). Möbius syndrome redefined: A syndrome of rhombencephalic maldevelopment. Neurology, 61, 327–333.
Watkins, K. E., Strafella, A. P., & Paus, T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia, 41, 989–994.