Kevin Munhall
Journal of Cognitive Neuroscience (2004) 16 (5): 805–816.
Published: 01 June 2004
Abstract
Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most pronounced when the audio signal is degraded. One potential means by which the brain affords this perceptual enhancement is the integration of concordant information from multiple sensory channels at common sites of convergence, known as multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, or for activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information, while controlling for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information; a midfrequency (MF) filtered condition containing place of articulation information; and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were identified by the presence of activity in both the MF and UF conditions relative to the LF condition. Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG) and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity from visual motion information, this study identifies potential MSI sites that we believe are involved in improved speech intelligibility.
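The selection criterion described in this abstract amounts to a conjunction analysis over voxelwise contrast maps. As a rough illustration only, here is a minimal Python sketch of that logic, assuming statistical maps for the MF > LF and UF > LF contrasts are available as NumPy arrays; the array names, grid size, and threshold are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64, 32)                 # hypothetical voxel grid

# Stand-in voxelwise t-maps for the two contrasts of interest.
t_mf_vs_lf = rng.normal(size=shape)  # MF condition relative to LF
t_uf_vs_lf = rng.normal(size=shape)  # UF condition relative to LF

threshold = 3.0                      # illustrative significance threshold

# A voxel qualifies as a candidate MSI site only if it shows activity
# in BOTH the MF > LF and UF > LF contrasts (a conjunction).
msi_mask = (t_mf_vs_lf > threshold) & (t_uf_vs_lf > threshold)

print(f"Candidate MSI voxels: {int(msi_mask.sum())}")
```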
Journal of Cognitive Neuroscience (2003) 15 (6): 800–809.
Published: 15 August 2003
Abstract
Neuropsychological research suggests that the neural system underlying the perception of visible speech on the basis of kinematics is distinct from the systems underlying the perception of static images of the face and the identification of whole-body actions from kinematics alone. Functional magnetic resonance imaging was used to identify the neural systems underlying perception of point-light visible speech and of a walking/jumping point-light body, to determine whether they are independent. Although both point-light stimuli produced overlapping activation in the right middle occipital gyrus encompassing area KO and in the right inferior temporal gyrus, they also activated distinct areas. Perception of walking biological motion activated a medial occipital area along the lingual gyrus close to the cuneus border, as well as the ventromedial frontal cortex, neither of which was activated by visible speech biological motion. In contrast, perception of visible speech biological motion activated right V5 and a network of motor-related areas (Broca's area, premotor cortex (PM), primary motor cortex (M1), and the supplementary motor area (SMA)), none of which was activated by walking biological motion. Many of the areas activated by seeing visible speech biological motion are similar to those activated while speechreading from an actual face, with the exception of M1 and medial SMA. The motor-related areas found to be active during point-light visible speech are consistent with recent work characterizing the human "mirror" system (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996).
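The overlap-versus-dissociation logic described here can likewise be expressed as set operations on thresholded activation maps. Below is a minimal sketch, assuming hypothetical z-maps for the visible speech and walking point-light contrasts; the names, grid size, and threshold are illustrative only and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64, 32)                  # hypothetical voxel grid

z_speech = rng.normal(size=shape)     # point-light visible speech contrast
z_walking = rng.normal(size=shape)    # point-light walking contrast
threshold = 2.3                       # illustrative z-threshold

speech_mask = z_speech > threshold
walking_mask = z_walking > threshold

# Regions activated by both stimuli (e.g., right middle occipital gyrus).
overlap = speech_mask & walking_mask
# Regions selective for visible speech (e.g., V5 and motor-related areas).
speech_only = speech_mask & ~walking_mask
# Regions selective for walking (e.g., lingual gyrus, ventromedial frontal).
walking_only = walking_mask & ~speech_mask

print(int(overlap.sum()), int(speech_only.sum()), int(walking_only.sum()))
```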