Michael S. Beauchamp
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2017) 29 (6): 1044–1060.
Published: 01 June 2017
Abstract
Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
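The Bayesian prediction tested here, that fusing independent cues reduces variability, follows from standard maximum-likelihood cue combination: each cue is weighted inversely to its variance, and the fused variance is always smaller than either input variance. A minimal illustrative sketch (the numbers are made up for illustration, not taken from the study):

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood combination of two independent Gaussian cues,
    e.g. an auditory and a visual estimate of the same speech signal.

    Weights are inversely proportional to each cue's variance, and the
    fused variance is smaller than either input variance.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# A noisy auditory cue (large variance) combined with a reliable visual cue:
mu, var = fuse(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
print(mu, var)  # fused estimate leans toward the visual cue; variance 0.8 < min(4.0, 1.0)
```

On this account, the noisier the auditory cue, the more the fused estimate relies on vision and the larger the relative benefit of integration, consistent with the reduced response variability reported in posterior STG for noisy audiovisual speech.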
Journal of Cognitive Neuroscience (2005) 17 (12): 1871–1885.
Published: 01 December 2005
Abstract
We used rapid, event-related fMRI to identify the neural systems underlying object semantics. During scanning, subjects silently read rapidly presented word pairs (150 msec, SOA = 250 msec) that were either unrelated in meaning (ankle-carrot), semantically related (fork-cup), or identical (crow-crow). Activity in the left posterior region of the fusiform gyrus and left inferior frontal cortex was modulated by word-pair relationship. Semantically related pairs yielded less activity than unrelated pairs, but greater activity than identical pairs, mirroring the pattern of behavioral facilitation as measured by word reading times. These findings provide strong support for the involvement of these areas in the automatic processing of object meaning. In addition, words referring to animate objects produced greater activity in the lateral region of the fusiform gyri, right superior temporal sulcus, and medial region of the occipital lobe relative to manmade, manipulable objects, whereas words referring to manmade, manipulable objects produced greater activity in the left ventral premotor, left anterior cingulate, and bilateral parietal cortices relative to animate objects. These findings are consistent with the dissociation between these areas based on sensory- and motor-related object properties, providing further evidence that conceptual object knowledge is housed, in part, in the same neural systems that subserve perception and action.
Journal of Cognitive Neuroscience (2003) 15 (7): 991–1001.
Published: 01 October 2003
Abstract
We used fMRI to study the organization of brain responses to different types of complex visual motion. In a rapid event-related design, subjects viewed video clips of humans performing different whole-body motions, video clips of manmade manipulable objects (tools) moving with their characteristic natural motion, point-light displays of human whole-body motion, and point-light displays of manipulable objects. The lateral temporal cortex showed strong responses to both moving videos and moving point-light displays, supporting the hypothesis that the lateral temporal cortex is the cortical locus for processing complex visual motion. Within the lateral temporal cortex, we observed segregated responses to different types of motion. The superior temporal sulcus (STS) responded strongly to human videos and human point-light displays, while the middle temporal gyrus (MTG) and the inferior temporal sulcus responded strongly to tool videos and tool point-light displays. In the ventral temporal cortex, the lateral fusiform responded more to human videos than to any other stimulus category, while the medial fusiform preferred tool videos. The relatively weak responses observed to point-light displays in the ventral temporal cortex suggest that form, color, and texture (present in video but not point-light displays) are the main contributors to ventral temporal activity. In contrast, in the lateral temporal cortex, the MTG responded as strongly to point-light displays as to videos, suggesting that motion is the key determinant of response in the MTG. Whereas the STS responded strongly to point-light displays, it showed an even larger response to video displays, suggesting that the STS integrates form, color, and motion information.