Search results: Burkhard Maess (1-3 of 3)
Journal Articles
Journal of Cognitive Neuroscience (2015) 27 (2): 280–291.
Published: 01 February 2015
Abstract
The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Also, the exact role of the right STS during voice processing is so far unclear because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition in contrast to speech recognition whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.
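
For illustration only: a minimal Python sketch of the kind of across-participant brain-behavior analysis summarized above, relating a right anterior STS response around 200 msec to voice-recognition accuracy. All data, variable names (roi_timecourse, accuracy), and the 180-220 msec window are placeholder assumptions, not the paper's actual pipeline.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 20
times = np.linspace(-0.1, 0.5, 601)  # seconds relative to stimulus onset (assumed sampling)
# Placeholder per-subject source time courses for a right anterior STS region of interest;
# in practice these would come from an MEG source reconstruction.
roi_timecourse = rng.normal(size=(n_subjects, times.size))
accuracy = rng.uniform(0.5, 1.0, size=n_subjects)  # hypothetical voice-recognition hit rates

# Average ROI activity in a window around 200 msec after stimulus onset.
window = (times >= 0.18) & (times <= 0.22)
roi_200ms = roi_timecourse[:, window].mean(axis=1)

# Across-participant association between neural activity and behavior.
r, p = stats.spearmanr(roi_200ms, accuracy)
print(f"Spearman r = {r:.2f}, p = {p:.3f}")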
Journal Articles
Journal of Cognitive Neuroscience (2012) 24 (9): 1919–1931.
Published: 01 September 2012
Abstract
The N1 auditory ERP and its magnetic counterpart (N1[m]) are suppressed when elicited by self-induced sounds. Because the N1(m) is a correlate of auditory event detection, this N1 suppression effect is generally interpreted as a reflection of the workings of an internal forward model: The forward model captures the contingency (causal relationship) between the action and the sound, and this is used to cancel the predictable sensory reafference when the action is initiated. In this study, we demonstrated in three experiments using a novel coincidence paradigm that actual contingency between actions and sounds is not a necessary condition for N1 suppression. Participants performed time interval production tasks: They pressed a key to set the boundaries of time intervals. Concurrently, but independently of keypresses, a sequence of pure tones with random onset-to-onset intervals was presented. Tones coinciding with keypresses elicited suppressed N1(m) and P2(m), suggesting that action–stimulus contiguity (temporal proximity) is sufficient to suppress sensory processing related to the detection of auditory events.
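
As an illustration of the trial sorting implied by the coincidence paradigm, the hedged Python sketch below labels tones as coincident when a keypress falls within a small window of tone onset and compares mean single-trial N1(m) amplitudes between the two categories. The 50 msec window, event counts, and amplitude values are invented placeholders, not the study's parameters.

import numpy as np

rng = np.random.default_rng(0)

keypress_times = np.sort(rng.uniform(0, 300, size=80))  # seconds; interval-production keypresses
tone_onsets = np.sort(rng.uniform(0, 300, size=400))    # tones with random onset-to-onset intervals
coincidence_window = 0.05                               # 50 msec proximity criterion (assumed)

# Distance from each tone onset to the nearest keypress.
nearest = np.min(np.abs(tone_onsets[:, None] - keypress_times[None, :]), axis=1)
coincident = nearest <= coincidence_window

# Placeholder single-trial N1(m) amplitudes, one value per tone.
n1_amplitude = rng.normal(loc=-2.0, scale=0.5, size=tone_onsets.size)

print(f"coincident tones: {coincident.sum()}, mean N1(m) = {n1_amplitude[coincident].mean():.2f}")
print(f"other tones: {(~coincident).sum()}, mean N1(m) = {n1_amplitude[~coincident].mean():.2f}")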
Journal Articles
Journal of Cognitive Neuroscience (2002) 14 (3): 455–462.
Published: 01 April 2002
Abstract
The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
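
A minimal, hypothetical Python sketch of the analysis idea described above: decompose grand-average CSD maps into spatial factors with PCA and compare factor scores between the same- and mixed-category conditions in the 150-225 msec window. Channel counts, sampling, and data are made up; the original study's CSD computation and statistics are not reproduced.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_channels, n_times = 148, 300  # hypothetical sensor count and 1 msec time steps
times_ms = np.arange(n_times)
csd_same = rng.normal(size=(n_channels, n_times))   # grand-average CSD, same-category blocks
csd_mixed = rng.normal(size=(n_channels, n_times))  # grand-average CSD, mixed-category blocks

# Fit spatial factors on the pooled data: samples are time points, features are channels.
pooled = np.concatenate([csd_same.T, csd_mixed.T], axis=0)
pca = PCA(n_components=5)
pca.fit(pooled)

# Project each condition onto the spatial factors and compare scores in the
# 150-225 msec window; factor 0 stands in for the "left temporal" factor.
scores_same = pca.transform(csd_same.T)    # shape (n_times, n_factors)
scores_mixed = pca.transform(csd_mixed.T)
win = (times_ms >= 150) & (times_ms <= 225)
diff = scores_same[win, 0].mean() - scores_mixed[win, 0].mean()
print(f"same-category minus mixed-category factor-0 score, 150-225 msec: {diff:.3f}")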