Howard C. Nusbaum
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2022) 34 (3): 425–444.
Published: 01 February 2022
Abstract
The ability to generalize across specific experiences is vital for the recognition of new patterns, especially in speech perception considering acoustic–phonetic pattern variability. Indeed, behavioral research has demonstrated that, through a process of generalized learning, listeners can leverage their experience with past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest–posttest design with EEG, participants were trained using either (1) a large inventory of words where no words were repeated across the experiment (generalized learning) or (2) a small inventory of words where words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1–P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1–P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.
Journal of Cognitive Neuroscience (2006) 18 (12): 2013–2029.
Published: 01 November 2006
Abstract
We examined whether the repeated processing of spoken sentences is accompanied by a reduced blood oxygenation level-dependent (BOLD) response (repetition suppression) in regions implicated in sentence comprehension, and whether the magnitude of such suppression depends on the task under which the sentences are comprehended or on the complexity of the sentences. We found that sentence repetition was associated with repetition suppression in temporal regions, independent of whether participants judged the sensibility of the statements or listened to the statements passively. In contrast, repetition suppression in inferior frontal regions was found only in the context of the task demanding active judgment. These results suggest that repetition suppression in temporal regions reflects facilitation of sentence comprehension processing per se, whereas in frontal regions it reflects, at least in part, easier execution of specific psycholinguistic judgments.
Journal of Cognitive Neuroscience (2004) 16 (7): 1173–1184.
Published: 01 September 2004
Abstract
To recognize phonemes across variation in talkers, listeners can use information about vocal characteristics, a process referred to as "talker normalization." The present study investigates the cortical mechanisms underlying talker normalization using fMRI. Listeners recognized target words presented in either a spoken list produced by a single talker or a mix of different talkers. It was found that both conditions activate an extensive cortical network. However, recognizing words in the mixed-talker condition, relative to the blocked-talker condition, activated middle/superior temporal and superior parietal regions to a greater degree. This temporal–parietal network is possibly associated with selectively attending to and processing the spectral and spatial acoustic cues required for recognizing speech in a mixed-talker condition.