Search results: Ingrid S. Johnsrude (1-7 of 7)
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2022) 34 (6): 933–950.
Published: 02 May 2022
Abstract
Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage in social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far. We recruited young, normal-hearing human adults (both sexes) and investigated how speech intelligibility and engagement during naturalistic story listening are affected by the level of acoustic masking (12-talker babble) at different signal-to-noise ratios (SNRs). We observed that word-report scores were above 80% for all but the lowest SNR we tested (−3 dB SNR), at which performance dropped to 54%. We also calculated intersubject correlation (ISC) using EEG data to identify dynamic spatial patterns of shared neural activity evoked by the stories. ISC has been used as a neural measure of participants' engagement with naturalistic materials. Our results show that ISC was stable across all but the lowest SNRs, despite reduced speech intelligibility. Comparing ISC and intelligibility demonstrated that word-report performance declined more strongly with decreasing SNR than ISC did. Our measure of neural engagement suggests that individuals remain engaged in story listening despite missing words because of background noise. Our work provides a potentially fruitful approach for investigating listener engagement with naturalistic, spoken stories that may be used to investigate (dis)engagement in older adults with hearing impairment.
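The abstract does not specify how ISC was computed; one common variant (a leave-one-out Pearson correlation of each participant's EEG time course with the average of the others) can be sketched as follows. The function name and the toy data are illustrative, and the study's actual pipeline (e.g., correlated component analysis over EEG channels) may well differ:

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out intersubject correlation (ISC).

    data: array of shape (n_subjects, n_timepoints), one neural time
    course per subject. Each subject's time course is correlated with
    the mean of all other subjects', and the correlations are averaged.
    This is one common ISC variant, not necessarily the paper's method.
    """
    data = np.asarray(data, dtype=float)
    n_subj = data.shape[0]
    r_values = []
    for i in range(n_subj):
        others = np.delete(data, i, axis=0).mean(axis=0)
        r_values.append(np.corrcoef(data[i], others)[0, 1])
    return float(np.mean(r_values))

# Toy example: a shared stimulus-driven signal plus subject-specific
# noise yields high ISC; unrelated noise alone would yield ISC near 0.
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 20, 500))
subjects = np.stack([shared + 0.3 * rng.standard_normal(500)
                     for _ in range(10)])
print(intersubject_correlation(subjects))
```

In this framing, engagement with the story is indexed by how strongly listeners' neural responses covary, so a stable ISC across SNRs (as reported above) means the shared, stimulus-driven component survives even when some words are masked.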
Journal of Cognitive Neuroscience (2016) 28 (8): 1210–1227.
Published: 01 August 2016
Abstract
Every day we generate motor responses that are timed with external cues. This phenomenon of sensorimotor synchronization has been simplified and studied extensively using finger tapping sequences that are executed in synchrony with auditory stimuli. The predictive saccade paradigm closely resembles the finger tapping task. In this paradigm, participants follow a visual target that “steps” between two fixed locations on a visual screen at predictable ISIs. Eventually, the time from target appearance to saccade initiation (i.e., saccadic RT) becomes predictive with values nearing 0 msec. Unlike the finger tapping literature, neural control of predictive behavior described within the eye movement literature has not been well established and is inconsistent, especially between neuroimaging and patient lesion studies. To resolve these discrepancies, we used fMRI to investigate the neural correlates of predictive saccades by contrasting brain areas involved with behavior generated from the predictive saccade task with behavior generated from a reactive saccade task (saccades are generated toward targets that are unpredictably timed). We observed striking differences in neural recruitment between reactive and predictive conditions: Reactive saccades recruited oculomotor structures, as predicted, whereas predictive saccades recruited brain structures that support timing in motor responses, such as the crus I of the cerebellum, and structures commonly associated with the default mode network. Therefore, our results were more consistent with those found in the finger tapping literature.
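The abstract contrasts predictive saccades (RTs nearing 0 msec, sometimes anticipating the target) with reactive ones. A common operationalization, not stated in the abstract, classifies saccades with latencies below roughly 100 msec as predictive, since visually triggered saccades need at least that long to program; the helper below and the cutoff are illustrative assumptions:

```python
# Hypothetical helper: split saccadic RTs into predictive vs. reactive.
# The ~100 msec cutoff is a common convention (visually guided saccades
# require ~100 msec of processing), not a value given in the abstract.

def classify_saccades(rts_msec, cutoff_msec=100.0):
    """Split saccadic reaction times (target onset to saccade onset,
    in msec; negative values mean the eyes moved before the target
    appeared) into 'predictive' and 'reactive' groups."""
    predictive = [rt for rt in rts_msec if rt < cutoff_msec]
    reactive = [rt for rt in rts_msec if rt >= cutoff_msec]
    return predictive, reactive

rts = [-150.0, -20.0, 5.0, 90.0, 180.0, 220.0]
pred, react = classify_saccades(rts)
print(len(pred), len(react))  # → 4 2
```

Under this scheme, trials in the predictable-ISI task would accumulate in the predictive group as participants learn the target's rhythm, while the unpredictably timed task should yield almost exclusively reactive latencies.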
Journal of Cognitive Neuroscience (2011) 23 (12): 3914–3932.
Published: 01 December 2011
Abstract
When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from “top–down” processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top–down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top–down effects, this pattern is more compatible with a purely feedforward or bottom–up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply “top–down” neural processes.
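The SNR manipulation used in studies like this is conventionally defined as 10·log10 of the speech-to-noise power ratio, with the noise rescaled to hit each target level. A minimal generic sketch follows; the tone stand-in and function name are illustrative (the study's actual stimuli were sentences in speech envelope/spectrum noise):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals
    `snr_db`, then add it to `speech`. Generic illustration of the
    SNR manipulation, not the study's exact stimulus pipeline."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power needed to achieve the target SNR in dB.
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled_noise

rng = np.random.default_rng(1)
# 1 s of a 220 Hz tone at 16 kHz as a stand-in for a speech signal.
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, snr_db=-3.0)
```

At negative SNRs such as −3 dB the noise carries more power than the speech, which is the regime where intelligibility effects like those described above typically emerge.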
Journal of Cognitive Neuroscience (2011) 23 (10): 2675–2689.
Published: 01 October 2011
Abstract
We investigate whether the neural correlates of the continuity illusion, as measured using fMRI, are modulated by attention. As we have shown previously, when two formants of a synthetic vowel are presented in an alternating pattern, the vowel can be identified if the gaps in each formant are filled with bursts of plausible masking noise, causing the illusory percept of a continuous vowel (“Illusion” condition). When the formant-to-noise ratio is increased so that noise no longer plausibly masks the formants, the formants are heard as interrupted (“Illusion Break” condition) and vowels are not identifiable. A region of the left middle temporal gyrus (MTG) is sensitive both to intact synthetic vowels (two formants present simultaneously) and to Illusion stimuli, compared to Illusion Break stimuli. Here, we compared these conditions in the presence and absence of attention. We examined fMRI signal for different sound types under three attentional conditions: full attention to the vowels; attention to a visual distracter; or attention to an auditory distracter. Crucially, although a robust main effect of attentional state was observed in many regions, the effect of attention did not differ systematically for the illusory vowels compared to either intact vowels or to the Illusion Break stimuli in the left STG/MTG vowel-sensitive region. This result suggests that illusory continuity of vowels is an obligatory perceptual process, and operates independently of attentional state. An additional finding was that the sensitivity of primary auditory cortex to the number of sound onsets in the stimulus was modulated by attention.
Journal of Cognitive Neuroscience (2010) 22 (8): 1770–1781.
Published: 01 August 2010
Abstract
The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant–vowel–consonant word (“Ted”) and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of “Ted” or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type × Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.
Journal of Cognitive Neuroscience (2009) 21 (10): 2027–2045.
Published: 01 October 2009
Abstract
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
Journal of Cognitive Neuroscience (2008) 20 (10): 1737–1752.
Published: 01 October 2008
Abstract
We used functional magnetic resonance imaging to study the neural processing of vowels whose perception depends on the continuity illusion. Participants heard sequences of two-formant vowels under a number of listening conditions. In the “vowel conditions,” both formants were always present simultaneously and the stimuli were perceived as speech-like. Contrasted with a range of nonspeech sounds, these vowels elicited activity in the posterior middle temporal gyrus (MTG) and superior temporal sulcus (STS). When the two formants alternated in time, the “speech-likeness” of the sounds was reduced. It could be partially restored by filling the silent gaps in each formant with bands of noise (the “Illusion” condition) because the noise induced an illusion of continuity in each formant region, causing the two formants to be perceived as simultaneous. However, this manipulation was only effective at low formant-to-noise ratios (FNRs). When the FNR was increased, the illusion broke down (the “illusion-break” condition). Activation in vowel-sensitive regions of the MTG was greater in the illusion than in the illusion-break condition, consistent with the perception of Illusion stimuli as vowels. Activity in Heschl's gyri (HG), the approximate location of the primary auditory cortex, showed the opposite pattern, and may depend instead on the number of perceptual onsets in a sound. Our results demonstrate that speech-sensitive regions of the MTG are sensitive not to the physical characteristics of the stimulus but to the perception of the stimulus as speech, and also provide an anatomically distinct, objective physiological correlate of the continuity illusion in human listeners.