Nina Kraus (1-9 of 9 results)
Journal of Cognitive Neuroscience (2018) 30 (1): 14–24.
Published: 01 January 2018
Musical rhythm engages motor and reward circuitry that is important for cognitive control, and there is evidence for enhanced inhibitory control in musicians. We recently revealed an inhibitory control advantage in percussionists compared with vocalists, highlighting the potential importance of rhythmic expertise in mediating this advantage. Previous research has shown that better inhibitory control is associated with less variable performance in simple sensorimotor synchronization tasks; however, this relationship has not been examined through the lens of rhythmic expertise. We hypothesize that the development of rhythm skills strengthens inhibitory control in two ways: by fine-tuning motor networks through the precise coordination of movements “in time” and by activating reward-based mechanisms, such as predictive processing and conflict monitoring, which are involved in tracking temporal structure in music. Here, we assess adult percussionists and nonpercussionists on inhibitory control, selective attention, basic drumming skills (self-paced, paced, and continuation drumming), and cortical evoked responses to an auditory stimulus presented on versus off the beat of music. Consistent with our hypotheses, we find that better inhibitory control is correlated with more consistent drumming and enhanced neural tracking of the musical beat. Drumming variability and the neural index of beat alignment each contribute unique predictive power to a regression model, explaining 57% of variance in inhibitory control. These outcomes present the first evidence that enhanced inhibitory control in musicians may be mediated by rhythmic expertise and provide a foundation for future research investigating the potential for rhythm-based training to strengthen cognitive function.
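As a rough illustration of the kind of analysis reported above, the sketch below fits a two-predictor ordinary least squares regression in which drumming variability and a neural index of beat alignment jointly predict an inhibitory-control score, and reports the variance explained (R^2). The variable names and data are hypothetical placeholders, not the study's materials.

import numpy as np

# Hypothetical data standing in for the study's measures.
rng = np.random.default_rng(0)
n = 30                                    # placeholder number of participants
drum_var = rng.normal(size=n)             # drumming inter-tap variability (z-scored)
beat_align = rng.normal(size=n)           # neural index of beat alignment (z-scored)
inhibition = -0.5 * drum_var + 0.4 * beat_align + rng.normal(scale=0.7, size=n)

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), drum_var, beat_align])
beta, *_ = np.linalg.lstsq(X, inhibition, rcond=None)
pred = X @ beta
r_squared = 1 - np.sum((inhibition - pred) ** 2) / np.sum((inhibition - inhibition.mean()) ** 2)
print(f"R^2 = {r_squared:.2f}")           # the paper reports 57% of variance explained for its real data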
Journal of Cognitive Neuroscience (2017) 29 (5): 855–868.
Published: 01 May 2017
Durational patterns provide cues to linguistic structure, so variations in rhythm skills may have consequences for language development. Understanding individual differences in rhythm skills, therefore, could help explain variability in language abilities across the population. We investigated the neural foundations of rhythmic proficiency and its relation to language skills in young adults. We hypothesized that rhythmic abilities can be characterized by at least two constructs, which are tied to independent language abilities and neural profiles. Specifically, we hypothesized that rhythm skills that require integration of information across time rely upon the consistency of slow, low-frequency auditory processing, which we measured using the evoked cortical response. On the other hand, we hypothesized that rhythm skills that require fine temporal precision rely upon the consistency of fast, higher-frequency auditory processing, which we measured using the frequency-following response. Performance on rhythm tests aligned with two constructs: rhythm sequencing and synchronization. Rhythm sequencing and synchronization were linked to the consistency of slow cortical and fast frequency-following responses, respectively. Furthermore, whereas rhythm sequencing ability was linked to verbal memory and reading, synchronization ability was linked only to nonverbal auditory temporal processing. Thus, rhythm perception at different time scales reflects distinct abilities, which rely on distinct auditory neural resources. In young adults, slow rhythmic processing makes the more extensive contribution to language skills.
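Trial-to-trial consistency of an evoked response is commonly quantified by correlating sub-averages built from interleaved trials; the sketch below assumes that approach (it may differ from the authors' exact pipeline) and uses simulated data.

import numpy as np

def response_consistency(trials):
    """trials: array of shape (n_trials, n_samples); returns the correlation
    between the average of odd-numbered and even-numbered trials."""
    odd_avg = trials[::2].mean(axis=0)
    even_avg = trials[1::2].mean(axis=0)
    return float(np.corrcoef(odd_avg, even_avg)[0, 1])

# Simulated single-trial responses: a common waveform plus noise.
trials = np.random.default_rng(1).normal(size=(300, 1000)) + np.sin(np.linspace(0, 20, 1000))
print(response_consistency(trials))        # values near 1 indicate highly consistent encoding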
Journal of Cognitive Neuroscience (2015) 27 (2): 400–408.
Published: 01 February 2015
The neural resonance theory of musical meter explains musical beat tracking as the result of entrainment of neural oscillations to the beat frequency and its higher harmonics. This theory has gained empirical support from experiments using simple, abstract stimuli. However, to date there has been no empirical evidence for a role of neural entrainment in the perception of the beat of ecologically valid music. Here we presented participants with a single pop song with a superimposed bassoon sound. This stimulus was either lined up with the beat of the music or shifted away from the beat by 25% of the average interbeat interval. Both conditions elicited a neural response at the beat frequency. However, although the on-the-beat condition elicited a clear response at the first harmonic of the beat, this frequency was absent in the neural response to the off-the-beat condition. These results support a role for neural entrainment in tracking the metrical structure of real music and show that neural meter tracking can be disrupted by the presentation of contradictory rhythmic cues.
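The frequency-domain logic of this kind of entrainment analysis can be sketched as follows: take the averaged EEG, compute its spectrum, and read out the amplitude at the beat frequency and at its first harmonic (twice the beat rate). The signal, sampling rate, and beat frequency below are simulated assumptions, not the study's data.

import numpy as np

fs = 250.0                                 # assumed EEG sampling rate (Hz)
beat_hz = 2.0                              # e.g., a 120-bpm song has a 2 Hz beat
t = np.arange(0, 60, 1 / fs)               # one minute of simulated averaged EEG
eeg = (np.sin(2 * np.pi * beat_hz * t)
       + 0.5 * np.sin(2 * np.pi * 2 * beat_hz * t)
       + np.random.default_rng(2).normal(scale=2.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f_hz):
    # Amplitude at the spectral bin closest to f_hz.
    return float(spectrum[np.argmin(np.abs(freqs - f_hz))])

print("amplitude at the beat frequency:", amp_at(beat_hz))
print("amplitude at the first harmonic:", amp_at(2 * beat_hz))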
Journal of Cognitive Neuroscience (2015) 27 (1): 124–140.
Published: 01 January 2015
To make sense of our ever-changing world, our brains search out patterns. This drive can be so strong that the brain imposes patterns when there are none. The opposite can also occur: The brain can overlook patterns because they do not conform to expectations. In this study, we examined this neural sensitivity to patterns within the auditory brainstem, an evolutionarily ancient part of the brain that can be fine-tuned by experience and is integral to an array of cognitive functions. We have recently shown that this auditory hub is sensitive to patterns embedded within a novel sound stream, and we established a link between neural sensitivity and behavioral indices of learning [Skoe, E., Krizman, J., Spitzer, E., & Kraus, N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience, 243, 104–114, 2013]. We now ask whether this sensitivity to stimulus statistics is biased by prior experience and the expectations arising from this experience. To address this question, we recorded complex auditory brainstem responses (cABRs) to two patterned sound sequences formed from a set of eight repeating tones. For both patterned sequences, the eight tones were presented such that the transitional probability (TP) between neighboring tones was either 33% (low predictability) or 100% (high predictability). Although both sequences were novel to the healthy young adult listener and had similar TP distributions, one was perceived to be more musical than the other. For the more musical sequence, participants performed above chance when tested on their recognition of the most predictable two-tone combinations within the sequence (TP of 100%); in this case, the cABR differed from a baseline condition where the sound sequence had no predictable structure. In contrast, for the less musical sequence, learning was at chance, suggesting that listeners were “deaf” to the highly predictable repeating two-tone combinations in the sequence. For this condition, the cABR also did not differ from baseline. From this, we posit that the brainstem acts as a Bayesian sound processor, such that it factors in prior knowledge about the environment to index the probability of particular events within ever-changing sensory conditions.
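One plausible way to build a tone stream with the transitional probabilities described above is to group the eight tones into four fixed pairs (within-pair TP = 100%) and concatenate pairs so that each pair is followed by one of the other three (pair-to-pair TP of roughly 33%). This construction is an assumption for illustration only; the study's actual sequences may have been built differently.

import random

tones = list("ABCDEFGH")                             # eight placeholder tone labels
pairs = [tones[i:i + 2] for i in range(0, 8, 2)]     # four fixed pairs -> within-pair TP = 100%

def make_stream(n_pairs, rng):
    stream, prev = [], None
    for _ in range(n_pairs):
        choices = [p for p in pairs if p is not prev]  # never repeat a pair back-to-back
        prev = rng.choice(choices)                     # after the first pair, 1-of-3 -> ~33% TP
        stream.extend(prev)
    return stream

print(make_stream(10, random.Random(0)))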
Journal of Cognitive Neuroscience (2011) 23 (9): 2268–2279.
Published: 01 September 2011
The presence of irrelevant auditory information (other talkers, environmental noises) presents a major challenge to listening to speech. The fundamental frequency (F0) of the target speaker is thought to provide an important cue for the extraction of the speaker's voice from background noise, but little is known about the relationship between speech-in-noise (SIN) perceptual ability and neural encoding of the F0. Motivated by recent findings that music and language experience enhance brainstem representation of sound, we examined the hypothesis that brainstem encoding of the F0 is diminished to a greater degree by background noise in people with poorer perceptual abilities in noise. To this end, we measured speech-evoked auditory brainstem responses to /da/ in quiet and two multitalker babble conditions (two-talker and six-talker) in native English-speaking young adults who ranged in their ability to perceive and recall SIN. Listeners who were poorer performers on a standardized SIN measure demonstrated greater susceptibility to the degradative effects of noise on the neural encoding of the F0. Particularly diminished was their phase-locked activity to the fundamental frequency in the portion of the syllable known to be most vulnerable to perceptual disruption (i.e., the formant transition period). Our findings suggest that the subcortical representation of the F0 in noise contributes to the perception of speech in noisy conditions.
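A simple way to index phase-locked activity at the F0 within a response window, consistent with the kind of measure described above, is to window the brainstem response over the formant transition and read the spectral amplitude at the F0. The sampling rate, F0 value, time window, and waveform below are assumed placeholders.

import numpy as np

fs = 20000.0                               # assumed sampling rate (Hz)
f0 = 100.0                                 # assumed F0 of the /da/ syllable
t = np.arange(0, 0.17, 1 / fs)             # 170 ms of simulated brainstem response
response = np.sin(2 * np.pi * f0 * t) + np.random.default_rng(3).normal(scale=1.5, size=t.size)

win = (t >= 0.020) & (t <= 0.060)          # hypothetical formant-transition window
segment = response[win] * np.hanning(win.sum())
freqs = np.fft.rfftfreq(win.sum(), 1 / fs)
amps = np.abs(np.fft.rfft(segment)) / win.sum()
f0_amp = amps[np.argmin(np.abs(freqs - f0))]
print(f"F0 amplitude in the transition window: {f0_amp:.3f}")
# Comparing this value in quiet versus babble indexes how much noise degrades F0 encoding.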
Journal of Cognitive Neuroscience (2009) 21 (11): 2121–2128.
Published: 01 November 2009
In order to understand how emotional state influences the listener's physiological response to speech, subjects looked at emotion-evoking pictures while 32-channel EEG evoked responses (ERPs) to an unchanging auditory stimulus (“danny”) were collected. The pictures were selected from the International Affective Picture System database. They were rated by participants and differed in valence (positive, negative, neutral), but not in dominance and arousal. Effects of viewing negative emotion pictures were seen as early as 20 msec (p = .006). An analysis of the global field power highlighted a time period of interest (30.4–129.0 msec) where the effects of emotion are likely to be the most robust. At the cortical level, the responses differed significantly depending on the valence ratings the subjects provided for the visual stimuli, which divided them into the high valence intensity group and the low valence intensity group. The high valence intensity group exhibited a clear divergent bivalent effect of emotion (ERPs at Cz during viewing neutral pictures subtracted from ERPs during viewing positive or negative pictures) in the time period of interest (rΦ = .534, p < .01). Moreover, group differences emerged in the pattern of global activation during this time period. Although both groups demonstrated a significant effect of emotion (ANOVA, p = .004 and .006, low valence intensity and high valence intensity, respectively), the high valence intensity group exhibited a much larger effect. Whereas the low valence intensity group exhibited its smaller effect predominantly in frontal areas, the larger effect in the high valence intensity group was found globally, especially in the left temporal areas, with the largest divergent bivalent effects (ANOVA, p < .00001) in high valence intensity subjects around the midline. Thus, divergent bivalent effects were observed between 30 and 130 msec, and were dependent on the subject's subjective state, whereas the effects at 20 msec were evident only for negative emotion, independent of the subject's behavioral responses. Taken together, it appears that emotion can affect auditory function early in the sensory processing stream.
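Global field power (GFP), the measure used above to locate the time period of interest, is simply the standard deviation across electrodes at each time point. A minimal sketch on simulated 32-channel data:

import numpy as np

def global_field_power(erp):
    """erp: array of shape (n_channels, n_samples); returns GFP at each sample."""
    return erp.std(axis=0)

erp = np.random.default_rng(4).normal(size=(32, 500))   # simulated 32-channel ERP
gfp = global_field_power(erp)
peak_sample = int(np.argmax(gfp))                        # GFP peaks mark candidate windows of interest
print(peak_sample, gfp[peak_sample])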
Journal of Cognitive Neuroscience (2008) 20 (10): 1892–1902.
Published: 01 October 2008
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech comprises functionally and acoustically complex sounds that place high sensory and cognitive demands on the listener, the effects of long-term exposure to and experience with these sounds are often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency-following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited preattentively. We measured FFRs to the trained pitch patterns before and after training. Measures of pitch tracking were then derived from the FFR signals. We found increased accuracy in pitch tracking after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (the dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem to language learning and its plasticity in adulthood but also demonstrate the specificity of this contribution (i.e., changes in encoding occur only for specific, least familiar stimuli, not for all stimuli). Our findings complement existing data showing cortical changes after second-language learning, and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed.
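A pitch track of the FFR can be estimated with short-time autocorrelation and compared with the stimulus pitch contour to yield an error measure in the spirit of those described above. Everything below (sampling rate, contour, frame sizes) is a simulated assumption rather than the study's procedure.

import numpy as np

fs = 8000.0
t = np.arange(0, 0.25, 1 / fs)
stim_f0 = 100 + 40 * t / t[-1]                       # hypothetical rising pitch contour (Hz)
phase = 2 * np.pi * np.cumsum(stim_f0) / fs
ffr = np.sin(phase) + np.random.default_rng(5).normal(scale=0.8, size=t.size)

def autocorr_f0(frame, fs, fmin=75, fmax=200):
    # Pick the autocorrelation peak within the plausible F0 range.
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    return fs / lags[np.argmax(ac[lags])]

hop, win = int(0.010 * fs), int(0.040 * fs)          # 40 ms frames, 10 ms hop
track, truth = [], []
for start in range(0, t.size - win, hop):
    track.append(autocorr_f0(ffr[start:start + win], fs))
    truth.append(stim_f0[start:start + win].mean())
errors = np.abs(np.array(track) - np.array(truth))
print("mean absolute pitch-tracking error (Hz):", errors.mean())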
Journal of Cognitive Neuroscience (2007) 19 (3): 376–385.
Published: 01 March 2007
Children with language-based learning problems often exhibit pronounced speech perception difficulties. Specifically, these children have increased difficulty separating brief sounds occurring in rapid succession (temporal resolution). The purpose of this study was to better understand the consequences of auditory temporal resolution deficits from the perspective of the neural encoding of speech. The findings provide evidence that sensory processes relevant to cognition take place at much earlier levels than traditionally believed. Thresholds from a psychophysical backward masking task were used to divide children into groups with good and poor temporal resolution. Speech-evoked brainstem responses were analyzed across groups to measure the neural integrity of stimulus-time mechanisms. Results suggest that children with poor temporal resolution do not have an overall neural processing deficit, but rather a deficit specific to the encoding of certain acoustic cues in speech. Speech understanding relies on the ability to attach meaning to rapidly fluctuating changes of both the temporal and spectral information found in consonants and vowels. For this to happen properly, the auditory system must first accurately encode these time-varying acoustic cues. Speech perception difficulties that often co-occur in children with poor temporal resolution may originate as a neural encoding deficit in structures as early as the auditory brainstem. Thus, speech-evoked brainstem responses are a biological marker for auditory temporal processing ability.
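The group comparison described above can be sketched as a threshold-based split followed by a between-group test on a brainstem timing measure. The data, the median split, and the t test below are illustrative assumptions, not the study's statistics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
masking_thresh = rng.normal(40, 8, size=60)          # hypothetical backward-masking thresholds (dB)
peak_latency = 6.5 + 0.02 * masking_thresh + rng.normal(scale=0.2, size=60)  # hypothetical response latency (ms)

good = masking_thresh <= np.median(masking_thresh)   # good vs. poor temporal resolution
t_stat, p_val = stats.ttest_ind(peak_latency[good], peak_latency[~good])
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")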
Journal of Cognitive Neuroscience (1995) 7 (1): 25–32.
Published: 01 January 1995
A passively elicited cortical potential that reflects the brain's discrimination of small acoustic contrasts was measured in response to two slightly different speech stimuli in adult human subjects. Behavioral training in the discrimination of those speech stimuli resulted in a significant change in the duration and magnitude of the cortical potential. The results demonstrate that listening training can change the neurophysiologic responses of the central auditory system to just-perceptible differences in speech.
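The potential described above is typically quantified from a deviant-minus-standard difference wave, summarizing its magnitude and duration; the sketch below simulates that computation with placeholder waveforms and an arbitrary response criterion.

import numpy as np

fs = 500.0
t = np.arange(-0.1, 0.5, 1 / fs)                     # simulated epoch around stimulus onset
standard = np.random.default_rng(7).normal(scale=0.2, size=t.size)
deviant = standard - np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))   # added negativity near 200 ms

diff = deviant - standard                            # difference wave
active = np.abs(diff) > 0.5                          # arbitrary criterion for a response being present
magnitude = diff[active].mean() if active.any() else 0.0
duration_ms = 1000 * active.sum() / fs
print(f"magnitude = {magnitude:.2f} (arbitrary units), duration = {duration_ms:.0f} ms")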