Laurel J. Trainor
1-5 of 5
Journal Articles
Journal of Cognitive Neuroscience (2015) 27 (5): 1060–1067.
Published: 01 May 2015
Abstract
Sound waves emitted by two or more simultaneous sources reach the ear as one complex waveform. Auditory scene analysis involves parsing a complex waveform into separate perceptual representations of the sound sources [Bregman, A. S. Auditory scene analysis: The perceptual organization of sounds. London: MIT Press, 1990]. Harmonicity provides an important cue for auditory scene analysis. Normally, harmonics at integer multiples of a fundamental frequency are perceived as one sound with a pitch corresponding to the fundamental frequency. However, when one harmonic in such a complex, pitch-evoking sound is sufficiently mistuned, that harmonic emerges from the complex tone and is perceived as a separate auditory object. Previous work has shown that the percept of two objects is indexed in both children and adults by the object-related negativity component of the ERP derived from EEG recordings [Alain, C., Arnott, S. T., & Picton, T. W. Bottom–up and top–down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27, 1072–1089, 2001]. Here we examine the emergence of object-related responses to an 8% harmonic mistuning in infants between 2 and 12 months of age. Two-month-old infants showed no significant object-related response. However, in 4- to 12-month-old infants, a significant frontally positive component was present, and by 8–12 months, a significant frontocentral object-related negativity was present, similar to that seen in older children and adults. This is in accordance with previous research demonstrating that infants younger than 4 months of age do not integrate harmonic information to perceive pitch when the fundamental is missing [He, C., Hotson, L., & Trainor, L. J. Maturation of cortical mismatch responses to occasional pitch change in early infancy: Effects of presentation rate and magnitude of change. Neuropsychologia, 47, 218–229, 2009]. The results indicate that the ability to use harmonic information to segregate simultaneous sounds emerges at the cortical level between 2 and 4 months of age.
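As an illustration of the harmonic-mistuning manipulation described in this abstract, the short Python sketch below synthesizes a harmonic complex tone in which one harmonic is shifted upward by 8%. The fundamental frequency, number of harmonics, and choice of which harmonic is mistuned are illustrative assumptions, not the study's actual stimulus parameters.

    import numpy as np

    def harmonic_complex(f0=200.0, n_harmonics=10, mistuned_harmonic=3,
                         mistuning=0.08, dur=0.5, fs=44100):
        """Synthesize a harmonic complex tone with one proportionally
        mistuned harmonic (mistuning=0.08 corresponds to 8%).
        All parameter values are illustrative, not the study's stimuli."""
        t = np.arange(int(dur * fs)) / fs
        wave = np.zeros_like(t)
        for n in range(1, n_harmonics + 1):
            f = n * f0
            if n == mistuned_harmonic:
                f *= 1.0 + mistuning  # no longer an integer multiple of f0
            wave += np.sin(2 * np.pi * f * t)
        return wave / n_harmonics  # keep the summed waveform within [-1, 1]

    # An 8% mistuning of the 3rd harmonic of a 200 Hz complex moves it from 600 Hz to 648 Hz.
    stimulus = harmonic_complex()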
Journal Articles
Journal of Cognitive Neuroscience (2007) 19 (5): 878–892.
Published: 01 May 2007
Abstract
We investigated the emergence of discriminative responses to pitch by recording 2-, 3-, and 4-month-old infants' electro-encephalogram responses to infrequent pitch changes in piano tones. In all age groups, infants' responses to deviant tones were significantly different from responses to standard tones. However, two types of mismatch responses were observed simultaneously in the difference waves. An increase in the left-lateralized positive slow wave was prominent in 2-month-olds, present in 3-month-olds, but insignificant in 4-month-olds. A faster adultlike mismatch negativity (MMN), lateralized to the right hemisphere, emerged at 2 months of age and became earlier and stronger as age increased. The coexistence and dissociation of two types of mismatch responses suggests different underlying neuromechanisms for the two responses. Furthermore, the earlier emergence of the MMN-like component to changes in pitch compared to other sound features implies that neural circuits involved in generating MMN-like responses have different maturational timetables for different sound features.
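Both mismatch responses reported here are read from difference waves, that is, the averaged response to deviant tones minus the averaged response to standard tones. The following sketch shows that computation on generic epoched data; it is a minimal illustration, not the authors' analysis pipeline.

    import numpy as np

    def difference_wave(epochs, labels):
        """Deviant-minus-standard average over epoched recordings.

        epochs: array of shape (n_trials, n_channels, n_samples)
        labels: sequence of "standard"/"deviant" strings, one per trial
        """
        labels = np.asarray(labels)
        standard_avg = epochs[labels == "standard"].mean(axis=0)
        deviant_avg = epochs[labels == "deviant"].mean(axis=0)
        # A negative deflection in this wave indexes an MMN-like response;
        # a positive deflection indexes the slow positive mismatch response.
        return deviant_avg - standard_avg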
Journal Articles
Journal of Cognitive Neuroscience (2005) 17 (10): 1578–1592.
Published: 01 October 2005
Abstract
In music, multiple musical objects often overlap in time. Western polyphonic music contains multiple simultaneous melodic lines (referred to as “voices”) of equal importance. Previous electrophysiological studies have shown that pitch changes in a single melody are automatically encoded in memory traces, as indexed by mismatch negativity (MMN) and its magnetic counterpart (MMNm), and that this encoding process is enhanced by musical experience. In the present study, we examined whether two simultaneous melodies in polyphonic music are represented as separate entities in the auditory memory trace. Musicians and untrained controls were tested in both magnetoencephalography (MEG) and behavioral sessions. Polyphonic stimuli were created by combining two melodies (A and B), each consisting of the same five notes but in a different order. Melody A was in the high voice and Melody B in the low voice in one condition, and this was reversed in the other condition. On 50% of trials, a deviant final (5th) note was played either in the high or in the low voice, and it either went outside the key of the melody or remained within the key. These four deviations occurred with equal probability of 12.5% each. Clear MMNm was obtained for most changes in both groups, despite the 50% deviance level, with a larger amplitude in musicians than in controls. The response pattern was consistent across groups, with larger MMNm for deviants in the high voice than in the low voice, and larger MMNm for in-key than out-of-key changes, despite better behavioral performance for out-of-key changes. The results suggest that melodic information in each voice in polyphonic music is encoded in the sensory memory trace, that the higher voice is more salient than the lower, and that tonality may be processed primarily at cognitive stages subsequent to MMN generation.
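For reference, the deviance structure described above amounts to half of the trials ending on a standard final note and the remaining half split evenly across the four deviant types (high vs. low voice crossed with in-key vs. out-of-key), 12.5% each. A small sketch of a trial sequence drawn with those probabilities, using hypothetical condition labels, follows.

    import random

    # Probabilities as stated in the abstract; the condition labels are hypothetical.
    TRIAL_TYPES = [
        ("standard", 0.50),
        ("deviant_high_voice_in_key", 0.125),
        ("deviant_high_voice_out_of_key", 0.125),
        ("deviant_low_voice_in_key", 0.125),
        ("deviant_low_voice_out_of_key", 0.125),
    ]

    def make_trial_sequence(n_trials=400, seed=0):
        """Draw a random trial sequence matching the stated probabilities."""
        rng = random.Random(seed)
        types, weights = zip(*TRIAL_TYPES)
        return rng.choices(types, weights=weights, k=n_trials)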
Journal Articles
Journal of Cognitive Neuroscience (2004) 16 (6): 1010–1021.
Published: 01 July 2004
Abstract
In music, melodic information is thought to be encoded in two forms, a contour code (up/down pattern of pitch changes) and an interval code (pitch distances between successive notes). A recent study recording the mismatch negativity (MMN) evoked by pitch contour and interval deviations in simple melodies demonstrated that people with no formal music education process both contour and interval information in the auditory cortex automatically. However, it is still unclear whether musical experience enhances both strategies of melodic encoding. We designed stimuli to examine contour and interval information separately. In the contour condition there were eight different standard melodies (presented on 80% of trials), each consisting of five notes all ascending in pitch, and the corresponding deviant melodies (20%) were altered to descending on their final note. The interval condition used one five-note standard melody transposed to eight keys from trial to trial, and on deviant trials the last note was raised by one whole tone without changing the pitch contour. There was also a control condition, in which a standard tone (990.7 Hz) and a deviant tone (1111.0 Hz) were presented. The magnetic counterpart of the MMN (MMNm) from musicians and nonmusicians was obtained as the difference between the dipole moment in response to the standard and deviant trials recorded by magnetoencephalography. Significantly larger MMNm was present in musicians in both contour and interval conditions than in nonmusicians, whereas MMNm in the control condition was similar for both groups. The interval MMNm was larger than the contour MMNm in musicians. No hemispheric difference was found in either group. The results suggest that musical training enhances the ability to automatically register abstract changes in the relative pitch structure of melodies.
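As a quick check (not taken from the paper), the control-condition deviant lies roughly one whole tone above the standard, matching the size of the interval-condition change:

    import math

    standard_hz = 990.7
    deviant_hz = 1111.0

    # Interval size in semitones: 12 * log2(f2 / f1)
    semitones = 12 * math.log2(deviant_hz / standard_hz)
    print(round(semitones, 2))  # ~1.98, i.e., approximately one whole tone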
Journal Articles
Journal of Cognitive Neuroscience (2002) 14 (3): 430–442.
Published: 01 April 2002
Abstract
Most work on how pitch is encoded in the auditory cortex has focused on tonotopic (absolute) pitch maps. However, melodic information is thought to be encoded in the brain in two different “relative pitch” forms, a domain-general contour code (up/down pattern of pitch changes) and a music-specific interval code (exact pitch distances between notes). Event-related potentials were analyzed in nonmusicians from both passive and active oddball tasks where either the contour or the interval of melody-final notes was occasionally altered. The occasional deviant notes generated a right frontal positivity peaking around 350 msec and a central parietal P3b peaking around 580 msec that were present only when participants focused their attention on the auditory stimuli. Both types of melodic information were encoded automatically in the absence of absolute pitch cues, as indexed by a mismatch negativity wave recorded during the passive conditions. The results indicate that even in the absence of musical training, the brain is set up to automatically encode music-specific melodic information, even when absolute pitch information is not available.
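To make the two relative-pitch codes concrete, the sketch below derives a contour code (up/down pattern) and an interval code (signed semitone distances) from a hypothetical five-note melody; the representation is purely illustrative, not the authors' notation.

    def contour_code(midi_notes):
        """Return +1/-1/0 for each successive step (up/down/same pitch)."""
        return [(b > a) - (b < a) for a, b in zip(midi_notes, midi_notes[1:])]

    def interval_code(midi_notes):
        """Return signed semitone distances between successive notes."""
        return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

    melody = [60, 62, 64, 67, 65]  # C4 D4 E4 G4 F4 (hypothetical example)
    print(contour_code(melody))    # [1, 1, 1, -1]
    print(interval_code(melody))   # [2, 2, 3, -2]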