Search results for Mari Tervaniemi: 1-5 of 5
Journal of Cognitive Neuroscience (2010) 22 (12): 2716–2727.
Published: 01 December 2010
Abstract
Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients with MCA stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both the music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.
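To make the MMNm amplitude measure concrete, the following Python sketch shows one generic way to quantify a mismatch response as a deviant-minus-standard difference wave and to correlate its change across sessions with a behavioral change score. All data, time windows, effect sizes, and variable names here are simulated placeholders, not the study's recordings or analysis pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients, n_times = 54, 300
times = np.linspace(-0.1, 0.5, n_times)          # epoch from -100 to 500 ms

def fake_evoked(mmn_strength):
    """Simulated per-patient average evoked field; an MMN-like negative
    deflection of the given strength is added between 100 and 200 ms."""
    resp = rng.normal(0.0, 1e-15, (n_patients, n_times))
    resp[:, (times > 0.1) & (times < 0.2)] -= mmn_strength
    return resp

# Simulated standard and frequency-deviant responses at baseline and follow-up
std_wk1, dev_wk1 = fake_evoked(0.0), fake_evoked(2e-15)
std_mo6, dev_mo6 = fake_evoked(0.0), fake_evoked(4e-15)

# MMN(m) difference wave = deviant minus standard; amplitude = mean in 100-200 ms
win = (times > 0.1) & (times < 0.2)
mmn_wk1 = (dev_wk1 - std_wk1)[:, win].mean(axis=1)
mmn_mo6 = (dev_mo6 - std_mo6)[:, win].mean(axis=1)
mmn_growth = np.abs(mmn_mo6) - np.abs(mmn_wk1)   # amplitude increase over follow-up

# Hypothetical behavioral change scores (e.g., verbal memory improvement)
behavioral_change = rng.normal(0.5, 1.0, n_patients)

r, p = pearsonr(mmn_growth, behavioral_change)
print(f"correlation r = {r:.2f}, p = {p:.3f}")
```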
Journal of Cognitive Neuroscience (2009) 21 (11): 2230–2244.
Published: 01 November 2009
Abstract
At the level of the auditory cortex, musicians discriminate pitch changes more accurately than nonmusicians. However, there is no agreement on how sound familiarity and musical expertise interact in the formation of pitch-change discrimination skills, that is, whether musicians possess musical pitch discrimination abilities that are generally more accurate than those of nonmusicians or, alternatively, whether they may be distinguished from nonmusicians particularly with respect to the discrimination of nonprototypical sounds that do not play a reference role in Western tonal music. To resolve this, we used magnetoencephalography (MEG) to measure the change-related magnetic mismatch response (MMNm) in musicians and nonmusicians to two nonprototypical chords, a “dissonant” chord containing a highly unpleasant interval and a “mistuned” chord including a mistuned pitch, and a minor chord, all inserted in a context of major chords. Major and minor chords are the most frequently used chords in Western tonal music and the ones with which both musicians and nonmusicians are most familiar, whereas the other chords are encountered more rarely in tonal music. The MMNm was stronger in musicians than in nonmusicians in response to the dissonant and mistuned chords, whereas no group difference was found in the MMNm strength for minor chords. Correspondingly, the length of musical training correlated with the MMNm strength for the dissonant and mistuned chords only. Our findings provide evidence for superior automatic discrimination of nonprototypical chords in musicians. Most likely, this results from a highly sophisticated auditory system in musicians allowing a more efficient discrimination of chords deviating from the conventional categories of tonal music.
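The chord oddball design can be illustrated with a short sketch that builds a trial sequence of frequent major-chord standards with occasional dissonant, mistuned, and minor deviants. The root frequency, equal-tempered intervals, mistuning factor, and deviant probability below are arbitrary assumptions for illustration, not the stimulus parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
root = 440.0  # arbitrary example root (A4)

# Chord recipes as frequency lists (equal-tempered intervals above the root)
chords = {
    "major":     [root, root * 2 ** (4 / 12), root * 2 ** (7 / 12)],        # root, M3, P5
    "minor":     [root, root * 2 ** (3 / 12), root * 2 ** (7 / 12)],        # root, m3, P5
    "mistuned":  [root, root * 2 ** (4 / 12) * 1.02, root * 2 ** (7 / 12)], # sharpened third
    "dissonant": [root, root * 2 ** (1 / 12), root * 2 ** (7 / 12)],        # minor 2nd inside
}

def make_sequence(n_trials=600, p_deviant=0.15):
    """Mostly 'major' standards; the three deviant types share p_deviant equally."""
    deviant_types = ["minor", "mistuned", "dissonant"]
    sequence = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            sequence.append(str(rng.choice(deviant_types)))
        else:
            sequence.append("major")
    return sequence

sequence = make_sequence()
print({name: sequence.count(name) for name in chords})
```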
Journal of Cognitive Neuroscience (2006) 18 (12): 1959–1972.
Published: 01 November 2006
Abstract
Timbre is a multidimensional perceptual attribute of complex tones that characterizes the identity of a sound source. Our study explores the representation in auditory sensory memory of three timbre dimensions (acoustically related to attack time, spectral centroid, and spectrum fine structure), using the mismatch negativity (MMN) component of the auditory event-related potential. MMN is elicited by a discriminable change in a sound sequence and reflects the detection of the discrepancy between the current stimulus and traces in auditory sensory memory. The stimuli used in the present study were carefully controlled synthetic tones. MMNs were recorded in response to changes along each of the three timbre dimensions and their combinations. Additivity of unidimensional MMNs and dipole modeling results suggest partially separate MMN generators for different timbre dimensions, reflecting their mainly separate processing in auditory sensory memory. The results extend to timbre dimensions the separation of sensory-memory representations that has already been reported for basic perceptual attributes (pitch, loudness, duration, and location) of sound sources.
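The additivity logic can be sketched as follows: if two timbre dimensions are handled by (partially) separate generators, the MMN to a deviant changing both dimensions should approximate the sum of the two unidimensional MMN difference waves. The waveforms below are simulated placeholders used only to show the comparison, not the study's data or dipole modeling.

```python
import numpy as np

rng = np.random.default_rng(2)
n_times = 250                                   # e.g., -100 to 400 ms epoch

def noise():
    return rng.normal(0.0, 0.05, n_times)

# Simulated unidimensional MMN difference waves (deviant minus standard)
mmn_attack   = -0.8 * np.hanning(n_times) + noise()   # attack-time change
mmn_centroid = -1.0 * np.hanning(n_times) + noise()   # spectral-centroid change

# Simulated MMN to a deviant changing both dimensions at once
mmn_both = -(0.8 + 1.0) * np.hanning(n_times) + noise()

# Additivity check: does the two-dimensional MMN match the algebraic sum
# of the unidimensional MMNs (compared here via peak amplitude)?
predicted = mmn_attack + mmn_centroid
print("observed two-dimensional peak:", round(float(mmn_both.min()), 2))
print("sum of unidimensional peaks:  ", round(float(predicted.min()), 2))
```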
Journal of Cognitive Neuroscience (2006) 18 (8): 1292–1303.
Published: 01 August 2006
Abstract
Implicit knowledge has been proposed to be the substrate of intuition because intuitive judgments resemble implicit processes. We investigated whether the automatically elicited mismatch negativity (MMN) component of auditory event-related potentials (ERPs) can reflect implicit knowledge and whether this knowledge can be utilized for intuitive sound discrimination. We also determined the sensitivity of the attention- and task-dependent P3 component to intuitive versus explicit knowledge. We recorded the ERPs elicited in an “abstract” oddball paradigm. Tone pairs roving over different frequencies, but with a constant ascending interval within each pair, were presented as frequent standard events. The standards were occasionally replaced by deviating, descending tone pairs. The ERPs were recorded under both ignore and attend conditions. Subjects were interviewed and classified on the basis of whether or not they could detect the deviants. The deviants elicited an MMN even in subjects who, subsequent to the MMN recording, did not express awareness of the deviants. This suggests that these subjects possessed implicit knowledge of the sound-sequence structure. Some of these subjects learned, in an associative training session, to detect the deviants intuitively, that is, they could detect the deviants but did not give a correct description of how the deviants differed from the standards. Intuitive deviant detection was not accompanied by P3 elicitation, whereas subjects who developed explicit knowledge of the sound sequence during the training did show a P3 to the detected deviants.
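As a rough illustration of the roving tone-pair paradigm, the sketch below generates pairs whose first-tone frequency varies from pair to pair while the within-pair interval is held constant and ascending for standards, with occasional descending deviants. The frequency range, interval size, and deviant probability are assumptions made for the example, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_pairs(n_pairs=400, p_deviant=0.1, ratio=2 ** (7 / 12)):
    """Return (first_hz, second_hz, is_deviant) tuples. The first-tone frequency
    roves from pair to pair; standards ascend by a fixed ratio (here a perfect
    fifth, an arbitrary choice), and deviants descend by the same ratio."""
    pairs = []
    for _ in range(n_pairs):
        first = float(rng.uniform(300.0, 800.0))
        is_deviant = bool(rng.random() < p_deviant)
        second = first / ratio if is_deviant else first * ratio
        pairs.append((first, second, is_deviant))
    return pairs

pairs = make_pairs()
n_dev = sum(is_dev for _, _, is_dev in pairs)
print(f"{n_dev} descending deviant pairs out of {len(pairs)}")
```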
Journal of Cognitive Neuroscience (2004) 16 (2): 331–338.
Published: 01 March 2004
Abstract
It is believed that auditory processes governing grouping and segmentation of sounds are automatic and represent universal aspects of music perception (e.g., they are independent of the listener's musical skill). The present study challenges this view by showing that musicians and nonmusicians differ in their ability to group consecutive sounds preattentively. We measured event-related potentials (ERPs) from professional musicians and nonmusicians who were presented with isochronous tone sequences that they ignored. Four consecutive tones in a sequence could be grouped according to either pitch similarity or good continuation of pitch. Occasionally, the tone-group length was violated by a deviant tone. The mismatch negativity (MMN) was elicited by the deviants in both subject groups when the sounds could be grouped based on pitch similarity. In contrast, the MMN was elicited only in musicians when the sounds could be grouped according to good continuation of pitch. These results suggest that some forms of auditory grouping depend on musical skill and that not all aspects of auditory grouping are universal.
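The two grouping principles can be illustrated with a brief sketch that builds four-tone groups either by pitch similarity (a repeated pitch) or by good continuation (a steadily rising pitch line), with occasional violations of the expected group structure. The pitch values, step size, violation probability, and the way the violation is implemented (here, truncating a group) are arbitrary choices for illustration, not the stimuli used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def similarity_group(pitch_hz):
    """Four tones grouped by pitch similarity: the same pitch repeated."""
    return [pitch_hz] * 4

def continuation_group(start_hz, step=1.06):
    """Four tones grouped by good continuation: a steadily rising pitch line."""
    return [start_hz * step ** i for i in range(4)]

def make_sequence(kind, n_groups=100, p_violation=0.1):
    """Concatenate four-tone groups; occasionally truncate a group to three
    tones as one (arbitrary) way of violating the expected group length."""
    tones = []
    for _ in range(n_groups):
        start = float(rng.uniform(400.0, 600.0))
        group = similarity_group(start) if kind == "similarity" else continuation_group(start)
        if rng.random() < p_violation:
            group = group[:3]
        tones.extend(group)
    return tones

print(len(make_sequence("similarity")), len(make_sequence("continuation")))
```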