Risto Näätänen
1-8 of 8
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2010) 22 (6): 1319–1332.
Published: 01 June 2010
Abstract
Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech–sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are weighted differently in the foreign and native languages. The present study aimed to determine whether non-native-like cue weighting could be changed by using phonetic training. Before the training, we compared the use of spectral and duration cues of English /i/ and /I/ vowels (e.g., beat vs. bit) between native Finnish and English speakers. In Finnish, duration is used phonologically to separate short and long phonemes, and therefore Finns were expected to weight duration cues more than native English speakers. The cross-linguistic differences and training effects were investigated with behavioral and electrophysiological methods, in particular by measuring the MMN brain response that has been used to probe long-term memory representations for speech sounds. The behavioral results suggested that before the training, the Finns indeed relied more on duration in vowel recognition than the native English speakers did. After the training, however, the Finns were able to use the spectral cues of the vowels more reliably than before. Accordingly, the MMN brain responses revealed that the training had enhanced the Finns' ability to preattentively process the spectral cues of the English vowels. This suggests that as a result of training, plastic changes had occurred in the weighting of phonetic cues at early processing stages in the cortex.
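The cue-weighting idea above can be sketched numerically. The following is a minimal, hypothetical illustration (the weights and stimulus values are invented for the sketch, not taken from the study): a vowel is classified as /i/ or /I/ from a weighted sum of a spectral cue and a duration cue, with a Finnish-like listener weighting duration heavily and an English-like listener weighting spectrum heavily.

```python
# Hypothetical illustration of phonetic cue weighting (invented values,
# not the study's data). Each listener type combines two normalized
# cues with different weights.

def classify_vowel(spectral_cue, duration_cue, w_spectral, w_duration):
    """Return '/i/' if the weighted cue evidence exceeds 0, else '/I/'.
    Cues are normalized to [-1, 1]: positive = /i/-like, negative = /I/-like."""
    evidence = w_spectral * spectral_cue + w_duration * duration_cue
    return "/i/" if evidence > 0 else "/I/"

# A stimulus with an /I/-like spectrum but a long, /i/-like duration:
spectral, duration = -0.4, 0.8

# English-like listener: spectrum dominates, so the percept is /I/.
english = classify_vowel(spectral, duration, w_spectral=0.8, w_duration=0.2)

# Finnish-like listener (before training): duration dominates, so /i/.
finnish = classify_vowel(spectral, duration, w_spectral=0.2, w_duration=0.8)
```

On this sketch, training would correspond to shifting the Finnish-like weights toward the English-like ones, so that the same ambiguous stimulus is eventually categorized by its spectral cue.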
Journal of Cognitive Neuroscience (2006) 18 (12): 1959–1972.
Published: 01 November 2006
Abstract
Timbre is a multidimensional perceptual attribute of complex tones that characterizes the identity of a sound source. Our study explores the representation in auditory sensory memory of three timbre dimensions (acoustically related to attack time, spectral centroid, and spectrum fine structure), using the mismatch negativity (MMN) component of the auditory event-related potential. MMN is elicited by a discriminable change in a sound sequence and reflects the detection of the discrepancy between the current stimulus and traces in auditory sensory memory. The stimuli used in the present study were carefully controlled synthetic tones. MMNs were recorded after changes along each of the three timbre dimensions and their combinations. Additivity of unidimensional MMNs and dipole modeling results suggest partially separate MMN generators for different timbre dimensions, reflecting their mainly separate processing in auditory sensory memory. The results expand to timbre dimensions a property of separation of the representation in sensory memory that has already been reported between basic perceptual attributes (pitch, loudness, duration, and location) of sound sources.
Journal of Cognitive Neuroscience (2006) 18 (8): 1292–1303.
Published: 01 August 2006
Abstract
Implicit knowledge has been proposed to be the substrate of intuition because intuitive judgments resemble implicit processes. We investigated whether the automatically elicited mismatch negativity (MMN) component of the auditory event-related potentials (ERPs) can reflect implicit knowledge and whether this knowledge can be utilized for intuitive sound discrimination. We also determined the sensitivity of the attention- and task-dependent P3 component to intuitive versus explicit knowledge. We recorded the ERPs elicited in an “abstract” oddball paradigm. Tone pairs, roving over different frequencies but with a constant ascending inter-pair interval, were presented as frequent standard events. The standards were occasionally replaced by deviating, descending tone pairs. The ERPs were recorded under both ignore and attend conditions. Subjects were interviewed and classified on the basis of whether or not they could detect the deviants. The deviants elicited an MMN even in subjects who, subsequent to the MMN recording, did not express awareness of the deviants. This suggests that these subjects possessed implicit knowledge of the sound-sequence structure. Some of these subjects learned, in an associative training session, to detect the deviants intuitively; that is, they could detect the deviants but did not give a correct description of how the deviants differed from the standards. Intuitive deviant detection was not accompanied by P3 elicitation, whereas subjects who developed explicit knowledge of the sound sequence during the training did show a P3 to the detected deviants.
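The “abstract” oddball paradigm described above can be sketched as a stimulus-sequence generator. This is a simplified sketch only: the frequency range, step size, and deviant probability are assumptions, not the study's parameters. What it preserves is the abstract rule — the base frequency roves from pair to pair, standard pairs ascend, and occasional deviant pairs descend.

```python
import random

def make_pair_sequence(n_pairs=20, p_deviant=0.15, step=50.0, seed=1):
    """Generate (f1, f2, label) tone pairs for a roving 'abstract' oddball:
    the base frequency changes from pair to pair, standards ascend (f2 > f1),
    and deviants descend (f2 < f1). All numeric parameters are illustrative."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        f1 = rng.uniform(400.0, 800.0)  # roving base frequency (Hz)
        if rng.random() < p_deviant:
            pairs.append((f1, f1 - step, "deviant"))   # descending pair
        else:
            pairs.append((f1, f1 + step, "standard"))  # ascending pair
    return pairs

seq = make_pair_sequence()
```

Because the absolute frequencies never repeat, a listener (or an MMN generator) can only detect deviants by representing the ascending-interval rule itself, not any single frequency.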
Journal of Cognitive Neuroscience (2004) 16 (2): 331–338.
Published: 01 March 2004
Abstract
It is believed that auditory processes governing grouping and segmentation of sounds are automatic and represent universal aspects of music perception (e.g., they are independent of the listener's musical skill). The present study challenges this view by showing that musicians and nonmusicians differ in their ability to preattentively group consecutive sounds. We measured event-related potentials (ERPs) from professional musicians and nonmusicians who were presented with isochronous tone sequences that they ignored. Four consecutive tones in a sequence could be grouped according to either pitch similarity or good continuation of pitch. Occasionally, the tone-group length was violated by a deviant tone. The mismatch negativity (MMN) was elicited to the deviants in both subject groups when the sounds could be grouped based on pitch similarity. In contrast, MMN was only elicited in musicians when the sounds could be grouped according to good continuation of pitch. These results suggest that some forms of auditory grouping depend on musical skill and that not all aspects of auditory grouping are universal.
Journal of Cognitive Neuroscience (2003) 15 (8): 1195–1206.
Published: 15 November 2003
Abstract
To address the cerebral processing of grammar, we used whole-head high-density magnetoencephalography to record the brain's magnetic fields elicited by grammatically correct and incorrect auditory stimuli in the absence of directed attention to the stimulation. The stimuli were minimal short phrases of the Finnish language differing only in a single phoneme (a word-final inflectional affix), which rendered them either grammatical or ungrammatical. Acoustic and lexical differences were controlled for by using an orthogonal design in which the phoneme's effect on grammaticality was inverted. We found that occasional syntactically incorrect stimuli elicited larger mismatch negativity (MMN) responses than correct phrases. The MMN was earlier proposed as an index of preattentive automatic speech processing. Therefore, its modulation by grammaticality under nonattend conditions suggests that early syntax processing in the human brain may take place outside the focus of attention. Source analysis (single-dipole models and minimum-norm current estimates) indicated grammaticality-dependent differential activation of the left superior temporal cortex, suggesting that this brain structure may play an important role in such automatic grammar processing.
Journal of Cognitive Neuroscience (1998) 10 (5): 590–604.
Published: 01 September 1998
Abstract
Behavioral and event-related brain potential (ERP) measures were used to elucidate the neural mechanisms of involuntary engagement of attention by novelty and change in the acoustic environment. The behavioral measures consisted of the reaction time (RT) and performance accuracy (hit rate) in a forced-choice visual RT task where subjects were to discriminate between odd and even numbers. Each visual stimulus was preceded by an irrelevant auditory stimulus, which was randomly either a “standard” tone (80%), a slightly higher “deviant” tone (10%), or a natural, “novel” sound (10%). Novel sounds prolonged the RT to successive visual stimuli by 17 msec as compared with the RT to visual stimuli that followed standard tones. Deviant tones, in turn, decreased the hit rate but did not significantly affect the RT. In the ERPs to deviant tones, the mismatch negativity (MMN), peaking at 150 msec, and a second negativity, peaking at 400 msec, could be observed. Novel sounds elicited an enhanced N1, with a probable overlap by the MMN, and a large positive P3a response with two different subcomponents: an early centrally dominant P3a, peaking at 230 msec, and a late P3a, peaking at 315 msec with a right-frontal scalp maximum. The present results suggest the involvement of two different neural mechanisms in triggering involuntary attention to acoustic novelty and change: a transient-detector mechanism activated by novel sounds and reflected in the N1 and a stimulus-change detector mechanism activated by deviant tones and novel sounds and reflected in the MMN. The observed differential distracting effects by slightly deviant tones and widely deviant novel sounds support the notion of two separate mechanisms of involuntary attention.
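The auditory distraction paradigm above can be sketched as a trial-sequence generator. Only the stimulus probabilities (standard 80%, deviant 10%, novel 10%) and the sound-then-digit trial structure come from the abstract; the trial count, seed, and digit task details are illustrative assumptions.

```python
import random

def make_distraction_trials(n_trials=200, seed=0):
    """Each trial: an irrelevant sound (standard 80%, deviant 10%, novel 10%)
    followed by a visual digit to be classified as odd or even.
    Probabilities follow the abstract; everything else is illustrative."""
    rng = random.Random(seed)
    sounds = rng.choices(["standard", "deviant", "novel"],
                         weights=[0.8, 0.1, 0.1], k=n_trials)
    digits = [rng.randint(0, 9) for _ in range(n_trials)]
    return list(zip(sounds, digits))

trials = make_distraction_trials()

# Tally how often each sound type preceded the visual task.
counts = {s: sum(1 for snd, _ in trials if snd == s)
          for s in ("standard", "deviant", "novel")}
```

Comparing RT and hit rate on digit trials preceded by deviants or novels against those preceded by standards is what yields the distraction measures reported above.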
Interactions between Transient and Long-Term Auditory Memory as Reflected by the Mismatch Negativity
Journal of Cognitive Neuroscience (1996) 8 (5): 403–415.
Published: 01 September 1996
Abstract
The mismatch negativity (MMN) event-related potential (ERP) component is elicited by any discriminable change in series of repetitive auditory stimuli. MMN is generated by a process registering the deviation of the incoming stimulus from the trace of the previous repetitive stimulus. Using MMN as a probe into auditory sensory memory, the present study addressed the question of whether the sensory memory representation is formed strictly on the basis of an automatic feature analysis of incoming sensory stimuli or information from long-term memory is also incorporated. Trains of 6 tone bursts (standards with up to 1 deviant per train) separated by 9.5-sec intertrain intervals were presented to subjects performing a visual tracking task and disregarding the auditory stimuli. Trains were grouped into stimulus blocks of 20 trains with a 2-min rest period between blocks. In the Constant-Standard Condition, both standard and deviant stimuli remained fixed across the session, encouraging the formation of a long-term memory representation. To eliminate the carryover of sensory storage from one train to the next, the first 3.6 sec of the intertrain interval was filled with 6 tones of random frequencies. In the Roving-Standard Condition, the standard changed from train to train and the intervening tones were omitted. It was found that MMN was elicited by deviants presented in Position 2 of the trains in the Constant-Standard Condition revealing that a single reminder of the constant standard reactivated the standard-stimulus representation. The MMN amplitude increased across trials within each stimulus block in the Constant- but not in the Roving-Standard Condition, demonstrating long-term learning in that condition (i.e., the standard-stimulus trace indexed by the MMN amplitude benefitted from the presentations of the constant standard in the previous trains). 
The present results suggest that the transient auditory sensory memory representation underlying the MMN is facilitated by a longer-term representation of the corresponding stimulus.
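The train structure described above can be sketched as a block builder. Only the counts and timing structure come from the abstract (20 trains per block, 6 tones per train, up to 1 deviant per train, 6 random-frequency filler tones in the constant-standard condition's intertrain interval); the tone frequencies, deviant placement rule, and deviant rate are placeholders.

```python
import random

def build_block(condition, standard=1000.0, deviant=1100.0, seed=0):
    """Build one 20-train stimulus block as a list of dicts.
    'constant': the standard stays fixed across trains, and 6 random-frequency
    filler tones precede each train to flush transient sensory memory.
    'roving': the standard changes from train to train, with no fillers.
    Frequencies (Hz) and the deviant rate are placeholders, not the study's values."""
    rng = random.Random(seed)
    block = []
    for _ in range(20):
        if condition == "roving":
            standard = rng.uniform(500.0, 1500.0)
            deviant = standard * 1.1
        train = [standard] * 6
        if rng.random() < 0.5:                 # up to 1 deviant per train
            train[rng.randrange(1, 6)] = deviant
        fillers = ([rng.uniform(200.0, 2000.0) for _ in range(6)]
                   if condition == "constant" else [])
        block.append({"fillers": fillers, "train": train})
    return block

block = build_block("constant")
```

The key contrast in the design is visible in the sketch: in the constant condition the same standard recurs across trains despite the memory-flushing fillers, so only a longer-term representation can explain the growing MMN, whereas in the roving condition each train starts from scratch.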
Journal of Cognitive Neuroscience (1990) 2 (4): 344–357.
Published: 01 October 1990
Abstract
Event-related potentials (ERPs) to synthetic consonant–vowel syllables were recorded. Infrequent changes in such a syllable elicited a "mismatch negativity" as well as an enhanced N100 component of the ERP even when subjects did not pay attention to the stimuli. Both components are probably generated in the supratemporal auditory cortex suggesting that in these areas there are neural networks that are automatically activated by speech-specific auditory stimulus features such as formant transitions.