Mireille Besson: 1–19 of 19 results
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2021) 33 (10): 2093–2108.
Published: 01 September 2021
Abstract
The learning of new words is a challenge that accompanies human beings throughout the entire life span. Although the main electrophysiological markers of word learning have already been described, little is known about the performance-dependent neural machinery underlying this exceptional human faculty. Furthermore, it is currently unknown how word learning abilities are related to verbal memory capacity, auditory attention functions, phonetic discrimination skills, and musicality. Accordingly, we used EEG and examined 40 individuals, who were assigned to two groups (low performers [LPs] and high performers [HPs]) based on a median split of word learning performance, while they completed a phonetic-based word learning task. Furthermore, we collected behavioral data during an attentive listening and a phonetic discrimination task with the same stimuli to address relationships between auditory attention and phonetic discrimination skills, word learning performance, and musicality. The phonetic-based word learning task, which also included a nonlearning control condition, was sensitive enough to segregate learning-specific and unspecific N200/N400 manifestations along the anterior–posterior topographical axis. Notably, HPs exhibited enhanced verbal memory capacity, and we also revealed a performance-dependent spatial N400 pattern, with maximal amplitudes at posterior electrodes in HPs and central maxima in LPs. Furthermore, phonetic-based word learning performance correlated with verbal memory capacity and phonetic discrimination skills, whereas the latter were related to musicality. This experimental approach clearly highlights the multifaceted dimensions of phonetic-based word learning and helps disentangle learning-specific and unspecific ERPs.
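The median-split grouping described in this abstract can be sketched as follows; the accuracy scores below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic word-learning accuracies for 40 participants (proportion correct);
# the study's actual scores are not reproduced here.
accuracy = rng.uniform(0.4, 1.0, size=40)

# Median split: participants at or above the median accuracy are labeled
# high performers (HPs), the rest low performers (LPs).
median = np.median(accuracy)
group = np.where(accuracy >= median, "HP", "LP")

n_hp = int(np.sum(group == "HP"))
n_lp = int(np.sum(group == "LP"))
print(n_hp, n_lp)
```

With 40 distinct continuous scores, the split yields two groups of 20, because the median of an even-sized sample falls strictly between the two middle values.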
Journal of Cognitive Neuroscience (2021) 33 (4): 662–682.
Published: 01 April 2021
Abstract
Previous studies evidenced transfer effects from professional music training to novel word learning. However, it is unclear whether such an advantage is driven by cascading, bottom–up effects from better auditory perception to semantic processing or by top–down influences from cognitive functions on perception. Moreover, the long-term effects of novel word learning remain an open issue. To address these questions, we used a word learning design, with four different sets of novel words, and we neutralized the potential perceptive and associative learning advantages in musicians. Under such conditions, we did not observe any advantage in musicians on the day of learning (Day 1 [D1]), at neither a behavioral nor an electrophysiological level; this suggests that the previously reported advantages in musicians are likely to be related to bottom–up processes. Nevertheless, 1 month later (Day 30 [D30]) and for all types of novel words, the error increase from D1 to D30 was lower in musicians compared to nonmusicians. In addition, for the set of words that were perceptually difficult to discriminate, only musicians showed typical N400 effects over parietal sites on D30. These results demonstrate that music training improved long-term memory and that transfer effects from music training to word learning (i.e., semantic levels of speech processing) benefit from reinforced (long-term) memory functions. Finally, these findings highlight the positive impact of music training on the acquisition of foreign languages.
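N400 effects of the kind reported here are conventionally quantified as the mean amplitude in a fixed post-stimulus window at selected electrodes, compared across conditions. A minimal sketch on simulated single-electrode epochs (sampling rate, amplitudes, and window are illustrative assumptions, not the study's parameters):

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (assumed)
times = np.arange(0, 800) / fs  # 0-800 ms post-stimulus onset, in seconds
n_trials = 30
rng = np.random.default_rng(1)

def noise():
    # Background EEG approximated as Gaussian noise (microvolts).
    return rng.normal(0.0, 2.0, (n_trials, times.size))

# Simulate a negative deflection peaking near 400 ms for the "unrelated"
# condition only -- a caricature of an N400.
n400_shape = -5.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
related = noise()
unrelated = noise() + n400_shape

# Mean amplitude in a 300-500 ms window, averaged over trials; a negative
# unrelated-minus-related difference is the N400 effect.
win = (times >= 0.3) & (times <= 0.5)
n400_effect = unrelated[:, win].mean() - related[:, win].mean()
print(round(float(n400_effect), 2))
```

Averaging over trials and a 200 ms window makes the estimate robust to single-trial noise, which is why windowed mean amplitude is the standard ERP measure.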
Journal of Cognitive Neuroscience (2021) 33 (1): 8–27.
Published: 01 January 2021
Abstract
Musical expertise has been shown to positively influence high-level speech abilities such as novel word learning. This study addresses the question of whether enhanced low-level perceptual skills causally drive successful novel word learning. We used a longitudinal approach with psychoacoustic procedures to train 2 groups of nonmusicians either on pitch discrimination or on intensity discrimination, using harmonic complex sounds. After short (approximately 3 hr) psychoacoustic training, discrimination thresholds were lower for the specific feature (pitch or intensity) that was trained. Moreover, compared to the intensity group, participants trained on pitch were faster to categorize words varying in pitch. Finally, although the N400 components in both the word learning phase and the semantic task were larger in the pitch group than in the intensity group, no between-group differences were found at the behavioral level in the semantic task. Thus, these results provide mixed evidence that enhanced perception of relevant features through a few hours of acoustic training with harmonic sounds causally impacts the categorization of speech sounds as well as novel word learning. These results are discussed within the framework of near and far transfer effects from music training to speech processing.
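Discrimination thresholds in psychoacoustics are commonly estimated with adaptive staircases. The abstract does not specify the procedure, so the 2-down/1-up rule below (which converges near the 70.7%-correct point) and the simulated listener are assumptions for illustration only:

```python
import random

def simulated_listener(diff, threshold=4.0):
    """Crude psychometric function: probability of a correct response grows
    linearly with the stimulus difference (parameters are illustrative)."""
    p = 0.5 + 0.5 * min(diff / (2 * threshold), 1.0)
    return random.random() < p

def two_down_one_up(start=10.0, step=0.5, n_trials=300, seed=7):
    """2-down/1-up staircase: decrease the difference after two consecutive
    correct responses, increase it after each error."""
    random.seed(seed)
    diff, streak, track = start, 0, []
    for _ in range(n_trials):
        track.append(diff)
        if simulated_listener(diff):
            streak += 1
            if streak == 2:
                streak = 0
                diff = max(diff - step, step)  # floor keeps diff positive
        else:
            streak = 0
            diff += step
    # Threshold estimate: average level over the final trials.
    return sum(track[-100:]) / 100

estimate = two_down_one_up()
print(round(estimate, 2))
```

With the simulated listener above, the staircase settles around the difference at which responses are ~70.7% correct, well below the easy starting level.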
Journal of Cognitive Neuroscience (2016) 28 (10): 1584–1602.
Published: 01 October 2016
Abstract
On the basis of previous results showing that music training positively influences different aspects of speech perception and cognition, the aim of this series of experiments was to test the hypothesis that adult professional musicians would learn the meaning of novel words through picture–word associations more efficiently than controls without music training (i.e., fewer errors and faster RTs). We also expected musicians to show faster changes in brain electrical activity than controls, in particular regarding the N400 component that develops with word learning. In line with these hypotheses, musicians outperformed controls in the most difficult semantic task. Moreover, although a frontally distributed N400 component developed in both groups of participants after only a few minutes of novel word learning, in musicians this frontal distribution rapidly shifted to parietal scalp sites, as typically found for the N400 elicited by known words. Finally, musicians showed evidence for better long-term memory for novel words 5 months after the main experimental session. Results are discussed in terms of cascading effects from enhanced perception to memory as well as in terms of multifaceted improvements of cognitive processing due to music training. To our knowledge, this is the first report showing that music training influences semantic aspects of language processing in adults. These results open new perspectives for education in showing that early music training can facilitate later foreign language learning. Moreover, the design used in the present experiment can help to specify the stages of word learning that are impaired in children and adults with word learning difficulties.
Journal of Cognitive Neuroscience (2011) 23 (12): 3874–3887.
Published: 01 December 2011
Abstract
The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency, vowel duration, and VOT. Both the passive and the active processing of duration and VOT deviants were enhanced in musician compared with nonmusician children. Moreover, although no effect was found on the passive processing of frequency, active frequency discrimination was enhanced in musician children. These findings are discussed in terms of common processing of acoustic features in music and speech and of positive transfer of training from music to the more abstract phonological representations of speech units (syllables).
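MMN designs of this kind present a stream of frequent standards with occasional deviants. The abstract does not report trial counts or deviant probabilities, so the values below are assumptions; the sketch only shows the shape of such a sequence:

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.15, seed=3):
    """Generate a passive-oddball trial sequence: mostly 'standard' syllables
    with occasional deviants in one of three features (as in the study's
    frequency/duration/VOT manipulation; proportions are illustrative)."""
    random.seed(seed)
    deviant_types = ["frequency", "duration", "VOT"]
    seq = []
    for _ in range(n_trials):
        if random.random() < p_deviant:
            seq.append(random.choice(deviant_types))
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence()
print(len(seq), seq.count("standard"))
```

Keeping deviants rare is what makes the mismatch negativity (MMN) emerge: the response to an infrequent change is compared against the response to the frequent standard.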
Journal of Cognitive Neuroscience (2011) 23 (10): 2701–2715.
Published: 01 October 2011
Abstract
A same–different task was used to test the hypothesis that musical expertise improves the discrimination of tonal and segmental (consonant, vowel) variations in a tone language, Mandarin Chinese. Two four-word sequences (prime and target) were presented to French musicians and nonmusicians unfamiliar with Mandarin, and event-related brain potentials were recorded. Musicians detected both tonal and segmental variations more accurately than nonmusicians. Moreover, tonal variations were associated with higher error rate than segmental variations and elicited an increased N2/N3 component that developed 100 msec earlier in musicians than in nonmusicians. Finally, musicians also showed enhanced P3b components to both tonal and segmental variations. These results clearly show that musical expertise influenced the perceptual processing as well as the categorization of linguistic contrasts in a foreign language. They show positive music-to-language transfer effects and open new perspectives for the learning of tone languages.
Journal of Cognitive Neuroscience (2011) 23 (2): 294–305.
Published: 01 February 2011
Abstract
The present study aimed to examine the influence of musical expertise on the metric and semantic aspects of speech processing. In two attentional conditions (metric and semantic tasks), musicians listened to short sentences ending in trisyllabic words that were semantically and/or metrically congruous or incongruous. Both ERPs and behavioral data were analyzed, and the results were compared to data previously collected from nonmusicians. Regarding the processing of meter, results showed that musical expertise influenced the automatic detection of the syllable temporal structure (P200 effect), the integration of metric structure and its influence on word comprehension (N400 effect), as well as the reanalysis of metric violations (P600 and late positivity effects). By contrast, results showed that musical expertise did not influence the semantic level of processing. These results are discussed in terms of transfer of training effects from music to speech processing.
Journal of Cognitive Neuroscience (2010) 22 (11): 2555–2569.
Published: 01 November 2010
Abstract
The aim of these experiments was to compare conceptual priming for linguistic and for a homogeneous class of nonlinguistic sounds, impact sounds, by using both behavioral (percentage errors and RTs) and electrophysiological measures (ERPs). Experiment 1 aimed at studying the neural basis of impact sound categorization by creating typical and ambiguous sounds from different material categories (wood, metal, and glass). Ambiguous sounds were associated with slower RTs and larger N280, smaller P350/P550 components, and larger negative slow wave than typical impact sounds. Thus, ambiguous sounds were more difficult to categorize than typical sounds. A category membership task was used in Experiment 2. Typical sounds were followed by sounds from the same or from a different category or by ambiguous sounds. Words were followed by words, pseudowords, or nonwords. Error rate was highest for ambiguous sounds and for pseudowords and both elicited larger N400-like components than same typical sounds and words. Moreover, both different typical sounds and nonwords elicited P300 components. These results are discussed in terms of similar conceptual priming effects for nonlinguistic and linguistic stimuli.
Journal of Cognitive Neuroscience (2010) 22 (5): 1026–1035.
Published: 01 May 2010
Abstract
Two experiments were conducted to examine the conceptual relation between words and nonmeaningful sounds. In order to reduce the role of linguistic mediation, sounds were recorded in such a way that it was highly unlikely to identify the source that produced them. Related and unrelated sound–word pairs were presented in Experiment 1 and the order of presentation was reversed in Experiment 2 (word–sound). Results showed that, in both experiments, participants were sensitive to the conceptual relation between the two items. They were able to correctly categorize items as related or unrelated with good accuracy. Moreover, a relatedness effect developed in the event-related brain potentials between 250 and 600 msec, although with a slightly different scalp topography for word and sound targets. Results are discussed in terms of similar conceptual processing networks and we propose a tentative model of the semiotics of sounds.
Journal of Cognitive Neuroscience (2007) 19 (9): 1453–1463.
Published: 01 September 2007
Abstract
The aim of this study was to determine whether musical expertise influences the detection of pitch variations in a foreign language that participants did not understand. To this end, French adults, musicians and nonmusicians, were presented with sentences spoken in Portuguese. The final words of the sentences were prosodically congruous (spoken at normal pitch height) or incongruous (pitch was increased by 35% or 120%). Results showed that when the pitch deviations were small and difficult to detect (35%: weak prosodic incongruities), the level of performance was higher for musicians than for nonmusicians. Moreover, analysis of the time course of pitch processing, as revealed by the event-related brain potentials to the prosodically congruous and incongruous sentence-final words, showed that musicians were, on average, 300 msec faster than nonmusicians to categorize prosodically congruous and incongruous endings. These results are in line with previous ones showing that musical expertise, by increasing discrimination of pitch—a basic acoustic parameter equally important for music and speech prosody—does facilitate the processing of pitch variations not only in music but also in language. Finally, comparison with previous results [Schön, D., Magne, C., & Besson, M. The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41 , 341–349, 2004] points to the influence of semantics on the perception of acoustic prosodic cues.
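The parametric pitch manipulation is simple multiplicative scaling of the fundamental frequency. The 200 Hz baseline below is assumed purely for illustration (the abstract does not report the original F0 values):

```python
f0 = 200.0  # Hz; hypothetical baseline F0 of a sentence-final word

# Raising pitch "by 35%" or "by 120%" multiplies F0 by 1.35 or 2.20.
weak_incongruity = f0 * 1.35    # +35%  -> approx. 270 Hz
strong_incongruity = f0 * 2.20  # +120% -> approx. 440 Hz

print(weak_incongruity, strong_incongruity)
```

Note that a 120% increase (factor 2.2) exceeds a full octave (factor 2.0), which is why the strong incongruity is trivially detectable and the 35% condition is the diagnostic one for expertise effects.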
Journal of Cognitive Neuroscience (2006) 18 (2): 199–211.
Published: 01 February 2006
Abstract
The idea that extensive musical training can influence processing in cognitive domains other than music has received considerable attention from the educational system and the media. Here we analyzed behavioral data and recorded event-related brain potentials (ERPs) from 8-year-old children to test the hypothesis that musical training facilitates pitch processing not only in music but also in language. We used a parametric manipulation of pitch so that the final notes or words of musical phrases or sentences were congruous, weakly incongruous, or strongly incongruous. Musician children outperformed nonmusician children in the detection of the weak incongruity in both music and language. Moreover, the greatest differences in the ERPs of musician and nonmusician children were also found for the weak incongruity: whereas for musician children, early negative components developed in music and late positive components in language, no such components were found for nonmusician children. Finally, comparison of these results with previous ones from adults suggests that some aspects of pitch processing are in effect earlier in music than in language. Thus, the present results reveal positive transfer effects between cognitive domains and shed light on the time course and neural basis of the development of prosodic and melodic processing.
Journal of Cognitive Neuroscience (2005) 17 (5): 740–756.
Published: 01 May 2005
Abstract
Highlighting relevant information in a discourse context is a major aim of spoken language communication. Prosodic cues such as focal prominences are used to fulfill this aim through the pragmatic function of prosody. To determine whether listeners make on-line use of focal prominences to build coherent representations of the informational structure of the utterances, we used the brain event-related potential (ERP) method. Short dialogues composed of a question and an answer were presented auditorily. The design of the experiment allowed us to examine precisely the time course of the processing of prosodic patterns of sentence-medial or -final words in the answer. These patterns were either congruous or incongruous with regard to the pragmatic context introduced by the question. Furthermore, the ERP effects were compared for words with or without focal prominences. Results showed that pragmatically congruous and incongruous prosodic patterns elicit clear differences in the ERPs, which were largely modulated in latency and polarity by their position within the answer. By showing that prosodic patterns are processed on-line by listeners in order to understand the informational structure of the message, the present results demonstrate the psychobiological validity of the pragmatic concept of focus, expressed via prosodic cues. Moreover, the functional significance of the positive-going effects found sentence medially and negative-going effects found sentence finally is discussed. Whereas the former may reflect the processing of surprising and task-relevant prosodic patterns, the latter may reflect the integration problems encountered in extracting the overall informational structure of the sentence.
Journal of Cognitive Neuroscience (2005) 17 (4): 694–705.
Published: 01 April 2005
Abstract
The general aim of this experiment was to investigate the processes involved in reading musical notation and to study the relationship between written music and its auditory representation. It was of main interest to determine whether musicians are able to develop expectancies for specific tonal or atonal auditory events based on visual score alone. Can musicians expect an “atonal” event or will it always sound odd? Moreover, it was of interest to determine whether the modulations in amplitude of a late positive component (P600) described in previous studies are linked to a general mismatch detection process or to specific musical expectancies. Results showed clearly that musicians are able to expect tonal auditory endings based on visual information and are also able to do so for atonal endings, although to a smaller extent. Strong interactions seem to exist between visual and auditory musical codes and visual information seems to influence auditory processing as early as 100 msec. These results are directly relevant for the question of whether music reading is actually music perception.
Journal of Cognitive Neuroscience (2005) 17 (1): 37–50.
Published: 01 January 2005
Abstract
Younger and older participants were asked to indicate if 240 complex two-digit addition problems were smaller than 100 or not. Half of the problems were small-split problems (i.e., the proposed sums were 2% or 5% away from 100; e.g., 53 + 49) and half were large-split problems (i.e., proposed sums were 10% or 15% away from 100; 46 + 39). Behavioral and event-related potential (ERP) data revealed that (a) both groups showed a split effect on both reaction times and percent errors, (b) split effects were smaller for older than for younger adults in ERPs, and (c) the hemispheric asymmetry (left hemisphere advantage) reported for younger adults was reduced in older adults (age-related hemispheric asymmetry reduction). These results suggest that older adults tend to use only one strategy to solve all problems, whereas younger adults flexibly and adaptively use different strategies for small- and large-split problems. Implications of these findings for our understanding of age-related similarities and differences in arithmetic problem-solving are discussed.
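The split manipulation reduces to fixed percentages of the reference sum 100. The helper below reconstructs the candidate sums for each split size and checks the abstract's two example problems:

```python
def split_sums(reference=100, percentages=(2, 5, 10, 15)):
    """Candidate proposed sums for each split percentage: one below and one
    above the reference (small splits: 2%/5%; large splits: 10%/15%)."""
    return {p: (reference - reference * p // 100,
                reference + reference * p // 100)
            for p in percentages}

sums = split_sums()
print(sums)  # {2: (98, 102), 5: (95, 105), 10: (90, 110), 15: (85, 115)}

# The abstract's example problems check out:
assert 53 + 49 == 102  # 2% above 100 -> small-split problem
assert 46 + 39 == 85   # 15% below 100 -> large-split problem
```

Large-split problems can be rejected by rough estimation, whereas small-split problems require more exact calculation, which is what makes the split size a strategy probe.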
Journal of Cognitive Neuroscience (2001) 13 (2): 241–255.
Published: 15 February 2001
Abstract
The goal of this study was to analyze the time-course of sensory (bottom-up) and cognitive (top-down) processes that govern musical harmonic expectancy. Eight-chord sequences were presented to 12 musicians and 12 nonmusicians. Expectations for the last chord were manipulated both at the sensory level (i.e., the last chord was sensory consonant or dissonant) and at the cognitive level (the harmonic function of the target was varied by manipulating the harmonic context built up by the first six chords of the sequence). Changes in the harmonic function of the target chord mainly modulate the amplitude of a positive component peaking around 300 msec (P3) after target onset, reflecting top-down influences on the perceptual stages of processing. In contrast, changes in the acoustic structure of the target chord (sensory consonance) mainly modulate the amplitude of a late positive component that develops between 300 and 800 msec after target onset. Most importantly, the effects of sensory consonance and harmonic context on the event-related brain potentials associated with the target chords were found to be independent, thus suggesting that two separate processors contribute to the building up of musical expectancy.
Journal of Cognitive Neuroscience (2001) 13 (1): 121–143.
Published: 01 January 2001
Abstract
Phonological priming between bisyllabic (CV.CVC) spoken items was examined using both behavioral (reaction times, RTs) and electrophysiological (event-related potentials, ERPs) measures. Word and pseudoword targets were preceded by pseudoword primes. Different types of final phonological overlap between prime and target were compared. Critical pairs shared the last syllable, the rime or the coda, while unrelated pairs were used as controls. Participants performed a target shadowing task in Experiment 1 and a delayed lexical decision task in Experiment 2. RTs were measured in the first experiment and ERPs were recorded in the second experiment. The RT experiment was carried out under two presentation conditions. In Condition 1 both primes and targets were presented auditorily, while in Condition 2 the primes were presented visually and the targets auditorily. Priming effects were found in the unimodal condition only. RTs were fastest for syllable overlap, intermediate for rime overlap, and slowest for coda overlap and controls that did not differ from one another. ERPs were recorded under unimodal auditory presentation. ERP results showed that the amplitude of the auditory N400 component was smallest for syllable overlap, intermediate for rime overlap, and largest for coda overlap and controls that did not differ from one another. In both experiments, the priming effects were larger for word than for pseudoword targets. These results are best explained by the combined influences of nonlexical and lexical processes, and a comparison of the reported effects with those found in monosyllables suggests the involvement of rime and syllable representations.
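The prime–target overlap conditions can be made concrete with a small classifier over final CVC syllables. The segment tuples below are made-up illustrations, not the study's French materials:

```python
def final_overlap(prime_syl, target_syl):
    """Classify the overlap between two final CVC syllables, each given as an
    (onset, vowel, coda) tuple: 'syllable' if all three segments match,
    'rime' if vowel and coda match, 'coda' if only the final consonant
    matches, else 'control' (unrelated)."""
    p_onset, p_vowel, p_coda = prime_syl
    t_onset, t_vowel, t_coda = target_syl
    if (p_onset, p_vowel, p_coda) == (t_onset, t_vowel, t_coda):
        return "syllable"
    if (p_vowel, p_coda) == (t_vowel, t_coda):
        return "rime"
    if p_coda == t_coda:
        return "coda"
    return "control"

# Hypothetical items for each condition against a target ending in /bal/:
assert final_overlap(("b", "a", "l"), ("b", "a", "l")) == "syllable"
assert final_overlap(("m", "a", "l"), ("b", "a", "l")) == "rime"
assert final_overlap(("m", "o", "l"), ("b", "a", "l")) == "coda"
assert final_overlap(("m", "o", "r"), ("b", "a", "l")) == "control"
```

The graded RT and N400 results reported above track exactly this ordering of shared material: syllable > rime > coda/control.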
Journal of Cognitive Neuroscience (1998) 10 (6): 717–733.
Published: 01 November 1998
Abstract
In order to test the language-specificity of a known neural correlate of syntactic processing [the P600 event-related brain potential (ERP) component], this study directly compared ERPs elicited by syntactic incongruities in language and music. Using principles of phrase structure for language and principles of harmony and key-relatedness for music, sequences were constructed in which an element was either congruous, moderately incongruous, or highly incongruous with the preceding structural context. A within-subjects design using 15 musically educated adults revealed that linguistic and musical structural incongruities elicited positivities that were statistically indistinguishable in a specified latency range. In contrast, a music-specific ERP component was observed that showed antero-temporal right-hemisphere lateralization. The results argue against the language-specificity of the P600 and suggest that language and music can be studied in parallel to address questions of neural specificity in cognitive processing.
Journal of Cognitive Neuroscience (1997) 9 (6): 758–775.
Published: 01 November 1997
Abstract
Event-related brain potentials (ERPs) to words, pseudowords, and nonwords were recorded in three different tasks. A letter search task was used in Experiment 1. Performance was affected by whether the target letter occurred in a word, a pseudoword, or a random nonword. ERP results corroborated the behavioral results, showing small but reliable ERP differences between the three stimulus types. Words and pseudowords differed from nonwords at posterior sites, whereas words differed from pseudowords and nonwords at anterior sites. Since deciding whether the target letter was present or absent co-occurred with stimulus processing in Experiment 1, a delayed letter search task was used in Experiment 2. ERPs to words and pseudowords were similar and differed from ERPs to nonwords, suggesting a primary role of orthographic and phonological processing in the delayed letter search task. To increase semantic processing, a categorization task was used in Experiment 3. Early differences between ERPs to words and pseudowords at left posterior and anterior locations suggested a rapid activation of lexico-semantic information. These findings suggest that the use of ERPs in a multiple task design makes it possible to track the time course and the activation of multiple sources of linguistic information when processing words, pseudowords, and nonwords. The task-dependent nature of the effects suggests that the language system can use multiple sources of linguistic information in flexible and adaptive ways.
Journal of Cognitive Neuroscience (1992) 4 (2): 132–149.
Published: 01 April 1992
Abstract
In two experiments, event-related brain potentials (ERPs) and cued-recall performance measures were used to examine the consequences of semantic congruity and repetition on the processing of words in sentences. A set of sentences, half of which ended with words that rendered them semantically incongruous, was repeated either once (Experiment 1) or twice (Experiment 2). After each block of sentences, subjects were given all of the sentences and asked to recall the missing final words. Repetition benefited the recall of both congruous and incongruous endings and reduced the amplitude and shortened the duration of the N400 component of the ERP more for (1) incongruous than congruous words, (2) open class than closed class words, and (3) low-frequency than high-frequency open class words. For incongruous sentence terminations, repetition increased the amplitude of a broad positive component subsequent to the N400. Assuming additive factors logic and a traditional view of the lexicon, our N400 results indicate that in addition to their singular effects, semantic congruity, repetition, and word frequency converge to influence a common stage of lexical processing. Within a parallel distributed processing framework, our results argue for substantial temporal and spatial overlap in the activation of codes subserving word recognition so as to yield the observed interactions of repetition with semantic congruity, lexical class, and word frequency effects.