Abstract

We tested whether the emergence of familiarity with a melody may trigger or co-occur with the processing of the concept(s) conveyed by the emotions evoked by, or semantic associations with, the melody. With this objective, we recorded ERPs while participants were presented with highly familiar and less familiar melodies in a gating paradigm. The ERPs time locked to a tone of the melody called the “familiarity emergence point” showed a larger fronto-central negativity for highly familiar compared with less familiar melodies between 200 and 500 msec, with a peak latency around 400 msec. This latency and the sensitivity to the degree of familiarity/conceptual information suggest that this component was an N400, a marker of conceptual processing. Our data suggest that the feeling of familiarity evoked by a musical excerpt could be accompanied by other processing mechanisms at the conceptual level. Coupling the gating paradigm with ERP analyses might become a new avenue for investigating the neurocognitive basis of implicit musical knowledge.

INTRODUCTION

In the last few years, neuroimaging findings have established that neuronal networks of the left hemisphere implicated in linguistic processing—a prime example of conceptual processing—overlap with networks subserving the processing of music familiarity (Plailly, Tillmann, & Royet, 2007; Platel, Baron, Desgranges, Bernard, & Eustache, 2003). Moreover, some behavioral (Poulin-Charronnat, Bock, Grieser, Meyer, & Koelsch, 2006) and electrophysiological data (Daltrozzo & Schön, 2008; Koelsch et al., 2004) have supported the claim that music can convey concepts. The concepts communicated by music would belong to three main categories: nonverbalizable concepts derived from the musical structure, concepts that may or may not be verbalizable derived from emotional feelings, and verbalizable concepts derived from semantic associations (Patel, 2008). Importantly, the confirmation that music can convey these three types of concepts would have consequences for any model of musical memory and may provide further insight into the existence of a musical lexicon (Peretz & Coltheart, 2003) and the representations it may contain.

The present study aimed to provide a psychophysiological account of the unfolding over time of the feeling of familiarity with melodies. We predicted that the evolution of this feeling of familiarity co-occurs with conceptual processing and thus shows indirectly that music conveys concepts. For instance, the feeling of familiarity evoked by a melody may reactivate emotional or associative concepts carried either by the melody itself or by the memory representations of this melody.

We expected that the ERP technique would help answer this question because the sensitivity of ERPs to conceptual relatedness has been shown for a word (e.g., “magic”) following a conceptually related or an unrelated musical context (Daltrozzo & Schön, 2008; Koelsch et al., 2004) and for a musical excerpt following a related or an unrelated word context (Daltrozzo & Schön, 2008). Moreover, other studies, although based on manipulations of musical structure, have also interpreted the observed ERP effects in terms of conceptual processing (Steinbeis & Koelsch, 2008; Miranda & Ullman, 2007). In Daltrozzo and Schön (2008) and Koelsch et al. (2004), the ERP effect (i.e., the differential response between unrelated and related excerpts) most likely reflected access to concepts conveyed by the emotions evoked by, or semantic associations with, a musical excerpt. The ERP effects were interpreted as a modulation of the N400 component of the ERP. The N400 is a negative component starting around 200 msec and peaking around 400 msec after the onset of a word or any other stimulus that can convey a concept (Kutas & Federmeier, 2000; Kutas & Van Petten, 1994; Kutas & Hillyard, 1980). It was first observed in response to a sentence's final word (Kutas & Hillyard, 1980); that is, the N400 to a word ending a sentence congruently (e.g., “The pizza was too hot to eat.”) was smaller than the N400 to a word ending a sentence incongruently (e.g., “The pizza was too hot to cry.”). Later studies indicated that this component has a more general sensitivity to conceptual information. Indeed, the N400 to a target item (e.g., a word, a picture, a sound) was consistently reduced when this target followed a conceptually related context (Schön, Ystad, Kronland-Martinet, & Besson, 2009; Orgs, Lange, Dombrowski, & Heil, 2006, 2007; Federmeier & Kutas, 2001; Castle, Van Toller, & Milligan, 2000; Bentin, McCarthy, & Wood, 1985). Recent studies indicate that the generators of the N400 may reflect the activation of three different mechanisms: lexical access (Pylkkänen & Marantz, 2003), lexical selection, and conceptual integration (Bles, Alink, & Jansma, 2007; Van den Brink, Brown, & Hagoort, 2006; Pylkkänen & Marantz, 2003). These mechanisms have been described within the “cohort model” of word recognition (Marslen-Wilson, 1987; Grosjean, 1980) and music recognition (Dalla Bella, Peretz, & Aronoff, 2003). During lexical access, a set (or cohort) of item representations (e.g., representations of words or musical excerpts) is activated. The selection and integration mechanisms then result in the inhibition of the irrelevant item representations of the cohort.

Importantly, the N400 to novel (i.e., unfamiliar) words increases with learning, that is, as their degree of familiarity and amount of conceptual information increase. Mestres-Missé, Rodriguez-Fornells, and Münte (2007) presented sets of three sentences ending with the same novel word in a learning condition (“M+”), where the conceptual information of the novel word could be guessed (hence, learned) from sentence context, and in a nonlearning condition (“M−”), where no conceptual information could be mapped to the novel word from sentence context. A larger N400 was found to novel words in the M+ compared with the M− condition, indicating that the N400 increases with the emergence of conceptual information conveyed by a word as it becomes more and more familiar. Similarly, assuming the sensitivity of the N400 to concepts conveyed by music (see above), we predicted that, with music too, the N400 would rise with the emergence of conceptual information.

Therefore, the aim of the present study was to test whether the N400 is sensitive to the activation of concepts as a listener starts to feel familiar with a musical excerpt. To this aim, we recorded ERPs while participants listened to a set of pretested melodies (ranging in their degree of familiarity) and judged their familiarity. These judgments in the ERP session allowed us to individualize the ERP analyses to participants' perception (i.e., familiarity emergence point, degrees of high and moderate familiarity) instead of relying on a priori categorizations (for a similar procedure, see Plailly et al., 2007). Participants were presented with highly familiar and less familiar melodies in a gating paradigm. We predicted that highly familiar melodies (e.g., the incipit of Beethoven's Fifth Symphony) would carry more concepts and hence elicit a larger N400 than moderately familiar melodies (e.g., Debussy's Suite Bergamasque). These concepts might be either emotional or associative and carried either by the melody itself or by the memory representations of this melody. Our study does not aim to tease apart these two types of concepts. With respect to previous studies (Schön et al., 2009; Daltrozzo & Schön, 2008; Koelsch et al., 2004), the novelty of this work, besides the material and the design, lies in its focus on the evolution of the feeling of familiarity with melodies.

From a methodological point of view, ERP research on the processing of concepts conveyed by music has to deal with a technical difficulty: recording ERPs requires time locking to the cognitive event of interest. However, because music unfolds over time, it is difficult to know when the processing of concepts takes place and whether it happens consistently at the same time for different listeners. To overcome this problem, we previously proposed using very short excerpts (i.e., a few hundred milliseconds; Daltrozzo & Schön, 2008). With such short durations, access to concepts is necessarily restricted in time, which allows one, for instance, to test the effect of a conceptual context (i.e., a word) on the processing of a related or an unrelated musical target. In the present study, we proposed an alternative strategy: to time lock the ERP to the lexical access of familiar musical excerpts. Thus, the design of the present study, which is built on the assumption that a modulation of conceptual information may co-occur with the feeling of familiarity and that this process is time locked to the lexical access of melodies, differs considerably from the designs of previous studies that used priming-like paradigms manipulating the (conceptual) relatedness between linguistic and musical stimuli (Schön et al., 2009; Daltrozzo & Schön, 2008; Koelsch et al., 2004).

Therefore, there are three important differences with respect to previous studies on music perception and brain correlates of conceptual processing. First, the present work used only musical stimuli (simple melodies, mostly drawn from the instrumental classical repertoire) that were not coupled with linguistic stimuli. Second, as compared with previous studies using relatedness judgments between words and music, testing the feeling of familiarity implies that the underlying conceptual processes remain at a more implicit level. Third, unlike the priming paradigm, the gating paradigm allows us to study the temporality of conceptual processing, notably as the feeling of familiarity with melodies unfolds over time.
Coupling the gating paradigm with ERP analyses allowed the determination of brain markers associated with the emergence of the feeling of familiarity for each participant. This is a novel paradigmatic approach that might become a new avenue for investigating the neurocognitive basis of implicit musical knowledge.

METHODS

Participants

Twenty-two volunteer nonmusicians (i.e., participants who had not taken extracurricular music lessons or performed music, or had done so for less than 6 years; musical instrument instruction: M = 1.76 years, SD = 0.64) were tested in the experiment. However, because of artifacts in the ERP data of one participant, only 21 (age: M = 25.2 years, SD = 1.1 years; duration of education: M = 16.4 years, SD = 0.4 years; 13 women) were retained for analysis. All were right-handed, neurologically normal, and reported normal hearing. All participants were paid for their participation.

Stimuli

The material consisted of 80 melodies taken from Plailly et al. (2007) and Platel et al. (2003). The material from these two studies included melodies that had already been selected for their high degree of familiarity. Twenty-four melodies used by Plailly et al. (2007) had a mean familiarity score of 9.07 (SD = 0.73) according to 18 nonmusicians (number of years of instruction on a musical instrument ranged from 0 to 6 years; M = 0.89 years, SD = 1.64 years), using a score range of 0 to 10. Sixty-four melodies used by Platel et al. (2003) had a mean familiarity score of 6.01 (SD = 0.44) based on binary judgments from 150 nonmusicians (participants without an academic musical background; the score ranged from 0, meaning that 0% of the participants judged the melody to be familiar, to 10, meaning that 100% of the participants judged the melody to be familiar). Eight familiar melodies were common to Plailly et al. (2007) and Platel et al. (2003). The sound files were in the MIDI format. Because the gating procedure consisted of presenting the first three to eight tones of the melodies (see Procedure section), six files were generated for each melody, one for each excerpt length from the first three tones (excerpt duration: M = 1149 msec, SD = 66 msec) to the first eight tones (duration: M = 3024 msec, SD = 128 msec). The final tone durations (M ± SD) at the six gates were 365 ± 33, 380 ± 32, 358 ± 33, 376 ± 31, 373 ± 29, and 394 ± 31 msec, respectively. This segmentation and further settings of the MIDI files were performed with Cakewalk Pro Audio 9. The onset of the tone of interest for each gate, used for ERP time locking, was identified from the spectral variation computed with the fast Fourier transform. The tempo was kept at a constant value of 200 beats per minute. The instrument was set to piano. The sound level of all the excerpts was normalized.
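For illustration, the gate construction can be sketched in a few lines of Python. The sketch below uses the pretty_midi library rather than Cakewalk Pro Audio 9, which was actually used; the file name is hypothetical, and the tempo and loudness normalization steps are omitted.

```python
# Minimal sketch of the gate-generation step, assuming each melody is a
# single-track MIDI file. Hypothetical file name; not the authors' tool chain.
import pretty_midi

def make_gates(midi_path, min_tones=3, max_tones=8):
    """Write one truncated copy of the melody per gate (first 3 to 8 tones)."""
    source = pretty_midi.PrettyMIDI(midi_path)
    notes = sorted(source.instruments[0].notes, key=lambda n: n.start)
    for n_tones in range(min_tones, max_tones + 1):
        gated = pretty_midi.PrettyMIDI()
        piano = pretty_midi.Instrument(program=0)  # program 0 = acoustic grand piano
        piano.notes = notes[:n_tones]              # keep only the first n tones
        gated.instruments.append(piano)
        gated.write(midi_path.replace(".mid", f"_gate{n_tones}.mid"))

make_gates("badinerie.mid")  # hypothetical file name
```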

None of the melodies contained lyrics. This was done to reduce verbalization. However, it is possible that some of the selected melodies evoked semantic (i.e., linguistic) associations, for instance, associations with the title of the piece, with the composer, or, in very few cases, with lyrics. These 80 melodies were tested for their familiarity in a preliminary experiment in which 10 nonmusicians were presented with the melodies using the gating paradigm described below (see Procedure section). None of these participants took part in the ERP experiment. They provided a familiarity score (ranging from 1 = low familiarity to 9 = high familiarity) lower than 5 for 39 of the melodies, equal to 5 for 3 melodies, and higher than 5 for 38 melodies.1

Procedure

The present study tested musical recognition with the gating paradigm (Dalla Bella et al., 2003). In this paradigm, a musical excerpt is presented several times. At the first presentation (or “gate”), the first three tones of a melody were presented. At each new gate, one tone was added (e.g., the second gate contained the first four tones of the melody). After each gate, the participant was instructed to judge the melody as either “familiar” or “not familiar” (no measure of confidence was collected). The recognition processes that unfold during the gating experiment can be modeled by the “cohort model” (Marslen-Wilson, 1987; Grosjean, 1980). Following this model, Dalla Bella et al. (2003) defined the familiarity emergence point (FEP) as the number of tones required by the participant to consider the stimulus as familiar, that is, the gate at which the participant responds “familiar” for the first time and does not change this response at the following gates. The tone corresponding to the emergence of familiarity is referred to below as the “FEP tone.” Note that the position of the FEP tone may vary across stimuli as well as across participants for the same stimulus. The gate at which the FEP is reached will be referred to below as the “FEP gate.” Thus, the FEP gate is specific to each stimulus and each participant.
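For illustration, the FEP rule can be stated compactly in code. The following Python sketch (not part of the original study) takes one boolean judgment per gate and returns the number of tones at the FEP, or None when no FEP is reached.

```python
# Minimal sketch of the FEP rule defined above: the FEP gate is the first gate
# at which the participant answers "familiar" and never reverses that judgment
# at any later gate. Input is one boolean per gate (True = "familiar").
from typing import Optional, Sequence

def familiarity_emergence_point(responses: Sequence[bool],
                                first_gate_tones: int = 3) -> Optional[int]:
    """Return the number of tones at the FEP, or None if no FEP is reached."""
    for gate, familiar in enumerate(responses):
        if familiar and all(responses[gate:]):   # never changes afterwards
            return first_gate_tones + gate       # the first gate holds 3 tones
    return None

# Example: unfamiliar at 3 and 4 tones, familiar from 5 tones on -> FEP at 5 tones.
print(familiarity_emergence_point([False, False, True, True, True, True]))  # 5
```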

Because Dalla Bella et al. (2003) found that nonmusicians needed, on average, about five tones (ranging from three to six) to reach the emergence of familiarity with familiar melodies, the present paradigm started the first gate with the presentation of three tones, allowing us to record two gates before the FEP gate (as control conditions to test the effect of the emergence of familiarity). The estimation of the FEP tone requires that a few gates be recorded after the FEP gate to confirm the familiarity judgment (see the above definition). Therefore, we recorded two additional gates after the FEP gate. Thus, the last gate required for the analysis could range between five and eight tones (i.e., the maximum number of tones for the FEP tone according to Dalla Bella et al., 2003, plus two tones to confirm familiarity). Because the maximum number of presented tones was eight, in all trials in which the participant needed more than eight tones to reach the emergence of familiarity, the FEP tone could not be determined and the trial was excluded from analysis. At each gate, the excerpt presentation was immediately followed by the visual display of “familiar?” Participants were instructed to decide whether the presented stimulus was familiar or not by pressing one of two buttons. Participants were instructed to give their response (familiar/not familiar) only after hearing the last tone of the excerpt and only after the “familiar?” visual prompt was presented. The association between hand side (left or right) and response (yes or no) was counterbalanced across participants. The next gate started 500 msec after the participant's response. One second after the participant's response to the last gate, the monitor always displayed “score?” Participants were asked to provide a familiarity score for the current excerpt between 1 (low familiarity) and 9 (high familiarity) with a response keyboard. Five hundred milliseconds after the participant's response, the same gating procedure was applied to the next excerpt. The set of six gates for each melody was always presented in the same consecutive order (first three tones, first four tones, first five tones, first six tones, first seven tones, and first eight tones). These 80 sets (for the 80 melodies) were presented in a pseudorandom order, and this presentation order was counterbalanced across participants.

Data Acquisition and Analysis

Participants were comfortably seated in a Faraday box. The EEG was recorded from 32 scalp electrodes located at standard left and right hemisphere positions over frontal, central, parietal, occipital, and temporal areas (International 10/20 system sites: Fz, Cz, Pz, Oz, Fp1, Fp2, Af3, Af4, F3, F4, C3, C4, P3, P4, Po3, Po4, O1, O2, F7, F8, T3, T4, T5, T6, Fc5, Fc1, Fc2, Fc6, Cp5, Cp1, Cp2, and Cp6). The EEG was amplified by Biosemi amplifiers (ActiveTwo system) with a band-pass of 0–102.4 Hz and was digitized at 512 Hz. The data were then re-referenced off-line to the algebraic average of the left and right mastoids. Ocular artifacts were removed by independent component analysis under EEGLAB version 5.03. Trials containing movement or other physiological artifacts (25% of the data), identified by visual inspection, were excluded from the averaged ERP waveforms. Of the 80 presented sets of six gates (one set per melody), the participants showed an FEP in only 64% of them, and the remaining trials were excluded from analysis. Therefore, the analyses presented below were performed on about 800 trials per gate (i.e., about 38 trials per gate and participant).
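For readers who wish to reproduce a comparable pipeline, a rough MNE-Python equivalent is sketched below. The original preprocessing was done in EEGLAB 5.03; the file name, event code, mastoid channel names, and rejection threshold shown here are assumptions, not the authors' settings.

```python
# Minimal MNE-Python sketch of the preprocessing described above: Biosemi
# recording, mastoid re-reference, ICA-based ocular correction, and epoching
# around tone onsets. All names below are hypothetical.
import mne

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)   # hypothetical file
raw.set_eeg_reference(ref_channels=["M1", "M2"])           # average of the mastoids

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw.copy().filter(l_freq=1.0, h_freq=None))        # fit on a high-passed copy
ica.exclude = [0]                                          # ocular component(s), chosen by inspection
ica.apply(raw)

events = mne.find_events(raw)                              # tone-onset triggers (Biosemi Status channel)
epochs = mne.Epochs(raw, events, event_id={"tone": 1},     # hypothetical event code
                    tmin=-0.1, tmax=1.5, baseline=(None, 0),
                    reject=dict(eeg=100e-6))               # crude amplitude-based rejection
evoked = epochs.average()                                  # per-condition averaging follows
```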

We performed three different types of analyses. The first analysis (Analysis 1) was designed to characterize how early auditory sensory mechanisms and late, task-related strategic cognitive mechanisms varied across gates (i.e., as the feeling of familiarity emerged). The purpose of the second analysis (Analysis 2) was to investigate how task-related mechanisms were influenced by the familiarity of the melodies. Finally, and most importantly, the aim of the third analysis (Analysis 3) was to test our hypothesis that the emergence of familiarity evoked by the melodies would generate the emergence of conceptual information and, hence, the rise of an N400 component.

For Analysis 1, ERP data were analyzed in response to the last tone of each gate by computing the mean amplitude over an epoch starting 100 msec before and ending 1500 msec after the onset of the tone. The gates used for the analysis were those around the FEP gate (estimated for each melody and each participant based on their responses), that is, two gates before the FEP gate (referred to as “PRE2”), one gate before the FEP gate (“PRE1”), the FEP gate, one gate after the FEP gate (“POST1”), and two gates after the FEP gate (“POST2”; Figure 1). We also analyzed the N1/P2 complex peak-to-peak amplitude, that is, the absolute difference between the peak amplitudes of the N1 and P2 detected within the 100- to 180-msec and the 180- to 270-msec windows, respectively (time windows based on visual inspection of the grand averages). Repeated measures ANOVAs were used for statistical assessment. To test the distribution of the effect of the gate on the ERPs to the last tone, six ROIs were selected as levels of two topographic within-participant factors (Hemisphere and Anteroposterior): the left (AF3, F3, F7) and right (AF4, F4, F8) anterior sites, the left (C3, FC1, FC5) and right (C4, FC2, FC6) central sites, and the left (P3, CP1, CP5) and right (P4, CP2, CP6) posterior sites. Therefore, we computed repeated measures ANOVAs with Gate (PRE2, PRE1, FEP gate, POST1, and POST2), Anteroposterior (anterior, central, and posterior ROIs), and Hemisphere (left/right) as within-participant factors.
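As an illustration of this statistical step, the following Python sketch runs the corresponding repeated measures ANOVA on one 50-msec window, assuming the per-trial ROI mean amplitudes have been exported to a long-format table. The file and column names are hypothetical; the original analyses were run with Cleave and Statistica, and the Greenhouse–Geisser correction is not applied here.

```python
# Sketch of the repeated measures ANOVA for one 50-msec window, assuming one
# row per participant x gate x ROI cell (hypothetical file and column names).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("mean_amplitudes_200_250ms.csv")
# expected columns: subject, gate, anteroposterior, hemisphere, amplitude

anova = AnovaRM(df, depvar="amplitude", subject="subject",
                within=["gate", "anteroposterior", "hemisphere"],
                aggregate_func="mean").fit()   # average duplicate cells if present
print(anova)  # F tests for the main effects and all interactions
```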

Figure 1. 

Gate sequence of the paradigm (i.e., PRE2, PRE1, FEP, POST1, and POST2). The last tones and the FEP tones of the melody (from Bach's Badinerie, Suite in B minor) are highlighted. The left side of the figure shows the participant's familiarity judgments corresponding to each gate.

For Analyses 2 and 3, an additional factor, Familiarity, with two levels (highly vs. moderately familiar), was included. Because the same melody can be perceived as highly familiar by one participant but as moderately familiar by another participant, the analysis was conducted at the individual level, taking this between-participant variability into account. Thus, the ERP data were not analyzed with a priori categories (i.e., a given melody was not classified as highly or moderately familiar based on the pretests) but on the basis of participants' responses during the ERP session: the familiarity score for each melody and each participant (1 = low familiarity and 9 = high familiarity, see above) and the measurement of an FEP (for a similar procedure, see, e.g., Plailly et al., 2007). At each gate, a median split across each participant's familiarity scores was used to determine the experimental condition assigned to each trial (i.e., highly or moderately familiar).
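A minimal sketch of this participant-wise median split (with hypothetical file and column names) could look as follows.

```python
# Sketch of the median split described above. The median is computed within
# each participant, so the split follows individual perception rather than
# the a priori pretest categories. File and column names are hypothetical.
import pandas as pd

scores = pd.read_csv("familiarity_scores.csv")
# expected columns: subject, melody, score (the 1-9 rating given per melody)

medians = scores.groupby("subject")["score"].transform("median")
scores["condition"] = (scores["score"] > medians).map(
    {True: "highly familiar", False: "moderately familiar"})
```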

For Analysis 2, ERP data were also analyzed in response to the last tone but with the additional abovementioned Familiarity factor (not included in Analysis 1). Analysis 2 focused on the influence of the level of Familiarity, still considering Gate, but with only two levels (notably because five levels of Gate would have left too few trials per condition). Thus, we computed repeated measures ANOVAs with melody Familiarity (high/moderate), Gate (PRE2 and PRE1 vs. FEP gate and POST1), Anteroposterior (anterior, central, and posterior ROIs), and Hemisphere (left/right) as within-participant factors.

Unlike Analyses 1 and 2, which were performed on the last tone of the melody, Analysis 3 assessed the ERP in response to the FEP tone, that is, the tone corresponding to the emergence of familiarity (see Procedure section). For this analysis, the degree of Familiarity (high/moderate) was also included as a factor. Although at early gates (i.e., PRE2 and PRE1) there should be no difference between melodies judged as moderately or highly familiar, a difference might emerge around the FEP gate. Hence, we analyzed only three levels of the Gate factor: FEP gate, POST1, and POST2. The Gate and Familiarity (highly familiar vs. moderately familiar) factors were used to assess the ERP response to the same tone: the FEP tone. Because the topography of the effects in this third analysis differed from that in Analyses 1 and 2, we used different ROIs: the left (FC1, FC5, F7) and right (FC2, FC6, F8) anterior sites, the left (C3, CP1, CP5) and right (C4, CP2, CP6) central sites, and the left (P3, P7, PO3) and right (P4, P8, PO4) posterior sites. Therefore, we computed repeated measures ANOVAs with Familiarity (high/moderate), Gate (FEP gate, POST1, and POST2), Anteroposterior (anterior, central, and posterior ROIs), and Hemisphere (left/right) as within-participant factors. For all three analyses, the ANOVAs included the main effect of each factor and all possible interactions between these factors. Additional ANOVAs that included the midline electrodes were also performed; however, because no major differences were found between these two types of analyses, we report only those including the ROIs. We used latency windows of 50 msec in the 0- to 1500-msec range. Because of the increased likelihood of Type I errors associated with the large number of statistical tests, only effects that reached significance in at least two consecutive time windows were considered significant. All p values reported below were adjusted with the Greenhouse–Geisser correction for nonsphericity, when appropriate. Uncorrected degrees of freedom, corrected p values, and epsilon values are reported. Tukey tests were used for post hoc comparisons. The statistical analyses were conducted with Cleave and Statistica.
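The two-consecutive-windows criterion can be sketched as follows; the window-wise p values are assumed to come from the ANOVAs described above.

```python
# Sketch of the criterion described above: an effect is retained only when its
# 50-msec windows reach significance in at least two consecutive windows.
def significant_runs(p_values, alpha=0.05, min_consecutive=2):
    """Flag each window that belongs to a run of >= min_consecutive significant windows."""
    sig = [p < alpha for p in p_values]
    keep = [False] * len(sig)
    run_start = None
    for i, s in enumerate(sig + [False]):          # sentinel closes a final run
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_consecutive:
                keep[run_start:i] = [True] * (i - run_start)
            run_start = None
    return keep

# Example: an isolated significant window (index 0) is discarded.
print(significant_runs([0.03, 0.20, 0.01, 0.04, 0.30]))
# [False, False, True, True, False]
```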

RESULTS

Behavioral Results

On average, the participants' responses showed an FEP for 64% of the melodies. It is worth noting that these melodies were not the same for each participant, indicating some between-subjects variability in musical knowledge.2 Indeed, the same melody could be perceived as highly or moderately familiar by one participant (showing an FEP) but as less familiar by another participant (showing no FEP), although there was a strong correlation between the participants' familiarity scores in the pilot and ERP experiments (see Methods section). Therefore, we performed the ERP analyses on around two thirds of the material. The participants' familiarity judgments indicated that the number of tones required to reach the emergence of familiarity (the FEP tone) was, on average, five (M = 4.94, SEM = 0.05). This number of tones did not differ between highly and moderately familiar melodies (Mann–Whitney test: U = 90737, ns; Table 1). However, participants reached the FEP tone earlier with highly familiar compared with moderately familiar melodies (Mann–Whitney test: U = 56113, p < .0001; Table 1); that is, the duration of the melodies prior to the FEP tone was shorter for highly familiar compared with moderately familiar melodies. The mean familiarity scores were 8.62 (SD = 0.02) for highly familiar melodies and 4.75 (SD = 0.06) for moderately familiar melodies.

Table 1. 

Mean FEPs with HF (above the Median Score of Familiarity) and MF (below the Median Score of Familiarity) Melodies and Corresponding Duration of the Excerpt


              Tone (No.)        Duration (sec)
              M      SE         M      SE
HF melodies   4.89   0.08       1.59   0.04
MF melodies   5.00   0.08       2.25   0.06

No. = number of tones to reach the FEP; HF = highly familiar; MF = moderately familiar.

Event-related Brain Potentials Results

Analysis 1: Effect of the Gate on the ERPs to the Last Tone

As can be seen in Figure 2, the presentation of the last tone of the excerpt at the gates PRE2, PRE1, FEP, POST1, and POST2 elicited different ERPs. A negative component, the N1, peaking at 130 msec (latency range = 100–180 msec), was clearly evident. The next component, the P2, with a peak at 240 msec (latency range = 180–270 msec), was evident at frontal and central sites. A late positivity was then observed, which seemed to overlap with the end of the P2 and extended until about 1500 msec.

Figure 2. 

Grand-averaged ERPs to the last tone of the melodies at each gate: PRE2, PRE1, FEP, POST1, and POST2 (n = 21 participants; vertical unit, microvolt; horizontal unit, millisecond; about 800 trials per gate). These data are relevant to Analysis 1.

To analyze in detail how these components were modulated by the independent variables manipulated in this experiment, we computed repeated measures ANOVAs (with Gate, Anteroposterior, and Hemisphere as within-participant factors; see Methods section) using 50-msec windows (Table 2). A main effect of Gate between 200 and 1450 msec, F(4, 80) = 5.14, p = .012, ɛ = 0.471, indicated that the positivities (i.e., the P2 and the late positivity) were larger at and after the FEP gate (i.e., FEP gate, POST1, and POST2) compared with before the FEP gate (i.e., PRE2 and PRE1; Figure 3).

Table 2. 

Time Course of the Gate Effect on the Response to the Last Tone (Analysis 1)

Time (msec)    Main Gate Effect    Gate × Anteroposterior
0–50
50–100
100–150                            **
150–200                            **
200–250        *                   *
250–300        **                  **
300–350        **
350–400        **
400–450        **
450–500        **
500–550        **
550–600        **
600–650        **
650–700        *
700–750        *
750–800        *
800–850        *
850–900
900–950        *
950–1000       **
1000–1050      **
1050–1100      **
1100–1150      *
1150–1200
1200–1250
1250–1300
1300–1350
1350–1400      *
1400–1450      *
1450–1500

*Statistical significance at .05.

**Statistical significance at .01.

Figure 3. 

Channel means (with 95% confidence intervals) of the grand-averaged ERPs to the last tone of the melodies between 200 and 1450 msec across gates (n = 21 participants; vertical unit, microvolt; horizontal unit, gates; about 800 trials per gate). These data are relevant to Analysis 1.

An interaction between Gate and Anteroposterior was found within the 100- to 300-msec latency range, F(8, 160) = 4.62, p = .001, ɛ = 0.571. Post hoc comparisons indicated that this interaction was due to a significant difference between POST1 and POST2 at anterior (p < .001) and central sites (p < .001) but not at posterior sites (ns). The N1/P2 peak-to-peak analysis showed a significant main effect of Gate, F(4, 80) = 3.13, p = .033, ɛ = 0.733, due to a smaller N1/P2 amplitude at POST2 (M = 3.59 μV) compared with POST1 (M = 5.11 μV; p = .013).
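For illustration, the peak-to-peak measure reported here can be sketched as follows, assuming erp holds one channel's grand-averaged waveform in microvolts, sampled at 512 Hz with time zero at tone onset (these assumptions are not taken from the original analysis scripts).

```python
# Sketch of the N1/P2 peak-to-peak measure: the absolute difference between
# the N1 peak (100-180 msec) and the P2 peak (180-270 msec) amplitudes.
import numpy as np

def n1_p2_peak_to_peak(erp: np.ndarray, sfreq: float = 512.0) -> float:
    """Absolute difference between the N1 and P2 peak amplitudes."""
    def window(t0_ms, t1_ms):
        return erp[int(t0_ms * sfreq / 1000): int(t1_ms * sfreq / 1000)]
    n1 = window(100, 180).min()   # N1 peak: most negative value in 100-180 msec
    p2 = window(180, 270).max()   # P2 peak: most positive value in 180-270 msec
    return abs(p2 - n1)
```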

Analysis 2: Effect of Familiarity on the ERPs to the Last Tone

We separately averaged the ERP to the last tone according to the degree of familiarity with melodies provided by the participants' scores. Because the same melody can be perceived as highly familiar by one participant but moderately familiar by another participant, the analysis of the effect of familiarity was done at the individual level (based on participants' judgments). At the FEP gate and POST1, the ERP to the last tone displayed a larger, late (i.e., around 500 msec) positivity for highly familiar compared with moderately familiar melodies (Figure 4).

Figure 4. 

Grand-averaged ERPs to the last tone of the highly familiar and moderately familiar melodies after emergence of familiarity (FEP and POST1; n = 21 participants; vertical unit, microvolt; horizontal unit, millisecond; about 400 trials per gate and level of familiarity). These data are relevant to Analysis 2.

To analyze in detail how the ERPs were modulated by the independent variables manipulated in this experiment, we computed repeated measures ANOVAs (with Familiarity, Gate, Anteroposterior, and Hemisphere as within-participant factors; see Methods section) using 50-msec windows.

An interaction between Familiarity and Gate was significant in the 450- to 650-msec latency range, F(1, 20) = 9.30, p = .006. Post hoc comparisons indicated that this interaction was due to a larger positivity to highly familiar (M = 1.87 μV) compared with moderately familiar melodies (M = 0.70 μV) at the FEP gate and POST1 (p < .001).

Analysis 3: Effect of Familiarity on the Responses to the FEP Tone

As described in the Methods section, the gating paradigm used in this study required participants to confirm that a melody considered familiar at gate n was still considered familiar at gates n + 1 and n + 2. Thus, the FEP tone, which is the last tone at the FEP gate (where the participant judged the melody to be familiar for the first time), was presented two more times: as the penultimate tone at the POST1 gate (i.e., n + 1) and as the third-to-last tone at the POST2 gate (i.e., n + 2; see Figure 1, notes in the rectangle). It is therefore interesting to examine how the response to the same tone evolves over these three gates.

To analyze in detail how the ERPs to the FEP tone were modulated by the independent variables manipulated in this experiment, we computed repeated measures ANOVAs (with Familiarity, Gate, Anteroposterior, and Hemisphere as within-participant factors; see Methods section) using 50-msec windows (Table 3).

Table 3. 

Time Course of the Familiarity and Gate Effects on the Response to the FEP Tone (Analysis 3)

Time (msec)    Familiarity × Anteroposterior    Main Gate Effect    Gate × Anteroposterior
0–50
50–100
100–150
150–200
200–250        **
250–300        **
300–350        **
350–400        *
400–450        *                                **
450–500        **                               **                  *
500–550                                         **                  **
550–600                                         **                  **
600–650                                         **                  **
650–700                                         **                  **
700–750                                         **                  **
750–800                                         **                  **
800–850                                         **                  *
850–900                                         **                  **
900–950                                         **                  *
950–1000                                        **                  *
1000–1050                                       **
1050–1100                                       **
1100–1150                                       **
1150–1200                                       **
1200–1250                                       **
1250–1300                                       *
1300–1350                                       **
1350–1400
1400–1450
1450–1500

*Significance threshold at .05.

**Significance threshold at .01.

Familiarity and Anteroposterior interacted significantly within the 200- to 500-msec latency range, F(2, 40) = 7.72, p = .005, ɛ = 0.716. Post hoc comparisons indicated that this interaction was due to a larger negativity to highly familiar (M = −2.70 μV) compared with moderately familiar melodies (M = −1.87 μV) at the anterior (fronto-central) sites (p < .001), whereas the differences between levels of familiarity were not significant at the other sites (central, p = .28; posterior, p = .92).

Although the Familiarity × Gate interaction, F(2, 40) = 0.72, p = .483, ɛ = 0.935, and the Familiarity × Anteroposterior × Gate interaction, F(4, 80) = 1.41, p = .253, ɛ = 0.615, were not significant in this time window, the effect of familiarity (i.e., the ERP difference between highly familiar and moderately familiar melodies) seemed to differ across gates: whereas there was no effect at the FEP gate (highly familiar melodies: M = 0.40 μV, SD = 0.77 μV; moderately familiar melodies: M = 0.49 μV, SD = 0.75 μV) or at POST2 (highly familiar melodies: M = −1.75 μV, SD = 0.68 μV; moderately familiar melodies: M = −1.81 μV, SD = 0.43 μV), there was an effect at POST1 (highly familiar: M = −1.92 μV, SD = 0.41 μV; moderately familiar: M = −0.91 μV, SD = 0.54 μV; Figure 5).

Figure 5. 

Grand-averaged ERPs to the FEP tone of the melodies within the 200- to 500-msec latency range in moderately familiar and highly familiar melodies (n = 21 participants; about 400 trials per gate and level of familiarity). The upper panel shows the channel means (with 95% confidence intervals) of the grand-averaged ERPs across the FEP, the POST1, and the POST2 gates (vertical unit, microvolt; horizontal unit, gates). The lower panel shows the grand-averaged ERPs at POST1 (vertical unit, microvolt; horizontal unit, millisecond) and the corresponding isopotential map of the difference waves (grand-averaged ERPs to highly familiar minus moderately familiar melodies) 400-msec poststimulus onset (unit, microvolt). These data are relevant to Analysis 3.

DISCUSSION

We predicted that the feeling of familiarity evoked by a melody would trigger or co-occur with the processing of concepts conveyed by the emotions evoked by, or verbal associations with, the melody (or its memory representation) and that the higher the familiarity with the melody, the more concepts would be carried and the larger the N400 would be. This prediction was confirmed by the data, which showed a larger negativity to highly familiar melodies compared with moderately familiar melodies within the 200- to 500-msec latency window (see Analysis 3). In addition to supporting the power of music to evoke concepts stored in memory, the present study is the first attempt to show how the processing of concepts unfolds in time while a listener starts to feel familiarity with a melody.

Behavioral Results

The number of tones necessary to reach the emergence of familiarity (i.e., the FEP tone) was close to five (4.94). It is noteworthy that a similar value (4.9, the mean of 4.0 for highly familiar and 5.7 for moderately familiar melodies in nonmusicians) was reported by Dalla Bella et al. (2003). Because we applied the same experimental design and used nonmusician participants of about the same age (M = 25 years compared with 21 years in their experiment), the main difference between the two studies lay in the musical material. Indeed, whereas we presented mostly classical themes from a set used by Plailly et al. (2007) and Platel et al. (2003), Dalla Bella et al. used melodies from a repertoire of French traditional songs (Berthier, 1979). It is possible, therefore, that the difference in familiarity between highly and moderately familiar melodies was stronger in Dalla Bella et al. than in our material because traditional songs may exceed classical themes in familiarity. In addition, the strong association of music and text in traditional songs might boost the speed of the emergence of familiarity in comparison with the tune alone. This might explain why we did not replicate the difference in the number of tones needed to reach the FEP between highly familiar and moderately familiar melodies. However, the mean number of tones needed to reach the FEP across these two types of melodies (i.e., about five tones) closely replicated the data of Dalla Bella et al. (2003). This replication with different material argues for the generality of this result across a large set of melodies. As indicated earlier, our melodies did not contain lyrics or text and could be associated with lyrics in only a few cases. Therefore, provided that labeling strategies (e.g., recalling the title of the musical piece or associated lyrics) occurred in only a few cases (see discussion below), the mechanism of interest in the present study, the emergence of familiarity for melodies, concerned mainly music and not language processing. This aspect is particularly relevant for the interpretation of the reported N400 effect, as discussed below.

Similar to Dalla Bella et al. (2003), we found that participants reached the FEP tone earlier with highly familiar compared with moderately familiar melodies. Indeed, participants in the present experiment reached the FEP tone approximately 0.66 sec earlier with highly familiar compared with moderately familiar melodies (played at the same tempo; see Methods section), whereas Dalla Bella et al. reported a difference of 0.7 sec. Therefore, the present study closely replicated this familiarity effect on recognition time. The fact that the feeling of familiarity is reached earlier with highly familiar than with less familiar melodies (at the same tempo and with a comparable number of tones; the FEP tone did not differ significantly between highly and moderately familiar melodies) could be expected. Indeed, when the gating paradigm is applied to words instead of melodies by adding a new item (e.g., a letter or a syllable) at each gate, highly familiar/frequent words are identified faster than less familiar/frequent words (Walley, Michela, & Wood, 1995; Tyler, 1984; Grosjean, 1980). Therefore, the similar familiarity effect with melodies suggests that music and word recognition could be governed by comparable mechanisms.

Event-related Brain Potentials Results

In this study, we proposed to time lock the ERP to the lexical access of familiar musical excerpts. For this aim, we used the gating paradigm and focused on the emergence of familiarity evoked by melodies. We predicted that the feeling of familiarity for a melody would be accompanied by the processing of concepts conveyed by the melody (e.g., emotional or associative concepts carried either by the melody itself or by the memory representations of the melody) and that, much like in the language domain, the processing of concepts would correlate with the variation of the N400 component. More specifically, as found with language, the N400 was expected to increase with the emergence of the concepts conveyed by the stimulus as it became more and more familiar (Mestres-Missé et al., 2007).

The purpose of Analysis 1 was to assess the primary auditory responses (i.e., the N1/P2 complex) and the task-induced decision-related component(s) as the feeling of familiarity increased across gates. The aim of the second analysis (Analysis 2) was to test how the decision-related response was modulated by the familiarity effect (i.e., the responses to highly familiar compared with moderately familiar melodies). Finally, a third analysis (Analysis 3) tested the hypothesis of our study, that the emergence of familiarity would elicit a larger N400 to highly familiar compared with moderately familiar melodies.

Whereas Analysis 1 (gate effect) and Analysis 2 (familiarity effect) focused on responses to the last tone of the melodies (presented with increasing length), Analysis 3 centered on responses to the FEP tone, the tone at which participants began to consider the melody as familiar without changing their judgment thereafter (Figure 1).

Gate Effect Observed in Response to the Last Tone (Analysis 1)

Within the gating paradigm, the last tone of the melody was always a new tone at each gate (Figure 1). Analyzing the responses to the last tone allowed us to test, independently of repetition effects (Kotchoubey, Schneider, Uhlmann, Schleichert, & Birbaumer, 1997), which would occur if the analysis were performed on responses to the same tone of the melody, how early auditory processing, reflected in the N1/P2 complex, varies across gates as the emergence of familiarity unfolds.

Our results showed that the amplitude of the N1/P2 complex (between 100 and 300 msec) decreased between POST1 and POST2. This effect might reflect an expectancy mechanism (i.e., a set of predicted melodies is preactivated) that primes the subsequent tones of the melody. This priming would reduce the processing costs of the following tones and, hence, the amount of early auditory processing. Indeed, Schön and Besson (2005) showed that “being able to anticipate a precise target note … seems to influence auditory processing as early as 100 msec” (p. 701). Therefore, the reduction of the N1/P2 could result from such an anticipation process. After the emergence of familiarity (i.e., at POST1 and POST2), the listener is able to imagine or mentally rehearse the melody, and this may in turn have a habituation effect (Halpern & Zatorre, 1999) on the following tones of the melody (yet to be presented), reducing the N1 amplitude.

In our experiment, participants had to listen to the last tone to perform a familiarity decision task (familiar vs. unfamiliar). The results showed that as the emergence of familiarity unfolded across gates, that is, as participants had more and more tones with which to feel familiar with the melody, the ERPs to final tones exhibited a larger late positive component. More specifically, the grand-averaged responses showed a larger late positivity at and after the FEP gate compared with before the FEP gate between 200 and 1450 msec. This late positivity resembles a positivity of the P3 family, known to reflect confidence in decision/classification tasks (Van Hooff, 2005; Cutmore & Muckert, 1998; Simons & Lang, 1976; Squires, Squires, & Hillyard, 1975). As identification unfolded across gates, the melodies became more and more familiar to the participants. Therefore, it is likely that the participants also became more and more confident in their familiarity decision and thus showed an increased positivity across gates.

Familiarity Effect Observed in Response to the Last Tone (Analysis 2)

Because participants had to provide a score from 1 to 9 judging the overall familiarity of the melody at the end of each set of six gates (for a given melody), we could classify each melody (for each participant) as highly or moderately familiar. Within each gate, this familiarity effect (i.e., the difference between the responses to highly and moderately familiar melodies) revealed a pattern similar to the “between gates” familiarity effect of Analysis 1 (i.e., as the gates progressed, the emergence of familiarity unfolded and, hence, familiarity with the gated melody increased): A larger positivity was associated with higher familiarity. In Analysis 1, this was evidenced by a larger late positivity with increasing gate. In Analysis 2, it was shown by a larger positivity between 450 and 650 msec to melodies that were judged as highly familiar compared with those judged as moderately familiar.

Similar to the previously discussed effect across gates, this familiarity effect can also be interpreted as a modulation of a late P3 component: Participants judged the familiarity of the melodies with more confidence for highly familiar melodies than for moderately familiar melodies.

Interestingly, Besson and Faïta (1995) found a larger P600 to melodic out-of-key and in-key incongruities in highly familiar compared with unfamiliar melodies. The authors interpreted this finding as reflecting a larger violation of musical expectancy with familiar compared with unfamiliar items. Indeed, the familiarity evoked by a melody is known to help in deciding whether the melody has been modified (e.g., key transposed or contour modified; Jones & Ralston, 1991; DeWitt & Samuel, 1990), most likely because, with familiar tunes, a well-known musical context is retrieved from memory and can help in the identification of a single (distorted) tone (Dewar, Cuddy, & Mewhort, 1977).

Familiarity Effect Observed in Response to the FEP Tone (Analysis 3)

Most importantly, for the purpose of the present study, ERP responses to the FEP tone at POST1 (i.e., the penultimate tone of the melody, Figure 1) and POST2 (i.e., the third to the last tone, Figure 1) were not contaminated by the decision process engendering the positive component described above. Therefore, the analysis of the ERP responses to the FEP tone allowed us to test conceptual processing during the emergence of familiarity to music independently of decision-related processes.

The feeling of familiarity for a melody could imply the activation of the melody in a musical mental lexicon (Peretz, 1996). We expected that the activation of an item in the musical lexicon would be accompanied by an N400 reflecting the emergence of familiarity/conceptual information (Mestres-Missé et al., 2007).

In the present study, at the gate immediately following the emergence of familiarity (POST1), we found a larger negativity to highly familiar compared with moderately familiar melodies between 200 and 500 msec at fronto-central sites, referred to below as the “N400 familiarity effect.” This effect was not directly tested statistically at POST1 because the nonsignificant interactions (Familiarity × Gate and Familiarity × Anteroposterior × Gate) did not allow us to perform post hoc tests. However, the statistical significance of the effect was inferred from the Familiarity × Anteroposterior interaction together with the lack of a familiarity effect at the FEP gate and at POST2 (Figure 5, upper panel). Indeed, this interaction could only be explained by the familiarity effect at POST1.

The latency of this negative component was around 400 msec. Between POST1 and POST2, in correspondence with the unfolding of the feeling of familiarity (i.e., increased familiarity with the melody), the negativity to moderately familiar melodies increased, so that the difference between the ERPs to highly familiar and moderately familiar melodies had vanished at POST2 (Figure 5). Therefore, this negativity seems to correlate with the degree of familiarity/conceptual information of the melodies, as reflected by the familiarity effect at POST1 and by its modulation across gates as the feeling of familiarity unfolds. The slower increase of this negativity between POST1 and POST2 for moderately familiar compared with highly familiar melodies seems to mirror the slower emergence of familiarity for moderately familiar compared with highly familiar melodies shown by the behavioral results.

Interestingly, at least two characteristics of the N400 familiarity effect are shared with those of the typical N400 effect: the sensitivity to the degree of familiarity/conceptual information and the latency. Although the fronto-central topography of the N400 familiarity effect is not the most often observed, N400 effects at frontal areas have been found with words (Van Petten & Rheinfelder, 1995), odors (Sarfarazi, Cave, Richardson, Behan, & Sedgwick, 1999), and pictures (Hamm, Johnson, & Kirk, 2002; West & Holcomb, 2002; McPherson & Holcomb, 1999; Ganis, Kutas, & Sereno, 1996). The larger N400 to highly familiar compared with moderately familiar melodies might be explained by a difference in cohort-size reduction within the “cohort model” (Marslen-Wilson, 1987; Grosjean, 1980). Indeed, highly familiar melodies, and hence the sets of notes that constitute them, are likely to be more frequent than moderately familiar melodies (and the sets of notes that constitute them). Thus, the sets of notes in a highly familiar melody would occur more often in the musical lexicon than the sets of notes in a moderately familiar melody and would therefore activate a larger set of items in a mental lexicon of melodies. Hence, more cohort members would have to be inhibited during the recognition of highly familiar compared with moderately familiar excerpts. Because the N400 varies with cohort-size reduction (Barber, Vergara, & Carreiras, 2004; Hutzler et al., 2004), a larger N400 would be found to highly familiar compared with moderately familiar excerpts. Interestingly, this correlation between the number of activated items in a mental lexicon and the amplitude of the N400 is further supported by other data on word perception. Holcomb, Grainger, and O'Rourke (2002) found a larger N400 to words with many orthographic neighbors compared with words with few neighbors. Therefore, it is likely that the N400 familiarity effect reported here is somehow related to the size of the set of activated melodies or activated conceptual representations during the emergence of familiarity. The exact mechanism (e.g., inhibition or spreading of activation) underlying this effect remains to be investigated.
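A toy illustration of this cohort-size idea is given below; the lexicon, the pitch coding, and the second melody sharing the famous opening are invented for the example, and real melodies would of course be indexed by more than pitch.

```python
# Toy illustration of the cohort idea: each added tone prunes the set of
# melodies in a (hypothetical) lexicon whose opening matches the tones heard
# so far. More frequent openings -> larger initial cohorts -> more candidates
# to inhibit during recognition.
LEXICON = {                        # melodies as pitch sequences (MIDI numbers)
    "Fifth Symphony":    [67, 67, 67, 63, 65, 65, 65, 62],
    "Hypothetical tune": [67, 67, 67, 65, 64, 62, 60, 60],  # invented for the example
    "Ode to Joy":        [64, 64, 65, 67, 67, 65, 64, 62],
}

def cohort(heard):
    """Melodies whose first len(heard) tones match the tones heard so far."""
    return [name for name, tones in LEXICON.items()
            if tones[:len(heard)] == list(heard)]

melody = LEXICON["Fifth Symphony"]
for gate in range(3, 6):                 # gates with 3, 4, and 5 tones
    print(gate, cohort(melody[:gate]))   # the cohort shrinks across gates
# 3 ['Fifth Symphony', 'Hypothetical tune']
# 4 ['Fifth Symphony']
# 5 ['Fifth Symphony']
```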

We assumed that the N400 familiarity effect was mainly influenced by emotional or associative concepts but not by concepts arising from the musical structure of the melodies. This assumption comes from (1) the material and (2) the between-subjects variability of the familiarity judgments. (1) All our melodies, whether judged as highly familiar or moderately familiar, were based on the Western tonal musical system and thus contained tonal structures as well as rhythmic and metric structures. (2) The same melody (i.e., the same musical structure) could be rated as highly familiar by one participant but as moderately familiar by another. For these two reasons, we assumed that structural effects played a minor role in the reported N400 familiarity effect.

Although we propose that the N400 familiarity effect is modulated by access to nonverbal concepts, an alternative interpretation may rely more on semantic associations. In this alternative view, the greater familiarity of a melody would increase the ability to engage in labeling strategies (e.g., recalling the title of the musical piece). Indeed, the number of activated labels and related concepts would be larger for highly compared with moderately familiar melodies. This interpretation would assume that labeling can occur even if the melody is not yet recognized, at a time when the listener perceives the melody to be familiar but cannot yet identify it with strong confidence. In other words, he or she has a “feeling of familiarity” but is not yet fully conscious of the identity of the melody. Because labeling is an explicit (conscious) strategy, it is unlikely to occur before recognition of the melody. However, further studies need to be carried out to tease apart these two interpretations of the effect.

The ERP familiarity effect found in Analysis 2 on a late positive component suggests, as discussed earlier, that participants' confidence was higher for highly familiar compared with moderately familiar melodies. Importantly, this ERP effect is in the opposite direction to the reported N400 familiarity effect. Therefore, the N400 familiarity effect is unlikely to reflect a modulation of confidence, but it might be attenuated by an overlapping positivity due to confidence.

In summary, the ERP familiarity effect with musical stimuli seems to reflect a modulation of the N400, an ERP component sensitive to conceptual processing (Bles et al., 2007; Van den Brink et al., 2006; Pylkkänen & Marantz, 2003; Federmeier & Kutas, 2001). Thus, the present study adds to the evidence that music is capable of conveying concepts and shows, for the first time, the temporality of conceptual processing as the feeling of familiarity with melodies unfolds over time.

APPENDIX

The melodies can be downloaded at http://www.incm.cnrs-mrs.fr/pperso/attach/schon/daltrozzo_et_al_gating_stim.zip.

Material used to extract the melodies, with the melodies' familiarity scores (estimated in a preliminary experiment; see Methods section):

Filename   Excerpt from                                        Composer        Familiarity
1          Symphony # 5 in C minor, op. 67                     Beethoven
2          Hungarian dance # 15 in G minor                     Brahms
3          Arnaque                                             Joplin
4          Bagatelle for Elise, WoO 59                         Beethoven
5          March of the Kings, The Girl from Arles             Bizet
6          The Walkyrie                                        Wagner
7          Raiders March                                       Williams
8          The Blue Danube                                     Strauss
9          Allegro from Concerto # 1, op. 8, Spring            Vivaldi
10         Badinerie, Suite in B minor, BWV 1067               Bach
11         Toccata & Fugue in D minor, Toccata                 Bach
12         The Good, the Bad and the Ugly                      Morricone
13         Rondo alla Turca                                    Mozart
14         Peer Gynt                                           Grieg
15         Sorcerer's Apprentice                               Dukas
16         Peter and the Wolf, Peter's walk, Excerpt # 1       Prokofiev
17         Conquest of Paradise                                Vangelis
18         Imperial March                                      Williams
19         The Trout                                           Schubert
20         The Sleeping Beauty                                 Tchaikovsky
21         Toccata & Fugue in D minor, Fugue                   Bach
22         Carmen, Excerpt # 3                                 Bizet
23         Serenade # 13 for strings in G major, Excerpt # 2   Mozart
24         Peter and the Wolf, Peter's walk, Excerpt # 2       Prokofiev
25         Pictures at an Exhibition                           Moussorgski
26         Symphony # 40 in G minor, KV 550, Molto allegro     Mozart
27         Bolero                                              Ravel
28         Traviata: Libiamo ne' lieti calici                  Verdi
29         Aida: Triumphal March                               Verdi
30         Má Vlast: Vltava (Moldau)                           Smetana
31         Star-Spangled Banner                                anonymous
32         Gladiators Entrance, op. 68                         Fučík
33         Symphony # 9 in E minor, op. 95                     Dvorak
34         Waltz # 5                                           Shostakovitch
35         Preludes & Fugues in C                              Bach
36         Jeux Interdits, Romance for guitar                  anonymous
37         America                                             Gershwin
38         Peter and the Wolf, The cat                         Prokofiev
39         Sylvia                                              Delibes
40         The Magic Flute, Excerpt # 2                        Mozart
41         Radetzky March                                      Strauss
42         The Magic Flute, Excerpt # 3                        Mozart
43         Nutcracker—Chinese dance                            Tchaikovsky
44         Romeo and Juliet: Montagues and Capulets            Prokofiev
45         Infinity                                            Walden
46         The Magic Flute, Excerpt # 1                        Mozart
47         God Save the Queen                                  anonymous
48         Carmen, Excerpt # 2                                 Bizet
49         The Nutcracker, Waltz of the Flowers                Tchaikovsky
50         Concerto for Piano # 21 in C major, Andante         Mozart
51         Brandenburg Concerto                                Bach
52         Chicken Reel                                        Daly
53         Limelight                                           Chaplin
54         What a Feeling                                      Cara
55         Pomp and Circumstance: March # 1                    Elgar
56         Hungarian March                                     Berlioz
57         Symphony # 9                                        Beethoven
58         Symphonie fantastique, Excerpt # 3                  Berlioz
59         Is Paris Burning?                                   Jarre
60         Symphonie fantastique, Excerpt # 2                  Berlioz
61         Salzburg Symphony # 3 in F major                    Mozart
62         Concerto for oboe in C minor                        Mozart
63         Swan Lake, Excerpt # 1                              Tchaikovsky
64         Swan Lake, Excerpt # 2                              Tchaikovsky
65         Summer of '42                                       Legrand
66         What a Wonderful World                              Armstrong
67         Carmen, Excerpt # 4                                 Bizet
68         Symphony # 6                                        Schubert
69         The Barber of Seville                               Rossini
70         Dance of the Hours, La Gioconda                     Ponchielli
71         Suite bergamasque                                   Debussy
72         To the Spring                                       Grieg
73         My Name Is Nobody                                   Morricone
74         Concerto for violin in D major                      Mozart
75         The Swan                                            Saint-Saëns
76         Serenade # 13 for strings in G major, Excerpt # 1   Mozart
77         Concerto # 2 for horn in E flat major               Mozart
78         Alleluia                                            Haendel
79         Carmen, Excerpt # 1                                 Bizet
80         Symphonie fantastique, Excerpt # 1                  Berlioz

Acknowledgments

This research was supported by a grant from the French National Agency for Research (ANR 2005-8 “Music & Memory”).

Reprint requests should be sent to Jérôme Daltrozzo, INCM-CNRS, 31 Ch. Joseph-Aiguier, 13402 Marseille Cedex 20, France, or via e-mail: jerome.daltrozzo@inserm.fr.

Notes

1. It is worth noting that the familiarity scores provided by the participants of this pilot experiment and those of the ERP experiment were highly correlated (r = .66, p < .0001), pointing to a strong intergroup agreement for the familiarity judgments (illustrated by the sketch following these notes).

2. See note in the Methods section.
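The intergroup agreement reported in Note 1 is a Pearson correlation between the two groups' per-melody familiarity scores. A minimal sketch of this computation (Python/NumPy, on simulated scores; the latent-familiarity model, noise levels, and 80-melody setup are assumptions for illustration, not our data):

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated per-melody mean familiarity scores for 80 melodies, as if
    # rated independently by the pilot group and the ERP group.
    latent = rng.uniform(1, 10, size=80)            # "true" familiarity of each melody
    pilot_scores = latent + rng.normal(0, 2.0, 80)  # pilot group's noisy estimate
    erp_scores = latent + rng.normal(0, 2.0, 80)    # ERP group's noisy estimate

    r = np.corrcoef(pilot_scores, erp_scores)[0, 1]
    print(f"intergroup correlation r = {r:.2f}")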

REFERENCES

Barber, H., Vergara, M., & Carreiras, M. (2004). Syllable-frequency effects in visual word recognition: Evidence from ERPs. NeuroReport, 15, 545–548.

Bentin, S., McCarthy, G., & Wood, C. C. (1985). Event-related potentials, lexical decision and semantic priming. Electroencephalography and Clinical Neurophysiology, 60, 343–355.

Berthier, J. E. (1979). 1000 chants [1,000 songs]. Paris: Presses de l'Ile-de-France.

Besson, M., & Faïta, F. (1995). An event-related potential (ERP) study of musical expectancy: Comparisons of musicians with non-musicians. Journal of Experimental Psychology: Human Perception and Performance, 21, 1278–1296.

Bles, M., Alink, A., & Jansma, B. M. (2007). Neural aspects of cohort-size reduction during visual gating. Brain Research, 1150, 143–154.

Castle, P. C., Van Toller, S., & Milligan, G. J. (2000). The effect of odour priming on cortical EEG and visual ERP responses. International Journal of Psychophysiology, 36, 123–131.

Cutmore, T. R., & Muckert, T. D. (1998). Event-related potentials can reveal differences between two decision-making groups. Biological Psychology, 47, 159–179.

Dalla Bella, S., Peretz, I., & Aronoff, N. (2003). Time course of melody recognition: A gating paradigm study. Perception and Psychophysics, 65, 1019–1028.

Daltrozzo, J., & Schön, D. (2008). Conceptual processing in music as revealed by N400 effects on words and musical targets. Journal of Cognitive Neuroscience, 21, 1882–1892.

Dewar, K. M., Cuddy, L. L., & Mewhort, D. J. (1977). Recognition memory for single tones with and without context. Journal of Experimental Psychology: Human Perception and Performance, 3, 60–67.

DeWitt, L. A., & Samuel, A. G. (1990). The role of knowledge-based expectations in music perception: Evidence from musical restoration. Journal of Experimental Psychology: General, 119, 123–144.

Federmeier, K. D., & Kutas, M. (2001). Meaning and modality: Influences of context, semantic memory organization, and perceptual predictability on picture processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 202–224.

Ganis, G., Kutas, M., & Sereno, M. I. (1996). The search for "common sense": An electrophysiological study of the comprehension of words and pictures in reading. Journal of Cognitive Neuroscience, 8, 89–106.

Grosjean, F. (1980). Spoken word recognition and the gating paradigm. Perception and Psychophysics, 28, 267–283.

Halpern, A. R., & Zatorre, R. J. (1999). When that tune runs through your head: A PET investigation of auditory imagery for familiar melodies. Cerebral Cortex, 9, 697–704.

Hamm, J. P., Johnson, B. W., & Kirk, I. J. (2002). Comparison of the N300 and N400 ERPs to picture stimuli in congruent and incongruent contexts. Clinical Neurophysiology, 113, 1339–1350.

Holcomb, P. J., Grainger, J., & O'Rourke, T. (2002). An electrophysiological study of the effects of orthographic neighborhood size on printed word perception. Journal of Cognitive Neuroscience, 14, 938–950.

Hutzler, F., Bergmann, J., Conrad, M., Kronbichler, M., Stenneken, P., & Jacobs, A. M. (2004). Inhibitory effects of first syllable-frequency in lexical decision: An event-related potential study. Neuroscience Letters, 372, 179–184.

Jones, M. R., & Ralston, J. T. (1991). Some influences of accent structure on melody recognition. Memory & Cognition, 19, 8–20.

Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. (2004). Music, language and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7, 302–307.

Kotchoubey, B., Schneider, D., Uhlmann, C., Schleichert, H., & Birbaumer, N. (1997). Beyond habituation: Long-term repetition effects on visual event-related potentials in epileptic patients. Electroencephalography and Clinical Neurophysiology, 103, 450–456.

Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4, 463–470.

Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.

Kutas, M., & Van Petten, C. (1994). Psycholinguistics electrified. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 83–143). San Diego, CA: Academic Press.

Marslen-Wilson, W. D. (1987). Functional parallelism in spoken word-recognition. Cognition, 25, 71–102.

McPherson, W. B., & Holcomb, P. J. (1999). An electrophysiological investigation of semantic priming with pictures of real objects. Psychophysiology, 36, 53–65.

Mestres-Missé, A., Rodriguez-Fornells, A., & Münte, T. F. (2007). Watching the brain during meaning acquisition. Cerebral Cortex, 17, 1858–1866.

Miranda, R. A., & Ullman, M. T. (2007). Double dissociation between rules and memory in music: An event-related potential study. Neuroimage, 38, 331–345.

Orgs, G., Lange, K., Dombrowski, J., & Heil, M. (2006). Conceptual priming for environmental sounds and words: An ERP study. Brain and Cognition, 62, 267–272.

Orgs, G., Lange, K., Dombrowski, J., & Heil, M. (2007). Is conceptual priming for environmental sounds obligatory? International Journal of Psychophysiology, 65, 162–166.

Patel, A. (2008). Music, language, and the brain. New York: Oxford University Press.

Peretz, I. (1996). Can we lose memory for music? A case of music agnosia in a non musician. Journal of Cognitive Neuroscience, 8, 481–496.

Peretz, I., & Coltheart, M. (2003). Modularity of music processing. Nature Neuroscience, 6, 688–691.

Plailly, J., Tillmann, B., & Royet, J. P. (2007). The feeling of familiarity of music and odors: The same neural signature? Cerebral Cortex, 17, 2650–2658.

Platel, H., Baron, J. C., Desgranges, B., Bernard, F., & Eustache, F. (2003). Semantic and episodic memory of music are subserved by distinct neural networks. Neuroimage, 20, 244–256.

Poulin-Charronnat, B., Bock, B., Grieser, J., Meyer, K., & Koelsch, S. (2006). More about music, language and meaning: The follow-up of Koelsch et al. (2004). In M. Baroni, A. R. Addessi, R. Caterina, & M. Costa (Eds.), Proceedings of the 9th International Conference on Music Perception and Cognition (ICMPC9), Bologna, Italy, August 22–26, 2006 (p. 1855). Retrieved from http://www.escom-icmpc-2006.org/.

Pylkkänen, L., & Marantz, A. (2003). Tracking the time course of word recognition with MEG. Trends in Cognitive Sciences, 7, 187–189.

Sarfarazi, M., Cave, B., Richardson, A., Behan, J., & Sedgwick, E. M. (1999). Visual event related potentials modulated by contextually relevant and irrelevant olfactory primes. Chemical Senses, 24, 145–154.

Schön, D., & Besson, M. (2005). Visually induced auditory expectancy in music reading: A behavioural and electrophysiological study. Journal of Cognitive Neuroscience, 17, 693–704.

Schön, D., Ystad, S., Kronland-Martinet, R., & Besson, M. (2009). The evocative power of sounds: Conceptual priming between words and nonverbal sounds. Journal of Cognitive Neuroscience, 22, 1026–1035.

Simons, R. F., & Lang, P. J. (1976). Psychophysical judgment: Electro-cortical and heart rate correlates of accuracy and uncertainty. Biological Psychology, 4, 51–64.

Squires, K. C., Squires, N. K., & Hillyard, S. A. (1975). Decision-related cortical potentials during an auditory signal detection task with cued observation intervals. Journal of Experimental Psychology: Human Perception and Performance, 1, 268–279.

Steinbeis, N., & Koelsch, S. (2008). Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns. Cerebral Cortex, 18, 1169–1178.

Tyler, L. K. (1984). The structure of the initial cohort: Evidence from gating. Perception and Psychophysics, 36, 417–427.

Van den Brink, D., Brown, C. M., & Hagoort, P. (2006). The cascaded nature of lexical selection and integration in auditory sentence processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 364–372.

Van Hooff, J. C. (2005). The influence of encoding intention on electrophysiological indices of recognition memory. International Journal of Psychophysiology, 56, 25–36.

Van Petten, C., & Rheinfelder, H. (1995). Conceptual relationships between spoken words and environmental sounds: Event-related brain potential measures. Neuropsychologia, 33, 485–508.

Walley, A. C., Michela, V. L., & Wood, D. R. (1995). The gating paradigm: Effects of presentation format on spoken word recognition by children and adults. Perception and Psychophysics, 57, 343–351.

West, W. C., & Holcomb, P. J. (2002). Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research, 13, 363–375.