Abstract

There is striking evidence showing that professional musical training can substantially alter the response properties of auditory-related cortical fields. Such plastic changes have previously been shown to abet not only the processing of musical sounds but also that of spectral and temporal aspects of speech. Here, we used EEG to measure a sample of musicians and nonmusicians while the participants were passively exposed to artificial vowels in the context of an oddball paradigm. We evaluated whether increased intracerebral functional connectivity between bilateral auditory-related brain regions may promote sensory specialization in musicians, as reflected by altered cortical N1 and P2 responses. This assumption builds on the reasoning that sensory specialization depends, at least in part, on the amount of synchronization between the two auditory-related cortices. Results clearly revealed that auditory-evoked N1 responses were shaped by musical expertise. In addition, in line with our reasoning, musicians showed an overall increased intracerebral functional connectivity (as indexed by lagged phase synchronization) in the theta, alpha, and beta bands. Finally, within-group correlative analyses indicated a relationship between intracerebral beta-band connectivity and cortical N1 responses, albeit only within the musicians' group. Taken together, we provide the first electrophysiological evidence for a relationship between musical expertise, auditory-evoked brain responses, and intracerebral functional connectivity among auditory-related brain regions.

INTRODUCTION

Currently, there is agreement that professional musical training favors functional and structural plastic changes in auditory-related brain regions. Such plastic changes can be found at the macroanatomical (Elmer, Hänggi, Meyer, & Jäncke, 2013; Bermudez, Lerch, Evans, & Zatorre, 2009; Schneider et al., 2005) as well as at the functional (Kühnis, Elmer, Meyer, & Jäncke, 2013b; Ellis et al., 2012; Elmer, Meyer, & Jäncke, 2012; Marie, Kujala, & Besson, 2012; Schneider et al., 2005) level and often correlate fairly well with the age of commencement of musical training (Pantev et al., 1998), the years of training (Musacchia, Sams, Skoe, & Kraus, 2007), or even the cumulative hours of training (Elmer et al., 2012). Therefore, it is not surprising that training-related changes in the auditory-related brain regions of musicians strengthen the ability to perceive or categorize musical sounds (Elmer, Klein, Kühnis, Liem, Meyer, & Jäncke, 2014; Meyer, Baumann, & Jäncke, 2006; Pantev, Roberts, Schulz, Engelien, & Ross, 2001) as well as temporal and spectral (Kühnis, Elmer, Meyer, & Jäncke, 2013a; Kühnis et al., 2013b; Elmer et al., 2012; Marie et al., 2012; Marie, Delogu, Lampis, Belardinelli, & Besson, 2011; Marie, Magne, & Besson, 2011) speech information. However, from a methodological point of view, it is important to differentiate between active (i.e., discrimination or categorization) tasks and passive listening paradigms. In fact, active tasks rely more strongly on the engagement of cognitive functions than passive ones do. This can be particularly problematic because musical training has been shown to influence cognitive functions such as attention and memory (Moreno & Bidelman, 2013; Strait & Kraus, 2013; Herholz & Zatorre, 2012; Besson, Chobert, & Marie, 2011; Moreno et al., 2011).
In addition, active tasks are often associated with an inverse relationship of brain activity in prefrontal and auditory-related brain regions (Elmer et al., 2012; Brechmann & Scheich, 2005), probably reflecting a down-tuning of activity in sensory cortices by increased cognitive load.

Auditory-evoked N1 and P2 responses have repeatedly been shown to constitute robust and reliable markers of electrical activity originating from primary and secondary sensory areas (Bosnyak, Eaton, & Roberts, 2004; Vaughan & Ritter, 1970). These brain responses are evoked at about 100 msec (N1) and 200 msec (P2) after stimulus onset, are characterized by a negative (N1) or positive (P2) deflection with maximal current distribution at central scalp sites (i.e., at electrode Cz), and show an inversion of polarity at lateral (mastoid) electrodes. Whereas the N1 response is thought to reflect the encoding of acoustic features (Näätänen & Picton, 1987), the P2 response seems rather to be associated with stimulus evaluation and classification processes (Bosnyak et al., 2004; Reinke, He, Wang, & Alain, 2003).

Meanwhile, there is general agreement that musical training has a profound influence on N1/P2 responses during both active (Marie, Magne, et al., 2011; Shahin, Bosnyak, Trainor, & Roberts, 2003) and passive (Seppanen, Hamalainen, Pesonen, & Tervaniemi, 2012; Baumann, Meyer, & Jäncke, 2008) listening conditions. The effects of intense musical training on auditory processing have been well documented in various cross-sectional (Baumann et al., 2008; Meyer et al., 2006; Pantev et al., 1998; Schlaug, Jäncke, Huang, & Steinmetz, 1995) and longitudinal (Chobert, Francois, Velay, & Besson, 2012; Hyde et al., 2009; Moreno et al., 2009) studies. Nevertheless, the intrinsic meaning of enhanced or reduced N1 amplitudes in musicians compared with nonmusicians is still a matter of debate. In fact, some studies reported larger N1 responses in musicians compared with nonmusicians (Ott, Langer, Oechslin, & Jäncke, 2011; Baumann et al., 2008; Kuriki, Kanda, & Hirata, 2006), whereas others did not reveal between-group differences (Lutkenhoner, Seither-Preisler, & Seither, 2006; Schneider et al., 2002) or reported reduced N1 amplitudes in musicians (Kühnis et al., 2013a; Seppanen et al., 2012). These inconsistencies are probably driven by the selection of professional musicians, the spectral complexity of the stimuli, and the electrode locations used for the analyses. By contrast, there is much more consistency among studies concerning the direction of P2 modulations as a function of training (i.e., increased or reduced amplitudes). In fact, P2 amplitudes in musicians are more likely to be enhanced than attenuated. On the basis of current knowledge, depressed auditory-evoked potentials (AEPs) are often interpreted as reflecting neuronal efficiency, whereas increased AEPs are thought to mirror an increased number of activated neurons or increased firing synchronicity.

Despite the vast body of evidence in the literature showing a modulation of the N1/P2 complex as a function of musical expertise, a fundamental research question has not yet been addressed by using EEG. In particular, to date it is completely unknown whether the differential N1/P2 amplitudes often observed in professional musicians are related to altered functional interhemispheric connectivity between bilateral auditory-related brain regions. However, there is at least some evidence showing that the musicians' advantage in processing speech cues, as well as functional changes in auditory-related brain regions, is related to increased white matter connectivity between the two auditory-related cortices (Elmer, Hänggi, & Jäncke, in preparation). Along similar lines, in the present work we assumed that increased functional connectivity between bilateral auditory-related brain regions might promote sensory specialization, as reflected by altered N1 and/or P2 responses. That the auditory cortices of both hemispheres interact with each other during auditory perception has been shown in several experiments (e.g., Sinai & Pratt, 2003). Here, we analyzed the phase coupling of electrical brain activity between the two auditory cortices. The advantage of using EEG for inferring interhemispheric functional coupling is that EEG (and MEG) permits the measurement of dynamic postsynaptic activity in the cerebral cortex with high temporal resolution. Thus, EEG is suitable for visualizing synchronization across different frequency bands in large-scale neural assemblies. Previous studies have suggested that coherent brain oscillations may play a pivotal role in coordinating and/or binding together different neural assemblies or brain areas (Scheeringa et al., 2011; Womelsdorf & Fries, 2007; Engel, Fries, & Singer, 2001).
These studies have also shown that coherent brain oscillations are associated with fast and efficient information exchange and enable the binding of neural information from different regions (Serrien, Pogosyan, & Brown, 2004; Varela, Lachaux, Rodriguez, & Martinerie, 2001; Rosen, Sherman, & Galaburda, 1989). In this study, we evaluated lagged phase synchronization between left and right auditory-related brain regions. This measure quantifies functional connectivity between different neural assemblies independently of volume conduction and thus represents the “true” physiological connectivity information (Pascual-Marqui et al., 2011). With this purpose in mind, we used EEG and measured a sample of professional string players and nonmusicians in the context of an oddball paradigm consisting of passively listening to spectrally manipulated vowels. A similar paradigm has previously been shown to elicit stronger preattentive brain responses in the auditory-related cortex of musicians compared with nonmusicians (Kühnis et al., 2013b).

In the present work, we focused on three specific hypotheses. First, based on previous work (Kühnis et al., 2013a; Seppanen et al., 2012; Marie, Magne, et al., 2011; Baumann et al., 2008; Pantev et al., 2001), we expected to find differential N1 and increased P2 amplitudes in musicians compared with nonmusicians. Second, based on our previous structural work (Elmer et al., 2013), we predicted increased functional connectivity between bilateral auditory-related brain regions in experts compared with nonexperts. Third, by combining the first two hypotheses, we postulated a relationship between functional connectivity patterns and maximal N1/P2 amplitudes within the musicians' group.

METHODS

Participants

Twenty-seven healthy volunteers (16 women and 11 men) with no past or current neurological, psychiatric, or neuropsychological problems participated in this study. All participants were native Swiss German or German speakers. The first group consisted of 14 professional string players (eight women and six men; primary musical instrument: six violinists, four violists, and four cellists; mean age = 24.6 years, SD = 6.2 years) who had commenced their musical training between the ages of 2.5 and 8 years (mean age of commencement = 6.03 years, SD = 1.57 years). The musicians we measured practiced their musical instrument on average for 21.6 hr per week (min = 4 hr, max = 60 hr, SD = 15.1 hr). The total number of estimated training hours across the life span in the musicians' group was on average 12,900 hr, and the musicians had an average duration of musical training of 18.6 years (SD = 6.2 years). The control group consisted of 13 volunteers without formal musical education (eight women, five men; mean age = 28.5 years, SD = 6.1 years). The control participants had not received any musical training apart from the obligatory flute lessons at school. All participants were paid for participation and gave informed written consent in accordance with the procedures approved by the local ethics committee. It is important to mention that the same participants had already been measured in the context of a previous study (Kühnis et al., 2013b); however, the data we report here have not previously been analyzed or published.

Behavioral Data

History of Musical Training

The history of musical training was assessed by using an in-house questionnaire (Elmer et al., 2012). This questionnaire was adopted to evaluate the age of commencement of musical training, the instruments played, and the estimated number of training hours across the life span. In particular, the participants estimated the total number of training hours they performed per day (and per week) in the following periods of life (age): 0–7, 8–10, 11–13, 14–16, etc.

Musical Aptitudes

The musical aptitudes of the participants were estimated by using the “Advanced Measure of Music Audiation” (AMMA) test (Gordon, 1989). This procedure is based on the assumption that holding musical sounds in memory and detecting melodic and rhythmic variations constitute a fundamental prerequisite for musical aptitude. During the AMMA test, the volunteers listened to short pairs of piano tone sequences and had to decide whether these sequences were equivalent, rhythmically different, or tonally different. Evaluation was based on a composite score of the pitch/rhythm subtests, and the dependent variable was accuracy.

Cognitive Capability

To exclude between-group differences in cognitive capability, we applied two short intelligence tests, namely the KAI (Kurztest der aktuellen geistigen Leistungsfähigkeit, i.e., short test of current cognitive capability; Lehrl & Fischer, 1992) and the MWT (Mehrfachwahl-Wortschatz-Intelligenz, i.e., verbal-lexical intelligence test; Lehrl, 1977). The KAI test measures fluid intelligence and is based on working memory and speed of information processing. The MWT quantifies crystallized intelligence and consists of word lists of increasing difficulty in which the participants have to identify the only real word among four pseudowords. We did not reveal significant differences between the two groups, a result indicating comparable cognitive capability in musicians and nonmusicians. Table 1 provides an overview of the biographical and behavioral data of the participants.

Table 1. 

Biographical and Behavioral Data of the Two Groups

Group   Age (years)       Sex              KAI (IQ)        MWT-B (IQ)      AMMA** (PR)
        Mean    SD        Male   Female    Mean    SD      Mean    SD      Mean    SD
M       24.6    6.24      6      8         129.3   9.0     114.1   11.4    77.9    8.76
NM      28.45   8.04      5      8         131.1   6.1     107.4   10.8    57.2    19.4

M = musicians; NM = nonmusicians.

**Significant difference between the two groups, F(1, 23) = 12.75, p = .002.

Stimulus Material

In this study, we used artificial German vowels created with the PRAAT software (www.fon.hum.uva.nl/praat/). By using the vowel editor, a tool of the PRAAT software, we generated a standard German /a/ vowel with a fundamental frequency (f0) of 122 Hz, a first formant (F1) of 680 Hz, and a second formant (F2) of 1100 Hz. In a successive processing step, we generated three different deviant levels by shifting F2 to 1200, 1300, and 1400 Hz, respectively. The vowels were steady-state signals with fixed formants. Stimulus duration was 300 msec; the relative probability of the standard was 50%, and that of each deviant was 16.7%. Furthermore, the stimuli were edited with a logarithmic fade-in/fade-out of 5 msec.
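The stimuli were created in PRAAT, but the underlying source-filter logic can be illustrated with a short Python sketch. The code below is a hypothetical reimplementation, not the PRAAT vowel editor procedure: an impulse train at f0 = 122 Hz is passed through cascaded second-order formant resonators at F1 and F2. The formant bandwidth (80 Hz), the sampling rate, and the raised-cosine fades (the actual stimuli used logarithmic fades) are assumptions.

```python
import numpy as np
from scipy import signal

def formant_resonator(freq, bw, fs):
    """Second-order IIR resonator centered at `freq` (Hz) with bandwidth `bw`."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2.0 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r ** 2]
    b = [1.0 - 2.0 * r * np.cos(theta) + r ** 2]   # unit gain at DC
    return b, a

def synth_vowel(f0=122.0, formants=(680.0, 1100.0), fs=16000, dur=0.300, fade=0.005):
    n = int(dur * fs)
    src = np.zeros(n)
    src[::int(round(fs / f0))] = 1.0               # glottal impulse train at f0
    out = src
    for f in formants:                             # cascade F1, F2 resonators
        b, a = formant_resonator(f, 80.0, fs)
        out = signal.lfilter(b, a, out)
    k = int(fade * fs)                             # 5-msec fade-in/fade-out
    ramp = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, k)))
    out[:k] *= ramp
    out[-k:] *= ramp[::-1]
    return out / np.abs(out).max()

# Standard /a/ (F2 = 1100 Hz) plus the three deviants with F2 shifted upward.
stimuli = {f2: synth_vowel(formants=(680.0, f2))
           for f2 in (1100.0, 1200.0, 1300.0, 1400.0)}
```

Shifting only the second entry of `formants` mirrors the stimulus manipulation described above: F2 varies across deviants while f0 and F1 remain fixed.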

Experimental Procedure

The participants sat in a comfortable chair at a distance of about 75 cm from a 19-in. monitor. During the EEG recordings, the participants watched a silent movie while the auditory stimuli were presented at a sound pressure level of 65 dB (Digital Sound Level Meter 329, Voltcraft) through Sennheiser in-ear HIFI headphones (Sennheiser CX-350, Colchester, Essex, UK). We used the so-called multifeature “optimum-1” paradigm previously proposed by Näätänen, Pakarinen, Rinne, and Takegata (2004). In this paradigm, standard and deviant vowels are presented in alternation, and each deviant differs from the previously presented deviant. The advantage of the “optimum-1” paradigm is that it enables the collection of a large number of brain responses in a short time period while reducing neuronal adaptation. The standard stimulus was presented 360 times, whereas each deviant was presented 120 times. Furthermore, to prevent expectancy effects, the interstimulus interval (ISI) was jittered between 600 and 900 msec. The whole experiment lasted 15 min. The presentation of the auditory stimuli was controlled by the Presentation software (www.neurobs.com; version 14.5).
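The trial structure described above (alternating standards and deviants, no immediately repeated deviant, ISI jittered between 600 and 900 msec) can be sketched as follows. The stimulus labels and the greedy sequencing strategy are our own illustrative choices, not the actual Presentation script.

```python
import random

def optimum1_trials(n_each=120,
                    deviants=("dev_F2_1200", "dev_F2_1300", "dev_F2_1400"),
                    isi_range=(0.600, 0.900), seed=7):
    """Build a trial list for a multifeature 'optimum-1' sequence: standards
    alternate with deviants, and no deviant immediately repeats itself."""
    rng = random.Random(seed)
    counts = dict.fromkeys(deviants, n_each)
    order, last = [], None
    for _ in range(n_each * len(deviants)):
        # Greedily draw among the most frequent remaining deviants (never the
        # one presented last); always picking a maximal type cannot get stuck.
        candidates = [d for d in deviants if counts[d] > 0 and d != last]
        top = max(counts[d] for d in candidates)
        last = rng.choice([d for d in candidates if counts[d] == top])
        counts[last] -= 1
        order.append(last)
    trials = []
    for dev in order:
        trials.append(("standard", rng.uniform(*isi_range)))  # jittered ISI (sec)
        trials.append((dev, rng.uniform(*isi_range)))
    return trials
```

With the defaults, this yields 360 standards and 3 × 120 deviants (720 stimuli in total), matching the presentation counts reported above.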

EEG Acquisition, Preprocessing, and Data Analysis

Continuous EEG (32 electrodes + 2 eye channels, provided by Easy Cap) was recorded with a sampling rate of 1000 Hz and a high-pass filter (0.1 Hz) by using an EEG amplifier (Brain Products, Munich, Germany). The sintered silver/silver chloride electrodes were located at frontal, temporal, parietal, and occipital scalp sites according to the international 10–10 system (Fp1, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T7, C3, Cz, C4, T8, TP9, TP7, CP3, CPz, CP4, TP8, TP10, P7, P3, Pz, P4, P8, O1, Oz, O2). The reference electrode was placed on the tip of the nose, and electrode impedance was reduced to <10 kΩ by using conductive electrode gel. For all preprocessing steps, we used the Brain Vision Analyser software (Version 2.01, Brain Products, Munich, Germany). Data were filtered offline from 1 to 30 Hz, and artifacts (i.e., eye movements and blinks) were removed by using independent component analysis (Jung et al., 2000) in combination with a semiautomatic raw data inspection.

For the AEP analyses, data were sectioned into segments of 500 msec (200 msec prestimulus to 300 msec poststimulus), and a baseline correction relative to the −200 to 0 msec prestimulus period was applied. AEPs (P1, N1, and P2) were calculated by averaging the single segments, separately for the standard and each of the three deviant stimuli. In addition, all deviants were averaged together, and a grand average was computed. This procedure served to define the time windows for peak detection. In fact, maximal P1, N1, and P2 amplitudes were labeled semiautomatically at electrode Cz, in a time window of 60 msec around the grand-averaged peaks, separately for the P1, N1, and P2 components. The labeled peaks were additionally confirmed by visual inspection. Electrode Cz was chosen for the analyses because it shows the most prominent amplitudes and best reflects activity originating from auditory-related brain regions (Bosnyak et al., 2004). For the connectivity analyses, the single baseline-corrected sweeps were additionally segmented into 300-msec periods by eliminating the prestimulus period.
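The segmentation, baseline correction, and semiautomatic peak labeling steps can be sketched as below. This is a simplified single-channel illustration; the actual analysis was performed in Brain Vision Analyser, with the labeled peaks confirmed visually.

```python
import numpy as np

def epoch_and_baseline(cont, onsets, fs=1000, pre=0.200, post=0.300):
    """Cut 500-msec epochs (-200 to +300 msec) around each stimulus onset
    (given as sample indices) and subtract the mean of the prestimulus part."""
    npre, npost = int(pre * fs), int(post * fs)
    epochs = np.stack([cont[o - npre:o + npost] for o in onsets])
    return epochs - epochs[:, :npre].mean(axis=1, keepdims=True)

def peak_in_window(erp, center_ms, fs=1000, pre_ms=200, half_win_ms=30, polarity=-1):
    """Label the extremum within +/-30 msec of a grand-average peak latency.
    polarity=-1 targets negative peaks (N1); polarity=+1 targets P1/P2."""
    c = int((pre_ms + center_ms) * fs / 1000)   # grand-average latency in samples
    h = int(half_win_ms * fs / 1000)
    seg = erp[c - h:c + h]
    i = int(np.argmax(polarity * seg))
    return seg[i], (c - h + i) * 1000.0 / fs - pre_ms  # amplitude, latency (msec)
```

The 60-msec search window (±30 msec around the grand-averaged peak) mirrors the procedure described above.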

eLORETA

Intracerebral source estimation was performed by using the eLORETA approach (publicly available free academic software at www.uzh.ch/keyinst/loreta.htm). In particular, the electric potential distribution over the scalp was used to infer the three-dimensional distribution of current density, with the solution space restricted to cortical gray matter. This procedure enables reliable estimation of ERP sources, albeit with low spatial resolution. The description of the method, together with the proof of its exact zero-error localization property, is presented in two previous articles (Pascual-Marqui, 2007, 2009). eLORETA source estimation was performed by using a realistic head model (Fuchs, Kastner, Wagner, Hawes, & Ebersole, 2002) and the MNI152 template (Mazziotta et al., 2001). The intracerebral volume of eLORETA consists of 6239 voxels, each with 5-mm spatial resolution. Anatomical labels are implemented as Brodmann's areas (BA) and reported within MNI space. A more detailed description of this method is available on the following Web page: www.uzh.ch/keyinst/NewLORETA/Methods/MethodsEloreta.htm.

eLORETA Functional Connectivity Analyses

Functional connectivity can be estimated by using objective EEG parameters. If two brain regions show similar brain activity that is related in a fixed manner, one may deduce that these brain areas are somehow functionally related. This inference rests on the reasoning that, in this case, the two brain regions do the same thing at the same time. In this context, phase coherence can be taken as a measure of the synchronization between two brain signals (for a detailed overview of functional connectivity, please refer to Sauseng & Klimesch, 2008). For the functional connectivity analyses, we selected two bilateral auditory-related ROIs, each consisting of BA 22, BA 41, and BA 42. The anatomical definitions of the BAs provided by the eLORETA software package are based on the Talairach Daemon (www.talairach.org/). For the connectivity analyses, a method using a single voxel at the centroid of each BA was chosen. This procedure is particularly appropriate because eLORETA has a low spatial resolution, which makes it difficult to separate two contiguous sources. Furthermore, the centroid voxel (the voxel closest to the center of the BA mass) is an excellent representative of the corresponding BA. For the statistical analyses, we evaluated functional connectivity between the centroids of the two bilateral ROIs. Lagged phase synchronization was used to measure functional connectivity between the two ROIs in the time period from stimulus onset to 300 msec poststimulus. Lagged phase synchronization measures the similarity (a corrected phase synchrony value) between signals in the frequency domain based on normalized (unit-modulus) Fourier transforms; thus, it is related to nonlinear functional connectivity (Canuet et al., 2011). Lagged phase synchronization values are unitless and lie in the range between 0 and 1.
An important feature of this connectivity measure is that it is independent of volume conduction and represents a “true” physiological connectivity measure (Lehmann, Faber, Gianotti, Kochi, & Pascual-Marqui, 2006). Details on the eLORETA connectivity algorithm can be found in Pascual-Marqui et al. (2011).
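For illustration, the lagged phase synchronization index can be sketched in a few lines of numpy. This is a simplified sensor-level sketch of the measure described by Pascual-Marqui (unit-modulus Fourier transforms with the zero-lag component partialled out), not the eLORETA implementation, which operates on intracerebral source signals.

```python
import numpy as np

def lagged_phase_sync(x, y, fs=1000.0, band=(13.0, 30.0)):
    """Squared lagged phase synchronization between two sets of trials.

    x, y: arrays of shape (n_trials, n_samples). Phases are taken from
    unit-modulus Fourier coefficients, so amplitude plays no role, and the
    zero-lag (real) component, which volume conduction inflates, is
    partialled out of the index.
    """
    n = x.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    fx = np.fft.rfft(x, axis=1)[:, sel]
    fy = np.fft.rfft(y, axis=1)[:, sel]
    fx /= np.abs(fx)                          # keep phase only (unit modulus)
    fy /= np.abs(fy)
    rho = (fx * np.conj(fy)).mean(axis=0)     # complex phase coherence per bin
    lps = rho.imag ** 2 / (1.0 - rho.real ** 2 + 1e-12)
    return float(lps.mean())                  # average over the frequency band
```

Because the instantaneous (real) component is removed, two signals that see the same source at zero lag (pure volume conduction) yield a value near 0, whereas a consistent nonzero phase lag across trials yields a value near 1.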

For each group, eLORETA functional connectivity was computed in three a priori selected frequency bands, namely theta (4–7 Hz), alpha (8–12 Hz), and beta (13–30 Hz), separately for standard and deviant stimuli. These frequency bands have frequently been shown to be involved in perception and attention processes (Keil, Müller, Hartmann, & Weisz, 2013; Lange, Christian, & Schnitzler, 2013; Vanneste, Song, & De Ridder, 2013; Wu et al., 2012). Furthermore, to reduce the number of statistical tests, we computed an average connectivity value of all stimuli for each frequency band.

Statistical Analyses

AEPs (maximal P1, N1, and P2 peaks) and connectivity values (lagged phase synchronization) were evaluated between the two groups by using repeated-measures ANOVAs. In addition, within each group Pearson's correlations between N1/P2 peaks and connectivity values (in the three frequency bands) were calculated (averaged N1/P2 peaks and mean connectivity values across all stimuli for each frequency band).

RESULTS

Behavioral Data

As expected, the musicians performed significantly better than the nonmusicians on the AMMA test, t(23) = 3.571, p = .002. Furthermore, we did not reveal significant between-group differences with respect to basic cognitive abilities, a result indicating comparable cognitive capability in musicians and nonmusicians. Table 1 provides an overview of the biographical and behavioral data of the participants.

P1/N1/P2 Peaks

Figure 1 shows the AEP waveforms separately for each group and stimulus type (i.e., one standard and three deviants). Before subjecting the data to statistical comparisons, we ensured that the P1/N1/P2 peaks were normally distributed within the two groups by using the Kolmogorov–Smirnov procedure, which indicated normal data distribution for all electrophysiological parameters. Between-group peak comparisons were performed by means of a 2 (Group) × 4 (Stimulus) repeated-measures ANOVA, separately for the N1 and P2 responses. The analysis of the N1 responses yielded a main effect of Group, F(1, 25) = 4.283, p = .049; ηp2 = .146. Furthermore, we revealed a statistical trend for the main effect of Stimulus, F(1, 25) = 3.686, p = .066; ηp2 = .128. The evaluation of the P2 magnitudes did not reach significance. The main effect of Group was associated with reduced N1 amplitudes in musicians compared with nonmusicians, whereas the main effect of Stimulus originated from less negative N1 amplitudes in response to the standard vowel in comparison with the deviants. The evaluation of the P1 response across the two groups did not yield a significant effect.

Figure 1. 

ERP waveforms in response to Standard (top panel), Deviant 100 (second panel), Deviant 200 (third panel), and Deviant 300 (bottom panel) are depicted at electrode C3 (left), Cz (middle), and C4 (right). Musicians are depicted in red; nonmusicians are depicted in black.

In an additional statistical analysis, we evaluated whether the main effect of Group may have originated from differential adaptation effects between experts and nonexperts in response to the standard stimulus. This is an important consideration for our work because a previous article reported differential adaptation processes between musicians and nonmusicians during a passive listening task (Seppanen et al., 2012). With this finding in mind, the AEPs in response to the standard were averaged separately for the first and second halves of the experiment, and an additional 2 (Group) × 2 (Time Period) repeated-measures ANOVA was computed for the N1 amplitudes evoked by the standard stimulus. This procedure revealed main effects of Time, F(1, 25) = 8.249, p = .008; ηp2 = .248, and Group, F(1, 25) = 6.856, p = .015; ηp2 = .215. The main effect of Group was associated with reduced N1 amplitudes in musicians compared with nonmusicians (Figure 2), whereas the main effect of Time originated from less negative N1 responses during the second half of the experiment. Because the Group × Time interaction did not reach significance, F(1, 1) = 0.463, p = .502, we can rule out that differential adaptation processes between the two groups influenced the results (see Figure 3).

Figure 2. 

N1 and P2 peak amplitudes for all stimuli and both groups. y axis = microvolts. dev = deviant; std = standard.

Figure 3. 

N1 and P2 peak amplitudes (μV) in response to the standard stimuli during the first and second half of the experiment.

eLORETA Functional Connectivity Analyses

In the present work, we focused on functional connectivity between bilateral auditory-related brain regions (i.e., BA 22, BA 41, and BA 42), as indexed by lagged phase synchronization in the theta, alpha, and beta bands. Before subjecting the data to statistical comparisons, we ensured that the data were normally distributed within the two groups by using the Kolmogorov–Smirnov procedure, which indicated normal data distribution. Functional connectivity was evaluated by means of a 2 (Group) × 4 (Stimulus) × 3 (Frequency Band) repeated-measures ANOVA. This analysis yielded main effects of Group, F(1, 25) = 11.505, p = .002; ηp2 = .315, Stimulus, F(1, 3) = 8.984, p = .006; ηp2 = .264, and Frequency Band, F(1, 25) = 1244.675, p < .001; ηp2 = .980, as well as a Stimulus × Frequency Band interaction, F(3, 2) = 7.349, p = .012; ηp2 = .227. All other effects did not reach significance. The main effect of Group originated from an overall increased connectivity in musicians compared with nonmusicians, whereas the main effect of Frequency Band was reflected by increased connectivity in the beta band. Figure 4 shows the connectivity values, separately for each group, frequency band, and stimulus type.

Figure 4. 

Lagged phase synchronization values for all stimuli and both groups in the theta (top), alpha (middle), and beta band (bottom). Phase synchronization is reported in the range between 0 and 1, unit less. Musicians = dark color; nonmusicians = light color.

Correlative Analyses between N1/P2 Peaks and Functional Connectivity Values

Because both the ERP and the connectivity analyses yielded a main effect of Group, correlations were computed by collapsing all stimuli (i.e., standard and deviants) together. The correlative analyses between N1/P2 peaks and connectivity values only reached significance within the musicians' group. In particular, we revealed a positive relationship between N1 amplitudes and connectivity in the beta band (r = .586, p = .028); because N1 amplitudes are negative, more positive values correspond to reduced amplitudes. All other correlations did not reach significance. In summary, within the musicians' group, increased beta connectivity was associated with reduced N1 amplitudes (see Figure 5).

Figure 5. 

Correlation between N1 amplitudes and lagged phase synchronization in the beta band in response to all stimuli (averaged). Musicians = dark diamonds; nonmusicians = light squares.

DISCUSSION

General Discussion

In the present work, we were specifically interested in answering two research questions, namely whether musicianship shapes functional connectivity between bilateral auditory-related brain regions and whether such connectivity patterns are related to auditory-evoked N1/P2 responses. With this purpose in mind, we adopted a fast passive oddball paradigm and measured a sample of musicians and nonmusicians by means of EEG. Such passive paradigms are particularly fruitful in that they permit the collection, in a very short time period, of brain responses that are not contaminated by top–down processes (Näätänen et al., 2004). According to current knowledge, N1/P2 responses can be used as reliable markers for measuring sensory encoding mechanisms at different auditory processing stages (Bosnyak et al., 2004). It is generally acknowledged that the N1 and P2 scalp potentials are generated by temporally and spatially distinct generators, that the medial territory of Heschl's gyrus constitutes one of the major sources of the N1 component, and that P2 responses are more strongly dependent on the recruitment of the auditory association cortex (Bosnyak et al., 2004).

The innovative aspect of our work is that we provide the first functional evidence for a generally increased interhemispheric connectivity between auditory-related brain regions (as indexed by intracerebral lagged phase synchronization) in musicians compared with nonmusicians. Most notably, within the musicians' group, N1 amplitudes were modulated by phase synchronization between bilateral auditory-related brain regions, particularly in the beta frequency range. Because such a relationship was not found within the control group, we interpret the results as suggesting that training-related changes in intracerebral functional connectivity may, at least in part, modulate AEPs. Different anatomical measures of interhemispheric connectivity, as well as interhemispheric BOLD response correlations, are by no means the same as interhemispheric coupling of electrical oscillations; in our view, coupled oscillations are a very strong argument for similar interhemispheric processes. Thus, our data provide additional insight into the differences between musicians and nonmusicians in terms of interhemispheric cooperation. To our knowledge, no published study has yet demonstrated that early AEP responses are so strongly related to interhemispheric coupling. This is, in our view, a new finding that sheds light on the expertise-related and dynamic nature of interhemispheric cooperation. In the following sections, we discuss these novel results in a more comprehensive manner by integrating current knowledge on AEPs, functional connectivity, and brain oscillations.

Group Differences in N1 Amplitudes

Currently, there is striking evidence showing that musical training has the potential to strongly modify the responsiveness of auditory-related brain regions during the encoding of a variety of acoustic features (Kühnis et al., 2013a, 2013b; Marie et al., 2012; Seppanen et al., 2012; Meyer et al., 2006; Shahin et al., 2003; Pantev et al., 1998, 2001). In line with previous EEG studies (Seppanen et al., 2012; Meyer et al., 2006; Shahin et al., 2003; Pantev et al., 1998, 2001), we revealed significant between-group differences in N1 amplitudes at central scalp sites. In particular, while listening to spectral speech sounds musicians showed reduced N1 amplitudes compared with nonmusicians. Given that AEP amplitudes are modulated by the number of activated neurons and by firing rate synchronicity, we interpret the N1 results as reflecting a more efficient encoding of vowels in musicians. Finally, our results also replicate previous findings consistently showing larger N1/P2 magnitudes in response to deviant as opposed to standard stimuli (Seppanen et al., 2012). This result is not surprising and has previously been replicated by several MMN studies (Seppanen et al., 2012). In this context, larger brain responses to deviants in comparison with standards are interpreted as reflecting sensory-memory traces related to infrequent events within auditory streams.

In a further analysis, we inspected whether the AEP amplitude differences between the two groups may have been driven by neuronal adaptation mechanisms. However, our data did not provide evidence for differential neuronal adaptation. Thus, although we cannot completely exclude that exposure time may have influenced the results in some direction, it is plausible to assume that expertise-specific adaptation processes do not primarily account for the AEP differences we revealed between the two groups. This result diverges from that previously reported by Seppanen et al. (2012), who showed that intracerebral N1 and P2 source activation in the auditory cortex was selectively decreased in musicians after 15 min of passive exposure to sounds. Several reasons may account for this discrepancy. First, Seppanen and coworkers observed significant adaptation effects in musicians between the first and second block of passive stimulation, each block lasting about 15 min. By contrast, our experiment had a total duration of 15 min, and adaptation effects were estimated by comparing the first with the second half of the stimulation period, each half lasting about 7.5 min. A second fundamental difference is that Seppanen and colleagues principally evaluated intracerebral AEP sources, whereas in the present work we focused on AEPs measured at the scalp surface. Thus, methodological differences between the two studies may account for the divergent results. This difference aside, our results replicate previous findings showing that musicianship influences the responsiveness of auditory-related cortical fields, as reflected by altered N1/P2 responses.

Group Differences in Interhemispheric Connectivity between Homologue Auditory-related Brain Regions

A main purpose of the present work was to evaluate whether musical expertise promotes functional interhemispheric connectivity between homologue auditory-related brain regions. This assumption is based on the reasoning that the functional specialization of auditory cortical fields, as well as the more efficient signal processing observed in musicians, may be related, at least in part, to an increased synchronization between the two auditory-related cortices. By taking into account the division of labor between the two hemispheres (Hickok & Poeppel, 2007; first principle) as well as previous work showing a functional specialization of the left and right auditory cortex in musicians during the processing of temporal (Elmer et al., 2012) and spectral (Kühnis et al., 2013b) speech cues (second principle), we assumed a relationship between these two principles and functional connectivity. This assumption is supported by previous work from our group (Elmer et al., 2013), which provided evidence for increased white matter connectivity between the planum temporale homologues of the two hemispheres in musicians compared with nonmusicians, as reflected by reduced radial diffusivity. In the same study, we also reported that radial diffusivity was related to performance in a phonetic categorization task, to musical aptitude, and to BOLD responses in the left planum temporale. Hence, this previous work provided first evidence for a relationship between structural interhemispheric connectivity among auditory-related brain regions, musicianship, and speech processing (for an opposite perspective, also consider Galaburda, Rosen, & Sherman, 1990; Rosen et al., 1989).

In line with our reasoning, we found an overall increased functional connectivity (as reflected by lagged phase synchronization) in musicians compared with nonmusicians in the theta, alpha, and beta frequency bands. In addition, both groups showed the strongest connectivity in the beta band, and theta band connectivity was more pronounced in response to deviants than to standards. The observation of increased theta band connectivity in response to deviants, irrespective of group affiliation, is in line with previous work of Hsiao, Wu, Ho, and Lin (2009), who used wavelet analyses to evaluate the phase (phase-locking values) and power characteristics of brain oscillations during passive auditory deviance detection (i.e., sine tones differing in duration in the context of an oddball paradigm). As a main result, the authors reported increased theta and alpha power as well as increased theta phase-locking in bilateral temporal regions in response to both standard and deviant tones. In addition, deviant stimuli elicited larger theta phase-locking values and power than standard ones. In a similar vein, Ko et al. (2012) provided evidence for increased event-related spectral perturbation and intertrial phase coherence in the theta band in response to deviant tones as compared with standard ones.
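
Lagged phase synchronization differs from the ordinary phase-locking value in that the instantaneous (zero-lag) part of the coupling, which is prone to volume-conduction artifacts, is partialled out (Pascual-Marqui, 2007). A minimal sketch of the computation, assuming the instantaneous phase time series of the two sources have already been extracted (everything here, including the signal parameters, is illustrative):

```python
import numpy as np

def lagged_phase_sync(phi_x, phi_y):
    """Lagged phase synchronization between two phase time series,
    following the definition of Pascual-Marqui (2007): only the
    imaginary (lagged) part of the mean phase-difference vector
    contributes, so purely zero-lag coupling is discounted.
    Undefined for perfectly instantaneous coupling (Re(c) = +/-1)."""
    # Complex phase synchronization: mean resultant of phase differences
    c = np.mean(np.exp(1j * (phi_x - phi_y)))
    # Keep the imaginary (lagged) part; remove the real (zero-lag) part
    return np.sqrt(np.imag(c) ** 2 / (1.0 - np.real(c) ** 2))

# Two oscillations coupled at a constant quarter-cycle lag -> strong lagged coupling
t = np.linspace(0, 1, 1000, endpoint=False)
phi_left = 2 * np.pi * 20 * t            # 20 Hz (beta band) phase
phi_right = phi_left + np.pi / 2         # constant 90-degree interhemispheric lag
print(round(lagged_phase_sync(phi_left, phi_right), 2))  # -> 1.0

# Independent random phases -> near-zero lagged coupling
rng = np.random.default_rng(0)
print(lagged_phase_sync(rng.uniform(0, 2 * np.pi, 1000),
                        rng.uniform(0, 2 * np.pi, 1000)) < 0.3)
```

In the actual analyses such phase series are derived per frequency band from the intracerebral source signals, but the normalization logic is the same.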

The strongest functional connectivity was found in both groups in the beta band. Although the intrinsic meaning of beta oscillations is far from being understood, there is at least some evidence showing that fast oscillations reflect different aspects of sensory information processing and support the processing of novel auditory stimuli (Haenschel, Baldeweg, Croft, Whittington, & Gruzelier, 2000; Pantev, 1995). Furthermore, beta phase-locking has previously been associated with the encoding and consolidation of sensory information (Bibbig, Faulkner, Whittington, & Traub, 2001). Because we also revealed increased beta band connectivity in musicians compared with nonmusicians, we speculate that increased bilateral connectivity in this specific frequency band may reflect sensory specialization as a function of musical expertise.

Theta band oscillations have previously been associated not only with memory (Bastiaansen, van Berkum, & Hagoort, 2002), working memory (Sarnthein, Petsche, Rappelsberger, Shaw, & von Stein, 1998), and learning processes (Klimesch et al., 2001), but also with language processing (Giraud et al., 2007; Hickok & Poeppel, 2007; Luo & Poeppel, 2007). In this context, it is assumed that theta (4–8 Hz) oscillations participate in segmenting the incoming speech signal into single units of about 200 msec, which corresponds approximately to the length of phonetic units (in our case vowels). In particular, the previous observation that theta band power does not differ before and during speech processing (Luo & Poeppel, 2007) suggests that the input signal causes phase resetting of intrinsic theta oscillations in the auditory cortex through stimulus–brain alignment mechanisms (Giraud & Poeppel, 2012). On the basis of our data, we propose that in professional musicians such stimulus–brain alignment may be adapted as a function of musical training, possibly also enabling a more efficient processing of spectral speech sounds (Kühnis et al., 2013b). This line of argumentation is consistent with the view that single tones can be understood as the basic units of musical processing, much like phonemes in speech processing. We may therefore speculate that theta oscillations contribute to the segmentation of the incoming musical signal into single units, become entrained as a function of musical training, and in turn promote speech processing as well. Certainly, further studies are necessary to better comprehend whether the increased theta band connectivity we revealed in musicians more likely reflects memory or general sensory processes.
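
The correspondence between theta cycles and ~200 msec phonetic units follows directly from the period of the band (period = 1/frequency). The short computation below makes this explicit; the band boundaries are the conventional values, which may differ slightly from those used in any given analysis:

```python
# Integration window implied by each band edge: period in msec = 1000 / frequency in Hz
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # conventional boundaries
windows = {name: (1000 / hi, 1000 / lo) for name, (lo, hi) in bands.items()}
print(windows["theta"])  # (125.0, 250.0): theta cycles span ~125-250 msec,
                         # bracketing the ~200 msec duration of a vowel-sized unit
```

By the same arithmetic, alpha and beta cycles are too short to span a whole vowel, which is one reason theta in particular is linked to syllable- and phoneme-scale segmentation.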

Finally, previous work has shown that alpha oscillations can temporally realign their phase during vowel processing in a task-dependent manner (Bonte, Valente, & Formisano, 2009). This work extends the findings on the functional contribution of alpha oscillations to inhibitory control (Klimesch, Sauseng, & Hanslmayr, 2007) by showing that the precise timing of alpha oscillations also promotes sensory speech processing. On the basis of the work of Bonte et al. (2009), we interpret the increased alpha band connectivity we revealed in musicians as reflecting a training-related tuning of bilateral auditory-related brain regions during speech processing.

Relationship between ERP Amplitudes and Functional Connectivity

In a last analysis, we addressed putative relationships between N1/P2 amplitudes and functional connectivity between auditory-related brain regions, separately for each group. Results revealed a positive relationship between N1 amplitudes and beta band synchronization, however only within the musicians' group. In particular, because the N1 is a negative deflection, increased beta band connectivity was associated with reduced N1 amplitudes.

On the basis of the generally increased functional connectivity values and altered N1 responses in musicians, as well as of the significant relationship between N1 responses and beta band connectivity, we propose that within the musicians' group auditory cortex activity is more strongly modulated by interhemispheric functional exchange. In particular, we propose that musicians rely more strongly on the division of labor between the two auditory-related cortices, probably promoting (at least in part) functional specialization. However, based on the present data, it is difficult to determine the specific role of beta band oscillations in modulating (exciting or inhibiting) auditory cortex activity. Certainly, further work is necessary to better comprehend the specific contribution of increased interhemispheric auditory-related cortex connectivity in musicians to functional specialization.

Limitations

In the present work, we focused only on theta, alpha, and beta oscillations for measuring interhemispheric intracerebral connectivity between auditory-related brain regions. However, we cannot exclude that other frequency bands likewise contribute to modulating auditory-related cortex activity as a function of expertise. In addition, it remains an open question how nesting relationships between the different frequency bands may promote functional specialization. Finally, we cannot exclude that the interhemispheric connectivity between auditory-related brain regions, as well as the relationship we revealed between beta band connectivity and N1 responses, was mediated by a third variable such as, for example, resting-state connectivity. Certainly, further work is necessary to address these open questions.

Conclusions

Here, we replicated previous findings showing that musicianship is associated with an altered responsiveness of auditory-related cortical fields, as revealed by reduced N1 amplitudes. The novelty of the present work is that we provide first evidence for an increased functional intracerebral connectivity between auditory-related brain regions in musicians while passively listening to vowels. Because significant correlations between functional connectivity and AEPs were only observed within the musicians' group, we propose that functional connectivity enables a more efficient division of labor between bilateral auditory-related brain regions and thereby promotes processing efficiency.

Acknowledgments

This research was supported by the Swiss National Science Foundation (SNF, grant no. 320030-120661 and grant no. 4-62341-05 to L. J.).

Reprint requests should be sent to Dr. Jürg Kühnis or Stefan Elmer, Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, CH-8050 Zurich, Switzerland, or via e-mail: j.kuehnis@psychologie.uzh.ch, s.elmer@psychologie.uzh.ch.

REFERENCES

Bastiaansen, M. C. M., van Berkum, J. J. A., & Hagoort, P. (2002). Event-related theta power increases in the human EEG during online sentence processing. Neuroscience Letters, 323, 13–16.
Baumann, S., Meyer, M., & Jäncke, L. (2008). Enhancement of auditory-evoked potentials in musicians reflects an influence of expertise but not selective attention. Journal of Cognitive Neuroscience, 20, 2238–2249.
Bermudez, P., Lerch, J. P., Evans, A. C., & Zatorre, R. J. (2009). Neuroanatomical correlates of musicianship as revealed by cortical thickness and voxel-based morphometry. Cerebral Cortex, 19, 1583–1596.
Besson, M., Chobert, J., & Marie, C. (2011). Transfer of training between music and speech: Common processing, attention, and memory. Frontiers in Auditory Cognitive Neuroscience, 2, 94.
Bibbig, A., Faulkner, H. J., Whittington, M. A., & Traub, R. D. (2001). Self-organized synaptic plasticity contributes to the shaping of gamma and beta oscillations in vitro. Journal of Neuroscience, 21, 9053–9067.
Bonte, M., Valente, G., & Formisano, E. (2009). Dynamic and task-dependent encoding of speech and voice by phase reorganization of cortical oscillations. Journal of Neuroscience, 29, 1699–1706.
Bosnyak, D. J., Eaton, R. A., & Roberts, L. E. (2004). Distributed auditory cortical representations are modified when nonmusicians are trained at pitch discrimination with 40 Hz amplitude modulated tones. Cerebral Cortex, 14, 1088–1099.
Brechmann, A., & Scheich, H. (2005). Hemispheric shifts of sound representation in auditory cortex with conceptual listening. Cerebral Cortex, 15, 578–587.
Canuet, L., Ishii, R., Pascual-Marqui, R. D., Iwase, M., Kurimoto, R., Aoki, Y., et al. (2011). Resting-state EEG source localization and functional connectivity in schizophrenia-like psychosis of epilepsy. PLoS One, 6, 1–10.
Chobert, J., Francois, C., Velay, J. L., & Besson, M. (2012). Twelve months of active musical training in 8- to 10-year-old children enhances the preattentive processing of syllabic duration and voice onset time. Cerebral Cortex, 24, 956–967.
Ellis, R. J., Norton, A. C., Overy, K., Winner, E., Alsop, D. C., & Schlaug, G. (2012). Differentiating maturational and training influences on fMRI activation during music processing. Neuroimage, 60, 1902–1912.
Elmer, S., Hänggi, J., & Jäncke, L. (in preparation). Interhemispheric transcallosal connectivity between the left and right planum temporale predicts musicianship, performance in temporal speech processing, and functional specialization.
Elmer, S., Hänggi, J., Meyer, M., & Jäncke, L. (2013). Increased cortical surface area of the left planum temporale in musicians facilitates the categorization of phonetic and temporal speech sounds. Cortex, 49, 2812–2821.
Elmer, S., Klein, C., Kühnis, J., Liem, F., Meyer, M., & Jäncke, L. (2014). Music and language expertise influence the categorization of speech and musical sounds: Behavioral and electrophysiological measurements. Journal of Cognitive Neuroscience, 26, 2356–2369.
Elmer, S., Meyer, M., & Jäncke, L. (2012). Neurofunctional and behavioral correlates of phonetic and temporal categorization in musically trained and untrained subjects. Cerebral Cortex, 22, 650–658.
Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: Oscillations and synchrony in top–down processing. Nature Reviews Neuroscience, 2, 704–716.
Fuchs, M., Kastner, J., Wagner, M., Hawes, S., & Ebersole, J. S. (2002). A standardized boundary element method volume conductor model. Clinical Neurophysiology, 113, 702–712.
Galaburda, A. M., Rosen, G. D., & Sherman, G. F. (1990). Individual variability in cortical organization—Its relationship to brain laterality and implications to function. Neuropsychologia, 28, 529–546.
Giraud, A. L., Kleinschmidt, A., Poeppel, D., Lund, T. E., Frackowiak, R. S. J., & Laufs, H. (2007). Endogenous cortical rhythms determine cerebral specialization for speech perception and production. Neuron, 56, 1127–1134.
Giraud, A. L., & Poeppel, D. (2012). Cortical oscillations and speech processing: Emerging computational principles and operations. Nature Neuroscience, 15, 511–517.
Gordon, E. E. (1989). Manual for the advanced measures of music education. Chicago: G.I.A. Publications, Inc.
Haenschel, C., Baldeweg, T., Croft, R. J., Whittington, M., & Gruzelier, J. (2000). Gamma and beta frequency oscillations in response to novel auditory stimuli: A comparison of human electroencephalogram (EEG) data with in vitro models. Proceedings of the National Academy of Sciences, U.S.A., 97, 7645–7650.
Herholz, S. C., & Zatorre, R. J. (2012). Musical training as a framework for brain plasticity: Behavior, function, and structure. Neuron, 76, 486–502.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.
Hsiao, F. J., Wu, Z. A., Ho, L. T., & Lin, Y. Y. (2009). Theta oscillation during auditory change detection: An MEG study. Biological Psychology, 81, 58–66.
Hyde, K. L., Lerch, J., Norton, A., Forgeard, M., Winner, E., Evans, A. C., et al. (2009). The effects of musical training on structural brain development: A longitudinal study. Neurosciences and Music III: Disorders and Plasticity, 1169, 182–186.
Jung, T. P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., & Sejnowski, T. J. (2000). Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology, 111, 1745–1758.
Keil, J., Müller, N., Hartmann, T., & Weisz, N. (2013). Prestimulus beta power and phase synchrony influence the sound-induced flash illusion. Cerebral Cortex, 24, 1278–1288.
Klimesch, W., Doppelmayr, M., Wimmer, H., Schwaiger, J., Rohm, D., Gruber, W., et al. (2001). Theta band power changes in normal and dyslexic children. Clinical Neurophysiology, 112, 1174–1185.
Klimesch, W., Sauseng, P., & Hanslmayr, S. (2007). EEG alpha oscillations: The inhibition-timing hypothesis. Brain Research Reviews, 53, 63–88.
Ko, D., Kwon, S., Lee, G. T., Im, C. H., Kim, K. H., & Jung, K. Y. (2012). Theta oscillation related to the auditory discrimination process in mismatch negativity: Oddball versus control paradigm. Journal of Clinical Neurology, 8, 35–42.
Kühnis, J., Elmer, S., Meyer, M., & Jäncke, L. (2013a). Musicianship boosts perceptual learning of pseudoword-chimeras: An electrophysiological approach. Brain Topography, 26, 110–125.
Kühnis, J., Elmer, S., Meyer, M., & Jäncke, L. (2013b). The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: An EEG study. Neuropsychologia, 51, 1608–1618.
Kuriki, S., Kanda, S., & Hirata, Y. (2006). Effects of musical experience on different components of MEG responses elicited by sequential piano-tones and chords. Journal of Neuroscience, 26, 4046–4053.
Lange, J., Christian, N., & Schnitzler, A. (2013). Audio-visual congruency alters power and coherence of oscillatory activity within and between cortical areas. Neuroimage, 79, 111–120.
Lehmann, D., Faber, P. L., Gianotti, L. R. R., Kochi, K., & Pascual-Marqui, R. D. (2006). Coherence and phase locking in the scalp EEG and between LORETA model sources, and microstates as putative mechanisms of brain temporo-spatial functional organization. Journal of Physiology-Paris, 99, 29–36.
Lehrl, S. (1977). Mehrfachwahl-Wortschatz-Intelligenz-Test (MWT-B). Testzentrale, Göttingen (1992).
Lehrl, S., & Fischer, B. (1992). Kurztest für allgemeine Basisgrössen der Informationsverarbeitung (KAI) (3. Aufl.). Ebersberg: Vless.
Luo, H., & Poeppel, D. (2007). Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron, 54, 1001–1010.
Lutkenhoner, B., Seither-Preisler, A., & Seither, S. (2006). Piano tones evoke stronger magnetic fields than pure tones or noise, both in musicians and nonmusicians. Neuroimage, 30, 927–937.
Marie, C., Delogu, F., Lampis, G., Belardinelli, M. O., & Besson, M. (2011). Influence of musical expertise on segmental and tonal processing in Mandarin Chinese. Journal of Cognitive Neuroscience, 23, 2701–2715.
Marie, C., Kujala, T., & Besson, M. (2012). Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds. Cortex, 48, 447–457.
Marie, C., Magne, C., & Besson, M. (2011). Musicians and the metric structure of words. Journal of Cognitive Neuroscience, 23, 294–305.
Mazziotta, J., Toga, A., Evans, A., Fox, P., Lancaster, J., Zilles, K., et al. (2001). A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM). Philosophical Transactions of the Royal Society, Series B, Biological Sciences, 356, 1293–1322.
Meyer, M., Baumann, S., & Jäncke, L. (2006). Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. Neuroimage, 32, 1510–1523.
Moreno, S., Bialystok, E., Barac, R., Schellenberg, E. G., Cepeda, N. J., & Chau, T. (2011). Short-term music training enhances verbal intelligence and executive function. Psychological Science, 22, 1425–1433.
Moreno, S., & Bidelman, G. M. (2013). Examining neural plasticity and cognitive benefit through the unique lens of musical training. Hearing Research, 308, 84–97.
Moreno, S., Marques, C., Santos, A., Santos, M., Castro, S. L., & Besson, M. (2009). Musical training influences linguistic abilities in 8-year-old children: More evidence for brain plasticity. Cerebral Cortex, 19, 712–723.
Musacchia, G., Sams, M., Skoe, E., & Kraus, N. (2007). Musicians have enhanced subcortical auditory and audiovisual processing of speech and music. Proceedings of the National Academy of Sciences, U.S.A., 104, 15894–15898.
Näätänen, R., Pakarinen, S., Rinne, T., & Takegata, R. (2004). The mismatch negativity (MMN): Towards the optimal paradigm. Clinical Neurophysiology, 115, 140–144.
Näätänen, R., & Picton, T. (1987). The N1 wave of the human electric and magnetic response to sound—A review and an analysis of the component structure. Psychophysiology, 24, 375–425.
Ott, C. G., Langer, N., Oechslin, M. S., & Jäncke, L. (2011). Processing of voiced and unvoiced acoustic stimuli in musicians. Frontiers in Psychology, 2, 195.
Pantev, C. (1995). Evoked and induced gamma-band activity of the human cortex. Brain Topography, 7, 321–330.
Pantev, C., Oostenveld, R., Engelien, A., Ross, B., Roberts, L. E., & Hoke, M. (1998). Increased auditory cortical representation in musicians. Nature, 392, 811–814.
Pantev, C., Roberts, L. E., Schulz, M., Engelien, A., & Ross, B. (2001). Timbre-specific enhancement of auditory cortical representations in musicians. NeuroReport, 12, 169–174.
Pascual-Marqui, R. D. (2007). Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: Exact, zero error localization [online]. Retrieved from arxiv.org/abs/0710.3341.
Pascual-Marqui, R. D. (2009). Theory of the EEG inverse problem. In S. Tong & N. V. Thakor (Eds.), Quantitative EEG analysis: Methods and clinical applications (pp. 121–140). Boston: Artech House.
Pascual-Marqui, R. D., Lehmann, D., Koukkou, M., Kochi, K., Anderer, P., Saletu, B., et al. (2011). Assessing interactions in the brain with exact low-resolution electromagnetic tomography. Philosophical Transactions of the Royal Society, Series A, Mathematical Physical and Engineering Sciences, 369, 3768–3784.
Reinke, K. S., He, Y., Wang, C. H., & Alain, C. (2003). Perceptual learning modulates sensory evoked response during vowel segregation. Cognitive Brain Research, 17, 781–791.
Rosen, G. D., Sherman, G. F., & Galaburda, A. M. (1989). Interhemispheric connections differ between symmetrical and asymmetrical brain-regions. Neuroscience, 33, 525–533.
Sarnthein, J., Petsche, H., Rappelsberger, P., Shaw, G. L., & von Stein, A. (1998). Synchronization between prefrontal and posterior association cortex during human working memory. Proceedings of the National Academy of Sciences, U.S.A., 95, 7092–7096.
Sauseng, P., & Klimesch, W. (2008). What does phase information of oscillatory brain activity tell us about cognitive processes? Neuroscience and Biobehavioral Reviews, 32, 1001–1013.
Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., et al. (2011). Neuronal dynamics underlying high- and low-frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572–583.
Schlaug, G., Jäncke, L., Huang, Y. X., Staiger, J. F., & Steinmetz, H. (1995). Increased corpus-callosum size in musicians. Neuropsychologia, 33, 1047–1055.
Schlaug, G., Jäncke, L., Huang, Y. X., & Steinmetz, H. (1995). In-vivo evidence of structural brain asymmetry in musicians. Science, 267, 699–701.
Schneider, P., Scherg, M., Dosch, H. G., Specht, H. J., Gutschalk, A., & Rupp, A. (2002). Morphology of Heschl's gyrus reflects enhanced activation in the auditory cortex of musicians. Nature Neuroscience, 5, 688–694.
Schneider, P., Sluming, V., Roberts, N., Scherg, M., Goebel, R., Specht, H. J., et al. (2005). Structural and functional asymmetry of lateral Heschl's gyrus reflects pitch perception preference. Nature Neuroscience, 8, 1241–1247.
Seppanen, M., Hamalainen, J., Pesonen, A. K., & Tervaniemi, M. (2012). Music training enhances rapid neural plasticity of N1 and P2 source activation for unattended sounds. Frontiers in Human Neuroscience, 6, 1–13.
Serrien, D. J., Pogosyan, A. H., & Brown, P. (2004). Cortico-cortical coupling patterns during dual task performance. Experimental Brain Research, 157, 79–84.
Shahin, A., Bosnyak, D. J., Trainor, L. J., & Roberts, L. E. (2003). Enhancement of neuroplastic P2 and N1c auditory evoked potentials in musicians. Journal of Neuroscience, 23, 5545–5552.
Sinai, A., & Pratt, H. (2003). High-resolution time course of hemispheric dominance revealed by low-resolution electromagnetic tomography. Clinical Neurophysiology, 114, 1181–1188.
Strait, D. L., & Kraus, N. (2013). Biological impact of auditory expertise across the life span: Musicians as a model of auditory learning. Hearing Research, 308, 109–121.
Vanneste, S., Song, J. J., & De Ridder, D. (2013). Tinnitus and musical hallucinosis: The same but more. Neuroimage, 82, 373–383.
Varela, F., Lachaux, J. P., Rodriguez, E., & Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2, 229–239.
Vaughan, H. G., & Ritter, W. (1970). Sources of auditory evoked responses recorded from human scalp. Electroencephalography and Clinical Neurophysiology, 28, 360–367.
Womelsdorf, T., & Fries, P. (2007). The role of neuronal synchronization in selective attention. Current Opinion in Neurobiology, 17, 154–160.
Wu, J. J., Zhang, J. S., Liu, C., Liu, D. W., Ding, X. J., & Zhou, C. L. (2012). Graph theoretical analysis of EEG functional connectivity during music perception. Brain Research, 1483, 71–81.

Author notes

* Jürg Kühnis and Stefan Elmer contributed equally to this work.