Abstract

Complex auditory exposures in ambient environments include systems of not only linguistic but also musical sounds. Because musical exposure is often passive, consisting of listening rather than performing, examining listeners without formal musical training allows for the investigation of the effects of passive exposure on our nervous system without active use. Additionally, studying listeners who have exposure to more than one musical system allows for an evaluation of how the brain acquires multiple symbolic and communicative systems. In the present fMRI study, listeners who had been exposed since childhood to Western music only (monomusicals) or to both Indian and Western musical systems (bimusicals), and who did not have significant formal musical training, made tension judgments on Western and Indian music. Significant group by music interactions in temporal and limbic regions were found, with effects predominantly driven by between-music differences in temporal regions in the monomusicals and by between-music differences in limbic regions in the bimusicals. Effective connectivity analysis of this network via structural equation modeling (SEM) showed significant path differences across groups and music conditions, most notably a higher degree of connectivity and larger differentiation between the music conditions within the bimusicals. SEM was also used to examine the relationships among the degree of music exposure, affective responses, and activation in various brain regions. Results revealed a more complex behavioral–neural relationship in the bimusicals, suggesting that affective responses in this group are shaped by multiple behavioral and neural factors. These three lines of evidence suggest a clear differentiation of the effects of exposure to one versus multiple musical systems.

INTRODUCTION

Our nervous system routinely handles two or more similar, sometimes competing symbolic, communicative, and skill systems, which enables us to perform effectively under different functional and social contexts with different sets of expectations, including affective responses. A prominent example of such dual representation is bilingualism, the ability to speak (or use) two or more languages, a skill that many people in the world possess. A less familiar example of such dual representation is bimusicalism (Wong, Roy, & Margulis, 2009), in which early exposure to more than one musical culture, often in the absence of explicit instruction and active use, results in measurable cognitive and affective sensitivities to music of both cultures. The present study examines the neurophysiological mechanisms underlying bimusicalism. Specifically, we draw from two hypotheses in earlier bilingualism research (Grosjean, 1989) and ask whether the bimusical brain is represented by two separate and isolable musical systems (the fractional view) or whether it is qualitatively different from a monomusical brain (the holistic view).

Bimusicalism research is only in its infancy. Although extensive recent research has focused on relationships between the brain and music because of a variety of intrinsic and extrinsic factors, including the effects of musical training (e.g., Wong, Skoe, Russo, Dees, & Kraus, 2007; Gaser & Schlaug, 2003), sensitivity to musical syntactic properties (Patel, 2003), and affective processing (e.g., Blood & Zatorre, 2001), none of these studies considered the question of dual representations. Several cross-cultural music studies have been conducted (e.g., Demorest et al., 2010; Nan, Knosche, & Friederici, 2009), but their goal was not to address how exposure to two systems may fundamentally alter neural processing. In our previous behavioral study on bimusicalism (Wong, Roy, et al., 2009), monomusical (Western-only and Indian-only) and bimusical (Indian–Western) listeners were presented with Western and Indian music and asked to perform recognition memory and tension judgment tasks as measures of cognitive and affective processing, respectively. Although an in-culture advantage/sensitivity was observed in the monomusical groups, bimusical listeners did not differentiate their responses to music of the two cultures, suggesting equal sensitivity to both types of music. However, it is unknown whether neural representations of in-culture music are similar across bimusical and monomusical subjects and whether and how the bimusical brain differentiates the two types of music that are culturally relevant to it. For the present study, we are especially interested in affective responses to music as a result of having acquired multiple musical systems.

In bilingualism, a subject closely related to bimusicalism, a consensus has emerged, advocating that bilinguals are not simply two monolinguals in one, supporting the holistic view (Ng & Wigglesworth, 2007; Cutler, Mehler, Norris, & Segui, 1992; Grosjean, 1989). The most relevant aspect of this proposal for our study is that the neural processing of the same native language by a monolingual and a bilingual is not necessarily identical. However, one fundamental difference between bimusicals and bilinguals is that, for individuals without formal musical training, neural representation of music is predominantly a result of passive exposure without extensive music making and performance throughout life, whereas for most bilinguals, active use is normally a critical aspect. With such important distinctions, questions arise as to whether findings from bilingualism can be applied directly to bimusicalism. Furthermore, it is not clear whether factors such as age of acquisition and level of proficiency influence the bimusical brain in the way that they do the bilingual brain. Together with bilingualism research, bimusicalism research can assist in formulating a comprehensive theory regarding the acquisition of skills and symbolic and communicative systems across neural systems and sensory domains. For example, do issues of interactive memory systems and the competition of symbolic systems apply equally across all domains (Hernandez, Li, & MacWhinney, 2005)?

In the present study, we test the holistic and fractional hypotheses of bimusicalism by measuring hemodynamic responses to Western and Indian music by monomusical (Western-only) and bimusical (Indian–Western) subjects while they perform a tension judgment task in the fMRI environment. This tension judgment task has been used in the music perception literature for measuring affective responses (Lerdahl & Krumhansl, 2007; Margulis, 2007), and is a task that we used in our bimusical behavioral study (Wong, Roy, et al., 2009). Because we expect bimusicalism to have an effect on affective processing, the tension judgment task is used again here. The holistic hypothesis would predict differences in how bimusical subjects process the two types of music and how both groups process Western music, which is culturally familiar to both groups. Furthermore, the holistic hypothesis would predict that the neural and behavioral factors influencing brain responses differ in the two subject groups.

METHODS

Subjects

Subjects were two groups of younger adults who were attending Northwestern University at the time of participation. None of the subjects had received more than 3 years of formal musical training (mean years of training = 0.71, SD = 0.95). Table 1 lists the years of formal music training for each subject in each group. Eleven monomusical subjects (six women, mean age = 24.9) reported exposure to only Western music and no exposure to Indian music at any point in life. Eleven bimusical subjects (five women, mean age = 23.6) reported exposure to both Western and Indian music. Bimusical subjects reported the relative percentages of time they listened to Western, Indian classical, Indian pop, and other (if any) music genres while growing up. Table 2 details their music exposure history. All subjects underwent and passed an air conduction audiological screening at 30 dB hearing level between the frequencies of 500 and 4000 Hz using a standard clinical audiometer. This study was approved by the Northwestern University Institutional Review Board. All subjects gave their informed consent before their inclusion in the study.

Table 1. 

Years of Formal Music Training of Subjects


Monomusical Subject
Bimusical Subject
Subject 01 
Subject 02 
Subject 03 
Subject 04 
Subject 05 
Subject 06 
Subject 07 
Subject 08 
Subject 09 
Subject 10 
Subject 11 

Table 2. 

Bimusical Subjects Reported Their Estimates of Relative Percentage of Time that They Listened to Each of the Four Musical Genres while Growing Up


Genre             0–12 Years            12 Years to Present
Western           15.73 (SD = 19.0)     41.36 (SD = 26.1)
Indian classical  15.93 (SD = 18.7)     13.91 (SD = 12.5)
Indian pop        58.80 (SD = 25.5)     42.23 (SD = 18.2)
Others             9.55 (SD = 16.6)      2.50 (SD = 4.3)


Listening time sums to 100% within each age range.

Behavioral Task

All subjects performed a tension judgment task inside the scanner by evaluating the tension evoked by Western and Indian musical stimuli. A subset of melodies taken from Wong, Roy, et al. (2009) was used for this experiment. To summarize, monophonic Western and Indian melodies were composed especially for these experiments. Western melodies were written in the major and minor scales, and Indian melodies were written in the Bhairav and Todi scales. Excerpts were matched across culture for tempo, meter, and key. For the current study, only melodies without syntactic violations were chosen. Half of the melodies from each culture were played on the sitar, and the other half were played on the piano. This eliminated timbre, a surface characteristic, as a distinguishing feature, requiring subjects to focus on syntactic differences.

Subjects were asked to respond continually to the perceived musical tension of the musical stimuli. Musical tension was selected as the task because it provides a measure that does not require previous explicit music instruction yet still affords nonmusicians a degree of objectivity in their responses. Before scanning, subjects were given a short definition of musical tension that identified it with the stability (less tense) or instability (more tense) associated with a moment in music; this explanation ensured that subjects followed similar guidelines in rating musical tension. Unlike our behavioral experiment (Wong, Roy, et al., 2009), in which tension was rated once at the end of each musical stimulus, subjects here judged tension continuously, in line with numerous related studies (e.g., Margulis, 2007), to encourage sustained engagement; tension data, however, were collected only at the end of each musical stimulus. Subjects indicated tension by pressing one of two buttons that caused a bar on the screen to shrink or grow for lower and higher tension, respectively.
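The two-button rating mechanic can be sketched as follows; the step size and 0–100 bounds are illustrative assumptions rather than reported parameters:

```python
def update_tension(level, button, step=1, lo=0, hi=100):
    """Adjust a running tension rating from a two-button response.

    "+" grows the on-screen bar (higher tension); "-" shrinks it
    (lower tension). The rating is clamped to [lo, hi]; the value at
    stimulus offset is the recorded tension judgment.
    """
    if button == "+":
        level += step
    elif button == "-":
        level -= step
    return max(lo, min(hi, level))

# Simulate a listener pressing "+" five times, then "-" twice
level = 50
for press in "+++++--":
    level = update_tension(level, press)
```

Only the final `level` at stimulus offset would be retained, matching the end-of-stimulus data collection described above.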

Musical stimuli were played in groups of two followed by a rest period of 8 sec. Each musical stimulus ended with a variable-length pause of 1–3 sec to make the entire clip an even length and to signal to the subject that the stimulus was over. Each group of two musical stimuli always consisted of similar stimuli (e.g., Indian melodies on the sitar or Western melodies on the sitar) to create a block for a blocked-design fMRI analysis. There were 32 individual musical stimuli (four groups of stimuli) that were repeated four times, with an average length of 15 sec each (16 sec with the variable rest). Including the rest periods, this yielded a total scan length of 1357 repetition times (TRs). Stimuli were presented binaurally via headphones custom made for fMRI environments (Avotec, Stuart, FL) and delivered with Matlab scripts.

fMRI Acquisition

Structural and functional MR images were acquired at the Center for Advanced MRI in the Department of Radiology at Northwestern University using a 3T Siemens Trio scanner. A high-resolution T1-weighted image was acquired axially and used to localize the functional activation maps (MP-RAGE; repetition time [TR]/echo time [TE] = 2100 msec/2.4 msec, flip angle = 8°, inversion time [TI] = 1100 msec, matrix size = 256 × 256, field of view [FOV] = 22 cm, slice thickness = 1 mm). T2*-weighted functional images were acquired axially along the AC–PC plane using a susceptibility-weighted EPI pulse sequence while subjects performed the behavioral task (TE = 20 msec, TR = 2 sec, flip angle = 90°, in-plane resolution = 3.4375 × 3.4375 mm; 38 slices with a slice thickness of 3 mm and zero gap, acquired in an interleaved order).

Basic fMRI Analyses

Basic fMRI processing procedures were similar to our published studies (e.g., Wong, Jin, et al., 2009; Wong, Uppunda, Parish, & Dhar, 2008). The time series were taken through preprocessing steps that included motion correction, slice-scan time correction, linear detrending, temporal filtering, and spatial smoothing (FWHM = 6 mm). Anatomical and functional images were also transformed into Talairach stereotaxic space to facilitate group-level comparisons (Talairach & Tournoux, 1988). Two regressors corresponding to the Western and Indian music conditions were created; these were HRF-convolved functions reflecting the TRs belonging to the two music conditions. The regressors were entered into a general linear model (GLM) to fit the preprocessed fMRI signal from each voxel. Normalized regression coefficients reflecting percent signal change for the two conditions from each subject were used for additional analyses.
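The regressor construction and voxel-wise fit can be sketched as follows; the double-gamma HRF parameters, condition onsets, and scan length are illustrative assumptions, not the study's actual design:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF sampled at times t (seconds)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, under)

TR = 2.0                       # repetition time, as reported
n_trs = 200                    # illustrative scan length

# Boxcars marking the TRs belonging to each music condition
western = np.zeros(n_trs); western[10:18] = 1; western[60:68] = 1
indian = np.zeros(n_trs); indian[30:38] = 1; indian[90:98] = 1

# Convolve each boxcar with the HRF to form the condition regressors
h = hrf(np.arange(0.0, 32.0, TR))
X = np.column_stack([
    np.convolve(western, h)[:n_trs],
    np.convolve(indian, h)[:n_trs],
    np.ones(n_trs),            # intercept
])

# Fit the GLM to a synthetic voxel time series by least squares
rng = np.random.default_rng(0)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, n_trs)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted `betas` play the role of the regression coefficients that, after normalization, reflect percent signal change per condition.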

Structural Equation Modeling

We used structural equation modeling (SEM) to perform two types of analyses. First, SEM was used to assess effective connectivity among brain regions. Second, it was used to assess interdependencies among neural and behavioral factors.

There were several steps in performing SEM to assess effective connectivity among brain regions. Step 1 consisted of identifying the nodes of the model to be tested. Identification of the nodes of the network was based on the clusters that showed a significant Group × Music type interaction effect in the GLM, with regions and connections (direct and indirect) that form a realistic neuroanatomical model (Vogt, Nimchinsky, Vogt, & Hoi, 1995; Vogt & Pandya, 1987; Vogt, Pandya, & Rosene, 1987). A 5-mm³ ROI was drawn, and average percent signal changes were calculated for each node. Step 2 involved estimating the error variances of each node in each condition of each subject group and entering these initially estimated error variances as fixed values into the individual subject's model estimation. Fixing error variance can reduce the number of free parameters in model estimation (Bullmore et al., 2000; McIntosh et al., 1994). The resulting estimated path coefficients (standardized linear regression weights) represent the expected change in the activity of one region given a unit of change in the activity of the region influencing it (Bollen, 1989). A t value was then calculated for each path to determine its statistical significance. Because these t values were normalized across groups and conditions, they were used for further analysis. For each path, a 2 × 2 (Group × Music condition) repeated measures ANOVA was conducted on these t values to evaluate main and interaction effects, followed by post hoc analyses.
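At its core, each path estimate is a regression weight divided by its standard error. The following minimal sketch estimates one path in isolation from simulated signals; in the actual SEM, all paths are estimated jointly and error variances are fixed in advance, so this is an illustration of the t statistic, not the full procedure:

```python
import numpy as np

def path_t(source, target):
    """Estimate a single path coefficient (target ~ source) by OLS and
    return (beta, t), where t = beta / SE(beta)."""
    x = np.column_stack([source, np.ones_like(source)])
    beta, *_ = np.linalg.lstsq(x, target, rcond=None)
    resid = target - x @ beta
    dof = len(target) - x.shape[1]
    sigma2 = resid @ resid / dof            # residual variance
    cov = sigma2 * np.linalg.inv(x.T @ x)   # coefficient covariance
    se = np.sqrt(cov[0, 0])
    return beta[0], beta[0] / se

# Simulated node signals: one region driving another
rng = np.random.default_rng(1)
stg = rng.normal(size=100)
amygdala = 0.6 * stg + rng.normal(scale=0.5, size=100)
b, t = path_t(stg, amygdala)
```

Because such t values are normalized across groups and conditions, they can be carried into the 2 × 2 repeated measures ANOVA described above.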

SEM was also performed to assess potential group differences in interdependencies among neural and behavioral factors. We selected two regions that showed the strongest interaction effects in our GLM (right superior temporal gyrus [STG] and right amygdala), as well as the cingulate gyrus, which is presumed to connect these two regions. Methods of extracting brain activity for the three brain nodes were the same as described in the previous paragraph. The behavioral measures of self-reported music experience during early life and tension rating during fMRI scans were included. Because the main purpose of this analysis is to observe potential group differences, model estimation was performed in each subject group. Note that both music types were included within the model for each subject group. For each subject group, each subject contributed two data points for each node. For brain activation and tension rating, these two data points were drawn from their responses in each music listening condition. For music experience, the two data points came from their self-reported music experience (see Table 2 for bimusical subjects' responses; 0% and 100% for Indian and Western music, respectively, were used for monomusical subjects). Connectivity among these nodes was specified under the following assumptions. Connections between the right STG and amygdala were bidirectional and via the cingulate gyrus. Self-reported music experience (Table 2) is assumed to shape brain activation and tension ratings (but self-reported music experience cannot be shaped by these factors). In contrast, tension rating can only be shaped by brain activation and music experience. Because behavioral measures were included in path estimations and only one data point exists for each condition for each subject, individual subject model estimation could not be performed and only group estimation was conducted. Subsequent statistical procedures followed those described above.
Although the two effective connectivity analyses (Figures 3 and 4) were both based on SEM, there were also differences as described above. The most obvious differences were the number of brain regions involved and the fact that nonbrain factors were included in the equation for the latter analysis. As a result, the degrees of freedom and the way in which variance was estimated were also different.

RESULTS

Behavioral Performance

Subjects' tension ratings collected inside the scanner were entered into a Group × Music repeated measures ANOVA. We found a main effect of Group [F(1, 20) = 4.85, p = .04], with the (Western) monomusical group's overall ratings higher than the bimusical group's. In our previous behavioral study, we found lower tension responses to in-culture music (Wong, Roy, et al., 2009). Because both types of music were culturally familiar to the bimusical subjects, this main effect was expected. We also found a significant interaction [F(1, 20) = 9.50, p = .006]. Post hoc t tests revealed that, whereas the bimusical group's Western and Indian music ratings did not differ, consistent with a bimusical profile, the monomusical group rated the Western (in-culture) music as less tense [t(10) = 2.67, p = .024]. There was no main effect of Music. Figure 1 summarizes the results. These behavioral results are consistent with our previous behavioral study, even though a continuous-response digital interface dial (Gerringer, 2003) was used for judging tension in a quiet room in the previous study, whereas traditional response buttons were used in a noisy MRI environment in the current study.
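The post hoc within-group comparison is an ordinary paired t test; a sketch using made-up ratings (not the study's data) for a monomusical-like profile, in which Indian music is rated more tense than Western music:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean tension ratings (n = 11 per condition)
rng = np.random.default_rng(2)
western = rng.normal(4.0, 0.5, 11)            # in-culture: less tense
indian = western + rng.normal(0.8, 0.3, 11)   # out-of-culture: more tense

# Paired (repeated measures) t test across the 11 subjects
t, p = stats.ttest_rel(indian, western)
```

A bimusical-like profile would correspond to a difference near zero and a nonsignificant paired t test.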

Figure 1. 

Results of subjects' tension judgment during fMRI scanning. A significant interaction and a main effect of Group were found. Post hoc t tests showed that, whereas the monomusical group rated Indian music to be more tense, the bimusical group did not differ in their ratings of the two types of music. Moreover, the two groups of subjects did not differ in their rating of Western music. Mono = monomusical group, Bi = bimusical group, W = Western music, I = Indian music, ns = not significant.


As shown in Table 2, bimusical subjects were asked to fill out a questionnaire detailing their exposure to Indian classical music and Western music at various ages. For our bimusical subjects, a significant negative correlation between tension rating and early exposure (0–12 years) to Indian classical music was found [Spearman's rho = −.594, p = .027], which is also consistent with our earlier finding that early exposure to music results in a relative decrease in the experience of musical tension (Wong, Roy, et al., 2009). None of our monomusical subjects reported exposure to any type of music other than Western music early in life.
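This exposure–tension relationship was assessed with a rank correlation; a sketch with hypothetical values (the 11 exposure percentages and ratings below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical early (0-12 years) % exposure to Indian classical music
# and mean tension ratings for 11 bimusical subjects
exposure = np.array([5, 10, 12, 15, 18, 20, 25, 30, 40, 55, 60])
rating = np.array([4.8, 4.6, 4.7, 4.2, 4.3, 4.0, 3.9, 3.5, 3.4, 3.0, 2.8])

# Spearman's rho: Pearson correlation on the ranks of both variables
rho, p = stats.spearmanr(exposure, rating)
```

A strongly negative rho mirrors the reported pattern: more early exposure, lower tension ratings.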

fMRI Results

We conducted three sets of analyses on our fMRI data. First, we conducted a voxel-wise mixed effect ANOVA, with a particular focus on identifying significant Group × Music interaction effects (how mono- and bimusical subjects may distinguish Western and Indian music differently). Second, we assessed potential group differences at the network level by determining the causal relationships among brain regions (effective connectivity) using SEM. Third, we considered behavioral–neural interdependency, especially pertaining to how a tension response emerges, by testing a structural equation model for each group that included both behavioral and brain responses as variables.

Voxel-wise ANOVA

After image preprocessing as described in the Methods section, fMRI data were entered into a mixed-effect ANOVA with Music condition (Indian or Western) as a fixed factor and Subject group as a random factor. Table 3 reports the significant results (corrected p < .05), including main effects and significant interactions. Statistical significance was determined by a Monte Carlo simulation with family-wise (whole-brain) alpha set at .05, which corresponds to a single-voxel p value of .005 with a minimum cluster extent of 659 mm³.
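The cluster-extent step amounts to labeling contiguous suprathreshold voxels and discarding clusters below the minimum volume. A sketch of that counting step, using the reported voxel dimensions and 659 mm³ extent on a synthetic map (the statistical threshold and blob locations are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

VOXEL_MM3 = 3.4375 * 3.4375 * 3.0   # in-plane resolution x slice thickness
MIN_MM3 = 659                        # minimum cluster extent from the simulation
min_voxels = int(np.ceil(MIN_MM3 / VOXEL_MM3))

# Synthetic statistical map: one large blob plus an isolated voxel
stat_map = np.zeros((20, 20, 20))
stat_map[5:9, 5:9, 5:9] = 5.0        # 4 x 4 x 4 = 64-voxel blob
stat_map[15, 15, 15] = 5.0           # isolated suprathreshold voxel

supra = stat_map > 3.0               # illustrative single-voxel threshold
labels, n = ndimage.label(supra)     # label contiguous clusters
sizes = ndimage.sum(supra, labels, range(1, n + 1))
surviving = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
```

Here only the 64-voxel blob survives the extent threshold; the isolated suprathreshold voxel is discarded as likely noise.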

Table 3. 

Voxel-wise GLM Group × Condition ANOVA Results

Columns: Activation Peak | Talairach Coordinates (x, y, z) | Peak F/t Value | Cluster Size (mm³) | Cluster-specific Statistics (Interaction; Condition; Group; Post hoc)
(a) Interaction 
Right STG 51, −13, 7 23.83 4924 F(1, 20) = 20.97, p < .001 F(1, 20) = 3.89, p = .062 F(1, 20) < 1, p = .765 Mono: W > I, p < .005; Bi: I > W, p < .05 
Left STG −57, 7, 1 19.08 727 F(1, 20) = 17.08, p < .001 F(1, 20) = 3.35, p = .082 F(1, 20) < 1, p = .535 Mono: W > I, p < .01; Bi: I > W, p < .05 
Posterior cingulate/precuneus −5, −46, 33 17.7 4866 F(1, 20) = 14.54, p < .001 F(1, 20) < 1, p = .839 F(1, 20) < 1, p = .540 Bi: I > W, p < .005 
−8, −57, 17 16.73 
Right parahippocampal gyrus/amygdala 30, −18, −25 27.25 2219 F(1, 20) = 16.37, p < .001 F(1, 20) < 1, p = .880 F(1, 20) < 1, p = .346 Mono: W > I, p < .05; Bi: I > W, p < .001 
20, −7, −15 12.56 
Left parahippocampal gyrus −36, −29, −22 17.62 1639 F(1, 20) = 15.13, p < .001 F(1, 20) < 1, p = .958 F(1, 20) < 1, p = .344 Bi: I > W, p < .001 
Medial frontal 0, 59, 17 22.72 678 F(1, 20) = 19.04, p < .001 F(1, 20) = 1.336, p = .261 F(1, 20) < 1, p = .805 Mono: W > I, p < .01; Bi: I > W, p < .01 
Right MTG 40, −56, 25 15.08 659 F(1, 20) = 12.50, p < .005 F(1, 20) < 1, p = .723 F(1, 20) = 1.71, p = .206 Bi: I > W, p < .001 
 
(b) Main Effect of Music (Western > Indian Music) 
Right STG 44, −3, −9 4.94 819     
 
(c) Main Effect of Subject 
Nil        

Mono = monomusical, Bi = bimusical, W = Western music, I = Indian music.

A significant interaction was found in the auditory cortex (superior temporal region) bilaterally, the parahippocampus bilaterally, the right amygdala, the posterior cingulate gyrus, and the precuneus. To examine the nature of the interaction, Group × Music ANOVAs were performed on each significant cluster (see Table 3 for detailed statistics). Generally speaking, the significant interactions found in auditory regions were driven predominantly by Western monomusical subjects' increased activation in the Western music condition relative to the Indian music condition (Figure 2A and B). For other regions, especially limbic regions (regions relevant for attention, emotion, and memory), the interaction was driven predominantly by bimusical subjects' differential responses to Indian and Western music (Figure 2C).

Figure 2. 

A significant Group × Music interaction was found across multiple brain regions, with results largely driven by differences in right (A) and left (B) auditory areas in the monomusical group and limbic areas in the bimusical group (C). p values are obtained from post hoc, within-group t statistics. See Table 3 for summary statistics. (D) Post hoc analyses showed significantly higher activation in both the auditory and limbic areas during Indian music than Western music listening in bimusical subjects. (E) More extensive activation was found in the right STG/MTG during Western music listening in the monomusical than the bimusical group. L = left hemisphere, R = right hemisphere, Mono = monomusical group, Bi = bimusical group, ns = not significant.


There was no main effect of Subject group. However, a main effect of Music was found in the right STG (Western music > Indian music), presumably because Western music is culturally familiar to both groups of subjects. That is, this main effect was driven by overall higher activation to Western music irrespective of subject group: although activation was approximately equal between the two music conditions for the bimusical subjects, it was clearly higher in the Western music condition for the monomusical subjects. This pattern is consistent with previous findings of higher activation in the auditory cortex with increased musical exposure (e.g., Margulis, Mlsna, Uppunda, Parrish, & Wong, 2009; Nan, Knosche, Zysset, & Friederici, 2008; Morrison, Demorest, Aylward, Cramer, & Maravilla, 2003).

These univariate GLM statistical results showed that the neural processing of the two musical systems differs in bimusicals, even though these bimusicals' behavioral tension ratings showed no reliable difference between the two types of music. Monomusicals also differentiated Western and Indian music, but in a manner different from the bimusicals. These results provide preliminary evidence that the neural representations of music differ in monomusicals and bimusicals.

Post hoc analyses within and across subject groups showed that bimusical subjects engaged more auditory and limbic areas when processing Indian than Western music (Figure 2D). However, when compared with monomusical subjects, bimusical subjects did not exhibit more brain activation when processing Indian music. When processing the familiar Western music, monomusical subjects showed greater activation in the right superior/middle temporal gyrus (MTG) than their counterparts (Figure 2E).

Effective Connectivity

The GLM results discussed above showed regional differences in how monomusical and bimusical listeners responded to Western and Indian music without considering how brain regions are used as a network. Here we report results on potential differences at the network level. Figure 3A shows an anatomically realistic model we tested, including the bilateral auditory cortex, the bilateral amygdala extending to the parahippocampal gyrus, the right MTG, and direct and indirect connections via the posterior cingulate gyrus (e.g., Vogt et al., 1987, 1995; Vogt & Pandya, 1987). These regions are hypothesized to engage in the sensory and affective processes involved in the music listening and tension judgment task (see Peretz, 2010). The cingulate gyrus connects cortical and subcortical structures. These areas were also regions that showed the strongest interaction effects in the voxel-wise ANOVA reported above. With SEM, we assessed the effective connectivity of this anatomical model for each subject in each music listening condition. Path coefficients for each connection in each condition for each subject were estimated, and the resulting t statistics were calculated by dividing the path coefficients by their corresponding standard errors. Because these t statistics represent normalized path coefficients across subjects and conditions, they could be used for further statistical analyses for assessing potential differences across subject groups, listening conditions, and interaction effects.

Figure 3. 

Effective connectivity analysis showed stronger path coefficients across multiple connections in the bimusical groups as well as greater within-group differentiation. A shows our a priori model, and B summarizes the results. PC = posterior cingulate, LSTG = left STG, RSTG = right STG, LPH = left parahippocampal gyrus, RAm = right amygdala, RMTG = right MTG, Mono = monomusical group, Bi = bimusical group.


t statistics from each path were entered into a 2 × 2 repeated measures ANOVA. Table 4 summarizes the results, which show two main patterns. First, several paths show a main effect of Group, with a greater degree of connectivity in the bimusical group. Second, numerous paths show a significant interaction, driven by a differentiation of connectivity across the two listening conditions within the bimusical group. These paths involve both sensory and affective processes. Taken together, our effective connectivity analysis suggests that the bimusical subjects not only engaged sensory and affective brain regions as a network during music listening but also differentiated between the two types of music at the network level.

Table 4. 

ANOVAs on Path Coefficients Based on the SEM

Path | Interaction | Group | Condition | Post hoc: Mono W vs. Bi I | Post hoc: Mono W vs. Bi W
PC → LSTG | F(1, 20) = 5.40, p < .05 | F(1, 20) = 15.0, p < .001 | F(1, 20) = 5.14, p < .05 | p < .010 | p < .0001
PC → RAm | F(1, 20) = 17.53, p < .001 | F(1, 20) = 33.7, p < .001 | F(1, 20) = 13.98, p < .001 | p < .0001 | p < .005
PC → RMTG | F(1, 20) = 2.47, ns | F(1, 20) = 22.4, p < .001 | F(1, 20) = 2.82, ns | p < .001 | p < .001
PC → LPH | F(1, 20) < 1, ns | F(1, 20) = 5.44, p < .05 | F(1, 20) < 1, ns | ns | p < .05
RMTG → PC | F(1, 20) = 1.24, ns | F(1, 20) = 33.2, p < .001 | F(1, 20) = 1.30, ns | p < .005 | p < .005
RSTG → RMTG | F(1, 20) = 14.86, p < .001 | F(1, 20) < 1, ns | F(1, 20) = 16.03, p < .001 | p < .050 | p < .05
RMTG → RSTG | F(1, 20) = 10.94, p < .005 | F(1, 20) = 3.75, ns | F(1, 20) = 10.05, p < .005 | p < .001 | ns
RAm → PC | F(1, 20) = 5.92, p < .05 | F(1, 20) < 1, ns | F(1, 20) = 5.99, p < .05 | ns | ns
RSTG → PC | F(1, 20) = 1.29, ns | F(1, 20) < 1, ns | F(1, 20) < 1, ns | ns | ns
LSTG → PC | F(1, 20) = 2.69, ns | F(1, 20) < 1, ns | F(1, 20) = 2.64, ns | ns | ns
PC → RSTG | F(1, 20) = 1.86, ns | F(1, 20) = 1.51, ns | F(1, 20) = 1.79, ns | ns | ns
LPH → PC | F(1, 20) = 1.24, ns | F(1, 20) = 1.38, ns | F(1, 20) = 1.24, ns | ns | ns

Mono = monomusical, Bi = bimusical, W = Western music, I = Indian music, PC = posterior cingulate, LSTG = left STG, RAm = right amygdala, RMTG = right MTG, LPH = left parahippocampal gyrus, RSTG = right STG.

To more directly evaluate whether the processing of culturally familiar music was similar in bimusicals and monomusicals, we performed post hoc t tests comparing in-culture music listening in the bimusical group (either Western or Indian music) with in-culture music listening in the monomusical group (Western music only), as shown in Table 4. Numerous paths showed significant differences. These results more directly support the holistic hypothesis and argue against the fractional hypothesis: the processing of in-culture music differs in bimusicals compared with monomusicals.

Assessment of Behavioral–Neural Causal Relationships

The aforementioned results focus on behavioral responses, univariate regional neural responses, and neural connectivity. All of these results point to distinctions between mono- and bimusical listeners in differentiating Western and Indian music. Although interesting, these results did not consider the interdependencies among behavioral and neural responses. To capture such interdependencies and possible group differences, we constructed structural equation models that included early music exposure, tension response, right STG activation, amygdala activation, and cingulate activation as variables. The three brain regions were identified based on the voxel-wise ANOVA results discussed above; they were the regions that showed the strongest interaction effects based on spatial extent (having the largest number of voxels in a cluster). Together, these three regions also constitute a realistic anatomical network, comprising the regions that best differentiated the monomusical (right STG) and bimusical (right amygdala) groups as discussed earlier, as well as the cingulate gyrus, which we hypothesized to directly and indirectly connect the STG and amygdala (e.g., Vogt et al., 1987, 1995; Vogt & Pandya, 1987).

Figure 4 shows the a priori model we built for each group of subjects, including the path coefficients and their significance levels. In constructing this model, we considered responses to both music types simultaneously within each listener group. Our model assumes that although music exposure (Table 2) shapes right STG and amygdala activation, tension responses follow from the activity of these two brain regions and are also influenced by music exposure. The right STG and amygdala are connected via the cingulate gyrus, an association area. The main group differences we found rested on the factors that influenced tension responses. Although tension responses were largely shaped by the amygdala in the monomusical group, they were an amalgamation of auditory (STG) activation, affective (amygdala) activation, and early-life music exposure in the bimusical group. These results more comprehensively support the view that bimusicals and monomusicals differ fundamentally in terms of the mechanisms that produce affective musical responses.
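In path-model terms, the tension equation of such a model is an ordinary regression of the tension rating on its hypothesized parent variables, so the paths into tension can be estimated per group by least squares. A minimal sketch with simulated data (variable names, sample size, and coefficients are all hypothetical; this is not the study's actual SEM software or estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 11  # hypothetical subjects per group

# Simulated standardized variables for one group: early music exposure,
# right STG activation, right amygdala activation, and tension rating.
exposure = rng.normal(size=n)
stg = 0.5 * exposure + rng.normal(scale=0.8, size=n)
amyg = 0.4 * exposure + rng.normal(scale=0.8, size=n)
tension = 0.6 * amyg + 0.3 * stg + 0.2 * exposure + rng.normal(scale=0.5, size=n)

# Regress tension on its hypothesized parents; the fitted coefficients
# estimate the paths into tension (centering removes the intercept).
X = np.column_stack([exposure, stg, amyg])
paths, *_ = np.linalg.lstsq(X - X.mean(0), tension - tension.mean(), rcond=None)
print(np.round(paths, 2))
```

Comparing such per-group path estimates is one way to see how tension could be shaped mainly by one parent (e.g., the amygdala) in one group but by several parents in another.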

Figure 4. 

Assessment of the brain–behavior relationship showed that tension rating was influenced by multiple factors in the bimusical group. The table shows path coefficients and p values for all connections. PC = posterior cingulate, RSTG = right STG, RAm = right amygdala.


DISCUSSION

The present study examined the neurophysiological underpinnings of bimusicalism. Everyday music listeners with no extensive musical training participated in a tension judgment task that has been well documented in the music literature to reflect affective processing (Lerdahl & Krumhansl, 2007; Margulis, 2007) and has been shown to reliably differentiate monomusical and bimusical subjects (Wong, Roy, et al., 2009), with subjects judging in-culture music as less tense. The stimuli consisted of specially composed melodies consistent with Indian and Western musical traditions that subjects had not previously heard, ruling out a simple surface familiarity account of the present findings. Because music is often viewed as a vehicle for evoking emotion even in individuals without formal training (Koelsch, 2010), such a task is particularly appropriate for our subjects. Our bimusical subjects were exposed to both Western and Indian music early in life, and their tension responses suggested a pattern of in-culture processing for both types of music. The monomusical subjects had been exposed only to Western music, which was confirmed by their behavioral responses. For the most part, the act of judging musical tension evoked a network of activation consistent with previous reports, in particular the auditory cortex and the limbic system (Margulis et al., 2009; Koelsch, 2006; Blood & Zatorre, 2001). For example, in examining neural responses to the music of their expertise, Margulis et al. (2009) found that violinists and flutists showed increased auditory cortex responses to violin and flute music, respectively.

The fractional hypothesis of the bimusical brain would predict that the processing of the two musical systems constitutes a simple composite of two monomusical systems. That is, processing of the two in-culture musical systems (Western and Indian music) should be similar within bimusicals, and in-culture music processing (Western music) should also be similar across monomusicals and bimusicals. In fact, if only the behavioral results (tension ratings) were considered, such a conclusion could be drawn. But we provide three lines of evidence rejecting this fractional view and argue instead that enculturation as bimusicals fundamentally and qualitatively changes how music is processed in the brain, supporting the holistic hypothesis. First, regional brain analysis shows differentiation of Western and Indian music in the auditory cortex for the monomusical subjects, whereas this differentiation appears in the limbic structures for the bimusical subjects. Second, effective connectivity analysis reveals stronger connectivity along both sensory and limbic pathways in the bimusical group as a whole, as well as a differentiation of the two types of music in connection strengths within the bimusical subjects. Third, we found that the emotion (tension) evoked by music in the bimusical group is an amalgamation of multiple neural and behavioral factors. Overall, these results not only support the holistic view of bimusicalism but also speak to the possibility of a more complex music-brain relationship in bimusicals. The complexity of this relationship includes possible features such as greater integration between acoustic and meaning mapping (STG-MTG connectivity; see Figure 3) in the bimusicals, borrowing a model from spoken language processing (Hickok & Poeppel, 2007). Furthermore, it also includes potentially greater emotional processing of the music via input to and output from the amygdala (pathways going in and out of the amygdala as depicted in Figure 3) as the emotional (tension) judgment task was performed.

Although the present study provides evidence for a distinction between monomusical and bimusical listeners in the neural representation of music, it is unclear at this point why such a distinction is needed in the first place and why a potentially more complex music-brain relationship might exist in bimusicals. One potential explanation, borrowed from bilingualism (e.g., Crinion et al., 2006), has to do with competition, interference, or control when two systems coexist. There is ample evidence supporting the existence of phonological and lexical competition effects in bilinguals (e.g., Marian & Spivey, 2003; Hermans, Bongaerts, Bot, & Schreuder, 1998), even in highly proficient bilinguals (e.g., Costa, Colome, Gomez, & Sebastian-Galles, 2003). It is conceivable that a similar effect exists in our bimusicals, such that there are increases in neural connectivity and in the number of neural and behavioral factors that determine affective responses. It is also possible that bimusicals cannot rely simply on an impression of foreignness, or a lack of familiarity, to distinguish one musical culture from another. This inability may force them to use cognitive processing to make the distinction, potentially accounting for some of the changes in activation patterns. These explanations are speculative at best. With the present study as a starting point, future studies could be designed to determine the precise mechanisms of bimusicalism.

In the bilingualism literature, it has been reported that age of acquisition and proficiency level (e.g., Perani et al., 1998, 2003) are strongly associated with neural processing. All of our bimusical subjects reported exposure to both Western and Indian music before the age of 5 and reported that they continued to listen to both types of music. Regardless of whether one views their music exposure as Western or Indian "dominant," a fractional view of bimusicalism could have been supported by comparing the bimusicals' responses to either type of music with Western music listening in the monomusicals. At the network level, these comparisons (Table 4 and Figure 3) failed to support this fractional view. Specific proficiency levels are difficult to measure because our subjects did not have extensive formal musical training; however, their tension ratings could be viewed as a proxy for proficiency, and we did not find differences in our bimusicals' ratings of the two types of music, nor did we find their ratings to differ from the monomusical subjects' ratings of Western music. At least with this measure, we could rule out differences in music proficiency level as an explanation of our results.

It is worth emphasizing that, although we borrow concepts from the bilingualism literature, there are clear distinctions between language and music that make the study of bimusicalism interesting. First, unlike language, which often involves both exposure and use, music for our subjects with virtually no musical training involves largely passive exposure without extensive production or performance throughout life. Second, related to the concept of linguistic use is language control and switching, which has received much attention in the study of the bilingual brain (e.g., Abutalebi et al., 2007; Crinion et al., 2006). Bilinguals are confronted with situations in which they actively select language(s) to communicate, but bimusicals rarely have to exert such control and use mechanisms. For example, although bilinguals may often evaluate a particular social context to determine which language to speak, bimusicals are rarely in the position of having to select an appropriate music type, because active use and production of music is so much rarer than for language. Third, the acquisition of language involves both learning referential meanings and mapping sounds to meaning, as often examined by studies of bilingualism (e.g., Marian & Spivey, 2003), whereas music largely lacks such referential sound-to-meaning mapping. Even with these crucial differences, we see the holistic view as applicable in both domains, implicating holistic organization as a possible principle of dual acquisition across communicative and skill domains.

Although the holistic hypothesis very generally predicts differences in the processing of two musical systems by bimusical listeners, such differences could take many possible forms that future studies must investigate in a hypothesis-driven manner. For example, potential differences may lie in differential cognitive and affective processing: the processing of one type of music might be more cognitively based, whereas the processing of another might be more affectively based. A potential future study could include both cognitive (e.g., recognition memory, similar to Wong, Roy, et al., 2009) and tension judgment tasks to examine a version of the holistic hypothesis more systematically. How brain injuries may affect linguistic (e.g., Catani & Mesulam, 2008; Peach & Wong, 2004) and musical functions (e.g., Peretz, 1993, 1996) of the bimusical brain should also be examined in future research.

Acknowledgments

We thank Viorica Marian, Jieun Kim, Geshri Gunasekera, and Dan Choi for their comments and assistance in this research. This research was supported by grants from National Science Foundation (BCS-0719666 and BCS-1125144), National Institutes of Health (R01DC008333, R21DC009652, and K02AG035382), Alumnae of Northwestern University, and Northwestern University Center for Interdisciplinary Research in Arts, School of Communication, and Undergraduate Research Grants Committee.

Reprint requests should be sent to Patrick C. M. Wong, Department of Communication Sciences and Disorders, 2240 Campus Drive, Evanston, IL, or via e-mail: pwong@northwestern.edu or Alice H. D. Chan, Division of Linguistics and Multilingual Studies, 14 Nanyang Drive, Singapore 637332, or via e-mail: alice@ntu.edu.sg.

REFERENCES

Abutalebi, J., Brambati, S. M., Annoni, J. M., Moro, A., Cappa, S. F., & Perani, D. (2007). The neural cost of the auditory perception of language switches: An event-related functional magnetic resonance imaging study in bilinguals. Journal of Neuroscience, 27, 13762–13769.

Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, U.S.A., 98, 11818–11823.

Bollen, K. A. (1989). Structural equations with latent variables. Toronto, Ontario: Wiley-Interscience.

Bullmore, E., Horwitz, B., Honey, G., Brammer, M., Williams, S., & Sharma, T. (2000). How good is good enough in path analysis of fMRI data? Neuroimage, 11, 289–301.

Catani, M., & Mesulam, M. (2008). The arcuate fasciculus and the disconnection theme in language and aphasia: History and current state. Cortex, 44, 953–961.

Costa, A., Colome, A., Gomez, O., & Sebastian-Galles, N. (2003). Another look at cross-language competition in bilingual speech production: Lexical and phonological factors. Bilingualism: Language and Cognition, 6, 167–179.

Crinion, J., Turner, R., Grogan, A., Hanakawa, T., Noppeney, U., Devlin, J. T., et al. (2006). Language control in the bilingual brain. Science, 312, 1537–1540.

Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381–410.

Demorest, S. M., Morrison, S. J., Stambaugh, L. A., Beken, M., Richards, T. L., & Johnson, C. (2010). An fMRI investigation of the cultural specificity of music memory. Social Cognitive and Affective Neuroscience, 5, 282–291.

Gaser, C., & Schlaug, G. (2003). Brain structures differ between musicians and non-musicians. Journal of Neuroscience, 23, 9240–9245.

Gerringer, J. (2003). Continuous response digital interface. Tallahassee, FL: Florida State University, Center for Music Research.

Grosjean, F. (1989). Neurolinguists, beware! The bilingual is not two monolinguals in one person. Brain and Language, 36, 3–15.

Hermans, D., Bongaerts, T., Bot, K. D., & Schreuder, R. (1998). Producing words in a foreign language: Can speakers prevent interference from their first language? Bilingualism: Language and Cognition, 1, 213–229.

Hernandez, A., Li, P., & MacWhinney, B. (2005). The emergence of competing modules in bilingualism. Trends in Cognitive Sciences, 9, 220–225.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.

Koelsch, S. (2006). Significance of Broca's area and ventral premotor cortex for music-syntactic processing. Cortex, 42, 518–520.

Koelsch, S. (2010). Towards a neural basis of music-evoked emotions. Trends in Cognitive Sciences, 14, 131–137.

Lerdahl, F., & Krumhansl, C. L. (2007). Modeling tonal tension. Music Perception, 24, 326–366.

Margulis, E. H. (2007). Silences in music are musical not silent: An exploratory study of context effects on the experience of musical pauses. Music Perception, 24, 485–506.

Margulis, E. H., Mlsna, L. M., Uppunda, A. K., Parrish, T. B., & Wong, P. C. (2009). Selective neurophysiologic responses to music in instrumentalists with different listening biographies. Human Brain Mapping, 30, 267–275.

Marian, V., & Spivey, M. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6, 97–115.

McIntosh, A. R., Grady, C. L., Ungerleider, L. G., Haxby, J. V., Rapoport, S. I., & Horwitz, B. (1994). Network analysis of cortical visual pathways mapped with PET. Journal of Neuroscience, 14, 655–666.

Morrison, S. J., Demorest, S. M., Aylward, E. H., Cramer, S. C., & Maravilla, K. R. (2003). fMRI investigation of cross-cultural music comprehension. Neuroimage, 20, 378–384.

Nan, Y., Knosche, T. R., & Friederici, A. D. (2009). Non-musicians' perception of the phrase boundaries in music: A cross-cultural ERP study. Biological Psychology, 82, 70–81.

Nan, Y., Knosche, T. R., Zysset, S., & Friederici, A. D. (2008). Cross-cultural music phrase processing: An fMRI study. Human Brain Mapping, 29, 312–328.

Ng, B. C., & Wigglesworth, G. (2007). Bilingualism: An advanced resource book. London: Routledge.

Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674–681.

Peach, R. K., & Wong, P. C. M. (2004). Integrating the message level into treatment for agrammatism using story retelling. Aphasiology, 18, 429–441.

Perani, D., Abutalebi, J., Paulesu, E., Brambati, S., Scifo, P., Cappa, S. F., et al. (2003). The role of age of acquisition and language usage in early, high-proficient bilinguals: An fMRI study during verbal fluency. Human Brain Mapping, 19, 170–182.

Perani, D., Paulesu, E., Galles, N. S., Dupoux, E., Dehaene, S., Bettinardi, V., et al. (1998). The bilingual brain: Proficiency and age of acquisition of the second language. Brain, 121, 1841–1852.

Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21–56.

Peretz, I. (1996). Can we lose memory for music? A case study of music agnosia in a nonmusician. Journal of Cognitive Neuroscience, 8, 481–496.

Peretz, I. (2010). Towards a neurobiology of musical emotions. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 99–126). Oxford: Oxford University Press.

Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. New York: Thieme.

Vogt, B. A., Nimchinsky, E. A., Vogt, L. J., & Hof, P. R. (1995). Human cingulate cortex: Surface features, flat maps, and cytoarchitecture. Journal of Comparative Neurology, 359, 490–506.

Vogt, B. A., & Pandya, D. N. (1987). Cingulate cortex of the rhesus monkey: II. Cortical afferents. Journal of Comparative Neurology, 262, 271–289.

Vogt, B. A., Pandya, D. N., & Rosene, D. L. (1987). Cingulate cortex of the rhesus monkey: I. Cytoarchitecture and thalamic afferents. Journal of Comparative Neurology, 262, 256–270.

Wong, P. C. M., Jin, J. X., Gunasekera, G. M., Abel, R., Lee, E. R., & Dhar, S. (2009). Aging and cortical mechanisms of speech perception in noise. Neuropsychologia, 47, 693–703.

Wong, P. C. M., Roy, A. K., & Margulis, E. H. (2009). Bimusicalism: The implicit dual enculturation of cognitive and affective systems. Music Perception, 27, 81–88.

Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nature Neuroscience, 10, 420–422.

Wong, P. C. M., Uppunda, A. K., Parrish, T. B., & Dhar, S. (2008). Cortical mechanisms of speech perception in noise. Journal of Speech, Language, and Hearing Research, 51, 1026–1041.

Author notes

*

All authors designed and conducted the experiments, performed data analysis, and wrote the article.