Patrick C. M. Wong
1–5 of 5
Journal Articles
Journal of Cognitive Neuroscience (2012) 24 (5): 1087–1103.
Published: 01 May 2012
Abstract
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants' future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults.
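As a rough illustration of the network measures mentioned above (not the authors' analysis pipeline), global and local efficiency can be computed on a thresholded functional connectivity graph, for example with networkx. The sketch below uses a synthetic region-by-region correlation matrix and an arbitrary connection density; all inputs are placeholders.

```python
# Sketch: global and local efficiency of a thresholded functional network.
# Hypothetical inputs; not the pipeline used in the study.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for a region-by-region fMRI correlation matrix (e.g., 90 ROIs).
ts = rng.standard_normal((150, 90))        # 150 time points x 90 regions
corr = np.corrcoef(ts, rowvar=False)       # functional connectivity
np.fill_diagonal(corr, 0.0)

# Binarize at a fixed connection density ("cost"): keep the strongest 10% of edges.
cost = 0.10
triu = corr[np.triu_indices_from(corr, k=1)]
threshold = np.quantile(np.abs(triu), 1.0 - cost)
adjacency = (np.abs(corr) >= threshold).astype(int)

G = nx.from_numpy_array(adjacency)
e_glob = nx.global_efficiency(G)   # average inverse shortest path length
e_loc = nx.local_efficiency(G)     # mean efficiency of each node's neighborhood
print(f"cost={cost:.2f}  E_global={e_glob:.3f}  E_local={e_loc:.3f}")
```

Cost-efficiency in the sense used above can then be examined by repeating the computation over a range of densities and comparing the global efficiency achieved against the cost invested in edges.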
Journal Articles
The Bimusical Brain Is Not Two Monomusical Brains in One: Evidence from Musical Affective Processing
Journal of Cognitive Neuroscience (2011) 23 (12): 4082–4093.
Published: 01 December 2011
Abstract
Complex auditory exposures in ambient environments include systems of not only linguistic but also musical sounds. Because musical exposure is often passive, consisting of listening rather than performing, examining listeners without formal musical training allows for the investigation of the effects of passive exposure on our nervous system without active use. Additionally, studying listeners who have exposure to more than one musical system allows for an evaluation of how the brain acquires multiple symbolic and communicative systems. In the present fMRI study, listeners who had been exposed since childhood to only Western music (monomusicals) or to both Indian and Western musical systems (bimusicals), and who did not have significant formal musical training, made tension judgments on Western and Indian music. Significant group-by-music interactions in temporal and limbic regions were found, with effects driven predominantly by between-music differences in temporal regions in the monomusicals and by between-music differences in limbic regions in the bimusicals. Effective connectivity analysis of this network via structural equation modeling (SEM) showed significant path differences across groups and music conditions, most notably a higher degree of connectivity and larger differentiation between the music conditions within the bimusicals. SEM was also used to examine the relationships among the degree of music exposure, affective responses, and activation in various brain regions. Results revealed a more complex behavioral–neural relationship in the bimusicals, suggesting that affective responses in this group are shaped by multiple behavioral and neural factors. These three lines of evidence suggest a clear differentiation of the effects of exposure to one versus multiple musical systems.
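For readers unfamiliar with path modeling of the kind referenced above, the toy sketch below estimates path coefficients for a simple two-equation structural model (exposure predicting activation, and exposure plus activation predicting affective response) using ordinary least squares on synthetic data. This is a loose, generic illustration, not the authors' SEM, which involves full model specification and fit evaluation; the variable names and effect sizes are invented.

```python
# Sketch: OLS path coefficients for a toy two-equation structural model.
# Synthetic data; illustrative only, not the study's SEM analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                                       # hypothetical number of listeners

exposure = rng.normal(size=n)                                # degree of music exposure
activation = 0.6 * exposure + rng.normal(scale=0.8, size=n)  # e.g., limbic activation
affect = 0.5 * exposure + 0.4 * activation + rng.normal(scale=0.8, size=n)  # tension rating

def path(y, *predictors):
    """Return OLS path coefficients of y on the given predictors (intercept excluded)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

print("exposure -> activation:", path(activation, exposure))
print("exposure, activation -> affect:", path(affect, exposure, activation))
```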
Journal Articles
Journal of Cognitive Neuroscience (2011) 23 (10): 2690–2700.
Published: 01 October 2011
Abstract
Human speech carries two types of information, related to content (lexical information, i.e., “what” is being said [e.g., words]) and to the speaker (indexical information, i.e., “who” is talking [e.g., voices]). The extent to which lexical versus indexical information is represented separately or integrally in the brain is unresolved. In the current experiment, we use short-term fMRI adaptation to address this issue. Participants performed a loudness judgment task in which single or multiple sets of words/pseudowords were repeated in single-talker (repeat) or multiple-talker (speaker-change) conditions while BOLD responses were collected. As reflected by fMRI adaptation, the left posterior middle temporal gyrus, a crucial component of the ventral auditory stream performing sound-to-meaning computations (the “what” pathway), showed sensitivity to lexical as well as indexical information. Previous studies have suggested that speaker information is abstracted away during this stage of auditory word processing. Here, we demonstrate that indexical information is strongly coupled with word information. These findings are consistent with numerous behavioral results demonstrating that changes to speaker-related information can influence lexical processing.
Journal Articles
Journal of Cognitive Neuroscience (2008) 20 (10): 1892–1902.
Published: 01 October 2008
Abstract
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech consists of functionally and acoustically complex sounds that place high sensory and cognitive demands on the listener, the effects of long-term exposure to and experience with these sounds are often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency-following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited preattentively. We measured FFRs to the trained pitch patterns before and after training and derived measures of pitch tracking from the FFR signals. We found increased pitch-tracking accuracy after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (the dipping contour), which is also the least familiar to our English-speaking subjects. These results demonstrate not only the contribution of the brainstem to language learning and its plasticity in adulthood but also the specificity of this contribution (i.e., changes in encoding occur only for specific, least familiar stimuli, not all stimuli). Our findings complement existing data showing cortical changes after second-language learning and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed.
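To give a concrete sense of what pitch tracking of an FFR-like signal can involve, the sketch below applies one common approach, short-time autocorrelation, to a synthetic rising-pitch signal. The sampling rate, window sizes, search band, and stimulus contour are placeholders, and this is not the study's exact procedure.

```python
# Sketch: sliding-window autocorrelation pitch tracking, as one might apply to an FFR.
# Synthetic signal and parameters; illustrative only, not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
fs = 20000                      # sampling rate (Hz), placeholder
t = np.arange(0, 0.25, 1 / fs)  # 250 ms epoch
f0 = 100 + 80 * t / t[-1]       # rising pitch contour, 100 -> 180 Hz
signal = np.sin(2 * np.pi * np.cumsum(f0) / fs) + 0.3 * rng.standard_normal(t.size)

win, hop = int(0.040 * fs), int(0.010 * fs)   # 40 ms windows, 10 ms steps
f_lo, f_hi = 80, 300                          # plausible voice-pitch search range (Hz)

track = []
for start in range(0, signal.size - win, hop):
    frame = signal[start:start + win]
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[win - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / f_hi), int(fs / f_lo)                  # lag range for the search band
    lag = lo + np.argmax(ac[lo:hi])                          # strongest periodicity
    track.append(fs / lag)                                   # estimated f0 for this window

print(np.round(track, 1))       # estimated pitch contour across the epoch
```

A pitch-tracking error measure could then be defined as, for example, the mean absolute deviation between this estimated contour and the stimulus f0 contour.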
Journal Articles
Journal of Cognitive Neuroscience (2004) 16 (7): 1173–1184.
Published: 01 September 2004
Abstract
To recognize phonemes across variation in talkers, listeners can use information about vocal characteristics, a process referred to as “talker normalization.” The present study investigates the cortical mechanisms underlying talker normalization using fMRI. Listeners recognized target words presented either in a spoken list produced by a single talker or in a mix of different talkers. Both conditions activated an extensive cortical network. However, recognizing words in the mixed-talker condition, relative to the blocked-talker condition, activated middle/superior temporal and superior parietal regions to a greater degree. This temporal–parietal network is possibly associated with selectively attending to and processing the spectral and spatial acoustic cues required to recognize speech in a mixed-talker condition.