Search results for Christo Pantev (1-6 of 6)
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2014) 26 (10): 2224–2238.
Published: 01 October 2014
Abstract
The human ability to integrate the input of several sensory systems is essential for building a meaningful interpretation of the complexity of the environment. Training studies have shown that the involvement of multiple senses during training enhances neuroplasticity, but it is not clear to what extent integration of the senses during training is required for the observed effects. This study aimed to elucidate the differential contributions of uni- and multisensory elements of music reading training to the resulting plasticity of abstract audiovisual incongruency identification. We used magnetoencephalography to measure the pre- and posttraining cortical responses of two randomly assigned groups of participants who followed either an audiovisual music reading training that required multisensory integration (AV-Int group) or a unisensory training with separate auditory and visual elements (AV-Sep group). Results revealed a network of frontal generators for the abstract audiovisual incongruency response, confirming previous findings, and indicated the central role of anterior prefrontal cortex in this process. Differential neuroplastic effects of the two types of training in frontal and temporal regions point to the crucial role of multisensory integration occurring during training. Moreover, a comparison of the posttraining cortical responses of both groups with those of a group of musicians tested using the same paradigm revealed that long-term music training leads to significantly greater responses than the short-term training of the AV-Int group in anterior prefrontal regions, as well as significantly greater responses than both short-term training protocols in the left superior temporal gyrus (STG).
Journal of Cognitive Neuroscience (2012) 24 (1): 17–27.
Published: 01 January 2012
Abstract
Evidence from hemodynamic and electrophysiological measures suggests that the processing of emotionally relevant information occurs in a spatially and temporally distributed affective network. ERP studies of emotional stimulus processing frequently report differential responses to emotional stimuli starting around 120 msec. However, the involvement of structures that seem to become activated at earlier latencies (i.e., amygdala and OFC) would allow for more rapid modulations, even in distant cortical areas. Consistent with this notion, recent ERP studies investigating associative learning have provided evidence for rapid modulations in sensory areas earlier than 120 msec, but these studies used either simple or very few stimuli. The present study used high-density whole-head magnetoencephalography to measure brain responses to a multitude of neutral facial stimuli, which were associated with an aversive or neutral odor. Significant emotional modulations were observed at intervals of 50–80 and 130–190 msec in frontal and occipito-temporal regions, respectively. In the absence of contingency awareness, and with only two learning instances, a remarkable capacity for emotional learning was observed.
Journal of Cognitive Neuroscience (2011) 23 (8): 1855–1863.
Published: 01 August 2011
Abstract
Both attention and masking sounds can alter auditory neural processes and affect auditory signal perception. In the present study, we investigated the complex effects of auditory-focused attention and the signal-to-noise ratio of sound stimuli on three different auditory evoked field components (auditory steady-state response, N1m, and sustained field) by means of magnetoencephalography. The results indicate that the auditory steady-state response originating in primary auditory cortex reflects the signal-to-noise ratio of physical sound inputs (bottom–up process) rather than the listener's attentional state (top–down process), whereas the sustained field, originating in nonprimary auditory cortex, reflects the attentional state rather than the signal-to-noise ratio. The N1m was substantially influenced by both bottom–up and top–down neural processes. The differential sensitivity of the components to bottom–up and top–down neural processes, contingent on their level in the processing pathway, suggests a stream from bottom–up driven sensory neural processing to top–down driven auditory perception within human auditory cortex.
Journal of Cognitive Neuroscience (2010) 22 (6): 1251–1261.
Published: 01 June 2010
Abstract
The plasticity of the adult memory network for integrating novel word forms (lexemes) was investigated with whole-head magnetoencephalography (MEG). We showed that spoken word forms of an (artificial) foreign language are integrated rapidly and successfully into existing lexical and conceptual memory networks. The new lexemes were learned in an untutored way, by pairing them frequently with one particular object (and thus meaning), and infrequently with 10 other objects (learned set). Other novel word forms were encountered just as often, but paired with many different objects (nonlearned set). Their impact on semantic memory was assessed with cross-modal priming, with novel word forms as primes and object pictures as targets. The MEG counterpart of the N400 (N400m) served as an indicator of a semantic (mis)match between words and pictures. Prior to learning, all novel words induced a pronounced N400m mismatch effect to the pictures. This component was strongly reduced after training for the learned novel lexemes only, and now closely resembled the brain's response to semantically related native-language words. This result cannot be explained by mere stimulus repetition or stimulus–stimulus association. Thus, learned novel words rapidly gained access to existing conceptual representations, as effectively as related native-language words. This association of novel lexemes and conceptual information happened fast and almost without effort. Neural networks mediating these integration processes were found within the left temporal lobe, an area typically described as one of the main generators of the N400 response.
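The pairing schedule this abstract describes — each learned word co-occurring mostly with one object and rarely with ten others, nonlearned words spread across many objects — can be sketched in a few lines. The word, object names, and repetition counts below are illustrative assumptions, not the study's actual parameters.

```python
import random

def learned_schedule(word, target, other_objects,
                     target_reps=30, other_reps=3, seed=0):
    """Build a shuffled list of (word, object) exposures: the target
    object appears target_reps times, each other object other_reps times.
    All values here are assumed for illustration."""
    pairs = [(word, target)] * target_reps
    for obj in other_objects:
        pairs += [(word, obj)] * other_reps
    random.Random(seed).shuffle(pairs)
    return pairs

# One dominant pairing among ten infrequent ones:
sched = learned_schedule("flurp", "cup", [f"obj{i}" for i in range(10)])
# 30 + 10*3 = 60 exposures total; half of them with the target object
```

A nonlearned-set schedule would use the same total exposure count but distribute it evenly across objects, so no single word–object association dominates.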
Journal of Cognitive Neuroscience (2005) 17 (10): 1578–1592.
Published: 01 October 2005
Abstract
In music, multiple musical objects often overlap in time. Western polyphonic music contains multiple simultaneous melodic lines (referred to as “voices”) of equal importance. Previous electrophysiological studies have shown that pitch changes in a single melody are automatically encoded in memory traces, as indexed by mismatch negativity (MMN) and its magnetic counterpart (MMNm), and that this encoding process is enhanced by musical experience. In the present study, we examined whether two simultaneous melodies in polyphonic music are represented as separate entities in the auditory memory trace. Musicians and untrained controls were tested in both magnetoencephalography (MEG) and behavioral sessions. Polyphonic stimuli were created by combining two melodies (A and B), each consisting of the same five notes but in a different order. Melody A was in the high voice and Melody B in the low voice in one condition, and this was reversed in the other condition. On 50% of trials, a deviant final (5th) note was played either in the high or in the low voice, and it either went outside the key of the melody or remained within the key. These four deviations occurred with equal probability of 12.5% each. Clear MMNm was obtained for most changes in both groups, despite the 50% deviance level, with a larger amplitude in musicians than in controls. The response pattern was consistent across groups, with larger MMNm for deviants in the high voice than in the low voice, and larger MMNm for in-key than out-of-key changes, despite better behavioral performance for out-of-key changes. The results suggest that melodic information in each voice in polyphonic music is encoded in the sensory memory trace, that the higher voice is more salient than the lower, and that tonality may be processed primarily at cognitive stages subsequent to MMN generation.
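The trial structure described above — 50% standards and four deviant types (high/low voice crossed with in-key/out-of-key) at 12.5% each — amounts to a weighted draw per trial. The labels below are illustrative stand-ins, not taken from the study's materials.

```python
import random

# Assumed labels for the five trial types and their probabilities,
# matching the proportions stated in the abstract.
TRIAL_TYPES = {
    "standard": 0.50,
    "high_in_key": 0.125,
    "high_out_of_key": 0.125,
    "low_in_key": 0.125,
    "low_out_of_key": 0.125,
}

def make_trial_sequence(n_trials, seed=0):
    """Draw a random trial sequence with the probabilities above."""
    rng = random.Random(seed)
    labels = list(TRIAL_TYPES)
    weights = list(TRIAL_TYPES.values())
    return rng.choices(labels, weights=weights, k=n_trials)

seq = make_trial_sequence(1000)
standard_frac = sum(t == "standard" for t in seq) / len(seq)  # close to 0.5
```

The unusually high 50% overall deviance level the abstract highlights is visible here: deviants are as common as standards, which ordinarily weakens the standard's memory trace, yet MMNm was still obtained.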
Journal of Cognitive Neuroscience (2004) 16 (6): 1010–1021.
Published: 01 July 2004
Abstract
In music, melodic information is thought to be encoded in two forms, a contour code (up/down pattern of pitch changes) and an interval code (pitch distances between successive notes). A recent study recording the mismatch negativity (MMN) evoked by pitch contour and interval deviations in simple melodies demonstrated that people with no formal music education process both contour and interval information in the auditory cortex automatically. However, it is still unclear whether musical experience enhances both strategies of melodic encoding. We designed stimuli to examine contour and interval information separately. In the contour condition there were eight different standard melodies (presented on 80% of trials), each consisting of five notes all ascending in pitch, and the corresponding deviant melodies (20%) were altered to descending on their final note. The interval condition used one five-note standard melody transposed to eight keys from trial to trial, and on deviant trials the last note was raised by one whole tone without changing the pitch contour. There was also a control condition, in which a standard tone (990.7 Hz) and a deviant tone (1111.0 Hz) were presented. The magnetic counterpart of the MMN (MMNm) from musicians and nonmusicians was obtained as the difference between the dipole moments in response to standard and deviant trials recorded by magnetoencephalography. MMNm was significantly larger in musicians than in nonmusicians in both contour and interval conditions, whereas MMNm in the control condition was similar for both groups. The interval MMNm was larger than the contour MMNm in musicians. No hemispheric difference was found in either group. The results suggest that musical training enhances the ability to automatically register abstract changes in the relative pitch structure of melodies.
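The two melodic manipulations described above reduce to simple semitone arithmetic: a contour deviant reverses the direction of the final step, while an interval deviant shifts the final note by a whole tone (2 semitones) without changing the up/down pattern, and the interval condition transposes the whole melody between keys. The specific note values below are illustrative assumptions, not the study's actual stimuli.

```python
def to_freq(semitones_from_a4):
    """Equal-tempered frequency for a pitch n semitones above A4 (440 Hz)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# An assumed five-note ascending standard melody, in semitones.
standard = [0, 2, 4, 5, 7]

# Contour deviant: the final note descends instead of ascends.
contour_deviant = standard[:-1] + [standard[-2] - 2]

# Interval deviant: the final note is raised by one whole tone;
# the melody still ascends, so the contour is preserved.
interval_deviant = standard[:-1] + [standard[-1] + 2]

def transpose(melody, key_shift):
    """Shift every note by key_shift semitones (a change of key),
    leaving both contour and intervals intact."""
    return [n + key_shift for n in melody]
```

Transposition is what isolates interval encoding: because the absolute pitches change from trial to trial, only the relative distances between notes can define the standard against which a deviant is detected.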