Claude Alain
Journal of Cognitive Neuroscience (2022) 34 (5): 846–863.
Published: 31 March 2022
The brain's ability to extract information from multiple sensory channels is crucial to perception and effective engagement with the environment, but the individual differences observed in multisensory processing lack mechanistic explanation. We hypothesized that, from the perspective of information theory, individuals with more effective multisensory processing will exhibit a higher degree of shared information among distributed neural populations while engaged in a multisensory task, representing more effective coordination of information among regions. To investigate this, healthy young adults completed an audiovisual simultaneity judgment task to measure their temporal binding window (TBW), which quantifies the ability to distinguish fine discrepancies in timing between auditory and visual stimuli. EEG was then recorded during a second run of the simultaneity judgment task, and partial least squares was used to relate individual differences in the TBW width to source-localized EEG measures of local entropy and mutual information, indexing local and distributed processing of information, respectively. The narrowness of the TBW, reflecting more effective multisensory processing, was related to a broad pattern of higher mutual information and lower local entropy at multiple timescales. Furthermore, a small group of temporal and frontal cortical regions, including those previously implicated in multisensory integration and response selection, respectively, played a prominent role in this pattern. Overall, these findings suggest that individual differences in multisensory processing are related to widespread individual differences in the balance of distributed versus local information processing among a large subset of brain regions, with more distributed information being associated with more effective multisensory processing. The balance of distributed versus local information processing may therefore be a useful measure for exploring individual differences in multisensory processing, its relationship to higher cognitive traits, and its disruption in neurodevelopmental disorders and clinical conditions.
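As a rough, self-contained illustration of the measures named in this abstract, the sketch below estimates local (Shannon) entropy and pairwise mutual information from two simulated regional time series, coarse-grained to a few timescales. Everything here is an assumption for demonstration (histogram estimators, 32 bins, synthetic signals); it is not the study's source-localization and partial-least-squares pipeline.

```python
# Minimal sketch: histogram-based entropy and mutual information for two
# simulated "regions", coarse-grained to several timescales. Illustrative
# only; parameters (bin count, scales) are assumptions, not the study's.
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (multiscale step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def entropy(x, bins=32):
    """Shannon entropy (bits) of one signal's amplitude distribution."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=32):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h_xy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    return h_x + h_y - h_xy

# Synthetic demo: a shared drive raises mutual information between regions.
rng = np.random.default_rng(0)
shared = rng.standard_normal(10_000)
region_a = shared + 0.5 * rng.standard_normal(10_000)
region_b = shared + 0.5 * rng.standard_normal(10_000)
for scale in (1, 2, 4):  # coarser scales probe slower timescales
    a, b = coarse_grain(region_a, scale), coarse_grain(region_b, scale)
    print(scale, entropy(a), mutual_information(a, b))
```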
Journal of Cognitive Neuroscience (2020) 32 (10): 1851–1863.
Published: 01 October 2020
Selective attention to sound object features such as pitch and location is associated with enhanced brain activity in ventral and dorsal streams, respectively. We examined the role of these pathways in involuntary orienting and conflict resolution using fMRI. Participants were presented with two tones that might or might not share the same nonspatial (frequency) or spatial (location) auditory features. In separate blocks of trials, participants were asked to attend to sound frequency or sound location and to ignore changes in the task-irrelevant feature. In both attend-frequency and attend-location tasks, RTs were slower when the task-irrelevant feature changed than when it stayed the same (involuntary orienting). This behavioral cost coincided with enhanced activity in the pFC and superior temporal gyrus. Conflict resolution was examined by comparing situations in which the change in stimulus features was congruent (both features changed) or incongruent (only one feature changed). Participants were slower and less accurate for incongruent than congruent sound features. This congruency effect was associated with enhanced activity in the pFC and was greater in the right superior temporal gyrus and medial frontal cortex during the attend-location task than during the attend-frequency task. Together, these findings do not support a strict division of "labor" into ventral and dorsal streams but rather suggest interactions between these pathways in situations involving changes in a task-irrelevant sound feature and conflict resolution. These findings also validate the Test of Attention in Listening task by revealing distinct neural correlates for involuntary orienting and conflict resolution.
Journal of Cognitive Neuroscience (2015) 27 (11): 2186–2196.
Published: 01 November 2015
Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one, as occurs when a sound contains a mistuned harmonic among otherwise in-tune harmonics. This impairment in gap detection may reflect interactions in low-level encoding or the division of attention between two sound objects, either of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap, irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning in either the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave thought to index attention- and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentive sensory encoding.
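The 2 x 2 stimulus design (gap and/or mistuned harmonic) is easy to make concrete. The sketch below synthesizes the four stimulus types; the parameter values (f0, harmonic count, which harmonic is mistuned, gap timing and duration) are illustrative assumptions, not the study's exact settings.

```python
# Minimal sketch: complex tones that may carry a gap, a mistuned harmonic,
# both, or neither. All parameter values below are assumptions.
import numpy as np

FS = 44_100  # sampling rate (Hz)

def complex_tone(f0=200.0, n_harmonics=10, dur=0.5,
                 mistune_harmonic=None, mistune_pct=0.0,
                 gap_at=None, gap_dur=0.005):
    t = np.arange(int(dur * FS)) / FS
    wave = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        f = h * f0
        if h == mistune_harmonic:
            f *= 1.0 + mistune_pct / 100.0  # shift one partial off its slot
        wave += np.sin(2 * np.pi * f * t)
    wave /= n_harmonics
    if gap_at is not None:  # insert a brief silent interval (no ramps here)
        i0 = int(gap_at * FS)
        wave[i0:i0 + int(gap_dur * FS)] = 0.0
    return wave

# The four conditions: {tuned, mistuned} x {no gap, gap}.
conditions = {
    "tuned":        complex_tone(),
    "tuned+gap":    complex_tone(gap_at=0.25),
    "mistuned":     complex_tone(mistune_harmonic=2, mistune_pct=8),
    "mistuned+gap": complex_tone(mistune_harmonic=2, mistune_pct=8, gap_at=0.25),
}
```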
Journal of Cognitive Neuroscience (2014) 26 (12): 2840–2862.
Published: 01 December 2014
EEG studies employing time–frequency analysis have revealed changes in theta and alpha power in a variety of language and memory tasks. Semantic and syntactic violations embedded in sentences evoke well-known ERPs, but little is known about the oscillatory responses to these violations. We investigated oscillatory responses to both kinds of violations, while monolingual and bilingual participants performed an acceptability judgment task. Both violations elicited power decreases (event-related desynchronization, ERD) in the 8–30 Hz frequency range, but with different scalp topographies. In addition, semantic anomalies elicited power increases (event-related synchronization, ERS) in the 1–5 Hz frequency band. The 1–5 Hz ERS was strongly phase-locked to stimulus onset and highly correlated with time domain averages, whereas the 8–30 Hz ERD response varied independently of these. In addition, the results showed that language expertise modulated 8–30 Hz ERD for syntactic violations as a function of the executive demands of the task. When the executive function demands were increased using a grammaticality judgment task, bilinguals but not monolinguals demonstrated reduced 8–30 Hz ERD for syntactic violations. These findings suggest a putative role of the 8–30 Hz ERD response as a marker of linguistic processing that likely represents a separate neural process from those underlying ERPs.
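The 8–30 Hz ERD measure described here is conventionally computed as percent power change from a prestimulus baseline. The sketch below shows one common way to do this (bandpass filter plus Hilbert envelope); the band edges follow the abstract, while the filter order, epoch timing, and baseline window are assumptions.

```python
# Minimal sketch: event-related desynchronization (ERD) as percent power
# change from baseline in the 8-30 Hz band. Demo data are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # sampling rate (Hz)

def erd_percent(epochs, fs=FS, band=(8.0, 30.0), baseline=(0.0, 0.5)):
    """epochs: (n_trials, n_samples); baseline in seconds from epoch start."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2  # instantaneous power
    power = power.mean(axis=0)                       # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[i0:i1].mean()
    return 100.0 * (power - ref) / ref               # <0 = ERD, >0 = ERS

# Synthetic demo: 10 Hz power drops after a "stimulus onset" at 0.5 s.
rng = np.random.default_rng(1)
t = np.arange(int(1.5 * FS)) / FS
trials = rng.standard_normal((40, t.size))
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 0.5, 1.0, 0.3)
erd = erd_percent(trials + alpha)
print(erd.min())  # strongly negative after onset, i.e., ERD
```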
Journal of Cognitive Neuroscience (2013) 25 (4): 503–516.
Published: 01 April 2013
The ability to separate concurrent sounds based on periodicity cues is critical for parsing complex auditory scenes. This ability is enhanced in young adult musicians and reduced in older adults. Here, we investigated the impact of lifelong musicianship on concurrent sound segregation and perception using scalp-recorded ERPs. Older and younger musicians and nonmusicians were presented with periodic harmonic complexes where the second harmonic could be tuned or mistuned by 1–16% of its original value. The likelihood of perceiving two simultaneous sounds increased with mistuning, and musicians, both older and younger, were more likely to detect and report hearing two sounds when the second harmonic was mistuned at or above 2%. The perception of a mistuned harmonic as a separate sound was paralleled by an object-related negativity that was larger and earlier in younger musicians compared with the other three groups. When listeners made a judgment about the harmonic stimuli, the perception of the mistuned harmonic as a separate sound was paralleled by a positive wave at about 400 msec poststimulus (P400), which was enhanced in both older and younger musicians. These findings suggest that attention-dependent processing of a mistuned harmonic is enhanced in older musicians and provide further evidence that age-related declines in hearing abilities are mitigated by musical training.
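The object-related negativity (ORN) reported here is a difference wave: the ERP to mistuned complexes minus the ERP to tuned complexes. A minimal sketch of that subtraction and peak measurement follows; the 120–240 msec search window and epoch timing are assumptions, not the study's values.

```python
# Minimal sketch: ORN as the mistuned-minus-tuned difference wave, with a
# simple peak search. Window and sampling parameters are assumptions.
import numpy as np

FS = 500            # sampling rate (Hz)
EPOCH_START = -0.1  # epoch onset in seconds relative to the sound

def orn_peak(erp_mistuned, erp_tuned, fs=FS, window=(0.12, 0.24)):
    """ERPs: 1-D trial-averaged waveforms from a frontocentral site."""
    diff = erp_mistuned - erp_tuned          # ORN = mistuned minus tuned
    i0 = int((window[0] - EPOCH_START) * fs)
    i1 = int((window[1] - EPOCH_START) * fs)
    i_peak = i0 + np.argmin(diff[i0:i1])     # ORN is a negativity
    return diff[i_peak], EPOCH_START + i_peak / fs  # amplitude, latency

# Synthetic demo: plant a negativity near 170 msec and recover it.
rng = np.random.default_rng(3)
t = np.arange(int(0.6 * FS)) / FS + EPOCH_START
tuned = rng.standard_normal(t.size) * 0.1
mistuned = tuned - 1.5 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
amp, lat = orn_peak(mistuned, tuned)
```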
Journal of Cognitive Neuroscience (2012) 24 (6): 1286–1293.
Published: 01 June 2012
Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after they played an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom–up attentional processes were little affected by only 10 hr of game play. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top–down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.
Journal of Cognitive Neuroscience (2011) 23 (7): 1609–1623.
Published: 01 July 2011
The present study examined the modality specificity and spatio-temporal dynamics of "what" and "where" preparatory processes in anticipation of auditory and visual targets using ERPs and a cue–target paradigm. Participants were presented with an auditory (Experiment 1) or a visual (Experiment 2) cue that signaled them to attend to the identity or location of an upcoming auditory or visual target. In both experiments, participants responded faster in the location than in the identity conditions. Multivariate spatio-temporal partial least squares (ST-PLS) analysis of the scalp-recorded data revealed supramodal "where" preparatory processes in the 300–600 msec and 600–1200 msec intervals at central and posterior parietal electrode sites in anticipation of both auditory and visual targets. Furthermore, preparation for pitch processing was captured at modality-specific temporal regions between 300 and 700 msec, and preparation for shape processing was detected at occipital electrode sites between 700 and 1150 msec. These spatio-temporal patterns were replicated when a visual cue signaled the upcoming response (Experiment 2). Pitch or shape preparation exhibited modality-dependent spatio-temporal patterns, whereas preparation for target localization was associated with larger amplitude deflections at multimodal, centro-parietal sites preceding both auditory and visual targets. Using a novel paradigm, the study supports the notion of a division of labor in the auditory and visual pathways following both auditory and visual cues that signal identity or location response preparation to upcoming auditory or visual targets.
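ST-PLS implementations vary, but the algebraic core shared by PLS variants is an SVD of a cross-covariance between brain data and the design or behavior. The sketch below shows only that core, in a behavioral-PLS flavor; the matrix shapes are assumptions, and the permutation and bootstrap steps used in real PLS analyses are omitted.

```python
# Minimal sketch: the SVD core of a PLS brain-behavior analysis. Shapes
# and data are illustrative; resampling inference is omitted.
import numpy as np

def pls_brain_behavior(brain, behavior):
    """brain: (n_subjects, n_spacetime_points); behavior: (n_subjects, n_measures)."""
    bz = (brain - brain.mean(0)) / brain.std(0)
    yz = (behavior - behavior.mean(0)) / behavior.std(0)
    cross = yz.T @ bz / (len(bz) - 1)  # behavior-by-brain cross-covariance
    u, s, vt = np.linalg.svd(cross, full_matrices=False)
    # Each latent variable pairs a behavioral profile (a row pattern in u)
    # with a spatio-temporal brain pattern (a row of vt); s**2 apportions
    # the cross-covariance explained by each pair.
    return u, s, vt

rng = np.random.default_rng(2)
brain = rng.standard_normal((20, 1000))   # e.g., electrodes x time, flattened
behavior = rng.standard_normal((20, 2))   # e.g., RTs in two conditions
u, s, vt = pls_brain_behavior(brain, behavior)
```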
Journal of Cognitive Neuroscience (2010) 22 (2): 392–403.
Published: 01 February 2010
Perceptual learning is sometimes characterized by rapid improvements in performance within the first hour of training (fast perceptual learning), which may be accompanied by changes in sensory and/or response pathways. Here, we report rapid physiological changes in the human auditory system that coincide with learning during a 1-hour test session in which participants learned to identify two consonant-vowel syllables that differed in voice onset time. Within each block of trials, listeners were also presented with a broadband noise control stimulus to determine whether changes in auditory evoked potentials were specific to the trained speech cue. The ability to identify the speech sounds improved from the first to the fourth block of trials and remained relatively constant thereafter. This behavioral improvement coincided with a decrease in N1 and P2 amplitude, and these learning-related changes differed from those observed for the noise stimulus. These training-induced changes in sensory evoked responses were followed by an increased negative peak (between 275 and 330 msec) over fronto-central sites and by an increase in sustained activity over the parietal regions. Although the former was also observed for the noise stimulus, the latter was specific to the speech sounds. The results are consistent with a top–down nonspecific attention effect on neural activity during learning as well as a more learning-specific modulation, which is coincident with behavioral improvements in speech identification.
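Tracking N1 and P2 amplitude across training blocks, as described here, amounts to a windowed peak measurement on each block's average ERP. A minimal sketch follows; the window boundaries are conventional approximations, not the study's values.

```python
# Minimal sketch: N1/P2 peak amplitudes per training block from
# trial-averaged ERPs. Windows and timing constants are assumptions.
import numpy as np

FS = 500            # sampling rate (Hz)
EPOCH_START = -0.1  # epoch onset in seconds relative to the sound

def peak_in_window(erp, t0, t1, polarity, fs=FS):
    """Most negative (polarity<0) or positive (polarity>0) value in [t0, t1]."""
    i0 = int((t0 - EPOCH_START) * fs)
    i1 = int((t1 - EPOCH_START) * fs)
    seg = erp[i0:i1]
    return seg.min() if polarity < 0 else seg.max()

def n1_p2_by_block(block_erps):
    """block_erps: (n_blocks, n_samples), one averaged ERP per block."""
    n1 = [peak_in_window(e, 0.08, 0.14, -1) for e in block_erps]  # negativity
    p2 = [peak_in_window(e, 0.15, 0.25, +1) for e in block_erps]  # positivity
    return np.array(n1), np.array(p2)
```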
Journal of Cognitive Neuroscience (2009) 21 (8): 1488–1498.
Published: 01 August 2009
The ability to segregate simultaneously occurring sounds is fundamental to auditory perception. Many studies have shown that musicians have enhanced auditory perceptual abilities; however, the impact of musical expertise on segregating concurrently occurring sounds is unknown. Therefore, we examined whether long-term musical training can improve listeners' ability to segregate sounds that occur simultaneously. Participants were presented with complex sounds that had either all harmonics in tune or the second harmonic mistuned by 1%, 2%, 4%, 8%, or 16% of its original value. The likelihood of hearing two sounds simultaneously increased with mistuning, and this effect was greater in musicians than nonmusicians. The segregation of the mistuned harmonic from the harmonic series was paralleled by an object-related negativity that was larger and peaked earlier in musicians. It also coincided with a late positive wave, referred to as the P400, whose amplitude was larger in musicians than in nonmusicians. The behavioral and electrophysiological effects of musical expertise were specific to processing the mistuned harmonic, as the N1, the N1c, and the P2 waves elicited by the tuned stimuli were comparable in musicians and nonmusicians. These results demonstrate that listeners' ability to segregate concurrent sounds based on harmonicity is modulated by experience and provide a basis for further studies assessing the potential rehabilitative effects of musical training on solving complex scene analysis problems illustrated by the cocktail party example.
Journal of Cognitive Neuroscience (2008) 20 (2): 285–295.
Published: 01 February 2008
There is strong evidence for dissociable "what" and "where" pathways in the auditory system, but considerable debate remains regarding the functional role of these pathways. The sensory-motor account of spatial processing posits that dorsal brain regions (e.g., inferior parietal lobule, IPL) mediate the sensory-motor integration required during "where" responding. An alternative account suggests that the IPL plays an important role in monitoring sound location. To test these two models, we used a mixed-block and event-related functional magnetic resonance imaging (fMRI) design in which participants responded to occasional repetitions in either sound location ("where" task) or semantic category ("what" task). The fMRI data were analyzed with the general linear model using separate regressors to represent sustained and transient activity in both listening conditions. This analysis revealed more sustained activity in right dorsal brain regions, including the IPL and superior frontal sulcus, during the location than during the category task, after accounting for transient activity related to target detection and the motor response. Conversely, we found greater sustained activity in the left superior temporal gyrus and left inferior frontal gyrus during the category task than during the location task. Transient target-related activity in both tasks was associated with enhanced signal in the left pre- and postcentral gyrus, prefrontal cortex, and bilateral IPL. These results suggest dual roles for the right IPL in auditory working memory—one involved in monitoring and updating sound location independent of motor responding, and another that underlies the integration of sensory and motor functions.
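The analysis described here hinges on separating sustained (block-length) from transient (event-locked) activity in one design matrix. The sketch below builds both regressor types by convolving boxcar and impulse time courses with a canonical double-gamma HRF; all timing values are illustrative, not the study's.

```python
# Minimal sketch: a GLM design matrix with separate sustained (block) and
# transient (target-event) regressors, each convolved with an SPM-style
# double-gamma HRF. Timing values below are assumptions.
import numpy as np
from scipy.stats import gamma

TR = 2.0        # seconds per volume
N_VOLS = 200    # scan length in volumes

def hrf(tr=TR, duration=32.0):
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # canonical double gamma

def regressor(onsets, durations, tr=TR, n_vols=N_VOLS):
    box = np.zeros(n_vols)
    for on, dur in zip(onsets, durations):  # dur=0 gives a single-sample event
        box[int(on / tr):int((on + dur) / tr) + 1] = 1.0
    return np.convolve(box, hrf())[:n_vols]

# Sustained: 30-s listening blocks; transient: brief target repetitions.
sustained = regressor(onsets=[20, 120, 220], durations=[30, 30, 30])
transient = regressor(onsets=[26, 34, 128, 230], durations=[0, 0, 0, 0])
X = np.column_stack([sustained, transient, np.ones(N_VOLS)])  # + intercept
# Per-voxel betas by ordinary least squares: beta = np.linalg.pinv(X) @ y
```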
Journal of Cognitive Neuroscience (2007) 19 (11): 1815–1826.
Published: 01 November 2007
Unlike most other objects, which are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing, following face inversion and contrast reversal, as an increase in the amplitude of the N170, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.
Journal of Cognitive Neuroscience (2006) 18 (12): 2108–2129.
Published: 01 November 2006
People often remain "blind" to visual changes occurring during a brief interruption of the display. The processing stages responsible for such failure remain unresolved. We used event-related potentials to determine the time course of brain activity during conscious change detection versus change blindness. Participants saw two successive visual displays, each with two faces, and reported whether one of the faces changed between the first and second displays. Relative to blindness, change detection was associated with a distinct pattern of neural activity at several successive processing stages, including an enhanced occipital P1 response and sustained frontal activity (a CNV-like potential) after the first display, before the change itself. The amplitudes of the N170 and P3 responses after the second visual display were also modulated by awareness of the face change. Furthermore, a unique topography of event-related potential activity was observed during correct change and correct no-change reports, but not during blindness, with a recurrent time course in the stimulus sequence and simultaneous sources in the parietal and temporo-occipital cortex. These results indicate that awareness of visual changes may depend on the attentional state subserved by coordinated neural activity in a distributed network, before the onset of the change itself.
Journal of Cognitive Neuroscience (2006) 18 (1): 1–13.
Published: 01 January 2006
A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams or watched a muted movie. The stimuli were pure-tone ABA- patterns that repeated for 10.8 sec, with a stimulus onset asynchrony between A and B tones of 100 msec, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and "-" was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with Δf and correlated with the perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA- patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: the automatic segregation of sounds and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
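The ABA- paradigm is simple to reproduce in code. The sketch below synthesizes the sequences as described above (A fixed at 500 Hz, B varied, "-" a silence, 100-msec SOA, 10.8-sec trials); the tone duration within each SOA slot is an assumption, and onset/offset ramps are omitted.

```python
# Minimal sketch: ABA- triplet sequences for auditory streaming. The
# 50-msec tone duration is an assumption; ramps are omitted.
import numpy as np

FS = 44_100
SOA = 0.100       # stimulus onset asynchrony between tones (s)
TONE_DUR = 0.050  # assumed tone duration within each SOA slot (s)

def tone(freq, dur=TONE_DUR, fs=FS):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def aba_sequence(b_freq, a_freq=500.0, total_dur=10.8, fs=FS):
    slot = int(SOA * fs)
    n_triplets = int(total_dur / (4 * SOA))  # A B A - spans four SOA slots
    seq = np.zeros(n_triplets * 4 * slot)
    for k in range(n_triplets):
        base = k * 4 * slot
        for j, f in enumerate((a_freq, b_freq, a_freq, None)):  # None = "-"
            if f is not None:
                seq[base + j * slot: base + j * slot + int(TONE_DUR * fs)] = tone(f)
    return seq

# Δf = B - 500 Hz grows across these conditions, promoting streaming.
sequences = {b: aba_sequence(b) for b in (500, 625, 750, 1000)}
```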
Journal of Cognitive Neuroscience (2005) 17 (5): 811–818.
Published: 01 May 2005
The discrimination of concurrent sounds is paramount to speech perception. During social gatherings, listeners must extract information from a composite acoustic wave, which sums multiple individual voices that are simultaneously active. The observers' ability to identify two simultaneously presented vowels improves with increasing separation between the fundamental frequencies (f0) of the two vowels. Event-related potentials to stimuli presented during attend and ignore conditions revealed activity between 130 and 170 msec after sound onset that reflected the f0 differences between the two vowels. Another, more posterior and right-lateralized, negative wave maximal at 250 msec, and a central-parietal slow negativity were observed only during vowel identification and may index stimulus categorization. This sequence of neural events supports a multistage model of auditory scene analysis in which the spectral pattern of each vowel constituent is automatically extracted and then matched against representations of those vowels in working memory.
Journal of Cognitive Neuroscience (2005) 17 (5): 819–831.
Published: 01 May 2005
Spatial and nonspatial auditory tasks preferentially recruit dorsal and ventral brain areas, respectively. However, the extent to which these auditory differences reflect specific aspects of mental processing has not been directly studied. In the present functional magnetic resonance imaging experiment, participants encoded and maintained either the location or the identity of a sound for a delay period of several seconds and then subsequently compared that information with a second sound. Relative to sound localization, sound identification was associated with greater hemodynamic activity in the left rostral superior temporal gyrus. In contrast, localizing sounds recruited greater activity in the parietal cortex, posterior temporal lobe, and superior frontal sulcus. The identification differences were most prominent during the early stage of the trial, whereas the location differences were most evident during the late (i.e., comparison) stage. Accordingly, our results suggest that auditory spatial and identity dissociations as revealed by functional imaging may be dependent to some degree on the type of processing being carried out. In addition, dorsolateral prefrontal and lateral superior parietal areas showed greater activity during the comparison as opposed to the earlier stage of the trial, regardless of the type of auditory task, consistent with results from visual working memory studies.
Journal of Cognitive Neuroscience (2003) 15 (7): 1063–1073.
Published: 01 October 2003
The effects of attention on the neural processes underlying auditory scene analysis were investigated through the manipulation of auditory task load. Participants were asked to focus their attention on tuned and mistuned stimuli presented to one ear and to ignore similar stimuli presented to the other ear. For both tuned and mistuned sounds, long (standard) and shorter (deviant) duration stimuli were presented in both ears. Auditory task load was manipulated by varying task instructions. In the easier condition, participants were asked to press a button for deviant sounds (targets) at the attended location, irrespective of tuning. In the harder condition, participants were further asked to identify whether the targets were tuned or mistuned. Participants were faster in detecting targets defined by duration only than by both duration and tuning. At the unattended location, deviant stimuli generated a mismatch negativity wave at frontocentral sites whose amplitude decreased with increasing task demand. In comparison, standard mistuned stimuli generated an object-related negativity at central sites whose amplitude was not affected by task difficulty. These results show that the processing of sound sequences is affected by attentional load differently than is the processing of sounds that occur simultaneously (i.e., sequential vs. simultaneous grouping processes), and that the two recruit distinct neural networks.
Journal of Cognitive Neuroscience (2002) 14 (3): 430–442.
Published: 01 April 2002
Most work on how pitch is encoded in the auditory cortex has focused on tonotopic (absolute) pitch maps. However, melodic information is thought to be encoded in the brain in two different "relative pitch" forms, a domain-general contour code (up/down pattern of pitch changes) and a music-specific interval code (exact pitch distances between notes). Event-related potentials were analyzed in nonmusicians from both passive and active oddball tasks where either the contour or the interval of melody-final notes was occasionally altered. The occasional deviant notes generated a right frontal positivity peaking around 350 msec and a central parietal P3b peaking around 580 msec that were present only when participants focused their attention on the auditory stimuli. Both types of melodic information were encoded automatically in the absence of absolute pitch cues, as indexed by a mismatch negativity wave recorded during the passive conditions. The results indicate that even in the absence of musical training, the brain is set up to automatically encode music-specific melodic information, even when absolute pitch information is not available.
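The contour and interval codes contrasted here have a direct computational reading. The sketch below derives both from a melody given as MIDI note numbers (an illustrative encoding; the note values are invented for demonstration).

```python
# Minimal sketch: the two "relative pitch" codes, from MIDI note numbers.
import numpy as np

def contour(midi_notes):
    """Domain-general code: only the up/down/same pattern of pitch changes."""
    return np.sign(np.diff(midi_notes))

def intervals(midi_notes):
    """Music-specific code: exact pitch distances between notes, in semitones."""
    return np.diff(midi_notes)

standard = [60, 64, 62, 65, 64]          # final step: down 1 semitone
contour_deviant = [60, 64, 62, 65, 67]   # final step goes up: contour violated
interval_deviant = [60, 64, 62, 65, 62]  # still down, but 3 semitones:
                                         # interval violated, contour preserved

# A contour violation changes both codes; an interval violation changes
# only the interval code.
print(contour(standard), contour(contour_deviant), contour(interval_deviant))
print(intervals(standard), intervals(interval_deviant))
```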
Journal of Cognitive Neuroscience (2001) 13 (4): 492–509.
Published: 15 May 2001
The mechanisms of auditory feature processing and conjunction were examined with event-related brain potential (ERP) recording in a task in which participants responded to target tones defined by the combination of location, frequency, and duration features amid distractor tones varying randomly along all feature dimensions. Attention effects were isolated as negative difference (Nd) waves by subtracting ERPs to tones with no target features from ERPs to tones with one, two, or three target features. Nd waves were seen to all tones sharing a single feature with the target, including tones sharing only target duration. Nd waves associated with the analysis of frequency and location features began at latencies of 60 msec, whereas Nd-Duration waves began at 120 msec. Nd waves to tones with single target features continued until 400+ msec, suggesting that once begun, the analysis of tone features continued exhaustively to conclusion. Nd-Frequency and Nd-Location waves had distinct scalp distributions, consistent with generation in different auditory cortical areas. Three stages of feature processing were identified: (1) Parallel feature processing (60–140 msec): Nd waves combined linearly, such that Nd-wave amplitudes following tones with two or three target features were equal to the sum of the Nd waves elicited by tones with only one target feature. (2) Conjunction-specific (CS) processing (140–220 msec): Nd amplitudes were enhanced following tones with any pair of attended features. (3) Target-specific (TS) processing (220–300 msec): Nd amplitudes were specifically enhanced to target tones with all three features. These results are consistent with a facilitatory interactive feature analysis (FIFA) model in which feature conjunction is associated with the amplified processing of individual stimulus features. Activation of N-methyl-D-aspartate (NMDA) receptors is proposed to underlie the FIFA process.
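The linearity claim in stage (1) above implies a simple test: within the 60–140 msec window, the Nd wave to a two-feature tone should equal the sum of the two single-feature Nd waves. A minimal sketch of that comparison follows; the sampling rate and the choice of a mean-amplitude measure are assumptions.

```python
# Minimal sketch: testing additivity of Nd waves in the parallel stage.
# Inputs are difference waves (attended-feature ERP minus no-feature ERP).
import numpy as np

FS = 500          # sampling rate (Hz)
EPOCH_START = 0.0  # epoch assumed to begin at tone onset

def window_mean(wave, t0=0.060, t1=0.140, fs=FS):
    """Mean amplitude in the parallel-processing window (60-140 msec)."""
    i0 = int((t0 - EPOCH_START) * fs)
    i1 = int((t1 - EPOCH_START) * fs)
    return wave[i0:i1].mean()

def additivity(nd_freq, nd_loc, nd_freq_loc):
    """Compare the observed two-feature Nd with the single-feature sum."""
    observed = window_mean(nd_freq_loc)
    predicted = window_mean(nd_freq) + window_mean(nd_loc)
    return observed, predicted, observed - predicted  # ~0 implies additivity
```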