Terence W. Picton
Journal Articles
Journal of Cognitive Neuroscience (2020) 32 (3): 379–385.
Published: 01 March 2020
Journal Articles
Journal of Cognitive Neuroscience (2006) 18 (1): 1–13.
Published: 01 January 2006
Abstract
A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams or watched a muted movie. The stimuli were pure-tone ABA− patterns that repeated for 10.8 sec, with a stimulus onset asynchrony between A and B tones of 100 msec, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and "−" was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with Δf and correlated with perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA− patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than at left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: automatic segregation of sounds and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
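As a rough illustration of the stimulus structure described in this abstract (ABA− triplets with a 100-msec onset asynchrony, A fixed at 500 Hz, B at 500, 625, 750, or 1000 Hz, repeating for 10.8 sec), the Python sketch below generates one such tone sequence. The sampling rate, tone duration, and ramp length are assumed values for illustration only and are not taken from the paper.

```python
import numpy as np

FS = 44100          # sampling rate in Hz; assumed, not from the paper
SOA = 0.100         # stimulus onset asynchrony between tones (100 msec)
TONE_DUR = 0.050    # tone duration; assumed value for illustration
TRIAL_DUR = 10.8    # repeating pattern lasts 10.8 sec (from the abstract)

def pure_tone(freq, dur, fs=FS):
    """Pure tone with brief cosine on/off ramps to avoid clicks (ramps assumed)."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * fs)  # 5-msec ramps; assumed
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env

def aba_sequence(b_freq, a_freq=500.0):
    """Concatenate ABA- triplets (A, B, A, silence) to fill one 10.8-sec trial."""
    slot = np.zeros(int(SOA * FS))          # one 100-msec time slot
    def place(freq):
        s = slot.copy()
        tone = pure_tone(freq, TONE_DUR)
        s[:len(tone)] = tone
        return s
    triplet = np.concatenate([place(a_freq), place(b_freq), place(a_freq), slot])
    n_triplets = int(TRIAL_DUR / (4 * SOA))  # 400 msec per ABA- cycle
    return np.tile(triplet, n_triplets)

# B-tone frequencies used in the experiment (Hz)
trials = {b: aba_sequence(b) for b in (500, 625, 750, 1000)}
```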
Journal Articles
Journal of Cognitive Neuroscience (1989) 1 (4): 336–355.
Published: 01 October 1989
Abstract
The intracerebral generators of the human auditory evoked potentials were estimated using dipole source analysis of 14-channel scalp recordings. The response to a 400-msec toneburst presented every 0.9 sec could be explained by three major dipole sources in each temporal lobe. The first was a vertically oriented dipole located on the supratemporal plane in or near the auditory koniocortex. This contributed to the scalp-recorded N1 wave at 100 msec. The second was a vertically oriented dipole source located on the supratemporal plane somewhat anterior to the first. This contributed to both the N1 and the sustained potential (SP). The third was a laterally oriented dipole source that perhaps originated in the magnopyramidal temporal field. This contributed a negative wave to the lateral scalp recordings at a latency of 145 msec. A change in the frequency of the toneburst elicited an additional negativity in the scalp recording, the mismatch negativity (MMN). When the frequency change was large, the mismatch negativity was composed of two distinct sources with sequential but partially overlapping activities. The earlier corresponded to the N1 dipole sources and the later to a more anteriorly located dipole with an orientation more lateral than N1. Only the later source was active when the frequency change was small. MMN source activities peaked about 15 msec earlier in the contralateral hemisphere, whereas this difference was only 4 msec for the sources of the N1.
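As a minimal sketch of the spatio-temporal modeling idea behind dipole source analysis, the following Python example treats the 14-channel scalp data as a weighted sum of a few fixed source topographies, each multiplied by a source time course, and recovers the time courses by least squares. The topography matrix and waveforms here are placeholder data for illustration; this is not the dipole-fitting procedure actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources, n_times = 14, 3, 300   # 14-channel recording, 3 dipole sources

# Placeholder topographies: each column is the scalp pattern one dipole would
# produce (in a real analysis, derived from a head model and fitted dipole).
L = rng.standard_normal((n_channels, n_sources))

# Placeholder source time courses: two transient components and one sustained one.
t = np.arange(n_times)
S_true = np.vstack([
    np.exp(-0.5 * ((t - 100) / 15) ** 2),   # transient component near sample 100
    np.exp(-0.5 * ((t - 145) / 20) ** 2),   # slightly later transient component
    np.clip((t - 80) / 60, 0, 1),           # slow, sustained component
])

# Forward model: scalp data = topographies @ source waveforms + noise.
X = L @ S_true + 0.05 * rng.standard_normal((n_channels, n_times))

# Given fixed topographies, the source waveforms follow by least squares.
S_est, *_ = np.linalg.lstsq(L, X, rcond=None)

# The estimated first source waveform should closely track the true one.
print(np.corrcoef(S_true[0], S_est[0])[0, 1])
```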