Michael A. Pitts
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2018) 30 (4): 498–513.
Published: 01 April 2018
Figures: 12
Abstract
In auditory–visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111–121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150–210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420–480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
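The Meijer algorithm referenced above scans an image column by column from left to right, mapping each pixel row to a sine-wave frequency (higher rows to higher frequencies) and pixel brightness to that sine's amplitude. A minimal sketch of this kind of row-to-frequency, brightness-to-loudness conversion (not Meijer's exact implementation; the function name and all parameter values are illustrative):

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, fs=44100, fmin=500.0, fmax=5000.0):
    """Sketch of a Meijer-style image-to-sound conversion.

    image: 2-D array of pixel brightness values (rows x columns)
    The image is scanned column by column, left to right. Each row is
    assigned a sine frequency (top row = highest), and brightness
    weights that sine's amplitude within the column's time slice.
    """
    n_rows, n_cols = image.shape
    # Exponentially spaced frequencies, highest at the top row
    freqs = fmin * (fmax / fmin) ** (np.arange(n_rows)[::-1] / (n_rows - 1))
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    columns = []
    for c in range(n_cols):
        # Sum of sinusoids weighted by the pixel brightness in this column
        col = (image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        columns.append(col)
    sound = np.concatenate(columns)
    # Normalize to [-1, 1] for playback
    peak = np.max(np.abs(sound))
    return sound / peak if peak > 0 else sound
```

With this mapping, a bright horizontal bar produces a steady tone at one frequency, while a diagonal line produces a rising or falling sweep — the kind of simple, intuitive rule the abstract describes participants learning rapidly.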
Journal of Cognitive Neuroscience (2012) 24 (2): 287–303.
Published: 01 February 2012
Figures: 6
Abstract
An inattentional blindness paradigm was adapted to measure ERPs elicited by visual contour patterns that were or were not consciously perceived. In the first phase of the experiment, subjects performed an attentionally demanding task while task-irrelevant line segments formed square-shaped patterns or random configurations. After the square patterns had been presented 240 times, subjects' awareness of these patterns was assessed. More than half of all subjects, when queried, failed to notice the square patterns and were thus considered inattentionally blind during this first phase. In the second phase of the experiment, the task and stimuli were the same, but following this phase, all of the subjects reported having seen the patterns. ERPs recorded over the occipital pole differed in amplitude from 220 to 260 msec for the pattern stimuli compared with the random arrays regardless of whether subjects were aware of the patterns. At subsequent latencies (300–340 msec) however, ERPs over bilateral occipital-parietal areas differed between patterns and random arrays only when subjects were aware of the patterns. Finally, in a third phase of the experiment, subjects viewed the same stimuli, but the task was altered so that the patterns became task relevant. Here, the same two difference components were evident but were followed by a series of additional components that were absent in the first two phases of the experiment. We hypothesize that the ERP difference at 220–260 msec reflects neural activity associated with automatic contour integration whereas the difference at 300–340 msec reflects visual awareness, both of which are dissociable from task-related postperceptual processing.
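The windowed ERP contrasts described above (e.g., 220–260 msec vs. 300–340 msec) amount to averaging single-trial epochs into an ERP waveform and comparing mean amplitudes within a latency window across conditions. A minimal sketch, assuming epochs arrive as a trials-by-timepoints array with a known sampling rate and epoch start time (all names and parameters are illustrative, not the authors' analysis code):

```python
import numpy as np

def window_mean_amplitude(epochs, sfreq, tmin_epoch, win_start, win_end):
    """Mean ERP amplitude within a latency window.

    epochs: (n_trials, n_times) array of single-trial EEG for one condition
    sfreq: sampling rate in Hz
    tmin_epoch: time (s) of the first sample relative to stimulus onset
    win_start, win_end: window edges in seconds (e.g., 0.220, 0.260)
    """
    erp = epochs.mean(axis=0)  # average across trials -> ERP waveform
    i0 = int(round((win_start - tmin_epoch) * sfreq))
    i1 = int(round((win_end - tmin_epoch) * sfreq))
    return erp[i0:i1].mean()

# A condition difference in a window would then be, e.g.:
# diff = (window_mean_amplitude(pattern_epochs, sfreq, tmin, 0.300, 0.340)
#         - window_mean_amplitude(random_epochs, sfreq, tmin, 0.300, 0.340))
```

Statistical comparison of such window means between conditions (and between aware and unaware groups) is what underlies claims like "ERPs differed in amplitude from 220 to 260 msec."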
Journal of Cognitive Neuroscience (2011) 23 (4): 880–895.
Published: 01 April 2011
Figures: 8
Abstract
The temporal sequence of neural processes supporting figure–ground perception was investigated by recording ERPs associated with subjects' perceptions of the face–vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects' figure–ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ∼150–200 msec) when that region was perceived as figure versus background. These components likely reflect an early "border ownership" stage and a subsequent "figure–ground segregation" stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure–ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure–ground perception, each of which is subject to modification by selective attention.