Antígona Martínez
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2016) 28 (3): 433–445.
Published: 01 March 2016
Figures: 5
Abstract
Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194–9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10–14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240–400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
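The event-related alpha desynchronization described above can be sketched in general terms. The following is a minimal illustration on synthetic single-channel data, not the authors' analysis pipeline: it assumes a 10–14 Hz band-pass filter plus Hilbert envelope to estimate alpha power, and expresses post-sound power in the ∼240–400 msec window as a percent change from a pre-sound baseline (negative values indicate desynchronization). All variable names and window choices here are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_desync_index(eeg, fs, event_idx, base=(-0.3, 0.0), win=(0.24, 0.40)):
    """Baseline-normalized alpha-band (10-14 Hz) power change after an event.

    eeg: 1-D signal, fs: sampling rate (Hz), event_idx: sample of sound onset.
    Returns percent change vs. baseline (negative = desynchronization).
    """
    # Band-pass to the alpha range used in the study (10-14 Hz).
    b, a = butter(4, [10, 14], btype="band", fs=fs)
    alpha = filtfilt(b, a, eeg)
    # Instantaneous power from the analytic-signal envelope.
    power = np.abs(hilbert(alpha)) ** 2
    t = (np.arange(len(eeg)) - event_idx) / fs
    base_p = power[(t >= base[0]) & (t < base[1])].mean()
    win_p = power[(t >= win[0]) & (t < win[1])].mean()
    return 100.0 * (win_p - base_p) / base_p

# Synthetic demo: a 12 Hz oscillation whose amplitude halves 200 ms after
# a simulated sound onset at t = 1.0 s.
fs, event_idx = 500, 500
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
amp = np.where(t > 1.2, 0.5, 1.0)
eeg = amp * np.sin(2 * np.pi * 12 * t) + 0.05 * rng.standard_normal(len(t))
idx = alpha_desync_index(eeg, fs, event_idx)
print(f"alpha power change: {idx:.1f} %")  # negative -> desynchronization
```

A lateralized version of this measure would simply be the contralateral minus ipsilateral index across occipital electrode pairs.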
Journal of Cognitive Neuroscience (2012) 24 (2): 287–303.
Published: 01 February 2012
Figures: 6
Abstract
An inattentional blindness paradigm was adapted to measure ERPs elicited by visual contour patterns that were or were not consciously perceived. In the first phase of the experiment, subjects performed an attentionally demanding task while task-irrelevant line segments formed square-shaped patterns or random configurations. After the square patterns had been presented 240 times, subjects' awareness of these patterns was assessed. More than half of all subjects, when queried, failed to notice the square patterns and were thus considered inattentionally blind during this first phase. In the second phase of the experiment, the task and stimuli were the same, but following this phase, all of the subjects reported having seen the patterns. ERPs recorded over the occipital pole differed in amplitude from 220 to 260 msec for the pattern stimuli compared with the random arrays regardless of whether subjects were aware of the patterns. At subsequent latencies (300–340 msec) however, ERPs over bilateral occipital-parietal areas differed between patterns and random arrays only when subjects were aware of the patterns. Finally, in a third phase of the experiment, subjects viewed the same stimuli, but the task was altered so that the patterns became task relevant. Here, the same two difference components were evident but were followed by a series of additional components that were absent in the first two phases of the experiment. We hypothesize that the ERP difference at 220–260 msec reflects neural activity associated with automatic contour integration whereas the difference at 300–340 msec reflects visual awareness, both of which are dissociable from task-related postperceptual processing.
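The pattern-versus-random comparison above rests on a standard difference-wave computation: average the epochs per condition, subtract, and take the mean amplitude in the windows of interest (220–260 and 300–340 msec). This hedged sketch uses synthetic single-electrode data with an effect deliberately placed near 240 msec; numbers and names are illustrative, not the study's data.

```python
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)   # epoch from -100 to +500 ms
rng = np.random.default_rng(1)

def mean_window_amplitude(erp, times, t0, t1):
    """Mean ERP amplitude within [t0, t1) seconds."""
    return erp[(times >= t0) & (times < t1)].mean()

def make_epochs(n, effect):
    """Noisy epochs with an optional Gaussian deflection centered at 240 ms."""
    noise = rng.normal(0.0, 1.0, (n, times.size))
    bump = effect * np.exp(-((times - 0.24) ** 2) / (2 * 0.02 ** 2))
    return noise + bump

pattern = make_epochs(200, effect=2.0)   # square-pattern trials
random_ = make_epochs(200, effect=0.0)   # random-array trials

# Difference wave: average pattern ERP minus average random ERP.
diff = pattern.mean(axis=0) - random_.mean(axis=0)
early = mean_window_amplitude(diff, times, 0.22, 0.26)  # contour-integration window
late = mean_window_amplitude(diff, times, 0.30, 0.34)   # awareness-related window
print(f"220-260 ms: {early:.2f}, 300-340 ms: {late:.2f}")
```

In this toy example only the early window carries a reliable difference; in the study, the later window differed only when subjects were aware of the patterns.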
Journal of Cognitive Neuroscience (2011) 23 (4): 880–895.
Published: 01 April 2011
Figures: 8
Abstract
The temporal sequence of neural processes supporting figure–ground perception was investigated by recording ERPs associated with subjects' perceptions of the face–vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects' figure–ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ∼150–200 msec) when that region was perceived as figure versus background. These components likely reflect an early "border ownership" stage and a subsequent "figure–ground segregation" stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure–ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure–ground perception, each of which is subject to modulation by selective attention.
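The key manipulation above is sorting probe-evoked epochs by the subject's concurrent perceptual report and comparing component amplitudes between the two percepts. The sketch below illustrates that analysis shape on synthetic data with an N170-like negativity near 170 msec, assuming hypothetical report-sorted trial sets; it is not the authors' pipeline.

```python
import numpy as np

fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(7)

def probe_epochs(n, n170_amp):
    """Synthetic probe-evoked epochs with an N170-like negativity at 170 ms."""
    noise = rng.normal(0.0, 1.0, (n, times.size))
    n170 = -n170_amp * np.exp(-((times - 0.17) ** 2) / (2 * 0.015 ** 2))
    return noise + n170

# Hypothetical sorting of face-region probes by the concurrent button press.
face_fig = probe_epochs(150, n170_amp=4.0)   # report: faces seen as figure
vase_fig = probe_epochs(150, n170_amp=2.0)   # report: vase seen as figure

# Mean amplitude of each report-sorted average ERP in an N170 window.
win = (times >= 0.15) & (times < 0.19)
amp_face = face_fig.mean(axis=0)[win].mean()
amp_vase = vase_fig.mean(axis=0)[win].mean()
print(f"N170 window: face-as-figure {amp_face:.2f}, vase-as-figure {amp_vase:.2f}")
```

The more negative window amplitude for face-as-figure reports mirrors the study's finding that face-region probes elicited a larger N170/VPP when the faces were perceived as figure.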
Journal of Cognitive Neuroscience (2010) 22 (8): 1714–1729.
Published: 01 August 2010
Figures: 9
Abstract
When a single flash of light is presented interposed between two brief auditory stimuli separated by 60–100 msec, subjects typically report perceiving two flashes [Shams, L., Kamitani, Y., & Shimojo, S. Visual illusion induced by sound. Brain Research, Cognitive Brain Research, 14, 147–152, 2002; Shams, L., Kamitani, Y., & Shimojo, S. Illusions. What you see is what you hear. Nature, 408, 788, 2000]. Using ERP recordings, we previously found that perception of the illusory extra flash was accompanied by a rapid dynamic interplay between auditory and visual cortical areas that was triggered by the second sound [Mishra, J., Martínez, A., Sejnowski, T. J., & Hillyard, S. A. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience, 27, 4120–4131, 2007]. In the current study, we investigated the effect of attention on the ERP components associated with the illusory extra flash in 15 individuals who perceived this cross-modal illusion frequently. All early ERP components in the cross-modal difference wave associated with the extra flash illusion were significantly enhanced by selective spatial attention. The earliest attention-related modulation was an amplitude increase of the positive-going PD110/PD120 component, which was previously shown to be correlated with an individual's propensity to perceive the illusory second flash [Mishra, J., Martínez, A., Sejnowski, T. J., & Hillyard, S. A. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience, 27, 4120–4131, 2007]. The polarity of the early PD110/PD120 component did not differ as a function of the visual field (upper vs. lower) of stimulus presentation. This, along with the source localization of the component, suggested that its principal generator lies in extrastriate visual cortex. These results indicate that neural processes previously shown to be associated with the extra flash illusion can be modulated by attention, and thus are not the result of a wholly automatic cross-modal integration process.
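A cross-modal difference wave of the kind referred to above is commonly formed by subtracting the summed unimodal responses from the bimodal response, AV − (A + V), so that anything left over reflects nonlinear audiovisual interaction. The following minimal sketch uses hypothetical Gaussian-shaped grand-average ERPs (arbitrary units and latencies) purely to show the arithmetic; it is not the study's data or component nomenclature.

```python
import numpy as np

fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)

def gauss(center, width, amp):
    """Gaussian deflection as a stand-in for an ERP component."""
    return amp * np.exp(-((times - center) ** 2) / (2 * width ** 2))

# Hypothetical grand-average ERPs: bimodal condition plus the two unimodal
# conditions; the bimodal trace includes an extra interaction component.
erp_av = gauss(0.10, 0.02, 3.0) + gauss(0.12, 0.03, 1.5) + gauss(0.115, 0.02, 0.8)
erp_a = gauss(0.10, 0.02, 3.0)          # auditory-alone response
erp_v = gauss(0.12, 0.03, 1.5)          # visual-alone response

# Cross-modal interaction: activity in AV not explained by the linear sum A + V.
diff_wave = erp_av - (erp_a + erp_v)

# Peak latency of the residual positivity (an early component would fall here).
peak_ms = times[np.argmax(diff_wave)] * 1000
print(f"interaction peak at about {peak_ms:.0f} ms")
```

In the study, attention effects were then assessed on components of exactly this kind of difference wave, such as the PD110/PD120.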