Zhuanghua Shi
Journal Articles
Journal of Cognitive Neuroscience (2022) 34 (9): 1702–1717.
Published: 01 August 2022
Abstract
Using a combination of behavioral and EEG measures in a tactile odd-one-out search task with collocated visual items, we investigated the mechanisms underlying the facilitation of search by repeated (vs. nonrepeated) spatial distractor–target configurations (“contextual cueing”) when either the tactile (same-modality) or the visual (different-modality) array context was predictive of the location of the tactile singleton target. Importantly, the stimulation was multisensory in both conditions, consisting of tactile plus visual items, although the target was singled out in the tactile modality, making the visual items task-irrelevant. We found that when the predictive context was tactile, facilitation of search RTs by repeated configurations was accompanied by, and correlated with, enhanced lateralized ERP markers of pre-attentive (N1, N2) and focal-attentional (contralateral delay activity) processing, not only over central (“somatosensory”) but also over posterior (“visual”) electrode sites, although the ERP effects were less marked over visual cortex. A similar pattern of facilitated RTs and enhanced lateralized ERP components (N2 and contralateral delay activity) was found when the predictive context was visual, although here the ERP effects were less marked over somatosensory cortex. These findings indicate that both somatosensory and visual cortical regions contribute to the more efficient processing of the tactile target in repeated stimulus arrays, although their involvement is differentially weighted depending on the sensory modality that carries the predictive information.
Journal Articles
Journal of Cognitive Neuroscience (2006) 18 (10): 1663–1665.
Published: 01 October 2006
Abstract
How does neuronal activity bring about the interpretation of visual space in terms of objects or complex perceptual events? Simple visual features, if they group, can bring about the integration of spikes from neurons responding to different features to within a few milliseconds. Considered a potential solution to the “binding problem,” neuronal synchronization has been suggested to be the glue that binds together different features of the same object. This idea receives some support from correlated- and periodic-stimulus motion paradigms, both of which suggest that the segregation of a figure from its ground is a direct result of the temporal correlation of visual signals. One could say that perception of a highly correlated visual structure permits space to be bound in time. On closer analysis, however, the concept of perceptual synchrony is insufficient to explain the conditions under which events will be seen as simultaneous. Instead, the grouping effects ascribed to perceptual synchrony are better explained in terms of the intervals of time over which stimulus events integrate and seem to occur simultaneously. This point is supported by the equivalence of some of these measures with well-established estimates of the perceptual moment. However, it is time in extension, and not the instant, that may best describe how seemingly simultaneous features group. This means that studies of perceptual synchrony are insufficient to address the binding problem.