Zhuanghua Shi
Hierarchy of Intra- and Cross-modal Redundancy Gains in Visuo-tactile Search: Evidence from the Posterior Contralateral Negativity
Journal of Cognitive Neuroscience (2023) 35 (4): 543–570.
Published: 01 April 2023
Abstract
Redundant combination of target features from separable dimensions can expedite visual search. The dimension-weighting account explains these “redundancy gains” by assuming that the attention-guiding priority map integrates the feature-contrast signals generated by targets within the respective dimensions. The present study investigated whether this hierarchical architecture is sufficient to explain the gains accruing from redundant targets defined by features in different modalities, or whether an additional level of modality-specific priority coding is necessary, as postulated by the modality-weighting account (MWA). To address this, we had observers perform a visuo-tactile search task in which targets popped out by a visual feature (color or shape), a tactile feature (vibro-tactile frequency), or any combination of these features. The reaction time (RT) gains turned out to be larger for visuo-tactile than for visual redundant targets, as predicted by the MWA. In addition, we analyzed two lateralized event-related EEG components: the posterior (PCN) and central (CCN) contralateral negativities, which are associated with visual and tactile attentional selection, respectively. The CCN proved to be a stable somatosensory component, unaffected by cross-modal redundancies. In contrast, the PCN was sensitive to cross-modal redundancies, as evidenced by earlier onsets and higher amplitudes, which could not be explained by linear superposition of the earlier CCN onto the later PCN. Moreover, linear mixed-effects modeling of the PCN amplitude and timing parameters accounted for approximately 25% of the behavioral RT variance. Together, these behavioral and PCN effects support the hierarchy of priority-signal computation assumed by the MWA.
Multisensory Rather than Unisensory Representations Contribute to Statistical Context Learning in Tactile Search
Journal of Cognitive Neuroscience (2022) 34 (9): 1702–1717.
Published: 01 August 2022
Abstract
Using a combination of behavioral and EEG measures in a tactile odd-one-out search task with collocated visual items, we investigated the mechanisms underlying facilitation of search by repeated (vs. nonrepeated) spatial distractor–target configurations (“contextual cueing”) when either the tactile (same-modality) or the visual (different-modality) array context was predictive of the location of the tactile singleton target. Importantly, in both conditions the stimulation was multisensory, consisting of tactile plus visual items, although the target was singled out in the tactile modality and so the visual items were task-irrelevant. We found that when the predictive context was tactile, facilitation of search reaction times (RTs) by repeated configurations was accompanied by, and correlated with, enhanced lateralized ERP markers of pre-attentive (N1, N2) and focal-attentional (contralateral delay activity) processing, not only over central (“somatosensory”) but also over posterior (“visual”) electrode sites, although the ERP effects were less marked over visual cortex. A similar pattern of facilitated RTs and enhanced lateralized ERP components (N2 and contralateral delay activity) was found when the predictive context was visual, although the ERP effects were less marked over somatosensory cortex. These findings indicate that both somatosensory and visual cortical regions contribute to the more efficient processing of the tactile target in repeated stimulus arrays, although their involvement is differentially weighted depending on the sensory modality that contains the predictive information.
A Moment to Reflect upon Perceptual Synchrony
Journal of Cognitive Neuroscience (2006) 18 (10): 1663–1665.
Published: 01 October 2006
Abstract
How does neuronal activity bring about the interpretation of visual space in terms of objects or complex perceptual events? Simple visual features, if they group, can bring about the integration of spikes from neurons responding to different features to within a few milliseconds. As a potential solution to the “binding problem,” it has been suggested that neuronal synchronization is the glue that binds together different features of the same object. This idea receives some support from correlated- and periodic-stimulus motion paradigms, both of which suggest that the segregation of a figure from ground is a direct result of the temporal correlation of visual signals. One could say that perception of a highly correlated visual structure permits space to be bound in time. However, on closer analysis, the concept of perceptual synchrony is insufficient to explain the conditions under which events will be seen as simultaneous. Instead, the grouping effects ascribed to perceptual synchrony are better explained in terms of the intervals of time over which stimulus events integrate and seem to occur simultaneously. This point is supported by the equivalence of some of these measures with well-established estimates of the perceptual moment. However, it is time in extension, not the instantaneous moment, that may best describe how seemingly simultaneous features group. This means that studies of perceptual synchrony are insufficient to address the binding problem.