Search results for Agnieszka Wykowska (1-4 of 4)
Journal Articles
Abdulaziz Abubshait, Kyveli Kompatsiari, Pasquale Cardellicchio, Enrico Vescovo, Davide De Tommaso, et al.
Journal of Cognitive Neuroscience (2023) 35 (10): 1670–1680.
Published: 01 October 2023
Abstract
Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the purely social component that modulates attentional orienting in response to communicative gaze from processes that combine attentional and social effects. We used transcranial magnetic stimulation (TMS) to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot that engaged in either mutual or averted gaze before shifting its gaze. Before the task, participants received either sham stimulation (baseline), stimulation of the right temporoparietal junction (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). As expected, communicative gaze affected attentional orienting in the baseline condition. This effect was not evident after rTPJ stimulation; interestingly, rTPJ stimulation also canceled out attentional orienting altogether. dmPFC stimulation, by contrast, eliminated the socially driven difference in attentional orienting between the two gaze conditions while preserving the basic, general attentional orienting effect. Thus, our results allowed the purely social effect of communicative gaze on attentional orienting to be separated from processes that combine social and generic attentional components.
Journal Articles
Journal of Cognitive Neuroscience (2022) 34 (1): 108–126.
Published: 01 December 2021
Abstract
Understanding others' nonverbal behavior is essential for social interaction, as it allows, among other things, the inference of mental states. Although gaze communication, a well-established nonverbal social behavior, is known to be important for inferring others' mental states, little is known about the effects of irrelevant gaze signals on markers of cognitive conflict in collaborative settings. In the present study, participants completed a categorization task in which they categorized objects by color while observing images of a robot. On each trial, participants observed the robot iCub grasp an object from a table and offer it to them, simulating a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift that was either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). Across the three studies, incongruent trials produced more interference than congruent trials: higher error rates (Study 1), greater curvature in eye-tracking trajectories (Study 2), and larger amplitudes of the N2 event-related potential (ERP) component of the EEG signal, as well as larger event-related spectral perturbations (Study 3). Our findings show that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
Journal Articles
Journal of Cognitive Neuroscience (2011) 23 (3): 645–660.
Published: 01 March 2011
Abstract
It is not clear how salient distractors affect visual processing. The debate over whether irrelevant salient items capture spatial attention [e.g., Theeuwes, J., Atchley, P., & Kramer, A. F. On the time course of top–down and bottom–up control of visual attention. In S. Monsell & J. Driver (Eds.), Attention and performance XVIII: Control of cognitive performance (pp. 105–124). Cambridge, MA: MIT Press, 2000] or produce only nonspatial interference, for example in the form of filtering costs [Folk, C. L., & Remington, R. Top–down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445–465, 2006], has not yet been settled. The present event-related potential (ERP) study examined the deployment of attention in visual search displays that contained an additional irrelevant singleton. The display-locked N2pc showed that attention was allocated to the target and not to the irrelevant singleton. However, the onset of the N2pc to the target was delayed when the irrelevant singleton appeared in the hemifield opposite the target rather than in the same hemifield. Thus, although attention was successfully focused on the target, the irrelevant singleton produced some interference, delaying the allocation of attention to the target. A subsequent probe discrimination task allowed ERPs to be locked to probe onsets, making it possible to investigate the dynamics of sensory gain control for probes appearing at relevant (target) or irrelevant (singleton distractor) positions. The probe-locked P1 showed sensory gain for probes at the target location but no such effect for probes at the irrelevant singleton location in the additional singleton condition. Taken together, the present data support the claim that irrelevant singletons do not capture attention; any interference they produce is instead due to nonspatial filtering costs.
Journal Articles
Journal of Cognitive Neuroscience (2010) 22 (4): 640–654.
Published: 01 April 2010
Abstract
Two mechanisms are said to be responsible for guiding focal attention in visual selection: bottom–up, saliency-driven capture and top–down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two stimulus onset asynchronies (SOAs) between the search display and probe onsets were introduced to investigate how attention was allocated to particular items at different points in time. The dynamic interplay between bottom–up and top–down mechanisms was investigated with event-related potential (ERP) methodology. ERPs locked to the search displays showed that top–down control needed time to develop: the N2pc indicated allocation of attention to the target item and not to the irrelevant singleton. ERPs locked to probes revealed modulations in the P1 component reflecting top–down control of focal attention at the long SOA, whereas early bottom–up effects were observed in the error rates at the short SOA. Taken together, the present results show that the top–down mechanism takes time to guide focal attention to the relevant target item and that it is potent enough to limit bottom–up attentional capture.