Dimitrios Kourtis
Journal Articles
Journal of Cognitive Neuroscience (2021) 33 (5): 826–839.
Published: 01 April 2021
Abstract
Previous work suggests that perception of an object automatically facilitates actions related to object grasping and manipulation. Recently, the notion of automaticity has been challenged by behavioral studies suggesting that dangerous objects elicit aversive affordances that interfere with encoding of an object's motor properties; however, related EEG studies have provided little support for these claims. We sought EEG evidence that would support the operation of an inhibitory mechanism that interferes with the motor encoding of dangerous objects, and we investigated whether such a mechanism would be modulated by the perceived distance of an object and the goal of a given task. EEG was recorded from 24 participants who passively perceived dangerous and neutral objects in their peripersonal, boundary, or extrapersonal space and performed either a reachability judgment task or a categorization task. Our results showed that greater attention, reflected in the visual P1 potential, was drawn by dangerous and reachable objects. Crucially, a frontal N2 potential, associated with motor inhibition, was larger for dangerous objects only when participants performed a reachability judgment task. Furthermore, a larger parietal P3b potential for dangerous objects indicated greater difficulty in linking a dangerous object to the appropriate response, especially when it was located in the participants' extrapersonal space. Taken together, our results show that perception of dangerous objects elicits aversive affordances in a task-dependent way and provide evidence for the operation of a neural mechanism that does not code affordances of dangerous objects automatically, but rather on the basis of contextual information.
Journal of Cognitive Neuroscience (2014) 26 (10): 2275–2286.
Published: 01 October 2014
Abstract
We investigated whether people take into account an interaction partner's attentional focus and whether they represent in advance their partner's part of the task when planning to engage in a synchronous joint action. The experiment involved two participants planning and performing joint actions (i.e., synchronously lifting and clinking glasses), unimanual individual actions (i.e., lifting and moving a glass as if clinking with another person), and bimanual individual actions. EEG was recorded from one of the participants. We employed a choice reaction paradigm where a visual cue indicated the type of action to be planned, followed 1.5 sec later by a visual go stimulus, prompting the participants to act. We studied attention allocation processes by examining two lateralized EEG components, namely the anterior directing attention negativity and the late directing attention positivity. Action planning processes were examined using the late contingent negative variation and the movement-related potential. The results show that early stages of joint action planning involve dividing attention between locations in space relevant for one's own part of the joint action and locations relevant for one's partner's part of the joint action. At later stages of joint action planning, participants represented in advance their partner's upcoming action in addition to their own action, although not at an effector-specific level. Our study provides electrophysiological evidence supporting the operation of attention sharing processes and predictive self/other action representation during the planning phase of a synchronous joint task.
Journal of Cognitive Neuroscience (2013) 25 (7): 1049–1061.
Published: 01 July 2013
Abstract
We investigated whether people monitor the outcomes of their own and their partners' individual actions as well as the outcome of their combined actions when performing joint actions together. Pairs of pianists memorized both parts of a piano duet. Each pianist then performed one part while their partner performed the other; EEG was recorded from both. Auditory outcomes (pitches) associated with keystrokes produced by the pianists were occasionally altered in a way that either did or did not affect the joint auditory outcome (i.e., the harmony of a chord produced by the two pianists' combined pitches). Altered auditory outcomes elicited a feedback-related negativity whether they occurred in the pianist's own part or the partner's part, and whether they affected individual or joint action outcomes. Altered auditory outcomes also elicited a P300 whose amplitude was larger when the alteration affected the joint outcome compared with individual outcomes and when the alteration affected the pianist's own part compared with the partner's part. Thus, musicians engaged in joint actions monitor their own and their partner's actions as well as their combined action outcomes, while at the same time maintaining a distinction between their own and others' actions and between individual and joint outcomes.