Search results: 1–4 of 4 for Liuba Papeo
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2021) 33 (7): 1343–1353.
Published: 01 June 2021
Abstract
To navigate the social world, humans must represent social entities and the relationships between those entities, starting with spatial relationships. Recent research suggests that two bodies are processed with particularly high efficiency in visual perception, when they are in a spatial positioning that cues interaction, that is, close and face-to-face. Socially relevant spatial relations such as facingness may facilitate visual perception by triggering grouping of bodies into a new integrated percept, which would make the stimuli more visible and easier to process. We used EEG and a frequency-tagging paradigm to measure a neural correlate of grouping (or visual binding), while female and male participants saw images of two bodies face-to-face or back-to-back. The two bodies in a dyad flickered at frequency F1 and F2, respectively, and appeared together at a third frequency Fd (dyad frequency). This stimulation should elicit a periodic neural response for each body at F1 and F2, and a third response at Fd, which would be larger for face-to-face (vs. back-to-back) bodies, if those stimuli yield additional integrative processing. Results showed that responses at F1 and F2 were higher for upright than for inverted bodies, demonstrating that our paradigm could capture neural activity associated with viewing bodies. Crucially, the response to dyads at Fd was larger for face-to-face (vs. back-to-back) dyads, suggesting integration mediated by grouping. We propose that spatial relations that recur in social interaction (i.e., facingness) promote binding of multiple bodies into a new representation. This mechanism can explain how the visual system contributes to integrating and transforming the representation of disconnected body shapes into structured representations of social events.
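The frequency-tagging logic above can be illustrated with a short simulation. This is a minimal sketch, not the study's analysis pipeline: the sampling rate, epoch length, tagging frequencies, and noise level are all illustrative assumptions, and the "integration gain" is a hypothetical parameter standing in for the extra integrative processing proposed for facing dyads.

```python
# Hedged sketch of a frequency-tagging analysis: two periodic stimuli
# tagged at F1 and F2, plus a dyad-level response at Fd whose size
# depends on whether the two bodies are integrated into one percept.
import numpy as np

fs = 250.0                     # sampling rate in Hz (assumed)
dur = 20.0                     # epoch length in seconds (assumed)
t = np.arange(0, dur, 1 / fs)
f1, f2, fd = 5.0, 6.0, 1.0     # body 1, body 2, dyad frequencies (illustrative)

rng = np.random.default_rng(0)

def simulated_eeg(integration_gain):
    """Simulate an EEG trace with responses at the tagged frequencies.

    integration_gain scales the dyad-frequency component; it is a
    hypothetical stand-in for the extra integrative processing
    proposed for face-to-face dyads.
    """
    signal = (np.sin(2 * np.pi * f1 * t)
              + np.sin(2 * np.pi * f2 * t)
              + integration_gain * np.sin(2 * np.pi * fd * t))
    return signal + rng.normal(0.0, 2.0, t.size)   # additive noise

def amplitude_at(x, freq):
    """Amplitude-spectrum value at the bin closest to `freq`."""
    spectrum = np.abs(np.fft.rfft(x)) / (x.size / 2)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

facing = simulated_eeg(integration_gain=1.5)      # face-to-face: stronger Fd
nonfacing = simulated_eeg(integration_gain=0.5)   # back-to-back: weaker Fd

# Responses at F1/F2 index each body; the Fd response indexes the dyad.
# With a larger integration gain, the Fd amplitude is larger for the
# "facing" condition, mirroring the qualitative pattern reported.
print(amplitude_at(facing, fd) > amplitude_at(nonfacing, fd))
```

Because each epoch is an exact integer number of cycles of every tagged frequency, each response falls into a single FFT bin, which is what makes this paradigm sensitive: periodic neural responses concentrate at known frequencies while broadband noise spreads across the whole spectrum.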
Journal of Cognitive Neuroscience (2016) 28 (12): 1980–1986.
Published: 01 December 2016
Abstract
Negation is a fundamental component of human reasoning and language. Yet, current neurocognitive models, conceived to account for the cortical representation of meanings (e.g., writing), hardly accommodate the representation of negated meanings (not writing). One main hypothesis, known as the two-step model, proposes that, for negated meanings, the corresponding positive representation is first fully activated and then modified to reflect negation. Recast in neurobiological terms, this model predicts that, in the initial stage of semantic processing, the neural representation of a stimulus' meaning is indistinguishable from the neural representation of that meaning following negation. Although previous work has shown that pragmatic and task manipulations can favor or hinder two-step processing, we still do not know how the brain processes an utterance as simple as “I am not writing.” We implemented two methodologies based on chronometric TMS to measure motor excitability (Experiment 1) and inhibition (Experiment 2) as physiological markers of semantic access to action-related meanings. We used elementary sentences (Adverb + Verb) and a passive reading task. For the first time, we defined action word-related motor activity in terms of increased excitability and concurrently reduced inhibition. Moreover, we showed that this pattern changes in the earliest stage of semantic processing when action meanings are negated. Negation modifies the neural representation of the argument in its scope as soon as semantic effects are observed in the brain.
Journal of Cognitive Neuroscience (2012) 24 (12): 2348–2362.
Published: 01 December 2012
Abstract
Activity in frontocentral motor regions is routinely reported when individuals process action words and is often interpreted as the implicit simulation of the word content. We hypothesized that these neural responses are not invariant components of action word processing but are modulated by the context in which they are evoked. Using fMRI, we assessed the relative weight of stimulus features (i.e., the intrinsic semantics of words) and contextual factors, in eliciting word-related sensorimotor activity. Participants silently read action-related and state verbs after performing a mental rotation task engaging either a motor strategy (i.e., referring visual stimuli to their own bodily movements) or a visuospatial strategy. The mental rotation tasks were used to induce, respectively, a motor and a nonmotor “cognitive context” into the following silent reading. Irrespective of the verb category, reading in the motor context, compared with reading in the nonmotor context, increased the activity in the left primary motor cortex, the bilateral premotor cortex, and the right somatosensory cortex. Thus, the cognitive context induced by the preceding motor strategy-based mental rotation modulated word-related sensorimotor responses, possibly reflecting the strategy of referring a word meaning to one's own bodily activity. This pattern, common to action and state verbs, suggests that the context in which words are encountered prevails over the intrinsic semantics of the stimuli in mediating the recruitment of sensorimotor regions.
Journal of Cognitive Neuroscience (2011) 23 (12): 3939–3948.
Published: 01 December 2011
Abstract
Embodied theories hold that understanding what another person is doing requires the observer to map that action directly onto his or her own motor representation and simulate it internally. The human motor system may, thus, be endowed with a “mirror matching” device through which the same motor representation is activated, whether the subject is the performer or the observer of another's action (“self-other shared representation”). It has been suggested that understanding action verbs relies upon the same mechanism; this implies that motor responses to these words are automatic and independent of the subject of the verb. In the current study, participants were requested to read silently and decide on the syntactic subject of action and nonaction verbs, presented in first (1P) or third (3P) person, while TMS was applied to the hand area of the left primary motor cortex (M1). TMS-induced motor-evoked potentials were recorded from hand muscles as a measure of cortico-spinal excitability. Motor-evoked potentials increased for 1P action verbs, but not for 3P action verbs or for 1P and 3P nonaction verbs. We provide a novel demonstration that motor simulation is triggered only when the conceptual representation of a word integrates the action with the self as the agent of that action. This challenges the core principle of “mirror matching” and opens the way to alternative interpretations of the relationship between conceptual and sensorimotor processes.