Jean-Rémy Hochmann (1–2 of 2 results)
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2021) 33 (7): 1343–1353.
Published: 01 June 2021
Abstract
To navigate the social world, humans must represent social entities and the relationships between those entities, starting with spatial relationships. Recent research suggests that two bodies are processed with particularly high efficiency in visual perception when they are in a spatial positioning that cues interaction, that is, close and face-to-face. Socially relevant spatial relations such as facingness may facilitate visual perception by triggering grouping of bodies into a new integrated percept, which would make the stimuli more visible and easier to process. We used EEG and a frequency-tagging paradigm to measure a neural correlate of grouping (or visual binding), while female and male participants saw images of two bodies face-to-face or back-to-back. The two bodies in a dyad flickered at frequencies F1 and F2, respectively, and appeared together at a third frequency Fd (dyad frequency). This stimulation should elicit a periodic neural response for each body at F1 and F2, and a third response at Fd, which would be larger for face-to-face (vs. back-to-back) bodies if those stimuli yield additional integrative processing. Results showed that responses at F1 and F2 were higher for upright than for inverted bodies, demonstrating that our paradigm could capture neural activity associated with viewing bodies. Crucially, the response to dyads at Fd was larger for face-to-face (vs. back-to-back) dyads, suggesting integration mediated by grouping. We propose that spatial relations that recur in social interaction (i.e., facingness) promote binding of multiple bodies into a new representation. This mechanism can explain how the visual system contributes to integrating and transforming the representation of disconnected body shapes into structured representations of social events.
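For readers unfamiliar with frequency tagging, the sketch below illustrates the general logic in Python: a response tagged to each flicker frequency shows up as a narrow-band peak in the EEG amplitude spectrum. The sampling rate, the particular values of F1, F2, and Fd, and the simulated signal are hypothetical placeholders, not parameters from the study; the study's key contrast would then compare the Fd amplitude between face-to-face and back-to-back dyads.

```python
# Minimal sketch of a frequency-tagging readout, assuming hypothetical
# tagging frequencies and a single simulated EEG channel.
import numpy as np

fs = 500.0          # sampling rate in Hz (assumed)
duration = 60.0     # length of the stimulation epoch in seconds (assumed)
f1, f2 = 8.5, 11.9  # body-specific flicker frequencies (hypothetical)
fd = 1.7            # dyad frequency: both bodies presented together (hypothetical)

# Simulated single-channel EEG: tagged responses embedded in noise.
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * f1 * t)
       + 0.6 * np.sin(2 * np.pi * f2 * t)
       + 0.4 * np.sin(2 * np.pi * fd * t)
       + rng.normal(scale=2.0, size=t.size))

# Amplitude spectrum via FFT; long epochs give fine frequency resolution,
# which is what lets the tagged responses stand out from broadband noise.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f, halfwidth=0.05):
    """Peak amplitude in a narrow band around a tagged frequency."""
    band = (freqs >= f - halfwidth) & (freqs <= f + halfwidth)
    return spectrum[band].max()

for label, f in [("F1", f1), ("F2", f2), ("Fd", fd)]:
    print(f"{label} ({f} Hz): amplitude = {amplitude_at(f):.3f} a.u.")
```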
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2016) 28 (12): 1980–1986.
Published: 01 December 2016
Abstract
Negation is a fundamental component of human reasoning and language. Yet, current neurocognitive models, conceived to account for the cortical representation of meanings (e.g., writing), hardly accommodate the representation of negated meanings (not writing). One main hypothesis, known as the two-step model, proposes that, for negated meanings, the corresponding positive representation is first fully activated and then modified to reflect negation. Recast in neurobiological terms, this model predicts that, in the initial stage of semantic processing, the neural representation of a stimulus' meaning is indistinguishable from the neural representation of that meaning following negation. Although previous work has shown that pragmatic and task manipulations can favor or hinder two-step processing, we still do not know how the brain processes an utterance as simple as "I am not writing." We implemented two methodologies based on chronometric TMS to measure motor excitability (Experiment 1) and inhibition (Experiment 2) as physiological markers of semantic access to action-related meanings. We used elementary sentences (Adverb + Verb) and a passive reading task. For the first time, we defined action word-related motor activity in terms of increased excitability and concurrently reduced inhibition. Moreover, we showed that this pattern already changed in the earliest stage of semantic processing when action meanings were negated. Negation modifies the neural representation of the argument in its scope as soon as semantic effects are observed in the brain.
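To make the logic of the contrast concrete, the sketch below shows one simple way such data could be analyzed in Python: a within-participant comparison of motor-evoked potential (MEP) amplitudes, the standard TMS marker of motor excitability, between affirmative and negated action sentences. The values, sample size, and paired t-test are illustrative assumptions, not the authors' actual data or analysis pipeline.

```python
# Minimal sketch of a paired contrast on MEP amplitudes for affirmative
# vs. negated action sentences; all data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 20

# Hypothetical mean MEP amplitudes (mV) per participant and condition;
# negation is assumed here to reduce action-related motor excitability.
mep_affirmative = rng.normal(loc=1.20, scale=0.25, size=n_participants)
mep_negated = rng.normal(loc=1.05, scale=0.25, size=n_participants)

# Within-participant (paired) comparison of the two sentence types.
t_stat, p_value = stats.ttest_rel(mep_affirmative, mep_negated)
print(f"Affirmative vs. negated MEPs: "
      f"t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```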