Search results for Jean-Marie Annoni (1-3 of 3)
Journal Articles
Publisher: Journals Gateway
First and Second Language at Hand: A Chronometric Transcranial Magnetic Stimulation Study on Semantic and Motor Resonance
Journal of Cognitive Neuroscience (2021) 33 (8): 1563–1580.
Published: 01 July 2021
Abstract
According to embodied theories, motor and language processing interact bidirectionally: Motor activation modulates behavior in lexico-semantic tasks (semantic resonance), and understanding motor-related words entails activation of the corresponding motor brain areas (motor resonance). Whereas many studies have investigated this interaction in the first language (L1), only a few have done so in a second language (L2), focusing on motor resonance. Here, we directly compared L1 and a late L2, for the first time in terms of both semantic and motor resonance and of both magnitude and timing, by taking advantage of single-pulse TMS. Twenty-five bilinguals judged, in each language, whether hand motor-related (“grasp”) and non-motor-related verbs (“believe”) were physical or mental. Meanwhile, we applied TMS over the hand motor cortex at 125, 275, 350, and 500 msec after verb onset and recorded behavioral responses and TMS-induced motor evoked potentials. TMS induced faster responses for L1 versus L2 motor and nonmotor verbs at 125 msec (three-way interaction β = −0.0442, 95% CI [−0.0814, −0.0070]), showing a semantic resonance effect at an early stage of word processing in L1 but not in L2. Concerning motor resonance, TMS-induced motor evoked potentials at 275 msec revealed higher motor cortex excitability for L2 than for L1 processing (two-way interaction β = 0.095, 95% CI [0.017, 0.173]). These findings confirm action–language interaction at early stages of word recognition, provide further evidence that L1 and L2 are differently embodied, and call for an update of existing models of bilingualism and embodiment concerning both language representations and processing.
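The reported interaction betas and confidence intervals point to a mixed-effects analysis of trial-level response times. As a rough illustration only (not the authors' actual pipeline), the sketch below fits a random-intercept model with a language x verb type x TMS-timing interaction on simulated data; all column names, effect sizes, and data are hypothetical.

```python
# Hypothetical sketch of a mixed-effects RT analysis like the one implied by
# the reported three-way interaction (language x verb type x TMS timing).
# Data, column names, and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for subj in range(25):                            # 25 bilingual participants
    subj_offset = rng.normal(0, 0.05)             # random subject intercept
    for lang in ("L1", "L2"):
        for verb in ("motor", "nonmotor"):
            for timing in (125, 275, 350, 500):   # TMS pulse times (msec)
                for _ in range(10):               # trials per cell
                    rt = (0.70 + subj_offset
                          + (0.03 if lang == "L2" else 0.0)  # toy L2 slowing
                          + rng.normal(0, 0.08))
                    rows.append((subj, lang, verb, timing, rt))
df = pd.DataFrame(rows, columns=["subject", "language", "verb_type", "timing", "rt"])

# Random-intercept model; the language:verb_type:C(timing) terms correspond
# to the kind of three-way interaction beta reported in the abstract.
model = smf.mixedlm("rt ~ language * verb_type * C(timing)", df, groups=df["subject"])
print(model.fit().summary())
```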
Journal Articles
Publisher: Journals Gateway
Eye Gaze Behavior at Turn Transition: How Aphasic Patients Process Speakers' Turns during Video Observation
Journal of Cognitive Neuroscience (2016) 28 (10): 1613–1624.
Published: 01 October 2016
Abstract
The human turn-taking system regulates the smooth and precise exchange of speaking turns during face-to-face interaction. Recent studies investigated the processing of ongoing turns during conversation by measuring the eye movements of noninvolved observers. The findings suggest that humans shift their gaze to the upcoming speaker in anticipation, before the next turn starts. Moreover, there is evidence that the ability to detect turn transitions in time relies mainly on the lexico-syntactic content of the conversation. Consequently, patients with aphasia, who often experience deficits in both semantic and syntactic processing, might have difficulty detecting turn transitions and shifting their gaze in time. To test this assumption, we presented video vignettes of natural conversations to aphasic patients and healthy controls while measuring their eye movements. The frequency and latency of event-related gaze shifts, relative to the end of the current turn in the videos, were compared between the two groups. Our results suggest that, compared with healthy controls, aphasic patients are less likely to shift their gaze at turn transitions but do not show significantly increased gaze shift latencies. In healthy controls, but not in aphasic patients, the probability of a gaze shift at turn transitions increased when the current turn in the video had higher lexico-syntactic complexity. Furthermore, the results from voxel-based lesion symptom mapping indicate that the association between lexico-syntactic complexity and gaze shift latency in aphasic patients is predicted by brain lesions located in the posterior branch of the left arcuate fasciculus. Higher lexico-syntactic processing demands thus seem to reduce gaze shift probability in aphasic patients. This finding may reflect missed opportunities for patients to place their contributions during everyday conversation.
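The two outcome measures (gaze shift probability and latency relative to turn ends) can be computed from timestamped events. The sketch below is a minimal, assumed version of that computation; the analysis window, variable names, and toy data are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' pipeline) of the event-related gaze shift
# measures described above: for each turn end, check whether a gaze shift to
# the next speaker falls inside a hypothetical analysis window, and record
# its latency relative to the turn end.
import numpy as np

def gaze_shift_stats(turn_ends, shift_times, window=(-1.0, 2.0)):
    """Return (shift probability, mean latency in s) across turn transitions.
    turn_ends, shift_times: 1-D arrays of timestamps in seconds.
    window: assumed interval around each turn end in which a shift counts."""
    latencies = []
    hits = 0
    for t_end in turn_ends:
        rel = shift_times - t_end
        in_win = rel[(rel >= window[0]) & (rel <= window[1])]
        if in_win.size:
            hits += 1
            latencies.append(in_win[np.argmin(np.abs(in_win))])  # closest shift
    prob = hits / len(turn_ends)
    mean_lat = float(np.mean(latencies)) if latencies else float("nan")
    return prob, mean_lat

turn_ends = np.array([3.2, 7.9, 12.4])           # toy data
shift_times = np.array([3.5, 8.6, 20.0])
print(gaze_shift_stats(turn_ends, shift_times))  # -> (0.666..., 0.5)
```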
Journal Articles
Publisher: Journals Gateway
Experience-based Auditory Predictions Modulate Brain Activity to Silence as Do Real Sounds
Journal of Cognitive Neuroscience (2015) 27 (10): 1968–1980.
Published: 01 October 2015
Abstract
Interactions between the acoustic features of stimuli and experience-based internal models of the environment enable listeners to compensate for the disruptions of auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds, and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. Based on the responses to real sounds in active listening conditions, the brain responses to the different expected, but omitted, tones could be decoded above chance in both passive and active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit electrophysiological activity distinct from noise and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140–200 msec after omission onset. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as real sounds do, indicating that auditory filling-in can be accounted for by predictive activity.
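The core decoding logic here is cross-condition generalization: train a classifier on responses to real tones, then test it on omission trials. The sketch below illustrates that scheme under stated assumptions; the classifier choice, array shapes, and random data are placeholders, not the authors' exact method.

```python
# Hypothetical sketch of cross-condition single-trial decoding: fit on EEG
# responses to real tones, then test whether responses to omitted tones carry
# the same pitch information above chance. All data here are random noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64 * 50  # e.g., 64 channels x 50 samples (assumed)
X_real = rng.normal(size=(n_trials, n_features))  # responses to real tones
y_real = rng.integers(0, 2, n_trials)             # expected pitch class
X_omit = rng.normal(size=(n_trials, n_features))  # responses to omissions
y_omit = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_real, y_real)                  # train on real-sound trials
acc = clf.score(X_omit, y_omit)          # test on omission trials
print(f"cross-condition decoding accuracy: {acc:.2f}")  # ~0.5 here (noise)
# In the study, above-chance accuracy on omissions would indicate that
# internal models elicit sound-like, stimulus-specific activity.
```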