Search results for author Boukje Habets (1–2 of 2 journal articles)
Journal of Cognitive Neuroscience (2011) 23 (11): 3694–3702.
Published: 01 November 2011
Abstract
An initial stage of speech production is conceptual planning, in which a speaker determines which information to convey first (the linearization problem). This fMRI study investigated the linearization process during the production of “before” and “after” sentences. In “after” sentences, a series of events is expressed in the order of event occurrence. In “before” sentences, however, the order of mention reverses the chronological order of the events. We suggest that the linearization process may be supported by a neural network connecting the left middle temporal gyrus (MTG) with the medial superior frontal gyrus, the left middle frontal gyrus, and the left angular gyrus/inferior parietal gyrus. Within this network, regions were more strongly activated, and interregional interactions were more strongly enhanced, when producing “before” rather than “after” sentences. The left MTG was also functionally connected with the left orbital inferior frontal gyrus, contributing to the retrieval of the necessary world knowledge and linguistic knowledge; connectivity between these two regions did not differ between conditions.
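The abstract does not specify how condition-dependent connectivity was assessed; purely as an illustration of the kind of contrast described (seed–target coupling stronger for “before” than “after” sentences), here is a minimal sketch of a generic PPI-style interaction regression on simulated data. All variable names and the data itself are hypothetical, not the authors' pipeline.

```python
# Hypothetical PPI-style sketch on simulated data (not the study's method):
# test whether seed-target coupling depends on sentence condition.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

seed = rng.standard_normal(n_scans)               # stand-in for a seed (e.g., left MTG) time course
condition = np.repeat([1.0, -1.0], n_scans // 2)  # +1 = "before", -1 = "after" blocks
# Target region built so coupling with the seed is stronger in the +1 condition.
target = 0.3 * seed + 0.4 * seed * (condition > 0) + rng.standard_normal(n_scans)

# Design matrix: intercept, task regressor, seed time course, and their interaction.
X = np.column_stack([np.ones(n_scans), condition, seed, seed * condition])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)

# A positive interaction beta indicates condition-dependent coupling,
# i.e., stronger seed-target connectivity in the "before" condition.
print(f"interaction beta: {beta[3]:.3f}")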
Journal of Cognitive Neuroscience (2011) 23 (8): 1845–1854.
Published: 01 August 2011
Abstract
During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. Such hand gestures have been shown to play an important role in communication, with the two modalities influencing each other's interpretation. A gesture typically overlaps in time with the coexpressive speech, but it is often initiated before (but not after) that speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three different degrees of asynchrony: in the SOA 0 condition, gesture onset and speech onset were simultaneous; in the SOA 160 and SOA 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions; no significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
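To make the design concrete, the following sketch simulates the reported analysis logic: epochs time-locked to speech onset, a mean-amplitude measure in a classic N400 window (300–500 msec), and a congruity effect present at SOA 0 and 160 but absent at SOA 360. The data are simulated and the effect sizes are toy values chosen only to mirror the reported pattern; this is not the study's analysis code.

```python
# Illustrative sketch on simulated data (not the study's analysis code):
# mean N400-window amplitude difference (incongruent minus congruent) per SOA.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)      # epoch from -200 to 800 msec around speech onset
n400_win = (times >= 0.3) & (times <= 0.5)   # classic N400 window

def simulated_epochs(n_trials, n400_effect):
    """Fake single-trial ERPs with an optional negative N400 deflection."""
    data = rng.standard_normal((n_trials, times.size))
    data[:, n400_win] -= n400_effect         # N400 is a negativity
    return data

# Toy effect sizes mirroring the reported pattern: a congruity effect
# at SOA 0 and 160 msec, none at SOA 360 msec.
effects = {0: 1.0, 160: 1.0, 360: 0.0}
for soa, eff in effects.items():
    congruent = simulated_epochs(40, 0.0)
    incongruent = simulated_epochs(40, eff)
    diff = incongruent[:, n400_win].mean() - congruent[:, n400_win].mean()
    print(f"SOA {soa:>3} msec: N400 congruity effect = {diff:+.2f} (a.u.)")
```

A more negative difference indicates a larger N400 congruity effect; in the simulation, this appears only for the SOA 0 and 160 conditions, matching the abstract's result.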