While multisensory super-additivity has been demonstrated in the context of visual articulation, it is unclear whether speech and co-speech gestures are similarly subject to super-additive integration. The current study investigates multisensory integration of speech and bodily gestures, testing whether biological motion signatures of co-speech gestures enhance cortical tracking of the speech envelope. We recorded EEG from 20 healthy adults as they watched a series of multimodal discourse clips from four conditions: AV congruent clips with co-speech gestures that were naturally aligned with speech, AV incongruent clips in which gestures were not aligned with the speech, audio-only clips in which speech was delivered in isolation, and video-only clips presenting the gesture content with no accompanying speech. Because we hypothesized that the kinematics of co-speech gestures are sufficient to drive gestural enhancement of speech, our clips employed minimalistic “point-light” depictions of a speaker's movements: point-light talkers. Using neural decoder models to predict the amplitude of the speech envelope from EEG elicited in all four conditions, we compared speech reconstruction performance between the multisensory (AV congruent) model and an additive model, that is, one representing the summed neural response across the two unisensory conditions. We found a significant improvement in decoder scores for models trained on AV congruent trials relative to both audio-only and additive models. Forward models of brain activity indicated signatures of multisensory integration 140–160 msec following changes to the speech envelope. These results provide novel evidence for a multisensory enhancement effect of co-speech gesture kinematics on continuous speech tracking.
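
The backward (stimulus reconstruction) comparison described above can be illustrated with a minimal sketch: a lagged linear (ridge) decoder predicts the speech envelope from EEG, and reconstruction accuracy for the AV congruent condition is compared against an additive model built from the summed unisensory responses. This is not the authors' analysis code; the data are synthetic placeholders, and the sampling rate, channel count, lag window, and regularization strength are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, synthetic data): backward decoding of
# the speech envelope from EEG, comparing an AV-congruent decoder with an
# additive (A + V) decoder trained on summed unisensory EEG.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fs = 64                                  # assumed envelope/EEG sampling rate (Hz)
n_samples, n_channels = 5000, 32         # hypothetical recording length and montage
lags = np.arange(0, int(0.25 * fs))      # 0-250 ms decoder lags (assumption)

def lag_matrix(eeg, lags):
    """Stack time-lagged copies of each EEG channel into one design matrix."""
    X = np.zeros((eeg.shape[0], eeg.shape[1] * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * eeg.shape[1]:(i + 1) * eeg.shape[1]] = eeg[:eeg.shape[0] - lag]
    return X

# Synthetic placeholders for the speech envelope and the EEG recorded in the
# AV-congruent, audio-only, and video-only conditions.
envelope = rng.standard_normal(n_samples)
eeg_av = rng.standard_normal((n_samples, n_channels)) + envelope[:, None] * 0.3
eeg_a  = rng.standard_normal((n_samples, n_channels)) + envelope[:, None] * 0.2
eeg_v  = rng.standard_normal((n_samples, n_channels)) * 0.5

half = n_samples // 2                    # simple train/test split for illustration

def decode(eeg):
    """Fit a ridge decoder on the first half, score reconstruction on the second."""
    X = lag_matrix(eeg, lags)
    model = Ridge(alpha=1e3).fit(X[:half], envelope[:half])
    r, _ = pearsonr(model.predict(X[half:]), envelope[half:])
    return r

r_av = decode(eeg_av)                    # multisensory (AV congruent) model
r_add = decode(eeg_a + eeg_v)            # additive model: summed unisensory EEG
print(f"AV congruent r = {r_av:.3f}, additive (A+V) r = {r_add:.3f}")
```

In this sketch the additive model is formed by summing the audio-only and video-only EEG before training, one common way to operationalize the additive benchmark; the study's exact construction of the additive model and its forward (temporal response function) analyses are described in the full text.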

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.