Sara Tuomaala
Automated speech artefact removal from MEG data utilizing facial gestures and mutual information
Open Access. Publisher: Journals Gateway
Imaging Neuroscience (2025) 3: imag_a_00545.
Published: 22 April 2025
Abstract
The ability to speak is one of the most crucial human skills, motivating neuroscientific studies of speech production and speech-related neural dynamics. Increased knowledge in this area allows, for example, the development of rehabilitation protocols for language-related disorders. While our understanding of speech-related neural processes has been greatly enhanced by non-invasive neuroimaging techniques, interpretations have been limited by speech artefacts caused by the activation of facial muscles, which mask important language-related information. Although earlier approaches have applied independent component analysis (ICA), the artefact removal process remains time-consuming, poorly replicable, and affected by inconsistencies between observers, as it typically requires manual selection of artefactual components. The component selection criteria have also been variable, leading to non-standardized speech artefact removal processes. To address these issues, we propose a pipeline for automated speech artefact removal from magnetoencephalography (MEG) data. We developed an ICA-based speech artefact removal routine that uses electromyography (EMG) data measured from facial muscles during a facial gesture task to isolate the speech-induced artefacts. Additionally, we used mutual information (MI) as a similarity measure between the EMG signals and the ICA-decomposed MEG data to provide a feasible way to identify the artefactual components. Our approach removed speech artefacts from MEG data efficiently and in an automated manner. The method can be readily applied to improve the understanding of speech-related cortical dynamics while transparently evaluating the removed and preserved MEG activation.
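The abstract outlines the core of the method: decompose the MEG data with ICA, score each independent component by its mutual information with the facial EMG recordings, and reject the components that score highest. The following minimal sketch illustrates that idea using MNE-Python and scikit-learn; the file name, channel picks, component count, and the two-standard-deviation rejection threshold are all hypothetical illustrations, not the authors' actual pipeline, which also exploits the facial gesture task in ways not reproduced here.

```python
import numpy as np
import mne
from sklearn.feature_selection import mutual_info_regression

# Load a combined MEG + facial-EMG recording (hypothetical file name)
raw = mne.io.read_raw_fif("speech_task_raw.fif", preload=True)

# Decompose the MEG channels with ICA
ica = mne.preprocessing.ICA(n_components=30, random_state=0)
ica.fit(raw.copy().pick("meg"))

# Time courses of the independent components and of the EMG channels
ic_sources = ica.get_sources(raw).get_data()   # (n_components, n_times)
emg = raw.copy().pick("emg").get_data()        # (n_emg_channels, n_times)

# Score each component by its largest mutual information with any EMG channel
mi_scores = np.array([
    max(mutual_info_regression(src.reshape(-1, 1), ch)[0] for ch in emg)
    for src in ic_sources
])

# Flag components whose MI stands out (hypothetical threshold) and remove them
mi_threshold = mi_scores.mean() + 2 * mi_scores.std()
ica.exclude = list(np.where(mi_scores > mi_threshold)[0])
raw_clean = ica.apply(raw.copy())
```

One appeal of MI over a simple linear correlation here is that it also captures nonlinear dependencies between the EMG signals and the component time courses, which fits the abstract's use of MI as a general similarity measure.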