Differential weighting of information during aloud and silent reading: Evidence from representational similarity analysis of fMRI data
Heath E. Matheson
Imaging Neuroscience (2025) 3: imag_a_00428. Published: 13 January 2025.
Abstract
Single-word reading depends on multiple types of information processing: readers must process low-level visual properties of the stimulus, form orthographic and phonological representations of the word, and retrieve semantic content from memory. Reading aloud introduces an additional type of processing wherein readers must execute an appropriate sequence of articulatory movements necessary to produce the word. To date, cognitive and neural differences between aloud and silent reading have mainly been ascribed to articulatory processes. However, it remains unclear whether articulatory information is used to discriminate unique words, at the neural level, during aloud reading. Moreover, very little work has investigated how other types of information processing might differ between the two tasks. The current work used representational similarity analysis (RSA) to interrogate fMRI data collected while participants read single words aloud or silently. RSA was implemented using a whole-brain searchlight procedure to characterise correspondence between neural data and each of five models representing a discrete type of information. Both conditions elicited decodability of visual, orthographic, phonological, and articulatory information, though to different degrees. Compared with reading silently, reading aloud elicited greater decodability of visual, phonological, and articulatory information. By contrast, silent reading elicited greater decodability of orthographic information in right anterior temporal lobe. These results support an adaptive view of reading whereby information is weighted according to its task relevance, in a manner that best suits the reader’s goals.
Includes: Supplementary data
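The core of the method described above, representational similarity analysis, compares a neural representational dissimilarity matrix (RDM), built from voxel activity patterns within a searchlight sphere, against a model RDM encoding one hypothesised type of information. The following is a minimal Python sketch of that comparison step only; the function names, the toy data, and the stand-in model RDM are hypothetical, and the paper's actual pipeline details (searchlight geometry, the five specific models, group-level statistics) are not shown.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    """Neural RDM from an (items x voxels) array: pairwise correlation
    distance (1 - Pearson r) between item activity patterns, returned
    as a condensed upper-triangle vector."""
    return pdist(patterns, metric="correlation")

def rsa_score(patterns, model_rdm):
    """Spearman rank correlation between the neural RDM and a model RDM
    (both as condensed upper-triangle vectors). Higher values indicate
    greater decodability of the model's information type."""
    rho, _ = spearmanr(neural_rdm(patterns), model_rdm)
    return rho

# Toy example: 20 words, 50 voxels in one searchlight sphere, and a
# random stand-in for one model RDM (e.g., orthographic similarity).
rng = np.random.default_rng(0)
patterns = rng.standard_normal((20, 50))
model = pdist(rng.standard_normal((20, 8)), metric="correlation")
print(rsa_score(patterns, model))
```

In a whole-brain searchlight analysis, a score like this would be computed at every sphere centre for each of the five model RDMs, yielding per-model decodability maps that can then be contrasted between the aloud and silent reading conditions.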