Marco Buiatti (1-2 of 2)
Journal Articles
Journal of Cognitive Neuroscience (2019) 31 (1): 95–108.
Published: 01 January 2019
Abstract
A single word (the noun “elephant”) encapsulates a complex multidimensional meaning, including both perceptual (“big”, “gray”, “trumpeting”) and conceptual (“mammal”, “can be found in India”) features. Opposing theories make different predictions as to whether different features (also conceivable as dimensions of the semantic space) are stored in similar neural regions and recovered with similar temporal dynamics during word reading. In this magnetoencephalography study, we tracked the brain activity of healthy human participants while reading single words varying orthogonally across three semantic dimensions: two perceptual ones (i.e., the average implied real-world size and the average strength of association with a prototypical sound) and a conceptual one (i.e., the semantic category). The results indicate that perceptual and conceptual representations are supported by partially segregated neural networks: Whereas visual and auditory dimensions are encoded in the phase coherence of low-frequency oscillations of occipital and superior temporal regions, respectively, semantic features are encoded in the power of low-frequency oscillations of anterior temporal and inferior parietal areas. However, despite the differences, these representations appear to emerge at the same latency: around 200 msec after stimulus onset. Taken together, these findings suggest that perceptual and conceptual dimensions of the semantic space are recovered automatically, rapidly, and in parallel during word reading.
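The abstract's contrast between dimensions carried by the phase coherence versus the power of low-frequency oscillations corresponds to two standard trial-level statistics. Below is a minimal numpy/scipy sketch of how the two measures differ, assuming single-sensor epochs arranged as a (trials × samples) array; the 1-8 Hz band, sampling rate, and simulated data are illustrative assumptions, not the study's actual parameters or pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc_and_power(epochs, fs, band=(1.0, 8.0)):
    """epochs: (n_trials, n_samples) array for one sensor or source.
    Returns inter-trial phase coherence and mean power, each (n_samples,)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, epochs, axis=1), axis=1)
    # Phase coherence: length of the mean unit phasor across trials
    # (1 = identical phase on every trial, 0 = uniformly random phase).
    itpc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
    # Power: trial-averaged squared envelope, insensitive to phase alignment.
    power = np.mean(np.abs(analytic) ** 2, axis=0)
    return itpc, power

# Toy check: a 5 Hz component with the same phase on every trial raises
# ITPC, whereas adding it with a random phase per trial would raise power
# while leaving ITPC near zero.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(int(0.8 * fs)) / fs
epochs = rng.standard_normal((60, t.size))
epochs += np.sin(2 * np.pi * 5.0 * t)  # phase-locked across trials
itpc, power = itpc_and_power(epochs, fs)
```

The two statistics dissociate exactly as the abstract requires: a response that is time-locked in phase across trials shows up in ITPC, while a response whose amplitude increases without phase alignment shows up only in power.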
Journal Articles
Journal of Cognitive Neuroscience (2010) 22 (5): 1054–1068.
Published: 01 May 2010
Abstract
When two displays are presented in close temporal succession at the same location, how does the brain assign them to one versus two conscious percepts? We investigate this issue using a novel reading paradigm in which the odd and even letters of a string are presented alternately at a variable rate. The results reveal a window of temporal integration during reading, with a nonlinear boundary at ∼80 msec of presentation duration. Below this limit, the oscillating stimulus is easily fused into a single percept, with all the characteristics of normal reading. Above this limit, reading times are severely slowed and suffer from a word-length effect. ERPs indicate that, even at the fastest frequency, the oscillating stimulus elicits synchronous oscillations in posterior visual cortices, while late ERP components sensitive to lexical status vanish beyond the fusion threshold. Thus, the fusion/segregation dilemma is not resolved by retinal or subcortical filtering, but at the cortical level within at most 300 msec. The results argue against theories of visual word recognition and letter binding that rely on temporal synchrony or other fine temporal codes.
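The alternating-letter paradigm can be pictured with a small scheduling sketch: the odd-position and even-position letters of a word are displayed on alternating frames, and below roughly 80 msec per frame the two complementary half-strings fuse into a single percept. This is a hypothetical illustration; the function name, mask character, and timings are mine, not the study's display code.

```python
def alternating_frames(word: str, frame_ms: int, total_ms: int, mask: str = "#"):
    """Build the frame schedule: odd-position letters on one frame,
    even-position letters on the next (a sketch of the paradigm only)."""
    frames, t, show_odd = [], 0, True
    while t < total_ms:
        frame = "".join(
            ch if (i % 2 == 0) == show_odd else mask  # 1st, 3rd, ... letters
            for i, ch in enumerate(word)
        )
        frames.append((t, frame))
        show_odd = not show_odd
        t += frame_ms
    return frames

# At a 50 msec frame duration (below the ~80 msec fusion boundary reported
# in the abstract), the complementary frames would fuse into "elephant":
for onset, frame in alternating_frames("elephant", frame_ms=50, total_ms=200):
    print(f"{onset:4d} ms  {frame}")
#    0 ms  e#e#h#n#
#   50 ms  #l#p#a#t
#  ...
```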