Valentina Borghesani
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2019) 31 (6): 791–807.
Published: 01 June 2019
FIGURES (7)
Abstract
Previous evidence from neuropsychological and neuroimaging studies suggests functional specialization for tools and related semantic knowledge in a left frontoparietal network. It is still debated whether these areas are involved in the representation of rudimentary movement-relevant knowledge regardless of semantic domains (animate vs. inanimate) or categories (tools vs. nontool objects). Here, we used fMRI to record brain activity while 13 volunteers performed two semantic judgment tasks on visually presented items from three different categories: animals, tools, and nontool objects. Participants had to judge two distinct semantic features: whether two items typically move in a similar way (e.g., a fan and a windmill move in circular motion) or whether they are usually found in the same environment (e.g., a seesaw and a swing are found in a playground). We investigated differences in overall activation (which areas are involved) as well as representational content (which information is encoded) across semantic features and categories. Results of voxel-wise mass univariate analysis showed that, regardless of semantic category, a dissociation emerges between processing information on prototypical location (involving the anterior temporal cortex and the angular gyrus) and movement (linked to left inferior parietal and frontal activation). Multivoxel pattern correlation analyses confirmed the representational segregation of networks encoding task- and category-related aspects of semantic processing. Taken together, these findings suggest that the left frontoparietal network is recruited to process movement properties of items (including both biological and nonbiological motion) regardless of their semantic category.
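As an illustrative aside, the logic of a multivoxel pattern correlation analysis like the one mentioned in this abstract can be sketched with simulated data: voxel patterns that share representational content should correlate more strongly across conditions than unrelated patterns. Everything below (data, variable names, parameters) is hypothetical and is not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical sketch of a multivoxel pattern correlation analysis
# (simulated data; not the authors' actual pipeline). The idea:
# activity patterns from conditions that share representational
# content correlate more strongly than patterns from unrelated
# conditions.

rng = np.random.default_rng(0)
n_voxels = 100

# Simulated activity patterns over n_voxels voxels.
pattern_a = rng.normal(size=n_voxels)                         # condition A
pattern_b = pattern_a + rng.normal(scale=0.5, size=n_voxels)  # shares signal with A
pattern_c = rng.normal(size=n_voxels)                         # independent of A

def pattern_correlation(x, y):
    """Pearson correlation between two voxel activity patterns."""
    return np.corrcoef(x, y)[0, 1]

related = pattern_correlation(pattern_a, pattern_b)
unrelated = pattern_correlation(pattern_a, pattern_c)
# The related pair should yield the higher correlation.
```

In practice such correlations are computed between independent data splits (e.g., separate runs) and compared across tasks and categories, which is what lets the analysis ask which information a region encodes rather than merely whether it activates.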
Journal of Cognitive Neuroscience (2019) 31 (1): 95–108.
Published: 01 January 2019
FIGURES (4)
Abstract
A single word (the noun “elephant”) encapsulates a complex multidimensional meaning, including both perceptual (“big”, “gray”, “trumpeting”) and conceptual (“mammal”, “can be found in India”) features. Opposing theories make different predictions as to whether different features (also conceivable as dimensions of the semantic space) are stored in similar neural regions and recovered with similar temporal dynamics during word reading. In this magnetoencephalography study, we tracked the brain activity of healthy human participants while reading single words varying orthogonally across three semantic dimensions: two perceptual ones (i.e., the average implied real-world size and the average strength of association with a prototypical sound) and a conceptual one (i.e., the semantic category). The results indicate that perceptual and conceptual representations are supported by partially segregated neural networks: Whereas visual and auditory dimensions are encoded in the phase coherence of low-frequency oscillations of occipital and superior temporal regions, respectively, semantic features are encoded in the power of low-frequency oscillations of anterior temporal and inferior parietal areas. However, despite the differences, these representations appear to emerge at the same latency: around 200 msec after stimulus onset. Taken together, these findings suggest that perceptual and conceptual dimensions of the semantic space are recovered automatically, rapidly, and in parallel during word reading.
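As a toy illustration of the two oscillatory measures contrasted in this abstract, the phase and the power of a low-frequency signal can both be read off its analytic representation, computed here with an FFT-based Hilbert transform. The signal and all parameters below are simulated and arbitrary; this is not the study's MEG analysis.

```python
import numpy as np

# Toy illustration (simulated signal; not the study's MEG pipeline):
# the abstract contrasts the *phase* and the *power* of low-frequency
# oscillations. Both are obtained from the analytic signal.

fs = 250                      # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 1, 1 / fs)   # one second of data
freq = 4                      # a 4 Hz (theta-band) oscillation
signal = 2.0 * np.sin(2 * np.pi * freq * t)

# Analytic signal: zero out negative frequencies, double positive ones.
n = len(signal)
h = np.zeros(n)
h[0] = 1.0
h[n // 2] = 1.0        # Nyquist bin (n is even here)
h[1:n // 2] = 2.0
analytic = np.fft.ifft(np.fft.fft(signal) * h)

power = np.abs(analytic) ** 2   # instantaneous power (amplitude squared)
phase = np.angle(analytic)      # instantaneous phase in radians
```

Power tracks the amplitude of the oscillation at each time point, while phase tracks where in its cycle the oscillation is; phase coherence across trials and power changes are therefore dissociable measures of the same band-limited signal.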