Imagining Sounds and Images: Decoding the Contribution of Unimodal and Transmodal Brain Regions to Semantic Retrieval in the Absence of Meaningful Input
Journal of Cognitive Neuroscience (2019) 31 (11): 1599–1616.
Published: 01 November 2019
FIGURES (6)
Abstract
In the absence of sensory information, we can generate meaningful images and sounds from representations in memory. However, it remains unclear which neural systems underpin this process and whether tasks requiring the top–down generation of different kinds of features recruit similar or different neural networks. We asked people to internally generate the visual and auditory features of objects, either in isolation (car, dog) or in specific and complex meaning-based contexts (car/dog race). Using an fMRI decoding approach, in conjunction with functional connectivity analysis, we examined the role of auditory/visual cortex and transmodal brain regions. Conceptual retrieval in the absence of external input recruited sensory and transmodal cortex. The response in transmodal regions—including anterior middle temporal gyrus—was of equal magnitude for visual and auditory features yet captured modality information in the pattern of response across voxels. In contrast, sensory regions showed greater activation for modality-relevant features in imagination (even when external inputs did not differ). These data are consistent with the view that transmodal regions support internally generated experiences and that they play a role in integrating perceptual features encoded in memory.
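The key dissociation in this abstract—equal response magnitude but modality-specific multivoxel patterns—can be illustrated on synthetic data. The sketch below is a toy illustration only (the patterns, noise level, and nearest-centroid classifier are invented for the example, not the authors' pipeline): two conditions evoke identical mean activation per trial, yet a pattern classifier separates them easily.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40

# Two fixed multivoxel patterns: the *pattern* across voxels differs by
# modality even though the mean activation does not.
pat_aud = rng.standard_normal(n_vox)
pat_vis = rng.standard_normal(n_vox)
pat_aud -= pat_aud.mean()   # zero-mean patterns, so both conditions
pat_vis -= pat_vis.mean()   # have equal mean amplitude per trial

aud = pat_aud + 0.5 * rng.standard_normal((n_trials, n_vox))
vis = pat_vis + 0.5 * rng.standard_normal((n_trials, n_vox))

# Mean amplitude cannot tell the modalities apart ...
print(round(aud.mean(), 2), round(vis.mean(), 2))  # both ≈ 0

# ... but a nearest-centroid pattern classifier can (split-half for brevity).
train_a, test_a = aud[:20], aud[20:]
train_v, test_v = vis[:20], vis[20:]
cen_a, cen_v = train_a.mean(0), train_v.mean(0)

def classify(x):
    # correlate the held-out pattern with each training centroid
    return "aud" if np.corrcoef(x, cen_a)[0, 1] > np.corrcoef(x, cen_v)[0, 1] else "vis"

acc = (sum(classify(x) == "aud" for x in test_a) +
       sum(classify(x) == "vis" for x in test_v)) / 40
print(acc)  # well above chance (0.5)
```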
Observing, Performing, and Understanding Actions: Revisiting the Role of Cortical Motor Areas in Processing of Action Words
Journal of Cognitive Neuroscience (2014) 26 (8): 1644–1653.
Published: 01 August 2014
Abstract
Language content and action/perception have been shown to activate common brain areas in previous neuroimaging studies. However, it is unclear whether overlapping cortical activation reflects a common neural source or adjacent, but distinct, sources. We address this issue by using multivoxel pattern analysis on fMRI data. Specifically, participants were instructed to engage in five tasks: (1) execute hand actions (AE), (2) observe hand actions (AO), (3) observe nonbiological motion (MO), (4) read action verbs, and (5) read nonaction verbs. A classifier was trained to distinguish between data collected from neural motor areas during (1) AE versus MO and (2) AO versus MO. These two classifiers were then used to test for a distinction between data collected during the reading of action versus nonaction verbs. The results show that the classifier trained to distinguish between AE and MO distinguishes between word categories using signal recorded from the left parietal cortex and pre-SMA, but not from ventrolateral premotor cortex. In contrast, the classifier trained to distinguish between AO and MO discriminates between word categories using the activity pattern in the left premotor and left parietal cortex. This shows that the sensitivity of premotor areas to language content is more similar to the process of observing others acting than to acting oneself. Furthermore, those parts of the brain that show comparable neural patterns for action execution and action word comprehension are high-level integrative motor areas rather than low-level motor areas.
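The cross-decoding logic—train a classifier on localizer conditions, then transfer it without retraining to word conditions—can be sketched on simulated voxel patterns. This is a minimal sketch under invented assumptions (a centroid-difference linear classifier and a shared "action" pattern that word reading partially reuses), not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n = 60, 30

# Shared "action" pattern: by hypothesis, reading action verbs partially
# re-uses the pattern evoked by the motor localizer conditions.
action_pat = rng.standard_normal(n_vox)

AE = action_pat + rng.standard_normal((n, n_vox))          # action execution
MO = rng.standard_normal((n, n_vox))                       # nonbiological motion
act_verbs = 0.7 * action_pat + rng.standard_normal((n, n_vox))
nonact_verbs = rng.standard_normal((n, n_vox))

# Train: centroid-difference linear classifier on AE vs MO ...
w = AE.mean(0) - MO.mean(0)
b = -w @ (AE.mean(0) + MO.mean(0)) / 2

# ... then transfer, without retraining, to the word conditions.
pred_act = act_verbs @ w + b > 0        # True → classified as "action-like"
pred_nonact = nonact_verbs @ w + b > 0
acc = (pred_act.sum() + (~pred_nonact).sum()) / (2 * n)
print(acc)  # above chance if word patterns overlap with the localizer
```

Above-chance transfer accuracy is what licenses the inference that word reading and the localizer condition share a neural source rather than merely adjacent ones.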
Journal of Cognitive Neuroscience (2012) 24 (11): 2237–2247.
Published: 01 November 2012
Abstract
Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement.
The results indicate that (1) comprehension of IR sentences reliably activates cortical motor areas more than comprehension of sentences devoid of any implicit motor information, despite the fact that IR sentences contain no lexical reference to action, and (2) comprehension of IR sentences also reliably activates substantial portions of the theory-of-mind network, known to be involved in making inferences about the mental states of others. The implications of these findings for embodied theories of language are discussed.
Context-dependent Changes in Functional Connectivity of Auditory Cortices during the Perception of Object Words
Journal of Cognitive Neuroscience (2012) 24 (10): 2108–2119.
Published: 01 October 2012
FIGURES (5)
Abstract
Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6, 576–582, 2005; Barsalou, L. W. Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660, 1999]. Proponents of a strong embodied account argue that activity in perception/action areas is triggered automatically upon encountering a word and reflects static semantic representations. In contrast to what would be expected if lexical semantic representations are automatically triggered upon encountering a word, a number of studies failed to find motor-related activity for words with a putative action-semantic component [Raposo, A., Moss, H. E., Stamatakis, E. A., & Tyler, L. K. Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia, 47, 388–396, 2009; Rueschemeyer, S.-A., Brass, M., & Friederici, A. D. Comprehending prehending: Neural correlates of processing verbs with motor stems. Journal of Cognitive Neuroscience, 19, 855–865, 2007]. In a recent fMRI study, Van Dam and colleagues [Van Dam, W. O., Van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, in press] showed that the degree to which a modality-specific region contributes to a representation considerably changes as a function of context. In the current study, we presented words for which both motor and visual properties (e.g., tennis ball, boxing glove) were important in constituting the concept. Our aim was to corroborate earlier findings of flexible and context-dependent language representations by testing whether functional integration between auditory brain regions and perception/action areas is modulated by context.
Functional connectivity was investigated by means of a psychophysiological interaction analysis, in which we found that bilateral superior temporal gyrus was more strongly connected with brain regions relevant for coding action information: (1) for Action Color words vs. Abstract words, and (2) for Action Color words presented in a context that emphasized action vs. a context that emphasized color properties.
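A psychophysiological interaction (PPI) analysis, as used above, amounts to adding a seed-by-task interaction regressor to an ordinary GLM and testing its coefficient. The following numpy sketch uses simulated time courses with invented effect sizes and regressor names, purely to show the shape of the model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Seed time course (e.g., a superior temporal gyrus ROI) and a task
# regressor coding context (+1 = action context, -1 = color context).
physio = rng.standard_normal(n)
psych = np.repeat([1.0, -1.0], n // 2)
ppi = physio * psych                    # the interaction regressor

# Simulated target region: coupled to the seed more strongly in the
# action context (interaction effect 0.8) than overall (main effect 0.3).
target = 0.8 * ppi + 0.3 * physio + rng.standard_normal(n) * 0.5

# GLM: intercept, psychological, physiological, and PPI regressors.
X = np.column_stack([np.ones(n), psych, physio, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(beta.round(2))  # the PPI coefficient (last) should be ≈ 0.8
```

A reliably nonzero PPI coefficient is what is meant by "more strongly connected ... in a context that emphasized action": the seed-target coupling, not just the seed's activity, changes with context.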
The Function of Words: Distinct Neural Correlates for Words Denoting Differently Manipulable Objects
Journal of Cognitive Neuroscience (2010) 22 (8): 1844–1851.
Published: 01 August 2010
Abstract
Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (i.e., volumetrically manipulable objects, e.g., bookend, clock) and (2) objects that must be picked up to use (i.e., functionally manipulable objects, e.g., cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally, rather than whether an object can be displaced with the hand, is reflected in semantic representations in the brain.