Benjamin Straube
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2011) 23 (2): 306–324.
Published: 01 February 2011
Abstract
In social situations, we encounter information transferred in firsthand (egocentric) and secondhand (allocentric) communication contexts. However, the mechanism by which an individual distinguishes whether a past interaction occurred in an egocentric versus an allocentric situation is poorly understood. This study examined the neural bases of encoding memories of social interactions by experimentally manipulating the communication context. During fMRI data acquisition, participants watched video clips of an actor speaking and gesturing directly toward them (egocentric context) or toward an unseen third person (allocentric context). After scanning, a recognition task gauged participants' ability to recognize the sentences they had just seen and to recall the context in which the sentences had been spoken. We found no differences between the recognition of sentences spoken in egocentric and allocentric contexts. However, when asked about the communication context (“Had the actor spoken directly to you?”), participants tended to believe falsely that the actor had spoken directly to them during allocentric conditions. Greater activity in the hippocampus was related to correct context memory, whereas the ventral anterior cingulate cortex (ACC) was activated for subsequently inaccurate context memory. For the interaction between encoding context and context memory, we observed increased activation for egocentric remembered items in the bilateral and medial frontal cortex, the basal ganglia (BG), and the left parietal and temporal lobes. Our data indicate that memories of social interactions are biased to be remembered egocentrically. Self-referential encoding processes, reflected in increased frontal activation and decreased hippocampal activation, might be the basis of correct item memory but false context memory of social interactions.
Journal of Cognitive Neuroscience (2009) 21 (4): 821–836.
Published: 01 April 2009
Abstract
In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates the neural basis of memory for bimodal speech and gesture representations. In this fMRI study, 12 participants were presented with video clips showing an actor performing meaningful metaphoric gestures (MG), unrelated free gestures (FG), or no arm and hand movements (NG) accompanying sentences with abstract content. After the fMRI session, the participants performed a recognition task. Behaviorally, the participants showed the highest hit rate for sentences accompanied by meaningful metaphoric gestures. Despite comparable old/new discrimination performance (d′) across the three conditions, we obtained distinct memory-related left-hemispheric activations in the inferior frontal gyrus (IFG), the premotor cortex (BA 6), and the middle temporal gyrus (MTG), as well as significant correlations between hippocampal activation and memory performance in the metaphoric gesture condition. In contrast, unrelated speech and gesture information (FG) was processed in the left occipito-temporal and cerebellar regions and the right IFG, as was the no-gesture condition (NG). We propose that the specific left-lateralized activation pattern for the metaphoric speech–gesture sentences reflects semantic integration of speech and gestures. These results provide novel evidence about the neural integration of abstract speech and gestures as it contributes to subsequent memory performance.