Abstract

Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to audiovisual associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

INTRODUCTION

Our everyday experience with the world involves interacting with and perceiving objects through multiple senses. From using complex devices such as cell phones and musical instruments to doing simple things like ringing a doorbell, we are often required to reenact motor activities to achieve certain perceived outcomes. Through experience, we learn associations between perceived objects and perceived outcomes in the context of self-performed actions. Memory representations of such experiences contain information not only about what we have seen, heard, smelled, touched, or tasted, but also about our physical actions during these events. Therefore, we come to associate multisensory perceptions in the context of goal-directed action.

Here we will use the term active learning to denote experience that involves self-performed physical actions during the encoding of information—requiring the involvement of motor systems as well as perceptual systems (vision, audition, haptics) during the exposure episode. Previous research into active learning has focused on physical exploration of three-dimensional objects, learning how to use tools, and the enactment of verbal phrases (Weisberg, van Turennout, & Martin, 2007; Harman, Humphrey, & Goodale, 1999; Cohen, 1989). Behavioral paradigms in this area typically demonstrate that active learning enhances performance during memory tasks. For example, actively exploring visual objects leads to faster RTs during recognition (James, Humphrey, Vilis, et al., 2002; Harman et al., 1999). Other work shows that active learning can also affect other cognitive processes—actively exploring visual objects enhances later mental rotation of those objects relative to passively (visual only) explored objects (e.g., James, Humphrey, & Goodale, 2001). These studies suggest that visual and motor processes interact in ways that are important for understanding object constancy. Because active motor learning impacts object knowledge, it may be more than a mere outcome of perceptual and cognitive processing and, in fact, may be an integral aspect of how such processes function in everyday experience.

Other research has focused on how subject-performed tasks, in response to verbal phrases, enhance subsequent memory performance in free recall tasks relative to passive listening (Engelkamp & Zimmer, 1994; Cohen, 1989). Work in this area suggests that it is the active motor experience inherent to the self-performed task that leads to these enhancements, as opposed to other differences between self-performed and passive tasks (von Essen & Nilsson, 2003; Engelkamp & Zimmer, 1997). Actively self-performing verbal commands enhances old–new recognition accuracy compared with passive encoding, and this enhancement holds even when task difficulty is increased (Engelkamp, Zimmer, & Biegelmann, 1993).

Neuroimaging studies suggest that active learning during encoding impacts the neural activation patterns associated with subsequent perception. One consistent finding across studies is that activation in motor cortices occurs during visual presentation of stimuli previously associated with actions. Commonly experienced stimuli including tools (Chao & Martin, 2000), kitchen utensils (Grezes & Decety, 2002), letters (James & Gauthier, 2006; Longcamp, Anton, Roth, & Velay, 2005), auditorily presented action words (James & Maouene, 2009), and visually presented action words (e.g., Pulvermuller, Harle, & Hummel, 2001) have all been shown to recruit motor systems during perception. In these studies, participants were shown stimuli that already had associations with motor actions, with no controlled training before neuroimaging. Other studies, however, use novel stimuli for which active training occurs before neuroimaging. Such studies have demonstrated that active motor training with novel objects such as tools (Weisberg et al., 2007) or letter-like symbols (James & Atwood, 2009) led to activation in motion-sensitive (left middle temporal gyrus), motor-related (left intraparietal sulcus and premotor cortex), and more focal visual (fusiform) regions when the objects were later perceived. Motor system activation during auditory perception has also been shown using EEG (De Lucia, Camen, Clarke, & Murray, 2009) and fMRI (Mutschler et al., 2007). That is, action-related sounds, or sounds that are learned actively, are associated with activation of frontal and premotor regions during subsequent perceptual presentations. In addition, activation of premotor regions during fMRI has been used to train multivariate pattern classifiers to predict whether participants were listening to hand- or mouth-related actions (Etzel, Gazzola, & Keysers, 2008).

Neuroimaging work has also shown that active learning during encoding impacts the neural activation patterns during subsequent memory tasks. For example, a magnetoencephalography study showed that the enhancement in recognition accuracy for active self-performed tasks over passive verbal tasks is related to the activation of motor systems in the brain that occurs between 150 and 250 msec after stimulus onset (Masumoto et al., 2006). Similarly, an ERP study demonstrated that brain activation during a source memory test changed depending on whether real objects were encoded during self-performed actions, passive viewing of actions, imagined actions, or nonmotor tasks (Senkfor, Van Petten, & Kutas, 2002). Finally, a positron emission tomography study found that the encoding of overt or covert action phrases, compared with passive verbal encoding, was associated with activation of motor regions during subsequent cued recall (Nyberg et al., 2001; see also Nilsson et al., 2000). These activations may reflect the occurrence of motor reactivation. Usually, reactivation is defined by patterns of activity that occur during an event that reflect previous experience. In these cases, the task during which reactivation is seen is not thought to require the elicited pattern; it is recruited as a result of a prior encoding episode. For example, primary motor areas, thought to support only motor processing, can be reactivated during a purely visual task as a result of prior motor experience. Motor reactivation, occurring during perception and memory tasks after motor-related experiences, supports a larger literature theorizing (e.g., Fuster, 2009; Barsalou, 1999; Damasio, 1989) and demonstrating (e.g., Johnson, McDuff, Rugg, & Norman, 2009; James & Gauthier, 2003; Wheeler, Petersen, & Buckner, 2000; for a review, see Slotnick, 2004) that brain regions involved during encoding reactivate during subsequent perception and memory. Importantly, this work suggests that stored representations of known events include the embodied information that is incorporated during encoding.

It is important to point out that active motor learning commonly involves the processing of haptic information. We often also feel the objects we manipulate, gathering both small- and large-scale tactile information. Like motor-related information, haptic information may modulate subsequent perception and memory of objects. Previous work has shown that the inclusion of haptic exploration during learning enhances the recognition of audiovisual associations (Fredembach, de Boisferon, & Gentaz, 2009). Furthermore, fMRI studies have shown that somatosensory regions reactivate during the retrieval of haptic information (Stock, Roder, Burke, Bien, & Rosler, 2009).

Although the influence of active learning on the recognition of unisensory information has been investigated, the effects of active learning on multisensory perception, multisensory associative recognition, and multisensory integration are not well known. To address these gaps in knowledge, the current study explored the behavioral and neural effects of active versus passive learning on the subsequent perception and recognition of audiovisual associations. To this end, we manipulated the type of encoding experience by having participants use self-performed motor movements (active condition) or observe an experimenter perform the same actions (passive condition), all of which involved the conjunction of novel visual and novel auditory information. We kept the encoding experience of the active and passive groups as similar as possible, with the only difference being who physically interacted with the stimuli. In summary, the approach of the current study was to present participants with a multisensory experience during encoding that either included a self-generated action or only the observation of that action and then to test how these different encoding experiences impacted subsequent perception, both neurally and behaviorally.

Three main hypotheses were tested. First, we expected that active learning would enhance behavioral measures of associative and unisensory recognition. Second, we expected that active learning would be associated with the reactivation of motor-related systems during the subsequent perception of audiovisual associations and that this pattern of activation would differ during subsequent unisensory perception. Related to this, functional connectivity analyses were also performed to assess whether connectivity with motor regions was stronger in the active group than the passive group. Specifically, we hypothesized that connectivity would be stronger between motor regions and sensory processing regions. Third, we expected that active learning of audiovisual associations would modulate later multisensory processing such that brain regions responsible for audiovisual integration would show stronger multisensory gain after active learning than passive learning. Specifically, the difference between the audiovisual and the sum of the audio and visual stimulus conditions (i.e., the gain) would be greater after active learning than passive learning. Therefore, this study aims to test whether active learning of specific conjunctions of audiovisual stimuli modulates behavioral associative recognition measures as well as the activity in motor and multisensory brain regions, thus extending extant research beyond the impact of active learning on unisensory item information.

METHODS

Participants

Twenty individuals (10 women, 10 men) participated in the study (mean age = 24.35, SD = 3.2). Participants were randomly assigned to either the active group (5 women and 5 men; mean age = 24.0, SD = 2.2) or the passive group (5 women and 5 men; mean age = 24.7, SD = 4.1). There was no significant difference between the ages of the two groups (t(18) = 0.4, p = .699). All participants gave informed consent according to the guidelines of the Indiana University Institutional Review Board. All participants reported right handedness and normal or corrected-to-normal vision. Participants were compensated for their time.

Stimuli and Apparatus

The visual stimuli were relatively complex novel 3-D objects. These objects were made of gray, lightweight ABS plastic using fused deposition modeling with a Stratasys Prodigy Plus rapid prototyping machine (see Figure 1). Novel sounds were created by initially recording sounds of distinct common objects and environmental sounds. These sounds were then imported into GarageBand (Apple, Inc., Cupertino, CA) and distorted using various effects found in the program. The goal was to make the sounds distinct from each other while, at the same time, distorting them so the original sound could no longer be recognized. Both the audio and visual stimuli were novel to the participants to reduce the possibility that they had any known prior audio-, visual-, or motor-related associations.

Figure 1. 

Examples of visual stimuli. Photos of the novel 3-D objects used during active and passive training.

To allow for active motor involvement during the encoding of these stimuli, a pressure-sensitive box was used. This box contained an internal speaker that was triggered when the object was lightly pressed to the surface of the box (see Figure 2). This allowed the participants in the active group or the experimenter in the passive group to perform a transitive movement with the novel objects to produce the associated sound.

Figure 2. 

Images of the apparatus during active and passive training. Photos demonstrating both the active and passive training. (A) During active learning, the experimenter first placed the object in front of the participant, and then the participant picked up the object and placed it on the box to trigger the associated sound. (B) During passive learning, the experimenter first placed the object in front of the participant, and then the experimenter picked up the object and placed it on the box to trigger the associated sound.

Active and Passive Audiovisual Association Exposure Procedures

Before entering the imaging research facility, both the active and passive groups received an exposure session in which they associated novel audio and visual stimuli. Half of the participants actively moved the novel objects to produce the associated novel sounds, and the other half passively watched an experimenter perform this task. All participants were exposed to the 12 audiovisual pairs 15 times: the 12 pairs were presented sequentially in 15 passes, with the order of presentation randomized. The same random sequences were used for all participants in both groups. There was no evaluation of learning before scanning; the purpose of this portion of the experiment was to expose the participants to the audiovisual associations through either self-generated or other-generated action. The exposure session lasted 30 min.
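
The randomization procedure can be summarized in a few lines of code. The following minimal Python sketch is ours, not the authors' script; the seed value, the function name, and the assumption that order was re-randomized within each of the 15 passes are illustrative:

```python
import random

N_PAIRS = 12    # audiovisual pairs learned during exposure
N_PASSES = 15   # each pair is presented once per pass

def build_exposure_sequence(seed=0):
    """Return 15 randomized passes over the 12 pairs (180 trials).

    A fixed seed yields identical sequences for every participant,
    matching the design described above; the seed value itself is
    an assumption for illustration.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(N_PASSES):
        pass_order = list(range(N_PAIRS))
        rng.shuffle(pass_order)   # randomize order within this pass
        sequence.extend(pass_order)
    return sequence

trials = build_exposure_sequence()
assert len(trials) == N_PAIRS * N_PASSES  # 180 exposure trials
```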

Participants in the active group picked up each object with their right hand when it was placed before them, transported it to the pressure-sensitive box, and pressed lightly to create the associated sound (see Figure 2A). The visual and auditory experience of the passive group was kept as similar as possible to that of the active group. The only difference was that the passive group merely watched the experimenter press the objects onto the pressure-sensitive box to create the associated sound. To keep the visual perception of the two groups as similar as possible, the experimenter placed the object in the same orientation in front of the passive participants just as in the active group, moved his arm away, then picked the object up himself and placed it on the pressure-sensitive box (see Figure 2B). During the exposure session, the objects were presented to all participants in a constant orientation that matched what they later saw in the MRI environment. It should be noted that the active group was exposed to the objects from an egocentric point of view, whereas the passive group was exposed from an allocentric point of view.

Testing Procedures

Immediately after the exposure session, participants were brought to the imaging research facility. After instructions and safety screening, participants underwent the imaging session that lasted 1–1.5 hr. Functional imaging was divided into two runs. These two runs were followed by four other runs with a different design that were part of a different study. For the current study, only the first two runs were analyzed. These two runs lasted 3 min and 30 sec each. After the functional runs were complete, an anatomical series was collected.

During the two functional imaging runs, participants viewed visual objects and listened to sounds. These runs included six conditions: previously learned stimuli ("old visual," "old auditory," and "old audiovisual" pairs) and unlearned stimuli ("new visual," "new auditory," and "new audiovisual" re-pairings). All visual stimuli were presented as gray, computer-generated renderings of the 3-D objects in the center of the screen. Objects were presented such that the axis of elongation was 45° from the participant's line of sight. The visual angle ranged from 6.68° to 14.25° (based on stimulus height) and from 8.58° to 14.25° (based on stimulus width). The "new visual" and "new auditory" stimuli were different stimuli from those learned during the exposure session. The "new audiovisual" re-pairings consisted of the same audio and visual stimuli presented during the exposure session but paired differently. Stimulus conditions were presented in blocks that lasted 24 sec with an interblock interval of 10 sec. The task during the blocks and rest periods was passive viewing. The instruction before the runs was to pay attention to the stimuli that would be presented. The participants did not make visible hand movements, as confirmed by a camera in the bore of the magnet.
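
For concreteness, here is a minimal Python sketch of how this block timing translates into condition regressors sampled at the scanner's TR (2 sec; see Functional Imaging Parameters). The block ordering and the single block per condition shown are assumptions for illustration only; the actual ordering within the runs was not reported:

```python
import numpy as np

TR = 2.0        # sec per volume (from Functional Imaging Parameters)
BLOCK_S = 24.0  # stimulus block duration, sec
GAP_S = 10.0    # interblock rest interval, sec
CONDITIONS = ["old_visual", "old_auditory", "old_av",
              "new_visual", "new_auditory", "new_av"]
N_VOLS = 105    # 3 min 30 sec per run at TR = 2 sec

def boxcar(onsets_s, n_vols=N_VOLS, dur_s=BLOCK_S, tr=TR):
    """Binary regressor sampled at the TR: 1 during a block, 0 at rest."""
    x = np.zeros(n_vols)
    for onset in onsets_s:
        x[int(round(onset / tr)):int(round((onset + dur_s) / tr))] = 1.0
    return x

# Hypothetical ordering: one block per condition, separated by rest gaps.
onsets = {c: [i * (BLOCK_S + GAP_S)] for i, c in enumerate(CONDITIONS)}
X_boxcar = np.column_stack([boxcar(onsets[c]) for c in CONDITIONS])
```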

Subsequent to the training and functional scans, behavioral tests of recognition were given to the participants. Participants performed a behavioral associative recognition test with two conditions: old audiovisual pairs and new audiovisual re-pairings. Re-pairings were not repeated during behavioral testing, and they were different from those used in the functional imaging runs. Participants also performed behavioral visual and auditory unisensory item recognition tasks. Participants used the index and middle fingers of their left hand to indicate whether the stimuli were old or new. The assignment of fingers to old and new responses was counterbalanced across participants.

Functional Imaging Parameters

Imaging was performed using a 3-T Siemens Magnetom Trio whole-body MRI system and a 32-channel head coil, located at the Indiana University Psychological and Brain Sciences Department. All stimuli were back-projected via a Mitsubishi XL30 projector onto a screen that was viewed through a mirror from the bore of the scanner. Stimuli were presented using SuperLab software running on an Apple MacBook laptop.

The field of view was 22 × 22 × 9.9 cm, with an in-plane resolution of 64 × 64 pixels and 33 slices per volume that were 3.4-mm thick. These parameters allowed us to collect data from the entire brain. The resulting voxel size was 1.7 × 1.7 × 3.4 mm. Images were acquired using an echo-planar technique (TE = 28 msec, TR = 2000 msec, flip angle = 90°) for BOLD-based imaging. High-resolution T1-weighted anatomical volumes were acquired using a 3-D turbo-FLASH acquisition.

fMRI Data Analysis

BrainVoyager QX™ 2.2.0 (Brain Innovation, Maastricht, Netherlands) was used to analyze the fMRI data. fMRI data preprocessing included slice scan time correction, 3-D motion correction, Gaussian spatial smoothing (6 mm), and linear trend removal. Individual functional volumes were coregistered to anatomical volumes with an intensity-matching, rigid body transformation algorithm. Individual anatomical volumes were normalized to the stereotactic space of Talairach and Tournoux (1988) using an eight-parameter affine transformation, with parameters selected by visual inspection of anatomical landmarks. Applying the same affine transformation to the coregistered functional volumes placed the functional data in a common brain space, allowing comparisons across participants. The normalized functional volumes were resampled to 3-mm isotropic voxels using trilinear interpolation; it was this voxel size to which the cluster size threshold was applied. Brain maps in figures are shown resampled to 1-mm isotropic voxels.
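
As an illustration of the final resampling step, the sketch below applies an affine transformation to a volume with trilinear (order-1) interpolation using SciPy. This is not BrainVoyager's implementation, and the matrix, shapes, and data are placeholders:

```python
import numpy as np
from scipy.ndimage import affine_transform

def resample_volume(vol, affine, out_shape):
    """Resample a 3-D volume under an affine map with trilinear
    interpolation (order=1). `affine` (4 x 4) maps output voxel
    coordinates to input voxel coordinates."""
    return affine_transform(vol, affine[:3, :3], offset=affine[:3, 3],
                            output_shape=out_shape, order=1)

# Toy example: 2x in-plane downsampling of a random volume.
vol = np.random.rand(64, 64, 33)
aff = np.eye(4)
aff[0, 0] = aff[1, 1] = 2.0  # sample every second voxel in x and y
out = resample_volume(vol, aff, out_shape=(32, 32, 33))
```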

The data were entered into general linear models using an assumed two-gamma hemodynamic response function. The two runs were concatenated per subject, beta values were calculated for all conditions per subject, and these beta values were then used, across subjects, in a random effects analysis at the group level. The baseline was the average of the rest intervals across the two runs. Whole-brain SPM analyses involving several contrasts of interest from the functional imaging runs were performed and are described in detail in Results. We used the BrainVoyager Cluster-Level Statistical Threshold Estimator plug-in to control for multiple tests. The plug-in uses Monte Carlo simulation to estimate the cluster size threshold necessary to produce an effective alpha of <.05, given a specific voxel-wise p value. The statistical significance of clusters in a given contrast was first assessed using a random effects between-group ANCOVA model. Voxel-wise significance was set at p = .001. The Cluster-Level Statistical Threshold Estimator plug-in estimated a cluster size threshold of six 3-mm isotropic voxels.
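
A minimal sketch of the modeling step under these assumptions: build a two-gamma HRF (the shape parameters below follow the common canonical convention, not necessarily BrainVoyager's), convolve it with the condition boxcars from above, and estimate per-subject betas by ordinary least squares:

```python
import numpy as np
from scipy.stats import gamma

def two_gamma_hrf(tr=2.0, duration=32.0, undershoot_ratio=6.0):
    """Canonical two-gamma HRF sampled at the TR (peak near 5 sec,
    undershoot near 15 sec), normalized to unit sum."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / undershoot_ratio
    return h / h.sum()

def fit_glm(Y, X_boxcar, tr=2.0):
    """OLS fit of voxel time courses Y (time x voxels) on an
    HRF-convolved design (time x conditions); returns betas
    (conditions x voxels), dropping the intercept row."""
    h = two_gamma_hrf(tr)
    X = np.column_stack([np.convolve(X_boxcar[:, j], h)[:len(X_boxcar)]
                         for j in range(X_boxcar.shape[1])])
    X = np.column_stack([X, np.ones(len(X))])  # baseline/intercept
    betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return betas[:-1]
```

Per-subject betas from a fit of this kind are what enter the group-level random effects comparisons described above.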

Functional connectivity was assessed using the RFX Granger Causality Mapping v2.0 plug-in in BrainVoyager. The single seed region used was the ROI in the left precentral gyrus shown in the map in Figure 5. Instantaneous correlations were calculated for BOLD activation produced during the AV conditions. Statistical significance of the clusters was assessed using a random effects between-group ANCOVA model combined with the Cluster-Level Statistical Threshold Estimator plug-in (p = .005, cluster size = 6).
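
The plug-in's internals are not documented here, but an "instantaneous correlation" is conceptually a zero-lag Pearson correlation between the seed time course and every voxel. A sketch under that assumption (variable shapes are illustrative):

```python
import numpy as np

def seed_correlation_map(seed_ts, brain_ts):
    """Zero-lag Pearson r between a seed ROI time course (time,) and
    every voxel time course in brain_ts (time x voxels)."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    B = (brain_ts - brain_ts.mean(axis=0)) / brain_ts.std(axis=0)
    return (s @ B) / len(s)  # one r value per voxel

# Per-voxel r maps would typically be Fisher z-transformed
# (np.arctanh) before a between-group test.
```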

BOLD activation from two ROIs, the left STS (see Figure 7) and the left precentral gyrus (see Figure 5), was extracted for each individual subject and used as a dependent measure in a correlation analysis with both RT and accuracy. Correlations were calculated across individual subjects separately in the active and passive groups (i.e., n = 10 for each). In addition, the same correlations were calculated after collapsing across groups (i.e., n = 20).
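
This amounts to a Pearson correlation computed across subjects, per group and collapsed. A minimal sketch (array names such as sts_betas and rt are hypothetical):

```python
import numpy as np
from scipy.stats import pearsonr

def roi_behavior_corr(roi_betas, behavior):
    """Correlate per-subject ROI activation with a behavioral
    measure (RT or accuracy); returns (r, p)."""
    return pearsonr(np.asarray(roi_betas), np.asarray(behavior))

# e.g., r_active, p_active = roi_behavior_corr(sts_betas[:10], rt[:10])
#       r_all, p_all = roi_behavior_corr(sts_betas, rt)
```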

RESULTS

Behavioral Results

Behavioral performance on the memory test differed significantly between groups (Figure 3). Accuracy in the associative recognition task was significantly greater for the active group than for the passive group [t(18) = 2.317, p < .05]. In addition, RT was significantly lower in the active group relative to the passive group [t(18) = 3.233, p < .01]. Therefore, behavioral measures demonstrated that both speed and accuracy were enhanced in the associative recognition test for the active group compared with the passive group.

Figure 3. 

Behavioral audiovisual associative recognition results. Accuracy and RT results for audiovisual associative recognition. Active learning showed increased accuracy and decreased RTs compared with passive learning. In both graphs, error bars represent SEM. *Statistically significant difference at p < .05 for both graphs.

Behavioral accuracy during unisensory recognition of either visual or auditory stimuli was not significantly different between groups. However, RT was significantly lower for the active group than for the passive group for both types of unisensory item recognition: visual [t(18) = 2.660, p < .05] and auditory [t(18) = 2.786, p < .05]. Therefore, behavioral measures demonstrated that only RT was enhanced in the unisensory visual and auditory recognition tests for the active group compared with the passive group (Figure 4). This finding replicates previous work showing that only RT is affected by active learning (James et al., 2001; Harman et al., 1999).

Figure 4. 

Behavioral unisensory item recognition results. Accuracy and RT results for both visual item recognition (left) and auditory item recognition (right). RT was significantly faster for the active group during both visual and auditory item recognition. In both graphs, error bars represent SEM. *Statistically significant difference at p < .05 for all graphs.

Whole-brain BOLD Contrasts

Contrasts of old audiovisual pairs > new audiovisual re-pairings (Figure 5 and Table 1) were performed to reveal the effects of active learning on the subsequent perception of audiovisual associations. First, this old–new contrast was performed on the two groups combined, on the active group alone, and on the passive group alone. With the groups combined (Figure 5A), the right hippocampus (Talairach coordinates (x, y, z): 18, −16, −8) was significantly more activated with old audiovisual associations than with new audiovisual re-pairings. Analyzing the active group alone (Figure 5B) showed that a left medial primary motor region (−12, −19, 34), two regions in the right cerebellum (21, −31, −20 and 24, −55, −17), and one region in the left cerebellum (−6, −46, −23) were more activated with old audiovisual associations than with new audiovisual re-pairings. An analysis of the passive group alone (Figure 5C) demonstrated that the left fusiform gyrus (−30, −58, −14), right fusiform gyrus (48, −55, −20 and 36, −76, −14), right middle occipital gyrus (27, −88, 13), and left lingual gyrus (−9, −94, −5) were more activated with old audiovisual associations than with new audiovisual re-pairings.

Figure 5. 

Contrast of old audiovisual pairs versus new audiovisual re-pairings. Activation related to the presentation of learned audiovisual pairs from the one-way contrast (old audiovisual pairs) > (new audiovisual re-pairings). Results using this contrast are shown from analyses on the combination of both groups (A), on the active group alone (B), and on the passive group alone (C). Results are also shown from an analysis using a two-way interaction contrast directly comparing the one-way contrast across active and passive groups (D).

Table 1. 

SPM Contrasts Cluster Data

| Contrast | Location | Cluster Size | Peak X | Peak Y | Peak Z | Peak t Value |
| --- | --- | --- | --- | --- | --- | --- |
| Both groups combined: Old AV > New AV | Right hippocampus | 259 | 18 | −16 | −8 | 4.58 |
| Active group only: Old AV > New AV | Right cerebellum | 359 | 21 | −31 | −20 | 7.41 |
| | Left SMA/cingulate gyrus | 372 | −12 | −19 | 34 | 10.03 |
| | Left cerebellum | 716 | −6 | −46 | −23 | 9.57 |
| | Right cerebellum | 797 | 24 | −55 | −17 | 9.02 |
| Passive group only: Old AV > New AV | Right fusiform gyrus | 297 | 48 | −55 | −20 | 12.88 |
| | Right fusiform gyrus | 456 | 36 | −76 | −14 | 6.21 |
| | Right middle occipital gyrus | 322 | 27 | −88 | 13 | 6.49 |
| | Left lingual gyrus | 2863 | −9 | −94 | −5 | 7.97 |
| | Left fusiform gyrus | 411 | −30 | −58 | −14 | 6.72 |
| Interaction: pair type (Old AV > New AV) × group (Active > Passive) | Right cerebellum | 304 | 9 | −49 | −20 | 4.88 |
| | Left SMA/cingulate gyrus | 522 | −12 | −13 | 43 | 6.86 |
| | Left insula | 558 | −42 | −4 | 13 | 5.25 |
| | Left anterior temporal | 360 | −42 | −1 | −8 | 4.93 |
| | Left precentral gyrus | 275 | −54 | −19 | 37 | 5.72 |
| Interaction: item type (Old Visual > New Visual) × group (Active > Passive) | Left fusiform gyrus | 537 | −42 | −49 | −5 | 4.85 |
| Active group only: Old AV > (Old visual + Old auditory) | Left superior temporal sulcus | 641 | −39 | −49 | 19 | 6.89 |
| | Left cingulate gyrus | 376 | −12 | −49 | 34 | 6.73 |
| Passive group only: (Old visual + Old auditory) > Old AV | Left supramarginal gyrus | 468 | −51 | −37 | 37 | 7.34 |
| Interaction: modality (Old AV > (Old visual + Old auditory)) × group (Active > Passive) | Right superior frontal gyrus | 424 | 18 | 17 | 58 | 4.81 |
| | Left cingulate gyrus | 494 | −6 | −40 | 28 | 4.98 |
| | Left superior frontal gyrus | 324 | −15 | 20 | 58 | 5.39 |
| | Left superior frontal gyrus | 294 | −15 | 47 | 37 | 4.65 |
| | Left superior temporal sulcus | 2447 | −39 | −49 | 19 | 7.43 |
| | Left inferior parietal lobule | 385 | −39 | −64 | 47 | 5.65 |
| | Left supramarginal gyrus | 281 | −60 | −46 | 31 | 4.75 |
| Functional connectivity analysis | Right mid/posterior intraparietal sulcus | 295 | 26 | −77 | 33 | 5.08 |
| | Left middle occipital gyrus | 400 | −28 | −77 | 18 | 4.89 |
| | Left middle temporal gyrus | 819 | −46 | −65 | 3 | 4.43 |

Relevant data concerning all significantly active clusters from all reported contrasts. Coordinates are Talairach peak coordinates.

To directly compare the active and passive groups, a two-way factorial contrast revealing the interaction between pair type (old audiovisual pairs versus new audiovisual re-pairings) and group (active versus passive) was performed (Figure 5D). The difference in activation between old and new pairings was greater in the active group than in the passive group in the left SMA/cingulate gyrus (−12, −13, 43), left lateral primary motor cortex (−54, −19, 37), right cerebellum (9, −49, −20), left insula (−42, −4, 13), and left anterior temporal lobe (−42, −1, −8). It is important to note that the passive group showed no evidence of motor activation with this contrast even at a very liberal threshold (p = .05, uncorrected). However, at a slightly more liberal threshold (p = .005, corrected with a cluster threshold of 6), the active group showed left-lateralized activation extending into both motor and somatosensory cortex (postcentral gyrus at coordinates −30, −37, 60; see Supplementary Figure 1).

The active and passive groups were also directly compared in a two-way factorial contrast comparing unisensory conditions. There were no significant differences between groups during the perception of auditory stimuli. However, a two-way factorial contrast of the interaction between visual stimulus type (old visual items versus new visual items) and group (active versus passive) showed significant differences between groups (Figure 6). The difference in activation between old and new visual items was greater in the active group than in the passive group in the left fusiform gyrus (−42, −49, −5).

Figure 6. 

Between-group contrast of old visual items versus new visual items. Activation greater for the active group from a two-way factorial contrast directly investigating the interaction between visual item type (old versus new) and group (active versus passive).

A second two-way factorial contrast investigated differences in multisensory enhancement between the active and passive groups for the old (learned) items. We defined multisensory enhancement as stronger activation to multisensory presentation than to the sum of the unisensory presentations. Metrics for assessing multisensory enhancement, such as superadditivity and the maximum rule, have been the target of recent scrutiny (Laurienti, Perrault, Stanford, Wallace, & Stein, 2009; Stevenson, Geoghegan, & James, 2007; Beauchamp, 2005). The current two-factor analysis, however, is an example of an additive factors design, which has been found to ameliorate many of the concerns directed at established metrics of multisensory enhancement (Stevenson, Kim, & James, 2009). For the sensory modality factor, the old audiovisual pairs were contrasted with the sum of the old unisensory visual and old unisensory auditory items. Importantly, the old audiovisual pairs were multisensory combinations of the same stimuli presented in the unisensory visual and auditory conditions. For the active/passive factor, the active and passive groups were equally weighted. For this contrast, the passive group alone (Figure 7A) showed significant negative activation in the left supramarginal gyrus (−51, −37, 37), and the active group alone (Figure 7B) showed significant activation of the left STS (−39, −49, 19) as well as the left cingulate gyrus (−12, −49, 34). Results from this two-way factorial contrast, in which the two groups were directly contrasted (Figure 7C), demonstrated greater enhancement in the active than the passive group (i.e., a two-way interaction) in the left STS (−39, −49, 19), left superior frontal gyrus (−15, 20, 58 and −15, 47, 37), right superior frontal gyrus (18, 17, 58), left cingulate gyrus (−6, −40, 28), left inferior parietal lobule (−39, −64, 47), and left supramarginal gyrus (−60, −46, 31). It is important to note that there was no motor activation when using the sum of the unisensory conditions in the two-way factorial contrast, but when using the average of the unisensory conditions, there was clear left-lateralized motor activation where it would be expected. It is likely that motor activation is not found with the sum because it occurs for both the multisensory and unisensory conditions.
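
In terms of per-subject betas, the enhancement measure is the superadditive difference AV − (V + A), and the two-way interaction asks whether that gain differs between groups. A schematic sketch with simulated values (the data and the simple t test here are illustrative; the reported analysis used the ANCOVA model described in Methods):

```python
import numpy as np
from scipy.stats import ttest_ind

def multisensory_gain(beta_av, beta_v, beta_a):
    """Superadditive gain per subject: AV response minus the sum of
    the unisensory responses, from betas for the same old stimuli."""
    return beta_av - (beta_v + beta_a)

# Hypothetical per-subject betas (n = 10 per group), illustration only.
rng = np.random.default_rng(0)
gain_active = multisensory_gain(*rng.normal(size=(3, 10)))
gain_passive = multisensory_gain(*rng.normal(size=(3, 10)))

t, p = ttest_ind(gain_active, gain_passive)  # group difference in gain
```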

Figure 7. 

Contrast of old audiovisual pairs versus old unisensory visual and auditory items. Greater activation for the passive group alone (A), active group alone (B), and the active group compared with the passive group (C) during the presentation of learned audiovisual pairs using the contrast (old audiovisual pairs) > (old visual items + old auditory items). The labels below each image define the location of the regions of activation. In the case of images with multiple regions of activation, the arrow shows which regions are being labeled below the image.

Functional Connectivity Analyses

The results comparing the functional connectivity of a region in the left precentral gyrus (primary motor) revealed multiple regions showing a stronger instantaneous correlation for the active group than the passive group (see Figure 8). These included two regions, one in the left middle occipital gyrus (−28, −77, 18) and one in the left middle temporal gyrus (−46, −65, 3), that correspond to locations within the functionally defined object-selective lateral occipital complex (James, Humphrey, Gati, Menon, & Goodale, 2002; Grill-Spector, Kushnir, Hendler, & Malach, 2000), as well as one region in the right middle-to-posterior intraparietal sulcus (26, −77, 33).

Figure 8. 

Between-group differences in functional connectivity between motor and visual brain regions. The seed region, shown in green, is in the left precentral gyrus. This motor region ROI was derived from the contrast (old audiovisual pairs) > (new audiovisual re-pairings; see Figure 5). Orange clusters indicate regions where the active group showed stronger functional connectivity with this seed region than the passive group (p < .05 corrected).

Correlation of BOLD Activation and Behavioral Performance

None of the correlations between BOLD activation and behavioral performance measures were significant. Although the lack of significance could be attributed solely to the small sample size, the observed correlations were small enough (r < 0.25) that it is doubtful they would have reached significance with a larger sample size.

DISCUSSION

The current study demonstrated that active interaction with objects differentially impacted the subsequent perception and recognition of audiovisual associations compared with passive observation of the same objects. There were four main findings that supported our hypotheses. First, RTs for unisensory and audiovisual associative recognition, and accuracy for audiovisual associative recognition, were better after active exposure to objects than after passive observation. Second, there was greater activation of motor-related regions for the active learning group compared with the passive group during the perception of previously learned audiovisual associations relative to re-pairings of the same stimuli. Additionally, these motor regions showed stronger functional connectivity with visual processing regions for the active group than for the passive group. Third, active multisensory exposure was associated with greater activation of the fusiform gyrus during subsequent perception of unisensory visual items. Fourth, there was stronger multisensory enhancement in brain regions implicated in audiovisual integration (e.g., STS) after active exploration of objects compared with passive observation.

The finding that behavioral performance was significantly enhanced with active relative to passive learning of audiovisual associations extends previous work focusing on the effects of active learning on unisensory information. The current study also extends previous work by showing that both auditory and visual unisensory recognition were enhanced after active exposure to audiovisual associations. Furthermore, previous work has demonstrated that actively exploring novel unisensory visual objects leads to faster RTs during later visual item recognition (James, Humphrey, Vilis, et al., 2002; Harman et al., 1999). We replicated this finding by showing that active exposure speeds subsequent unisensory recognition, and we went further to show that active multisensory exposure enhanced both RT and accuracy during the recognition of audiovisual associations. Importantly, this may suggest that active learning has a greater behavioral impact on later associative recognition than on unisensory recognition. Facilitated performance after active learning may represent an increase in accessibility for actively learned associations. Differences in brain regions recruited by the active group during audiovisual perception, including motor-related regions, may be related to these enhancements in behavior.

Previous work has demonstrated that the reactivation of motor regions occurs during perception and recognition of unisensory information after active learning experiences. Specific motor-related regions reactivated in such studies include primary motor (James & Atwood, 2009; James & Maouene, 2009; Masumoto et al., 2006; Grezes & Decety, 2002; Senkfor et al., 2002; Nyberg et al., 2001; Nilsson et al., 2000), premotor regions (De Lucia et al., 2009; James & Maouene, 2009; Etzel et al., 2008; Weisberg et al., 2007; Longcamp et al., 2005; Chao & Martin, 2000), SMA (Grezes & Decety, 2002), insula (Mutschler et al., 2007), and cerebellum (Imamizu, Kuroda, Miyauchi, Yoshioka, & Kawato, 2003; Nyberg et al., 2001). The current results extend this work by demonstrating similar effects but using multisensory associative information. Crucially, our findings suggest that reactivation of motor systems, in the context of multisensory associative learning, only occurs when specific actively learned associations are subsequently perceived. The lateralization of this motor reactivation to the left hemisphere was presumably due to the influence of the active exposure episode, in which the participants learned the associations by performing the task with their right hand. One common explanation for motor activation during perception of visual objects is that some objects afford actions in an automatic fashion, thus leading to motor activation (Grezes & Decety, 2002; Gibson, 1977). The current results, however, suggest that affordances alone are not enough to elicit motor activation; motor interaction during initial exposure was required for this effect to occur. This may further suggest that actions are not associated with objects through automatic affordances but instead are associated through experience.

There was also a significant difference between the active and passive groups in the functional connectivity between motor and visual processing regions during the processing of audiovisual stimuli. Therefore, the active group not only had activation of motor regions during later audiovisual processing but also demonstrated a greater coherence in activity between motor and visual regions. This raises the possibility that motor regions are recruited after active learning due to strengthening of connections in a circuit linking sensory and motor processing regions of the brain. Previous work has not explored this possibility. Thus, the current results show for the first time that active motor learning modulated both the activation of motor regions as well as the connectivity between motor and visual regions. These particular visual regions included regions in both the dorsal and ventral visual processing streams (Goodale & Milner, 1992). The occipito-temporal region in the ventral stream, corresponding spatially to the functionally defined lateral occipital complex, has been implicated in visual object processing, whereas intraparietal regions, in the dorsal stream, have been shown to participate in visuo-motor control (Culham & Valyear, 2006), object recognition (James, Humphrey, Gati, et al., 2002; Grill-Spector et al., 2000), and recognition of action (James, VanDerKlok, Stevenson, & James, 2011; Culham & Valyear, 2006). Active learning, therefore, enhanced connections between motor regions and both dorsal and ventral visual processing streams.

The current study also suggests that participants encoded haptic information during active learning. Using a slightly more liberal threshold (p < .005, corrected), left-lateralized activation was seen across both motor and somatosensory cortex that was greater for the active group than for the passive group when perceiving old associations (see Supplementary Figure 1). This suggests that both motor and haptic information modulate subsequent perception and recognition at neural and behavioral levels. Future studies controlling for haptic and motor information could reveal the unique impact of motor versus haptic experience on perception and memory.

Because the results of the current study involve the activation of motor systems in the absence of motor movements, the possible role of the "human mirror system" (HMS) is important to consider. Evidence suggests that mirror neurons fire during both the performance and the observation of actions in the nonhuman primate (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). The generalization of this much-cited finding to the human brain is, at present, quite controversial (see, e.g., Dinstein, Thomas, Behrmann, & Heeger, 2008). However, it should be stressed that in the current study participants were not viewing actions in the testing session, as would be the case in a study of mirror neurons or the HMS. Instead, participants were presented with static visual and/or auditory stimuli in isolation that had been associated with actions. This does not rule out the possibility that the HMS was involved during passive learning, but that hypothesis was not explicitly tested in the current work.

Previous work suggests that seeing actions from an egocentric point of view leads to activation of the contralateral anterior parietal cortex, whereas seeing actions from an allocentric viewpoint leads to activation of the ipsilateral anterior parietal cortex (Shmuelof & Zohary, 2008). Because we did not collect fMRI data during the exposure session, we cannot directly assess this effect at encoding. Nonetheless, during subsequent scanning, one might expect such differences as a consequence of egocentrically versus allocentrically viewed objects during exposure—that is, a reactivation may have occurred that was lateralized differently depending on the exposure condition. However, there was no difference in the laterality of activation between the two exposure groups.

Similarly, another area of research that may relate to the current study concerns motor learning by observation. Previous work has shown that motor learning by observation can lead to the recruitment of motor systems even though participants make no overt motor actions (Malfait et al., 2010; Mattar & Gribble, 2005). However, the exposure conditions in the current study did not involve learning a new motor movement, as was the case in that work.

Several theories propose that an overlap exists between brain regions engaged during encoding and retrieval operations, and these theories would predict the reactivation of motor systems after active learning (e.g., Fuster, 2009; Barsalou, 1999; Damasio, 1989). The current study provides further support for the idea that events are stored as an embodied representation that includes access to the pattern of motor activity utilized during learning. However, the current study also extends these theories to show that the perception of specific associations may be required for encoding related regions to be reactivated.

Active multisensory exposure was also associated with greater activation of the fusiform gyrus during subsequent perception of unisensory visual items. Group differences in the fusiform were not significant during the subsequent perception of multisensory information. This suggests that active multisensory learning has differential impacts on subsequent unisensory compared with multisensory perception. Previous work has shown differences in visual regions during subsequent perception after active unisensory learning (James & Atwood, 2009; Weisberg et al., 2007). This, however, is the first study to show such effects after active learning of multisensory associations. One reason why there may have been a difference between groups for the visual, but not auditory, stimuli relates to the nature of these modalities. In the current study, the visual information has a strong connection with the motor actions and the haptic information of the objects because both are influenced by shape. The auditory stimuli, however, have no connection to the shape of the objects because the pairings were arbitrary. The visual affordances were therefore more impacted by active learning than the auditory affordances, and this was reflected in later differences in activation between groups for visual, but not auditory, stimuli.

In addition to motor reactivation, actively learned multisensory pairings also produced greater multisensory gain in several regions compared with passively learned pairings. Multisensory gain (or enhancement) in the current study refers to stronger activation to multisensory presentation than to the sum of the unisensory presentations. These regions included the left STS, bilateral superior frontal gyrus, left cingulate gyrus, left inferior parietal lobule, and the left supramarginal gyrus. Importantly, the analysis performed to investigate this effect used an additive factors design to avoid some of the concerns associated with commonly used metrics of multisensory enhancement (Stevenson et al., 2009). The STS is a known site of audiovisual integration (Amedi, von Kriegstein, Atteveldt, Beauchamp, & Naumer, 2005), and the STS is involved in the creation of audio and visual associations during encoding (Tanabe, Honda, & Sadato, 2005). The current results show that active learning modulates the regions involved in multisensory enhancement and integration; multisensory gain was greater with active learning. This effect could reflect an increase in the number of STS neurons that receive input from multiple sensory systems. Alternatively, existing STS multisensory neurons may simply increase their gain. Finally, it is possible that a "tuning" of new STS multisensory neurons occurs to a greater degree as a result of active compared with passive learning. Overall, the current findings suggest that learning through self-performed motor actions plays an important role in facilitating multisensory integration during subsequent perception.

Unlike activation in motor and multisensory areas, activation in the hippocampus was seen in both the active and the passive group during the presentation of learned audiovisual associations. These results are consistent with the idea that the hippocampus plays an important role in the reinstatement of encoding-related cortical activity (Rugg, Johnson, Park, & Uncapher, 2008). However, although the hippocampus was engaged by the presence of learned audiovisual associations in both groups, only in the active group was this hippocampal activation associated with the reinstatement of motor-related regions. This suggests that the hippocampus is involved in the reactivation of cortical regions but that motor regions are only reactivated if the learning condition was active. Furthermore, hippocampal activation was only seen if the correct pairing or conjunction of stimuli was presented. It is important to note that this hippocampal involvement may be time dependent, such that if this experiment were repeated at longer time delays its activation would diminish or not occur.

In conclusion, the current study demonstrates that, relative to passive observation, active experience with audiovisual associations impacts multisensory associative and unisensory item recognition at the behavioral level and multisensory associative and unisensory item perception at the neural level. Learning audiovisual associations through self-generated actions resulted in motor system reactivation to specific associations, enhanced functional connectivity between visual and motor regions, and increased multisensory gain in audiovisual regions. Importantly, this same self-generated action also improved the efficiency of audiovisual associative recognition. Therefore, the current study extends previous research to show that active motor learning modulates not only the processing of unisensory information but also multisensory information.

Acknowledgments

This research was partially supported by the Indiana METACyt Initiative of Indiana University, funded in part through a major grant from the Lilly Endowment, Inc. This research was also supported in part by the Faculty Research Support Program through the Indiana University Bloomington Office of the Vice President of Research. We thank Thea Atwood and Becky Ward for their assistance with data collection and Nathan McAninch and Corey Wright for their assistance in design and implementation.

Reprint requests should be sent to Andrew J. Butler, Department of Psychological and Brain Sciences, Indiana University, 1105 East 10th Street, Bloomington, IN 47401, or via e-mail: Butler7@indiana.edu.

REFERENCES

Amedi, A., von Kriegstein, K., Atteveldt, N. M., Beauchamp, M. S., & Naumer, M. J. (2005). Functional imaging of human crossmodal identification and object recognition. Experimental Brain Research, 166, 559–571.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609.

Beauchamp, M. S. (2005). Statistical criteria in fMRI studies of multisensory integration. Neuroinformatics, 3, 93–113.

Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12, 478–484.

Cohen, R. L. (1989). Memory for action events: The power of enactment. Educational Psychology Review, 1, 57–80.

Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16, 205–212.

Damasio, A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 37–43.

De Lucia, M., Camen, C., Clarke, S., & Murray, M. M. (2009). The role of actions in auditory object discrimination. Neuroimage, 48, 475–485.

Dinstein, I., Thomas, C., Behrmann, M., & Heeger, D. J. (2008). A mirror up to nature. Current Biology, 18, 13–18.

Engelkamp, J., & Zimmer, H. D. (1994). The human memory: A multimodal approach. Seattle: Hogrefe and Huber.

Engelkamp, J., & Zimmer, H. D. (1997). Sensory factors in memory for subject-performed tasks. Acta Psychologica, 96, 43–60.

Engelkamp, J., Zimmer, H. D., & Biegelmann, U. E. (1993). Bizarreness effects in verbal tasks and subject-performed tasks. European Journal of Cognitive Psychology, 5, 393–415.

Etzel, J. A., Gazzola, V., & Keysers, C. (2008). Testing simulation theory with cross-modal multivariate classification of fMRI data. Public Library of Science One, 3, 1–6.

Fredembach, B., de Boisferon, A., & Gentaz, E. (2009). Learning of arbitrary association between visual and auditory stimuli in adults: The "bond effect" of haptic exploration. Public Library of Science One, 4.

Fuster, J. M. (2009). Cortex and memory: Emergence of a new paradigm. Journal of Cognitive Neuroscience, 21, 2047–2072.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing. New York: John Wiley and Sons.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neuroscience, 15, 20–25.

Grezes, J., & Decety, J. (2002). Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia, 40, 212–222.

Grill-Spector, K., Kushnir, T., Hendler, T., & Malach, R. (2000). The dynamics of object-selective activation correlate with recognition performance in humans. Nature Neuroscience, 3, 837–843.

Harman, K. L., Humphrey, G. K., & Goodale, M. A. (1999). Active manual control of object views facilitates visual recognition. Current Biology, 9, 1315–1318.

Imamizu, H., Kuroda, T., Miyauchi, S., Yoshioka, T., & Kawato, M. (2003). Modular organization of internal models of tools in the human cerebellum. Proceedings of the National Academy of Sciences, U.S.A., 100, 5461–5466.

James, K. H., & Atwood, T. P. (2009). The role of sensorimotor learning in the perception of letter-like forms: Tracking the causes of neural specialization for letters. Cognitive Neuropsychology, 26, 91–110.

James, K. H., & Gauthier, I. (2006). Letter processing automatically recruits a sensory-motor brain network. Neuropsychologia, 44, 2937–2949.

James, K. H., Humphrey, G. K., & Goodale, M. A. (2001). Manipulating and recognizing virtual objects: Where the action is. Canadian Journal of Experimental Psychology, 55, 111–120.

James, K. H., Humphrey, G. K., Vilis, T., Baddour, R., Corrie, B., & Goodale, M. A. (2002). Learning three-dimensional object structure: A virtual reality study. Behavioral Research Methods, Instruments and Computers, 34, 383–390.

James, K. H., & Maouene, J. (2009). Auditory verb perception recruits motor systems in the developing brain: An fMRI investigation. Developmental Science, 12, F26–F34.

James, T. W., & Gauthier, I. (2003). Auditory and action semantic features activate sensory-specific perceptual brain regions. Current Biology, 13, 1792–1796.

James, T. W., Humphrey, G. K., Gati, J. S., Menon, R. S., & Goodale, M. A. (2002). Differential effects of viewing on object-driven activation in dorsal and ventral streams. Neuron, 35, 793–801.

James, T. W., VanDerKlok, R. M., Stevenson, R. A., & James, K. H. (2011). Multisensory perception of action in temporal and parietal cortices.
Neuropsychologia
,
49
,
108
114
.
Johnson
,
J. D.
,
McDuff
,
S. G. R.
,
Rugg
,
M. D.
, &
Norman
,
K. A.
(
2009
).
Recollection, familiarity, and cortical reinstatement: A multivoxel pattern analysis.
Neuron
,
63
,
697
708
.
Laurienti
,
P. J.
,
Perrault
,
T. J.
,
Stanford
,
T. R.
,
Wallace
,
M. T.
, &
Stein
,
B. E.
(
2009
).
On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies.
Experimental Brain Research
,
166
,
289
297
.
Longcamp
,
M.
,
Anton
,
J.-L.
,
Roth
,
M.
, &
Velay
,
J.-L.
(
2005
).
Premotor activations in response to visually presented single letters depend on the hand used to write: A study on left-handers.
Neuropsychologia
,
43
,
1801
1809
.
Malfait
,
N.
,
Valyear
,
K. F.
,
Culham
,
J. C.
,
Anton
,
J.-L.
,
Brown
,
L. E.
, &
Gribble
,
P. L.
(
2010
).
fMRI activation during observation of others' reach errors.
Journal of Cognitive Neuroscience
,
22
,
1493
1503
.
Masumoto
,
K.
,
Yamaguchi
,
M.
,
Sutani
,
K.
,
Tsunetoa
,
S.
,
Fujita
,
A.
, &
Tonoike
,
M.
(
2006
).
Reactivation of physical motor information in the memory of action events.
Brain Research
,
1101
,
102
109
.
Mattar
,
A. A. G.
, &
Gribble
,
P. L.
(
2005
).
Motor learning by observing.
Neuron
,
46
,
153
160
.
Mutschler
,
I.
,
Schulze-Bonhage
,
A.
,
Glauche
,
V.
,
Demandt
,
E.
,
Speck
,
O.
, &
Ball
,
T.
(
2007
).
A rapid sound-action association effect in human insular cortex.
Public Library of Science One
,
2
,
1
9
.
Nilsson
,
L.-G.
,
Nyberg
,
L.
,
Klingberg
,
T.
,
Aberg
,
C.
,
Persson
,
J.
, &
Roland
,
P. E.
(
2000
).
Activity in motor areas while remembering action events.
NeuroReport
,
11
,
2199
2201
.
Nyberg
,
L.
,
Petersson
,
K. M.
,
Nilsson
,
L.-G.
,
Sandblom
,
J.
,
Aberg
,
C.
, &
Ingvar
,
M.
(
2001
).
Reactivation of motor brain areas during explicit memory for actions.
Neuroimage
,
14
,
521
528
.
Pulvermuller
,
F.
,
Harle
,
M.
, &
Hummel
,
F.
(
2001
).
Walking or talking: Behavioral and neurophysiological correlates of action verb processing.
Brain and Language
,
78
,
143
168
.
Rizzolatti
,
G.
,
Fadiga
,
L.
,
Gallese
,
V.
, &
Fogassi
,
L.
(
1996
).
Premotor cortex and the recognition of motor actions.
Cognitive Brain Research
,
3
,
131
141
.
Rugg
,
M. D.
,
Johnson
,
J. D.
,
Park
,
H.
, &
Uncapher
,
M. R.
(
2008
).
Encoding-retrieval overlap in human episodic memory: A functional neuroimaging perspective.
Progress in Brain Research
,
169
,
339
352
.
Senkfor
,
A. J.
,
Petten
,
C. V.
, &
Kutas
,
M.
(
2002
).
Episodic action memory for real objects: An ERP investigation with perform, watch, and imagine action encoding tasks versus a non-action encoding task.
Journal of Cognitive Neuroscience
,
14
,
402
419
.
Shmuelof
,
L.
, &
Zohary
,
E.
(
2008
).
Mirror-image representation of action in the anterior parietal cortex.
Nature Neuroscience
,
11
,
1267
1269
.
Slotnick
,
S. D.
(
2004
).
Visual memory and visual perception recruit common neural substrates.
Behavioral and Cognitive Neuroscience Reviews
,
3
,
207
221
.
Stevenson
,
R. A.
,
Geoghegan
,
M. L.
, &
James
,
T. W.
(
2007
).
Superadditive BOLD response in superior temporal sulcus with threshold non-speech objects.
Experimental Brain Research
,
179
,
85
95
.
Stevenson
,
R. A.
,
Kim
,
S.
, &
James
,
T. W.
(
2009
).
An additive-factors design to disambiguate neuronal and areal convergence: Measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.
Experimental Brain Research
,
198
,
183
194
.
Stock
,
O.
,
Roder
,
B.
,
Burke
,
M.
,
Bien
,
S.
, &
Rosler
,
F.
(
2009
).
Corticle activation patterns during long-term memory retrieval of visually or hapitcally encoded objects and locations.
Journal of Cognitive Neuroscience
,
21
,
58
82
.
Talairach
,
J.
, &
Tournoux
,
P.
(
1988
).
A co-planar stereotactic atlas of the human brain: 3-Dimensional proportional system: An approach to cerebral mapping (M. Rayport, Trans.)
.
New York
:
Thieme
.
Tanabe
,
H. C.
,
Honda
,
M.
, &
Sadato
,
N.
(
2005
).
Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.
The Journal of Neuroscience
,
25
,
6409
6418
.
von Essen
,
J. D.
, &
Nilsson
,
L.-G.
(
2003
).
Memory effects of motor activation in subject-performed tasks and sign language.
Psychonomic Bulletin & Review
,
10
,
445
449
.
Weisberg
,
J.
,
van Turennout
,
M.
, &
Martin
,
A.
(
2007
).
A neural system for learning about object function.
Cerebral Cortex
,
17
,
513
521
.
Wheeler
,
M. E.
,
Peterson
,
S. E.
, &
Buckner
,
R. L.
(
2000
).
Memory's echo: Vivid remembering reactivates sensory-specific cortex.
Proceedings of the National Academy of Sciences, U.S.A.
,
97
,
11125
11129
.