Abstract

The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n = 10) or haptically (haptic encoding group, n = 10) had to be retrieved from long-term memory. Participants learned associations between auditorily presented words and either meaningless objects or locations in a 3-D space. During the retrieval phase one day later, participants had to decide whether two auditorily presented words shared an association with a common object or location. Thus, perceptual stimulation during retrieval was always equivalent, whereas either visually or haptically encoded object or location associations had to be reactivated. Moreover, the number of associations fanning out from each word varied systematically, enabling a parametric increase of the number of reactivated representations. Recall of visual objects predominantly activated the left superior frontal gyrus and the intraparietal cortex, whereas visually learned locations activated the superior parietal cortex of both hemispheres. Retrieval of haptically encoded material activated the left medial frontal gyrus and the intraparietal cortex in the object condition, and the bilateral superior parietal cortex in the location condition. A direct test for modality-specific effects showed that visually encoded material activated more vision-related areas (BA 18/19) and haptically encoded material more motor and somatosensory-related areas. A conjunction analysis identified supramodal and material-unspecific activations within the medial and superior frontal gyrus and the superior parietal lobe including the intraparietal sulcus. These activation patterns strongly support the idea that code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that comprise sensory and motor processing systems.

INTRODUCTION

It is a well-established finding that information about objects and space is processed within anatomically distinct cortical networks. Lesion and brain imaging studies in humans as well as animals provide converging evidence that visually encoded objects are represented in ventral, occipito-temporal brain areas (the so-called what pathway; e.g., Ungerleider & Haxby, 1994; Mishkin, Ungerleider, & Macko, 1983), whereas visually encoded spatial (Mishkin et al., 1983) and/or action-related information (Astafiev, Stanley, Shulman, & Corbetta, 2004; Goodale & Milner, 1992) is represented within dorsal, occipito-parietal brain regions (the so-called where or how pathway, respectively). Although this functional–anatomical distinction seems to be undisputed for the visual modality, it is not yet clear to what extent these networks are modality specific or supramodal.

For the ventral pathway, brain imaging studies have revealed category-related activation patterns not only for visual but also for haptic object recognition (Pietrini et al., 2004; James et al., 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). In particular, it was postulated that an area called the lateral occipital complex (LOC) analyzes the shape of objects irrespective of the input modality. Similarly, Pietrini et al. (2004) suggested that the “visual” ventral stream contains more abstract or supramodal representations of objects, accessed by both vision and touch, rather than modality-specific representations.

Likewise, the dorsal pathway has been found to be involved not only in visual but also in haptic spatial information processing tasks (Reed, Klatzky, & Halgren, 2005). For example, mental rotation, when performed with haptically encoded objects, activates the superior parietal cortex (e.g., Prather & Sathian, 2002; Röder, Rösler, & Hennighausen, 1997), that is, the same area as activated when visual stimuli are mentally rotated (Carpenter, Just, Keller, Eddy, & Thulborn, 1999; Rösler, Heil, Bajric, Pauls, & Hennighausen, 1995).

Other neuroimaging studies (e.g., Reed et al., 2005; Stoeckel et al., 2003; Bodegard, Geyer, Grefkes, Zilles, & Roland, 2001; Binkofski et al., 1998, 1999; Roland, O'Sullivan, & Kawashima, 1998) have stressed the importance of parietal areas for haptic object recognition. For example, Roland et al. (1998) reported functional magnetic resonance imaging (fMRI) activations mainly in the intraparietal sulcus (IPS) and adjacent cortical regions, including the supramarginal gyrus and the secondary somatosensory cortex (SII). Furthermore, the insula, known to receive direct input from the SII (Binkofski et al., 1999), seems to be involved in memory processes related to haptic input. Similarly, Bodegard et al. (2001) observed an activation of the IPS while participants had to judge the length and curvature of more complex objects. They suggested that haptic object recognition relies on a pathway from the postcentral gyrus (SI) via the SII to the IPS. Interestingly, this network overlaps with the fronto-parietal system found to be active when haptic objects have to be actively manipulated (Binkofski et al., 1999). Therefore, the SII and the IPS, in conjunction with the ventral premotor cortex, seem to constitute the core region for haptic object identification (e.g., Reed et al., 2005; Bodegard et al., 2001; Roland et al., 1998). These findings are incompatible with the idea that parietal areas are exclusively associated with spatial information processing.

A functional heterogeneity of parietal areas with respect to object and spatial processing has been highlighted by animal and human studies in which visuomotor pointing, reaching, and grasping tasks were used. On the basis of these studies, it has been suggested that the “dorsal stream” should be subdivided into a dorsodorsal (dd) and a ventrodorsal (vd) stream (Galletti, Kutz, Gamberini, Breveglieri, & Fattori, 2003; Rizzolatti & Matelli, 2003). The dd-stream comprises the superior parietal lobe (Lps; in humans: Brodmann's areas [BA] 5 and 7) and is assumed to be functionally related to action organization in reaching and pointing tasks. In contrast, the vd-stream comprises the inferior parietal lobe (BAs 39, 40) and the ventral and anterior IPS, and is functionally related to grasping and haptic object recognition.

Taken together, the available evidence suggests that the cortical networks comprising the ventral and the dorsal processing stream are not restricted to visual information processing alone, but seem to have multimodal functions. In particular, visual and haptic representations seem to share some of the dorsal and ventral processing areas (Peltier et al., 2007). Moreover, the strict functional distinction between object and spatial or action information (the “what vs. where” or “what vs. how” distinction) and its mapping onto the ventral and dorsal processing areas, respectively, do not yet appear to be settled. At least for parietal areas, a functional dissociation of a dorsodorsal spatial action region and a ventrodorsal object processing region seems likely (Makin, Holmes, & Zohary, 2007; Swisher, Halko, Merabet, McMains, & Somers, 2007).

Objectives and Design of the Present Study

The goal of the present study was to gain further insight into the functional specificity of ventral and dorsal processing areas in visual and haptic tasks. In particular, we wanted to delineate areas that are functionally specialized for processing either object or spatial knowledge and areas that are relevant for both types of information. Moreover, we wanted to study whether these areas are specific or unspecific with respect to the visual and haptic modality.

In the work cited above, the focus of research was on different types of information and different modalities. A tacit assumption in most of these studies was that the mentioned distinctions are universal and apply to all cognitive tasks and representations alike: to perceptual traces evoked in visuomotor reaching or grasping tasks, to working memory traces evoked during imagery and n-back recognition tasks, and to permanent long-term memory representations that have to be reactivated during memory search. That is, these studies do not distinguish whether the representations are predominantly input-, output-, or memory-related, or whether they are transient or permanent. Most often, a particular task does not even permit a precise distinction between these types of representation. Perceptual and motor tasks usually involve both sensory bottom–up and cognitive, memory-guided top–down representations. The same holds for short- and long-term recognition tasks in which to-be-recognized stimuli are presented together with lures for an old/new decision: Both the perceptual entity presented for recognition and the stored entity to which it has to be compared share the same object or spatial features. We think that these aspects have to be carefully controlled when the question of modality specificity or unspecificity is at issue, because the more input-bound, transient representations and the more memory-bound, permanent representations could differ in this very respect, the former possibly being modality specific and the latter modality unspecific.

In the present study, we focus on long-term memory representations only; that is, we want to study where in the brain visual and haptic representations of object and spatial knowledge are reactivated after having been encoded at least 24 hr earlier. Reactivation is triggered by associated cues that are distinct from the retrieved representations. In so doing, we avoid confounding perceptual, modality-specific bottom–up processes with memory reactivation processes proper. Moreover, cross-modal associations, for instance, haptic information automatically triggering visual associations and vice versa, are to be excluded.

To meet these prerequisites, we employed a paradigm that had already been successfully used to identify information-specific activations during long-term memory retrieval with neurophysiological measures (for slow event-related electroencephalogram responses, see Khader, Heil, & Rösler, 2005; Heil, Rösler, & Hennighausen, 1997; Rösler, Heil, & Hennighausen, 1995; for event-related blood oxygenation level-dependent [BOLD] responses, see Khader et al., 2007; Khader, Burke, Bien, Ranganath, & Rösler, 2005). In this paradigm, completely new long-term memory representations of different stimulus types are first established in an elaborate learning procedure, such that the material is fully mastered. In a later test, these representations are reactivated by always using the same type of retrieval cue, which is not identical to the stimuli previously encountered during learning. This avoids confounding perceptual and memory reactivation processes, and because the material is newly acquired, pre-experimentally existing cross-modal associations can be excluded.

In the version used here, participants had to learn associations between auditory cues (spoken words) and either objects (meaningless forms) or spatial locations (in a real 3-D layout). In one group of participants, the objects and locations were encoded by studying the items visually; in another group, by exploring them haptically. Each word was associated with either one or two objects or locations, and each object or location could be associated with one or two words (see Figure 1). During the retrieval phase, neither the objects nor the locations were presented again. Rather, the participants always heard only two words and had to decide whether the words were associated via a common object or a common spatial location. Although the same type of triggering cue (a word pair) was always used, participants nevertheless had to reactivate distinct long-term memory traces: haptically or visually encoded objects or locations. Moreover, because the number of associations fanning out from each cue varied systematically, the number of reactivated representations was fully controlled and could be parametrically manipulated. Participants had to reactivate either two, three, or four links pointing from the words to either the domain of objects or the domain of spatial locations. Thus, the gradual increase of retrieval load within one domain and its relation to the BOLD signal could be studied.

Figure 1. 

(A) Spatial arrangement of the six locations in the cubicle. (B) Arrangement of the display when word–location associations had to be learned. (C) Examples of word–object associations and (D) illustration of the resulting levels of fan in the retrieval test. (E) Timing of a retrieval trial (WT = warning tone, Cue processing 1 and 2 = word pair, ITI = intertrial interval; axis shows time [sec] and recorded volumes [TR]).

Hypotheses

The available evidence cited above and a previous study in which we had investigated retrieval-related activation patterns of visually encoded man-made objects (drinking cups) and locations in 2-D space (Khader et al., 2007) provided three sets of hypotheses.

Material-specific Effects for Differently Encoded Items

For the different types of memory representations (objects, locations), we expected clearly distinct activation patterns during retrieval. All of the activation differences within one modality should become manifest in a main effect that contrasts object versus location retrieval as such, that is, without distinguishing between different levels of retrieval load. However, we also expected the material-specific effects to become manifest in a graded increase of the BOLD response with increasing fan (contrast of Fan levels 3 vs. 2 vs. 1 vs. 0 [control]); that is, the BOLD response should be modulated in different cortical areas for different types of representations, and the locations where such a modulation is observed should overlap with the overall activation maxima for these types of material (see Khader et al., 2007; Khader, Burke, et al., 2005; Khader, Heil, & Rösler, 2005). This means that all hypotheses on material-specific activations have to be tested not only as main effects contrasting the two materials but also as material-specific modulation effects contrasting the different fan levels. We presume that modulations of material-specific BOLD responses provide much stronger support than main effects alone for a close functional relationship between the retrieval of material-specific memory representations and the corresponding brain activation patterns. With increasing fan, more representations of the same type have to be traced and checked for common links (i.e., the search within one material has to be more extensive). Therefore, a fan-related increase of the BOLD response can be seen as a direct manifestation of reactivating an increasing number of material-specific representations.

Modality-specific Effects

Theories on long-term memory storage and retrieval (e.g., McClelland, McNaughton, & O'Reilly, 1995; Damasio, 1989) emphasize that representations are permanently consolidated in those cortical areas in which their constituting features are processed during perception or action. Therefore, we expect modality-specific activation effects during retrieval. In particular, we suppose that visually encoded material activates more posterior, “vision-related” areas during retrieval than does haptically encoded material. In contrast, haptically encoded material should activate more somatosensory and motor areas than visually encoded material. These modality-specific effects should hold, by and large, for both objects and locations.

Supramodal Effects

On the other hand, we also expect some overlap of the activation patterns that accompany retrieval of one type of material irrespective of the modality used for encoding. The basis of this expectation is the idea that long-term memory representations most likely form a more abstract code and therefore lose some details of the input-bound, modality-specific activation traces. Explicitly formulated, we expect some overlap of the activation patterns during retrieval of visually and haptically encoded objects and locations, respectively. Insofar as these activations are topographically distinct for objects and locations, they have to be seen as material specific and cannot be due to general processes that control retrieval from long-term memory as such. However, such modality- and material-unspecific activity, reflecting general effects of retrieval control, has to be expected as well. Thus, a conjunction analysis of haptically and visually encoded material should also reveal some supramodal and material-unspecific activations. According to previous studies on working and long-term memory control processes, these activations will most likely be located in the lateral prefrontal cortex (Ranganath, Cohen, Dam, & D'Esposito, 2004; Ranganath, Johnson, & D'Esposito, 2003; D'Esposito, Postle, & Rypma, 2000; Owen, 2000) and in the parietal cortex (Wheeler & Buckner, 2004; Buckner & Wheeler, 2001).

METHODS

Participants

Twenty-four healthy right-handed volunteers, students or staff members of the University of Marburg, participated. Twelve participants explored the material visually (visual group); the other 12 were blindfolded throughout the experiment and explored the stimuli haptically (haptic group). Two participants from each group had to be excluded because of excessive movement artifacts in the fMRI data or technical problems during data acquisition. Thus, the final sample comprised 10 participants in each group (visual: 6 women, mean age = 24.3 years, range = 21–29 years; haptic: 5 women, mean age = 23.8 years, range = 21–33 years). All participants were native speakers of German, reported normal or corrected-to-normal vision, and had no known neurological disorder. They were all naïve with respect to the purpose of the study. Written informed consent was obtained from all participants, and they all received monetary compensation.

Materials

Thirty-six abstract words such as “hope,” “love,” or “doubt” were used as cues. The mean word frequency was 50.8 (according to the German CELEX lexical database). The words were spoken by a professional female speaker (mean word duration = 850 msec, range = 700–1000 msec) and presented binaurally through headphones. In the study phase, they had a mean intensity of 65 dB(A).

Targets consisted of spheres arranged in a cubicle (location condition) and abstract three-dimensional forms (object condition).

The spheres were mounted in an 80 × 80 × 80 cm cubicle positioned on a rack (Figure 1A and B). A height-adjustable chair was used to position the participants so that the head (visual condition) or the shoulder (haptic condition) was located approximately at the center of the front plane of the display. To ensure that haptically learning participants had a stable point of reference during the acquisition phase, a horizontal bar was mounted at a height of 30 cm on the side of the cubicle facing the participants. While exploring the display with their right hand, participants rested their right arm on this bar. In the haptic condition, each sphere was mounted on the tip of a thin iron stick by means of which it could be moved to different locations within the cubicle. In the visual condition, each sphere was suspended from a thin, transparent thread attached to a wooden construction on top of the cubicle. The cubicle was covered with black cloth that contained six openings through which the spheres could be moved. The spheres were positioned at six predefined landmarks within a 3 × 3 × 3 matrix (Figure 1A).

The objects were three-dimensional green wooden forms with a diameter of about 10 cm and a height of 2 cm. They were mounted on a black wooden square (size = 19 × 19 × 1.5 cm; see Figure 1C). All forms were compositions of basic geometric shapes (square, circle, triangle). They were selected in order to render verbal encoding unlikely and visual imagery difficult. The cue–target assignments, as well as the sequence of item presentation during the learning phase, were systematically varied across participants.

Procedure

The experiment comprised three parts: learning, a direct association test, and retrieval. Each participant spent about 8.5 hr in the lab, 5 hr on the first day and 3.5 hr on the second. In the learning phase, participants acquired associations between words and either visually or haptically presented objects and locations (4 hr). Successful learning was tested in a subsequent retrieval phase on the same day (1 hr). On the second day, participants had the opportunity to relearn the associations acquired on Day 1 (0.5 hr). Moreover, a retrieval test was run without fMRI recording (1.5 hr). If the overall error rate did not exceed 15%, the retrieval task was then performed in the scanner (1.5 hr).

Learning Procedure

Participants were seated in front of the cubicle (location condition) or in front of a desk (object condition) with their hands resting on their legs. Participants of the haptic group were blindfolded. The cues (abstract words) were paired with either one or two stimuli of each stimulus domain (called “mediators” in the following). Participants learned to associate 18 words with one or two of a total of six locations and another 18 words with one or two of a total of six objects. Twelve of these 18 words were associated with one target stimulus only, whereas the remaining six words were associated with two target stimuli (Figure 1C). During retrieval, two cues were always presented. When two cues with one associated target each were presented, two associations had to be activated (Fan(1,1)); when one cue had one and the other had two associations, three associations had to be retrieved (Fan(1,2)); and when both cues had two associated targets each, four associations had to be reactivated during the retrieval phase (Fan(2,2)). In the following, we refer to these conditions as Fans 1, 2, and 3 (see Figure 1C).
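
For illustration, the mapping from the cues' association counts to the fan levels can be written as a minimal sketch (Python; the function name and labels are ours, not part of the study materials):

```python
# Illustration: retrieval load ("fan level") of a word-pair trial as a
# function of the number of mediators associated with each cue word.
def fan_level(assoc_cue1: int, assoc_cue2: int) -> int:
    """Each cue has 1 or 2 associated mediators; the total number of
    links to reactivate (2, 3, or 4) is labeled Fan 1, 2, or 3."""
    total_links = assoc_cue1 + assoc_cue2       # 2, 3, or 4 associations
    return {2: 1, 3: 2, 4: 3}[total_links]

assert fan_level(1, 1) == 1   # Fan(1,1): two associations
assert fan_level(1, 2) == 2   # Fan(1,2): three associations
assert fan_level(2, 2) == 3   # Fan(2,2): four associations
```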

Familiarization phase

Haptic group

First, participants explored the dimensions of the cubicle with their right hand. They were guided by an experimenter who demonstrated each of the six possible locations for a sphere. The participants were informed that the sticks holding the spheres were not essential for later retrieval. Objects were arranged in a 2 × 3 grid on a desk. The participant was allowed to explore each object by palpating it for 1 min in order to be able to recognize each form later. Each participant explored the forms in a newly defined random sequence.

Visual group

The experimenter guided the participant's gaze through the scene by indicating the spheres with a pointer. The objects were introduced in the same randomized 2 × 3 arrangement as in the haptic group. Again, participants were asked to build a precise mental representation of the objects.

Association phase

When the participant felt familiar with the setting, the scene was cleared, that is, all spheres were pulled back (haptic group) or up (visual group), or all objects were put aside. Then the association learning task started (see Figure 1B). Participants had to encode the association between an auditorily presented abstract word and either the location(s) or the object(s). To this end, the word serving as cue was played repeatedly while participants simultaneously encountered the to-be-learned mediator(s). When participants felt sure that they had learned the cue–mediator association, the mediator(s) was (were) replaced by the next set, and the next cue–mediator associations had to be learned. The 18 cue–mediator associations per condition were learned in a pseudorandom sequence. On average, participants had to repeat this learning sequence three times to be prepared for the testing phase. In order to reduce interference during the paired-associate learning task, participants of both groups were asked to ignore the rearrangement of the setting: In the visual group, participants were instructed to close their eyes between trials while the experimenter placed one or two objects on the table or rearranged the spheres in the cubicle; in the haptic group, participants withdrew their right hand from the setting while objects or locations were rearranged. Participants were discouraged from using verbal labels for both objects and locations.

Direct association test

In order to test whether the associations between the cues and targets had been learned successfully, a word was presented and the participant had to indicate the associated object(s) or location(s). To prevent object information from being confounded with location information in the object condition, the 2 × 3 arrangement of the objects on the table in front of the participants was continuously reshuffled. Moreover, to avoid sequence effects, the 18 words of each condition were presented in random order. Incorrect responses were corrected by showing the correct target(s). On average, participants had to work through four to five sets before they fulfilled the criterion of two error-free sets. After a participant had successfully completed the acquisition of the first stimulus set (either locations or objects), the second set of the same modality was learned using the same procedure.

Retrieval Test

In the retrieval phase, two spoken words were presented with a stimulus onset asynchrony of 800 msec. Participants had to decide whether the words were associated with each other via a common location or a common object. To provide an answer, whether yes or no, participants had to reactivate the associated targets in any case. Participants indicated their decision by pressing either the left or the right mouse button. In the scanner, a fiber-optic mouse was used, operated with the index and middle fingers of the right hand. The assignment of buttons to response types (yes vs. no) was counterbalanced across participants. Presentation timing and response recording were controlled by the Presentation software (version 0.51) running on a Microsoft Windows PC.

A trial started with a warning tone (1000 Hz; duration = 40 msec). After 1 sec, the first cue was presented, followed by the second cue 2 sec later. The interval between the end of the second word and the end of the retrieval epoch had a duration of 10 sec. The next trial started after an intertrial interval of 2 sec. Thirty-six different word pairs were presented four times during the retrieval phase, resulting in 144 trials. Test items were selected such that the experimental factors of condition (object, location), level of fan (1, 2, 3), and type of probe (positive, negative) were completely crossed. The 12 different probe conditions were presented in random order, with the restriction that the same probe condition was never repeated immediately. Participants were instructed to respond as quickly but also as accurately as possible.
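
The trial timeline can be summarized as follows (a sketch based on the timing given above; the 0.85-sec word duration is the mean value reported under Materials, and the exact epoch boundaries are an assumption):

```python
# Sketch of one retrieval trial (times in seconds from trial onset).
WORD_DURATION = 0.85  # mean spoken-word duration (see Materials); an approximation

trial_events = [
    (0.00, "warning tone (1000 Hz, 40 msec)"),
    (1.00, "onset of cue 1 (first word)"),
    (3.00, "onset of cue 2 (second word)"),
    (3.00 + WORD_DURATION, "end of cue 2; retrieval epoch begins"),
    (3.00 + WORD_DURATION + 10.0, "end of retrieval epoch"),
    (3.00 + WORD_DURATION + 10.0 + 2.0, "end of intertrial interval"),
]

for t, label in trial_events:
    print(f"{t:5.2f} s  {label}")   # about 16 sec per trial, i.e., ~8 volumes at TR = 2 s
```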

Cues with a mean intensity of about 85 dB(A) were delivered in the scanner with MRI-compatible headphones (MR CONFON; www.mr-confon.de). The trial structure was as described above. In addition to the six retrieval conditions (location with Fans 1, 2, 3 and object with Fans 1, 2, 3), a so-called high-level baseline condition was introduced. In this condition (Fan 0), two identical cue stimuli were presented and the participant had to respond “yes.” Thus, condition Fan 0 involves the same perceptual and motor components as the remaining conditions but no retrieval of associations from long-term memory. A session comprised a total of four runs with 42 trials each (36 retrieval trials proper and six Fan 0 trials; duration = 11 min 22 sec). Thus, a total of 168 trials (24 trials of each of the six experimental conditions plus 24 Fan 0 trials with identical word pairs) were presented in the scanning session. During the retrieval tests, all participants were blindfolded.

fMRI Data Acquisition and Analysis

The fMRI session started with two functional runs, followed by the acquisition of a T1-weighted high-resolution magnetic resonance (MR) image volume, and ended with another two functional runs. A vacuum cushion was used to minimize head movements.

Data were acquired with a 1.5-Tesla whole-body MRI scanner (Signa, GE Medical Systems) equipped with an echo-planar imaging (EPI) upgrade, using a standard head volume coil. T2*-weighted functional MR images were obtained using an EPI sequence (TE = 60 msec, TR = 2000 msec, FA = 90°, matrix = 64 × 64, FOV = 240 mm × 240 mm). One volume consisted of 19 axial slices. Slice thickness was 5 mm, yielding voxel dimensions of 3.75 × 3.75 × 5 mm. The interslice gap was 0.5 mm. Each functional run comprised 346 volumes acquired within 11 min 30 sec. A whole-head 3-D volume for anatomical localization was recorded using a 3-D GRASS sequence (TE = 6 msec, TR = 33 msec; matrix = 256 × 192, FOV = 240 mm × 180 mm, resulting in an in-plane resolution of 0.9375 mm × 0.9375 mm). One hundred twenty-four axial slices (slice thickness = 1.4 mm) were acquired within 13 min 24 sec.

Preprocessing and statistical analysis were performed with the BrainVoyager QX software (version 1.8; www.brainvoyager.com). Preprocessing of the functional data included the elimination of low-frequency signal drifts by applying a voxel-based temporal high-pass filter (0.067 Hz cutoff frequency), slice scan time correction, and motion correction to remove residual head movements. The structural dataset was reconstructed and transformed into Talairach space (Talairach & Tournoux, 1988). Next, all 2-D functional datasets of each participant were aligned with the corresponding anatomical scans and transformed into Talairach space as well.

To identify brain regions whose activity varied with the experimental protocol, a general linear model (GLM) with nine predictors was defined. The predictors modeled (1) the cue processing phase (time epoch 0–4 sec), (2–7) the retrieval phase (time epoch 4–14 sec) for each fan level (Fan 1, 2, 3) and each material (object, location), (8) the high-level baseline (Fan 0), and (9) the low-level baseline (intertrial interval = 2 sec). The low-level baseline served as the reference predictor. This method correlates the fMRI time series with an idealized hemodynamic reference function on a voxel-by-voxel basis. To compensate for the delay of the hemodynamic response, the predictor functions were shifted in all analyses by 2 sec (delta = 2.5; tau = 1.25) relative to the onset of stimulation. Functional datasets were spatially smoothed with a Gaussian kernel of 6 mm full width at half maximum.
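
To make the predictor construction concrete, the following sketch builds one retrieval-phase predictor by convolving a boxcar with a single-gamma hemodynamic response function. The delta and tau values are those given above; the exponent n = 3 and the sampling grid are assumptions, not taken from the original analysis:

```python
import numpy as np

def gamma_hrf(t, delta=2.5, tau=1.25, n=3):
    """Single-gamma hemodynamic response with onset delay delta (sec),
    time constant tau (sec), and phase exponent n (n = 3 assumed here)."""
    s = np.maximum(t - delta, 0.0) / tau
    h = s ** (n - 1) * np.exp(-s)
    return h / h.max() if h.max() > 0 else h

TR = 2.0                                             # repetition time (sec)
time = np.arange(0, 40, TR)                          # samples covering one trial and beyond
boxcar = ((time >= 4) & (time < 14)).astype(float)   # retrieval epoch, 4-14 sec

# GLM predictor: boxcar convolved with the hemodynamic response function
hrf = gamma_hrf(np.arange(0, 20, TR))
predictor = np.convolve(boxcar, hrf)[: len(time)]
```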

To address the question of different cortical substrates for the types of material (main effect objects vs. locations), two contrasts were calculated: “visual object Fan 3 and Fan 2 and Fan 1 versus visual location Fan 3 and Fan 2 and Fan 1” and “haptic object Fan 3 and Fan 2 and Fan 1 versus haptic location Fan 3 and Fan 2 and Fan 1.” To protect against false positives, the cluster-size threshold was set to 100 voxels and alpha to p < .001 (corrected).

Next, to reveal brain areas that responded during the recall of visually or haptically encoded objects and locations, parametric analyses with regression weights set to −3, −1, +1, and +3 for the conditions Fan 0, Fan 1, Fan 2, and Fan 3 were calculated for both object and location conditions and visual and haptic encoding (cluster size = 100, p < .0001, corrected).
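
The logic of this parametric contrast can be illustrated with hypothetical beta estimates for a single voxel (the numbers below are invented for illustration only):

```python
import numpy as np

weights = np.array([-3, -1, 1, 3])             # Fan 0, Fan 1, Fan 2, Fan 3
assert weights.sum() == 0                      # a valid contrast sums to zero

betas_linear = np.array([0.1, 0.4, 0.7, 1.0])  # response grows linearly with fan
betas_flat   = np.array([0.6, 0.6, 0.6, 0.6])  # no fan modulation

print(weights @ betas_linear)   # 3.0 -> strong parametric (linear) effect
print(weights @ betas_flat)     # 0.0 -> no effect
```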

To disentangle brain areas that responded differentially during the recall of visually versus haptically encoded items, the contrasts “visual object Fan 3 and Fan 2 and Fan 1 versus haptic object Fan 3 and Fan 2 and Fan 1” and “visual location Fan 3 and Fan 2 and Fan 1 versus haptic location Fan 3 and Fan 2 and Fan 1” were computed (i.e., main effect type of encoding: visual vs. haptic; cluster size = 100, p < .001, uncorrected). Furthermore, a conjunction analysis was calculated to reveal brain areas active during both the recall of visually and haptically encoded objects and locations: “visual object Fan 3 > Fan 2 > Fan 1 > Fan 0 and haptic object Fan 3 > Fan 2 > Fan 1 > Fan 0” and “visual location Fan 3 > Fan 2 > Fan 1 > Fan 0 and haptic location Fan 3 > Fan 2 > Fan 1 > Fan 0,” with p < .001 (corrected) and cluster size = 100.
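
The conjunction requires that a voxel shows the fan effect in both encoding groups. A minimum-statistic sketch with invented voxelwise t values conveys the logic (whether BrainVoyager computes the conjunction exactly this way is an assumption):

```python
import numpy as np

t_visual = np.array([5.2, 1.1, 4.8])   # hypothetical t values, visual group
t_haptic = np.array([4.9, 4.5, 0.9])   # hypothetical t values, haptic group
t_crit = 3.6                           # hypothetical threshold for p < .001

# Minimum-statistic conjunction: a voxel counts only if significant in BOTH maps
conjunction = np.minimum(t_visual, t_haptic) > t_crit
print(conjunction)                     # [ True False False]
```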

A three-dimensional T1-weighted structural recording of one participant was used for the surface reconstruction. The cortex was segmented into gray and white matter by using a region-growing function. The cortical surface was then unfolded, cut, and flattened.

The activation clusters of the statistical maps were characterized in terms of their spatial extent (number of voxels) and mean t value. Clusters of activation were anatomically classified according to their Talairach and Tournoux (1988) coordinates. Because some activation sites within a specific gyrus or sulcus belonged to different Brodmann's areas, the underlying anatomical gyrus or sulcus was divided into subsegments in accordance with the specific Brodmann's areas given in the atlas of Talairach and Tournoux. For example, the superior frontal gyrus (Gfs) was divided into a middle and a posterior part, corresponding to BA 8 and BA 6. Furthermore, the terminology and definition of regions of interest within the parietal and temporal cortex were adapted as closely as possible to the findings of Culham and Valyear (2006, p. 206), Amedi, von Kriegstein, van Atteveldt, Beauchamp, and Naumer (2005, p. 560), and Rizzolatti and Matelli (2003). Areas of activation were depicted in a pseudocolor map according to the significance level and superimposed on high-resolution anatomical images. Finally, time courses of the BOLD response were calculated from significantly activated voxels within specific regions of interest.

RESULTS

Behavioral Data

Response Times

Response times and error rates obtained during the retrieval phase in the scanner are summarized in Figure 2. An overall analysis of variance (ANOVA) of response times with the between-participant factor modality (visual, haptic) and within-participant factors condition (object, location) and fan (1, 2, 3) provided significant main effects of fan [F(2, 36) = 151.08, p < .001, explained variance = .894] and condition [F(1, 18) = 36.72, p < .001, explained variance = .671] and a significant interaction Fan × Condition [F(2, 36) = 9.81, p = .001, explained variance = .353]. None of the interactions with the factor modality was reliable.
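
The reported “explained variance” values are consistent with partial eta squared, which can be recovered from each F statistic and its degrees of freedom as F * df_effect / (F * df_effect + df_error). A quick check (illustrative only):

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared: F * df1 / (F * df1 + df2)."""
    return F * df_effect / (F * df_effect + df_error)

print(round(partial_eta_squared(151.08, 2, 36), 3))  # 0.894, main effect of fan
print(round(partial_eta_squared(36.72, 1, 18), 3))   # 0.671, main effect of condition
print(round(partial_eta_squared(9.81, 2, 36), 3))    # 0.353, Fan x Condition
```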

Figure 2. 

Mean response times in milliseconds (msec; bars) and error rates in percentage (circles) as a function of level of fan for objects and locations separately for the visual and the haptic encoding groups. Error bars show ±1 standard error. Fan 0 is the control condition (high-level baseline) with two identical words presented as retrieval cue.

Response time proved to be a monotonic function of the number of associations learned during the study phase, irrespective of modality and condition [main effects of fan for visual objects: F(2, 18) = 62.02; visual locations: F(2, 18) = 29.49; haptic objects: F(2, 18) = 56.21; and haptic locations: F(2, 18) = 50.16, all p < .001]. In both groups, retrieval of objects took, on average, longer than retrieval of locations [main effect of condition in the visually learning group: F(1, 9) = 18.38, p < .01, with mean response times of 6170 msec for objects and 5660 msec for locations; main effect of condition in the haptically learning group: F(1, 9) = 20.26, p < .001, with mean response times of 6222 msec for objects and 5272 msec for locations].

Error Rates

The overall ANOVA for error rates provided a significant main effect of fan [F(2, 36) = 14.48, p < .001, explained variance = .445]. Error rates increased monotonically with increasing fan in both groups and for both types of material (all main effects fan were significant with p < .001).

This pattern of results suggests that the two groups were behaviorally equivalent, that is, that there were no systematic differences in task difficulty between the visual and the haptic conditions. Moreover, the factor fan explains most of the response-time variance.

fMRI Data

Material-specific Effects for Differently Encoded Items

Visually encoded objects and locations

In a first analysis, we tested which areas were more active during retrieval of object knowledge than during retrieval of location knowledge, and vice versa, irrespective of the fan manipulation (main effect “type of material” for the visual encoding condition). The results are summarized in Figure 3A and Table 1. Object retrieval evoked a distributed activation pattern within the left hemisphere comprising the middle and posterior part of the middle frontal gyrus (Gfm), the anterior cingulate gyrus (CG), and the anterior and posterior IPS. Within this pattern, the activation of the middle Gfm was more pronounced than that of the posterior Gfm (expressed by the number of activated voxels), and the activation of the anterior IPS was stronger than that of the posterior IPS. In contrast, retrieval of locations evoked an extended bilateral activation pattern with a clear posterior center of gravity. Bilaterally activated areas included the posterior part of the Gfs, the SII, the Lps, the posterior IPS, the inferior temporal gyrus, the parieto-occipital sulcus, and the precuneus. In general, activations were more pronounced in right hemispheric areas than in their left hemispheric homologues. The activation maximum was located in the Lps.

Figure 3. 

Statistical parametric map of the main effect “objects vs. locations.” (A) Visual and (B) haptic encoding group. Green/blue colored areas were significantly more active during retrieval of location information, whereas yellow/red colored areas were significantly more active during retrieval of object information.

Table 1. 

Main Effect “Objects versus Locations” of the Visual Encoding Condition

Region (BA) | Left: x, y, z, n, t | Right: x, y, z, n, t

Contrast: Visual Objects vs. Visual Locations—More Activation in the Object Condition
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | 41, 34, 35, 989, 8.21 | -
Posterior Gfm (9/6) | 38, 10, 36, 652, 8.35 | -
Middle Gfs (8) | - | -
Posterior Gfs (6) | - | -
Anterior insula | - | -
GC (24/32) | 16, 33, 1338, 9.04 | -
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | - | -
Lps (7) | - | -
Anterior IPS (7) | 31, −58, 37, 1946, 10.74 | -
Posterior IPS (7) | 30, −66, 39, 1078, 10.07 | -
Inferior temporal gyrus (21/37) | - | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | - | -
Precuneus (7, 31) | - | -

Contrast: Visual Objects vs. Visual Locations—More Activation in the Location Condition
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | - | -
Posterior Gfm (9/6) | - | -
Middle Gfs (8) | - | −27, 14, 51, 307, −8.01
Posterior Gfs (6) | 21, 57, 1180, −8.81 | −27, 54, 1436, −8.64
Anterior insula | - | -
GC (24/32) | - | -
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 41, −36, 40, 1687, −10.13 | −42, −39, 43, 2287, −9.37
Lps (7) | 11, −57, 52, 3082, −10.54 | −11, −57, 55, 4532, −14.70
Anterior IPS (7) | - | -
Posterior IPS (7) | 30, −75, 32, 1475, −10.53 | −31, −70, 33, 2564, −11.28
Inferior temporal gyrus (21/37) | 53, −55, 1183, −9.00 | −51, −53, 2288, −9.82
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | 11, −60, 23, 429, −7.75 | −13, −57, 21, 1383, −10.25
Precuneus (7, 31) | −58, 44, 309, −7.64 | −5, −59, 45, 415, −8.15

Anatomical region, Brodmann's area (BA), and Talairach coordinates for the centers of gravity. Number of significantly activated voxels (n) and averaged t values of brain areas (p < .001, corrected).

By computing linear fan contrasts separately for objects and locations (with regression weights −3, −1, +1, +3 for conditions Fan 0, Fan 1, Fan 2, Fan 3), we determined areas that responded for each type of material parametrically to the number of to-be-retrieved associations. These contrasts reveal how much the activation of an area is modulated by the amount of material-specific retrieval effort. The results are summarized in Figure 4 and Table 2.

Figure 4. 

Statistical parametric maps of the linear contrasts “Fan 3 > Fan 2 > Fan 1 > Fan 0” in the visual encoding group. (A) Objects, (B) locations, and (C) superposition of the object and location maps (A and B). Activations are mapped onto a full Talairach-normalized unfolded brain of the left and right hemispheres. Color scale indicates level of significance (corrected for multiple comparisons). In (C) red indicates more activation during the recall of objects, blue more activity during the retrieval of locations, and violet indicates activations that are common to both recall of objects and locations. (D) Average percentage signal increases recorded in the middle frontal gyrus, the intraparietal sulcus, and the superior parietal lobe. The experimental conditions are represented by differently colored lines. Percentage signal change averaged across scans and subjects. Time point “0” indicates the beginning of the retrieval phase.

Table 2. 

Parametric Contrast “Fan 3 > Fan 2 > Fan 1 > Fan 0” for Visually Encoded Objects (Top) and Visually Encoded Locations (Bottom)

Region (BA) | Left: x, y, z, n, t | Right: x, y, z, n, t

Parametric Analysis: Visual Objects, Fan 3 > Fan 2 > Fan 1 > Fan 0
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | 401, 28, 28, 1081, 13.60 | −41, 29, 30, 441, 12.89
Posterior Gfm (9/6) | 42, 35, 2643, 16.82 | -
Middle Gfs (8) | 25, 13, 54, 1518, 14.48 | -
Posterior Gfs (6) | 29, 53, 3115, 17.87 | −27, 53, 1212, 16.03
Anterior insula | 31, 22, 930, 13.98 | -
GC (24/32) | 16, 35, 1547, 20.27 | −6, 21, 34, 1979, 17.80
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 41, −42, 43, 1050, 15.02 | -
Lps (7) | 13, −66, 53, 2750, 18.51 | −18, −66, 50, 2061, 15.09
Anterior IPS (7) | 28, −56, 35, 2867, 20.19 | −31, −50, 36, 1523, 14.99
Posterior IPS (7) | 27, −66, 38, 2956, 21.07 | −28, −66, 39, 1567, 16.61
Inferior temporal gyrus (21/37) | - | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | - | -
Precuneus (7, 31) | - | -

Parametric Analysis: Visual Locations, Fan 3 > Fan 2 > Fan 1 > Fan 0
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | - | -
Posterior Gfm (9/6) | 43, 38, 1233, 14.38 | -
Middle Gfs (8) | 22, 12, 56, 1472, 18.18 | −28, 10, 55, 907, 16.35
Posterior Gfs (6) | 28, −1, 54, 2818, 20.68 | −25, −1, 56, 1976, 19.43
Anterior insula | - | -
GC (24/32) | 19, 37, 542, 15.41 | −4, 19, 35, 818, 14.45
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 38, −39, 45, 2018, 16.93 | −37, −40, 46, 1453, 15.40
Lps (7) | 17, −60, 57, 39832, 24.64 | −22, −61, 54, 42872, 25.71
Anterior IPS (7) | 29, −50, 38, 2717, 17.71 | −28, −51, 36, 2531, 17.58
Posterior IPS (7) | 28, −67, 40, 30156, 19.83 | −31, −67, 41, 2864, 22.64
Inferior temporal gyrus (21/37) | 47, −54, −1, 535, 13.02 | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | 11, −63, 23, 1534, 14.37 | −14, −59, 23, 1138, 16.05
Precuneus (7, 31) | −58, 52, 1324, 13.79 | -

Columns as in Table 1; p < .0001, corrected.

In addition to the activations already described, this analysis revealed significant activations for object retrieval within the middle and posterior part of the Gfs, the anterior insula, and the Lps (and SII). Again, there was a clear left hemispheric preponderance of activations during object retrieval, expressed both by the number of different activated areas and by the number of voxels activated within homologue areas. For the retrieval of locations, additional bilateral activations were found in the Gfs, the anterior CG, and the anterior and posterior IPS. Moreover, left hemispheric activations were detected in the posterior Gfm, the inferior temporal gyrus, and the precuneus.

As shown in the superposition of both activation maps (Figure 4C), there is a large overlap of activations for object and location retrieval, particularly in the left Gfs, the left and right IPS, and the Lps. Specific activations for objects light up in the left Gfm, the left and right anterior CG, and the ventral part of the IPS. In contrast, location-specific activations dominate in the superior parietal cortex, including the Lps and SII, and within the posterior IPS, again with a right hemispheric bias. These material-specific and material-unspecific activations are also clearly revealed by the time courses of the BOLD signal in these areas (Figure 4D). The amplitudes of the hemodynamic response function show no clear rank order for objects and locations in the Lps and the posterior IPS. However, in the anterior, more ventrally located part of the IPS, all object responses are larger than all location responses, whereas in the Lps the pattern is reversed, that is, location responses are substantially larger than object responses.

Finally, the LOC lit up significantly for objects neither as a main effect (contrast of the overall activity of visual objects vs. locations) nor in the linear fan contrast.

Haptically encoded objects and locations

The results of the overall contrast “objects versus locations” in the haptic encoding condition are presented in Table 3 and Figure 3B. The pattern of activations is similar, but not identical, to the one found in the visual encoding condition. Again, for objects, a clearly left-lateralized pattern lights up, comprising the middle and posterior part of the Gfm, the anterior CG, and the anterior and posterior IPS. Tested at the same significance level, the number of activated voxels within these areas is much larger in the haptic than in the visual condition (compare the top parts of Tables 1 and 3). When locations had to be retrieved, activations were again mostly bilateral and comprised the superior parietal cortex (SII, Lps), the middle temporal gyrus, and the parieto-occipital sulcus. In the Lps and the middle temporal gyrus, activations were more pronounced in the right than in the left hemisphere. In addition, during location retrieval, a substantial cluster lit up in the right posterior part of the Gfs. Compared to the visual encoding condition, the activation pattern in the haptic condition was more focused on parietal areas during location retrieval.

Table 3. 

Main Effect “Objects versus Locations” of the Haptic Encoding Condition

Region (BA) | Left: x, y, z, n, t | Right: x, y, z, n, t

Contrast: Haptic Objects vs. Haptic Locations—More Activation in the Object Condition
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | 441, 30, 31, 2540, 8.67 | -
Posterior Gfm (9/6) | 44, 10, 35, 1674, 9.35 | -
Middle Gfs (8) | - | -
Posterior Gfs (6) | - | -
Anterior insula | - | -
GC (24/32) | 25, 33, 1646, 8.40 | -
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | - | -
Lps (7) | - | -
Anterior IPS (7) | 32, −53, 41, 2708, 8.98 | -
Posterior IPS (7) | 33, −65, 43, 2355, 9.04 | -
Middle temporal gyrus (21/37) | - | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | - | -
Precuneus (7, 31) | - | -

Contrast: Haptic Objects vs. Haptic Locations—More Activation in the Location Condition
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | - | -
Posterior Gfm (9/6) | - | -
Middle Gfs (8) | - | -
Posterior Gfs (6) | - | −26, 55, 1580, −7.08
Anterior insula | - | -
GC (24/32) | - | -
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 53, −31, 37, 2805, −7.09 | −53, −33, 40, 2728, −7.93
Lps (7) | 10, −56, 57, 3704, −8.23 | −13, −64, 53, 4246, −9.74
Anterior IPS (7) | - | -
Posterior IPS (7) | - | -
Middle temporal gyrus (21/37) | 46, −60, 15, 3147, −8.10 | −46, −57, 154, 3402, −8.86
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | 16, −54, 18, 14664, −7.64 | −13, −52, 22, 1333, −6.03
Precuneus (7, 31) | - | -

Anatomical region, Brodmann's area (BA), and Talairach coordinates for the centers of gravity. Number of significantly activated voxels (n) and averaged t values of brain areas (p < .001, corrected).

The results of the linear fan contrasts (Fan 0 < Fan 1 < Fan 2 < Fan 3) that reflect the parametric increase of activation with increasing amount of material-specific retrieval load in the haptic encoding condition are summarized in Figure 5 and Table 4.

Figure 5. 

Statistical parametric maps of the linear contrasts “Fan 3 > Fan 2 > Fan 1 > Fan 0” in the haptic encoding group. (A) Objects, (B) locations, and (C) superposition of the object and location maps (A and B). Layout and color coding as in Figure 4: red = object; blue = location; and violet = common activations. (D) Average percentage signal increases recorded in the middle frontal gyrus, the intraparietal sulcus, and the superior parietal lobe.

Table 4. 

Parametric Contrast “Fan 3 > Fan 2 > Fan 1 > Fan 0” for Haptically Encoded Objects (Top) and Haptically Encoded Locations (Bottom)

Region (BA) | Left: x, y, z, n, t | Right: x, y, z, n, t

Parametric Analysis: Haptic Objects, Fan 3 > Fan 2 > Fan 1 > Fan 0
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | 43, 25, 30, 6553, 18.95 | −48, 29, 31, 451, 12.63
Posterior Gfm (9/6) | 42, 36, 5170, 18.47 | -
Middle Gfs (8) | 26, 13, 54, 704, 13.42 | -
Posterior Gfs (6) | 27, −2, 55, 2044, 17.24 | -
Anterior insula | 28, 25, 770, 13.22 | −31, 23, 415, 12.96
GC (24/32) | 35, 917, 16.96 | −4, 34, 632, 16.29
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 38, −38, 41, 956, 15.33 | -
Lps (7) | 18, −65, 55, 837, 17.01 | -
Anterior IPS (7) | 30, −50, 40, 3746, 19.39 | −33, −49, 40, 756, 14.22
Posterior IPS (7) | 27, −65, 41, 3326, 18.96 | −31, −62, 41, 1147, 14.09
Middle temporal gyrus (21/37) | - | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | - | -
Precuneus (7, 31) | - | -

Parametric Analysis: Haptic Locations, Fan 3 > Fan 2 > Fan 1 > Fan 0
Anterior Gfm (10/46) | - | -
Middle Gfm (46/9) | 43, 27, 30, 1441, 13.21 | -
Posterior Gfm (9/6) | 47, 30, 2429, 13.45 | -
Middle Gfs (8) | - | -
Posterior Gfs (6) | 26, −4, 56, 2721, 17.66 | −27, 54, 1586, 13.57
Anterior insula | - | -
GC (24/32) | 14, 35, 403, 13.83 | −4, 17, 36, 1040, 14.90
Postcentral gyrus (3/1/2) | - | -
SII (5/7) | 38, −45, 43, 1023, 14.21 | −34
Lps (7) | 18, −63, 52, 4523, 17.93 | −14, −67, 50, 2953, 19.79
Anterior IPS (7) | 29, −52, 40, 1180, 14.44 | -
Posterior IPS (7) | 26, −65, 38, 1493, 15.52 | −23, −63, 38, 682, 14.52
Middle temporal gyrus (21/37) | - | -
LOC (19) | - | -
Parieto-occipital sulcus (31/19) | - | -
Precuneus (7, 31) | - | -

Columns as in Table 1; p < .0001, corrected.

In addition to the described overall effects, this contrast reveals, for the retrieval of haptically encoded objects, activations in the left middle and posterior parts of the Gfs and the left superior parietal cortex (Lps, SII). The activations already seen as main effects in the left middle part of the Gfm, the left anterior CG, and the left anterior and posterior parts of the IPS are supplemented in the linear fan contrast by significant activations of the homologous areas in the right hemisphere. Moreover, the contrast reveals significant activations of both the left and the right anterior insula. Although the pattern is now distributed more bilaterally, the center of gravity of the activations remains within the left hemisphere (i.e., in the left middle Gfm and the left anterior and posterior IPS).

For haptically encoded locations, the linear fan contrast revealed additional activations in the left middle and posterior parts of the Gfm, the left posterior Gfs, and the left anterior and posterior parts of the IPS. Moreover, the left and right anterior CG were activated. However, the bilateral activations in the middle temporal gyrus and the parieto-occipital sulcus were not identified by the linear fan contrast.

The specificity and nonspecificity of these activations for retrieving haptically encoded objects and locations are highlighted by the superposition map (Figure 5C) and the hemodynamic response functions (Figure 5D). Pronounced common activations are seen in the left Gfs, the superior parietal cortex (Lps, SII), and the IPS. Distinct activations exist for objects in the left middle part of the Gfs and in the left and right anterior IPS. In contrast, distinct activations for locations cluster within the posterior IPS and the right Lps.

Previous work on object recognition memory (Martin, 2007; Haxby et al., 2001; Ishai, Ungerleider, Martin, Schouten, & Haxby, 1999) suggests that processing object knowledge substantially activates occipito-temporal cortex regions (LOC). This hypothesis was supported neither by the overall contrast nor by the linear fan contrast. However, predictions derived from the work of Galletti et al. (2003) and Rizzolatti and Matelli (2003), which claim distinct functions for the dd- and the vd-stream, receive some support from both the overall contrast and the linear fan contrasts. Comparing the activation extent (number of active voxels) of the anterior and the posterior IPS shows that, in the object retrieval condition, the anterior IPS (functionally related to the ventrodorsal stream) is always more activated than the posterior IPS, whereas the opposite pattern holds for the location retrieval condition: there, the posterior IPS (functionally related to the dorsodorsal stream) is always more activated than the anterior IPS.

Modality-specific Effects

In order to highlight modality-specific activations, we contrasted the fan levels of the visual and the haptic conditions [visual(Fan 3, Fan 2, Fan 1) > or < haptic(Fan 3, Fan 2, Fan 1)]. These contrasts were calculated separately for the retrieval of objects and locations; the results are shown in Figure 6 and Tables 5 and 6.
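As an illustration of how such a modality contrast can be set up, the sketch below (ours, not the authors' analysis code) weights the visual fan levels against the haptic ones. The eight-column ordering and all numerical values are hypothetical.

```python
import numpy as np

# Hypothetical column order: visual Fan 0-3, then haptic Fan 0-3.
# Fan 0 receives zero weight, mirroring the contrast "visual(Fan 3, 2, 1)
# vs. haptic(Fan 3, 2, 1)" reported in Tables 5 and 6.
visual_vs_haptic = np.array([0, 1, 1, 1, 0, -1, -1, -1], dtype=float)

betas = np.array([0.2, 0.4, 0.6, 0.8, 0.2, 0.3, 0.4, 0.5])  # made-up estimates
effect = visual_vs_haptic @ betas  # > 0: more activity for visual encoding,
print(effect)                      # < 0: more activity for haptic encoding
```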

Figure 6. 

Statistical parametric maps of the contrasts "visual object Fan 1/2/3 versus haptic object Fan 1/2/3" (A) and "visual location Fan 1/2/3 versus haptic location Fan 1/2/3" (B). Green areas were significantly more active during retrieval of visually encoded material; purple areas were significantly more active during retrieval of haptically encoded material. Turquoise marks areas that were identified by the conjunction analysis "visual and haptic Fan 1/2/3" and were active during retrieval of objects (A) and locations (B) irrespective of the encoding modality.

Table 5. 

Modality-specific and Supramodal Effects

Region (BA) | Left hemisphere: x, y, z, n, t | Right hemisphere: x, y, z, n, t
(Ia) Contrast: Visual Learning (Fan 3 and Fan 2 and Fan 1) vs. Haptic Learning (Fan 3 and Fan 2 and Fan 1)—Significantly More Activation in the Visual Object Learning Group 
Middle Gfs (8) 14 19 52 3755 4.38 −18 18 51 4573 4.32 
Posterior Gfm (9/6)      −29 53 175 3.44 
Lps (7)      −20 −62 50 495 3.75 
Superior occipital gyri (19) 28 −75 20 1852 3.81 −25 −74 21 1995 3.76 
Parieto-occipital sulcus (31/19) 14 −59 16 3128 3.79 −15 −62 17 3441 4.21 
Cuneus (19/18) 10 −67 14 2325 3.95 −13 −71 12 2875 3.85 
 
(Ib) Contrast: Haptic Learning (Fan 3 and Fan 2 and Fan 1) vs. Visual Learning (Fan 3 and Fan 2 and Fan 1)—Significantly More Activation in the Haptic Object Learning Group 
Posterior Gfi (45) 48 21 22 671 4.55      
Middle Gfm (46/9) 44 22 31 4528 5.48 −41 22 30 1979 4.52 
Precentral gyrus (4) 24 −16 54 847 3.74 −22 −15 55 241 3.48 
Anterior insula 43 12 2754 4.49 −42 10 712 3.73 
Posterior insula 46 −19 17 2285 4.41 −43 −17 20 207 3.44 
GC (24/32) −2 36 2669 4.94 −6 14 35 2743 4.85 
Superior temporal gyrus 54 −16 1058 4.97      
Middle temporal gyrus      −51 −50 2582 4.31 
Postcentral gyrus (1/2/3) 32 −27 63 602 3.82 −35 −32 57 452 3.80 
SII (5/7) 38 −37 52 2319 4.74      
Lpi (39/40) 50 −50 33 1030 3.58 −42 −57 39 1120 3.79 
Precuneus (7, 31) −59 40 949 5.71 −5 −59 39 408 4.53 
 
(II) Conjunction Analysis: Visual Learning (Fan 3 > Fan 2 > Fan 1 > Fan 0) and Haptic Learning (Fan 3 > Fan 2 > Fan 1 > Fan 0)—Activation in Both Object Learning Groups 
Posterior Gfm (9/6) 43 36 3504 14.78      
Posterior Gfs (6) 29 56 3610 14.69      
Lps (7) 23 −64 50 3548 17.97      
Anterior IPS (7) 34 −51 43 3879 15.28 −32 −52 44 623 13.51 
Posterior IPS (7) 27 −65 41 5363 17.88 −31 −62 42 690 13.64 

(I) Brain areas that responded only during the recall of visually (Ia) or haptically (Ib) encoded objects (main effects "visual vs. haptic encoding," p < .001, uncorrected); (II) brain areas active during both the recall of visually and the recall of haptically encoded objects (conjunction analysis: contrasts "visual object Fan 3 > Fan 2 > Fan 1 > Fan 0" and "haptic object Fan 3 > Fan 2 > Fan 1 > Fan 0," p < .001, corrected).

Columns as in Table 1.

Table 6. 

Modality-specific and Supramodal Effects

Region (BA) | Left hemisphere: x, y, z, n, t | Right hemisphere: x, y, z, n, t
(Ia) Contrast: Visual Learning (Fan 3 and Fan 2 and Fan 1) vs. Haptic Learning (Fan 3 and Fan 2 and Fan 1)—Significantly More Activation in the Visual Location Learning Group 
Middle Gfs (8) 20 17 52 4758 5.00 −22 18 51 4414 4.75 
Lps (7)      −18 −60 51 4260 4.58 
Superior occipital gyri (19) 24 −75 24 633 3.92 −30 −75 22 675 4.32 
Parieto-occipital sulcus (31/19) −61 17 1219 3.70 −12 −60 19 2950 5.02 
Cuneus (19/18) 10 −70 488 3.47 −10 −76 848 3.75 
 
(Ib) Contrast: Haptic Learning (Fan 3 and Fan 2 and Fan 1) vs. Visual Learning (Fan 3 and Fan 2 and Fan 1)—Significantly More Activation in the Haptic Location Learning Group 
Posterior Gfi (45) 46 22 23 354 3.89      
Anterior Gfm (10/46)      −29 34 29 644 3.86 
Middle Gfm (46/9) 41 24 29 4549 5.15 −36 24 27 1239 3.96 
Posterior Gfs (6) 23 −6 57 1003 4.18      
Precentral gyrus (4) 23 −16 55 580 3.85      
Anterior insula 44 938 3.90      
Posterior insula 51 −17 17 1229 4.12      
GC (24/32) −1 35 2745 4.63 −6 15 36 2683 4.81 
Superior temporal gyrus 52 −15 343 3.36      
Middle temporal gyrus 49 −47 18 2587 3.88 −47 −52 12 1662 3.89 
Lpi (39/40) 50 −51 32 540 3.69      
Precuneus (7, 31) −60 39 646 3.75      
 
(II) Conjunction Analysis: Visual Learning (Fan 3 > Fan 2 > Fan 1 > Fan 0) and Haptic Learning (Fan 3 > Fan 2 > Fan 1 > Fan 0)—Activation in Both Location Learning Groups 
Posterior Gfm (9/6) 47 36 210 12.98      
Posterior Gfs (6) 27 −1 56 3373 15.81      
Lps (7) 13 −64 52 4579 17.69 −9 −66 52 846 17.89 
Anterior IPS (7) 34 −50 44 2417 13.68      
Posterior IPS (7) 25 −64 39 2014 14.21      

(I) Brain areas that responded only during the recall of visually (Ia) or haptically (Ib) encoded locations (main effects "visual vs. haptic encoding," p < .001, uncorrected); (II) brain areas active during both the recall of visually and the recall of haptically encoded locations (conjunction analysis: contrasts "visual location Fan 3 > Fan 2 > Fan 1 > Fan 0" and "haptic location Fan 3 > Fan 2 > Fan 1 > Fan 0," p < .001, corrected).

Columns as in Table 1.

For the retrieval of both objects and locations, a number of areas show modality-specific activation effects, which, by and large, were very similar for the two materials. Visually encoded objects and locations induced substantial bilateral activation in secondary visual processing areas (see the green spots in Figure 6 and the top sections of Tables 5 and 6), namely, in the superior occipital gyri, the parieto-occipital sulcus, and the cuneus (BA 18 and 19). In addition, retrieval of visually encoded entities bilaterally activated the middle part of the Gfs and the right Lps. Finally, in the object condition, a small additional activation spot appeared in the right posterior Gfm.

The reverse contrast reveals that retrieval of haptically encoded objects and locations (purple in Figure 6, middle sections of Tables 5 and 6) activates more motor and somatosensory areas, in particular, the precentral gyrus (BA 4; objects and locations), the posterior Gfs (BA 6; locations), the postcentral gyrus (BA 1/2/3; objects), and the SII (BA 5/7; objects). In addition, there are activations in a number of areas that are usually not classified as "core" areas for processing movement or touch but for which activations during kinesthetic information processing have been observed in previous studies (see Introduction). For both objects and locations, these areas comprise the anterior and posterior insula, the anterior CG (BA 24/32), the middle temporal gyrus, and the inferior parietal lobe (BA 39/40). In general, retrieval of haptically encoded objects activated most of these areas bilaterally, whereas retrieval of haptically encoded locations activated predominantly left hemispheric areas. Finally, there were activations of frontal areas, namely the left inferior frontal gyrus (BA 45; objects and locations), the left and right middle parts of the Gfm (BA 46/9), and the anterior part of the right Gfm (BA 10/46; locations).

Supramodal Effects

In order to identify supramodal areas, we calculated a conjunction analysis across the two encoding modalities that comprised all visual and all haptic fan levels [visual(Fan 3 > Fan 2 > Fan 1 > Fan 0) ∩ haptic(Fan 3 > Fan 2 > Fan 1 > Fan 0)]. This analysis was run separately for the retrieval of objects and locations.
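One common way to compute such a conjunction is the minimum-statistic approach; the authors' exact implementation is not specified here, so the following is a minimal sketch under that assumption, with random stand-in maps. A voxel counts as supramodal only if it passes threshold in both the visual and the haptic fan contrast.

```python
import numpy as np

rng = np.random.default_rng(1)
t_visual = rng.normal(3.0, 1.0, size=(5, 5, 5))  # placeholder t-maps for the
t_haptic = rng.normal(3.0, 1.0, size=(5, 5, 5))  # two linear fan contrasts

# The smaller of the two t values must still exceed the threshold,
# i.e., the voxel is suprathreshold in BOTH contrasts.
threshold = 3.0
conjunction = np.minimum(t_visual, t_haptic) > threshold
print(conjunction.sum(), "voxels active in both fan contrasts")
```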

For both types of stored entities, the conjunction analysis detected highly reliable activations in the posterior part of the Gfm (BA 9/6), the posterior part of the Gfs (BA 6), the left Lps (BA 7), and the anterior and posterior parts of the IPS (BA 7). Because these activations were present during retrieval of both objects and locations, and lit up in the conjunction of visual and haptic encoding, they are both supramodal and unspecific with respect to the material. These overall activations are supplemented by some smaller, modality-unspecific but material-specific spots in the right hemisphere in BA 7, located more superiorly (Lps) during location retrieval and more inferiorly (anterior and posterior IPS) during object retrieval.

DISCUSSION

The present study delineated the cortical networks that are activated when either visually or haptically encoded objects or spatial locations are retrieved from long-term memory. According to our hypotheses, the findings can be systematized from two different viewpoints. First, there are material-specific effects for differently encoded items; we discuss these object- and location-specific effects separately for the visual and the haptic encoding conditions and stress the differences between material types. Second, there are supramodal and modality-specific effects that appeared either for objects, for locations, or for both; here we highlight the commonalities and distinctions between items encoded via distinct modalities.

Before evaluating the observed activation patterns and relating them to the literature, it is important to recapitulate the specific aspects of our design. First, participants learned the to-be-retrieved information either visually or haptically. Thus, a direct visual–haptic interaction, or a transformation of one type of representation into the other due to already existing associations, can be excluded. Second, the retrieval test did not confound perceptual processes with memory retrieval processes proper, as is the case in a recognition memory design: the participants had learned word–target associations, and only the words were presented in the retrieval phase. Thus, all brain activation differences observed during retrieval must be due to how the stimuli had been encoded and how they were represented in long-term memory. Third, activated areas were delineated by a linear contrast between the different fan levels and the high-level baseline (regression weights −3, −1, +1, +3 for fan levels 0, 1, 2, and 3); that is, these areas show parametrically controlled, increasing BOLD responses as a function of material-specific (or modality-specific) increases in retrieval load. This is substantiated by the behavioral data: response latencies and error rates showed a clear and almost equivalent fan effect for all four conditions (object vs. location condition and visual vs. haptic modality). Most importantly, there were no systematic performance differences between the visual and the haptic modalities. Thus, the activations observed in this study cannot be attributed to differences in overall task load or to specific perceptual demands. Fourth, even if our participants had formed complex word–object or word–location associations, the observed differences in the activation patterns must nevertheless be due to the distinct quality of the associated targets. The fact that words had to be processed to access these associations cannot explain the effects, because the words are a constant factor in all calculated contrasts; that is, the activations of word-related areas (concerning either the phonetics or the semantics of the cues) must have leveled off in these calculations. It is, of course, correct that the words were, so to speak, the key that opened the doors of the various memory compartments, but this key was always the same while it opened different doors. Moreover, the word-related activity was captured by a distinct predictor, which modeled the first 4 sec of the retrieval epoch. In contrast, the predictors used to model the retrieval-related activity proper captured a time epoch of 10 sec, which covered the period after cue processing until and beyond the decision (decision times lay between 4.5 and 8 sec).
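The two-predictor epoch model described above can be sketched as follows (our reconstruction under stated assumptions, not the published code): a 1-sec sampling grid, a canonical double-gamma hemodynamic response function, and made-up trial onsets, with one boxcar for the word/cue epoch (first 4 sec) and one for the retrieval epoch (the following 10 sec), each convolved with the HRF.

```python
import numpy as np
from math import gamma

def hrf(t, p1=6.0, p2=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma hemodynamic response (unit-free sketch)."""
    return (t ** (p1 - 1) * np.exp(-t) / gamma(p1)
            - ratio * t ** (p2 - 1) * np.exp(-t) / gamma(p2))

t = np.arange(0.0, 30.0)        # 30-sec HRF kernel, 1-sec sampling assumed
onsets = [0, 40, 80]            # hypothetical trial onsets in seconds
n_scans = 120

cue = np.zeros(n_scans)         # word/cue predictor: first 4 sec of a trial
retrieval = np.zeros(n_scans)   # retrieval predictor: the following 10 sec
for on in onsets:
    cue[on:on + 4] = 1.0
    retrieval[on + 4:on + 14] = 1.0

# Convolve each boxcar with the HRF to obtain the design-matrix columns.
X = np.column_stack([np.convolve(box, hrf(t))[:n_scans]
                     for box in (cue, retrieval)])
print(X.shape)                  # (scans, predictors)
```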

Taking all these arguments together, one must conclude that all activations observed in this study relate to the process of reactivating cue-associated long-term memory representations. The activation differences must have a purely cognitive origin; that is, they relate to the modality via which the items were previously encoded (vision, touch) and to the type of information they represent (objects, locations). Both features, modality and material type, are not processed bottom–up during retrieval; rather, they must be reactivated top–down from memory.

Retrieval of Visually Encoded Objects and Locations

The results of the visual group confirm previous neuroimaging findings indicating that retrieval of long-term memory contents activates an extended and scattered neural network encompassing frontal and parietal areas (e.g., Khader et al., 2007; James et al., 2002; Amedi et al., 2001). This is substantiated by both the main effect and the linear fan contrasts.

The first striking finding is that several areas are activated, by and large, to the same extent by visually encoded objects and locations. This holds, in particular, for the Gfs and the IPS. However, although the activation spots in these areas are almost identical in the left hemisphere, their extent is modulated by the type of material in the right hemisphere: there, the activations in the middle Gfs and in the anterior and posterior parts of the IPS always cover a larger area (expressed by the number of voxels) for locations than for objects. Nevertheless, the substantial overlap indicates that these areas must be recruited during the retrieval of both objects and locations. Two causes are conceivable for this effect: either the respective sites store and reactivate very general features of the retrieved representations, or they exert superordinate executive control functions (e.g., they amplify relevant and suppress irrelevant associations during retrieval, or they evaluate familiarity and recollection signals that guide the final decision). Considering previous findings on the functional significance of the dorsolateral prefrontal cortex (e.g., D'Esposito et al., 2000; Owen, 2000) and of parts of the IPS (Wheeler & Buckner, 2004), such superordinate functions seem most likely. Nevertheless, which of the two options is correct cannot be decided on the basis of these data alone; we will come back to this issue below when discussing the supramodal effects.

Aside from the mentioned overlap, the activation patterns evoked during retrieval of object and location knowledge are clearly distinct. Even at the borders of the strongly overlapping areas, the strength and extent of activation show a relative dominance for one or the other type of material. With respect to the hypotheses, it has to be noted that retrieval of object knowledge involved predominantly areas in the left hemisphere, whereas retrieval of location knowledge recruited both hemispheres with some bias toward right hemispheric dominance. For objects, the most prominent activation spots comprised the left anterior insula, the anterior CG, the SII, and the Lps. For locations, the most prominent spots lit up in the right hemisphere, including the Gfs, the anterior CG, the SII, the Lps, the IPS, and the parieto-occipital sulcus.

This activation pattern caused by retrieval of visually encoded objects and locations agrees well with the results of a previous study (Khader et al., 2007), in which the same paradigm was used but with different targets, namely, pictures of meaningful objects (drinking cups) and locations in a 2-D grid. It is interesting that despite the item differences there is such a close correspondence between the results of these two studies. This suggests that the observed activations do not relate to specific item knowledge, but rather to more general features that define objects as entities or locations in space. The prominent parietal activation spots in the location condition agree well with results from animal (e.g., Mountcastle, 1995; Mishkin et al., 1983), human lesion (e.g., Karnath & Perenin, 2005), and human brain imaging studies (electroencephalogram: Heil, Bajric, Rösler, & Hennighausen, 1996; Rösler, Heil, Bajric, et al., 1995; Rösler, Heil, & Hennighausen, 1995; positron emission tomography: Vanlierde, De Volder, Wanet-Defalque, & Veraart, 2003; fMRI: Khader et al., 2007). Likewise, the relative preponderance of the right superior parietal cortex during processing of spatial information is in agreement with previous findings (Khader et al., 2007; Khader, Burke, et al., 2005; Khader, Heil, & Rösler, 2005; Vanlierde et al., 2003).

With respect to areas related to object retrieval, we considered two possible outcomes. According to many studies in which brain images were acquired during working memory recognition tasks (Martin, 2007; Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi et al., 2001), one would expect meaningful and meaningless objects to evoke activity in the LOC and in the inferior occipito-temporal cortex. According to our own study, however, in which we had used the same long-term memory retrieval design but with different items, one would expect these very areas not to be activated. As a matter of fact, the activation pattern found here confirms our previous result: the LOC and the inferior occipito-temporal cortex were not systematically activated when permanently stored object representations had to be retrieved from long-term memory and scanned for mutual associations.

Considering the present design, it becomes clear that the two findings do not necessarily contradict each other. Our participants did not perform a recognition memory task, which implies substantial perceptual processing besides memory retrieval. Rather, they had to reactivate object knowledge in memory without perceiving the objects; in some sense, they had to rely on images of the formerly perceived objects. It is conceivable that such an imagery task involves different brain areas than a more perceptually laden recognition task. Moreover, we did not measure activity against a resting baseline; instead, we analyzed the parametric increase of the BOLD response across conditions in which the number of reactivated items increased systematically. Thus, basic processes unrelated to the amount of to-be-reactivated memory representations are not detected by our analyses. In conclusion, it must be accepted that the "core" areas of the what pathway are not necessarily activated in long-term memory retrieval tasks such as the one used here. This suggests that the LOC activations reliably observed in other studies relate functionally more to perceptual than to long-term memory processes.

Retrieval of Haptically Encoded Objects and Locations

Retrieval of haptically encoded objects and locations showed a picture similar to that outlined for the visual condition. There was again substantial overlap of activated areas; nevertheless, even within these overlapping areas, a relative activation dominance of one or the other retrieval condition could always be seen. With respect to our hypotheses, the data revealed that haptically encoded objects evoked two very prominent activation clusters, one in the left Gfm (middle and posterior parts) and one in the anterior and posterior IPS. These areas were also activated in the location retrieval condition, but to a much lesser extent. In contrast, the most prominent clusters in the location condition lay in the left and right Lps, with a larger extent in the left than in the right hemisphere. Thus, here again, two clearly distinct activation patterns emerged during object and location retrieval, both constituting a fronto-parietal network. However, the distinctiveness is due less to the activation of topographically separate areas than to a distinct relative activation of partially overlapping areas. This is disclosed by the relative size of the activation spots (number of voxels) as well as by the amplitudes of the hemodynamic response functions (Figure 5).

The LOC and the inferior occipito-temporal cortex were again activated neither by haptically encoded objects nor by haptically encoded locations (H3hapt). Thus, the same arguments as already given for the visually encoded material apply here. As shown by others, the LOC does have multisensory functions with respect to object encoding and recognition, but when haptically encoded objects have to be retrieved from long-term memory (i.e., when they are accessed after full consolidation), these areas seem to be no longer involved.

Finally, according to the suggested distinction between a dd- and a vd-stream (Culham & Valyear, 2006; Rizzolatti & Matelli, 2003; Binkofski et al., 1998), we had formulated a specific hypothesis with respect to the IPS, namely, that haptically encoded locations should activate the dorsally located posterior IPS more, and haptically encoded objects the ventrally located anterior IPS more. Our findings support this hypothesis: comparing the number of activated voxels in the anterior and the posterior IPS makes clear that objects activated the anterior IPS more than the posterior IPS, whereas locations showed the reverse pattern. Moreover, the Lps, which borders on the posterior IPS, is also activated more during location retrieval than during object retrieval, as is also revealed by the hemodynamic response functions shown in Figure 5D. This finding is in line with the observations of Rizzolatti and Matelli (2003), who suggested the functional distinction of this region. Moreover, Roland et al. (1998) also reported somatosensory object-related activity mainly in the IPS and adjacent cortical regions of the parietal lobe, such as the supramarginal gyrus and the SII. Activation in the IPS was also found in tasks requiring the analysis of length, curvature, and shape of geometrical forms (Bodegard et al., 2001; O'Sullivan, Roland, & Kawashima, 1994) and of meaningful objects (Reed, Shoham, & Halgren, 2004; Amedi et al., 2002; Amedi et al., 2001), that is, in tasks that activate object knowledge.

It is worth mentioning that a similar although less prominent distinction was present in the visual condition, too. There, the Lps was also activated more in the location than in the object retrieval condition, whereas the most ventral part of the IPS that borders on the inferior parietal lobe was more activated during retrieval of objects than locations (see Figure 4C and D). Thus, the IPS and neighboring parietal regions seem to be functionally distinct for object and spatial processing but they also seem to code supramodal features (Makin et al., 2007; Peltier et al., 2007).

In general, retrieval of haptically encoded objects and locations activated left hemisphere areas more extensively, and with greater amplitude, than right hemisphere areas. We presume that this must be attributed, at least in part, to the fact that our participants had used their right hand for manipulating the objects and identifying the locations in space; thus, the left hemisphere must have been more involved in the learning situation. It is striking that this hemispheric bias is preserved in the retrieval condition, although neither hand nor arm movements were performed during the recall phase in the scanner. This clearly suggests that at least some features of haptically encoded entities are stored and reactivated in those areas of the left hemisphere that were employed during encoding, which is clear supporting evidence for cortical storage theories as suggested by McClelland et al. (1995) or Damasio (1989).

Supramodal- and Modality-specific Effects

Both of our modality-specific hypotheses, namely, that visually encoded material should activate posterior, vision-related areas more strongly than haptically encoded material and, vice versa, that haptically encoded material should activate anterior, motor- and somatosensory-related areas more strongly than visually encoded material, are clearly supported by the results. Visually encoded material, whether objects or locations, activated secondary visual processing areas (BA 18 and 19), the parieto-occipital sulcus, and the superior occipital gyri in both hemispheres. These areas are known to respond in visual perceptual and visual imagery tasks (Amedi, Malach, & Pascual-Leone, 2005; Ganis, Thompson, & Kosslyn, 2004; Mechelli, Price, Friston, & Ishai, 2004; Ishai, Ungerleider, & Haxby, 2000; Kosslyn & Thompson, 2000), and it is plausible to assume that our participants may have initiated imagery processes during retrieval in order to "scan" the representations for common associations.

In the retrieval situation, haptically encoded objects and locations significantly activated the precentral gyrus, and objects additionally the postcentral gyrus and the SII, that is, areas that are functionally related to movement and somatosensory perception.

In addition to these activations, which are closely related to the secondary sensory processing areas used for encoding the stimuli, some other modality-specific effects emerged from the direct contrast of the two encoding conditions. These differences appear in the frontal and the parietal cortex. Visually encoded items activated the middle section of the Gfs more, whereas haptically encoded material activated the middle section of the Gfm more. Haptically encoded items activated, in addition, a number of areas in the parietal cortex (inferior parietal lobe, precuneus), the insula, and the ACC. As already mentioned above, these activations were more prominent in the left hemisphere, most likely due to the right-handed encoding procedure.

The distinct frontal cortex activations during recall of haptic and visual information reveal that modality specificity is not only a feature of more posteriorly located areas closely linked to distinct input channels. Rather, modality specificity seems to extend also into cortex regions that are usually associated with superordinate executive control functions.

In the present study, visually encoded items activated areas in the superior frontal cortex that overlap with the frontal eye fields (BA 8). This activation was present in both the object and the location condition, but not, and this is the important finding, when haptically encoded material was retrieved (see Tables 5 and 6). Positron emission tomography and fMRI studies have demonstrated that saccade execution, saccade preparation, and foveal fixation lead to activity in BA 8 (see, e.g., Grosbras, Laird, & Paus, 2005; Perry & Zeki, 2000). As both tasks had required visuospatial processing during learning, an activation of this area seems plausible. However, one has to keep in mind that all participants were blindfolded in the recall phase and were instructed to keep their eyes closed in the scanner. The systematic activation of the frontal eye fields, which became more intense with increasing retrieval load, must therefore be retrieval-related, either in the sense that the stored information was partly formed and reactivated within the frontal eye fields, or in the sense that reactivation of the representations at other sites (e.g., in the parietal cortex) triggered the same processing networks as during encoding (including covert or even overt eye movements). This conclusion would not be feasible, of course, if we had seen the same frontal eye field activity in the haptic condition as well. However, there we observed more activation of the motor and somatosensory areas, in particular the precentral gyrus (BA 4), the postcentral gyrus (BA 1/2/3), and the SII (BA 5/7). This finding suggests that the motor programs activated during encoding are partially reactivated during retrieval. As with the frontal eye field activation, this can be due either to purely cortical processes or to an interaction between cortical and peripheral processes (e.g., subliminal electromyogram activity in the arm and hand). It would be interesting to pursue this idea further by simultaneously recording electrooculogram and electromyogram activity in the scanner while participants retrieve either visually or haptically encoded items.

The pattern of results agrees well with the idea that information is stored in those areas of the cortex that mediate the associated perceptual processes (Damasio, 1989). Considering the activity of the Gfs, one could extend the theory of code-specific memory representations by saying that not only are the perceptual features of an item encoded in the sensory areas, but all motor processes performed during perception and learning also become an integral part of the memory representation. The same argument can be applied to the activations observed in the ACC during retrieval of haptically encoded entities. The ACC has been closely linked to superordinate functions of conflict monitoring in simple and complex motor tasks, and this functional feature may also be relevant when participants discriminate between associations that were learned by means of active motor exploration. However, it need not be such a superordinate function as conflict monitoring that is of relevance here: other studies have shown that the rostral part of the ACC is also activated during somatosensory stimulation and movement execution as such (Paus, Koski, Caramanos, & Westbury, 1998; Picard & Strick, 1996; Devinsky, Morrell, & Vogt, 1995). In the present context, the specific functional implication of this ACC activation seems to be of minor importance, as it cannot be decided by means of the present data. More important is the fact that these movement-related activations appeared during retrieval of haptically encoded items while the participants lay motionless in the scanner, not performing any overt finger, hand, or arm movements. Thus, such movement-related features of the associations seem to form an integral part of the consolidated representation.

The other activation spots that proved to be specific for the haptic encoding condition, within the Gfm and the inferior parietal lobe, also fit into this pattern. Studies suggesting a distinction between a dd-stream and a vd-stream (Rizzolatti & Matelli, 2003), as well as brain scans acquired during tactile information processing, have systematically revealed a close interaction of parietal and prefrontal cortex areas. For example, Stoeckel et al. (2003) also found substantial activation within the ventral prefrontal cortex during active touch (i.e., in an area overlapping, in our terminology, with the middle Gfm).

Last, but not least, there were two prominent activation clusters that were both supramodal and unspecific with respect to the type of retrieved material. They comprised the posterior medial and superior frontal cortex (BA 6/9) and a large part of the Lps, including the anterior and posterior parts of the IPS (BA 7). The supramodal activation of the superior parietal cortex and the IPS agrees well with the increasing number of studies showing that both visual and tactile information have access to these regions (Makin et al., 2007; Peltier et al., 2007; Swisher et al., 2007; Stoeckel et al., 2004; Zhang, Weisser, Stilla, Prather, & Sathian, 2004; Bodegard et al., 2001; Jancke, Kleinschmidt, Mirzazade, Shah, & Freund, 2001; Roland et al., 1998). According to these studies, it is most likely that multimodal features of object and space representations are integrated and handled in the superior parietal areas. However, multimodality need not be the only cause of the activations seen in all of our conditions during retrieval: there is also evidence that some general control functions, such as the distinction between remember and know signals, are localized within or in the close neighborhood of the IPS (Wheeler & Buckner, 2004; Buckner & Wheeler, 2001).

Such general executive functions are, however, the most likely cause of the supramodal and material-unspecific activations of the left dorsolateral prefrontal cortex (within BA 6 and 9). Many studies have shown that these frontal regions are tightly related to control processes during working memory and long-term memory tasks, for example, refreshing currently activated items, coding item relations, or amplifying and attenuating different types of information during memory storage or access (Postle et al., 2006; Ranganath, Cohen, & Brozinsky, 2005; Ranganath, Cohen, et al., 2004; Ranganath, DeGutis, & D'Esposito, 2004; Johnson, Raye, Mitchell, Greene, & Anderson, 2003; D'Esposito et al., 2000; Owen, 2000). All of these functions are, without any doubt, most relevant for solving our highly demanding retrieval task. Going back to the linear contrasts used to model the fan effect, it can also be seen that these activations in the Gfm increased monotonically the more associations had to be reactivated and checked for mutual links. Thus, the prefrontal multimodal activation directly reflects the retrieval-related increase in control effort.

Conclusion

Taken together, the observed BOLD activation patterns are highly compatible with the theoretical framework of McClelland et al. (1995), Damasio (1989), and others, according to which code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that are part of the sensory and motor processing systems. Recall of objects activated a distributed neural network including the superior and the middle frontal gyrus and the IPS, whereas retrieval of locations activated a network comprising the Gfs and the superior parietal cortex. Considering the functional significance of these areas, it seems likely that the fragmentary traces refer not only to allocentric features of the to-be-learned and stored entities but also include traces of the intrinsic processes that enabled encoding, that is, finger, hand, and arm movements in the haptic and eye movements in the visual encoding condition, respectively. Moreover, they include features that are handled in the secondary sensory processing areas, that is, in the SII in the haptic and in BA 18/19 in the visual encoding condition. In other words, recall of the stored representations results in an activation of all neuronal cell assemblies that mediate the encoding process (Gabrieli, 1998).

The finding of the present study that retrieval of both abstract objects and spatial locations predominantly activated regions within the parietal cortex also provides evidence questioning the oversimplified model of two distinct processing pathways in the brain. In a review of the what and where pathways, Merigan and Maunsell (1993) already noted that the notion of parallel visual subsystems "has been broadly disseminated and popularised … and has quickly become widely accepted, owing in part to its great explanatory power and its appealing simplicity" (p. 370). Nevertheless, they also emphasized that increasing evidence (anatomical, neurophysiological, and behavioral) suggests that the two systems overlap extensively. Our data, together with many other findings, substantiate this caveat. Moreover, they show that areas in the superior parietal cortex and the IPS are not only involved in on-line processing of spatial locations and their relations (where-information) or of action-related information relevant for immediate object manipulation (what- or how-information). Rather, they are also systematically involved during the retrieval of memory traces that were consolidated in long-term memory some time earlier.

Acknowledgments

O. S. and F. R. contributed equally to this publication. This work was supported by German Research Foundation (DFG) grants FOR 254 and Ro 529/18 awarded to F. R. The revision of this article was prepared while the last author spent a sabbatical at the Institute for Advanced Study in Berlin, whose support is highly appreciated.

Reprint requests should be sent to Frank Rösler, Department of Psychology, Philipps-University, 35032 Marburg, Germany, or via e-mail: roesler@staff.uni-marburg.de; Web: http://www.uni-marburg.de/fb04/team-roesler.

Notes

1. 

Frequency per million words; CELEX Centre for Lexical Information, Max Planck Institute for Psycholinguistics, Nijmegen (The Netherlands), 1990.

2. 

Reaction time differences between yes- and no-responses did not reach significance (p > .4) in the pre-fMRI retrieval test. Therefore, factor “probe” was not included in the ANOVAs of the fMRI retrieval session proper.

REFERENCES

Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12, 1202–1212.
Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330.
Amedi, A., Malach, R., & Pascual-Leone, A. (2005). Negative BOLD differentiates visual imagery and perception. Neuron, 48, 859–872.
Amedi, A., von Kriegstein, K., van Atteveldt, N. M., Beauchamp, M. S., & Naumer, M. J. (2005). Functional imaging of human crossmodal identification and object recognition. Experimental Brain Research, 166, 559–571.
Astafiev, S. V., Stanley, C. M., Shulman, G. L., & Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nature Neuroscience, 7, 542–548.
Binkofski, F., Buccino, G., Posse, S., Seitz, R. J., Rizzolatti, G., & Freund, H. (1999). A fronto-parietal circuit for object manipulation in man: Evidence from an fMRI-study. European Journal of Neuroscience, 11, 3276–3286.
Binkofski, F., Dohle, C., Posse, S., Stephan, K. M., Hefter, H., Seitz, R. J., et al. (1998). Human anterior intraparietal area subserves prehension: A combined lesion and functional MRI activation study. Neurology, 50, 1253–1259.
Bodegard, A., Geyer, S., Grefkes, C., Zilles, K., & Roland, P. E. (2001). Hierarchical processing of tactile shape in the human brain. Neuron, 31, 317–328.
Buckner, R. L., & Wheeler, M. E. (2001). The cognitive neuroscience of remembering. Nature Reviews Neuroscience, 2, 624–634.
Carpenter, P. A., Just, M. A., Keller, T. A., Eddy, W., & Thulborn, K. (1999). Graded functional activation in the visuospatial system with the amount of task demand. Journal of Cognitive Neuroscience, 11, 9–24.
Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16, 205–212.
Damasio, A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 25–62.
D'Esposito, M., Postle, B. R., & Rypma, B. (2000). Prefrontal cortical contributions to working memory: Evidence from event-related fMRI studies. Experimental Brain Research, 133, 3–11.
Devinsky, O., Morrell, M. J., & Vogt, B. A. (1995). Contributions of anterior cingulate cortex to behaviour. Brain, 118, 279–306.
Gabrieli, J. D. E. (1998). Cognitive neuroscience of human memory. Annual Review of Psychology, 49, 87–115.
Galletti, C., Kutz, D. F., Gamberini, M., Breveglieri, R., & Fattori, P. (2003). Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Experimental Brain Research, 153, 158–170.
Ganis, G., Thompson, W. L., & Kosslyn, S. M. (2004). Brain areas underlying visual mental imagery and visual perception: An fMRI study. Brain Research, Cognitive Brain Research, 20, 226–241.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Grosbras, M. H., Laird, A. R., & Paus, T. (2005). Cortical regions involved in eye movements, shifts of attention, and gaze perception. Human Brain Mapping, 25, 140–154.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Heil, M., Bajric, J., Rösler, F., & Hennighausen, E. (1996). Event-related potentials during mental rotation: Disentangling the contributions of character classification and image transformation. Journal of Psychophysiology, 10, 326–335.
Heil, M., Rösler, F., & Hennighausen, E. (1997). Topography of brain electrical activity dissociates the retrieval of spatial versus verbal information from episodic long-term memory in humans. Neuroscience Letters, 222, 45–48.
Ishai, A., Ungerleider, L. G., & Haxby, J. V. (2000). Distributed neural systems for the generation of visual images. Neuron, 28, 979–990.
Ishai, A., Ungerleider, L. G., Martin, A., Schouten, J. L., & Haxby, J. V. (1999). Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences, U.S.A., 96, 9379–9384.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
Jancke, L., Kleinschmidt, A., Mirzazade, S., Shah, N. J., & Freund, H. J. (2001). The role of the inferior parietal cortex in linking the tactile perception and manual construction of object shapes. Cerebral Cortex, 11, 114–121.
Johnson, M. K., Raye, C. L., Mitchell, K. J., Greene, E. J., & Anderson, A. W. (2003). fMRI evidence for an organization of prefrontal cortex by both type of process and type of information. Cerebral Cortex, 13, 265–273.
Karnath, H. O., & Perenin, M. T. (2005). Cortical control of visually guided reaching: Evidence from patients with optic ataxia. Cerebral Cortex, 15, 1561–1569.
Khader, P., Burke, M., Bien, S., Ranganath, C., & Rösler, F. (2005). Content-specific activation during associative long-term memory retrieval. Neuroimage, 27, 805–816.
Khader, P., Heil, M., & Rösler, F. (2005). Material-specific long-term memory representations of faces and spatial positions: Evidence from slow event-related potentials. Neuropsychologia, 43, 2109–2124.
Khader, P., Knoth, K., Burke, M., Ranganath, C., Bien, S., & Rösler, F. (2007). Topography and dynamics of associative long-term memory retrieval in humans. Journal of Cognitive Neuroscience, 19, 493–512.
Kosslyn, S. M., & Thompson, W. L. (2000). Shared mechanisms in visual imagery and visual perception: Insights from cognitive neuroscience. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 975–985). Cambridge, MA: MIT Press.
Makin, T. R., Holmes, N. P., & Zohary, E. (2007). Is that near my hand? Multisensory representation of peripersonal space in human intraparietal sulcus. Journal of Neuroscience, 27, 731–740.
Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45.
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419–457.
Mechelli, A., Price, C. J., Friston, K. J., & Ishai, A. (2004). Where bottom–up meets top–down: Neuronal interactions during perception and imagery. Cerebral Cortex, 14, 1256–1265.
Merigan, W. H., & Maunsell, J. H. (1993). How parallel are the primate visual pathways? Annual Review of Neuroscience, 16, 369–402.
Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
Mountcastle, V. B. (1995). The parietal system and some higher brain functions. Cerebral Cortex, 5, 377–390.
O'Sullivan, B. T., Roland, P. E., & Kawashima, R. (1994). A PET study of somatosensory discrimination in man: Microgeometry versus macrogeometry. European Journal of Neuroscience, 6, 137–148.
Owen, A. M. (2000). The role of the lateral frontal cortex in mnemonic processing: The contribution of functional neuroimaging. Experimental Brain Research, 133, 33–43.
Paus, T., Koski, L., Caramanos, Z., & Westbury, C. (1998). Regional differences in the effects of task difficulty and motor output on blood flow response in the human anterior cingulate cortex: A review of 107 PET activation studies. NeuroReport, 9, R37–R47.
Peltier, S., Stilla, R., Mariola, E., LaConte, S., Hu, X., & Sathian, K. (2007). Activity and effective connectivity of parietal and occipital cortical regions during haptic shape perception. Neuropsychologia, 45, 476–483.
Perry, R. J., & Zeki, S. (2000). The neurology of saccades and covert shifts in spatial attention. An event-related fMRI study. Brain, 123, 2273–2288.
Picard, N., & Strick, P. L. (1996). Motor areas of the medial wall: A review of their location and functional activation. Cerebral Cortex, 6, 342–353.
Pietrini, P., Furey, M. L., Ricciardi, E., Gobbini, M. I., Wu, W. H., Cohen, L., et al. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences, U.S.A., 101, 5658–5663.
Postle, B. R., Ferrarelli, F., Hamidi, M., Feredoes, E., Massimini, M., Peterson, M., et al. (2006). Repetitive transcranial magnetic stimulation dissociates working memory manipulation from retention functions in the prefrontal, but not posterior parietal, cortex. Journal of Cognitive Neuroscience, 18, 1712–1722.
Prather, S. C., & Sathian, K. (2002). Mental rotation of tactile stimuli. Cognitive Brain Research, 14, 91–98.
Ranganath, C., Cohen, M. X., & Brozinsky, C. J. (2005). Working memory maintenance contributes to long-term memory formation: Neural and behavioral evidence. Journal of Cognitive Neuroscience, 17, 994–1010.
Ranganath, C., Cohen, M. X., Dam, C., & D'Esposito, M. (2004). Inferior temporal, prefrontal, and hippocampal contributions to visual working memory maintenance and associative memory retrieval. Journal of Neuroscience, 24, 3917–3925.
Ranganath, C., DeGutis, J., & D'Esposito, M. (2004). Category-specific modulation of inferior temporal activity during working memory encoding and maintenance. Brain Research, Cognitive Brain Research, 20, 37–45.
Ranganath, C., Johnson, M. K., & D'Esposito, M. (2003). Prefrontal activity associated with working memory and episodic long-term memory. Neuropsychologia, 41, 378–389.
Reed, C. L., Klatzky, R. L., & Halgren, E. (2005). What vs. where in touch: An fMRI study. Neuroimage, 25, 718–726.
Reed, C. L., Shoham, S., & Halgren, E. (2004). Neural substrates of tactile object recognition: An fMRI study. Human Brain Mapping, 21, 236–246.
Rizzolatti, G., & Matelli, M. (2003). Two different streams form the dorsal visual system: Anatomy and functions. Experimental Brain Research, 153, 146–157.
Röder, B., Rösler, F., & Hennighausen, E. (1997). Different cortical activation patterns in blind and sighted humans during encoding and transformation of haptic images. Psychophysiology, 34, 292–307.
Roland, P. E., O'Sullivan, B., & Kawashima, R. (1998). Shape and roughness activate different somatosensory areas in the human brain. Proceedings of the National Academy of Sciences, U.S.A., 95, 3295–3300.
Rösler, F., Heil, M., Bajric, J., Pauls, A. C., & Hennighausen, E. (1995). Patterns of cerebral activation while mental images are rotated and changed in size. Psychophysiology, 32, 135–150.
Rösler, F., Heil, M., & Hennighausen, E. (1995). Distinct cortical activation patterns during long-term memory retrieval of verbal, spatial and color information. Journal of Cognitive Neuroscience, 7, 51–65.
Stoeckel, M. C., Weder, B., Binkofski, F., Buccino, G., Shah, N. J., & Seitz, R. J. (2003). A fronto-parietal circuit for tactile object discrimination: An event-related fMRI study. Neuroimage, 19, 1103–1114.
Stoeckel, M. C., Weder, B., Binkofski, F., Choi, H. J., Amunts, K., Pieperhoff, P., et al. (2004). Left and right superior parietal lobule in tactile object discrimination. European Journal of Neuroscience, 19, 1067–1072.
Swisher, J. D., Halko, M. A., Merabet, L. B., McMains, S. A., & Somers, D. C. (2007). Visual topography of human intraparietal sulcus. Journal of Neuroscience, 27, 5326–5337.
Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. Stuttgart: Thieme.
Ungerleider, L. G., & Haxby, J. V. (1994). "What" and "where" in the human brain. Current Opinion in Neurobiology, 4, 157–165.
Vanlierde, A., De Volder, A. G., Wanet-Defalque, M. C., & Veraart, C. (2003). Occipito-parietal cortex activation during visuo-spatial imagery in early blind humans. Neuroimage, 19, 698–709.
Wheeler, M. E., & Buckner, R. L. (2004). Functional–anatomic correlates of remembering and knowing. Neuroimage, 21, 1337–1349.
Zhang, M., Weisser, V. D., Stilla, R., Prather, S. C., & Sathian, K. (2004). Multisensory cortical processing of object shape and its relation to mental imagery. Cognitive, Affective, & Behavioral Neuroscience, 4, 251–259.