Visually guided reaching involves the transformation of the spatial position of a target into a body-centered reference frame. Although involvement of the posterior parietal cortex (PPC) has been proposed in this visuomotor transformation, it is unclear whether the human PPC uses visual or body-centered coordinates in visually guided movements. We used a delayed visually guided reaching task, together with fMRI multivoxel pattern analysis, to reveal the reference frame used in the human PPC. In the experiments, a target was first presented either to the left or to the right of a fixation point. After a delay period, subjects used either a normal or a left–right reversed mouse to move a cursor to the position where the target had previously been displayed. The activation patterns of the normal sessions were first used to train a classifier to predict movement directions. The activity patterns of the reversed sessions were then used as inputs to the decoder to test whether the predicted directions corresponded to the actual movement directions in visual or in body-centered coordinates. When the target was presented before actual movement, the predicted direction in the medial intraparietal cortex was congruent with the actual movement in body-centered coordinates, although the averaged signal intensities did not differ significantly between the two movement directions. Our results indicate that the human medial intraparietal cortex uses body-centered coordinates to encode target position or movement direction, which is crucial for visually guided movements.
When we reach for an external object, the retinal input specifying the target position needs to be transformed into a limb/body-centered reference frame to generate the appropriate motor commands. This process, called visuomotor transformation, is crucial for visually guided reaching. The posterior parietal cortex (PPC) is known to contain multiple coordinate systems for representing spatial information, and experimental evidence suggests that the PPC is involved in visuomotor transformation in monkeys as well as in humans.
Neurophysiological studies with nonhuman primates indicate that the PPC contributes to the transformation from retinal to body-centered coordinates for visuomotor control (for a review, see Colby, 1998). Specifically, a region called the parietal reach region, which comprises the parieto-occipital junction (PO or V6A) and the medial intraparietal area (MIP), is involved in visually guided reaching movements in monkeys (Andersen & Buneo, 2002; Cohen & Andersen, 2002; Snyder, Batista, & Andersen, 2000).
In humans, previous fMRI studies revealed two regions in the PPC related to visually guided reaching: V6A/PO, extending medially into the precuneus (Fernandez-Ruiz, Goltz, Desouza, Vilis, & Crawford, 2007; Astafiev et al., 2003; Connolly, Andersen, & Goodale, 2003), and the medial intraparietal cortex (mIPS; Hinkley, Krubitzer, Padberg, & Disbrow, 2009; Grefkes & Fink, 2005; Prado et al., 2005; Grefkes, Ritzl, Zilles, & Fink, 2004). Grefkes et al. (2004) identified a human homologue of MIP using on-screen cursor movements, as had been employed in a study on monkeys (Eskandar & Assad, 1999, 2002). They found increased mIPS activity for a condition requiring visuomotor transformation compared with a control condition that did not require this type of transformation. However, it is not yet clear which reference frame this region uses for the visual representation of a presented target, because conventional univariate fMRI analysis reveals only overall increases in activation within a region and cannot resolve the directional selectivity of movements, mainly because of its limited spatial resolution. One possibility is that the reference frame is based on the on-screen cursor (visual coordinates); the other is that it is centered on the hand used to manipulate the cursor (body-centered coordinates).
To decide between these possibilities, this study investigated the reference frame used in PPC using fMRI multivoxel pattern analysis (MVPA; Norman, Polyn, Detre, & Haxby, 2006; Haynes & Rees, 2005; Kamitani & Tong, 2005). Reference frames were investigated in previous studies using a univoxel fMRI analysis that exploited the contralateral response property with respect to visual hemifield or the hand used for moving: for example, a greater response to visual stimuli displayed in the contralateral visual field was observed in a part of the PPC (Beurze, de Lange, Toni, & Medendorp, 2007; Medendorp, Goltz, & Vilis, 2005, 2006; Medendorp, Goltz, Crawford, & Vilis, 2005; Medendorp, Goltz, Vilis, & Crawford, 2003; Merriam, Genovese, & Colby, 2003). However, this method is only applicable to regions that have a clear contralateral response bias. In contrast, the MVPA can be applied to regions that lack these properties, as this type of analysis evaluates differences in the spatial pattern of activated voxels rather than overall differences in amplitudes of fMRI signals among conditions. The MVPA extracts information contained in multivoxel activity using a pattern classification method, which enables investigation of neuronal selectivity beyond the conventional univoxel analysis, and it can thereby reveal reference frames more directly.
In our experiment, subjects performed a delayed visually guided reaching task using a left–right reversed mouse (i.e., moving the mouse to the right moved the cursor to the left and vice versa). This allowed us to dissociate the two coordinate systems (visual vs. body centered). We then used MVPA to reveal which reference frame (i.e., visual or body-centered coordinates) is used to encode target or movement direction for visually guided reaching in the human PPC.
Subjects were 16 volunteers (seven men and nine women) with a mean age of 28.3 years (range = 22–34 years). No volunteer exhibited excessive head movement (over 2 mm) during scanning. All subjects were right-handed, as assessed using the improved version of the Edinburgh Handedness Inventory (Oldfield, 1971). Written informed consent was obtained from all subjects in accordance with the Declaration of Helsinki. The experimental protocol received approval from the ethics committee of the Advanced Telecommunication Research Institute, Japan.
We used a delayed visually guided reaching paradigm that temporally separates the target presentation and movement execution periods (Figure 1A). First, a target (a small red square) was presented for 250 msec, either to the left or to the right (5° in visual angle) of a fixation point on a screen. This was immediately followed by presentation of a mask image for another 250 msec. After a memory delay period of 9.1 sec, during which subjects maintained fixation, a mouse cursor (a small blue square) was presented centrally on screen. The subjects then moved the cursor with their right hand to the position where the target had previously been displayed. The cursor, together with the left and right targets, was displayed for 7.2 sec. During the intertrial interval, the fixation point was displayed for 9.6 sec, and subjects moved their hands back to the original position. To perform the task, participants first used a normal mouse (normal condition) in two sessions and then the left–right reversed mouse (reversed condition) in another two sessions. Subjects were explicitly told that the cursor motion was left–right reversed immediately before the start of the reversed sessions. Each session consisted of 30 trials, with an equal number of left and right target presentations given in a pseudorandom order. Before the scanning, participants underwent a brief practice session in which they used only a normal mouse, while lying in the MRI scanner, to become accustomed to performing the task. Experimental stimuli, controlled by Presentation software (Neurobehavioral Systems, Albany, CA), were presented on a liquid crystal display and projected onto a custom-made viewing screen. Subjects lay supine in the scanner with the head immobilized and viewed the screen via a mirror. The trial onset was synchronized with the fMRI acquisitions, and 11 functional images were acquired within each trial.
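As a consistency check, the event durations above sum to exactly 11 functional volumes per trial at the scanner's repetition time; this can be verified with a short sketch (values are taken from the text; the variable names are ours):

```python
# Event durations from the paradigm description (seconds)
TARGET, MASK, DELAY, MOVE, ITI = 0.25, 0.25, 9.1, 7.2, 9.6
TR = 2.4  # repetition time (see the scanning parameters)

trial_len = TARGET + MASK + DELAY + MOVE + ITI  # 26.4 s per trial
n_volumes = trial_len / TR
assert abs(n_volumes - 11) < 1e-9  # 11 functional images per trial
```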
Cartesian coordinates of the on-screen cursor were recorded at 50 Hz. The initial distance between the target and the cursor was 306 pixels (corresponding to 5° in visual angle), and the size of the target and the cursor was 40 × 40 pixels. A trial was scored as correct when the cursor entered the target window (50 × 50 pixels; 10 pixels larger than the target) within the movement period. Trials in which subjects incorrectly reached for the opposite target were defined as error trials and were excluded from further analysis. We then calculated two behavioral measures: RT, defined as the time between the appearance of the cursor and movement onset, and movement time, defined as the time from cursor presentation to successful reaching of the target window.
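A minimal sketch of how these measures can be derived from the 50-Hz cursor log (the function name, the array layout, and the 5-pixel movement-onset threshold are our own illustrative assumptions, not specified in the text):

```python
import numpy as np

SAMPLE_HZ = 50    # cursor logging rate
WINDOW_HALF = 25  # half of the 50 x 50-pixel target window

def behavioral_measures(xy, target_xy, onset_thresh=5.0):
    """RT and movement time from one trial's cursor trace.

    xy        : (n, 2) cursor positions sampled at 50 Hz,
                starting at cursor presentation
    target_xy : (2,) center of the target
    Returns (rt_ms, mt_ms), or (None, None) for no movement / no hit.
    """
    xy = np.asarray(xy, dtype=float)
    dt_ms = 1000.0 / SAMPLE_HZ
    # movement onset: first sample displaced from the start position
    # by more than the (assumed) threshold
    disp = np.linalg.norm(xy - xy[0], axis=1)
    moved = np.nonzero(disp > onset_thresh)[0]
    # target hit: cursor inside the 50 x 50-pixel window
    inside = np.all(np.abs(xy - np.asarray(target_xy)) <= WINDOW_HALF,
                    axis=1)
    hit = np.nonzero(inside)[0]
    if moved.size == 0 or hit.size == 0:
        return None, None
    return moved[0] * dt_ms, hit[0] * dt_ms
```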
A 3-T Siemens Trio scanner (Erlangen, Germany) with a 12-channel head coil was used to perform T2*-weighted EPI. A total of 335 scans were acquired with a gradient-echo EPI sequence. The first four scans were discarded to allow for T1 equilibration. Scanning parameters were as follows: repetition time = 2400 msec, echo time = 30 msec, flip angle = 80°, field of view = 192 × 192 mm, matrix = 64 × 64, 40 axial slices, slice thickness = 3 mm without gap, and voxel size = 3 × 3 × 3 mm. T1-weighted anatomical imaging with a MP-RAGE sequence was performed with the following parameters: repetition time = 2250 msec, echo time = 3.06 msec, flip angle = 9°, field of view = 256 × 256 mm, matrix = 256 × 256, 192 axial slices, slice thickness = 1 mm without gap, and voxel size = 1 × 1 × 1 mm.
Preprocessing and Modeling of fMRI Data
Image preprocessing was performed using SPM5 software (Wellcome Department of Cognitive Neurology, London, UK; www.fil.ion.ucl.ac.uk/spm). All functional images were first corrected for slice timing and then realigned to adjust for motion-related artifacts. The realigned images were spatially normalized to the Montreal Neurological Institute template, based on spatial transformations derived from the coregistered T1-weighted anatomical images, and resampled into 3 × 3 × 3 mm voxels with sinc interpolation. Unless otherwise noted, all spatial localizations are given in Montreal Neurological Institute coordinates. All images were spatially smoothed using a Gaussian kernel of 8 × 8 × 8 mm FWHM. Smoothing was not performed for the data used for the MVPA, as it could blur the fine-grained information contained in the multivoxel activity (Mur, Bandettini, & Kriegeskorte, 2009).
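As an aside on the smoothing step, an 8-mm FWHM kernel corresponds to a Gaussian sigma of FWHM / √(8 ln 2); a sketch of the equivalent operation with SciPy (the helper name is ours; SPM performs this step internally):

```python
import math
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 8.0   # smoothing kernel width, as in the paper
VOXEL_MM = 3.0  # isotropic voxel size after resampling

def smooth_volume(vol):
    """Gaussian smoothing with an 8-mm FWHM kernel on a 3-mm grid.
    FWHM relates to the Gaussian sigma by FWHM = sigma * sqrt(8 ln 2)."""
    sigma_vox = FWHM_MM / (VOXEL_MM * math.sqrt(8.0 * math.log(2.0)))
    return gaussian_filter(vol, sigma_vox)
```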
Using the general linear model, the 30 trials per session were modeled independently. Each task period within a trial (target presentation, delay, and movement execution, hereafter referred to as “TARGET,” “DELAY,” and “MOVE,” respectively) was modeled as a separate regressor. TARGET and MOVE were modeled as delta functions, whereas DELAY was a box-car function whose width equaled the interval between target presentation and the movement period. These regressors were then convolved with a canonical hemodynamic response function. The six head-movement parameters estimated by SPM during realignment were modeled as confounding covariates. Low-frequency noise was removed using a high-pass filter with a cutoff period of 128 sec, and serial correlations among scans were estimated with an autoregressive model implemented in SPM5. This analysis yielded 30 independently estimated parameters (beta values) for each task period, for a total of 90 parameters per session. These parameters were subsequently used as inputs for the decoding analysis.
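The per-trial regressor construction can be sketched as follows (a simplified illustration: events are sampled on the scan grid rather than SPM's finer microtime grid, and the double-gamma function below only approximates SPM's canonical HRF):

```python
import math
import numpy as np

TR = 2.4  # repetition time in seconds

def _gamma_pdf(t, shape):
    """Gamma(shape, scale=1) density, used to build the HRF."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] ** (shape - 1) * np.exp(-t[pos]) / math.gamma(shape)
    return out

def canonical_hrf(tr=TR, duration=32.0):
    """Simplified double-gamma HRF: a response peaking near 5 s minus
    a late undershoot, normalized to unit absolute sum."""
    t = np.arange(0.0, duration, tr)
    h = _gamma_pdf(t, 6.0) - _gamma_pdf(t, 16.0) / 6.0
    return h / np.abs(h).sum()

def trial_design(n_scans, target_scan, move_scan, tr=TR):
    """TARGET and MOVE as delta functions, DELAY as a box-car spanning
    the interval between them, each convolved with the HRF."""
    X = np.zeros((n_scans, 3))
    X[target_scan, 0] = 1.0                # TARGET delta
    X[target_scan + 1:move_scan, 1] = 1.0  # DELAY box-car
    X[move_scan, 2] = 1.0                  # MOVE delta
    h = canonical_hrf(tr)
    return np.column_stack(
        [np.convolve(X[:, j], h)[:n_scans] for j in range(3)])
```

With a TR of 2.4 sec, the convolved TARGET regressor peaks about two scans (roughly 5 sec) after the event, reflecting the hemodynamic lag.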
Definition of ROIs
We defined two ROIs in the PPC related to visually guided hand movements: the bilateral mIPS in the superior parietal lobule (SPL; left [L]: [−29, −51, 58], right [R]: [31, −54, 57]) and V6A/PO, located medially in the parieto-occipital sulcus and precuneus (L: [−5, −80, 40], R: [1, −69, 42]). Their centers were defined as the mean coordinates across previous fMRI studies that used arm/hand reaching tasks (mIPS from Prado et al., 2005, and Grefkes et al., 2004; V6A/PO from Fernandez-Ruiz et al., 2007, Astafiev et al., 2003, and Connolly et al., 2003). The hand region of the primary motor area (Mot) was also selected (L: [−37, −22, 58], R: [38, −20, 58]; Hanakawa, Parikh, Bruno, & Hallett, 2005). The visual cortex (Vis; L: [−30, −78, −6], R: [27, −72, −12]) was defined as the region in the occipital cortex showing the greatest activation when a target was presented in the contralateral visual field (p < .001, uncorrected). All ROIs were defined as 12-mm radius spheres centered at each coordinate. The mIPS ROIs, which include the medial part of the intraparietal sulcus, were additionally masked anatomically by the SPL using WFU PickAtlas (fmri.wfubmc.edu/cms/software) to exclude regions extending into the inferior parietal lobule. Given that the right-handed participants used their dominant hand to perform the task, our primary interest is in the ROIs of the left hemisphere, but we also analyzed the right ROIs for comparison.
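A 12-mm spherical ROI on a 3-mm voxel grid can be constructed roughly as follows (the function name and the voxel-to-world affine convention are our own; with these parameters the mask contains 257 voxels, consistent with the roughly 250 voxels per ROI noted in the MVPA section):

```python
import numpy as np

def sphere_mask(center_mm, affine, shape, radius_mm=12.0):
    """Boolean mask of voxels within radius_mm of center_mm.

    center_mm : ROI center in world (mm) coordinates
    affine    : (4, 4) voxel-to-world transform
    shape     : 3-D grid shape of the volume
    """
    ii, jj, kk = np.indices(shape)
    # homogeneous voxel indices -> world (mm) coordinates
    vox = np.stack([ii, jj, kk, np.ones(shape)], axis=-1)
    world = vox @ affine.T
    d = np.linalg.norm(world[..., :3] - np.asarray(center_mm), axis=-1)
    return d <= radius_mm
```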
fMRI Univariate Analysis
We first analyzed activated regions across the whole brain with conventional univoxel analysis using SPM, which allowed direct comparison of activity between the two directions. Activation was thresholded at p < .001, uncorrected for multiple comparisons at the voxel level, with an extent threshold of 20 voxels, unless otherwise indicated. We then analyzed the averaged signal time course of each ROI for the two directions (left vs. right), in either visual or body-centered coordinates, after trial onset (target presentation). Signal intensities were normalized as percent signal changes with zero mean within each scanning session. A two-way repeated-measures ANOVA, with 11 time points and 2 directions as within-subject factors, was performed to determine the effect of movement direction on the overall signal increase within the ROIs.
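The normalization step amounts to expressing each time course as a zero-mean percent change within a session; a one-line sketch (the function name is ours):

```python
import numpy as np

def percent_signal_change(ts):
    """Express a time course as percent change about its session mean,
    giving the zero-mean normalized signal described above."""
    ts = np.asarray(ts, dtype=float)
    return 100.0 * (ts - ts.mean()) / ts.mean()
```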
The MVPA classification was performed with a linear support vector machine implemented in LIBSVM (www.csie.ntu.edu.tw/∼cjlin/libsvm/), with default parameters (a fixed regularization parameter C = 1). The parameter estimates (betas) of the voxels within each ROI were z-normalized within each trial and then used as inputs to the classifier. For voxel selection, t values of the betas during the task block compared with the rest period were estimated; only training runs were used for this selection to avoid nonindependence errors (Kriegeskorte, Simmons, Bellgowan, & Baker, 2009). Voxels in each ROI were then selected in descending order of the training-run t values, based on the univariate analysis, until 200 voxels were reached for each subject (Pereira, Mitchell, & Botvinick, 2009). Each spherical ROI contained approximately 250 voxels in total, which meant that about 80% of the activated voxels were selected for the MVPA. The mIPS ROIs were additionally masked anatomically with the SPL, so they had fewer than 200 voxels (an average of 169 and 173 voxels for the left and right, respectively). We first used brain activity patterns (beta values) from 30 trials pseudorandomly sampled (with replacement) from the 60 trials of the normal sessions to train the classifier to decode movement direction (left or right). We then used activations from 30 trials pseudorandomly chosen (with replacement) from the 60 trials of the reversed sessions as inputs to the decoder to test whether the predicted direction corresponded to the actual movement direction in visual or in body-centered coordinates (Figure 1B). This procedure was repeated 15,000 times for each ROI, and the averaged decoding accuracy was estimated. A two-sided t test was used to determine whether the observed decoding accuracy was significantly higher than chance (50%), with the intersubject difference treated as a random factor (degrees of freedom [df] = 14).
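The training/testing scheme can be sketched as follows (an illustrative simplification: scikit-learn's libsvm-based SVC stands in for LIBSVM, the voxel-selection score is a one-sample t across training trials as a proxy for the paper's task-versus-rest t values, and far fewer bootstrap resamples are drawn than the 15,000 used in the study):

```python
import numpy as np
from sklearn.svm import SVC

def decode_reversed(train_X, train_y, test_X, test_y,
                    n_voxels=200, n_boot=100, seed=0):
    """Bootstrap decoding in the spirit of the analysis above.

    train_* : betas/labels from the normal sessions (trials x voxels)
    test_*  : betas from the reversed sessions, labelled in the
              candidate reference frame (visual or body-centered)
    Returns the mean accuracy over bootstrap resamples.
    """
    rng = np.random.default_rng(seed)
    train_y, test_y = np.asarray(train_y), np.asarray(test_y)
    # z-normalize the multivoxel pattern within each trial
    z = lambda X: (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    train_X, test_X = z(np.asarray(train_X)), z(np.asarray(test_X))
    # activation-based voxel selection from training data only
    # (one-sample t across training trials, a stand-in for the
    # paper's task-vs-rest t values); keep the n_voxels highest
    n = len(train_X)
    t = np.abs(train_X.mean(0)) / (train_X.std(0) / np.sqrt(n) + 1e-12)
    keep = np.argsort(t)[-n_voxels:]
    accs = []
    for _ in range(n_boot):
        # resample 30 trials with replacement for training and testing
        tr = rng.choice(len(train_y), 30, replace=True)
        te = rng.choice(len(test_y), 30, replace=True)
        clf = SVC(kernel="linear", C=1.0)  # linear SVM, C = 1
        clf.fit(train_X[tr][:, keep], train_y[tr])
        accs.append(clf.score(test_X[te][:, keep], test_y[te]))
    return float(np.mean(accs))
```

Running the same routine with normal-session data for both training and testing corresponds to the within-normal analysis reported below.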
One subject was excluded from further analysis because of poor performance (the percentage of correct trials was only 69.2%). The remaining subjects performed almost perfectly, each with over 95% accuracy and an average accuracy of 99.2%. The accuracy (SD) of normal and reversed trials was 99.3% (1.8%) and 99.1% (1.8%), respectively; a paired t test showed no significant difference between them [t(14) = 0.36, p = .72]. Figure 2 presents the raw trajectories of one representative subject in the normal (Figure 2A) and reversed sessions (Figure 2B). The averaged RTs (SD) across subjects (in msec) were 522.5 (93.9) and 518.0 (110.4) for Sessions 1 and 2 (normal condition) and 520.0 (126.0) and 526.7 (121.7) for Sessions 3 and 4 (reversed condition). A repeated-measures ANOVA showed no significant difference in RT across sessions [F(3, 42) = 0.08, p > .05]. The averaged movement times (SD) across subjects (in msec) were 1006.6 (234.9) and 931.9 (191.1) for Sessions 1 and 2 (normal condition) and 1146.5 (416.7) and 1059.1 (309.5) for Sessions 3 and 4 (reversed condition). A repeated-measures ANOVA showed a significant difference in movement time across sessions [F(3, 42) = 4.03, p < .05]. Post hoc analysis with Ryan's method showed a statistically significant increase in movement time only when Session 3 was compared with Session 2 (p < .05).
The Univoxel Analysis in the Whole Brain
We first analyzed activated regions in the whole brain with conventional univoxel analysis within the three task periods (TARGET, DELAY, and MOVE; Figure 3). During the TARGET period, significantly increased activations were found bilaterally in the dorsal visual pathway, including the PPC, together with the temporal cortex (Figure 3, left column), whereas no significant activations were found in these regions during the DELAY period (Figure 3, middle column). During the MOVE period, a significant increase in activation was observed mainly in left-lateralized sensorimotor areas, together with the parietal and occipital regions of both hemispheres (Figure 3, right column).
We then performed direct comparisons between the two target directions (left vs. right) and found significant differences only in the occipital regions for all task periods (Figure 4A–C). During the TARGET period, significantly greater activity was observed in the ventral part of the occipital cortex contralateral to the visual field in which the stimulus was displayed: the left [(−30, −78, −6), number of voxels = 29] and the right [(27, −72, −12), number of voxels = 22] hemisphere for the right and left target, respectively (Figure 4A). Increased activation was also found for the left versus right target during the DELAY period in the lateral occipital region [(−42, −75, −12), number of voxels = 50] (Figure 4B), and for the right versus left target during the MOVE period in the dorsal occipital region [(−27, −93, 15), number of voxels = 24] (Figure 4C). When the two movement directions (left vs. right) were compared, significant activity was found in small clusters near the posterior cingulate cortex [(−9, −51, 36), number of voxels = 28] and in the cerebellum [(15, −84, −36), number of voxels = 21] for right versus left movements during the TARGET period (Figure 4D). We also found significant activations in the SPL, peaking at the precuneus [(12, −66, 66), number of voxels = 365], and in the premotor cortex [(45, 3, 54), number of voxels = 21] of the right hemisphere for left versus right movements during the DELAY period (Figure 4E).
Finally, comparisons between the normal (Sessions 1 and 2) and reversed (Sessions 3 and 4) trials, collapsed over the three events, revealed significantly greater activity only in a small area near the posterior cingulate cortex [(−15, −45, 39), number of voxels = 22] for normal compared with reversed sessions (Figure 4F); the opposite contrast revealed no significant activation.
The Averaged Signal Time Course in the ROIs
Figure 5 shows the averaged signal time course of each ROI after trial onset (target presentation) in the left (A) and right (B) hemispheres for the two directions (left vs. right) and the two conditions (normal and reversed). The same data are replotted in Figure 6 (left hemisphere) and Figure 7 (right hemisphere) in either visual (A) or body-centered (B) coordinates, collapsed over conditions. In the left hemisphere, a significant difference between the two directions was found in the Vis ROI (p < .001), which showed greater activation when the stimuli (target and moving cursor) were displayed in the contralateral visual field, taking into account the fMRI hemodynamic response delay of 5–6 sec. There were no significant differences in body-centered coordinates. The mIPS ROI showed no significant difference in averaged signal intensity between the two directions in either coordinate system during the target presentation or delay periods, whereas a significant difference (p < .001) was seen in visual coordinates after the movement period (16.8–21.6 sec after trial onset). The V6A and Mot ROIs showed no significant difference between directions in either coordinate system (Figure 6). In the right hemisphere, we found a significant difference in the Vis ROI, indicating contralateral dominance in visual coordinates, as had been observed in the corresponding ROI of the left hemisphere. No significant differences between directions were found in the other ROIs (V6A, mIPS, and Mot) in either coordinate system (Figure 7).
We separately estimated the decoding accuracies for movement directions within the three task periods (TARGET, DELAY, and MOVE) to reveal the reference frame (visual or body-centered coordinates) used in each ROI. We first performed the MVPA using data from the normal sessions only, pseudorandomly choosing 30 of the 60 trials each for training and testing (Figure 8). For the left hemisphere (Figure 8A), significant decoding accuracies were found during all task periods in the mIPS [TARGET, t(14) = 2.95, p < .05; DELAY, t(14) = 6.84, MOVE, t(14) = 6.10, both p < .01] and Vis [TARGET, t(14) = 2.56, p < .05; DELAY, t(14) = 5.33, MOVE, t(14) = 7.09, both p < .01], and during the DELAY and MOVE periods in V6A [DELAY, t(14) = 5.05, MOVE, t(14) = 8.91, both p < .01] and Mot [DELAY, t(14) = 5.19, MOVE, t(14) = 3.92, both p < .01]. In the right hemisphere (Figure 8B), similar patterns of decoding accuracy were observed.
We next used data from the normal sessions, in which the visual and body-centered coordinates coincided, to train the decoder. We then fed in data from the reversed sessions, in which the two coordinates were dissociated, and compared decoding accuracy for each coordinate system (Figure 9). For the left hemisphere (Figure 9A), in the TARGET period, significant decoding accuracies in visual coordinates were observed in Vis [t(14) = 3.64, p < .01] and in V6A [t(14) = 2.20, p < .05], whereas mIPS showed a significant decoding accuracy in body-centered coordinates [t(14) = 2.81, p < .05]. During the DELAY period, significant decoding accuracies in body-centered coordinates were found in mIPS [t(14) = 3.25, p < .01], in V6A [t(14) = 2.74, p < .05], and in Mot [t(14) = 2.66, p < .05]. Finally, in the MOVE period, we saw significant decoding accuracies in Mot in body-centered coordinates [t(14) = 2.63, p < .05] and in Vis in visual coordinates [t(14) = 2.61, p < .05]. Figure 10 shows the decoding accuracies of individual subjects in the left mIPS for normal (A) and reversed (B) trials: 12 and 11 of the 15 participants had above-chance decoding accuracy for body-centered coordinates during the TARGET and DELAY periods, respectively, in the reversed trials. For the ROIs in the right hemisphere (Figure 9B), we found significant accuracy in visual coordinates in the Vis ROI in both the TARGET [t(14) = 2.28, p < .05] and MOVE [t(14) = 2.40, p < .05] periods, similar to what had been observed in the left hemisphere. No other ROI (mIPS, V6A, or Mot) showed significant accuracy during any period.
The current study used MVPA to determine which reference frame (visual or body-centered coordinates) is used to encode target or movement direction for visually guided reaching in the human PPC.
First, conventional univariate analysis revealed directional selectivity only in the occipital cortex when the two target directions were compared. For the comparison of the two movement directions, large activations were found in the right SPL and premotor areas for left versus right movements during the delay period. The right SPL is well known for its dominant role in spatial attention (Corbetta, Miezin, Shulman, & Petersen, 1993), and the right parieto-frontal areas constitute a dorsal attentional network for controlling top–down attention (Corbetta & Shulman, 2002). In this study, top–down attention directed to the hemifield contralateral to the upcoming movement may have induced overall increases in activation within these areas. No significant directional difference in activation was found in the left PPC, contralateral to the hand used. The averaged signal intensities likewise showed no significant directional sensitivity within the mIPS during target presentation or the delay period. In contrast, MVPA revealed significantly above-chance decoding accuracy in the left mIPS for body-centered coordinates, both when the target was initially presented and during the delay period before actual movement. This finding indicates that the human mIPS encodes the position of a target or the movement direction for visually guided reaching in a body-centered reference frame.
It should be noted that the univariate analysis, both in the whole-brain analysis and in the signal time courses, revealed no increase in overall activation in the mIPS during the delay period. Successful decoding without an increase in overall signal amplitude has been reported in previous MVPA studies (Harrison & Tong, 2009; Soon, Brass, Heinze, & Haynes, 2008). Harrison and Tong (2009) showed successful classification of visual working memory content during a delay period using activity in the early visual areas, even when participants showed no increase in overall activation from baseline. Soon et al. (2008) successfully decoded participants' free-decision content (right or left button press) from activity in the prefrontal and parietal cortex without an increase in overall signal amplitude. These findings are consistent with our results: because MVPA exploits differential activation patterns across many voxels, successful decoding need not be accompanied by an increase in the averaged signal within a region.
The mIPS and Body-centered Coordinates
The current results revealed that the mIPS encodes a visually presented target or movement direction in body-centered coordinates upon a brief presentation of the target, well before any movement execution. This property contrasts with the results obtained from the early visual areas and V6A/PO, which showed significant decoding accuracies in visual coordinates upon target presentation. Furthermore, no significant decoding accuracy was observed in the mIPS during the later movement period, when the subjects were actually moving their hand to reach the target. This indicates that the successful decoding in the mIPS does not result from kinesthetic or proprioceptive signals accompanying hand movements but is related to the encoding and retention of the spatial position of the target to be reached. In contrast, for the primary motor area, significant decoding accuracies were observed during the delay and movement-execution periods, but not during target presentation, confirming its dominant role in motor control rather than in perception. Eye movements could be a possible confound, especially during the response period, in which the cursor was moving on screen. However, two reasons suggest that eye movements were unlikely to affect our MVPA results. First, body-centered coordinates in the mIPS were observed during target presentation and the delay period, not during the response period, which is when eye movements were most likely to occur. Second, the direction of cursor motion on the display was dissociated from that of the hand movements in the reversed sessions; if the decoder's output had been influenced by eye movements, the predicted direction should have been in visual coordinates, not in body-centered coordinates. Eye movements, therefore, were probably not a confounding factor in our results.
Previous neurophysiological studies in monkeys revealed that MIP neurons are selective for the direction of hand/arm movements, not for that of an on-screen cursor (Eskandar & Assad, 1999, 2002). In agreement with the human fMRI study by Grefkes et al. (2004), the current study further supports the homology between the human mIPS and the monkey MIP by showing that the two share the same directional selectivity with respect to body-centered coordinates.
The V6A/PO and Visuomotor Transformation
V6A/PO showed a characteristic pattern of decoding accuracy that combined features of the early visual areas and the mIPS: significant decoding accuracy was observed in visual coordinates at target presentation, as in the early visual areas, together with significant accuracy in body-centered coordinates during the delay period, as in the mIPS. For the human parietal reach region, Fernandez-Ruiz et al. (2007) compared right versus left targets for visually guided movements using fMRI with a conventional subtraction design. They found that V6A/PO uses visual, not body-centered, coordinates in visually guided reaching, which suggests that the visuomotor transformation from visual to body-centered coordinates is completed downstream of V6A/PO within a parieto-premotor pathway. The current results replicated this finding, as we saw significant decoding accuracy in visual coordinates during the target presentation period (Figure 9A). Our MVPA further revealed, however, that V6A/PO uses body-centered coordinates during the delay period, when visual information is unavailable. Taken together, these results indicate that V6A/PO occupies an intermediate stage of visuomotor transformation, located between the occipital cortex and the PPC.
The PPC and Deficits in Visually Guided Reaching
Neuropsychological studies have described deficits in visually guided reaching in patients with optic ataxia (OA; Balint, 1909). The cause of OA is considered to be an impairment in the visuomotor transformation of the retinal input specifying the position of the target to be reached into a body-centered reference frame. Lesions in the mIPS as well as in the SPL appear to be the main cause of OA (Roy, Stefanini, Pavesi, & Gentilucci, 2004; Perenin & Vighetto, 1988). A case study of a PPC lesion, as well as virtual-lesion studies using TMS, has also indicated that the PPC along the IPS is involved in the on-line correction of visually guided movements (Grea et al., 2002; Desmurget et al., 1999). In monkeys, lesions of MIP likewise caused reaching impairments based on proprioceptive information about body location (Rushworth, Nixon, & Passingham, 1997). The current finding supports the view that impaired visuomotor transformation in the mIPS causes the deficits in visually guided reaching observed in OA.
Related Studies on Reference Frames in Humans
Some previous fMRI studies have used univoxel analysis to investigate reference frames for eye or hand movements in humans (Beurze et al., 2007; Medendorp et al., 2003, 2006; Medendorp, Goltz, Crawford, et al., 2005; Medendorp, Goltz, & Vilis, 2005; Merriam et al., 2003). These studies exploited the contralateral response property of cortical regions with respect to the visual hemifield or the hand used. For example, this method successfully revealed that saccadic eye movements are dynamically remapped in visual coordinates in the PPC (Medendorp et al., 2003; Merriam et al., 2003). However, this approach is only applicable to regions that have contralateral response properties. In contrast, MVPA can analyze differences in the spatial patterns of voxel activity rather than overall signal increases within a region. The use of MVPA in our study therefore revealed a body-centered reference frame in the mIPS, even though this region showed comparable overall activation amplitudes for the two directions in the univoxel analysis.
Recent studies have used fMRI adaptation, or the repetition suppression paradigm (Grill-Spector & Malach, 2001), to reveal coordinate systems for visually guided movements (Bernier & Grafton, 2010; Van Pelt, Toni, Diedrichsen, & Medendorp, 2010). Repetition suppression is another way of probing neuronal selectivity beyond voxel resolution, similar to MVPA. However, considering the largely unknown neurophysiological underpinnings and time scales of adaptation, as well as the attentional confounds arising from novelty or mismatch effects in repeated presentation, MVPA may be a more direct way of revealing neuronal selectivity (Bartels, Logothetis, & Moutoussis, 2008; Logothetis, 2008; Sawamura, Orban, & Vogels, 2006). Our study showed that MVPA can be used to reveal coordinate systems in the human brain, as previously shown in neurophysiological studies with monkeys (Colby, 1998).
Limitations of the Current Study
One caveat concerning MVPA is that a lack of significance in decoding accuracy does not establish that neurons in the local region are unselective for those coordinates. With its neural underpinnings still unclear, the decoding accuracy of MVPA is considered to depend on the spatial patterns of distinct neuronal populations and/or the accompanying vascular units, together with the spatial resolution of fMRI (Gardner, 2010; Bartels et al., 2008). Furthermore, because it relies on the left–right reversed mouse, the current experimental manipulation cannot identify a region that jointly uses visual and body-centered coordinates. If an area contains both visual- and body-centered-coordinate neurons, its decoding accuracy would be lower under our dissociation paradigm. This may partially explain why decoding accuracy was lower for the classification that used normal sessions for training and reversed sessions for testing than for the analysis that used only normal trials. Recent neurophysiological studies have reported that multiple reference frames can be represented in the same region (e.g., Crowe, Averbeck, & Chafee, 2008; Batista et al., 2007). Further investigations are needed to clarify the coordinate systems used in the PPC, possibly with improved fMRI scanning methods that achieve higher spatial resolution or with more sophisticated classification algorithms.
The current study used fMRI together with MVPA to reveal the reference frame used in the human PPC to encode the target position or movement direction for visually guided reaching. A significantly above-chance decoding accuracy was noted in the mIPS for body-centered coordinates both when the target was initially presented and also during the delay period before actual movement. Our findings indicate that the mIPS uses a body-centered reference frame to encode the position of a target or movement direction, which is crucial for visually guided reaching.
Reprint requests should be sent to Kenji Ogawa, ATR, Cognitive Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan, or via e-mail: firstname.lastname@example.org.