Reaching movements require the integration of both somatic and visual information. These signals can have different relevance, depending on whether reaches are performed toward visual or memorized targets. We tested the hypothesis that under such conditions, that is, depending on target visibility, posterior parietal neurons integrate somatic and visual signals differently. Monkeys were trained to execute both types of reaches from different hand resting positions and in total darkness. Neural activity was recorded in Area 5 (PE) and analyzed by focusing on the preparatory epoch, that is, before movement initiation. Many neurons were influenced by the initial hand position, and most of them were further modulated by target visibility. For the same starting position, we found a prevalence of neurons whose activity differed depending on whether the hand movement was performed toward memorized or visual targets. This result suggests that posterior parietal cortex integrates available signals in a flexible way based on contextual demands.
Reaching a visual target requires the integration of both visual and somatic signals, as a prerequisite for calculating the motor error, that is, the difference between the initial and final hand positions. This vector can be computed differently in the brain, depending on available sensory information, task constraints, and cognitive demands (Hadjidimitrakis, Bertozzi, Breveglieri, Fattori, & Galletti, 2014; Ferraina, Battaglia-Mayer, Genovesio, Archambault, & Caminiti, 2009; Crawford, Medendorp, & Marotta, 2004; Battaglia-Mayer, Caminiti, Lacquaniti, & Zago, 2003; Henriques, Klier, Smith, Lowy, & Crawford, 1998; Soechting & Flanders, 1992). Target position is usually obtained through vision, and subjects make minimal errors when reaching to both visual and memorized targets, in either foveal or extrafoveal positions (see Battaglia-Mayer et al., 2003, for a review). The initial hand position can be derived from both retinal and proprioceptive information. However, in total darkness and during many daily motor activities in which the eye is away from the hand's action space, as for example during driving (Land & Lee, 1994), somatic signals become an important source of information on hand position (Sober & Sabes, 2003; Rossetti, Desmurget, & Prablanc, 1995). In total darkness, or when the visual background is poor, subjects rely more on somatic signals than on what they see (Mon-Williams, Wann, Jenkinson, & Rushton, 1997).
Several neurophysiological studies in nonhuman primates have shown that, for motor planning and performance, posterior parietal cortex (PPC) plays a key role in encoding arm somatic signals and in their integration with visual information about target location (McGuire & Sabes, 2009; Marzocchi, Breveglieri, Galletti, & Fattori, 2008; Andersen & Buneo, 2002; Colby & Goldberg, 1999; Caminiti, Ferraina, & Mayer, 1998; Lacquaniti, Guigon, Bianchi, Ferraina, & Caminiti, 1995; Ferraina & Bianchi, 1994). It is believed that this combination of information is necessary to build a body-centered representation of the target location (Chang & Snyder, 2010; Ferraina, Battaglia-Mayer, et al., 2009; Ferraina, Brunamonti, et al., 2009; Buneo, Batista, Jarvis, & Andersen, 2008; Cohen & Andersen, 2002; Lacquaniti et al., 1995). Although hand-centered frames are more common in the rostral than in the more caudal regions of PPC, neurons modulated by hand position (Georgopoulos, Caminiti, & Kalaska, 1984) have been described in most of PPC (Piserchia et al., 2016; Hadjidimitrakis et al., 2014; Buneo & Andersen, 2012; Ferraina, Brunamonti, et al., 2009; Battaglia-Mayer, Mascaro, Brunamonti, & Caminiti, 2005; Battaglia-Mayer et al., 2000, 2001, 2003; Ferraina et al., 1997), suggesting that the computation of motor error emerges from the combinatorial operations of multiple parallel mechanisms. Therefore, it is conceivable that the way somatic and/or external signals are encoded by PPC and the contribution of each area to the motor plan at a given moment will depend on the behavioral context in which a movement is performed (see Angelaki & DeAngelis, 2009; McGuire & Sabes, 2009; Mountcastle, Lynch, Georgopoulos, Sakata, & Acuna, 1975).
The study of reaching performed in light and dark conditions, for example, has revealed that visual information about the action space and/or arm position can influence the way somatic signals are encoded in several PPC areas (Shi, Apker, & Buneo, 2013; Buneo & Andersen, 2012; Bosco, Breveglieri, Chinellato, Galletti, & Fattori, 2010; Ferraina et al., 1997). Behaviorally, the visibility of the target can influence motor preparation (Heuer & Sangals, 1998): planning a movement to a visually signaled location rather than to a memorized position in total darkness affects motor planning (McIntyre, Stratta, & Lacquaniti, 1997). In this process, reaching neurons in PPC could weigh their contributions differently, depending on the availability of all contextual variables.
To our knowledge, how target visibility influences motor preparation has never been tested with a neurophysiological approach. In this study, we evaluated the influence of target visibility on neuronal activity related to initial hand position, which is necessary for the computation of the motor error. We trained two monkeys to reach to either visual or memorized extrafoveal targets in total darkness while maintaining fixation on targets located at different distances for the entire duration of the trial. In this way, both the view of the hand and the use of eye-position-on-target signals for encoding the movement vector were prevented. We found that reach-related neurons sensitive to hand position were differently active when monkeys planned reaches to memorized versus visual targets. Our results provide support for the hypothesis that target visibility plays a role in the computation of the motor error.
Animals, Surgery, Recording Methods
Two female rhesus monkeys (Macaca mulatta; monkeys UM and IS; body weight: UM, 7.8 kg; IS, 4.8 kg) were studied using general procedures as previously described (Ferraina, Brunamonti, et al., 2009). Animal care, housing, and surgical procedures were in conformity with the European (Directives 86/609/EEC and 2010/63/EU) and Italian (D.L. 116/92 and D.L. 26/2014) laws on the use of nonhuman primates in scientific research and were approved by the Italian Ministry of Health.
A recording cylinder was implanted in monkeys under general anesthesia (isoflurane/oxygen 1–3% to effect) at known stereotaxic coordinates (monkey UM, P3–L14; monkey IS, P3–L10) to allow extracellular recordings of single-unit activity from superior parietal area PE (Figure 1A; see Ferraina, Brunamonti, et al., 2009, for details of the recording tracks), which is part of Brodmann's area 5. Binocular scleral search coils and a head post were also implanted aseptically during the same surgical session. Single-unit activity was isolated from the extracellular activity visualized by using a dual time–amplitude window discriminator (BAK Electronics, Umatilla, FL). The electrodes used were glass-coated tungsten–platinum fibers (Thomas Recording, Gießen, Germany; 0.8–2 MΩ impedance at 1 kHz).
Experimental Setup and Behavioral Tasks
Monkeys were trained to move the arm contralateral to the recording hemisphere toward extrafoveal targets (TG) located at different distances in depth. Movements were performed in total darkness, and the experimental room was illuminated during the intertrial interval to avoid dark adaptation. To prevent indirect illumination of the arm, both the fixation point (FP) and the TG position were indicated by the tip of an optic fiber, lighted by light-emitting diodes (LEDs), located at the extremities of two robotic arms (CRS Robotics, Burlington, Ontario, Canada). The relative positions in depth of both TG and FP changed across trials, following an intermingled design. Below, we describe the full setup used to control behavior, although only a portion of the available data was selected for analysis, for reasons explained below (see Data Analysis). For monkey UM, FPs were located at 200, 250, and 312 mm from the head, whereas in monkey IS the distances used were 100, 140, and 195 mm (see also Ferraina, Brunamonti, et al., 2009). Extrafoveal reach targets were placed at constant elevation and aligned, both horizontally and vertically, to the shoulder of the performing arm. To place the targets at a comfortable reach distance, different target location arrangements were used for the two monkeys, based on arm length. In monkey UM, we used five targets (5-cm steps, at depths of 13–33 cm from the shoulder), whereas in monkey IS, we used three targets (6-cm steps, at depths of 10–22 cm). For both animals, three push buttons were attached to the chair on the animal's side, 26 cm below the shoulder and at different distances from it (15, 20, 25 cm). They corresponded to the different starting hand positions. To help the monkeys place the hand at the required initial position, each push button was independently illuminated by an LED only at the onset of each trial.
As soon as the button was touched, the LED was switched off to maintain total darkness. Arm position and trajectory were monitored by using an optoelectronic motion capture system (Optotrak 3020 real-time system; Northern Digital) at a 100-Hz sampling rate. To record the hand trajectory and control the movement end point, three optic markers were placed on a bracelet fixed to the monkeys' wrist. The position of the fingers, used to compute the reach end point at the memorized and visual target locations, was calculated from the positions of the three optic markers (see Ferraina, Brunamonti, et al., 2009, for further details). To minimize possible differences between the end points across the two task conditions, monkeys were trained to terminate the reaching movement within a spherical window of 3-cm diameter, centered on the target.
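The spherical acceptance criterion above amounts to a simple Euclidean-distance test. A minimal sketch, with hypothetical function and variable names (coordinates in cm; the 3-cm diameter implies a 1.5-cm radius):

```python
import math

def endpoint_within_window(endpoint, target, diameter_cm=3.0):
    """Return True if the reach end point falls inside the spherical
    tolerance window centered on the target (coordinates in cm)."""
    radius = diameter_cm / 2.0
    # Euclidean distance between 3-D points (math.dist, Python 3.8+)
    return math.dist(endpoint, target) <= radius

# Example: an end point 1 cm off-target is accepted, 2 cm is rejected.
print(endpoint_within_window((0.0, 1.0, 0.0), (0.0, 0.0, 0.0)))  # True
print(endpoint_within_window((0.0, 2.0, 0.0), (0.0, 0.0, 0.0)))  # False
```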
Animals performed two versions of a reaching in depth (RID) task (Figure 1B, C): a visually delayed (RIDvis) and a memory-delayed (RIDmem) paradigm. In the RIDvis task, targets were signaled by a red light kept on throughout the arm movement and until the end of the trial. In the RIDmem task, instead, the target position was only briefly signaled (300 msec), and its location had to be remembered until the end of the reach movement. Thus, in RIDmem trials, monkeys had no visual feedback about the final hand position at the target location. Further details of the experimental controls and performance of RIDmem are reported in our previous report (Ferraina, Brunamonti, et al., 2009). RIDvis and RIDmem trials were intermingled and randomly presented to the animals. The time evolution of the different epochs of the experimental tasks is reported in Figure 1B. Each trial started when the monkey placed its hand on the lighted push button used as the starting hand position. Then, the LED on the push button was turned off, and the fiber-optic FP was turned on. The monkey was required to binocularly fixate the FP for a variable control time (500–800 msec), at the end of which the TG was turned on (for the entire duration of the trial in the RIDvis task, or briefly in RIDmem). In RIDmem trials, after the TG position was signaled, the robot arm holding the TG exited the work space to attain a resting position. After a variable delay (500–800 msec), the FP changed color (go signal), and the monkeys had 1000 msec (upper limit RT) to move the hand to the visual or the memorized TG, in RIDvis and RIDmem respectively, while maintaining stable fixation. The reward was delivered after a variable (200–500 msec) hand holding time at the visual or memorized target location.
To complete each experimental session, the monkeys had to perform at least five replications of each experimental condition, that is, the combination of each hand starting position, FP location, and TG distance for each trial type and task condition (Figure 1C shows an example behavioral session).
In a previous report (Ferraina, Brunamonti, et al., 2009), we showed that, in the RIDmem task, reaching-related neurons in area PE were modulated by target depth. Furthermore, we showed that information on the initial hand position significantly influenced the activity of PE neurons, while the effect of binocular eye position was negligible. Finally, and related to both observations, we provided evidence that neural modulation in PE was mainly explained by the depth component of the motor error.
Here, by comparing data obtained in the RIDvis and RIDmem paradigms, we aimed to better evaluate how target visibility influences reaching activity. We focused the analysis on the time preceding movement onset and selected reaching-related neurons for further analysis by focusing on the two initial hand positions (20 and 25 cm from the monkey) for which we had complete data sets. All analyses were performed on trials with constant eye fixation in depth.
For data analysis, in both the RIDvis and RIDmem tasks, four different epochs of analysis were defined: (1) a control epoch, referring to the last 300 msec of the time preceding the TG onset; (2) a signal epoch, defined as the time from 70 to 200 msec following the target onset; (3) a delay epoch, consisting of the last 300 msec before the go signal; and (4) a premovement epoch, defined as the 200 msec preceding the onset of the reaching movement.
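The four epoch definitions above can be sketched as follows, assuming per-trial event times (target onset, go signal, movement onset) and spike times in seconds; all names are illustrative, not taken from the authors' pipeline:

```python
def epoch_windows(tg_on, go, mov_on):
    """(start, end) of each analysis epoch in seconds:
    control     = last 300 msec before target onset,
    signal      = 70-200 msec after target onset,
    delay       = last 300 msec before the go signal,
    premovement = 200 msec before movement onset."""
    return {
        "control": (tg_on - 0.300, tg_on),
        "signal": (tg_on + 0.070, tg_on + 0.200),
        "delay": (go - 0.300, go),
        "premovement": (mov_on - 0.200, mov_on),
    }

def firing_rate(spikes, window):
    """Mean firing rate (spikes/sec) within a time window."""
    start, end = window
    return sum(start <= t < end for t in spikes) / (end - start)

# Usage: one illustrative trial with target onset at 1.0 s, go at 1.8 s,
# and movement onset at 2.1 s.
spikes = [0.75, 0.9, 1.1, 1.15, 1.6, 1.7, 1.95, 2.0]
wins = epoch_windows(tg_on=1.0, go=1.8, mov_on=2.1)
rates = {name: firing_rate(spikes, w) for name, w in wins.items()}
```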
Considering that each of the factors studied here (target location, initial hand position, and task condition, i.e., target visibility) can putatively modulate reaching-related neuronal activity (Piserchia et al., 2016; Buneo & Andersen, 2012; Ferraina, Brunamonti, et al., 2009; Battaglia-Mayer et al., 2005), we used a preliminary three-way ANOVA to quantify the proportion of neurons modulated by each variable. At the same time, the activity of the studied neurons could be influenced by a combination of the same variables, for example, the combination of a given hand position and target location (e.g., either the linear motor error vector or the hand trajectory). Even though this last possibility could not be ruled out (however, see Lacquaniti et al., 1995), here we considered each of the studied factors as an independent variable. Furthermore, to evaluate the effect of target visibility on the tuning for the different targets in the population of neurons modulated by target position, as revealed by the ANOVA, we separated the cells whose maximum activity was elicited, in both visual and memory trials, by the same target position (or an adjacent one) from those showing a larger change of preferred target position between the two task conditions. To this end, for every neuron of the two groups, the spike rate was ranked based on its target preference in the visual condition and compared with the spike rate during the memory condition using the same rank (see Results for further details).
A separate three-way ANOVA was used to explore the influence of target visibility and initial hand position on the encoding of arm movements, regardless of target distance, in the different epochs of analysis. For all ANOVA tests (main factor effects, interactions, or post hoc results), the alpha level was set to .05. Finally, we used a receiver operating characteristic (ROC) analysis to study how the visibility of the target influenced the visuomotor transformations occurring during motor preparation and to estimate the time when the signal related to target visibility became embedded in the multidimensional activity modulation contributing to the motor plan. For each reach-related neuron, we first identified the starting hand position that elicited the highest response in both task conditions; then, ROC values between RIDvis (considered as the reference task) and RIDmem activity were computed within 200-msec time windows, moving in steps of 1 msec from 800 msec before the go signal to the time of movement onset (on average, about 300 msec after the go signal in both animals). Significant values for the area under the ROC (AUROC) were those that exceeded 0.6 (RIDmem prevailing) or dropped below 0.4 (RIDvis prevailing). The time when the AUROC crossed the threshold values was taken as the time when a neuron started to be influenced by the target visibility used to implement the motor plan.
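The sliding-window ROC comparison can be sketched as below, under the assumption that per-trial firing rates are computed in each 200-msec window from spike trains aligned to the go signal (t = 0); all names and the trial data in the test are illustrative:

```python
def auroc(ref, test):
    """Area under the ROC curve: probability that a value drawn from
    `test` exceeds one drawn from `ref` (ties count 0.5)."""
    score = sum(1.0 if t > r else 0.5 if t == r else 0.0
                for t in test for r in ref)
    return score / (len(ref) * len(test))

def window_rates(trials, t0, width=0.200):
    """Firing rate of each trial (a list of spike times in seconds,
    aligned to the go signal) in the window [t0, t0 + width)."""
    return [sum(t0 <= s < t0 + width for s in trial) / width
            for trial in trials]

def sliding_auroc(vis_trials, mem_trials, start=-0.800, stop=0.0,
                  step=0.001, width=0.200):
    """AUROC (RIDvis as reference) in 200-msec windows moved in 1-msec
    steps; values > 0.6 or < 0.4 would be treated as significant."""
    n = int(round((stop - start) / step))
    times = [start + i * step for i in range(n)]
    values = [auroc(window_rates(vis_trials, t0, width),
                    window_rates(mem_trials, t0, width)) for t0 in times]
    return times, values
```

The first window time at which a value crosses 0.6 (or 0.4) would then serve as the estimate of when target visibility begins to influence the neuron.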
The main goal of our analyses was to test the influence of target visibility on reaching-related activity in dorsal parietal area PE (Area 5). To this end, from the database of neurons analyzed using the RIDmem task in our previous report (Ferraina, Brunamonti, et al., 2009), we selected those neurons also tested in the RIDvis task (N = 179; Monkey 1: 151; Monkey 2: 28). Figure 2A shows the neural activity of a PE neuron during reaches to the five target locations, starting from two different hand positions in the visual (top) and memory (bottom) versions of the task. During motor preparation (delay epoch; Figure 2A, gray areas), the activity of this cell (i) displayed a significant modulation for target location in depth only for reaches starting from position H1 (black lines), where the response was higher for T1 and T2 and lower for T3, T4, and T5 (three-way ANOVA [factors: Target distance, Hand position, Target visibility]: main effect of Hand position, p < .001; interaction Target distance × Hand position, p = .038; Newman–Keuls post hoc comparisons, ps < .05); and (ii) reached a higher firing rate during the memory condition than in the visual condition (Target visibility main effect, p = .003). Thus, this neuron was modulated both by the initial hand position, already evident before target appearance, and by the visibility of the target during the delay.
For all tested neurons, we performed a three-way ANOVA (factors: Target distance, Hand position, Target visibility) to quantify the proportion of neurons influenced by each of the contextual variables, or by their interaction, in the delay epoch (Table 1). We found similar proportions of cells significantly modulated (here we also took into account significant interactions between the main factor considered and the other two factors) by target depth (54/179; 30.2%), hand position (54/179; 30.2%), and target visibility (40/179; 22.3%). These results confirm that neurons of area PE are sensitive to both the target location and the initial hand position of reaches (Buneo & Andersen, 2012; McGuire & Sabes, 2011; Ferraina, Brunamonti, et al., 2009; Lacquaniti et al., 1995); however, they also reveal that neural activity was influenced by the visibility of the target. The evaluation of the interaction between target location and task condition suggested that in a subpopulation of neurons (20/179; 11.2%), target visibility could also influence the tuning properties.
| Neurons | TG Visibility | Hand Pos | TG Pos | TG Visibility × Hand Pos | TG Visibility × TG Pos | Hand Pos × TG Pos | Hand Pos × TG Pos × TG Visibility |
Results of the three-way ANOVA (factors: Target position; Hand position; Target visibility). TG = target; Pos = position.
Figure 2B shows a further analysis of the effect of target visibility on the response tuning for the different targets. Separately for the two monkeys and for each neuron, we normalized the neural activity to the best response observed (preferred target) in the RIDvis task and sorted each observation according to its rank. We then ranked the normalized neural activity obtained in the RIDmem task in the same order. Finally, we compared the rank positions in the two tasks by looking at the position of the maximum. Among the 54 neurons tuned to the different target positions (see ANOVA results in the previous paragraph), we identified 29 neurons with a change in rank position of less than 2 (Figure 2B, top) and 25 neurons with higher values, corresponding to a proportion of neurons similar to that revealed by the Target distance × Target visibility interaction in the ANOVA analysis illustrated in the previous paragraph.
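The rank comparison above can be sketched as follows: targets are ordered by the neuron's visual-task preference, the same order is applied to the memory-task rates (normalized to their maximum), and the position of the memory-task maximum gives the rank shift (0 if the preferred target is unchanged). The function name and the example rates are illustrative, not the authors' data:

```python
def rank_shift(vis_rates, mem_rates):
    """Change in rank position of the preferred target between tasks.
    `vis_rates` and `mem_rates` are mean rates per target (same order)."""
    # Order targets from most to least preferred in the visual task.
    order = sorted(range(len(vis_rates)),
                   key=lambda i: vis_rates[i], reverse=True)
    # Apply the same order to the memory-task rates, normalized to their max.
    mem_ranked = [mem_rates[i] / max(mem_rates) for i in order]
    # Position of the memory-task maximum: 0 = same preferred target.
    return mem_ranked.index(max(mem_ranked))

# A neuron preferring T2 in both tasks has shift 0; one preferring T2
# visually but T5 in the memory task has a larger shift.
print(rank_shift([5, 20, 10, 8, 6], [4, 18, 9, 7, 5]))   # 0
print(rank_shift([5, 20, 10, 8, 6], [4, 6, 9, 7, 20]))   # 3
```

With the criterion in the text, shifts below 2 would place a neuron in the "stable tuning" group.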
Figure 2C shows the effect of hand position changes on neural activity during the delay epoch. Filled symbols are for neurons with a significant modulation for the hand position and/or the interaction of this factor with the target depth and visibility (54/179; see ANOVA results presented above).
In the rest of this study, we will focus on the influence of target visibility on the reaching-related activity of PE neurons, independently of the coding of target location. To this end, we collapsed the neural activity across the different targets and performed a three-way ANOVA (factors: Epoch, Hand position, Target visibility) on all neurons tested in both task versions (Table 2). Eight neurons showed a significant difference from the control epoch only in the signal epoch; this group of neurons was not tested further. About 64% of the neurons (115/179; 64.2%) had reaching-related activity, in that they showed a significant difference from the control epoch in the delay and/or premovement epoch. Within the reaching-related group, 50/115 (43.5%) neurons were modulated in the signal, the delay, and/or the premovement epoch, and 65/115 (56.5%) were modulated only in the delay and/or premovement epoch. Furthermore, a high proportion of reach neurons (93/115; 80.9%) was modulated by changes in the initial position of the hand (including those showing a main effect of hand position, and/or a Target visibility × Hand position interaction, and/or a Hand position × Epoch interaction). Of these hand-modulated neurons, 38/93 (40.9%) were influenced by the task condition (including those with a significant main effect of target visibility, and/or a Target visibility × Hand position interaction, and/or a Target visibility × Epoch interaction).
| Neurons | Epoch | Hand Pos | TG Visibility | Epoch × Hand Pos | Epoch × TG Visibility | Hand Pos × TG Visibility | Epoch × Hand Pos × TG Visibility |
TG = target; Pos = position.
To further evaluate the influence of target visibility, each of the 115 cells classified as reaching neurons was tested with an ROC analysis. We aligned the data to the go signal and compared the neural activity in the two conditions (RIDvis and RIDmem) using data from the starting position that elicited the strongest cell modulation. Figure 3A displays the time evolution of the neural activity of an example reaching-related neuron modulated by both hand position and task condition (RIDmem, top; RIDvis, middle) during the period leading to the go signal and aligned to it (main effects of Epoch, p < .001, and Hand position, p = .004; Hand position × Epoch, p < .001; Hand position × Target visibility, p = .0006; Epoch × Target visibility, p = .03). The comparison over time (Figure 3A, bottom) of the neural activity for the H2 starting position (best responses) reveals a difference depending on target visibility, starting at about 650 msec before the go signal, as measured by the AUROC value (the color scale is reported in Figure 3B).
We applied the same analysis to all reaching neurons and observed that 22.6% (26/115) of the cells decreased their activity in the visual version of the task. The average AUROC value of this group, calculated in the 400 msec centered on the go signal, was 0.43 ± 0.072, significantly lower than 0.5 (t test: t(25) = −5.064; p < .001). On the contrary, 69.6% (80/115) of the cells increased their activity in the memory version of the task. Their average AUROC value in the 400 msec centered on the go signal (0.56 ± 0.084) was significantly higher than 0.5 (t test: t(79) = 6.434; p < .001). The proportion of neurons that did not reach the criterion for classification in either group was 7.8% (9/115). Figure 3 (B, C) shows details of the times when the ROC values exceeded threshold, separately for the two animals. To this end, we transformed ROC values below 0.5 so that modulated neurons could be represented as a single group (Figure 3B). On average, by representing the times of the detected threshold crossings (black lines in Figure 3B) for all neurons, we estimated that reaching-related cells started to be influenced by the visibility of the target at around 320 msec before the go signal (Figure 3C).
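A minimal sketch of this population summary, assuming the transformation of values below 0.5 is a fold about 0.5 (x → 1 − x, so both modulation directions form one group) and using a hand-computed one-sample t statistic against 0.5; the values shown are illustrative, not the recorded data:

```python
import math

def fold(aurocs):
    """Reflect AUROC values about 0.5 so all modulation is >= 0.5."""
    return [x if x >= 0.5 else 1.0 - x for x in aurocs]

def t_vs_half(aurocs):
    """One-sample t statistic of the mean AUROC against 0.5."""
    n = len(aurocs)
    mean = sum(aurocs) / n
    var = sum((x - mean) ** 2 for x in aurocs) / (n - 1)  # sample variance
    return (mean - 0.5) / math.sqrt(var / n)

# Illustrative AUROC values from a hypothetical set of neurons.
values = [0.43, 0.61, 0.38, 0.57, 0.66]
folded = fold(values)   # approx. [0.57, 0.61, 0.62, 0.57, 0.66]
t_stat = t_vs_half(folded)
```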
To summarize, in area PE during reaches toward memorized or visible target locations, the great majority of reaching-related neurons modulated their activity in the time preceding movement onset. The majority of these neurons were also influenced by initial hand position.
In a previous work, we reported that neurons in area PE of the superior parietal lobule encode the target for reaches in depth in a body-centered reference frame and that, before reach onset, the initial position of the hand was relevant for this code (Ferraina, Brunamonti, et al., 2009). In the present work, we studied whether and how the activity of PE neurons was modulated by contextual variables, focusing on the effect of target visibility during motor preparation. Our results revealed that reaching-related activity was affected by the visibility of the target in the workspace, thus influencing the computation of the motor error. This result supports the view that PPC neurons weigh the different contextual variables contributing to the composition of motor commands (Mountcastle et al., 1975).
The study of visuomotor transformations leading to the shaping of motor plans for arm movements has shown that PPC uses, among others, an arm-centered reference frame (see Ferraina, Battaglia-Mayer, et al., 2009; Ferraina, Brunamonti, et al., 2009; Burnod et al., 1999). In particular, PE neurons have been reported to integrate somatic (Ferraina, Brunamonti, et al., 2009) or visual (Buneo & Andersen, 2012; Graziano, Cooke, & Taylor, 2000; Ferraina et al., 1997; Lacquaniti et al., 1995) signals about hand position in space with target location information. Previous works have reported that the viewing of the moving hand (Buneo & Andersen, 2012) or the modality of target presentation (McGuire & Sabes, 2011) did not shift the tuning for target location during reaches performed on a plane parallel to the body. We found that, during reach planning in depth, target visibility affected the tuning profile in a subpopulation of cells. Therefore, the modality of target presentation, either visual or memorized, influenced the neural representation of the preferred location of the target across conditions. This aspect deserves further study.
We then studied whether the visuomotor transformation was influenced by the visibility of the target location, independently of its position in depth. We found that the activity of neurons in area PE was strongly influenced by hand position (see also Ferraina, Brunamonti, et al., 2009), by the task condition (target visibility), or by their interaction. Our findings suggest that the way somatic signals about hand position are encoded during motor planning in the dark (Ferraina, Brunamonti, et al., 2009) is significantly modulated by the view of the target position, or the memory of its location, from its presentation until the end of the reach. Introducing vision of the target location decreased the reach-related activity, mostly related to the initial hand position, in the majority of neurons. As an alternative interpretation, the effect we observed could reflect a different way in which somatic signals are processed in total darkness, independently of target visibility (Mon-Williams et al., 1997). Also, considering the working memory-related activity observed in both the intraparietal sulcus (Gnadt & Andersen, 1988) and the inferior parietal lobule (Rawley & Constantinidis, 2009; Constantinidis & Steinmetz, 1996, for review), we could not rule out the possibility that, during the memory-guided task, working memory processes influenced the reaching-related activity. Notably, however, in one of our previous studies, in which about 600 neurons were studied in seven different tasks in inferior parietal areas PG and OPt, very little cell modulation was observed during the memory delay preceding reach onset (Battaglia-Mayer et al., 2005). Moreover, since to our knowledge evidence of the involvement of area PE in working memory is lacking, further studies involving the manipulation of different memory loads would provide a measure of the relationship between reaching- and working memory-related activity in this area.
Finally, we controlled for the position of the movement end point by rewarding the monkeys only if the end point fell within a restricted area around the target in both conditions (tolerance window of 3-cm diameter; see Figure 1C). However, we observed that the visual and memorized targets were reached with different trajectories. Even though we focused our analysis on the delay/memory period to minimize the probability that the detected difference in activity reflected the encoding of different parameters of reach kinematics, we could not exclude this possibility.
In a study aimed at evaluating whether the proprioceptive encoding of hand position in area PE neurons changed depending on the vision of the hand, Shi et al. (2013) reported only a modest effect on hand position encoding when the hand location was visually cued at the end of reaches in a virtual reality environment. Although these results seem to differ from ours, both the visual cueing of the hand on the target and the visual cueing of the target position, as in our experiment, acted mostly as a negative gain on the neural encoding of hand position. The modulation of hand-related neural activity provided by the two different visual signals differed in magnitude: whereas the vision of the hand after target acquisition influenced the firing rate of hand position neurons in PE only modestly, and only at the population level (Shi et al., 2013), in our experiment the target visibility strongly modulated the neural activity at both the single-cell and population levels. As a main difference, we studied neural activity at the time of movement preparation, during which the position of the hand had to be combined with that of the target, whereas Shi et al. (2013) studied neuronal activity during a resting period at the end of the movement. Under such conditions, the monkeys were not required to estimate the hand position to plan a reaching movement in the context of the task, making the vision of the hand unnecessary. This key difference in task design might account for the weak influence of the vision of the hand on neuronal activity in that study.
Overall, these results are consistent with the idea that the PPC could flexibly manage the use of the different sources of information while composing a reach motor plan (Chang, Calton, Lawrence, Dickinson, & Snyder, 2016; Chang & Snyder, 2010; Ferraina, Battaglia-Mayer, et al., 2009; McGuire & Sabes, 2009). In particular, here we showed that neural activity related to motor preparation in area PE was modulated by the availability of visual information about target location.
Neuronal Mechanisms of Integration of Visual and Somatic Cues
Neurophysiological data have so far provided evidence of a multiplicity of reference frames distributed across the PPC network subtending reaching movements (Chang & Snyder, 2010; Ferraina, Battaglia-Mayer, et al., 2009; Buneo et al., 2008; Cohen & Andersen, 2002; Burnod et al., 1999; Lacquaniti et al., 1995). In each area of this network, patterns of activity have been observed encoding the reach target in eye-centered, hand-centered, or both coordinates (Hadjidimitrakis et al., 2014; Buneo & Andersen, 2012; McGuire & Sabes, 2011; Chang & Snyder, 2010; Burnod et al., 1999), without a net dominance of the representation in any given coordinate frame over the others, but with a prevalence of one relative to the others, depending on the position of the area studied along the parietofrontal gradient. In fact, several lines of evidence converge on the view that there exists in PPC a posterior-to-anterior gradient of signal representation and parietofrontal connectivity (Marconi et al., 2001; Johnson, Ferraina, Bianchi, & Caminiti, 1996; for a recent review, see Caminiti, Innocenti, & Battaglia-Mayer, 2015, and the references therein), supporting a transition of reach encoding from eye- to body/hand-centered frames (Ferraina, Battaglia-Mayer, et al., 2009; McGuire & Sabes, 2009; Battaglia-Mayer et al., 2001; Marconi et al., 2001; Burnod et al., 1999) as one moves from more posterior to more anterior locations within the superior parietal lobule. The finding that the visibility of the target modulates the encoding mechanism of area PE neurons suggests that the formation of motor plans relying on somatic hand position signals depends on the availability of other contextual information (McGuire & Sabes, 2009; Sober & Sabes, 2005) in the action space.
It has been suggested, in fact, that during visuomotor transformations an eye-centered reference frame is facilitated by the presence of visual information about both target and hand position; reducing visual signals should increase, or modify, the importance of somatic signals and of body-centered maps (Buneo & Andersen, 2012; Engel, Flanders, & Soechting, 2002; Carrozzo, McIntyre, Zago, & Lacquaniti, 1999; Heuer & Sangals, 1998). However, because we did not systematically vary other variables involved in the preparation of the motor plan, such as the position of the eyes relative to the locations of the target and hand, further experiments are needed to clarify whether the visibility of the target emphasized the engagement of an eye- over a hand-centered reference frame, although we consider this possibility remote, given the relatively scarce influence of eye position signals in area PE. Finally, our results need to be explored in reaching protocols in which the target depth is kept constant, because it is known that vision and proprioception can be weighted differently when the target/effector is localized/displaced in depth rather than in azimuth (van Beers, Baraduc, & Wolpert, 2002).
In conclusion, our results are in line with the view that weighting all contextual information leads to the selection of the more appropriate reference frame for movement (Fetsch, Pouget, DeAngelis, & Angelaki, 2011; Angelaki & DeAngelis, 2009).
Reprint requests should be sent to Stefano Ferraina, Department of Physiology and Pharmacology, Sapienza University, Piazzale Aldo Moro 5, 00185, Rome, Italy, or via e-mail: email@example.com.