Abstract

Peripersonal space is a multisensory representation relying on the processing of tactile and visual stimuli presented on and close to different body parts. The most studied peripersonal space representation is perihand space (PHS), a highly plastic representation modulated following tool use and by the rapid approach of visual objects. Given these properties, PHS may serve different sensorimotor functions, including guidance of voluntary actions such as object grasping. Strong support for this hypothesis would derive from evidence that PHS plastic changes occur before the upcoming movement rather than after its initiation, yet to date, such evidence is scant. Here, we tested whether action-dependent modulation of PHS, behaviorally assessed via visuotactile perception, may occur before an overt movement as early as the action planning phase. To do so, we probed tactile and visuotactile perception at different time points before and during the grasping action. Results showed that visuotactile perception was more strongly affected during the planning phase (250 msec after vision of the target) than during a similarly static but earlier phase (50 msec after vision of the target). Visuotactile interaction was also enhanced at the onset of hand movement, and it further increased during subsequent phases of hand movement. Such a visuotactile interaction featured interference effects during all phases from action planning onward as well as a facilitation effect at the movement onset. These findings reveal that planning to grab an object strengthens the multisensory interaction of visual information from the target and somatosensory information from the hand. Such early updating of the visuotactile interaction reflects multisensory processes supporting motor planning of actions.

INTRODUCTION

Our daily manual interactions with nearby objects are remarkably smooth and efficient. However, the brain faces huge challenges when rapidly processing a stream of multisensory information in real time to form appropriate motor plans. This requires constant updates regarding the positions of both effectors and targets relative to a common eye-centered reference frame (Cohen & Andersen, 2002) and/or the use of a sensorimotor interface termed peripersonal space (PPS; van der Stoep, Serino, Farnè, Di Luca, & Spence, 2016; Cléry, Guipponi, Wardak, & Ben Hamed, 2015; Makin, Holmes, Brozzoli, & Farnè, 2012; Rizzolatti, Fadiga, Fogassi, & Gallese, 1997). PPS is a multisensory representation of the space immediately surrounding the body that uses one or multiple body parts as spatial references to encode nearby objects (Blanke, Slater, & Serino, 2015; di Pellegrino & Làdavas, 2015; Brozzoli, Ehrsson, & Farnè, 2014). Converging evidence, ranging from nonhuman primate electrophysiology (Duhamel, Bremmer, Ben Hamed, & Graf, 1997; Graziano, Yap, & Gross, 1994; Colby, Duhamel, & Goldberg, 1993; Rizzolatti, Scandolara, Matelli, & Gentilucci, 1981a, 1981b) to neuroimaging (Ferri et al., 2015; Brozzoli, Gentile, Bergouignan, & Ehrsson, 2013; Brozzoli, Gentile, & Ehrsson, 2012; Brozzoli, Gentile, Petkova, & Ehrsson, 2011; Makin, Holmes, & Zohary, 2007; Sereno & Huang, 2006) and behavioral studies in humans (Serino, Bassolino, Farnè, & Làdavas, 2007; Spence, Pavani, Maravita, & Holmes, 2004; Farnè, Pavani, Meneghello, & Làdavas, 2000; di Pellegrino, Làdavas, & Farné, 1997), indicates the existence of PPS representations centered on the hand, face, trunk, and potentially other body parts (Scandola, Aglioti, Bonente, Avesani, & Moro, 2016; Serino et al., 2015; Avillac, Denève, Olivier, Pouget, & Duhamel, 2005; Farnè, Demattè, & Làdavas, 2005; di Pellegrino et al., 1997). In particular, perihand space (PHS) is thought to support hand–object interaction because it allows one to visually monitor objects available near the hand in relation to the hand itself. To do so, the PHS representation relies on the activity of multisensory parietal and premotor regions of the human brain, where visual, tactile, and proprioceptive signals interact. Recent neuroimaging findings show that these multisensory areas exhibit visual selectivity for objects presented near the hand compared with objects presented far from it. Such selectivity is anchored to the hand when it changes location, indicating that it is hand-centered (Brozzoli et al., 2012; see also Maimon-Mor, Johansen-Berg, & Makin, 2017; Makin et al., 2007). These findings in humans parallel electrophysiological findings in nonhuman primates that identified the neural bases of PPS within parietal and premotor territories. Several visuotactile neurons in those areas, for example, feature somatosensory receptive fields on the hand and corresponding visual receptive fields. These visual receptive fields are anchored to tactile receptive fields and protrude over a limited (typically 5- to 30-cm) sector of space surrounding them (Graziano & Gross, 1993; Gentilucci et al., 1988; Rizzolatti et al., 1981a, 1981b). Thus, tactile signals from the hand converge with visual signals arising within the space surrounding the hand at the single-neuron level.
In humans, the peculiar multisensory interaction characterizing the coding of PHS is captured by well-established visuotactile interactions (VTIs): Visual stimuli modulate responses to tactile stimulation of the hand more strongly when presented near than far from the hand (Spence, Pavani, & Driver, 2004; Spence, Pavani, Maravita, et al., 2004; Farnè et al., 2000; Pavani, Spence, & Driver, 2000; di Pellegrino et al., 1997).

Because of the properties summarized above, PHS has been thought to serve defensive purposes, that is, preparing for or boosting motor responses to potential threats approaching the body, such as avoidance movements (Makin, Brozzoli, Cardinali, Holmes, & Farnè, 2015; Sambo, Liang, Cruccu, & Iannetti, 2012; Makin, Holmes, Brozzoli, Rossetti, & Farnè, 2009; Graziano & Cooke, 2006; Cooke & Graziano, 2004). In line with such a defensive role, PHS boundaries may expand as a function of the speed of approaching objects (Fogassi et al., 1996), thus allowing individuals to respond, at even farther distances, to objects approaching the body at higher speeds. The same dynamic features have also been proposed to serve appetitive actions, such as grasping objects (de Vignemont & Iannetti, 2015; Brozzoli et al., 2014; Rizzolatti et al., 1981a, 1997). A wealth of studies in both human and nonhuman primates has documented changes in PHS boundaries, most notably their extension following tool use (Martel, Cardinali, Roy, & Farnè, 2016; Cardinali, Brozzoli, & Farnè, 2009; Farnè, Iriki, & Làdavas, 2005; Berti & Frassinetti, 2000; Farnè & Làdavas, 2000; Iriki, Tanaka, & Iwamura, 1996). Most studies in humans have quantified VTI in the space surrounding the hand before and after a short training session with the tool. However, in these studies, the hand was typically immobile during VTI assessment (Farnè, Bonifazi, & Làdavas, 2005; Farnè, Iriki, et al., 2005; Maravita, Spence, Kennett, & Driver, 2002; Maravita, Husain, Clarke, & Driver, 2001). Consequently, this line of research was unable to probe dynamic changes in PHS boundaries as an action unfolds.

We have also provided initial support for the “appetitive function” hypothesis, according to which PHS serves to guide planned, voluntary actions toward a given object, regardless of its affective valence. In contrast to the static approaches described above, we assessed the PHS boundary under active conditions, namely, while the hand was moving to grasp an object. In particular, we measured the strength of the interaction between touches delivered to the hand and visual distractors placed on the object that the hand reached for and grasped. We detected an increase in VTI in real time when the hand moved to grasp the object compared with when the hand was immobile (Brozzoli, Pavani, Urquizar, Cardinali, & Farnè, 2009). In addition, kinematic recordings of hand movements demonstrated that complex grasping movements produced stronger VTI modulation during action compared with simpler pointing movements. Given the well-known finding that, in humans, VTI is stronger for visual stimuli nearer the hand than for those farther away (Spence, Pavani, & Driver, 2004; Spence, Pavani, Maravita, et al., 2004; di Pellegrino et al., 1997), these results indicated a modulation of PHS boundaries. That is to say, the target of the action, which was located far from the initial hand position, was “remapped” as if it were within PHS as soon as the hand moved to grasp it.

However, the question of whether the PHS representation is recruited before an upcoming action (i.e., during action planning) remains unanswered, even though it is fundamental to understanding the role of multisensory space representations for action. In the current study, we posited that, if PHS supports the control of voluntary actions, an increase in VTI should occur even while merely planning to grasp an object, that is, well before the overt motor act. Indeed, it is during planning that the brain initiates the sensorimotor processes that compute the current states of both the object and the hand to form an appropriate motor plan that will eventually be realized as a movement (Castiello & Begliomini, 2008; Culham & Valyear, 2006; Castiello, 2005). We therefore predicted that PHS remapping, indexed by an increase in VTI, is triggered by action planning well before movement initiation. To test this hypothesis, we presented the target to be grasped with an unpredictable orientation on a trial-by-trial basis, and we made this visual information available only at the go signal for the action. This design forced participants to plan the action anew on every trial and made it possible to compare the strength of VTI at two critical time points before movement onset. More specifically, we probed VTI 50 msec after the go signal, immediately after information concerning the target had been made visually available (object vision phase), and 250 msec after the go signal, when participants were planning the upcoming movement (action planning phase).

In addition, we assessed touch perception alone under the same conditions. Unisensory tactile (hereafter T) and multisensory visuotactile (hereafter VT) stimuli were therefore delivered during the same phases. We note that contrasting multisensory VT and unisensory T performance allowed us to monitor whether VTI changes reflect a mere decrease in tactile perception during movement execution, possibly due to tactile suppression (Voss, Ingram, Haggard, & Wolpert, 2006; Chapman, 1994; Chapman, Bushnell, Miron, Duncan, & Lund, 1987). Alternatively, modulation of VTI may reflect interference and/or facilitation driven by the effect of visual stimuli on tactile perception, thus hinting at the role of multisensory processing (Spence, Pavani, & Driver, 1998, 2004). Finally, to rule out the possibility that VTI modulation during different action phases was attributable to the multisensory stimulation affecting hand movements, we recorded and analyzed the kinematic patterns of grasping movements. Based on previous work using a similar setup, we predicted that kinematics would differ as a function of object orientation (Brozzoli, Cardinali, Pavani, & Farnè, 2010) without being critically affected by concurrent sensory stimulation.

Our results show that enhancement of multisensory interaction between visual signals from the action target and tactile signals from the acting hand started in the planning phase. These findings are in line with the hypothesis that planning to grasp an object induces a modulation of PHS boundaries such that the action target, despite being far from the hand, is remapped to within PHS. These findings support the notion that multisensory processing of PHS serves a functional role by contributing to the control of voluntary appetitive actions.

METHODS

Participants

Sixteen healthy participants (mean age = 28 ± 5 years) with normal or corrected-to-normal vision and no history of sensory problems took part in the study. A statistical power analysis was performed for sample size estimation (G*Power 3.1.9) based on data from our previous study (Brozzoli et al., 2009). With alpha = .05 and power = .80, the projected sample size needed to replicate the effect was n = 13, whereas with power = .90, the estimated sample size was n = 17. We therefore chose a sample size of n = 16, which was more than adequate for the main objective of this study and should allow us to capture any possible effect of action planning. All participants gave informed consent to take part in this study, which was approved by the CEEI (Comité d'évaluation éthique de l'Inserm)/institutional review board (No. 16-329) and conducted in accordance with the principles of the revised Helsinki Declaration.
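
For illustration, the reported estimates can be approximated with a standard power computation for a paired-samples t test. The sketch below is not the original G*Power session; the effect size (Cohen's d = 0.85) is back-computed by us from the reported sample sizes and is not a value taken from Brozzoli et al. (2009).

```python
# Approximate re-computation of the sample-size estimates above, assuming
# a two-sided paired-samples t test and Cohen's d = 0.85 (our own
# back-computed assumption, not a reported value).
from statsmodels.stats.power import TTestPower

solver = TTestPower()  # one-sample / paired t test power solver
for target_power in (0.80, 0.90):
    n = solver.solve_power(effect_size=0.85, alpha=0.05,
                           power=target_power, alternative='two-sided')
    print(f"power = {target_power:.2f}: n = {n:.1f}")
# Rounding these up should land close to the reported n = 13 and n = 17.
```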

Apparatus

The target object was a wooden cylinder (7 cm in height, 1.7 cm in diameter) located at eye level at a distance of 47 cm from the starting position of the participant's hand (Figure 1A). Participants had to grasp the cylinder with a precision grip, such that the index finger touched the top surface and the thumb touched the bottom surface. Two red light-emitting diodes (LEDs) were embedded into the cylinder proximal to the contact surfaces of the fingers in the precision grip configuration. Visual stimuli consisted of a single flash (200-msec duration) from either the top or bottom LED, delivered concurrently with electrocutaneous stimulation to the grasping hand. A black dot (1 cm in diameter) in the center of the cylinder (between the two LEDs) served as a visual fixation point (see Figure 1A). Disposable electrodes (70015-K, Ambu Neuroline) were used to present suprathreshold electrocutaneous stimuli consisting of square-wave pulses (100 μsec, 400 V) delivered by constant current stimulators (DS7A, Digitimer Ltd.) to either the index finger or thumb of the right hand. To ensure that participants detected close to 100% of the electrical stimuli during the task, we first estimated detection thresholds for each of the two fingers and then increased the respective intensities by 20%; these intensities were kept constant throughout the experiment. Finger thresholds were determined via a staircase procedure with manually triggered stimulations (five on the thumb and five on the index finger) in a random order, intermingled with five catch trials in which no stimulation was delivered. Participants were asked to report when and where they felt the tactile stimulus. During the experimental task, participants had to respond to the tactile stimulus as fast as possible by releasing one of two foot pedals (Herga Electric Ltd.). The toe pedal indicated stimulation of the index finger, and the heel pedal indicated stimulation of the thumb, according to the classical procedure employed in studies investigating VTI through the cross-modal congruency effect (see Shore, Barnes, & Spence, 2006; Shore, Gray, Spry, & Spence, 2005; Spence, Pavani, & Driver, 2004; Spence, Pavani, Maravita, et al., 2004; Spence et al., 1998). Participants were therefore required to make speeded location responses, reporting whether tactile target stimuli were presented to the index finger or thumb. They were also asked to ignore task-irrelevant visual stimuli embedded in the object. Visual stimuli were presented in a spatially congruent or incongruent arrangement with respect to tactile targets when considering the hand posture (i.e., index finger and top LED or thumb and bottom LED for VT congruent stimulation; index finger and bottom LED or thumb and top LED for VT incongruent stimulation; see Figure 1A). Participants wore a pair of shutter goggles (FE-1, Cambridge Research Systems Ltd.) based on ferroelectric liquid crystal technology. The liquid crystal lenses of the goggles were configured in either a transparent (open) or a translucent (closed) state; vision was completely occluded in the latter condition and allowed in the former. The manufacturer-estimated switching time from the closed to the open state is 0.1 msec. Participants had to move as soon as the goggles opened, which made the object visible and constituted the go signal.
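
The detection-threshold procedure lends itself to a compact illustration. The following is a minimal sketch of a one-up/one-down staircase; the starting intensity, step size, and trial count are illustrative assumptions rather than values used in the experiment, and the actual stimulations were triggered manually.

```python
# Minimal one-up/one-down staircase sketch for estimating a detection
# threshold; all numeric defaults are illustrative assumptions.
def staircase_threshold(detected, start=1.0, step=0.1, n_trials=30):
    """`detected(intensity)` returns True if the stimulus was reported.
    The threshold is estimated as the mean intensity at reversal points."""
    intensity, last, reversals = start, None, []
    for _ in range(n_trials):
        response = detected(intensity)
        if last is not None and response != last:
            reversals.append(intensity)           # direction change = reversal
        last = response
        intensity += -step if response else step  # down if felt, up if missed
    return sum(reversals) / len(reversals) if reversals else intensity

# Per the procedure above, the experimental intensity would then be set
# 20% above the estimated threshold and kept constant:
# experimental_intensity = 1.2 * staircase_threshold(...)
```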

Figure 1. 

Visuotactile stimulation during the planning and execution of a grasping movement. (A) Participants were asked to discriminate the location (up or down) of touches (red triangle) delivered to either the thumb (bottom) or index finger (top) while ignoring visual stimuli embedded in the cylinder to be grasped (top and bottom red circles); these stimuli produced either spatially congruent or incongruent patterns of visuotactile stimulation (dark and light gray-framed panels, respectively). (B) Sudden opening of shutter goggles prompted participants to grasp the cylinder in a given orientation; participants' vision was inhibited by the shutters before the beginning of each trial. Across blocks, unisensory tactile and multisensory visuotactile stimulation were delivered unpredictably, time-locked to crucial phases of the action: the object vision phase (50 msec after the goggles' opening), action planning phase (250 msec after the goggles' opening), movement onset phase (time-locked to individual motor RT), movement execution phase (200 msec after action onset), and max grip phase (time-locked to the MGA, available in real time via the kinematics recording).

To vary action plans and execution on a trial-by-trial basis, the cylinder was unpredictably rotated (manually, from behind the panel) to one of two different orientations before becoming visible: +36° (clockwise) or −36° (counterclockwise). Accordingly, the object orientation imposed either a clockwise (+36°) or counterclockwise (−36°) wrist rotation. Movements were recorded using an Optotrak 3020 system (Northern Digital, Inc.) with a sampling rate of 150 Hz (0.01-mm 3-D resolution at 2.25-m distance) via three infrared-emitting diodes (IREDs). Two IREDs were attached to the lateral and interior parts of the nails of the thumb and index finger, and one was attached to the interior part of the wrist at the styloid process level. These markers were used to perform online registration and subsequent offline reconstruction of the transport component (the change over time in wrist marker position while the right hand was reaching for the target) and grip component (the change over time in distance between index finger and thumb) of the action. Using MAIN, a software package developed in our laboratory (ImpAct) for preprocessing and 3-D visualization of kinematic data, we identified the following parameters without applying any filter to the position signals: peaks and relative latencies of wrist acceleration, velocity, and deceleration for the transport component of the movement, and peaks and relative latencies of maximum grip aperture (MGA) and velocity of grip aperture (VGA) for the grip component. Movement start was detected on the velocity curve of the wrist IRED with a threshold criterion of 15 mm/sec. The velocity was calculated as the first temporal derivative of the position signal of the wrist marker, using a 5-point time window. Its peak was defined as the maximum value between the point when the speed first exceeded the 15 mm/sec threshold and the point when it fell below this threshold again. Peaks of acceleration and deceleration were defined as the maximum and minimum values of the second temporal derivative of the wrist marker position signal, occurring before and after the velocity peak, respectively. The grip aperture was defined as the variation in time of the Euclidean distance between the index finger and thumb. Its peak was defined as the maximum value reached after movement initiation and before movement end. VGA was defined as the first derivative of the grip aperture measure. The movement end was set at the first of a series of points showing a stable grip aperture, signaling that the object had been steadily grasped. The latencies of all parameters correspond to their time of occurrence (msec) with respect to movement onset. All trials were inspected visually to spot accidental failures of the automatic procedure of the software. When necessary, manual detection of peaks and relative latencies was applied following the criteria described above.
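
For readers wishing to reproduce this style of analysis, the parameter extraction can be sketched as follows. This is not the MAIN software itself: the differentiation scheme (NumPy central differences rather than the 5-point window described above), the omission of movement-end detection, and all names are simplifying assumptions.

```python
# Sketch of the kinematic parameter extraction, assuming wrist, index, and
# thumb marker trajectories as (n, 3) arrays in mm sampled at 150 Hz.
import numpy as np

FS = 150.0            # Optotrak sampling rate (Hz)
DT = 1.0 / FS
ONSET_THRESH = 15.0   # movement-onset criterion on wrist speed (mm/sec)

def transport_parameters(wrist):
    """Return onset index and acceleration/velocity/deceleration peak indices."""
    velocity = np.gradient(wrist, DT, axis=0)        # first derivative of position
    speed = np.linalg.norm(velocity, axis=1)
    moving = np.flatnonzero(speed > ONSET_THRESH)
    onset, offset = moving[0], moving[-1]            # above/below 15 mm/sec
    vel_peak = onset + np.argmax(speed[onset:offset])
    accel = np.gradient(speed, DT)                   # acceleration profile
    acc_peak = onset + np.argmax(accel[onset:vel_peak])      # before velocity peak
    dec_peak = vel_peak + np.argmin(accel[vel_peak:offset])  # after velocity peak
    return onset, acc_peak, vel_peak, dec_peak

def grip_parameters(index, thumb, onset):
    """Return MGA and VGA peak indices from the index-thumb distance."""
    aperture = np.linalg.norm(index - thumb, axis=1)  # Euclidean grip aperture
    mga = onset + np.argmax(aperture[onset:])
    vga = np.argmax(np.gradient(aperture, DT))        # peak velocity of aperture
    return mga, vga

# Latencies relative to movement onset, as in the text:
# latency_msec = (peak_index - onset) / FS * 1000
```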

Design and Procedure

Participants sat at a table with the thumb and index finger of each hand in a closed pinch grip posture on two switches fixed to the table. They were instructed to perform two concurrent tasks during each trial: the perceptual task (speeded discrimination of tactile stimulus location: index finger or thumb) and the motor task (reaching and grasping the cylinder along its longitudinal axis with the right index finger and thumb). Each trial started with an auditory warning signal. After a variable delay (1500–2200 msec), the goggles opened (i.e., changed to the transparent state), constituting the go signal for the motor task (Figure 1B). The experiment consisted of six blocks of 80 trials: In two blocks, only tactile stimuli were delivered, and in the other four blocks, visual and tactile stimuli were delivered simultaneously. The unisensory and multisensory conditions were run in separate blocks to avoid any spurious effect of confounding factors (e.g., stimulus expectancy, attentional demands) in a fully counterbalanced design. Moreover, this design ensured an equivalent number of stimulation trials for the unisensory tactile and multisensory (congruent and incongruent) visuotactile conditions. Half of the participants started with tactile blocks, whereas the other half started with visuotactile blocks. In each block, stimulation was randomly delivered across trials at five different latencies (see Figure 1B): (1) the object vision phase, beginning 50 msec after the opening of the goggles; (2) the action planning phase, beginning 250 msec after the opening of the goggles; (3) the movement onset phase, in which movement initiation was detected by the release of the start switch; (4) the movement execution phase, beginning 200 msec after action onset; and (5) the max grip phase, which was time-locked to the MGA of the fingers computed online. The choice of stimulation times during the static phases (object vision and action planning) was dictated by the fact that precision grasp planning is typically not initiated earlier than 50 msec after a go signal (Koch et al., 2010). The object vision phase should therefore yield a “baseline” VTI, similar to that previously reported in the absence of action (Brozzoli et al., 2009). Note that both the object vision and action planning phases occurred before movement initiation. Thus, they differ in terms of the time elapsed from the moment when the visual features of the object became available for the motor program, but they are identical in terms of the motor state of the hand (i.e., immobile in both conditions).
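
The factor structure of a session can be illustrated with a short sketch. Only the structure itself (five phases, two orientations, congruency in visuotactile blocks, six blocks of 80 trials, counterbalanced block order) comes from the text; the equal per-cell trial counts and the simple shuffling are our assumptions.

```python
# Illustrative reconstruction of the trial schedule; per-cell counts and
# randomization constraints are assumptions, not reported values.
import itertools
import random

PHASES = ["object_vision", "action_planning", "movement_onset",
          "movement_execution", "max_grip"]
ORIENTATIONS = [+36, -36]  # clockwise / counterclockwise (degrees)

def make_block(block_type, n_trials=80):
    if block_type == "tactile":
        cells = list(itertools.product(PHASES, ORIENTATIONS))          # 10 cells
    else:  # visuotactile blocks add the congruency factor
        cells = list(itertools.product(PHASES, ORIENTATIONS,
                                       ["congruent", "incongruent"]))  # 20 cells
    trials = cells * (n_trials // len(cells))  # 8 or 4 repetitions per cell
    random.shuffle(trials)
    return trials

# Two tactile and four visuotactile blocks; half of the participants
# started with tactile blocks, the other half with visuotactile blocks.
session = ([make_block("tactile") for _ in range(2)]
           + [make_block("visuotactile") for _ in range(4)])
```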

Statistics

Because our RT data were not normally distributed, as shown by the Lilliefors-corrected Kolmogorov–Smirnov test, we applied a log transformation to the raw data. Statistical analyses were conducted on transformed RTs; however, for the sake of clarity, bar plots display untransformed RTs expressed in milliseconds.

To assess the dynamics of multisensory interactions, we calculated the VTI as the difference between RTs for spatially incongruent and congruent VT trials, as this difference quantifies the strength of the interaction between visual and tactile stimuli. As similar patterns of results were found for accuracy scores, for the sake of brevity, we report only analyses and results for RTs. A two-way ANOVA was performed with within-subject factors of Object orientation (clockwise vs. counterclockwise) and Timing (object vision phase vs. action planning phase vs. movement onset phase vs. movement execution phase vs. max grip phase). A similar ANOVA was run on tactile RTs to test potential unisensory tactile modulation during action. Because we were interested in assessing the time course of VTI changes during action as compared with object vision phase, any significant effect of Timing was followed up with two-tailed paired-samples t tests contrasting object vision with all the remaining phases (Bonferroni correction was applied to control for the family-wise error).
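
A minimal sketch of this pipeline is given below, assuming a hypothetical long-format table with columns subject, orientation, timing, congruency, and rt; the file and column names are ours, not those of the original analysis code.

```python
# Sketch of the RT analysis: log transform, VTI computation, 2 x 5
# repeated-measures ANOVA, and Bonferroni-corrected follow-up t tests.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("rt_data.csv")      # hypothetical data file
df["log_rt"] = np.log(df["rt"])      # RTs were not normally distributed

# VTI = incongruent minus congruent log RT per subject/orientation/timing
vti = (df.pivot_table(index=["subject", "orientation", "timing"],
                      columns="congruency", values="log_rt")
         .assign(vti=lambda d: d["incongruent"] - d["congruent"])
         .reset_index())

# 2 (Object orientation) x 5 (Timing) repeated-measures ANOVA on the VTI
print(AnovaRM(vti, depvar="vti", subject="subject",
              within=["orientation", "timing"]).fit())

# Follow-ups: object vision vs. each later phase, two-tailed, Bonferroni
wide = vti.pivot_table(index="subject", columns="timing", values="vti")
later = ["action_planning", "movement_onset", "movement_execution", "max_grip"]
for phase in later:
    t, p = stats.ttest_rel(wide[phase], wide["object_vision"])
    print(phase, round(t, 2), min(p * len(later), 1.0))
```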

In addition, we compared VT to unisensory tactile performance. The reasoning behind this choice was twofold. First, we aimed to evaluate the impact of potential variations in unisensory T on multisensory VT perception to ensure that VTI was genuinely affected, independent of any change in unisensory touch perception per se. Second, we aimed to test whether the observed VTI is driven by facilitatory and/or interfering multisensory processes. Thus, for each timing, we expressed RTs for congruent and incongruent VT trials relative to unisensory tactile performance (i.e., congruent RTs = VT congruent RTs − T RTs; incongruent RTs = VT incongruent RTs − T RTs). Object orientation (clockwise vs. counterclockwise) conditions were collapsed because this factor did not affect uni- or multisensory performance. Any significant deviation from zero would thus indicate an effect of the visual event on the perception of touch, in terms of either facilitation (if <0) or interference (if >0). We therefore assessed differences between conditions by running one series of Bonferroni-corrected one-tailed t tests against a null hypothesis of zero (i.e., one-sample t tests) for the congruent condition and a second series for incongruent RTs.
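
The same logic can be sketched for a single timing, assuming per-subject arrays of mean log RTs; the variable names are hypothetical.

```python
# Facilitation/interference test for one timing: one-tailed one-sample
# t tests of (VT - T) differences against zero, Bonferroni-corrected.
import numpy as np
from scipy import stats

def facilitation_interference(vt_cong, vt_incong, t_only, n_tests=5):
    """Each argument: per-subject mean log RTs (length n = 16, assumed)."""
    cong = np.asarray(vt_cong) - np.asarray(t_only)      # < 0 -> facilitation
    incong = np.asarray(vt_incong) - np.asarray(t_only)  # > 0 -> interference
    t_c, p_c = stats.ttest_1samp(cong, 0.0, alternative="less")
    t_i, p_i = stats.ttest_1samp(incong, 0.0, alternative="greater")
    return {"facilitation": (t_c, min(p_c * n_tests, 1.0)),
            "interference": (t_i, min(p_i * n_tests, 1.0))}
```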

For the motor task, the primary kinematic parameters for the transport and grip components of the movements were analyzed to assess potential differences in the movement profile across conditions. A series of three-way ANOVAs was conducted on the VT condition with Stimulus (congruent vs. incongruent), Object orientation (clockwise vs. counterclockwise), and Timing (object vision phase vs. action planning phase vs. movement onset phase vs. movement execution phase vs. max grip phase) as within-subject factors. Separate ANOVAs were performed for the latency and amplitude of the acceleration, deceleration, and velocity peaks (transport component) as well as for the MGA and VGA peaks (grip component). A similar series of two-way ANOVAs was conducted on the T condition with Object orientation and Timing as within-subject factors. Separate ANOVAs were conducted for the peak and latency of each kinematic parameter. The kinematic analyses were intended to rule out the possibility that VTI increases over time merely reflected a difference in motor performance across conditions. Rather, we expected the kinematics of grasping movements to be affected primarily by object orientation in both VT and T trials.

Hereafter, effect sizes are reported in terms of partial eta squared (ηp²) and Cohen's d, and averages are reported along with the SEM. Unless stated otherwise, only significant results are reported.

RESULTS

Visuotactile Performance

Significant action-dependent modulation of VTI was observed (main effect of Timing, F(4, 60) = 11.22, p < .0001, ηp² = .43). Multisensory interactions were enhanced during action planning, before any overt movement of the hand occurred. Indeed, even though the hand was still immobile, participants displayed greater VTI in the action planning phase (66 ± 9 msec, untransformed values) than in the object vision phase (37 ± 12 msec), t(15) = 3.92, Cohen's d = 0.98, Bonferroni-corrected p = .005. Moreover, VTI further increased during all the dynamic phases with respect to the object vision phase (movement onset: 92 ± 13 msec, t(15) = 3.90, Cohen's d = 0.97, Bonferroni-corrected p = .006; movement execution: 110 ± 13 msec, t(15) = 11.01, Cohen's d = 2.75, Bonferroni-corrected p < .001; max grip: 99 ± 14 msec, t(15) = 3.90, Cohen's d = 1.25, Bonferroni-corrected p < .001; see Figure 2A).

Figure 2. 

Multisensory–motor planning and execution. (A) Bar plots (with SEM) show the modulation of VTI (incongruent minus congruent difference on untransformed RTs) as a function of timing. Asterisks indicate significant differences between the object vision and all other phases. (B) Bar plots (with SEM) display visuotactile untransformed RTs relative to unisensory tactile untransformed RTs. Asterisks indicate significant deviations from 0 (either facilitation if <0 or interference if >0). The multisensory effect comprised both interference (during all phases from action planning onward) and selective facilitation (in the movement onset phase).

Tactile Performance

Unisensory T perception was affected by action execution (main effect of Timing, F(4, 60) = 12.60, p < .001, ηp² = .46). As compared with the object vision phase (485 ± 26 msec, untransformed values), participants were faster at discriminating which finger had been touched during the dynamic phases (movement execution: 421 ± 20 msec, t(15) = 4.04, Cohen's d = 1.01, Bonferroni-corrected p = .004; max grip: 410 ± 14 msec, t(15) = 3.61, Cohen's d = 0.90, Bonferroni-corrected p = .010), except at movement onset (500 ± 30 msec, p > .05). Crucially, unisensory T performance was also better during action planning (456 ± 24 msec), t(15) = 3.92, Cohen's d = 0.98, Bonferroni-corrected p = .005 (see Figure 3).

Figure 3. 

Unisensory tactile performance. Bar plots (with SEM) display modulation of tactile untransformed RTs as a function of timing. Asterisks indicate significant differences from the object vision phase.

VT Performance Relative to Tactile Performance

Facilitation of tactile discrimination by congruent visual stimuli occurred selectively at action onset, t(15) = 2.72, Cohen's d = 0.68, Bonferroni-corrected p = .04. In contrast, interference with tactile discrimination by incongruent visual stimulation emerged in all remaining phases, from the action planning phase onward (all ts(15) > 5.10, Cohen's ds > 1.27, Bonferroni-corrected ps < .001). Neither facilitation nor interference effects emerged in the object vision phase (see Figure 2B).

Motor Performance

Kinematic analysis of the transport component parameters showed that movements required for grasping the counterclockwise-oriented object resulted in larger peaks than movements required for the clockwise-oriented object. Velocity peak latency was accordingly modulated by object orientation (Table 1). This modulation was present irrespective of the type of VT stimulation (congruent or incongruent) and, as expected, was confirmed by kinematic analysis of the unisensory T condition, which exhibited a similar modulation of the transport component peaks (Table 1). No main effect of object orientation was found for latencies in the unisensory T condition, although velocity peak latency tended to differ according to the clockwise/counterclockwise orientation of the target object (Table 1). The effect of object orientation held across all levels of the other independent variables (see Table 2 for an exhaustive report of other statistically significant results). No other significant main effect or interaction was observed.

Table 1. 
Kinematic Results
Effect of Object Orientation on the Transport Component

Multisensory Visuotactile Condition
Acceleration peak: F(1, 15) = 5.34, p = .036*; counterclockwise = 8517 ± 785 mm/sec², clockwise = 8358 ± 765 mm/sec². Acceleration latency: F(1, 15) = 0.47, p = .504; counterclockwise = 114 ± 6 msec, clockwise = 115 ± 7 msec.
Velocity peak: F(1, 15) = 14.42, p = .002*; counterclockwise = 1421 ± 62 mm/sec, clockwise = 1397 ± 62 mm/sec. Velocity latency: F(1, 15) = 5.38, p = .035*; counterclockwise = 310 ± 12 msec, clockwise = 313 ± 12 msec.
Deceleration peak: F(1, 15) = 20.27, p ≤ .001*; counterclockwise = −6745 ± 494 mm/sec², clockwise = −6452 ± 471 mm/sec². Deceleration latency: F(1, 15) = 2.25, p = .154; counterclockwise = 448 ± 17 msec, clockwise = 452 ± 17 msec.

Unisensory Tactile Condition
Acceleration peak: F(1, 15) = 4.92, p = .043*; counterclockwise = 7975 ± 617 mm/sec², clockwise = 7777 ± 642 mm/sec². Acceleration latency: F(1, 15) = 0.97, p = .369; counterclockwise = 118 ± 8 msec, clockwise = 119 ± 9 msec.
Velocity peak: F(1, 15) = 4.74, p = .046*; counterclockwise = 1403 ± 57 mm/sec, clockwise = 1378 ± 57 mm/sec. Velocity latency: F(1, 15) = 3.64, p = .076; counterclockwise = 325 ± 16 msec, clockwise = 328 ± 14 msec.
Deceleration peak: F(1, 15) = 7.79, p = .014*; counterclockwise = −6673 ± 430 mm/sec², clockwise = −6374 ± 409 mm/sec². Deceleration latency: F(1, 15) = 2.21, p = .157; counterclockwise = 463 ± 19 msec, clockwise = 467 ± 18 msec.

Effect of Timing on the Grip Component

Multisensory Visuotactile Condition
MGA peak: F(4, 60) = 4.11, p = .005*; object vision = 114 ± 2 mm, action planning = 115 ± 2 mm, movement onset = 114 ± 2 mm, movement execution = 113 ± 2 mm, max grip = 111 ± 2 mm. MGA latency: F(4, 60) = 12.70, p ≤ .001*; object vision = 510 ± 17 msec, action planning = 512 ± 17 msec, movement onset = 532 ± 16 msec, movement execution = 550 ± 17 msec, max grip = 563 ± 20 msec.
VGA peak: F(4, 60) = 2.80, p = .034*; object vision = 570 ± 49 mm/sec, action planning = 587 ± 50 mm/sec, movement onset = 579 ± 51 mm/sec, movement execution = 553 ± 49 mm/sec, max grip = 553 ± 52 mm/sec. VGA latency: F(4, 60) = 30.39, p < .001*; object vision = 266 ± 17 msec, action planning = 289 ± 18 msec, movement onset = 332 ± 17 msec, movement execution = 335 ± 19 msec, max grip = 330 ± 31 msec.

Unisensory Tactile Condition
MGA peak: F(4, 60) = 6.72, p ≤ .001*; object vision = 114 ± 2 mm, action planning = 113 ± 2 mm, movement onset = 112 ± 2 mm, movement execution = 110 ± 2 mm, max grip = 113 ± 2 mm. MGA latency: F(4, 60) = 9.84, p ≤ .001*; object vision = 519 ± 23 msec, action planning = 521 ± 23 msec, movement onset = 542 ± 20 msec, movement execution = 568 ± 20 msec, max grip = 566 ± 27 msec.
VGA peak: F(4, 60) = 2.74, p = .037*; object vision = 555 ± 48 mm/sec, action planning = 563 ± 53 mm/sec, movement onset = 555 ± 50 mm/sec, movement execution = 527 ± 44 mm/sec, max grip = 536 ± 50 mm/sec. VGA latency: F(4, 60) = 17.11, p < .001*; object vision = 287 ± 23 msec, action planning = 302 ± 22 msec, movement onset = 341 ± 26 msec, movement execution = 348 ± 26 msec, max grip = 336 ± 25 msec.

Top: Main effect of object orientation on the transport component of grasping movements in the multisensory VT and unisensory T conditions. Bottom: Main effect of timing on the grip component of grasping movements in the multisensory VT and unisensory T conditions. Asterisks denote significant effects.

Table 2. 
Other Significant Main Effects and Interactions Observed for ANOVAs Performed on Kinematic Parameters for Multisensory VT and Unisensory T Condition
Multisensory Visuotactile Condition
Acceleration latency: Timing, F(4, 60) = 4.97, p = .002
Velocity latency: Timing, F(4, 60) = 9.53, p < .001; Timing × Stimulus, F(4, 60) = 4.04, p = .006
Deceleration latency: Timing, F(4, 60) = 5.07, p = .001; Timing × Stimulus, F(4, 60) = 3.11, p = .022
MGA latency: Stimulus, F(1, 15) = 7.06, p = .018
VGA latency: Stimulus, F(1, 15) = 8.33, p = .011

Unisensory Tactile Condition
Velocity latency: Timing, F(4, 60) = 8.41, p < .001
Deceleration latency: Timing, F(4, 60) = 3.28, p = .017

Kinematic analysis of the grip component parameters revealed that the timing of sensory stimulation affected both latency and amplitude of the MGA and VGA, and this effect was similar for multi- and unisensory conditions. Participants tended to open their fingers wider and faster when stimulation was delivered in the static versus dynamic phases, both under multi- and unisensory conditions and irrespective of the congruency between visual and tactile events. Furthermore, participants displayed longer latencies for these parameters in the dynamic phases of the action (see Table 1), again regardless of the type of stimulation (uni- or multisensory). No other significant effect on the kinematics of the grip component was observed (see Table 2).

DISCUSSION

This study aimed to test whether the PHS representation is remapped for action purposes even before overt movement, that is, while planning a voluntary grasping action. We demonstrated that the brain updates the relationship between visual signals from the target object and tactile signals from the acting hand at earlier stages than previously known. Notably, this result indicates that PHS is modified by action planning and is thus temporally suited to “remapping” the (distant) action target into the PHS representation. Such remapping may allow the action target to benefit from the distinctive multisensory processing known to occur within the PHS representation. We suggest that this multisensory–motor processing may contribute to guiding the hand toward a goal during voluntary movements. Contrary to threat-driven defensive actions, voluntary actions afford and actually require a planning step, during which the brain prepares the appropriate sequence of motor commands to achieve the desired goal (Culham & Valyear, 2006; Castiello, 2005). Our results are in line with the guidance role proposed for PHS during appetitive hand–object interactions.

We provided previous evidence in favor of the hypothesis that PHS supports the execution of appetitive actions (Brozzoli et al., 2009, 2010). Here, however, we overcame two major shortcomings. First, as mentioned in the Introduction, previous work reported VTI increases only after initiation of the reach-to-grasp movement. By that point, the hand had already moved closer to the target object, albeit only slightly. Therefore, an alternative interpretation is that the VTI increase documented at action start was actually (at least partially) dependent upon the reduced distance between the hand and the target object. Notably, in the current study, we found that VTI increased when participants prepared the upcoming movement (250 msec after vision of the target) as compared with a similarly static but earlier phase (50 msec after vision of the target). In the latter phase, although the hand was similarly immobile and at the same distance from the target object, the movement had yet to be prepared. Because planning of a precision grasping movement is believed to take place well after 50 msec from the go signal (Michaels, Dann, Intveld, & Scherberger, 2018; Churchland et al., 2012; Koch et al., 2010; Churchland, Yu, Ryu, Santhanam, & Shenoy, 2006), the two different timings from the go signal make the two phases differ in terms of action planning (i.e., absent vs. present). Such an increase in VTI during the planning phase of an action suggests that PHS is recruited before an upcoming action, thus providing crucial but previously missing evidence for a functional role of PHS in motor control.

The second potential shortcoming that one could identify in our previous work is that the effect of action on PHS we demonstrated could be compatible with an interaction between action and perception in terms of tactile suppression (Juravle, Deubel, Tan, & Spence, 2010; Voss et al., 2006; Chapman, 1994; Chapman et al., 1987). As our new data demonstrate, multisensory changes were independent of variations in unisensory tactile perception. In particular, the pattern of VTI modulation reported here is not accounted for by any reduction (i.e., impairment) in unisensory tactile perception, possibly due to concurrent action-dependent tactile suppression (Chapman et al., 1987). In fact, we found that tactile discrimination improved (as evidenced by shorter RTs) during action planning as well as during later stages of action execution compared with object vision. These results align with evidence that tactile sensations can be enhanced, decreased, or even unchanged during movement depending on perceptual task demands (Juravle, Binsted, & Spence, 2017; Colino, Buckingham, Cheng, van Donkelaar, & Binsted, 2014; Post, Zompa, & Chapman, 1994; Chapman et al., 1987). Thus, contrary to what tactile suppression would predict, unisensory tactile performance improved during action planning, ruling out the possibility that decreased tactile performance explains the strengthening of multisensory interactions observed when preparing the action.

Another notable finding of this study is that, compared with unisensory tactile performance, multisensory VTI was characterized both by interference and by a facilitation that was selective for action start. Although most previous studies pointed to differences between VT-congruent and -incongruent trials (Marini, Romano, & Maravita, 2017; Spence, Pavani, & Driver, 2004; Spence, Pavani, Maravita, et al., 2004), here we additionally examined multisensory VT performance with respect to unisensory T performance (Noel et al., 2015; Serino et al., 2015; Shore et al., 2006). We thus revealed for the first time that the strength of the interaction between visual and tactile information results from distinct contributions of multisensory interference and multisensory facilitation. We wish to highlight that a pervasive interference effect was detected during all the planning and execution phases of the action, whereas a facilitation effect emerged only at the onset of movement execution. One might speculate that such an asymmetrical impact of interference and facilitation on VTI could reflect different neural integrative mechanisms. In this respect, neural multisensory integration (defined as a nonlinear summation of the response to VT stimuli, different from the sum of V + T stimuli; Stein, Stanford, Ramachandran, Perrault, & Rowland, 2009; Stein & Stanford, 2008) has been assessed in a parietal area coding PPS in monkeys. From a neuronal perspective, the ventral intraparietal cortex does indeed perform multisensory source fusion, showing heterogeneous integration responses in which subadditive neurons (i.e., decreased neural activity during integration) are more numerous than superadditive neurons (i.e., enhanced neural activity; Avillac, Hamed, & Duhamel, 2007; Avillac et al., 2005). This aligns with recent findings from an intracranial electrocorticography study in humans investigating multisensory integration between vision and touch. Indeed, this study demonstrated that, in the supramarginal gyrus, multisensory VT integration invariably resulted in subadditive responses, suggesting that multisensory integration might proceed mainly through local neuronal inhibition (Quinn et al., 2014). Although it is tempting to pool these findings and claim that multisensory interference dominates over facilitation at both the neural and the behavioral level, this parallel should be made with caution. The direct link between multisensory VT integration and PPS representation, as well as the hypothesis that the different contributions of interference and facilitation effects derive from different neural computations, deserves future investigation. However, the observation in the present report that VTI was enhanced in terms of both facilitation and interference at the onset of grasping movements agrees with the hypothesis that visuotactile processes may contribute to successfully guiding the hand to its target.

In agreement with previous work, when the movement was not yet prepared, the strength of VTI was similar to when no movement was required at all. Indeed, when comparing the object vision phase in this study to a purely static condition in a previous study, we found a similar amount of VTI (37 msec vs. 33 msec, respectively, p = .82; see Experiment 1 of Brozzoli et al., 2009, in which participants performed the perceptual task only). That is, the magnitude of the multisensory effect reported in the present grasping setting (before action planning) is similar to that measured in a static setting, where neither action planning nor action execution is required. Here, we crucially demonstrated that, although the overt movement had not yet been initiated, VTI started to increase significantly as early as during the planning of the upcoming action. Moreover, VTI was further enhanced when the hand started moving, as well as during execution of the reaching phase (Brozzoli et al., 2009). By monitoring VTI at later action stages than investigated in previous work, we demonstrated that VTI modulation lasts at least until completion of the finger opening phase, when multisensory remapping appears to plateau (see Figure 2A). Thus, these results provide the first indication that multisensory PHS boundaries may reflect a continuous process that starts developing during action planning and evolves online during action execution to monitor and possibly adjust movements until their completion. From this perspective, we anticipate that VTI decreases during later stages of action execution, despite the hand moving closer to the object from which visually interfering information originates.

Finally, hand motion tracking allowed us to further assess whether changes in VTI and thus in PHS extent were dependent on changes in motor behavior caused by the contingent sensory stimulation. In keeping with previous studies (Brozzoli et al., 2009, 2010), the kinematics of the transport component of the movement showed a consistent effect in terms of object orientation, whereby counterclockwise orientation of the target object elicited kinematically more demanding reaching movements than clockwise orientation. Analyses of the grip component of the movement additionally revealed that timing of sensory stimulation affected the aperture of the fingers. Crucially, whenever the perceptual task affected movement kinematic parameters, it did so in the same way in uni- and multisensory conditions: Multi- and unisensory perception were thus assessed during the planning and execution of comparably demanding grasping movements. Overall, these findings point to a genuine modulation of multisensory interaction processes arising during the preparation of grasping actions.

When considering the possible neural underpinnings of multisensory PHS coding and its action-dependent changes, we note that there may be an intriguing overlap between the two processes. Recent neuroimaging results in humans have shown that a set of brain areas, including the anterior portion of the intraparietal sulcus (aIPS), premotor cortex, and putamen, contains neurons that are selective for the visual presence of an object in the space surrounding the hand (Brozzoli et al., 2011, 2013; Makin et al., 2007). This series of studies also demonstrated that visual selectivity for space near the hand is anchored to this body part such that when it moves in space between two locations, the near-hand selective response follows the hand (Brozzoli et al., 2012). Notably, such visual selectivity remains evident even when near and far locations are both within a reachable distance, indicating that PHS does not coincide with the portion of space that is reachable. This result suggests that human premotor–posterior parietal neuronal populations encode space near the hands in hand-centered coordinates, similar to nonhuman primate frontoparietal areas (Cléry et al., 2015; Makin et al., 2009; Rizzolatti et al., 1997). These findings, together with previous behavioral studies (Brozzoli et al., 2009, 2010), suggest that multisensory changes originating during action planning may be coded at the level of the PHS representation (Brozzoli et al., 2014; Makin et al., 2012). Neuroimaging studies investigating brain areas involved in the execution of grasping and reaching movements also point to regions within the parietal and premotor cortices (Castiello & Begliomini, 2008; Grol et al., 2007; Culham & Valyear, 2006). For example, fMRI studies have demonstrated that action-dependent activity in similar parietal and premotor areas is modulated as a function of the type of action (grasping or reaching) or as a function of the degree of online control required by the action, even before overt execution of the movement (Grol et al., 2007; Culham & Valyear, 2006). Again, this set of results is compatible with neurophysiological research that identifies the neural circuits for grasping and reaching in the macaque brain as residing within a frontoparietal network (Fogassi & Luppino, 2005; Gardner, Debowy, Ro, Ghosh, & Srinivasa Babu, 2002; Rizzolatti et al., 1988). Solid evidence supports the view that the cortical visuomotor grasping circuit, comprising the intraparietal sulcus, ventral premotor, and primary motor cortex, allows for transformation of an object's physical properties into a suitable motor command for grasping (Castiello & Begliomini, 2008; Castiello, 2005; Murata, Gallese, Luppino, Kaseda, & Sakata, 2000; Murata et al., 1997). Several TMS studies converge in supporting the causal role of aIPS in motor planning. For instance, Davare and colleagues showed that the muscle-specific ventral premotor–primary motor cortex interactions that normally appear during grasp planning were significantly reduced following aIPS interference (Davare, Kraskov, Rothwell, & Lemon, 2011; Davare, Rothwell, & Lemon, 2010; see also Verhagen, Dijkerman, Medendorp, & Toni, 2012, for a TMS study assessing the causal role of aIPS during planning). Overall, the literature from human and nonhuman primates indicates neural and functional similarities between the circuits coding PHS and those controlling grasping.
We therefore suggest that modulation of multisensory perception occurring in the planning phase of a grasping action may arise from activity within the premotor–parietal network involved in the multisensory hand-centered representation of PPS. Our findings also point to an interesting prediction: visuotactile neurons involved in this representation may not only update their visual receptive field location as a function of the hand position in space but also anticipate the upcoming hand position just before the movement starts (see Belardinelli, Lohmann, Farnè, & Butz, 2018). Similar remapping mechanisms have been described for visual receptive fields before upcoming saccadic movements (Duhamel, Colby, & Goldberg, 1992). Although further studies in human and nonhuman primates are needed to identify the physiological mechanisms underlying such a behavioral “remapping” of PPS, the current study provides the first evidence that multisensory interactions are dynamically enhanced both before and during action execution. Early multisensory–motor processes that temporally precede and subsequently accompany overt motor execution are ideally suited to planning and guiding our actions.

Acknowledgments

This work was supported by the following grants: FRM Fellowship FDT20080914045 to C. B.; IHU CeSaMe ANR-10-IBHU-0003, FRC (Fédération pour la Recherche sur le Cerveau, Neurodon), and the James S. McDonnell Scholar Award to A. F.; and Doctoral Mobility grants from the Avenir Lyon Saint-Etienne Program to I. P. This work was performed within the framework of the LABEX CORTEX (ANR-11-LABX-0042) of Université de Lyon. L. C. was supported by the Fyssen Foundation. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Reprint requests should be sent to Claudio Brozzoli, Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16, ave Doyen Lépine, Lyon, France, or via e-mail: claudio.brozzoli@inserm.fr.

REFERENCES

Avillac, M., Denève, S., Olivier, E., Pouget, A., & Duhamel, J.-R. (2005). Reference frames for representing visual and tactile locations in parietal cortex. Nature Neuroscience, 8, 941–949.

Avillac, M., Hamed, S. B., & Duhamel, J.-R. (2007). Multisensory integration in the ventral intraparietal area of the macaque monkey. Journal of Neuroscience, 27, 1922–1932.

Belardinelli, A., Lohmann, J., Farnè, A., & Butz, M. V. (2018). Mental space maps into the future. Cognition, 176, 65–73.

Berti, A., & Frassinetti, F. (2000). When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience, 12, 415–420.

Blanke, O., Slater, M., & Serino, A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron, 88, 145–166.

Brozzoli, C., Cardinali, L., Pavani, F., & Farnè, A. (2010). Action-specific remapping of peripersonal space. Neuropsychologia, 48, 796–802.

Brozzoli, C., Ehrsson, H. H., & Farnè, A. (2014). Multisensory representation of the space near the hand: From perception to action and interindividual interactions. Neuroscientist, 20, 122–135.

Brozzoli, C., Gentile, G., Bergouignan, L., & Ehrsson, H. H. (2013). A shared representation of the space near oneself and others in the human premotor cortex. Current Biology, 23, 1764–1768.

Brozzoli, C., Gentile, G., & Ehrsson, H. H. (2012). That's near my hand! Parietal and premotor coding of hand-centered space contributes to localization and self-attribution of the hand. Journal of Neuroscience, 32, 14573–14582.

Brozzoli, C., Gentile, G., Petkova, V. I., & Ehrsson, H. H. (2011). fMRI adaptation reveals a cortical mechanism for the coding of space near the hand. Journal of Neuroscience, 31, 9023–9031.

Brozzoli, C., Pavani, F., Urquizar, C., Cardinali, L., & Farnè, A. (2009). Grasping actions remap peripersonal space. NeuroReport, 20, 913–917.

Cardinali, L., Brozzoli, C., & Farnè, A. (2009). Peripersonal space and body schema: Two labels for the same concept? Brain Topography, 21, 252–260.

Castiello, U. (2005). The neuroscience of grasping. Nature Reviews Neuroscience, 6, 726–736.

Castiello, U., & Begliomini, C. (2008). The cortical control of visually guided grasping. Neuroscientist, 14, 157–170.

Chapman, C. E. (1994). Active versus passive touch: Factors influencing the transmission of somatosensory signals to primary somatosensory cortex. Canadian Journal of Physiology and Pharmacology, 72, 558–570.

Chapman, C. E., Bushnell, M. C., Miron, D., Duncan, G. H., & Lund, J. P. (1987). Sensory perception during movement in man. Experimental Brain Research, 68, 516–524.

Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., et al. (2012). Neural population dynamics during reaching. Nature, 487, 51–56.

Churchland, M. M., Yu, B. M., Ryu, S. I., Santhanam, G., & Shenoy, K. V. (2006). Neural variability in premotor cortex provides a signature of motor preparation. Journal of Neuroscience, 26, 3697–3712.

Cléry, J., Guipponi, O., Wardak, C., & Ben Hamed, S. (2015). Neuronal bases of peripersonal and extrapersonal spaces, their plasticity and their dynamics: Knowns and unknowns. Neuropsychologia, 70, 313–326.

Cohen, Y. E., & Andersen, R. A. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3, 553–562.

Colby, C. L., Duhamel, J.-R., & Goldberg, M. E. (1993). Ventral intraparietal area of the macaque: Anatomic location and visual response properties. Journal of Neurophysiology, 69, 902–914.

Colino, F. L., Buckingham, G., Cheng, D. T., van Donkelaar, P., & Binsted, G. (2014). Tactile gating in a reaching and grasping task. Physiological Reports, 2, e00267.

Cooke, D. F., & Graziano, M. S. (2004). Super-flinchers and nerves of steel: Defensive movements altered by chemical manipulation of a cortical motor area. Neuron, 43, 585–593.

Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16, 205–212.

Davare, M., Kraskov, A., Rothwell, J. C., & Lemon, R. N. (2011). Interactions between areas of the cortical grasping network. Current Opinion in Neurobiology, 21, 565–570.

Davare, M., Rothwell, J. C., & Lemon, R. N. (2010). Causal connectivity between the human anterior intraparietal area and premotor cortex during grasp. Current Biology, 20, 176–181.

de Vignemont, F., & Iannetti, G. D. (2015). How many peripersonal spaces? Neuropsychologia, 70, 327–334.

di Pellegrino, G., & Làdavas, E. (2015). Peripersonal space in the brain. Neuropsychologia, 66, 126–133.

di Pellegrino, G., Làdavas, E., & Farné, A. (1997). Seeing where your hands are. Nature, 388, 730.

Duhamel, J.-R., Bremmer, F., Ben Hamed, S., & Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389, 845–848.

Duhamel, J.-R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.

Farnè, A., Bonifazi, S., & Làdavas, E. (2005). The role played by tool-use and tool-length on the plastic elongation of peri-hand space: A single case study. Cognitive Neuropsychology, 22, 408–418.

Farnè, A., Demattè, M. L., & Làdavas, E. (2005). Neuropsychological evidence of modular organization of the near peripersonal space. Neurology, 65, 1754–1758.

Farnè, A., Iriki, A., & Làdavas, E. (2005). Shaping multisensory action–space with tools: Evidence from patients with cross-modal extinction. Neuropsychologia, 43, 238–248.

Farnè, A., & Làdavas, E. (2000). Dynamic size-change of hand peripersonal space following tool use. NeuroReport, 11, 1645–1649.

Farnè, A., Pavani, F., Meneghello, F., & Làdavas, E. (2000). Left tactile extinction following visual stimulation of a rubber hand. Brain, 123, 2350–2360.

Ferri, F., Costantini, M., Huang, Z., Perrucci, M. G., Ferretti, A., Romani, G. L., et al. (2015). Intertrial variability in the premotor cortex accounts for individual differences in peripersonal space. Journal of Neuroscience, 35, 16328–16339.

Fogassi, L., Gallese, V., Fadiga, L., Luppino, G., Matelli, M., & Rizzolatti, G. (1996). Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology, 76, 141–157.

Fogassi, L., & Luppino, G. (2005). Motor functions of the parietal lobe. Current Opinion in Neurobiology, 15, 626–631.

Gardner, E. P., Debowy, D. J., Ro, J. Y., Ghosh, S., & Srinivasa Babu, K. (2002). Sensory monitoring of prehension in the parietal lobe: A study using digital video. Behavioural Brain Research, 135, 213–224.

Gentilucci, M., Fogassi, L., Luppino, G., Matelli, M., Camarda, R., & Rizzolatti, G. (1988). Functional organization of inferior area 6 in the macaque monkey. Experimental Brain Research, 71, 475–490.

Graziano, M. S., & Cooke, D. F. (2006). Parieto-frontal interactions, personal space, and defensive behavior. Neuropsychologia, 44, 2621–2635.

Graziano, M. S., & Gross, C. G. (1993). A bimodal map of space: Somatosensory receptive fields in the macaque putamen with corresponding visual receptive fields. Experimental Brain Research, 97, 96–109.

Graziano, M. S., Yap, G. S., & Gross, C. G. (1994). Coding of visual space by premotor neurons. Science, 266, 1054–1057.

Grol, M. J., Majdandžić, J., Stephan, K. E., Verhagen, L., Dijkerman, H. C., Bekkering, H., et al. (2007). Parieto-frontal connectivity during visually guided grasping. Journal of Neuroscience, 27, 11877–11887.

Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. NeuroReport, 7, 2325–2330.

Juravle, G., Binsted, G., & Spence, C. (2017). Tactile suppression in goal-directed movement. Psychonomic Bulletin & Review, 24, 1060–1076.

Juravle, G., Deubel, H., Tan, H. Z., & Spence, C. (2010). Changes in tactile sensitivity over the time-course of a goal-directed movement. Behavioural Brain Research, 208, 391–401.

Koch, G., Cercignani, M., Pecchioli, C., Versace, V., Oliveri, M., Caltagirone, C., et al. (2010). In vivo definition of parieto-motor connections involved in planning of grasping movements. Neuroimage, 51, 300–312.

Maimon-Mor, R. O., Johansen-Berg, H., & Makin, T. R. (2017). Peri-hand space representation in the absence of a hand—Evidence from congenital one-handers. Cortex, 95, 169–171.

Makin, T. R., Brozzoli, C., Cardinali, L., Holmes, N. P., & Farnè, A. (2015). Left or right? Rapid visuomotor coding of hand laterality during motor decisions. Cortex, 64, 289–292.

Makin, T. R., Holmes, N. P., Brozzoli, C., & Farnè, A. (2012). Keeping the world at hand: Rapid visuomotor processing for hand–object interactions. Experimental Brain Research, 219, 421–428.

Makin, T. R., Holmes, N. P., Brozzoli, C., Rossetti, Y., & Farnè, A. (2009). Coding of visual space during motor preparation: Approaching objects rapidly modulate corticospinal excitability in hand-centered coordinates. Journal of Neuroscience, 29, 11841–11851.

Makin, T. R., Holmes, N. P., & Zohary, E. (2007). Is that near my hand? Multisensory representation of peripersonal space in human intraparietal sulcus. Journal of Neuroscience, 27, 731–740.

Maravita, A., Husain, M., Clarke, K., & Driver, J. (2001). Reaching with a tool extends visual–tactile interactions into far space: Evidence from cross-modal extinction. Neuropsychologia, 39, 580–585.

Maravita, A., Spence, C., Kennett, S., & Driver, J. (2002). Tool-use changes multimodal spatial interactions between vision and touch in normal humans. Cognition, 83, B25–B34.

Marini, F., Romano, D., & Maravita, A. (2017). The contribution of response conflict, multisensory integration, and body-mediated attention to the crossmodal congruency effect. Experimental Brain Research, 235, 873–887.

Martel, M., Cardinali, L., Roy, A. C., & Farnè, A. (2016). Tool-use: An open window into body representation and its plasticity. Cognitive Neuropsychology, 33, 82–101.

Michaels, J. A., Dann, B., Intveld, R. W., & Scherberger, H. (2018). Neural dynamics of variable grasp-movement preparation in the macaque fronto-parietal network. Journal of Neuroscience, 38, 5759–5773.

Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., & Rizzolatti, G. (1997). Object representation in the ventral premotor cortex (area F5) of the monkey. Journal of Neurophysiology, 78, 2226–2230.

Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.

Noel, J.-P., Grivaz, P., Marmaroli, P., Lissek, H., Blanke, O., & Serino, A. (2015). Full body action remapping of peripersonal space: The case of walking. Neuropsychologia, 70, 375–384.

Pavani, F., Spence, C., & Driver, J. (2000). Visual capture of touch: Out-of-the-body experiences with rubber gloves. Psychological Science, 11, 353–359.

Post, L. J., Zompa, I. C., & Chapman, C. E. (1994). Perception of vibrotactile stimuli during motor activity in human subjects. Experimental Brain Research, 100, 107–120.

Quinn, B. T., Carlson, C., Doyle, W., Cash, S. S., Devinsky, O., Spence, C., et al. (2014). Intracranial cortical responses during visual–tactile integration in humans. Journal of Neuroscience, 34, 171–181.

Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., & Matelli, M. (1988). Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Experimental Brain Research, 71, 491–507.

Rizzolatti, G., Fadiga, L., Fogassi, L., & Gallese, V. (1997). The space around us. Science, 277, 190–191.

Rizzolatti, G., Scandolara, C., Matelli, M., & Gentilucci, M. (1981a). Afferent properties of periarcuate neurons in macaque monkeys. I. Somatosensory responses. Behavioural Brain Research, 2, 125–146.

Rizzolatti, G., Scandolara, C., Matelli, M., & Gentilucci, M. (1981b). Afferent properties of periarcuate neurons in macaque monkeys. II. Visual responses. Behavioural Brain Research, 2, 147–163.

Sambo, C. F., Liang, M., Cruccu, G., & Iannetti, G. D. (2012). Defensive peripersonal space: The blink reflex evoked by hand stimulation is increased when the hand is near the face. Journal of Neurophysiology, 107, 880–889.

Scandola, M., Aglioti, S. M., Bonente, C., Avesani, R., & Moro, V. (2016). Spinal cord lesions shrink peripersonal space around the feet, passive mobilization of paraplegic limbs restores it. Scientific Reports, 6, 24126.

Sereno, M. I., & Huang, R.-S. (2006). A human parietal face area contains aligned head-centered visual and tactile maps. Nature Neuroscience, 9, 1337–1343.

Serino, A., Bassolino, M., Farnè, A., & Làdavas, E. (2007). Extended multisensory space in blind cane users. Psychological Science, 18, 642–648.

Serino, A., Noel, J.-P., Galli, G., Canzoneri, E., Marmaroli, P., Lissek, H., et al. (2015). Body part-centered and full body-centered peripersonal space representations. Scientific Reports, 5, 18603.

Shore, D. I., Barnes, M. E., & Spence, C. (2006). Temporal aspects of the visuotactile congruency effect. Neuroscience Letters, 392, 96–100.

Shore, D. I., Gray, K., Spry, E., & Spence, C. (2005). Spatial modulation of tactile temporal-order judgments. Perception, 34, 1251–1262.

Spence, C., Pavani, F., & Driver, J. (1998). What crossing the hands can reveal about crossmodal links in spatial attention. Abstracts of the Psychonomic Society, 3, 13.

Spence, C., Pavani, F., & Driver, J. (2004). Spatial constraints on visual–tactile cross-modal distractor congruency effects. Cognitive, Affective, & Behavioral Neuroscience, 4, 148–169.

Spence, C., Pavani, F., Maravita, A., & Holmes, N. (2004). Multisensory contributions to the 3-D representation of visuotactile peripersonal space in humans: Evidence from the crossmodal congruency task. Journal of Physiology-Paris, 98, 171–189.

Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9, 255–266.

Stein, B. E., Stanford, T. R., Ramachandran, R., Perrault, T. J., Jr., & Rowland, B. A. (2009). Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research, 198, 113–126.

van der Stoep, N., Serino, A., Farnè, A., Di Luca, M., & Spence, C. (2016). Depth: The forgotten dimension in multisensory research. Multisensory Research, 29, 493–524.

Verhagen, L., Dijkerman, H. C., Medendorp, W. P., & Toni, I. (2012). Cortical dynamics of sensorimotor integration during grasp planning. Journal of Neuroscience, 32, 4508–4519.

Voss, M., Ingram, J. N., Haggard, P., & Wolpert, D. M. (2006). Sensorimotor attenuation by central motor command signals in the absence of movement. Nature Neuroscience, 9, 26–27.

Author notes

This paper is part of a Special Focus deriving from a symposium at the 2017 International Multisensory Research Forum (IMRF).

* These authors contributed equally to the paper.

# These authors contributed equally to the paper.