Abstract
The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect, termed visual remapping of touch (VRT), is maximal when observing one's own face. In the present fMRI study, we investigated the neural basis of the VRT effect. Participants in the scanner received tactile stimuli, near the perceptual threshold, on their right, left, or both cheeks. Concurrently, they watched movies depicting their own face, another person's face, or a ball that could be touched or only approached by human fingers. Participants were requested to distinguish between unilateral and bilateral tactile stimulation. Behaviorally, perception of tactile stimuli was modulated by viewing a tactile stimulation, with a stronger effect when viewing one's own face being touched. In terms of brain activity, viewing touch was associated with enhanced activity in the ventral intraparietal area. The specific effect of viewing touch on oneself was instead associated with reduced activity in both the ventral premotor cortex and the somatosensory cortex. The present findings suggest that VRT is supported by a network of fronto-parietal areas. The ventral intraparietal area might remap visual information about touch onto tactile processing. The ventral premotor cortex might specifically modulate multisensory interaction when sensory information is related to one's own body. This activity might then back-project to the somatosensory cortices, thus affecting tactile perception.
INTRODUCTION
Viewing another person or even an object being touched activates brain regions normally recruited during tactile perception, even when the observer's body receives no direct tactile stimulation. Such visually evoked somatosensory activity involves a network of fronto-parietal areas distributed along the postcentral gyrus, the supramarginal gyrus, and the precentral gyrus (premotor cortex) (Ebisch et al., 2008; Blakemore, Bristow, Bird, Frith, & Ward, 2005; Keysers et al., 2004). This overlap of brain activity for perceiving and viewing touch has been taken as evidence for the existence of a "tactile mirror system," a neural mechanism remapping tactile sensations seen on the body of others onto one's own somatosensory system.
This visually dependent somatosensory activity does not normally result in an actual tactile percept, as most subjects do not report feeling touch when observing touch on the body of others. Visuotactile synesthetes represent an interesting exception, in that they report feeling touch on their body when they view the body of others being touched (Banissy & Ward, 2007). A neuroimaging study of a single synesthetic subject showed that the brain activity evoked by the observation of touch in the aforementioned fronto-parietal areas was stronger in this subject than in nonsynesthetic controls (Blakemore et al., 2005). These findings suggest that a modulation of tactile processing due to the vision of touch occurs in all subjects, but only in synesthetes is this effect sufficient to overcome the threshold of conscious experience. In line with this view, we have recently shown that if perceptual thresholds are experimentally manipulated, an effect of viewing touch on tactile perception can be behaviorally unmasked in nonsynesthetes as well (Serino, Pizzoferrato, & Ladavas, 2008). The perception of near-threshold tactile stimuli on the face of nonsynesthetic subjects was modulated if they observed a face being touched by two fingers, in comparison with when they observed the same face merely being approached by the fingers. This effect, called visual remapping of touch (VRT), was specific to viewing a bodily stimulus: the effect of vision on touch disappeared if the subjects observed the picture of an object instead of a face. Moreover, the effect of vision on touch was maximal when subjects observed their own face, rather than the face of another person, being touched, suggesting that the VRT effect increases the more the observed body matches the observer's own. Remapping a sensation from one sensory modality to another (namely, from vision to touch) could be favored if the two modalities share a common reference system, in this case the same body. As a consequence, visual information about the self may modulate the sense of touch.
This experimental finding raises an intriguing new question. On the one hand, multisensory integration has typically been studied at low levels of sensory processing. On the other hand, the study of self-representation usually concerns high levels of information processing. In the study of Serino et al. (2008), high-order visual information concerning the representation of oneself, as distinct from others, modulated the perception of tactile stimuli. How does this effect occur? What are the neural underpinnings of such a complex form of multisensory interaction?
When viewing a face, high-order visual areas in the extrastriate cortex, connected to portions of the middle and inferior frontal gyrus (Platek, Wathne, Tierney, & Thomson, 2008), signal whether that face belongs to oneself or to another individual. In the case of viewing one's own face, this complex visual judgment might activate different representations of the self. The cognitive neuroscience literature (Stamenov, 2005) identifies at least two levels of self-representation: a semantic, conceptual representation, the narrative self (D'Argembeau et al., 2007; Buckner & Carroll, 2006), and a sensorimotor representation of one's own body, the embodied self (Blanke & Metzinger, 2009; Tsakiris, Hesse, Boy, Haggard, & Fink, 2007; Ehrsson, Holmes, & Passingham, 2005). A pool of brain structures in the ventromedial pFC is thought to support the representation of the narrative self, because those areas are engaged during a number of tasks requiring the processing of self-knowledge, self-referencing (D'Argembeau et al., 2007; Heatherton et al., 2006; Northoff & Bermpohl, 2004), mentalizing, or judgments about oneself relative to other people in general (Jenkins, Macrae, & Mitchell, 2008; Mitchell, Macrae, & Banaji, 2006). On the other hand, a network of fronto-parietal areas is thought to underlie the representation of the embodied self, because those areas are involved in integrating multisensory information pertaining to one's own body and are engaged when people experience a sense of ownership over a body-like stimulus, as in the so-called rubber hand illusion (RHI; Tsakiris et al., 2007; Ehrsson et al., 2005; Botvinick & Cohen, 1998). In the present study, we asked which kind of self-representation could modulate tactile perception and how such a high-level representation could directly influence low-level perceptual processing.
To answer these questions, in the present work we adapted the paradigm from Serino et al. (2008) for fMRI scanning. Subjects received an electrical stimulation on their right, left, or both cheeks and were requested to discriminate between unilateral and bilateral stimulation. To manipulate perceptual thresholds, the stimulus on the left cheek was stronger than that on the right cheek. In this way, in condition of bilateral stimulation, the stronger stimulus would frequently extinguish the weaker one (Serino, Giovagnoli, & Ladavas, 2009; Serino et al., 2008). During the task, subjects were watching a movie showing, in different trials, the image of their own face, of another person's face, or of a nonbody stimulus, namely, a ball. The image could be touched or just approached bilaterally by two human fingers (one on its left and one on its right side) in different trials. Subjects were instructed to respond only on the basis of tactile stimulation and not of visual stimulation. We studied neural activity evoked in different brain areas as a function of the different experimental conditions and in relationship to subjects' perceptual reports.
The first question was whether the modulation of VRT due to viewing one's own face relies on the activation of a conceptual or of a physical representation of the self. If the narrative self is responsible for the effect, a specific modulation of brain activity in ventromedial prefrontal areas should be found when subjects view their own face being touched, in comparison with viewing another person's face or an object. In contrast, if the embodied self is the origin of the effect, such modulation of brain activity should be found in fronto-parietal multisensory areas and not in ventromedial frontal areas.
Second, once either representation of the self is activated, we asked how such a representation could affect the perception of touch. A possible explanation is that visual information about the self modulates tactile processing because activity in high-order self-related areas projects to the somatosensory cortices, where the tactile stimulus is processed. If this is the case, the modulation of neural activity across experimental conditions found in the brain network underlying the self-representation should also be found in somatosensory cortices within the parietal lobe.
METHODS
Participants
Fifteen healthy young adults (10 women) were included in the present study (mean age = 23.6 years, range = 19–30 years). All participants were right-handed, had normal or corrected-to-normal vision, had normal touch, and were naive as to the purposes of the experiment. Participants gave their written informed consent to participate in the study and were paid (€25) for their participation. The study was approved by the ethics committee of the “G. d'Annunzio” University, Chieti, and was conducted in accordance with the ethical standards of the 1964 Declaration of Helsinki.
fMRI Data Acquisition
All images were collected with a 1.5-T Philips Achieva scanner operating at the Institute of Advanced Biomedical Technologies (I.T.A.B., Fondazione G. d'Annunzio, Chieti, Italy). T1-weighted anatomical images were collected using a magnetization-prepared rapid acquisition gradient-echo sequence (230 sagittal slices, voxel size = 0.5 × 0.5 × 0.8 mm, repetition time = 8.08 msec, echo time = 3.7 msec). Functional images were collected with a gradient-echo EPI sequence. Each subject underwent four acquisition runs, each including 198 consecutive volumes comprising 25 consecutive 4-mm-thick slices oriented parallel to the anterior-posterior commissure line and covering the whole brain (repetition time = 2.3 sec, echo time = 60 msec, 64 × 64 image matrix, 4 × 4 mm in-plane resolution).
Stimuli and Conditions
The experimental stimuli consisted of both tactile and visual stimuli.
Tactile stimuli were delivered via a pair of miniaturized screen electrodes placed on the subjects' cheeks (stimulus duration = 5 msec). In different trials, a tactile stimulus was administered to the right, the left, or both cheeks. The tactile stimulus on the left cheek was calibrated to be more intense than that on the right cheek. Before the experiment, while the subject was lying in the fMRI scanner, the intensity of the electrical stimuli was titrated for each subject in the absence of visual information. Using a staircase procedure, stimulus intensity was titrated to a detection rate of 100% for the stronger stimulus (mean threshold = 20 ± 3 mA) and of 60% for the weaker stimulus (mean threshold = 13 ± 4 mA). Thresholds were recalibrated before each experimental block.
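For illustration, the following minimal sketch shows how a staircase titration of this kind might be implemented. The paper does not report the staircase parameters (starting intensity, step size, stopping rule), so everything below, including the simulated observer, is an assumption; note also that a plain 1-up/1-down rule converges on the 50% detection point, whereas the 60% and 100% targets used here would require a transformed rule.

```python
import random

def staircase_threshold(detects, start_ma=25.0, step_ma=1.0, n_reversals=10):
    """Estimate a detection threshold with a 1-up/1-down staircase.

    `detects` delivers a stimulus at the given intensity (mA) and returns
    True if the subject reported feeling it. All parameters here are
    illustrative; the paper does not report them.
    """
    intensity, reversals, last_direction = start_ma, [], None
    while len(reversals) < n_reversals:
        # Step down after a detection, up after a miss.
        direction = -1 if detects(intensity) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(intensity)  # intensity at each direction reversal
        intensity = max(0.0, intensity + direction * step_ma)
        last_direction = direction
    # The mean of the reversal intensities estimates the 50% point; the 60%
    # and 100% targets used in the study would need a transformed rule.
    return sum(reversals) / len(reversals)

# Toy usage: a simulated observer whose true 50% threshold is 13 mA.
print(staircase_threshold(lambda ma: random.random() < min(1.0, ma / 26.0)))
```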
Visual stimuli consisted of three sets of gray-scale movies: one depicting the subject's own face (self), the second depicting the face of another person of the same age and sex as the subject (other), and the third depicting a ball (object). A ball has a perceptual configuration similar to a face but is categorically a nonbodily stimulus.
The movie also showed two fingers initially positioned on the lower part of the screen, one on the right and one on the left. During the movie, both fingers moved toward the centrally presented image and then backward to their starting position. In different trials, the motion followed one of two trajectories: in the touch condition, the fingers actually touched the central image, and in the no-touch condition, the fingers stopped about 5 cm away from the image.
Visual and tactile stimuli were synchronized so that when the fingers reached the image, a tactile input (a bilateral or a unilateral tactile stimulation) was delivered to the subject's face. Each movie lasted 1000 msec in total, and the tactile stimulation was delivered at ∼500 msec from the beginning of the movie. Each movie was preceded by a fixation stimulus lasting a variable, unpredictable interval of 2000, 2500, or 3000 msec (see Figure 1).
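The timing logic can be summarized in a short sketch. The study itself ran under Cogent/Matlab; the Python reconstruction below merely illustrates the quoted parameters (1000-msec movies, touch at ∼500 msec, jittered 2000/2500/3000-msec fixation), and the function and event names are invented for illustration.

```python
import random

MOVIE_MS = 1000                    # each movie lasted 1000 msec
TOUCH_ONSET_MS = 500               # tactile pulse ~500 msec after movie onset
FIXATION_MS = (2000, 2500, 3000)   # equiprobable jittered fixation durations

def build_timeline(trials):
    """Return (event, onset_ms) pairs for one run's stimulus schedule."""
    timeline, t = [], 0
    for trial in trials:
        t += random.choice(FIXATION_MS)        # variable fixation baseline
        timeline.append((f"movie_{trial}", t))
        # Tactile input is synchronized with the fingers reaching the image.
        timeline.append(("tactile_pulse", t + TOUCH_ONSET_MS))
        t += MOVIE_MS
    return timeline

print(build_timeline(["self_touch", "other_no_touch"]))
```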
Subjects lay supine in the scanner with their arms outstretched beside their abdomen. Visual stimuli were projected onto a back-projection screen situated behind the subject's head and were visible via a mirror (10 × 15 cm).
Sound-attenuating headphones were used to muffle scanner noise. The presentation of the stimuli and the recording of the participants' responses were controlled by a PC running Cogent 2000 (developed by the Cogent 2000 team at the FIL and the ICN, University College London, UK) and Cogent Graphics (developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience, University College London, UK) under Matlab (The Mathworks Company, Natick, MA) on the Microsoft Windows XP operating system.
Design and Procedure
The event-related paradigm consisted of four acquisition runs of the tactile confrontation task. Each run presented six unique stimuli representing all combinations of type of image (self, other, and object) and finger movement trajectory (touch and no touch), each synchronized with a bilateral tactile stimulation. Thus, the experimental design was a 3 (image: self, other, and object) × 2 (trajectory: touch and no touch) within-subjects factorial design. The six unique stimuli were repeated 18 times, for a total of 108 trials per run, presented in pseudorandom order. In each run, 22 unilateral tactile stimuli were also included, so that the experiment consisted of 520 trials in total (130 per run; see the sketch below).
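As a sketch of this trial bookkeeping, the following code builds four pseudorandom runs matching the reported counts; the randomization constraints and the visual conditions paired with unilateral trials are not reported in the text, so those details are assumptions.

```python
import itertools
import random

IMAGES = ("self", "other", "object")
TRAJECTORIES = ("touch", "no_touch")
REPS = 18           # each of the 6 unique bilateral stimuli repeated 18 times
N_UNILATERAL = 22   # unilateral tactile stimuli added per run

def make_run():
    """One pseudorandom run: 108 bilateral + 22 unilateral = 130 trials."""
    bilateral = [(img, traj, "bilateral")
                 for img, traj in itertools.product(IMAGES, TRAJECTORIES)] * REPS
    # The visual condition paired with unilateral trials is assumed here.
    unilateral = [(random.choice(IMAGES), random.choice(TRAJECTORIES),
                   random.choice(("left", "right"))) for _ in range(N_UNILATERAL)]
    run = bilateral + unilateral
    random.shuffle(run)
    return run

runs = [make_run() for _ in range(4)]
assert sum(len(r) for r in runs) == 520  # matches the reported total
```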
Before scanning, participants were told that electrical stimuli would be delivered either to one or to both cheeks and that, concurrently, they would be presented with short movies of varying content. They were instructed to press a button with the right hand whenever they perceived a unilateral tactile stimulus and to refrain from responding when they perceived a bilateral tactile stimulus. Participants were instructed to watch the visual stimuli but to respond only on the basis of the tactile stimulation.
The fMRI design differs from the behavioral study by Serino et al. (2008) in two important respects. First, in the present study, subjects actively responded only to unilateral tactile stimuli, which were rare among the total number of trials, whereas in the study of Serino et al., subjects were requested to respond differently to unilateral left, unilateral right, and bilateral stimuli. Second, in the present study, visual information always signaled a bilateral stimulation, whereas in Serino et al., the sides of tactile and visual stimulation were fully crossed. These modifications were necessary to study the neural basis of the VRT effect. The current paradigm was designed to maximize the number of trials critical for showing the modulation of the effect (i.e., bilateral tactile stimulation), to minimize the number of possible combinations of visuotactile stimuli (using only bilateral visual stimulation), and to minimize brain activations not directly involved in the effect, such as those derived from motor responses. For these reasons, subjects received far fewer unilateral than bilateral tactile stimuli, viewed only bilateral visual stimuli, and were requested to actively respond only to trials with unilateral tactile stimulation (which were not included in the fMRI analyses).
The experiment used a rapid event-related fMRI design alternating a stimulation state (a 1000-msec movie plus electrical stimulation) with a baseline state consisting of a fixation interval lasting 2000, 2500, or 3000 msec; each of the three baseline durations had the same probability of occurrence. Each run lasted about 7 minutes. A 5-minute pause, during which tactile stimulus intensities were recalibrated, was interposed between runs.
Data Analysis
fMRI data were analyzed using SPM5 (Wellcome Trust Centre for Neuroimaging, University College London). Functional images were first corrected for head movement, using a least-squares approach and a six-parameter rigid-body spatial transformation (Friston et al., 1995), and for differences in acquisition timing between slices. The high-resolution anatomical image and the functional images were coregistered and stereotactically normalized to the Montreal Neurological Institute brain template used in SPM5 (Mazziotta, Toga, Evans, Fox, & Lancaster, 1995). Functional images were resampled to a voxel size of 4 × 4 × 4 mm and spatially smoothed with a three-dimensional Gaussian filter of 8-mm FWHM (Friston et al., 1995).
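As an aside, the smoothing step amounts to convolving each volume with a Gaussian kernel whose sigma is derived from the stated FWHM. A minimal numpy/scipy illustration (not the SPM5 code actually used; the random volume is a placeholder):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM, VOXEL_MM = 8.0, 4.0
# FWHM = sigma * sqrt(8 * ln 2), so convert to sigma in voxel units.
sigma_vox = (FWHM_MM / VOXEL_MM) / np.sqrt(8 * np.log(2))

volume = np.random.randn(64, 64, 25)                 # stand-in for one EPI volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)  # isotropic 3-D smoothing
```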
The time series of functional MR images obtained from each participant was then analyzed on a voxel-by-voxel basis using the principles of the general linear model, extended to allow the analysis of fMRI data as a time series (Worsley & Friston, 1995). The onset of each trial constituted a neural event, which was modeled through a canonical hemodynamic response function chosen to represent the relationship between neuronal activation and BOLD signal changes (Friston et al., 1998). Unilateral catch trials (20%) and false-alarm trials (i.e., trials in which participants pressed the button in the presence of a bilateral tactile stimulus; 18%) were modeled as separate conditions and excluded from further analyses, which concentrated on correct responses (i.e., no response to bilateral stimulation).
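A compact sketch of this event-related GLM step follows. SPM5's canonical HRF is a double-gamma function; the parameterization and the toy onsets below are illustrative assumptions, not the study's actual model.

```python
import numpy as np
from scipy.stats import gamma

TR, N_VOLS, DT = 2.3, 198, 0.1   # acquisition TR (sec), volumes/run, fine grid

def canonical_hrf(t):
    """Double-gamma HRF, standing in for SPM's canonical response."""
    return gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0

def condition_regressor(onsets_sec):
    """Convolve stick functions at trial onsets with the HRF, one value per TR."""
    n_hi = round(N_VOLS * TR / DT)
    sticks = np.zeros(n_hi)
    for onset in onsets_sec:
        sticks[round(onset / DT)] = 1.0
    regressor = np.convolve(sticks, canonical_hrf(np.arange(0, 32, DT)))[:n_hi]
    return regressor[:: round(TR / DT)]   # downsample to volume acquisition times

# One column per condition (six bilateral conditions, plus unilateral catch
# trials and false alarms as separate columns, as described in the text);
# voxelwise betas then follow by least squares: beta = np.linalg.pinv(X) @ y.
X = np.column_stack([condition_regressor([10.0, 55.0, 120.0]),
                     condition_regressor([30.0, 90.0, 200.0])])
```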
Group analysis was performed in two steps. First, we used a conventional voxel-by-voxel group random-effects analysis, which allowed us to test hypotheses about the whole population and to identify brain regions responding during the experimental trials relative to the baseline condition of the study, that is, the intertrial fixation interval. This was done through an omnibus F test comparing each of the six conditions resulting from the combination of the image and trajectory factors with the intertrial fixation. The resulting statistical parametric maps of the F statistic were thresholded at p < .01, corrected for multiple comparisons over the total acquired brain volume using the false discovery rate (Genovese, Lazar, & Nichols, 2002). The resulting regions are listed in Table 2 and rendered in Figure 2; they include all voxels showing a reliable BOLD response evoked by the onset of the experimental trials, irrespective of the somatosensory stimulus, visual image, and finger movement trajectory delivered in any particular trial and of the sign (positive or negative) of the evoked BOLD response.
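The FDR correction referenced here (Genovese et al., 2002) is, in essence, the Benjamini-Hochberg procedure. A minimal sketch, assuming a flat vector of voxelwise p values (SPM's implementation differs in detail, and the example input is fabricated):

```python
import numpy as np

def fdr_threshold(pvals, q=0.01):
    """Benjamini-Hochberg threshold: largest p(k) with p(k) <= (k/m) * q."""
    p = np.sort(np.asarray(pvals).ravel())
    m = p.size
    below = p <= (np.arange(1, m + 1) / m) * q
    return p[below].max() if below.any() else 0.0  # 0.0 -> nothing survives

# Voxels whose p value falls at or below the returned threshold would form
# the suprathreshold regions reported in Table 2.
print(fdr_threshold(np.random.uniform(size=10000) ** 3, q=0.01))
```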
Table 1. Behavioral performance: mean percentage of responses (and SEM) to bilateral and unilateral tactile stimulation in each visual condition.

| | Self Face, Touch (%) | Self Face, No Touch (%) | Other Face, Touch (%) | Other Face, No Touch (%) | Object, Touch (%) | Object, No Touch (%) |
|---|---|---|---|---|---|---|
| Bilateral, Average | 84 | 81 | 82 | 79 | 80 | 80 |
| Bilateral, SEM | 4.1 | 3.9 | 4.1 | 4.1 | 4.8 | 4.2 |
| Unilateral Left, Average | 30 | 34 | 29 | 35 | 39 | 42 |
| Unilateral Left, SEM | 6 | 7 | 5 | 8 | 8 | 8 |
| Unilateral Right, Average | 15 | 17 | 16 | 14 | 15 | 18 |
| Unilateral Right, SEM | 5 | 6 | 6 | 6 | 5 | 7 |
The second step consisted of searching for modulations of BOLD responses in these voxels as a function of the type of image (image factor: self, other, and object) and finger movement trajectory (trajectory factor: touch and no touch). To increase the sensitivity of the analysis, this step was performed on regionally averaged data as follows: voxels resulting from the first step were grouped into regions, that is, clusters of adjacent significant voxels. For each subject and region, we computed a regional estimate of the amplitude of the hemodynamic response in each experimental condition by entering a spatial average (across all voxels in the region) of the preprocessed time series into the individual general linear models. These regional hemodynamic response estimates, shown in the plots in Figure 2, were then analyzed through a 3 × 2 (Image × Trajectory) repeated measures ANOVA. For bilaterally activated regions, a hemisphere factor was added to the ANOVA.
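To make the second step concrete, the sketch below runs the same 3 × 2 repeated measures ANOVA on mock regional estimates; the data frame layout and the random numbers are placeholders for the real per-subject response estimates, which are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Mock regional response estimates: one row per subject x condition.
rng = np.random.default_rng(0)
rows = [{"subject": s, "image": img, "trajectory": traj, "beta": rng.normal()}
        for s in range(15)
        for img in ("self", "other", "object")
        for traj in ("touch", "no_touch")]
df = pd.DataFrame(rows)

# 3 (image) x 2 (trajectory) repeated measures ANOVA, as run per region.
res = AnovaRM(df, depvar="beta", subject="subject",
              within=["image", "trajectory"]).fit()
print(res.anova_table)
```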
Note that although the first and the second steps of this analysis procedure use the same data set, they are inherently independent, because the first step tests for the presence of any neural response regardless of the identity of the delivered stimulus, whereas the second step tests for modulations induced by the kind of visual stimulus within the responsive regions, thus avoiding the risk of "double dipping" (Kriegeskorte, Simmons, Bellgowan, & Baker, 2009).
RESULTS
Behavioral Results
The behavioral effect of visual stimulation on tactile perception was studied by comparing subjects' accuracy in responding to bilateral tactile stimuli when the fingers touched versus did not touch the different images. In light of the results of Serino et al. (2008), we expected that the perception of bilateral tactile stimuli would be more accurate when subjects saw their own face being touched rather than merely approached. To verify that the behavioral data from the present fMRI experiment confirmed this critical prediction, for each image condition (self, other, and object), subjects' accuracy was compared between the two finger movement trajectories (touch and no touch) by means of one-tailed t tests. To prevent the risk of inflating Type I error, a Bonferroni correction was applied; thus, only p values < .025 were considered significant. When viewing one's own face, tactile perception was more accurate when the fingers touched the face (accuracy = 84%; SEM = 4.1%) than when they merely approached it (81%; SEM = 3.9%), t(14) = 2.28, p < .019. A similar, nearly significant pattern, t(14) = 1.57, p = .06, was found when viewing the other face: accuracy was 82% (SEM = 4.1%) in the touch condition and 79% (SEM = 4.1%) in the no-touch condition. No modulation of tactile perception was found for the object condition: accuracy was the same in the touch (80%; SEM = 4.8%) and no-touch (80%; SEM = 4.2%) conditions, t(14) = 0.13, p = .44. Behavioral data for responses to bilateral and unilateral (weak and strong) stimulation are reported in Table 1.
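A sketch of this behavioral test follows, with fabricated accuracies standing in for the real per-subject data; the one-tailed conversion and the .025 Bonferroni criterion mirror the text.

```python
import numpy as np
from scipy import stats

def touch_vs_no_touch(acc_touch, acc_no_touch, alpha=0.05, n_tests=2):
    """Paired one-tailed t test (touch > no touch) with Bonferroni correction."""
    t, p_two = stats.ttest_rel(acc_touch, acc_no_touch)
    p_one = p_two / 2 if t > 0 else 1 - p_two / 2  # one-tailed conversion
    return t, p_one, p_one < alpha / n_tests       # .05 / 2 = .025 criterion

# Fabricated accuracies for 15 subjects, standing in for the real data.
rng = np.random.default_rng(1)
touch = rng.normal(84, 4, 15)
no_touch = touch - rng.normal(3, 2, 15)
print(touch_vs_no_touch(touch, no_touch))
```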
fMRI Results
From the group-level whole-brain analysis of the functional MR images, we identified six cortical regions in which the BOLD signal differed significantly during any of the six conditions resulting from the combination of type of image (self, other, and object) and finger movement trajectory (touch and no touch), relative to the intertrial fixation intervals. The six regions were located in the bilateral occipital cortex, the ventral intraparietal area (VIP), the somatosensory cortex, the ventral premotor cortex (VPM), the right insula, and the dorsomedial pFC (see Table 2 and Figure 2). To study the modulation of neural activity within these areas as a function of the experimental conditions, for each area we ran an ANOVA on the estimated percent BOLD signal change with the factors image (self, other, and object) and trajectory (touch and no touch). A hemisphere factor (right and left) was added when homologous areas were activated in both hemispheres. Post hoc comparisons were conducted, when necessary, by means of the Duncan test.
Table 2. Regions of activation and main local maxima (MNI coordinates).

| Anatomical Location | Extent (Voxels) | Side | Anatomical Subdivisions | x | y | z | F |
|---|---|---|---|---|---|---|---|
| Occipital cortex | 849 | L | Middle occipital gyrus | −12 | −104 | 4 | 25.28 |
| | | | | −48 | −76 | 4 | 20.62 |
| | | | | −20 | −88 | −20 | 14.78 |
| | | | Inferior occipital gyrus | −24 | −84 | −4 | 6.83 |
| | | R | Cuneus | 12 | −96 | 12 | 17.35 |
| | | | Calcarine cortex | 16 | −96 | 0 | 15.48 |
| | | | | 4 | −88 | 4 | 15.20 |
| | | | Inferior occipital gyrus | 32 | −84 | −4 | 7.36 |
| VIP | 21 | L | Inferior parietal lobule | −40 | −36 | 36 | 7.21 |
| | 48 | R | Inferior parietal lobule | 32 | −52 | 44 | 6.62 |
| | | | | 48 | −36 | 48 | 4.85 |
| Somatosensory cortices (SI/SII) | 54 | L | Postcentral gyrus (inferior) | −60 | −20 | 20 | 12.24 |
| | | | Superior temporal gyrus | −52 | −36 | 20 | 5.86 |
| | 29 | R | Postcentral gyrus (inferior) | 60 | −16 | 20 | 8.15 |
| VPM | 44 | L | Precentral gyrus | −44 | −4 | 60 | 8.77 |
| | | | | −36 | −6 | 68 | 7.56 |
| | 25 | R | Precentral gyrus | 52 | 8 | 36 | 6.66 |
| Insula | 43 | R | Insula | 48 | 16 | −4 | 7.93 |
| | | | Inferior frontal gyrus | 60 | 12 | 4 | 6.91 |
| Dorsomedial pFC | 25 | L | Superior frontal gyrus | −6 | 58 | 24 | 7.88 |
The table shows local maxima more than 4 mm apart.
Occipital Cortex
The activation cluster in the occipital cortex included a wide portion of the occipital lobe encompassing Brodmann's areas (BAs) 17, 18, and 19. To functionally characterize this cluster, we created three anatomical masks encompassing BAs 17, 18, and 19, respectively, and computed the BOLD percent signal change in each area and each condition. Anatomical masks were created by means of the AAL toolbox available with SPM (Tzourio-Mazoyer et al., 2002). Results showed no functional difference between the three areas, so results are discussed for the whole cluster.
The ANOVA showed that the BOLD response in this cluster was modulated only by the type of image viewed by the subject, as only the effect of image was significant, F(2, 28) = 4.00, p < .05. Post hoc comparisons showed that the BOLD signal was higher when subjects viewed either their own face (0.30% increase relative to the intertrial fixation baseline) or another person's face (0.29%) than when they viewed an object (0.25%; p < .05 in both cases; see Figure 2). Thus, the BOLD signal in this area discriminates between bodily and nonbodily visual stimuli.
Ventral Intraparietal Area
An activation cluster was found bilaterally at the confluence of the postcentral and intraparietal sulci, consistent with the location of the human VIP (Sereno & Huang, 2006). In both hemispheres, VIP activation was mainly centered within BA 40. Neither the main effect of hemisphere nor any interaction between hemisphere and the other factors was significant; thus, the results for both hemispheres are presented together (see Figure 2). Only the main effect of trajectory was significant, F(1, 14) = 4.56, p < .05, showing higher activation during observation of the touch (0.19%) than of the no-touch (0.16%) trajectory (see Figure 2). Therefore, neural activity in this area discriminates visual information specifically related to touch from that related to no-touch stimulation.
Somatosensory Cortices (SI/SII)
An activation cluster was found bilaterally in the ventral postcentral gyrus. In both hemispheres, this activation site included the face area of the primary somatosensory cortex (Eickhoff, Grefkes, Fink, & Zilles, 2008) and the secondary somatosensory cortex (Eickhoff et al., 2008). The face representations in the primary and secondary somatosensory cortices lie very close to each other, both encompassing the ventral aspect of the postcentral gyrus (Eickhoff et al., 2008; Sereno & Huang, 2006). Although our cluster clearly falls within this region, the present results cannot attribute neural activity selectively to either SI or SII. Thus, we refer to this activation cluster with the comprehensive term "somatosensory cortices."
The main effect of image was significant, F(2, 28) = 8.05, p < .01, with weaker activation for one's own face (0.17%) than for the other's face (0.20%, p < .01) and for the object (0.20%, p < .01). These results should be interpreted in light of the significant two-way Image × Trajectory interaction, F(2, 28) = 4.03, p < .05. In the touch condition, viewing one's own face (0.17%) resulted in weaker activity than viewing either another person's face (0.21%, p < .01) or an object (0.21%, p < .01). In contrast, in the no-touch condition, no difference was found between one's own face (0.18%), the other's face (0.18%), and the object (0.18%; p > .45 in both cases) (see Figure 2). This modulation also produced a different pattern of results when the effect of touch versus no touch was compared across the three images: although for the object and for the other's face neural activity was higher in the touch than in the no-touch condition (p < .05 in both comparisons), this difference was absent for one's own face (p = .22), where, if anything, a nonsignificant opposite trend was found. In summary, viewing one's own face being touched resulted in reduced activity in the right and left somatosensory cortices within the postcentral gyrus.
Ventral Premotor Cortex
An activation cluster was found bilaterally in the precentral gyrus. Although the cluster in the right hemisphere was more ventral than that in the left hemisphere, both clusters were located in the ventral half of the precentral gyrus and fell within BA 6, according to the cytoarchitectonic atlas (Eickhoff et al., 2005). Neither the main effect of hemisphere nor any interaction between hemisphere and the other factors was significant; thus, we present the results for both hemispheres together (see Figure 2). The critical Image × Trajectory interaction was significant, F(2, 28) = 7.04, p < .01. Post hoc comparisons showed that in the touch condition, the BOLD response for the observation of one's own face (0.21%) was reduced in comparison with that for the observation of the other's face (0.24%) and of the object (0.24%; p < .05 in both cases). Conversely, in the no-touch condition, the BOLD response was enhanced for the observation of one's own face (0.25%) in comparison with that for the observation of the other's face (0.21%, p < .03) and of the object (0.22%, p < .05). When the neural response in the touch and no-touch conditions was compared for the different images, we found opposite patterns of activity for viewing one's own and the other's face: for the self condition, neural activity was lower in the touch (0.21%) than in the no-touch condition (0.25%, p < .05), whereas for the other condition, the BOLD response was higher in the touch (0.24%) than in the no-touch condition (0.21%, p < .05) (see Figure 2). For the object condition, the pattern of results showed a trend similar to that for the other condition (p = .09). Thus, the BOLD response in the left and right precentral gyrus discriminates the effect of viewing touch on one's own face from that of viewing touch on another person's face or on an object. The self-specific effect consists of a reduction of metabolic activity when viewing one's own face being touched.
Right Insula
The activation cluster in the right insula was centered on BA 47. The ANOVA performed on the percent BOLD signal change in this cluster (see Figure 2) showed a significant Image × Trajectory interaction, F(2, 28) = 10.53, p < .01. Post hoc comparisons showed that in the touch condition, the BOLD response for the other's face (0.22%) was higher than that for the object (0.19%, p < .05). In the no-touch condition, the BOLD response for the other's face (0.15%) was weaker than that for one's own face (0.19%, p < .05) and for the object (0.20%, p < .01). Finally, for the other's face, the response in the touch condition (0.22%) was higher than that in the no-touch condition (0.15%, p < .01) (see Figure 2).
Dorsomedial pFC
A deactivated cluster was found in the dorsomedial pFC. The cluster was mainly centered within BA 10. The ANOVA performed on the percent BOLD signal change in this cluster showed no main effect or interaction (see Figure 2).
DISCUSSION
Viewing one's own face being touched affects tactile perception on the face more than viewing another person's face or a nonbody stimulus (Serino et al., 2008). Here we studied which brain areas underlie this effect. In particular, we asked how a high-level representation of the self conveyed by visual stimulation may interact with the processing of tactile sensation.
To this aim, we used fMRI to measure brain activity while subjects performed a tactile discrimination task on their face (distinguishing between unilateral and bilateral stimulation) and viewed three different images, namely, their own face, another person's face, or an object, being touched bilaterally or merely approached by fingers. The experimental paradigm was designed to maximize brain activity specifically related to the effect of interest (i.e., the modulation of touch due to visual information about the self) rather than to study the cognitive mechanism underlying the effect (see Serino et al., 2008, 2009). Nevertheless, the behavioral data replicate the key finding on VRT: subjects more frequently reported feeling a bilateral stimulation on their face when they viewed a picture of their own face being touched bilaterally than when they viewed their own face being merely approached. We now relate these behavioral findings to the neural activity recorded by fMRI.
Neural Activity Related to Viewing a Face
In a wide area of the occipital cortex, involving BAs 17, 18, and 19, the BOLD signal was modulated as a function of the shown image: neural activity was higher when subjects viewed a face, whether their own or another person's, than when they viewed a picture of a ball. This neural modulation may thus reflect the processing of complex visual information, such as that pertaining to a face, as compared with the processing of a simpler visual stimulus, such as a ball. These findings are in keeping with previous data showing that the human body and its parts are especially relevant visual stimuli, processed by dedicated high-order visual areas: the so-called extrastriate body area for the body (Downing, Jiang, Shuman, & Kanwisher, 2001), the occipital face area (Pitcher, Walsh, Yovel, & Duchaine, 2007; Gauthier et al., 2000; Haxby, Hoffman, & Gobbini, 2000), and the so-called fusiform face area (Kanwisher & Yovel, 2006).
Neural Activity Related to Viewing Touch
Neural activity in the visual cortex did not discriminate visual information specifically related to touch from that not related to touch, because the modulation of the BOLD signal due to viewing different images was independent of whether the image was touched or merely approached by the fingers. Conversely, information pertaining to the finger movement trajectories affected neural activity in a portion of the parietal cortex probably corresponding to VIP (Sereno & Huang, 2006). VIP activity was enhanced when subjects received a tactile stimulation on their face and viewed two fingers touching an image rather than pointing beside it.
Neurons in the monkey VIP respond to both visual and somatosensory information directed toward the animal's face (Avillac, Deneve, Olivier, Pouget, & Duhamel, 2005; Grefkes & Fink, 2005; Duhamel, Colby, & Goldberg, 1998; Colby, Duhamel, & Goldberg, 1993). Analogously, in humans, VIP contains a visuotactile somatotopic map of the face (Sereno & Huang, 2006). However, unlike in the above-cited studies, in the present experiment visual stimulation was not directed toward the subject's real face but toward an image facing the subject. Thus, information derived from viewing touch was remapped as if the touch were directed toward one's own face and was integrated with the actual tactile stimulation received on the face. We suggest that the modulation of VIP activity found in the present study reflects this integrative and remapping process. This suggestion is supported by recent neurophysiological data in monkeys showing that some VIP neurons respond not only to visual and tactile stimulation administered on or close to a part of the animal's body but also when a stimulus is directed toward a part of the body of an experimenter facing the animal (Ishida, Nakajima, Inase, & Murata, 2010). This response property of VIP cells links the representation of an individual's own body with that of the body of others. We believe that a similar mechanism might underlie the VRT effect in humans, as shown by the present fMRI results.
Neural Activity Related to Viewing Touch on One's Own Face
Neural activity in the occipital cortex and in VIP may thus discriminate viewing a face from viewing an object, and viewing touch from viewing no touch, respectively. However, the critical information that strongly modulated subjects' perception, that is, viewing touch on one's own face, is processed elsewhere. A significant interaction between the viewed image and the finger movement trajectory was found bilaterally in the VPM. In the VPM, the BOLD signal when viewing one's own face being touched differed significantly from that when viewing one's own face not being touched and from that when viewing another person's face or an object being touched. In particular, VPM activity was reduced for one's own face in the touch condition. Thus, neural activity in VPM may specifically represent information about touch on one's own face.
VPM is a well-known multisensory area, integrating visual, somatosensory, and proprioceptive information about the body and the space immediately surrounding the body. In the monkey, the homologous VPM area contains motor neurons with sensory properties, in that they respond also to visual, acoustic, and tactile stimulation administered on the monkey's body or within the monkey's peripersonal space (Graziano & Cooke, 2006; Rizzolatti, Fogassi, & Gallese, 2002). VPM neurons are also active when the monkey sees a part of its own body (Graziano, Cooke, & Taylor, 2000). In humans, VPM is activated when processing tactile information on the face and visual or acoustic information moving toward the face (Huang & Sereno, 2007; Bremmer et al., 2001). VPM is densely interconnected with VIP (Luppino, Murata, Govoni, & Matelli, 1999) and receives important projections from visual and somatosensory cortices (Matelli, Camarda, Glickstein, & Rizzolatti, 1986; Godschalk, Lemon, Kuypers, & Ronday, 1984). Thus, VPM, together with VIP, is an ideal candidate for integrating visual and tactile information related to face stimulation. The new finding of the present study is that, unlike VIP, VPM activity discriminated whether the observed touch was administered to the observer's own face rather than to another person's face or to an object. In other words, VPM processed and integrated visuotactile information specifically pertaining to the self.
Previous fMRI findings have shown that VPM is directly involved in the feeling of body ownership (Ehrsson et al., 2005; Ehrsson, Spence, & Passingham, 2004). In the so-called RHI, viewing touch on a fake hand while synchronously feeling touch on one's own hidden hand results in an illusory percept of the fake hand as one's own hand (Botvinick & Cohen, 1998). During the synchronous visuotactile stimulation that induces the RHI, VPM is active. Moreover, brain lesions involving VPM are related to disorders of body ownership, such as anosognosia for hemiplegia (Pia, Neppi-Modona, Ricci, & Berti, 2004) and asomatognosia (Arzy, Overney, Landis, & Blanke, 2006). Thus, VPM, together with other regions in the inferior parietal cortex (Berlucchi & Aglioti, 1997, 2010), is a key area subserving the feeling of ownership of one's own body, that is, the embodied self. It is worth noting that no activation specifically related to the present experimental manipulations was found in the medial pFC, in areas processing more abstract, semantic representations of oneself, that is, the narrative self (Jenkins et al., 2008; Mitchell et al., 2006). Indeed, the cluster of activation change recorded in the dorsomedial pFC did not vary as a function of the kind of visuotactile stimulation the subject was processing. Thus, returning to the first questions of the present study, namely, which brain areas and which representation of the self underlie the self-related enhancement of the VRT effect, we might conclude that VPM and the embodied self are the respective answers.
It remains to explain why this self-related VPM modulation is characterized by a reduction of neural activity rather than by an enhancement, as one might more simply expect. Neural activity in VPM during the RHI positively correlates with the subjective feeling of body ownership. It has been proposed that the strength of VPM activation reflects the effort of integrating different modalities into a unique body representation; according to this view, VPM plays a specific role in embodying a nonbody object (Tsakiris et al., 2007). The higher activation of this area when viewing another person's face or an object being touched might therefore reflect the effort of the embodying process, whereas viewing one's own face being touched facilitates embodiment, and thus less VPM activity is recorded.
Neural Activity Related to the Modulation of Touch Perception When Viewing One's Own Face
Finally, how does visuotactile integration related to oneself modulate tactile perception? The pattern of neural response found in the premotor cortex was mirrored in the somatosensory areas. In particular, reduced activity in the somatosensory cluster, including the face areas of SI and SII, was found when viewing one's own face being touched, in comparison with all other conditions. It is already known that visual information modulates tactile processing within somatosensory cortices (Macaluso, Frith, & Driver, 2005), probably via feedback projections from multimodal fronto-parietal areas (Macaluso & Driver, 2005; Bremmer et al., 2001; Macaluso, Frith, & Driver, 2000). In line with this view, we suggest that VPM exerts a modulation on the somatosensory cortex (Macaluso, 2006). A direct modulation of the somatosensory areas by the different images, in contrast, is implausible, because somatosensory cortices cannot directly process visual features such as face identity. Indeed, it has already been shown that these areas are not sensitive to the identity of the object being touched (Keysers et al., 2004). Thus, the most likely interpretation is that VPM integrates information about viewing touch on oneself with tactile information and then differentially modulates the somatosensory areas where tactile information is processed.
For this model to hold, it remains to explain how a reduction in the activity of somatosensory areas results in an increase in reported bilateral tactile percepts when viewing one's own face being touched. We suggest that when viewing oneself, visuotactile integration is favored, and therefore visual information is taken into account in perceiving tactile stimulation. In other words, perception of touch while viewing oneself being touched might rely more strongly on what is seen and less on what is felt. As a consequence, weaker bilateral activation in the somatosensory cortices might be sufficient to evoke a bilateral tactile percept, because this percept is supported by bilateral visual information. In contrast, when the fingers merely approach one's own face, or when subjects view another person or an object, visuotactile integration is less effective, and tactile perception therefore depends more strongly on unisensory tactile signals: as a consequence, stronger bilateral activity in the somatosensory areas is necessary to elicit a bilateral tactile percept.
To sum up, VRT is defined as a modulation of tactile perception on one's own body when viewing touch on an external stimulus, the effect being maximal when viewing touch on one's own body. The present results show that the neural counterpart of this effect relies on an extended network of fronto-parietal structures representing multisensory information pertaining to the bodily self.
Acknowledgments
The authors are grateful to Dr. Mauro Gianni Perrucci for his help in collecting data. This work was supported by grants from MURST to E. L.
Reprint requests should be sent to Andrea Serino, Centro Studi e Ricerche in Neuroscienze Cognitive, Università di Bologna, Via Brusi, 20, 47023 Cesena, Italy, or via e-mail: [email protected].