Abstract

Parietofrontal pathways play an important role in visually guided motor control. In these pathways, hand manipulation-related neurons in the inferior parietal lobule represent the 3-D properties of an object and the motor patterns used to grasp it. Furthermore, mirror neurons show visual responses concerned with the actions of others and motor-related activity during execution of the same grasping action. Because both categories of neurons integrate visual and motor signals, they may play a role in motor control based on visual feedback. The aim of this study was to investigate whether neurons in the inferior parietal lobule, including the anterior intraparietal area (AIP) and area PFG of macaques, represent visual images of the monkey's own hand during a self-generated grasping action. We recorded 235 neurons related to hand manipulation tasks. Of these, 54 responded to video clips of the monkey's own hand action (identical to the visual feedback during that action) or to clips of the experimenter's hand action viewed laterally. Of these 54 neurons, 25 responded to video clips of the monkey's own hand even without an image of the target object; we designated these 25 neurons as “hand-type.” Thirty-three of the 54 neurons showed visual responses to the experimenter's action together with motor responses and were thus defined as mirror neurons; 13 of these mirror neurons were classified as hand-type. These results suggest that the activity of hand manipulation-related and mirror neurons in AIP/PFG plays a fundamental role in monitoring one's own body state based on visual feedback.

INTRODUCTION

Our brain monitors body posture, and its relationship to surrounding objects, through interactions between motor output signals and sensory feedback signals, enabling accurate movement (Hoff & Arbib, 1993; Head & Holmes, 1911). When we execute a grasping action, visual information concerns not only the object to be grasped but also our own body state, particularly in the phases in which the body interacts with the object (Schettino, Adamovich, & Poizner, 2003). Hand kinematics are influenced by perturbations of target size or position introduced after the grasp is initiated (Paulignan, Jeannerod, MacKenzie, & Marteniuk, 1991; Paulignan, MacKenzie, Marteniuk, & Jeannerod, 1991). Human imaging studies have revealed that the posterior parietal cortex plays an important role in online visual control of action by monitoring one's own body posture (Reichenbach, Bresciani, Peer, Bulthoff, & Thielscher, 2011; Desmurget et al., 1999; Sirigu, Daprati, Pradat-Diehl, Franck, & Jeannerod, 1999).

Physiological studies have revealed that neurons in the anterior intraparietal (AIP) area and ventral premotor cortex F5 of macaques show grasping-related activity that integrates motor signals with the 3-D visual properties of objects (Murata, Gallese, Luppino, Kaseda, & Sakata, 2000; Murata et al., 1997; Sakata, Taira, Murata, & Mine, 1995; Taira, Mine, Georgopoulos, Murata, & Sakata, 1990). Some of these neurons show less activity during movement in the dark than in the light, suggesting that they encode visual representations. A portion of them respond to viewing an object per se (visual-motor object-type neurons), whereas the remaining neurons do not (visual-motor non-object-type neurons). The visual responses of the latter type may be concerned with the image of the hand during the action, but the properties of these neurons are not yet clear.

Mirror neurons in area F5 of the ventral premotor cortex and area PFG of the inferior parietal lobule (IPL) of macaques fire both when animals manipulate objects in a certain way and when they observe others performing a similar action (Fogassi et al., 2005; Rizzolatti, Fogassi, & Gallese, 2001; Gallese, Fadiga, Fogassi, & Rizzolatti, 1996; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996; Dipellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992). These neurons have been considered neural correlates of social cognitive functions, such as recognizing goals or understanding the actions of others. On the other hand, the discovery of auditory–vocal mirror neurons in the swamp sparrow forebrain suggests that mirror neurons may also operate in the context of controlling one's own motor actions. These neurons fire when animals sing a song and when they hear the same note sequence from their own song, as well as similar note sequences in other birds' songs (Prather, Peters, Nowicki, & Mooney, 2008). On the basis of these findings, Tchernichovski and Wallman suggested that auditory–vocal mirror neurons may support the acquisition of motor skills by comparing auditory feedback with the corollary discharge of motor output, that is, the predicted feedback (Tchernichovski & Wallman, 2008). Bonaiuto and Arbib likewise emphasized that the mirror neuron system may help monitor the success of a self-generated action by integrating feedback with the corollary discharge of one's intended action (Bonaiuto & Arbib, 2010). However, the visual responses of mirror neurons to one's own hand during actual execution of an action remain unknown.

To address these questions, we investigated whether hand manipulation-related neurons and mirror neurons in macaque AIP/PFG specifically responded to movies of the monkey's own hand manipulation movements without objects. We found that a subset of hand manipulation-related neurons and mirror neurons in AIP/PFG responded to visual presentation of hand kinematics that corresponded to the visual feedback received during the monkey's own hand action. These findings suggest that the activity of hand manipulation-related neurons and mirror neurons in the IPL plays a fundamental role in visual feedback control of action and that this functional mechanism may also play a pivotal role in mapping another's body onto one's own.

METHODS

Subjects

Two male Japanese monkeys (Macaca fuscata) were used in the experiments (MK1: 7.0 kg, MK2: 8.5 kg). One hemisphere (the left hemisphere of MK1) had been partly used in another study published by Ishida, Nakajima, Inase, and Murata (2010). Because the experiments in this hemisphere were performed as a pilot study, the number of penetrations in which hand manipulation-related neurons were recorded from AIP and PFG was smaller than in the other hemispheres (Table 1). All animal preparations and procedures fully complied with the Science Council of Japan's Guidelines for Proper Conduct of Animal Experiments (2006) and were approved by the local ethics committee (Animal Care and Use Committee of Kinki University).

Table 1. 

Numbers of Recorded Neurons and Penetrations

                                      MK2 (Left      MK1 (Right     MK1 (Left
                                      Hemisphere)    Hemisphere)    Hemisphere)    Total
Number of penetrations^a                  (39)           (34)           (12)        (85)
Hand manipulation-related neurons
  Action observation −                     113             56             12         181
  Action observation +^b                    32             17              5          54
Neurons not related to
  hand manipulation tasks                   63             52             59         174

^a Penetrations in which hand manipulation-related neurons were recorded.

^b Action observation +: neurons that responded during the FH1, FH2, and/or FHE tasks.

Task Instrumentation

The monkeys were trained to perform hand manipulation and fixation tasks on a task control unit (TCU; Figure 1A). The lower chamber of the TCU, under the half mirror, contained a turntable that allowed different-shaped target objects to be presented in each block; the turntable was rotated randomly under computer control. The upper chamber, above the half mirror, contained a monitor screen (170.9 mm wide, 128.2 mm high) positioned 30.0 cm in front of the face and a CCD camera. The target objects were a horizontal plate for side grip with the thumb and the body of the index finger, a vertical plate in a groove for precision grip with the tips of the thumb and index finger, and a vertical bar for power grip with the fingers other than the thumb (Figure 1B).

Figure 1. 

Experimental setup. (A) TCU. The image of the hand movement and object was recorded with a CCD camera and presented on a monitor screen. (B) The three objects used for the tasks. The monkeys were required to grasp a part of each object (light blue). The grasping configurations differed for each object: (1) a horizontal plate for side grip with the thumb and the body of the index finger, (2) a vertical plate in a groove for precision grip with the tips of the thumb and index finger, and (3) a vertical bar for power grip with the fingers other than the thumb. (C) Schematic of the task sequences and stimuli. A spotlight from a red/green light-emitting diode was projected on the object. The monkeys were required to fixate on the spotlight on the monitor during all tasks. Top: manipulation tasks. In the ML task, according to the instruction of the spotlight, the monkeys performed the grasping action while seeing the online image of their hand and the object on the monitor. The monkeys were able to see their own hand and the target object to be grasped only on the monitor, through the CCD video camera providing a first-person point of view. In the MD task, the monkeys performed the same task sequences without illumination of the object and hand. Bottom: fixation tasks. The monkeys were required to observe movies during the fixation tasks. Movies of the same grasping action as performed in the ML task with/without the object (FH1 and FH2 tasks, respectively), the experimenter's grasping action (FHE task), an object only (FOB task), or a spotlight only (FSP task) were presented on the monitor screen. At the end of each trial, the monkey received a reward.


Behavioral Tasks

The monkeys were trained to perform two manipulation tasks and five fixation tasks (Figure 1C). During the experiments, the monkeys sat comfortably in a primate chair. The time courses of these tasks were similar to those used in previous studies (Raos, Umilta, Murata, Fogassi, & Gallese, 2006; Murata et al., 2000). During the manipulation in the light (ML) task, the monkeys performed the grasping action, guided by the color of a spotlight, while viewing an online first-person image of the hand and the object on the monitor; this maximized concordance with the subsequent fixation tasks, in which hand movement was likewise observed on the monitor. The fixation spotlight was superimposed on the image of the object. When the red spotlight turned on, the monkeys started fixating on it and pressed a home key. When the monkey pressed the home key, the light was turned on to illuminate the object, and the monkey fixated on it for 1.0 sec. After this period, the color of the spotlight changed to green, which cued the monkey to release the home key and grasp and pull the object. At the moment of key release, the color of the spotlight changed back to red. The monkey had to hold the object for 1.0 sec, until the green spotlight turned on again, and then release it. The monkeys also performed the manipulation in the dark (MD) task, in which the monkeys saw no image of the object or hand and were guided only by the color change of the spotlight on the monitor. Before starting a block of MD trials, we briefly presented the object to the monkey. In the training session, we monitored the grasping action using a CCD camera under infrared light. If the monkey used a grip configuration in the MD task that differed from that used in the ML task, the trial was aborted.
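The ML-task sequence described above can be viewed as a small event-driven state machine. The Python sketch below is illustrative only: the state and event names are our own labels, not the actual task-control software; the key events (M1–M3) follow the definitions given later in the Data Analysis section.

```python
# Minimal sketch of the ML-task trial sequence (assumed labels, not the
# actual TCU control code). Timings: 1.0 sec fixation, 1.0 sec hold.
from enum import Enum, auto

class State(Enum):
    RED_SPOT = auto()    # red spotlight on; monkey fixates and presses home key (M1)
    OBJECT_LIT = auto()  # object illuminated; monkey fixates for 1.0 sec
    GO_GREEN = auto()    # spotlight turns green; key release (M2), reach and grasp
    HOLDING = auto()     # object pulled (M3); hold 1.0 sec until green again
    RELEASED = auto()    # object released (M4); trial ends and reward is delivered

# Event-driven transitions: (current state, event) -> next state
TRANSITIONS = {
    (State.RED_SPOT, "key_press_M1"): State.OBJECT_LIT,
    (State.OBJECT_LIT, "fixation_1s_elapsed"): State.GO_GREEN,
    (State.GO_GREEN, "key_release_M2_and_pull_M3"): State.HOLDING,
    (State.HOLDING, "hold_1s_elapsed"): State.RELEASED,
}

def run_trial(events):
    """Step through one ML trial; any unexpected event aborts the trial."""
    state = State.RED_SPOT
    for ev in events:
        key = (state, ev)
        if key not in TRANSITIONS:
            return None  # aborted (e.g., broken fixation or wrong grip)
        state = TRANSITIONS[key]
    return state
```

A complete trial walks through all four transitions and ends in the released state; any out-of-sequence event (such as a premature release) aborts the trial, mirroring the abort conditions described in the text.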

In the fixation tasks, when the green spotlight turned on, the monkeys were required to fixate on the spotlight on the monitor and press the home key during the fixation period without any movement. At the moment of the key press, previously recorded movies of the monkey's own grasping action from the first-person perspective (FH1 and FH2 tasks), the experimenter's grasping action (FHE task), only the object for grasping (FOB task), or only the spotlight (FSP task) were presented on the monitor (Figure 1C), and the monkey then fixated on the image. In the FH1 task, the video showed the same hand of the same monkey and was identical to the image shown on the monitor during the ML task. For the FH2 task, we erased the object from the FH1 movie of the monkey's grasping action and displayed the movie without the object. To remove the object from the movies of the object manipulation, the movies were processed with a chroma-key effect on a digital AV mixer (WJ-MX50A, Panasonic, Osaka, Japan) and edited on a personal computer equipped with DV Storm software (Canopus, Kobe, Japan). In the FHE task, we displayed a movie of the experimenter's grasping action with the same time course as in the FH1 task. In this movie, the action was shown in a lateral view rather than from the monkey's first-person perspective, and the image of the experimenter's hand crossed the monitor from the side contralateral to the recording site to the ipsilateral side. The monkeys performed 7–10 trials for each of the three grip types in each task condition. A trial was aborted if the monkey moved its gaze away from the fixation spotlight. The monkey's eye position was monitored with the search-coil technique.

Surgical Procedures

We recorded neuronal activity from both hemispheres in one monkey (MK1) and from the left hemisphere in the other monkey (MK2). Before starting the single-unit recordings, stainless steel cylinders for fixing the animal's head to the monkey chair were implanted on the skull under general anesthesia using ketamine hydrochloride (5 mg/kg, im) and xylazine hydrochloride (2 mg/kg, im) with atropine sulfate (0.01 mg/kg, im), and sodium pentobarbital (15 mg/kg, iv, once every hour). After the surgery, each monkey was trained to sit quietly in a primate chair with its head fixed while observing different types of visual stimuli and receiving somatic stimuli on various body parts. Following this training, under the anesthesia described above, an opening of about 20 mm in diameter was made in the skull over the intraparietal sulcus (IPS), and a cylindrical stainless steel recording chamber (diameter = 20 mm) was implanted. The x-y axes of the chamber were set to correspond to the stereotaxic coordinates. The center of the chamber in stereotaxic coordinates was as follows: in MK2, over the left hemisphere (anterior = 5.0 mm, lateral = 20.0 mm, angle = 50°); in MK1, over the right hemisphere (anterior = 1.0 mm, lateral = 19.0 mm, angle = 50°) and the left hemisphere (anterior = 4.0 mm, lateral = 18.0 mm, angle = 50°). To monitor eye position, a magnetic search coil was implanted in the sclera of one eye. Postoperatively, the animals received analgesics and antibiotics (im) for 1 week to minimize the risk of infection. The surgical methods followed previous descriptions (Ishida et al., 2010; Murata et al., 2000).

Unit Recording

Single-unit recordings were performed extracellularly using varnish-insulated tungsten microelectrodes (impedance 1.5–5.0 MΩ at 1 kHz; FHC, Bowdoin, ME). The electrode, attached to an XY stage on the chamber, was advanced obliquely into the cortex through the dura mater using a manipulator (MO-95, Narishige, Tokyo, Japan). Microelectrode penetrations were made in AIP and PFG. Single units were isolated online with a dual voltage-time window discriminator, and spike and event signals were exported to a computer for offline analysis. To find grasping-related neurons, we manually presented the monkey with a small piece of food (a 5- to 8-mm piece of sweet potato or a raisin) or real 3-D objects (e.g., a bar, plate, or sphere), which the animal then grasped. Before data collection, each neuron was tested by stimulating the monkey's hand, forearm, upper arm, and shoulder with light and deep touch and by manipulating the joints of the fingers, wrist, elbow, and shoulder. Neurons with somatosensory or proprioceptive responses were discarded from the database. If a neuron responded during the grasping action, it was tested with the ML and FSP tasks to define hand manipulation-related neurons. To investigate visual responses to the hand, the neurons were then tested with the FH1, FH2, and FHE tasks. Finally, the monkeys performed the MD task to verify the motor representation and the FOB task to test visual responses to the objects.

We confirmed the recording areas based on previously reported physiological criteria and recording depths (Umilta et al., 2008; Murata et al., 2000; Andersen, Snyder, Bradley, & Xing, 1997; Rizzolatti, Fogassi, & Gallese, 1997; Andersen, Bracewell, Barash, Gnadt, & Fogassi, 1990). Briefly, we searched for hand manipulation-related neurons in the anterior part of the lateral bank of the IPS (AIP) and in the convexity of the IPL (PFG). To target these areas, we mapped the hand and face representations in the primary somatosensory cortex as landmarks for locating AIP and PFG. Neuronal responses were assessed from the cortical surface as each electrode passed into the lateral or medial bank of the IPS and reached the fundus. The physiological boundaries of each IPL subregion were identified based on previously described response properties and sulcus depth (Rozzi, Ferrari, Bonini, Rizzolatti, & Fogassi, 2008; Murata et al., 2000; Andersen et al., 1990). Somatosensory responses were mainly present in the rostral half of the IPL, whereas visual responses were evenly distributed throughout the IPL except in the most rostral part (area PF). Eye movement-related activity was limited to the caudal part of the IPL (area PG). Mirror neurons were found mainly in PFG, which lies between PF and PG. AIP and the lateral intraparietal area (LIP) were localized within the lateral bank of the IPS at depths greater than 3 mm (Rozzi et al., 2008; Gregoriou, Borra, Matelli, & Luppino, 2006). AIP is characterized by the systematic presence of motor and visuo-motor neurons that discharge in relation to grasping and hand manipulation movements. Some of these neurons respond to the observation of graspable objects with a specific shape, orientation, and size (Murata et al., 2000). In addition, somatosensory responses are virtually absent in AIP. LIP is characterized by strong activation during saccades and eye fixation.

Data Analysis

For data analysis, we designated the behavioral events as follows. In the manipulation tasks: pressing the home key (event M1), releasing the home key (start of reaching, event M2), pulling the object (start of holding the object, event M3), and releasing the object (event M4). In the fixation tasks: pressing the home key (event F1), appearance of the actor's hand in the movie (event F2), pulling of the object in the movie (event F3), and releasing the home key (event F4). Events M1, M2, M3, F1, and F4 were detected with microswitches. Events F2 and F3 were identified on individual frames of the movies, which were filmed at 30 frames per second. To define hand manipulation-related activity, we compared the average firing rates in the grasp phase (from 100 msec before event M2 to event M3) or hold phase (from event M3 to 500 msec after) of the ML task with those in the baseline phase (from 100 to 600 msec after the key press in the FSP task) using a one-way ANOVA followed by Tukey–Kramer multiple comparisons. To discriminate hand manipulation-related activity from visual fixation-related activity, we took the baseline phase from the FSP task. If a neuron showed significantly higher responses than baseline activity, it was defined as a hand manipulation-related neuron.

The preference for grip types was ranked for each neuron based on activity levels in the grasp and hold phases of the ML task: preferred, second, and third grips. With respect to the preferred grip type, we investigated whether a neuron responded to the visual presentation of the hand in the FH1, FH2, and FHE tasks by comparing the average firing rates in the grasp phase (from 50 msec after event F2 until event F3) or hold phase (from event F3 to 500 msec after) with the baseline phase (from 600 msec after event F1 to event F2). Because we needed to detect activity enhanced by the appearance of the hand image on the monitor after event F2, especially in the FH1 task, the target object image was presented before event F2. To detect movement-related activity, we compared the average firing rates in the grasp phase (from 100 msec before event M2 to event M3) or hold phase (from event M3 to 500 msec after) of the MD task with the baseline phase of the FSP task. To detect object-related visual responses, the average firing rates in the object phase (from 100 to 600 msec after the key press) of the FOB task were compared with the baseline phase of the FSP task. For comparisons of three or more phases, a one-way ANOVA followed by Tukey–Kramer multiple comparisons was performed. For comparisons of two phases, an F test for equality of variances was performed, followed by an unpaired t test or Welch's test as appropriate.
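The phase-versus-baseline comparisons above amount to a one-way ANOVA on per-trial firing rates. Below is a minimal numpy-only Python sketch (the original analyses used custom MATLAB scripts), assuming spike counts have already been converted to firing rates for each phase; the example rates are hypothetical, and the Tukey–Kramer post hoc step is omitted.

```python
# One-way ANOVA F statistic on per-trial firing rates (numpy only).
import numpy as np

def one_way_anova_F(*groups):
    """Return the one-way ANOVA F statistic for two or more groups of rates."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-trial firing rates (spikes/sec) in each phase:
baseline = [4.0, 5.5, 3.8, 4.9, 5.1]       # FSP task, 100-600 msec after key press
grasp = [14.2, 16.0, 13.5, 15.1, 14.8]     # 100 msec before M2 to M3
hold = [11.0, 12.4, 10.6, 11.9, 12.1]      # M3 to 500 msec after
F = one_way_anova_F(baseline, grasp, hold)
```

The resulting F would then be compared against the critical value for (df_between, df_within), and significant neurons would proceed to the pairwise Tukey–Kramer comparisons described in the text.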

According to a previous classification (Murata et al., 2000; Murata, Gallese, Kaseda, & Sakata, 1996), hand manipulation-related neurons were classified into three types: visual-dominant (V) neurons, which were not active during the MD task; motor-dominant (M) neurons, in which the activity was not significantly different between the ML and MD tasks and that did not respond to fixating on the image of the object during the FOB task; and visual-motor (VM) neurons, which were less active during the MD task than the ML task or that responded during the MD and FOB tasks. V and VM neurons were further classified into object-type (Vo and VMo) and non-object-type (Vn and VMn) neurons based on responses during the FOB task.
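The classification above can be summarized as a small decision rule. The sketch below assumes each neuron's responses have already been reduced to three boolean outcomes of the statistical tests; the function name and flag names are our own labels, not the authors' code.

```python
# Decision-logic sketch of the Murata et al. (1996, 2000) classification
# summarized above (assumed flag names).
def classify_neuron(active_in_MD, ML_greater_than_MD, responds_FOB):
    """Classify a hand manipulation-related neuron.

    active_in_MD: significant activity during manipulation in the dark
    ML_greater_than_MD: ML activity significantly exceeds MD activity
    responds_FOB: responds to fixating on the object image (FOB task)
    """
    if not active_in_MD:
        base = "V"   # visual-dominant
    elif ML_greater_than_MD or responds_FOB:
        base = "VM"  # visual-motor
    else:
        return "M"   # motor-dominant: ML == MD and no FOB response
    # V and VM neurons subdivide by object response in the FOB task
    return base + ("o" if responds_FOB else "n")
```

For example, a neuron silent in the dark and unresponsive to the object image would be labeled Vn, whereas a neuron active in the dark that also responds to the object image would be labeled VMo.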

To verify whether the tuning for the preferred grip type was maintained across the different tasks, we performed a receiver operating characteristic (ROC) analysis (Lehmann & Scherberger, 2013; Townsend, Subasi, & Scherberger, 2011). In the FH1, FH2, and FHE tasks, we used neuronal activity in a 200-msec sliding window shifted in 20-msec steps across the task sequence and plotted the ROC curve for the two sets of grip types: the preferred versus the second or third grip. We then calculated the areas under the curve (AUCs). An AUC score of 1 indicates complete selectivity for the preferred grip type, whereas an AUC score of 0.5 indicates a chance level of preference between the preferred and other grip types. For each neuron, AUC scores were tested against a median of 0.5 with a sign-rank test. These analyses were performed using custom scripts in MATLAB (MathWorks, Natick, MA).
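A minimal Python sketch of this sliding-window analysis follows, computing the AUC with the rank-based (Mann–Whitney) formula and using the window parameters given above (200-msec width, 20-msec steps). The spike-train representation and function names are assumptions for illustration, not the authors' actual MATLAB code.

```python
# Sliding-window ROC analysis sketch (assumed data layout: one array of
# spike times per trial, in seconds).
import numpy as np

def auc(pref_rates, other_rates):
    """Area under the ROC curve for preferred- vs. other-grip firing rates."""
    pref = np.asarray(pref_rates, float)
    other = np.asarray(other_rates, float)
    # AUC = P(pref > other) + 0.5 * P(tie), computed over all trial pairs
    greater = (pref[:, None] > other[None, :]).sum()
    ties = (pref[:, None] == other[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pref) * len(other))

def sliding_auc(trains_pref, trains_other, t_start, t_end,
                width=0.2, step=0.02):
    """AUC in each 200-msec window, stepped by 20 msec across the trial."""
    starts = np.arange(t_start, t_end - width + 1e-9, step)
    scores = []
    for w0 in starts:
        rate = lambda trains: [np.sum((t >= w0) & (t < w0 + width)) / width
                               for t in trains]
        scores.append(auc(rate(trains_pref), rate(trains_other)))
    return starts, np.array(scores)
```

Each neuron's window-by-window AUC scores could then be tested against a median of 0.5 (e.g., with a sign-rank test), as described in the text.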

Histological Reconstruction and Recording Sites

After the completion of recording, a series of electrolytic lesions were made along several of the penetrations. A few days after the lesions were placed, the monkeys were deeply anesthetized with an overdose of pentobarbital and perfused with saline followed by paraformaldehyde. Histological sections (50 μm thick) were made along the frontal plane in both hemispheres. Nissl staining was performed in every other section to trace the penetrations and verify the electrolytic lesions. Recording sites were determined from the relative positions of the penetration to the electrolytic lesions, stereotaxic coordinates, depths of penetration, and functional properties. More specifically, a depth of 3 mm from the surface was defined as the border between PFG and AIP (Rozzi et al., 2008; Gregoriou et al., 2006; Murata et al., 2000).

RESULTS

Identification of Hand Manipulation-related Neurons

We recorded 409 single units from AIP/PFG of the two monkeys. Although the number of neurons recorded in the left hemisphere of MK1 was smaller than in the other two hemispheres (see Methods), 235 units across all hemispheres were categorized as hand manipulation-related neurons, exhibiting a statistically significant increase in discharge rate during object manipulation in the ML task. Of these 235 neurons, 54 (23.0%) responded to viewing the movies in the FH1, FH2, or FHE task (Table 1). In this study, we examined the details of the visual and motor responses of these 54 neurons during the grasping action. We classified these 54 neurons using a previous classification of hand manipulation-related neurons (Murata et al., 1996, 2000).

Responses to the Monkey's Own Hand Image with/without an Object

Hand-type Neurons Responding to the Hand Image without an Object

Figure 2 shows representative examples of hand manipulation-related neurons responding during the manipulation tasks and during the fixation tasks for the monkey's own hand action and objects. The firing rate of unit A increased during the hold phase of the ML task, and this increase was greater than during the MD task, but the unit did not respond to the object image (FOB task; F = 0, Welch's test, p > .05). According to the classification of hand manipulation-related neurons in a previous study (Murata et al., 2000), this neuron was classified as a VMn neuron. To examine the visual response properties underlying the increased activity during the ML task, the neuron was tested while the monkey only fixated on the same hand action image (FH1 task) as presented in the ML task. This neuron responded to the visual presentation during the hold phase (F(3, 55) = 25.4, p < .05; Tukey–Kramer tests, each comparison p < .05). To examine whether this neuron specifically responded to the image of the hand, the monkey fixated on a movie in which the object image had been erased from the FH1 movie (FH2 task). This neuron also responded to the visual image of the hand action even without the object (F(3, 55) = 8.23, p < .05). We designated this neuron, which was activated during the FH2 task, as “hand-type.”

Figure 2. 

Five examples of hand manipulation-related neurons responding to the monkey's own hand and object images. ML, MD, FOB, FH1, and FH2 indicate the task conditions. Black dots indicate event timing in temporal order. In the ML and MD tasks: turning on of the red spotlight, pressing the home key (event M1), turning on of the green spotlight, releasing the home key (event M2), pulling the object (event M3), turning on of the green spotlight, and releasing the object (event M4). In the FOB task: turning on of the green spotlight, pressing the home key (event F1), turning on of the red spotlight, and releasing the home key (event F4). In the FH1, FH2, and FHE (shown in Figure 3) tasks: turning on of the green spotlight, pressing the home key (event F1), appearance of the hand in the movie (event F2), pulling of the object in the movie (event F3), turning on of the red spotlight, and releasing the home key (event F4). Single-neuron spike trains are shown in raster plots (red or blue dots) aligned at event M3 in the ML and MD tasks, event F3 in the FH1 and FH2 tasks, or event F1 in the FOB task. These plots were used to obtain peristimulus time histograms smoothed with 50-msec time windows and a two-point moving average. Units A, B, C, and D were classified as VMn, Vn, motor-dominant, and VMo neurons, respectively, and these neurons responded to the visual configuration of the hand without the object (hand-type). Unit E responded to the visual configuration of the hand with the object during the FH1 task but not to the image of the hand without the object during the FH2 task (hand-object-type).


Other hand manipulation-related neurons also showed hand-type visual response properties. For example, unit B (Figure 2) was classified as a Vn neuron because it was activated during the ML task but not during manipulation in the dark or fixation on the object. This neuron also responded to the visual image of the motor actions during the FH1 and FH2 tasks. The firing rates of unit C (Figure 2) during both the grasp and hold phases showed no significant differences between the ML and MD tasks, and the neuron did not respond to fixating on the image of the object in the FOB task. According to the previous classification, this was a motor-dominant neuron, which is considered to mainly represent motor-related signals. However, when we examined whether the neuron responded to the visual presentation of the hand image, its firing rates increased during both the grasp and hold phases in the FH1 and FH2 tasks (F(9.77) = 34.79 and F(9.34) = 51.22, p < .001). Unit D (Figure 2) was activated during manipulation in the dark and responded to fixating on the image of the object in the FOB task (F = 0.09, p < .05). This unit was considered a VMo neuron. Although its firing rates did not change significantly during the grasp and hold phases in the FH1 task (F(3, 55) = 1.56, p > .05), they increased during the hold phase in the FH2 task (F(3, 55) = 9.13, p < .05).

Hand-type neurons may encode the visual image of hand kinematics. This property would differ from that of F5 neurons, which are known to represent the goal of an action (Umilta et al., 2001, 2008). Of 54 hand manipulation-related neurons, 25 were hand-type neurons (Table 2). Of these, 21 were activated during manipulation in the dark (previous classification: VMo, VMn, and motor-dominant neurons). Because these 21 neurons had both motor responses and visual responses related to body movements, we speculated that they are involved in comparing motor-related signals with actual visual feedback.

Table 2. 

Responses of Hand Manipulation-related Neurons to the Visual Presentation of One's Own Hand Action

| Responses to Video Clips | FH1 Task | FH2 Task | Total |
| --- | --- | --- | --- |
| Hand-type | +/− | + | 25 |
| Hand-object-type | + | − | 11 |
| No response | − | − | 18 |
| Total | | | 54 |

Hand-object-type Neurons Responding Only to the Hand with the Object Image

Unit E (Figure 2) responded only to the concurrent presentation of the hand and the object (FH1 task). The neuron responded during the hold phase of the FH1 task (F[3.55] = 4.56, p < .05) but did not respond to the visual presentation of the hand alone during the FH2 task or to the object image in the FOB task (F[3.55] = 0.86, p > .05; F = 0.1, p > .05). Of the 54 neurons, 11 were active during the FH1 task but not the FH2 task and were designated "hand-object-type." Furthermore, like unit E, the majority of these 11 neurons did not respond to the visual presentation of the object image alone (8/11; Table 2; previous classification: VMn, Vn, and motor-dominant neurons). Activating this type of neuron requires visual input of both the object and the hand kinematics, suggesting that these neurons are concerned with the relationship between the goal and the visual image of the hand kinematics used to grasp it.

Responses to the Experimenter's Hand Image

We next examined whether the above 54 neurons responded to the visual presentation of the experimenter's hand action (Figure 3). Unit F responded to the visual image of the monkey's own hand during the FH1 and FH2 tasks (F[10.39, 10.39] = 49.13, 28.79, p < .001). When the monkey fixated on the movie of the experimenter's hand manipulation movement (FHE task), the neuron responded during the grasp phase (F[10.39] = 25.68, p < .001). Twenty-one neurons responded to the visual image of both the monkey's own and the experimenter's hand action and were termed "self-other image-type" (Table 3). Unit F was also activated during manipulation in the dark, that is, it had a motor representation (F[10.39] = 49.64, p < .001). Because the neuron also became active during observation of another's action, it was defined as a mirror neuron (Rizzolatti & Arbib, 1998; Gallese et al., 1996). Seventeen of the 21 self-other image-type neurons were defined as mirror neurons. The remaining four self-other image-type neurons did not have motor representations and were designated mirror-like neurons, as reported by Gallese et al. (1996). Importantly, 15 of the self-other image-type neurons were classified as hand-type neurons because they showed activity during the FH2 task. Furthermore, 13 of the 17 mirror neurons were hand-type neurons. These results suggest that some parietal mirror or mirror-like neurons encode the kinematics rather than the goal of an observed action. Although we did not investigate whether these neurons responded to the FHE task with the object removed, this property appears to differ from that of F5 mirror neurons, which respond only to transitive actions (Umilta et al., 2001).

Figure 3. 

Three examples of hand manipulation-related neurons responding to the monkey's own and/or the experimenter's hand image. Task conditions are the same as in Figure 2 except that the FHE task is shown instead of the FOB task. Single-neuron spike trains in raster plots aligned at event M3 in the ML and MD tasks and event F3 in the FH1, FH2, and FHE tasks. Unit F responded to the observation of the monkey's own hand action during the FH1/FH2 tasks and the experimenter's hand action during the FHE task (self-other image-type). The neuron was also activated during the MD task, showing response properties of mirror neurons. Unit G responded during the FHE task. However, the neuron did not respond to the observation of the monkey's own hand action during either the FH1 or FH2 tasks (other image-type). The neuron was also activated during the MD task. This is also a mirror neuron. Unit H responded to the visual presentation of the monkey's own hand action only (self image-type).


Table 3. 

Numbers of Hand Manipulation-related Neurons Responding to the Visual Presentation of One's Own and the Experimenter's Hand Action

| | Motor Representation + | Motor Representation − | Total |
| --- | --- | --- | --- |
| Self-other image-type | **17** (13) | 4 (2) | 21 (15) |
| Other image-type | **16** | 2 | 18 |
| Self image-type | 11 (8) | 4 (2) | 15 (10) |
| Total | 44 (21) | 10 (4) | 54 (25) |

Bold values indicate numbers of mirror neurons.

Numbers in parentheses indicate the number of hand-type neurons.

We also found neurons that responded to the observation of the experimenter's hand action but not to the monkey's own action. The firing rates of unit G increased during the grasp phase of the FHE task (F[5.78] = 5.97, p < .01), but the neuron did not respond to the visual presentation of the monkey's own hand action during the FH1 and FH2 tasks (F[3.47, 3.47] = 1.21, 2.56, p > .05). Eighteen neurons responded to the image of the experimenter's hand action only and were termed "other image-type" (Table 3). Unit G was also activated during manipulation in the dark (F[9.47] = 14.91, p < .001) and was therefore a mirror neuron. Of the 18 other image-type neurons, 16 were mirror neurons.

Unit H responded to the visual presentation of the monkey's own hand action but not to that of the experimenter's hand action; such neurons were termed "self image-type." We identified 15 self image-type neurons, 11 of which had motor representations (Table 3).

Grip Selectivity and Preference

For both the manipulation and fixation tasks, three grip types were employed (side grip, precision grip, and power grip; Figure 1B). For each neuron, grip-type preference was determined from the activity level during the grasp and hold phases of the ML task: preferred, second, and third grips. To examine whether selectivity for the preferred grip type was maintained across the different fixation tasks, we performed ROC analysis between the most preferred grip and each of the others for the neurons responding to fixation during the FH1 task (n = 27), FH2 task (n = 25), and FHE task (n = 39). The temporal changes in the AUC score of each neuron are plotted in Figure 4. More neurons showed a peak AUC value during the hold phase than during the grasp phase (Table 4). For each neuron, we calculated the mean AUC score of the phase in which the score peaked. Neurons whose scores were significantly higher than 0.5 are shown by hatched bars (sign-rank test against a median of 0.5; Figure 5, left column). A large number of neurons showed scores significantly higher than 0.5 in these fixation tasks (vs. third and second in the FH1 task: 17/27 and 20/27; vs. third and second in the FH2 task: 12/25 and 18/25; vs. third and second in the FHE task: 26/39 and 24/39). The median scores in each task were significantly higher than 0.5 (vs. third and second in the FH1 task: medians = 0.665 and 0.624, n = 27; vs. third and second in the FH2 task: medians = 0.603 and 0.608, n = 25; vs. third and second in the FHE task: medians = 0.642 and 0.613, n = 39; p < .001). These results indicate that the grip-type selectivity observed during the ML task was maintained during the visual presentation of the monkey's own or the experimenter's hand action. The right column of Figure 5 shows the number of neurons selective for a single grip type or for multiple grips. Many of these neurons showed scores significantly higher than 0.5 in both the preferred versus second and the preferred versus third comparisons (grip-type selective; Figure 5, FH1: n = 14, FH2: n = 11, FHE: n = 20). Other neurons did not show significantly high AUC scores for the second and/or third grip type relative to the preferred grip type (Figure 5, multiple grip-type selective: n = 5, 6, and 7, or non-grip selective: n = 4, 4, and 3 in the FH1, FH2, and FHE tasks, respectively); these neurons might encode features that are common to several grip types.
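The analysis above pairs a per-neuron ROC/AUC comparison of single-trial firing rates (preferred grip vs. another grip) with a population-level sign-rank test of AUC scores against chance (0.5). A minimal sketch of that kind of computation is shown below; the firing rates and AUC scores are hypothetical, and the AUC is obtained through its Mann-Whitney U equivalence rather than from any code used in the original study.

```python
import numpy as np
from scipy import stats

def roc_auc(pref_rates, other_rates):
    """AUC for discriminating preferred-grip trials from another grip's
    trials based on firing rates, via the identity AUC = U / (n1 * n2)."""
    pref = np.asarray(pref_rates, dtype=float)
    other = np.asarray(other_rates, dtype=float)
    u, _ = stats.mannwhitneyu(pref, other, alternative="two-sided")
    return u / (pref.size * other.size)

# Hypothetical single-trial firing rates (spikes/sec) for two grip types
preferred = [22.0, 25.0, 30.0, 28.0, 26.0]
second = [10.0, 12.0, 15.0, 11.0, 14.0]
auc = roc_auc(preferred, second)  # 1.0 here: complete separation

# Population-level question: are per-neuron peak AUC scores above 0.5?
auc_scores = np.array([0.66, 0.62, 0.58, 0.71, 0.55, 0.49, 0.63])  # hypothetical
stat, p = stats.wilcoxon(auc_scores - 0.5)  # sign-rank test against median 0.5
```

An AUC of 0.5 means the two firing-rate distributions are indistinguishable, and 1.0 means every preferred-grip trial exceeds every comparison trial, which is why 0.5 serves as the chance level in the sign-rank test.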

Figure 4. 

Time course of grip-type selectivity. The preferred grip was determined by the firing rates during the ML task. Temporal changes in the AUC score by ROC analysis were plotted for each neuron. The AUC scores between the preferred grip type and other grip types are given at each time point. The time courses are aligned at the beginning of the object manipulation (time 0). Color maps show temporal changes of the AUC score for each neuron.


Table 4. 

Number of Neurons that Showed the Peak AUC Value during Grasp or Hold Phases

| | Preferred vs. Third: Grasp Phase | Preferred vs. Third: Hold Phase | Preferred vs. Second: Grasp Phase | Preferred vs. Second: Hold Phase |
| --- | --- | --- | --- | --- |
| FH1 task (n = 27) | 13 | 14 | 12 | 15 |
| FH2 task (n = 25) | 8 | 17 | 7 | 18 |
| FHE task (n = 39) | 17 | 22 | 13 | 26 |
Figure 5. 

The number of grip-type selective neurons. The left column shows distributions of mean AUC scores for the phase (grasp or hold) in which the score peaked. Scores >0.5 indicate selectivity for the preferred grip type over the other grip types. Neurons with scores significantly higher than 0.5 are shown by hatched bars (sign-rank test against a median of 0.5; left column). The right column shows the number of neurons selective for a single grip type or less selective. Grip-type selective neurons showed AUC scores significantly higher than 0.5 in the ROC analyses of both preferred versus second and preferred versus third grip types. Multi-grip-type selective neurons showed AUC scores significantly higher than 0.5 in only one of the two comparisons. Non-grip-type selective neurons distinguished neither the second nor the third grip type from the preferred one (in the FH1 task: 14, 5, and 4; in the FH2 task: 11, 6, and 4; in the FHE task: 20, 7, and 3, respectively).


The Anatomical Location of Each Neuron in AIP/PFG

The recording sites are shown in Figure 6. We recorded 11 hand-type neurons in AIP and 14 in PFG, and one hand-object-type neuron in AIP and 10 in PFG. Both types were found in both areas, but their proportions differed significantly between the two areas (χ2 test: p < .05). Self-other image-type, other image-type, and self image-type neurons were also found in both areas, with no significant differences in their proportions between AIP and PFG (χ2 test: p > .05). Importantly, we found eight mirror neurons in AIP and 25 in PFG. Of these, five in AIP and 12 in PFG also responded to the visual presentation of the monkey's own hand action.
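The χ² comparison of neuron-type proportions between areas can be illustrated with the counts given above (hand-type: 11 in AIP, 14 in PFG; hand-object-type: 1 in AIP, 10 in PFG). This is a sketch of a standard 2 × 2 χ² test, not the authors' code; note that whether a continuity correction is applied (an assumption on our part) can change the outcome near the p < .05 threshold.

```python
from scipy.stats import chi2_contingency

# Counts reported in the text: rows = areas, columns = neuron types
table = [[11, 1],    # AIP: hand-type, hand-object-type
         [14, 10]]   # PFG: hand-type, hand-object-type

# Without Yates' continuity correction, this 2x2 comparison is
# significant at the .05 level, consistent with the reported result.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

The `expected` array gives the cell counts predicted under independence, which is useful for checking that expected frequencies are large enough for the χ² approximation.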

Figure 6. 

Recording sites. A, B, and C show the insertion positions of the electrodes (A: MK1, right hemisphere; B: MK1, left hemisphere; C: MK2, left hemisphere). D shows recording sites plotted on selected histological sections in MK2. Filled black symbols indicate hand manipulation-related neurons. Red symbols indicate specific visual responses to images of either the monkey's own hand during the FH1 and FH2 tasks or the experimenter's hand during the FHE task (action observation +). Green symbols indicate mirror neurons. Crossbars indicate sites where hand manipulation-related neurons were not found. We classified the convexity and crown parts of the bank (<3 mm in depth from the surface) as PFG and the deep parts of the bank (>3 mm) as AIP. As shown in sections a, c, e, and f, mirror neurons were also found in AIP. IPS, intraparietal sulcus; CS, central sulcus; LF, lateral fissure.


DISCUSSION

In the network of the premotor cortex and IPL, which plays an important role in visuo-motor control of hand manipulation, the interaction between the corollary discharge of motor output and multimodal sensory feedback should be involved in the functional mechanisms that represent the body in the brain (Blakemore & Sirigu, 2003). In particular, some previous studies have suggested that the parietal cortex is concerned with representing visual hand configurations during actions (Pani, Theys, Romero, & Janssen, 2014; Murata et al., 2000). We determined whether hand manipulation-related neurons and mirror neurons in AIP and PFG of the IPL exhibited visual responses to the kinematics of hand manipulation movements presented without objects. We found that (I) a subset of hand manipulation-related neurons responded to the visual presentation of a hand action from the first-person point of view (many of these neurons also responded to the same visual presentation without the object, suggesting that they are concerned with visuospatial processing of one's own hand kinematics rather than the goal of the action); (II) a subset of mirror neurons also responded to the visual presentation of the hand action without the object; and (III) mirror neurons were located in AIP as well as PFG.

Functional Roles of Hand Manipulation-related Neurons in Response to One's Own Hand Image

Hand-type Neurons

A previous study demonstrated that the non-object type of hand manipulation-related neurons did not respond to the object image but exhibited enhanced activity during hand manipulation movements in the light compared with the dark. These neurons were selective for the hand configuration rather than the 3-D features of the object, suggesting that their visual responses are related to an image of the hand (Murata et al., 2000). In the current study, we clearly demonstrated that non-object-type neurons responded to the visual presentation of the hand action without the object. Interestingly, some other types of neurons, such as object-type or motor-dominant neurons, were also activated during observation of the hand image. We designated these neurons hand-type. These results are also supported by a recent study showing that AIP neurons respond more strongly during observation of hand movements than during movements of simple geometric figures (Pani et al., 2014). Importantly, we further revealed that a subset of these neurons clearly showed selectivity for grip type: the preference for grip type in the movement tasks was maintained during the observation tasks. Their activity thus appears to be concerned with monitoring one's own movement for accurate control of hand action, although a number of neurons showed poor selectivity for grip type. This limited selectivity may partly reflect movement parameters for grasping, for example, the orientation of the hand or the hand aperture, that are common to several grip types, because we did not change the orientation of the hand or the size of the object in this experiment. Indeed, some non-object-type neurons in AIP showed selectivity for the size or orientation of objects (Murata et al., 2000). It will be necessary to study neuronal responses to movies in which these grip parameters change.

A large number of hand-type neurons were also activated during the manipulation movement in the dark, as revealed by our study and Pani et al. (2014); these were visual-motor neurons with visual input for hand kinematics and motor signals. AIP and F5 have strong reciprocal connections (Luppino, Murata, Govoni, & Matelli, 1999). Physiological studies have revealed that visual-motor neurons in F5 are active before movement onset during manipulation in the dark, meaning that they exhibit set-related activity (Raos et al., 2006; Raos, Umilta, Gallese, & Fogassi, 2004). Conversely, AIP neurons do not show set-related activity (Murata et al., 1996). Based on these data, we speculate that the motor-related responses of visual-motor neurons in AIP may derive not so much from direct motor output signals as from the corollary discharge of the motor program from F5 (Sakaguchi, Ishida, Shimizu, & Murata, 2010; Murata et al., 2000; Sakata et al., 1995). Furthermore, we excluded neurons with somatosensory responses before recording. We hypothesize that these neurons are capable of monitoring ongoing hand movement by comparing the corollary discharge (predicted sensory feedback) with the actual visual feedback. Notably, the activity of premotor neurons is modulated by transient visual feedback during grasping actions (Fadiga et al., 2013), demonstrating that a subset of premotor neurons is sensitive to the observation of one's own grasping. Fadiga et al. suggested that this modulation arises along the parieto-frontal pathway. Similar interactions between motor and somatosensory signals have been shown (Ishida, Fornia, Grandi, Umilta, & Gallese, 2013; Gardner, Debowy, Ro, Ghosh, & Babu, 2002). The secondary somatosensory cortex and posterior insular cortex have reciprocal connections with the parieto-premotor grasping-related areas, and neurons in these areas respond both to passive somatosensory stimulation and during the motor task (Ishida et al., 2013). This network should be involved in somatosensory control of grasping.

Hand-object-type Neurons

A subset of hand manipulation-related neurons responded to the visual presentation only when both the object image and the hand action image appeared. This type of neuron may be involved in the recognition of goal-directed movement or of the relationship between the object and the hand rather than hand kinematics. We designated these neurons as hand-object-type. Hand-object-type neurons may also monitor abstract goals rather than the detailed kinematics of actions. Some neurons in F5 discharge during the same phase of grasping with normal and reverse pliers, regardless of whether this involves opening or closing of the hand (Umilta et al., 2008). Furthermore, activation of mirror neurons in F5 requires the existence of the object (Umilta et al., 2001). We believe that these neurons may represent the goals of actions. F5 activity may modulate parietal neuronal activity to monitor the results of actions.

Functional Roles of Hand Manipulation-related Neurons in Response to the Experimenter's Hand Image

We found that a subset of hand manipulation-related neurons with motor representation in AIP and PFG also responded to the visual presentation of the actions of others. These neurons are mirror neurons. Previous studies revealed that mirror neurons are involved in recognizing goals or understanding actions (Bonini et al., 2010; Fogassi et al., 2005; Rizzolatti et al., 1996, 2001; Umilta et al., 2001; Gallese et al., 1996; Dipellegrino et al., 1992). Different functional roles for mirror neurons have been suggested using various paradigms, such as a mental simulation routine (Gallese & Goldman, 1998), intention recognition (Fogassi et al., 2005), and mental state inference (Oztop, Kawato, & Arbib, 2013; Oztop, Wolpert, & Kawato, 2005).

In this study, we revealed unique visual response properties of mirror neurons during a self-generated action. We found that a subset of mirror neurons responded to the visual presentation of one's own hand action. These self-other image-type neurons responded to the visual image of an observed action regardless of the self-other distinction. We also found neurons that responded only to the visual image of one's own hand movement or only to the experimenter's hand movement (self or other image-type). Caggiano et al. (2011) studied the activity of F5 mirror neurons while presenting the monkey's action from different points of view. They found that the majority of F5 mirror neurons showed responses modulated by the point of view, whereas a minority did not, and they concluded that viewpoint-invariant mirror neurons represent the goal of the observed action. Although we did not present different views of the monkey's action and our experimental methods were not the same as theirs, the self-other image-type neurons in our study may be viewpoint independent, similar to those F5 mirror neurons.

Importantly, we found that a subset of self-other image-type neurons responded to the visual presentation of one's own hand action even without the image of the object. F5 mirror neurons have been reported to respond only to transitive actions (Umilta et al., 2001). Although we did not investigate whether these neurons responded to the FHE task with the object removed, we speculate that mirror neurons in the IPL would respond not only to transitive actions but also to the kinematics of an observed action. This property may be an important difference between mirror neurons in F5 and those in the IPL. These IPL neurons may commonly represent the kinematics of hand action, regardless of the self-other distinction or viewpoint, rather than the goal of the action; such a representation is important for online feedback control across various situations. Of course, this type of activity may also be useful for action recognition. In Caggiano's experiments, the largest group of neurons comprised viewpoint-dependent neurons that preferred the first-person point of view. This observation implies that the activity of F5 mirror neurons derives from the visual feedback signal used to control self-generated actions.

In recent computational models, the functional system of mirror neurons may also play a role in monitoring the success of one's own actions and be activated both by recognition of one's own observed actions and by corollary discharge from one's intended actions (Oztop et al., 2013; Bonaiuto & Arbib, 2010). As such, mirror neurons likely play a role in acquiring motor skills (Cooper, Cook, Dickinson, & Heyes, 2013; Prather et al., 2008; Tchernichovski & Wallman, 2008). These models are consistent with our present hypothesis that hand manipulation-related and mirror neurons in the IPL, which form a cortico-cortical network with F5, contribute to comparing predicted and actual sensory feedback during ongoing movement and represent the body image in the brain.

Why do these hand manipulation-related neurons respond to the visual presentation of hand action? Ontogenetic accounts of mirror neuronal responses, such as Heyes' associative sequence learning theory (Heyes, 2001, 2010), hypothesize that mirror neurons initially had motor properties but either no specific sensory properties or only weak and unsystematic associations with sensory input (Catmur, 2013). Sensory neurons responded to different visual properties of the observed action regardless of the self-other distinction. During development, we commonly perceive visual images of our own actions as visual feedback contingent on those actions. An association may therefore easily form between the visual image of one's own action and the motor signal used to execute it. This may be an important process for identification of self-body action. The system also leads to a generalized matching between executed actions and actions observed in other individuals (Fadiga et al., 2013; Raos et al., 2006). This idea is congruent with our finding that mirror neurons are found in AIP, where hand-type neurons are also found. One's own body representation may provide a basic reference frame for mapping another's body and/or action (Ishida et al., 2010), and the neuronal substrates that monitor one's own action are shared with those for recognizing and understanding the actions of others (Murata & Ishida, 2007; Rizzolatti et al., 1997).

Visual representations of body movements are formed via visuo-motor neuronal processes, and these visual maps may depend on neuronal activity in the parieto-premotor network, including the mirror neuron system. This network appears to perform an essential neuronal operation for representing one's own and others' body movements.

Acknowledgments

This work was supported in part by Grants-in-Aid for Scientific Research on Priority Areas (13210133, 14017090, 15016102, 16015307), “Emergence of Adaptive Motor Function through Interaction between Body, Brain, and Environment” (18047026, 20033022) from the Japanese MEXT, JSPS KAKENHI (23500486, 26350989, 26120002), Grant-in-Aid for JSPS Fellows (13J00039), the Narishige Neuroscience Research Foundation, and Nakayama Foundation for Human Science. We thank all the staff of the Life Science Laboratory for caring for the animals.

Reprint requests should be sent to Akira Murata, Department of Physiology, Kinki University Faculty of Medicine, Ohno-Higashi, Osakasayama, Japan 589-8511, or via e-mail: akiviola@med.kindai.ac.jp.

REFERENCES

Andersen, R. A., Bracewell, R. M., Barash, S., Gnadt, J. W., & Fogassi, L. (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10, 1176–1196.

Andersen, R. A., Snyder, L. H., Bradley, D. C., & Xing, J. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience, 20, 303–330.

Blakemore, S. J., & Sirigu, A. (2003). Action prediction in the cerebellum and in the parietal lobe. Experimental Brain Research, 153, 239–245.

Bonaiuto, J., & Arbib, M. A. (2010). Extending the mirror neuron system model, II: What did I just do? A new role for mirror neurons. Biological Cybernetics, 102, 341–359.

Bonini, L., Rozzi, S., Serventi, F. U., Simone, L., Ferrari, P. F., & Fogassi, L. (2010). Ventral premotor and inferior parietal cortices make distinct contribution to action organization and intention understanding. Cerebral Cortex, 20, 1372–1385.

Caggiano, V., Fogassi, L., Rizzolatti, G., Pomper, J. K., Thier, P., Giese, M. A., et al. (2011). View-based encoding of actions in mirror neurons of area F5 in macaque premotor cortex. Current Biology, 21, 144–148.

Catmur, C. (2013). Sensorimotor learning and the ontogeny of the mirror neuron system. Neuroscience Letters, 540, 21–27.

Cooper, R. P., Cook, R., Dickinson, A., & Heyes, C. M. (2013). Associative (not Hebbian) learning and the mirror neuron system. Neuroscience Letters, 540, 28–36.

Desmurget, M., Epstein, C. M., Turner, R. S., Prablanc, C., Alexander, G. E., & Grafton, S. T. (1999). Role of the posterior parietal cortex in updating reaching movements to a visual target. Nature Neuroscience, 2, 563–567.

Dipellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992). Understanding motor events: A neurophysiological study. Experimental Brain Research, 91, 176–180.

Fadiga, L., Caselli, L., Craighero, L., Gesierich, B., Oliynyk, A., Tia, B., et al. (2013). Activity in ventral premotor cortex is modulated by vision of own hand in action. PeerJ, 1, e88.

Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: From action organization to intention understanding. Science, 308, 662–667.

Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119, 593–609.

Gallese, V., & Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2, 493–501.

Gardner, E. P., Debowy, D. J., Ro, J. Y., Ghosh, S., & Babu, K. S. (2002). Sensory monitoring of prehension in the parietal lobe: A study using digital video. Behavioural Brain Research, 135, 213–224.

Gregoriou, G. G., Borra, E., Matelli, M., & Luppino, G. (2006). Architectonic organization of the inferior parietal convexity of the macaque monkey. Journal of Comparative Neurology, 496, 422–451.

Head, H., & Holmes, G. (1911). Sensory disturbances from cerebral lesions. Brain, 34, 102–254.

Heyes, C. (2001). Causes and consequences of imitation. Trends in Cognitive Sciences, 5, 253–261.

Heyes, C. (2010). Where do mirror neurons come from? Neuroscience and Biobehavioral Reviews, 34, 575–583.

Hoff, B., & Arbib, M. A. (1993). Models of trajectory formation and temporal interaction of reach and grasp. Journal of Motor Behavior, 25, 175–192.

Ishida, H., Fornia, L., Grandi, L. C., Umilta, M. A., & Gallese, V. (2013). Somato-motor haptic processing in posterior inner perisylvian region (SII/pIC) of the macaque monkey. PLoS One, 8, e69931.

Ishida, H., Nakajima, K., Inase, M., & Murata, A. (2010). Shared mapping of own and others' bodies in visuotactile bimodal area of monkey parietal cortex. Journal of Cognitive Neuroscience, 22, 83–96.

Lehmann, S. J., & Scherberger, H. (2013). Reach and gaze representations in macaque parietal and premotor grasp areas. Journal of Neuroscience, 33, 7038–7049.

Luppino, G., Murata, A., Govoni, P., & Matelli, M. (1999). Largely segregated parietofrontal connections linking rostral intraparietal cortex (areas AIP and VIP) and the ventral premotor cortex (areas F5 and F4). Experimental Brain Research, 128, 181–187.

Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., & Rizzolatti, G. (1997). Object representation in the ventral premotor cortex (area F5) of the monkey. Journal of Neurophysiology, 78, 2226–2230.

Murata, A., Gallese, V., Kaseda, M., & Sakata, H. (1996). Parietal neurons related to memory-guided hand manipulation. Journal of Neurophysiology, 75, 2180–2186.

Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.

Murata, A., & Ishida, H. (2007). Representation of bodily self in the multimodal parieto-premotor network. In S. Funahashi (Ed.), Representation and brain (pp. 151–176). Japan: Springer.

Oztop, E., Kawato, M., & Arbib
,
M. A.
(
2013
).
Mirror neurons: Functions, mechanisms and models.
Neuroscience Letters
,
540
,
43
55
.
Oztop
,
E.
,
Wolpert
,
D.
, &
Kawato
,
M.
(
2005
).
Mental state inference using visual control parameters.
Cognitive Brain Research
,
22
,
129
151
.
Pani
,
P.
,
Theys
,
T.
,
Romero
,
M. C.
, &
Janssen
,
P.
(
2014
).
Grasping execution and grasping observation activity of single neurons in the macaque anterior intraparietal area.
Journal of Cognitive Neuroscience
,
26
,
2342
2355
.
Paulignan
,
Y.
,
Jeannerod
,
M.
,
MacKenzie
,
C.
, &
Marteniuk
,
R.
(
1991
).
Selective perturbation of visual input during prehension movements. 2. The effects of changing object size.
Experimental Brain Research
,
87
,
407
420
.
Paulignan
,
Y.
,
MacKenzie
,
C.
,
Marteniuk
,
R.
, &
Jeannerod
,
M.
(
1991
).
Selective perturbation of visual input during prehension movements. 1. The effects of changing object position.
Experimental Brain Research
,
83
,
502
512
.
Prather
,
J. F.
,
Peters
,
S.
,
Nowicki
,
S.
, &
Mooney
,
R.
(
2008
).
Precise auditory–vocal mirroring in neurons for learned vocal communication.
Nature
,
451
,
305
310
.
Raos
,
V.
,
Umilta
,
M. A.
,
Gallese
,
V.
, &
Fogassi
,
L.
(
2004
).
Functional properties of grasping-related neurons in the dorsal premotor area F2 of the macaque monkey.
Journal of Neurophysiology
,
92
,
1990
2002
.
Raos
,
V.
,
Umilta
,
M. A.
,
Murata
,
A.
,
Fogassi
,
L.
, &
Gallese
,
V.
(
2006
).
Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey.
Journal of Neurophysiology
,
95
,
709
729
.
Reichenbach
,
A.
,
Bresciani
,
J. P.
,
Peer
,
A.
,
Bulthoff
,
H. H.
, &
Thielscher
,
A.
(
2011
).
Contributions of the PPC to online control of visually guided reaching movements assessed with fMRI-guided TMS.
Cerebral Cortex
,
21
,
1602
1612
.
Rizzolatti
,
G.
, &
Arbib
,
M. A.
(
1998
).
Language within our grasp.
Trends in Neurosciences
,
21
,
188
194
.
Rizzolatti
,
G.
,
Fadiga
,
L.
,
Gallese
,
V.
, &
Fogassi
,
L.
(
1996
).
Premotor cortex and the recognition of motor actions.
Cognitive Brain Research
,
3
,
131
141
.
Rizzolatti
,
G.
,
Fogassi
,
L.
, &
Gallese
,
V.
(
1997
).
Parietal cortex: From sight to action.
Current Opinion in Neurobiology
,
7
,
562
567
.
Rizzolatti
,
G.
,
Fogassi
,
L.
, &
Gallese
,
V.
(
2001
).
Neurophysiological mechanisms underlying the understanding and imitation of action.
Nature Reviews Neuroscience
,
2
,
661
670
.
Rozzi
,
S.
,
Ferrari
,
P. F.
,
Bonini
,
L.
,
Rizzolatti
,
G.
, &
Fogassi
,
L.
(
2008
).
Functional organization of inferior parietal lobule convexity in the macaque monkey: Electrophysiological characterization of motor, sensory and mirror responses and their correlation with cytoarchitectonic areas.
European Journal of Neuroscience
,
28
,
1569
1588
.
Sakaguchi
,
Y.
,
Ishida
,
F.
,
Shimizu
,
T.
, &
Murata
,
A.
(
2010
).
Time course of information representation of macaque AIP neurons in hand manipulation task revealed by information analysis.
Journal of Neurophysiology
,
104
,
3625
3643
.
Sakata
,
H.
,
Taira
,
M.
,
Murata
,
A.
, &
Mine
,
S.
(
1995
).
Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey.
Cerebral Cortex
,
5
,
429
438
.
Schettino
,
L. F.
,
Adamovich
,
S. V.
, &
Poizner
,
H.
(
2003
).
Effects of object shape on hand configuration and visual feedback during grasping.
Experimental Brain Research
,
151
,
158
166
.
Sirigu
,
A.
,
Daprati
,
E.
,
Pradat-Diehl
,
P.
,
Franck
,
N.
, &
Jeannerod
,
M.
(
1999
).
Perception of self-generated movement following left parietal lesion.
Brain
,
122
,
1867
1874
.
Taira
,
M.
,
Mine
,
S.
,
Georgopoulos
,
A. P.
,
Murata
,
A.
, &
Sakata
,
H.
(
1990
).
Parietal cortex neurons of the monkey related to the visual guidance of hand movement.
Experimental Brain Research
,
83
,
29
36
.
Tchernichovski
,
O.
, &
Wallman
,
J.
(
2008
).
Behavioural neuroscience: Neurons of imitation.
Nature
,
451
,
249
250
.
Townsend
,
B. R.
,
Subasi
,
E.
, &
Scherberger
,
H.
(
2011
).
Grasp movement decoding from premotor and parietal cortex.
Journal of Neuroscience
,
31
,
14386
14398
.
Umilta
,
M. A.
,
Escola
,
L.
,
Intskirveli
,
I.
,
Grammont
,
F.
,
Rochat
,
M.
,
Caruana
,
F.
,
et al
(
2008
).
When pliers become fingers in the monkey motor system.
Proceedings of the National Academy of Sciences, U.S.A.
,
105
,
2209
2213
.
Umilta
,
M. A.
,
Kohler
,
E.
,
Gallese
,
V.
,
Fogassi
,
L.
,
Fadiga
,
L.
,
Keysers
,
C.
,
et al
(
2001
).
I know what you are doing: A neurophysiological study.
Neuron
,
31
,
155
165
.