Although it is established that F5 neurons can distinguish between nonsocial goals such as bringing food to the mouth for eating or placing it in a container, it is not clear whether they discriminate between social and nonsocial goals. Here, we recorded single-unit activity in the ventral premotor cortex of two female macaques and used a simple reach-to-grasp motor task in which a monkey grasped an object with a precision grip in three conditions, which only differed in terms of their final goal, that is, a subsequent motor act that was either social (placing in the experimenter's hand [“Hand” condition]) or nonsocial (placing in a container [“Container” condition] or bringing to the mouth for eating [“Mouth” condition]). We found that, during the execution of the grasping motor act, the response of a sizable proportion of F5 motor neurons was modulated by the final goal of the action, with some having a preference for the social goal condition. Our results reveal that the representation of goal-directed actions in ventral premotor cortex is influenced by contextual information not only extracted from physical cues but also from cues endowed with biological or social value. Our study suggests that the activity of grasping neurons in the premotor cortex is modulated by social context.
Naturalistic actions involve a hierarchy of goals, with subgoals acting in the service of an overarching goal (Rizzolatti, Cattaneo, Fabbri-Destro, & Rozzi, 2014). For example, the overarching goal of "eating" involves the subgoal of "taking possession." Distinct motor acts like reaching, grasping, and biting are each endowed with a specific motor goal. For instance, "grasping" implies having the motor goal of taking possession of an object. When planning and executing an action such as "grasping and eating a piece of food," an individual must use the overarching goal ("eating" in this example) to select an appropriate sequence of subgoals and the motor acts to achieve them.
In normal daily life, animals and humans do not just act independently from their conspecifics but are often required to perform actions directed to or jointly with others (Sebanz, Bekkering, & Knoblich, 2006). The motor system, therefore, must express a high degree of versatility to activate similar motor programs but aimed at different goals, both nonsocial and social (e.g., placing an object into a container or placing an object into the hand of another individual). Social goals involve coordination of an action not only with the external environment but also with other individuals who are not static environmental features but have their own intentions, motives, and internal states.
The ventral premotor area F5 is a core region for coding the goal of hand motor acts such as grasping, tearing, and holding (Rizzolatti, Fogassi, & Gallese, 2009). The neurons of F5 have the crucial property of coding goals independent of the sequence of movements or the effector used to achieve them (Umiltà et al., 2008; Rizzolatti et al., 1988), thus displaying a high level of motor abstraction (Rizzolatti et al., 2002). In area F5 and the inferior parietal lobule, the activation of specific neurons during the execution of a goal-directed motor act (grasping to place or grasping to eat) is modulated by the final goal of the action (eating or placing; Bonini et al., 2011; Fogassi et al., 2005). In these studies, the monkey was trained to execute two motor acts: grasp a piece of food and bring it to the mouth, or grasp an object and place it into a container (as well as to observe the experimenter performing the same two motor acts). A large proportion of motor neurons in F5 and inferior parietal lobule responded differentially to the same act (grasping) according to the final goal of the action in which the act was embedded. The ability of the motor system to encode the same action according to the final goal likely derives from the presence of contextual cues provided by the container in the grasp-to-place condition (i.e., the presence of a container inside which the object was subsequently placed) or by the type of object to grasp (i.e., a piece of food) in the grasp-to-eat condition.
Although it is established that F5 neurons can distinguish between nonsocial goals such as bringing food to the mouth for eating and placing in a container, it is not clear whether they also code social goals. Here, we devised a simple reach-to-grasp motor task in which a monkey grasped an object with a precision grip in three conditions, which differed only in terms of their final goal, that is, a subsequent motor act that was either social (placing in the experimenter's hand ["Hand" condition]) or nonsocial (placing in a container ["Container" condition] or bringing to the mouth for eating ["Mouth" condition]). We found that, during performance of the grasping motor act, the response of a sizable proportion of F5 motor neurons was modulated by the final goal, with some having a preference for the social goal condition.
Animals and Surgical Procedures
Two captive-born and individually housed adult female rhesus macaques (Macaca mulatta) served as subjects (M1 and M2). The animal handling, as well as surgical and experimental procedures, complied with the European guidelines (86/609/EEC 2003/65/EC Directives and 2010/63/EU) and Italian laws (D.L. 116-92, D.L. 26-2014) in force on the care and use of laboratory animals and were approved by the Veterinarian Animal Care and Use Committee of the University of Parma (Prot. 78/12 17/07/2012) and authorized by the Italian Health Ministry (D.M. 294/2012-C, 11/12/2012). The monkeys were housed and handled in strict accordance with the recommendations of the Weatherall Report about good animal practice. The well-being and health conditions of the monkeys were constantly monitored by the institutional veterinary doctor of the University of Parma.
A titanium head post (Crist Instruments) was surgically implanted on the skull using titanium screws. A cilux recording chamber (18 × 18 mm, Alpha-Omega) was stereotaxically implanted and secured with dental cement. For both procedures, each animal was deeply anesthetized with ketamine hydrochloride (5 mg/kg im) and medetomidine hydrochloride (0.1 mg/kg im), and its heart rate, temperature, and respiration were carefully monitored and kept within physiological range. Pain medication was routinely given after surgery (dexamethasone, 2 mg/kg, every 12 hr, from 1 day before to 3 days after surgery; ketoprofen, 5 mg/kg, every 12 hr for 3 days following surgery).
The basic task is illustrated in Figure 1A. The monkey was seated facing a table (60 × 60 cm) onto which a metallic cube was placed along the monkey's body midline, at 13 cm from the monkey's hand starting position. The monkey had to reach and grasp the object and then place it in a small container located 10 cm to the left of the grasping location. At the beginning of each trial, the monkey had to keep her right hand on a handle attached to the table for at least 1000 msec (Figure 1A-I), after which a transparent barrier was removed to give the "go" signal, and the monkey was required to grasp the object (Figure 1A-II) and place it in the container (Figure 1A-III). A juice reward (and a solid food reward) was delivered after 500–1000 msec if the monkey correctly executed the trial (Figure 1A-IV). The task was run in one or more blocks of five trials each until a minimum total of 12 trials per condition was reached. Any trial in which the grasping action was not properly executed was aborted, and no reward was delivered. Note that the reward consisted of juice (0.14 mL) and a piece of apple (1 cm3) for each trial and was the same for all three conditions (see Experimental conditions below). This reward schedule was used because in the Mouth condition (see below) the monkey had to grasp a piece of food, bring it to the mouth, and eat it. Therefore, in the other two conditions, we introduced the same type of reward to avoid possible differences in neuronal activity according to the type of reward.
The three experimental conditions are illustrated in Figure 1B. The first part of the task was the same in each condition: The monkey was required to release the handle and grasp an object or a piece of food with a precision grip. The experimental conditions differed only in the second part of the task in which the monkey had to place the object in a container (Container condition; Figure 1B-I), place the object in the hand of the experimenter (Hand condition; Figure 1B-II), or bring the object (a food morsel) to the mouth and eat it (Mouth condition; Figure 1B-III). Because the grasped object was a metallic cube in the Container and Hand conditions, but a cubic piece of apple in the Mouth condition, we had to balance the reward value across all conditions. Therefore, in addition to the juice reward delivered in all conditions, the same pieces of apple used in the Mouth condition were given to the monkey after the correct completion of the trials in the Container and Hand conditions. The conditions were executed in randomized blocks of five trials.
Neuronal activity was recorded with a linear multisite electrode (16-channel, 250-μm spacing; U-probe, Plexon, Inc.) and digitized at 40 kHz using the Omniplex 16-channel recording system (Plexon). Task control (including handle holding, hand–object contact detection, and reward delivery) was computer-controlled through a custom LabVIEW program. Contact detection panels (Crist Instruments) were used to record the exact moments at which the monkey's hand released the handle and touched the target. The latter event was used to align the neuronal activity recorded for every trial in all conditions.
We collected usable motor neuron data in 15 sessions from Monkey 1 and in 17 sessions from Monkey 2. Neuronal activity was recorded from area F5 of the ventral premotor cortex (see below for details). During the insertion of the electrode shaft in the brain, we ensured that the topmost electrode was positioned under the dura but remained outside the cortex, so that it could be used as a reference channel. Of the 15 available recording channels, two were dedicated to eye movement control for another task. The remaining 13 channels were used for single-unit recording. The multielectrode was lowered in the brain through the intact dura, and the general properties of the neurons were tested. When motor properties for grasping were found, we waited for approximately 1 hr to allow the neural activity to stabilize and then started to record single units using the behavioral protocol described above.
Preliminary Testing of Neuronal Activity
Before proceeding with the neural testing using our behavioral protocol, single-unit and multiunit activity were systematically tested for visuomotor properties to identify recording sites in area F5 endowed with hand grasping motor activity (Rozzi, Ferrari, Bonini, Rizzolatti, & Fogassi, 2008; see Maranesi et al., 2012). Briefly, we required the monkey to grasp food items in various conditions (i.e., with eyes closed or without flexing the wrist, elbow, or shoulder), enabling us to disentangle neuronal activity related to visual stimulation, reaching, or grasping. In addition, to exclude the possible presence of mouth-related responses, we tested changes in neural activity related to the delivery of small pieces of food directly into the mouth while the monkey's eyes were closed.
Neuronal spike waveforms were classified offline into units using commercial spike-sorting software (OfflineSorter, Plexon). The spike-sorting procedure was as follows: We set the detection threshold at −2% of the energy of the signal and used a prethreshold period of 200 μsec, a total time window of 1200 μsec, and a refractory interspike interval of 1000 μsec. We used the "valley seek" method for sorting and applied semiautomatic clustering using templates constructed from manually selected waveforms resembling action potentials. This method allowed us to reject possible artifacts as well as spikes that were too small to be reliably sorted. We then applied a PCA to summarize each spike waveform by a feature vector. Finally, we divided these vectors into clusters (putative neurons) using the k-means clustering feature of the software and excluded outliers so that the clusters did not overlap.
The response selectivity of each neuron was determined by comparing the spike frequency for each trial across two epochs (Baseline and Grasping, described below) and the three conditions (Container, Hand, and Mouth). The statistical analysis consisted of a 2 × 3 repeated-measures ANOVA (factors: Epoch and Condition) followed by a Newman–Keuls post hoc test, all with a .05 alpha level. The two epochs had a 350-msec duration and were precisely calculated with reference to the digital events and the contact with the grasped object, and verified by an offline kinematic analysis recorded in a separate session (see Movement Kinematics Analysis below). The Baseline epoch corresponded to a time in which the monkey remained still with her hand on the starting handle, lasting from 1000 to 650 msec before the monkey's release of the handle. The Grasping epoch started 100 msec before the release of the handle (when the monkey's hand started to move, as determined by the kinematic analysis) and ended at the object displacement onset (when the monkey had grasped the object or the piece of food and started to lift it), an event signaled by the hand–object–table contact detection panel. The Grasping epoch was defined so as to encompass the whole reaching–grasping phase without including the placing phase of the action (which was expected to be kinematically different between conditions). The moment when the monkey's hand touched the object (indicated by the "on" signal on the hand–object–table contact detection panel) was used to align the trials.
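For concreteness, the epoch firing-rate computation described above can be sketched as follows. This is a minimal illustration, not the original analysis code: function and variable names (epoch_rate, release_t, lift_t) are ours, and times are in seconds relative to trial onset.

```python
import numpy as np

def epoch_rate(spike_times, t_start, t_stop):
    """Mean firing rate (spikes/s) of one trial within [t_start, t_stop)."""
    spike_times = np.asarray(spike_times)
    n = np.sum((spike_times >= t_start) & (spike_times < t_stop))
    return n / (t_stop - t_start)

def trial_epoch_rates(spike_times, release_t, lift_t):
    """Baseline: 1000-650 msec before handle release; Grasping: 100 msec
    before release to object-displacement onset (per the epoch definitions
    in the text; event times are assumed to come from the contact panels)."""
    baseline = epoch_rate(spike_times, release_t - 1.0, release_t - 0.65)
    grasping = epoch_rate(spike_times, release_t - 0.1, lift_t)
    return baseline, grasping
```

Rates computed this way, per trial and per condition, are the inputs to the 2 × 3 ANOVA described above.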
Neurons were classified (1) as grasping neurons if they showed a significant main effect of Epoch and (2) as having a goal preference if they also showed a main effect of Condition or an Epoch × Condition interaction. Planned contrasts indicated in which condition (or conditions) the mean discharge during the Grasping epoch was significantly different from the other(s).
The population analysis was carried out by constructing a spike density function for each neuron by averaging the spike frequency for 20-msec bins across trials within each condition. The baseline activity (mean spikes per second during the Baseline epoch) was subtracted from the spike frequency of each bin. The data were then normalized by dividing the baseline-corrected spike frequency within each bin by the maximum absolute value found across all bins in all conditions. The result is a discharge rate ranging between −1 and 1 for each neuron. The baseline-corrected and normalized spike density function was then smoothed using a Gaussian kernel with a width of 30 msec.
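The binning, baseline correction, normalization, and smoothing steps above can be sketched as follows. Note two assumptions: the 30-msec kernel "width" is treated here as the Gaussian sigma, and all helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

BIN = 0.020  # 20-msec bins

def spike_density(trial_spike_counts, baseline_rate, kernel_ms=30):
    """trial_spike_counts: (n_trials, n_bins) spike counts per 20-msec bin
    for one neuron in one condition. Returns the trial-averaged,
    baseline-corrected, smoothed density (in spikes/s)."""
    rate = trial_spike_counts.mean(axis=0) / BIN          # spikes/s per bin
    corrected = rate - baseline_rate                      # subtract Baseline mean
    sigma_bins = (kernel_ms / 1000.0) / BIN               # 30 msec -> 1.5 bins
    return gaussian_filter1d(corrected, sigma=sigma_bins)

def normalize_across_conditions(sdfs):
    """sdfs: dict of condition -> density array for one neuron; divide by the
    maximum absolute value over all bins in all conditions so that values
    range between -1 and 1, as in the text."""
    peak = max(np.max(np.abs(s)) for s in sdfs.values())
    return {c: s / peak for c, s in sdfs.items()}
```

The normalized densities are then averaged over neurons to obtain the population curves described next.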
The population curve was computed as the mean spike frequency in each bin over all neurons. The statistical differences between populations of neurons were calculated based on the firing rate during the Grasping epoch. One-way repeated-measures ANOVAs were used to compare the effect of condition on the population firing rate, followed by Newman–Keuls post hoc procedures.
The time at which goal-modulated neurons begin to show selectivity was calculated as follows: A baseline discharge rate was subtracted from each value (in 20-msec time bins) of the differential activity time course (preferred–nonpreferred conditions). The baseline value was the mean differential activity between preferred and nonpreferred conditions during the 350-msec baseline epoch plus 1 standard deviation. The time bin corresponding to the occurrence of a series of at least three consecutive bins with values <0 before the peak of selectivity was taken as the onset time.
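Our reading of the onset criterion above can be made explicit in a short sketch: after subtracting (baseline mean + 1 SD) from the differential curve, we scan backward from the selectivity peak for the last run of at least three consecutive sub-threshold bins and take the first bin after that run as the onset. This interpretation, and the function name, are ours.

```python
import numpy as np

def selectivity_onset(diff_curve, baseline_mean, baseline_sd):
    """diff_curve: preferred-minus-nonpreferred activity in 20-msec bins.
    Returns the onset bin index, or None if the criterion is never met."""
    d = np.asarray(diff_curve, dtype=float) - (baseline_mean + baseline_sd)
    peak = int(np.argmax(d))
    run = 0
    for i in range(peak, -1, -1):       # scan backward from the peak
        if d[i] < 0:
            run += 1
            if run >= 3:                # run spans bins i, i+1, i+2
                return i + 3            # first supra-threshold bin after it
        else:
            run = 0
    return None
```

The returned bin index is converted to time relative to object touch using the trial alignment described above.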
K-means Clustering Analysis
The standard K-means clustering algorithm (MATLAB, MathWorks), with squared Euclidean distance as the metric, was applied to objectively characterize the discharge patterns of the neuronal population. Clustering was applied to the baseline-corrected, normalized, and smoothed spike density function of each neuron during the Grasping epoch, averaged over trials within each condition (each neuron contributed 18 time bins × 3 conditions, corresponding to the time between movement onset and object touch). The optimal number of clusters was determined using the Silhouette (Sil; Kaufman & Rousseeuw, 1990), Davies–Bouldin (DB; Davies & Bouldin, 1979), Calinski–Harabasz (CH; Calinski & Harabasz, 1974), Krzanowski–Lai (KL; Krzanowski & Lai, 1988), Hartigan (Hartigan, 1975), weighted inter–intra (Wint; Strehl, 2002), and homogeneity–separation (Chen et al., 2002) indices. Within each cluster, a one-way ANOVA was computed for each time bin of the Grasping epoch to assess significant differences in discharge rate between conditions (Bonferroni-corrected according to the number of time bins).
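The cluster-number selection can be illustrated with a sketch using three of the listed indices. The original analysis was done in MATLAB; this Python version with scikit-learn is a hedged approximation, not the authors' code, and the function name is ours.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

def cluster_profiles(X, k_range=range(2, 11), seed=0):
    """X: (n_neurons, 54) matrix of normalized Grasping-epoch densities
    (18 bins x 3 conditions per neuron). Returns validity indices per k;
    the final k is then chosen as the mode across indices, as in the text."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        scores[k] = {
            "silhouette": silhouette_score(X, labels),            # higher better
            "davies_bouldin": davies_bouldin_score(X, labels),    # lower better
            "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher
        }
    return scores
```

Each index nominates the k at which it is optimal; taking the mode of those nominations reproduces the selection rule described above.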
Nonmetric Multidimensional Scaling
To examine the temporal trajectory of the neural population activity in each condition, we used multidimensional scaling (MDS) analyses carried out with MATLAB software (MathWorks). We used the baseline-corrected firing rate of each neuron, averaged across trials and concatenated over conditions, to construct a 267 × 423 dimensional matrix, X (89 time bins per trial in each of three conditions, 423 excitatory neurons), containing the mean firing rate of each neuron over the time course of each condition. We then computed a Euclidean distance matrix, D (267 × 267), where each element Di,j contained the Euclidean distance between rows i and j of X (Di,j = ∥Xi − Xj∥2) and thus the distance between snapshots of population activity at different time points in a 423-dimensional space. We then applied the standard nonmetric MDS (NMDS) algorithm (Shepard, 1980) using D to obtain a 267 × 2 dimensional representation of X (we chose two dimensions based on the reduction in stress from 0.127 to 0.057 going from one to two dimensions), while maintaining the approximate relative distance between pairs of observations in the original, higher-dimensional space (Kayaert, Biederman, & Vogels, 2005). We then reshaped the result to obtain one 89 × 2 dimensional population trajectory for each condition. NMDS makes fewer assumptions about the data than other techniques, such as principal component analysis, and because the number of dimensions is chosen prior to the analysis, there are no hidden dimensions of variation. Because the neurons from different penetrations were necessarily recorded during different trials, we estimated the variability of the population activity trajectories using a bootstrapping technique in which we obtained the population activity by averaging each neuron's activity over 10 trials sampled randomly without replacement from the corresponding penetration.
This was repeated 100 times, resulting in 100 different 423-dimensional population activity vectors for each condition. A one-way analysis of similarities (ANOSIM) with Bonferroni-corrected pairwise follow-up tests (Clarke & Gorley, 2001), implemented as part of the Fathom toolbox (Jones, 2014), was used to test for significant differences in population activity trajectories between conditions at each trial event (movement onset, handle release, object touch, object lift), using dissimilarity matrices created from pairwise comparisons of trajectory coordinates from NMDS. ANOSIM is a nonparametric permutation test (we used 1000 permutations) that calculates an R statistic indicating the degree of separation between conditions, with a value of 1 indicating complete separation and 0 indicating no separation.
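Clarke's R statistic underlying ANOSIM can be computed directly from a dissimilarity matrix. The following generic sketch (not the Fathom toolbox implementation) illustrates the rank-based definition and the permutation test described above.

```python
import numpy as np
from scipy.stats import rankdata
from scipy.spatial.distance import squareform

def anosim(D, labels, n_perm=1000, seed=0):
    """Clarke's ANOSIM on a square dissimilarity matrix D with group labels.
    R = (mean between-group rank - mean within-group rank) / (M/2), where
    M = n(n-1)/2; the p-value comes from a label permutation test."""
    labels = np.asarray(labels)
    n = len(labels)
    ranks = rankdata(squareform(D, checks=False))  # condensed upper triangle
    iu, ju = np.triu_indices(n, k=1)               # same pair ordering

    def r_stat(lab):
        within = lab[iu] == lab[ju]
        m = n * (n - 1) / 2
        return (ranks[~within].mean() - ranks[within].mean()) / (m / 2)

    r_obs = r_stat(labels)
    rng = np.random.default_rng(seed)
    perm_r = np.array([r_stat(rng.permutation(labels))
                       for _ in range(n_perm)])
    p = (np.sum(perm_r >= r_obs) + 1) / (n_perm + 1)
    return r_obs, p
```

Two fully separated groups yield R = 1, matching the interpretation of the statistic given above.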
Movement Kinematics Analysis
The kinematic parameters of reaching and grasping movements were recorded in a dedicated session. The action was filmed from a lateral view (on the monkey's left side), encompassing the hand on the starting handle and the object to grasp, using a camera (Canon EOS-7D) with a temporal resolution of 50 fps and a spatial resolution of 5184 × 3456 pixels. The hand excursion along the x and y axes (sagittal and vertical, respectively) was measured using open source video analysis software (Tracker). Three LEDs connected to the contact detection circuits indicated the handle release, object touch, and lifting phases of the action; they were placed in the background, within the camera field, but out of the monkey's sight. The data were first analyzed by comparing the reaching/grasping duration, as well as the velocity and movement amplitude of the wrist, between the three conditions (20 trials each) using a one-way ANOVA. Because each trial varied in the timing of the behavioral events, the eight-dimensional trajectories (x, y positions for the wrist, CMC, index PIP, and thumb MCP) from each recorded trial were then aligned using canonical time warping (Zhou & De la Torre, 2009) to compare them between conditions. The aligned trajectories were then projected into a two-dimensional space (based on the reduction in stress from 0.022 to 0.004 going from one to two dimensions) using NMDS, with the Euclidean distance between each pair of the 60 (three conditions, 20 trials each) temporally aligned trajectories as input.
A one-way ANOSIM with Bonferroni-corrected pairwise follow-up was used to test for significant differences in kinematic trajectories between conditions at the handle release and object touch events, using dissimilarity matrices created from pairwise comparisons of trajectory coordinates from NMDS.
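Canonical time warping itself is beyond a short sketch, but the role of trial alignment can be illustrated with a simplified stand-in that piecewise-linearly resamples each eight-dimensional trajectory between behavioral events. This is not the method used above; event indices, segment lengths, and names here are illustrative.

```python
import numpy as np

def align_trajectory(traj, event_idx, samples_per_seg=25):
    """traj: (T, 8) x/y positions of the four tracked landmarks;
    event_idx: frame indices of successive behavioral events within the
    trial (e.g., movement onset, handle release, object touch, object lift).
    Linearly resamples each between-event segment to a fixed length so that
    trials become comparable point by point."""
    segments = []
    for a, b in zip(event_idx[:-1], event_idx[1:]):
        t_old = np.linspace(0.0, 1.0, b - a + 1)
        t_new = np.linspace(0.0, 1.0, samples_per_seg)
        seg = np.column_stack(
            [np.interp(t_new, t_old, traj[a:b + 1, d])
             for d in range(traj.shape[1])]
        )
        segments.append(seg)
    return np.vstack(segments)   # (n_segments * samples_per_seg, 8)
```

After alignment, the pairwise Euclidean distances between trials feed the NMDS and ANOSIM steps described above.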
The activity of 1499 neurons was recorded from area F5 in the left hemisphere of two monkeys. A total of 437 single units (Monkey 1, 157 units; Monkey 2, 280 units) were classified as "hand grasping neurons," as tested with the task illustrated in Figure 1A and statistically confirmed with the 2 × 3 ANOVA described in the Data Analysis section. The neurons were collected from 32 electrode penetrations and 13 different cortical depths. We found that 180 grasping neurons (41% of the recorded motor neurons) were modulated by the goal of the action and had a preference for either one or two of the conditions (Figure 2). We hereafter refer to these neurons as "goal-modulated." The majority of these goal-modulated neurons (63%, 114/180) had a preference for only one condition. Three examples of goal-modulated neurons with a preference for one condition are presented in Figure 3. Unit 146 had a preference for grasping followed by placing the object in a container. Unit 54 had a preference for grasping followed by eating the piece of food. The third neuron, Unit 174, had a preference for grasping followed by placing the object in the hand of the experimenter.
The population analysis for the goal-modulated neurons modulated by a single goal (one condition) is shown in Figure 4. As expected, the discharge in the preferred condition for these neurons is significantly stronger than in the nonpreferred conditions (Figure 4A; Container-goal-modulated neurons; F(2, 56) = 45.63, p < .0001; Post hoc, Container vs. Hand, p < .001; Container vs. Mouth, p < .001; Figure 4B; Hand-goal-modulated neurons; F(2, 32) = 31.92, p < .0001; Post hoc, Hand vs. Container, p < .001; Hand vs. Mouth, p < .001; Figure 4C; Mouth-goal-modulated neurons; F(2, 128) = 259.27, p < .001; Post hoc, Mouth vs. Container, p < .0001; Mouth vs. Hand, p < .0001).
The results of the population analysis of all goal- and non-goal-modulated neurons are illustrated in Figure 5. Significant differences between the conditions among the goal-modulated neurons (Figure 5A) were found (one-way repeated-measure ANOVA; F(2, 350) = 10.13, p < .0001). Post hoc comparisons revealed that goal-modulated neurons fired significantly more in the Mouth condition than in the other conditions (Mouth vs. Container, p < .001; Mouth vs. Hand, p < .001). Likewise, for the non-goal-modulated neurons (Figure 5B), significant differences between conditions at a population level were found, F(2, 492) = 12.51, p < .0001, with post hoc comparisons revealing significantly more firing in the Mouth condition than in the other conditions (Mouth vs. Container, p < .0001; Mouth vs. Hand, p < .0001).
Mouth Condition Preference
The majority of neurons with a single preferred condition preferred the Mouth condition (see also Figure 2). Likewise, a stronger discharge in the Mouth condition was a characteristic of all goal-modulated neurons taken together and was also present among the non-goal-modulated neurons. The latter result might sound counterintuitive, but the fact that non-goal-modulated neurons do not individually show a significant preference does not mean that, as a population, their firing rate was the same across conditions.
Note that the overall preference for the Mouth condition is not due to oral movements. No neuron included in our analyses showed any discharge during oral movements such as mastication, and therefore, we can exclude the possibility that motor preparation for mouth opening is responsible for the increase in firing activity during the grasping action in the Mouth condition.
Clustering Analysis of Neuronal Activity
Because differences were found between the three conditions at a population level for both modulated and nonmodulated neurons, we sought to analyze the neuronal population in more detail, without any of the a priori assumptions about neural selectivity embedded in the ANOVA-based classification. We used cluster analysis to identify populations of neurons with different discharge patterns across conditions. The main results of this analysis are shown in Figure 6A. In the central panel, each of the 423 neurons (only excitatory neurons were considered for this analysis) is plotted as a point in a two-dimensional space using NMDS to show their similarity in discharge pattern across conditions, with different colors denoting cluster membership (seven clusters). The optimal number of clusters was chosen according to a variety of cluster dispersion metrics (see K-Means Clustering Analysis section). As each metric yielded a different optimal number of clusters (Sil = 3, DB = 7, CH = 7, KL = 5, Hartigan = 7, Wint = 3, homogeneity–separation = 7), we chose the mode (seven clusters). To help visualize cluster differences, we plotted the average response of the neurons belonging to each cluster for the three conditions (Figure 6A). The seven clusters seem to represent different aspects of the neuronal discharge during the task, that is, a preference for one or more conditions, the timing of the peak response, or a combination of both. We found three clusters reflecting a neuronal preference for the action goal with a peak of activity around the Touch event (Cluster 1 for Container, Cluster 2 for Hand, and Cluster 3 for Mouth). Two additional clusters represent both the time of peak activity and a preference for the Mouth condition (Cluster 7 for a peak just after Release of the handle and Cluster 6 for a peak around Lift Object).
Cluster 5 represents neurons with a late peak of activity when the object is lifted (Lift Object) and no goal preference, whereas Cluster 4 seems to represent late peaking activity and a preference for nonsocial goals (Container and Mouth).
To assess the similarity between neural discharge patterns within and between clusters, we computed a similarity matrix (Figure 6B) showing the mean pairwise Euclidean distance between the discharge patterns of neurons belonging to each cluster. The low distance values along the diagonal of the matrix show that neurons in each cluster behave most similarly to neurons in the same cluster (as expected from the clustering algorithm), with Cluster 6 being the most homogeneous. There was also a strong similarity between Cluster 6 and Clusters 3, 4, and 5, all of which represented neurons with late peaks of activity. Figure 6C shows the percentage breakdown of cluster membership for each ANOVA-based classification. Note that Clusters 1–4 contain 50% or more of the neurons that were found to be selective for Container (62.1% in Cluster 1), Hand (52.9% in Cluster 2), Mouth (52.3% in Cluster 3), or Container–Mouth (56.3% in Cluster 4), respectively. This result means that the coding of the grasp goal is evident at the population level not only under a priori assumptions about possible goal specificity (as revealed by the ANOVAs) but also in an unbiased, unsupervised clustering of neuronal discharge profiles.
NMDS Assessment of Population Activity Trajectories
The general dissimilarity of the population activity during an action with a social goal is confirmed by an NMDS analysis (Figure 7). The NMDS analysis using two dimensions resulted in a good fit to the data (stress = 0.058, R2 = .98, p < .0001). This analysis shows that the population-level discharge pattern associated with the goal of grasping to place in the hand is qualitatively different from the one where the goal of grasping is to place in the container or eat. In this analysis, we treated the activity pattern of the entire population at each point in time (averaged over trials) as a vector in a high-dimensional space and visualized the trajectory of this vector as the trial unfolded. A bootstrapping procedure using random subsets of the trials on each iteration was used to estimate the variability of the population activity. The population activity pattern was very similar at the start of the trial in each condition (global R = .43, p = .001) and gradually diverged, with divergence of the Mouth and Hand conditions from the Container condition starting at the onset of movement (global R = .84, p = .001; Container–Hand R = .89, p = .003; Container–Mouth R = .99, p = .003; Hand–Mouth R = .63, p = .003) and continuing until the release of the monkey's hand from the handle following the “go” signal (global R = .77, p = .001; Container–Hand R = .77, p = .003; Container–Mouth R = 1.0, p = .003; Hand–Mouth R = .44, p = .003). Interestingly, the activity pattern in the Hand condition was maximally different from the Container and Mouth conditions around the time of the initial touch of the object (global R = .74, p = .001; Container–Hand R = .98, p = .003; Container–Mouth R = .30, p = .003; Hand–Mouth R = .97, p = .003) and became more similar to the Container condition by the time the object was lifted (global R = .90, p = .001; Container–Hand R = .77, p = .003; Container–Mouth R = 1.0, p = .003; Hand–Mouth R = .92, p = .003). 
This is in spite of the fact that all conditions are kinematically similar during the grasping epoch (see Movement Kinematics section below). Unsurprisingly, the population activity in the Mouth condition was most dissimilar to the other two conditions following the lifting of the object, which is when the kinematic differences in this action become manifest. This result shows that, at the population level, the Hand condition is coded differently from the Container and Mouth conditions even during the time of the greatest kinematic congruence between conditions.
Goal Modulation and Discrimination Latency
To better characterize the goal-modulated neurons, we analyzed the time at which they began to discriminate the goal of the action. The goal discrimination time for each neuron is shown in Figure 4 (bottom). There were no significant differences between the discrimination latencies of the three types of goal-modulated neurons (Container, mean = −143.40 msec, SE = 18.53; Hand, mean = −134.20 msec, SE = 29.44; Mouth, mean = −75.60 msec, SE = 11.46; Kruskal–Wallis, chi-square = 5.70, df = 2, ns).
Because the aim of the present experiment was to compare neuronal discharge between similar grasping actions that differed only in the subsequent action in which they were embedded, it was important to verify that the kinematic parameters of the reaching and grasping movements were the same across the three conditions. Results revealed no differences in movement extent (in height or in length) during the reaching phase between the three conditions. However, we found that reaching was slower in the Mouth condition (Container: mean = 787.94, SE = 19.02 mm/sec; Hand: mean = 747.87, SE = 15.72 mm/sec; Mouth: mean = 706.47, SE = 6.88 mm/sec; F(2, 57) = 7.59, p < .005), probably indicating a deceleration of the reaching movement to grasp the cubic food morsel, which has a different texture than the metallic cube used in the Container and Hand conditions. This difference is, however, unlikely to have affected the proportion of goal-modulated neurons found in this experiment. Indeed, the "grasping to eat" trials were also slower during the recording of neurons preferring the other conditions (Container or Hand), ruling out the possibility that grasping velocity was a determining factor in neuronal preference. We also verified whether the overall neuronal firing was linked to the duration of the reaching phase in the goal-modulated neurons. We found no significant correlation between the duration of the reaching phase (from handle release to object touch) and the neuronal discharge across trials in any of the conditions (Pearson correlation; Container: r = −.013, ns; Hand: r = −.037, ns; Mouth: r = −.046, ns), further supporting the lack of a link between the kinematic parameters of the reach-to-grasp movement and the firing rate.
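The two kinematic control analyses above can be sketched as a one-way ANOVA on mean reach velocity across conditions and a Pearson correlation between reach duration and firing rate. All values below are synthetic placeholders chosen for illustration, not the recorded data.

```python
# Sketch of the kinematic control analyses described above (synthetic data).
import numpy as np
from scipy.stats import f_oneway, pearsonr

rng = np.random.default_rng(2)
# Mean reach velocity per session (mm/sec), one array per condition.
vel = {"Container": rng.normal(788, 85, 20),
       "Hand": rng.normal(748, 70, 20),
       "Mouth": rng.normal(706, 31, 20)}
F, p_vel = f_oneway(vel["Container"], vel["Hand"], vel["Mouth"])

# Per-trial reach duration (handle release to object touch) vs. firing
# rate; here the two are generated independently, so r should be near 0.
duration = rng.normal(0.4, 0.05, 200)            # sec
firing = rng.poisson(20, 200).astype(float)      # spikes/sec
r, p_corr = pearsonr(duration, firing)
print(f"F = {F:.2f} (p = {p_vel:.3f}); r = {r:.3f} (p = {p_corr:.3f})")
```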
In addition, the NMDS analysis (stress = 0.004, R2 = .99, p < .0001) demonstrated that the overall kinematic trajectories were very similar across conditions, with no separation between the trajectories at either the handle release (global R = .00, p = .538) or object touch events (R = .031, p = .087; Figure 8).
The main finding of this study is that, among a population of “hand grasping motor neurons” in F5, which have a preference for the final goal of an action, some neurons coded a goal that implied a social interaction between two individuals. In fact, notwithstanding the very limited interactions allowed by our experimental protocol, a number of neurons (10% of the goal-modulated neurons) had a preference for the condition “placing in the hand” of the experimenter, a condition involving a social interaction. In addition, 76% of the neurons modulated by two conditions were modulated by the “placing in the hand” condition. In our view, these results suggest that the activity of grasping neurons in the premotor cortex is modulated by social context. These results extend previous findings (Mazurek, Rouse, & Schieber, 2018; Bonini et al., 2010; Fogassi et al., 2005) showing that hand grasping neurons can differently code the same grasping act according to the goal of the action in which such act is embedded.
The activity of many motor neurons in the ventral premotor cortex is modulated by contextual cues, such as the presence of a container or the type of object to be grasped, which indicate to the monkey what the required final goal of the action is in the current experimental condition. The present findings unveiled what could be a new property of motor neurons in F5, namely, a preferential modulation by a forthcoming interaction with another individual. In the present experiment, this interaction consisted of placing an object in the hand of the experimenter. The type of motor act sequence required to accomplish such a goal is not kinematically different from “placing in a container” (Container condition), because the kinematic trajectories were very similar across conditions. Nevertheless, the differential activity of some grasping neurons in F5 demonstrates that such a difference is indeed relevant for the monkey.
Social information is indeed crucial for nonhuman primates (Klein, Shepherd, & Platt, 2009; Nummenmaa & Calder, 2009; Emery, 2000), as it affects motivational, affective, and decision-making processes. In a neurophysiological study, it has been shown that macaque monkeys track the behavior of other individuals during interactive tasks and that neuronal activity in the mesial frontal cortex reflects the monitoring of both the behavior and the errors of the other individual during self and others' actions (Yoshida, Saito, Iriki, & Isoda, 2011). Despite interesting behavioral (Ballesta & Duhamel, 2015; Deaner, Khera, & Platt, 2005) and neurophysiological investigations indicating the importance of social contexts in monkey decision-making and in modulating brain activity, no studies have yet explored the impact of social cues or social interactions on the motor discharge of neurons in the monkey's premotor cortex, an important area for high-level planning of manual actions. However, socially relevant cues such as gaze direction have been shown to directly impact the discharge of grasping mirror neurons in premotor cortex (Coudé et al., 2016). In that study, the visual response of some visuomotor (mirror) neurons was stronger for a gaze direction either congruent or incongruent with the location of the grasped object. However, the monkey and the experimenter were not engaged in a direct interaction: the monkey passively observed the experimenter performing grasping actions. One study has investigated the impact of social context on parietal neurons (Fujii, Hihara, & Iriki, 2007). In that experiment, parietal activity was strongly tuned to the use of the arm contralateral to the recorded hemisphere when two monkeys were sitting side-by-side and could reach for and grasp food without interacting.
However, when the food was put in a shared space and social conflict could emerge, the neurons developed different combinations of preferences to self and other motion (Fujii et al., 2007). These results showed that, within the parietofrontal circuits involved in grasping, some neurons code space in terms of social relevance.
Interestingly, the influence of social cues on the motor system has been shown in many contexts in human studies (Chinellato, Castiello, & Sartori, 2015; Dolk et al., 2014; Sartori, Cavallo, Bucchioni, & Castiello, 2011; Sebanz et al., 2006). For instance, in a task where participants were required to grasp an object while observing an actor grasping with an interactive purpose, participants' grasping movements were delayed and reaching trajectories were deviated, indicating that socially relevant stimuli are acknowledged by the motor system (Chinellato et al., 2015). In another study, participants were required to grasp an object and to move it toward the experimenter or toward themselves (Scorolli, Miatton, Wheaton, & Borghi, 2014). Kinematic analyses confirmed that reaching movements were affected by the social goal of the action and the conventional use of the object (e.g., an object typically used in functional-cooperative relation). It is worth noting that, in our study, the monkeys had to grasp an object while the experimenter's hand remained still, in a supine position, in the reaching space of the monkey. Such an experimental design differs from the above-cited human studies where subjects had to grasp an object while observing an ongoing grasping action from another individual or had to coordinate their own action with another subject. These complex dynamics might have had an impact on kinematic parameters during the grasping phases. In contrast, in our study, we eliminated possible interference between the monkey grasping action and the observed experimenter movement. In addition, the long period of animal training probably contributed to reducing the variability of the kinematic parameters of the monkey's reaching–grasping movements. Nevertheless, the NMDS analysis revealed dissimilarity in neuronal population activity between the Container and Hand conditions despite the fact that they are kinematically similar. 
The activity pattern in the Hand condition was maximally different from the Container and Mouth conditions around the time of the initial touch of the object, when kinematic parameters tend to be more similar among conditions.
Although these data show that the Hand condition differs from the Container condition in terms of neuronal activity, we cannot be certain that the Hand condition is perceived by the monkey as a genuinely social condition rather than as grasping to place the object in a different kind of container. Even if this possibility cannot be excluded, previous studies of the functional properties of neurons in the ventral premotor cortex have shown that social signals, such as gaze direction, can modulate the activity of visuomotor neurons, suggesting that social information can modulate neuronal activity in this region of the motor cortex (Coudé et al., 2016).
Most of the human kinematic studies were motivated by Gibsonian ideas that object perception directly elicits action possibilities (Gibson, 1979). Together, these findings support the idea that the activation of affordances is modulated not only by physical object properties but also by the social context. Our findings complement such studies by providing key information in terms of neurophysiological mechanisms.
Mouth Condition Preference
The neurons preferring the Mouth condition have a stronger response in their preferred condition compared with the neurons preferring the Container or Hand conditions. Also, at a population level, the discharge in the Mouth condition is stronger among the nonmodulated neurons. This could be due to the fact that the Mouth condition is a more natural one and corresponds to a behavior that is already present in the repertoire of the monkey. Indeed, neurons preferring “Grasping to eat” were found to represent the majority of goal-modulated neurons in previous studies (Bonini et al., 2010; Fogassi et al., 2005). Long-train intracortical microstimulation of this premotor region elicits hand-to-mouth movements similar to the actions naturally performed by the monkeys (Graziano, Aflalo, & Cooke, 2005; Graziano, Taylor, Moore, & Cooke, 2002). The action goal preferences of some F5 neurons shown in this study could thus constitute a possible mechanism underlying the organization of natural actions in this region. The strength of the neuronal firing in the Mouth condition, consisting of a natural and very motorically familiar action, likely reflects this organization and the overlapping cortical representation of hand and mouth (Maranesi et al., 2012; Rizzolatti et al., 1988).
Latency of Goal Discrimination
A large proportion of the goal-modulated neurons recorded in this study are characterized by an early selectivity for the goal. Most of these neurons (78%) display differential activity before the hand makes contact with the object. These neurons therefore discriminate the forthcoming goal while the hand approaches the object and the finger aperture is "preshaped" to match the physical characteristics of the object (Pellegrino, Klatzky, & McCloskey, 1989). Neurophysiological studies of the grip selectivity of F5 motor and visuomotor neurons have shown that grip-type selectivity appears at a very early stage of grasping and progressively increases as the grasping action unfolds (Umilta, Brochier, Spinks, & Lemon, 2007; Raos, 2005; Murata et al., 1997). The present data suggest that, beyond coding grip type and being influenced by the goal of the action, some F5 neurons are modulated by the social context in which the grasping action is performed. Overall, it is not surprising to find early goal discrimination in the interactive condition, because reactive, feedback-based adjustments of one's own movement likely cannot support the fine-tuned temporal contingency required by online interpersonal coordination. In real-life interactions, cooperative and competitive agents cannot simply react to a partner's behavior but must anticipate it for a successful interaction (Knoblich & Jordan, 2003).
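One way a per-neuron discrimination latency of the kind discussed above could be estimated is to slide a per-bin test across time, aligned to object touch, and take the first run of consecutively significant bins. The bin width, the per-bin one-way test, and the three-consecutive-bins criterion below are illustrative assumptions, not the authors' stated method, and the firing rates are synthetic.

```python
# Illustrative sliding-bin estimate of goal-discrimination latency.
# Criterion (3 consecutive significant bins) is an assumption.
import numpy as np
from scipy.stats import f_oneway

def discrimination_latency(rates, times, alpha=0.05, n_consec=3):
    """rates: dict condition -> (n_trials, n_bins) firing-rate matrix;
    times: (n_bins,) bin centers in msec relative to object touch."""
    conds = list(rates.values())
    pvals = np.array([f_oneway(*[c[:, b] for c in conds]).pvalue
                      for b in range(len(times))])
    sig = pvals < alpha
    for b in range(len(times) - n_consec + 1):
        if sig[b:b + n_consec].all():
            return times[b]          # first sustained divergence
    return None

rng = np.random.default_rng(3)
times = np.arange(-300, 301, 20)                 # msec around object touch
base = rng.poisson(10, (30, len(times))).astype(float)
pref = base.copy()
pref[:, times >= -140] += 8                      # preferred goal diverges early
lat = discrimination_latency(
    {"Container": base,
     "Hand": pref,
     "Mouth": rng.poisson(10, (30, len(times))).astype(float)},
    times)
print(lat)
```

With these synthetic rates, the detected latency falls before object touch, mirroring the early selectivity described in the text.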
Possible Effects of Reward Delay
Although the total reward quantity (liquid and solid) was the same in all conditions, the temporal proximity of the solid reward to the grasping action in the Mouth condition could have had an impact on the neuronal discharge. In the Mouth condition, the solid reward was temporally close to the grasping action (the monkey brought the food to the mouth and ate it right after having grasped it), and the monkey received a juice reward afterward. In the Container and Hand conditions, the reward sequence was inverted. The time lapse between the two rewards was short (about 1 sec), however, so spurious factors such as motivation and reward expectancy can reasonably be ruled out.
Limits of the Present Experiments and Future Directions
The present experiment was a first step in investigating the influence of social interactions on neuronal responses in the ventral premotor cortex. To avoid any visual stimulation (experimenter hand movement), the interaction between the monkey and the experimenter was kept minimal. The experimenter remained passive with the hand open, palm up, in the same location occupied by the container in the Container condition. The monkey was required to place the object in the experimenter's hand, and therefore its action was stereotypical and did not require coordination between the monkey and the experimenter. We should also take into account that we tested a limited number of conditions, which might have contributed to the reduced number of neurons selective for the social condition. For example, we did not vary the experimenter's hand configuration (e.g., an open hand but with the finger configuration required to grasp a small object from the monkey's hand) or its spatial location. It is therefore possible that other neurons could show selectivity for the social condition for various combinations of hand configuration and location. Lastly, as already mentioned above, area F5 contains a considerable percentage (47%) of visuomotor neurons that are modulated by social cues (i.e., gaze; Coudé et al., 2016). The present finding further supports the idea that social cue modulation is present in area F5. The use of more elaborate interactions in future experiments could potentially reveal even stronger preferences with richer specificity patterns, which could help disentangle the role of the social cues themselves from the monkey's decision to interact.
This research was supported by an NICHD grant (NIH P01HD064653).
Reprint requests should be sent to Gino Coudé, Institut des sciences cognitives Marc Jeannerod, 67 Bd Pinel, Bron, 69675, France, or via e-mail: firstname.lastname@example.org.