Area V6A is a visuomotor area of the dorsomedial visual stream that contains cells modulated by object observation and by grip formation. As different objects have different shapes but also evoke different grips, the response selectivity during object presentation could reflect either the coding of object geometry or the coding of object affordances. To clarify this point, we investigated the neural responses of V6A cells while monkeys observed two objects with similar visual features but different contextual information, such as the evoked grip type. We demonstrate that many V6A cells respond to the visual presentation of objects and that about 30% of them are modulated by object affordance. Given that area V6A is an early stage in the visuomotor processes underlying grasping, these data suggest that V6A may participate in the computation of object affordances. These results add to the recent literature on the role of dorsal visual stream areas in object representation and contribute to elucidating the neural correlates of the extraction of action-relevant information from general object properties, in agreement with recent neuroimaging studies in humans showing that vision of graspable objects activates action coding in the dorsomedial visual stream.

When we see an object, our visual system extracts the critical visual features that evoke the potential actions we can perform on it (object affordances; Gibson, 1979). A large body of data suggests that object affordances for grasping are encoded in parietal and frontal areas involved in the visual control of grasp (Maranesi, Bonini, & Fogassi, 2014; Jeannerod, Arbib, Rizzolatti, & Sakata, 1995). Among the parietal areas, the one most typically involved in grasp control is the anterior intraparietal area (AIP), which contains visual as well as grasping neurons and is involved in the visual control of grasp (Baumann, Fluet, & Scherberger, 2009; Murata, Gallese, Luppino, Kaseda, & Sakata, 2000). Recently, it has been suggested that the posterior parietal area V6A, previously known to contain visual as well as reaching neurons (Galletti, Kutz, Gamberini, Breveglieri, & Fattori, 2003), may also be implicated in the control of grasp (Fattori et al., 2010). Because many V6A neurons displayed visual selectivity during passive fixation of objects of different shapes (Fattori, Breveglieri, Raos, Bosco, & Galletti, 2012), it was suggested that this visual selectivity subserves the computation of object affordances, but at present there is only indirect evidence supporting this view (see Fattori et al., 2012). In the present work, we aimed to determine whether the visual selectivity observed in V6A reflects a purely visual analysis of objects or the encoding of object affordance.

To achieve this goal, monkeys performed fixation tasks while observing two objects of similar size and geometrical shape: a handle and a plate presented at the same orientation. From the monkey's point of view, these objects appeared very similar but, when grasping was required, evoked different types of grip: finger prehension for the handle and primitive precision grip for the plate. Thus, in this task, we kept the object visual features nearly constant while changing the object affordances. Interestingly, several visual cells tested in the task responded very differently to the presentation of the two objects. We suggest that these cells encode the object affordances instead of, or in addition to, the object visual features.

Experimental Procedures

Experiments were carried out in accordance with national laws on the care and use of laboratory animals and the European Communities Council Directive of 22 September 2010 (2010/63/EU) and were approved by the Bioethics Committee of the University of Bologna. Neither behavioral nor clinical signs of pain or distress were observed during training or recording sessions.

Four male Macaca fascicularis monkeys weighing 2.4–3.9 kg were involved in the study. All the surgical procedures and the recording techniques followed the methodologies reported in other papers (Breveglieri, Galletti, Dal Bò, Hadjidimitrakis, & Fattori, 2014; Galletti, Battaglini, & Fattori, 1995). Briefly, a head-restraint system and a recording chamber were surgically implanted under aseptic conditions and general anesthesia (sodium thiopental, 8 mg/kg/h, iv). Adequate measures were taken to minimize pain or discomfort. A full program of postoperative analgesia (ketorolac tromethamine, 1 mg/kg im immediately after surgery, and 1.6 mg/kg im on the following days) and antibiotic care (Ritardomicina [benzatinic benzylpenicillin plus dihydrostreptomycin plus streptomycin], 1–1.5 ml/10 kg every 5–6 days) followed the surgery.

Single-cell activity was recorded extracellularly from the posterior parietal area V6A (Galletti, Fattori, Kutz, & Gamberini, 1999). We performed single-microelectrode penetrations using homemade glass-coated metal microelectrodes (one animal) and multiple-electrode penetrations using a five-channel multielectrode recording minimatrix (Thomas Recording GmbH, Giessen, Germany; the remaining three animals). The electrode signals were amplified (gain 10,000) and filtered (bandpass, 0.5–5 kHz). Action potentials in each channel were isolated with a dual time-amplitude window discriminator (DDIS-1, Bak Electronics, Mount Airy, MD) or with a waveform discriminator (Multi Spike Detector, Alpha Omega Engineering, Nazareth, Israel). Action potentials were sampled at 100 kHz.

The histological reconstruction of recording sites was performed as described in other papers (Gamberini, Galletti, Bosco, Breveglieri, & Fattori, 2011; Galletti et al., 1999).

Behavioral Task

All monkeys were trained to perform a visual observation task. The monkey sat in a primate chair (Crist Instruments, Hagerstown, MD) with the head fixed in front of a personal computer–controlled carousel containing two different objects. The objects were presented to the animal one at a time, in a random order, always in the same spatial position (22.5 cm away from the animal, in the midsagittal plane). Only the selected object was visible in each trial.

The task began when the monkey pressed a "home" button near its chest, outside its field of view, in complete darkness (Figure 1A; Home Button push). The animal was allowed to use only the arm contralateral to the recording side. It was required to keep the button pressed for 1 sec, during which it was free to look around, though remaining in darkness. After this interval, an LED mounted just above the object was switched on (fixation LED green), and the monkey had to fixate it. During fixation, eye position was controlled by an electronic window (4° × 4°) centered on the fixation LED. Breaks in fixation and/or premature button release interrupted the trial.

Figure 1. 

Behavioral task and objects used. (A) Top: Scheme of the task, as seen laterally with respect to the monkey. Bottom: Time sequence of the task. The time courses of the home button status, of the color of the fixation point (fixation LED), and of the light illuminating the object (LIGHT) are shown. Below the scheme, typical examples of eye traces during a single trial are shown. Dashed lines indicate task markers and behavioral markers: from left, trial start (Home Button push), fixation target appearance (fixation LED green), eye traces entering the fixation window, object illumination onset (light on), go signal for home button release (fixation LED red), home button release (Home Button release) coincident with object illumination off (light off), fixation target switching off (fixation LED off), and end of data acquisition. (B, C) Photos of the objects used in the task as seen from the monkeys' point of view (B) and from a lateral view (C).


After button pressing, during LED fixation, two white lights at the sides of the presented object were switched on, thus illuminating it (light on). The monkey was required to maintain fixation without releasing the home button. Eye position was monitored with an infrared oculometer system (Voss Eyetracker, Karlsruhe, Germany) sampled at 500 Hz in two animals, and with a camera-based infrared oculometer system (ISCAN) sampled at 100 Hz in the other two. After 1 sec, a color change of the fixation LED (from green to red; fixation LED red) instructed the monkey to release the home button (Home Button release). At the same time, the lights illuminating the object were turned off (light off), and the monkey could break fixation and receive its reward. Reach and grasp actions were not allowed: a transparent screen mounted on the chair blocked hand access to the object. The two objects were presented in random order in blocks of 20 correct trials, 10 trials for each object. Two of the four animals (Cases A and B) had previously been trained to perform the reach-to-grasp task described in Fattori et al. (2010).

Tested Objects

The objects were chosen so as to have very similar visual features (i.e., almost the same retinal stimulation) but different affordances (i.e., evoking different grip types if the animal were required to grasp them).

The tested objects were as follows (Figure 1B, C): a handle (made of aluminium; thickness = 5 mm, width = 34 mm, depth = 13 mm; gap dimensions = 5 × 28 × 11 mm), evoking the finger prehension grip, that is, all fingers except the thumb inserted into the gap and wrapped around the object; and a plate (made of aluminium; thickness = 4 mm, width = 30 mm, depth = 14 mm), evoking the primitive precision grip, which uses the thumb and the distal phalanges of the other fingers. Both objects were mounted on the carousel in either a horizontal or a vertical orientation. Each cell was tested with both objects mounted in the same orientation, generally the preferred one for that cell. If the cell remained well isolated for enough time, we also tested it with the orientation orthogonal to the previous one. From the monkey's point of view, the objects appeared very similar but distinguishable (see Figure 1B): there was a slight difference in size between the two objects (the handle was a bit thicker and longer than the plate), and the edges were smoothed in the handle and sharp in the plate. Further support for the view that the animals easily distinguished the two objects comes from the fact that the monkeys trained in the reach-to-grasp task (Cases A and B) promptly grasped both objects in the appropriate way when requested.

Data Analysis

All the analyses were carried out using custom Matlab scripts (The Mathworks, Natick, MA). The neural activity was analyzed by quantifying the discharge in each trial in two time epochs: FIX, from 400 to 100 msec before the onset of object illumination, during which the monkey fixated the fixation point in darkness, with no possibility of seeing the object; and VIS, the response to object presentation, from 40 to 300 msec after object illumination onset, during which the monkey fixated in the same position as during FIX while observing the illuminated object. Visual responses were assessed by comparing (Student's t test, p < .05) neural activity during VIS and FIX. We analyzed only those units for which we collected at least seven trials per object. This conservative criterion was adopted because of the intrinsically high variability of biological responses (Kutz, Fattori, Gamberini, Breveglieri, & Galletti, 2003).
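
For illustration, the following is a minimal sketch of this classification step, written in Python rather than the custom Matlab scripts actually used. Spike-time arrays and variable names are hypothetical, and the text does not specify whether the t test was paired or unpaired; an unpaired test is shown.

```python
# Minimal sketch, not the authors' code: classify a unit as "visual" by
# comparing per-trial firing rates in the FIX and VIS epochs.
# Spike times are assumed to be in msec, aligned to object illumination (t = 0).
import numpy as np
from scipy import stats

FIX = (-400, -100)  # fixation in darkness, before object illumination
VIS = (40, 300)     # response to object presentation

def epoch_rate(spike_times, epoch):
    """Firing rate (spikes/sec) within one epoch of a single trial."""
    start, stop = epoch
    spikes = np.asarray(spike_times)
    n = np.sum((spikes >= start) & (spikes < stop))
    return 1000.0 * n / (stop - start)

def is_visual(trials, alpha=0.05):
    """trials: list of per-trial spike-time arrays (at least seven per object)."""
    fix = [epoch_rate(t, FIX) for t in trials]
    vis = [epoch_rate(t, VIS) for t in trials]
    _, p = stats.ttest_ind(vis, fix)  # unpaired Student's t test (assumption)
    return p < alpha
```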

Population responses were calculated as averaged spike density functions (SDFs; Gaussian kernel, half-width 40 msec), as detailed in Fattori et al. (2012). To statistically compare different SDFs, we performed a permutation test using the sum of squared errors as the test statistic (Fattori et al., 2010). Comparisons of responses to object presentation were made in the VIS epoch.
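
A sketch of these two computations follows, assuming 1-msec binning and interpreting "half-width" as the kernel's half-width at half-maximum; the implementations in Fattori et al. (2010, 2012) may differ in detail.

```python
# Sketch of the SDF and of a permutation test using the sum of squared
# errors between two mean SDFs as test statistic (assumptions noted above).
import numpy as np

def sdf(binary_train, half_width=40.0):
    """Spike density function: convolve a 1-msec binary spike train with a
    Gaussian kernel of the given half-width (msec), scaled to spikes/sec."""
    sigma = half_width / np.sqrt(2.0 * np.log(2.0))  # HWHM -> standard deviation
    t = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel *= 1000.0 / kernel.sum()                  # normalize to spikes/sec
    return np.convolve(binary_train, kernel, mode="same")

def sse_permutation_test(sdfs_a, sdfs_b, n_perm=1000, seed=0):
    """sdfs_a, sdfs_b: trials x time arrays of single-trial SDFs."""
    rng = np.random.default_rng(seed)
    observed = np.sum((sdfs_a.mean(0) - sdfs_b.mean(0)) ** 2)
    pooled, n_a = np.vstack([sdfs_a, sdfs_b]), len(sdfs_a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))           # shuffle trial labels
        diff = pooled[idx[:n_a]].mean(0) - pooled[idx[n_a:]].mean(0)
        count += np.sum(diff ** 2) >= observed
    return (count + 1) / (n_perm + 1)                # permutation p value
```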

A selectivity index (SI) of the VIS responses was calculated as defined by Luck, Chelazzi, Hillyard, and Desimone (1997) by using the strongest (preferred) and the weakest (unpreferred) discharges: SI = (preferred − unpreferred)/(preferred + unpreferred). The SI ranged from 0 to 1. An SI value of 1 indicates maximal modulation of responses between the preferred and nonpreferred conditions, whereas a value of 0 indicates no modulation.
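
In code form, with an illustrative worked example:

```python
# Sketch of the selectivity index of Luck et al. (1997).
def selectivity_index(preferred, unpreferred):
    """(preferred - unpreferred) / (preferred + unpreferred); lies in
    [0, 1] when preferred >= unpreferred >= 0."""
    return (preferred - unpreferred) / (preferred + unpreferred)

# e.g., mean VIS discharges of 30 and 10 spikes/sec give SI = 0.5
assert selectivity_index(30.0, 10.0) == 0.5
```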

Neuronal latencies of visual responses were calculated trial by trial as the time between the onset of object illumination and the beginning of the neural discharge, using the knowledge-based spike train analysis method, as detailed in Xu, Ivanusic, Bourke, Butler, and Horne (1999).

The onset of object selectivity was calculated for each visually selective cell by finding the latency at which the two SDFs (preferred and unpreferred discharges) started to differ significantly. We calculated each SDF with a half-width of 10 msec and performed a pairwise permutation test using a 100-msec time window that slid along the curves in 20-msec increments, starting from the time of object illumination (pairwise sliding permutation test). The latency of SDF separation was defined as the midpoint of the first of three consecutive bins in which the permutation test was significant (p < .05).
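
A sketch of this onset estimate, reusing sse_permutation_test from the sketch above (window parameters as in the text; the numpy array layout, trials x 1-msec bins with t = 0 at object illumination, is an assumption):

```python
# Sketch of the selectivity-onset estimate: a 100-msec window slides in
# 20-msec steps along the preferred/unpreferred single-trial SDFs; the
# onset is the midpoint of the first of three consecutive significant windows.
def selectivity_onset(pref_sdfs, unpref_sdfs, win=100, step=20,
                      n_consec=3, alpha=0.05):
    hits = []  # (window midpoint in msec, significant?)
    for start in range(0, pref_sdfs.shape[1] - win + 1, step):
        p = sse_permutation_test(pref_sdfs[:, start:start + win],
                                 unpref_sdfs[:, start:start + win])
        hits.append((start + win / 2.0, p < alpha))
        if len(hits) >= n_consec and all(sig for _, sig in hits[-n_consec:]):
            return hits[-n_consec][0]  # first of the consecutive significant windows
    return None  # no sustained separation found
```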

Control Experiment

To provide additional evidence for the presence of affordance cells in V6A, we performed a control experiment in one of the cases studied in this work (Case D). Specifically, we tested on the same cell two handles and two plates of different thicknesses. All objects were made of aluminium and had the same width as those used in the main experiment, but differed in thickness: one handle and one plate were the same as in the main experiment, whereas the second handle and plate were thicker (thickness = 15 mm, about three times the thickness of the objects used in the main experiment). In this way, we kept the affordance constant (handles or plates) while changing the visual features (thin or thick objects) within object types, and changed the affordance while keeping the visual features constant between objects of the same thickness (handles vs. plates).

Results

To study the responses of V6A neurons to the presentation of 3-D graspable objects similar in shape and size but evoking different grasp types, we used a task that required the monkey to maintain fixation on an LED while a handle or a plate, at the same orientation, was illuminated. In each trial, the handle or the plate was presented at random. From the monkey's point of view, the two objects appeared very similar (Figure 1B).

A total of 180 V6A neurons were tested in this task. As shown in Table 1, in 58% of tested cells the discharge during VIS was significantly different from that during FIX (visual cells; Student's t test, p < .05), in agreement with what we found in a previous study in which nongraspable visual stimuli were used (Galletti et al., 1999). Visual latencies could be calculated for 70 of the 104 visual cells (mean = 96 ± 37 msec; see Figure 2, cumulative sum plot). This mean latency is longer than that found with 2-D stimuli projected on a screen (79.8 ± 29 msec; Kutz et al., 2003), suggesting that more complex processing may occur in these cells during the observation of graspable objects.

Table 1. 

Number and Incidence of Visual and Affordance Cells in Each Studied Case

Case              Visual Cells    Affordance Cells
A^a (n = 31)      23 (74%)        9 (39%)
B^a (n = 57)      30 (53%)        10 (33%)
C (n = 62)        30 (48%)        4 (13%)
D (n = 30)        21 (70%)        10 (48%)
Total (N = 180)   104 (58%)       33 (32%)

^a Animals also trained in the reach-to-grasp task. Percentages of visual cells are relative to the cells tested in each case; percentages of affordance cells are relative to the visual cells.

Figure 2. 

Latencies of area V6A visual activity in the visual observation task. The plot is the cumulative frequency distribution of the latencies of the neural responses to observation of the handle and the plate. The horizontal axis shows time in milliseconds (zero is the onset of the object illumination), and the vertical axis shows the percentage of V6A visual cells where the latency of the visual responses has been calculated (n = 70).


Interestingly, although the majority of visual cells responded in the same way to plate and handle presentation, as expected given the similarity of the two objects from the animal's point of view (Figure 1B), 33 of the 104 visual cells (32%) displayed different responses to plate and handle presentation (ANOVA, p < .05). As the two objects, despite their visual similarity, required different grasp types, we believe that the responses of these cells signaled the affordance of the objects rather than their visual features. In other words, the cell discharge after object presentation could be an expression of the motor plan evoked by the object. However, because the incidence of this type of cell did not differ between the animals trained (Cases A and B; see Table 1) and those untrained (Cases C and D; see Table 1) in the reach-to-grasp task (chi-square test, p > .05), we favor the view that these cells encoded the potential actions the animal could perform on the objects, and from here on we will call them "affordance cells."
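
For illustration, this comparison can be sketched from the pooled counts of Table 1, under the assumption that the test was run on trained vs. untrained group totals:

```python
# Sketch of the trained-vs.-untrained comparison on the Table 1 counts
# (affordance cells counted out of visual cells in each group).
from scipy.stats import chi2_contingency

affordance = [9 + 10, 4 + 10]   # Cases A+B (trained), Cases C+D (untrained)
visual = [23 + 30, 30 + 21]     # visual cells in each group
table = [[a, v - a] for a, v in zip(affordance, visual)]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p > .05: incidences do not differ
```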

Figure 3 shows three examples of affordance cells. The cell in Figure 3A, tested with horizontal objects, discharged more strongly to handle than to plate presentation (ANOVA, p < .05). The cell in Figure 3B, tested with vertical objects, discharged more strongly to plate than to handle presentation (ANOVA, p < .05). It is worth noting that the responses to object presentation were very different although the visual features (geometry, shape) of handle and plate were almost identical as seen from the monkey's point of view (Figure 1B). The cell in Figure 3C was tested with both horizontal and vertical objects. The two horizontal objects evoked visual responses with small, nonsignificant differences, whereas the two vertical objects evoked responses with large, statistically significant differences: the cell discharged strongly at the presentation of the vertical plate (Figure 3C, bottom right) and significantly more weakly at the presentation of the vertical handle (Figure 3C, bottom left; ANOVA, p < .05). In other words, the cell was weakly sensitive to the visual features per se (weak differences between responses to horizontal and vertical objects; compare top with bottom panels in Figure 3C, left and right) but responded in a completely different way to two very similar objects (vertical handle and plate; Figure 3C, bottom left and right, respectively) that required different grips when grasped. We suggest that the parameter that mainly influenced this cell, as well as the other two cells shown in Figure 3, was the object affordance. According to this view, all these cells would signal the type of grip needed to grasp the object instead of, or in addition to, its visual features.

Figure 3. 

Examples of affordance cells. Activity is shown as superimposed peristimulus time histograms (PSTHs) and raster displays of impulse activity. Below the display is a record of horizontal (upper trace) and vertical (lower trace) components of eye movements. Neural activity and eye traces are aligned (long vertical line) on the onset of the object illumination (black line below PSTH: time of object illumination). Vertical scale bars on histograms: (A) 95 spikes/sec; (B) 80 spikes/sec; (C) 43 spikes/sec. Eye traces: 60°/division. (A) Responses to horizontally oriented objects. (B) Responses to vertically oriented objects. (C) Responses to both objects' orientations. Note that the sketched objects do not represent the monkey's point of view.


Figure 4A shows that the temporal profile and the amount of discharge of the 33 affordance cells are similar for the plate and the handle, indicating a lack of preference for either object at the population level (permutation test, ns). Conversely, when we ranked neural activity according to each cell's preference (Figure 4B), the population showed a high capacity to discriminate between the handle and the plate (permutation test, p < .05). Figure 4C shows that the onset of object selectivity of the affordance cells (see Methods) ranged from 50 to 250 msec after object illumination, and Figure 4D shows that the SI of visual responses (see Methods) spans a fairly wide range. Overall, these data show that the affordance cells were differentially activated by the plate and the handle.

Figure 4. 

Population activity of affordance cells. (A, B) Activity of affordance cells expressed as averaged normalized SDFs (thick lines) with variability bands (SEM, thin lines), constructed by ranking the response of each neuron on each individual object (A) and according to the intensity of the response elicited in the VIS epoch (B) (n = 33). Each cell of the population has been taken into account twice, ranked on each object (A) and ranked according to the individual neuron's preference (B). Neuronal activities are aligned with the onset of the object illumination (light on, vertical line). Scale on abscissa: 200 msec/division; vertical scale: 70% of normalized activity. Although the neural population did not show a preference for either object (A), it highly discriminated between the two objects, as shown by the wide separation between the two curves (B). (C) Cumulative frequency distribution of the latencies of the selectivity of affordance cells. Zero indicates the onset of the object illumination. (D) Cumulative frequency distribution of the SI of affordance cells. Other conventions as in Figure 2.


Effect of Object Orientation

Whenever possible, we tested the same cell with objects presented in two different orientations, as in the example of Figure 3C. This was done for 35 of 180 cells. We found that 26 of the 35 cells (74%) were visual (Student's t test, p < .05); among them, 10 (38%) were selective only for orientation, 5 (19%) only for object type, and 8 (30%) for both (two-way ANOVA, Orientation × Object, p < .05). The population SDF of object-selective cells is shown in Figure 5 for the preferred (left) and unpreferred (right) orientations. The selectivity was much stronger and significant (permutation test, p < .05) for the preferred orientation. The onset of selectivity (significant difference between the curves for best and worst objects) was influenced by orientation, occurring significantly earlier for the preferred orientation (Kolmogorov–Smirnov test, p = .03). These data highlight the coexistence of orientation sensitivity and affordance sensitivity in the same cells.
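
A sketch of this two-way classification follows; the data layout and column names are illustrative, and the authors' Matlab implementation is not specified.

```python
# Sketch of the Orientation x Object classification by two-way ANOVA on
# per-trial VIS firing rates (illustrative column names).
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def classify_cell(df, alpha=0.05):
    """df: pandas DataFrame, one row per trial, with columns
    'rate', 'orientation', and 'object_type'."""
    model = smf.ols("rate ~ C(orientation) * C(object_type)", data=df).fit()
    pvals = anova_lm(model)["PR(>F)"]
    return {"orientation_selective": pvals["C(orientation)"] < alpha,
            "object_selective": pvals["C(object_type)"] < alpha}
```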

Figure 5. 

Orientation and affordance sensitivity. Activity of the 13 affordance cells tested with both orientations of the objects, expressed as SDFs constructed by ranking according to the intensity of the response elicited in the VIS epoch in the preferred orientation (A) and in the unpreferred orientation (B). The two SDFs start to separate at 120 msec for the preferred orientation and at 240 msec for the unpreferred one (permutation test, p < .05). Other conventions as in Figure 4. The coexistence of orientation sensitivity and affordance sensitivity in the same cell population is evident.


Control Experiment

To support the view that object affordance was a parameter influencing the cell discharge in at least part of V6A neurons, we carried out an additional experiment. In one monkey (Case D), we recorded the activity of a further 43 V6A cells. We tested each of these cells with two handles of different thicknesses and two plates of different thicknesses, presented to the animal in random order. The two thicknesses were the same for handles and plates. In other words, we kept the affordance constant while changing the visual features within object types (handles or plates), and changed the affordance while keeping the visual features constant between object types (handles vs. plates of the same thickness). We found that 31 cells were sensitive to visual stimulation. Nine of the 31 visual cells (29%) were influenced by the visual features of the object, that is, by its thickness. In 5 of the 31 visual cells, the discharge during object observation did not differ significantly within handles or plates of different thicknesses (ANOVA, p > .05), but did differ between handles and plates of the same thickness (ANOVA, p < .05). Thus, the discharge of these cells was not modulated by the visual features of the objects, but likely by their affordances. The percentage of affordance cells in this control experiment was smaller than, but not significantly different from, that found in the main experiment (chi-square test, p > .05).
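
This classification criterion can be sketched as follows; per-condition arrays of per-trial VIS rates are assumed, and one-way ANOVAs stand in for the unspecified ANOVA design.

```python
# Sketch of the control-experiment criterion for an affordance cell:
# no difference within object types across thicknesses, but a difference
# between object types at each thickness.
from scipy.stats import f_oneway

def is_affordance_cell(thin_handle, thick_handle, thin_plate, thick_plate,
                       alpha=0.05):
    same_within = (f_oneway(thin_handle, thick_handle).pvalue > alpha and
                   f_oneway(thin_plate, thick_plate).pvalue > alpha)
    diff_between = (f_oneway(thin_handle, thin_plate).pvalue < alpha and
                    f_oneway(thick_handle, thick_plate).pvalue < alpha)
    return same_within and diff_between
```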

An example of one of these cells is shown in Figure 6. The cell was strongly and similarly modulated by the observation of the two handles, although they had very different thicknesses (Figure 6, two top panels; ANOVA, p > .05). The cell was weakly modulated, if at all, and again similarly, by the observation of the two plates (two bottom panels; ANOVA, p > .05). In sharp contrast, the responses to handle presentation were significantly different from the responses to plate presentation (Figure 6, top vs. bottom panels; ANOVA, p < .05). This means that the parameter influencing the discharge of this cell was not the visual features of the objects, which were very different as seen from the animal's point of view, but likely the object affordance.

Figure 6. 

Example neuron of the control experiment: same affordance, different visual features. Activity is shown as superimposed peristimulus time histograms (PSTHs). Neural activity is aligned (long vertical line) on the onset of the object illumination (black line below PSTH: time of object illumination). Vertical scale bars on histograms: 45 spikes/sec. Top: responses to handles; Bottom: responses to plates. Other conventions as in Figure 3. Very different visual features do not evoke different neural responses, but different affordances do.


Discussion

In this study, we investigated the responses of neurons belonging to an area of the dorsomedial visual stream while monkeys observed two objects with similar visual features but different contextual information, such as the evoked grip type (affordance). In a recent paper (Fattori et al., 2012), a cluster analysis suggested that V6A visual responses subserve action monitoring, in that they were related not to object geometry (i.e., physical attributes) but to the grip type required to grasp the objects. A limitation of that study is that we manipulated object shape, making it difficult to judge whether the observed responses reflected the coding of object geometry or whether the processing of object features was indeed related to grasping. In the current study, we kept the visual features nearly constant while changing the object affordance; on this basis, we suggest that V6A is involved in the extraction of object affordances, that is, in encoding "vision for action."

It is worth noting that in our experimental conditions the attentional load was the same for the two objects; thus, although it has been shown that attention modulates the activity of V6A (Ciavarro et al., 2013; Galletti et al., 2010), there is no reason to believe that attention is responsible for the object selectivity shown here by the affordance cells. We also know that spatial parameters are critically encoded in dorsal stream areas, including V6A itself (Galletti & Fattori, 2003; Galletti et al., 2003), but in this task spatial position cannot account for the different responses we observed, as the two objects were always presented in the same spatial position (within arm's reach, straight ahead) and in the same retinal position (central part of the inferior hemifield, where the majority of V6A visual receptive fields are located; Gamberini et al., 2011; Galletti et al., 1999).

We suggest that the cells that responded differently to the visual presentation of plates and handles are putatively computing object affordances. We are aware that we cannot completely rule out the more parsimonious explanation that the different responses to plates and handles reflect differences in the visual attributes of the two objects, but in our experimental design the two objects, when seen from the animal's point of view, were very similar (see Figure 1B) and were recognizable only by marginal differences. Thus, it is hard to believe that these minimal differences in visual features were responsible for the strong differences in cell activity that we found. The presence of affordance cells in V6A is even more strongly supported by the results of the control experiment, in which, in single cells, we analyzed the responses when the affordance was changed while the object visual features were kept constant, and when the affordance was kept constant while the visual features of the objects were widely changed.

The present data show that affordance cells are modulated by the orientation of the object: object orientation influences both the amount of object selectivity and the selectivity onset. Although it is widely agreed that a successful object recognition system requires generalization across changing viewing conditions (Riesenhuber & Poggio, 2002), it can be predicted that a system that uses object information for the control of skilled actions would instead preserve the specificity of object appearance with regard to shape, size, and orientation (Craighero, Fadiga, Umiltà, & Rizzolatti, 1996). In fact, the same object presented at different orientations requires different hand postures and different wrist orientations during grasping (James, Humphrey, Gati, Menon, & Goodale, 2002). Our finding of object-related activity that depends on object orientation therefore supports the view that object information in the dorsal pathway is used for sensorimotor transformations in visually guided behavior. This is in line with the "motor-related" discharges found in V6A for orienting the wrist appropriately to grasp differently oriented objects (Fattori et al., 2009) and for shaping the hand to grasp objects of different shapes (Fattori et al., 2010).

The present data, as well as those of a previous study (Fattori et al., 2012), show that object selectivity is present in V6A even in the absence of grasping movements. Interestingly, the present data show that the incidence of affordance cells does not depend on task demands, being similar in animals trained and untrained to perform reach-to-grasp actions. This suggests that object affordance is an inherent attribute coded by V6A neurons.

It is widely known that real 3-D objects inherently provide affordances, properties relevant for their use that can potentiate associated motor actions (Gibson, 1979). Accordingly, recent fMRI studies demonstrate that response patterns to 3-D objects and 2-D planar images differ (Snow et al., 2011) and that information about 3-D form is critical for the visual control of grasping and manipulation (e.g., Culham et al., 2003). For this reason, in this study we used real objects located in the peripersonal space, instead of the object images typically employed in human and monkey experiments on the coding of object shape (Subramanian & Colby, 2014; Rosenberg, Cowan, & Angelaki, 2013; Theys, Pani, van Loon, Goffin, & Janssen, 2013; Konen & Kastner, 2008).

Object Affordance in Monkey and Human V6A

Several studies have shown that graspable objects, even when irrelevant for the current behavioral goals, automatically evoke motor programs, and that their vision leads to activations in the motor cortices (Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003; Chao & Martin, 2000; Grafton, Fadiga, Arbib, & Rizzolatti, 1997). This happens because graspable objects facilitate visuomotor transformations, for example by grabbing visual spatial attention (Handy et al., 2003). As area V6A is involved in the visuomotor control of prehension (Gamberini et al., 2011; Galletti et al., 2003) and in the shift of the spotlight of attention in both monkeys and humans (Ciavarro et al., 2013; Galletti et al., 2010), we suggest that object selectivity in V6A may serve the rapid transformation of visual representations into object-specific motor programs useful in visually guided grasping. It has recently been demonstrated that the putative human homolog of monkey area V6A (Pitzalis et al., 2013) is sensitive to grasp-relevant object features regardless of size (Monaco et al., 2014). Like macaque V6A (Gamberini et al., 2011), human V6A is involved in processing visual information in the lower visual field, a specialization relevant in the context of object-oriented actions (Rossit, McAdam, McLean, Goodale, & Culham, 2013), and in elaborating information on visual objects ("visual context sensitivity"; Smith & Goodale, 2014) to be transmitted to somatosensory and motor cortex for representing motor plans. From these very recent studies and from the present work, a framework emerges in which, in both monkey and human, area V6A is involved in processing grasp-relevant object features.

According to present and previous data, V6A neurons have access to feature information and use it in a behaviorally relevant manner. Visual inputs may be provided to V6A by other areas that process object shape, such as area AIP (Romero, Pani, & Janssen, 2014; Srivastava, Orban, De Mazière, & Janssen, 2009; Murata et al., 2000), which is directly connected with V6A (Gamberini et al., 2009; Borra et al., 2008) and is implicated in object grasping (Baumann et al., 2009; Murata et al., 2000). V6A may modulate this information according to the needs of the task. Contextual information on graspable objects may be used by dorsomedial area V6A to guide interaction with objects, in cooperation with dorsolateral area AIP, for selecting or generating appropriate grasp movements, in line with the suggested role of V6A in the monitoring of grasping actions (Fattori et al., 2010). This suggestion also agrees with the recent view that both dorsomedial and dorsolateral circuits specify the same grasping parameters, with dorsomedial computations depending on dorsolateral contributions (Verhagen, Dijkerman, Medendorp, & Toni, 2013).

Previous data from our laboratory indicate that some object visual attributes, such as orientation and shape, strongly drive the discharges of V6A neurons (Fattori et al., 2012). The present data show that, in addition to these physical/visual aspects per se, the object affordance also modulates V6A cell activity. We suggest that V6A visual selectivity may serve the rapid transformation of visual representations into object-specific motor programs useful in visually guided grasping. Although object recognition is a typical function of the ventral stream (Goodale & Milner, 1992; Ungerleider & Mishkin, 1982), recent remarkable findings in monkeys and humans demonstrate that areas of the dorsal stream are also able to process shape (Romero et al., 2014; Theys et al., 2013; Janssen, Srivastava, Ombelet, & Orban, 2008; Króliczak, McAdam, Quinlan, & Culham, 2008; Gardner, Babu, Ghosh, Sherwood, & Chen, 2007; Lehky & Sereno, 2007; Sereno & Maunsell, 1998). The present data, together with recently published data from our laboratory (Fattori et al., 2012), add V6A to the list of dorsal areas that can process object shape.

This work was supported by EU FP7-IST-217077-EYESHOTS, Firb 2013 N. RBFR132BKP, Ministero dell'Università e della Ricerca (Italy), and Fondazione del Monte di Bologna e Ravenna (Italy). The authors would like to thank Giacomo Placenti, Massimo Verdosci, and Francesco Campisi for the setup of the panel.

Reprint requests should be sent to Patrizia Fattori, Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato, 2, I-40126, Bologna, Italy, or via e-mail: patrizia.fattori@unibo.it.

References

Baumann, M. A., Fluet, M. C., & Scherberger, H. (2009). Context-specific grasp movement representation in the macaque anterior intraparietal area. Journal of Neuroscience, 29, 6436–6448.
Borra, E., Belmalih, A., Calzavara, R., Gerbella, M., Murata, A., Rozzi, S., et al. (2008). Cortical connections of the macaque anterior intraparietal (AIP) area. Cerebral Cortex, 18, 1094–1111.
Breveglieri, R., Galletti, C., Dal Bò, G., Hadjidimitrakis, K., & Fattori, P. (2014). Multiple aspects of neural activity during reaching preparation in the medial posterior parietal area V6A. Journal of Cognitive Neuroscience, 26, 878–895.
Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12, 478–484.
Ciavarro, M., Ambrosini, E., Tosoni, A., Committeri, G., Fattori, P., & Galletti, C. (2013). rTMS of medial parieto-occipital cortex interferes with attentional reorienting during attention and reaching tasks. Journal of Cognitive Neuroscience, 25, 1453–1462.
Craighero, L., Fadiga, L., Umiltà, C. A., & Rizzolatti, G. (1996). Evidence for visuomotor priming effect. NeuroReport, 8, 347–349.
Culham, J. C., Danckert, S. L., DeSouza, J. F., Gati, J. S., Menon, R. S., & Goodale, M. A. (2003). Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Experimental Brain Research, 153, 180–189.
Fattori, P., Breveglieri, R., Marzocchi, N., Filippini, D., Bosco, A., & Galletti, C. (2009). Hand orientation during reach-to-grasp movements modulates neuronal activity in the medial posterior parietal area V6A. Journal of Neuroscience, 29, 1928–1936.
Fattori, P., Breveglieri, R., Raos, V., Bosco, A., & Galletti, C. (2012). Vision for action in the macaque medial posterior parietal cortex. Journal of Neuroscience, 32, 3221–3234.
Fattori, P., Raos, V., Breveglieri, R., Bosco, A., Marzocchi, N., & Galletti, C. (2010). The dorsomedial pathway is not just for reaching: Grasping neurons in the medial parieto-occipital cortex of the macaque monkey. Journal of Neuroscience, 30, 342–349.
Galletti, C., Battaglini, P. P., & Fattori, P. (1995). Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. European Journal of Neuroscience, 7, 2486–2501.
Galletti, C., Breveglieri, R., Lappe, M., Bosco, A., Ciavarro, M., & Fattori, P. (2010). Covert shift of attention modulates the ongoing neural activity in a reaching area of the macaque dorsomedial visual stream. PLoS One, 5, e15078.
Galletti, C., & Fattori, P. (2003). Neuronal mechanisms for detection of motion in the field of view. Neuropsychologia, 41, 1717–1727.
Galletti, C., Fattori, P., Kutz, D. F., & Gamberini, M. (1999). Brain location and visual topography of cortical area V6A in the macaque monkey. European Journal of Neuroscience, 11, 575–582.
Galletti, C., Kutz, D. F., Gamberini, M., Breveglieri, R., & Fattori, P. (2003). Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Experimental Brain Research, 153, 158–170.
Gamberini, M., Galletti, C., Bosco, A., Breveglieri, R., & Fattori, P. (2011). Is the medial posterior parietal area V6A a single functional area? Journal of Neuroscience, 31, 5145–5157.
Gamberini, M., Passarelli, L., Fattori, P., Zucchelli, M., Bakola, S., Luppino, G., et al. (2009). Cortical connections of the visuomotor parietooccipital area V6Ad of the macaque monkey. Journal of Comparative Neurology, 513, 622–642.
Gardner, E. P., Babu, K. S., Ghosh, S., Sherwood, A., & Chen, J. (2007). Neurophysiology of prehension. III. Representation of object features in posterior parietal cortex of the macaque monkey. Journal of Neurophysiology, 98, 3708–3730.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Grafton, S. T., Fadiga, L., Arbib, M. A., & Rizzolatti, G. (1997). Premotor cortex activation during observation and naming of familiar tools. Neuroimage, 6, 231–236.
Handy, T. C., Grafton, S. T., Shroff, N. M., Ketay, S., & Gazzaniga, M. S. (2003). Graspable objects grab attention when the potential for action is recognized. Nature Neuroscience, 6, 421–427.
James, T. W., Humphrey, G. K., Gati, J. S., Menon, R. S., & Goodale, M. A. (2002). Differential effects of viewpoint on object-driven activation in dorsal and ventral streams. Neuron, 35, 793–801.
Janssen, P., Srivastava, S., Ombelet, S., & Orban, G. A. (2008). Coding of shape and position in macaque lateral intraparietal area. Journal of Neuroscience, 28, 6679–6690.
Jeannerod, M., Arbib, M. A., Rizzolatti, G., & Sakata, H. (1995). Grasping objects: The cortical mechanisms of visuomotor transformation. Trends in Neurosciences, 18, 314–320.
Konen, C. S., & Kastner, S. (2008). Two hierarchically organized neural systems for object information in human visual cortex. Nature Neuroscience, 11, 224–231.
Króliczak, G., McAdam, T. D., Quinlan, D. J., & Culham, J. C. (2008). The human dorsal stream adapts to real actions and 3D shape processing: A functional magnetic resonance imaging study. Journal of Neurophysiology, 100, 2627–2639.
Kutz, D. F., Fattori, P., Gamberini, M., Breveglieri, R., & Galletti, C. (2003). Early- and late-responding cells to saccadic eye movements in the cortical area V6A of macaque monkey. Experimental Brain Research, 149, 83–95.
Lehky, S. R., & Sereno, A. B. (2007). Comparison of shape encoding in primate dorsal and ventral visual pathways. Journal of Neurophysiology, 97, 307–319.
Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. Journal of Neurophysiology, 77, 24–42.
Maranesi, M., Bonini, L., & Fogassi, L. (2014). Cortical processing of object affordances for self and others' action. Frontiers in Psychology, 5, 538.
Monaco, S., Chen, Y., Medendorp, W. P., Crawford, J. D., Fiehler, K., & Henriques, D. Y. (2014). Functional magnetic resonance imaging adaptation reveals the cortical networks for processing grasp-relevant object properties. Cerebral Cortex, 24, 1540–1554.
Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.
Pitzalis, S., Sereno, M. I., Committeri, G., Fattori, P., Galati, G., Tosoni, A., et al. (2013). The human homologue of macaque area V6A. Neuroimage, 82, 517–530.
Riesenhuber, M., & Poggio, T. (2002). Neural mechanisms of object recognition. Current Opinion in Neurobiology, 12, 162–168.
Romero, M. C., Pani, P., & Janssen, P. (2014). Coding of shape features in the macaque anterior intraparietal area. Journal of Neuroscience, 34, 4006–4021.
Rosenberg, A., Cowan, N. J., & Angelaki, D. E. (2013). The visual representation of 3D object orientation in parietal cortex. Journal of Neuroscience, 33, 19352–19361.
Rossit, S., McAdam, T., McLean, D. A., Goodale, M. A., & Culham, J. C. (2013). fMRI reveals a lower visual field preference for hand actions in human superior parieto-occipital cortex (SPOC) and precuneus. Cortex, 49, 2525–2541.
Sereno, A. B., & Maunsell, J. H. (1998). Shape selectivity in primate lateral intraparietal cortex. Nature, 395, 500–503.
Smith, F. W., & Goodale, M. A. (2014). Decoding visual object categories in early somatosensory cortex. Cerebral Cortex. doi: 10.1093/cercor/bht292.
Snow, J. C., Pettypiece, C. E., McAdam, T. D., McLean, A. D., Stroman, P. W., Goodale, M. A., et al. (2011). Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects. Scientific Reports, 1, 130.
Srivastava, S., Orban, G. A., De Mazière, P. A., & Janssen, P. (2009). A distinct representation of three-dimensional shape in macaque anterior intraparietal area: Fast, metric, and coarse. Journal of Neuroscience, 29, 10613–10626.
Subramanian, J., & Colby, C. L. (2014). Shape selectivity and remapping in dorsal stream visual area LIP. Journal of Neurophysiology, 111, 613–627.
Theys, T., Pani, P., van Loon, J., Goffin, J., & Janssen, P. (2013). Three-dimensional shape coding in grasping circuits: A comparison between the anterior intraparietal area and ventral premotor area F5a. Journal of Cognitive Neuroscience, 25, 352–364.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
Verhagen, L., Dijkerman, H. C., Medendorp, W. P., & Toni, I. (2013). Hierarchical organization of parietofrontal circuits during goal-directed action. Journal of Neuroscience, 33, 6492–6503.
Xu, Z. M., Ivanusic, J. J., Bourke, D. W., Butler, E. G., & Horne, M. K. (1999). Automatic detection of bursts in spike trains recorded from the thalamus of a monkey performing wrist movements. Journal of Neuroscience Methods, 91, 123–133.