Area V6A is a visuomotor area of the dorsomedial visual stream that contains cells modulated by object observation and by grip formation. As different objects have different shapes but also evoke different grips, the response selectivity during object presentation could reflect either the coding of object geometry or of object affordances. To clarify this point, we here investigate the responses of V6A cells when monkeys observed two objects with similar visual features but different contextual information, such as the evoked grip type. We demonstrate that many V6A cells respond to the visual presentation of objects and that about 30% of them are modulated by the object affordance. Given that area V6A is an early stage of the visuomotor processes underlying grasping, these data suggest that V6A may participate in the computation of object affordances. These results add to the recent literature on the role of dorsal visual stream areas in object representation and contribute to elucidating the neural correlates of the extraction of action-relevant information from general object properties, in agreement with recent neuroimaging studies on humans showing that the vision of graspable objects activates action coding in the dorsomedial visual stream.
When we see an object, our visual system extracts the critical visual features that evoke the potential actions we can perform on it (object affordances; Gibson, 1979). A large body of data suggests that object affordances for grasping are encoded in parietal and frontal areas involved in the visual control of grasp (Maranesi, Bonini, & Fogassi, 2014; Jeannerod, Arbib, Rizzolatti, & Sakata, 1995). Among the parietal areas, perhaps the most typical one involved in grasp control is the anterior intraparietal area (AIP), which contains visual as well as grasping neurons (Baumann, Fluet, & Scherberger, 2009; Murata, Gallese, Luppino, Kaseda, & Sakata, 2000). Recently, it has been suggested that the posterior parietal area V6A, which was known to contain visual as well as reaching neurons (Galletti, Kutz, Gamberini, Breveglieri, & Fattori, 2003), could also be implicated in the control of grasp (Fattori et al., 2010). Because many V6A neurons displayed visual selectivity during passive fixation of objects of different shapes (Fattori, Breveglieri, Raos, Bosco, & Galletti, 2012), it was suggested that this visual selectivity subserves the computation of object affordances, but at present there is only indirect evidence supporting this view (see Fattori et al., 2012). In the present work, we aimed to determine whether the visual selectivity observed in V6A reflects a pure visual analysis of objects or the encoding of object affordances.
To achieve this goal, monkeys were engaged in fixation tasks while observing two objects of similar size and geometrical shape: a handle and a plate with the same orientation. From the monkey's point of view, these objects appeared very similar but evoked, when requested, different types of grips: finger prehension for the handle and primitive precision grip for the plate. Thus, in this task, we kept the visual features of the objects nearly constant while changing their affordances. Interestingly, several of the visual cells tested in this task responded very differently to the presentation of the two objects. We suggest that these cells encode the object affordances instead of, or in addition to, the object's visual features.
Experiments were carried out in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 22 September 2010 (2010/63/EU), and were approved by the Bioethics Committee of the University of Bologna. Neither behavioral nor clinical signs of pain or distress were observed during training or recording sessions.
Four male Macaca fascicularis monkeys weighing 2.4–3.9 kg were involved in the study. All the surgical procedures and recording techniques followed the methodologies reported in other papers (Breveglieri, Galletti, Dal Bò, Hadjidimitrakis, & Fattori, 2014; Galletti, Battaglini, & Fattori, 1995). Briefly, a head-restraint system and a recording chamber were surgically implanted in asepsis and under general anesthesia (sodium thiopental, 8 mg/kg/h, iv). Adequate measures were taken to minimize pain or discomfort. A full program of postoperative analgesia (ketorolac tromethamine, 1 mg/kg im immediately after surgery, and 1.6 mg/kg im on the following days) and antibiotic care (Ritardomicina [benzatinic benzylpenicillin plus dihydrostreptomycin plus streptomycin], 1–1.5 ml/10 kg every 5–6 days) followed the surgery.
Single-cell activity was recorded extracellularly from the posterior parietal area V6A (Galletti, Fattori, Kutz, & Gamberini, 1999). We performed single microelectrode penetrations using home-made glass-coated metal microelectrodes (for one animal) and multiple electrode penetrations using a five-channel multielectrode recording minimatrix (Thomas Recording GmbH, Giessen, Germany) for the remaining three animals. The electrode signals were amplified (at a gain of 10,000) and filtered (bandpass between 0.5 and 5 kHz). Action potentials in each channel were isolated with a dual time–amplitude window discriminator (DDIS-1, Bak Electronics, Mount Airy, MD) or with a waveform discriminator (Multi Spike Detector, Alpha Omega Engineering, Nazareth, Israel). Action potentials were sampled at 100 kHz.
All monkeys were trained to perform a visual observation task. The monkey sat in a primate chair (Crist Instruments, Hagerstown, MD) with the head fixed in front of a personal computer–controlled carousel containing two different objects. The objects were presented to the animal one at a time, in a random order, always in the same spatial position (22.5 cm away from the animal, in the midsagittal plane). Only the selected object was visible in each trial.
The task began when the monkey pressed a "home" button near its chest, outside its field of view, in complete darkness (Figure 1A; Home Button push). The animal was allowed to use only the arm contralateral to the recording side. It was required to keep the button pressed for 1 sec, during which it was free to look around, though remaining in darkness. After this interval, an LED mounted just above the object was switched on (fixation LED green), and the monkey had to fixate it. During fixation, eye position was controlled by an electronic window (4° × 4°) centered on the fixation LED. Breaking fixation and/or premature button release aborted the trial.
After button pressing, during LED fixation, two white lights at the sides of the presented object were switched on, thus illuminating it (light on). The monkey was required to keep fixating without releasing the home button. Eye position was monitored with an infrared oculometer system (Voss Eyetracker, Karlsruhe, Germany) sampled at 500 Hz for two animals, and with a camera-based infrared oculometer system (ISCAN) sampled at 100 Hz for the other two. After 1 sec, a color change of the fixation LED (from green to red; fixation LED red) instructed the monkey to release the home button (Home Button release). At the same time, the lights illuminating the object were turned off (light off), and the monkey could break fixation and receive its reward. Reach and grasp actions were prevented by a transparent screen mounted on the chair that blocked hand access to the object. The two objects were presented in random order in blocks of 20 correct trials, 10 for each object. Two of the four animals (Cases A and B) had previously been trained to perform the reach-to-grasp task described in Fattori et al. (2010).
The objects were chosen so as to have very similar visual features (i.e., almost the same retinal stimulation) but different affordances (i.e., evoking different grip types if the animal was required to grasp them).
The tested objects were as follows (Figure 1B, C): a handle (made of aluminium; thickness = 5 mm, width = 34 mm, depth = 13 mm; gap dimensions = 5 × 28 × 11 mm), evoking the finger prehension grip, that is, all fingers except the thumb inserted into the gap and wrapped around the object; and a plate (made of aluminium; thickness = 4 mm, width = 30 mm, depth = 14 mm), evoking the primitive precision grip, which uses the thumb and the distal phalanges of the other fingers. Both objects were mounted on the carousel with either a horizontal or a vertical orientation. Each cell was tested with both objects mounted in the same orientation, generally the preferred one for that cell. If the cell could be held in recording long enough, we also tested it with the orientation orthogonal to the previous one. From the monkey's point of view the objects appeared very similar but distinguishable (see Figure 1B). In fact, there was a slight difference in size between the two objects: the handle was a bit thicker and longer than the plate, and its edges were smoothed, whereas those of the plate were sharp. Further support for the view that the animals easily distinguished the two objects comes from the fact that the monkeys that performed the reach-to-grasp task (Cases A and B) promptly grasped both objects in the appropriate way when requested.
All the analyses were carried out using custom Matlab scripts (The MathWorks, Natick, MA). The neural activity was analyzed by quantifying the discharge in each trial in two time epochs: FIX, from 400 to 100 msec before the onset of object illumination, during which the monkey was fixating the fixation point in darkness, with no possibility of seeing the object; and VIS, the response to object presentation, from 40 to 300 msec after object illumination, during which the monkey was fixating in the same position as during FIX while observing the illuminated object. Visual responses were assessed by comparing neural activity during VIS and FIX (Student's t test, p < .05). We analyzed only those units for which we collected at least seven trials per object. This conservative criterion accounts for the intrinsically high variability of biological responses (Kutz, Fattori, Gamberini, Breveglieri, & Galletti, 2003).
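As an illustration, the epoch-based classification of visual cells described above can be sketched as follows. This is a minimal Python sketch of the analysis logic, not the authors' original Matlab code; the function names, the paired form of the t test, and the spike-time representation are our assumptions:

```python
import numpy as np
from scipy import stats

def epoch_rate(spike_times, t_start, t_stop):
    """Mean firing rate (spikes/s) of one trial in [t_start, t_stop); times in seconds."""
    spike_times = np.asarray(spike_times)
    n = np.sum((spike_times >= t_start) & (spike_times < t_stop))
    return n / (t_stop - t_start)

def is_visual(trials, light_on, alpha=0.05):
    """Classify a unit as 'visual' by comparing VIS vs. FIX rates across trials.

    trials: list of per-trial spike-time arrays aligned to trial start;
    light_on: time of object illumination (s). As in the text, FIX spans
    [-0.4, -0.1] s relative to light on and VIS spans [0.04, 0.3] s after it.
    The paired t test across trials is an assumption of this sketch.
    """
    fix = [epoch_rate(t, light_on - 0.4, light_on - 0.1) for t in trials]
    vis = [epoch_rate(t, light_on + 0.04, light_on + 0.3) for t in trials]
    t_stat, p = stats.ttest_rel(fix, vis)
    return p < alpha, p
```

A unit with dense spiking after illumination and little activity before it would be flagged as visual by this criterion.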
Population response was calculated as averaged spike density function (SDF; Gaussian kernel, half-width 40 msec) as detailed in Fattori et al. (2012). To statistically compare the different SDFs, we performed a permutation test using the sum of squared errors as the test statistic (Fattori et al., 2010). Comparisons of responses to object presentation have been made in the interval VIS.
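The SDF construction and the SSE-based permutation test can be sketched as follows. This is an illustrative Python reimplementation, not the original Matlab code; we assume that the kernel half-width is used as the Gaussian sigma, and the number of permutations is arbitrary:

```python
import numpy as np

def spike_density(spike_times, t_grid, half_width=0.040):
    """SDF as a sum of Gaussian kernels centered on each spike, in spikes/s.

    half_width (s) is used here as the Gaussian sigma (an assumption;
    the text specifies a 40-msec half-width without defining it further).
    """
    sigma = half_width
    d = t_grid[None, :] - np.asarray(spike_times)[:, None]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=0) / (sigma * np.sqrt(2 * np.pi))

def sse_permutation_test(trials_a, trials_b, t_grid, n_perm=1000, seed=0):
    """Permutation test on two sets of trials, using the sum of squared
    differences between the two mean SDF curves as the test statistic."""
    rng = np.random.default_rng(seed)
    sdfs = np.array([spike_density(t, t_grid) for t in trials_a + trials_b])
    n_a = len(trials_a)

    def stat(idx_a, idx_b):
        return np.sum((sdfs[idx_a].mean(0) - sdfs[idx_b].mean(0)) ** 2)

    observed = stat(np.arange(n_a), np.arange(n_a, len(sdfs)))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(sdfs))           # shuffle condition labels
        count += stat(perm[:n_a], perm[n_a:]) >= observed
    return (count + 1) / (n_perm + 1)               # permutation p value
```

Restricting `t_grid` to the VIS interval reproduces the comparison window used for the object-presentation responses.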
A selectivity index (SI) of the VIS responses was calculated as defined by Luck, Chelazzi, Hillyard, and Desimone (1997), using the strongest (preferred) and the weakest (unpreferred) discharges: SI = (preferred − unpreferred)/(preferred + unpreferred). The SI ranged from 0 to 1. An SI value of 1 indicates maximal modulation between the preferred and unpreferred conditions, whereas a value of 0 indicates no modulation.
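In code, the index reduces to a one-liner. This illustrative sketch assumes mean VIS firing rates as input, with at least one of the two rates nonzero:

```python
def selectivity_index(rate_obj1, rate_obj2):
    """SI = (preferred - unpreferred) / (preferred + unpreferred),
    after Luck et al. (1997). Inputs are mean VIS rates (spikes/s) for
    the two objects; at least one must be nonzero (assumption)."""
    pref, unpref = max(rate_obj1, rate_obj2), min(rate_obj1, rate_obj2)
    return (pref - unpref) / (pref + unpref)
```

For example, rates of 30 and 10 spikes/s yield SI = 0.5; equal rates yield 0, and a response to only one object yields 1.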
Neuronal latencies of visual responses were calculated trial by trial as the time between the onset of object illumination and the beginning of the neural discharge, using the knowledge-based spike train analysis method, as detailed in Xu, Ivanusic, Bourke, Butler, and Horne (1999).
The onset of object selectivity was calculated in each visually selective cell by finding the latency at which the two SDFs (preferred and unpreferred discharges) started to differ significantly. We calculated each SDF with a half-width of 10 msec and performed a pairwise permutation test using a 100-msec time window that slid along the curves in 20-msec increments, starting from the time of object illumination (pairwise sliding permutation test). The latency of SDF separation was defined as the midpoint of the first of three consecutive bins in which the permutation test was significant (p < .05).
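Given the p values of the sliding permutation test, the onset criterion (first of three consecutive significant windows) can be sketched as follows; this is illustrative Python with hypothetical function and argument names:

```python
import numpy as np

def selectivity_onset(p_values, win_centers, alpha=0.05, n_consec=3):
    """Return the onset of selectivity given sliding-window test results.

    p_values: p value of the pairwise permutation test in each 100-msec
    window (windows slid in 20-msec steps from object illumination);
    win_centers: the midpoint of each window, in msec. Returns the center
    of the first of n_consec consecutive significant windows, or None
    if the criterion is never met.
    """
    sig = np.asarray(p_values) < alpha
    for i in range(len(sig) - n_consec + 1):
        if sig[i:i + n_consec].all():
            return win_centers[i]
    return None
```

A single significant window surrounded by nonsignificant ones does not trigger an onset, which protects the estimate against isolated false positives.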
To provide additional evidence for the presence of affordance cells in V6A, we performed a control experiment on one of the cases studied in this work (Case D). Specifically, we tested the same cell with two handles and two plates of different thicknesses. All the objects were made of aluminium with the same width as in the main experiment but different thicknesses. In particular, one handle and one plate were the same as those used in the main experiment, whereas the second handle and plate were thicker (thickness = 15 mm, about three times that of the objects used in the main experiment). In this way, we kept the affordance constant (handles or plates) while changing the visual features (thin or thick objects) within object types, and changed the affordance while keeping the visual features constant between objects of the same thickness (handles vs. plates).
To study the responses of V6A neurons to the presentation of 3-D graspable objects similar in shape and size but evoking different grasp types, we used a task that required the monkey to keep fixating an LED while a handle or a plate with the same orientation was illuminated. In each trial, either the handle or the plate was presented at random. From the monkey's point of view, the two objects appeared very similar (Figure 1B).
A total of 180 V6A neurons were tested in this task. As shown in Table 1, in 58% of the tested cells the discharge during VIS was significantly different from that during FIX (visual cells; Student's t test, p < .05), in agreement with a previous study in which nongraspable visual stimuli were used (Galletti et al., 1999). Visual latencies could be calculated for 70 of the 104 visual cells (mean = 96 ± 37 msec; see Figure 2, cumulative sum plot). The mean latency is higher than that found with 2-D stimuli projected on a screen (79.8 ± 29 msec; Kutz et al., 2003), suggesting that more complex processing may take place in these cells during the observation of graspable objects.
| Case | Visual Cells | Affordance Cells |
| --- | --- | --- |
| A^a (n = 31) | 23 (74%) | 9 (39%) |
| B^a (n = 57) | 30 (53%) | 10 (33%) |
| C (n = 62) | 30 (48%) | 4 (13%) |
| D (n = 30) | 21 (70%) | 10 (48%) |
| Total (N = 180) | 104 (58%) | 33 (32%) |
^a Animals also trained in the reach-to-grasp task.
Interestingly, although the majority of visual cells responded in the same way to plate/handle presentation, as expected given the similarity of the two objects from the animal's point of view (Figure 1B), 33 of the 104 visual cells (32%) displayed a different response to plate/handle presentation (ANOVA, p < .05). As the two objects, despite their visual similarity, required different grasp types, we believe that the responses of these cells signaled the affordance of the objects rather than their visual features. In other words, the cell discharge after object presentation could be the expression of the motor plan evoked by the objects. However, because the incidence of cells of this type did not differ between the animals trained (Cases A and B; see Table 1) and those untrained (Cases C and D; see Table 1) in the reach-to-grasp task (Chi-square test, p > .05), we favor the view that these cells encoded the potential actions the animal could perform on the objects, and from here on we will call them "affordance cells."
Figure 3 shows three examples of affordance cells. The cell in Figure 3A, tested with horizontal objects, discharged more strongly to the handle than to the plate presentation (ANOVA, p < .05). The cell in Figure 3B, tested with vertical objects, discharged more strongly to the plate than to the handle presentation (ANOVA, p < .05). It is worth noting that the responses to object presentation were highly different even though the visual features (geometry, shape) of the handle and the plate were almost identical as seen from the monkey's point of view (Figure 1B). The cell in Figure 3C was tested with both horizontal and vertical objects. The two horizontal objects evoked visual responses with small, nonsignificant differences, whereas the two vertical objects evoked responses with large and statistically significant differences. The cell displayed a strong visual discharge at the presentation of the vertical plate (Figure 3C, bottom right) and a significantly weaker discharge at the presentation of the vertical handle (Figure 3C, bottom left; ANOVA, p < .05). In other words, the cell was weakly sensitive to the visual features per se (weak differences between responses to horizontal and vertical objects; compare top with bottom panels in Figure 3C, left and right) but responded in a completely different way to two very similar objects (vertical handle and plate; Figure 3C, bottom left and right, respectively) that required different grips when grasped. We suggest that the parameter that mainly influences this cell, as well as the other two cells shown in Figure 3, is the object affordance. According to this view, all these cells would signal the type of grip needed to grasp the object instead of, or in addition to, its visual features.
Figure 4A shows that the temporal profile and the amount of discharge of the 33 affordance cells are similar for the plate and the handle, indicating a lack of preference for either object at the population level (permutation test, ns). Conversely, when we ranked the neural activity according to each cell's preference (Figure 4B), the population showed a high capacity to discriminate between the handle and the plate (permutation test, p < .05). Figure 4C shows that the onset of object selectivity of the affordance cells (see Methods) ranged from 50 to 250 msec after object illumination, and Figure 4D shows that the SI of the visual responses (see Methods) spans a rather wide range. Overall, these data show that the affordance cells were differentially activated by the plate and the handle.
Effect of Object Orientation
Whenever possible, we tested the same cell with objects presented in two different orientations, as in the example of Figure 3C. This was done for 35 of 180 cells. We found that 26 of the 35 cells (74%) were visual (Student's t test, p < .05); among them, 10 (38%) were selective only for the orientation, 5 (19%) only for the object type, and 8 (30%) for both (two-way ANOVA, Orientation × Object, p < .05). The population SDF of the object-selective cells is shown in Figure 5 for the preferred (left) and unpreferred (right) orientations. The selectivity is much stronger and significant (permutation test, p < .05) for the preferred orientation. The onset of selectivity (significant difference between the curves for the best and worst objects) was influenced by the orientation and occurred significantly earlier for the preferred orientation (Kolmogorov–Smirnov test, p = .03). These data highlight the coexistence of orientation sensitivity and affordance sensitivity in the same cells.
To support the view that object affordance influences the discharge of at least part of V6A neurons, we carried out an additional experiment. In one monkey (Case D), we recorded the activity of a further 43 V6A cells. We tested each of these cells with two handles of different thicknesses and two plates of the same two thicknesses, presented to the animal in random order. In other words, we kept the affordance constant while changing the visual features within object types (handles or plates) and changed the affordance while keeping the visual features constant between object types (handles vs. plates of the same thickness). We found that 31 cells were sensitive to the visual stimulation. Nine of the 31 visual cells (29%) were influenced by the visual features of the object, that is, by its thickness. In 5 of the 31 visual cells, the discharge during object observation was not significantly different between handles or between plates of different thicknesses (ANOVA, p > .05), but it was different between handles and plates of the same thickness (ANOVA, p < .05). Thus, the discharge of these cells was modulated not by the visual features of the objects, but likely by their affordances. The percentage of affordance cells in this control experiment was smaller than, but not significantly different from, that found in the main experiment (Chi-square test, p > .05).
An example of one of these cells is shown in Figure 6. The cell was strongly and similarly modulated by the observation of both handles, although they had very different thicknesses (Figure 6, two top panels; ANOVA, p > .05). The cell was also similarly, though only weakly if at all, modulated by the observation of both plates (two bottom panels; ANOVA, p > .05). In strong contrast, the responses to handle presentation were significantly different from those to plate presentation (Figure 6, top vs. bottom panels; ANOVA, p < .05). This means that the parameter influencing the discharge of this cell was not the visual features of the objects, which were very different when seen from the animal's point of view, but likely the object affordance.
In this study, we investigated the responses of cells belonging to an area of the dorsomedial visual stream when monkeys observed two objects with similar visual features but different contextual information, such as the evoked grip type (affordance). In a recent paper (Fattori et al., 2012), a cluster analysis suggested that V6A visual responses subserve action monitoring, in that they were related not to the object geometry (i.e., physical attributes) but to the grip type required to grasp these objects. One limitation of that previous study is that we manipulated the object shape, making it difficult to judge whether the observed responses were related to the coding of object geometry or whether the processing of these object features was indeed related to grasping. In the current study, we kept the visual features nearly constant while changing the object affordance, and by doing so we suggest that V6A is involved in the extraction of object affordances, that is, in the encoding of "vision for action."
It is worth noting that in our experimental conditions the attentional load was the same for the two objects; thus, although attention has been shown to modulate the activity of V6A (Ciavarro et al., 2013; Galletti et al., 2010), there is no reason to believe that attention is responsible for the object selectivity shown here by the affordance cells. We also know that spatial parameters are critically encoded in dorsal stream areas, and in V6A itself (Galletti & Fattori, 2003; Galletti et al., 2003), but in this task spatial position cannot account for the different responses we observed, as the two objects were always presented in the same spatial position (within arm's reach, straight ahead) and in the same retinal position (central part of the inferior hemifield, where the majority of V6A visual receptive fields are located; Gamberini et al., 2011; Galletti et al., 1999).
We suggest that the cells that responded differently to the visual presentation of plates and handles are putatively computing object affordances. We are aware that we cannot completely rule out the more parsimonious explanation that the different responses to plates and handles reflect differences in the visual attributes of the two objects, but in our experimental design the two objects, when seen from the animal's point of view, were very similar (see Figure 1B) and were recognizable only by marginal differences. Thus, it is hard to believe that these minimal differences in visual features were responsible for the strong differences in cell activity that we found. The presence of affordance cells in V6A is even more strongly supported by the results of the control experiment, in which, in single cells, we analyzed the responses when we changed the affordance while keeping the object's visual features constant, and kept the affordance constant while widely changing the visual features of the objects.
Present data show that affordance cells are modulated by the orientation of the object. The object orientation influences both the amount of object selectivity and the selectivity onsets. As it is widely agreed that a successful object recognition system requires generalization across changing viewing conditions (Riesenhuber & Poggio, 2002), it can be predicted that a system that uses object information for the control of skilled actions would preserve the specificity of object appearance with regard to shape, size, and orientation (Craighero, Fadiga, Umiltà, & Rizzolatti, 1996). In fact, the same object presented with different orientations requires different hand postures during grasping (James, Humphrey, Gati, Menon, & Goodale, 2002) and different wrist orientations. Our finding of object-related activity depending on the object orientation therefore supports the view that object information in the dorsal pathway is used for sensorimotor transformations in visually guided behavior. This is in line with the "motor-related" discharges found in V6A for orienting the wrist appropriately to grasp differently oriented objects (Fattori et al., 2009) and for shaping the hand to grasp objects with different shapes (Fattori et al., 2010).
The present data, as well as those of a previous study (Fattori et al., 2012), show that object selectivity is present in V6A even in the absence of the animal's grasping movements. Interestingly, the present data show that the incidence of affordance cells does not depend on task demands, being similar in animals trained and untrained to perform reach-to-grasp actions. This suggests that object affordance is an inherent attribute coded by V6A neurons.
It is widely known that real 3-D objects inherently provide affordances, properties relevant for their use that are able to potentiate associated motor actions (Gibson, 1979). Accordingly, recent fMRI studies demonstrate that response patterns to 3-D objects and 2-D planar images are different (Snow et al., 2011) and that information about 3-D form is critical for the visual control of grasping and manipulation (e.g., Culham et al., 2003). For this reason, in this study we used real objects located in the peripersonal space, instead of the object images typically employed in human and monkey experiments on the coding of object shape (Subramanian & Colby, 2014; Rosenberg, Cowan, & Angelaki, 2013; Theys, Pani, van Loon, Goffin, & Janssen, 2013; Konen & Kastner, 2008).
Object Affordance in Monkey and Human V6A
Several studies have shown that graspable objects, even when irrelevant to current behavioral goals, automatically evoke motor programs, and that their vision leads to activations in the motor cortices (Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003; Chao & Martin, 2000; Grafton, Fadiga, Arbib, & Rizzolatti, 1997). This happens because graspable objects facilitate visuomotor transformations, for example, by grabbing visual spatial attention (Handy et al., 2003). As area V6A is involved in the visuomotor control of prehension (Gamberini et al., 2011; Galletti et al., 2003) and in shifts of the spotlight of attention in both monkeys and humans (Ciavarro et al., 2013; Galletti et al., 2010), we suggest that object selectivity in V6A may serve the rapid transformation of visual representations into object-specific motor programs useful in visually guided grasping. It has recently been demonstrated that the putative human homolog of monkey area V6A (Pitzalis et al., 2013) is sensitive to grasp-relevant object features regardless of size (Monaco et al., 2014). Like macaque V6A (Gamberini et al., 2011), human V6A is involved in processing visual information in the lower visual field, a specialization relevant in the context of object-oriented actions (Rossit, McAdam, McLean, Goodale, & Culham, 2013), and in elaborating information on visual objects ("visual context sensitivity"; Smith & Goodale, 2014) to be transmitted to somatosensory and motor cortex for representing motor plans. From these very recent studies and from the present work, a framework emerges in which, in both monkeys and humans, area V6A is involved in processing grasp-relevant object features.
According to present and previous data, V6A neurons have access to feature information and use it in a behaviorally relevant manner. Visual inputs may be provided to V6A by other visual areas that process object shape, such as area AIP (Romero, Pani, & Janssen, 2014; Srivastava, Orban, De Mazière, & Janssen, 2009; Murata et al., 2000), which is directly connected with V6A (Gamberini et al., 2009; Borra et al., 2008) and is implicated in object grasping (Baumann et al., 2009; Murata et al., 2000). V6A may modulate this information according to the needs of the task. Contextual information on graspable objects may be used in dorsomedial area V6A for guiding interaction with objects, in cooperation with dorsolateral area AIP, for selecting or generating appropriate grasp movements, in line with the suggested role of V6A in the monitoring of grasping actions (Fattori et al., 2010). This suggestion is also in agreement with the recent view that both dorsomedial and dorsolateral circuits specify the same grasping parameters, with dorsomedial computations depending on dorsolateral contributions (Verhagen, Dijkerman, Medendorp, & Toni, 2013).
Previous data from our laboratory indicate that some aspects of object visual attributes, like orientation and shape, strongly drive the discharges of V6A neurons (Fattori et al., 2012). Present data show that, in addition to the physical/visual aspects per se, the object affordance also modulates V6A cell activity. We suggest that V6A visual selectivity may serve the rapid transformation of visual representations into object-specific motor programs useful in visually guided grasping. Although object recognition is a typical function of the ventral stream (Goodale & Milner, 1992; Ungerleider & Mishkin, 1982), recent remarkable findings in monkeys and humans demonstrate that areas of the dorsal stream are also able to process shape (Romero et al., 2014; Theys et al., 2013; Janssen, Srivastava, Ombelet, & Orban, 2008; Króliczak, McAdam, Quinlan, & Culham, 2008; Gardner, Babu, Ghosh, Sherwood, & Chen, 2007; Lehky & Sereno, 2007; Sereno & Maunsell, 1998). Present data, together with recently published data from our laboratory (Fattori et al., 2012), add V6A to the list of dorsal areas that can process object shape.
This work was supported by EU FP7-IST-217077-EYESHOTS, Firb 2013 N. RBFR132BKP, Ministero dell'Università e della Ricerca (Italy), and Fondazione del Monte di Bologna e Ravenna (Italy). The authors would like to thank Giacomo Placenti, Massimo Verdosci, and Francesco Campisi for the setup of the panel.
Reprint requests should be sent to Patrizia Fattori, Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato, 2, I-40126, Bologna, Italy, or via e-mail: email@example.com.