Neurophysiological and behavioral studies suggest that the peripersonal space is represented in a multisensory fashion by integrating stimuli of different modalities. We developed a neural network to simulate the visual-tactile representation of the peripersonal space around the right and left hands. The model is composed of two networks (one per hemisphere), each with three areas of neurons: two are unimodal (visual and tactile) and communicate by synaptic connections with a third downstream multimodal (visual-tactile) area. The hemispheres are interconnected by inhibitory synapses. We applied a combination of analytic and computer simulation techniques. The analytic approach requires some simplifying assumptions and approximations (linearization and a reduced number of neurons) and is used to investigate network stability as a function of parameter values, providing some emergent properties. These are then tested and extended by computer simulations of a more complex nonlinear network that does not rely on the previous simplifications. With basal parameter values, the extended network reproduces several in vivo phenomena: multisensory coding of peripersonal space, reinforcement of unisensory perception by multimodal stimulation, and coexistence of simultaneous right- and left-hand representations in bilateral stimulation. By reducing the strength of the synapses from the right tactile neurons, the network is able to mimic the responses characteristic of right-brain-damaged patients with left tactile extinction: perception of unilateral left tactile stimulation, cross-modal extinction and cross-modal facilitation in bilateral stimulation. Finally, a variety of sensitivity analyses on some key parameters were performed to shed light on the contribution of single-model components in network behavior. The model may help us understand the neural circuitry underlying peripersonal space representation and identify the alterations that explain neurological deficits.
In perspective, it could help in interpreting results of psychophysical and behavioral trials and clarifying the neural correlates of multisensory-based rehabilitation procedures.
In order to guide body movement through space and allow interaction with immediate surroundings, the brain must continuously monitor the location of parts of the body across different postures and analyze the spatial relationship between parts of the body and nearby objects. This process requires the integration of proprioceptive, tactile, visual, and even auditory information. Many studies have focused on how these various sensory cues may be combined and integrated to achieve perception of limb location and representation of the space immediately around the body (i.e., peripersonal space). Numerous findings have been accumulated by using different methodologies: single-cell recordings in animals (Hyvarinen, 1981; Rizzolatti, Scandolara, Matelli, & Gentilucci, 1981; Graziano & Gross, 1995; Fogassi et al., 1996; Graziano, Hu, & Gross, 1997; Duhamel, Colby, & Goldberg, 1998; Avillac, Ben, & Duhamel, 2007), neuropsychological studies in brain-damaged patients (see Làdavas, 2002; Holmes & Spence, 2004, for a review), and psychophysical and neuroimaging investigation in healthy and brain-damaged subjects (Tipper et al., 1998; Macaluso, Frith, & Driver, 2000; Marzi, Girelli, Natale, & Miniussi, 2001; Kennett, Taylor-Clarke, & Haggard, 2001; Eimer, Maravita, van Velzen, Husain, & Driver, 2002; Taylor-Clarke, Kennett, & Haggard, 2002; Sarri, Blankenburg, & Driver, 2006).
Neurons that respond to both tactile and visual stimuli (i.e., bimodal neurons) have been found in several areas of the monkey's brain: in cortical [ventral premotor cortex (Rizzolatti et al., 1981; Graziano & Gross, 1995; Fogassi et al., 1996; Graziano, Hu, & Gross, 1997), ventral intraparietal area (Duhamel et al., 1998), parietal area 7b (Hyvarinen, 1981)] and subcortical (putamen) structures (Graziano & Gross, 1995). Bimodal neurons respond to tactile stimuli located on a specific part of the body (such as the hand, arm, face, or shoulder) and to visual stimuli presented proximally to the same body part. In particular, the following properties characterize these cells: (1) the visual and tactile receptive fields (RFs) are spatially aligned; (2) the response to a visual stimulus declines as the stimulus is moved away from the body; and (3) the visual RF remains anchored to the body part—it moves as the body part is moved, whereas when the eyes are moved, the visual RF remains fixed in space near the corresponding tactile RF.
These properties suggest the existence of a distributed, visual-tactile system that codes the peripersonal space in a frame of reference centered on the part of the body. The fact that brain regions containing bimodal neurons are located in or connected to motor areas (Rizzolatti, Luppino, & Matelli, 1998; Cooke & Graziano, 2004) suggests that this system is involved in the representation of space and control of body position in order to plan appropriate movements in response to stimuli presented within the peripersonal space.
Recent neuroimaging studies in humans have shown that some parietal and prefrontal areas in the human brain have analogous multisensory properties (Bremmer et al., 2001; Calvert, 2001; Galati, Committeri, Sames, & Pizzamiglio, 2001; Lloyd, Shore, Spence, & Calvert, 2003; Macaluso & Driver, 2005; Swisher, Halko, Merabet, McMains, & Somers, 2007); in particular, Makin, Holmes, and Zohary (2007) have recently shown that ventral premotor and intraparietal areas respond to visual and tactile stimuli presented close to the hand.
Besides neuroimaging data, the most influential evidence of visuotactile interaction within the peripersonal space in humans comes from studies in unilateral brain-damaged patients suffering from extinction. Extinction patients can detect a single stimulus presented in either the ipsi- or contralesional side, but they miss a contralesional stimulus when it is presented concurrently with an ipsilesional stimulus. The presence of extinction only during bilateral stimulation is suggestive of an attentive competition mechanism between two neural representations, each devoted to processing information from the contralateral side of space (Duncan, 1996; Mattingley, Driver, Beschin, & Robertson, 1997; Marzi et al., 2001; Hilgetag, Théoret, & Pascual-Leone, 2001; Battelli, Alvarez, Carlson, & Pascual-Leone, 2008). The competition between the two representations is normally balanced, but when one hemisphere is damaged, the lesion chronically biases competition in favor of the ipsilesional event. Extinction can occur within each sensory modality (unimodal extinction), but also between sensory modalities: in right-brain-damaged (RBD) patients with left tactile extinction, a visual stimulus presented near the ipsilesional (right) hand extinguished awareness of a tactile stimulus applied on the contralesional (left) hand (cross-modal visual-tactile extinction) (Mattingley et al., 1997; Di Pellegrino, Làdavas, & Farne, 1997; Làdavas, Di Pellegrino, Farnè, & Zeloni, 1998; Bueti, Costantini, Forster, & Aglioti, 2004). Importantly, cross-modal extinction was reduced when the right visual stimulus was applied far from the hand. These results can conceivably be interpreted in terms of an integrated visual-tactile coding of the peripersonal space by bimodal neurons similar to those revealed by single-cell recordings in monkeys (Rizzolatti et al., 1981; Graziano & Gross, 1995; Fogassi et al., 1996; Graziano et al., 1997).
According to this interpretation, the activation of the bimodal neurons by a visual stimulus near a part of the body boosts the corresponding somatosensory representation of that part. The latter, however, may conflict with a simultaneous representation of the opposite part of the body, thus inducing visuotactile extinction. Conversely, a visual stimulus delivered far from the ipsilesional hand (outside the peripersonal space) does not boost the representation of the hand, and thus no competition occurs.
Besides cross-modal extinction, cross-modal visual-tactile facilitation has been documented in RBD patients with left tactile extinction: under bilateral stimulation, patients were more accurate in detecting the left tactile stimulus when a visual stimulus was presented near the left hand (Làdavas et al., 1998; Làdavas, Farnè, Zeloni, & Di Pellegrino, 2000; Vaishnavi, Calhoun, & Chatterjee, 2001). In other words, the combination of a tactile stimulus with a visual stimulus in the contralesional side of space ameliorated the tactile perceptual deficit. A similar effect was observed in brain-damaged patients with reduced somatosensory sensitivity of the upper limb (Halligan, Hunt, Marshall, & Wade, 1996; Halligan, Marshall, Hunt, & Wade, 1997; Serino, Farnè, Rinaldesi, Haggard, & Làdavas, 2007): tactile performance improved when the patients were allowed to see their affected arm being touched. Further evidence of cross-modal influences on unimodal sensory processing has been provided by studies in healthy subjects: viewing the stimulated body site improves performance in tactile detection tasks (Tipper et al., 1998) and enhances tactile acuity (Kennett et al., 2001; Taylor-Clarke et al., 2002; Serino et al., 2007). A recent hypothesis (Driver & Spence, 2000; Rockland & Ojima, 2003) suggests that the facilitatory cross-modal influences on unisensory processes may occur by feedback connections from multimodal areas to more specialized unimodal areas. Support for such feedback is provided by recent event-related potential measures and functional imaging data showing that tactile events can alter activity in unimodal visual areas of the brain (Macaluso et al., 2000) and vice versa (Taylor-Clarke et al., 2002; Schaefer, Heinze, & Rotte, 2005). However, this process appears to depend crucially on whether the tactile and visual stimuli are in spatial proximity (Macaluso et al., 2000).
In summary, a massive amount of data has contributed to describing functional properties of peripersonal space representation. The aim of this study is to integrate these findings into a theoretical model describing the mechanisms of peripersonal space representation in terms of a neural network. This effort is necessary to formulate plausible scenarios in quantitative terms and to synthesize the knowledge obtained using different approaches into a unique, coherent structure. In particular, models can help the interpretation of behavioral and psychophysical responses in terms of the reciprocal interconnections among neurons.
In this letter, we propose a neural network model that mimics the visual-tactile representation of the peripersonal space around the right hand and the left hand. An important point is that the model has been conceived assuming fixed postural conditions (eyes, head, and hands immobile); we did not include any postural signal in the model. By assuming these conditions, we avoid considering explicitly how coordinate transformations between different reference frames (e.g., from retinotopic to hand-centered coordinates) are made, and we focus on other relevant questions of spatial representation. The network considers the two hemispheres, each composed of three areas of neurons. The two upstream areas respond to visual and tactile stimuli, respectively, and are connected by feedback and feedforward synapses to the third downstream area, which is devoted to multisensory integration. The two hemispheres are interconnected by inhibitory synapses. Neuronal activity is described by a sigmoidal function and a first-order dynamics.
The analysis of the network is presented in two main parts. First, a simplified version of the model (just one unit per area and linearized relationships) was developed and used to explore analytically network stability and provide some emergent properties. Then the behavior of a more complex network (several neurons per area and nonlinear relationships) was investigated by computer simulations, and the simulation results were related to the analytical ones. In particular, inspired by the results of the simple network study, several sensitivity analyses on some key parameters were performed to gain deeper insights into network properties and behavior.
In this section, we first present the complex nonlinear network, providing all mathematical equations and criteria for parameter assignment. Then the linearized and simplified network used for the analysis of stability will be described.
2.1. The Complex Nonlinear Model.
2.1.1. General Model Structure.
The model is composed of two networks, one per hemisphere, reciprocally interconnected (see Figure 1). Each network is defined with reference to the contralateral hand of a hypothetical subject. We assumed that the head and eyes of the subject are immobile and maintained in central alignment, with each hand located in its own hemispace and in a fixed position. Accordingly, no postural signal is considered, and the only inputs are tactile and visual stimuli. Each network consists of three regions of neurons that communicate through synaptic connections.
The two upstream regions are organized as matrices of neurons. Each neuron has its own RF through which it receives stimulation by an external input. Neurons in one area respond to tactile stimuli applied on the contralateral hand; their RFs are in hand-centered coordinates. We will refer to this area as the unimodal tactile area. Neurons in the other area respond to visual stimuli on the contralateral hand and around it: the RFs of these neurons are in hand-centered coordinates too. We assumed that this area represents the final outcome of an upstream neural process (presumably involving parietal neurons) that performs coordinate transformations of the visual target from retinotopic to hand-centered reference frame, using postural information (e.g., eye, head, and hand positions). Since the model is based on the assumption of a fixed posture, the only variable input becomes the visual one; hence, we refer to this upstream area as the unimodal visual area throughout the letter.
Neurons in each unimodal area are arranged according to a topological organization, so that their RFs map the external space in an orderly manner. Accordingly, proximal neurons within each area respond to stimuli coming from proximal positions of the hand and space. Moreover, the neurons in the same unimodal area interact by lateral synapses arranged according to a Mexican hat disposition (a circular excitatory region surrounded by an inhibitory annulus).
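As an illustration, such Mexican hat lateral weights can be sketched as a difference of two Gaussians of the distance between RF centers; the amplitudes and widths below are placeholders chosen for exposition, not the values of Table 1:

```python
import numpy as np

def mexican_hat(dx, dy, L_ex=3.0, sigma_ex=1.0, L_in=2.0, sigma_in=3.0):
    """Lateral weight between two neurons whose RF centers are (dx, dy)
    centimeters apart: a narrow excitatory Gaussian minus a broader
    inhibitory one (illustrative amplitudes and widths)."""
    d2 = dx ** 2 + dy ** 2
    return (L_ex * np.exp(-d2 / (2 * sigma_ex ** 2))
            - L_in * np.exp(-d2 / (2 * sigma_in ** 2)))
```

With these values, nearby neurons excite each other (positive weight at zero distance), neurons at intermediate distance inhibit each other, and the interaction vanishes at long range.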
The third downstream region is multimodal, devoted to visual-tactile integration. It consists of:
A matrix of multimodal excitatory neurons arranged according to a topological organization. They receive inputs from unisensory neurons via feedforward synapses and send back excitatory inputs to the unimodal neurons through feedback synapses. Thus, within the same hemisphere, a multimodal stimulus may reinforce, via the feedback synapses, the perception of unimodal stimuli in the upstream areas. Moreover, the multimodal neurons send long-range projections towards the other hemisphere.
Multimodal inhibitory interneurons. These neurons realize the interhemispheric interaction. They receive visual-tactile information from the multimodal excitatory neurons in the other hemisphere and send inhibitory synapses locally to the unisensory neurons within the same hemisphere. Inclusion of these connections implements a competition in the case of simultaneous activation of the right- and left-hand representations.
All neurons in the network are normally in a silent state (or exhibit only a weak basal activity) and can be activated if stimulated by a sufficiently high input. The activity of each neuron is described through a sigmoidal relationship (with a lower threshold and an upper saturation) and a first-order dynamics.
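A minimal sketch of this neuron model (sigmoidal static characteristic plus first-order dynamics, integrated here with a simple Euler step; the slope, threshold, and time constant below are illustrative, not the values of Table 1):

```python
import numpy as np

def sigmoid(u, slope=0.3, theta=20.0):
    """Static sigmoidal characteristic: lower threshold around theta,
    upper saturation at 1 (slope and threshold are illustrative)."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

def euler_step(z, u, dt=1.0, tau=20.0):
    """One Euler step of the first-order dynamics
    tau * dz/dt = -z + sigmoid(u), with time in ms."""
    return z + dt / tau * (-z + sigmoid(u))
```

Starting from rest, a constant suprathreshold input drives the activity toward the sigmoid of that input with time constant tau, while subthreshold inputs leave the neuron nearly silent.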
A single neuron in the model should not be considered as representative of an individual cell only, but rather as a pool of cells that approximately share the same RF. Similarly, synaptic weights should not be considered as the strength of individual synapses but rather summarize the overall synaptic strength of a pool of cells.
2.1.2. Mathematical Description.
Since the overall network has a symmetrical structure, only the equations for one hemisphere (the left one) will be presented. The superscripts t, v, and m will denote quantities referring to tactile, visual, and multimodal excitatory neurons, respectively; the superscript g will indicate quantities referring to inhibitory (GABAergic) interneurons; the superscripts L and R will distinguish the left and right hemisphere; the subscripts ij or hk will represent the spatial position of individual neurons.
Organization of the unimodal and multimodal regions. The unimodal areas are composed of Ns × Ms neurons (s = t, v), with Nt = 20, Mt = 40; Nv = 30, Mv = 140. In both areas, the RFs of the neurons are spaced 0.5 cm apart along the x and y directions. Hence, the tactile area covers a space of 10 cm by 20 cm, representing the surface of one hand in an extremely simplified form, while the visual area covers a space of 15 cm by 70 cm, representing the visual space on the hand and around the hand (extending by 2.5 cm on each side and 50 cm ahead).
Finally, for simplicity, a single inhibitory interneuron is considered in each hemisphere. The ratio of inhibitory interneurons to excitatory neurons (one to eight) agrees with values commonly found in the cortex, where inhibitory neurons amount to approximately 15% to 25% of excitatory neurons (Trappenberg, 2002).
According to equation 2.1, an external stimulus applied at the position x, y excites not only the neuron centered in that point but also the proximal neurons with RFs covering that position.
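The effect of equation 2.1 can be sketched with a Gaussian RF: a punctate stimulus drives most strongly the neuron centered at its position and, more weakly, the neighbors whose RFs overlap that position (the Gaussian form and values below follow the criteria of section 2.1.3 but are otherwise a reconstruction):

```python
import numpy as np

def rf_input(x, y, xc, yc, phi0=1.0, sigma=1.0):
    """Input produced by a punctate stimulus at (x, y) on a unimodal
    neuron with a Gaussian RF centered at (xc, yc). phi0 = 1 sets the
    input scale (section 2.1.3); sigma = 1 cm gives an RF roughly
    2 cm in diameter."""
    d2 = (x - xc) ** 2 + (y - yc) ** 2
    return phi0 * np.exp(-d2 / (2 * sigma ** 2))
```

A stimulus at the origin thus excites the neuron centered there maximally, its 0.5 cm neighbors almost as strongly, and distant neurons not at all.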
Activity of the unisensory neurons. The total input received by a generic neuron ij in the unisensory areas is the sum of four contributions:
The contribution due to the external stimulus (say, φij(t), since it depends on the RF Φij)
The contribution due to the lateral synapses linking the neuron with the other elements in the same area (say, λij(t), lateral)
The contribution due to the feedback excitatory projections from the multimodal neurons (say, βij(t), feedback)
The contribution due to the synapses from the inhibitory interneuron (say, γij(t), GABAergic interneurons).
Each contribution will be described below.
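The four contributions can be gathered in a single expression; the sketch below assumes that the lateral and feedback terms are weighted sums of the corresponding activities and that the interneuron contribution enters subtractively, with all weights illustrative:

```python
import numpy as np

def unisensory_input(phi, z_area, L_row, z_mult, B_row, z_inh, G):
    """Total input to one unisensory neuron as the sum of the four
    contributions listed above (all weights illustrative):
      phi     : external stimulus filtered by the neuron's RF
      lateral : from the other neurons of the same area (weights L_row)
      feedback: from the multimodal area (weights B_row)
      inhib   : from the local inhibitory interneuron (weight G),
                assumed here to enter with a negative sign."""
    lateral = float(np.dot(L_row, z_area))
    feedback = float(np.dot(B_row, z_mult))
    return phi + lateral + feedback - G * z_inh
```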
2.1.3. Parameter Assignment.
Parameter values of the complex nonlinear network are listed in Table 1. We assume that parameters of the left and right hemispheres have the same value in basal conditions. Criteria for parameter assignment are reported below.
Receptive fields of the unimodal neurons. Parameters Φ0t and Φ0v have been set to 1 to establish a scale for the inputs generated by the external stimuli. Standard deviations σ0t and σ0v have been assigned so that unimodal RFs are about 2 cm to 2.5 cm in diameter. This value is within the range of spatial resolution on the hand and arm (Kandel, Schwartz, & Jessell, 2000).
Lateral synapses in unimodal areas. Parameters characterizing the Mexican hat arrangement of lateral synapses (Λex, σex, Λin, σin), in both the tactile and visual areas, have been assigned to achieve a trade-off between excitation and inhibition satisfying the following criteria: (1) an external stimulus produces an activation bubble of unimodal neurons of approximately the same size as the RF, and (2) excitation must stay confined to avoid instability, that is, uncontrolled excitation that propagates to the overall area.
Synaptic connections among the areas. The parameters of the feedforward connections from unimodal areas to the multimodal area (W0t, W0v, σWt, σWv) have been assigned to satisfy the following requirements: (1) multimodal neurons have large RFs, several centimeters in diameter, that may even encompass the entire surface of the hand (Rizzolatti et al., 1981; Graziano et al., 1997; Iriki et al., 2001; Maravita & Iriki, 2004); and (2) a single unimodal stimulus may significantly excite multimodal neurons (whose RFs cover that position). Indeed, data on monkeys indicate that even a light touch of the skin or a spot of light on or near the body produces a considerable response in the bimodal neurons (Rizzolatti et al., 1981; Graziano et al., 1997; Duhamel et al., 1998).
The extension of the feedback synapses from multimodal to unimodal neurons (parameters σBt, σBv) has been set equal to the extension of the feedforward synapses. In this way, cross-modal facilitation (i.e., the reinforcement that a unimodal stimulus exerts on a stimulus of different modality presented on the same hand; see sections 3 and 4) occurs only in the case of spatial coherence between the stimuli (Macaluso et al., 2000). The strength of the feedback synapses (B0t, B0v) has been maintained as lower than that of the feedforward synapses to avoid activation of multimodal neurons resulting from a one-modality stimulus (e.g., tactile) that produces a phantom activation bubble in the other modality area (e.g., visual).
The weight of the cross-connections between the two hemispheres (parameter X0) has been chosen so that even the activation of a single multimodal neuron in one hemisphere (signaling the involvement of the contralateral perihand space) significantly excites the inhibitory interneuron in the other hemisphere. In this way, the model realizes a rivalry between the two hemispheres for peripersonal attention (Driver & Spence, 1998; Graziano & Cooke, 2006; Dambeck et al., 2006).
Finally, the strength of the inhibitory synapses (parameter Γ0) has been set small enough to allow a right-hand stimulus and a left-hand stimulus, applied simultaneously, to be perceived as in healthy subjects (Hillis et al., 2006).
Parameters of the individual neurons. The static sigmoidal relationship of unimodal neurons has been assigned an elevated lower threshold (so that unimodal neurons are activated only by sufficiently high inputs) and a smooth transition from silence to saturation. These characteristics help to maintain the stability of the unisensory areas.
The sigmoidal relationship of multimodal excitatory neurons has been set so that the neuron response rapidly decreases with a reduction in its input (e.g., when a visual stimulus is progressively moved away from one hand) (Colby, Duhamel, & Goldberg, 1993; Graziano et al., 1997; Duhamel et al., 1998).
In the model, the inhibitory interneuron in one hemisphere signals the activation of the multimodal area in the other hemisphere; hence, the corresponding sigmoidal function has been set to mimic an on-off behavior—that is, a negligible neuron activity at low input values and a rapid transition from silence to saturation.
For simplicity, we used the same time constant for the three types of neurons; its value (20 ms) is within the range of membrane time constants reported in the literature (Dayan & Abbott, 2001). Finally, the interhemispheric delay (D) reflects the time necessary for inhibitory projections to cross the corpus callosum, to be processed in the contralateral hemisphere and thus to become effective for the competition. Its value (34 ms) has been assigned in agreement with in vivo data (Kukaswadia, Wagle-Shukla, Morgante, Gunray, & Chen, 2005; Lee, Gunray, & Chen, 2007).
In order to test the behavior of the network, simulations were performed in different stimulation conditions: unilateral unimodal stimulation, unilateral cross-modal stimulation, and bilateral cross-modal stimulation.
2.2. The Simple Model.
In order to gain a deeper understanding of system behavior, as it arises from the interactions among the various areas, we performed a stability analysis using a simplified version of the model, which we refer to as the simple model. This simple model includes just one unit per area. Moreover, the sigmoidal relationship of each unit is approximated by means of a piecewise-linear curve with three regions: silent, central, and saturation. Finally, in order to simulate the role of lateral synapses within the unimodal areas, each unimodal unit contains a self-connection.
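The three-region piecewise-linear characteristic can be written as follows (threshold, slope, and saturation level are arguments; the values in Table 2 are not reproduced here):

```python
def piecewise_linear(u, theta, S, z_max=1.0):
    """Three-region approximation of the sigmoid used in the simple
    model: silent below the threshold theta, linear with slope S in the
    central region, saturated at z_max above it."""
    if u <= theta:
        return 0.0
    return min(S * (u - theta), z_max)
```

Within the central region the unit behaves linearly, which is what makes the eigenvalue analysis of section 3 tractable.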
For simplicity, in the model presented here, the slopes, Sj, the thresholds, ϑj, the feedback weights, B, the inhibitory weights, G, and the competition weights, X, differ neither between the two hands nor between the two unimodal areas. By contrast, the feedforward weights, Wkj, and the self-connection weights, Lkj, may differ between the two hands and between the two unimodal areas. In particular, the case of an RBD patient with left tactile extinction is simulated by reducing the synapses emerging from the right unimodal tactile area (i.e., parameters WRt and LRt; see section 3).
Parameters of the piecewise-linear curve were chosen to approximate the sigmoidal relationships of the complex model as closely as possible. The weights in the simple model under normal conditions were assigned to ensure that all units can work in the linear region (i.e., a progressive change in one input, which brings one unit from the silent to the saturation region, does not cause a sudden saturation or inhibition of other units). The parameter values for the simple model are listed in Table 2.
In this section, we first present the results of the stability analysis performed by using the simple model; then simulation results of the complex nonlinear model are presented and related to the simple network analysis.
3.1. Stability Analysis by Using the Simple Model.
The stability analysis has been performed assuming that all units of the simple model are working in the central (linear) region of their static characteristic. When the system becomes unstable, at least one unit abruptly shifts to the silent or the saturation region. Network stability has been analyzed in different stimulation conditions, and the network properties emerging from each analysis have been outlined.
3.1.1. Unilateral Unimodal Stimulation.
If either B = 0 or Wj = 0, the feedback loop is cut, and the two units have independent dynamics. In this case, the multimodal unit is always stable (eigenvalue = −1), while the unimodal unit is stable provided Lj · Su < 1. The latter inequality means that excessive autoexcitation in the unimodal area may induce an uncontrollable activity, with the attainment of maximal saturation in response to any suprathreshold stimulus.
In the following, we will always assume Lj · Su < 1 for all unimodal units.
Hence, the following factors contribute to reduce the stability margin of the system: elevated values of the loop gain (B · Wj), of the slope of the sigmoidal relationships, or of the autoexcitation. If the stability condition does not hold, any suprathreshold stimulus drives either the unimodal unit or the multimodal unit directly to the saturation region (depending on the values of B and Wj).
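These observations can be checked numerically on a reconstruction of the linearized loop. The Jacobian below, with time in units of the membrane time constant, is inferred from the description in the text rather than copied from the letter:

```python
import numpy as np

def loop_eigs(L, B, W, Su, Sm):
    """Eigenvalues of the linearized unimodal-multimodal loop
    (a reconstruction; time in units of the membrane time constant):
        dz_u/dt = -z_u + Su * (L * z_u + B * z_m + input)
        dz_m/dt = -z_m + Sm * (W * z_u)"""
    A = np.array([[-1.0 + L * Su, B * Su],
                  [W * Sm, -1.0]])
    return np.linalg.eigvals(A)
```

In this sketch the loop is stable when B · W · Su · Sm < 1 − L · Su (positive determinant of the Jacobian), so raising the loop gain, the slopes, or the self-excitation shrinks the stability margin, consistent with the statement above.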
3.1.2. Unilateral Cross-Modal Stimulation.
This is the same condition obtained above (see equation 3.3), but with an increased loop gain due to cross-modal facilitation.
By contrast, if the parameters of the two unimodal areas are different (i.e., when Lv · Su ≠ Lt · Su), the eigenvalue expression becomes very complex (not given here for brevity). However, the latter condition is of clinical importance since pathological subjects, such as patients with RBD, may be characterized by a deficit in one unimodal area compared with the other. In case of an RBD patient with a tactile deficit, for instance, the situation in the left hand may be simulated by reducing parameters Lt and Wt while maintaining parameters Lv and Wv at the normal value. Although the eigenvalue expression is much more intricate, the three eigenvalues are real, and just one of them can become positive by increasing the overall loop gain (i.e., B · (Wt + Wv)).
An example of the dependence of this eigenvalue on the feedback weight B is shown in Figure 3A for a healthy subject (assuming identical parameters in the visual and tactile areas) and an RBD patient (assuming decreased parameters for the tactile area; see the figure legend). The system becomes unstable for approximately B > 1 in the healthy case and approximately B > 1.5 in the pathological case.
How does instability manifest itself in this network? Two examples are shown in Figure 3B, both referring to a healthy subject. In the upper plots, we used a feedback weight B = 0.8, which warrants stability. In the lower plots, we used B = 1.2, which causes instability. In all simulations, we used an input it = 10 in the tactile area (this value is under threshold; hence, without any visual input, the tactile unit works in the silent region), and we changed the input to the visual area from 14 to 18. The steady-state outputs of the three units, normalized with respect to the maximum activation value, are shown for each input level. In the stable condition (weaker feedback, upper plots), when the input iv becomes approximately greater than 16.7, the feedback from the multimodal unit evokes a suprathreshold activity in the tactile unit. In the range 16.7 < iv < 17.7, the three units work in the linear region. By contrast, in the unstable condition (stronger feedback, lower plots), as soon as the feedback from the multimodal unit evokes a suprathreshold activity in the tactile unit (iv > 15.4), the multimodal unit jumps to the saturation region. This is reflected in an abrupt increase in the activities (zv and zt) of the two unimodal units.
In conclusion, in the case of unilateral cross-modal stimulation, elevated values of the feedback gain and, consequently, of the overall loop gain B · (Wt + Wv) lead to instability and favor the facilitatory interaction, that is, the improvement of unisensory perception by cross-modal stimulation (compare the upper plots with the lower plots in Figure 3B). In pathological conditions (e.g., a tactile deficit, which may correspond to a decrease in Lt and Wt), higher values of the feedback gain are necessary to obtain instability (see condition 3.6 and Figure 3A) and a significant reinforcement of unisensory activation.
3.1.3. Bilateral Stimulation.
As a last example, let us consider the case of two inputs occurring on the two different hands. In this situation, a competition, mediated by the inhibitory units, occurs between the two peripersonal space representations. The stability analysis requires the study of three units for each hand: the stimulated unimodal unit, the multimodal unit, and the inhibitory unit activated by the opposite hand representation. The analytical expression of the six eigenvalues is quite intricate and is not given here for brevity. Their analysis reveals that the system may become unstable if the overall inhibitory gain G · X is increased (where, according to equations 2.23–2.26, G represents the weight from the inhibitory unit to the unimodal area and X is the weight from the multimodal unit in the contralateral hand to the ipsilateral inhibitory unit).
A summary of the results is given in Figure 4A. These plots represent the real part of the six eigenvalues, computed for different values of the inhibitory weight G. The continuous line was computed using the value X = 1 for the competition weight (weaker competition). The dashed line was computed using the value X = 8 (hence, assuming a much stronger competition between the two hands). In case of weaker competition, only the eigenvalue λ4 may assume a positive real part (when approximately G > 1). In case of strong competition, the eigenvalue λ4 assumes a positive real part at a lower value of G (G > 0.15), but the eigenvalues λ1 and λ3 also assume a positive real part provided G > 1.25.
In order to understand the corresponding behavior of the network, Figure 4B shows the steady-state response of the six units (normalized to the maximum activation), computed assuming a tactile input value of 21 for the left hand and varying the visual input value to the right hand between 20.5 and 21.5. The inhibitory weight G was set at 1.5, which causes network instability. The case with weak competition (X = 1) is depicted with the continuous black line, while the dashed gray line shows the strong competition case (X = 8). In the first case, a winner-takes-all (WTA) instability affects the inhibitory units only: the inhibitory unit in the hand with the stronger input is silent, while the other works in the linear region. The four units in the unimodal and multimodal areas all work in the linear region. By contrast, in the strong competition case (remember that two additional eigenvalues assume a positive real part), the WTA instability affects the multimodal units too: only the multimodal unit in the hand with the stronger input can be active; the other is completely inhibited. This means that only one peripersonal hand representation can be active. Nevertheless, both unimodal units work in the linear region. This signifies that, in agreement with neuroimaging data (Sarri et al., 2006), moderate activity can be present in the unimodal area despite the absence of a multimodal representation of the peripersonal space.
Finally, we investigated how the addition of a time delay in the interhemispheric connection may affect the interaction between the two hands. We included a pure delay (D = 30 ms) in the cross-connections between the hemispheres and repeated the same simulations as before using X = 8 (strong competition). The presence of the delay introduces a transient oscillatory phase in each of the six examined units before a WTA instability manifests in the multimodal and inhibitory neurons (i.e., damped oscillations occur). By contrast, the transient response is negligible in the case of no delay. To show these results, for each simulation (left tactile input = 21 and right visual input varying between 20.5 and 21.5), we computed the average activity of each of the six units in the first 1000 ms of the simulation. The results are reported by the dash-dot black line in Figure 4B. It is worth noting that the multimodal representations of the two hands coexist during the phasic response for moderate differences in the input signals, despite the presence of a strong competition between the two hands. Hence, the presence of a time delay favors the simultaneous representation of the two hands, at least through the phasic activation of the neurons. The longer the time delay, the longer the transient oscillatory phase during which the two representations coexist.
This last analysis suggests that in the case of weak competition, two peripersonal space representations may coexist (despite different activation in the two hands), and a WTA instability affects the inhibitory units only. In case of strong competition without delay, only the stronger multimodal representation may be active, while the multimodal unit in the less stimulated hand is completely inhibited. However, a moderate unimodal representation may be preserved even in the weaker hand. Hence, WTA instability affects both the inhibitory units and the multimodal units but not the unimodal areas. A coexistence between the two peripersonal representations may be favored by the presence of a time delay between the two hands.
All the previous results, which emerged from the eigenvalue analysis of the simple model, are recovered in the behavior of the complex model as well.
3.2. Behavior of the Complex Nonlinear Model.
We first present the results of model simulations using basal values for all parameters (see Table 1), which mimic a healthy subject. Then the responses of RBD patients suffering from extinction are reproduced by modifying some model parameters in the right hemisphere (as in the simple model). The results shown in Figures 5 to 10 refer to steady-state conditions; that is, they show the network activity attained after the transient response to the stimulation has died out.
3.2.1. Healthy Subject: Unilateral Unimodal Stimulation.
Figure 5 shows the neural activity in the unimodal areas, and the response of the eight multimodal neurons in the right hemisphere, in four different conditions of left-hand stimulation: application of a tactile stimulus (see Figure 5A); application of a visual stimulus on the hand (see Figure 5B); application of a visual stimulus near the hand (see Figure 5C); and application of a visual stimulus far from the hand (see Figure 5D). In all four cases, the stimulus produces the activation of a bubble of neurons in the corresponding unimodal area, with most of the activated neurons reaching the saturation level. Multisensory neurons are triggered in three of the four examined conditions: they respond to the tactile stimulus and to the visual stimulus placed on the hand or in the space immediately surrounding it. On the contrary, a visual stimulus distal from the hand does not induce any significant response. It is worth noting that the eight multimodal neurons are not triggered together during involvement of the peri-hand space; they are selectively activated depending on whether the initial or terminal portion of the hand is involved (see also the activity index values, AI, in the figure). Nevertheless, most of the activated multimodal neurons attain the saturation value. This is due to the high overall loop gain between the unimodal and multimodal areas (see the stability condition expressed by equation 3.3). This setting has been intentionally included in the model in order to obtain an on-off behavior of the overall multimodal area. In this way, the attainment of saturation by even a single multimodal neuron signals the involvement, visual or tactile, of the peri-hand space.
3.2.2. Healthy Subject: Unilateral Cross-Modal Stimulation (Facilitatory Interaction).
Figure 6 shows an example of modulation of unisensory perception by cross-modal stimulation. In Figure 6A, the network is stimulated by a tactile input of low intensity applied on the left hand (only the right hemisphere network is displayed). The stimulus produces a weak activation of a single neuron in the tactile area, which is not able to trigger the surrounding neurons via lateral synapses (AI ≈ 0.3 in the tactile area). This unimodal activity is not sufficient to evoke a response in the multimodal neurons, which remain silent. In Figure 6B, the tactile stimulus is coupled with a weak visual stimulus delivered at the same location on the hand. The final outcome of the cross-modal stimulation is strong activation in the multisensory neurons and a remarkable reinforcement of unimodal activity (AI ≈ 22), which now shows a wide activation bubble. The cross-modal influence of vision on touch and vice versa occurs via the feedback projections from the multisensory neurons to the unisensory areas. This result resembles that illustrated in Figure 3B obtained with the simple model in case of a strong feedback leading to system instability.
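The facilitatory mechanism can be illustrated with a minimal three-unit sketch: one lumped tactile unit, one visual unit, and one multimodal unit that pools them and feeds back. The sigmoid, weights, and inputs below are illustrative assumptions (they are not the network equations or parameter values of the paper); the sketch only shows how feedback from the multimodal unit lets a cross-modal pairing amplify an otherwise weak tactile response:

```python
import numpy as np

def sigmoid(x, theta=3.0):
    return 1.0 / (1.0 + np.exp(-(x - theta)))

def steady_state(I_t, I_v, W=3.0, B=2.0, tau=1.0, dt=0.05, T=80.0):
    """Three lumped units: tactile (t), visual (v), multimodal (m).
    m pools the unimodal rates through weight W and projects back with
    weight B.  All numbers are illustrative, not the paper's values."""
    t = v = m = 0.0
    for _ in range(int(T / dt)):          # Euler integration to steady state
        t += dt / tau * (-t + sigmoid(I_t + B * m))
        v += dt / tau * (-v + sigmoid(I_v + B * m))
        m += dt / tau * (-m + sigmoid(W * (t + v)))
    return t, v, m
```

With these numbers, a weak tactile input alone leaves the multimodal unit only weakly active, whereas the same tactile input paired with a spatially congruent visual input drives all three units to a markedly higher fixed point, mirroring the cross-modal reinforcement of Figure 6.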
3.2.3. Healthy Subject: Bilateral Stimulation.
The interaction between right- and left-hand representations is depicted in Figure 7 for a healthy subject (all parameters are set at their basal values). In Figure 7A, a tactile stimulus is applied on the left hand (right hemisphere) and a simultaneous visual stimulus of the same strength on the right hand (left hemisphere). Each stimulus boosts the multisensory representation of the corresponding hand (via activation of the multimodal neurons), leading to competition between the two representations (via activation of the inhibitory interneurons). Despite the inhibitory action of the interneurons, each unimodal area remains sufficiently activated to maintain excitation of the corresponding multimodal area. According to the results of the reduced model, coexistence of the two representations is facilitated by the presence of the pure delay in the connection between the two hands (see also the subsequent results of the sensitivity analyses).
Figure 7B elucidates the effects of the competition. The two plots present a magnified image of the activation bubble in the right-hemisphere tactile area in the two conditions: left-hand tactile stimulus presented alone (left plot, as in Figure 5A) and left-hand tactile stimulus coupled with a right-hand visual stimulus (right plot, as in Figure 7A). In the latter condition, the activation bubble is reduced (AI = 18 versus 22) because of the inhibitory influence from the multisensory neurons in the left hemisphere.
3.2.4. RBD Patient: Unilateral Stimulation.
In order to simulate RBD patients suffering from left tactile extinction, we decreased the strength of all synapses originating from tactile unimodal neurons in the right hemisphere. Of course, this change may not really reflect synaptic depression, but rather a decrease in the number of effective neurons that contribute to the activity in the right somatosensory area (see Section 4 for more detail). To this end, we reduced the strength of the lateral excitation in the tactile unimodal area (Λt,Rex) and the weight of the feedforward synapses from the unimodal tactile area to the bimodal area (Wt,R0). The modified values for these parameters are reported in Table 3. As a consequence of these parameter changes, a tactile stimulus on the left hand activates a weak neural representation of the hand. This result is presented in Figure 8, which simulates the application of an isolated left tactile stimulus in an RBD patient. The activation bubble in the tactile area is significantly reduced with respect to a healthy subject (compare Figure 5A), involving only nine neurons (AI = 8.5 versus AI = 22 in healthy conditions); this degraded tactile activity evokes the response of a few multimodal neurons (three versus five obtained in a healthy subject).
Table 3: Parameters modified in the right hemisphere to simulate an RBD patient: Λt,Rex = 2.3; Wt,R0 = 0.8.
3.2.5. RBD Patient: Bilateral Stimulation (Cross-Modal Extinction).
Figure 9A shows the same stimulation conditions as Figure 7A (left hand tactile stimulus and right hand visual stimulus) applied to an RBD patient. The activation of the multimodal neurons in the left hemisphere (evoked by the visual stimulus) competes with that in the right hemisphere (evoked by the tactile stimulus) through the inhibitory interneurons. In this case, the competition is uneven: right hemisphere activation is weakened by the lesion; thus, the ipsilesional stimulus has a higher competitive strength than the contralesional stimulus. The final outcome of the network is weak activity in the right hemisphere tactile area (only the central neuron remains slightly activated; AI = 0.32) and deactivation of multimodal neurons. Hence, only the right hand representation survives.
3.2.6. RBD Patient: Bilateral Stimulation (Cross-Modal Facilitation).
Behavioral studies in RBD patients indicate that under conditions of bilateral stimulation, left tactile stimulus detection is improved by a simultaneous left visual stimulus. This situation is simulated in Figure 10, where a visual stimulus is applied on the right hand and a double stimulation (tactile and visual) is delivered to the left hand. In this condition, the left tactile stimulus is not extinguished thanks to the presence of the left visual stimulus, which sustains the activation of the multisensory neurons; the latter, in turn, reinforce the tactile area activity via feedback projections. Consequently, an activation bubble survives in the right tactile area (AI ≈ 4), and right and left hand representations coexist.
3.3. Sensitivity Analyses in the Complex Nonlinear Model.
This section presents the results of sensitivity analyses on some key parameters of the complex network.
3.3.1. Parameters Affecting Cross-Modal Facilitation.
Analysis of the simple model has highlighted the role of the feedback gain in facilitatory interaction (see Figure 3) and suggests that other parameters (e.g., lateral excitation) may also affect facilitation. Here, we extended the analysis to the complex network to achieve further insight into the underlying mechanisms.
Figure 11A presents the results of cross-modal stimulation to the left hand in the healthy subject (basal values of parameters) at different values of the visual input. In particular, we applied a tactile input of strength It0 = 0.8 (a value well below threshold) and changed the input to the visual area from 0.8 to 2 (the visual and tactile stimuli are applied in the same position). The feedback gain is maintained at its basal value. The three plots display the maximal steady-state activity reached in each of the three areas (tactile, visual, and multimodal) as a function of the visual input value. In this condition, the application of a strong visual stimulus (approximately above 1.7) drives multimodal neurons to saturation and activates the tactile neurons via the feedback input. Activity in the tactile area reaches saturation because of the strong lateral excitation.
The simulations performed in Figure 11A have been repeated in lesion conditions (reduced values of the lateral excitatory synapses in the right tactile area: Λt,Rex = 2.3 versus 2.7). All other parameters are maintained at their basal values. Results are reported in Figure 11B. These results highlight the important role played by unimodal lateral excitation in facilitatory interaction. Indeed, when facilitation occurs (i.e., for visual inputs above 1.7), the amount of reinforcement in the unimodal tactile area is much lower than in intact conditions (compare the left plots of Figures 11A and 11B). This is a consequence of the reduced strength of the lateral excitatory synapses in the tactile area, which are unable to raise neuron activity up to the saturation level.
In Figure 11C, the same simulations as in Figure 11B (that is, with impaired lateral excitation in the right hemisphere) were repeated using a higher value for the feedback gain (Bt0 = Bv0 = 3). Increasing the strength of the feedback connections reduces the value of the visual stimulus at which facilitation occurs (1.6 versus 1.7); moreover, the strong feedback input drives tactile neurons to maximal activation despite the reduction in lateral excitation.
3.3.2. Parameters Affecting Competition.
Results obtained by the study of the simplified network (see in particular Figure 4) suggest that in basal conditions (with left and right hemispheres having the same parameter values), coexistence of both hand representations in bilateral stimulation depends mainly on the strength of the inhibitory gain between the two hemispheres and on pure delay in interhemispheric connections. In order to clarify the role of these parameters in the complex model, we performed a sensitivity analysis on each of them.
In Figure 12A, we simulated bilateral cross-modal stimulation, with the left tactile stimulus It0= 2 and the right visual stimulus Iv0= 2.6, using different values for the weight of inhibitory synapses. All other model parameters are preserved at their basal values (see Table 1). The two plots display the steady-state activities of the unimodal and multimodal neurons on which the two stimuli are centered. Values of inhibitory synapses below 1.9 enable the coexistence of the two representations; for higher values, the representation of the left hand, which receives weaker stimulation, is suppressed. Suppression of the left touch is characterized by a complete silence of both the unimodal and multimodal areas in the corresponding hemisphere.
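The role of the inhibitory weight can be illustrated with a two-unit competition sketch: two lumped hand representations with reciprocal inhibition of weight G, the stronger input standing for the right visual stimulus. The sigmoid and all numerical values are illustrative assumptions, not the paper's equations; the sketch only shows the transition from coexistence to suppression as G grows:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))

def final_rates(G, I1=1.5, I2=1.4, tau=0.02, dt=0.001, T=1.0):
    """Two lumped hand representations r1, r2 with reciprocal inhibition
    of weight G; I1 > I2 mimics the more strongly stimulated hand.
    All numbers are illustrative."""
    r1 = r2 = 0.0
    for _ in range(int(T / dt)):          # Euler integration to steady state
        r1, r2 = (r1 + dt / tau * (-r1 + sigmoid(I1 - G * r2)),
                  r2 + dt / tau * (-r2 + sigmoid(I2 - G * r1)))
    return r1, r2
```

For small G both representations settle at substantial activity (coexistence); above a critical G the weakly stimulated one is driven essentially to zero, qualitatively matching the suppression threshold of Figure 12A.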
Figures 12B and 12C elucidate the role of the interhemispheric delay. In Figure 12B, the same bilateral stimulation as in Figure 12A was performed, now maintaining the inhibitory gain at its basal value (i.e., 1.8) and using different values for the pure delay. If the delay is decreased below 33 ms, the left tactile stimulus is extinguished and only the right one survives. To better understand the role of the delay, Figure 12C presents the time response of the involved unimodal and multimodal areas for some values of the delay. It is worth noting that the response to the right visual stimulus precedes the response to the left touch because of its higher value. A gradual rise of the time delay from 26 to 33 ms progressively lengthens the transient response to the weaker stimulation before it is extinguished by the stronger one. A further increase of the time delay above 33 ms allows a multimodal response to the weaker stimulus to survive permanently. In these conditions, the unimodal neurons have enough time to reach an elevated activation level (thanks to the lateral excitation) before inhibition starts its action: inhibition slows the unimodal activation but is not able to suppress it.
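The delay effect can be sketched by adding a pure transmission delay to the mutual inhibition between two lumped units, implemented with a history buffer. As before, the sigmoid, weights, and delay values are illustrative assumptions rather than the paper's parameters; the sketch only shows that a longer delay lets the weakly stimulated representation grow uninhibited for longer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))

def simulate(D, I1=1.5, I2=1.4, G=2.5, tau=0.02, dt=0.0005, T=0.6):
    """Two lumped multimodal units with mutually inhibitory connections
    carrying a pure delay D (in seconds).  Zero history is assumed before
    t = 0; all parameter values are illustrative."""
    n = int(T / dt)
    d = max(1, int(round(D / dt)))        # delay expressed in time steps
    r1 = np.zeros(n)
    r2 = np.zeros(n)
    for k in range(n - 1):
        r1d = r1[k - d] if k >= d else 0.0   # delayed activities
        r2d = r2[k - d] if k >= d else 0.0
        r1[k + 1] = r1[k] + dt / tau * (-r1[k] + sigmoid(I1 - G * r2d))
        r2[k + 1] = r2[k] + dt / tau * (-r2[k] + sigmoid(I2 - G * r1d))
    return r1, r2
```

With a longer delay, the more weakly stimulated unit reaches a substantially higher transient peak before inhibition from the stronger one arrives, consistent with the lengthened coexistence window described in the text.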
3.3.3. Parameters Affecting RBD Patient.
Two parameters in the right hemisphere (Λt,Rex, Wt,R0, representing lateral and feedforward synaptic strength from tactile unimodal neurons) have been reduced to simulate RBD patients with left tactile extinction. The specific role of each of these two parameters has been elucidated in the following sensitivity analyses (see Figure 13). In Figure 13A, we simulated the condition of a left-hand tactile stimulation with a simultaneous right-hand visual stimulation of the same intensity (It0 = Iv0 = 2.6) by using different values of parameter Λt,Rex, while maintaining parameter Wt,R0 at its reduced pathological value (Wt,R0 = 0.8). Parameter Λt,Rex has been progressively reduced from the basal value 2.7 to the value 2 (at which Λt,Rex = Λt,Rin, so that the effect of lateral excitatory synapses is cancelled by lateral inhibition). In order to obtain left tactile extinction, the reduction in parameter Wt,R0 has to be associated with a decrease of parameter Λt,Rex below 2.5. In these conditions, only the central tactile neuron remains slightly activated in the right hemisphere, while the activity of the multimodal neurons is annihilated (see Figure 9). On the contrary, for values above 2.5, a balanced coexistence of the two stimuli (with a unimodal activation bubble in both hemispheres) occurs.
In Figure 13B, the same simulation has been repeated by progressively reducing parameter Wt,R0 from its basal value 2 to the value 0.2, while maintaining parameter Λt,Rex at the pathological value (Λt,Rex= 2.3). Reducing only parameter Λt,Rex is not sufficient to produce left tactile extinction: only for values of Wt,R0 below 1.4 is the left tactile stimulus extinguished.
According to previous results, reproduction of left tactile extinction requires simultaneous decrease of these two parameters. However, even under these conditions, extinction or survival of the left touch is strongly dependent on the inhibitory synapses. Figure 13C displays the responses to the same bilateral stimulation as in Figure 13A and 13B, in case of reduced right tactile synapses (Λt,Rex= 2.3; Wt,R0= 0.8), and by using different values of the inhibitory weights. Extinction does not occur for values of inhibitory synapses below 1.2, despite the reduction in the other two parameters.
4. Discussion.
Investigation of the neural mechanisms underlying the representation of external space has been gaining increasing relevance. The brain appears to construct multiple and functionally segregated representations of space; these include at least personal, peripersonal, and extrapersonal space (Gross & Graziano, 1995; Rizzolatti, Fadiga, Fogassi, & Gallese, 1997). Recent results from different approaches, involving electrophysiology, psychology, and neuropsychology, converge in indicating that the representation of peripersonal space is constructed by integrating multisensory inputs, plausibly via multisensory neuronal populations in prefrontal, parietal, and subcortical regions, which respond both to tactile stimuli on a particular body part and to visual stimuli in the surrounding external space (Rizzolatti et al., 1981; Graziano et al., 1997; Duhamel et al., 1998). Intrinsically linked with the mechanisms of internal representation of space is the process of spatial attention (Driver & Spence, 1998; Eimer & Schröger, 1998); it has been shown that attention to the space near the body operates cross-modally and not merely within single sensory modalities. Cross-modal interaction in spatial attention has been related to the activation of multimodal neural structures (Driver & Spence, 1998; Bremmer et al., 2001; Eimer & Van Velzen, 2002; McDonald, Teder-Sälejärvi, Di Russo, & Hillyard, 2003).
Despite the recent advances in the comprehension of peripersonal space representation, several important questions still remain open: What is the organization of the neural circuitry underlying peripersonal space representation? How does it relate to the multimodal neurons identified by electrophysiological studies? How can information from unimodal areas be conveyed into multisensory neurons? Can multimodal structures influence unimodal activity (e.g., via feedback projections)? How can two simultaneous spatial representations interact? Which are the alterations in the neural circuitry that may explain extinction in brain-damaged patients? Answering these questions may be important for both theoretical and clinical knowledge.
Neural network models and computer simulation techniques may offer important contributions to these questions and provide deeper insight into the neural mechanisms at the basis of peripersonal space coding. This rationale has inspired the work presented here. In particular, our model focuses on the visual-tactile representation of the peripersonal space around the two hands and on the interaction between the representations of the two sides of space. It is worth noting that the proposed model does not aspire to reflect in detail all neurophysiological and neuroanatomical knowledge gathered on the subject, but rather to identify a plausible structure of the network and the functional links between its parts to account for psychophysical and behavioral results. In particular, two points deserve attention.
First, in our present model, a single neural unit should not be considered as an individual cell, but rather as a representation of a pool of cells, with the RF approximately located in the same position. Similarly, synaptic weights do not represent the strength of individual synapses, but rather summarize the overall effect of a pool of neurons (e.g., the synaptic strength multiplied per the number of the involved cells).
Second, we did not face the problem of coordinate transformations between different reference frames; we just assumed that an upstream process converts the visual input from eye-centered to hand-centered coordinates. This assumption is justified since we considered fixed postural conditions in the model. Thus, we did not deal with the complex problem of coordinate transformation and focused on other relevant unanswered questions. This approach also differentiates our work from previous works. Indeed, the problem of coordinate transformations (in which parietal neurons are thought to be primarily involved) has been widely investigated in previous studies by means of neural network models, which we briefly discuss.
Andersen and colleagues set up three-layer feedforward networks, where the parietal cortex behaved as the hidden layer performing coordinate transformations from two or more input maps (e.g., retinal stimulus location, eye position, head position) into different reference frames (e.g., eye centered, head centered, body centered) (Zipser & Andersen, 1988; Xing & Andersen, 2000). In these models, the hidden units showed properties similar to those measured experimentally in some parietal neurons, such as gain field modulation and partially shifting RFs (Andersen, Essick, & Siegel, 1985; Duhamel, Bremmer, Ben Hamed, & Graf, 1997; Avillac, Denève, Olivier, Pouget, & Duhamel, 2005). In the model of Salinas and Abbott (1995), a sensory array mimicking parietal neurons drove a motor array through a set of synaptic connections. The sensory neurons had gaussian retinal RFs modulated by gaze direction. After training, the motor array represented the target location in head-centered coordinates, and the motor units showed head-centered visual RFs, in agreement with the existence of cells in the premotor cortex with RFs anchored to parts of the body (Graziano, Yap, & Gross, 1994; Graziano & Gross, 1995). The approach proposed by Pouget and Sejnowski (1995, 1997) postulates that the parietal cortex performs coordinate transformations by computing basis functions of its sensory and postural inputs. This basis function model was also used to simulate unilateral spatial neglect (Pouget & Sejnowski, 2001) in the visual modality by assuming unilateral lesions in the parietal maps of the network. In subsequent papers (Denève, Latham, & Pouget, 2001; Pouget, Deneve, & Duhamel, 2002; Avillac et al., 2005), the basis functions approach was extended to include attractor dynamics in order to solve statistical issues in multisensory integration. To this end, recurrent connections were made between parietal neurons and the afferent areas coding inputs of different modalities (sensory and postural).
The networks were able to perform both cue integration (visual-auditory-postural in Denève et al., 2001, and visual-tactile-postural in Avillac et al., 2005) and coordinate transformations with noisy neurons. Moreover, in these models, the basis function units showed partially shifting RFs similar to those found in the ventral intraparietal area. The previous models have helped to clarify how neurons in the parietal cortex contribute to coordinate transformations and how their response properties (e.g., gain field modulation and partially shifting RFs) subserve the needed computation. Furthermore, the studies by Pouget and colleagues share several features with our model, such as the presence of recurrent connections between the unimodal and the multimodal areas, the topological organization of the unimodal layers, the issue of cue integration (Denève et al., 2001; Pouget et al., 2002; Avillac et al., 2005), and the attempt to provide links between response properties of single cells and the behavioral responses of patients with spatial deficits (Pouget & Sejnowski, 2001). However, they neglect important issues of spatial representation that we address in our study, such as the segregation between near and far space representation, the involvement of bimodal neurons with visual RFs anchored to parts of the body in peripersonal space representation, the attentional competition between the two hands, and intermodality interactions (facilitatory or inhibitory) in spatial deficits.
The basic idea inspiring the model is that the involvement of the perihand space by an event or object is signaled through the activation of multimodal visual-tactile neurons, which integrate somatosensory inputs from the hand and visual inputs from the space immediately surrounding the hand. This concept has been formalized in our model by a multimodal layer connected to two upstream layers—one devoted to the somatotopic representation of the hand and the other to the coding of the visual space in hand-centered coordinates.
The multimodal layer of the model simulates the role of multisensory neurons, responding to both visual and somatosensory stimuli in body-part-centered coordinates, found in several areas of the macaque brain (Hyvärinen, 1981; Rizzolatti et al., 1981; Graziano & Gross, 1995; Fogassi et al., 1996; Graziano et al., 1997; Duhamel et al., 1998) and related to multisensory regions of the human brain (Bremmer et al., 2001; Calvert, 2001; Galati et al., 2001; Lloyd et al., 2003; Macaluso & Driver, 2005; Swisher et al., 2007). The two upstream layers account for the somatosensory and visual inputs from primary and secondary cortical areas, which converge into the multisensory areas through different pathways (Rizzolatti et al., 1981; Graziano et al., 1997; Duhamel et al., 1998; Hihara et al., 2006). Moreover, they account for the process of coordinate transformation of the visual target from a retinotopic to a hand-centered reference frame, presumably involving parietal neurons (Avillac et al., 2005). However, we explicitly did not establish any exact anatomical location for these areas of neurons (i.e., the model does not reflect any definite anatomical structure).
Two sets of multimodal neurons (with the corresponding upstream areas) have been considered in the model in order to account for both hemispheres (i.e., both hands). Then several different synaptic connections have been introduced within this structure. In the following, the role of each mechanism included in the network is critically discussed on the basis of the results we obtained and of in vivo data. To elucidate the specific implication of each model component in network behavior, we used both analytical and computer simulation techniques. In particular, the preliminary study performed with the simplified model has identified some key parameters that mainly affect the stability and behavior of the network; this study has been useful for a deeper interpretation of the results obtained with the complex network and has inspired further analyses on network parameters.
Lateral synapses. Unimodal areas are made of matrices of units whose RFs topographically map the hand surface and surrounding space. The units in these areas are connected to each other by lateral synapses modeled according to a Mexican hat disposition (a central excitatory area surrounded by an inhibitory annulus). This choice is justified by the fact that short-range excitation and long-range inhibition among neurons, with a spatial function similar to that of a Mexican hat, is a pattern of connectivity that recurs frequently throughout the brain (Rolls & Deco, 2002; Thivierge & Marcus, 2007). Moreover, we believe that this kind of connectivity plays a central role in the formation of topographically organized maps that can be found at different processing stages in the cortex (Thivierge & Marcus, 2007). In the model, these connections are fundamental to sustain activation in the unimodal areas in response to a stimulus and to produce a sufficiently extended activation bubble. Moreover, lateral excitatory synapses in unimodal layers play an important role in facilitatory interaction.
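A Mexican hat lateral-weight profile of this kind is commonly written as a difference of Gaussians. The amplitudes and widths below are illustrative assumptions, not the values used in the model; the sketch only shows the characteristic central excitation surrounded by an inhibitory annulus:

```python
import numpy as np

def mexican_hat(n=21, A_ex=1.0, s_ex=1.0, A_in=0.5, s_in=3.0):
    """Lateral-weight profile as a difference of Gaussians: short-range
    excitation minus broader, weaker inhibition.  Amplitudes and widths
    are illustrative, not the paper's parameter values."""
    d = np.arange(n) - n // 2              # distance from the central unit
    excit = A_ex * np.exp(-d**2 / (2 * s_ex**2))
    inhib = A_in * np.exp(-d**2 / (2 * s_in**2))
    return excit - inhib
```

The resulting kernel is positive near the center and negative at intermediate distances, which is what lets a localized stimulus recruit its neighbors into an activation bubble while suppressing more distant units.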
Feedforward synapses. The feedforward synapses connecting unimodal to multimodal neurons affect the behavior and properties of multimodal neurons. Thanks to the gaussian arrangement of these synapses, multimodal neurons integrate information across homologous spatial locations in the two unisensory maps and create a multimodal map where visual and tactile receptive fields are aligned. By appropriately setting the standard deviation of the synapses, broad RFs—largely wider than unisensory neurons RFs—can be obtained for the multimodal neurons (see Figures 5A and 5B). As a direct consequence of this disposition, the multimodal area still responds to visual stimuli near the hand (provided they are applied approximately within three standard deviations from the hand) and remains silent with more distant stimuli (see Figures 5C and 5D). All these characteristics (RFs alignment in different modalities, wide dimension of RFs, visually related activity degrading with the distance from the body) have been extensively documented by electrophysiological studies on multimodal neurons in the parietal and frontal lobes of the macaque brain (Rizzolatti et al., 1981; Colby et al., 1993; Graziano et al., 1997; Duhamel et al., 1998; Iriki et al., 2001).
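The RF-widening effect of the gaussian feedforward fan-in can be sketched in one dimension. If the static nonlinearity is ignored, the multimodal RF is approximately the convolution of the unimodal RF with the gaussian weight profile, so its width grows with the weight spread; the widths below are illustrative assumptions, not the model's values:

```python
import numpy as np

def rf_std(profile, x):
    """Standard deviation of a non-negative tuning profile over x."""
    p = profile / profile.sum()
    mu = (p * x).sum()
    return np.sqrt((p * (x - mu) ** 2).sum())

x = np.linspace(-20.0, 20.0, 401)
sigma_uni, sigma_ff = 1.0, 3.0      # illustrative widths (arbitrary units)
uni_rf = np.exp(-x**2 / (2 * sigma_uni**2))   # RF of one unimodal unit
ff_w = np.exp(-x**2 / (2 * sigma_ff**2))      # gaussian feedforward fan-in
# A stimulus at position s excites unimodal units around s, which are then
# pooled through the gaussian weights: ignoring the nonlinearity, the
# multimodal RF is the convolution of the two profiles.
dx = x[1] - x[0]
multi_rf = np.convolve(uni_rf, ff_w, mode="same") * dx
```

Since variances add under convolution of Gaussians, the multimodal RF width is roughly the root sum of squares of the two sigmas, and the response to stimuli beyond a few weight standard deviations from the hand decays toward zero, consistent with the near/far distinction in Figures 5C and 5D.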
Feedback synapses. Within the model, unimodal and multimodal areas interact not only by feedforward synapses but also by feedback connections. Modulation of unisensory activity by backprojections from multimodal areas is supported by several studies in healthy subjects (Macaluso et al., 2000; Taylor-Clarke et al., 2002; Schaefer et al., 2005; Schaefer, Flor, Heinze, & Rotte, 2006; Serino et al., 2007). For instance, Macaluso et al. (2000) found that a visual stimulation near the right hand produced a cluster of activation in the left lingual gyrus (in the occipital lobe) that was significantly amplified by a concurrent tactile stimulation on the right hand. Amplification did not occur in the case of a spatially incongruent bimodal stimulation (i.e., when the visual stimulus was applied far from the hand). Analyzing changes in effective connectivity, a circumscribed area in inferior parietal lobe (homolog to area 7b in the macaque brain) showed a higher coupling with the left lingual gyrus during bimodal spatially congruent stimulation. The authors concluded that the tactile input to the somatosensory cortex may influence the visual cortex via backprojections from multimodal populations in the parietal lobe, similar to that documented in area 7b of the macaque monkey. The model provides a theoretical sketch of this hypothesis: feedback synapses from multimodal to unimodal neurons produce an amplification of unimodal activity via cross-modal spatially congruent stimulation (see Figure 6). Thanks to the gaussian distribution assigned to these synapses, the cross-modal amplification is spatially specific and does not occur in the case of a visual stimulus delivered far from the hand.
Moreover, the presence of the feedback synapses is able to mimic behavioral results in pathological subjects. Halligan and colleagues (1997) reported that brain-damaged patients with hemisensory loss of the upper limb felt a tactile sensation on the affected hand only when they were allowed to see the hand being touched. The authors proposed that this performance was determined by bimodal visual-somatosensory cells: “When limited tactile information is available, correlated visual input may boost sub-threshold tactile stimulation into conscious awareness.” This situation corresponds to the simulation results reported in Figure 6, where a subthreshold tactile stimulus, not perceived at the level of the multisensory area, is markedly amplified by a concomitant visual stimulation in the homologous spatial location. Moreover, the model offers suggestions about the neural correlates of perceptual awareness. According to the model, the absence of a conscious tactile percept in the case of unimodal stimulation is due to the lack of activation of the multimodal neurons, despite the residual activity in the unisensory area. Conscious awareness of the tactile stimulus in the case of bimodal stimulation is related to activation of the multisensory neurons and the consequent reinforcement of the unimodal tactile response thanks to the feedback synapses. This model postulation is supported by recent studies on extinction patients (Marzi et al., 2001; Eimer et al., 2002; Sarri et al., 2006). Results of these studies indicate that multimodal regions in the frontal-parietal area may be implicated in perceptual awareness.
Of course, reinforcement of unisensory perception by cross-modal stimulation can take place because of the presence of the feedback connections from the multimodal to the unimodal neurons. However, our analysis (see the results in Figure 11) indicates that the occurrence of facilitatory interaction and the amount of reinforcement depend not only on the value of the feedback gain but on the combination of several factors: the value of the input with respect to the activation threshold of unimodal neurons, the strength of lateral excitation and the ratio between excitation and inhibition in unimodal areas, and the weight of the feedback connections. The combination of all these mechanisms establishes the level of activation in the corresponding unimodal area. Directing attention to one spatial location improves perceptual performance at that location (Posner, Snyder, & Davidson, 1980; Hawkins et al., 1990; Macaluso, Frith, & Driver, 2002). Our model results provide some indications on the possible neural correlates of top-down attention. According to our analysis, attention might increase the feedback gain, reduce the activation threshold of unisensory neurons, or affect lateral connectivity.
Competitive inhibitory synapses. In our model, the two hemispheres are interconnected by inhibitory synapses implemented through inhibitory interneurons. These connections realize a competitive mechanism between the two hemispheres when two events are simultaneously presented on the two sides of the peripersonal space. The existence of a competitive mechanism when attending to two simultaneous bilateral events is strongly suggested by the observation that unilateral brain-damaged patients show extinction only in the case of bilateral stimulation. Indeed, patients with extinction are able to report isolated contralesional stimuli; this has led several authors to exclude any peripheral sensory loss in these patients (Mattingley et al., 1997). Rather, extinction is commonly attributed to a pathological exaggeration of an attentional limit that already exists for concurrent events in healthy subjects (Duncan, 1996; Mattingley et al., 1997; Marzi et al., 2001). Furthermore, extinction has been found not only unimodally but also cross-modally (e.g., visual-tactile extinction) (Mattingley et al., 1997; Di Pellegrino et al., 1997; Làdavas et al., 1998; Bueti et al., 2004), suggesting that competition between two simultaneous events occurs even when they arise in separate modalities (as assumed in our model). This is consistent with the “integrated competition” hypothesis of attention proposed by Duncan (1996), according to which attentional competition is played out across widespread neural networks involving many sensory modalities. Despite this evidence, the neural correlates of extinction and perceptual awareness are still controversial. The model may shed light on the neural mechanisms underlying extinction in unilateral brain-damaged patients.
Recent fMRI and ERP studies were designed to investigate the neural basis of extinction (Marzi et al., 2001; Eimer et al., 2002; Sarri et al., 2006). Sarri and colleagues (2006) performed an fMRI study on a right-hemisphere stroke patient and observed that unilateral tactile stimulation of the left hand produced only a near-threshold activation in the right primary somatosensory cortex. Nevertheless, the stimulus was consciously perceived. Eimer et al. (2002), using ERP measures on a different patient, found that the P60 component (a sensory-specific somatosensory component) was significantly reduced over the right hemisphere in response to unilateral left tactile stimulation, compared with the left-hemisphere component in response to unilateral right tactile stimulation. Similar findings were observed in ERP studies on patients with left visual extinction (Marzi et al., 2001). The reduced activation in the sensory cortex suggests that in extinction patients, an underlying deficit also exists for unilateral stimulation, but it is behaviorally unmasked only during bilateral stimulation. Besides the sensory cortex, surviving right parietal and frontal cortices were activated by consciously perceived left touches (Sarri et al., 2006). Extinguished left touch in the presence of a competing right event was still accompanied by some residual activation of the right somatosensory cortex (Sarri et al., 2006), and could still trigger some P60 and N110 components over right somatosensory cortex (Eimer et al., 2002). However, no activation in the right parietal and frontal regions was observed in the case of left tactile extinction. These data suggest that activation of the sensory cortex alone is not sufficient to produce a conscious percept; rather, higher-level multisensory regions in the frontal-parietal area could be implicated in perceptual awareness.
Consistent reproduction of these experimental findings has been obtained with our model by assuming the reduction of just two parameters in the right hemisphere, both representing synapses originating from unimodal tactile neurons: the lateral excitation in the right tactile area and the strength of the feedforward connections from the tactile neurons to the multimodal neurons. The first parameter influences the intensity and extension of the activation bubble in the right tactile area in response to a left tactile stimulus; the second affects the responsiveness of multimodal neurons to a left tactile stimulus. The hypothesized reduction in synaptic strength should be interpreted not as a real synaptic depression, but rather as the effect of a reduction in the number of effective excitatory units that contribute to activity in that region. Of course, the smaller the number of effective excitatory cells, the smaller the overall excitatory input arriving at the other connected areas.
With these parameter modifications, the following results have been obtained. (1) A unilateral left-hand tactile stimulus produces a significantly smaller cluster of activation in the tactile area compared with the healthy subject (compare Figure 7B, left plot, with Figure 9B, left plot). Hence, in the model, a certain impairment exists also for unilateral stimulation (as observed in vivo). (2) However, the reduced unimodal activity is still able to evoke a response in the multimodal area (see Figure 8). This reflects the activation of the right parietal and frontal regions observed experimentally. (3) When the left tactile stimulus is coupled with a right-hand stimulation, a large part of the activation in the right tactile area is abolished, but weak activity survives (see Figure 9A). This residual unimodal activity, however, is not sufficient to excite the multimodal neurons (see Figure 9A). This corresponds to the lack of activation of the right parietal-frontal cortex, despite activation of the sensory cortex, reported by fMRI studies for extinguished left touches.
According to the previous description, the model identifies potential functional alterations in the neural circuitry able to explain extinction and related cortical phenomena. It is worth noting that achieving the previous results crucially depends on two factors. First, both parameters have to be changed to reproduce left tactile extinction; reduction of only one parameter, while maintaining the other at its basal value, is not sufficient (see Figures 13A and 13B). Second, the presence of a strong competition between the two hand representations is an essential requirement for extinction in the pathological subject: indeed, even the simultaneous reduction of the two parameters in the right hemisphere does not produce extinction of the left touch in the presence of a mild competition between the two hemispheres (see Figure 13C). Hence, several neural mechanisms are concurrently involved in extinction, and only certain combinations of parameters (e.g., high competitive gain and reduction of synapses in the right tactile area) seem to be consistent with this perceptual impairment.
The scenario provided by the model is also able to reproduce the phenomenon of cross-modal visual-tactile facilitation in pathological subjects (see Figure 10). This phenomenon arises from the activation of multisensory neurons by a simultaneous visual stimulus, with the consequent reinforcement of tactile activity in the same spatial position (and in the same hand) by feedback projections.
According to the previous description, the model predicts the existence of a high interhemispheric competition as a fundamental prerequisite for extinction. In this regard, an important aspect emerges from the network results: due to the high competitive gain, suppression of one stimulus in bilateral stimulation may occur even in intact conditions, that is, with the left and right hemispheres having the same basal parameter values. In particular, our analysis discloses the importance of the interhemispheric delay in favoring the coexistence of the two simultaneous representations when competition is strong: a slight difference in the intensity of the two stimuli results in the absence of the weaker representation when the interhemispheric delay is absent or too short. Increasing the delay introduces a transient phase in the neuron response during which both multimodal areas are activated and the two representations coexist, before the system settles to the WTA working point, where suppression of the weaker stimulus occurs. The longer the time delay, the longer the transient period of coexistence. This behavior can be observed in both the linear and the nonlinear network (see Figures 4B, 12B, and 12C; note that in the linear model, the transient phase is characterized by an oscillatory pattern) and suggests that the presence of the time delay supports the simultaneous representation of the two hands, at least through the phasic activation of the neurons. Furthermore, in the nonlinear model, a sufficiently long delay leads to a new working point, characterized by tonic activation of the multimodal neurons and permanent coexistence of the two representations (see Figures 12B and 12C).
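The role of the interhemispheric delay can be illustrated with a reduced two-unit sketch: a winner-take-all pair with delayed mutual inhibition, in which all parameter values are assumed for the example and are not those of the full model:

```python
import numpy as np

DT = 0.05                 # integration step (arbitrary time units)
K_INH = 3.0               # strong interhemispheric inhibitory gain (assumed)

def compete(i_left, i_right, delay, T=10.0):
    """Two multimodal units with delayed mutual inhibition (WTA sketch)."""
    n_steps = int(T / DT)
    d_steps = int(delay / DT)
    act = lambda x: min(max(x, 0.0), 1.0)     # piecewise-linear activation
    zL = np.zeros(n_steps)
    zR = np.zeros(n_steps)
    for n in range(1, n_steps):
        # Each unit is inhibited by the other's activity `delay` units ago.
        inh_L = zR[n - 1 - d_steps] if n - 1 - d_steps >= 0 else 0.0
        inh_R = zL[n - 1 - d_steps] if n - 1 - d_steps >= 0 else 0.0
        zL[n] = zL[n - 1] + DT * (-zL[n - 1] + act(i_left - K_INH * inh_L))
        zR[n] = zR[n - 1] + DT * (-zR[n - 1] + act(i_right - K_INH * inh_R))
    return zL, zR

def coexistence_steps(zL, zR, level=0.3):
    """Steps during which both representations are simultaneously active."""
    return int(np.sum((zL > level) & (zR > level)))

# Slightly unbalanced bilateral stimulation (right stimulus a bit weaker).
zL0, zR0 = compete(1.0, 0.9, delay=0.0)   # no interhemispheric delay
zLd, zRd = compete(1.0, 0.9, delay=2.0)   # delayed inhibition
```

With no delay, the slightly weaker representation never emerges and the stronger one wins immediately; with a delay of two time units, both representations coexist for a substantial transient before the competition resolves, consistent with the behavior described above.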
Finally, we wish to highlight some possible improvements and applications of our model that can be the subject of future work.
In this study, the network is noiseless. However, sensory signals are usually corrupted by noise, arising from the stimulus itself or from neural sources. Including random noise in the network would allow its performance to be evaluated in terms of percentage of successes (e.g., counting as a success the activation of the multimodal area) over series of trials within each condition (such as unilateral, bilateral unimodal, or bilateral cross-modal stimulation). In this way, network results could be quantitatively compared with those of behavioral and psychophysical studies, where healthy subjects and patients are tested in different stimulation conditions and their performance is assessed in terms of the number of correct detections.
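Such a trial-based evaluation could be organized as in the following sketch, where noisy unisensory inputs are drawn on each trial and detection is defined by a hypothetical threshold criterion on the net multimodal input (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility

def detection_rate(tact_in, vis_in, noise_sd=0.5, n_trials=2000, theta=4.0):
    """Fraction of noisy trials in which the net multimodal input crosses
    threshold (a hypothetical criterion for 'perceiving' the stimulus)."""
    t = tact_in + noise_sd * rng.standard_normal(n_trials)
    v = vis_in + noise_sd * rng.standard_normal(n_trials)
    # Unisensory activities cannot be negative.
    m_input = np.clip(t, 0.0, None) + np.clip(v, 0.0, None)
    return float(np.mean(m_input > theta))

p_uni = detection_rate(tact_in=3.0, vis_in=0.0)   # weak touch alone
p_bi = detection_rate(tact_in=3.0, vis_in=3.0)    # touch + congruent vision
```

Under this criterion, a weak touch alone is detected on only a small fraction of trials, whereas pairing it with a congruent visual stimulus yields near-perfect detection, giving the kind of percent-correct measure reported in behavioral studies.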
The spatial attention mechanisms included in this model refer only to exogenous attention, that is, attention captured reflexively and involuntarily. A more active form of attention, called endogenous attention, is directed in a voluntary manner on the basis of current spatial expectancies (Driver & Spence, 1998). It is still unclear whether the two forms of attention (endogenous and exogenous) are mediated by the same or different neural substrates (Driver & Spence, 1998; Pinsk, Doniger, & Kastner, 2004; Peelen, Heslenfeld, & Theeuwes, 2004). In subsequent studies, our model might be used to investigate the neural correlates of endogenous visuotactile attention and its relationship with exogenous attention.
Although the structure of the network and the set of parameter values provide a plausible scenario, other mechanisms not included in the network could play a role. For example, the two unimodal areas might also be linked by direct synapses (Schroeder & Foxe, 2005); currently, neurons in the visual area and in the tactile area communicate only indirectly, by feedback projections from the multimodal neurons. This choice was adopted according to the criterion of parsimony (i.e., to reduce the number of mechanisms included in the model and find a good compromise between completeness and computational simplicity). However, direct communication between unimodal neurons is possible, and its role may be investigated in future studies.
Multisensory integration may improve the performance of perceptive systems when poor information derives from unisensory channels (Tipper et al., 1998; Kennett et al., 2001; Calvert, Spence, & Stein, 2004; Press, Taylor-Clarke, Kennett, & Haggard, 2004; see also Figures 6 and 10). The brain's multisensory capabilities may be effectively exploited in rehabilitation procedures to aid recovery from sensory or spatial deficits after brain damage (see Làdavas, 2008, for a review). Systematic audiovisual stimulation has been demonstrated to produce long-lasting improvement in visual detection in patients with visual-field deficit (hemianopia) (Bolognini, Rasi, Coccia, & Làdavas, 2005). Interaction between vision and touch was found to be effective in producing immediate enhancement of tactile sensation in patients with somatosensory deficits and might also promote long-term recovery of somatosensation (Serino et al., 2007). The model could provide important contributions in this field. In particular, it can be of value in identifying the neural correlates of rehabilitation procedures by pinpointing the changes in synaptic connectivity that promote recovery. To this end, Hebbian rules of synaptic plasticity need to be included in the model to reproduce the effects of rehabilitation. Moreover, the model can suggest new rehabilitation strategies for improving sensory or spatial deficits. For example, according to the model, systematic visuotactile stimulation of the pathological side in extinction patients may promote, by producing strong activation of the bimodal area, a Hebbian reinforcement of the feedback and feedforward synapses in the damaged hemisphere; this reinforcement could be effective in reequilibrating the competition between the two hemispheres in the case of bilateral stimulation. Further computational and in vivo studies are required for a thorough investigation of these aspects.
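The proposed mechanism can be caricatured with a single weakened feedforward synapse trained by a basic Hebbian rule (all values are illustrative, and a full implementation would operate on the whole synaptic matrices):

```python
import numpy as np

def sigmoid(x, theta=1.0, slope=4.0):
    """Static sigmoidal activation of the multimodal unit (illustrative)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

ETA = 0.1                  # learning rate (assumed)
w_t, w_v = 0.2, 1.0        # weakened tactile synapse vs. intact visual one
t_act = v_act = 1.0        # unimodal activities during bimodal stimulation

m_before = sigmoid(w_t * t_act)   # touch alone barely drives the unit

# Repeated visuotactile stimulation of the affected side: the visual input
# keeps the multimodal unit active, so the Hebbian product pre * post
# progressively re-potentiates the weakened tactile synapse.
for _ in range(30):
    m = sigmoid(w_t * t_act + w_v * v_act)   # multimodal response
    w_t += ETA * t_act * m                   # Hebbian update: dw = eta*pre*post

m_after = sigmoid(w_t * t_act)    # touch alone now recruits the unit
```

Note that a plain Hebbian rule grows weights without bound; a complete model would include a normalization or saturation term to keep the synapses within physiological limits.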
Finally, recent studies, performed in both animals and human subjects, have shown that the peripersonal space representation is not fixed but exhibits plasticity (Iriki, Tanaka, & Iwamura, 1996; Farnè & Làdavas, 2000; Làdavas, 2002; Maravita & Iriki, 2004). Data on peri-hand space show that the use of tools, as well as viewing the hand in mirrors or on video screens, may modulate the visuotactile representation of the peripersonal space. The issue of plastic modification of peripersonal space representation has not been addressed in this study. However, in the future, the model, with the addition of synaptic learning rules, may be used to simulate the dynamic properties of peripersonal space representation and provide an explanation of the neural basis of tool-use behaviors, testing the hypotheses suggested in the literature.