Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on the relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated to an active state decays to a silent state very slowly on the timescale of STD rather than on that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
Experimental data have consistently revealed that the neuronal connection weight, which models the efficacy of the firing of a presynaptic neuron in modulating the state of a postsynaptic one, varies on short timescales, ranging from hundreds to thousands of milliseconds (Stevens & Wang, 1995; Markram & Tsodyks, 1996; Dobrunz & Stevens, 1997; Markram, Wang, & Tsodyks, 1999). This is called short-term plasticity (STP). Two types of STP, with opposite effects on the connection efficacy, have been observed: short-term depression (STD) and short-term facilitation (STF). STD is caused by the depletion of available resources when neurotransmitters are released from the axon terminal of the presynaptic neuron during signal transmission (Stevens & Wang, 1995; Markram & Tsodyks, 1996; Dayan & Abbott, 2001). STF is caused by the influx of calcium into the presynaptic terminal after spike generation, which increases the probability of releasing neurotransmitters.
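The joint dynamics of depression and facilitation is commonly captured by the phenomenological Tsodyks-Markram synapse model, in which a resource variable x is depleted by each presynaptic spike (STD) and a release-probability variable u is transiently boosted by it (STF). The sketch below is a minimal illustration of that standard model rather than the model analyzed in this letter; the time constants and the baseline release probability U are illustrative values.

```python
import numpy as np

def tm_synapse(spike_times, T=1000.0, dt=0.1,
               tau_d=200.0, tau_f=600.0, U=0.2):
    """Tsodyks-Markram synapse: x tracks available resources (STD),
    u tracks the release probability (STF); a spike's efficacy is u*x."""
    x, u = 1.0, U
    spike_steps = {int(t / dt) for t in spike_times}
    efficacies = []
    for i in range(int(T / dt)):
        x += dt * (1.0 - x) / tau_d      # resources recover toward 1
        u += dt * (U - u) / tau_f        # release probability relaxes to U
        if i in spike_steps:
            efficacies.append(u * x)     # effective strength of this spike
            x -= u * x                   # depletion (depression)
            u += U * (1.0 - u)           # calcium-driven facilitation
    return efficacies

# regular 20 Hz train: facilitation dominates the first spikes,
# depression takes over once resources are depleted
eff = tm_synapse(spike_times=np.arange(50.0, 950.0, 50.0))
```

With tau_f larger than tau_d, the efficacy u*x of successive spikes first grows (facilitation) and later declines from its peak as resources deplete.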
Computational studies on the impact of STP on network dynamics have strongly suggested that STP can play many important roles in neural computation. For instance, cortical neurons receive presynaptic signals with firing rates ranging from less than 1 Hertz to more than 200 Hertz. It was suggested that STD provides a dynamic gain control mechanism that allows equal fractional changes on rapidly and slowly firing afferents to produce postsynaptic responses, realizing Weber's law (Tsodyks & Markram, 1997; Abbott, Varela, Sen, & Nelson, 1997). Besides, computations can be performed in recurrent networks by population spikes in response to external inputs, which are enabled through STD by recurrent connections (Tsodyks, Uziel, & Markram, 2000; Loebel & Tsodyks, 2002).
Another role played by synaptic depression was proposed by Levina, Herrmann, and Geisel (2007). In neuronal systems, critical avalanches are believed to bring about optimal computational capabilities and are observed experimentally. Synaptic depression enables a feedback mechanism so that the system can be maintained at a critical state, making the self-organized critical behavior robust (Levina et al., 2007). Herding, a computational algorithm reminiscent of the neuronal dynamics with synaptic depression, was recently found to have a similar effect on the complexity of information processing (Welling, 2009). STP was also recently thought to play a role in the way a neuron estimates the membrane potential information of the presynaptic neuron based on the spikes it receives (Pfister, Dayan, & Lengyel, 2010).
Concerning the computational significance of STF, a recent work proposed an interesting idea for achieving working memory in the prefrontal cortex (Mongillo, Barak, & Tsodyks, 2008). The residual calcium of STF is used as a buffer to facilitate synaptic connections, so that inputs in a subsequent delay period can be used to retrieve the information encoded by the facilitated synaptic connections. The STF-based memory mechanism has the advantage of not having to rely on persistent neural firing during the time the working memory is functioning, and hence is energetically more efficient.
From the computational point of view, the timescale of STP resides between fast neural signaling (in the order of milliseconds) and rapid learning (in the order of minutes or above), which is the timescale of many important temporal processes occurring in our daily lives, such as the passive holding of a temporal memory of objects coming into our visual field (the so-called iconic sensory memory) or the active use of the memory trace of recent events for motion control. Thus, STP may serve as a substrate for neural systems manipulating temporal information on the relevant timescales. STP has been observed in many parts of the cortex and also exhibits large diversity in different cortical areas, suggesting that the brain may employ a strategy of weighting STD and STF differently depending on the computational purpose.
In this study, we explore the potential roles of STP in processing information derived from external stimuli, an issue of fundamental importance yet inadequately investigated so far. For ease of exposition, we use continuous attractor neural networks (CANNs) as our working model, but our main results are qualitatively applicable to general cases. CANNs are recurrent networks that can hold a continuous family of localized active states (Amari, 1977). Neutral stability is a key property of CANNs, which enables neural systems to update memory states easily and track time-varying stimuli smoothly. CANNs have been successfully applied to describe the generation of persistent neural activities (Wang, 2001), the encoding of continuous stimuli such as the orientation, the head direction and the spatial location of objects (Ben-Yishai, Lev Bar-Or, & Sompolinsky, 1995; Zhang, 1996; Samsonovich & McNaughton, 1997), and a framework for implementing efficient population decoding (Deneve, Latham, & Pouget, 1999).
When STP is included in a CANN, the dynamics of the network is governed by two timescales. The time constant of STP is much slower than that of neural signaling (100–1000 ms versus 1–10 ms). The interplay between the fast and the slow dynamics causes the network to exhibit rich dynamical behaviors, laying the foundation for the neural system to implement complicated functions.
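This two-timescale structure can be illustrated numerically with a minimal rate-based discretization, loosely following equations 2.1 and 2.4. The divisively normalized firing rate, the gaussian coupling, and all parameter values below are our own illustrative choices, not the paper's exact settings.

```python
import numpy as np

def simulate_cann_std(beta, N=128, L=2 * np.pi, a=0.5, J0=1.0, k=0.05,
                      tau_s=1.0, tau_d=50.0, T=200.0, dt=0.05):
    """Rate-based CANN with STD: fast synaptic input u(x,t) (timescale
    tau_s) coupled to slow depression variable p(x,t) (timescale tau_d)."""
    x = np.arange(N) * L / N - L / 2
    dx = L / N
    rho = N / L
    d = (x[:, None] - x[None, :] + L / 2) % L - L / 2   # periodic distance
    J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))
    u = np.exp(-x**2 / (4 * a**2))   # seed a bump-shaped state
    p = np.ones(N)                   # fully recovered synapses
    for _ in range(int(T / dt)):
        r = u.clip(0)**2 / (1.0 + k * rho * dx * np.sum(u.clip(0)**2))
        u += dt * (-u + rho * dx * (J @ (p * r))) / tau_s
        p += dt * ((1.0 - p) / tau_d - beta * p * r)
    return x, u, p

x, u_weak, _ = simulate_cann_std(beta=0.0)    # no depression: bump persists
x, u_strong, _ = simulate_cann_std(beta=0.5)  # strong depression weakens it
```

Even this crude discretization shows the separation of timescales: u settles on the order of tau_s, while p evolves on the order of tau_d and determines whether the bump survives, weakens, or dies.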
In CANNs with STD, various intrinsic behaviors have been reported, including damped oscillations (Tsodyks, Pawelzik, & Markram, 1998), periodic and aperiodic dynamics (Tsodyks et al., 1998), state hopping with transient population spikes (Holcman & Tsodyks, 2006), traveling fronts and pulses (Pinto & Ermentrout, 2001; Bressloff, Folias, Prat, & Li, 2003; Folias & Bressloff, 2004; Kilpatrick & Bressloff, 2010), breathers and pulse-emitting breathers (Bressloff et al., 2003; Folias & Bressloff, 2004), spiral waves (Kilpatrick & Bressloff, 2009), rotating bump states (York & van Rossum, 2009; Igarashi, Oizumi, Otsubo, Nagata, & Okada, 2009), and self-sustained non-periodic activities (Stratton & Wiles, 2010). Here, we focus on those network states relevant to the processing of stimuli in CANNs, including static, moving, and metastatic bumps (Wu & Amari, 2005; Fung, Wong, & Wu, 2010). More significantly, we find that with STD, the network state can display slow-decaying plateau behaviors; that is, a network initially stimulated to an active state by a transient input decays to the silent state very slowly, on the timescale of STD relaxation rather than on the timescale of neural signaling. This is a very interesting property. It implies that STD can provide a way for the neural system to maintain sensory memory for a duration unachievable by the signaling of single neurons and to shut off the network activity of sensory memory naturally. The latter has been a challenging technical issue in theoretical neuroscience (Gutkin, Laing, Colby, Chow, & Ermentrout, 2001).
With STF, neuronal connections become strengthened during the presence of an external stimulus. This stimulus-specific facilitation lasts for a period on the timescale of STF and provides a way for the neural system to hold a memory trace of external inputs (Mongillo et al., 2008). This information can be used by the neural system for various computational tasks. To demonstrate this idea, we consider CANNs as a framework for implementing population decoding (Deneve et al., 1999; Wu, Amari, & Nakahara, 2002). In the presence of STF, the network response is determined not only by the instant input value but also by the history of external inputs (the latter being mediated by the facilitated neuronal interactions). Therefore, temporal fluctuations in external inputs can be largely averaged out, leading to improved decoding results.
In general, STD and STF tend to have opposite effects on network dynamics (Torres, Cortes, Marro, & Kappen, 2007). The former increases the mobility of network states, whereas the latter increases their stability. Enhanced mobility and stability can contribute positively to different computational tasks. Enhanced stability mediated by STF can improve the computational and behavioral stability of CANNs. To demonstrate that enhanced mobility does have a positive role in information processing, we investigate a computational task in which the network tracks time-varying stimuli. We find that STD increases the tracking speed of a CANN. Interestingly, for strong STD, the network state can even overtake the moving stimulus, reminiscent of the anticipative responses of head direction and place cells (Blair & Sharp, 1995; O'Keefe & Recce, 1993; Romani & Tsodyks, in press).
The rest of the letter is organized as follows. After introducing the models and methods in section 2, we discuss the intrinsic properties of CANNs in the absence of external stimuli by studying their phase diagram in section 3. In sections 4 to 6, we study the network behavior in the presence of various stimuli. In section 4, we consider the aftereffects of a transient stimulus and find that sensory memories can persist for a desirable duration and then decay gracefully. In section 5, we consider the response of the network to a noisy stimulus and find that the accuracy in population decoding can be enhanced. In section 6, we consider the response of the network to a moving stimulus and find that the tracking performance is improved by the enhanced mobility of the network states. The letter ends with conclusions and discussions in section 7. Our preliminary results on the effects of STD have been reported in Fung et al. (2010).
2. Models and Methods
We consider a one-dimensional continuous stimulus x encoded by an ensemble of neurons. For example, the stimulus may represent a moving direction, an orientation, or a general continuous feature of objects extracted by the neural system. We consider the case where the range of possible values of the stimulus is much larger than the range of neuronal interactions. We can thus effectively take x ∈ (−∞, ∞) in our analysis. In simulations, however, we set the stimulus range to be −L/2 < x ≤ L/2 and have N neurons uniformly distributed in the range obeying a periodic boundary condition.
Our theoretical analysis of the network dynamics is based on the observations that the stationary states of the network, as well as the profile of STP across all neurons, can be well approximated as gaussian-shaped bumps, and the state change of the network, and hence the profile of STP, can be well described by distortions of the gaussian bump in various forms. We can therefore use a perturbation approach developed by Fung et al. (2010) to solve the network dynamics analytically.
We generalize this perturbation approach to study the dynamics of CANNs with dynamical synapses. We present the detailed analysis only for the case of STD; the extension to the case of STF is straightforward.
Substituting equations 2.10 and 2.13 into equations 2.1 and 2.4, and using the orthonormality and completeness of the distortion modes, we get the dynamical equations for the coefficients an(t) and bn(t). The details are presented in appendix B.
Truncating the perturbation expansion at increasingly high orders corresponds to the inclusion of increasingly complex distortions, and hence provides increasingly accurate descriptions of the network dynamics. As confirmed in the subsequent sections, the perturbative approach is in excellent agreement with simulation results. The agreement is especially remarkable when the STD strength is weak and the lowest few orders are already sufficient to explain the dynamical features. The agreement is less satisfactory when STD is strong, and the perturbative approach typically overestimates the stability of the moving bump. This is probably due to the considerable distortion of the gaussian profile of the synaptic depression when STD is strong.
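For readers who wish to experiment, the distortion modes of a gaussian bump can be generated numerically as Hermite functions matched to the bump width. The width convention below is one common choice and is an assumption on our part; the point is only that the modes are orthonormal and that a small bump displacement projects mainly onto the antisymmetric first mode.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
a = 0.5                                  # bump width parameter

def mode(n):
    """n-th distortion mode: a Hermite function whose width is matched
    to the gaussian bump (the width convention here is an assumption)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    v = hermval(x / (np.sqrt(2.0) * a), c) * np.exp(-x**2 / (4.0 * a**2))
    return v / np.sqrt(np.sum(v * v) * dx)   # normalize numerically

modes = [mode(n) for n in range(4)]
gram = np.array([[np.sum(vi * vj) * dx for vj in modes] for vi in modes])

# a slightly displaced bump projects mainly onto mode 0 (height) and
# mode 1 (position shift); higher modes encode width and skew distortions
bump = np.exp(-(x - 0.1)**2 / (4.0 * a**2))
coeffs = np.array([np.sum(bump * v) * dx for v in modes])
```

Truncating the expansion after mode 1 captures height and position changes; adding mode 2 captures width distortions, consistent with the ordering described above.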
3. Phase Diagrams of CANNs with STP
We first study the impact of STP on the stationary states of CANNs when no external input is applied. For convenience of analysis, we explore the effects of STD and STF separately. This corresponds to the limits of high or low neuronal firing frequencies or the cases where only one type of STP dynamics is significant.
3.1. The Phase Diagram of CANNs with STD.
We set α = 0 in equation 2.5 to turn off STF. In the presence of STD, CANNs exhibit new and interesting dynamical behaviors. Apart from the static bump state, the network also supports spontaneously moving bump states. Examining the steady-state solutions of equations 2.1 and 2.4, we find that u0 has the same dimension as ρJ0u0², and that 1 − p(x, t) scales as τdβu0². Hence we introduce two dimensionless parameters: a rescaled inhibition strength and a rescaled STD strength. The phase diagram obtained by numerically solving the network dynamics is shown in Figure 3.
When STD is weak, the network behaves similarly to CANNs without STD; that is, the static bump state is present for values of the rescaled inhibition up to nearly 1. However, when the rescaled STD strength increases, a state with the bump spontaneously moving at a constant velocity comes into existence. Such moving states have been predicted in CANNs (York & van Rossum, 2009; Kilpatrick & Bressloff, 2010) and may be associated with the traveling wave behaviors widely observed in the neocortex (Wu, Huang, & Zhang, 2008). At an intermediate range of the STD strength, the static and moving states coexist, and the final state of the network depends on the initial condition. As the STD strength increases further, static bumps disappear. In the limit of strong STD, only the silent state is present. Below, we use the perturbation approach to analyze these dynamical behaviors.
3.1.1. Zeroth Order: The Static Bump.
3.1.2. First Order: The Moving Bump.
When the network bump is moving, the profile of STD lags behind due to its slow dynamics, and this induces an asymmetric distortion in the profile of STD. Figure 4 illustrates this behavior. Comparing the static and moving bumps shown in Figures 4a and 4b, one can see that the profile of a moving bump is characterized by the synaptic depression lagging behind the moving bump. This is because neurons tend to be less active in the locations of low values of p(x, t), causing the bump to move away from the locations of strong synaptic depression. In turn, the region of synaptic depression tends to follow the bump. However, if the timescale of synaptic depression is large, the recovery of the synaptically depressed region is slowed down, and the region is unable to catch up with the bump motion. Thus, the bump starts moving spontaneously. The same mechanism has been invoked to model anticipative responses in neural systems (Blair & Sharp, 1995; O'Keefe & Recce, 1993; Romani & Tsodyks, in press).
Next, we consider solutions with nonvanishing p1 and v. We find that real solutions exist only if condition 3.9 is satisfied. This means that as soon as the static bump becomes unstable, the moving bump comes into existence. As shown in Figure 3, the boundary of this region effectively coincides with the numerical solution of the line separating the static and moving phases. In the entire region bounded by equations 3.6 and 3.9, the moving and (meta)static bumps coexist.
We also find that when τd/τs increases, the moving phase expands at the expense of the (pure) static phase. This is because the recovery of the synaptic depressed region becomes increasingly slow, making it harder for the region to catch up with the changes in the bump motion, hence sustaining the bump motion.
3.2. The Phase Diagram of CANNs with STF.
We set β = 0 in equation 2.4 to turn off STD. Compared with STD, STF has qualitatively the opposite effect on the network dynamics. When an external perturbation is applied, the dynamical synapses will not push the neural bump away. Instead they will try to pull the bump back to its original position. The phase diagram in the space of the rescaled inhibition strength and the rescaled STF parameter is shown in Figure 5. When the STF parameter increases, the range of inhibitory strength allowing for a bump state is enlarged. Note that since STF tends to stabilize the bump states against asymmetric fluctuations, no moving bumps exist. The phase boundary of the static bump is well predicted by the second-order perturbation.
Concerning the timescale of neural information processing, it should be noted that it takes a time of the order of τf for neuronal interactions to become fully facilitated. In the parameter range where the facilitated neuronal interactions are necessary for holding a bump state, an external input must be presented for a time of the order of τf before a bump state can be sustained.
4. Memories with Graceful Degradation in CANNs with STD
The network dynamics displays a very interesting behavior in the marginally unstable region of the static bump. In this regime, the static bump solution barely loses its stability. The bump is stable if the level of synaptic depression is low but unstable at high levels. Since the STD timescale is much longer than the synaptic timescale, a bump can exist before the synaptic depression becomes effective. This maintains the bump in the plateau state with a slowly decaying amplitude, as shown in Figure 6a. After a time duration of the order of τd, the STD strength becomes sufficiently significant, as shown in Figure 6b, and the bump state eventually decays to the silent state.
4.1. First Order: Trajectory Analysis.
It is instructive to analyze the plateau behavior first using the first-order perturbation. We select a point in the marginally unstable regime of the silent phase, that is, in the vicinity of the static phase. As shown in Figure 7, the nullclines of u0 and p0 (du0/dt = 0 and dp0/dt = 0, respectively) do not intersect as they do in the static phase, where the bump state exists. Yet they are still close enough to create a region of very slow dynamics near the apex of the u0-nullcline. In Figure 7, we then plot trajectories of the dynamics starting from different initial conditions.
The most interesting family of trajectories is represented by B and C in Figure 7. Due to the much faster dynamics of u0, trajectories starting from a wide range of initial conditions converge rapidly, in a time of the order of τs, to a common trajectory in the vicinity of the u0-nullcline. Along this common trajectory, u0 is effectively the steady-state solution of equation 3.4 at the instantaneous value of p0(t), which evolves on the much longer timescale τd. This gives rise to the plateau region of u0, which can survive for a duration of the order of τd. The plateau ends after the trajectory has passed the slow region near the apex of the u0-nullcline. This dynamics is in clear contrast with trajectory D, in which the bump height decays to zero in a time of the order of τs.
Trajectory A represents another family of trajectories with rather similar behaviors, although the lifetimes of their plateaus are not as long. These trajectories start from more depleted initial conditions and hence do not have a chance to get close to the u0-nullcline. Nevertheless, they converge rapidly, in a time of the order of τs, to the band where the dynamics of u0 is slow. The trajectories then rely mainly on the dynamics of p0 to carry them out of this slow region, and hence plateaus with lifetimes of the order of τd are created.
Following similar arguments, the plateau behavior also exists in the stable region of the static states. This happens when the initial condition of the network lies outside the basin of attraction of the static states but still in the vicinity of the basin boundary.
When one goes deeper into the silent phase, the gap between the u0- and p0-nullclines broadens, and the dynamics in the region between them speeds up. Hence plateau lifetimes are longest near the phase boundary between the bump and silent states and become shorter as one goes deeper into the silent phase. This is confirmed by the numerically obtained contours of plateau lifetimes in the phase diagram shown in Figure 8. The initial condition is uniformly set by introducing an external stimulus Iext(x|z0) = A exp[−(x − z0)²/(4a²)] to the right-hand side of equation 2.1, where A is the stimulus strength. After the network has reached a steady state, the stimulus is removed at t = 0, leaving the network to relax. Figure 8 shows that the plateau behavior can be found in an extensive region of the parameter space.
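The plateau mechanism can be reproduced with a two-variable caricature of the reduced dynamics. The functional forms below are illustrative stand-ins for equation 3.4 and its companion, chosen only so that the fast variable's nullcline has the saddle-node structure described above; they are not the paper's exact reduced equations.

```python
import numpy as np

def bump_height_trace(u0, p0, tau_s=1.0, tau_d=50.0, J=2.0, k=0.5,
                      beta=0.005, T=300.0, dt=0.01):
    """Two-variable caricature of the reduced dynamics: fast bump height
    u0 and slow available-resource level p0."""
    trace = []
    for _ in range(int(T / dt)):
        r = J * u0**2 / (1.0 + k * u0**2)            # normalized firing rate
        u0 = max(u0 + dt * (-u0 + p0 * r) / tau_s, 0.0)
        p0 += dt * ((1.0 - p0) / tau_d - beta * p0 * u0**2)
        trace.append(u0)
    return np.array(trace)

# starting from an active state with fully recovered synapses, the bump
# height settles onto a plateau that far outlives tau_s before collapsing
h = bump_height_trace(u0=3.0, p0=1.0)
```

As p0 is slowly depleted, the stable and unstable branches of the fast nullcline merge; the height tracks the upper branch until the saddle-node is reached and then collapses to the silent state on the timescale τs, exactly the trajectory-B scenario sketched above.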
4.2. Second Order: Lifetime Analysis.
To incorporate the effects of a broadened STD profile, we introduce the second-order perturbation. Dynamical equations are obtained by truncating the equations beyond the second order. As shown in Figure 6, the second-order perturbation yields a much more satisfactory agreement with simulation results than do lower-order perturbations.
5. Decoding with Enhanced Accuracy in CANNs with STF
CANNs have been interpreted as an efficient framework for neural systems implementing population decoding (Deneve et al., 1999; Wu et al., 2002). Consider reading out an external feature z0 from noisy inputs with a CANN; for example, z0 may represent the moving direction of an object. In the decoding paradigm, a CANN responds to an external input Iext(x) with a bump state whose peak position ẑ is interpreted as the decoding result of the network.
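A common way to realize this readout numerically is a circular center of mass of the bump activity. A minimal sketch follows; the squared-activity weighting is an illustrative choice on our part.

```python
import numpy as np

def decode_position(x, u, L):
    """Read out the bump position as a circular center of mass of the
    network activity on a ring of circumference L."""
    theta = 2.0 * np.pi * x / L
    w = np.clip(u, 0.0, None)**2          # weight by (squared) activity
    return L / (2.0 * np.pi) * np.arctan2(np.sum(w * np.sin(theta)),
                                          np.sum(w * np.cos(theta)))

# a gaussian bump centered at z = 0.7 on a ring of circumference 2*pi
x = np.linspace(-np.pi, np.pi, 128, endpoint=False)
z_hat = decode_position(x, np.exp(-(x - 0.7)**2), L=2.0 * np.pi)
```

The circular form respects the periodic boundary condition used in the simulations, so bumps near the edge of the range are decoded without wrap-around bias.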
In the presence of STF, neuronal connections are facilitated around the area where neurons are most active. With this additional feature, the network decoding will be determined not only by the instantaneous input but also by the recent history of external inputs. Consequently, temporal fluctuations in external inputs are largely averaged out, leading to improved decoding accuracies.
In the presence of weak noise, the position of the bump state is found to be centered at z0 + s(t), where s(t) is the deviation of the center of mass of the bump from the stimulus position z0, as derived in appendix E. Hence, the decoding error of the network is measured by the variance of the bump position over time, namely, 〈s(t)²〉. Figure 9 shows the typical decoding performance of the network with and without STF. We see that with STF, the fluctuation of the bump position is reduced significantly. Figure 10 compares the theoretical and measured decoding errors for different noise strengths (see appendix E).
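The origin of the improvement can be illustrated with a stripped-down analogy (not the full CANN dynamics): a decoder that leaks toward the instantaneous noisy readout on a slow timescale, as the facilitated connections effectively do, averages the input noise over many independent samples. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, tau_f = 20000, 100.0        # "STF" timescale in units of the time step
z0 = 0.0                       # true stimulus position
noisy = z0 + 0.5 * rng.standard_normal(T)   # instantaneous noisy readouts

# a decoder that leaks toward the instantaneous readout on the slow
# timescale, mimicking the input history held in facilitated connections
s, leaky = 0.0, np.empty(T)
for t in range(T):
    s += (noisy[t] - s) / tau_f
    leaky[t] = s

err_inst = np.var(noisy[T // 2:])    # decoding variance without averaging
err_leaky = np.var(leaky[T // 2:])   # variance with history averaging
```

For a leaky average over a timescale of tau_f steps, the stationary variance is reduced by a factor of roughly 2*tau_f, mirroring the suppression of bump-position fluctuations seen in Figure 9.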
6. Tracking with Enhanced Mobility in CANNs with STD
We first investigate the impact of STD and consider a tracking task in which the stimulus position z0(t) abruptly changes from 0 to a new value at t = 0. Figure 11 shows the network responses during the tracking process. Compared with the case without STD, the bump shifts to the new position faster. When STD is too strong, the bump may overshoot the target before eventually approaching it. This is due to the metastatic behavior of the bumps, which enhances their readiness to move away from the static state when a small push is exerted.
We also study the tracking of an external stimulus moving with a constant velocity v, that is, z0(t) = vt for t ≥ 0. As shown in Figure 12a, when STD is weak, the initial speed of the bump is almost zero. Then, as the stimulus moves away, the bump accelerates in an attempt to catch up with the stimulus. After some time, the separation between the bump and the stimulus converges to a constant. This tracking behavior is similar to the case without STD. The tracking behavior in the case of strong STD is more interesting. As shown in Figure 12b, the bump position eventually overtakes the stimulus, displaying an anticipative behavior. This can be attributed to the metastatic property of STD.
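Both the reduced lag and the overtaking can be captured by a linear caricature (our own illustrative construction, not the model's equations): the bump center s is attracted to the stimulus z(t) = v*t on the timescale τ of neural signaling and repelled, with strength gamma, from a depression center q that trails it on the timescale τd. The steady-state lag works out to τv(1 − γτd), which changes sign when γτd > 1.

```python
import numpy as np

def track(gamma, v=0.1, tau=1.0, tau_d=50.0, T=400.0, dt=0.01):
    """Caricature of tracking: bump center s is pulled toward the moving
    stimulus z(t) = v*t and pushed away from the trailing depression
    center q with strength gamma (an illustrative stand-in for STD)."""
    s, q = 0.0, 0.0
    for i in range(int(T / dt)):
        z = v * (i * dt)
        s_new = s + dt * ((z - s) / tau + gamma * (s - q))
        q_new = q + dt * (s - q) / tau_d      # depression trails the bump
        s, q = s_new, q_new
    return v * T - s          # final lag behind the stimulus (can be < 0)

lag_no_std = track(gamma=0.0)    # lags behind by about tau * v
lag_std = track(gamma=0.03)      # gamma * tau_d > 1: the bump overtakes
```

The sign change of the lag is the caricature's analog of the anticipative tracking in Figure 12b.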
We further explore how STF affects the tracking performance of CANNs. In general, there is a trade-off between the stability of network states and the capacity of the network to track time-varying stimuli. Since STD and STF have opposite effects on the mobility of the network states, we expect that they will also have opposite impacts on the tracking performance of CANNs. Indeed, STF degrades the tracking performance of CANNs (see Figure 13). The larger the STF strength, the slower the tracking speed of the network.
7. Discussion and Conclusion
In this study, we have investigated the impact of STD and STF on the dynamics of CANNs and their potential roles in neural information processing. We have analyzed the dynamics using successive orders of perturbation. The perturbation analysis works well when STD is not too strong, although it overestimates the stability of the bumps when STD is strong. The zeroth-order analysis accounts for the gaussian shape of the bump, and hence can predict the boundary of the static phase satisfactorily. The first-order analysis includes the displacement mode and asymmetry with respect to the bump peak, and hence can describe the onset of the moving phase. Furthermore, it provides insights into the metastatic nature of the bumps and their relation to the enhanced tracking performance. The second-order analysis further includes the width distortions, and hence improves the prediction of the boundary of the moving phase, as well as the lifetimes of the plateau states. Higher-order perturbations are required to yield more accurate descriptions of results such as the overshooting in the tracking process. We anticipate that the perturbation analysis will also be useful in many other population decoding problems, such as in quantifying the deformation of tuning curves due to neural adaptation (Cortes et al., in press).
More important, our work reveals a number of interesting behaviors that may have far-reaching implications in neural computation.
First, STD endows CANNs with slow-decaying behaviors. When a network is initially stimulated to an active state by an external input, it will decay to the silent state very slowly after the input is removed. The duration of the plateau is on the timescale of STD rather than of neural signaling. This provides a way for the network to hold the stimulus information for up to hundreds of milliseconds if the network operates in the parameter regime where the bumps are marginally unstable. This property is, on the other hand, extremely difficult to implement in attractor networks without STD. In a CANN without STD, an active state of the network will either decay to the silent state exponentially fast or be retained forever, depending on the initial activity level of the network. Indeed, how to shut off the activity of a CANN gracefully has been a challenging issue that has received wide attention in theoretical neuroscience, with researchers suggesting that a strong external input in the form of either inhibition or excitation must be applied (see Gutkin et al., 2001). Here, we have shown that in certain circumstances, STD can provide a mechanism for closing down network activities naturally and after a desirable duration. Taking into account the timescale of STD (in the order of 100 ms) and the passive nature of its dynamics, the STD-based memory is most likely associated with the sensory memory of the brain, for example, the iconic and the echoic memories (Baddeley, 1999).
Second, with STD, CANNs can support both static and moving bumps. Static bumps exist only when the synaptic depression is sufficiently weak. A consequence of synaptic depression is that static bumps are placed in a metastatic state, so that their response to changing stimuli is sped up, enhancing the tracking performance. The moving bump states may be associated with the traveling wave behaviors widely observed in the neocortex. We have also observed that for strong STD, the network state can even overtake the moving stimulus, reminiscent of the anticipative responses of head direction and place cells (Blair & Sharp, 1995; O'Keefe & Recce, 1993). It is interesting to note that this occurs in the parameter range where the network holds spontaneously moving bump solutions, suggesting that traveling wave phenomena may be closely related to the predicting capacity of neural systems.
Third, STF improves the decoding accuracy of CANNs. When an external stimulus is presented, STF strengthens the interactions among neurons that are tuned to the stimulus. This stimulus-specific facilitation provides a mechanism for the network to hold a memory trace of external inputs up to the timescale of STF, and this information can be used by the neural system for executing various memory-based operations, such as operating the working memory. We have tested this idea in a population decoding task and found that the error is indeed decreased. This is due to the determination of the network response by both the instantaneous value and the history of external inputs, which effectively averages out temporal fluctuations.
These computational advantages of dynamical synapses lead to the following implications for the modeling of neural systems. First, they shed light on the long-standing debate in the field about the instability of CANNs in the presence of noise. Two aspects of instability have been identified (Wu & Amari, 2005; Seung, Lee, Reis, & Tank, 2000). One is the structural instability, which refers to the argument that network components in reality, such as the neuronal synapses, are unlikely to be as perfect as mathematically required in CANNs. A small amount of discrepancy in the network structure can considerably distort the structure of the state space, destabilizing the bump state after the stimulus is removed. The other instability refers to the computational sensitivity of the network to input noises. Because of neutral stability, the bump position is very susceptible to fluctuations in external inputs, rendering the network decoding unreliable. We have shown that STF can largely improve the computational robustness of CANNs by averaging out the temporal fluctuations in inputs. Similarly, STF can overcome the structural susceptibility of CANNs. With STF, the neuronal connections around the bump area are temporarily strengthened, which effectively stabilizes the bump on the timescale of STF (M. Tsodyks & D. Hansel, personal communication, 2011). Another mechanism in a similar spirit is the reduction of the inhibition strength around the bump area (Carter & Wang, 2007).
Second, STD and STF should be dominant in different areas of the brain. We have investigated the impact of STD and STF on the tracking performance of CANNs. There is, in general, a trade-off between the stability of bump states and the tracking performance of the network. STD increases the mobility of bump states and, hence, the tracking speed of the network, whereas STF has the opposite effect. These differences predict that in cortical areas where time-varying stimuli, such as the head direction and the moving direction of objects, are encoded, STD should have a stronger effect than STF. On the other hand, in cortical areas where the robustness of bump states (i.e., the decoding accuracy of stimuli) is preferred, STF should have a stronger effect.
Third, STD and STF consume different levels of energy and operate on different timescales. We have shown that both STD and STF can generate temporal memories, but they achieve this in quite different ways: in STD, the memory is held in the prolonged neural activities, whereas in STF, it is held in the facilitated neuronal connections. Mongillo et al. (2008) proposed that with STF, neurons may not even have to be active after the stimulus is removed; the facilitated neuronal connections, mediated by the elevated calcium residue, are sufficient to carry out the memory retrieval. In our model, this is equivalent to setting the network in the parameter regime without static bump solutions, or in the regime with static bump solutions but with the external stimulus presented for such a short time that neuronal interactions cannot be fully facilitated. Thus, taking into account the energy consumption associated with neural firing, the STF-based mechanism for short-term memory has the advantage of being economically efficient. The STD-based mechanism, however, has the desirable property of enabling the stimulus information to be propagated to other cortical areas, since neural firing is necessary for signal transmission; this is critical in the early information pathways. Furthermore, the time durations required for eliciting STF- and STD-based memory differ significantly: the former needs a stimulus to be presented for a time on the order of τf so that neuronal interactions are sufficiently facilitated, whereas the latter requires only a transient appearance of the stimulus. This difference implies that the two memory mechanisms may have different applications in neural systems.
In summary, we have revealed that STP can play very valuable roles in neural information processing, including achieving temporal memory, improving decoding accuracy, enhancing tracking performance, and stabilizing CANNs. We have also shown that STD and STF tend to have different impacts on the network dynamics. These results, together with the fact that STP displays large diversity in the neural cortex, suggest that the brain may employ a strategy of weighting STD and STF differentially depending on the computational task. In this study, for simplicity of analysis, we have explored the effects of STD and STF separately. In practice, a proper combination of STD and STF can make the network exhibit new and interesting behaviors and implement computationally desirable properties. For instance, a CANN with both STD and STF, with the timescale of the former shorter than that of the latter, can hold bump states for a period of time before shifting the memory to the facilitated neural connections. This enables the network to achieve both goals of conveying the stimulus information to other cortical areas and holding the memory cheaply. Alternatively, the network may have the timescale of STD longer than that of STF, so that the network can produce improved encoding of external stimuli and also shut off bump activities easily. We will explore these issues in the future.
Appendix A: Consistency with the Model of Tsodyks, Pawelzik, and Markram
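For readers unfamiliar with that model, the spike-driven form of the Tsodyks–Markram short-term plasticity dynamics can be sketched as follows. Variable names and parameter values here are illustrative choices for a facilitation-dominated synapse, not those used in this appendix.

```python
import numpy as np

# Minimal sketch of Tsodyks-Markram short-term plasticity:
#   x -- fraction of available resources (depleted by spikes: STD)
#   u -- release probability (elevated by spikes: STF)
def tm_synapse(spike_times, U=0.05, tau_d=100.0, tau_f=600.0):
    """Return the efficacy u*x at each spike of a spike train (times in ms)."""
    x, u = 1.0, U
    t_prev = spike_times[0]
    efficacies = []
    for t in spike_times:
        dt = t - t_prev
        # Between spikes: x recovers toward 1, u decays toward U.
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)
        u = U + (u - U) * np.exp(-dt / tau_f)
        # At the spike: facilitate first, then transmit and deplete.
        u = u + U * (1.0 - u)
        efficacies.append(u * x)
        x = x * (1.0 - u)
        t_prev = t
    return np.array(efficacies)

# A regular 20 Hz train; with small U and tau_f > tau_d, facilitation
# dominates the early part of the train.
eff = tm_synapse(np.arange(0.0, 500.0, 50.0))
```

The competition between the recovery of x (timescale tau_d) and the decay of u (timescale tau_f) is what makes a given synapse depressing or facilitating.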
Appendix B: The Perturbation Approach for Solving the Dynamics of CANNs with STD
Below, we analyze the dynamics of the bump in successive orders of perturbation, where the perturbation order is defined by the highest value of the index k involved in the approximation. We start with the zeroth-order perturbation to describe the behavior of the static bumps, whose profile is effectively gaussian. We then move on to the first-order perturbation, which includes asymmetric distortions. Since spontaneous movements of the bumps are induced by asymmetric profiles of the synaptic depression, the first-order perturbation is able to provide the solution of the moving bump. Proceeding to the second-order perturbation, we allow the width of the bump to vary and demonstrate that this is important in explaining the lifetime of the plateau state. Tracking behaviors are also predicted at the first-order perturbation.
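Schematically, this hierarchy amounts to expanding the bump profile in gaussian–Hermite basis functions centered at the bump position z(t); the notation below is ours and may differ in detail from the main text:

```latex
u(x,t) \;=\; \sum_{k=0}^{\infty} a_k(t)\, v_k\bigl(x - z(t)\bigr),
\qquad
v_k(x) \;\propto\; H_k\!\left(\frac{x}{\sqrt{2}\,a}\right)
\exp\!\left(-\frac{x^2}{4a^2}\right),
```

where H_k is the kth Hermite polynomial and a is the bump width. Truncating at k = 0 keeps only the gaussian (static) bump, k = 1 adds the antisymmetric distortion responsible for bump motion, and k = 2 allows the width to vary.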
Appendix C: Static Bump: Lowest-Order Perturbation
Introducing rescaled variables, we get equation 3.4.
Rescaling the variables u, k, β, and A, we get equation 3.5.
C.1 Stability of the Static Bump
Appendix D: Moving Bump: Lowest-Order Perturbation
We substitute equations 3.7 and 3.8 into equations 2.1 and 2.4. Equation 2.1 then becomes an equation containing the terms exp[−(x − vt)²/4a²] and exp[−(x − vt)²/4a²](x − vt)/a, after making use of the projections onto these basis functions.
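The projection step can be illustrated numerically. The sketch below (names and values are ours, not the appendix's) decomposes a profile f(x) onto the two lowest basis functions, a gaussian and its antisymmetric counterpart; by parity the two are mutually orthogonal, so the coefficients separate.

```python
import numpy as np

# Basis functions on a grid wide enough that boundary terms are negligible.
a = 0.5
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

b0 = np.exp(-x**2 / (4 * a**2))              # symmetric (height) mode
b1 = (x / a) * np.exp(-x**2 / (4 * a**2))    # antisymmetric (position) mode

def project(f):
    """Coefficients of f on b0 and b1 via L2 inner products."""
    c0 = np.sum(f * b0) / np.sum(b0 * b0)
    c1 = np.sum(f * b1) / np.sum(b1 * b1)
    return float(c0), float(c1)

# A slightly shifted gaussian acquires a nonzero antisymmetric component;
# in the full analysis this component is what drives the bump position.
f = np.exp(-(x - 0.1)**2 / (4 * a**2))
c0, c1 = project(f)
```

For a small shift s, the antisymmetric coefficient grows linearly with s, which is why the first-order perturbation captures the positional dynamics of the bump.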
D.1 Stability of the Moving Bump
We compare the dynamical stability of the ansatz in equations 3.7 and 3.8 with simulation results. As shown in Figure 17, the ansatz overestimates the region of stability. The major cause is that the width of the synaptic depression profile is restricted to the value a. While this provides a self-consistent solution when STD is weak, it is no longer valid when STD is strong: due to the slow recovery of synaptic depression, its profile leaves a long, partially recovered tail behind the moving bump, which reduces the stability of the bump. This requires us to consider the second-order perturbation, which takes into account variation of the width of the STD profile. As shown in Figure 17, the second-order perturbation yields a phase boundary much closer to the simulation results when STD is weak. However, as shown in the inset of Figure 17, the discrepancy increases when STD is stronger, and higher-order corrections are required.
Appendix E: Decoding in CANNs with STF
We acknowledge the valuable comments of Terry Sejnowski on this work. This study is supported by the Research Grants Council of Hong Kong (grant numbers 604008 and 605010), the 973 program of China (grant number 2011CBA00406), and the Natural Science Foundation of China (grant number 91132702).