Abstract

Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network that is initially stimulated to an active state decays to the silent state very slowly on the timescale of STD rather than on that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.

1.  Introduction

Experimental data have consistently revealed that the neuronal connection weight, which models the efficacy of the firing of a presynaptic neuron in modulating the state of a postsynaptic one, varies on short timescales, ranging from hundreds to thousands of milliseconds (Stevens & Wang, 1995; Markram & Tsodyks, 1996; Dobrunz & Stevens, 1997; Markram, Wang, & Tsodyks, 1999). This is called short-term plasticity (STP). Two types of STP, with opposite effects on the connection efficacy, have been observed: short-term depression (STD) and short-term facilitation (STF). STD is caused by the depletion of available resources when neurotransmitters are released from the axon terminal of the presynaptic neuron during signal transmission (Stevens & Wang, 1995; Markram & Tsodyks, 1996; Dayan & Abbott, 2001). STF is caused by the influx of calcium into the presynaptic terminal after spike generation, which increases the probability of releasing neurotransmitters.

Computational studies on the impact of STP on network dynamics have strongly suggested that STP can play many important roles in neural computation. For instance, cortical neurons receive presynaptic signals with firing rates ranging from less than 1 Hertz to more than 200 Hertz. It was suggested that STD provides a dynamic gain control mechanism that allows equal fractional changes in the firing rates of rapidly and slowly firing afferents to produce comparable postsynaptic responses, realizing Weber's law (Tsodyks & Markram, 1997; Abbott, Varela, Sen, & Nelson, 1997). In addition, STD acting on recurrent connections enables recurrent networks to generate population spikes in response to external inputs, and computations can be performed based on these population spikes (Tsodyks, Uziel, & Markram, 2000; Loebel & Tsodyks, 2002).

Another role played by synaptic depression was proposed by Levina, Herrmann, and Geisel (2007). In neuronal systems, critical avalanches are believed to bring about optimal computational capabilities and are observed experimentally. Synaptic depression enables a feedback mechanism so that the system can be maintained at a critical state, making the self-organized critical behavior robust (Levina et al., 2007). Herding, a computational algorithm reminiscent of the neuronal dynamics with synaptic depression, was recently found to have a similar effect on the complexity of information processing (Welling, 2009). STP was also recently thought to play a role in the way a neuron estimates the membrane potential information of the presynaptic neuron based on the spikes it receives (Pfister, Dayan, & Lengyel, 2010).

Concerning the computational significance of STF, a recent work proposed an interesting idea for achieving working memory in the prefrontal cortex (Mongillo, Barak, & Tsodyks, 2008). The residual calcium of STF is used as a buffer to facilitate synaptic connections, so that inputs in a subsequent delay period can be used to retrieve the information encoded by the facilitated synaptic connections. The STF-based memory mechanism has the advantage of not having to rely on persistent neural firing during the time the working memory is functioning, and hence is energetically more efficient.

From the computational point of view, the timescale of STP resides between fast neural signaling (in the order of milliseconds) and rapid learning (in the order of minutes or above), which is the timescale of many important temporal processes occurring in our daily lives, such as the passive holding of a temporal memory of objects coming into our visual field (the so-called iconic sensory memory) or the active use of the memory trace of recent events for motion control. Thus, STP may serve as a substrate for neural systems manipulating temporal information on the relevant timescales. STP has been observed in many parts of the cortex and also exhibits large diversity in different cortical areas, suggesting that the brain may employ a strategy of weighting STD and STF differently depending on the computational purpose.

In this study, we explore the potential roles of STP in processing information derived from external stimuli, an issue of fundamental importance yet inadequately investigated so far. For ease of exposition, we use continuous attractor neural networks (CANNs) as our working model, but our main results are qualitatively applicable to general cases. CANNs are recurrent networks that can hold a continuous family of localized active states (Amari, 1977). Neutral stability is a key property of CANNs, which enables neural systems to update memory states easily and track time-varying stimuli smoothly. CANNs have been successfully applied to describe the generation of persistent neural activities (Wang, 2001), the encoding of continuous stimuli such as the orientation, the head direction and the spatial location of objects (Ben-Yishai, Lev Bar-Or, & Sompolinsky, 1995; Zhang, 1996; Samsonovich & McNaughton, 1997), and a framework for implementing efficient population decoding (Deneve, Latham, & Pouget, 1999).

When STP is included in a CANN, the dynamics of the network is governed by two timescales. The time constant of STP is much slower than that of neural signaling (100–1000 ms versus 1–10 ms). The interplay between the fast and the slow dynamics causes the network to exhibit rich dynamical behaviors, laying the foundation for the neural system to implement complicated functions.

In CANNs with STD, various intrinsic behaviors have been reported, including damped oscillations (Tsodyks, Pawelzik, & Markram, 1998), periodic and aperiodic dynamics (Tsodyks et al., 1998), state hopping with transient population spikes (Holcman & Tsodyks, 2006), traveling fronts and pulses (Pinto & Ermentrout, 2001; Bressloff, Folias, Prat, & Li, 2003; Folias & Bressloff, 2004; Kilpatrick & Bressloff, 2010), breathers and pulse-emitting breathers (Bressloff et al., 2003; Folias & Bressloff, 2004), spiral waves (Kilpatrick & Bressloff, 2009), rotating bump states (York & van Rossum, 2009; Igarashi, Oizumi, Otsubo, Nagata, & Okada, 2009), and self-sustained non-periodic activities (Stratton & Wiles, 2010). Here, we focus on those network states relevant to the processing of stimuli in CANNs, including static, moving, and metastatic bumps (Wu & Amari, 2005; Fung, Wong, & Wu, 2010). More significantly, we find that with STD, the network state can display slow-decaying plateau behaviors; that is, a network that is initially stimulated to an active state by a transient input decays to the silent state very slowly, on the timescale of STD relaxation rather than on the timescale of neural signaling. This is a very interesting property. It implies that STD can provide a way for the neural system to maintain sensory memory for a duration unachievable by the signaling of single neurons and to shut off the network activity of sensory memory naturally. The latter has been a challenging technical issue in the study of theoretical neuroscience (Gutkin, Laing, Colby, Chow, & Ermentrout, 2001).

With STF, neuronal connections become strengthened during the presence of an external stimulus. This stimulus-specific facilitation lasts for a period on the timescale of STF and provides a way for the neural system to hold a memory trace of external inputs (Mongillo et al., 2008). This information can be used by the neural system for various computational tasks. To demonstrate this idea, we consider CANNs as a framework for implementing population decoding (Deneve et al., 1999; Wu, Amari, & Nakahara, 2002). In the presence of STF, the network response is determined not only by the instant input value but also by the history of external inputs (the latter being mediated by the facilitated neuronal interactions). Therefore, temporal fluctuations in external inputs can be largely averaged out, leading to improved decoding results.

In general, STD and STF tend to have opposite effects on network dynamics (Torres, Cortes, Marro, & Kappen, 2007). The former increases the mobility of network states, whereas the latter increases their stability. Enhanced mobility and stability can contribute positively to different computational tasks. Enhanced stability mediated by STF can improve the computational and behavioral stability of CANNs. To demonstrate that enhanced mobility does have a positive role in information processing, we investigate a computational task in which the network tracks time-varying stimuli. We find that STD increases the tracking speed of a CANN. Interestingly, for strong STD, the network state can even overtake the moving stimulus, reminiscent of the anticipative responses of head direction and place cells (Blair & Sharp, 1995; O'Keefe & Recce, 1993; Romani & Tsodyks, in press).

The rest of the letter is organized as follows. After introducing the models and methods in section 2, we discuss the intrinsic properties of CANNs in the absence of external stimuli by studying their phase diagram in section 3. In sections 4 to 6, we study the network behavior in the presence of various stimuli. In section 4, we consider the aftereffects of a transient stimulus and find that sensory memories can persist for a desirable duration and then decay gracefully. In section 5, we consider the response of the network to a noisy stimulus and find that the accuracy in population decoding can be enhanced. In section 6, we consider the response of the network to a moving stimulus and find that the tracking performance is improved by the enhanced mobility of the network states. The letter ends with conclusions and discussions in section 7. Our preliminary results on the effects of STD have been reported in Fung et al. (2010).

2.  Models and Methods

We consider a one-dimensional continuous stimulus x encoded by an ensemble of neurons. For example, the stimulus may represent a moving direction, an orientation, or a general continuous feature of objects extracted by the neural system. We consider the case where the range of possible values of the stimulus is much larger than the range of neuronal interactions. We can thus effectively take x ∈ (−∞, ∞) in our analysis. In simulations, however, we set the stimulus range to be −L/2 < x ≤ L/2 and have N neurons uniformly distributed in the range obeying a periodic boundary condition.

Let u(x, t) be the current at time t in the neurons whose preferred stimulus is x. The dynamics of u(x, t) is determined by the external input Iext(x, t), the network input from other neurons, and its own relaxation. It is given by
τs ∂u(x, t)/∂t = −u(x, t) + ρ ∫ dx′ J(x, x′) p(x′, t)[1 + f(x′, t)] r(x′, t) + Iext(x, t),
2.1
where τs is the synaptic time constant, which is typically of the order of 2 to 5 ms, and ρ the neural density. r(x, t) is the firing rate of neurons, which increases with the synaptic input but saturates in the presence of global activity-dependent inhibition. A solvable model that captures these features is given by Deneve et al. (1999) and Wu et al. (2002),
r(x, t) = Cr u(x, t)² / [1 + kCrρ ∫ dx′ u(x′, t)²],
2.2
where k is a positive constant controlling the strength of global inhibition and Cr is a constant whose dimension is that of r(x, t) u(x, t)^−2. This type of global inhibition can be achieved by shunting inhibition (Heeger, 1992; Hao, Wang, Dan, Poo, & Zhang, 2009).
J(x, x′) is the baseline neural interaction from x′ to x when no STP exists. In our solvable model, we choose J(x, x′) to be of the gaussian form with an interaction range a,
J(x, x′) = [J0/(√(2π) a)] exp[−(x − x′)²/(2a²)],
2.3
where J0 is a constant. J(x, x′) is translationally invariant, in the sense that it is a function of xx′ rather than x or x′. This is the key to generating the neutral stability of CANNs.
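As a numerical aside, the translational invariance of this kernel is easy to verify on the periodic grid used in simulations: discretized with the wrapped distance, J becomes a circulant matrix. The sketch below (our own illustration; the values of N, L, a, and J0 are arbitrary choices) checks this:

```python
import numpy as np

def gaussian_kernel(N=128, L=2 * np.pi, a=0.5, J0=1.0):
    """Gaussian interaction J(x, x') on a ring, using the wrapped distance."""
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    d = x[:, None] - x[None, :]
    d = (d + L / 2) % L - L / 2          # wrap differences into [-L/2, L/2)
    return J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))

J = gaussian_kernel()
# Translational invariance: every row is a cyclic shift of the first row,
# so J is circulant and depends only on x - x'.
assert all(np.allclose(J[i], np.roll(J[0], i)) for i in range(J.shape[0]))
```

Because each row is a shifted copy of the same profile, the network input is a convolution, which is what permits the family of bump states located anywhere on the ring.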
The variable p(x, t) represents the presynaptic STD effect, which has a maximum value of 1 and decreases with the firing rate of the neurons (Tsodyks et al., 1998; Zucker & Regehr, 2002). Its dynamics is given by
τd ∂p(x, t)/∂t = 1 − p(x, t) − τd β p(x, t)[1 + f(x, t)] r(x, t),
2.4
where τd is the time constant for synaptic depression, and the parameter β controls the depression effect due to neuronal firing.
The variable f(x, t) represents the presynaptic STF effect, which increases with the firing rate of the neurons but saturates at a maximum value fmax. Its dynamics is given by
τf ∂f(x, t)/∂t = −f(x, t) + τf α [fmax − f(x, t)] r(x, t),
2.5
where τf is the time constant for synaptic facilitation, and the parameter α controls the facilitation effect due to neuronal firing.
Dynamical equations 2.4 and 2.5 are consistent with the phenomenological models of STD and STF fitted to experimental data (Tsodyks et al., 1998; see appendix  A). From equations 2.4 and 2.5, we can calculate the steady-state values of p and f, which are
p = 1/[1 + τdβ(1 + f)r],
2.6
f = fmax τfαr/(1 + τfαr).
2.7
Hence, in the high-frequency limit, τfαr ≫ 1, we have f ≈ fmax, which can be regarded as a constant, and p = 1/[1 + τdβ(1 + fmax)r]. In this case, we need to consider only the effect of STD. In the low-frequency limit, τdβ(1 + f)r ≪ 1, so p ≈ 1, and we need to consider only the effect of STF. Note, however, that the terms “high-frequency limit” and “low-frequency limit” are used figuratively; the actual limits depend on the other parameters mentioned above.
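These two limits can be checked directly from the steady-state expressions. The sketch below (our own illustration; the parameter values and millisecond time units are arbitrary) evaluates p and f at a high and a low presynaptic rate:

```python
# Steady-state STP variables at a fixed presynaptic rate r; values are
# illustrative only (time in ms, rates in 1/ms).
tau_f, tau_d, f_max = 500.0, 500.0, 1.0
alpha, beta = 1.0, 0.01

def steady_f(r):
    # f = f_max * tau_f*alpha*r / (1 + tau_f*alpha*r)
    x = tau_f * alpha * r
    return f_max * x / (1.0 + x)

def steady_p(r, f):
    # p = 1 / (1 + tau_d*beta*(1+f)*r)
    return 1.0 / (1.0 + tau_d * beta * (1.0 + f) * r)

# High-frequency limit (200 Hz): f approaches f_max, and p is well
# approximated by replacing f with f_max.
r_hi = 0.2
f_hi = steady_f(r_hi)
p_hi = steady_p(r_hi, f_hi)
p_hi_approx = 1.0 / (1.0 + tau_d * beta * (1.0 + f_max) * r_hi)
assert f_hi > 0.98 * f_max
assert abs(p_hi - p_hi_approx) < 0.01 * p_hi_approx

# Low-frequency limit (1 Hz): depression is negligible, p is close to 1.
r_lo = 0.001
p_lo = steady_p(r_lo, steady_f(r_lo))
assert p_lo > 0.95
```

With these illustrative values, only one of the two STP variables deviates appreciably from its resting value in each regime, which is what justifies treating STD and STF separately in the two limits.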

Our theoretical analysis of the network dynamics is based on the observations that the stationary states of the network, as well as the profile of STP across all neurons, can be well approximated as gaussian-shaped bumps, and the state change of the network, and hence the profile of STP, can be well described by distortions of the gaussian bump in various forms. We can therefore use a perturbation approach developed by Fung et al. (2010) to solve the network dynamics analytically.

It is instructive for us to first review the network dynamics when no STP is included. This is done by setting β = 0 in equation 2.4 and α = 0 in equation 2.5, so that p(x, t) = 1 and f(x, t) = 0 for all t. In this case, the network can support a continuous family of stationary states when the global inhibition is not too strong. These steady states are
ũ(x|z) = u0 exp[−(x − z)²/(4a²)],
2.8
r̃(x|z) = r0 exp[−(x − z)²/(2a²)],
2.9
where u0 = [1 + √(1 − k/kc)] J0/(4√π ak), r0 = √2 u0/(ρJ0), and kc = ρCrJ0²/(8√(2π) a). These stationary states are translationally invariant among themselves and have the gaussian shape with a free parameter z representing the peak position of the gaussian bumps. They exist for 0 < k < kc, where kc is the critical inhibition strength above which only silent states with u0 = 0 exist.
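The stationary bump can also be obtained by direct numerical integration of the network dynamics without STP. The following sketch (our own discretization on a ring; kc is taken as ρCrJ0²/(8√(2π)a), an assumed form, and all parameter choices are illustrative) relaxes the network to its steady state and compares the bump height with the gaussian-ansatz prediction:

```python
import numpy as np

# Discretize the ring x in [-pi, pi) with N neurons; illustrative parameters.
N, L, a, J0, Cr = 128, 2 * np.pi, 0.5, 1.0, 1.0
rho, dx = N / L, L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
d = (x[:, None] - x[None, :] + L / 2) % L - L / 2       # wrapped distance
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))

k_c = rho * Cr * J0**2 / (8 * np.sqrt(2 * np.pi) * a)   # assumed critical inhibition
k = 0.5 * k_c                                           # below the critical strength

def firing_rate(u):
    # Divisive global inhibition (no STP here).
    return Cr * np.maximum(u, 0.0)**2 / (1.0 + k * Cr * rho * dx * np.sum(u**2))

# Euler integration of tau_s du/dt = -u + rho * int J r, no external input.
tau_s, dt = 1.0, 0.05
u = 0.3 * np.exp(-x**2 / (4 * a**2))    # initial guess above the unstable branch
for _ in range(4000):                   # 200 tau_s
    u += dt / tau_s * (-u + rho * dx * J @ firing_rate(u))

height = u.max()
# Gaussian-ansatz prediction for the stable-branch bump height.
u0 = (1 + np.sqrt(1 - k / k_c)) * J0 / (4 * np.sqrt(np.pi) * a * k)
assert abs(height - u0) / u0 < 0.1
```

Starting from any sufficiently strong gaussian profile, the network relaxes to a bump whose height matches the ansatz prediction; starting below the unstable branch, it instead decays to the silent state.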
Because of the translational invariance of the neuronal interactions, the dynamics of CANNs exhibits unique features. Fung et al. (2010) have shown that the wave functions of the quantum harmonic oscillators can well describe the different distortion modes of a bump state. For instance, during the process of tracking an external stimulus, the synaptic input u(x, t) can be written as
u(x, t) = Σn an(t) vn(x|z(t)),
2.10
where vn(x|z(t)) are the wave functions of the quantum harmonic oscillator (see Figure 1),
vn(x|z) = (√(2π) a 2^n n!)^(−1/2) Hn((x − z)/(√2 a)) exp[−(x − z)²/(4a²)], where Hn is the nth-order Hermite polynomial.
2.11
These functions have clear physical meanings, corresponding to distortions in the height, position, width, skewness and other higher-order features of the gaussian bump (see Figure 1). We can use a perturbation approach to solve the network dynamics effectively, with each distortion mode characterized by an eigenvalue determining its rate of evolution in time. A key property of CANNs is that the translational mode has a zero eigenvalue, and all other distortion modes have negative eigenvalues for k < kc. This implies that the gaussian bumps are able to track changes in the position of the external stimuli by continuously shifting the position of the bumps, with other distortion modes affecting the tracking process only in the transients. An example of the tracking process is shown in Figure 2, where we consider an external stimulus with a gaussian profile given by
Iext(x, t) = A exp[−(x − z0)²/(4a²)].
2.12
The stimulus is initially centered at z = 0, pinning the center of a gaussian neuronal response at the same position. At time t = 0, the stimulus shifts its center from z0 = 0 to z0 = 3a abruptly. The bump moves toward the new stimulus position and catches up with the shift of the stimulus after a certain time, referred to as the reaction time.
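The tracking experiment of Figure 2 can be reproduced qualitatively with a direct simulation (our own sketch; the parameter values below are illustrative rather than those of the figure):

```python
import numpy as np

# Network on a ring, no STP; illustrative parameters.
N, L, a, J0, Cr, tau_s, dt = 128, 2 * np.pi, 0.5, 1.0, 1.0, 1.0, 0.05
rho, dx = N / L, L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
d = (x[:, None] - x[None, :] + L / 2) % L - L / 2
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))
k = 0.5 * rho * Cr * J0**2 / (8 * np.sqrt(2 * np.pi) * a)  # half the assumed k_c

def step(u, z0, A):
    r = Cr * np.maximum(u, 0.0)**2 / (1.0 + k * Cr * rho * dx * np.sum(u**2))
    I = A * np.exp(-(x - z0)**2 / (4 * a**2))
    return u + dt / tau_s * (-u + rho * dx * J @ r + I)

def center(u):
    # Circular mean of u^2: a robust bump-position estimate on the ring.
    return np.angle(np.sum(u**2 * np.exp(1j * x)))

u = 0.3 * np.exp(-x**2 / (4 * a**2))
for _ in range(2000):                 # hold the stimulus at z0 = 0 (100 tau_s)
    u = step(u, 0.0, 0.1)
for _ in range(6000):                 # abruptly shift the stimulus to z0 = 3a
    u = step(u, 3 * a, 0.1)
assert abs(center(u) - 3 * a) < 0.15  # the bump has caught up with the stimulus
```

The time the bump needs to reach the new stimulus position plays the role of the reaction time discussed in the text.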
Figure 1:

(a–d) The first four distortion modes of the bump state. (e–h) Their effects of producing distortions, respectively, in the height, position, width, and skewness of the gaussian bump. Solid and dashed lines represent distorted and undistorted bumps, respectively.

Figure 2:

(a) The neural response profile tracks the change in position of the external stimulus from z0/a = 0 to 3 at t = 0. Parameters: a = 0.5, ρCrJ0A = 7.979. (b) The profile of u(x, t) at t/τs = 0, 1, 2, …, 10 during the tracking process in panel a.

We can generalize the perturbation approach developed by Fung et al. (2010) to study the dynamics of CANNs with dynamical synapses. We will present the detailed analysis only for the case of STD. Extension to the case of STF is straightforward.

Similar to the synaptic input u(x, t), the profile of STD can be expanded in terms of the distortion modes,
p(x, t) = 1 − Σn bn(t) wn(x|z(t)),
2.13
where wn(x|z) is given by
wn(x|z) = (√π a 2^n n!)^(−1/2) Hn((x − z)/a) exp[−(x − z)²/(2a²)].
2.14
Note that the width of wn(x|z) is 1/√2 times that of vn(x|z), due to the appearance of r(x, t) ∝ u(x, t)² in equation 2.4.

Substituting equations 2.10 and 2.13 into equations 2.1 and 2.4, and using the orthonormality and completeness of the distortion modes, we get the dynamical equations for the coefficients an(t) and bn(t). The details are presented in appendix  B.
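The orthonormality invoked here can be verified numerically. The sketch below implements harmonic-oscillator mode functions with gaussian envelopes of widths matching u(x, t) and r(x, t) (our assumed normalization) and checks that each family is orthonormal on a fine grid:

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite(n, y):
    """Physicists' Hermite polynomial H_n via the standard recurrence."""
    h_prev, h = np.ones_like(y), 2 * y
    if n == 0:
        return h_prev
    for m in range(1, n):
        h_prev, h = h, 2 * y * h - 2 * m * h_prev
    return h

a, z = 0.5, 0.0
xs = np.linspace(-8, 8, 8001)
dxs = xs[1] - xs[0]

def v(n, x):   # envelope exp(-xi^2/(4a^2)), the width of u(x, t)
    xi = x - z
    norm = sqrt(sqrt(2 * pi) * a * 2**n * factorial(n))
    return hermite(n, xi / (sqrt(2) * a)) * np.exp(-xi**2 / (4 * a**2)) / norm

def w(n, x):   # envelope exp(-xi^2/(2a^2)), 1/sqrt(2) times narrower
    xi = x - z
    norm = sqrt(sqrt(pi) * a * 2**n * factorial(n))
    return hermite(n, xi / a) * np.exp(-xi**2 / (2 * a**2)) / norm

for basis in (v, w):
    gram = np.array([[np.sum(basis(m, xs) * basis(n, xs)) * dxs
                      for n in range(5)] for m in range(5)])
    assert np.allclose(gram, np.eye(5), atol=1e-5)   # orthonormal family
```

Orthonormality is what allows the coefficients an(t) and bn(t) to be extracted by simple projections of u(x, t) and p(x, t) onto the mode functions.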

The peak position z(t) of the bump is determined from the self-consistent condition,
∫ dx u(x, t)(x − z) exp[−(x − z)²/(4a²)] = 0.
2.15

Truncating the perturbation expansion at increasingly high orders corresponds to the inclusion of increasingly complex distortions, and hence provides increasingly accurate descriptions of the network dynamics. As confirmed in the subsequent sections, the perturbative approach is in excellent agreement with simulation results. The agreement is especially remarkable when the STD strength is weak and the lowest few orders are already sufficient to explain the dynamical features. The agreement is less satisfactory when STD is strong, and the perturbative approach typically overestimates the stability of the moving bump. This is probably due to the considerable distortion of the gaussian profile of the synaptic depression when STD is strong.

3.  Phase Diagrams of CANNs with STP

We first study the impact of STP on the stationary states of CANNs when no external input is applied. For convenience of analysis, we explore the effects of STD and STF separately. This corresponds to the limits of high or low neuronal firing frequencies or the cases where only one type of STP dynamics is significant.

3.1.  The Phase Diagram of CANNs with STD.

We set α = 0 in equation 2.5 to turn off STF. In the presence of STD, CANNs exhibit new and interesting dynamical behaviors. Apart from the static bump state, the network also supports spontaneously moving bump states. Examining the steady-state solutions of equations 2.1 and 2.4, we find that u0 has the same dimension as ρCrJ0u0², and 1 − p(x, t) scales as τdβCru0². Hence we introduce the dimensionless parameters k̃ ≡ k/kc and β̃ ≡ τdβ/(ρ²J0²Cr). The phase diagram obtained by numerical solutions of the network dynamics is shown in Figure 3.

Figure 3:

Phase diagram of the network states with STD. Symbols: numerical solutions. Dashed line: equation 3.6. Dotted line: equation 3.9. Solid line: gaussian approximation using the 11th-order perturbation of the STD coefficient. Point P: the working point for Figures 6 and 7. Parameters: τd/τs = 50, a = 0.5/6, range of the network x ∈ [−π, π).

We first note that the synaptic depression and global inhibition play the same role in reducing the amplitude of the bump states. This can be seen from the steady-state solution of u(x, t):
u(x) = ρ ∫ dx′ J(x, x′) Cr u(x′)² / [1 + kCrρ ∫ dx″ u(x″)² + τdβCr u(x′)²].
3.1
The third term in the denominator of the integrand arises from STD and plays the role of a local inhibition that is strongest where the neurons are most active. Hence we see that the silent state with u(x, t) = 0 is the only stable state when either k̃ or β̃ is large.

When STD is weak, the network behaves similarly to CANNs without STD; that is, the static bump state is present for k̃ up to near 1. However, when β̃ increases, a state with the bump spontaneously moving at a constant velocity comes into existence. Such moving states have been predicted in CANNs (York & van Rossum, 2009; Kilpatrick & Bressloff, 2010) and may be associated with the traveling wave behaviors widely observed in the neocortex (Wu, Huang, & Zhang, 2008). At an intermediate range of β̃, the static and moving states coexist, and the final state of the network depends on the initial condition. As β̃ increases further, static bumps disappear. In the limit of large β̃, only the silent state is present. Below, we use the perturbation approach to analyze these dynamical behaviors.

3.1.1.  Zeroth Order: The Static Bump.

The zeroth-order perturbation is applicable to the solution of the static bump, since the profile of the bump remains effectively gaussian in the presence of synaptic depression. Hence, when STD is weak and for a ≪ L, we propose the following gaussian approximations:
u(x, t) = u0(t) exp[−(x − z)²/(4a²)],
3.2
p(x, t) = 1 − p0(t) exp[−(x − z)²/(2a²)].
3.3
As derived in appendix  C, the dynamical equations for u0 and p0 are given by
τs dũ/dt = −ũ + [ũ²/(1 + k̃ũ²/8)][1/√2 − √(2/7) p0] + Ã,
3.4
τd dp0/dt = −p0 + [β̃ũ²/(1 + k̃ũ²/8)][1 − √(2/3) p0],
3.5
where ũ ≡ ρCrJ0u0 is the dimensionless bump height and à ≡ ρCrJ0A is the dimensionless stimulus strength. For à = 0, the steady-state solution of ũ and p0 and its stability against fluctuations of ũ and p0 are described in appendix  C. We find that stable solutions exist when
k̃ ≤ (1 − 2p0/√7)²,
3.6
where p0 is the steady-state solution of equations 3.4 and 3.5. The boundary of this region is shown as a dashed line in Figure 3. Unfortunately, this line is not easily observed in numerical solutions, since the static bump is unstable against fluctuations that are asymmetric with respect to its central position. Although the bump is stable against symmetric fluctuations, asymmetric fluctuations can displace its position and eventually convert it to a moving bump. This will be considered in the first-order perturbation in section 3.1.2.

3.1.2.  First Order: The Moving Bump.

When the network bump is moving, the profile of STD lags behind due to its slow dynamics, and this induces an asymmetric distortion in the profile of STD. Figure 4 illustrates this behavior. Comparing the static and moving bumps shown in Figures 4a and 4b, one can see that the profile of a moving bump is characterized by the synaptic depression lagging behind the bump. This is because neurons tend to be less active at locations with low values of p(x, t), causing the bump to move away from the locations of strong synaptic depression. In turn, the region of synaptic depression tends to follow the bump. However, if the timescale of synaptic depression is large, the recovery of the depressed region is slowed, and the region is unable to catch up with the bump motion. Thus, the bump starts moving spontaneously. The same mechanism underlies the anticipative responses modeled in neural systems (Blair & Sharp, 1995; O'Keefe & Recce, 1993; Romani & Tsodyks, in press).
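The lag of the depressed region can be seen in a direct simulation in which a weak stimulus drags the bump at constant speed (our own sketch; parameters are illustrative, with STD kept weak so that the bump survives). The minimum of p(x, t) is found to trail the maximum of u(x, t):

```python
import numpy as np

# CANN with weak STD on a ring; illustrative parameters.
N, L, a, J0, Cr, tau_s, tau_d, dt = 256, 2 * np.pi, 0.5, 1.0, 1.0, 1.0, 50.0, 0.05
rho, dx = N / L, L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
d = (x[:, None] - x[None, :] + L / 2) % L - L / 2
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))
k = 0.5 * rho * Cr * J0**2 / (8 * np.sqrt(2 * np.pi) * a)
beta = 0.005 * rho**2 * J0**2 * Cr / tau_d     # weak rescaled STD strength

def step(u, p, z0):
    r = Cr * np.maximum(u, 0.0)**2 / (1.0 + k * Cr * rho * dx * np.sum(u**2))
    I = 0.1 * np.exp(-(x - z0)**2 / (4 * a**2))
    u_new = u + dt / tau_s * (-u + rho * dx * J @ (p * r) + I)
    p_new = p + dt / tau_d * (1.0 - p - tau_d * beta * p * r)
    return u_new, p_new

u, p = 0.3 * np.exp(-x**2 / (4 * a**2)), np.ones(N)
for _ in range(4000):                  # equilibrate with a static stimulus
    u, p = step(u, p, 0.0)
z0, v = 0.0, 0.005                     # drag the stimulus at constant speed
for _ in range(6000):
    z0 += v * dt
    u, p = step(u, p, z0)

lag = x[np.argmax(u)] - x[np.argmin(p)]   # depression trails the moving bump
assert 0.02 < lag < 1.0
```

The dip in p(x, t) sits behind the activity peak by a distance of roughly the bump speed times the STD recovery time, which is the asymmetry exploited in the first-order perturbation.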

Figure 4:

Neuronal input u(x, t) and the STD coefficient p(x, t) in (a) the static state and (b) the moving state. Parameter: τd/τs = 50.

To incorporate this asymmetry into the network dynamics, we consider the first-order perturbation. However, to facilitate our analysis, we make a further simplification as follows:
u(x, t) = u0(t) exp[−(x − z(t))²/(4a²)],
3.7
p(x, t) = 1 − p0(t) exp[−(x − z(t))²/(2a²)] − p1(t)[(x − z(t))/a] exp[−(x − z(t))²/(2a²)].
3.8
This means that we have restricted the bump profile to the zeroth order. Comparison with the full first-order perturbation shows that the discrepancy is not significant. This is because the synaptic interactions among the neurons effectively maintain the bump profile in a gaussian shape, whereas the STD profile is much more susceptible to asymmetric perturbations.
As described in appendix  D, we obtain four steady-state equations for ũ, p0, p1, and vτs/a in terms of the parameters k̃, β̃, and τs/τd. It is easy to first check that the static bump obtained in appendix  D is also a valid solution by setting v and p1 to 0. We can then study the stability of the static bump against asymmetric fluctuations. This is done by introducing small values of p1 and vτs/a into the static bump solution and considering how they evolve. As shown in appendix  D, the static bump becomes unstable when
formula
3.9
This means that in the region bounded by equations 3.6 and 3.9, the static bump is unstable to asymmetric fluctuations. It is stable (or more precisely, metastable) when it is static, but once it is pushed to one side, it will continue to move along that direction. We call this behavior metastatic. As we shall see, this metastatic behavior is the cause of the enhanced tracking performance.

Next, we consider solutions with nonvanishing p1 and v. We find that real solutions exist only if condition 3.9 is satisfied. This means that as soon as the static bump becomes unstable, the moving bump comes into existence. As shown in Figure 3, the boundary of this region effectively coincides with the numerical solution of the line separating the static and moving phases. In the entire region bounded by equations 3.6 and 3.9, the moving and (meta)static bumps coexist.

We also find that when τd/τs increases, the moving phase expands at the expense of the (pure) static phase. This is because the recovery of the synaptically depressed region becomes increasingly slow, making it harder for the region to catch up with changes in the bump motion, hence sustaining the bump motion.

3.2.  The Phase Diagram of CANNs with STF.

We set β = 0 in equation 2.4 to turn off STD. Compared with STD, STF has qualitatively the opposite effect on the network dynamics. When an external perturbation is applied, the dynamical synapses do not push the neural bump away; instead, they pull the bump back toward its original position. The phase diagram in the space of k̃ and the rescaled STF parameter α̃ ≡ τfα/(ρ²J0²Cr) is shown in Figure 5. When α̃ increases, the range of inhibitory strength allowing for a bump state is enlarged. Note that since STF tends to stabilize the bump states against asymmetric fluctuations, no moving bumps exist. The phase boundary of the static bump is well predicted by the second-order perturbation.

Figure 5:

Phase diagram of CANNs in the presence of STF. When synaptic facilitation is present, the range of inhibition allowing a stationary bump is broadened. Dashed line: prediction by the zeroth-order approximation. Solid line: prediction by the second-order approximation. Symbols: simulations. Parameters: N/L = 80/(2π), a/L = 0.5/(2π), τf/τs = 50, and fmax = 1.

Concerning the timescale of neural information processing, it should be noted that it takes a time of the order of τf for neuronal interactions to become fully facilitated. In the parameter range where the facilitated neuronal interactions are necessary for holding a bump state, an external input must be presented for a time up to the order of τf before a bump state can be sustained.
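This buildup time can be illustrated by integrating the STF dynamics at a constant presynaptic rate (our own sketch with arbitrary values; for the facilitation equation used here, the effective buildup time constant is τf/(1 + τfαr), which is of the order of τf at low rates):

```python
import numpy as np

# Facilitation at a constant rate r; all values illustrative (time in ms).
tau_f, alpha, f_max, r = 500.0, 0.02, 1.0, 0.05
dt, T = 0.1, 5000.0

f, trace = 0.0, []
for _ in range(int(T / dt)):
    # tau_f df/dt = -f + tau_f*alpha*(f_max - f)*r
    f += dt / tau_f * (-f + tau_f * alpha * (f_max - f) * r)
    trace.append(f)
trace = np.array(trace)

f_ss = f_max * tau_f * alpha * r / (1.0 + tau_f * alpha * r)  # steady state
tau_eff = tau_f / (1.0 + tau_f * alpha * r)                   # effective buildup time
t63 = dt * (np.searchsorted(trace, (1 - np.exp(-1)) * f_ss) + 1)
assert abs(trace[-1] - f_ss) < 1e-3 * f_max
assert abs(t63 - tau_eff) < 0.05 * tau_eff
```

The time to reach 63% of the steady-state facilitation matches τf/(1 + τfαr), confirming that full facilitation takes a duration of the order of τf rather than τs.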

4.  Memories with Graceful Degradation in CANNs with STD

We consider the response of the network to a transient stimulus given by
Iext(x, t) = A(t) exp[−(x − z0)²/(4a²)].
4.1
Here A(t) is nonzero for some duration before t = 0, so that a bump is rapidly formed, but A(t) vanishes after t = 0.

The network dynamics displays a very interesting behavior in the marginally unstable region of the static bump. In this regime, the static bump solution barely loses its stability. The bump is stable if the level of synaptic depression is low but unstable at high levels. Since the STD timescale is much longer than the synaptic timescale, a bump can exist before the synaptic depression becomes effective. This maintains the bump in the plateau state with a slowly decaying amplitude, as shown in Figure 6a. After a time duration of the order of τd, the STD strength becomes sufficiently significant, as shown in Figure 6b, and the bump state eventually decays to the silent state.

Figure 6:

The height of the bump decays over time for two initial conditions of types A and B in Figure 7 at (k̃, β̃) = (0.95, 0.0085) (point P in Figure 3). Symbols: numerical solutions. Dashed lines: first-order perturbation using equations 3.4 and 3.5. Solid lines: second-order perturbation. Other parameters: τd/τs = 50, a = 0.5, and x ∈ [−π, π).

4.1.  First Order: Trajectory Analysis.

It is instructive to analyze the plateau behavior first by using the first-order perturbation. We select a point in the marginally unstable regime of the silent phase, that is, in the vicinity of the static phase. As shown in Figure 7, the nullclines of ũ and p0 (where dũ/dt = 0 and dp0/dt = 0, respectively) do not intersect, as they would in the static phase where the bump state exists. Yet they are still close enough to create a region with very slow dynamics near the apex of the ũ-nullcline. In Figure 7, we plot the trajectories of the dynamics starting from different initial conditions.

Figure 7:

Trajectories of network dynamics starting from various initial conditions at the point (0.95, 0.0085) (point P in Figure 3). Solid line: u0-nullcline. Dashed line: p0-nullcline. Symbols are data points spaced at time intervals of 2τs. Parameter: τd/τs = 50.

The most interesting family of trajectories is represented by B and C in Figure 7. Due to the much faster dynamics of u0, trajectories starting from a wide range of initial conditions converge rapidly, in a time of the order of τs, to a common trajectory in the vicinity of the u0-nullcline. Along this common trajectory, u0 is effectively the steady-state solution of equation 3.4 at the instantaneous value of p0(t), which evolves on the much longer timescale τd. This gives rise to the plateau region of u0, which can survive for a duration of the order of τd. The plateau ends after the trajectory has passed the slow region near the apex of the u0-nullcline. This dynamics is in clear contrast with trajectory D, in which the bump height decays to zero in a time of the order of τs.

Trajectory A represents another family of trajectories with rather similar behaviors, although the lifetimes of their plateaus are not as long. These trajectories start from more depleted initial conditions and hence do not have a chance to approach the u0-nullcline. Nevertheless, they converge rapidly, in a time of the order of τs, to the band where the dynamics of u0 is slow. The trajectories then rely mainly on the dynamics of p0 to carry them out of this slow region, and hence plateaus with lifetimes of the order of τd are created.
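The fast-slow mechanism behind these trajectories can be illustrated with a minimal toy model. The sketch below is NOT the paper's equations 3.4 and 3.5: it is a generic two-variable system with an ad hoc saturating recurrent drive of strength J and a depression variable p, with parameters (J = 3, β = 1) hand-picked so that no active fixed point exists but the nullclines nearly touch, mimicking the marginally unstable regime.

```python
# Illustrative fast-slow toy system (not the paper's reduced equations):
# a single activity variable u with a saturating recurrent drive and a
# depression variable p recovering on the slow timescale tau_d.
# J and beta are ad hoc choices placing the system just past the point
# where the active fixed point disappears ("marginally unstable").

def lifetime(tau_d, J=3.0, beta=1.0, tau_s=1.0, dt=0.01, t_max=200.0):
    """Time (in units of tau_s) for the activity to fall below 0.1."""
    u, p = 1.0, 1.0                     # stimulated, fully recovered start
    t = 0.0
    while t < t_max:
        f = u * u / (1.0 + u * u)       # saturating transfer function
        du = (-u + J * p * f) / tau_s   # fast activity dynamics
        dp = (1.0 - p - beta * p * u * u) / tau_d   # slow depression
        u, p, t = u + dt * du, p + dt * dp, t + dt
        if u < 0.1:
            break
    return t

slow = lifetime(tau_d=50.0)
fast = lifetime(tau_d=5.0)
# The activity survives far longer than tau_s, and the plateau lifetime
# grows with the depression timescale tau_d.
print(slow, fast)
```

The run with the longer depression timescale produces the longer plateau, reproducing the qualitative picture of trajectories B and C.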

Following similar arguments, the plateau behavior also exists in the stable region of the static states. This happens when the initial condition of the network lies outside the basin of attraction of the static states but still in the vicinity of the basin boundary.

When one goes deeper into the silent phase, the gap between the u0- and p0-nullclines broadens, and the dynamics in the region between them speeds up. Hence, plateau lifetimes are longest near the phase boundary between the bump and silent states and become shorter as one goes deeper into the silent phase. This is confirmed by the contours of plateau lifetimes in the phase diagram shown in Figure 8, obtained numerically. The initial condition is uniformly set by introducing an external stimulus Iext(x|z0) = A exp[−x²/(4a²)] to the right-hand side of equation 2.1, where A is the stimulus strength. After the network has reached a steady state, the stimulus is removed at t = 0, leaving the network to relax. It is observed in Figure 8 that the plateau behavior can be found in an extensive region of the parameter space.

Figure 8:

Contours of plateau lifetimes in the parameter space of Figure 3. The lines are the two topmost phase boundaries in Figure 3. The initial condition is set by the external stimulus described in the text.

4.2.  Second Order: Lifetime Analysis.

As shown in Figure 6, the first-order perturbation overestimates the stability of the plateau state, yielding lifetimes longer than the simulation results. The main reason is that the width of the synaptic depression profile is constrained to be constant in the first-order perturbation, whereas the actual synaptic depression profile is broader than the bump. This can be seen from equation 2.4, rewritten as
formula
4.2
This shows that the neurotransmitter loss, 1 − p(x, t), relaxes toward an expression consisting of the gaussian r(x, t), normalized by the factor 1 + τdβr(x, t). This normalization factor is smaller where the firing rate is low, so that the profile of 1 − p(x, t) is broader than the firing rate profile r(x, t).
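The broadening can be checked directly from the steady state of equation 4.2, where the loss is 1 − p(x) = τdβr(x)/[1 + τdβr(x)]. A small sketch (parameter values are illustrative only):

```python
import math

# Steady-state check of the broadening effect: the neurotransmitter loss
# 1 - p(x) is a "saturated" copy of the firing-rate profile r(x).
# Saturation flattens the peak more than the tails, so the loss profile
# is wider at half height than r(x) itself. Parameters are illustrative.

def gaussian_rate(x, a=0.5):
    """Firing-rate profile r(x): a gaussian bump of width a."""
    return math.exp(-x * x / (2 * a * a))

def std_loss(x, tau_d_beta=5.0, a=0.5):
    """Steady-state loss 1 - p(x) = tau_d*beta*r / (1 + tau_d*beta*r)."""
    r = gaussian_rate(x, a)
    return tau_d_beta * r / (1 + tau_d_beta * r)

def half_width(profile, xs):
    """Width at half of the profile's peak value, found by scanning."""
    peak = max(profile(x) for x in xs)
    above = [x for x in xs if profile(x) >= peak / 2]
    return max(above) - min(above)

xs = [i * 0.001 - 3.0 for i in range(6001)]
w_rate = half_width(gaussian_rate, xs)
w_loss = half_width(std_loss, xs)
print(w_rate, w_loss)   # the loss profile is the broader of the two
```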

To incorporate the effects of a broadened STD profile, we introduce the second-order perturbation. Dynamical equations are obtained by truncating the equations beyond the second order. As shown in Figure 6, the second-order perturbation yields a much more satisfactory agreement with simulation results than do lower-order perturbations.

5.  Decoding with Enhanced Accuracy in CANNs with STF

CANNs have been interpreted as an efficient framework for neural systems to implement population decoding (Deneve et al., 1999; Wu et al., 2002). Consider reading out an external feature z0 from noisy inputs with a CANN. For example, z0 may represent the moving direction of an object. In the decoding paradigm, a CANN responds to an external input Iext(x) with a bump state, whose peak position ẑ is interpreted as the decoding result of the network.

In the presence of STF, neuronal connections are facilitated around the area where neurons are most active. With this additional feature, the network decoding will be determined not only by the instantaneous input but also by the recent history of external inputs. Consequently, temporal fluctuations in external inputs are largely averaged out, leading to improved decoding accuracies.

We consider an external input given by
formula
5.1
where z0 represents the true stimulus value and η(t) is white noise of zero mean satisfying 〈η(t)η(t′)〉 = 2Ta²τsδ(t − t′), with T denoting the noise strength.

In the presence of weak noise, the position of the bump is found to be centered at z0 + s(t), where s(t) is the deviation of the center of mass of the bump from the stimulus position z0, as derived in appendix E. Hence, the decoding error of the network is measured by the variance of the bump position over time, namely, 〈s(t)²〉. Figure 9 shows the typical decoding performance of the network with and without STF. We see that with STF, the fluctuation of the bump position is reduced significantly. Figure 10 compares the theoretical and measured decoding errors at different noise strengths (see appendix E).
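The variance-reduction effect can be illustrated with a toy readout in which the facilitated network is caricatured as a leaky integrator of the noisy input position with time constant τf. This is a sketch of the averaging argument only, not the derivation in appendix E:

```python
import random

# Toy sketch of the averaging effect (not the full CANN): the observed
# input position is z0 plus white noise; a facilitated network is
# caricatured as a leaky integrator of the input with time constant
# tau_f, so its estimate is an exponential moving average of the input.

random.seed(0)
z0 = 0.0
dt, tau_f = 0.1, 5.0
n_steps = 20000

instant, filtered = [], []
s = z0
for _ in range(n_steps):
    x_obs = z0 + random.gauss(0.0, 1.0)   # noisy instantaneous input
    s += dt / tau_f * (x_obs - s)         # history-dependent estimate
    instant.append(x_obs)
    filtered.append(s)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_instant = variance(instant)
var_filtered = variance(filtered)
# Temporal averaging suppresses the fluctuation of the decoded position.
print(var_instant, var_filtered)
```

For step size α = dt/τf, the stationary variance of the averaged estimate is roughly α/(2 − α) times that of the raw input, which is why the facilitated readout fluctuates far less.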

Figure 9:

The typical decoding performance of the network with and without STF. Parameters: N = 80, a = 0.5, x ∈ [−π, π), and T = 0.02.

Figure 10:

The decoding errors of the network versus different levels of noise. Parameters other than T are the same as those in Figure 9. Symbols: simulations. Dashed line: theoretical prediction without STF. Solid line: theoretical prediction with STF.

6.  Tracking with Enhanced Mobility in CANNs with STD

A key property of CANNs is their capacity to track time-varying stimuli, which lays the foundation for CANNs to implement spatial navigation, population decoding, and the updating of head-direction memory. To investigate the tracking performance of CANNs, we consider
formula
6.1
where the stimulus position z0(t) is time dependent.

We first investigate the impact of STD and consider a tracking task in which z0(t) changes abruptly from 0 to a new value at t = 0. Figure 11 shows the network responses during the tracking process. Compared with the case without STD, the bump shifts to the new position faster. When STD is too strong, the bump may overshoot the target before eventually approaching it. This is due to the metastatic behavior of the bumps, which enhances their readiness to move away from the static state when a small push is exerted.

Figure 11:

The response of CANNs with STD to a stimulus that changed abruptly from z0/a = 0 to z0/a = 3.0 at t = 0. Symbols: numerical solutions. Lines: gaussian approximation using 11th-order perturbation of the STD coefficient. Parameters: τd/τs = 50, N = 80, a = 0.5, and x ∈ [−π, π).

We also study the tracking of an external stimulus moving with a constant velocity v; that is, z0(t) changes from 0 to vt at t = 0. As shown in Figure 12a, when STD is weak, the initial speed of the bump is almost zero. Then, as the stimulus moves away, the bump accelerates in an attempt to catch up with it. After some time, the separation between the bump and the stimulus converges to a constant. This tracking behavior is similar to the case without STD. The tracking behavior in the case of strong STD is more interesting. As shown in Figure 12b, the bump position eventually overtakes the stimulus, displaying an anticipative behavior. This can be attributed to the metastatic property induced by STD.
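The constant-lag stage of this behavior can be sketched with a first-order tracker standing in for the weak-STD network (an illustrative caricature, not the paper's dynamics): for a ramp stimulus z0(t) = vt, the separation converges to vτ, where τ is the tracker's relaxation time.

```python
# Minimal sketch of the constant-lag tracking behavior: a first-order
# tracker (a stand-in for the weak-STD network) chasing a stimulus
# moving at constant speed v. The separation z0(t) - s(t) converges
# to v * tau. Parameter values are illustrative.

dt, tau, v = 0.01, 2.0, 0.5
s, t = 0.0, 0.0
lags = []
for _ in range(5000):               # 50 time units, well past transients
    z0 = v * t                      # stimulus ramps from 0 at t = 0
    s += dt * (z0 - s) / tau        # bump position relaxes toward stimulus
    t += dt
    lags.append(z0 - s)

# The lag settles at v * tau = 1.0.
print(lags[-1])
```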

Figure 12:

Tracking of a neuronal bump with a continuously moving stimulus. Symbols: the peak position of the bump from simulations. Dashed line: 11th-order perturbation prediction. Solid line: a stimulus moving at constant speed τsv/a = 0.06. (a) Weak STD. (b) Strong STD. Other parameters are the same as those in Figure 11.

We further explore how STF affects the tracking performance of CANNs. In general, there is a trade-off between the stability of network states and the capacity of the network to track time-varying stimuli. Since STD and STF have opposite effects on the mobility of the network states, we expect that they will also have opposite impacts on the tracking performance of CANNs. Indeed, STF degrades the tracking performance of CANNs (see Figure 13). The larger the STF strength, the slower the tracking speed of the network.

Figure 13:

The response of CANNs with STF to a stimulus that changed abruptly from z0/a = 0 to z0/a = 3.0 at t = 0. Parameters: τd/τs = 50 and τf/τs = 50.

7.  Discussion and Conclusion

In this study, we have investigated the impact of STD and STF on the dynamics of CANNs and their potential roles in neural information processing. We have analyzed the dynamics using successive orders of perturbation. The perturbation analysis works well when STD is not too strong, although it overestimates the stability of the bumps when STD is strong. The zeroth-order analysis accounts for the gaussian shape of the bump and hence can predict the boundary of the static phase satisfactorily. The first-order analysis includes the displacement mode and asymmetry with respect to the bump peak, and hence can describe the onset of the moving phase. Furthermore, it provides insights into the metastatic nature of the bumps and its relation to the enhanced tracking performance. The second-order analysis further includes width distortions, and hence improves the prediction of the boundary of the moving phase as well as the lifetimes of the plateau states. Higher-order perturbations are required to yield more accurate descriptions of phenomena such as the overshooting in the tracking process. We anticipate that the perturbation analysis will also be useful in many other population decoding problems, such as in quantifying the deformation of tuning curves due to neural adaptation (Cortes et al., in press).

More importantly, our work reveals a number of interesting behaviors that may have far-reaching implications for neural computation.

First, STD endows CANNs with slow-decaying behaviors. When a network is initially stimulated to an active state by an external input, it decays to the silent state very slowly after the input is removed. The duration of the plateau is on the timescale of STD rather than that of neural signaling. This provides a way for the network to hold stimulus information for up to hundreds of milliseconds if it operates in the parameter regime where the bumps are marginally unstable. By contrast, this property is extremely difficult to implement in attractor networks without STD: in a CANN without STD, an active state either decays to the silent state exponentially fast or is retained forever, depending on the initial activity level of the network. Indeed, how to shut off the activity of a CANN gracefully has been a challenging issue that has received wide attention in theoretical neuroscience, with researchers suggesting that a strong external input in the form of either inhibition or excitation must be applied (see Gutkin et al., 2001). Here, we have shown that in certain circumstances, STD can provide a mechanism for closing down network activities naturally and after a desirable duration. Taking into account the timescale of STD (on the order of 100 ms) and the passive nature of its dynamics, the STD-based memory is most likely associated with the sensory memory of the brain, for example, iconic and echoic memories (Baddeley, 1999).

Second, with STD, CANNs can support both static and moving bumps. Static bumps exist only when the synaptic depression is sufficiently weak. A consequence of synaptic depression is that static bumps are placed in a metastatic state, so that their response to changing stimuli is sped up, enhancing the tracking performance. The moving bump states may be associated with the traveling-wave behaviors widely observed in the neocortex. We have also observed that for strong STD, the network state can even overtake the moving stimulus, reminiscent of the anticipative responses of head-direction and place cells (Blair & Sharp, 1995; O'Keefe & Recce, 1993). It is interesting to note that this occurs in the parameter range where the network holds spontaneous moving bump solutions, suggesting that traveling-wave phenomena may be closely related to the predictive capacity of neural systems.

Third, STF improves the decoding accuracy of CANNs. When an external stimulus is presented, STF strengthens the interactions among neurons that are tuned to the stimulus. This stimulus-specific facilitation provides a mechanism for the network to hold a memory trace of external inputs up to the timescale of STF, and this information can be used by the neural system for executing various memory-based operations, such as working memory. We have tested this idea in a population decoding task and found that the decoding error is indeed decreased. This is because the network response is determined by both the instantaneous value and the history of external inputs, which effectively averages out temporal fluctuations.

These computational advantages of dynamical synapses have the following implications for the modeling of neural systems. First, they shed light on the long-standing debate about the instability of CANNs in the presence of noise. Two aspects of instability have been identified (Wu & Amari, 2005; Seung, Lee, Reis, & Tank, 2000). One is structural instability, which refers to the argument that network components in reality, such as neuronal synapses, are unlikely to be as perfect as mathematically required by CANNs; a small discrepancy in the network structure can distort the state space considerably, destabilizing the bump state after the stimulus is removed. The other is the computational sensitivity of the network to input noise: because of neutral stability, the bump position is very susceptible to fluctuations in external inputs, rendering the network decoding unreliable. We have shown that STF can largely improve the computational robustness of CANNs by averaging out the temporal fluctuations in inputs. Similarly, STF can overcome the structural susceptibility of CANNs: with STF, the neuronal connections around the bump area are strengthened temporarily, which effectively stabilizes the bump on the timescale of STF (M. Tsodyks & D. Hansel, personal communication, 2011). Another mechanism in a similar spirit is the reduction of the inhibition strength around the bump area (Carter & Wang, 2007).

Second, STD and STF should be dominant in different areas of the brain. We have investigated the impact of STD and STF on the tracking performance of CANNs. There is, in general, a trade-off between the stability of bump states and the tracking performance of the network. STD increases the mobility of bump states and, hence, the tracking speed of the network, whereas STF has the opposite effect. These differences predict that in cortical areas where time-varying stimuli, such as the head direction and the moving direction of objects, are encoded, STD should have a stronger effect than STF. On the other hand, in cortical areas where the robustness of bump states (i.e., the decoding accuracy of stimuli) is preferred, STF should have a stronger effect.

Third, STD and STF consume different levels of energy and operate on different timescales. We have shown that both STD and STF can generate temporal memories, but their ways of achieving it are quite different. In STD, the memory is held in the prolonged neural activities, whereas in STF, it is in the facilitated neuronal connections. Mongillo et al. (2008) proposed that with STF, neurons may not even have to be active after the stimulus is removed. The facilitated neuronal connections, mediated by the elevated calcium residue, are sufficient to carry out the memory retrieval. In our model, this is equivalent to setting the network in the parameter regime without static bump solutions or in the regime with static bump solutions but with the external stimulus presented for such a short time that neuronal interactions cannot be fully facilitated. Thus, taking into account the energy consumption associated with neural firing, the STF-based mechanism for short-term memory has the advantage of being economically efficient. However, the STD-based one also has the desirable property of enabling the stimulus information to be propagated to other cortical areas, since neural firing is necessary for signal transmission, and this is critical in the early information pathways. Furthermore, the time durations required for eliciting STF- and STD-based memory are significantly different. The former needs a stimulus to be presented for an amount of time up to τf to facilitate neuronal interactions sufficiently, whereas the latter simply requires a transient appearance of a stimulus. This difference implies that the two memory mechanisms may have potentially different applications in neural systems.

In summary, we have revealed that STP can play very valuable roles in neural information processing, including achieving temporal memory, improving decoding accuracy, enhancing tracking performance, and stabilizing CANNs. We have also shown that STD and STF tend to have different impacts on the network dynamics. These results, together with the fact that STP displays large diversity in the neural cortex, suggest that the brain may employ a strategy of weighting STD and STF differentially depending on the computational task. In this study, for simplicity of analysis, we have explored the effects of STD and STF separately. In practice, a proper combination of STD and STF can make the network exhibit new and interesting behaviors and implement new and computationally desirable properties. For instance, a CANN with both STD and STF, with the timescale of the former shorter than that of the latter, can hold bump states for a period of time before shifting the memory to facilitated neural connections. This enables the network to achieve both goals of conveying the stimulus information to other cortical areas and holding the memory cheaply. Alternatively, the network may have the timescale of STD longer than that of STF, so that it can produce improved encoding of external stimuli and also close down bump activities easily. We will explore these interesting issues in the future.

Appendix A: Consistency with the Model of Tsodyks, Pawelzik, and Markram

Our modeling of the dynamics of STP is consistent with the phenomenological model proposed by Tsodyks et al. (1998). They modeled STD by considering p as the fraction of usable neurotransmitters, and STF by introducing U0 as the release probability of the neurotransmitters. The release probability U0 relaxes to a nonzero constant, urest, but is enhanced at the arrival of a spike by an amount equal to u0(1 − U0). Hence, the dynamics of p and U0 are given by
formula
A.1
formula
A.2
formula
A.3
where U1 ≡ u0 + U0(1 − u0) is the release probability of the neurotransmitters after the arrival of a spike. The x and t dependence of u, p, r, and U0 is omitted in the above equations for convenience. Eliminating U0, we obtain from equation A.3
formula
A.4
Substituting α, β and f via α = u0, β = u0 + (1 − u0)urest, U1 = [u0 + (1 − u0)urest](1 + f), fmax = (1 − β)/β, we obtain equations 2.4 and 2.5. Rescaling βJ to J, we obtain equation 2.1. α and β are the STF and STD parameters, respectively, subject to β ≥ α.
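As a sanity check of this description, the depression and facilitation variables can be integrated in rate-based form. The following sketch follows the relaxation and spike-enhancement rules stated above; the parameter values are illustrative and not taken from the paper:

```python
# Rate-based sketch of the STP dynamics described above: the release
# probability U0 relaxes to u_rest with time constant tau_f and is
# enhanced by u0*(1 - U0) at rate r; the fraction p of usable
# neurotransmitters recovers with time constant tau_d and is consumed
# at rate U1*p*r, with U1 = u0 + U0*(1 - u0). Parameters illustrative.

def step(p, U0, r, dt, tau_d=300.0, tau_f=500.0, u0=0.3, u_rest=0.1):
    """One Euler step of the depression (p) and facilitation (U0) variables."""
    U1 = u0 + U0 * (1.0 - u0)               # release prob. after a spike
    dp = (1.0 - p) / tau_d - U1 * p * r     # depletion vs. recovery
    dU0 = (u_rest - U0) / tau_f + u0 * (1.0 - U0) * r   # facilitation
    return p + dt * dp, U0 + dt * dU0

# Drive with a constant rate: facilitation builds up while resources deplete.
p, U0 = 1.0, 0.1
for _ in range(2000):                        # 2000 ms at dt = 1 ms
    p, U0 = step(p, U0, r=0.02, dt=1.0)      # r in spikes/ms (20 Hz)
print(p, U0)
```

Under sustained drive, p falls well below 1 while U0 rises well above its resting value, which is the depression-facilitation trade-off the model is built to capture.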

Appendix B: The Perturbation Approach for Solving the Dynamics of CANNs with STD

We use the perturbation approach to solve the network dynamics. Substituting equations 2.10 and 2.13 into equation 2.1, the right-hand side of equation 2.1 becomes
formula
B.1
This expression can be resolved into a linear combination of the distortion modes vk(x, t). The coefficients of these modes are obtained by multiplying the expression by vk(x, t) and integrating over x. Using the orthonormal property of the distortion modes, we have
formula
B.2
where
formula
B.3
formula
B.4
and Ik(t) is the kth component of Iext(x, t).
Similarly, the right-hand side of equation 2.4 becomes
formula
B.5
where
formula
B.6
formula
B.7
We choose vn(x, t) = vn(x − z(t)) and wn(x, t) = wn(x − z(t)). Using the following relationships of Hermite polynomials,
formula
B.8
formula
B.9
we have and . Making use of the orthonormality of vn’s and wn’s, we have
formula
B.10
formula
B.11
The values of , , , and can be obtained from recurrence relations derived using integration by parts and the relationships B.8 and B.9, which are given by
formula
B.12
formula
B.13
formula
B.14
where n1 ≡ n − 1 and m2 ≡ m − 2, and so on, in the indices. Similarly,
formula
B.15
formula
B.16
formula
B.17
formula
B.18
formula
B.19
formula
B.20
formula
B.21
formula
B.22
formula
B.23
formula
B.24
formula
B.25
Since , , , and can be calculated explicitly, all other , , , and can be deduced.

Below, we analyze the dynamics of the bump in successive orders of perturbation, where the perturbation order is defined by the highest integer value of index k involved in the approximation. We start with the zeroth-order perturbation to describe the behavior of the static bumps, since their profile is effectively gaussian. We then move on to the first-order perturbation, which includes asymmetric distortions. Since spontaneous movements of the bumps are induced by asymmetric profiles of the synaptic depression, we demonstrate that the first-order perturbation is able to provide the solution of the moving bump. Proceeding to the second-order perturbation, we allow for the flexibility of varying the width of the bump and demonstrate that this is important in explaining the lifetime of the plateau state. Tracking behaviors are predicted by the 11th-order perturbation.
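The distortion modes underlying this expansion are gaussian-weighted Hermite polynomials. Their orthonormality, on which the projections above rely, can be verified numerically for the standard Hermite functions (the form the modes take after rescaling x; this uses the textbook normalization, not necessarily the paper's):

```python
import math

# Numerical sanity check: the standard Hermite functions
# psi_n(xi) = H_n(xi) exp(-xi^2/2) / sqrt(2^n n! sqrt(pi))
# are orthonormal, <psi_m, psi_n> = delta_mn.

def hermite(n, x):
    """Physicists' Hermite polynomial H_n via the recurrence
    H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def psi(n, x):
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermite(n, x) * math.exp(-x * x / 2.0) / norm

def inner(m, n, lo=-10.0, hi=10.0, steps=4000):
    """Midpoint-rule approximation of the inner product <psi_m, psi_n>."""
    dx = (hi - lo) / steps
    return sum(psi(m, lo + (i + 0.5) * dx) * psi(n, lo + (i + 0.5) * dx)
               for i in range(steps)) * dx

# Diagonal entries are close to 1, off-diagonal entries close to 0.
print(inner(2, 2), inner(2, 3))
```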

Appendix C: Static Bump: Lowest-Order Perturbation

Without loss of generality, we let z = 0. Substituting equations 3.2 and 3.3 into equation 2.1, we get
formula
C.1
Using the projection onto v0, we find that . This reduces the equation to
formula
C.2

Introducing the rescaled variables and , we get equation 3.4.

Similarly, substituting equations 3.3 and 3.4 into equation 2.4 gives
formula
C.3
Making use of the projection , the equation simplifies to
formula
C.4

Rescaling the variables u, k, β, and A, we get equation 3.5.

The steady-state solution is obtained by setting the time derivatives in equations 3.4 and 3.5 to zero, yielding
formula
C.5
formula
C.6
where is the divisive inhibition.
Dividing equation C.5 by C.6, we eliminate B and get
formula
C.7
We can eliminate from equation C.6. This gives rise to an equation for p0:
formula
C.8
Rearranging the terms, we have
formula
C.9
Therefore, for each fixed p0, we can plot a parabolic curve in the space of versus . The dashed lines in Figure 14 are parabolas for different values of p0. The family of all parabolas maps out the region of existence of static bumps.
Figure 14:

The region of existence of static bump solutions. Solid line: the boundary of existence of static bump solutions. Dashed lines: the parabolic curves for different constant values of p0.

C.1 Stability of the Static Bump

To analyze the stability of the static bump, we consider the time evolution of small deviations of u0(t) and p0(t) from the fixed-point solution of equations C.5 and C.6. Then we have
formula
C.10
where the matrix elements are the partial derivatives of the right-hand sides of equations 3.4 and 3.5 with respect to u0 and p0, evaluated at the fixed point. The stability condition is determined by the eigenvalues of the stability matrix, λ± = [T ± √(T² − 4D)]/2, where D and T are the determinant and the trace of the matrix, respectively. Using equations C.5 and C.6, we can simplify the determinant and the trace to
formula
C.11
formula
C.12
The static bump is stable if the real parts of the eigenvalues are negative. The eigenvalues are real when T² ≥ 4D, corresponding to nonoscillating solutions. After some algebra, we obtain the boundary T² = 4D given by
formula
C.13
This boundary is shown in Figure 15. Below this boundary, the stability condition can be obtained as
formula
C.14
This upper bound is identical to the existence condition of equation C.9, which is above the boundary of nonoscillating solutions. This implies that all nonoscillating solutions are stable.
Figure 15:

The region of stable solutions of the static bump for τd/τs = 50. Solid line: the boundary of stable static bumps. Dashed line: the boundary separating the oscillating and nonoscillating convergence. Dotted lines: the curves for different constant values of p0.

Above the boundary C.13, the convergence to the steady state becomes oscillating, and the stability condition reduces to T ≤ 0, yielding equation 3.6. This condition narrows the region of static bumps considerably, as shown in Figure 15.
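The trace-determinant test used throughout this appendix can be packaged in a few lines (a generic 2×2 stability check, with illustrative values of T and D):

```python
import math

# Stability of a 2x2 system from its trace T and determinant D:
# eigenvalues are (T +/- sqrt(T^2 - 4D)) / 2. Both real parts are
# negative iff T < 0 and D > 0; the convergence is oscillating
# (complex eigenvalues) when T^2 < 4D.

def eigen_real_parts(T, D):
    disc = T * T - 4.0 * D
    if disc >= 0.0:                        # nonoscillating (real) case
        r = math.sqrt(disc)
        return ((T + r) / 2.0, (T - r) / 2.0)
    return (T / 2.0, T / 2.0)              # oscillating (complex) case

def is_stable(T, D):
    return all(r < 0.0 for r in eigen_real_parts(T, D))

# Nonoscillating stable, oscillating stable, and unstable examples:
print(is_stable(-3.0, 1.0),   # T^2 > 4D: real negative eigenvalues
      is_stable(-1.0, 2.0),   # T^2 < 4D: damped oscillation
      is_stable(1.0, 2.0))    # T > 0: unstable
```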

Appendix D: Moving Bump: Lowest-Order Perturbation

We substitute equations 3.7 and 3.8 into equations 2.1 and 2.4. Equation 2.1 becomes an equation containing exp[−(x − vt)²/(4a²)] and exp[−(x − vt)²/(4a²)](x − vt)/a after the corresponding projections are applied.

Equating the coefficients of exp[−(x − vt)²/(4a²)] and exp[−(x − vt)²/(4a²)](x − vt)/a, and rescaling the variables, we arrive at
formula
D.1
Similarly, making use of the projections , , we find that equation 2.4 gives rise to
formula
D.2
formula
D.3
The solution can be parameterized by ,
formula
formula
D.4
where G(ξ) = (4/7)^(3/2) + (4/7)^(1/2)(τs/τd)[1 + (2/3)^(3/2)ξ]. A real solution exists only if equation 3.9 is satisfied. The solution enables us to plot the contours of constant ξ in the parameter space. Using the definition of ξ, we can write
formula
D.5
where the quadratic coefficient can be obtained from equation D.4. Figure 16 shows the family of curves with constant ξ, each with a constant bump velocity. The lowest curve saturates the inequality in equation 2.13, and yields the boundary between the static and metastatic or moving regions in Figure 3. Considering the stability condition in section D.2, only the stable branches of the parabolas are shown.
Figure 16:

The stable branches of the family of the curves with constant values of ξ at τd/τs = 50. The dashed line is the phase boundary of the static bump.

D.1 Stability of the Moving Bump

To study the stability of the moving bump, we consider fluctuations around the moving bump solution. Suppose
formula
D.6
formula
D.7
These expressions are substituted into the dynamical equations. The result is
formula
D.8
formula
D.9
formula
D.10
formula
D.11
We first revisit the stability of the static bump. By setting v and p1 to 0, considering the asymmetric fluctuations s and ε1 in equations D.9 and D.11 and eliminating ds/dt, we have
formula
D.12
Hence the static bump remains stable when the coefficient of ε1 on the right-hand side is nonpositive. Using equation D.4 to eliminate p0 and , we recover the condition in equation 3.9. This shows that the bump becomes a moving one as soon as it becomes unstable against asymmetric fluctuations, as described in the main text.
Now we consider the stability of the moving bump. Eliminating ds/dt and summarizing the equations into matrix form,
formula
D.13
where , , and . For the moving bump to be stable, the real parts of the eigenvalues of the stability matrix should be nonpositive. The stable branches of the family of curves are shown in Figure 16. The results show that the boundary of stability of the moving bumps is almost indistinguishable from the envelope of the family of curves. Higher-order perturbations produce phase boundaries that agree better with simulation results, as shown in Figure 3.

We compare the dynamical stability of the ansatz in equations 3.7 and 3.8 with simulation results. As shown in Figure 17, the region of stability is overestimated by the ansatz. The major cause is that the width of the synaptic depression profile is restricted to be a. While this provides a self-consistent solution when STD is weak, it is no longer valid when STD is strong. Due to the slow recovery of synaptic depression, its profile leaves a long, partially recovered tail behind the moving bump, reducing the stability of the bump. This requires us to consider the second-order perturbation, which takes into account variation of the width of the STD profile. As shown in Figure 17, the second-order perturbation yields a phase boundary much closer to the simulation results when STD is weak. However, as shown in the inset of Figure 17, the discrepancy increases when STD is stronger, and higher-order corrections are required.

Figure 17:

The boundary of the moving phase. Symbols: simulation results. Dashed line: first-order perturbation. Solid line: second-order perturbation. Inset: The boundary of the moving phase in a broader range of β. Parameters: N/L = 80/(2π), a/L = 0.5/(2π).

Appendix E: Decoding in CANNs with STF

We start by considering bumps and STF profiles of the form
formula
E.1
formula
E.2
Note that since the noise occurs in the position of the bump, we can neglect changes in the height. Substituting equations E.1 and E.2 into equation 2.1, and removing terms orthogonal to the position distortion mode, we have
formula
E.3
where
formula
E.4
This differential equation can be solved by first diagonalizing M. Let −λ_± be the eigenvalues of M and (U_{s±}, U_{f±})^T the corresponding eigenvectors. Then the solution becomes
formula
E.5
where E_± = exp[−λ_±(t − t_1)]. Squaring the expression of s/a, averaging over noise, and integrating, we get
formula
E.6
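The diagonalization step behind equation E.5 can be sketched as follows. The coupling matrix M between the bump-position distortion and the STF-profile distortion is a placeholder here; its actual entries are those of equation E.4.

```python
import numpy as np

# Placeholder coupling matrix M (the actual entries are given in eq. E.4).
M = np.array([[1.0, -0.4],
              [-0.2, 0.5]])

# Eigen-decomposition: the columns of V are the eigenvectors
# (U_{s+-}, U_{f+-})^T and lam holds the decay rates lambda_{+-}.
lam, V = np.linalg.eig(M)

def propagate(u1, dt):
    """Evolve the homogeneous part of eq. E.3, du/dt = -M u, from t1 to
    t1 + dt: each eigenmode decays by its own factor
    E_{+-} = exp(-lambda_{+-} * dt)."""
    return V @ (np.exp(-lam * dt) * np.linalg.solve(V, u1))

# Distortion initially confined to the bump position decays on the
# timescales set by lambda_{+-}.
u = propagate(np.array([1.0, 0.0]), 2.0)
```

Comparing `propagate` against a fine-step Euler integration of du/dt = −Mu confirms that the mode-by-mode solution reproduces the full linear dynamics.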

Acknowledgments

We acknowledge the valuable comments of Terry Sejnowski on this work. This study was supported by the Research Grants Council of Hong Kong (grant numbers 604008 and 605010), the 973 Program of China (grant number 2011CBA00406), and the Natural Science Foundation of China (grant number 91132702).

References

Abbott, L. F., Varela, J. A., Sen, K., & Nelson, S. B. (1997). Synaptic depression and cortical gain control. Science, 275, 220–224.

Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern., 27, 77–87.

Baddeley, D. (1999). Essentials of human memory. Philadelphia: Psychology Press.

Ben-Yishai, R., Lev Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A., 92, 3844–3848.

Blair, H. T., & Sharp, P. E. (1995). Anticipatory head direction signals in anterior thalamus: Evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. J. Neurosci., 15, 6260–6270.

Bressloff, P. C., Folias, S. E., Prat, A., & Li, Y.-X. (2003). Oscillatory waves in inhomogeneous neural media. Phys. Rev. Lett., 91, 178101.

Carter, E., & Wang, X. J. (2007). Cannabinoid-mediated disinhibition and working memory: Dynamical interplay of multiple feedback mechanisms in a continuous attractor model of prefrontal cortex. Cerebral Cortex, 17, i16–i27.

Cortes, J. M., Marinazzo, D., Series, B., Oram, M. W., Sejnowski, T., & van Rossum, M. (in press). Neural adaptation while preserving coding accuracy. J. Comput. Neurosci.

Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.

Deneve, S., Latham, P., & Pouget, A. (1999). Reading population codes: A neural implementation of ideal observers. Nature Neuroscience, 2, 740–745.

Dobrunz, L. E., & Stevens, C. F. (1997). Heterogeneity of release probability, facilitation, and depletion at central synapses. Neuron, 18, 995–1008.

Folias, S. E., & Bressloff, P. C. (2004). Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst., 3, 378–407.

Fung, C.C.A., Wong, K.Y.M., & Wu, S. (2010). A moving bump in a continuous manifold: A comprehensive study of the tracking dynamics of continuous attractor neural networks. Neural Comput., 22, 752–792.

Gutkin, B., Laing, C., Colby, C., Chow, C., & Ermentrout, B. (2001). Turning on and off with excitation: The role of spike-timing asynchrony and synchrony in sustained neural activity. J. Comput. Neurosci., 11, 121–134.

Hao, J., Wang, X., Dan, Y., Poo, M., & Zhang, X. (2009). An arithmetic rule for spatial summation of excitatory and inhibitory inputs in pyramidal neurons. Proc. Natl. Acad. Sci. U.S.A., 106, 21906–21911.

Heeger, D. (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 181–197.

Holcman, D., & Tsodyks, M. (2006). The emergence of up and down states in cortical networks. PLoS Comput. Biol., 2, 174–181.

Igarashi, Y., Oizumi, M., Otsubo, Y., Nagata, K., & Okada, M. (2009). Statistical mechanics of attractor neural network models with synaptic depression. Journal of Physics: Conference Series, 197, 012018.

Kilpatrick, Z. P., & Bressloff, P. C. (2009). Spatially structured oscillations in a two-dimensional neuronal network with synaptic depression. J. Comput. Neurosci., 28, 193–209.

Kilpatrick, Z. P., & Bressloff, P. C. (2010). Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D, 239, 547–560.

Levina, A., Herrmann, J. M., & Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nature Physics, 3, 857–860.

Loebel, A., & Tsodyks, M. (2002). Computation by ensemble synchronization in recurrent networks with synaptic depression. J. Comput. Neurosci., 13, 111–124.

Markram, H., & Tsodyks, M. V. (1996). Redistribution of synaptic efficacy between pyramidal neurons. Nature, 382, 807–810.

Markram, H., Wang, Y., & Tsodyks, M. (1999). Differential signaling via the same axon from neocortical layer 5 pyramidal neurons. Proc. Natl. Acad. Sci. U.S.A., 95, 5323–5328.

Mongillo, G., Barak, O., & Tsodyks, M. (2008). Synaptic theory of working memory. Science, 319, 1543–1546.

O'Keefe, J., & Recce, M. L. (1993). Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus, 3, 317–330.

Pfister, J.-P., Dayan, P., & Lengyel, M. (2010). Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nature Neuroscience, 13, 1271–1275.

Pinto, D. J., & Ermentrout, G. B. (2001). Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J. Appl. Math., 62, 206–225.

Romani, S., & Tsodyks, M. (in press). A unified network model of coexisting dynamical regimes in hippocampus. PLoS Comput. Biol.

Samsonovich, A., & McNaughton, B. (1997). Path integration and cognitive mapping in a continuous attractor neural network model. J. Neurosci., 17, 5900–5920.

Seung, H. S., Lee, D. D., Reis, B. Y., & Tank, D. W. (2000). Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron, 26, 259–271.

Stevens, C. F., & Wang, Y. (1995). Facilitation and depression at single central synapses. Neuron, 14, 795–802.

Stratton, P., & Wiles, J. (2010). Self-sustained non-periodic activity in networks of spiking neurons: The contribution of local and long-range connections and dynamic synapses. NeuroImage, 52, 1070–1079.

Torres, J. J., Cortes, J. M., Marro, J., & Kappen, H. J. (2007). Competition between synaptic depression and facilitation in attractor neural networks. Neural Comput., 19, 2739–2755.

Tsodyks, M., & Markram, H. (1997). Excitatory-inhibitory network in the visual cortex: Psychophysical evidence. Proc. Natl. Acad. Sci. U.S.A., 94, 719–723.

Tsodyks, M. S., Pawelzik, K., & Markram, H. (1998). Neural networks with dynamic synapses. Neural Comput., 10, 821–835.

Tsodyks, M., Uziel, A., & Markram, H. (2000). Synchrony generation in recurrent network with frequency-dependent synapses. J. Neurosci., 20, 1–5.

Wang, X. J. (2001). Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences, 24, 455–463.

Welling, M. (2009). Herding dynamical weights to learn. ACM Int. Conf. Proc. Series, 382, 1121–1128.

Wu, J., Huang, X., & Zhang, C. (2008). Propagating waves of activity in the neocortex: What they are, what they do. Neuroscientist, 14, 487–502.

Wu, S., & Amari, S. (2005). Computing with continuous attractors: Stability and online aspects. Neural Comput., 17, 2215–2239.

Wu, S., Amari, S., & Nakahara, H. (2002). Population coding and decoding in a neural field: A computational study. Neural Comput., 14, 999–1026.

York, L. C., & van Rossum, M. C. W. (2009). Recurrent networks with short term synaptic depression. J. Comput. Neurosci., 27, 607–620.

Zhang, K.-C. (1996). Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. J. Neurosci., 16, 2112–2126.

Zucker, R. S., & Regehr, W. G. (2002). Short-term synaptic plasticity. Annu. Rev. Physiol., 64, 355–405.