Abstract
Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent nonequilibrium memory capacity in neural networks.
1 Introduction
Dynamical attractors have found much use in neuroscience as models for carrying out computation and signal processing (Poucet & Save, 2005). While point-like neural attractors and analogies to spin glasses have been widely explored (Hopfield, 1982; Amit, Gutfreund, & Sompolinsky, 1985b), an important class of experiments is explained by continuous attractors, where the collective dynamics of strongly interacting neurons stabilizes a low-dimensional family of activity patterns. Such continuous attractors have been invoked to explain experiments on motor control based on path integration (Seung, 1996; Seung, Lee, Reis, & Tank, 2000), head direction control (Kim, Rouault, Druckmann, & Jayaraman, 2017), and spatial representation in grid or place cells (Yoon et al., 2013; O'Keefe & Dostrovsky, 1971; Colgin et al., 2010; Wills, Lever, Cacucci, Burgess, & O'Keefe, 2005; Wimmer, Nykamp, Constantinidis, & Compte, 2014; Pfeiffer & Foster, 2013), among other information processing tasks (Hopfield, 2015; Roudi & Latham, 2007; Latham, Deneve, & Pouget, 2003; Burak & Fiete, 2012).
These continuous attractor models are at the fascinating intersection of dynamical systems and neural information processing. The neural activity in these models of strongly interacting neurons is described by an emergent collective coordinate (Yoon et al., 2013; Wu, Hamaguchi, & Amari, 2008; Amari, 1977). This collective coordinate stores an internal representation (Sontag, 2003; Erdem & Hasselmo, 2012) of the organism's state in its external environment, such as position in space (Pfeiffer & Foster, 2013; McNaughton et al., 2006) or head direction (Seelig & Jayaraman, 2015).
However, such internal representations are useful only if they can be driven and updated by external signals that provide crucial motor and sensory input (Hopfield, 2015; Pfeiffer & Foster, 2013; Erdem & Hasselmo, 2012; Hardcastle, Ganguli, & Giocomo, 2015; Ocko, Hardcastle, Giocomo, & Ganguli, 2018). Driving and updating the collective coordinate using external sensory signals opens up a variety of capabilities, such as path planning (Ponulak & Hopfield, 2013; Pfeiffer & Foster, 2013), correcting errors in the internal representation or in sensory signals (Erdem & Hasselmo, 2012; Ocko et al., 2018), and the ability to resolve ambiguities in the external sensory and motor input (Hardcastle et al., 2015; Evans, Bicanski, Bush, & Burgess, 2016; Fyhn, Hafting, Treves, Moser, & Moser, 2007).
In all of these examples, the functional use of attractors requires interaction between external signals and the internal recurrent network dynamics. However, with a few significant exceptions (Fung, Wong, Mao, & Wu, 2015; Mi, Fung, Wong, & Wu, 2014; Wu et al., 2008; Wu & Amari, 2005; Monasson & Rosay, 2014; Burak & Fiete, 2012), most theoretical work has either been in the limit of no external forces and strong internal recurrent dynamics, or in the limit of strong external forces where the internal recurrent dynamics can be ignored (Moser, Moser, & McNaughton, 2017; Tsodyks, 1999).
Here, we study continuous attractors in neural networks subject to external driving forces that are neither small relative to internal dynamics nor adiabatic. We show that the physics of the emergent collective coordinate sets limits on the maximum speed at which internal representations can be updated by external signals.
Our approach begins by deriving simple classical and statistical laws satisfied by the collective coordinate of many neurons with strong, structured interactions that are subject to time-varying external signals, Langevin noise, and quenched disorder. Exploiting these equations, we demonstrate two simple principles: (1) an equivalence principle that predicts how much the internal representation lags a rapidly moving external signal, and (2) a mapping under which, in the externally driven regime, quenched disorder in network connectivity acts as a state-dependent effective temperature. Finally, we apply these results to place cell networks and derive a nonequilibrium, driving-dependent memory capacity, complementing numerous earlier works on memory capacity in the absence of external driving.
2 Collective Coordinates in Continuous Attractors
As shown in Figure 1, we consider a network of $N$ neurons whose synaptic connectivity $J_{ij}$ encodes the continuous attractor. We will focus on 1D networks with $k$-nearest-neighbor excitatory interactions to keep bookkeeping to a minimum: $J_{ij} = J > 0$ if neurons $i$ and $j$ are within $k$ of each other ($0 < |i - j| \le k$), and $J_{ij} = -J_0$ otherwise. The latter term, $-J_0$, with $J_0 > 0$, represents long-range, nonspecific inhibitory connections as frequently assumed in models of place cells (Monasson & Rosay, 2014; Hopfield, 2010), head direction cells (Chaudhuri & Fiete, 2016), and other continuous attractors (Seung et al., 2000; Burak & Fiete, 2012).
Figure 1: The effective dynamics of neural networks implicated in head direction and spatial memory is described by a continuous attractor. Consider $N$ neurons connected in a 1D topology, with local excitatory connections between nearest neighbors (blue), global inhibitory connections (not shown), and random long-range disorder (orange). Any activity pattern quickly condenses into a droplet of contiguous firing neurons (red) of characteristic size; the droplet's center of mass is a collective coordinate parameterizing the continuous attractor. The droplet can be driven by space- and time-varying external currents (green).
The disorder matrix $D_{ij}$ represents random long-range connections, a form of quenched disorder (Seung, 1998; Kilpatrick, Ermentrout, & Doiron, 2013). Finally, $I^{\rm ext}_i(t)$ represents external driving currents from, for example, sensory and motor input possibly routed through other regions of the brain. The Langevin noise $\eta_i(t)$ represents private noise internal to each neuron (Lim & Goldman, 2012; Burak & Fiete, 2012).
A neural network like equation 2.1 qualitatively resembles a similarly connected network of Ising spins at fixed magnetization (Monasson & Rosay, 2014). At low noise, the activity in such a system will condense (Monasson & Rosay, 2014; Hopfield, 2010) to a localized droplet, since interfaces between firing and nonfiring neurons carry an energy cost set by $J$. The center of mass of such a droplet, $x(t)$, is an emergent collective coordinate that approximately describes the stable low-dimensional neural activity patterns of these neurons. Fluctuations about this coordinate have been extensively studied (Wu et al., 2008; Burak & Fiete, 2012; Hopfield, 2015; Monasson & Rosay, 2014).
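As a concrete illustration, the following minimal sketch simulates binary threshold neurons with $k$-nearest-neighbor excitation and uniform long-range inhibition, and shows a seed of activity condensing into a droplet of characteristic size. The asynchronous threshold update rule and all parameter values ($N$, $k$, $J$, $J_0$, the noise amplitude) are illustrative assumptions, not the exact model analyzed here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 400, 5        # number of neurons and interaction range (assumed values)
J, J0 = 1.0, 0.3     # local excitation and global inhibition (assumed values)
noise = 0.1          # amplitude of private neuron noise (assumed value)

# k-nearest-neighbor excitation, uniform long-range inhibition.
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
W = np.where((dist > 0) & (dist <= k), J, -J0)
np.fill_diagonal(W, 0.0)

s = np.zeros(N)
s[190:210] = 1.0     # seed some activity; generic patterns condense similarly

for _ in range(200 * N):                  # asynchronous threshold updates
    i = rng.integers(N)
    h = W[i] @ s + noise * rng.standard_normal()
    s[i] = 1.0 if h > 0 else 0.0

on = np.flatnonzero(s)
print(f"droplet spans neurons {on.min()}-{on.max()} "
      f"(size {on.size}, center of mass {on.mean():.1f})")
```

In this sketch the droplet size is set by the balance between the local excitation $J$ at the droplet edge and the accumulated global inhibition $J_0$, mirroring the condensation argument above.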
3 Space and Time-Dependent External Signals
We focus on how space- and time-varying external signals, modeled here as external currents $I^{\rm ext}_i(t)$, can drive and reposition the droplet along the attractor. We will be primarily interested in a cup-shaped current profile that moves at a constant velocity $v$: $I^{\rm ext}_i(t) = A$ for $|i - vt| \le w/2$, and $I^{\rm ext}_i(t) = 0$ otherwise, so that the droplet experiences a moving, cup-shaped effective potential of width $w$. Such a localized time-dependent drive could represent landmark-related sensory signals (Hardcastle et al., 2015).
The strength of the external signal is set by the depth of the cup, which grows with the current amplitude $A$. Previous studies have explored the $A = 0$ case—undriven diffusive dynamics of the droplet (Burak & Fiete, 2012; Monasson & Rosay, 2013, 2014, 2015)—or the large-$A$ limit (Hopfield, 2015), when the internal dynamics can be ignored. Here we focus on an intermediate regime of drive strength, where internal representations are updated continuously by the external currents, without any jumps (Ponulak & Hopfield, 2013; Pfeiffer & Foster, 2013; Erdem & Hasselmo, 2012).
In fact, as shown in section C.2, we find a threshold signal strength $A_{\max}$ beyond which the external signal destabilizes the droplet, instantly “teleporting” it from any distant location to the cup without continuity along the attractor and erasing any prior positional information held in the internal representation.
We focus here on $A < A_{\max}$, a regime with continuity of internal representations. Such continuity is critical for many applications, such as path planning (Ponulak & Hopfield, 2013; Pfeiffer & Foster, 2013; Erdem & Hasselmo, 2012) and resolving local ambiguities in position within the global context (Hardcastle et al., 2015; Evans et al., 2016; Fyhn et al., 2007). In this regime, the external signal updates the internal representation with finite gain (Fyhn et al., 2007) and can thus fruitfully combine information in both the internal representation and the external signal. Other applications that simply require short-term memory storage of a strongly fluctuating variable may not require this continuity restriction.
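To illustrate the driving protocol, the sketch below sweeps a cup-shaped current (amplitude $A$ inside a window of width $w$, zero outside; all values assumed) across the same toy network as above and lets the droplet follow it.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, J, J0, noise = 400, 5, 1.0, 0.3, 0.1   # network parameters (assumed)
A, w, v = 1.0, 30, 0.2   # cup amplitude, width, speed in neurons/sweep (assumed)

dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
W = np.where((dist > 0) & (dist <= k), J, -J0)
np.fill_diagonal(W, 0.0)

s = np.zeros(N)
s[40:60] = 1.0           # droplet seeded at the initial cup position
center = 50.0            # current center of the cup-shaped drive

for sweep in range(1000):
    for _ in range(N):   # one asynchronous sweep of single-neuron updates
        i = rng.integers(N)
        I = A if abs(i - center) <= w / 2 else 0.0
        s[i] = 1.0 if W[i] @ s + I + noise * rng.standard_normal() > 0 else 0.0
    center += v          # the cup moves at constant velocity

on = np.flatnonzero(s)
print(f"cup center {center:.0f}, droplet center {on.mean():.1f}, "
      f"lag {center - on.mean():.1f}")
```

In this regime the droplet tracks the cup with a small, velocity-dependent lag, which is the quantity analyzed in section 3.1.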
3.1 Equivalence Principle
We first consider driving the droplet at constant velocity $v$ using the cup-shaped external current defined above. We allow for Langevin noise but no disorder in the couplings in this section. For very slow driving ($v \to 0$), the droplet will settle into and track the bottom of the cup. When driven at a finite velocity $v$, the droplet cannot stay at the bottom, since there is no net force exerted by the currents at that point.
Figure 2: (a) The mean position and fluctuations of the droplet driven by external currents are described by an “equivalence” principle: in a frame co-moving with the drive at velocity $v$, we simply add an effective force $-\gamma v$, where $\gamma$ is a drag coefficient. (b) This prescription correctly predicts that the droplet lags the external driving force by an amount linearly proportional to the velocity $v$, as seen in simulations. (c) Fluctuations of the driven droplet's position, due to internal noise in neurons, are also captured by the equivalence principle. If $P(\Delta x)$ is the probability of finding the droplet at a lag $\Delta x$, the distributions for different velocities, after the mean lag is subtracted, are independent of velocity and can be collapsed onto each other (with fitting parameter $\gamma$). (Inset: distributions before subtracting the mean lag.)
Our results here are consistent with the fluctuation-dissipation result obtained in Monasson and Rosay (2014) for driven droplets. In summary, in the co-moving frame of the driving signal, the droplet's position fluctuates as if it were in thermal equilibrium in the modified potential $V_0(x) + \gamma v x$, where $V_0$ is the effective cup potential of the stationary drive.
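A quick numerical check of the linear lag-velocity relation, using the same assumed toy model; the helper mean_lag and its parameter values are hypothetical.

```python
import numpy as np

def mean_lag(v, sweeps=1200, N=500, k=5, J=1.0, J0=0.3, A=1.0, w=30,
             noise=0.1, seed=2):
    """Drag the cup at speed v (neurons per sweep) and return the droplet's
    mean lag behind the cup center. All parameter values are assumed."""
    rng = np.random.default_rng(seed)
    dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    W = np.where((dist > 0) & (dist <= k), J, -J0)
    np.fill_diagonal(W, 0.0)
    s = np.zeros(N); s[40:60] = 1.0
    center, lags = 50.0, []
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            I = A if abs(i - center) <= w / 2 else 0.0
            s[i] = 1.0 if W[i] @ s + I + noise * rng.standard_normal() > 0 else 0.0
        center += v
        on = np.flatnonzero(s)
        if sweep > sweeps // 2 and on.size:   # discard the transient
            lags.append(center - on.mean())
    return np.mean(lags)

for v in [0.05, 0.1, 0.2]:
    print(f"v = {v:.2f}: mean lag = {mean_lag(v):.2f}")  # roughly linear in v
```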
4 Speed Limits on Updates of Internal Representation
The simple equivalence principle implies a striking bound on the update speed of internal representations. A driving signal cannot drive the droplet at velocities greater than some $v_{\max}$, since beyond that velocity the predicted lag would exceed the size of the cup. In the appendix, we derive $v_{\max}$ in terms of the signal strength, the drag coefficient, and the droplet size $R$.
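In sketch form, the argument can be summarized as follows, with $V_0$ the static effective cup potential and $\gamma$ the drag coefficient from the equivalence principle (notation assumed for illustration):

```latex
% Steady state in the co-moving frame: the cup's restoring force
% balances the effective drag force \gamma v.
\begin{align}
  -V_0'(x_{\mathrm{lag}}) &= \gamma v, \\
  v_{\max} &= \frac{1}{\gamma}\,\max_x\left[-V_0'(x)\right].
\end{align}
% The cup can supply at most a finite restoring force, so no steady
% state exists above v_max. Since the maximal restoring force grows
% with the cup depth (set by A) and is limited by the droplet size R,
% v_max increases with A and decreases as the droplet size approaches
% the cup size.
```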
5 Disordered Connections and Effective Temperature
We now consider the effect of long-range quenched disorder $D_{ij}$ in the synaptic matrix (Seung, 1998; Kilpatrick et al., 2013), which breaks the exact degeneracy of the continuous attractor, creating an effectively rugged landscape $V_D(x)$, as shown schematically in Figure 3 and computed in sections E.1 and E.2. When driven by a time-varying external signal, the droplet now experiences a net potential that is the sum of the moving cup and the disorder landscape. The first term causes motion with velocity $v$ and a lag predicted by the equivalence principle, and for sufficiently large velocities $v$, the effect of the second term can be modeled as effective Langevin white noise. To see this, note that $V_D(x)$ is uncorrelated on length scales larger than the droplet size; hence, for large enough droplet velocity $v$, the forces due to disorder are effectively random and uncorrelated in time. More precisely, let $F_D(t)$ denote the disorder force on the moving droplet. In section E.3, we compute its statistics and show that $F_D(t)$ has a finite autocorrelation time, set by the time the droplet takes to move its own size, due to the finite size of the droplet.
Figure 3: Disorder in neural connectivity is well approximated by an effective temperature for a moving droplet. (a) Long-range disorder breaks the degeneracy of the continuous attractor, creating a rough landscape. A droplet moving at velocity $v$ in this rough landscape experiences random forces. (b) The fluctuations of a moving droplet's position, relative to the cup's bottom, can be described by an effective temperature $T_{\rm eff}$. We define a potential $V_{\rm eff}(\Delta x) \propto -\log P(\Delta x)$, where $P(\Delta x)$ is the probability of the droplet's position fluctuating to a distance $\Delta x$ from the peak external current. We find that the potentials corresponding to different amounts of disorder (where $\lambda$ is the average number of long-range disordered connections per neuron) can be collapsed by the one fitting parameter $T_{\rm eff}$. Inset: $T_{\rm eff}$ is linearly proportional to the strength of disorder $\lambda$.
Thus, on longer timescales, $F_D(t)$ is uncorrelated and can be viewed as Langevin noise for the droplet center of mass, associated with a disorder-induced temperature $T_{\rm eff}$. Through repeated simulations with different amounts of disorder $\lambda$, we inferred the distribution of the droplet position in the presence of such disorder-induced fluctuations (see Figure 3). The data collapse in Figure 3b confirms that the effect of disorder (of strength $\lambda$) on a rapidly moving droplet can indeed be modeled by an effective disorder-induced temperature $T_{\rm eff}$. (For simplicity, we assume that the internal noise $\eta_i$ in equation 2.1 is absent here. Note that in general, $\eta_i$ will also contribute to the effective temperature. Here we focus on the contribution of disorder, since internal noise has been considered in prior work (Fung et al., 2015).)
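The sketch below illustrates this effect: it adds random long-range excitatory links (a crude stand-in for the permuted-environment disorder of appendix E), drags the droplet on a ring with no private neuron noise, and checks that the lag fluctuations grow with the amount of disorder. A Boltzmann fit of the lag distribution would yield $T_{\rm eff}$; all parameters and the ring topology are assumptions for illustration.

```python
import numpy as np

def lag_samples(n_dis, v=0.2, sweeps=1500, N=400, k=5, J=1.0, J0=0.3,
                A=1.0, w=30, seed=3):
    """Drag a droplet on a ring with n_dis random long-range excitatory
    links per neuron; return samples of the droplet's lag behind the cup."""
    rng = np.random.default_rng(seed)
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    d = np.minimum(d, N - d)                 # ring topology for a long drag
    W = np.where((d > 0) & (d <= k), J, -J0)
    m = n_dis * N // 2                       # symmetric disorder links
    ii, jj = rng.integers(N, size=m), rng.integers(N, size=m)
    far = np.minimum(np.abs(ii - jj), N - np.abs(ii - jj)) > k
    W[ii[far], jj[far]] = W[jj[far], ii[far]] = J
    np.fill_diagonal(W, 0.0)
    s = np.zeros(N); s[40:60] = 1.0
    center, lags = 50.0, []
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            di = abs(i - center)
            I = A if min(di, N - di) <= w / 2 else 0.0
            s[i] = 1.0 if W[i] @ s + I > 0 else 0.0  # disorder is the only noise
        center = (center + v) % N
        on = np.flatnonzero(s)
        if sweep > sweeps // 2 and on.size:
            ang = 2 * np.pi * on / N                 # circular center of mass
            com = (np.angle(np.exp(1j * ang).mean()) % (2 * np.pi)) * N / (2 * np.pi)
            lags.append((center - com + N / 2) % N - N / 2)
    return np.array(lags)

# The lag variance (proportional to T_eff for a harmonic cup bottom)
# grows with the amount of disorder.
for n_dis in [0, 1, 2]:
    lags = lag_samples(n_dis)
    print(f"disorder {n_dis}: mean lag {lags.mean():.2f}, variance {lags.var():.3f}")
```

Note that for strong disorder or slow drives the droplet can pin to the rough landscape instead of tracking; the white-noise picture applies in the fast-driving regime discussed above.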
6 Implications: Memory Capacity of Driven Place Cell Networks
The capacity of a neural network to encode multiple memories has been studied in numerous contexts since Hopfield's original work (Hopfield, 1982). While specifics differ (Amit, Gutfreund, & Sompolinsky, 1985a; Battaglia & Treves, 1998; Monasson & Rosay, 2014; Hopfield, 2010), the capacity is generally set by the failure to retrieve a specific memory because of the effective disorder in neural connectivity due to other stored memories.
However, these works on capacity do not account for nonadiabatic external driving. Here, we use our results to determine the capacity of a place cell network (O'Keefe & Dostrovsky, 1971; Battaglia & Treves, 1998; Monasson & Rosay, 2014) to both encode and manipulate memories of multiple spatial environments at a finite velocity. Place cell networks (Tsodyks, 1999; Monasson & Rosay, 2013, 2014, 2015) encode memories of multiple spatial environments as multiple continuous attractors in one network. Such networks have been used to describe recent experiments on place cells and grid cells in the hippocampus (Yoon et al., 2013; Hardcastle et al., 2015; Moser, Moser, & Roudi, 2014).
In experiments that expose a rodent to different spatial environments (Alme et al., 2014; Moser, Moser, & McNaughton, 2017; Moser, Moser, & Roudi, 2014; Kubie & Muller, 1991), the same place cells are seen having “place fields” in different spatial arrangements, as seen in Figure 4a; the arrangement in environment $m$ is described by a permutation $\pi_m$ of the neurons specific to that environment. Consequently, Hebbian plasticity suggests that each environment would induce a set of synaptic connections that corresponds to the place field arrangement in that environment: $J^{(m)}_{ij} = J$ if $0 < |\pi_m(i) - \pi_m(j)| \le k$. That is, each environment corresponds to a 1D network when the neurons are laid out in the order given by $\pi_m$. The actual network has the sum of all these connections over the environments the rodent is exposed to.
Figure 4: Nonequilibrium capacity of place cell networks limits retrieval of spatial memories at finite velocity. (a) Place cell networks model the storage of multiple spatial memories in parts of the hippocampus by coding multiple continuous attractors in the same set of neurons. Neural connections encoding spatial memories 2, 3, … act like long-range disorder for spatial memory 1. Such disorder, through an increased effective temperature, reduces the probability of tracking a finite-velocity driving signal. (b) The probability of successful retrieval decreases with the number of simultaneous memories $p$ and the velocity $v$ (with the network size held fixed). (c) Simulation data collapse when plotted against the predicted scaling variable (parameters as in panel b, with the network size now varied). (d) The nonequilibrium capacity as a function of retrieval velocity $v$.
While the full connectivity above is obtained by summing over structured environments, from the perspective of, say, environment 1, the connections due to the remaining environments look like long-range disordered connections. We will assume that the permutations corresponding to different environments are random and uncorrelated, a common modeling choice with experimental support (Hopfield, 2010; Monasson & Rosay, 2014, 2015; Alme et al., 2014; Moser et al., 2017). Without loss of generality, we take $\pi_1$ to be the identity (blue environment in Figure 4). Thus, the connections contributed by environments $2, \dots, p$ play the role of the disorder matrix $D_{ij}$, with an effective disorder strength that grows with the number of environments $p$. Hence, we can apply our previous results to this system. Now consider driving the droplet with velocity $v$ in environment 1 using external currents. The probability of successfully updating the internal representation over a distance is then set by the escape rate of the droplet from the moving cup at the disorder-induced temperature $T_{\rm eff}$, where $T_{\rm eff}$ is given by equation 5.1.
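A minimal sketch of this construction, assuming the clipping rule of equation E.1 (an entry that is excitatory in any environment stays excitatory) and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
N, k, J, J0 = 400, 5, 1.0, 0.3   # assumed parameter values
p = 5                            # number of stored environments (assumed)

dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
J_env1 = np.where((dist > 0) & (dist <= k), J, -J0)  # environment 1: identity layout

# Each additional environment contributes the same local connectivity in a
# randomly permuted neuron ordering; overlapping entries are clipped at J.
W = J_env1.copy()
for env in range(p - 1):
    perm = rng.permutation(N)
    W = np.maximum(W, J_env1[np.ix_(perm, perm)])

frac = ((W > 0) & (dist > k)).mean()
print(f"fraction of long-range entries made excitatory by other environments: {frac:.3f}")
```

From the viewpoint of environment 1 (the identity layout), the excitatory long-range entries counted at the end are exactly the disorder analyzed in section 5 and appendix E.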
7 Conclusion
We have considered continuous attractors in neural networks driven by localized time-dependent currents $I^{\rm ext}_i(t)$. In recent experiments, such currents can represent landmark-related sensory signals (Hardcastle et al., 2015) when a rodent is traversing a spatial environment at velocity $v$, or signals that update the internal representation of head direction (Seelig & Jayaraman, 2015). Several recent experiments have controlled the effective speed of visual stimuli in virtual reality environments (Meshulam, Gauthier, Brody, Tank, & Bialek, 2017; Aronov, Nevers, & Tank, 2017; Kim et al., 2017; Turner-Evans et al., 2017). Other experiments have probed cross-talk between memories of multiple spatial environments (Alme et al., 2014). Our results predict an error rate that rises with speed and with the number of environments.
While our analysis used specific functional forms for, among other things, the current profile $I^{\rm ext}_i(t)$, our bound simply reflects the finite response time involved in moving emergent objects, much like moving a magnetic domain in a ferromagnet using space- and time-varying fields. Thus, we expect our bound to hold qualitatively for other related forms (Hopfield, 2015).
In addition to positional information considered here, continuous attractors are known to also receive velocity information (Major, Baker, Aksay, Seung, & Tank, 2004; McNaughton et al., 2006; Seelig & Jayaraman, 2015; Ocko et al., 2018). We do not consider such input in the main text but extend our analysis to velocity integration in appendix D.
In summary, we found that the nonequilibrium statistical mechanics of a strongly interacting neural network can be captured by a simple equivalence principle and a disorder-induced temperature for the network's collective coordinate. Consequently, we were able to derive a velocity-dependent bound on the number of simultaneous memories that can be stored and retrieved from a network. We discussed how these results, based on general theoretical principles on driven neural networks, allow us to connect robustly to recent time-resolved experiments in neuroscience (Kim et al., 2017; Turner-Evans et al., 2017; Hardcastle et al., 2015; Hardcastle, Maheswaranathan, Ganguli, & Giocomo, 2017; Campbell et al., 2018) on the response of neural networks to dynamic perturbations.
Appendix A: Equations for the Collective Coordinate
The description of neural activity in terms of such a collective coordinate greatly simplifies the problem, reducing the configuration space from the $2^N$ states of the $N$ binary neurons to a single coordinate: the center of mass of the droplet along the continuous attractor (Wu et al., 2008). The computational abilities of these place cell networks, such as spatial memory storage, path planning, and pattern recognition, are limited to parameter regimes in which such a collective coordinate approximation holds (e.g., noise levels less than a critical value).
The droplet can be driven by external signals such as sensory or motor input or input from other parts of the brain. We model such external input by the currents in equation A.1—for example, sensory landmark-based input (Hardcastle et al., 2015). When an animal is physically in a region covered by the place fields of a contiguous set of neurons, the currents into those neurons can be expected to be high compared to all other currents. Other models of driving in the literature include adding an antisymmetric component to the synaptic connectivities (Ponulak & Hopfield, 2013); we consider such a model in appendix D.
For a quasi-1D network with $k$-nearest-neighbor interactions and no disorder, the energy of the droplet is independent of its position, giving a smooth, continuous attractor. However, as discussed later, in the presence of disorder the energy landscape has bumps (i.e., quenched disorder), and the attractor is no longer smooth.
A.1 Fluctuation and Dissipation
We next numerically verify that the droplet obeys a fluctuation-dissipation-like relation by driving the droplet using external currents and comparing the response to diffusion of the droplet in the absence of external currents.
We use a finite ramp as the external driving: the current increases linearly with neuron index over a finite segment of the network and is constant beyond it (see Figure 5a). We choose the ramp to be long enough that it takes considerable time for the droplet to relax to its steady-state position at the end of the ramp. We notice that for different slopes $b$ of the ramp, the droplet has different velocities, and it is natural to define a mobility $\mu$ of the droplet by $v = \mu b$. Next, we notice that on a single continuous attractor, the droplet can diffuse because of internal noise in the neural network. Therefore, we can infer the diffusion coefficient $D_c$ of the droplet from the mean-squared displacement, $\langle (z(t) - z(0))^2 \rangle = 2 D_c t$, for a collection of diffusive trajectories (see Figure 5b), where we use $z$ to denote the center of mass of the droplet to avoid confusion.
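The following sketch carries out both measurements on the toy model used in the earlier sketches: a weak linear ramp for the mobility and a flat input for the diffusion coefficient. The ramp slopes and all other parameters are assumed values; the ratio $D_c/\mu$ then plays the role of a temperature, as in the fluctuation-dissipation relation.

```python
import numpy as np

def run(I, sweeps, N=400, k=5, J=1.0, J0=0.3, noise=0.1, seed=5):
    """Asynchronous threshold dynamics under a static current profile I;
    returns the droplet center-of-mass trajectory z (one point per sweep)."""
    rng = np.random.default_rng(seed)
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    W = np.where((d > 0) & (d <= k), J, -J0)
    np.fill_diagonal(W, 0.0)
    s = np.zeros(N); s[50:70] = 1.0
    z = []
    for _ in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            s[i] = 1.0 if W[i] @ s + I[i] + noise * rng.standard_normal() > 0 else 0.0
        z.append(np.flatnonzero(s).mean())
    return np.array(z)

N = 400
# Mobility: a weak linear ramp exerts a roughly constant force on the droplet
# (the ramp also slightly changes the droplet size, which we ignore here).
for b in [0.005, 0.01]:
    z = run(b * np.arange(N), sweeps=200)
    v = np.polyfit(np.arange(z.size), z, 1)[0]
    print(f"ramp slope {b}: v = {v:.3f} neurons/sweep, mobility ~ {v / b:.1f}")

# Diffusion: with a flat input, the droplet's center performs a random walk.
z = run(np.zeros(N), sweeps=2000)
lags = np.arange(1, 200)
msd = np.array([np.mean((z[l:] - z[:-l]) ** 2) for l in lags])
D = np.polyfit(lags, msd, 1)[0] / 2           # MSD = 2 D t for 1D diffusion
print(f"D ~ {D:.4f}; D / mobility plays the role of a temperature")
```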
Figure 5: (a) Schematic of the droplet being driven by a linear potential (ramp), illustrating the idea of mobility. Green lines are inputs, red dots are active neurons, and the more transparent ones represent earlier times. (b) Schematic of the droplet diffusing under an input with no gradient, giving rise to diffusion. The inset plots mean-squared displacement versus time, clearly showing diffusive behavior. Note that we denote the droplet center-of-mass (c.o.m.) position by $z$ to avoid confusion with the mean position. (c) Comparison between mobility $\mu$ and diffusion coefficient $D_c$. Both $\mu$ and $D_c$ depend on droplet size and interaction range in the same way, and thus $D_c$ is proportional to $\mu$.
Appendix B: Space- and Time-Dependent External Driving Signals
We consider the model of sensory input used in the main text: $I^{\rm ext}_i(t) = A$ for $|i - vt| \le w/2$, and $I^{\rm ext}_i(t) = 0$ otherwise. Such a drive was previously considered in Wu and Amari (2005), albeit without time dependence. Throughout this article, we refer to $w$ as the linear size of the drive and $A$ as the depth of the drive, and we set the drive moving at a constant velocity $v$. From now on, we will go to the continuum limit and replace the neuron index $i$ by a continuous coordinate $x$.
Figure 6: (a) Effective potential for a static external driving signal, plotted from equation B.1. (b) Effective potential experienced by the droplet for a moving cup-shaped external driving signal, plotted from equation C.1. (c) Schematic illustrating the idea of the equivalence principle (see equation 3.2). The difference between the effective potential experienced by a moving droplet and that of a stationary droplet is a linear potential whose slope is proportional to the velocity $v$.
B.1 A Thermal Equivalence Principle
The equivalence principle we introduced in the main text allows us to compute the steady-state position and the effective new potential seen in the co-moving frame. Crucially, the fluctuations of the collective coordinate are described by the potential obtained through the equivalence principle: the principle correctly predicts both the mean (see equation 3.2) and the fluctuations (see equation 3.3) of the lag. Therefore, it is actually a statement about the equivalence of effective dynamics in the rest frame and in the co-moving frame. Specializing to the cup-shaped drive, the equivalence principle predicts that the effective potential felt by the droplet (moving at constant velocity $v$) in the co-moving frame equals the effective potential in the stationary frame shifted by a linear potential that accounts for the fictitious force due to the change of coordinates (see Figure 6c).
Since we used equation B.1 for the cup shape and the lag depends linearly on $v$, we expect that the slope of the linear potential also depends linearly on $v$. Here the sign convention is chosen such that $v > 0$ corresponds to the droplet moving to the right.
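In equation form, the content of this section is the following (notation as above, with $\gamma = 1/\mu$ the drag coefficient; a sketch, not the exact expressions of equations B.1 and C.1):

```latex
% Co-moving-frame potential implied by the equivalence principle:
\begin{equation}
  V_{\mathrm{co}}(x) \;=\; V_0(x) \;+\; \gamma v\, x ,
\end{equation}
% a linear tilt of the static cup potential V_0 whose slope \gamma v is
% proportional to the velocity. The steady-state lag solves
%   -V_0'(x_lag) = \gamma v,
% so the droplet sits behind the cup bottom by an amount linear in v.
```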
Appendix C: Speed Limit for External Driving Signals
In the following, we work in the frame co-moving at the velocity $v$ of the driving signal. We denote the steady-state c.o.m. position in this frame by $x_s$ and a generic position by $x$.
We are now in a position to derive the speed limit $v_{\max}$ presented in the main text. We observe that as the driving velocity increases, the steady-state position and the top of the barrier at the cup's edge get closer to each other, and there is a critical velocity at which the two coincide.
C.1 Steady-State Droplet Size
Comparing to the equation of motion (e.o.m.), equation A.1, we see that the first term corresponds to the decay of neurons in the absence of input from their neighbors (decay from the on state to the off state), the second term corresponds to the interaction term in the e.o.m., and the third term corresponds to the external current in the e.o.m. Since we are interested in the steady-state droplet size, and thus only in the neurons that are on, the effect of the first term can be neglected (when using the Lyapunov function to compute steady-state properties, the first term can be ignored).
To obtain general results, we also account for long-range disordered connections here. We assume $D_{ij}$ consists of random connections among all the neurons. We can approximate these random connections as random permutations of the ordered matrix $J_{ij}$, and the full connectivity is the sum over such permuted copies plus $J_{ij}$ itself.
For the cup-shaped driving and its corresponding effective potential, equation C.1, we are interested in the steady-state droplet size under this driving, so we first evaluate the effective potential at the steady-state position in equation C.2. To make the droplet-size dependence explicit in the Lyapunov function, we evaluate it under the rigid bump approximation used in Hopfield (2015), assuming the neurons are fully active within a contiguous droplet and silent otherwise.
C.2 Upper Limit on External Signal Strength
Here we present the calculation of the maximal driving strength $A_{\max}$ beyond which the activity droplet will “teleport”—that is, disappear at the original location and recondense at the location of the drive, even if these two locations are widely separated. We refer to this maximal signal strength as the teleportation limit. We can determine this limit by finding the critical point at which the energy barrier for breaking up the droplet at the original location vanishes.
For simplicity, we assume that initially the cup-shaped driving signal is some distance from the droplet and not moving (the moving case can be solved in exactly the same way by using the equivalence principle and going to the co-moving frame of the droplet). We consider three scenarios during the teleportation process. (1) In the initial configuration, the droplet has not yet teleported and stays at the original location with its steady-state radius. (2) In the intermediate configuration, the activity is no longer contiguous: there is a droplet of some radius at the center of the cup and another droplet, carrying the remaining activity, at the original location (when teleportation happens, the total number of firing neurons changes from the undriven steady-state value to the value favored inside the cup). (3) In the final configuration, the droplet has successfully teleported to the center of the cup, where it takes its full radius. The three scenarios are depicted schematically in Figure 7.
Figure 7: Schematic of three scenarios during a teleportation process. In the initial configuration, the droplet is outside the cup. An energetically unfavorable intermediate configuration is penalized by an interfacial energy cost: the droplet breaks apart into two droplets—one outside the cup and one inside it. In the final configuration, with the lowest energy, the droplet inside the cup grows to a full droplet while the droplet outside shrinks to zero size. Above each droplet is its corresponding radius.
The global minimum of the Lyapunov function corresponds to scenario 3. However, there is an energy barrier between configuration 1 and configuration 3, corresponding to the difference in energy between configurations 1 and 2. We would like to find the critical split size that maximizes this difference, which corresponds to the largest energy barrier the network has to overcome in order to go from configuration 1 to 3. For the purposes of the derivation, in the following we rename the Lyapunov function in equation C.5 to emphasize its dependence on the external driving parameters and the disordered interactions. The subscript 0 stands for the default one-droplet configuration, and it is understood that the function is evaluated at the network configuration of a single droplet at the original location.
We have now obtained the maximum energy barrier during a teleportation process. A spontaneous teleportation will occur if this barrier vanishes, and this condition in turn gives an upper bound $A_{\max}$ on the external driving signal strength one can apply without any teleportation occurring spontaneously.
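The teleportation threshold can also be seen directly in simulation. The sketch below prepares a droplet far from a static cup and scans the drive strength $A$; below threshold the droplet stays put, above threshold activity recondenses inside the cup. The model and all parameter values are illustrative assumptions, so the numerical threshold here is specific to this toy network.

```python
import numpy as np

rng = np.random.default_rng(6)
N, k, J, J0, noise = 400, 5, 1.0, 0.3, 0.1   # assumed parameter values
w, cup_center = 30, 300                      # static cup far from the droplet

dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
W = np.where((dist > 0) & (dist <= k), J, -J0)
np.fill_diagonal(W, 0.0)
in_cup = np.abs(np.arange(N) - cup_center) <= w // 2

for A in [2.0, 4.0, 6.0, 8.0]:               # scan the drive strength
    s = np.zeros(N); s[40:60] = 1.0          # droplet prepared far from the cup
    I = np.where(in_cup, A, 0.0)
    for _ in range(500 * N):                 # relax under the static drive
        i = rng.integers(N)
        s[i] = 1.0 if W[i] @ s + I[i] + noise * rng.standard_normal() > 0 else 0.0
    com = np.flatnonzero(s).mean()
    status = "teleported into the cup" if abs(com - cup_center) < w else "stayed put"
    print(f"A = {A:.1f}: final droplet center {com:.0f} ({status})")
```

In this toy network the threshold is set by the drive overcoming the global inhibition exerted by the existing droplet on neurons inside the cup; near threshold the outcome is stochastic, consistent with a barrier-crossing picture.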
We plot the numerical solution for $A_{\max}$, obtained by solving for the vanishing of the barrier, against results obtained from simulation in Figure 8, and find perfect agreement.
Figure 8: Teleportation depth $A_{\max}$ plotted against the disorder parameter $\lambda$. The dots are data obtained from simulations for different $\lambda$ with all other parameters held fixed. The dotted line is the theoretical curve obtained by solving for $A_{\max}$ numerically.
We also obtain an approximate solution by observing that the only relevant scale for the critical split size is the radius of the droplet, $R$. We set the critical split size equal to $cR$ for some constant $c$. In general, $c$ can depend on dimensionless parameters of the network. Empirically, we found the constant to be about 0.29 in our simulations.
Note that the denominator is positive in the parameter regime considered here. The simulation results also confirm that the critical split size stays approximately constant. We have checked that the dependence on parameters in equation C.10 agrees with the numerical solution, up to the undetermined constant $c$.
C.3 Speed Limit on External Driving
Appendix D: Path Integration and Velocity Input
Place cell networks (Ocko et al., 2018) and head direction networks (Kim et al., 2017) are known to receive both velocity and landmark information. Velocity input can be modeled by adding an antisymmetric part to the connectivity matrix $J_{ij}$, which effectively tilts the continuous attractor.
The antisymmetric part provides the droplet with a velocity proportional to its magnitude (see Figure 9). In the presence of disorder, we can simply go to the co-moving frame at this velocity, where the droplet experiences an extra disorder-induced noise, again described by a disorder-induced temperature $T_{\rm eff}$.
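A minimal sketch of this mechanism: adding $+\epsilon$ to the couplings from left neighbors and $-\epsilon$ to those from right neighbors (an assumed antisymmetric form, with illustrative parameters) makes the droplet drift at a steady velocity.

```python
import numpy as np

rng = np.random.default_rng(7)
N, k, J, J0, noise = 600, 5, 1.0, 0.3, 0.1   # assumed parameter values
eps = 0.1                                    # antisymmetric strength (assumed)

d = np.arange(N)[:, None] - np.arange(N)[None, :]   # d[i, j] = i - j
W = np.where((np.abs(d) > 0) & (np.abs(d) <= k), J, -J0).astype(float)
W += np.where((d > 0) & (d <= k), eps, 0.0)     # stronger input from the left,
W -= np.where((d < 0) & (-d <= k), eps, 0.0)    # weaker input from the right
np.fill_diagonal(W, 0.0)

s = np.zeros(N); s[50:70] = 1.0
coms = []
for sweep in range(200):
    for _ in range(N):
        i = rng.integers(N)
        s[i] = 1.0 if W[i] @ s + noise * rng.standard_normal() > 0 else 0.0
    on = np.flatnonzero(s)
    if on.size:
        coms.append(on.mean())

v = np.polyfit(np.arange(len(coms)), coms, 1)[0]
print(f"drift velocity ~ {v:.2f} neurons per sweep (grows with eps)")
```

The asymmetry makes the leading edge of the droplet more excitable than the trailing edge, so the droplet translates without any external current, which is the tilted-attractor picture described above.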
Figure 9: Velocity of the droplet plotted against the magnitude of the antisymmetric matrix. All other parameters are held fixed at the same values as in Figure 8.
We found that the disorder-induced diffusion coefficient scales as a power of $\lambda$ (see Figure 10), where $\lambda$ is the average number of disordered connections per neuron.
Figure 10: Left: At fixed $\lambda$, a collection of 500 diffusive trajectories in the co-moving frame, where the frame velocity is taken to be the average velocity of all the trajectories. We can infer the diffusion coefficient $D$ from the variance of these trajectories, $\mathrm{Var}(z) = 2Dt$. Right: $\log D$ plotted against $\log \lambda$. The straight line indicates a power-law dependence of $D$ on $\lambda$.
Therefore, all our results in the main text apply when both the external drive and the antisymmetric part are present. Specifically, we can simply replace the velocity used in the main text by the sum of the two velocities corresponding to the external drive and to the antisymmetric part.
Appendix E: Quenched Disorder: Driving and Disorder-Induced Temperature
E.1 Disordered Connections and Disordered Forces
From now on, we include disordered connections $D_{ij}$ in addition to the ordered connections $J_{ij}$ that correspond to the nearest-$k$-neighbor interactions. We assume $D_{ij}$ consists of random connections among all the neurons. These random connections can be approximated as random permutations of the ordered matrix, such that the full connectivity is the sum over such permutations plus $J_{ij}$.
E.2 Variance of Disorder Forces
We compute the distribution of the disorder contribution to the droplet's energy using a combinatorial argument as follows.
Under the rigid droplet approximation, calculating this energy amounts to summing all the entries within a diagonal block submatrix of the full synaptic matrix whose linear size is the droplet size. Each set of disorder connections is a random permutation of the ordered matrix and thus has the same number of excitatory entries as $J_{ij}$. Since the inhibitory connections do not play a role in the summation by virtue of equation E.1, it suffices to consider only the effect of adding the excitatory connections in $D_{ij}$ to $J_{ij}$.
There are $\lambda$ sets of disordered connections in $D_{ij}$, and each has the same number of excitatory connections as the ordered matrix. Suppose we add these excitatory connections one by one to $J_{ij}$. Each time an excitatory entry is added on top of an entry in the diagonal block, there are two possible situations, depending on the value of that entry before the addition: if it is excitatory, the addition of an excitatory connection does not change its value because of the clipping rule in equation E.1; if it is inhibitory, the addition of an excitatory connection changes it to excitatory. In the latter case, the block sum changes, while in the former case it stays the same. (Note that if the excitatory connection is added outside the block, it does not change the sum and thus can be neglected.)
We have in total a fixed number of excitatory connections to be added and a fixed number of potentially flippable inhibitory connections in the diagonal block. We are interested in how many inhibitory connections have been changed to excitatory ones after all the excitatory connections are added, and in the corresponding change in the block sum.
We can get an approximate solution if we assume that the probability of flipping an inhibitory connection does not change with the subsequent addition of excitatory connections and stays constant throughout. This requires the number of flipped entries to remain a small fraction of the block, which is a reasonable assumption since the number of stored environments must remain well below an extensive capacity.
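The counting argument can be checked by direct Monte Carlo sampling. The sketch below draws random permutations, applies the clipping rule as a logical OR, and compares the number of flipped entries in the droplet block against the constant-flip-probability estimate; the block size $n$ and the number of disorder sets are assumed values.

```python
import numpy as np

rng = np.random.default_rng(8)
N, k = 400, 5          # network size and interaction range (assumed)
n, p = 20, 10          # droplet (block) size and number of disorder sets (assumed)

# Ordered excitatory entries: 0 < |i - j| <= k (True = excitatory).
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
J1 = (dist > 0) & (dist <= k)

base = J1[:n, :n].sum()       # excitatory entries already in the droplet block
flips = []
for trial in range(1000):
    block = J1[:n, :n].copy()
    for _ in range(p):        # each disorder set = a randomly permuted copy,
        perm = rng.permutation(N)               # clipped via logical OR
        block |= J1[np.ix_(perm[:n], perm[:n])]
    flips.append(block.sum() - base)            # inhibitory entries flipped

flips = np.array(flips)
# Constant-flip-probability estimate: each permuted set contributes about
# n(n-1) * 2k/N excitatory entries inside the block, ignoring overlaps.
est = p * n * (n - 1) * 2 * k / N
print(f"flips: mean {flips.mean():.1f} (naive estimate {est:.1f}), "
      f"std {flips.std():.1f}")
# Each flip changes a block entry from inhibitory to excitatory, so the
# droplet's disorder energy shift is proportional to the flip count; its
# spread sets the variance of the disorder force used in the text.
```

The Monte Carlo mean falls slightly below the naive estimate once flipped entries become common, which is exactly the overlap effect that the constant-probability assumption neglects.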
E.3 Disorder Temperature from Disorder-Induced Force
We focus here on the case where the external drive gives rise to a constant velocity $v$ for the droplet (as in the main text). In the co-moving frame, the disorder-induced force acts on the c.o.m. like random kicks whose correlations extend over the droplet size. For fast enough velocity, those random kicks are sufficiently decorrelated and act as white noise at an effective temperature $T_{\rm eff}$.
Figure 11: Uncollapsed data for the occupancies for different amounts of long-range disordered connections. Parameters are the same as in Figure 3 (see section F.1 for further details).
Appendix F: Derivation of the Memory Capacity for Driven Place Cell Network
In this section, we derive the memory capacity for the driven place cell network described by equation 6.1.
Our continuous attractor network can be applied to study place cell networks. We assume a 1D physical region of length $L$. We study a network with $N$ place cell neurons and assume each neuron has a place field that covers the region as part of a regular tiling. The neurons are assumed to interact as in the leaky integrate-and-fire model of neurons. The external driving currents can model sensory input: when the mouse is physically in a region covered by the place fields of a contiguous set of neurons, the currents into those neurons can be expected to be high compared to all other currents, which corresponds to the cup-shaped drive we used throughout the main text.
It has been shown that the collective coordinate description survives in the presence of multiple environments, provided the number of stored memories is below the capacity of the network. Below capacity, the neural activity droplet is multistable; that is, neural activity forms a stable contiguous droplet as seen in the place field arrangement corresponding to any one of the environments. Note that such a contiguous droplet will not appear contiguous in the place field arrangement of any other environment. Capacity was shown to scale with network size, with a coefficient that depends on the size of the droplet and the range of interactions $k$. However, this capacity concerns the intrinsic stability of the droplet and does not consider the effect of rapid driving forces.
Note this is the result we used in equation 6.1.
F.1 Numerics of the Place Cell Network Simulations
In this section, we explain our simulations in Figure 4 in detail.
Therefore, our model, equation F.10, has three parameters to determine. In Figure 12, we determine the parameters by collapsing the data onto the predicted scaling forms and find a good fit. Henceforth, we fix these three parameters to the best-fit values.
Figure 12: Top: Rescaled data plotted so that curves with different parameter values collapse onto each other; the dashed line is the predicted curve. Bottom: The complementary rescaling; different solid lines correspond to data with different parameter values, and the dashed line is the predicted curve.
In the bottom plot of Figure 12, we offset the effect of one parameter by multiplying by the predicted rescaling factor, and we see that curves corresponding to different values of that parameter collapse onto each other, confirming the predicted dependence. The collapsed line we are left with is just the dependence on the remaining parameter, up to an overall constant.
In the top panel of Figure 12, we instead offset the effect of the other parameter by the corresponding multiplicative factor. We see that the curves corresponding to its different values collapse onto each other, confirming the predicted dependence. The curve we are left with fits nicely with the predicted form.
In Figure 4b, we run our simulation with the network parameters held fixed. Along each curve, we vary the number of environments from 6 to 30, and the series of curves corresponds to different velocities from 0.6 to 1.2.
In Figure 4c, we again hold the network parameters fixed. Along each curve, we vary the memory load from 0.1 to 0.6, and the series of curves corresponds to different network sizes from 1000 to 8000.
In Figures 4b and 4c, the theoretical model we used is equation F.10 with the same parameters given above.
In Figure 4d, we replot the theory and data from Figure 4b. For the theoretical curve, we find the location where the predicted retrieval probability drops below a fixed threshold and call the corresponding memory load the “theoretical capacity.” For the simulation curve, we extrapolate to where the measured retrieval probability drops below the same threshold and call the corresponding value the “simulation capacity.”
For all the simulation curves above, we drag the droplet from one end of the continuous attractor to the other and run the simulation 300 times. We then measure the fraction of successful events (defined as the droplet surviving in the cup throughout the entire trajectory) and failed events (defined as the droplet escaping from the cup at some point before reaching the other end of the continuous attractor). We define the simulated success probability as the fraction of successful events.
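For reference, here is a compact sketch of this protocol on the toy model used in the earlier sketches (far fewer trials and assumed parameter values; the helper retrieval_trial is hypothetical). The success probability falls as the number of stored environments $p$ grows, because the other environments act as disorder.

```python
import numpy as np

def retrieval_trial(p, v, rng, N=300, k=5, J=1.0, J0=0.3, A=2.0, w=30, noise=0.1):
    """One retrieval run in environment 1 of a p-environment network: drag the
    cup across the attractor; success if the droplet tracks it the whole way.
    Model, update rule, and all parameter values are illustrative assumptions."""
    dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    J1 = np.where((dist > 0) & (dist <= k), J, -J0)
    W = J1.copy()
    for _ in range(p - 1):                    # other environments act as disorder
        perm = rng.permutation(N)
        W = np.maximum(W, J1[np.ix_(perm, perm)])
    np.fill_diagonal(W, 0.0)
    s = np.zeros(N); s[10:30] = 1.0           # droplet at one end of the attractor
    center = 20.0
    while center < N - 20:
        for _ in range(N):                    # one asynchronous sweep
            i = rng.integers(N)
            I = A if abs(i - center) <= w / 2 else 0.0
            s[i] = 1.0 if W[i] @ s + I + noise * rng.standard_normal() > 0 else 0.0
        center += v                           # cup advances v neurons per sweep
        on = np.flatnonzero(s)
        if on.size == 0 or abs(on.mean() - center) > w:
            return False                      # droplet lost (or left) the cup
    return True

rng = np.random.default_rng(9)
for p in [1, 3, 5]:
    wins = sum(retrieval_trial(p, v=1.0, rng=rng) for _ in range(10))
    print(f"p = {p}: estimated P(success) = {wins / 10:.1f}")
```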
Acknowledgments
We thank Jeremy England, Ila Fiete, John Hopfield, and Dmitry Krotov for discussions. A.M. and D.S. are grateful for support from the Simons Foundation MMLS investigator program. We acknowledge the University of Chicago Research Computing Center for support of this work.
References
Author notes
D.S. and A.M. contributed equally to this work.