## Abstract

We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of “reverberating” activity is often thought to be the basis of working memory. Recurrent excitation can also lead to “runaway” transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature. We focus on three questions here: (1) How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters? (2) Is there a positive minimum frequency of reverberating activity, a positive “onset frequency”? How does it depend on parameters? (3) When do runaway transitions occur? For reduced models, we give mathematical answers to these questions. We also examine computationally to what extent our findings are reflected in the behavior of biophysically more realistic model networks. Our main results can be summarized as follows. (1) Reverberating activity can be fueled by extremely weak slow recurrent excitation, but only by sufficiently strong fast recurrent excitation. (2) The onset of reverberating activity, as recurrent excitation is strengthened or external drive is raised, occurs at a positive frequency. It is faster when the external drive is weaker (and the recurrent excitation stronger). It is slower when the recurrent excitation has a longer decay time constant. (3) Runaway transitions occur only with fast, not with slow, recurrent excitation. We also demonstrate that the relation between reverberating activity fueled by recurrent excitation and runaway transitions can be visualized in an instructive way by a (generalized) cusp catastrophe surface.

## 1 Introduction

We think about rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of “reverberating” activity is thought to be the basis of working memory (see Compte, Brunel, Goldman-Rakic, & Wang, 2000; Li, Daie, Svoboda, & Druckmann, 2016; McCormick, Hasenstaub, Sanchez-Vives, Badoual, & Bal, 2003; and Riley & Constantinidis, 2015; but also Barak & Tsodyks, 2014; and Mongillo, Barak, & Tsodyks, 2008, for alternative viewpoints). Working memory dysfunction is among the principal impairments associated with schizophrenia (Goldman-Rakic, 1994). Furthermore, in schizophrenia, slow (NMDA receptor-mediated) recurrent excitation in the prefrontal cortex is weakened (Coyle, 2012) and the excitability of excitatory cells raised (Eichhammer et al., 2004). (Simplifying, we will always identify “excitability” and “external drive” in this letter, provided that the external drive is below the firing threshold.) For this reason, the parameter dependence of reverberating activity is of medical interest.

We also study runaway transitions—discontinuous transitions from firing at lower but positive frequencies to higher-frequency firing due to recurrent excitation. There is a potential medical connection here as well: runaway transitions are reminiscent of epileptic seizures (McCormick & Contreras, 2001). There have been discussions in the epilepsy literature about the role of AMPA receptor–mediated (i.e., rapidly decaying) versus NMDA receptor–mediated (slowly decaying) excitation in seizures (Rogawski, 2013); one can view this as a discussion of the parameter dependence of runaway transitions.

We ask three questions:

1. How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters?

2. Is there a positive minimum frequency of reverberating activity, a positive onset frequency? If so, how does it depend on parameters?

3. When do runaway transitions occur?

We focus on synchronized, rhythmic activity here for two reasons. The first is opportunistic: if we assume idealized, perfect synchronization and idealized, all-to-all connectivity, we can replace a whole network by a single cell with an excitatory autapse or—as a caricature model of a network in which excitatory cells receive feedback inhibition when they fire—with an excitatory and an inhibitory autapse. This allows a much easier yet still instructive mathematical analysis. The second reason is that synchronized, rhythmic activity does in fact seem to play an important role in working memory (Alekseichuk, Turi, de Lara, Antal, & Paulus, 2016; Leszczyński, Fell, & Axmacher, 2015; Roux & Uhlhaas, 2014; Yamamoto, Suh, Takeuchi, & Tonegawa, 2014) and epileptic seizures (Mormann & Jefferys, 2013).

We are by no means the first to have the idea of using a single cell with an autapse as a model of a network with recurrent connectivity (see Gómez, Budelli, & Pakdaman, 2001; Seung, Lee, Reis, & Tank, 2000; Wang, 1999; and Xie & Seung, 2000, for instance). When studying a single cell with an autapse, synchrony is built into the modeling a priori. One loses the ability to address the question of whether there is a synchronous (or nearly synchronous) attracting network state. There is a vast literature on this question, for both purely excitatory networks (Gerstner, van Hemmen, & Cowan, 1996; Mirollo & Strogatz, 1990; Peskin, 1975) and networks including inhibitory neurons (for reviews, see, e.g., Buzsáki & Wang, 2012; Tiesinga & Sejnowski, 2009; Whittington, Traub, Kopell, Ermentrout, & Buhl, 2000; and Börgers, 2017). We do report on some simulations of biophysical model networks of excitatory and inhibitory neurons in this letter, and they confirm that the single-cell analysis yields insight that is relevant for larger networks.

Simplifying somewhat, our answers to our three questions are as follows:

There is a positive minimum strength of recurrent excitation needed for reverberating activity when recurrent excitation decays rapidly but not when it decays slowly.

Experimentally, NMDA receptor-mediated, not AMPA receptor-mediated, excitation has been found to underlie working memory (Wang & Arnsten, 2015; Wang et al., 2013). Therefore our result suggests that rhythmic reverberating activity lost because of weakening (but not complete loss) of recurrent excitation can always be restored by raising the excitability of pyramidal cells (without raising it so much that the cells fire intrinsically).

There is a positive onset frequency. As recurrent excitation gets weaker, the external drive at which onset of reverberating activity occurs gets stronger, and the onset frequency decreases. The onset frequency is lower for slowly decaying recurrent excitation than for rapidly decaying recurrent excitation. As mentioned earlier, hypofunction of NMDA receptors and enhanced excitability of pyramidal cells have been observed in the prefrontal cortex of patients with schizophrenia. In addition, some gamma oscillations have been reported to be slower in schizophrenia patients than in healthy individuals (Spencer et al., 2004). This seems in line with our results. However, regardless of whether this rather speculative connection is real, the parameter dependence of the frequency of reverberating activity is of interest.

We prove that runaway transitions require, in our models, the interaction of rapidly decaying recurrent excitation with less rapidly decaying feedback inhibition, and external drive that is not too strong. Again there is a speculative connection to medicine: perampanel (Zwart et al., 2014), an AMPA receptor antagonist, was approved by the U.S. Food and Drug Administration for the treatment of epilepsy in 2012 (Faulkner & Burke, 2013).

The proofs of these results are elementary, but some are not easy, even for the reduced model problems studied here.

## 2 Models

We begin by listing the various models we use in this letter.

### 2.1 Population Activity Model with Recurrent Excitation

### 2.2 Self-Exciting Integrate-and-Fire Neuron

The membrane potential $v$ is shifted and scaled to vary between 0 and 1. When $v$ crosses the threshold value 1, an action potential is assumed to occur, which causes a reset of $v$ to 0 and triggers transient self-excitation, which models recurrent excitation in a network. The equations are as follows:
$$\frac{dv}{dt} = -\frac{v}{\tau_m} + I + ws, \tag{2.2}$$
$$\frac{ds}{dt} = -\frac{s}{\tau_E}, \tag{2.3}$$
$$\text{if } v(t_0-) = 1 \text{ and } \frac{dv}{dt}(t_0-) > 0, \text{ then } v(t_0+) = 0 \text{ and } s(t_0+) = 1. \tag{2.4}$$
Here $\tau_m$ denotes the membrane time constant, $I$ represents external drive or excitability, $w$ is the strength of recurrent excitation, and $\tau_E$ is the decay time constant of the recurrent excitation. The statement of the reset condition in equation 2.4 is slightly unusual. For a reset to be triggered, it is not sufficient for $v$ to reach 1; it must reach 1 with a positive derivative. This convention will make some later arguments a bit smoother. For $w = 0$, periodic firing occurs if and only if $I\tau_m > 1$, with period
$$T = \tau_m \ln \frac{I\tau_m}{I\tau_m - 1}.$$
Although $v$ is a nondimensionalized membrane potential, we think of $t$ as time measured in ms. However, we think of frequencies as measured in Hz, not in 1/ms; the relation between frequency and period is therefore not $f = 1/T$ but $f = 1000/T$.
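A minimal simulation sketch of the self-exciting LIF neuron may be helpful here. The concrete form $dv/dt = -v/\tau_m + I + ws$, the forward-Euler step size, the parameter values, and the variable names below are our assumptions for illustration, not prescriptions from the letter:

```python
import math

def simulate_lif(tau_m, tau_E, I, w, s0=0.0, t_max=500.0, dt=0.01):
    """Self-exciting LIF neuron: dv/dt = -v/tau_m + I + w*s, ds/dt = -s/tau_E.
    When v reaches 1 with positive slope, v resets to 0 and s resets to 1.
    Returns the list of spike times (in ms)."""
    v, s = 0.0, s0
    spikes = []
    for k in range(int(t_max / dt)):
        dv = -v / tau_m + I + w * s
        v += dt * dv
        s -= dt * s / tau_E
        if v >= 1.0 and dv > 0.0:        # threshold crossing with positive slope
            spikes.append((k + 1) * dt)
            v, s = 0.0, 1.0              # reset and trigger self-excitation
    return spikes

# With w = 0 and I*tau_m > 1, the neuron fires intrinsically; the period
# should match T = tau_m * ln(I*tau_m / (I*tau_m - 1)).
spikes = simulate_lif(tau_m=10.0, tau_E=25.0, I=0.12, w=0.0)
T_analytic = 10.0 * math.log(1.2 / 0.2)
print(spikes[1] - spikes[0], T_analytic)   # both close to 17.9 ms
```

With subthreshold drive ($I\tau_m < 1$) the same routine illustrates bistability: started from rest it never fires, but started just after a spike (with the self-excitation variable at 1) sufficiently strong $w$ sustains periodic firing.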

We will study periodically firing self-exciting LIF neurons. We pointed out in section 1 that this does not necessarily correspond to a stable (attracting) synchronous network state; this issue cannot be analyzed with a single-cell model. However, one might also ask whether the periodic solutions are stable even as solutions of the single-cell model. If a solution of equations 2.2 to 2.4 is close to the periodic solution at time 0, will it become the periodic solution eventually? Because of the discontinuous reset described by equation 2.4, this amounts to asking whether there will be a reset; once there is one, what happens from then on is independent of the initial condition. However, in a solution that is initially close to the periodic solution, there will in fact be a reset because of the continuous dependence of the solution of a system of ordinary differential equations on initial conditions. Similar comments apply to the models of sections 2.3 and 2.4 because they too assume discontinuous resets of gating variables.

### 2.3 Self-Exciting, Self-Inhibiting Integrate-and-Fire Neuron

### 2.4 Self-Exciting Theta Neuron

Our fourth model is a theta neuron (Ermentrout & Kopell, 1986) with self-excitation. For brevity, we make little use of this model in this letter, but we do use it to make one important point later on.

At first sight, it seems rather dramatically unrealistic to let the firing threshold be $\infty$ and the reset voltage $-\infty$. However, for the QIF neuron, $v$ rises from 1 to $\infty$, and from $-\infty$ to 0, in a finite (and typically short) amount of time because of the quadratic nonlinearity on the right-hand side of equation 2.9, so defining the reset and threshold voltages to be $-\infty$ and $\infty$ is in some sense not very different from taking them to be 0 and 1 (see figure 1 of Börgers & Kopell, 2005, for a picture illustrating this point). We let the firing threshold be $\infty$ and the reset voltage $-\infty$ because then a simple change of coordinates removes the discontinuous reset altogether (Ermentrout, 2008). We will review the change of coordinates shortly.

The model in the new coordinates, the *self-exciting theta neuron*, is the self-exciting QIF neuron defined by equations 2.11 to 2.13. Notice, however, that the reset condition for $\theta$ in equation 2.16 plays no role for subsequent calculations, since the right-hand side of equation 2.15 is periodic with period $2\pi$; this reset can therefore be omitted without consequence if we replace equation 2.16 by a reset of the gating variable alone. Although $\theta$, the transformed voltage variable, no longer resets discontinuously, the gating variable still does. This discontinuity could be removed by smoothing, but we will refrain from doing so here (see Takeuchi, 2017, where a self-inhibition term is also added).
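The change of coordinates can be checked numerically. The sketch below uses the textbook normalization $dv/dt = v^2 + I$ with $v = \tan(\theta/2)$, which gives $d\theta/dt = (1 - \cos\theta) + I(1 + \cos\theta)$; the scaling in equations 2.9 to 2.15 may differ, so this illustrates the idea rather than the letter's exact equations:

```python
import math

# QIF voltage equation dv/dt = v^2 + I, and its theta-form
# d(theta)/dt = (1 - cos(theta)) + I*(1 + cos(theta)), related by v = tan(theta/2).
I = 0.25
dt = 1e-4
v, theta = 0.0, 0.0          # v = tan(theta/2) holds at t = 0
for _ in range(20000):       # integrate both up to t = 2 (before v blows up)
    v += dt * (v * v + I)
    theta += dt * ((1.0 - math.cos(theta)) + I * (1.0 + math.cos(theta)))

print(v, math.tan(theta / 2.0))   # the two agree up to Euler error
```

Where $v$ blows up to $\infty$ and reappears at $-\infty$, $\theta$ simply passes smoothly through $\pi$; this is the sense in which the change of coordinates removes the discontinuous voltage reset.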

### 2.5 Biophysical Network Models

We will compare our results for the very simple model problems with simulations of networks of excitatory and inhibitory cells (E-cells and I-cells), modeled using the Hodgkin-Huxley formalism. For all details of modeling and notation, see the appendix.

## 3 How Much Recurrent Excitation Is Needed to Sustain Reverberating Activity?

### 3.1 Population Activity Model with Recurrent Excitation

For the model of section 2.1, we must first explain what “reverberating activity” should mean. For many values of $I$ and $w$, equation 2.1 has a single stable fixed point. However, for some values of $I$ and $w$, there are three fixed points—two stable ones with an unstable one in between. The area in the $(I, w)$-plane for which there are three fixed points will be called the bistability region. The larger of the two stable fixed points in the bistability region is the analogue of reverberating activity in this model. The cusp catastrophe surface (Strogatz, 2015) in Figure 1 shows the dependence of the fixed points on $I$ and $w$.
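As an illustration of how such bistability arises in models of this type, consider the generic firing-rate equation $dE/dt = -E + S(I + wE)$ with a logistic $S$; this specific form and the parameter values below are our assumptions for illustration, not necessarily equation 2.1:

```python
import math

def num_fixed_points(I, w, n=4000):
    """Count fixed points of dE/dt = -E + S(I + w*E) on [0, 1], where S is
    the logistic function, by counting sign changes of the right-hand side
    on a fine grid. (Illustrative model, not necessarily equation 2.1.)"""
    S = lambda x: 1.0 / (1.0 + math.exp(-x))
    g = lambda E: S(I + w * E) - E
    count, prev = 0, g(0.0)
    for k in range(1, n + 1):
        cur = g(k / n)
        if prev * cur < 0.0:
            count += 1
        prev = cur
    return count

# Weak recurrent excitation: a single fixed point for any drive.
print(num_fixed_points(I=-0.9, w=2.0))    # 1
# Strong recurrent excitation, suitable drive: bistability (3 fixed points).
print(num_fixed_points(I=-4.8, w=10.0))   # 3
```

Because the logistic function has maximal slope 1/4, any $w < 4$ yields a unique fixed point in this toy model, no matter what $I$ is; that is the sense in which a strictly positive minimum strength of recurrent excitation is needed for bistability.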

So in this simple model, the strength of recurrent excitation needed for the existence of a bistability interval (a range of values of $I$ with three fixed points) has a strictly positive lower bound.

### 3.2 Self-Exciting LIF Neuron

We fix positive values of $\tau_m$ and $\tau_E$ and plot the firing frequency $f$ of the self-exciting LIF neuron as a function of $I$ and $w$. This results in surfaces of the sort shown in Figure 2, panels A and B, analogous to the cusp catastrophe surface of Figure 1. Note that in panels A and B of Figure 2, the vertical axis denotes $\min(f, 300)$, not $f$. For large $I$ and $w$, $f$ can become arbitrarily large, but $\min(f, 300)$ cannot exceed 300; this makes the plots easier to read.

Let $\tau_m > 0$ and $\tau_E > 0$ be fixed.

- The region in the $(I, w)$-plane in which both rest and periodic firing are possible for the self-exciting LIF neuron can be characterized in the form
  $$w > w_*(I), \qquad 0 \le I < \frac{1}{\tau_m}, \tag{3.4}$$
  where $w_*$ is a strictly decreasing continuous function of $I$ with $w_*(I) \ge \frac{1}{\tau_m} - I$ for all $I$. See Figure 2C for illustration.

Thus, for rapidly decaying recurrent excitation (small $\tau_E$), the strength $w$ of recurrent excitation must exceed a strictly positive lower bound for there to be a range of values of $I$ in which there is bistability between rest and reverberating activity. For slowly decaying recurrent excitation (large $\tau_E$), there is a range of bistability for any positive $w$. In other words, for slowly decaying recurrent excitation, reverberating activity lost by lowering $w$ (keeping it above 0) can always be restored by raising $I$ (keeping it below $1/\tau_m$).

The solution of equation 3.6 depends continuously on $I$ and $w$. Furthermore, at any fixed time, it strictly increases with increasing $I$ or $w$. Taken together, these facts imply that the region in the $(I, w)$-plane for which rest is possible ($I < 1/\tau_m$) and periodic firing is possible as well (the solution of equation 3.6 rises above 1 at a finite time) can be characterized in the form of 3.4, with a strictly decreasing function $w_*$.

If $I + w < 1/\tau_m$, the total drive in the differential equation in equation 3.6, $I + w e^{-t/\tau_E}$, remains below $1/\tau_m$ for all $t$, and therefore $v$ cannot reach 1. Thus, $w_*(I) \ge 1/\tau_m - I$.
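This no-firing regime can be checked numerically. The sketch below integrates the post-spike dynamics under the same conventions as above (forward Euler, additive drive $I + w e^{-t/\tau_E}$, membrane threshold 1); the parameter values are ours:

```python
import math

def max_voltage(tau_m, tau_E, I, w, t_max=500.0, dt=0.01):
    """Integrate dv/dt = -v/tau_m + I + w*exp(-t/tau_E) from v(0) = 0
    (self-excitation fully on at t = 0) and return the largest v reached."""
    v, v_max = 0.0, 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        v += dt * (-v / tau_m + I + w * math.exp(-t / tau_E))
        v_max = max(v_max, v)
    return v_max

# With tau_m = 10, the intrinsic firing threshold for the drive is 1/tau_m = 0.1.
# Here I + w = 0.09 < 0.1, so v can never reach 1; in fact v stays below
# (I + w)*tau_m = 0.9 even with the self-excitation fully activated.
print(max_voltage(tau_m=10.0, tau_E=20.0, I=0.05, w=0.04))
```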

### 3.3 Self-Exciting, Self-Inhibiting LIF Neuron

We fix positive values of $\tau_m$, $\tau_E$, $\tau_I$, and $g_I$ and plot the firing frequency of the self-exciting, self-inhibiting LIF neuron as a function of $I$ and $w$. This results in surfaces of the sort shown in Figure 4, analogous to the surfaces in Figures 1 and 2. There is an interesting new feature in the left panel of Figure 4: the surface has a tear, that is, the firing frequency, as a function of $I$ and $w$, has jump discontinuities. We postpone the analysis of the tear to section 6.

Let $\tau_m$, $\tau_E$, $\tau_I$, and $g_I$ be fixed, all positive.

- The region in the $(I, w)$-plane in which both rest and periodic firing are possible for the self-exciting, self-inhibiting LIF neuron can be characterized in the form
  $$w > w_*(I), \qquad 0 \le I < \frac{1}{\tau_m},$$
  for some strictly decreasing continuous function $w_*$ of $I$. (For illustration, compare Figure 2C.)


Furthermore, suppose the solution of equation 3.16 crosses 1 with a positive slope at time $t_*$. Then
$$-\frac{1}{\tau_m} + I + w\,e^{-t_*/\tau_E} - g_I\,e^{-t_*/\tau_I} > 0,$$
and if $\tau_I \ge \tau_E$, this is impossible unless $w - g_I > 1/\tau_m - I$. This implies equation 3.13. Finally, note that the solution $\tilde v$ of
$$\frac{d\tilde v}{dt} = -\frac{\tilde v}{\tau_m} + I + (w - g_I)\,e^{-t/\tau_E}, \qquad \tilde v(0) = 0,$$
is a lower bound on the solution of equation 3.16 as long as $\tau_I \le \tau_E$. It is easy to calculate $\tilde v$ explicitly, using an integrating factor:
$$\tilde v(t) = I\tau_m\left(1 - e^{-t/\tau_m}\right) + (w - g_I)\,\frac{\tau_E\tau_m}{\tau_E - \tau_m}\left(e^{-t/\tau_E} - e^{-t/\tau_m}\right).$$
If $\tau_E$ is greater than both $\tau_m$ and $\tau_I$, then for large $t$,
$$\tilde v(t) \approx \tau_m\left(I + (w - g_I)\,e^{-t/\tau_E}\right).$$
Thus, if $w - g_I$ is large enough, $\tilde v$ eventually exceeds 1, and therefore so does the solution of equation 3.16. This proves equation 3.14.

### 3.4 Self-Exciting Theta Neuron

We fix positive values of $\tau_m$ and $\tau_E$ and plot the firing frequency of the self-exciting theta neuron as a function of $I$ and $w$. This results in surfaces of the sort shown in Figures 5A and 5B, analogous to the surfaces shown previously.

Let $\tau_m > 0$ and $\tau_E > 0$ be fixed.

- The region in the $(I, w)$-plane in which both rest and periodic firing are possible for the self-exciting theta neuron (see Figure 2C for illustration) can be characterized in the form
  $$w > w_*(I), \qquad 0 \le I < I_c,$$
  where $I_c > 0$ is the drive above which the theta neuron fires intrinsically, and $w_*$ is a strictly decreasing continuous function of $I$ with $\lim_{I \nearrow I_c} w_*(I) > 0$.

Thus, in contrast with the models discussed previously, the minimum strength of recurrent excitation needed for bistability is always positive here. However, it is much smaller for large $\tau_E$ than for small $\tau_E$.

The proof of part a is analogous to the proofs of the same statements in the two preceding propositions, except for the claim that the minimum strength of recurrent excitation is strictly positive, which will follow from part b.

A borderline trajectory, depicted in bold black in Figure 5C, originates at a point at which the gating variable has some finite value $s_0$. To see that $s_0$ must be finite, suppose that $T_0$ is the finite time it takes for the borderline trajectory to traverse the region shown in Figure 5C, and let $s_1$ be the value of the gating variable when the trajectory leaves that region (see Figure 5C). Then $s_0 = s_1 e^{T_0/\tau_E}$, which is finite.

### 3.5 Biophysical Networks

Do our results about the onset of reverberating activity in radically simplified models give any insight into the onset of (rhythmic) reverberating activity in biophysical networks? We refer to the appendix for the definition of our model networks. As explained there, we denote by $w$ a measure of the strength of recurrent excitation (see the appendix for its precise definition) and by $\tau_E$ the decay time constant of recurrent excitation. The decay time constant of E-to-I synapses is always taken to be 3 ms in this letter.

Figures 6A and 6B show two simulations, one with a long (NMDA-like) and the other with a short (AMPA-like) $\tau_E$. The drive to the E-cells is not far below the threshold at which they begin to fire intrinsically. In both cases, $w$ is chosen so that reverberating activity results, but just barely. As formulas 3.18 and 3.19 would suggest, the threshold value of $w$ for reverberating activity is much lower for large $\tau_E$ than for small $\tau_E$; this of course is not at all surprising.

As $\tau_E$ decreases, the threshold value of $w$ in equation 3.5 is proportional to $1/\tau_E$, while in equation 3.19, it is proportional to $1/\tau_E^2$. In Figures 6A and 6B, the truth seems to be somewhere in between: when $\tau_E$ is reduced by about a factor of 33, the threshold value of $w$ rises by about a factor of 147.
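A back-of-envelope check of "somewhere in between": if the threshold value of $w$ scaled like $\tau_E^{-p}$, the two factors quoted above would determine the effective exponent $p$ (the factors 33 and 147 are taken from the simulations described in the text):

```python
import math

# tau_E was reduced by a factor of 33; the threshold w rose by a factor of 147.
# If the threshold scaled like tau_E**(-p), then 147 = 33**p.
p = math.log(147.0) / math.log(33.0)
print(p)   # about 1.43
```

An exponent of about 1.43 lies strictly between the scalings suggested by the two reduced-model formulas.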

The most striking difference between Figures 6A and 6B is of course the difference in the frequency of reverberating activity. This will be discussed in section 4.

Both formulas 3.18 and 3.19 show that the threshold value of $w$ decreases as the membrane time constant $\tau_m$ decreases. Note that for both the LIF and the theta neurons, a reduction in $\tau_m$ means an overall acceleration of the intrinsic dynamics. When $I$ is near the intrinsic firing threshold, the intrinsic dynamics bring the membrane potential to the vicinity of the firing threshold. This takes less time when $\tau_m$ is smaller, and therefore more recurrent excitation will be left by the time the membrane potential arrives in the vicinity of the firing threshold, making it easier for the recurrent excitation to generate another spike. This reasoning makes the $\tau_m$-dependence in equations 3.18 and 3.19 plausible. However, we have not found any analogue of it in our biophysical networks: greater leakiness of the E-cells does not lead to onset of reverberating activity for lower values of $w$. In fact, a reduction in $\tau_m$ in the reduced models is of course not the same as an increase in the leak conductance in the biophysically modeled E-cells; increasing the leak conductance does not amount simply to an overall acceleration of intrinsic dynamics. Further, equations 3.18 and 3.19 are formulas for reverberating activity without feedback inhibition, and in our biophysical networks, feedback inhibition is present and important.

Formula 3.13 suggests that feedback inhibition is crucial in determining whether reverberating activity is possible if the inhibition is slow in comparison with the recurrent excitation. (If it is not, our results in section 3.3 do not say anything about the dependence of $w_*$ on the strength of feedback inhibition.) Intuitively, this is not hard to understand: rapidly decaying inhibition will only briefly prevent the recurrent excitation from doing its work, while long-lasting inhibition will block the effect of recurrent excitation long enough for the recurrent excitation to decay substantially and therefore become ineffective. This is confirmed in Figures 6C and 6D, where the reverberating activity is seen to survive a slight increase in the I-to-E synaptic strength when recurrent excitation decays much more slowly than inhibition, but not when it decays faster than inhibition.
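The same intuition can be illustrated with a self-exciting, self-inhibiting LIF neuron of the kind introduced in section 2.3. The concrete equations, parameter values, and variable names below are ours, chosen only for illustration: with identical strengths of excitation and inhibition, reverberation persists when the excitation outlasts the inhibition and dies when the inhibition outlasts the excitation.

```python
def spike_times(tau_m, tau_E, tau_I, I, w, g, t_max=200.0, dt=0.01):
    """Self-exciting, self-inhibiting LIF neuron (illustrative sketch):
    dv/dt = -v/tau_m + I + w*sE - g*sI, with sE and sI decaying exponentially
    and both reset to 1 at each spike (instantaneous feedback inhibition).
    Start with sE = sI = 1, as just after a spike."""
    v, sE, sI = 0.0, 1.0, 1.0
    spikes = []
    for k in range(int(t_max / dt)):
        dv = -v / tau_m + I + w * sE - g * sI
        v += dt * dv
        sE -= dt * sE / tau_E
        sI -= dt * sI / tau_I
        if v >= 1.0 and dv > 0.0:
            spikes.append((k + 1) * dt)
            v, sE, sI = 0.0, 1.0, 1.0
    return spikes

# Excitation slower than inhibition: reverberating activity persists.
slow_E = spike_times(tau_m=10.0, tau_E=25.0, tau_I=9.0, I=0.08, w=0.2, g=0.05)
# Excitation faster than inhibition: the same w no longer sustains firing.
fast_E = spike_times(tau_m=10.0, tau_E=3.0,  tau_I=9.0, I=0.08, w=0.2, g=0.05)
print(len(slow_E), len(fast_E))   # many spikes vs. none
```

In the second case the inhibition, with its 9 ms decay, outlasts the 3 ms excitation, so by the time the inhibition has worn off, the recurrent excitation has decayed too far to trigger another spike.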

## 4 Onset Frequency of Reverberating Activity

### 4.1 Self-Exciting, Self-Inhibiting LIF Neuron

### 4.2 Self-Exciting Theta Neuron

For the self-exciting theta neuron, the onset frequency of reverberating activity is zero for all parameter values.

At first sight, this proposition seems to be bad news for us: the inequality proved in section 4.1 seems to be irrelevant for the theta neuron, which is arguably a more realistic neuronal model than the LIF neuron. However, although the frequency, $f$, of reverberating activity starts out at zero as $I$ rises above the onset value, the rise in $f$ is extremely rapid (see Figure 7A). We will argue heuristically that the frequency $f$, as a function of $I$, is not just infinitely steep, but steep to infinite order at onset. By this statement, we mean that $I$, as a function of $f$, is flat to infinite order, that is, all derivatives of $I$ with respect to $f$ are zero at $f = 0$.

This follows from a general observation about flow in the vicinity of a saddle. Suppose that a trajectory passes a saddle point at a small distance $\delta$. Near the saddle point, the motion must of course be slow, since the saddle is a fixed point, albeit an unstable one. The time that the passage takes (a notion made completely precise in a simple example below) diverges to $\infty$ as $\delta \searrow 0$. However, this divergence is only logarithmic. Thus, for the passage to take a long time, $\delta$ must be very close to 0. Consequently, when a limit cycle is broken in a saddle-cycle collision, the oscillation gets slow only when the saddle comes very close to the cycle.
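The logarithmic divergence is easy to see in the simplest saddle flow, $\dot x = x$, $\dot y = -y$, with the saddle at the origin (a stand-in for the model example analyzed in the text): a trajectory entering the strip $|y| \le 1$ at $(\delta, 1)$ exits at $x = 1$ after time $\ln(1/\delta)$.

```python
import math

def passage_time(delta, dt=1e-4):
    """Time for the trajectory of xdot = x, ydot = -y starting at (delta, 1)
    to reach x = 1 (forward Euler). The y-coordinate decays passively and
    does not affect the exit time, so only x is integrated."""
    x, t = delta, 0.0
    while x < 1.0:
        x += dt * x
        t += dt
    return t

for delta in (1e-2, 1e-4, 1e-8):
    print(delta, passage_time(delta), math.log(1.0 / delta))
# Squaring delta only doubles the passage time: the divergence is logarithmic.
```

So to make the passage take, say, ten times longer, the trajectory must pass the saddle at a distance smaller by a factor of $10^{10}$; this is why the oscillation period grows noticeably only extremely close to the saddle-cycle collision.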

To clarify the connection between the model system, equation 4.10, and equations 4.8 and 4.9, note that the bold curve in Figure 7B is the stable manifold of the saddle point, the analogue of the vertical axis in Figure 7C. Denote this curve by $\Gamma$. Let $\gamma$ be one of the red curves in Figure 7B. The trajectory $\Gamma$ enters the strip at a point $P$, and the trajectory $\gamma$ enters it at a nearby point $Q$ (see Figure 7D). The difference between $P$ and $Q$ is the analogue of the