Abstract

We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of “reverberating” activity is often thought to be the basis of working memory. Recurrent excitation can also lead to “runaway” transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature. We focus on three questions here: (1) How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters? (2) Is there a positive minimum frequency of reverberating activity, a positive “onset frequency”? How does it depend on parameters? (3) When do runaway transitions occur? For reduced models, we give mathematical answers to these questions. We also examine computationally to what extent our findings are reflected in the behavior of biophysically more realistic model networks. Our main results can be summarized as follows. (1) Reverberating activity can be fueled by extremely weak slow recurrent excitation, but only by sufficiently strong fast recurrent excitation. (2) The onset of reverberating activity, as recurrent excitation is strengthened or external drive is raised, occurs at a positive frequency. It is faster when the external drive is weaker (and the recurrent excitation stronger). It is slower when the recurrent excitation has a longer decay time constant. (3) Runaway transitions occur only with fast, not with slow, recurrent excitation. We also demonstrate that the relation between reverberating activity fueled by recurrent excitation and runaway transitions can be visualized in an instructive way by a (generalized) cusp catastrophe surface.

1  Introduction

We think about rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of “reverberating” activity is thought to be the basis of working memory (see Compte, Brunel, Goldman-Rakic, & Wang, 2000; Li, Daie, Svoboda, & Druckmann, 2016; McCormick, Hasenstaub, Sanchez-Vives, Badoual, & Bal, 2003; and Riley & Constantinidis, 2015; but also Barak & Tsodyks, 2014; and Mongillo, Barak, & Tsodyks, 2008, for alternative viewpoints). Working memory dysfunction is among the principal impairments associated with schizophrenia (Goldman-Rakic, 1994). Furthermore, in schizophrenia, slow (NMDA receptor-mediated) recurrent excitation in the prefrontal cortex is weakened (Coyle, 2012) and the excitability of excitatory cells raised (Eichhammer et al., 2004). (Simplifying, we will always identify “excitability” and “external drive” in this letter, provided that the external drive is below the firing threshold.) For this reason, the parameter dependence of reverberating activity is of medical interest.

We also study runaway transitions—discontinuous transitions from firing at lower but positive frequencies to higher-frequency firing due to recurrent excitation. There is a potential medical connection here as well: runaway transitions are reminiscent of epileptic seizures (McCormick & Contreras, 2001). There have been discussions in the epilepsy literature about the role of AMPA receptor–mediated (i.e., rapidly decaying) versus NMDA receptor–mediated (slowly decaying) excitation in seizures (Rogawski, 2013); one can view this as a discussion of the parameter dependence of runaway transitions.

We ask three questions:

  1. How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters?

  2. Is there a positive minimum frequency of reverberating activity, a positive onset frequency? If so, how does it depend on parameters?

  3. When do runaway transitions occur?

We focus on synchronized, rhythmic activity here for two reasons. The first is opportunistic: if we assume idealized, perfect synchronization and idealized, all-to-all connectivity, we can replace a whole network by a single cell with an excitatory autapse or—as a caricature model of a network in which excitatory cells receive feedback inhibition when they fire—with an excitatory and an inhibitory autapse. This allows a much easier yet still instructive mathematical analysis. The second reason is that synchronized, rhythmic activity does in fact seem to play an important role in working memory (Alekseichuk, Turi, de Lara, Antal, & Paulus, 2016; Leszczyński, Fell, & Axmacher, 2015; Roux & Uhlhaas, 2014; Yamamoto, Suh, Takeuchi, & Tonegawa, 2014) and epileptic seizures (Mormann & Jefferys, 2013).

We are by no means the first to have the idea of using a single cell with an autapse as a model of a network with recurrent connectivity (see Gómez, Budelli, & Pakdaman, 2001; Seung, Lee, Reis, & Tank, 2000; Wang, 1999; and Xie & Seung, 2000, for instance). When studying a single cell with an autapse, synchrony is built into the modeling a priori. One loses the ability to address the question of whether there is a synchronous (or nearly synchronous) attracting network state. There is a vast literature on this question, for both purely excitatory networks (Gerstner, van Hemmen, & Cowan, 1996; Mirollo & Strogatz, 1990; Peskin, 1975) and networks including inhibitory neurons (for reviews, see, e.g., Buzsáki & Wang, 2012; Tiesinga & Sejnowski, 2009; Whittington, Traub, Kopell, Ermentrout, & Buhl, 2000; and Börgers, 2017). We do report on some simulations of biophysical model networks of excitatory and inhibitory neurons in this letter, and they confirm that the single-cell analysis yields insight that is relevant for larger networks.

Simplifying somewhat, our answers to our three questions are as follows:

  1. There is a positive minimum strength of recurrent excitation needed for reverberating activity when recurrent excitation decays rapidly but not when it decays slowly.

    Experimentally, NMDA receptor-mediated, not AMPA receptor-mediated, excitation has been found to underlie working memory (Wang & Arnsten, 2015; Wang et al., 2013). Therefore, our result suggests that rhythmic reverberating activity lost because of weakening (but not complete loss) of recurrent excitation can always be restored by raising the excitability of pyramidal cells (without raising it so much that the cells fire intrinsically).

  2. There is a positive onset frequency. As recurrent excitation gets weaker, the external drive at which onset of reverberating activity occurs gets stronger, and the onset frequency decreases. The onset frequency is lower for slowly decaying recurrent excitation than for rapidly decaying recurrent excitation. As mentioned earlier, hypofunction of NMDA receptors and enhanced excitability of pyramidal cells have been observed in the prefrontal cortex of patients with schizophrenia. In addition, some gamma oscillations have been reported to be slower in schizophrenia patients than in healthy individuals (Spencer et al., 2004). This seems in line with our results. However, regardless of whether this rather speculative connection is real, the parameter dependence of the frequency of reverberating activity is of interest.

  3. We prove that runaway transitions require, in our models, the interaction of rapidly decaying recurrent excitation with less rapidly decaying feedback inhibition, and external drive that is not too strong. Again there is a speculative connection to medicine: perampanel (Zwart et al., 2014), an AMPA receptor antagonist, was approved by the U.S. Food and Drug Administration for the treatment of epilepsy in 2012 (Faulkner & Burke, 2013).

The proofs of these results are elementary, but some are not easy, even for the reduced model problems studied here.

2  Models

We begin by listing the various models we use in this letter.

2.1  Population Activity Model with Recurrent Excitation

The simplest model of recurrent excitation is
formula
where for , for , and for . The quantity is a normalized measure of activity of a neuronal network, such as mean firing frequency measured in a suitable unit; we call it “frequency” from here on. The parameter is a measure of the external drive or excitability of the cells, and , with , models recurrent excitation. To make the model aesthetically more appealing and perhaps a little more realistic, we replace by the smooth approximation , where is the gaussian density with mean zero and standard deviation , and denotes convolution:
formula
2.1
In this letter, this highly idealized model mostly serves the purpose of motivating our studies of more realistic models.
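
For readers who want to experiment with this model numerically, the following sketch illustrates the smoothed step. It assumes that the unit step is convolved with a mean-zero gaussian of standard deviation σ, so that the smoothed step equals the gaussian cumulative distribution function; the parameter values are illustrative only.

```python
# A minimal sketch (not the paper's code) of the smoothed step used in the
# population-activity model.  It ASSUMES the nonlinearity is a unit step
# convolved with a mean-zero gaussian of standard deviation sigma; that
# convolution equals the gaussian cumulative distribution function,
# s_sigma(x) = Phi(x / sigma).  All values are illustrative.
import numpy as np
from scipy.special import erf

def s_sigma(x, sigma):
    """Unit step convolved with a gaussian density of standard deviation sigma."""
    return 0.5 * (1.0 + erf(np.asarray(x, dtype=float) / (sigma * np.sqrt(2.0))))

if __name__ == "__main__":
    sigma = 0.05
    x = np.linspace(-0.3, 0.3, 7)
    print(np.round(s_sigma(x, sigma), 3))        # smooth transition from 0 to 1
    print(1.0 / (sigma * np.sqrt(2.0 * np.pi)))  # maximal slope, attained at x = 0
```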

2.2  Self-Exciting Integrate-and-Fire Neuron

Our next model is a single linear integrate-and-fire (LIF) neuron, an admittedly crude model of a network of synchronously firing cells.1 The membrane potential is shifted and scaled to vary between 0 and 1. When crosses the threshold value 1, an action potential is assumed to occur, which causes a reset of to 0 and triggers transient self-excitation, which models recurrent excitation in a network. The equations are as follows:
formula
2.2
formula
2.3
formula
2.4
Here denotes the membrane time constant, represents external drive or excitability, is the strength of recurrent excitation, and is the decay time constant. The statement of the reset condition in equation 2.4 is slightly unusual. For a reset to be triggered, it is not sufficient for to reach 1; it must reach 1 with a positive derivative. This convention will make some later arguments a bit smoother. For , periodic firing occurs if and only if with
formula
Although is a nondimensionalized membrane potential, we think of as time measured in ms. However, we think of frequencies as measured in Hz, not in inverse ms; the relation between frequency and period is therefore not but
formula

We will study periodically firing self-exciting LIF neurons. We pointed out in section 1 that this does not necessarily correspond to a stable (attracting) synchronous network state; this issue cannot be analyzed with a single-cell model. However, one might also ask whether the periodic solutions are stable even as solutions of the single-cell model. If a solution of equations 2.2 to 2.4 is close to the periodic solution at time 0, will it become the periodic solution eventually? Because of the discontinuous reset described by equation 2.4, this amounts to asking whether there will be a reset; once there is one, what happens from then on is independent of the initial condition. However, in a solution that is initially close to the periodic solution, there will in fact be a reset because of the continuous dependence of the solution of a system of ordinary differential equations on initial conditions. Similar comments apply to the models of sections 2.3 and 2.4 because they too assume discontinuous resets of gating variables.
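
The following simulation sketch illustrates the self-exciting LIF neuron. It assumes the specific form dv/dt = -v/τ_m + I + g w, dw/dt = -w/τ_E, with v reset to 0 and w to 1 at each upward threshold crossing; this is one natural reading of equations 2.2 to 2.4, not necessarily identical to them, and all parameter values are illustrative.

```python
# A minimal simulation sketch of the self-exciting LIF neuron.  It ASSUMES the
# specific form
#     dv/dt = -v/tau_m + I + g*w,     dw/dt = -w/tau_E,
# with v -> 0 and w -> 1 whenever v crosses 1 with positive derivative, which
# is one natural reading of equations 2.2 to 2.4.  Time is in ms, so the
# frequency is 1000/period Hz.  Parameter values are illustrative.
import numpy as np

def simulate_lif(I, g, tau_E, tau_m=10.0, w0=1.0, t_max=2000.0, dt=0.01):
    """Euler simulation; returns the list of spike times in ms."""
    v, w = 0.0, w0
    spikes = []
    for k in range(int(t_max / dt)):
        dv = -v / tau_m + I + g * w
        v_new = v + dt * dv
        w *= np.exp(-dt / tau_E)          # exact decay of the synaptic variable
        if v_new >= 1.0 and dv > 0.0:     # reset only on an upward crossing
            spikes.append(k * dt)
            v_new, w = 0.0, 1.0
        v = v_new
    return spikes

if __name__ == "__main__":
    I = 0.09                    # subthreshold: I * tau_m = 0.9 < 1
    cases = [(0.03, 50.0, 1.0), # slow recurrent excitation, started just after a spike
             (0.03, 50.0, 0.0), # same parameters, started at rest
             (0.03, 3.0, 1.0)]  # fast recurrent excitation of the same strength
    for g, tau_E, w0 in cases:
        s = simulate_lif(I, g, tau_E, w0=w0)
        if len(s) >= 2:
            T = s[-1] - s[-2]
            print(f"g={g}, tau_E={tau_E}, w(0)={w0}: periodic firing, f ~ {1000.0/T:.0f} Hz")
        else:
            print(f"g={g}, tau_E={tau_E}, w(0)={w0}: quiescent")
```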

2.3  Self-Exciting, Self-Inhibiting Integrate-and-Fire Neuron

To the model of the preceding section, we add another term to model feedback inhibition:
formula
2.5
formula
2.6
formula
2.7
formula
2.8
Here represents the strength of feedback inhibition, and is its decay time constant. Note that we model the transient inhibitory current with a reversal term, . The reversal potential is zero. We might similarly model the transient excitatory current by a term of the form , not just , with . However, as varies from 0 to 1, varies by 100% when , but only by 10% when , as might be roughly appropriate for excitatory synapses. This motivates the simplification of dropping the factor for excitatory synapses, but not for inhibitory ones.
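
A sketch of the self-exciting, self-inhibiting LIF neuron, under the corresponding assumptions (excitatory term without a reversal factor, inhibitory term proportional to v with reversal potential 0), might look as follows; again, the precise form of the equations and the parameter values are assumptions made for illustration.

```python
# A minimal sketch extending the previous LIF simulation with feedback
# inhibition.  It ASSUMES the form (inhibition with reversal potential 0, as
# described above; the reversal factor on the excitatory term is dropped)
#     dv/dt  = -v/tau_m + I + gE*wE - gI*wI*v,
#     dwE/dt = -wE/tau_E,      dwI/dt = -wI/tau_I,
# with v -> 0, wE -> 1, wI -> 1 at each spike.  Values are illustrative.
import numpy as np

def simulate(I, gE, tau_E, gI, tau_I, tau_m=10.0, t_max=2000.0, dt=0.01):
    v, wE, wI = 0.0, 1.0, 1.0             # state just after a spike
    spikes = []
    for k in range(int(t_max / dt)):
        dv = -v / tau_m + I + gE * wE - gI * wI * v
        v_new = v + dt * dv
        wE *= np.exp(-dt / tau_E)
        wI *= np.exp(-dt / tau_I)
        if v_new >= 1.0 and dv > 0.0:      # upward threshold crossing
            spikes.append(k * dt)
            v_new, wE, wI = 0.0, 1.0, 1.0
        v = v_new
    return spikes

if __name__ == "__main__":
    I, tau_I = 0.09, 9.0                   # subthreshold drive; inhibitory decay time in ms
    for tau_E, gE in [(50.0, 0.03), (3.0, 0.35)]:
        for gI in (0.0, 0.05, 0.2):
            s = simulate(I, gE, tau_E, gI, tau_I)
            if len(s) >= 2:
                print(f"tau_E={tau_E}, gE={gE}, gI={gI}: firing at ~{1000.0/(s[-1]-s[-2]):.0f} Hz")
            else:
                print(f"tau_E={tau_E}, gE={gE}, gI={gI}: quiescent")
```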

2.4  Self-Exciting Theta Neuron

Our fourth model is a theta neuron (Ermentrout & Kopell, 1986) with self-excitation. For brevity, we make little use of this model in this letter, but we do use it to make one important point later on.

Following Ermentrout (2008), we view the theta neuron as a quadratic integrate-and-fire (QIF) neuron, up to a change of coordinates. We write the QIF neuron as follows:
formula
2.9
formula
2.10
As for the LIF neuron, we think of as nondimensionalized membrane potential, but of and as dimensional times, measured in ms. We say that the QIF neuron fires when reaches and is reset to . When , equation 2.9 has the two fixed points,
formula
The fixed point is stable, and is unstable. As rises above , the two fixed points collide and annihilate each other. For , there are no fixed points. We write
formula
for the QIF neuron.

At first sight, it seems rather dramatically unrealistic to let the firing threshold be ∞ and the reset voltage −∞. However, for , rises from 1 to ∞, and from −∞ to 0, in a finite (and typically short) amount of time because of the quadratic nonlinearity on the right-hand side of equation 2.9, so defining the reset and threshold voltages to be −∞ and ∞ is in some sense not very different from taking them to be 0 and 1 (see figure 1 of Börgers & Kopell, 2005, for a picture illustrating this point). We let the firing threshold be ∞ and the reset voltage −∞ because then a simple change of coordinates removes the discontinuous reset altogether (Ermentrout, 2008). We will review the change of coordinates shortly.

Figure 1:

Dependence of the fixed points of equation 2.1 with on the parameters and . Unstable fixed points are indicated in gray, stable ones below in blue, and stable ones above in red. The bold red curve indicates the onset edge of reverberating activity.

We add a simple term representing an excitatory autapse:
formula
2.11
formula
2.12
formula
2.13
As before, represents the strength of the excitatory autapse and its decay time constant. We call the model given by equations 2.11 to 2.13 the self-exciting QIF neuron.
In equation 2.11, we introduce the following change of coordinates (Ermentrout, 2008):
formula
2.14
Note that corresponds to (the value around which the right-hand side of equation 2.9, as a function of , is even), and correspond to . Inserting equation 2.14 into 2.11, we find
formula
2.15
Equation 2.15 is supplemented by equation 2.12, which remains unchanged. The reset condition, equation 2.13, becomes
formula
2.16
Up to notation, the model defined by equations 2.12, 2.15, and 2.16, which we call the self-exciting theta neuron, is the self-exciting QIF neuron defined by equations 2.11 to 2.13. Notice, however, that the reset condition for in equation 2.16 plays no role for subsequent calculations, since the right-hand side of equation 2.15 is periodic with period ; this reset can therefore be omitted without consequence if we replace equation 2.16 by
formula
2.17
Although , the transformed voltage variable, no longer resets discontinuously, the gating variable still does. This discontinuity could be removed by smoothing, but we will refrain from doing so here (see Takeuchi, 2017, where a self-inhibition term is also added).
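
The following sketch simulates a self-exciting theta neuron. It assumes one common parameterization, dθ/dt = (1 − cos θ)/τ_m + (1 + cos θ)(I + g w), which need not coincide with the scaling obtained from equations 2.12, 2.15, and 2.17; the parameter values are illustrative.

```python
# A minimal simulation sketch of a self-exciting theta neuron.  It ASSUMES one
# standard parameterization (not necessarily the scaling used in this letter):
#     dtheta/dt = (1 - cos(theta))/tau_m + (1 + cos(theta))*(I + g*w),
#     dw/dt     = -w/tau_E,
# with w -> 1 each time theta passes pi (a spike), after which theta is wrapped
# back by 2*pi.  In this parameterization the drive threshold is I = 0.
# Values are illustrative.
import numpy as np

def simulate_theta(I, g, tau_E, tau_m=10.0, t_max=2000.0, dt=0.01):
    theta, w = -np.pi, 1.0                 # state just after a spike
    spikes = []
    for k in range(int(t_max / dt)):
        dtheta = (1.0 - np.cos(theta)) / tau_m + (1.0 + np.cos(theta)) * (I + g * w)
        theta += dt * dtheta
        w *= np.exp(-dt / tau_E)
        if theta >= np.pi:                 # spike: theta passes pi
            spikes.append(k * dt)
            theta -= 2.0 * np.pi
            w = 1.0
    return spikes

if __name__ == "__main__":
    I = -0.002                             # subthreshold: no firing without the autapse
    for tau_E in (50.0, 3.0):
        s = simulate_theta(I, g=0.03, tau_E=tau_E)
        if len(s) >= 2:
            print(f"tau_E={tau_E}: periodic firing, ~{1000.0/(s[-1]-s[-2]):.0f} Hz")
        else:
            print(f"tau_E={tau_E}: quiescent")
```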

2.5  Biophysical Network Models

We will compare our results for the very simple model problems with simulations of networks of excitatory and inhibitory cells (E-cells and I-cells), modeled using the Hodgkin-Huxley formalism. For all details of modeling and notation, see the appendix.

3  How Much Recurrent Excitation Is Needed to Sustain Reverberating Activity?

3.1  Population Activity Model with Recurrent Excitation

For the model of section 2.1, we must first explain what “reverberating activity” should mean. For many values of and , equation 2.1 has a single stable fixed point. However, for some values of and , there are three fixed points—two stable ones with an unstable one in between. The area in the -plane for which there are three fixed points will be called the bistability region. The larger of the two fixed points in the bistability region is the analogue of reverberating activity in this model. The cusp catastrophe surface (Strogatz, 2015) in Figure 1 shows the dependence of the fixed points on and .

Proposition 1.
The region of bistability for equation 2.1 in the -plane is described by inequalities of the form
formula
where .

So in this simple model, the strength of recurrent excitation needed for the existence of a bistability interval has a strictly positive lower bound.

Proof.
The fixed points of equation 2.1 are the -coordinates of intersection points of the graph of with the graph of . A fixed point is stable if
formula
that is,
formula
and unstable if
formula
We note that
formula
3.1
where denotes the derivative of . (Note that is piecewise constant, defined for and . Therefore, is defined for all , and the function is infinitely differentiable.) From equation 3.1, it follows immediately that is strictly increasing for all , and its derivative is strictly increasing for and strictly decreasing for , with a maximal value at . If
formula
there is exactly one fixed point, and it is stable. If
formula
3.2
then there is an interval of bistability, a range of values of of the form
formula
for which there are two stable fixed points, one greater than and the other less than , with an unstable fixed point in between. We write equation 3.2 in the form , with
formula
3.3
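
Numerically, the bistability region can be mapped out as follows. The sketch below uses an assumed form of equation 2.1, dp/dt = −p + s_σ(I + w p) with s_σ the gaussian cumulative distribution function, as in the earlier sketch; under that assumption the critical coupling strength is σ√(2π), and the numbers produced are illustrative only.

```python
# A numerical sketch of the bistability region of proposition 1, under the
# ASSUMED form dp/dt = -p + s_sigma(I + w*p) with s_sigma(x) = Phi(x/sigma)
# (the gaussian CDF).  Under that assumption the critical coupling is
# w = sigma*sqrt(2*pi): for smaller w the map p -> s_sigma(I + w*p) has slope
# less than 1 everywhere, so there is a unique fixed point.
import numpy as np
from scipy.special import erf

def s_sigma(x, sigma):
    return 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))

def n_fixed_points(I, w, sigma, n=4001):
    """Count sign changes of s_sigma(I + w*p) - p on a fine grid of p."""
    p = np.linspace(-0.5, 1.5, n)
    F = s_sigma(I + w * p, sigma) - p
    return int(np.sum(np.sign(F[:-1]) != np.sign(F[1:])))

def bistability_interval(w, sigma):
    """Return (I_minus, I_plus), the range of I with three fixed points, or None."""
    I_grid = np.linspace(-1.0, 1.0, 2001)
    bistable = [I for I in I_grid if n_fixed_points(I, w, sigma) == 3]
    return (min(bistable), max(bistable)) if bistable else None

if __name__ == "__main__":
    sigma = 0.05
    print("critical coupling under the assumed form:", sigma * np.sqrt(2.0 * np.pi))
    for w in (0.1, 0.2, 0.4):
        print(f"w = {w}: bistability interval of I = {bistability_interval(w, sigma)}")
```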

3.2  Self-Exciting LIF Neuron

We fix positive values of and and plot the firing frequency of the self-exciting LIF neuron as a function of and . This results in surfaces of the sort shown in Figure 2, panels A and B, analogous to the cusp catastrophe surface of Figure 1. Note that in panels A and B of Figure 2, the vertical axis denotes , not . For , , but cannot exceed 300; this makes the plots easier to read.

Figure 2:

(A) as a multivalued function of and for equations 2.2 to 2.4, with , . For all and , rest is a possibility; this is indicated in blue. For some values of and (, but also with large enough), periodic firing is a possibility; this is indicated in red. As before, the onset edge of reverberating activity is drawn as a bold red curve. (Plotting , not , makes the figure easier to read.) (B) Same with . (C) The region in the -plane in which both rest and periodic firing are possible for the self-exciting LIF neuron, and . Notice that increases in the downward direction here, to make it easier to see how panel C is related to panel A. Along the curve indicated in red, in the notation of proposition 2. In section 5, we discuss lowering to a point where reverberant activity is lost, then regaining it by raising —that is, leaving the blue region via point A and reentering it via point B.

Proposition 2.

Let and be fixed.

  1. The region in the -plane in which both rest and periodic firing are possible for the self-exciting LIF neuron can be characterized in the form
    formula
    where and is a strictly decreasing continuous function of with and See Figure 2C for illustration.
  2. formula

Thus, for rapidly decaying recurrent excitation (), recurrent excitation strength must exceed the positive lower bound for there to be a range of values in which there is bistability between rest and reverberating activity. For slowly decaying recurrent excitation (), there is a range of bistability for any positive . In other words, for slowly decaying recurrent excitation, reverberating activity lost by lowering (keeping it above 0) can always be restored by raising (keeping it below ).

Proof.
The proof of this proposition is based on analyzing the initial value problem
formula
3.6
which would govern following an action potential and reset at time 0. The self-exciting LIF neuron fires periodically if and only if the solution of equation 3.6 rises above 1 at some time.

The solution of equation 3.6 depends on and continuously. Furthermore, , strictly increases with increasing or . Taken together, these facts imply that the region in the -plane for which rest is possible () and periodic firing is possible as well (the solution of equation 3.6 rises above 1 at a finite time) can be characterized in the form of 3.4, with a strictly decreasing function .

To prove that is a continuous function of , suppose it were not. Since it is strictly decreasing, any discontinuity would have to be a jump. So suppose that at some value , there were a jump
formula
By continuity, this would imply that for and , the solution of equation 3.6 would reach, but not exceed, 1 in finite time. This, however, is impossible because for any , is a strictly increasing function of . That (i.e., ) follows from a very similar argument.

If , the total drive in the differential equation in equation 3.6, , remains below for all , and therefore cannot reach 1. Thus, .

This proves part a of the proposition, and all that remains to be proved is the formula for in equation 3.5. The differential equation in equation 3.6 is scalar and linear with constant coefficients. Therefore, the initial value problem can easily be solved explicitly, using, for instance, an integrating factor. For ,
formula
3.7
Taking the limit as , using l'Hospital's rule, we get the solution for the case :
formula
3.8
These formulas imply that as .
Some information about the solution can most easily be deduced immediately from the differential equation, without using the explicit formula for the solution. For instance, at a local extremum, and therefore . Furthermore, differentiating both sides of the differential equation once, we find
formula
which implies that the second derivative is negative at any stationary point—so all stationary points are local maxima. This implies that there are two possible qualitative shapes of the solution of equation 3.6. Either converges monotonically to (see Figures 3A and 3C) or rises initially, reaches a local maximum value greater than , and then decreases to the limiting value of (see Figures 3B and 3D). The bumps in Figures 3B and 3D are crucial to our discussion. Similar bumps in voltage traces, for different models but in a similar context, play a central role in Rotstein (2013).
It is not hard to write down the exact condition under which there is a bump. When , the condition is
formula
that is,
formula
3.9
When , there is a bump if and only if approaches from above. To decide whether approaches from below or from above, we use the explicit formulas 3.7 and 3.8. If , we drop the more rapidly decaying exponential from the right-hand side of equation 3.7, finding that
formula
for large , so approaches from above and there is a bump for any . If , we drop the more rapidly decaying exponential , finding
formula
so there is a bump if
formula
3.10
and no bump if the reverse inequality holds. If the two sides of 3.10 are exactly equal to each other, then equation 3.7 becomes
formula
so approaches from below, and therefore there is no bump. The last case we must think about is . In that case, equation 3.8 is the explicit formula for the solution, and for and large , clearly , so there is a bump.
We have now clarified precisely for which parameters each of the four panels of Figure 3 is relevant; the results of our discussion are summarized in the caption of Figure 3. We note that if and only if the solution of equation 3.6 with exceeds 1 in a finite time, that is, if and only if has a bump for . This is the case (see the caption of Figure 3) if and only if
formula
This implies the formula for given in equation 3.5.
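
The threshold can also be found numerically. The sketch below uses the explicit solution of an assumed post-spike initial value problem of the form dv/dt = −v/τ_m + I + g e^(−t/τ_E), v(0) = 0, and bisects on g; formulas 3.5 and 3.7 above refer to the model as actually defined, so the numbers produced here are illustrative only.

```python
# A numerical sketch of the threshold g_*(I) analyzed above, for an ASSUMED
# post-spike initial value problem of the form
#     dv/dt = -v/tau_m + I + g*exp(-t/tau_E),   v(0) = 0,
# whose explicit solution (for tau_E != tau_m) is used below.  Reverberating
# firing requires that v exceed 1 at some finite time.  Values are illustrative.
import numpy as np

def v_of_t(t, I, g, tau_m, tau_E):
    """Explicit solution of the assumed post-spike initial value problem."""
    A = g * tau_m * tau_E / (tau_E - tau_m)
    return I * tau_m * (1.0 - np.exp(-t / tau_m)) + A * (np.exp(-t / tau_E) - np.exp(-t / tau_m))

def crosses_threshold(I, g, tau_m, tau_E, t_max=1000.0):
    t = np.linspace(0.0, t_max, 20001)
    return np.max(v_of_t(t, I, g, tau_m, tau_E)) >= 1.0

def g_star(I, tau_m, tau_E, g_hi=10.0, tol=1e-5):
    """Smallest g for which the post-spike trajectory reaches threshold (bisection)."""
    g_lo = 0.0
    while g_hi - g_lo > tol:
        g_mid = 0.5 * (g_lo + g_hi)
        if crosses_threshold(I, g_mid, tau_m, tau_E):
            g_hi = g_mid
        else:
            g_lo = g_mid
    return g_hi

if __name__ == "__main__":
    tau_m = 10.0                            # then I_c = 1/tau_m = 0.1
    for tau_E in (3.0, 50.0):
        for I in (0.08, 0.09, 0.099):       # external drive approaching I_c from below
            print(f"tau_E={tau_E}, I={I}: g_* ~ {g_star(I, tau_m, tau_E):.4f}")
```
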
Figure 3:

The possible qualitative shapes of solutions of equation 3.6. The horizontal line indicates . (A) , . (B) , . (C) , . (D) , .

3.3  Self-Exciting, Self-Inhibiting LIF Neuron

We fix positive values of , , , and and plot the firing frequency of the self-exciting, self-inhibiting LIF neuron as a function of and . This results in surfaces of the sort shown in Figure 4, analogous to the surfaces in Figures 1 and 2. There is an interesting new feature in the left panel of Figure 4: the surface has a tear, that is, there are values of for which the firing frequency is a discontinuous function of . We postpone the analysis of the tear to section 6.

Figure 4:

as a multivalued function of and for equations 2.5 to 2.8 with , (left), and (right), , and . For all and , rest is a possibility; this is indicated in blue. For some values of and ( or but large enough), periodic firing is a possibility as well; this is indicated in red. As before, the onset edge of reverberating activity is drawn as a bold red curve.

Proposition 3.

Let , , , and be fixed.

  1. The region in the -plane in which both rest and periodic firing are possible for the self-exciting, self-inhibiting LIF neuron can be characterized in the form
    formula
    for some , where is a strictly decreasing continuous function of with and (For illustration, compare Figure 2C.)
  2. formula
    formula
    formula

Proof.
The proof of this proposition is based on analyzing the initial value problem
formula
3.15
which would govern following an action potential and reset at time 0. The differential equation is almost as simple as that in equation 3.6, scalar and linear, the only complication being that now a time-dependent coefficient multiplies on the right-hand side. The self-exciting, self-inhibiting LIF neuron fires periodically if and only if the solution of equation 3.15 exceeds 1 at some time. The proof of part a is in essence identical to the proof of part a of proposition 2.
To prove the estimates on , we note that if and only if the solution of
formula
3.16
rises above 1 at a finite time. (The only difference between equations 3.15 and 3.16 is that has been set to in equation 3.16.) The inhibitory term can only reduce , and therefore the value of with inhibition must be at least as big as that without inhibition; therefore, 3.12 follows from the formula for in proposition 2. Furthermore, suppose the solution of equation 3.16 crosses with a positive slope at time . Then
formula
and if , this is impossible unless . This implies equation 3.13. Finally, note that the solution of
formula
3.17
is a lower bound on the solution of equation 3.16 as long as . It is easy to calculate explicitly, using an integrating factor:
formula
If is greater than both and , then for large ,
formula
Thus, eventually exceeds 1, and therefore so does . This proves equation 3.14.
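
A corresponding numerical sketch for the self-exciting, self-inhibiting case, again for an assumed form of the post-spike initial value problem, asks how the minimal excitation strength depends on the strength of inhibition and on whether inhibition outlasts the recurrent excitation; the form of the equations and the parameter values are assumptions made for illustration.

```python
# A numerical sketch for the self-exciting, self-inhibiting case.  The ASSUMED
# post-spike initial value problem is
#     dv/dt = -v/tau_m + I + gE*exp(-t/tau_E) - gI*exp(-t/tau_I)*v,   v(0) = 0,
# and the sketch asks how the minimal gE needed to reach threshold depends on
# gI and on whether inhibition outlasts the recurrent excitation.
import numpy as np

def reaches_threshold(I, gE, tau_E, gI, tau_I, tau_m=10.0, t_max=500.0, dt=0.05):
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v / tau_m + I + gE * np.exp(-t / tau_E) - gI * np.exp(-t / tau_I) * v)
        t += dt
        if v >= 1.0:
            return True
    return False

def gE_star(I, tau_E, gI, tau_I, g_hi=20.0, tol=1e-4):
    """Bisection for the smallest gE that drives v to threshold."""
    g_lo = 0.0
    while g_hi - g_lo > tol:
        g_mid = 0.5 * (g_lo + g_hi)
        if reaches_threshold(I, g_mid, tau_E, gI, tau_I):
            g_hi = g_mid
        else:
            g_lo = g_mid
    return g_hi

if __name__ == "__main__":
    I = 0.09                                        # subthreshold drive, I * tau_m < 1
    for tau_E, tau_I in [(3.0, 9.0), (50.0, 9.0)]:  # inhibition slower / faster than excitation
        for gI in (0.0, 0.1, 0.3):
            print(f"tau_E={tau_E}, tau_I={tau_I}, gI={gI}: "
                  f"gE_* ~ {gE_star(I, tau_E, gI, tau_I):.3f}")
```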

3.4  Self-Exciting Theta Neuron

We fix positive values of and and plot the firing frequency of the self-exciting theta neuron as a function of and . This results in surfaces of the sort shown in Figures 5A and B, analogous to the surfaces shown previously.

Figure 5:

(A) as a multivalued function of and for the self-exciting theta neuron with , . For all and , rest is a possibility; this is indicated in blue. For some values of and ( or but large enough), periodic firing is a possibility; this is indicated in red. (B) Same with . (C) Phase plane picture for equations 3.23 and 3.24. Trajectories with are indicated in red and trajectories for which converges to the fixed point in blue. The two regions are bounded by the trajectory indicated in bold black. The borderline trajectory passes through the point , , which plays a role in our reasoning.

Proposition 4.

Let and be fixed.

  1. The region in the -plane in which both rest and periodic firing are possible for the self-exciting theta neuron (see Figure 2C for illustration) can be characterized in the form
    formula
    for some , where is a strictly decreasing continuous function of with and
  2. formula
    for a constant independent of and . Rounded to three significant digits, .

Thus, in contrast with the models discussed previously, is always positive here. However, it is much smaller for large than for small .

Proof.

The proof of part a is analogous to the proofs of the same statements in the two preceding propositions, except for the claim that is strictly positive, which will follow from part b.

To prove part b, we write equations 2.11 and 2.12 in terms of nondimensional variables , , and defined by
formula
3.20
formula
3.21
formula
3.22
(One is led naturally to these coordinate changes simply by asking which linear coordinate changes result in the maximal simplification of the equations.) With these coordinate changes, equations 2.11 and 2.12 with become
formula
3.23
formula
3.24
The upper half () of the phase-plane picture for this system is shown in Figure 5C. Because of the quadratic nonlinearity in equation 3.23, trajectories move from to (red trajectories) or (blue trajectories) in finite time. We omit the elementary arguments proving that indeed the phase-plane picture looks as depicted in Figure 5.

A borderline trajectory, depicted in bold black in Figure 5C, originates at , for some finite . To see that must be finite, suppose that is the finite time it takes for the borderline trajectory to move from to , and let be the value of when the trajectory reaches , (see Figure 5C). Then .

It is easy to determine numerically. Rounding to three significant digits, . Trajectories that start at reach if and only if . Recalling equations 3.21 and 3.22, we see that this means that starting at , the solution of equations 2.11 and 2.12 with will reach if and only if
formula
This implies equation 3.19 and completes the proof.
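
The threshold for the self-exciting theta neuron can be estimated numerically in the same way, by bisection on g, using the assumed parameterization of the earlier sketch; the constant in proposition 4 refers to the model as actually defined, so the numbers produced below are illustrative only.

```python
# A numerical sketch of the threshold g_*(I) for the self-exciting theta
# neuron, under the same ASSUMED parameterization as in the earlier sketch:
#     dtheta/dt = (1 - cos(theta))/tau_m + (1 + cos(theta))*(I + g*w),
#     dw/dt = -w/tau_E,   theta(0) = -pi,   w(0) = 1.
# Bisection on g asks whether theta reaches pi again (a second spike) within
# t_max.  Values are illustrative.
import numpy as np

def second_spike(I, g, tau_E, tau_m=10.0, t_max=1000.0, dt=0.05):
    theta, w, t = -np.pi, 1.0, 0.0
    while t < t_max:
        theta += dt * ((1.0 - np.cos(theta)) / tau_m + (1.0 + np.cos(theta)) * (I + g * w))
        w *= np.exp(-dt / tau_E)
        t += dt
        if theta >= np.pi:
            return True
    return False

def g_star(I, tau_E, g_hi=50.0, tol=1e-3):
    g_lo = 0.0
    while g_hi - g_lo > tol:
        g_mid = 0.5 * (g_lo + g_hi)
        if second_spike(I, g_mid, tau_E):
            g_hi = g_mid
        else:
            g_lo = g_mid
    return g_hi

if __name__ == "__main__":
    for tau_E in (3.0, 50.0):
        for I in (-0.01, -0.001, -0.0001):    # drive approaching the firing threshold I = 0
            print(f"tau_E={tau_E}, I={I}: g_* ~ {g_star(I, tau_E):.4f}")
```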

3.5  Biophysical Networks

Do our results about the onset of reverberating activity in radically simplified models give any insight into the onset of (rhythmic) reverberating activity in biophysical networks? We refer to the appendix for the definition of our model networks. As explained there, we denote by a measure of the strength of recurrent excitation ( is the “” of the appendix) and by the decay time constant of recurrent excitation. The decay time constant of E-to-I synapses is always taken to be 3 ms in this letter.

Figures 6A and 6B show two simulations, one with a long (NMDA-like) and the other with a short (AMPA-like) . The drive to the E-cells, , is not far below its threshold value, . In both cases, is chosen so that reverberating activity results, but just barely. As formulas 3.18 and 3.19 would suggest, the threshold value of for reverberating activity is much lower for large than for small ; this of course is not at all surprising.

Figure 6:

Spike rastergrams for networks of excitatory and inhibitory Hodgkin-Huxley–like cells, modeled as described in the appendix, with parameters , , , , and , , and as given in the headings of the panels. All other parameters as defined in the appendix. The vertical axis indicates neuronal indices. Red dots indicate spike times of excitatory cells (E-cells), and blue ones indicate spike times of inhibitory cells (I-cells). The drive is just barely insufficient to sustain firing in the absence of recurrent excitation: for the E-cells. The values of in panels A and B are chosen to be just barely sufficient to sustain reverberating firing. Panels C and D differ from panels A and B only in the value of , which is slightly raised.

As decreases, in equation 3.5 is proportional to , while in equation 3.19, it is proportional to . In Figures 6A and 6B, the truth seems to be somewhere in between: when is reduced by about a factor of 33, rises by about a factor of 147.

The most striking difference between Figures 6A and 6B is of course the difference in the frequency of reverberating activity. This will be discussed in section 4.

Both formulas 3.18 and 3.19 show that the threshold decreases as the membrane time constant decreases. Note that for both the LIF and the theta neurons, a reduction in means an overall acceleration in intrinsic dynamics. When is near , the intrinsic dynamics bring to the vicinity of the threshold value 1. This takes less time when is smaller, and therefore more recurrent excitation will be left at the time arrives in the vicinity of , making it easier for the recurrent excitation to generate another spike. This reasoning makes the -dependence in equations 3.18 and 3.19 plausible. However, we have not found any analogue of it in our biophysical networks: greater leakiness of the E-cells does not lead to onset of reverberating activity for lower values of . In fact, a reduction in in the reduced models is of course not the same as an increase in the leak conductance in the biophysically modeled E-cells; increasing the leak conductance does not amount simply to an overall acceleration of intrinsic dynamics. Further, equations 3.18 and 3.19 are formulas for reverberating activity without feedback inhibition, and in our biophysical networks, feedback inhibition is present and important.

Formula 3.13 suggests that feedback inhibition is crucial in determining whether reverberating activity is possible if the inhibition is slow in comparison with the recurrent excitation. (If it is not, our results in section 3.4 do not say anything about the dependence of on the strength of feedback inhibition.) Intuitively, this is not hard to understand: Rapidly decaying inhibition will only briefly prevent the recurrent excitation from doing its work, while long-lasting inhibition will block the effect of recurrent excitation long enough for the recurrent excitation to decay substantially and therefore become ineffective. This is confirmed in Figures 6C and D, where the reverberating activity is seen to survive a slight increase in the I-to-E synaptic strength when recurrent excitation decays much more slowly than inhibition, but not when it decays faster than inhibition.

4  Onset Frequency of Reverberating Activity

As noted in section 3, for a fixed , there is a threshold value so that reverberating firing is possible for but not for (compare, for instance, Figure 4). We let be the frequency of reverberating firing for and call
formula
the “onset frequency of reverberating firing.” Since the frequency of reverberating firing is an increasing function of , is the infimum of the possible frequencies of reverberating firing for the given .
Alternatively, for a given , there is a threshold value so that reverberating firing is possible for but not for . (The function is the inverse of the function .) We can think of as a function of instead of :
formula

4.1  Self-Exciting, Self-Inhibiting LIF Neuron

 

Proposition 5.
For the self-exciting, self-inhibiting LIF neuron, if ,
formula
4.1
Thus, onset of reverberating activity occurs at a positive frequency, and at a high frequency when is small.
Proof.
We must give an upper bound on the time at which the solution of the initial value problem 3.15 crosses the threshold value 1. At the time when crosses 1,
formula
This implies
formula
and, thus (using ),
formula
4.2
This shows that is small if is short unless is very large.
However, if is very large, then is small for an even simpler reason: there is, for a brief amount of time, a large amount of recurrent excitation. For instance, for , the total (external and synaptic) drive satisfies the inequality
formula
Thus, if the solution of
formula
4.3
reaches 1 in the time interval , then exceeds 1 in the same interval. The solution of 4.3 can easily be written down explicitly:
formula
We conclude
formula
4.4
formula
4.5
formula
4.6
Inequality 4.5 implies 4.4 because for all real ,
formula
If 4.6 does not hold, that is, if
formula
then by 4.2,
formula
4.7
But if 4.6 does hold, then , which is smaller than the right-hand side of 4.7. Therefore, in any case, 4.7 holds, and this implies 4.1.
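
The onset frequency itself can be estimated numerically, again for an assumed form of the post-spike initial value problem: find the smallest excitation strength for which the membrane potential reaches threshold, and read off the crossing time at a slightly larger strength. The equations and parameter values below are assumptions made for illustration.

```python
# A numerical sketch of the onset frequency.  For the ASSUMED post-spike
# initial value problem
#     dv/dt = -v/tau_m + I + gE*exp(-t/tau_E) - gI*exp(-t/tau_I)*v,   v(0) = 0,
# we find the smallest gE for which v reaches 1, then read off the crossing
# time T at a slightly larger gE; the onset frequency is roughly 1000/T Hz.
import numpy as np

def crossing_time(I, gE, tau_E, gI, tau_I, tau_m=10.0, t_max=500.0, dt=0.02):
    """Time at which v first reaches 1, or None if it never does."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v / tau_m + I + gE * np.exp(-t / tau_E) - gI * np.exp(-t / tau_I) * v)
        t += dt
        if v >= 1.0:
            return t
    return None

def gE_threshold(I, tau_E, gI, tau_I, g_hi=20.0, tol=1e-4):
    g_lo = 0.0
    while g_hi - g_lo > tol:
        g_mid = 0.5 * (g_lo + g_hi)
        if crossing_time(I, g_mid, tau_E, gI, tau_I) is not None:
            g_hi = g_mid
        else:
            g_lo = g_mid
    return g_hi

if __name__ == "__main__":
    I, gI, tau_I = 0.09, 0.1, 9.0
    for tau_E in (3.0, 50.0):
        gE = 1.001 * gE_threshold(I, tau_E, gI, tau_I)      # just above the threshold
        T = crossing_time(I, gE, tau_E, gI, tau_I)
        print(f"tau_E={tau_E}: onset period ~ {T:.1f} ms, onset frequency ~ {1000.0/T:.0f} Hz")
```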

4.2  Self-Exciting Theta Neuron

 

Proposition 6.

For the self-exciting theta neuron, the onset frequency of reverberating activity is zero for all parameter values.

Our reasoning will not be entirely rigorous here. The onset frequency is zero because in the limit as , reverberating activity is lost as a result of a saddle-cycle collision. To see this, we rewrite equations 2.11 and 2.12 using the coordinate changes of equations 3.20 to 3.22, but this time assuming . (We had previously done this for , and obtained equations 3.23 and 3.24.) We obtain
formula
4.8
formula
4.9
where
formula
Figure 7B shows the phase-plane picture for . Different values correspond to different red trajectories in Figure 7B. As , the trajectory merges with the bold black trajectory in Figure 7B, which follows the stable manifold of the saddle point. As the distance between saddle and cycle becomes infinitesimally small, so does the speed of motion in the vicinity of the saddle. As a result, the frequency becomes infinitesimally small.
Figure 7:

(A) as a function of for the self-exciting theta neuron with , , and . The --curve is continuous, but steep to infinite order at onset. (B) Phase-plane pictures for equations 4.8 and 4.9 with . Trajectories with are indicated in red and trajectories for which converges to the stable node in blue. (C) Trajectory passing near a saddle point. (D) Same as panel (B) but indicating and (see text).

At first sight, proposition 6 seems to be bad news for us: the inequality proved in section 4.1 seems to be irrelevant for the theta neuron, which is arguably a more realistic neuronal model than the LIF neuron. However, although the frequency, , of reverberating activity starts out at zero as rises above , the rise in is extremely rapid (see Figure 7A). We will argue heuristically that the frequency , as a function of , is not just infinitely steep, but steep to infinite order at . By this statement, we mean that , as a function of , is flat to infinite order, that is, all derivatives of with respect to are zero at .

This follows from a general observation about flow in the vicinity of a saddle. Suppose that a trajectory passes a saddle point at a small distance . Near the saddle point, the motion must of course be slow, since the saddle is a fixed point, albeit an unstable one. The time that the passage takes (a notion made completely precise in a simple example below) diverges to as . However, this divergence is only logarithmic. Thus, for the passage to take a long time, must be very close to 0. Consequently, when a limit cycle is broken in a saddle-cycle collision, the oscillation gets slow only when the saddle comes very close to the cycle.

For illustration, consider the model system
formula
4.10
which has a saddle point at the origin. Let , and suppose at time , the trajectory starts at . The time at which the trajectory arrives at (see Figure 7C) equals . Solving for and setting , we find
formula
Thus, the function is flat to infinite order at , or as a function of is steep to infinite order at . (Since the distance at which the trajectory passes the saddle at the origin is , the same statement holds for as a function of .)
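
The logarithmic divergence of the passage time can be checked numerically; the sketch below assumes the model saddle system is x' = x, y' = -y, with the trajectory started at (δ, 1), which may differ in detail from equation 4.10 as actually stated.

```python
# A small numerical check of the passage-time argument.  It ASSUMES the model
# saddle system is x' = x, y' = -y (saddle at the origin); a trajectory started
# at (delta, 1) reaches x = 1 at time T = ln(1/delta), so the passage time
# diverges only logarithmically as delta -> 0, and delta = exp(-T) is flat to
# infinite order as a function of 1/T.
import numpy as np

def passage_time(delta, dt=1e-4):
    """Integrate x' = x from x = delta until x reaches 1 (Euler)."""
    x, t = delta, 0.0
    while x < 1.0:
        x += dt * x
        t += dt
    return t

if __name__ == "__main__":
    for delta in (1e-1, 1e-3, 1e-6, 1e-12):
        print(f"delta = {delta:.0e}: numerical T = {passage_time(delta):7.3f}, "
              f"ln(1/delta) = {np.log(1.0/delta):7.3f}")
```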

To clarify the connection between the model system, equation 4.10, and equations 4.8 and 4.9, note that the bold curve in Figure 7B is the stable manifold of the saddle point, the analogue of the -axis in Figure 7C. Denote this curve by , with and . Let be one of the red curves in Figure 7B and assume and . The trajectory enters the strip at a point , and the trajectory enters it at a point with (see Figure 7D). The difference is the analogue of the