## Abstract

Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate. Formally identifying these polarities as on-off neuronal polarity gates, we analytically show that cortical circuit structure, behavior, and memory are all governed by the combined potent effect of these gates, which we collectively term circuit polarity. Employing widely accepted and biologically validated firing rate and plasticity paradigms, we show that circuit polarity is mathematically embedded in the corresponding models. Moreover, we show that the firing rate dynamics implied by these models are driven by ongoing circuit polarity gating dynamics. Furthermore, circuit polarity is shown to segregate cortical circuits into internally synchronous, externally asynchronous subcircuits, defining their firing rate modes in accordance with different cortical tasks. In contrast to the Hebbian paradigm, which is shown to be susceptible to mutual neuronal interference in the face of asynchrony, circuit polarity is shown to block such interference. Noting convergence of synaptic weights, we show that circuit polarity holds the key to cortical memory, having a segregated capacity linear in the number of neurons. While memory concealment is implied by complete neuronal silencing, memory is restored by reactivating the original circuit polarity. Finally, we show that incomplete deterioration or restoration of circuit polarity results in memory modification, which may be associated with partial or false recall, or novel innovation.

## 1  Introduction

The nature of cortically embedded information has been one of the most intensely studied yet highly evasive mysteries of neuroscience for over a century. Traditionally, the experimental nature of biological research has required a high degree of phenomenological specificity. Consequently, the seemingly separate issues of cortical development, interneuron connectivity, neuronal firing dynamics, learning, and memory have been studied in essentially complete mutual isolation. Yet recent years have seen a trend toward integrative neuroscience, driven by a realization that, on the one hand, practically every cortical function combines a variety of molecular, biological, and physical processes, and, on the other, different cortical functions are often contrived by similar mechanisms. It is also becoming increasingly clear that a complete understanding of such integration would involve a new look at the cortical arena, incorporating advanced theoretical reasoning and new methods of analysis. The relatively recent emergence of numerous publications presenting advanced mathematical analysis of neurobiological processes is seemingly strong evidence to this effect. Recent interdisciplinary interest in general common notions, such as information, learning, memory, computation, cognition, and control, is further evidence of the same effect.

In this study, we show that cortical connectivity, behavior, and memory are all controlled by on-off gates, corresponding to neuronal membrane and synapse polarities with respect to a certain potential value. Extending the local nature of such polarities to multineuron circuits, we find that such circuits are segregated by circuit polarity into smaller subcircuits. Employing widely accepted paradigms of firing rate and plasticity, we first put experimental neurobiological findings of membrane and synapse polarities on mathematical grounds. Different firing rate modes, specifically, silent, fixed-point, oscillatory, and chaotic, are shown to be governed by negative, positive, and mixed polarity gating. Neural circuits are shown to be segregated by polarization into synchronous subcircuits. Changes in synchronous circuit size, caused by changes in neuronal membrane polarities, are shown to change circuit firing modes. Circuit segregation by synapse silencing is shown to yield interference-free asynchrony between synchronous subcircuits, facilitating, by different firing-rate dynamics, simultaneous execution of different cortical functions. While neural circuits often involve thousands of neurons, we limit our illustration of the underlying concepts to circuits of a few neurons for graphical convenience, without compromising coverage of the entire range of dynamic modes. Convergence of synaptic weights holds the key to circuit memory, shown to have a capacity linear in the number of circuit neurons. Following memory deterioration, caused by membrane silencing due to external inputs or electric charge dissipation, concealed memory, maintained by constant synaptic weights, is restored by presentation of the original circuit activation. 
Partial and false memory, as well as novelty and innovation effects, in the form of new circuit structures and firing dynamics, are created by incomplete deterioration or restoration of the circuit polarity that has instigated the original memory.

## 2  Historical Account

Since most neuroscience research has evolved along separate topical paths, any historical account must make a similar distinction among these topics. Following the discovery of the neuron in the late nineteenth century (Cajal, 1890), early investigations of cortically embedded information concerned the nature of neuronal firing (Lapicque, 1907). Elaborate chemical, physical, and mathematical analysis led to the celebrated conductance-based model of action potential (Hodgkin & Huxley, 1952), inspiring the spiking neurons paradigm (Gerstner & Kistler, 2002). Yet firing-rate dynamics, while lacking in detail with respect to the spiking neuron paradigm, have been found to offer not only plausible means for cortical information representation but also a certain mathematical convenience and a reliable representation of empirically observed firing sequences (Wilson & Cowan, 1972; Gerstner, 1995; Dayan & Abbott, 2001; Jolivet, Lewis, & Gerstner, 2004). Following a mathematical analysis of discrete iteration maps for neural network firing rate, a global attractor code, relating the modes of firing rate dynamics to internal neuron properties, has been derived (Baram, 2012, 2013).

Synaptic plasticity has been believed to affect and to be affected by neuronal firing underlying learning and memory in the nervous system (Hebb, 1949; Bienenstock, Cooper, & Munro, 1982; Dudai, 1989; Cooper, Intrator, Blais, & Shouval, 2004). A detailed biophysical model of long-term synaptic potentiation and long-term synaptic depression has been presented (Castellani, Quinlan, Cooper, & Shouval, 2001), supporting the BCM plasticity theory (Bienenstock et al., 1982). Assumed to evolve on a slower timescale, synaptic plasticity dynamics has been often separated from firing dynamics for analytic convenience. While almost all theoretical and experimental studies make the implicit assumption that synaptic efficacy is both necessary and sufficient to account for learning and memory, it has been suggested that learning and memory in neural networks result from an ongoing interplay between changes in synaptic efficacy and intrinsic membrane properties (Marder, Abbott, Turrigiano, Liu, & Golowasch, 1996). The combined effects of firing rate and plasticity time constants on firing rate dynamics corresponding to different developmental stages have been analyzed, laying the ground for analytic unification with respect to neuronal properties, on the one hand, and cortical development, on the other (Baram, 2017a).

Cortical circuit connectivity has been attracting increasing interest for the past three decades. Sensory inputs have been shown to evoke ongoing shunting in visual cortex circuits (Borg-Graham, Monier, & Frégnac, 1998). Activity-independent neural circuit formation in early development (Hsia, Malenka, & Nicoll, 1998; Gibson & Ma, 2011; Weiner, Burgess, & Jontes, 2013) has been found to be followed by activity-dependent circuit formation (Katz & Shatz, 1996; Gage, 2002) and, later, modulation (Tessier & Broadie, 2009). Neural circuit modification by activation and silencing of neuronal membrane and individual synapses has been observed in early development (Melnick, 1994; Atwood & Wojtowicz, 1999; Liao, Zhang, O'Brien, Ehlers, & Huganir, 1999; Losi, Prybylowski, Fu, Luo, & Vicini, 2002; Kerchner & Nicoll, 2008), maturation (Ashby & Isaac, 2011), and later life (McGahon, Martin, Horrobin, & Lynch, 1999). Localized polarity of membrane and synapse potential with respect to a certain activation threshold, supported by extensive molecular studies, has been found to control the directional travel of action potentials between neurons (Arimura & Kaibuchi, 2005; Kimata et al., 2012; Tanizawa et al., 2006). Evidence of circuit-encoded memory has been found in spatial neural maps of place, identified by firing activity (O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Calvin, 1996; Hafting, Fyhn, Molden, Moser, & Moser, 2005; Epsztein, Brecht, & Lee, 2011). Cortical segregation into small groups of neurons has been related by simulation to radius of inhibition and found to have an effect on spiking dynamics, leaving a deeper understanding of the observed activity for future research (Stratton & Wiles, 2015). Synchronous and asynchronous reverberation have been synthetically embedded in local cortical circuits and individual neurons (Vardi et al., 2012).
Connectivity changes due to synapse elimination (Dennis & Yip, 1978; Huttenlocher, 1979) have been suggested as a means for long-term memory (Balice-Gordon, Chua, Nelson, & Lichtman, 1993) and followed by studies of structure (Balice-Gordon & Lichtman, 1994; Chklovskii, Mel, & Svoboda, 2004; Knoblauch & Sommer, 2016), information capacity (Knoblauch & Sommer, 2016), and cortical segregation (Baram, 2017b).

The notion of cortical memory, attracting considerable interest in mathematical dynamics and information-theoretic circles, has been linked to the convergence and stability of neural network activity models with respect to parametrically stored states. However, the analysis of such models in discrete (McCulloch & Pitts, 1943; Amari, 1972; Hopfield, 1982) and continuous (Cohen & Grossberg, 1983; Peterfreund & Baram, 1994) space and time has presented a sharp trade-off between low storage capacity, sublinear in the number of neurons (McEliece, Posner, Rodemich, & Venkatesh, 1987; Kuh & Dickinson, 1989; Dembo, 1989), on the one hand, and a multitude of spurious outcomes (Bruck & Roychowdhury, 1990), on the other. Sparse distribution of active neurons has been shown to raise the sublinear capacity bound (Baram & Sal'ee, 1992), while an exponential increase in the number of neurons (Kanerva, 1988) has been shown to result in exponential growth in the storage capacity with respect to the dimension of the data stored (Chou, 1989). However, the mechanization of the associative memory concept presented in these works seems to have fallen short of a widely acceptable biological support.

## 3  Neuronal Polarity Gates and Cortical Circuit Segregation

The state of neuronal activity, contrasted by neuronal silence, has been found to depend on the somatic membrane potential being above or below a certain threshold value (about $-$60 mV; Melnick, 1994). The state of synaptic transmissivity, contrasted by synaptic silence, has been found to depend on the value of presynaptic membrane potential, controlled by external stimulation and molecular properties, with respect to a certain threshold value (also about $-$60 mV; Atwood & Wojtowicz, 1999). A detailed biophysical model relates long-term synaptic potentiation and long-term synaptic depression, which are also viewed in a binary (bidirectional) context, to the variable properties and relative numbers of AMPA and NMDA receptors and their external stimulation (Castellani et al., 2001). Following a terminology used for synaptic thresholding (Arimura & Kaibuchi, 2005; Kimata et al., 2012; Tanizawa et al., 2006), we jointly call the binary polarities of neuronal membranes and synapses local polarity.

As the definitions of membrane and synapse polarities have been derived directly from experimental neurobiological findings, independent of any particular firing or plasticity models, we are able to address neuronal polarity gates, neural circuit polarity codes, and the corresponding issue of circuit segregation in a discrete-mathematical framework independent of such models. The implications of circuit segregation on circuit firing structure and dynamics and on memory storage and retrieval capacity will be considered in more specific detail in the following sections, where we bring the relevant dynamical models of firing and plasticity into play.

### 3.1  Neuronal Polarity Gates and Neural Circuit Polarity

Neuronal membrane and synapse polarities can be described as on-off gates that in the off (disconnect) position represent negative polarity, and in the on (connect) position represent positive polarity. The polarity gates of a single neuron are represented in Figure 1a by angular discontinuities in line segments representing membranes and synapses as specified. As the neuron-external inputs directly affect the membrane polarity, they are accounted for in the three-word polarity code: (1) positive membrane and self-synapse polarities, (2) positive membrane polarity and negative self-synapse polarity, and (3) negative membrane and self-synapse polarities, as depicted by Figures 1b to 1d, respectively.

Figure 1:

(a) Neuronal polarity gates and their (b–d) code. As the neuron-external inputs directly affect the membrane polarity, they are accounted for in the neuronal three-word polarity code, which specifies the neuron-internal elements only.


The circuit polarity gates of a two-neuron circuit are depicted in Figure 2a. The total number of circuit polarity states in this case is somewhat more difficult to figure out by a quick glance than it is in the case of a single neuron. We therefore turn to the general case of a circuit of $n$ neurons whose connectivity pattern is determined by the states of polarity of the membranes and the synapses involved.

Figure 2:

(a) Two-neuron polarity-gated circuit (omitting circuit-external components). Each of the neurons has its three-word polarity code, as illustrated in Figures 1b to 1d, but the interneuron gating further extends the circuit polarity code to 25 words, as represented by equation 3.2. (b) Two-neuron circuit segregation by interneuron synapse silencing.


For a fully connected circuit of $n$ neurons with $k$ silent membranes, there are $n-k$ positive polarity (active) membranes, each connected to each of the $n$ neurons, with the corresponding synapse having positive or negative polarity. Hence, for $k$ silent membranes, there are $2^{n(n-k)}$ circuit polarity states. As there are
$\binom{n}{k}=\frac{n!}{(n-k)!\,k!}$
(3.1)
possibilities of $k$ silent membranes, the total number of circuit polarity patterns is
$P(n)=\sum_{k=0}^{n}\binom{n}{k}\,2^{n(n-k)},$
(3.2)
constituting the circuit polarity code size for circuits of $n$ neurons. As $P(n)$ is dominated by $2^{n^2}$, it is said to have an exponential-of-polynomial growth.

As predicted by equation 3.2, the circuit polarity code size for a single neuron, depicted by Figures 1b to 1d, is 3. For a two-neuron circuit, the code size is 25, and for a three-neuron circuit, 729.
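As a quick numerical check, equation 3.2 can be evaluated directly. The following sketch (the function name is ours) reproduces the code sizes quoted above:

```python
from math import comb

def polarity_code_size(n: int) -> int:
    """Circuit polarity code size P(n), equation 3.2: sum over k silent
    membranes of binom(n, k) * 2^(n(n-k)) polarity states."""
    return sum(comb(n, k) * 2 ** (n * (n - k)) for k in range(n + 1))

print([polarity_code_size(n) for n in (1, 2, 3)])  # → [3, 25, 729]
```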

### 3.2  Cortical Segregation Capacity

Equation 3.2 specifies the size of the circuit polarity code (or vocabulary of words) for a circuit of size $n$. However, only one of these words can be expressed by the circuit throughout the duration of a given circuit polarity pattern. As we show in the following section, for given internal properties of the neurons and circuit-external activation, the state of circuit polarity uniquely determines the firing modes of the circuit neurons. Viewing, then, the circuit polarity state at a given time as the information expressed by the circuit at that time, the instantaneous expression capacity of a circuit is one, regardless of the circuit size, $n$. For the single neuron illustrated in Figure 1, only one of the three polarity patterns (b–d) can be expressed by the neuron at a time. For a circuit of many neurons, the representation of one word at a time seems highly wasteful. How can more than one word be expressed at a time by one circuit? The answer comes in the form of neural circuit segregation into smaller subcircuits. A subcircuit is said to be segregated from the rest of the circuit when it does not affect and is not affected by the rest of the circuit. Such segregation can be done by means of circuit polarization. Specifically, membrane silencing will segregate a silent neuron from the rest of the circuit, while synapse silencing can segregate subcircuits of active neurons with respect to each other. Figure 2b illustrates the mutual segregation of the neurons of a two-neuron circuit by silencing interneuron synapses.

The total number of subcircuits that can be segregated with respect to each other by polarization of a fully connected circuit of $n$ neurons is given by
$S(n)=\sum_{k=0}^{n}\binom{n}{k}=2^{n},$
(3.3)
which constitutes the segregation code size. The polarity and segregation code sizes for a circuit of $n$ neurons are depicted in Figures 3a and 3b, respectively. As the circuit polarity code size explodes exponentially, reaching a practical computational limit at $n=32$, we have stopped the graphs at $n=30$. It can be seen that the circuit polarity code size is considerably greater than the circuit segregation code size. Yet as we show in a subsequent section, a circuit's memory capacity can benefit from the circuit's gated polarization only if it results in circuit segregation. The information capacity of a circuit is the number of segregated subcircuits, each representing one word (polarity pattern). The maximal information capacity of an $n$-neuron circuit that can be achieved by segregation is, then,
$C(n)=n,$
(3.4)
which is depicted in Figure 3c so as to contrast it with the considerably greater circuit polarity and segregation code sizes. Yet it might be noted that in comparison to this linear capacity, a sublinear capacity has been previously found for associative memory of $n$-dimensional binary activity vectors (McEliece et al., 1987; Kuh & Dickinson, 1989; Dembo, 1989). Sparse distribution of active neurons with probability $p$ has been shown to raise the sublinear capacity bound without increasing the cortical hardware (Baram & Sal'ee, 1992). It might also be noted, however, that the notion of associative memory, as perceived in these earlier works, does not appear to have the strong biological support associated with circuit polarity.
Figure 3:

Neural circuit polarity (a) and segregation (b) code sizes, and maximal information capacity by circuit segregation (c) versus circuit size $n$.


So far, we have considered only the binary components of a neural circuit, namely, the neuronal membrane and synapse polarity gates and their information representation capacity. In the following sections, we address the dynamics of neuronal and neural circuit firing activity, employing widely accepted firing rate and plasticity models.

## 4  Dynamics of Local Polarity and Firing Rate in the Individual Neuron

### 4.1  Underlying Firing Rate and Plasticity Model

While the main issue considered in this letter, the dominant role of neuronal polarity gates in cortical connectivity, activity, and memory, constitutes a general concept that is independent of firing and plasticity dynamics, its effects can be best studied and demonstrated when applied to a concrete neural firing and plasticity model. The instantaneous single-neuron version of the spiking rate model (Gerstner, 1995) is (Carandini & Ferster, 2000; Dayan & Abbott, 2001; Jolivet et al., 2004; Miller & Fumarola, 2012)
$\tau_m\frac{d}{dt}\upsilon(t)=-\upsilon(t)+f\left[\omega(t)\upsilon(t)+u(t)\right],$
(4.1)
where $\upsilon(t)$ is the firing rate, $\tau_m$ is the membrane time constant, $\omega(t)$ is the self-feedback synaptic weight, $u(t)$ is the membrane-centered activation potential, generally composed as $u(t)=I(t)-r(t)$, with $I(t)$ an external input potential and $r(t)$ the membrane activation threshold (approximately $-$60 mV), and where
$f(x)=\begin{cases}x&\text{if }x\ge 0\\0&\text{if }x<0\end{cases}$
(4.2)
is the conductance-based rectification kernel (Carandini & Ferster, 2000), first observed in empirical data (Granit, Kernell, & Shortess, 1963; Connor & Stevens, 1971). The firing rate to membrane potential ratio has been coarsely determined by experiment as $7.2\pm 0.6~\text{spikes}\cdot\text{s}^{-1}\cdot\text{mV}^{-1}$ (Carandini & Ferster, 2000). It might be noted that the firing-rate model, equation 4.1, corresponds directly with a membrane potential model (Miller & Fumarola, 2012).
The BCM plasticity rule (Bienenstock et al., 1982), enhanced by stabilizing modifications (Intrator & Cooper, 1992; Cooper et al., 2004) and supported by an elaborate biophysical model of bidirectional synaptic plasticity (which, in turn, corresponds to our polarity paradigm), is a widely recognized, biologically plausible, mathematical representation of the Hebbian learning paradigm (Hebb, 1949). It suggests the computation of $ω(t)$ by the model
$\tau_\omega\frac{d}{dt}\omega(t)=-\omega(t)+\left[\upsilon(t)-\theta(t)\right]\upsilon^2(t),$
(4.3)
where $\tau_\omega$ is the learning time constant and $\theta(t)$ is a variable threshold satisfying (Intrator & Cooper, 1992; Cooper et al., 2004)
$\theta(t)=\frac{1}{\tau_\theta}\int_{-\infty}^{t}\upsilon^2(\tau)\exp\left(-(t-\tau)/\tau_\theta\right)d\tau,$
(4.4)
where $\tau_\theta$ is the thresholding time constant. The synaptic weight $\omega(t)$ has been shown to converge to a finite value $\omega$ as $t\to\infty$ (Cooper et al., 2004).
The discretization of a continuous-time stable dynamical system (Kwakernaak & Sivan, 1972) is done under the assumption that the time steps are sufficiently small so as to allow for the approximation of the input by a constant value throughout the time step, scaled to unity, without introducing instability. The discrete-time versions of the system, equations 4.1 to 4.4, are
$\upsilon(k)=\alpha\upsilon(k-1)+\beta f\left[\omega(k)\upsilon(k-1)+u(k-1)\right],$
(4.5)
with
$\alpha=\exp(-1/\tau_m),\qquad \beta=1-\alpha,$
(4.6)
$\omega(k)=\varepsilon\omega(k-1)+\gamma\left[\upsilon(k-1)-\theta(k-1)\right]\upsilon^2(k-1),$
(4.7)
and
$\theta(k)=\delta\sum_{i=0}^{N}\exp(-i/\tau_\theta)\,\upsilon^2(k-i),$
(4.8)
with
$\varepsilon=\exp(-1/\tau_\omega),\qquad \gamma=1-\varepsilon,\qquad \delta=1/\tau_\theta.$
(4.9)
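The discrete model, equations 4.5 to 4.9, can be iterated directly. The following is an illustrative sketch, not the authors' code: the function and variable names are ours, the initial conditions $υ(0)=1$, $ω(0)=0$ match those used in section 4.3, and the threshold sum is truncated at $N=100$ terms, as in the convergence computations reported there.

```python
import math

def simulate(u, tau_m, tau_w, tau_th, steps=2000, N=100):
    """Iterate the discrete firing-rate/plasticity model, equations 4.5-4.9."""
    alpha = math.exp(-1.0 / tau_m)        # equation 4.6
    beta = 1.0 - alpha
    eps = math.exp(-1.0 / tau_w)          # equation 4.9
    gamma = 1.0 - eps
    delta = 1.0 / tau_th
    v, w = [1.0], 0.0                     # v(0) = 1, w(0) = 0
    for _ in range(steps):
        # equation 4.8: exponentially weighted average of the squared rate,
        # truncated to the N + 1 most recent samples
        theta = delta * sum(math.exp(-i / tau_th) * v[-1 - i] ** 2
                            for i in range(min(N + 1, len(v))))
        # equation 4.7: BCM-type update of the feedback synaptic weight
        w = eps * w + gamma * (v[-1] - theta) * v[-1] ** 2
        # equation 4.5 with the rectification kernel f of equation 4.2
        v.append(alpha * v[-1] + beta * max(w * v[-1] + u, 0.0))
    return v, w

# Negative activation (u <= 0) silences the neuron: v -> 0 and, by
# equation 4.7, w -> 0 as well (compare section 4.2).
v, w = simulate(u=-1.0, tau_m=2.0, tau_w=300.0, tau_th=0.1)
```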

### 4.2  Dynamics of Firing Rate under Negative Polarity

The polarity of the neuron's membrane potential with respect to the membrane activation threshold is identical to the sign of the operand of the function $f$ in equation 4.5. A negative operand will imply that the map, equation 4.5, takes the form
$υ(k)=αυ(k-1),$
(4.10)
which, as $0<α<1$, converges to 0 as $k→∞$. It further follows from equation 4.7 that as $υ(k)$ converges to 0, so does $ω(k)$. Membrane silencing in the individual neuron implies, then, nullification of the feedback synaptic weight.
Synaptic polarity is controlled by the value of the corresponding synaptic weight. Synapse silencing, represented by a zero value of the feedback synaptic weight $(ω=0)$, accommodates a positive constant value of $u$, which, by equation 4.5, yields the map
$υ(k)=αυ(k-1)+βu.$
(4.11)
Note that since $α<1$, the line represented by equation 4.11 can only intersect the diagonal $υ(k)=υ(k-1)$ if, indeed, $u>0$.
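Since $β=1-α$ (equation 4.6), the fixed point of equation 4.11 is $βu/(1-α)=u$, so under feedback-synapse silencing the firing rate settles at the external activation level. A minimal sketch (parameter values chosen for illustration only):

```python
import math

tau_m, u = 2.0, 1.0
alpha = math.exp(-1.0 / tau_m)   # equation 4.6
beta = 1.0 - alpha

# iterate equation 4.11: a linear contraction, since 0 < alpha < 1
v = 0.0
for _ in range(200):
    v = alpha * v + beta * u

print(v)  # → 1.0 (the fixed point beta*u/(1-alpha) = u)
```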

Scalar global attractors are graphically described by cobweb diagrams (Koenigs, 1884; Lemeray, 1885; Knoebel, 1981; Abraham, Gardini, & Mira, 1997), which are initiated at some value, $\upsilon(0)$, on the map, then connected horizontally to the diagonal $\upsilon(k)=\upsilon(k-1)$, then connected vertically to the map, and so on. The cobweb diagram depicted in Figure 4a, corresponding to the parameter values $\tau_m=2$, $\tau_\omega=300$, $\tau_\theta=0.1$, and $u=-1$, illustrates the membrane silencing process, converging to the point $p$ placed at the origin, and represented by a red $X$ (note that by equations 4.7 and 4.10, $\omega$ becomes 0, regardless of the values of $\tau_\omega$ and $\tau_\theta$). The cobweb diagram depicted in Figure 4b, corresponding to the parameters $\tau_m=2$ and $u=1$, illustrates the synapse silencing process, converging to the global attractor at point $p$. It follows, then, that in contrast to membrane silencing, synapse silencing, even in the case of the single feedback synapse of the individual neuron, allows for continued neuronal firing due to external activation and feedback from other circuit neurons.

Figure 4:

Membrane and synapse silencing in the individual neuron. While membrane silencing is represented by a global attractor at the origin (a), feedback synapse silencing is represented by a nonzero fixed-point attractor (b). While membrane silencing stops neuron firing altogether, synapse silencing allows for continued firing due to external activation.


Membrane and synapse silencing is one side of local polarization. The other side is membrane and synapse activation. These states, governed by equations 4.5 and 4.7, respectively, will affect the modes of firing-rate dynamics, as discussed next.

### 4.3  Dynamics of Firing Rate under Positive and Mixed Polarities

Equation 4.5 may be written, for a constant value of $u$, as
$\upsilon(k)=\begin{cases}f_1\left[\upsilon(k-1)\right]=\lambda_1\upsilon(k-1)&\text{for }\beta\omega(k-1)\upsilon(k-1)+\beta u\le 0,\\f_2\left[\upsilon(k-1)\right]=\lambda_2(k-1)\upsilon(k-1)+\beta u&\text{for }\beta\omega(k-1)\upsilon(k-1)+\beta u>0,\end{cases}$
(4.12)
where
$\lambda_1=\alpha$
(4.13)
and
$\lambda_2(k-1)=\alpha+\beta\omega(k-1).$
(4.14)
Denoting by $\omega$ the limit value of equation 4.7, the map attains the corresponding form of equation 4.12. The fixed (limit) point of $f_1$ is the origin, while the fixed point of $f_2$ is
$p=\frac{\beta u}{1-\alpha-\beta\omega}.$
(4.15)
The parameters
$c_1=2\lambda_1\lambda_2+1+\sqrt{1+4\lambda_1^2}$
(4.16)
and
$c_2=\lambda_1\lambda_2+1$
(4.17)
define transition points from one global attractor type of the map to another (Baram, 2013).
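As a consistency check on equations 4.13 to 4.17, the transition parameters can be recomputed from the limit synaptic weight reported for case b of section 4.3 ($ω=-6.1569$, taken from the text; the script itself is ours):

```python
import math

tau_m, u, w = 2.0, 10.0, -6.1569           # case b parameters from the text
alpha = math.exp(-1.0 / tau_m)
beta = 1.0 - alpha
lam1 = alpha                               # equation 4.13
lam2 = alpha + beta * w                    # equation 4.14
c1 = 2 * lam1 * lam2 + 1 + math.sqrt(1 + 4 * lam1 ** 2)   # equation 4.16
c2 = lam1 * lam2 + 1                       # equation 4.17

print(round(lam2, 4), round(c1, 4), round(c2, 4))  # → -1.816 0.3692 -0.1015
```

The recomputed values agree with those quoted for case b, with $u>0$, $\lambda_2<-1$, $c_1>0$, and $c_2<0$, placing the case in the largely oscillatory range.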

Figure 5 shows cobweb diagrams representing the global attractors of a single neuron with an active self-synapse and silenced synapses of other preneurons in the same circuit. The characteristic condition ranges specified below constitute a condensed but complete coverage of the entire range of the global attractor alphabet of firing-rate dynamics (Baram, 2013):

1. Chaotic attractor. For $u>0$, $\lambda_2<-1$, $c_1\le 0$, and $c_2<0$, yielding $q_\upsilon\ge p_\upsilon$ (where $q_\upsilon$ and $p_\upsilon$ are the vertical coordinates of the corresponding points on the map in Figure 5a); the attractor is represented in Figure 5a by the interval $ab$ on the diagonal $\upsilon(k)=\upsilon(k-1)$, with $a$, $b$, and $q$ created by a cobweb sequence initiating at the bend point $g$, which defines the boundaries of the attractor, as shown in the figure. An orbit of period three, $a_1\to a_2\to a_3\to a_1$, rendering Li-Yorke chaos (Li & Yorke, 1975), is defined by $a_1=f^3(a_1)$, $a_1\ne f(a_1)$, where $f$ is the map (here, equation 4.5) and $f^3(x)=f(f(f(x)))$. Starting with $a_1>g$, we obtain $a_2=\lambda_1 a_1$, $a_3=\lambda_1^2 a_1$, and $a_1=\lambda_2\lambda_1^2 a_1+\beta u$, yielding $a_1=\beta u/(1-\lambda_2\lambda_1^2)$.

2. Largely oscillatory attractor. For $u>0$, $\lambda_2<-1$, $c_1>0$, and $c_2<0$, yielding $q_\upsilon<p_\upsilon$, the attractor, represented in Figure 5b by the two intervals $ab$ and $cd$, separated by the repelling interval $bc$, is largely oscillatory. Within the attractor domain, defined by a cobweb initiating at the bend point $g$ and ending at point $c$, trajectories alternate between the two intervals $ab$ and $cd$. As implied by the cobweb diagram, depending on the circuit parameters, this alternation may, but need not necessarily, repeat precisely the same points, which may then represent oscillatory or cyclically multiplexed dynamics (detailed conditions are specified in Baram, 2013).

3. Fixed-point attractor. For $u>0$ and $-1<\lambda_2\le 1$, we have a fixed-point attractor at $p$. For $-1<\lambda_2\le 0$, the fixed point will be approached by alternate convergence (an increasing step in $\upsilon(k)$ followed by a decreasing step in $\upsilon(k+1)$, and vice versa). For $0<\lambda_2\le\lambda_1$, convergence will be monotone and bimodal (according to $f_1$ far from $p$ and according to $f_2$ near $p$, as illustrated by Figure 5c), and for $\lambda_1<\lambda_2\le 1$, it will be unimodal (according to $f_2$).

4. Silent attractor. For $u\le 0$ and $\lambda_2\le 1$, the attractor is at the origin.
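The four condition ranges above can be collected into a small classifier. This is our own schematic summary; boundary cases and the detailed conditions distinguishing oscillatory from cyclically multiplexed dynamics (Baram, 2013) are not treated:

```python
import math

def attractor_type(u, lam1, lam2):
    """Classify the global attractor of the map, equation 4.12, by the
    condition ranges 1-4 above; c1, c2 follow equations 4.16 and 4.17."""
    c1 = 2 * lam1 * lam2 + 1 + math.sqrt(1 + 4 * lam1 ** 2)
    c2 = lam1 * lam2 + 1
    if u <= 0 and lam2 <= 1:
        return "silent"
    if u > 0 and -1 < lam2 <= 1:
        return "fixed point"
    if u > 0 and lam2 < -1 and c2 < 0:
        return "chaotic" if c1 <= 0 else "largely oscillatory"
    return "outside the ranges considered"

# Cases a-d of section 4.3, with the parameter values quoted in the text:
print(attractor_type(10, 0.6055, -4.7336))  # → chaotic
print(attractor_type(10, 0.6055, -1.8160))  # → largely oscillatory
print(attractor_type(1, 0.6065, 0.5308))    # → fixed point
print(attractor_type(-1, 0.6065, 0.6065))   # → silent
```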

Figure 5:

The neuronal global attractor code of firing-rate dynamics with active self-synapse and silenced synapses of circuit preneurons: (a) chaotic, (b) largely oscillatory, (c) fixed point, (d) silent. Cobweb trajectories are represented by black dashed lines and converge to global attractors represented by red line segments or red x. The model is specified by equations 4.5 to 4.9. The parameter values for each of the global attractors are specified in the text.


Specifically, Figure 5 employs the following cases of the map, equations 4.5 to 4.8:

1. $u=10$, $\tau_m=2$, $\tau_\omega=10{,}000$, $\tau_\theta=0.1$, yielding $\lambda_1=0.6055$, $\lambda_2=-4.7336$, $c_1=-3.1701$, $c_2=-1.8711$, $\omega=-13.5720$

2. $u=10$, $\tau_m=2$, $\tau_\omega=10{,}000$, $\tau_\theta=1$, yielding $\lambda_1=0.6055$, $\lambda_2=-1.8160$, $c_1=0.3692$, $c_2=-0.1015$, $\omega=-6.1569$

3. $u=1$, $\tau_m=2$, $\tau_\omega=5$, $\tau_\theta=1$, yielding $\lambda_1=0.6065$, $\lambda_2=0.5308$, $\omega=-0.1925$

4. $u=-1$, $\tau_m=2$, yielding $\lambda_1=\lambda_2=0.6065$, $\omega=0$

For each of the cases, the steady-state value of the synaptic weight, $\omega$, was calculated by driving equations 4.5, 4.7, and 4.8, with $\omega(0)=0$ and $\upsilon(0)=1$, to convergence (practically, this was achieved for $N=100$), and the corresponding values of $\lambda_1$, $\lambda_2$, $c_1$, and $c_2$ were calculated by equations 4.13, 4.14, 4.16, and 4.17, respectively. As can be verified, the conditions stated above for the chaotic, largely oscillatory, fixed-point, and silent attractor types are satisfied, respectively, by the parameter values obtained.

As we have shown, the dynamic modes of firing rate are defined by the parameters $\lambda_1$, $\lambda_2$, $c_1$, and $c_2$, which, through equations 4.13 to 4.17, relate to the parameters of the model, equation 4.12, related, in turn, to the model, equations 4.5 and 4.6. It can be seen from Figure 5 that while the cobweb trajectories of the chaotic mode and the largely oscillatory mode involve endless transitions from the function $f_1$ (blue) to the function $f_2$ and vice versa, the cobweb trajectory of the fixed-point mode (c) may, depending on the initial condition, involve at most one such transition (from $f_1$ to $f_2$), while that of the silent mode (d), associated with negative membrane polarity, does not involve any transition between the two functions. As each transition between the two functions represents, through the function $f$ of equation 4.2, neuronal polarity gating, it can be seen that the firing rate dynamics are driven by the polarity gating dynamics. Conversely, the dynamics of the model, equation 4.12 (hence, the dynamics of the model, equation 4.5), drive the neuronal gating dynamics. It follows, then, that chaotic and oscillatory firing rate dynamics, fixed-point firing rate dynamics, and silence are governed by mixed polarity gating, positive polarity gating, and negative polarity gating, respectively.

#### 4.3.1  Simulation Results

We have simulated the model, equations 4.5 to 4.9, for cases a to d above with the specified parameter values. The simulation results are shown in Figure 6, where the values of the feedback synaptic weight $ω$ throughout the run, the firing rate during learning (firing mode convergence) in the early period of the run, the firing rate during learning in the late period of the run, and the firing rate during retrieval in the early period of the run, initiated at the limit value of the synaptic weight obtained in learning, are given, respectively, from left to right in the four columns. It can be seen that in each case, the feedback synaptic weight converged to a constant value. It can also be seen that while convergence to a persistent firing mode in retrieval is noticeably faster than convergence in learning for the first three cases, learning and retrieval end, as desirable, in the same attractor in all cases. The convergent modes of firing visually conform to the attractor types predicted by the theory and illustrated in Figure 5. Specifically, case a produced a highly disordered, bifurcated, and bursting sequence, as expected from a chaotic attractor, case b produced a quasi-four-cycle, multiplexing two uneven two-cycles, case c converged to a constant sequence, and case d converged to a silent (zero-valued) sequence. Similarly behaved biological firing modes, notably chaotic (Hayashi & Ishizuka, 1992; Fell, Roschke, & Beckmann, 1993), quasi-cyclic (Lankheet, Klink, Borghuis, & Noest, 2012), multiplexed (Panzeri, Brunel, Logothetis, & Kayser, 2009), fixed point, or tonic (Bennett, Callaway, & Wilson, 2000), and silent (Melnick, 1994), have been reported.

Figure 6:

Simulated sequences of the individual neuron during learning and retrieval, representing (a) chaotic, (b) largely oscillatory, (c) fixed point, and (d) silent modes. The first column represents the synaptic weight ($ω$) evolution throughout the run, and the remaining three represent the firing rate ($v$) during learning throughout the run and in the late period of the run, and during retrieval in the early period of the run, respectively. It might be noted that in the silent mode, the feedback synaptic weight converges to 0, as predicted by equation 4.7.


It is particularly interesting to note the bifurcations of the firing-rate sequences during the learning phases in cases a and b, around time samples $k=200$ in case a and $k=1000$ and $k=5000$ in case b. Such behavior has been observed in neurobiological data and reported with some fascination, suggesting that it may require rethinking existing hypotheses, models, and beliefs concerning long-term neuronal behavior, stability, and function. However, such behavior is in fact predicted by our analysis. It can be seen in the first column of both cases a and b that the synaptic weight, while converging, becomes increasingly negative. By equation 4.14, this steepens the negative slope $λ2(k)$ of $f2$, which, as can be seen in Figures 5a to 5c, would cause transitions from a constant firing rate (see Figures 5c and 6c) to largely oscillatory (see Figures 5b and 6b) and, further, to chaotic (see Figures 5a and 6a). As such transitions are simply consequences of the corresponding learning processes, they may be justified by the functionality of the learned behaviors rather than viewed as instability in long-term neuronal behavior.

## 5  Cortical Firing Mode Segregation and Control by Circuit Polarization

### 5.1  Underlying Firing-Rate and Plasticity Models

An extension of the discrete-time firing-rate model for the individual neuron, equation 4.5, to a multineuron circuit yields
$υ_i(k)=α_iυ_i(k-1)+β_if\big(ω_i^T(k)υ(k-1)+u_i\big),$
(5.1)
where $υ(k)$ is the vector of the circuit neurons' firing rates, $υi,i=1,2,…,n$, $αi=exp(-1/τmi)$, and $βi=1-αi$, with $τmi$ the membrane time constant of the $i$th neuron, $ωi(k)$ is the vector of synaptic weights corresponding to the circuit's preneurons of the $i$th neuron (including self-feedback), $ui=Ii-ri$, with $Ii$ the circuit-external activation input and $ri$ the membrane resting potential of the $i$th neuron, and $f$ is the conductance-based rectification kernel defined by equation 4.2.
The BCM plasticity rule (Bienenstock et al., 1982) now takes the multineuron form
$ω_i(k)=ɛ_iω_i(k-1)+γ_i\big(υ_i(k-1)-θ_i(k-1)\big)υ^2(k-1),$
(5.2)
where $υ^2$ is the vector whose components are the squares of the components of $υ$ and
$θ_i(k)=δ_i\sum_{ℓ=0}^{N}\exp(-ℓ/τ_{θ_i})\,υ_i^2(k-ℓ).$
(5.3)
Next, we analyze and demonstrate the effects of membrane and synapse polarization on circuit structure and firing dynamics.
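Equations 5.1 to 5.3 can be iterated directly. The sketch below assumes half-wave rectification for the kernel $f$ of equation 4.2, the discretizations $ɛ_i=\exp(-1/τ_{ω_i})$ and $γ_i=1-ɛ_i$, and a threshold gain $δ=1$; none of these choices are fixed by the text, so this is an illustration rather than the authors' code.

```python
import numpy as np

def f(x):
    # Rectification kernel standing in for equation 4.2; half-wave
    # rectification is an assumption of this sketch.
    return np.maximum(x, 0.0)

def simulate(u, tau_m, tau_w, tau_th, K=1500, N=100, delta=1.0):
    """Iterate equations 5.1-5.3 for a fully connected n-neuron circuit.
    eps_i = exp(-1/tau_w_i), gam_i = 1 - eps_i and the gain delta are
    assumed discretizations, not values fixed by the text."""
    u = np.atleast_1d(np.asarray(u, float))
    n = u.size
    alpha = np.exp(-1.0 / np.atleast_1d(np.asarray(tau_m, float)))
    beta = 1.0 - alpha
    eps = np.exp(-1.0 / np.atleast_1d(np.asarray(tau_w, float)))
    gam = 1.0 - eps
    tau_th = np.atleast_1d(np.asarray(tau_th, float))
    kern = np.exp(-np.arange(N + 1)[:, None] / tau_th)   # theta kernel, eq. 5.3
    v = np.zeros((K + 1, n))
    W = np.zeros((n, n))                   # W[i, j]: synapse j -> i
    for k in range(1, K + 1):
        hist = v[max(0, k - 1 - N):k][::-1]              # v(k-1), v(k-2), ...
        theta = delta * (kern[:len(hist)] * hist ** 2).sum(axis=0)   # eq. 5.3
        # BCM step, equation 5.2: row i of W is the weight vector omega_i.
        W = eps[:, None] * W + gam[:, None] * np.outer(v[k - 1] - theta,
                                                       v[k - 1] ** 2)
        v[k] = alpha * v[k - 1] + beta * f(W @ v[k - 1] + u)         # eq. 5.1
    return v, W
```

For a single self-feeding neuron with $u=1$, $τm=2$, $τω=5$, $τθ=1$, the feedback weight settles at a negative limit value, consistent with the negative limit weights reported in section 4 (the exact value depends on the assumed $δ$).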

### 5.2  Hebbian and Asynchronous Synaptic Polarity-Gated Firing Mode Segregation

The Hebb paradigm (Hebb, 1949; Hertz, Krogh, & Palmer, 1991), supported by the eigenfrequency preference paradigm of spiking neurons (Izhikevich, 2001), implies inhibitory effects between asynchronously firing neurons. This tends to segregate a circuit into internally synchronous, externally asynchronous subcircuits. However, as we show next, the resulting mutually asynchronous firing of the segregated subcircuits is likely to produce mutual interference between the corresponding firing-rate modes. On the other hand, as we show, the segregation of two internally synchronous, externally asynchronous subcircuits by synapse silencing results in interference-free firing. It should be noted that the analysis of asynchronous circuit firing-rate dynamics cannot employ cobweb diagrams, which are essentially limited to scalar systems. We therefore employ representative simulations of small circuits. While circuits of many asynchronous neurons would require a prohibitive number of simulation plots, small circuits are rather effective in demonstrating the main concepts of interest.

In order to demonstrate the difference between Hebbian segregation and polarity-gated segregation of asynchronous neural circuits, we consider a three-neuron circuit, where the first two neurons have the same parameter values and the third has different parameter values, as specified below:
$u_1=u_2=1$, $u_3=5$, $τ_{m,1}=τ_{m,2}=1$, $τ_{m,3}=3$, $τ_{ω,1}=τ_{ω,2}=5$, $τ_{ω,3}=0.5$, $τ_{θ,1}=τ_{θ,2}=1$, $τ_{θ,3}=0.5$.
Starting with full connectivity, changes in circuit connectivity due to synapse silencing are illustrated in Figure 7. The resulting changes in the neuronal firing modes, simulated by running equations 5.1 to 5.3, are displayed in Figure 8.
Figure 7:

Asynchronous three-neuron circuit modification and segregation by synapse silencing. (a) Fully connected circuit. (b) Asynchronous circuit segregation into a single isolated neuron and an asynchronous two-neuron circuit by synapse silencing. (c) Asynchronous circuit segregation into a synchronous two-neuron circuit and a single neuron by synapse silencing. (d) Asynchronous circuit segregation into three single neurons by synapse silencing.


Figure 8:

Firing-rate sequences corresponding to the circuit modification and segregation cases depicted in Figure 7. Full connectivity (a) results in mutual interference. Neuron 1 isolation by synaptic silencing (b) results in interference-free firing of that neuron. Neuron 3 isolation by synaptic silencing (c) results in its interference-free firing, while neurons 1 and 2 fire in interference-free synchrony. Mutual isolation of the three neurons by complete synapse silencing (d) allows each of them to fire in its own characteristic mode.


The Hebb paradigm, if fully applicable, should segregate the fully connected three-neuron circuit of Figure 7a into two totally isolated subcircuits, the first consisting of two synchronous neurons (neurons 1 and 2) firing in unison according to one firing-rate mode and the second consisting of one neuron (neuron 3) firing in a different firing-rate mode. Yet as can be seen in Figure 8a, there is visible mutual interference between the firing modes. When neuron 1 is isolated by polarity-gated synapse silencing (see Figure 7b), it produces a smooth fixed-point firing-rate mode, while neurons 2 and 3 continue to interfere with each other, as can be seen in Figure 8b. When neuron 3 is isolated from neurons 1 and 2 by polarity-gated synapse silencing (see Figure 7c), the latter form a two-neuron circuit, which fires in fully matched fixed-point modes, while neuron 3 fires in an oscillatory mode, as can be seen in Figure 8c. When each of the neurons is isolated by polarity-gated synapse silencing (see Figure 7d), each fires in its characteristic mode. It can be seen in Figures 8c and 8d, respectively, that there is a slight difference between the firing modes of neurons 1 and 2 in cases c and d. This implies that while in both cases there is complete synchrony between the two neurons, the two-neuron circuit connectivity in case c does not allow each of the two neurons complete freedom to fire in its own independent firing-rate mode.

It follows that while the Hebbian paradigm may result in strong inhibitory effects between asynchronously firing neurons and circuits, it need not necessarily completely eliminate mutual interference. On the other hand, the polarity-gated synapse silencing mechanism simply eliminates directional interaction, removing the corresponding interference.
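The interference-removal claim can be checked directly: silencing a synapse amounts to an on-off gate multiplying the corresponding weight, after which the isolated neuron's trajectory is exactly that of a scalar self-feedback neuron. The weight values, the half-wave rectification standing in for equation 4.2, and $τm=2$ are illustrative assumptions, not parameters taken from the text.

```python
import numpy as np

def gated_weights(W, G):
    """Element-wise on-off gating: G[i, j] = 0 silences the synapse j -> i."""
    return W * G

def step(v, W, u, alpha=0.6065):
    # One firing-rate step of equation 5.1; half-wave rectification stands
    # in for the kernel f of equation 4.2, and alpha = exp(-1/2) assumes a
    # common membrane constant tau_m = 2 (both assumptions of this sketch).
    return alpha * v + (1 - alpha) * np.maximum(W @ v + u, 0.0)

# Hypothetical weights for the fully connected three-neuron circuit (Figure 7a).
W = np.full((3, 3), -0.5)
np.fill_diagonal(W, -0.2)

# Case c of Figure 7: silence all synapses between neuron 3 and the {1, 2}
# pair, leaving a two-neuron subcircuit and an isolated neuron.
G = np.ones((3, 3))
G[2, :2] = 0.0            # silence synapses 1,2 -> 3
G[:2, 2] = 0.0            # silence synapses 3 -> 1,2
Wc = gated_weights(W, G)
```

After gating, neuron 3 sees only its own self-feedback, so its trajectory coincides with that of the corresponding scalar recursion: the directional interaction, and with it the interference, is removed entirely rather than merely attenuated.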

### 5.3  Firing Mode Control by Synchronous Polarity-Gated Circuit Segregation

A cortical circuit of identical neurons may be segregated by polarity-gated synapse silencing into internally synchronous, externally asynchronous subcircuits. Circuit and subcircuit size can be further reduced by polarity-gated membrane silencing. As we show next, circuit and subcircuit size modification results in firing-rate mode modification, which may serve different cortical functions even when the neurons are identical.

Consider first a fully connected circuit of $n$ identical neurons firing in synchrony. As $N→∞$, equations 5.2 and 5.3 will take $ω(k)$ to its constant limit value (Cooper et al., 2004), $ω$, which in turn implies that $τω$ attains an infinitely large value (Baram, 2017a). Equation 5.1 then yields the following firing-rate model for each of the circuit's neurons:
$υ(k)=\begin{cases}f_1(υ(k-1))=λ_1υ(k-1),&βnωυ(k-1)+βu\le0\\f_2(υ(k-1))=λ_2υ(k-1)+βu,&βnωυ(k-1)+βu>0,\end{cases}$
(5.4)
where $λ1=α=exp(-1/τm)$, $λ2=α+βnω$, and $β=1-α$. The transition points from one global attractor type of the map to another are defined by the parameters (Baram, 2013) $λ1$, $λ2$, $c_1=2λ_1λ_2+1+\sqrt{1+4λ_1^2}$, and $c_2=λ_1λ_2+1$. Clearly, equation 5.4 will produce a different mode of firing-rate dynamics for every circuit size $n$. This is illustrated by Figure 9, where the four panels depict global attractors of circuits having identical neurons but different circuit sizes. It can be seen that different circuit sizes result in different global attractor types of firing-rate modes, even when the individual neuron's parameters, external activation, and initial conditions are identical.
Figure 9:

Global attractor types corresponding to fully connected synchronous circuits of (a) 10 neurons (chaotic), (b) 5 neurons (largely oscillatory), (c) 2 neurons (oscillatory), and (d) 1 neuron (fixed point). The global attractors are represented by red line segments or by a red x.


Specifically, the circuits represented by Figure 9 obey the model of equations 5.1 to 5.3. Keeping in mind the linear ratio between membrane potential and firing rate ($7.2±0.6spikes·sec-1·mV-1$, Carandini & Ferster, 2000), we have uniformly assumed for illustration purposes the parameter values $u=4,τm=2,τω=300,τθ=0.1$ in units of mV and msec, respectively, zero initial conditions, with $N=100$ taking $ω(k)$ to its constant limit $ω$, and the circuit size values specified below, yielding the corresponding modal parameter and attractor condition values:

1. $n=10$, yielding $ω=-0.7870$, $λ1=0.6065$, $λ2=-2.4901$, $c1=-0.4485$, $c2=-0.5103$

2. $n=5$, yielding $ω=-1.3128$, $λ1=0.6065$, $λ2=-1.9762$, $c1=0.1748$, $c2=-0.1987$

3. $n=2$, yielding $ω=-2.5695$, $λ1=0.6065$, $λ2=-1.4155$, $c1=0.8550$, $c2=0.1415$

4. $n=1$, yielding $ω=-3.8993$, $λ1=0.6065$, $λ2=-0.9277$, $c1=1.4467$, $c2=0.4373$

An examination of the conditions for the global attractor types specified for the individual neuron (see section 4) shows that the above cases represent (a) chaotic, (b) largely oscillatory, (c) oscillatory, and (d) fixed-point global attractors, as ratified by the cobweb diagrams in Figure 9. We also note that a simulated increase of the number of neurons above 10 in a synchronous fully connected circuit has resulted in a chaotic global attractor, as for $n=10$, which indicates that the four attractors represented by the above four cases span the entire range of active global attractor types.
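The modal parameters of equation 5.4 follow mechanically from $n$, the limit weight $ω$, and $τm$; a short sketch reproduces the listed case values to the stated precision.

```python
from math import exp, sqrt

def modal_parameters(n, omega, tau_m):
    """Modal parameters of the synchronous-circuit map, equation 5.4, and
    the attractor-condition values c1, c2 (Baram, 2013):
    c1 = 2*lam1*lam2 + 1 + sqrt(1 + 4*lam1**2), c2 = lam1*lam2 + 1."""
    alpha = exp(-1.0 / tau_m)          # lam1 = alpha
    beta = 1.0 - alpha
    lam2 = alpha + beta * n * omega
    c1 = 2 * alpha * lam2 + 1 + sqrt(1 + 4 * alpha ** 2)
    c2 = alpha * lam2 + 1
    return alpha, lam2, c1, c2
```

Feeding in case 1 ($n=10$, $ω=-0.7870$) and case 4 ($n=1$, $ω=-3.8993$) with $τm=2$ recovers the tabulated values of $λ1$, $λ2$, $c1$, and $c2$ to within rounding.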

Simulating the circuits of cases a to d with identical initial conditions, $υ(0)=0$, we obtained the results displayed in Figure 10. It can be seen that decreasing the circuit size changes the firing mode from chaotic to largely oscillatory, then oscillatory, then constant, as predicted by the analysis and the cobweb diagrams of Figure 9. At the same time, the learning and the retrieval response times are increased considerably. In all cases, retrieval is considerably faster than learning.
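The size-dependent mode change can also be reproduced by iterating the map of equation 5.4 directly, with the listed limit weights frozen. The sketch below uses the case c and case d values ($n=2$, $ω=-2.5695$ and $n=1$, $ω=-3.8993$, with $u=4$, $τm=2$); it is an illustration of the map, not the full plasticity simulation behind Figure 10.

```python
from math import exp

def circuit_map(v, n, omega, u, tau_m=2.0):
    """One step of the gated map, equation 5.4, for a synchronous fully
    connected circuit of n identical neurons with frozen limit weight."""
    alpha = exp(-1.0 / tau_m)
    beta = 1.0 - alpha
    if beta * n * omega * v + beta * u > 0:           # gate open: branch f2
        return (alpha + beta * n * omega) * v + beta * u
    return alpha * v                                   # gate closed: branch f1

def trajectory(n, omega, u=4.0, steps=600, v0=0.0):
    """Iterate the map and return the full firing-rate sequence."""
    v, seq = v0, []
    for _ in range(steps):
        v = circuit_map(v, n, omega, u)
        seq.append(v)
    return seq
```

The $n=1$ circuit settles on a fixed point, while the $n=2$ circuit settles on a two-cycle, matching the attractor types of cases d and c.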

Figure 10:

Simulated firing-rate modal change by fully connected synchronous circuit size reduction, controlled by membrane silencing in learning and retrieval. Cases a, b, c, and d correspond to circuits of 10, 5, 2, and 1 neurons, respectively, all having the same parameter values, as specified in the text. Decreasing the circuit size changes the firing-rate mode from chaotic to largely oscillatory, then oscillatory, then constant. At the same time, the learning and the retrieval periods are increased considerably.


Cobweb diagrams are essentially restricted to scalar evolution, and therefore the analysis and, for comparison purposes, the corresponding simulations have addressed synchronous circuits of identical neurons under the same initial conditions. However, a broader notion of synchronization under different initial conditions can be illustrated by simulation. For example, case c above, illustrated by cases c of Figures 9 and 10 for a fully connected circuit of two identically parameterized neurons under the same initial conditions for the two neurons, is now illustrated in Figure 11 under different initial conditions, $υ1(0)=1$ and $υ2(0)=2$, respectively. The simulated sequences of firing rate corresponding to the two neurons are shown in Figure 11a. It can be seen that following different initial periods (see Figure 11b), the two neurons converge to fully synchronized behaviors (see Figure 11c). We have further found similar synchronization of fully connected neurons under different initial conditions in a large variety of simulated cases.

Figure 11:

Synchronized oscillations in a circuit of two identical neurons under different initial conditions. The two sequences (a) have different initial responses (b), synchronizing in time (c).


## 6  Memory by Circuit Polarity

As shown in the previous sections, the behavior of a neural circuit is completely determined by external activation, which, together with internal properties, defines the circuit polarity and, with it, the circuit's firing modes. Given the neuronal parameters, specifically $τm$, $τω$, and $τθ$, the only variable internal properties are the synaptic weights, which are subject to convergence to constant limit values (Cooper et al., 2004), often referred to as learning. The limit synaptic weights represent the memory of the circuit. Once the synaptic weights have attained their limit values, exertion of the same external inputs that have instigated learning will reproduce the original circuit behavior, represented by firing-rate modes. Moreover, as illustrated in Figure 11, the memorized firing-rate attractors are robustly retrieved in the face of different initial conditions.

### 6.1  Memory Concealment and Restoration in the Individual Neuron

Suppose that the operand of the function $f$ in equation 4.1 becomes negative and stays negative for a long period of time. This may be a consequence of a drop in the level of the external input potential, $I(k)$, or of an unforced change in membrane potential resulting from slow dissipation of electric charge. As explained and demonstrated in section 4, this will result in neuronal silencing, yielding partial circuit memory loss or, when all external inputs to the circuit drop to sufficiently low levels, a total loss of circuit memory. Memory restoration will occur when the external inputs return to their original levels. Since, as shown in section 4 for the individual neuron and in section 5 for multineuron circuits, firing-rate modes are uniquely determined by internal neuron properties and circuit polarity–controlled connectivity, the neuronal and circuit learned attributes (hence, firing-rate modes) are uniquely recovered by restoration of the corresponding circuit polarity pattern. Such memory restoration may be partial, affecting, at different times, individual neurons or subcircuits.

In order to demonstrate the concept of memory concealment and restoration, we first consider the individual neuron addressed in section 4. Specifically, we employ the four cases a to d represented by cobweb diagrams in Figure 5 and illustrated by simulation in Figure 6. A positive value of the centered membrane activation $u$ implies learning, as manifested by a convergent value of the feedback synaptic weight $ω$ and a simultaneous convergence of the firing rate to a global attractor. A negative value of the centered membrane activation $u$ implies a negative membrane polarity, represented by a negative operand of the function $f$ in equation 4.1. Consequently, equation 4.1 reduces to equation 4.10, yielding membrane silencing, as illustrated by Figure 4a. Representing temporary memory concealment, membrane silence is interrupted by reinstating the original level of activation, which results in restoration of the memorized (or learned) mode of firing rate.
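Concealment and restoration reduce to switching the sign of the gate operand while the learned weight stays frozen. The sketch below uses the case 3 limit weight $ω=-0.1925$ with $τm=2$; the activation levels ($u=1$ for retrieval, $u=-10$ for silencing) are chosen for illustration.

```python
from math import exp

ALPHA = exp(-1.0 / 2.0)          # tau_m = 2
BETA = 1.0 - ALPHA
OMEGA = -0.1925                  # learned limit weight, case 3 (fixed point)

def retrieve(v, u, steps):
    """Iterate the scalar gated model with the weight frozen at OMEGA;
    half-wave rectification stands in for the kernel f of equation 4.2."""
    for _ in range(steps):
        v = ALPHA * v + BETA * max(OMEGA * v + u, 0.0)
    return v

v = retrieve(0.0, 1.0, 400)             # learning-level activation: attractor
v_hidden = retrieve(v, -10.0, 400)      # negative polarity: silencing
v_restored = retrieve(v_hidden, 1.0, 400)   # reactivation
```

The silenced rate decays to zero (concealment), and reinstating the original activation returns the neuron to the same fixed-point attractor (restoration), independent of the intervening silence.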

Figure 12 shows convergence of the self-synapse weight $ω$ representing the learning process (first column), the firing-rate modes of learning (second column), memory concealment (third column), and memory restoration (fourth column) by an individual neuron in the four cases under consideration. Membrane silencing and, consequently, memory concealment were accomplished by changing $u$ from 10 to $-$10 in cases a, b, and c, and from $-$1 to $-$10 in case d. In memory restoration, the original value of $u$ was applied. It can be seen that in all cases, the convergent mode of firing rate (later part of the sequence in the second column) was recovered (fourth column).

Figure 12:

Learning (first and second columns), memory concealment (third column), and memory restoration (fourth column) in the individual neuron are represented by firing-rate modes in the four cases considered in section 4, representing (a) chaotic, (b) largely oscillatory, (c) fixed point, and (d) silent firing rate modes.


### 6.2  Memory Deterioration, Concealment, and Restoration in Synchronous Circuit

As explained and demonstrated in section 5, neural circuits can be segregated by synapse silencing into synchronous subcircuits. Asynchronous segregation, as represented by Figure 7, can be analyzed in the present context by similar means but will be omitted for brevity. External input decline, or membrane potential decline by spontaneous electric discharge, will result in individual neuron silencing. A change in the number of active circuit neurons will result in changing firing-rate dynamics of the remaining circuit neurons, possibly to the point of total circuit silence and memory concealment. Restoration of external activation will result in partial, or complete, return of circuit activity to its previous dynamics.

In order to demonstrate memory deterioration, concealment, and restoration in a synchronous circuit, consider a circuit of three neurons, all having the same parameter values $τω=300$, $τθ=0.1$, $τm=2$, and $u=4$. In time, the circuit undergoes several stages of individual neuron silencing and reactivation, as depicted in Figure 13. Figure 13a describes initial learning and memory, followed by memory deterioration due to a sequence of neuronal silencing, achieved by changing the value of the corresponding centered membrane activation from $u=4$ to $u=-1$. During the time interval between $k=0$ and $k=1000$, the circuit experiences learning, by the end of which it reaches a chaotic mode of firing rate. At $k=1001$ one of the neurons becomes silent. The remaining active part of the circuit, constituting a synchronous subcircuit of two neurons, displays an oscillatory mode of firing rate (identical to case c in Figure 9 and case c in the third column of Figure 10, having the same circuit size of 2 and the same parameters), until $k=2000$. At this point, another neuron becomes silent, and, following an abrupt response to the change of subcircuit size at $k=2001$, the remaining active neuron displays a constant (fixed-point) mode of firing rate. This last active neuron becomes silent at $k=3001$, and the entire circuit remains silent until $k=5000$. The sequence of neuron silencing thus described represents memory deterioration, ending in complete memory concealment. The memory deterioration process described by Figure 13a is reversed in Figure 13b by reactivating, in sequence, each of the neurons by applying $u=4$, as in the original learning process. The circuit remains silent until, at $k=6001$, one of the neurons becomes active again, producing a constant firing rate. At $k=7001$, a second neuron becomes active, and the subcircuit of two active neurons produces an oscillatory mode of firing rate. At $k=8001$, the third neuron becomes active as well, and the original external activation of all three neurons is restored. It can be seen that the circuit activity is back to the original chaotic mode.

Figure 13:

Synchronous circuit learning, memory deterioration, and concealment (a) and memory restoration (b). A three-neuron synchronous circuit undergoes a sequence of learning converging to a chaotic firing rate mode ($k=0,…,1000)$, followed by one neuron silencing yielding an oscillatory firing-rate mode of the active two-neuron subcircuit ($k=1001,…,2000)$, a second neuron silencing yielding a constant firing-rate mode of the remaining active neuron ($k=2001,…,3000)$, and a third neuron silencing yielding complete memory concealment ($k=3001,…,5000)$. Neuronal activation in reversed order ($k=5001,…,10,000)$ ends in restoration of the learned chaotic firing-rate mode.


### 6.3  Memory Modification

Memory modification may result from changes in neural circuit polarity and membrane activation level. While the circuit polarity pattern uniquely determines which neurons are active, the dynamic nature of the firing rate produced by a neuron as a result of restored activity will depend on the level of external activation as well. Since, as can be seen in equation 4.12, the actual value of $u$ determines the intersection point $βu$ of the function $f2(υ(k-1))$ with the axis $υ(k)$, but not the slope $λ2$ of $f2(υ(k-1))$, defined by equation 4.14, the value of $u$ affects the amplitude but not the characteristic firing-rate mode of the individual neuron (this can be further ratified by examination of the cobweb diagrams in Figure 5).
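This amplitude-versus-mode separation can be verified on the scalar map: the slope $λ2$ is fixed by the learned weight, while $u$ only moves the intercept $βu$, so the fixed-point amplitude scales linearly with $u$. The sketch below reuses the case 3 limit weight $ω=-0.1925$ with $τm=2$ as an illustrative assumption.

```python
from math import exp

ALPHA = exp(-1.0 / 2.0)          # tau_m = 2
BETA = 1.0 - ALPHA
OMEGA = -0.1925                  # learned limit weight (slope fixed)

def settle(u, steps=500, v=0.0):
    """Iterate the scalar gated model with frozen weight until the
    fixed point is reached; half-wave rectification assumed for f."""
    for _ in range(steps):
        v = ALPHA * v + BETA * max(OMEGA * v + u, 0.0)
    return v
```

Raising the retrieval activation tenfold scales the fixed-point firing rate tenfold, while the mode (a fixed point, since $λ2$ is unchanged) is preserved.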

Repeating the simulations of the individual neurons depicted in Figure 12, with the retrieval activation values changed from the $u=10$ used in the learning stage of cases a to c to (a) $u=1$, (b) $u=0.1$, (c) $u=0.01$, and (d) $u=-1$, we obtain the results depicted in Figure 14. It can be seen that while the evolution of the feedback synaptic weight and the firing rates during learning depicted in Figure 14 are identical to those depicted in Figure 12, the firing-rate sequences in retrieval have different amplitudes in the two figures while maintaining the same dynamic modes.

Figure 14:

Memory modification in the individual neuron. Using different activation values in learning (second column) and retrieval (fourth column) by an individual neuron (cases a–d of section 4) results in different firing-rate amplitudes while maintaining the same firing-rate modes.


In a synchronous neural circuit, a process of memory deterioration or restoration may stop at any stage, producing what might be considered a partial memory, an illusion, or an innovation. For instance, if the process of memory deterioration in the synchronous three-neuron circuit considered in section 6.2 stops between $k=1001$ and $k=2000$, then “memory” will continue as an oscillatory sequence produced by a two-neuron circuit, which is different from the original chaotic sequence. Similarly, if the process of deterioration stops between $k=2001$ and $k=3000$, then “memory” will continue as a constant sequence produced by a single neuron, as illustrated in Figure 15a. The same is true for the memory restoration sequence, which, if stopped between $k=6001$ and $k=7000$, will produce “memory” of a constant sequence produced by a single neuron, while if stopped between $k=7001$ and $k=8000$, will produce “memory” of an oscillatory synchronous sequence in a two-neuron circuit, as illustrated in Figure 15b.

Figure 15:

Memory modification in a synchronous three-neuron circuit. (a) Following learning, which produces a chaotic firing rate $(k=0,…,1000)$, silencing of one neuron $(k=1001,…,2000)$, followed by silencing of another neuron $(k=2001,…,7000)$ takes the circuit to oscillatory, then to constant firing rate, contrasting the original chaotic mode. (b) Reactivating one of the silenced neurons takes the circuit back to oscillatory firing rate, again contrasting the original chaotic mode.


## 7  Discussion

Extending the notion of local polarity, experimentally discovered in the separate forms of membrane and synapse silencing and reactivation about two decades ago and debated ever since, to the unified notion of polarity-gated neural circuits, we have shown analytically that the latter holds the key to cortical connectivity, activity, and memory. We have further shown that polarity gates segregate a neural circuit into interference-free, internally synchronous, externally asynchronous subcircuits. Employing widely accepted, biologically validated firing-rate and plasticity paradigms, we have shown that chaotic and oscillatory, fixed-point, and silent firing-rate modes are governed by mixed, positive, and negative polarity gating, respectively, maintaining the firing variety of the different neurons and allowing each neuron its unique expression through firing-rate dynamics.

Circuit and subcircuit structures define their firing modes, which are in turn matched to their functions. These are learned through convergence of the synaptic weights. The rate of learning is determined by the corresponding time constants, which in turn determine, together with the final values of the synaptic weights and the circuit polarities, both the circuit's structure and firing modes. Further changes in circuit (and subcircuit) size will change its firing mode, and, accordingly, its functionality. While we do not attempt to map the entire range of firing-rate modes to cortical functions, it seems widely accepted that certain homeostatic functions are associated with tonic, fixed-rate firing, while others are associated with an oscillatory firing rate. Chaotic neural firing has been conjectured to represent functional pacemaking by bursting (Hayashi & Ishizuka, 1992). Temporal multiplexing (i.e., transmitting and receiving independent signals over a common signal path) of different firing signals, analytically modeled (Izhikevich, 2001) and observed in sensory cortices (Fairhall, Lewen, Bialek, & van Steveninck, 2001; Lundstrom & Fairhall, 2006), enhances the coding and information transmission capacity (Bullock, 1997; Lisman & Grace, 2005; Kayser, Montemurro, Logothetis, & Panzeri, 2009). While a chaotic attractor drives temporal mixing of firing rates over the entire state space, a largely cyclic attractor can perform multiplexing of two oscillatory signals. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by bandpass filtering. Neuronal low-pass (Pettersen & Einevoll, 2008) and high-pass (Poon, Young, & Siniaia, 2000) filtering have been reported. Multiplexed red, green, and blue (RGB) color coding is a known example of creating mixtures of the primary colors, found in both biological and technological vision systems (Hunt, 2004).
A silent attractor, representing the state of a silent neuron, has been found to play a major role in cortical representation of place (O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Calvin, 1996; Hafting et al., 2005; Epsztein et al., 2011).
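The multiplexing-demultiplexing principle above can be illustrated numerically. The following sketch is our own construction, not a model from this work: two oscillatory "firing rate" signals at assumed frequencies of 5 Hz and 40 Hz are summed on a common path, and each is recovered by an idealized (FFT-mask) bandpass filter standing in for the neuronal low-pass and high-pass filtering cited above.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the text).
fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of signal
slow = np.sin(2 * np.pi * 5 * t)   # 5 Hz oscillatory component
fast = np.sin(2 * np.pi * 40 * t)  # 40 Hz oscillatory component
mixed = slow + fast                # temporal multiplexing on one path

def bandpass(x, lo, hi, fs):
    """Idealized bandpass filter: zero all spectral content outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

slow_rec = bandpass(mixed, 1.0, 10.0, fs)   # demultiplex the slow channel
fast_rec = bandpass(mixed, 30.0, 50.0, fs)  # demultiplex the fast channel

# Each recovered channel matches its source almost exactly.
print(np.corrcoef(slow, slow_rec)[0, 1] > 0.999)  # True
print(np.corrcoef(fast, fast_rec)[0, 1] > 0.999)  # True
```

Because the two components occupy disjoint frequency bands, a receiving unit tuned to either band extracts its channel essentially without interference, which is the sense in which bandpass filtering performs demultiplexing.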

Our results put the notions of cortical learning and memory in a new perspective. Following circuit segregation, memory, as circuit activity, is manifested by internally synchronous, externally asynchronous, isolated subcircuits. Given internal neuronal properties, a state of circuit polarity, and external activation, subcircuit learning is manifested by convergence of the synaptic weights to constant values. Once such values are established, the persistence of the circuit polarity pattern also implies the persistence of the connectivity and activity patterns, along with the neuronal firing modes. We have further suggested and demonstrated that memory deterioration is caused by a gradual silencing of circuit neurons, to the point of complete memory concealment, while memory restoration is caused by complete reactivation of the original, learned circuit polarity pattern. Partial and false memory, as well as novelty and innovation effects, in the form of new circuit structures and firing dynamics, are created when the new external inputs differ from those creating the original memory or when the restoration process is terminated before it is completed. In sharp contrast, more permanent memory effects have been suggested to result from synapse and whole-axon elimination, or pruning, and regrowth (Balice-Gordon & Lichtman, 1994; Culican, Nelson, & Lichtman, 1998; Chklovskii et al., 2004; Vanderhaeghen & Cheng, 2010; Knoblauch & Sommer, 2016; Baram, 2017b). While the memory mechanism we propose, based on circuit polarity, does not rule out a certain role for the death and regrowth of neurons and synapses in the implementation of memory, it appears to be considerably more economical, controllable, and agile than mechanisms based on elimination and regrowth.
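The silencing-and-restoration account can be made concrete with a minimal sketch. The gating form, weight matrix, and gate pattern below are illustrative assumptions of ours, not the model equations of this work: each neuron's rate is multiplied by a binary on-off polarity gate, the converged synaptic weights are held fixed, and the same deterministic dynamics are run under different gate patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # converged synaptic weights (held fixed)
x_ext = rng.normal(size=n)                           # external activation

def run(gates, steps=200):
    """Discrete-time rate dynamics with on-off polarity gates (hypothetical form)."""
    r = np.zeros(n)
    for _ in range(steps):
        r = gates * np.tanh(W @ (gates * r) + x_ext)
    return r

learned = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=float)  # a learned polarity pattern
memory = run(learned)                # the stored activity pattern

silenced = run(np.zeros(n))          # complete silencing conceals the memory
restored = run(learned)              # reactivating the learned gates restores it

partial = learned.copy()
partial[:2] = 0.0                    # incomplete restoration: two gates stay off
modified = run(partial)              # a modified, "partial recall" pattern

print(np.allclose(silenced, 0))      # True: activity fully concealed
print(np.allclose(restored, memory)) # True: original pattern recovered
```

With the weights intact, the gate pattern alone selects which activity pattern the circuit expresses: all-zero gates conceal it, the learned gates recover it exactly, and a partially restored gate pattern yields a different pattern, consistent with partial or false recall.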

Finally, we note that the underlying expression of cortical information is a cortical activity pattern. The translation of cortical activity to and from sensory information is mediated by sensory lobes, which may be defined as read-in/read-out circuits. Yet memory storage and retrieval are performed at the neural circuit activity level, governed by circuit polarity. A striking analogy is that of computer memory, which stores and retrieves patterns of binary bits that are occasionally translated to and from sensory information by external devices.

## Acknowledgments

This study was supported by the Technion's Roy Matas/Winnipeg Chair in Biomedical Engineering.

## References

Abraham, R. H., Gardini, L., & Mira, C. (1997). Chaos in discrete dynamical systems. Berlin: Springer-Verlag.
Amari, S. (1972). Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans. Comput., 21, 1197–1206.
Arimura, N., & Kaibuchi, K. (2005). Key regulators in neuronal polarity. Neuron, 48(6), 881–884.
Ashby, M. C., & Isaac, J. T. R. (2011). Maturation of a recurrent excitatory neocortical circuit by experience-dependent unsilencing of newly formed dendritic spines. Neuron, 70(3), 510–521.
Atwood, H. L., & Wojtowicz, J. M. (1999). Silent synapses in neural plasticity: Current evidence. Learn. Mem., 6, 542–571.
Balice-Gordon, R. J., Chua, C., Nelson, C. C., & Lichtman, J. W. (1993). Gradual loss of synaptic cartels precedes axon withdrawal at developing neuromuscular junctions. Neuron, 11, 801–815.
Balice-Gordon, R. J., & Lichtman, J. W. (1994). Long-term synapse loss induced by focal blockade of postsynaptic receptors. Nature, 372, 519–524.
Baram, Y. (2012). Noninvertibility, chaotic coding and chaotic multiplexity in synaptically modulated neural firing. Neural Comput., 24(3), 676–699.
Baram, Y. (2013). Global attractor alphabet of neural firing modes. J. Neurophys., 110, 907–915.
Baram, Y. (2017a). Developmental metaplasticity in neural circuit codes of firing and structure. Neur. Netw., 85, 182–196.
Baram, Y. (2017b). Asynchronous segregation of cortical circuits and their function: A life-long role for synaptic death. AIMS Neurosci., 4(2), 87–101.
Baram, Y., & Sal'ee, D. (1992). Lower bounds on the capacities of binary and ternary networks storing sparse random vectors. IEEE Trans. Inform. Theory, 38(6), 1633–1647.
Bennett, B. D., Callaway, J. C., & Wilson, C. J. (2000). Intrinsic membrane properties underlying spontaneous tonic firing in neostriatal cholinergic interneurons. J. Neurosci., 20(22), 8493–8503.
Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2, 32–48.
Borg-Graham, L. J., Monier, C., & Frégnac, Y. (1998). Visual input evokes transient and strong shunting inhibition in visual cortical neurons. Nature, 393(6683), 369–373.
Bruck, J., & Roychowdhury, V. P. (1990). On the number of spurious memories in the Hopfield model. IEEE Trans. Inform. Theory, IT-36(2), 393–397.
Bullock, T. H. (1997). Signals and signs in the nervous system: The dynamic anatomy of electrical activity is probably information-rich. Proc. Natl. Acad. Sci. USA, 94, 1–6.
Cajal, R. S. (1890). Manual de Anatomía Patológica General (in Spanish).
Calvin, W. H. (1996). The cerebral code: Thinking a thought in the mosaics of the mind. Cambridge, MA: MIT Press.
Carandini, M., & Ferster, D. (2000). Membrane potential and firing rate in cat primary visual cortex. J. Neurosci., 20, 470–484.
Castellani, G. C., Quinlan, E. M., Cooper, L. N., & Shouval, H. Z. (2001). A biophysical model of bidirectional synaptic plasticity: Dependence on AMPA and NMDA receptors. Proc. Natl. Acad. Sci. USA, 98(22), 12772–12777.
Chklovskii, D. B., Mel, B. W., & Svoboda, K. (2004). Cortical rewiring and information storage. Nature, 431, 782–788.
Chou, P. A. (1989). The capacity of the Kanerva associative memory. IEEE Trans. Inform. Theory, 35(2), 281–298.
Cohen, M. A., & Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst., Man, and Cybern., 13, 815–826.
Connor, J. A., & Stevens, C. F. (1971). Voltage clamp studies of a transient outward membrane current in gastropod neural somata. J. Physiol., 213, 21–30.
Cooper, L. N., Intrator, N., Blais, B. S., & Shouval, H. Z. (2004). Theory of cortical plasticity. Singapore: World Scientific.
Culican, S. M., Nelson, C. C., & Lichtman, J. W. (1998). Axon withdrawal during synapse elimination at the neuromuscular junction is accompanied by disassembly of the postsynaptic specialization and withdrawal of Schwann cell processes. J. Neurosci., 18(13), 4953–4965.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.
Dembo, A. (1989). On the capacity of associative memories with linear threshold functions. IEEE Trans. Inform. Theory, 35(4), 709–720.
Dennis, M. J., & Yip, J. W. (1978). Formation and elimination of foreign synapses on adult salamander muscle. J. Physiol., 274, 299–310.
Dudai, Y. (1989). Neurobiology of memory. New York: Oxford University Press.
Epsztein, J., Brecht, M., & Lee, A. K. (2011). Intracellular determinants of hippocampal CA1 place and silent cell activity in a novel environment. Neuron, 70, 109–120.
Fairhall, A. L., Lewen, G. D., Bialek, W., & van Steveninck, R. R. R. (2001). Efficiency and ambiguity in an adaptive neural code. Nature, 412, 787–792.
Fell, J., Roschke, J., & Beckmann, P. (1993). Deterministic chaos and the first positive Lyapunov exponent: A nonlinear analysis of the human electroencephalogram during sleep. Biol. Cybern., 69, 139–146.
Gage, F. H. (2002). Neurogenesis in the adult brain. J. Neurosci., 22(3), 612–613.
Gal, A., Eytan, D., Wallach, A., Sandler, M., Schiller, J., & Marom, S. (2010). Dynamics of excitability over extended timescales in cultured cortical neurons. J. Neurosci., 30(48), 16332–16342.
Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E, 51, 738–758.
Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models. Cambridge: Cambridge University Press.
Gibson, D. A., & Ma, L. (2011). Developmental regulation of axon branching in the vertebrate nervous system. Development, 138, 183–195.
Granit, R., Kernell, D., & Shortess, G. K. (1963). Quantitative aspects of repetitive firing of mammalian motoneurons caused by injected currents. J. Physiol., 168, 911–931.
Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806.
Hayashi, H., & Ishizuka, S. (1992). Chaotic nature of bursting discharges in the Onchidium pacemaker neuron. J. Theor. Biol., 156, 269–291.
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117, 500–544.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79, 2554–2558.
Hsia, A. Y., Malenka, R. C., & Nicoll, R. A. (1998). Development of excitatory circuitry in the hippocampus. J. Neurophysiol., 79, 2013–2024.
Hunt, R. W. (2004). The reproduction of colour (6th ed.). Chichester, UK: Wiley.
Huttenlocher, P. R. (1979). Synaptic density in human frontal cortex: Developmental changes and effects of aging. Brain Res., 163, 195–205.
Intrator, N., & Cooper, L. N. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neur. Netw., 5, 3–17.
Izhikevich, E. M. (2001). Resonate-and-fire neurons. Neur. Netw., 14(6), 883–894.
Jolivet, R., Lewis, T. J., & Gerstner, W. (2004). Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol., 92, 959–976.
Kanerva, P. (1988). Sparse distributed memory. Cambridge, MA: MIT Press.
Katz, L. C., & Shatz, C. J. (1996). Synaptic activity and the construction of cortical circuits. Science, 274, 1133–1138.
Kayser, C., Montemurro, M. A., Logothetis, N. K., & Panzeri, S. (2009). Spike-phase coding boosts and stabilizes the information carried by spatial and temporal spike patterns. Neuron, 61, 597–608.
Kerchner, G. A., & Nicoll, R. A. (2008). Silent synapses and the emergence of a postsynaptic mechanism for LTP. Nature Reviews Neuroscience, 9, 813–825.
Kimata, T., Tanizawa, Y., Can, Y., Ikeda, S., Kuhara, A., & Mori, I. (2012). Synaptic polarity depends on phosphatidylinositol signaling regulated by myo-inositol monophosphatase in Caenorhabditis elegans. Genetics, 191(2), 509–521.
Knoblauch, A., & Sommer, F. T. (2016). Structural plasticity, effectual connectivity, and memory in cortex. Front. Neuroanat., 10, 63.
Knoebel, R. A. (1981). Exponentials reiterated. Amer. Math. Month., 88, 235–252.
Koenigs, G. (1884). Recherches sur les intégrales de certaines équations fonctionnelles. Annales scientifiques de l'École Normale Supérieure, 3(1), 3–41.
Kuh, A., & Dickinson, B. W. (1989). Information capacity of associative memories. IEEE Trans. Inform. Theory, 35(1), 59–68.
Kwakernaak, H., & Sivan, R. (1972). Linear optimal control systems. New York: Wiley-Interscience.
Lankheet, M. J. M., Klink, P. C., Borghuis, B. G., & Noest, A. J. (2012). Spike-interval triggered averaging reveals a quasi-periodic spiking alternative for stochastic resonance in catfish electroreceptors. PLoS One, 7(3), e32786.
Lapicque, L. (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen., 9, 620–635.
Lemeray, E. M. (1895a). Sur les fonctions itératives et sur une nouvelle fonction. In Proceedings of the Association Française pour l'Avancement des Sciences, Congrès Bordeaux (vol. 2, pp. 149–165).
Li, T.-Y., & Yorke, J. A. (1975). Period three implies chaos. Amer. Math. Month., 82, 985–992.
Liao, D., Zhang, X., O'Brien, R., Ehlers, M. D., & Huganir, R. L. (1999). Regulation of morphological postsynaptic silent synapses in developing hippocampal neurons. Nat. Neurosci., 2(1), 37–43.
Lisman, J. E., & Grace, A. A. (2005). The hippocampal-VTA loop: Controlling the entry of information into long-term memory. Neuron, 46(5), 703–713.
Losi, G., Prybylowski, K., Fu, Z., Luo, J. H., & Vicini, S. (2002). Silent synapses in developing cerebellar granule neurons. J. Neurophysiol., 87(3), 1263–1270.
Lundstrom, B. N., & Fairhall, A. L. (2006). Decoding stimulus variance from a distributional neural code of interspike intervals. J. Neurosci., 26, 9030–9037.
Marder, E., Abbott, L. F., Turrigiano, G. G., Liu, Z., & Golowasch, J. (1996). Memory from the dynamics of intrinsic membrane currents. Proc. Natl. Acad. Sci. USA, 93, 13481–13486.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5, 115–133.
McEliece, R. J., Posner, E. C., Rodemich, E. R., & Venkatesh, S. (1987). The capacity of the Hopfield associative memory. IEEE Trans. Inform. Theory, IT-33(4), 461–482.
McGahon, B. M., Martin, D. S., Horrobin, D. F., & Lynch, M. A. (1999). Age-related changes in synaptic function: Analysis of the effect of dietary supplementation with omega-3 fatty acids. Neuroscience, 94(1), 305–314.
Melnick, I. V. (1994). Electrically silent neurons in the substantia gelatinosa of the rat spinal cord. Fiziol. Zh., 56(5), 34–39. (In Russian.)
Miller, K. D., & Fumarola, F. (2012). Mathematical equivalence of two common forms of firing rate models of neural networks. Neural Comput., 24, 25–31.
O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175.
O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. New York: Oxford University Press.
Panzeri, S., Brunel, N., Logothetis, N. K., & Kayser, C. (2009). Sensory neural codes using multiplexed temporal scales (review). Trends in Neurosci., 33, 111–120.
Pettersen, K. H., & Einevoll, G. T. (2008). Amplitude variability and extracellular low-pass filtering of neuronal spikes. Biophys. J., 94, 784–802.
Peterfreund, N., & Baram, Y. (1994). Second-order bounds on the domain of attraction and the rate of convergence of nonlinear dynamical systems and neural networks. IEEE Trans. Neur. Netw., 5(4), 551–560.
Poon, C.-S., Young, D. L., & Siniaia, M. (2000). High-pass filtering of carotid-vagal influences on expiration in rat: Role of N-methyl-D-aspartate receptors. Neurosci. Let., 284, 5–8.
Stratton, P., & Wiles, J. (2015). Global segregation of cortical activity and metastable dynamics. Front. Syst. Neurosci., 9, 119.
Tanizawa, Y., Kuhara, A., Inada, H., Kodama, E., Mizuno, T., & Mori, I. (2006). Inositol monophosphatase regulates localization of synaptic components and behavior in the mature nervous system of C. elegans. Genes and Develop., 20(23), 3296–3310.
Tessier, C. R., & Broadie, K. (2009). Activity-dependent modulation of neural circuit synaptic connectivity. Front. Mol. Neurosci., 2, 8.
Vanderhaeghen, P., & Cheng, H. J. (2010). Guidance molecules in axon pruning and cell death. Cold Spring Harbor Perspect. in Biol., 2(6), 1–18.
Vardi, R., Wallach, A., Kopelowitz, E., Abeles, M., Marom, S., & Kanter, I. (2012). Synthetic reverberating activity patterns embedded in networks of cortical neurons. Europhysics Letters, 97, 66002.
Weiner, J. A., Burgess, R. W., & Jontes, J. (2013). Mechanisms of neural circuit formation. Front. Mol. Neurosci., 6, 12.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J., 12, 1–24.