## Abstract

Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate. Formally identifying these polarities as on-off neuronal polarity gates, we analytically show that cortical circuit structure, behavior, and memory are all governed by the combined potent effect of these gates, which we collectively term *circuit polarity*. Employing widely accepted and biologically validated firing rate and plasticity paradigms, we show that circuit polarity is mathematically embedded in the corresponding models. Moreover, we show that the firing rate dynamics implied by these models are driven by ongoing circuit polarity gating dynamics. Furthermore, circuit polarity is shown to segregate cortical circuits into internally synchronous, externally asynchronous subcircuits, defining their firing rate modes in accordance with different cortical tasks. In contrast to the Hebbian paradigm, which is shown to be susceptible to mutual neuronal interference in the face of asynchrony, circuit polarity is shown to block such interference. Noting convergence of synaptic weights, we show that circuit polarity holds the key to cortical memory, having a segregated capacity linear in the number of neurons. While memory concealment is implied by complete neuronal silencing, memory is restored by reactivating the original circuit polarity. Finally, we show that incomplete deterioration or restoration of circuit polarity results in memory modification, which may be associated with partial or false recall, or novel innovation.

## 1 Introduction

The nature of cortically embedded information has been one of the most intensely studied yet highly evasive mysteries of neuroscience for over a century. Traditionally, the experimental nature of biological research has required a high degree of phenomenological specificity. Consequently, the seemingly separate issues of cortical development, interneuron connectivity, neuronal firing dynamics, learning, and memory have been studied in essentially complete mutual isolation. Yet recent years have seen a trend toward integrative neuroscience, driven by a realization that, on the one hand, practically every cortical function combines a variety of molecular, biological, and physical processes, and, on the other, different cortical functions are often contrived by similar mechanisms. It is also becoming increasingly clear that a complete understanding of such integration would involve a new look at the cortical arena, incorporating advanced theoretical reasoning and new methods of analysis. The relatively recent emergence of numerous publications presenting advanced mathematical analysis of neurobiological processes is seemingly strong evidence to this effect. Recent interdisciplinary interest in general common notions, such as information, learning, memory, computation, cognition, and control, is further evidence of the same effect.

In this study, we show that cortical connectivity, behavior, and memory are all controlled by on-off gates, corresponding to neuronal membrane and synapse polarities with respect to a certain potential value. Extending the local nature of such polarities to multineuron circuits, we find that such circuits are segregated by circuit polarity into smaller subcircuits. Employing widely accepted paradigms of firing rate and plasticity, we first put experimental neurobiological findings of membrane and synapse polarities on mathematical grounds. Different firing rate modes, specifically, silent, fixed-point, oscillatory, and chaotic, are shown to be governed by negative, positive, and mixed polarity gating. Neural circuits are shown to be segregated by polarization into synchronous subcircuits. Changes in synchronous circuit size, caused by changes in neuronal membrane polarities, are shown to change circuit firing modes. Circuit segregation by synapse silencing is shown to yield interference-free asynchrony between synchronous subcircuits, facilitating, by different firing-rate dynamics, simultaneous execution of different cortical functions. While neural circuits often involve thousands of neurons, we limit our illustration of the underlying concepts to circuits of a few neurons for graphical convenience, without compromising coverage of the entire range of dynamic modes. Convergence of synaptic weights holds the key to circuit memory, shown to have a capacity linear in the number of circuit neurons. Following memory deterioration, caused by membrane silencing due to external inputs or electric charge dissipation, concealed memory, maintained by constant synaptic weights, is restored by presentation of the original circuit activation. 
Partial and false memory, as well as novelty and innovation effects, in the form of new circuit structures and firing dynamics, are created by incomplete deterioration or restoration of the circuit polarity that has instigated the original memory.

## 2 Historical Account

Since most neuroscience research has evolved along separate topical paths, any historical account must make a similar distinction among these topics. Following the discovery of the neuron in the late nineteenth century (Cajal, 1890), early investigations of cortically embedded information concerned the nature of neuronal firing (Lapicque, 1907). Elaborate chemical, physical, and mathematical analysis led to the celebrated conductance-based model of action potential (Hodgkin & Huxley, 1952), inspiring the spiking neurons paradigm (Gerstner & Kistler, 2002). Yet firing-rate dynamics, while lacking in detail with respect to the spiking neuron paradigm, have been found to offer not only plausible means for cortical information representation but also a certain mathematical convenience and a reliable representation of empirically observed firing sequences (Wilson & Cowan, 1972; Gerstner, 1995; Dayan & Abbott, 2001; Jolivet, Lewis, & Gerstner, 2004). Following a mathematical analysis of discrete iteration maps for neural network firing rate, a global attractor code, relating the modes of firing rate dynamics to internal neuron properties, has been derived (Baram, 2012, 2013).

Synaptic plasticity has been believed to affect and to be affected by neuronal firing underlying learning and memory in the nervous system (Hebb, 1949; Bienenstock, Cooper, & Munro, 1982; Dudai, 1989; Cooper, Intrator, Blais, & Shouval, 2004). A detailed biophysical model of long-term synaptic potentiation and long-term synaptic depression has been presented (Castellani, Quinlan, Cooper, & Shouval, 2001), supporting the BCM plasticity theory (Bienenstock et al., 1982). Assumed to evolve on a slower timescale, synaptic plasticity dynamics has been often separated from firing dynamics for analytic convenience. While almost all theoretical and experimental studies make the implicit assumption that synaptic efficacy is both necessary and sufficient to account for learning and memory, it has been suggested that learning and memory in neural networks result from an ongoing interplay between changes in synaptic efficacy and intrinsic membrane properties (Marder, Abbott, Turrigiano, Liu, & Golowasch, 1996). The combined effects of firing rate and plasticity time constants on firing rate dynamics corresponding to different developmental stages have been analyzed, laying the ground for analytic unification with respect to neuronal properties, on the one hand, and cortical development, on the other (Baram, 2017a).

Cortical circuit connectivity has been attracting increasing interest for the past three decades. Sensory inputs have been shown to evoke ongoing shunting in visual cortex circuits (Borg-Graham, Monier, & Frégnac, 1998). Activity-independent neural circuit formation in early development (Hsia, Malenka, & Nicoll, 1998; Gibson & Ma, 2011; Weiner, Burgess, & Jontes, 2013) has been found to be followed by activity-dependent circuit formation (Katz & Shatz, 1996; Gage, 2002) and, later, modulation (Tessier & Broadie, 2009). Neural circuit modification by activation and silencing of neuronal membrane and individual synapses has been observed in early development (Melnick, 1994; Atwood & Wojtowicz, 1999; Liao, Zhang, O'Brien, Ehlers, & Huganir, 1999; Losi, Prybylowski, Fu, Luo, & Vicini, 2002; Kerchner & Nicoll, 2008), maturation (Ashby & Isaac, 2011), and later life (McGahon, Martin, Horrobin, & Lynch, 1999). Localized polarity of membrane and synapse potential with respect to a certain activation threshold, supported by extensive molecular studies, has been found to control the directional travel of action potentials between neurons (Arimura & Kaibuchi, 2005; Kimata et al., 2012; Tanizawa et al., 2006). Evidence of circuit-encoded memory has been found in spatial neural maps of place, identified by firing activity (O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Calvin, 1996; Hafting, Fyhn, Molden, Moser, & Moser, 2005; Epsztein, Brecht, & Lee, 2011). Cortical segregation into small groups of neurons has been related by simulation to radius of inhibition and found to have an effect on spiking dynamics, leaving a deeper understanding of the observed activity for future research (Stratton & Wiles, 2015). Synchronous and asynchronous reverberation have been synthetically embedded in local cortical circuits and individual neurons (Vardi et al., 2012). 
Connectivity changes due to synapse elimination (Dennis & Yip, 1978; Huttenlocher, 1979) have been suggested as a means for long-term memory (Balice-Gordon, Chua, Nelson, & Lichtman, 1993) and followed by studies of structure (Balice-Gordon & Lichtman, 1994; Chklovskii, Mel, & Svoboda, 2004; Knoblauch & Sommer, 2016), information capacity (Knoblauch & Sommer, 2016), and cortical segregation (Baram, 2017b).

The notion of cortical memory, attracting considerable interest in mathematical dynamics and information-theoretic circles, has been linked to the convergence and stability of neural network activity models with respect to parametrically stored states. However, the analysis of such models in discrete (McCulloch & Pitts, 1943; Amari, 1972; Hopfield, 1982) and continuous (Cohen & Grossberg, 1983; Peterfreund & Baram, 1994) space and time has presented a sharp trade-off between low storage capacity, sublinear in the number of neurons (McEliece, Posner, Rodemich, & Venkatesh, 1987; Kuh & Dickinson, 1989; Dembo, 1989), on the one hand, and a multitude of spurious outcomes (Bruck & Roychowdhury, 1990), on the other. Sparse distribution of active neurons has been shown to raise the sublinear capacity bound (Baram & Sal'ee, 1992), while an exponential increase in the number of neurons (Kanerva, 1988) has been shown to result in exponential growth in the storage capacity with respect to the dimension of the data stored (Chou, 1989). However, the mechanization of the associative memory concept presented in these works seems to have fallen short of a widely acceptable biological support.

## 3 Neuronal Polarity Gates and Cortical Circuit Segregation

The state of neuronal activity, contrasted by neuronal silence, has been found to depend on the somatic membrane potential being above or below a certain threshold value (about $-$60 mV; Melnick, 1994). The state of synaptic transmissivity, contrasted by synaptic silence, has been found to depend on the value of presynaptic membrane potential, controlled by external stimulation and molecular properties, with respect to a certain threshold value (also about $-$60 mV; Atwood & Wojtowicz, 1999). A detailed biophysical model relates long-term synaptic potentiation and long-term synaptic depression, which are also viewed in a binary (bidirectional) context, to the variable properties and relative numbers of AMPA and NMDA receptors and their external stimulation (Castellani et al., 2001). Following a terminology used for synaptic thresholding (Arimura & Kaibuchi, 2005; Kimata et al., 2012; Tanizawa et al., 2006), we jointly call the binary polarities of neuronal membranes and synapses *local polarity*.

As the definitions of membrane and synapse polarities have been derived directly from experimental neurobiological findings, independent of any particular firing or plasticity models, we are able to address neuronal polarity gates, neural circuit polarity codes, and the corresponding issue of circuit segregation in a discrete-mathematical framework independent of such models. The implications of circuit segregation on circuit firing structure and dynamics and on memory storage and retrieval capacity will be considered in more specific detail in the following sections, where we bring the relevant dynamical models of firing and plasticity into play.

### 3.1 Neuronal Polarity Gates and Neural Circuit Polarity

Neuronal membrane and synapse polarities can be described as on-off gates that in the off (disconnect) position represent negative polarity, and in the on (connect) position represent positive polarity. The polarity gates of a single neuron are represented in Figure 1a by angular discontinuities in line segments representing membranes and synapses as specified. As the neuron-external inputs directly affect the membrane polarity, they are accounted for in the three-word polarity code: (1) positive membrane and self-synapse polarities, (2) positive membrane polarity and negative self-synapse polarity, and (3) negative membrane and self-synapse polarities, as depicted by Figures 1b to 1d, respectively.
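The three-word code can be checked with a short enumeration. The following sketch is a toy illustration, not part of the source model; it assumes, consistent with Figures 1b to 1d, that a negatively polarized (silenced) membrane excludes a positive self-synapse polarity, which rules out the fourth binary combination:

```python
from itertools import product

def single_neuron_polarity_words():
    """Enumerate the valid (membrane, self-synapse) polarity words of a
    single neuron. Assumption (matching the three-word code of Figures
    1b-1d): a negatively polarized membrane excludes a positive
    self-synapse polarity, ruling out the combination (off, on)."""
    words = []
    for membrane, synapse in product((True, False), repeat=2):
        if not membrane and synapse:
            continue  # excluded: silent membrane with active self-synapse
        words.append((membrane, synapse))
    return words

print(single_neuron_polarity_words())  # three words remain
```

The three surviving words correspond, in order, to the polarity patterns of Figures 1b, 1c, and 1d.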

The circuit polarity gates of a two-neuron circuit are depicted in Figure 2a. The total number of circuit polarity states in this case is considerably harder to determine at a glance than it is in the case of a single neuron. We therefore turn to the general case of a circuit of $n$ neurons whose connectivity pattern is determined by the states of polarity of the membranes and the synapses involved.

### 3.2 Cortical Segregation Capacity

Equation 3.2 specifies the size of the circuit polarity code (or vocabulary of words) for a circuit of size $n$. However, only one of these words can be expressed by the circuit throughout the duration of a given circuit polarity pattern. As we show in the following section, for given internal properties of the neurons and circuit-external activation, the state of circuit polarity uniquely determines the firing modes of the circuit neurons. Viewing, then, the circuit polarity state at a given time as the information expressed by the circuit at that time, the instantaneous expression capacity of a circuit is one, regardless of the circuit size, $n$. For the single neuron illustrated in Figure 1, only one of the three polarity patterns (b–d) can be expressed by the neuron at a time. For a circuit of many neurons, the representation of one word at a time seems highly wasteful. How can more than one word be expressed at a time by one circuit? The answer comes in the form of neural circuit segregation into smaller subcircuits. A subcircuit is said to be segregated from the rest of the circuit when it does not affect and is not affected by the rest of the circuit. Such segregation can be done by means of circuit polarization. Specifically, membrane silencing will segregate a silent neuron from the rest of the circuit, while synapse silencing can segregate subcircuits of active neurons with respect to each other. Figure 2b illustrates the mutual segregation of the neurons of a two-neuron circuit by silencing interneuron synapses.
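The segregation criterion — a subcircuit neither affects nor is affected by the rest of the circuit — can be made concrete as a connected-components computation on the polarity-gated connectivity graph. The sketch below is an illustration under assumed data structures (a membrane-polarity vector and a synapse-polarity matrix), not part of the source model:

```python
def segregated_subcircuits(membrane, synapse):
    """Partition a polarity-gated circuit into mutually segregated subcircuits.

    membrane: list of bool; membrane[i] is True if neuron i is active.
    synapse:  synapse[i][j] is True if the synapse from neuron j to neuron i
              is transmissive (positive polarity).
    A silent neuron is segregated on its own; active neurons joined by a
    transmissive synapse in either direction share one subcircuit.
    """
    n = len(membrane)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(n):
            if i != j and membrane[i] and membrane[j] and (synapse[i][j] or synapse[j][i]):
                parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

For the two-neuron circuit of Figure 2b, silencing both interneuron synapses yields the two singleton subcircuits `[[0], [1]]`.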

So far, we have considered only the binary components of a neural circuit, namely, the neuronal membrane and synapse polarity gates and their information representation capacity. In the following sections, we address the dynamics of neuronal and neural circuit firing activity, employing widely accepted firing rate and plasticity models.

## 4 Dynamics of Local Polarity and Firing Rate in the Individual Neuron

### 4.1 Underlying Firing Rate and Plasticity Model

### 4.2 Dynamics of Firing Rate under Negative Polarity

Scalar global attractors are graphically described by cobweb diagrams (Koenigs, 1884; Lemeray, 1885; Knoebel, 1981; Abraham, Gardini, & Mira, 1997), which are initiated at some value, $\upsilon(0)$, on the map, then connected horizontally to the diagonal $\upsilon(k)=\upsilon(k-1)$, then connected vertically to the map, and so on. The cobweb diagram depicted in Figure 4a, corresponding to the parameter values $\tau_m=2$, $\tau_\omega=300$, $\tau_\theta=0.1$, and $u=-1$, illustrates the membrane silencing process, converging to the point $p$ placed at the origin and represented by a red $X$ (note that by equations 4.7 and 4.10, $\omega$ becomes 0, regardless of the values of $\tau_\omega$ and $\tau_\theta$). The cobweb diagram depicted in Figure 4b, corresponding to the parameters $\tau_m=2$ and $u=1$, illustrates the synapse silencing process, converging to the global attractor at point $p$. It follows, then, that in contrast to membrane silencing, synapse silencing, even in the case of the single feedback synapse of the individual neuron, allows for continued neuronal firing due to external activation and feedback from other circuit neurons.
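The cobweb construction just described can be sketched generically. The following helper (an illustration for an arbitrary scalar map $f$, not tied to the specific parameters of equation 4.5) generates the alternating horizontal and vertical vertex sequence of the diagram:

```python
def cobweb(f, v0, steps):
    """Generate the cobweb-diagram vertices for the scalar map v(k) = f(v(k-1)).

    Starting at (v0, f(v0)) on the map, each iteration moves horizontally
    to the diagonal v(k) = v(k-1), then vertically back to the map.
    Returns a list of (x, y) points suitable for a line plot.
    """
    pts = [(v0, 0.0), (v0, f(v0))]
    v = v0
    for _ in range(steps):
        w = f(v)
        pts.append((w, w))      # horizontal move to the diagonal
        pts.append((w, f(w)))   # vertical move back to the map
        v = w
    return pts
```

For a contractive map such as the membrane-silencing case, where $\lambda_1=\lambda_2=0.6065$ and the map reduces to $f(\upsilon)=0.6065\,\upsilon$, the generated points spiral down the diagonal to the attractor at the origin.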

### 4.3 Dynamics of Firing Rate under Positive and Mixed Polarities

Figure 5 shows cobweb diagrams representing the global attractors of a single neuron with an active self-synapse and silenced synapses of other preneurons in the same circuit. The characteristic condition ranges specified below constitute a condensed but complete coverage of the entire range of the global attractor alphabet of firing-rate dynamics (Baram, 2013):

*Chaotic attractor.* For $u>0$, $\lambda_2<-1$, $c_1\le 0$, and $c_2<0$, yielding $q_\upsilon \ge p_\upsilon$ (where $q_\upsilon$ and $p_\upsilon$ are the vertical coordinates of the corresponding points on the map in Figure 5a); the attractor is represented in Figure 5a by the interval $ab$ on the diagonal $\upsilon(k)=\upsilon(k-1)$, with $a$, $b$, and $q$ created by a cobweb sequence initiating at the bend point $g$, which defines the boundaries of the attractor, as shown in the figure. An orbit of period three, $a_1\to a_2\to a_3\to a_1$, rendering Li-Yorke chaos (Li & Yorke, 1975), is defined by $a_1=f^3(a_1)$, $a_1\ne f(a_1)$, where $f$ is the map (here, equation 4.5) and $f^3(x)=f(f(f(x)))$. Starting with $a_1>g$, we obtain $a_2=\lambda_1 a_1$, $a_3=\lambda_1^2 a_1$, and $a_1=\lambda_2\lambda_1^2 a_1+\beta u$, yielding $a_1=\beta u/(1-\lambda_2\lambda_1^2)$.

*Largely oscillatory attractor.* For $u>0$, $\lambda_2<-1$, $c_1>0$, and $c_2<0$, yielding $q_\upsilon<p_\upsilon$, the attractor, represented in Figure 5b by the two intervals $ab$ and $cd$, separated by the repelling interval $bc$, is largely oscillatory. Within the attractor domain, defined by a cobweb initiating at the bend point $g$ and ending at point $c$, trajectories alternate between the two intervals $ab$ and $cd$. As implied by the cobweb diagram, depending on the circuit parameters, this alternation may, but need not necessarily, repeat precisely the same points, which may then represent oscillatory or cyclically multiplexed dynamics (detailed conditions are specified in Baram, 2013).

*Fixed-point attractor.* For $u>0$ and $-1<\lambda_2\le 1$, we have a fixed-point attractor at $p$. For $-1<\lambda_2\le 0$, the fixed point will be approached by alternate convergence (an increasing $\upsilon(k)$ step followed by a decreasing $\upsilon(k+1)$ step and vice versa). For $0<\lambda_2\le\lambda_1$, convergence will be monotone and bimodal (according to $f_1$ far from $p$ and according to $f_2$ near $p$, as illustrated by Figure 5c), and for $\lambda_1<\lambda_2\le 1$, monotone and unimodal (according to $f_2$).

*Silent attractor.* For $u\le 0$ and $\lambda_2\le 1$, the attractor is at the origin.
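The period-3 computation in the chaotic case can be verified numerically. The sketch below assumes an illustrative piecewise-linear form of the map — $f_1(x)=\lambda_1 x$ above the bend point $g$ and $f_2(x)=\lambda_2 x+\beta u$ at or below it — with made-up parameter values chosen only so that the orbit $a_1\to a_2\to a_3\to a_1$ is realized; it is not a reconstruction of equation 4.5:

```python
# Illustrative parameters (not from the source): lam1 in (0, 1), lam2 < -1,
# with the bend point g and drive beta_u chosen so that a1 = 1.0 exactly.
lam1, lam2, g = 0.6, -4.7, 0.4
beta_u = 1.0 - lam2 * lam1 ** 2

def f(x):
    """Piecewise-linear map: f1 (slope lam1) above the bend point g,
    f2 (slope lam2, offset beta_u) at or below it."""
    return lam1 * x if x > g else lam2 * x + beta_u

a1 = beta_u / (1.0 - lam2 * lam1 ** 2)  # period-3 seed; equals 1.0 here
a2, a3 = f(a1), f(f(a1))                # a2 = lam1*a1, a3 = lam1^2 * a1
assert abs(f(a3) - a1) < 1e-9           # a1 = f^3(a1): period three
assert abs(f(a1) - a1) > 0.1            # a1 != f(a1): not a fixed point
print(a1, a2, a3)
```

With these values the orbit visits $a_1=1.0$, $a_2=0.6$, $a_3=0.36$ and returns to $a_1$, satisfying the Li-Yorke period-3 condition.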

Specifically, Figure 5 employs the following cases of the map, equations 4.5 to 4.8:

a. $u=10$, $\tau_m=2$, $\tau_\omega=10{,}000$, $\tau_\theta=0.1$, yielding $\lambda_1=0.6055$, $\lambda_2=-4.7336$, $c_1=-3.1701$, $c_2=-1.8711$, $\omega=-13.5720$

b. $u=10$, $\tau_m=2$, $\tau_\omega=10{,}000$, $\tau_\theta=1$, yielding $\lambda_1=0.6055$, $\lambda_2=-1.8160$, $c_1=0.3692$, $c_2=-0.1015$, $\omega=-6.1569$

c. $u=1$, $\tau_m=2$, $\tau_\omega=5$, $\tau_\theta=1$, yielding $\lambda_1=0.6065$, $\lambda_2=0.5308$, $\omega=-0.1925$

d. $u=-1$, $\tau_m=2$, yielding $\lambda_1=\lambda_2=0.6065$, $\omega=0$

For each of the cases, the steady-state value of the synaptic weight, $\omega$, was calculated by driving equations 4.5, 4.7, and 4.8 with $\omega(0)=0$ and $\upsilon(0)=1$ to convergence (practically, this was achieved for $N=100$), and the corresponding values of $\lambda_1$, $\lambda_2$, $c_1$, and $c_2$ were calculated by equations 4.13, 4.14, 4.16, and 4.17, respectively. As can be verified, the conditions stated above for the chaotic, largely oscillatory, fixed-point, and silent attractor types are satisfied, respectively, by the parameter values obtained.
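The condition ranges stated above can be restated as a small classifier and checked against the four parameter sets just listed. The following is only a transcription of the stated conditions, with hypothetical function and argument names:

```python
def attractor_type(u, lam2, c1=None, c2=None):
    """Classify the global attractor by the condition ranges of section 4.3.
    c1 and c2 are needed only when lam2 < -1."""
    if u <= 0 and lam2 <= 1:
        return "silent"
    if u > 0 and -1 < lam2 <= 1:
        return "fixed point"
    if u > 0 and lam2 < -1 and c2 < 0:
        return "chaotic" if c1 <= 0 else "largely oscillatory"
    return "outside the listed ranges"

# The four parameter sets listed above, in order:
print(attractor_type(10, -4.7336, -3.1701, -1.8711))  # chaotic
print(attractor_type(10, -1.8160, 0.3692, -0.1015))   # largely oscillatory
print(attractor_type(1, 0.5308))                      # fixed point
print(attractor_type(-1, 0.6065))                     # silent
```

Each of the four parameter sets lands in exactly one of the listed condition ranges, matching the attractor types illustrated in Figure 5.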

As we have shown, the dynamic modes of firing rate are defined by the parameters $\lambda_1$, $\lambda_2$, $c_1$, and $c_2$, which, through equations 4.13 to 4.17, relate to the parameters of the model, equation 4.12, related, in turn, to the model, equations 4.5 and 4.6. It can be seen from Figure 5 that while the cobweb trajectories of the chaotic mode and the largely oscillatory mode involve endless transitions from the function $f_1$ (blue) to the function $f_2$ and vice versa, the cobweb trajectory of the fixed-point mode (c) may, depending on the initial condition, involve at most one such transition (from $f_1$ to $f_2$), while that of the silent mode (d), associated with negative membrane polarity, does not involve any transition between the two functions. As each transition between the two functions represents, through the function $f$ of equation 4.2, neuronal polarity gating, it can be seen that the firing rate dynamics are driven by the polarity gating dynamics. Conversely, the dynamics of the model, equation 4.12 (hence, the dynamics of the model, equation 4.5), drive the neuronal gating dynamics. It follows, then, that chaotic and oscillatory firing rate dynamics, fixed-point firing rate dynamics, and silence are governed by mixed polarity gating, positive polarity gating, and negative polarity gating, respectively.

#### 4.3.1 Simulation Results

We have simulated the model, equations 4.5 to 4.9, for cases a to d above with the specified parameter values. The simulation results are shown in Figure 6, where the values of the feedback synaptic weight $\omega $ throughout the run, the firing rate during learning (firing mode convergence) in the early period of the run, the firing rate during learning in the late period of the run, and the firing rate during retrieval in the early period of the run, initiated at the limit value of the synaptic weight obtained in learning, are given, respectively, from left to right in the four columns. It can be seen that in each case, the feedback synaptic weight converged to a constant value. It can also be seen that while convergence to a persistent firing mode in retrieval is noticeably faster than convergence in learning for the first three cases, learning and retrieval end, as desirable, in the same attractor in all cases. The convergent modes of firing visually conform to the attractor types predicted by the theory and illustrated in Figure 5. Specifically, case a produced a highly disordered, bifurcated, and bursting sequence, as expected from a chaotic attractor, case b produced a quasi-four-cycle, multiplexing two uneven two-cycles, case c converged to a constant sequence, and case d converged to a silent (zero-valued) sequence. Similarly behaved biological firing modes, notably, chaotic (Hayashi & Ishizuka, 1992; Fell, Roschke, & Beckmann, 1993), quasi-cyclic (Lankheet, Klink, Borghius, & Noest, 2012), multiplexed (Panzeri, Brunel, Logothetis, & Kayser, 2009), fixed point, or tonic (Bennett, Callaway, & Wilson, 2000) and silent (Melnick, 1994), have been reported.

It is particularly interesting to note the bifurcations of the firing rate sequences during the learning phases in cases a and b, around time samples $k=200$ in case a and $k=1000$ and $k=5000$ in case b. Such behavior has been observed in neurobiological data and reported with some fascination, suggesting that it may require rethinking existing hypotheses, modeling, and beliefs concerning long-term neuronal behavior, stability, and function. However, such behavior is in fact predicted by our analysis. It can be seen in the first column of both cases a and b that the synaptic weight, while converging, becomes increasingly negative. This, by equation 4.14, steepens the negative slope $\lambda_2(k)$ of $f_2$, which, as can be seen in Figures 5a to 5c, would cause transitions from a constant firing rate (see Figures 5c and 6c) to largely oscillatory (see Figures 5b and 6b) and, further, to chaotic (see Figures 5a and 6a). As such transitions are simply consequences of the corresponding learning processes, they may be justified by the functionality of the learned behaviors rather than viewed as instability in long-term neuronal behavior.

## 5 Cortical Firing Mode Segregation and Control by Circuit Polarization

### 5.1 Underlying Firing-Rate and Plasticity Models

### 5.2 Hebbian and Asynchronous Synaptic Polarity-Gated Firing Mode Segregation

The Hebb paradigm (Hebb, 1949; Hertz, Krogh, & Palmer, 1991), supported by the eigenfrequency preference paradigm of spiking neurons (Izhikevich, 2001), implies inhibitory effects between asynchronously firing neurons. This will regularly tend to segregate a circuit into internally synchronous, externally asynchronous subcircuits. However, as we show next, the resulting mutually asynchronous firing of the segregated subcircuits is likely to result in mutual interference between the corresponding firing-rate modes. On the other hand, as we show, the segregation of two internally synchronous, externally asynchronous subcircuits by synapse silencing results in interference-free firing. It should be noted that the analysis of asynchronous circuit firing-rate dynamics cannot employ cobweb diagrams, which are essentially limited to scalar systems. We therefore employ representative simulation of small circuits. While circuits of many asynchronous neurons would require a prohibitive number of simulation plots, small circuits are rather effective in demonstrating the main concepts of interest.

The Hebb paradigm, if fully applicable, should segregate the fully connected three-neuron circuit in Figure 8a into two totally isolated subcircuits, the first consisting of two synchronous neurons (neurons 1 and 2) firing in unison according to one firing-rate mode and the second consisting of one neuron (neuron 3) firing in a different firing-rate mode. Yet as can be seen in Figure 8a, there is visible mutual interference between the firing modes. When neuron 1 is isolated by polarity-gated synapse silencing (see Figure 7b), it produces a smooth fixed-point firing rate mode, while neurons 2 and 3 continue to interfere with each other, as can be seen in Figure 8b. When neuron 3 is isolated from neurons 1 and 2 by polarity-gated synapse silencing (see Figure 7c), the latter form a two-neuron circuit, which fires in fully matched fixed-point modes, while neuron 3 fires in an oscillatory mode, as can be seen in Figure 8c. When each of the neurons is isolated by polarity-gated synapse silencing (see Figure 7d), each fires in its characteristic mode. It can be seen in Figures 8c and 8d, respectively, that there is a slight difference between the firing modes of neurons 1 and 2 in cases c and d. This implies that while in both cases there is complete synchrony between the two neurons, the two-neuron circuit connectivity in case c does not allow each of the two neurons complete freedom to fire in its own independent firing rate mode.

It follows that while the Hebbian paradigm may result in strong inhibitory effects between asynchronously firing neurons and circuits, it need not necessarily completely eliminate mutual interference. On the other hand, the polarity-gated synapse silencing mechanism simply eliminates directional interaction, removing the corresponding interference.

### 5.3 Firing Mode Control by Synchronous Polarity-Gated Circuit Segregation

A cortical circuit of identical neurons may be segregated by polarity-gated synapse silencing into internally synchronous, externally asynchronous subcircuits. Circuit and subcircuit size can be further reduced by polarity-gated membrane silencing. As we show next, circuit and subcircuit size modification results in firing-rate mode modification, which may serve different cortical functions even when the neurons are identical.

Specifically, the circuits represented by Figure 9 obey the model of equations 5.1 to 5.3. Keeping in mind the linear ratio between membrane potential and firing rate ($7.2\pm0.6$ spikes $\cdot$ sec$^{-1}$ $\cdot$ mV$^{-1}$; Carandini & Ferster, 2000), we have uniformly assumed for illustration purposes the parameter values $u=4$, $\tau_m=2$, $\tau_\omega=300$, $\tau_\theta=0.1$, in units of mV and msec, respectively, zero initial conditions, with $N=100$ taking $\omega(k)$ to its constant limit $\omega$, and the circuit size values specified below, yielding the corresponding modal parameter and attractor condition values:

a. $n=10$, yielding $\omega=-0.7870$, $\lambda_1=0.6065$, $\lambda_2=-2.4901$, $c_1=-0.4485$, $c_2=-0.5103$

b. $n=5$, yielding $\omega=-1.3128$, $\lambda_1=0.6065$, $\lambda_2=-1.9762$, $c_1=0.1748$, $c_2=-0.1987$

c. $n=2$, yielding $\omega=-2.5695$, $\lambda_1=0.6065$, $\lambda_2=-1.4155$, $c_1=0.8550$, $c_2=0.1415$

d. $n=1$, yielding $\omega=-3.8993$, $\lambda_1=0.6065$, $\lambda_2=-0.9277$, $c_1=1.4467$, $c_2=0.4373$

An examination of the conditions for the global attractor types specified for the individual neuron (see section 4) shows that the above cases represent (a) chaotic, (b) largely oscillatory, (c) oscillatory, and (d) fixed-point global attractors, as ratified by the cobweb diagrams in Figure 9. We also note that a simulated increase of the number of neurons above 10 in a synchronous fully connected circuit has resulted in a chaotic global attractor, like that for $n=10$, which indicates that the four attractors represented by the above four cases span the entire range of active global attractor types.

Simulating the circuits of cases a to d with identical initial conditions, $\upsilon (0)=0$, we obtained the results displayed in Figure 10. It can be seen that decreasing the circuit size changes the firing mode from chaotic to largely oscillatory, then oscillatory, then constant, as predicted by the analysis and the cobweb diagrams of Figure 9. At the same time, the learning and the retrieval response times are increased considerably. In all cases, retrieval is considerably faster than learning.

Cobweb diagrams are essentially restricted to scalar evolution, and therefore the analysis and, for comparison purposes, the corresponding simulations have addressed synchronous circuits of identical neurons under the same initial conditions. However, a broader notion of synchronization under different initial conditions can be illustrated by simulation. For example, case c above, illustrated by case c of Figures 9 and 10 for a fully connected circuit of two identically parameterized neurons under the same initial conditions for the two neurons, is now illustrated in Figure 11 under different initial conditions, $\upsilon_1(0)=1$ and $\upsilon_2(0)=2$, respectively. The simulated sequences of firing rate corresponding to the two neurons are shown in Figure 11a. It can be seen that following different initial periods (see Figure 11b), the two neurons converge to fully synchronized behaviors (see Figure 11c). We have further found similar synchronization of fully connected neurons under different initial conditions in a large variety of simulated cases.
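The robustness of such synchronization can be illustrated with a toy update rule (a hypothetical stand-in, not equations 5.1 to 5.3): in a fully connected circuit with equal synaptic weights, including self-synapses, every neuron receives the identical summed circuit input, so all neurons agree after a single update regardless of initial conditions. In the paper's model, with distinct self- and cross-synapse dynamics, synchronization instead emerges after a transient (see Figure 11); the equal-weight case shows the mechanism in its simplest form:

```python
def step(v, w, u, f):
    """One synchronous update of a fully connected circuit with equal
    weights: every neuron receives the identical total input w*sum(v) + u."""
    total = w * sum(v) + u
    return [f(total) for _ in v]

# Toy saturating rate function and illustrative parameters (not from the source).
rate = lambda x: max(0.0, min(1.0, x))
v = [1.0, 2.0]                          # different initial conditions
for k in range(5):
    v = step(v, w=-0.3, u=0.9, f=rate)
    assert v[0] == v[1]                 # synchronized from the first update on
print(v)
```

Because the total input is shared, the initial disparity is erased in one step, after which the two neurons evolve as a single scalar system.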

## 6 Memory by Circuit Polarity

As shown in the previous sections, the behavior of a neural circuit is completely determined by external activation, which, together with internal properties, defines the circuit polarity and, with it, the circuit's firing modes. Given the neuronal parameters, specifically $\tau_m$, $\tau_\omega$, and $\tau_\theta$, the only variable internal properties are the synaptic weights, which converge to constant limit values (Cooper et al., 2004), a process often referred to as *learning*. The limit synaptic weights represent the memory of the circuit. Once the synaptic weights have attained their limit values, exertion of the same external inputs that instigated learning will reproduce the original circuit behavior, represented by firing-rate modes. Moreover, as illustrated in Figure 11, the memorized firing-rate attractors are robustly retrieved in the face of different initial conditions.
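The statement that learning is convergence of synaptic weights to limit values can be illustrated with a deliberately simple stand-in plasticity rule. The sketch below uses Oja's rule with a constant presynaptic rate (an assumption for illustration, not the paper's plasticity model); the weight settles at a constant limit, which then plays the role of the stored memory.

```python
# Stand-in plasticity rule (Oja), not the paper's model: with a constant
# presynaptic rate x, the weight converges to a constant limit value.
def oja_converge(w=0.1, x=1.0, eta=0.05, steps=2000):
    for _ in range(steps):
        y = w * x                        # postsynaptic rate
        w += eta * (x * y - y * y * w)   # Hebbian growth with decay
    return w

w_limit = oja_converge()   # approaches the normalized limit w = 1
```

Any plasticity rule with a stable equilibrium would serve here; the point is only that the limit weight, once reached, is a fixed internal property of the circuit.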

### 6.1 Memory Concealment and Restoration in the Individual Neuron

Suppose that the operand of the function $f$ in equation 4.1 becomes negative and stays negative for a long period of time. This may be a consequence of a drop in the level of the external input potential, $I(k)$, or of an unforced change in membrane potential resulting from slow dissipation of electric charge. As explained and demonstrated in section 4, this will result in neuronal silencing, yielding partial circuit memory loss or, if all external inputs to the circuit drop to sufficiently low levels, a total loss of circuit memory. Memory restoration will occur when the external inputs return to their original levels. Since, as shown in section 4 for the individual neuron and in section 5 for multineuron circuits, firing-rate modes are uniquely determined by internal neuron properties and circuit polarity–controlled connectivity, the learned neuronal and circuit attributes (hence, firing-rate modes) are uniquely recovered by restoration of the corresponding circuit polarity pattern. Such memory restoration may be partial, affecting, at different times, individual neurons or subcircuits.

In order to demonstrate the concept of memory concealment and restoration, we first consider the individual neuron addressed in section 4. Specifically, we employ the four cases a to d represented by the cobweb diagrams in Figure 5 and illustrated by simulation in Figure 6. A positive value of the centered membrane activation $u$ implies learning, manifested by convergence of the feedback synaptic weight $\omega$ and a simultaneous convergence of the firing rate to a global attractor. A negative value of the centered membrane activation $u$ implies a negative membrane polarity, represented by a negative operand of the function $f$ in equation 4.1. Consequently, equation 4.1 reduces to equation 4.10, yielding membrane silencing, as illustrated by Figure 4a. Representing temporary memory concealment, membrane silence ends when the original level of activation is reinstated, which restores the memorized (or learned) mode of firing rate.
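A minimal sketch of concealment and restoration in a single neuron, assuming a rectifier $f$ and a scalar feedback gain (hypothetical parameter values, not those of Figure 12): a negative operand silences the neuron, and reapplying the original activation recovers the learned fixed point.

```python
# Toy single-neuron sketch: f is a rectifier, lam a feedback gain, u the
# centered membrane activation. Hypothetical parameters for illustration.
def f(x):
    return max(x, 0.0)

def step(v, u, lam=0.5):
    return f(lam * v + u)

v = 1.0
for _ in range(100):          # learning/retrieval at u = 10
    v = step(v, 10.0)
learned = v                   # fixed point u / (1 - lam) = 20
for _ in range(100):          # concealment: u dropped to -10
    v = step(v, -10.0)
silenced = v                  # operand negative, so the rate is 0
for _ in range(100):          # restoration: original u reapplied
    v = step(v, 10.0)
restored = v
```

The restored rate equals the learned one because the attractor is fixed by the parameters and the polarity gate, not by the history of silencing.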

Figure 12 shows convergence of the self-synapse weight $\omega$, representing the learning process (first column), the firing-rate modes of learning (second column), memory concealment (third column), and memory restoration (fourth column) by an individual neuron in the four cases under consideration. Membrane silencing, and consequently memory concealment, was accomplished by changing $u$ from 10 to $-10$ in cases a, b, and c, and from $-1$ to $-10$ in case d. In memory restoration, the original value of $u$ was applied. It can be seen that in all cases, the convergent mode of firing rate (the later part of the sequence in the second column) was recovered (fourth column).

### 6.2 Memory Deterioration, Concealment, and Restoration in Synchronous Circuit

As explained and demonstrated in section 5, neural circuits can be segregated by synapse silencing into synchronous subcircuits. Asynchronous segregation, as represented by Figure 7, can be analyzed in the present context by similar means but is omitted for brevity. External input decline, or membrane potential decline by spontaneous electric discharge, will result in individual neuron silencing. A change in the number of active circuit neurons will change the firing-rate dynamics of the remaining circuit neurons, possibly to the point of total circuit silence and memory concealment. Restoration of external activation will result in a partial or complete return of circuit activity to its previous dynamics.

In order to demonstrate memory deterioration, concealment, and restoration in a synchronous circuit, consider a circuit of three neurons, all having the same parameter values $\tau_\omega = 300$, $\tau_\theta = 0.1$, $\tau_m = 2$, and $u = 4$. In time, the circuit undergoes several stages of individual neuron silencing and reactivation, as depicted in Figure 13. Figure 13a describes initial learning and memory, followed by memory deterioration due to a sequence of neuronal silencing events, each achieved by changing the value of the corresponding centered membrane activation from $u = 4$ to $u = -1$. During the time interval between $k = 0$ and $k = 1000$, the circuit undergoes learning, by the end of which it reaches a chaotic mode of firing rate. At $k = 1001$, one of the neurons becomes silent. The remaining active part of the circuit, constituting a synchronous subcircuit of two neurons, displays an oscillatory mode of firing rate (identical to case c in Figure 9 and case c in the third column of Figure 10, which have the same circuit size of 2 and the same parameters) until $k = 2000$. At this point another neuron becomes silent, and, following an abrupt response to the change of subcircuit size at $k = 2001$, the remaining active neuron displays a constant (fixed-point) mode of firing rate. This last active neuron becomes silent at $k = 3001$, and the entire circuit remains silent until $k = 5000$. The sequence of neuron silencing thus described represents memory deterioration, ending in complete memory concealment. The memory deterioration process described by Figure 13a is reversed in Figure 13b by reactivating the neurons in sequence, applying $u = 4$ as in the original learning process. The circuit remains silent until, at $k = 6001$, one of the neurons becomes active again, producing a constant firing rate. At $k = 7001$, a second neuron becomes active, and the subcircuit of two active neurons produces an oscillatory mode of firing rate. At $k = 8001$, the third neuron becomes active as well, and the original external activation of all three neurons is restored. It can be seen that the circuit activity returns to the original chaotic mode.
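The gating structure of this experiment can be sketched with a toy three-neuron rate circuit (hypothetical parameters; the fixed-point dynamics below do not reproduce the chaotic and oscillatory modes of Figure 13, only the on-off gating logic): silencing neurons one by one degrades circuit activity to complete silence, and restoring the original gate pattern restores the original activity.

```python
# Toy sketch of stagewise silencing: each neuron carries an on-off
# polarity gate g in {0, 1}; a silenced neuron contributes nothing.
def f(x):
    return max(x, 0.0)

def run(gates, v, u=4.0, w=0.3, steps=200):
    """Iterate a gated, fully connected three-neuron rate circuit."""
    for _ in range(steps):
        s = sum(v)
        v = [g * f(w * (s - vi) + u) for g, vi in zip(gates, v)]
    return v

v = [0.0, 0.0, 0.0]
v = run([1, 1, 1], v)                     # learning: all neurons active
full = sum(1 for x in v if x > 0)
v = run([1, 1, 0], v)                     # one neuron silenced
v = run([1, 0, 0], v)                     # two neurons silenced
partial = sum(1 for x in v if x > 0)
v = run([0, 0, 0], v)                     # complete silence: concealment
concealed = sum(v)
v = run([1, 1, 1], v)                     # original polarity restored
restored = sum(1 for x in v if x > 0)
```

Each change of the gate pattern changes the effective subcircuit and hence its attractor, and reinstating the full gate pattern recovers the full-circuit activity, as in Figure 13b.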

### 6.3 Memory Modification

Memory modification may result from changes in neural circuit polarity and in the membrane activation level. While the circuit polarity pattern uniquely determines which neurons are active, the firing-rate dynamics produced by a neuron upon restored activity depend on the level of external activation as well. Since, as can be seen in equation 4.12, the actual value of $u$ determines the intersection point $\beta u$ of the function $f_2(\upsilon(k-1))$ with the $\upsilon(k)$ axis, but not the slope $\lambda_2$ of $f_2(\upsilon(k-1))$ defined by equation 4.14, the value of $u$ affects the amplitude but not the characteristic firing-rate mode of the individual neuron (as can be further ratified by examination of the cobweb diagrams in Figure 5).
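For the fixed-point mode, this amplitude scaling can be made explicit. Assuming the affine form $f_2(\upsilon(k-1)) = \lambda_2\,\upsilon(k-1) + \beta u$ implied by equations 4.12 and 4.14, the fixed point $\upsilon^\ast$ satisfies

$$\upsilon^\ast = \lambda_2\,\upsilon^\ast + \beta u, \qquad \text{hence} \qquad \upsilon^\ast = \frac{\beta u}{1 - \lambda_2},$$

which scales linearly with $u$, while stability, and hence the firing-rate mode, is governed by $|\lambda_2|$ alone.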

Repeating the simulations of the individual neurons depicted in Figure 12, with the retrieval activation values changed from the $u = 10$ used in the learning stage of cases a to c to (a) $u = 1$, (b) $u = 0.1$, (c) $u = 0.01$, and (d) $u = -1$ applied in retrieval, we obtain the results depicted in Figure 14. While the evolution of the feedback synaptic weight and the firing rates during learning in Figure 14 are identical to those in Figure 12, the firing-rate sequences in retrieval have different amplitudes in the two figures while maintaining the same dynamic modes.

In a synchronous neural circuit, a process of memory deterioration or restoration may stop at any stage, producing what might be considered a partial memory, an illusion, or an innovation. For instance, if the process of memory deterioration in the synchronous three-neuron circuit considered in section 6.2 stops between $k=1001$ and $k=2000$, then “memory” will continue as an oscillatory sequence produced by a two-neuron circuit, which is different from the original chaotic sequence. Similarly, if the process of deterioration stops between $k=2001$ and $k=3000$, then “memory” will continue as a constant sequence produced by a single neuron, as illustrated in Figure 15a. The same is true for the memory restoration sequence, which, if stopped between $k=6001$ and $k=7000$, will produce “memory” of a constant sequence produced by a single neuron, while if stopped between $k=7001$ and $k=8000$, will produce “memory” of an oscillatory synchronous sequence in a two-neuron circuit, as illustrated in Figure 15b.

## 7 Discussion

Extending the notion of local polarity, experimentally discovered in the separate forms of membrane and synapse silencing and reactivation about two decades ago and debated ever since, to the unified notion of polarity-gated neural circuits, we have analytically shown that the latter holds the key to cortical connectivity, activity, and memory. We have further shown that polarity gates segregate a neural circuit into interference-free, internally synchronous, externally asynchronous subcircuits. Employing widely accepted, biologically validated firing rate and plasticity paradigms, we have shown that chaotic, oscillatory, fixed-point, and silent firing-rate modes are governed by mixed, positive, and negative polarity gating, respectively, maintaining the firing variety of the different neurons and allowing each of the neurons its unique expression by firing-rate dynamics.

Circuit and subcircuit structures define their firing modes, which are in turn matched to their functions. These are learned through convergence of the synaptic weights. The rate of learning is determined by the corresponding time constants, which in turn determine, together with the final values of the synaptic weights and the circuit polarities, both the circuit's structure and firing modes. Further changes in circuit (and subcircuit) size will change its firing mode and, accordingly, its functionality. While we do not attempt to map the entire range of firing-rate modes to cortical functions, it seems widely accepted that certain homeostatic functions are associated with tonic, fixed-rate firing, while others are associated with an oscillatory firing rate. Chaotic neural firing has been conjectured to represent functional pacemaking by bursting (Hayashi & Ishizuka, 1992). Temporal multiplexing (i.e., transmitting and receiving independent signals over a common signal path) of different firing signals, analytically modeled (Izhikevich, 2001) and observed in sensory cortices (Fairhall, Lewen, Bialek, & van Steveninck, 2001; Lundstrom & Fairhall, 2006), enhances the coding and information transmission capacity (Bullock, 1997; Lisman & Grace, 2005; Kayser, Montemurro, Logothetis, & Panzeri, 2009). While a chaotic attractor drives temporal mixing of firing rates over the entire state space, a largely cyclic attractor can perform multiplexing of two oscillatory signals. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by bandpass filtering. Neuronal low-pass (Pettersen & Einevoll, 2008) and high-pass (Poon, Young, & Siniaia, 2000) filtering have been reported. Multiplexed red, green, and blue (RGB) color coding is a known example of creating mixtures of the primary colors, found in both biological and technological vision systems (Hunt, 2004).
A silent attractor, representing the state of a silent neuron, has been found to play a major role in cortical representation of place (O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Calvin, 1996; Hafting et al., 2005; Epsztein et al., 2011).

Our results put the notions of cortical learning and memory in a new perspective. Following circuit segregation, memory, as circuit activity, is manifested by internally synchronous, externally asynchronous, isolated subcircuits. Given internal neuronal properties, a state of circuit polarity and external activation, subcircuit learning is manifested by convergence of the synaptic weights to constant values. Once such values are established, the persistence of the circuit polarity pattern also implies the persistence of the connectivity and activity patterns, along with the neuronal firing modes. We have further suggested and demonstrated that memory deterioration is caused by a gradual silencing of circuit neurons, to the point of complete memory concealment, while memory restoration is caused by complete reactivation of the original, learned circuit polarity pattern. Partial and false memory, as well as novelty and innovation effects, in the form of new circuit structures and firing dynamics, are created when the new external inputs are different from those creating the original memory or when the restoration process is terminated before it is completed. In sharp contrast, more permanent memory effects have been suggested to result from synapse and whole axon elimination, or pruning, and regrowth (Balice-Gordon & Lichtman, 1994; Culican, Nelson, & Lichtman, 1998; Chklovskii et al., 2004; Vanderhaeghen & Cheng, 2010; Knoblauch & Sommer, 2016; Baram, 2017b). While our proposed memory mechanization by circuit polarization does not rule out a certain role for the death and regrowth of neurons and synapses in the implementation of memory, it appears to be considerably more economical, controllable, and agile than the latter.

Finally, we note that the underlying expression of cortical information is a cortical activity pattern. The translation of cortical activity to and from sensory information is mediated by sensory lobes, which may be defined as read-in/read-out circuits. Yet memory storage and retrieval is performed at the neural circuit activity level, governed by circuit polarity. A striking analogy is that of computer memory, storing and retrieving patterns of binary bits, which are occasionally translated to and from sensory information by external devices.

## Acknowledgments

This study was supported by the Technion's Roy Matas/Winnipeg Chair in Biomedical Engineering.

## References
