Widely accepted neural firing and synaptic potentiation rules specify a cross-dependence of the two processes, which, evolving on different timescales, have traditionally been separated for analytic purposes, concealing essential dynamics. Here, the morphology of the firing-rate process, modulated by synaptic potentiation, is shown to be described by a discrete iteration map in the form of a thresholded polynomial. Given initial synaptic weights, firing activity is triggered by conductance. Elementary dynamic modes are defined by fixed points, cycles, and saddles of the map, the building blocks of the underlying firing code. Exhibiting parameter-dependent multiplicity of real polynomial roots, the map is proved to be noninvertible. The incidence of chaos is then implied by the parameter-dependent existence of snap-back repellers. The highly patterned geometric and statistical structures of the associated chaotic attractors suggest that these attractors are an integral part of the neural code and that the chaotic attractor serves as a natural mechanism for statistical encoding and temporal multiplexing of neural information. The analytic findings are supported by simulation.
There is abundant empirical evidence on a variety of neuronal firing modes, including random spiking (Gerstein & Mandelbrot, 1964), tonic spiking (Cymbalyuk & Shilnikov, 2005), transient bursting (Kuznetsov, Kopell, & Wilson, 2006), oscillatory bursting (Elson, Selverstone, Abarbanel, & Rabinovich, 2002), bifurcations with chaos (Ren et al., 1997), and bifurcations without chaos (Li, Gu, Yang, Liu, & Ren, 2004). Individual neurons of the same type are often capable of producing different firing modes, switching from one to another in a seemingly unpredictable manner (Hyland, Reynolds, Hay, Perk, & Miller, 2002). However, the mathematical rules underlying such behavior, the purpose it might serve, or the harm it might cause have not been well understood. A particularly meaningful behavior, discovered in sensory cortices, is the temporal multiplexing of different firing rates (Fairhall, Lewen, Bialek, & van Steveninck, 2001; Wark, Fairhall, & Rieke, 2009; Lundstrom & Fairhall, 2006), which enhances the coding and information transmission capacity (Bullock, 1997; Lisman, 2005; Kayser, Montemurro, Logothetis, & Panzeri, 2009). While temporal precision of the multiplexing code can be achieved by narrow windowing, the need for such precision in neural coding has been questioned (Panzeri, Brunel, Logothetis, & Kayser, 2009).
In the dynamic analysis of neural activity, firing and synaptic potentiation have been treated separately—one as a relatively fast process of information transfer, the other as a relatively slow process of learning. However, a complete description of the intricate dynamics requires the integration of these processes into a single model. Real-valued dynamical systems are conveniently represented by discrete iteration maps of the form x(k + 1) = F[x(k) + y(k)], specifying the transition of a state matrix x from one time instant to the next, given an input sequence y. A map is said to be invertible if there is an inverse map, which produces a unique real solution for the previous state from the present one and the input. If this does not hold for some value of the present state, the map is said to be noninvertible. Since much of the dynamic complexity associated with discrete iteration maps derives from parameter variation, we shall say that a map is noninvertible if it is noninvertible for some parameter value. Noninvertible discrete iteration maps have been shown to possess a large repertoire of dynamic behaviors associated with map singularities, such as fixed points, cycles, saddles, and fractal boundaries (Gumowski & Mira, 1980; Abraham, Gardini, & Mira, 1997). Although there is no universally accepted definition of chaos, we employ the widely used mathematical definition of Li and Yorke (1975). Studies of chaos have been largely restricted to low-order maps (Abraham et al., 1997; Zeraoulia & Sprott, 2010). Singularities normally define regions of attraction or repulsion, called basins, in the phase space.
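The noninvertibility notion can be made concrete with a minimal numerical sketch. The quadratic map used here is a generic stand-in, not the neural map developed below: most attainable states have two real preimages, so the previous state cannot be uniquely recovered from the present one.

```python
import numpy as np

# Generic illustration (not the letter's map): the quadratic map
# x(k+1) = F[x(k)] with F(x) = a*x*(1 - x) is noninvertible because
# a typical attainable state has two real preimages.
def F(x, a=3.7):
    return a * x * (1.0 - x)

def preimages(x_next, a=3.7):
    """Real solutions x of a*x*(1-x) = x_next, via the quadratic formula."""
    # a*x - a*x^2 = x_next  ->  a*x^2 - a*x + x_next = 0
    disc = a * a - 4.0 * a * x_next
    if disc < 0:
        return []
    r = np.sqrt(disc)
    return sorted([(a - r) / (2 * a), (a + r) / (2 * a)])

xs = preimages(0.5)
assert len(xs) == 2                              # two candidate past states
assert all(abs(F(x) - 0.5) < 1e-9 for x in xs)   # both map onto 0.5
```

Both candidate past states are equally consistent with the observed present state, which is precisely the inability to deduce past from present discussed below.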
Changes in the nature of the singularities caused by changes in map parameters are called local bifurcations. In contrast, changes in the nature of the dynamics caused by the transition of phase trajectories across basin boundaries are called global bifurcations. Although theoretical aspects of chaos, a centerpiece of dynamical systems theory for the past 30 years, are well understood, the empirical detection of chaos in a given time series is a nontrivial task. Mathematical measures, such as the largest Lyapunov exponent (Wright, 1984), often produce ambiguous empirical results for limited time series, even when applied to data generated by simulating low-dimensional models (Sprott, 2003). Consequently, the intuitive characterization of chaos as a “deterministically unpredictable” process (Elyadi, 1999) often seems as reliable as any formal empirical measure. Noninvertible maps represent, then, not only an inability to deduce past from present states but also, through bifurcation and chaos, an inability to predict future from past states. It has been suggested that chaos may represent deficient states of neural information processing (Fell, Roschke, & Beckmann, 1993). Certain applications, such as the solution of combinatorial problems, suggest a functional role for chaos in artificial neural networks, specifically, annealed escape from local attractors (Lin, 2001; Ohta, 2002; Liu, Shi, Wang, & Zurada, 2006). Yet the role of chaos in biological neural systems has remained a mystery.
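The empirical ambiguity mentioned above can be illustrated by estimating the largest Lyapunov exponent of a map whose exact exponent is known analytically. The logistic map at a = 4 (exponent ln 2) serves as a stand-in here; the estimator below is the standard average of log |F′| along an orbit, with all run lengths chosen arbitrarily.

```python
import numpy as np

# Sketch: finite-series estimate of the largest Lyapunov exponent of
# the logistic map x(k+1) = a*x(k)*(1 - x(k)).  For a = 4 the exact
# value is ln 2; short series give noticeably scattered estimates.
def lyapunov_estimate(x0, a=4.0, n=100000, discard=100):
    x, total = x0, 0.0
    for k in range(n + discard):
        if k >= discard:
            # log of the local stretching factor |F'(x)| = |a*(1 - 2x)|
            total += np.log(abs(a * (1.0 - 2.0 * x)) + 1e-300)
        x = a * x * (1.0 - x)
    return total / n

lam = lyapunov_estimate(0.2)   # approaches ln 2 ~ 0.6931 as n grows
```

Even for this one-dimensional model, estimates from short orbits scatter around the true value, which is the practical difficulty the text refers to.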
This letter develops a joint discrete iteration map of neural firing and synaptic potentiation, based on biologically supported models. The map, formed as a multivariate thresholded polynomial, is proved to be noninvertible, regardless of neural self-connectivity. Noninvertibility, coupled with the existence of snap-back repellers (Marotto, 1978, 2005; Gardini & Tramontana, 2010), which is proved to hold for the map of interest under appropriate parameterization, facilitates the incidence of chaos in the Li-Yorke sense. The map at hand is a transcendental function of the initial conditions, the activation levels, and the polynomial coefficients. Noninvertibility and chaos are commonly related to specific parameterizations. In the Li-Yorke framework, the particular set of initial conditions that result in chaos forms an uncountable, scrambled subset of the underlying state space. Local bifurcations, including the onset of chaos, are caused by parameter changes. Parameterizing the underlying neural network in a symmetric manner for analytical purposes, we reduce the multivariate problem to a univariate one, enabling the application of fundamental algebra. The analytical evidence that biological neural networks are prone to chaotic behavior is particularly significant in view of the difficulty of obtaining concrete empirical measurement of chaos. It further implies, by the decomposition of the map portrait into basins of cyclic and chaotic attractors (Abraham et al., 2009) and the nonperiodicity of the range of the Li-Yorke scrambled set (Li & Yorke, 1975), convergence of the latter to a chaotic attractor. The highly patterned structures associated with the chaotic attractors of polynomial maps (Field & Golubitsky, 2009), which are shown in this work to hold for elementary neural circuits, suggest that such attractors are an integral part of the neural firing code. 
The resulting transcendental prescription of the sampling pattern and statistical intensity defines a natural mechanism for mixing and multiplexing different firing rates.
2. Discrete Iteration Map of Neural Firing and Synaptic Potentiation
3. Noninvertibility of the Map
The map, equations 2.1 to 2.4, is invertible if and only if it has an inverse that yields, for any values of the map parameters and any real values of the firing rates at time k + 1, unique real values for the firing rates at time k. If this is not the case, the map is said to be noninvertible. It should be noted that only nonnegative values are admissible for the firing rates.
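The root-counting argument behind this criterion can be sketched numerically. The cubic coefficients below are hypothetical placeholders, not those of equations 2.1 to 2.4: inverting one step of a thresholded polynomial map means solving P(υ) = υ(k + 1), and a polynomial can admit several real nonnegative solutions.

```python
import numpy as np

# Sketch with hypothetical coefficients: a thresholded polynomial map
# v(k+1) = max(0, P(v(k))).  Inverting one step means solving
# P(v) = v_next, which here has three real nonnegative roots.
P = np.poly1d([1.0, -6.0, 9.0, 0.5])   # v^3 - 6 v^2 + 9 v + 0.5

v_next = 2.0
roots = (P - v_next).roots                        # solve P(v) = v_next
real = roots[np.abs(roots.imag) < 1e-9].real
feasible = sorted(r for r in real if r >= 0)      # firing rates are >= 0
# more than one feasible past state: this step cannot be inverted
```

With three admissible past firing rates mapping to the same present rate, the map fails the uniqueness requirement stated above.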
4. Chaotic Coding and Chaotic Multiplexity
The regular dynamic modes of the firing rates process are defined by fixed points, cycles, and saddles of the map, which have also been called periodic points or singularities (Abraham et al., 1997; Mira, 2000). Li and Yorke (1975) established a formal framework for the characterization of chaotic maps. Given an interval J, a continuous map f : J → J is chaotic in the Li-Yorke sense if the following conditions are satisfied:
1. For every n = 1, 2, … there is a periodic point in J having period n.
2. There is an uncountable (“scrambled”) set S ⊂ J, containing no periodic points, which satisfies the following conditions:
a. For every p, q ∈ S with p ≠ q, lim sup_{n→∞} |f^n(p) − f^n(q)| > 0 and lim inf_{n→∞} |f^n(p) − f^n(q)| = 0.
b. For every p ∈ S and every periodic point q of f, lim sup_{n→∞} |f^n(p) − f^n(q)| > 0.
Marotto's theorem (1978, 2005), extended to noninvertible piecewise-smooth maps by Gardini and Tramontana (2010), implies that a sufficient condition for chaotic behavior in the Li-Yorke sense is the existence of a snap-back repeller, which is defined as follows (Marotto, 2005): Suppose that z is a fixed point of a map Q with all eigenvalues of DQ(z), the Jacobian of the map at z, exceeding 1 in magnitude, and suppose that there exists a point x(0) ≠ z in a repelling neighborhood of z such that x(M) = z and det(DQ^M(x(0))) ≠ 0 for some positive integer M, where x(k) = Q^k(x(0)). Then z is called a snap-back repeller of Q. As in the case of noninvertibility, we shall say that a map is chaotic if it is chaotic for some parameter values. We have the following result:
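Marotto's condition can be verified concretely on a textbook example (the logistic map with a = 4, used here as a stand-in, not the neural map): z = 3/4 is a fixed point with |F′(z)| = 2 > 1, and x(0) = (2 + √3)/4 lies in the repelling region |F′| > 1 yet returns exactly to z in M = 2 steps.

```python
import numpy as np

# Snap-back repeller check on the logistic map with a = 4.
a = 4.0
F = lambda x: a * x * (1.0 - x)
dF = lambda x: a * (1.0 - 2.0 * x)

z = 0.75                                        # fixed point, |F'(z)| = 2
assert abs(F(z) - z) < 1e-12 and abs(dF(z)) > 1.0

x0 = (2.0 + np.sqrt(3.0)) / 4.0                 # ~0.933, x0 != z
orbit = [x0, F(x0), F(F(x0))]                   # x0 -> 1/4 -> 3/4
assert abs(orbit[2] - z) < 1e-9                 # x(M) = z with M = 2
assert abs(dF(x0)) > 1.0                        # x0 in the repelling region
assert all(abs(dF(x)) > 0 for x in orbit[:2])   # det(DF^M(x0)) != 0
```

All three parts of the definition hold, so z is a snap-back repeller and the map is chaotic in the Li-Yorke sense.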
The domain of a map decomposes into the basins of point attractors, cyclic attractors, chaotic attractors, and the “basin of infinity,” which consists of all points whose trajectories run away from any bounded set (Abraham et al., 1997). While, mathematically, indefinite divergence of the neuronal firing rates is possible, physical constraints exclude such divergence. At the same time, asymptotic convergence to a periodic point from the scrambled set S is excluded by the Li-Yorke conditions (specifically, condition 2b). Initiated within the scrambled set, a trajectory will enter the basin of a chaotic attractor, which, for a polynomial map, has a highly patterned geometric and statistical (intensity) structure (Field & Golubitsky, 2009). It can be seen that our map is polynomial in the domain P(υp(k), p = 0, …, N − 1) > 0. Global bifurcations from chaotic and repelling basins (the latter a part of the basin of infinity) into cyclic (and point) attractors are made possible by absorbing areas (Abraham et al., 1997).
A chaotic attractor may be viewed, then, as an integral part of the neural firing code. It will produce a mixture of firing rates corresponding to its geometric and intensity structure. We call the temporal production of such a mixture chaotic multiplexity. The characteristics of single-mode, bifurcated, chaotic, and multiplexed firing are demonstrated by the examples in section 5.
We simulated a two-neuron circuit in a closed-loop configuration, employing the BCM version of the model (equations 2.1 to 2.4, with q = r = 2). Three sets of parameter values were considered. Taking N = 10, the relative approximation error ρ (see the appendix) was smaller than 0.07 in all cases.
Set 1: Different Time Constants
τi = 25, .
τi = 25, .
τi = 12, .
In all cases υi(0) = 0, ωi,j(0) = 0, , βi = 1, i, j = 1, 2.
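Equations 2.1 to 2.4 are not reproduced in this excerpt, so the following is only a generic sketch of the simulated circuit: a discrete-time thresholded rate update paired with a BCM-style weight update (q = r = 2). The learning rate, threshold-smoothing constant, and rate bound are hypothetical placeholders, as are the elided parameter values above.

```python
import numpy as np

# Hedged sketch only: a generic two-neuron closed-loop rate model with
# a BCM-style synaptic update, NOT the letter's exact map.  eta, the
# 0.01 threshold smoothing, and v_max are hypothetical placeholders.
def simulate(v0, w0, tau=12.0, beta=1.0, eta=0.05, dt=1.0,
             v_max=1e3, steps=2000):
    v = np.asarray(v0, dtype=float)        # firing rates of the 2 neurons
    w = np.asarray(w0, dtype=float)        # 2x2 synaptic weight matrix
    theta = np.ones(2)                     # BCM sliding thresholds
    traj = np.empty((steps, 2))
    for k in range(steps):
        u = w @ v + beta                   # net input, beta as drive
        v = v + (dt / tau) * (-v + np.maximum(0.0, u))   # thresholded rates
        v = np.clip(v, 0.0, v_max)         # physical bound on firing rates
        theta += 0.01 * (v**2 - theta)     # slow running average of v^2
        # BCM-style update: dw ~ post*(post - theta) times pre
        w += eta * dt * np.outer(v * (v - theta), v)
        traj[k] = v
    return traj

traj = simulate(v0=[0.0, 0.0], w0=np.zeros((2, 2)))
```

Zero initial weights and rates mirror set 1 above; different parameter choices in such a model can produce attenuating, oscillatory, or multiplexed rate trajectories of the kinds described next.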
It can be seen from Figure 2 that cases a and b represent single-mode firing activities—the first an initial transient attenuating to a constant firing rate and the second an initial transient attenuating to a constant oscillatory firing rate. Cases c and d represent bifurcations from a relatively uniform spiking mode to multiplexed modes of different natures, decaying to constant firing rates.
Set 2: Different Initial Synaptic Weights
ω1,1(0) = ω2,2(0) = 0, ω1,2(0) = ω2,1(0) = −1.
ω1,1(0) = ω2,2(0) = 1, ω1,2(0) = ω2,1(0) = 0.
ω1,1(0) = 0.1, ω2,2(0) = 0, ω1,2(0) = ω2,1(0) = −0.1.
ω1,1(0) = ω2,2(0) = −0.1, ω1,2(0) = ω2,1(0) = 0.
In all cases υi(0) = 0, τi = 12, , , βi = 1, i, j = 1, 2.
It can be seen from Figure 3 that while case a represents an initial transient attenuating to a constant firing rate, case b represents an instantaneous transient followed by constant-amplitude spiking with a varying waveform. Case c shows a brief period of spiking, bifurcating into a constant state of zero firing. Case d shows a brief period of spiking, bifurcating into a multiplexed signal attenuating to zero.
Set 3: Different Initial Firing Rates
υi(0) = 0, i = 1, 2.
υi(0) = 0.1, i = 1, 2.
In both cases ωi,j(0) = 0, , , βi = 1, i, j = 1, 2.
The seemingly disorderly behaviors displayed in both cases a and b of set 3, as seen in Figure 4, persisted for the 10,000 iterations checked. It can be seen that the individual time sequences generated by the different initial conditions differ substantially, and the difference persists over time. Such sensitivity to initial conditions is typical of chaotic maps.
Figure 5 shows the sampling of the phase space υ1 × υ2 by the trajectories corresponding to cases a (left-hand figures) and b (right-hand figures) of set 3. The initial 10% of each of the trajectories were excluded so as to eliminate initial transients. From top to bottom, the three rows in the figure show the phase-space samples generated for the two initial conditions in 100, 1000, and 10,000 iterations, respectively. While the sampling graphs produced in 100 iterations are quite different for the two initial conditions, those produced in 1000 iterations are considerably more similar, and those produced in 10,000 iterations are almost identical.
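The two signatures described here, pointwise divergence of nearby trajectories together with statistical convergence of their phase-space samples, can be reproduced on a standard two-dimensional chaotic map; the Hénon map is used below purely as a stand-in for the neural map, and the bin counts and perturbation size are arbitrary.

```python
import numpy as np

# Toy counterpart of Figures 4-5, using the Henon map as a stand-in:
# orbits from two nearby initial conditions separate pointwise, yet
# their long-run phase-space samples (coarse 2-D histograms) agree.
def henon_orbit(x0, y0, n, a=1.4, b=0.3):
    pts = np.empty((n, 2))
    x, y = x0, y0
    for k in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        pts[k] = x, y
    return pts

n = 20000
A = henon_orbit(0.0, 0.0, n)
B = henon_orbit(0.0, 1e-6, n)          # 1e-6 perturbation of y(0)

pointwise = np.abs(A - B).max()        # large: trajectories diverge

# statistical agreement after discarding the initial 10% transient
rng = [[-1.5, 1.5], [-0.5, 0.5]]
hA, _, _ = np.histogram2d(A[n//10:, 0], A[n//10:, 1], bins=20, range=rng)
hB, _, _ = np.histogram2d(B[n//10:, 0], B[n//10:, 1], bins=20, range=rng)
overlap = 1.0 - 0.5 * np.abs(hA/hA.sum() - hB/hB.sum()).sum()
```

As in Figure 5, the short-run samples differ while the long-run samples become nearly identical: `overlap` (one minus the total-variation distance between the two histograms) approaches 1 as the orbit length grows.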
Finally, we note that in the reported simulations, the neurons involved were allowed to form self-connections. When self-disconnections were enforced by nullifying the corresponding synaptic weights, the simulation results were of a similar nature.
We have analytically shown that, depending on parameter values, neural firing can become noninvertible and chaotic. As the first property implies an inability to infer past from present states and the second an inability to infer future from past states, one may wonder what the utility of these seemingly peculiar properties might be. While noninvertibility and chaos have been identified with ambiguity and deteriorated performance in certain applications involving artificial feedforward networks (Bertels, Neuberg, Vassiliadis, & Pechanek, 1998; Gicquel, Anderson, & Kevrekidis, 1998; Rico-Martinez, Adomaitis, & Kevrekidis, 2000; Verschure, 1991) and in biological neural networks (Fell et al., 1993), several works have noted the utility of chaos in learning and problem solving by annealing (Lin, 2001; Ohta, 2002; Liu et al., 2006; Verschure, 1991; Sato, Akiyama, & Farmer, 2002). Moreover, it has been claimed (Langton, 1990) and disputed (Mitchell, Hraber, & Crutchfield, 1993) that the boundary between order and chaos provides favorable conditions for universal computation. Chaos has been incorporated into neuron models through a bimodal logistic function (Aihara, Takabe, & Toyoda, 1990), simplified phenomenological models (Lin, Ruen, & Zhao, 2002; Shilnikov & Rulkov, 2003), or negative self-feedback (Lin, 2001; Ohta, 2002; Liu et al., 2006). Our analysis shows that map noninvertibility and chaotic behavior apply not only to networks of self-connected neurons but also to networks of self-disconnected neurons. However, the problem-solving and computation capabilities of biological neural networks are rather vague notions, and the possible role of chaos in such networks has remained a mystery.
The analytic approach taken in this work is particularly noteworthy in view of the fact that empirical measures, such as the Lyapunov exponents, often fail to provide conclusive evidence of chaos. We suggest that the highly patterned geometric and statistical structures of the chaotic attractors associated with neural networks make such attractors an integral part of the neural code. While our mathematical analysis has primarily led to the consideration of symmetric neural circuits, resulting, as in the cases considered by Field and Golubitsky (2009), in symmetric chaotic attractors, we have also noticed in simulation that highly patterned, albeit nonsymmetric, chaotic attractors arise in nonsymmetric neural circuits. Providing a transcendental prescription for mixing different firing rates, a chaotic attractor constitutes a natural mechanism for statistical encoding and temporal multiplexing of neural information. Signal multiplexing is known to enhance information transmission capacity and is widely used in communication systems (Li & Stuber, 2010). However, in contrast to technological applications, which require rhythmic and synchronous multiplexing, especially for the purpose of decoding (Keller, Piazzo, Mandarini, & Hanzo, 2001), biological multiplexing does not appear to require temporal precision (Panzeri et al., 2009). It should be noted that the application of chaos for the purpose of synchronization in technological communication systems (Itoh & Chua, 1997) is quite different from our context, where there is no synchronization. Since the model considered in our work deals with firing rates, amplitude corresponds to firing frequency. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by band-pass filtering. Neuronal low-pass (Pettersen & Einevoll, 2008) and high-pass (Poon, Young, & Siniaia, 2000) filtering have been reported.
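The band-separation idea can be sketched with generic signal processing; no specific neuronal model is implied, and the frequencies, amplitudes, and window length below are arbitrary. A slow and a fast firing-rate component are mixed additively, then recovered with a moving-average low-pass filter and its high-pass residual.

```python
import numpy as np

# Sketch of demultiplexing by band separation: a slow and a fast
# rate modulation are mixed, then recovered by a moving-average
# low-pass filter and the complementary high-pass residual.
t = np.arange(0.0, 10.0, 0.01)                    # 10 s at dt = 0.01
slow = 1.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)    # slow rate modulation
fast = 0.3 * np.sin(2 * np.pi * 5.0 * t)          # fast rate modulation
mixed = slow + fast                                # multiplexed signal

win = 20                                           # 0.2 s moving average
kernel = np.ones(win) / win
low = np.convolve(mixed, kernel, mode="same")      # low-pass estimate
high = mixed - low                                 # high-pass residual

# recovery error away from the window edges
err_slow = np.abs(low - slow)[win:-win].max()
err_fast = np.abs(high - fast)[win:-win].max()
```

The 0.2 s window spans exactly one period of the fast component, so it is strongly rejected by the low-pass branch while the slow component passes nearly unchanged, illustrating how a downstream low-pass or high-pass neuron could, in principle, select one rate stream.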
Yet the raw multiplexed signal, representing, as implied by the models under consideration, locally smoothed (averaged) information, can be useful in sensory systems. (Multiplexed red, green, and blue color coding is a known example found in both biological and technological vision systems; Hunt, 2004). Combined with empirical evidence on multiplexed firing rates in sensory cortices, this study suggests a constructive role for chaos in biological neural networks.