Abstract

Widely accepted neural firing and synaptic potentiation rules specify a cross-dependence of the two processes, which, evolving on different timescales, have been separated for analytic purposes, concealing essential dynamics. Here, the morphology of the firing rates process, modulated by synaptic potentiation, is shown to be described by a discrete iteration map in the form of a thresholded polynomial. Given initial synaptic weights, firing activity is triggered by conductance-based activation inputs. Elementary dynamic modes are defined by fixed points, cycles, and saddles of the map, the building blocks of the underlying firing code. The map is proved to be noninvertible by showing a parameter-dependent multiplicity of real polynomial roots. The incidence of chaos is then implied by the parameter-dependent existence of snap-back repellers. The highly patterned geometric and statistical structures of the associated chaotic attractors suggest that these attractors are an integral part of the neural code and point to the chaotic attractor as a natural mechanism for statistical encoding and temporal multiplexing of neural information. The analytic findings are supported by simulation.

1.  Introduction

There is abundant empirical evidence on a variety of neuronal firing modes, including random spiking (Gerstein & Mandelbrot, 1964), tonic spiking (Cymbalyuk & Shilnikov, 2005), transient bursting (Kuznetsov, Kopell, & Wilson, 2006), oscillatory bursting (Elson, Selverstone, Abarbanel, & Rabinovich, 2002), bifurcations with chaos (Ren et al., 1997), and bifurcations without chaos (Li, Gu, Yang, Liu, & Ren, 2004). Individual neurons of the same type are often capable of producing different firing modes, switching from one to another in a seemingly unpredictable manner (Hyland, Reynolds, Hay, Perk, & Miller, 2002). However, the mathematical rules underlying such behavior, the purpose it might serve, or the harm it might cause have not been well understood. A particularly meaningful behavior, discovered in sensory cortices, is the temporal multiplexing of different firing rates (Fairhall, Lewen, Bialek, & van Steveninck, 2001; Wark, Fairhall, & Rieke, 2009; Lundstrom & Fairhall, 2006), which enhances the coding and information transmission capacity (Bullock, 1997; Lisman, 2005; Kayser, Montemurro, Logothetis, & Panzeri, 2009). While temporal precision of the multiplexing code can be achieved by narrow windowing, the need for such precision in neural coding has been questioned (Panzeri, Brunel, Logothetis, & Kayser, 2009).

In the dynamic analysis of neural activity, firing and synaptic potentiation have been treated separately—one as a relatively fast process of information transfer, the other as a relatively slow process of learning. However, a complete description of the intricate dynamics requires the integration of these processes into a single model. Real-valued dynamical systems are conveniently represented by discrete iteration maps of the form x(k + 1) = F[x(k) + y(k)], which specifies the transition of a state matrix x from one time instance to the next, given an input sequence y. A map is said to be invertible if there is an inverse map, which produces a unique real solution to the previous state from the present one and the input. If this does not hold for some value of the present state, the map is said to be noninvertible. Since much of the dynamic complexity associated with discrete iteration maps is derived from their correspondence with parameter variation, we shall say that a map is noninvertible if it is noninvertible for some parameter value. Noninvertible discrete iteration maps have been shown to possess a large repertoire of dynamic behaviors associated with map singularities, such as fixed points, cycles, saddles, and fractal boundaries (Gumowski & Mira, 1980; Abraham, Gardini, & Mira, 1997). Although there is no universally accepted definition of chaos, we employ the widely used mathematical definition of Li and Yorke (1975). Studies of chaos have been largely restricted to low-order maps (Abraham et al., 1997; Zeraoulia & Sprott, 2010). Singularities normally define regions of attraction or repulsion, called basins, in the phase space.
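
To make the notion concrete, the following minimal sketch (a toy illustration on the logistic map, not the neural map developed below) exhibits noninvertibility in the simplest setting: a given present state has two real preimages, so the previous state cannot be recovered uniquely.

```python
import numpy as np

def logistic(x, a=3.8):
    """One step of the logistic map x(k+1) = a x(k)(1 - x(k))."""
    return a * x * (1.0 - x)

# Noninvertibility: the present state c has two real preimages, the roots
# of a x^2 - a x + c = 0, so the previous state cannot be recovered uniquely.
a, c = 3.8, 0.5
preimages = np.roots([a, -a, c])
print(preimages)                                           # two values in (0, 1)
print([np.isclose(logistic(x, a), c) for x in preimages])  # both map to c
```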

Changes in the nature of the singularities caused by changes in map parameters are called local bifurcations. In contrast, changes in the nature of the dynamics caused by the transition of phase trajectories across basin boundaries are called global bifurcations. Although theoretical aspects of chaos, a centerpiece of dynamical systems theory for the past 30 years, are well understood, the empirical detection of chaos in a given time series is a nontrivial task. Mathematical measures, such as the largest Lyapunov exponent (Wright, 1984), often produce ambiguous empirical results for limited time series, even when applied to data generated by simulating low-dimensional models (Sprott, 2003). Consequently, the intuitive characterization of chaos as a “deterministically unpredictable” process (Elaydi, 1999) often seems as reliable as any formal empirical measure. Noninvertible maps represent, then, not only an inability to deduce past from present states but also, through bifurcation and chaos, an inability to predict future from past states. It has been suggested that chaos may represent deficient states of neural information processing (Fell, Roschke, & Beckmann, 1993). Certain applications, such as the solution of combinatorial problems, suggest a functional role for chaos in artificial neural networks, specifically, annealed escape from local attractors (Lin, 2001; Ohta, 2002; Liu, Shi, Wang, & Zurada, 2006). Yet the role of chaos in biological neural systems has remained a mystery.
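
The ambiguity of empirical chaos measures on short series can be seen even in a toy setting. The sketch below (again on the logistic map, not on the neural map of this letter) estimates the largest Lyapunov exponent as a finite-time average; short series give noticeably unstable values.

```python
import numpy as np

def lyapunov_estimate(a=3.8, x0=0.3, n=100, burn=100):
    """Largest Lyapunov exponent of the logistic map, estimated as the
    time average of log|f'(x(k))| over a finite orbit of length n."""
    x = x0
    for _ in range(burn):                    # discard the initial transient
        x = a * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(a * (1 - 2 * x)))
        x = a * x * (1 - x)
    return total / n

# Short series give unstable estimates; only long ones settle down.
for n in (100, 1000, 100000):
    print(n, lyapunov_estimate(n=n))
```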

This letter develops a joint discrete iteration map of neural firing and synaptic potentiation, based on biologically supported models. The map, formed as a multivariate thresholded polynomial, is proved to be noninvertible, regardless of neural self-connectivity. Noninvertibility, coupled with the existence of snap-back repellers (Marotto, 1978, 2005; Gardini & Tramontana, 2010), which is proved to hold for the map of interest under appropriate parameterization, facilitates the incidence of chaos in the Li-Yorke sense. The map at hand is a transcendental function of the initial conditions, the activation levels, and the polynomial coefficients. Noninvertibility and chaos are commonly related to specific parameterizations. In the Li-Yorke framework, the particular set of initial conditions that result in chaos forms an uncountable, scrambled subset of the underlying state space. Local bifurcations, including the onset of chaos, are caused by parameter changes. Parameterizing the underlying neural network in a symmetric manner for analytical purposes, we reduce the multivariate problem to a univariate one, enabling the application of fundamental algebra. The analytical evidence that biological neural networks are prone to chaotic behavior is particularly significant in view of the difficulty of obtaining concrete empirical measurement of chaos. It further implies, by the decomposition of the map portrait into basins of cyclic and chaotic attractors (Abraham et al., 1997) and the nonperiodicity of the range of the Li-Yorke scrambled set (Li & Yorke, 1975), convergence of the latter to a chaotic attractor. The highly patterned structures associated with the chaotic attractors of polynomial maps (Field & Golubitsky, 2009), which are shown in this work to hold for elementary neural circuits, suggest that such attractors are an integral part of the neural firing code. The resulting transcendental prescription of the sampling pattern and statistical intensity defines a natural mechanism for mixing and multiplexing different firing rates.

2.  Discrete Iteration Map of Neural Firing and Synaptic Potentiation

In order to formulate a joint discrete iteration map for the firing rates and the synaptic weights in a network of n neurons, we first derive approximate discrete time models by sampling and truncating biologically motivated continuous-time models. As shown in the appendix, the truncated discrete time version of the spike response model (Gerstner, 1995) for the firing rates is given by
formula
2.1
where υ(k) is the vector of firing rates corresponding to the neurons at time instance k, W(k) is the matrix of synaptic weights, β(k) is the vector of conductance-based activation inputs, Kυ, Tυ, and Ep are matrices specified in the appendix, N is a positive integer that satisfies an approximation criterion, and f is a vector-valued function whose scalar elements are defined as (Carandini & Ferster, 2000)
formula
2.2
A generalized model of synaptic potentiation is given by
formula
2.3
where q is a positive integer, υ^q is the vector whose ith element is υ_i^q, S is the connectivity matrix, having the value 1 in all positions corresponding to connected neurons and the value 0 elsewhere, * denotes the element-wise product between two matrices of the same dimension, as in
(A * X)_{i,j} = a_{i,j} x_{i,j},
with a_{i,j} and x_{i,j} the respective scalar elements of A and X, and
formula
2.4
where r is a positive integer. The matrices Tω and Tθ are specified in the appendix. It might be noted that the values q = r = 1 correspond to the Hebb plasticity rule (Hertz, Krogh, & Palmer, 1991), while the values q = r = 2 correspond to the biologically supported BCM plasticity rule (Bienenstock, Cooper, & Munro, 1982). We have empirically found the latter to be more stable than the former. Although the generalized model, equations 2.3 and 2.4, allows, in principle, for higher values of q and r, there is no biological evidence for such values. While N = 1 corresponds to the basic versions of these rules, N > 1 represents their smoothed versions.
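
Since the explicit discrete forms appear only in the appendix, the following scalar (n = 1, N = 1) sketch is offered as an assumed leaky-integrator reading of equations 2.1 to 2.4, with a rectifying f and the BCM case q = r = 2; the expressions and parameter values are stand-ins consistent with the surrounding text, not the model itself.

```python
import numpy as np

def f(x):
    """Rectifying threshold nonlinearity (assumed form of equation 2.2)."""
    return max(x, 0.0)

def step(v, w, theta, beta, tau_v=12.0, tau_w=100.0, tau_th=2.0, q=2, r=2):
    """One iteration of a scalar (n = 1, N = 1) stand-in for equations
    2.1 to 2.4: leaky integration of the rectified synaptic drive, a
    thresholded weight update, and a smoothed threshold tracking v^r."""
    v_next = (1 - 1/tau_v) * v + (1/tau_v) * f(w * v + beta)
    w_next = w + (1/tau_w) * v**q * (v - theta)
    theta_next = (1 - 1/tau_th) * theta + (1/tau_th) * v**r
    return v_next, w_next, theta_next

v, w, theta = 0.0, 0.0, 0.0
rates = np.empty(5000)
for k in range(rates.size):
    v, w, theta = step(v, w, theta, beta=1.0)
    rates[k] = v                  # the firing-rate trace to be inspected
```

For q = r = 1, the same step reduces to a thresholded Hebb update, in line with the correspondence noted above.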

3.  Noninvertibility of the Map

The map, equations 2.1 to 2.4, is invertible if and only if it has an inverse that yields, for any values of the map parameters and any real values of the firing rates at time k + 1, unique real values for the firing rates at time k. If this is not the case, the map is said to be noninvertible. It should be noted that only nonnegative (i.e., zero or positive) values are acceptable for the firing rates.

Theorem 1. 

The map (equations 2.1 to 2.4) is noninvertible.

Proof. 
Denoting
formula
3.1
equation 2.1 can be written as
formula
3.2
and equation 2.3 can be written as
formula
3.3
where, by equation 2.4,
formula
3.4
Substituting equations 3.3 and 3.4 into equation 3.2 and employing equation 3.1, we obtain
formula
3.5
where
formula
3.6
Given a unique positive real value for υi(k + 1), i = 1, …, N − 1, the last N − 1 equations of equation 3.5 yield unique positive real values for υi(k), i = 0, …, N − 2. It remains to determine whether, given, in addition, υ0(k + 1), equation 3.5 provides a unique positive real value for υN−1(k). The threshold nonlinearity, equation 2.2, and the first equation of equation 3.5 imply that either
formula
3.7
or
formula
3.8
If equation 3.7 is satisfied, then, by the first equation of equation 3.5,
formula
Hence, equation 3.8 is satisfied in the range
formula
3.9
In order to show that the map is noninvertible, it will suffice, then, to show that equation 3.8 does not have a unique real-valued solution for υN−1(k). Substituting equation 3.6 into 3.8, we have
formula
3.10
The left-hand side of equation 3.10 can be written as a polynomial in υN−1(k):
formula
3.11
where, incorporating equation 3.5,
formula
3.12
The ith row of the vector equation 3.10 is a scalar polynomial equation of degree q + r + 1 in υi,N−1(k). The terms of equation 3.12 corresponding to the scalar polynomial are, in descending order of power of υi,N−1(k),
formula
3.13
where ωi,j(k + 1) is the i, jth scalar element of W(k + 1), and [T−1υβ(k)]i and [μ(k + 1)]i are the ith scalar elements of the vectors T−1υβ(k) and
formula
respectively. While the map at hand is polynomial in the firing rates, it is also a transcendental function of the initial conditions and the parameter values (including, in our case, external activation inputs; indeed, local bifurcations are caused by changes in these parameters). If all the neurons have the same parameter values, the same initial conditions, and symmetric connectivity, they all must behave in the same manner. This means that, within this parametric domain (the domain of symmetry), all the scalar rows of equation 3.10 represent the same scalar polynomial equation. If the ith neuron is self-connected, the ith row of equation 3.10 can then be written as a univariate polynomial equation in υi,N−1(k), whose terms are, in descending order of power,
formula
3.14
(note that due to the assumed symmetry, all the synaptic weights are the same; hence, ω(k + 1) = ωi,j(k + 1), i, j = 1, …, n). If the neurons are self-disconnected, n will be replaced by n − 1 in all but the term for pi,1(υN−1(k)). Descartes’ rule of signs (Fine & Rosenberger, 1997) states that the number of positive real roots of a univariate polynomial equation with real coefficients equals the number of sign changes in the coefficient sequence, or falls short of it by an even number, when the coefficients are ordered according to descending powers of the polynomial. It can be seen that since υi,N−1(k) is positive, there is an even number of Descartes sign changes in equation 3.14 within the domain
formula
3.15
(For r > 2, this follows immediately from equation 3.14.) In order to see that it is also true for r = 2, let us write equation 3.11 for the scalar case,
formula
3.16
which, for r = 2, can be rearranged in the order of decreasing powers of υN−1(k) as
formula
3.17
It can be seen that, according to equation 3.14, the right-hand side of equation 3.17 has two Descartes sign changes regardless of the sign of [pi,q+r(υN−1(k)) + pi,q+2(υN−1(k))]. For r = 1, equation 3.16 can be rearranged as
formula
3.18
Now, whether the term [pi,q+r(υN−1(k)) + pi,q+1(υN−1(k))] is positive or negative, the right-hand side of equation 3.18 has two Descartes sign changes. It follows that within the domain 3.15, the map, under parametric symmetry, has no unique positive real inverse; hence, it is noninvertible. This holds for networks of any number of self-connected neurons and for networks of two or more self-disconnected neurons (for a single self-disconnected neuron, all of the terms of equation 3.14 but the last two vanish, and the map becomes invertible).
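
The counting argument can be checked mechanically. The sketch below implements the Descartes sign-change count for an arbitrary coefficient list and compares it with the positive real roots found numerically; the quadratic used is merely a hypothetical stand-in for equation 3.14 with two sign changes.

```python
import numpy as np

def descartes_positive_bound(coeffs):
    """Sign changes in the descending-power coefficient list; by
    Descartes' rule, the number of positive real roots equals this
    count or falls short of it by an even number."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

# A hypothetical stand-in for equation 3.14 with two sign changes:
# either two positive real roots exist, or none do.
coeffs = [1.0, -3.0, 2.0]                  # x^2 - 3x + 2 = (x - 1)(x - 2)
print(descartes_positive_bound(coeffs))    # 2
roots = np.roots(coeffs)
print(roots[(abs(roots.imag) < 1e-12) & (roots.real > 0)].real)  # two positive roots
```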

4.  Chaotic Coding and Chaotic Multiplexity

The regular dynamic modes of the firing rates process are defined by fixed points, cycles, and saddles of the map, which have also been called periodic points or singularities (Abraham et al., 1997; Mira, 2000). Li and Yorke (1975) established a formal framework for the characterization of chaotic maps. Given an interval J, a continuous map f : J → J is chaotic in the Li-Yorke sense if the following conditions are satisfied:

  1. For every n = 1, 2, … there is a periodic point in J having period n.

  2. There is an uncountable (“scrambled”) set S ⊂ J, containing no periodic points, which satisfies the following conditions:

    a. For every x, y ∈ S with x ≠ y,
      1. lim inf_{k→∞} |f^k(x) − f^k(y)| = 0, and
      2. lim sup_{k→∞} |f^k(x) − f^k(y)| > 0.
    b. For every x ∈ S and any periodic point y of f,
      lim sup_{k→∞} |f^k(x) − f^k(y)| > 0,
where f^k(x) = f(f^(k−1)(x)). Condition 2a implies sensitivity to initial conditions. While trajectories originating at different initial conditions will come arbitrarily close to each other (see condition 2a1), they will not converge to each other (see condition 2a2). Condition 2b implies that for a given subset (S) of initial conditions, the periodic points of the map are not asymptotically attractive.
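
Condition 1 can be appreciated through the result that gives Li and Yorke's paper its title: a single period-3 point forces periodic points of every period, and with them Li-Yorke chaos. A minimal sketch, on the logistic map at a parameter value with a well-known attracting 3-cycle (a textbook example rather than the neural map of this letter):

```python
a = 3.83                     # logistic parameter with a well-known 3-cycle
x = 0.5
for _ in range(2000):        # relax onto the attracting period-3 orbit
    x = a * x * (1 - x)
cycle = [x]
for _ in range(3):
    cycle.append(a * cycle[-1] * (1 - cycle[-1]))
print(cycle)                 # three distinct values, then back to the first
```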

Marotto's theorem (1978, 2005), extended to noninvertible piecewise-smooth maps by Gardini and Tramontana (2010), implies that a sufficient condition for chaotic behavior in the Li-Yorke sense is the existence of a snap-back repeller, which is defined as follows (Marotto, 2005): Suppose that z is a fixed point of a map Q with all eigenvalues of DQ(z), the Jacobian of the map at z, exceeding 1 in magnitude, and suppose that there exists a point x(0) ≠ z in a repelling neighborhood of z such that x(M) = z and det(DQ^M(x(0))) ≠ 0 for some positive integer M, where x(k) = Q^k(x(0)). Then z is called a snap-back repeller of Q. As in the case of noninvertibility, we shall say that a map is chaotic if it is chaotic for some parameter values. We have the following result:

Theorem 2. 

The map, equations 2.1 to 2.4, is chaotic.

Proof. 
As in the proof of noninvertibility, let us assume for analytical purposes that all neurons possess the same parameters and initial conditions. Then they all exhibit the same behavior, facilitating the algebraic convenience of a scalar map, which allows for the removal of the index i. The analysis is further simplified by noting that, as ep is arbitrarily smaller than 1 for a sufficiently small value of τ and the firing rates are physically bounded, the dependence of υ0(k + 1) on υp(k), p ≥ 1, diminishes as τ → 0 (as shown in the appendix, the error caused by eliminating the corresponding terms from the map decays exponentially as τ → 0). It can be seen from equation 2.3 that for υ(k) > 1 and a sufficiently small τθ, the changes in ω(k) are negative (as υ(k) − θ(k) is negative), and for some finite k > 0, ω(k) becomes negative. The Jacobian of the scalar version of the map, equation 2.1, which is negative, is greater in magnitude than 1 for ω(k) < −2 − 1/τυ. This magnitude becomes arbitrarily large for a sufficiently small value of τω. At some point, υ(k) = z, the scalar version of the map, equation 2.1, will cross the line υ(k + 1) = υ(k), and at a point υ(k) ≡ b = −β(k)/ω(k), where b > z, the map will bend and become υ(k + 1) = ((τυ − 1)/τυ)υ(k). The bending zone of the map is depicted in Figure 1. A cobweb diagram (Abraham et al., 1997) is constructed as follows. A straight line passing through z in parallel to the υ(k) axis crosses the υ(k + 1) = ((τυ − 1)/τυ)υ(k) part of the map at some point, say, υ(1). A straight line passing through υ(1) in parallel to the υ(k + 1) axis crosses the line υ(k + 1) = υ(k) at some point, say, c, and a straight line passing through c in parallel to the υ(k) axis crosses the map at a point, say, υ(0), above z. For a sufficiently large value of τυ, the lines υ(k + 1) = ((τυ − 1)/τυ)υ(k) and υ(k + 1) = υ(k) become nearly parallel. The distance between the points υ(0) and z then becomes arbitrarily close to the distance between the points c and υ(1), which becomes arbitrarily close to the distance between the points b and z. As, for a sufficiently small value of τω, the slope of the map at the two points is arbitrarily close to infinity, the distance between them is arbitrarily close to (1/τυ)υ(k). And as, for a sufficiently small value of τω, υ(k) is arbitrarily close to 1 at b and z, the distance between these points is arbitrarily close to 1/τυ. Hence, for a sufficiently large value of τυ, the point υ(0) is in an arbitrarily small neighborhood of z and the slope of the map at υ(0) is greater in magnitude than 1. The sequence υ(k), k = 0, 1, 2, which first diverges from υ(0) to υ(1) and then snaps back from υ(1) to υ(2) = z, establishes z as a snap-back repeller of the map. The assertion now follows from theorem 1 and Gardini and Tramontana's extension (2010) of Marotto's theorem (1978, 2005) to noninvertible piecewise-smooth maps.
Figure 1:

Snap-back repeller z. The map (see equation 3.3), the line υ(k + 1) = υ(k), and the cobweb graph for the specified parameter values are represented by the solid, the dashed, and the dash-dot lines, respectively.

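The backward-and-forward construction of the proof can be traced numerically on the scalar bent map. In the sketch below, the parameter values are hypothetical (those used for Figure 1 are not reproduced here) and the synaptic weight is frozen at a negative value, as the proof argues it becomes; a point x(0) ≠ z in the repelling zone is obtained by backward iteration from z, and its forward orbit snaps back onto z.

```python
tau_v, w, beta = 20.0, -60.0, 1.0     # hypothetical parameter values
b = -beta / w                          # bend point of the map

def g(v):
    """Scalar bent map of the proof: steep decreasing branch below the
    bend b, contracting branch v -> ((tau_v - 1)/tau_v) v above it."""
    return (1 - 1/tau_v) * v + (1/tau_v) * max(w * v + beta, 0.0)

s = (tau_v - 1 + w) / tau_v            # slope of the steep branch
c = beta / tau_v                       # intercept of the steep branch
z = c / (1 - s)                        # fixed point on the steep branch
assert abs(g(z) - z) < 1e-12 and abs(s) > 1 and z < b   # z repels

# Backward iteration from z: one step along the contracting branch, one
# along the steep branch, landing at x0 != z inside the repelling zone.
x1 = z / (1 - 1/tau_v)                 # g(x1) = z, with x1 above the bend
x0 = (x1 - c) / s                      # g(x0) = x1, with x0 below the bend
assert x0 != z and x0 < b

# The forward orbit diverges from z and then snaps back onto it.
print([x0, g(x0), g(g(x0))], z)        # g(g(x0)) equals z up to roundoff
```
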
The domain of a map decomposes into the basins of point attractors, cyclic attractors, chaotic attractors, and the “basin of infinity,” which consists of all points whose trajectories run away from any bounded set (Abraham et al., 1997). While, mathematically, indefinite divergence of the neuronal firing rates is possible, physical constraints exclude such divergence. At the same time, asymptotic convergence to a periodic point from the scrambled set S is excluded by the Li-Yorke conditions (specifically, condition 2b). Initiated within the scrambled set, a trajectory will enter the basin of a chaotic attractor, which, for a polynomial map, has a highly patterned geometric and statistical (intensity) structure (Field & Golubitsky, 2009). It can be seen that our map is polynomial in the domain P(υp(k), p = 0, …, N − 1)>0. Global bifurcations from chaotic and repelling basins (the latter a part of the basin of infinity) into cyclic (and point) attractors are made possible by absorbing areas (Abraham et al., 1997).

A chaotic attractor may be viewed, then, as an integral part of the neural firing code. It will produce a mixture of firing rates corresponding to its geometric and intensity structure. We call the temporal production of such a mixture chaotic multiplexity. The characteristics of single-mode, bifurcated, chaotic and multiplexed firing are demonstrated by the examples in section 5.

5.  Examples

We simulated a two-neuron circuit in a closed-loop configuration, employing the BCM version of the model (equations 2.1 to 2.4, with q = r = 2). Three sets of parameter values were considered. Taking N = 10, the relative approximation error ρ (see the appendix) was smaller than 0.07 in all cases.

Set 1: Different Time Constants

  1. τi = 25, .

  2. τi = 25, .

  3. τi = 12, .

  4. .

In all cases υi(0) = 0, ωi,j(0) = 0, , βi = 1, i, j = 1, 2.

It can be seen from Figure 2 that cases a and b represent single-mode firing activities—the first an initial transient attenuating to a constant firing rate and the second an initial transient attenuating to a constant oscillatory firing rate. Cases c and d represent bifurcations from a relatively uniform spiking mode to multiplexed modes of different natures, decaying to constant firing rates.

Figure 2:

Firing rates for set 1.

Set 2: Different Initial Synaptic Weights

  1. ω1,1(0) = ω2,2(0) = 0, ω1,2(0) = ω2,1(0) = −1.

  2. ω1,1(0) = ω2,2(0) = 1, ω1,2(0) = ω2,1(0) = 0.

  3. ω1,1(0) = 0.1, ω2,2(0) = 0, ω1,2(0) = ω2,1(0) = −0.1.

  4. ω1,1(0) = ω2,2(0) = −0.1, ω1,2(0) = ω2,1(0) = 0.

In all cases υi(0) = 0, τi = 12, , , βi = 1, i, j = 1, 2.

It can be seen from Figure 3 that while case a represents an initial transient attenuating to a constant firing rate, case b represents an instantaneous transient followed by constant-amplitude spiking with a varying waveform. Case c shows a brief period of spiking, bifurcating into a constant state of zero firing. Case d shows a brief period of spiking, bifurcating into a multiplexed signal attenuating to zero.

Figure 3:

Firing rates for set 2.

Set 3: Different Initial Firing Rates

  1. υi(0) = 0, i = 1, 2.

  2. υi(0) = 0.1, i = 1, 2.

In both cases ωi,j(0) = 0, , , βi = 1, i, j = 1, 2.

The seemingly disorderly behaviors displayed in both cases a and b of set 3, as seen in Figure 4, persisted for the 10,000 iterations checked. It can be seen that the individual time sequences generated by the different initial conditions are substantially different, and the difference persists along time. Such sensitivity to initial conditions is typical of chaotic maps.

Figure 4:

Firing rates for set 3.

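A scalar stand-in suffices to probe the sensitivity seen in Figure 4. The sketch below (assumed map form and hypothetical parameters, with the synaptic weight frozen at a negative value) runs two orbits from nearby initial rates; if the parameters indeed place the map in a chaotic regime, the gap between the orbits neither vanishes nor stays small.

```python
import numpy as np

def orbit(v0, steps=10000, tau_v=20.0, w=-60.0, beta=1.0):
    """Orbit of the scalar bent map of section 4, with the synaptic
    weight frozen at a negative value (hypothetical parameters)."""
    v, out = v0, np.empty(steps)
    for k in range(steps):
        v = (1 - 1/tau_v) * v + (1/tau_v) * max(w * v + beta, 0.0)
        out[k] = v
    return out

a, b = orbit(0.0), orbit(1e-3)     # two nearby initial firing rates
gap = np.abs(a - b)
print(gap[::2000])                 # the gap fluctuates instead of closing
```
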
Figure 5 shows the sampling of the phase space υ1 × υ2 by the trajectories corresponding to cases a (left-hand figures) and b (right-hand figures) of set 3. The initial 10% of each of the trajectories were excluded so as to eliminate initial transients. From top to bottom, the three rows in the figure show the phase-space samples generated for the two initial conditions in 100, 1000, and 10,000 iterations, respectively. While the sampling graphs produced in 100 iterations are quite different for the two initial conditions, those produced in 1000 iterations are considerably more similar, and those produced in 10,000 iterations are almost identical.

Figure 5:

Phase-space formation of a chaotic attractor for set 3. Panels a, c, and e correspond to the initial condition υ1(0) = υ2(0) = 0; panels b, d, and f correspond to the initial condition υ1(0) = υ2(0) = 0.1. Plots a and b show the phase samples generated for the two cases in 100 iterations, plots c and d the samples generated in 1000 iterations, and plots e and f the samples generated in 10,000 iterations.

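The statistical convergence displayed in Figure 5 can be probed with the following self-contained sketch of the two-neuron circuit. The discretized forms and parameter values are assumptions (several of the printed values were lost), so this is a qualitative stand-in for the reported simulation, not a reproduction of it.

```python
import numpy as np

def f(x):
    return np.maximum(x, 0.0)     # rectifying nonlinearity (assumed form)

def simulate(v0, steps, tau_v=12.0, tau_w=200.0, tau_th=5.0, beta=1.0):
    """Two-neuron closed-loop BCM sketch under the same assumed
    discretization as in section 2; parameter values are stand-ins."""
    v = np.array(v0, dtype=float)
    W, theta = np.zeros((2, 2)), np.zeros(2)
    V = np.empty((steps, 2))
    for k in range(steps):
        v = (1 - 1/tau_v) * v + (1/tau_v) * f(W @ v + beta)
        W += (1/tau_w) * np.outer(v * (v - theta), v)   # BCM-like update
        theta = (1 - 1/tau_th) * theta + (1/tau_th) * v**2
        V[k] = v
    return V[steps // 10:]        # drop the initial 10% (transients)

hists = []
for v0 in ([0.0, 0.0], [0.1, 0.1]):           # the two cases of set 3
    V = simulate(v0, steps=10000)
    h, _, _ = np.histogram2d(V[:, 0], V[:, 1], bins=50,
                             range=[[0, 2], [0, 2]],   # assumed rate bound
                             density=True)
    hists.append(h)
# If both orbits sample the same attractor, the occupancy statistics
# should agree even though the individual trajectories differ:
print(np.abs(hists[0] - hists[1]).mean())
```
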
Finally, we note that in the reported simulations, the neurons involved were allowed to form self-connections. When self-disconnections were enforced by nullifying the corresponding synaptic weights, the simulation results were of a similar nature.

6.  Discussion

We have analytically shown that, depending on parameter values, neural firing can become noninvertible and chaotic. As the first property implies an inability to infer past from present states and the second an inability to infer future from past states, one may wonder what the utility of these seemingly peculiar properties might be. While noninvertibility and chaos have been identified with ambiguity and deteriorated performance in certain applications involving artificial feedforward networks (Bertels, Neuberg, Vassiliadis, & Pechanek, 1998; Gicquel, Anderson, & Kevrekidis, 1998; Reco-Martinez, Adomaitis, & Kevrekidis, 2000; Verschure, 1991) and in biological neural networks (Fell et al., 1993), several works have noted the utility of chaos in learning and problem solving by annealing (Lin, 2001; Ohta, 2002; Liu et al., 2006; Verschure, 1991; Sato, Akiyama, & Farmer, 2002). Moreover, it has been claimed (Langton, 1990) and disputed (Mitchell, Hraber, & Crutchfield, 1993) that the boundary between order and chaos provides favorable conditions for universal computation. Chaos has been incorporated into neuron models through a bimodal logistic function (Aihara, Takabe, & Toyoda, 1990), simplified phenomenological models (Lin, Ruen, & Zhao, 2002; Shilnikov & Rulkov, 2003), or negative self-feedback (Lin, 2001; Ohta, 2002; Liu et al., 2006). Our analysis shows that map noninvertibility and chaotic behavior apply not only to networks of self-connected neurons but also to networks of self-disconnected neurons. However, the problem-solving and computation capabilities of biological neural networks are rather vague notions, and the possible role of chaos in such networks has remained a mystery.

The analytic approach taken in this work is particularly noteworthy in view of the fact that empirical measures, such as the Lyapunov exponents, often fail to provide conclusive evidence of chaos. We suggest that the highly patterned geometric and statistical structures of the chaotic attractors associated with neural networks make such attractors an integral part of the neural code. While our mathematical analysis has primarily led to the consideration of symmetric neural circuits, resulting, as in the cases considered by Field and Golubitsky (2009), in symmetric chaotic attractors, we have also noticed in simulation that highly patterned, albeit nonsymmetric, chaotic attractors arise in nonsymmetric neural circuits. Providing a transcendental prescription for mixing different firing rates, a chaotic attractor constitutes a natural mechanism for statistical encoding and temporal multiplexing of neural information. Signal multiplexing is known to enhance information transmission capacity and is widely used in communication systems (Li & Stuber, 2010). However, in contrast to technological applications, which require rhythmic and synchronous multiplexing, especially for the purpose of decoding (Keller, Piazzo, Mandarini, & Hanzo, 2001), biological multiplexing does not appear to require temporal precision (Panzeri et al., 2009). It should be noted that the application of chaos for the purpose of synchronization in technological communication systems (Itoh & Chua, 1997) is quite different from our context, where there is no synchronization. Since the model considered in our work deals with firing rates, amplitude corresponds to firing frequency. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by band-pass filtering. Neuronal low-pass (Pettersen & Einevoll, 2008) and high-pass (Poon, Young, & Siniaia, 2000) filtering have been reported. Yet the raw multiplexed signal, representing, as implied by the models under consideration, locally smoothed (averaged) information, can be useful in sensory systems. (Multiplexed red, green, and blue color coding is a known example found in both biological and technological vision systems; Hunt, 2004.) Combined with empirical evidence on multiplexed firing rates in sensory cortices, this study suggests a constructive role for chaos in biological neural networks.
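
As an illustration of the band-pass demultiplexing suggested above, the sketch below separates a hypothetical multiplexed rate signal into a slow and a fast component with a zero-phase Butterworth filter; the signal, sampling rate, and bands are invented for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                            # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Hypothetical multiplexed rate signal: a slow and a fast component.
rate = 5 + 2 * np.sin(2 * np.pi * 3 * t) + np.sin(2 * np.pi * 40 * t)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass; a stand-in for the neuronal
    band-pass filtering suggested in the text."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

slow = bandpass(rate, 1.0, 8.0, fs)    # recovers the 3 Hz component
fast = bandpass(rate, 30.0, 50.0, fs)  # recovers the 40 Hz component
```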

Appendix:  From Continuous to Discrete Time Models

The firing rate υi(t) of the ith of n neurons in a neural network has been represented by the model (Gerstner, 1995; Dayan & Abbott, 2001)
formula
where τi is a time constant, κi(t) is a function representing the membrane response to the input current, and βi(t) is the conductance-based activation input (Jolivet, Lewis, & Gerstner, 2004). The basic Hebb rule of synaptic potentiation (Hertz, Krogh, & Palmer, 1991) may be written as
formula
where ωi is the presynaptic weight vector of the ith neuron, τω,i is the synaptic time constant, and si is the n-dimensional vector having 1 in all positions corresponding to the presynaptic neurons connected to the ith neuron and 0 elsewhere. Because the firing rates are positive quantities, the basic Hebb rule can only produce growth of the synaptic weights, not allowing for weight reduction. A modification of the basic rule to
formula
where θi(t) is a threshold, will allow both processes to take place. It has been suggested that θi be set to the average value of the firing activity; however, a constant θi renders the plasticity rule unstable. A smoothing effect can be achieved by integrating the modulated activity,
formula
for some constant. Since synaptic potentiation is thought to be a slower, smoother process than the firing activity, a common modification of the Hebb model is replacing the right-hand side by its average value for the inputs used during training (Dayan & Abbott, 2001). Similar smoothing can be performed by
formula
The BCM plasticity rule (Bienenstock, Cooper, & Munro, 1982), supported by experimental evidence, has the basic form
formula
We have found this instantaneous update version, much like the basic Hebb rule, to be unstable. A smoothed version is
formula
The BCM rule further suggests a quadratic dependence of the threshold θi(t) on the activity (Bienenstock, Cooper, & Munro, 1982). Immediate dependence of θi(t) on υi(t) would cause abrupt changes in θi(t), which is physiologically unrealistic (Cooper, Intrator, Blais, & Shouval, 2004). The following smoothed dependence of θi(t) on the activity has been suggested (Intrator & Cooper, 1992),
formula
for some constant.
Euler's discretization samples the equations at sufficiently short time intervals δ (assumed, for accuracy, to be much shorter than any time constant involved and to satisfy the Nyquist sampling criterion with respect to any oscillatory mode; Oppenheim, Willsky, & Young, 1983). Employing the exponential kernel,
formula
the model for the firing rates becomes
formula
Hereafter we assume that δ = 1 and that time is expressed in units of δ. A change of variables p = k − ℓ yields
formula
Since υi can be assumed to be bounded and since
formula
the error incurred by truncating the infinite sum decays exponentially. Consequently, the firing rate model can be approximated to any desired accuracy by the model
formula
for some integer N. The value of N may be selected as one that yields a sufficiently small value for the relative error
formula
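
For an exponential kernel, the relative truncation error has a closed geometric form, which the following sketch evaluates; the time constant is illustrative, and this is offered as an assumed special case of the relative error ρ defined above.

```python
import numpy as np

tau = 3.0                        # illustrative time constant (assumed)
e = np.exp(-1.0 / tau)           # one-step decay of the exponential kernel
full = 1.0 / (1.0 - e)           # sum_{p=0}^inf e^p
for N in (1, 5, 10, 20):
    truncated = (1.0 - e**N) / (1.0 - e)   # sum_{p=0}^{N-1} e^p
    print(N, (full - truncated) / full)    # relative error = e^N, decays exponentially
```
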
The discrete model for the firing rates can be written in vector form,
formula
where Tυ, Kυ, and Ep are diagonal matrices of the corresponding elements, f(x) = vec[fi(xi)]i=1,…,n and β(k) = vec[βi(k)]i=1,…,n are vectors of the corresponding elements, and W(k) = [ω1(k) ω2(k) ⋅ ⋅ ⋅ ωn(k)] is a matrix whose columns are ωi(k), i = 1, …, n.
The discrete time version of the smoothed Hebb synaptic plasticity rule is
formula
where
formula
It can be verified that θi(k) becomes the window average of υi(k) as ri → ∞, i = 1, …, n. The discrete time version of the smoothed BCM rule is
formula
with
formula
A generalization of the Hebb and the BCM synaptic plasticity rules takes the form
formula
with
formula
where υ^q(k − p) is the vector whose ith element is υ_i^q(k − p) and
formula

References

Abraham, R. H., Gardini, L., & Mira, C. (1997). Chaos in discrete dynamical systems. Berlin: Springer-Verlag.
Aihara, K., Takabe, T., & Toyoda, M. (1990). Chaotic neural networks. Phys. Lett. A, 144(6–7), 333–338.
Bertels, K., Neuberg, L., Vassiliadis, S., & Pechanek, D. G. (1998). Chaos and neural network learning: Some observations. Neur. Proc. Lett., 7, 69–80.
Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2, 32–48.
Bullock, T. H. (1997). Signals and signs in the nervous system: The dynamic anatomy of electrical activity is probably information-rich. Proc. Natl. Acad. Sci. U.S.A., 94, 1–6.
Carandini, M., & Ferster, D. (2000). Orientation tuning of membrane potential and firing rate in cat primary visual cortex. J. Neurosci., 20, 470–484.
Cooper, L. N., Intrator, N., Blais, B. S., & Shouval, H. Z. (2004). Theory of cortical plasticity. Singapore: World Scientific.
Cymbalyuk, G. S., & Shilnikov, A. L. (2005). Coexistence of tonic spiking oscillations in a leech neuron model. J. Comp. Neurosci., 18, 255–263.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.
Elaydi, S. N. (1999). Discrete chaos. London: Chapman & Hall/CRC.
Elson, R. N., Selverstone, A. I., Abarbanel, H.D.I., & Rabinovich, M. I. (2002). Inhibitory synchronization of bursting in biological neurons: Dependence on synaptic time constant. J. Neurophysiol., 88, 1166–1176.
Fairhall, A. L., Lewen, G. D., Bialek, W., & van Steveninck, R.R.R. (2001). Efficiency and ambiguity in an adaptive neural code. Nature, 412, 787–792.
Fell, J., Roschke, J., & Beckmann, P. (1993). Deterministic chaos and the first positive Lyapunov exponent: A nonlinear analysis of the human electroencephalogram during sleep. Biol. Cybern., 69, 139–146.
Field, M., & Golubitsky, M. (2009). Symmetry in chaos (2nd ed.). Philadelphia: SIAM.
Fine, B., & Rosenberger, G. (1997). The fundamental theorem of algebra. Berlin: Springer-Verlag.
Gardini, L., & Tramontana, F. (2010). Snap-back repellers in non-smooth functions. Regular and Chaotic Dynamics, 2–3, 237–245.
Gerstein, G. L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophys. J., 4, 41–68.
Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E, 51, 738–758.
Gicquel, N., Anderson, J. S., & Kevrekidis, I. G. (1998). Noninvertibility and resonance in discrete-time neural networks for time-series processing. Phys. Lett. A, 238, 8–18.
Gumowski, I., & Mira, C. (1980). Recurrences and discrete dynamical systems. New York: Springer-Verlag.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural computation. Redwood City, CA: Addison-Wesley.
Hunt, R.W.G. (2004). The reproduction of colour (6th ed.). Chichester, UK: Wiley.
Hyland, B. I., Reynolds, J.N.J., Hay, J., Perk, C. G., & Miller, R. (2002). Firing modes of midbrain dopamine cells in the freely moving rat. Neuroscience, 114, 475–492.
Intrator, N., & Cooper, L. N. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks, 5, 3–17.
Itoh, M., & Chua, L. O. (1997). Multiplexing techniques via chaos. In Proceedings of the IEEE International Symposium on Circuits and Systems. Piscataway, NJ: IEEE.
Jolivet, R., Lewis, T. J., & Gerstner, W. (2004). Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol., 92, 959–976.
Kayser, C., Montemurro, M. A., Logothetis, N. K., & Panzeri, S. (2009). Spike-phase coding boosts and stabilizes the information carried by spatial and temporal spike patterns. Neuron, 61, 597–608.
Keller, T., Piazzo, L., Mandarini, P., & Hanzo, L. (2001). Orthogonal frequency division multiplex synchronization techniques for frequency-selective fading channels. IEEE J. Select. Ar. in Comm., 19, 999–1008.
Kuznetsov, A. S., Kopell, N. J., & Wilson, C. J. (2006). Transient high-frequency firing in a coupled-oscillator model of the mesencephalic dopaminergic neuron. J. Neurophysiol., 95, 932–947.
Langton, C. G. (1990). Computation at the edge of chaos: Phase transition and emergent computation. Physica D, 42, 12–37.
Li, L., Gu, H., Yang, M. Y., Liu, Z., & Ren, W. (2004). A series of bifurcation scenarios in the firing pattern transitions in an experimental neural pacemaker. Int. J. Bifur. Chaos, 14, 1813–1817.
Li, T.-Y., & Yorke, J. A. (1975). Period three implies chaos. Am. Math. Month., 82(10), 985–992.
Li, Y., & Stuber, G. (Eds.). (2010). Orthogonal frequency division multiplexing for wireless communications. New York: Springer.
Lin, J.-S. (2001). Annealed chaotic neural network with nonlinear self-feedback and its application to clustering problem. Pattern Recog., 34, 1093–1104.
Lin, W. L., Ruen, J., & Zhao, W. (2002). On the mathematical clarification of the snap-back repeller in high dimensional systems and chaos in a discrete neural network model. Int. J. Bifur. Chaos, 12(5), 1129–1139.
Lisman, J. (2005). The theta/gamma discrete phase code occurring during the hippocampal phase precession may be a more general brain coding scheme. Hippocampus, 15, 913–922.
Liu, W., Shi, H., Wang, L., & Zurada, J. M. (2006). Chaotic cellular neural networks with negative self-feedback. In Artificial Intelligence and Soft Computing (pp. 65–75). Berlin: Springer.
Lundstrom, B. N., & Fairhall, A. L. (2006). Decoding stimulus variance from a distributional neural code of interspike intervals. J. Neurosci., 26, 9030–9037.
Marotto, F. R. (1978). Snap-back repellers imply chaos in R^n. J. Math. Anal. Appl., 63(1), 199–223.
Marotto, F. R. (2005). On redefining a snap-back repeller. Chaos, Solitons and Fractals, 25, 25–28.
Mira, C. (2000). Chaos and fractal properties induced by noninvertibility of models in the form of maps. Chaos, Solitons and Fractals, 11, 251–262.
Mitchell, M., Hraber, P. T., & Crutchfield, J. P. (1993). Revisiting the edge of chaos: Evolving cellular automata to perform computations. Complex Systems, 7, 89–130.
Ohta, M. (2002). Chaotic neural networks with reinforced self-feedbacks and its application to N-queen problem. Math. Comp. Sim., 59, 305–317.
Oppenheim, A. V., Willsky, A. S., & Young, I. T. (1983). Signals and systems. Upper Saddle River, NJ: Prentice Hall.
Panzeri, S., Brunel, N., Logothetis, N. K., & Kayser, C. (2009). Sensory neural codes using multiplexed temporal scales (review). Trends in Neurosciences, 33(3), 111–120.
Pettersen, K. H., & Einevoll, G. T. (2008). Amplitude variability and extracellular low-pass filtering of neuronal spikes. Biophys. J., 94, 784–802.
Poon, C.-S., Young, D. L., & Siniaia, M. (2000). High-pass filtering of carotid-vagal influences on expiration in rat: Role of N-methyl-D-aspartate receptors. Neurosci. Lett., 284, 5–8.
Reco-Martinez, R., Adomaitis, R. A., & Kevrekidis, I. G. (2000). Noninvertibility in neural networks. Computers and Chemical Engineering, 24, 2417–2433.
Ren, W. S., Hu, J., Zhang, B. J., Wang, F. Z., Gong, Y. F., & Xu, J. X. (1997). Period-adding bifurcation with chaos in the interspike intervals generated by an experimental neural pacemaker. Int. J. Bifur. Chaos, 7, 1867–1872.
Sato, Y., Akiyama, E., & Farmer, J. D. (2002). Chaos in learning a simple two-person game. Proc. Natl. Acad. Sci. U.S.A., 99(7), 4748–4751.
Shilnikov, A. L., & Rulkov, N. F. (2003). Origin of chaos in a two-dimensional map modeling spiking-bursting neural activity. Int. J. Bifur. Chaos, 13(11), 3325–3340.
Sprott, J. C. (2003). Chaos and time-series analysis. New York: Oxford University Press.
Verschure, P.F.M.J. (1991). Chaos-based learning. Complex Systems, 5, 359–370.
Wark, B., Fairhall, A., & Rieke, F. (2009). Timescales of inference in visual adaptation. Neuron, 61, 750–761.
Wright, J. (1984). Method for calculating a Lyapunov exponent. Phys. Rev. A, 29(5), 2924–2927.
Zeraoulia, E., & Sprott, J. C. (2010). 2-D quadratic maps and 3-D ODE systems. Singapore: World Scientific.