## Abstract

Widely accepted neural firing and synaptic potentiation rules specify a cross-dependence of the two processes, which, evolving on different timescales, have been separated for analytic purposes, concealing essential dynamics. Here, the morphology of the firing rates process, modulated by synaptic potentiation, is shown to be described by a discrete iteration map in the form of a thresholded polynomial. Given initial synaptic weights, a firing activity is triggered by conductance. Elementary dynamic modes are defined by fixed points, cycles, and saddles of the map, the building blocks of the underlying firing code. Exhibiting parameter-dependent multiplicity of real polynomial roots, the map is proved to be noninvertible. The incidence of chaos is then implied by the parameter-dependent existence of snap-back repellers. The highly patterned geometric and statistical structures of the associated chaotic attractors suggest that these attractors are an integral part of the neural code and that they provide a natural mechanism for statistical encoding and temporal multiplexing of neural information. The analytic findings are supported by simulation.

## 1. Introduction

There is abundant empirical evidence on a variety of neuronal firing modes, including random spiking (Gerstein & Mandelbrot, 1964), tonic spiking (Cymbalyuk & Shilnikov, 2005), transient bursting (Kuznetsov, Kopell, & Wilson, 2006), oscillatory bursting (Elson, Selverston, Abarbanel, & Rabinovich, 2002), bifurcations with chaos (Ren et al., 1997), and bifurcations without chaos (Li, Gu, Yang, Liu, & Ren, 2004). Individual neurons of the same type are often capable of producing different firing modes, switching from one to another in a seemingly unpredictable manner (Hyland, Reynolds, Hay, Perk, & Miller, 2002). However, the mathematical rules underlying such behavior, the purpose it might serve, or the harm it might cause have not been well understood. A particularly meaningful behavior, discovered in sensory cortices, is the temporal multiplexing of different firing rates (Fairhall, Lewen, Bialek, & van Steveninck, 2001; Wark, Fairhall, & Rieke, 2009; Lundstrom & Fairhall, 2006), which enhances the coding and information transmission capacity (Bullock, 1997; Lisman, 2005; Kayser, Montemurro, Logothetis, & Panzeri, 2009). While temporal precision of the multiplexing code can be achieved by narrow windowing, the need for such precision in neural coding has been questioned (Panzeri, Brunel, Logothetis, & Kayser, 2009).

In the dynamic analysis of neural activity, firing and synaptic potentiation have been treated separately—one as a relatively fast process of information transfer, the other as a relatively slow process of learning. However, a complete description of the intricate dynamics requires the integration of these processes into a single model. Real-valued dynamical systems are conveniently represented by discrete iteration maps of the form **x**(*k* + 1) = *F*[**x**(*k*) + **y**(*k*)], which specifies the transition of a state matrix **x** from one time instance to the next, given an input sequence **y**. A map is said to be invertible if there is an inverse map, which produces a unique real solution to the previous state from the present one and the input. If this does not hold for some value of the present state, the map is said to be noninvertible. Since much of the dynamic complexity associated with discrete iteration maps is derived from their correspondence with parameter variation, we shall say that a map is noninvertible if it is noninvertible for some parameter value. Noninvertible discrete iteration maps have been shown to possess a large repertoire of dynamic behaviors associated with map singularities, such as fixed points, cycles, saddles, and fractal boundaries (Gumowski & Mira, 1980; Abraham, Gardini, & Mira, 1997). Although there is no universally accepted definition of chaos, we employ the widely used mathematical definition of Li and Yorke (1975). Studies of chaos have been largely restricted to low-order maps (Abraham et al., 1997; Zeraoulia & Sprott, 2010). Singularities normally define regions of attraction or repulsion, called basins, in the phase space.
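The notion of noninvertibility can be made concrete with a minimal sketch. The logistic map used here is a standard illustrative stand-in, not the neural map developed below: a present state generally has two real preimages, so the previous state cannot be uniquely recovered.

```python
# Illustration (not the neural model): the logistic map x(k+1) = r*x(k)*(1 - x(k))
# is a simple noninvertible iteration map. Inverting it means solving the
# quadratic r*x^2 - r*x + y = 0, which generally has two real roots.
import math

def step(x, r=3.9):
    return r * x * (1.0 - x)

def preimages(y, r=3.9):
    """All real x with step(x) == y, via the quadratic formula."""
    disc = r * r - 4.0 * r * y
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return sorted([(r - s) / (2 * r), (r + s) / (2 * r)])

x_prev = 0.2
y = step(x_prev)
print(preimages(y))   # two distinct preimages, approximately 0.2 and 0.8
```

Here the same present state is reached from two different past states, so the inverse problem admits no unique solution.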

Changes in the nature of the singularities caused by changes in map parameters are called local bifurcations. In contrast, changes in the nature of the dynamics caused by the transition of phase trajectories across basin boundaries are called global bifurcations. Although theoretical aspects of chaos, a centerpiece of dynamical systems theory for the past 30 years, are well understood, the empirical detection of chaos in a given time series is a nontrivial task. Mathematical measures, such as the largest Lyapunov exponent (Wright, 1984), often produce ambiguous empirical results for limited time series, even when applied to data generated by simulating low-dimensional models (Sprott, 2003). Consequently, the intuitive characterization of chaos as a “deterministically unpredictable” process (Elaydi, 1999) often seems as reliable as any formal empirical measure. Noninvertible maps represent, then, not only an inability to deduce past from present states but also, through bifurcation and chaos, an inability to predict future from past states. It has been suggested that chaos may represent deficient states of neural information processing (Fell, Roschke, & Beckmann, 1993). Certain applications, such as the solution of combinatorial problems, suggest a functional role for chaos in artificial neural networks, specifically, annealed escape from local attractors (Lin, 2001; Ohta, 2002; Liu, Shi, Wang, & Zurada, 2006). Yet the role of chaos in biological neural systems has remained a mystery.
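The ambiguity of finite-sample Lyapunov estimates can be seen in a small numerical experiment, again on the logistic map as an illustrative stand-in: the largest exponent is estimated as the orbit average of ln|f′(x)|, and the estimate depends on the length of the available series.

```python
# Illustrative finite-orbit estimate (not the neural model) of the largest
# Lyapunov exponent of the logistic map: lambda ~ (1/n) * sum of ln|f'(x_k)|.
# Short series give noisy values, echoing the empirical ambiguity noted above.
import math

def lyapunov_logistic(r, x0, n, burn=100):
    x = x0
    for _ in range(burn):                       # discard the initial transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)| with f' = r(1 - 2x)
        x = r * x * (1.0 - x)
    return acc / n

# At r = 4 the exact value is ln 2 ~ 0.693; a long orbit approaches it.
print(lyapunov_logistic(4.0, 0.123, 100000))
```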

This letter develops a joint discrete iteration map of neural firing and synaptic potentiation, based on biologically supported models. The map, formed as a multivariate thresholded polynomial, is proved to be noninvertible, regardless of neural self-connectivity. Noninvertibility, coupled with the existence of snap-back repellers (Marotto, 1978, 2005; Gardini & Tramontana, 2010), which is proved to hold for the map of interest under appropriate parameterization, facilitates the incidence of chaos in the Li-Yorke sense. The map at hand is a transcendental function of the initial conditions, the activation levels, and the polynomial coefficients. Noninvertibility and chaos are commonly related to specific parameterizations. In the Li-Yorke framework, the particular set of initial conditions that result in chaos forms an uncountable, scrambled subset of the underlying state space. Local bifurcations, including the onset of chaos, are caused by parameter changes. Parameterizing the underlying neural network in a symmetric manner for analytical purposes, we reduce the multivariate problem to a univariate one, enabling the application of fundamental algebra. The analytical evidence that biological neural networks are prone to chaotic behavior is particularly significant in view of the difficulty of obtaining concrete empirical measurement of chaos. It further implies, by the decomposition of the map portrait into basins of cyclic and chaotic attractors (Abraham et al., 2009) and the nonperiodicity of the range of the Li-Yorke scrambled set (Li & Yorke, 1975), convergence of the latter to a chaotic attractor. The highly patterned structures associated with the chaotic attractors of polynomial maps (Field & Golubitsky, 2009), which are shown in this work to hold for elementary neural circuits, suggest that such attractors are an integral part of the neural firing code. 
The resulting transcendental prescription of the sampling pattern and statistical intensity defines a natural mechanism for mixing and multiplexing different firing rates.

## 2. Discrete Iteration Map of Neural Firing and Synaptic Potentiation

Considering a network of *n* neurons, we first derive approximate discrete-time models by sampling and truncating biologically motivated continuous-time models. As shown in the appendix, the truncated discrete-time version of the spike response model (Gerstner, 1995) for the firing rates is given by equation 2.1, where **υ**(*k*) is the vector of firing rates corresponding to the neurons at time instance *k*, **W**(*k*) is the matrix of synaptic weights, **β**(*k*) is the vector of conductance-based activation inputs, **K**_{υ}, **T**_{υ}, and **E**_{p} are matrices specified in the appendix, *N* is a positive integer that satisfies an approximation criterion, and **f** is a vector-valued function whose scalar elements are defined by the threshold nonlinearity of equation 2.2 (Carandini & Ferster, 2000). A generalized model of synaptic potentiation is given by equations 2.3 and 2.4, where *q* is a positive integer, **υ**^{q} is the vector whose *i*th element is υ_{i}^{q}, **S** is the connectivity matrix, having the value 1 in all positions corresponding to connected neurons and the value 0 elsewhere, * denotes the element-wise product between two matrices of the same dimension, as in [**A** * **X**]_{i,j} = *a*_{i,j}*x*_{i,j}, with *a*_{i,j} and *x*_{i,j} the respective scalar elements, and *r* is a positive integer. The matrices **T**_{ω} and **T**_{θ} are specified in the appendix. It might be noted that the values *q* = *r* = 1 correspond to the Hebb plasticity rule (Hertz, Krogh, & Palmer, 1991), while the values *q* = *r* = 2 correspond to the biologically supported BCM plasticity rule (Bienenstock, Cooper, & Munro, 1982). We have empirically found the latter to be more stable than the former. Although the generalized model, equations 2.3 and 2.4, allows, in principle, for higher values of *q* and *r*, there is no biological evidence for such values. While *N* = 1 corresponds to the basic versions of these rules, *N* > 1 represents their smoothed versions.

## 3. Noninvertibility of the Map

The map, equations 2.1 to 2.4, is invertible if and only if it has an inverse that yields, for any values of the map parameters and any real values of the firing rates at time *k* + 1, unique real values for the firing rates at time *k*. If this is not the case, the map is said to be noninvertible. It should be noted that only positive (including zero) values are acceptable for the firing rates.

Given the values of υ_{i}(*k* + 1), *i* = 1, …, *N* − 1, the last *N* − 1 equations of equation 3.5 yield unique positive real values for υ_{i}(*k*), *i* = 0, …, *N* − 2. It remains to determine whether, given, in addition, υ_{0}(*k* + 1), equation 3.5 provides a unique positive real value for υ_{N−1}(*k*). The threshold nonlinearity, equation 2.2, and the first equation of equation 3.5 imply that either equation 3.6 or equation 3.7 holds. If equation 3.7 is satisfied, then, by the first equation of equation 3.5, equation 3.8 follows. Hence, equation 3.8 is satisfied in the range given by equation 3.9. In order to show that the map is noninvertible, it will suffice, then, to show that equation 3.8 does not have a unique real-valued solution for υ_{N−1}(*k*). Substituting equation 3.6 into 3.8, we obtain equation 3.10, whose left-hand side can be written as a polynomial in υ_{N−1}(*k*), equation 3.11, where, incorporating equation 3.5, the coefficients are given by equation 3.12. The *i*th row of the vector equation 3.10 is a scalar polynomial equation of order *q* + *r* + 1 in the elements of υ_{i,N−1}(*k*). The terms of equation 3.12 corresponding to the scalar polynomial are, in descending order of power of υ_{i,N−1}(*k*), given by equation 3.13, where ω_{i,j}(*k* + 1) is the *i*, *j*th scalar element of **W**(*k* + 1), and [**T**_{υ}^{−1}**β**(*k*)]_{i} and [**μ**(*k* + 1)]_{i} are the *i*th scalar elements of the vectors **T**_{υ}^{−1}**β**(*k*) and **μ**(*k* + 1), respectively.

While the map at hand is polynomial in the firing rates, it is also a transcendental function of the initial conditions and the parameter values (including, in our case, external activation inputs; indeed, local bifurcations are caused by changes in these parameters). If all the neurons have the same parameter values, the same initial conditions, and symmetric connectivity, they all must behave in the same manner. This means that, within this parametric domain (the domain of symmetry), all the scalar rows of equation 3.10 represent the same scalar polynomial equation. If the *i*th neuron is self-connected, the *i*th row of equation 3.10 can then be written as a univariate polynomial equation in υ_{i,N−1}(*k*), whose terms are, in descending order of power, given by equation 3.14 (note that, due to the assumed symmetry, all the synaptic weights are the same; hence, ω(*k* + 1) = ω_{i,j}(*k* + 1), *i*, *j* = 1, …, *n*). If the neurons are self-disconnected, *n* is replaced by *n* − 1 in all but the term for *p*_{i,1}(υ_{N−1}(*k*)).

Descartes' rule of signs (Fine & Rosenberger, 1997) states that the number of positive real roots of a univariate polynomial equation with real coefficients equals the number of sign changes between consecutive nonzero coefficients, ordered according to descending powers of the polynomial, or is smaller than that number by a multiple of two. It can be seen that, since υ_{i,N−1}(*k*) is positive, there is an even number of Descartes sign changes in equation 3.14 within the domain given by equation 3.15. For *r* > 2, this follows immediately from equation 3.14. In order to see that it is also true for *r* = 2, let us write equation 3.11 for the scalar case, equation 3.16, which, for *r* = 2, can be rearranged in the order of decreasing powers of υ_{N−1}(*k*) as equation 3.17. According to equation 3.14, the right-hand side of equation 3.17 has two Descartes sign changes regardless of the sign of [*p*_{i,q+r}(υ_{N−1}(*k*)) + *p*_{i,q+2}(υ_{N−1}(*k*))]. For *r* = 1, equation 3.16 can be rearranged as equation 3.18; whether the term [*p*_{i,q+r}(υ_{N−1}(*k*)) + *p*_{i,q+1}(υ_{N−1}(*k*))] is positive or negative, the right-hand side of equation 3.18 has two Descartes sign changes. It follows that, within the domain of equation 3.15, the map, under parametric symmetry, has no unique positive real inverse; hence, it is noninvertible. This holds for networks of any number of self-connected neurons and for networks of two or more self-disconnected neurons (for a single self-disconnected neuron, all the terms of equation 3.14 but the last two vanish, and the map becomes invertible).
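Descartes' rule of signs, which drives the argument above, is easy to check numerically. The quadratic below is a hypothetical stand-in (not equation 3.14 itself), chosen so that its coefficients change sign twice, yielding two positive real roots and hence two admissible inverses.

```python
# Descartes' rule of signs, and a numeric check that a polynomial condition
# can admit more than one positive real root. The polynomial is a stand-in.
import math

def descartes_sign_changes(coeffs):
    """Sign changes in a coefficient list ordered by descending power (zeros skipped)."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2): two sign changes, two positive roots.
coeffs = [1.0, -3.0, 2.0]
print(descartes_sign_changes(coeffs))

# Confirm both positive roots via the quadratic formula.
a, b, c = coeffs
disc = b * b - 4 * a * c
roots = sorted([(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)])
print(roots)   # approximately [1.0, 2.0]
```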

## 4. Chaotic Coding and Chaotic Multiplexity

The regular dynamic modes of the firing rates process are defined by fixed points, cycles, and saddles of the map, which have also been called periodic points or singularities (Abraham et al., 1997; Mira, 2000). Li and Yorke (1975) established a formal framework for the characterization of chaotic maps. Given an interval *J*, a continuous map *f*: *J* → *J* is chaotic in the Li-Yorke sense if the following conditions are satisfied:

1. For every *n* = 1, 2, …, there is a periodic point in *J* having period *n*.
2. There is an uncountable (“scrambled”) set *S* ⊂ *J*, containing no periodic points, which satisfies the following conditions:
   a. For every *p*, *q* ∈ *S* with *p* ≠ *q*:
      1. lim inf_{k→∞} |*f*^{k}(*p*) − *f*^{k}(*q*)| = 0, and
      2. lim sup_{k→∞} |*f*^{k}(*p*) − *f*^{k}(*q*)| > 0.
   b. For every *p* ∈ *S* and every periodic point *q* ∈ *J*:
      lim sup_{k→∞} |*f*^{k}(*p*) − *f*^{k}(*q*)| > 0,

where *f*^{k}(*x*) = *f*(*f*^{k−1}(*x*)). Condition 2a implies sensitivity to initial conditions. While trajectories originating at different initial conditions will come arbitrarily close to each other (see condition 2a1), they will not converge to each other (see condition 2a2). Condition 2b implies that for a given subset (*S*) of initial conditions, the periodic points of the map are not asymptotically attractive.
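Conditions 2a1 and 2a2 can be observed numerically on a standard chaotic map (the logistic map at *r* = 4, an illustrative stand-in for the neural map): two orbits started nearby repeatedly approach each other while never converging.

```python
# Numeric illustration (logistic map, r = 4, not the neural model) of the
# Li-Yorke scrambling conditions: after the orbits decorrelate, the running
# minimum distance is tiny (they keep approaching) while the running maximum
# distance stays large (they never converge).
def orbit(x0, n, r=4.0):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.3, 20000)
b = orbit(0.3 + 1e-9, 20000)
dists = [abs(p - q) for p, q in zip(a, b)][1000:]   # skip the near-identical phase
print(min(dists), max(dists))
```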

Marotto's theorem (1978, 2005), extended to noninvertible piecewise smooth maps by Gardini and Tramontana (2010), implies that a sufficient condition for chaotic behavior in the Li-Yorke sense is the existence of a snap-back repeller, which is defined as follows (Marotto, 2005): Suppose that *z* is a fixed point of a map *Q* with all eigenvalues of *DQ*(*z*), the Jacobian of the map at *z*, exceeding 1 in magnitude, and suppose that there exists a point *x*(0) ≠ *z* in a repelling neighborhood of *z*, such that *x*(*M*) = *z* and det(*DQ*^{M}(*x*(0))) ≠ 0, for some positive integer *M*, where *x*(*k*) = *Q*^{k}(*x*(0)). Then *z* is called a snap-back repeller of *Q*. As in the case of noninvertibility, we shall say that a map is chaotic if it is chaotic for some parameter values. We have the following result:

The map, equations 2.1 to 2.4, is chaotic in the Li-Yorke sense for appropriate parameter values. To see this, note first that, under parametric symmetry, the analysis reduces to the scalar case, allowing us to drop the neuron index *i*. The analysis is further simplified by noting that, as *e*^{−p/τ} is arbitrarily smaller than 1 for a sufficiently small value of τ and the firing rates are physically bounded, the dependence of υ_{0}(*k* + 1) on υ_{p}(*k*), *p* ≥ 1, diminishes as τ → 0 (as shown in the appendix, the error caused by eliminating the corresponding terms from the map decays exponentially as τ → 0). It can be seen from equation 2.3 that for υ(*k*) > 1 and a sufficiently small τ_{θ}, the changes in ω(*k*) are negative (as υ(*k*) − θ(*k*) is negative), and for some finite *k*, ω(*k*) becomes negative. The Jacobian of the scalar version of the map, equation 2.1, which is negative, is greater in magnitude than 1 for ω(*k*) < −2 − 1/τ_{υ}. This magnitude becomes arbitrarily large for a sufficiently small value of τ_{ω}. At some point, υ(*k*) = *z*, the scalar version of the map, equation 2.1, will cross the line υ(*k* + 1) = υ(*k*), and, at a point υ(*k*) ≡ *b* = −β(*k*)/ω(*k*), where *b* > *z*, the map will bend and become υ(*k* + 1) = ((τ_{υ} − 1)/τ_{υ})υ(*k*). The bending zone of the map is depicted in Figure 1.

A cobweb diagram (Abraham et al., 1997) is constructed as follows. A straight line passing through *z* in parallel to the υ(*k*) axis crosses the υ(*k* + 1) = ((τ_{υ} − 1)/τ_{υ})υ(*k*) part of the map at some point, say, υ(1). A straight line passing through υ(1) in parallel to the υ(*k* + 1) axis crosses the line υ(*k* + 1) = υ(*k*) at some point, say, *c*, and a straight line passing through *c* in parallel to the υ(*k*) axis crosses the map at a point, say, υ(0), above *z*. For a sufficiently large value of τ_{υ}, the lines υ(*k* + 1) = ((τ_{υ} − 1)/τ_{υ})υ(*k*) and υ(*k* + 1) = υ(*k*) become nearly parallel. The distance between the points υ(0) and *z* becomes arbitrarily close to the distance between the points *c* and υ(1), which becomes arbitrarily close to the distance between the points *b* and *z*. As, for a sufficiently small value of τ_{ω}, the slope of the map at the two points is arbitrarily close to infinity, the distance between them is arbitrarily close to (1/τ_{υ})υ(*k*). And as, for a sufficiently small value of τ_{ω}, υ(*k*) is arbitrarily close to 1 at *b* and *z*, the distance between these points is arbitrarily close to 1/τ_{υ}. Hence, for a sufficiently large value of τ_{υ}, the point υ(0) is in an arbitrarily small neighborhood of *z*, and the slope of the map at υ(0) is greater in magnitude than 1. The sequence υ(*k*), *k* = 0, 1, 2, which first diverges from υ(0) to υ(1) and then snaps back from υ(1) to υ(2) = *z*, establishes *z* as a snap-back repeller of the map. The assertion now follows from theorem 1 and Gardini and Tramontana's (2010) extension of Marotto's theorem (1978, 2005) to noninvertible piecewise smooth maps.
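The cobweb construction used in this argument can be sketched generically. The piecewise-linear map below, with a steep repelling branch through the fixed point and a shallow contracting branch beyond the bend, is an illustrative caricature of the bending map of Figure 1, not equation 2.1 itself; all parameter values are assumptions.

```python
# A generic cobweb construction for a one-dimensional map: alternately
# project onto the map and onto the diagonal v(k+1) = v(k).
def cobweb(F, x0, steps):
    """Return the cobweb polyline [(x0, 0), (x0, F(x0)), (F(x0), F(x0)), ...]."""
    pts = [(x0, 0.0)]
    x = x0
    for _ in range(steps):
        y = F(x)
        pts.append((x, y))   # vertical segment to the map
        pts.append((y, y))   # horizontal segment to the diagonal
        x = y
    return pts

def F(v, b=1.2, slope=-8.0, tau_v=20.0):
    """Illustrative bending map: steep branch through the fixed point z = 1,
    shallow contracting branch beyond the bend at v = b."""
    if v < b:
        return max(1.0 + slope * (v - 1.0), 0.0)
    return (tau_v - 1.0) / tau_v * v

pts = cobweb(F, 1.01, 5)
print(pts[:4])
```

Plotting `pts` against the map and the diagonal reproduces the geometric construction described above.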

The domain of a map decomposes into the basins of point attractors, cyclic attractors, chaotic attractors, and the “basin of infinity,” which consists of all points whose trajectories run away from any bounded set (Abraham et al., 1997). While, mathematically, indefinite divergence of the neuronal firing rates is possible, physical constraints exclude such divergence. At the same time, asymptotic convergence to a periodic point from the scrambled set *S* is excluded by the Li-Yorke conditions (specifically, condition 2b). Initiated within the scrambled set, a trajectory will enter the basin of a chaotic attractor, which, for a polynomial map, has a highly patterned geometric and statistical (intensity) structure (Field & Golubitsky, 2009). It can be seen that our map is polynomial in the domain **P**(*υ*_{p}(*k*), *p* = 0, …, *N* − 1)>0. Global bifurcations from chaotic and repelling basins (the latter a part of the basin of infinity) into cyclic (and point) attractors are made possible by absorbing areas (Abraham et al., 1997).

A chaotic attractor may be viewed, then, as an integral part of the neural firing code. It will produce a mixture of firing rates corresponding to its geometric and intensity structure. We call the temporal production of such a mixture chaotic multiplexity. The characteristics of single-mode, bifurcated, chaotic and multiplexed firing are demonstrated by the examples in section 5.

## 5. Examples

We simulated a two-neuron circuit in a closed-loop configuration, employing the BCM version of the model (equations 2.1 to 2.4, with *q* = *r* = 2). Three sets of parameter values were considered. Taking *N* = 10, the relative approximation error ρ (see the appendix) was smaller than 0.07 in all cases.

**Set 1: Different Time Constants**

- Case a: τ_{i} = 25, ….
- Case b: τ_{i} = 25, ….
- Case c: τ_{i} = 12, ….
- Case d: ….

In all cases υ_{i}(0) = 0, ω_{i,j}(0) = 0, …, β_{i} = 1, *i*, *j* = 1, 2.

It can be seen from Figure 2 that cases a and b represent single-mode firing activities—the first an initial transient attenuating to a constant firing rate and the second an initial transient attenuating to a constant oscillatory firing rate. Cases c and d represent bifurcations from a relatively uniform spiking mode to multiplexed modes of different natures, decaying to constant firing rates.

**Set 2: Different Initial Synaptic Weights**

- Case a: ω_{1,1}(0) = ω_{2,2}(0) = 0, ω_{1,2}(0) = ω_{2,1}(0) = −1.
- Case b: ω_{1,1}(0) = ω_{2,2}(0) = 1, ω_{1,2}(0) = ω_{2,1}(0) = 0.
- Case c: ω_{1,1}(0) = 0.1, ω_{2,2}(0) = 0, ω_{1,2}(0) = ω_{2,1}(0) = −0.1.
- Case d: ω_{1,1}(0) = ω_{2,2}(0) = −0.1, ω_{1,2}(0) = ω_{2,1}(0) = 0.

In all cases υ_{i}(0) = 0, τ_{i} = 12, …, β_{i} = 1, *i*, *j* = 1, 2.

It can be seen from Figure 3 that while case a represents an initial transient attenuating to a constant firing rate, case b represents an instantaneous transient followed by constant-amplitude spiking with a varying waveform. Case c shows a brief period of spiking, bifurcating into a constant state of zero firing. Case d shows a brief period of spiking, bifurcating into a multiplexed signal attenuating to zero.

**Set 3: Different Initial Firing Rates**

- Case a: υ_{i}(0) = 0, *i* = 1, 2.
- Case b: υ_{i}(0) = 0.1, *i* = 1, 2.

In both cases ω_{i,j}(0) = 0, …, β_{i} = 1, *i*, *j* = 1, 2.

The seemingly disorderly behaviors displayed in both cases a and b of set 3, as seen in Figure 4, persisted for the 10,000 iterations checked. It can be seen that the individual time sequences generated by the different initial conditions are substantially different, and the difference persists along time. Such sensitivity to initial conditions is typical of chaotic maps.

Figure 5 shows the sampling of the phase space υ_{1} × υ_{2} by the trajectories corresponding to cases a (left-hand figures) and b (right-hand figures) of set 3. The initial 10% of each of the trajectories were excluded so as to eliminate initial transients. From top to bottom, the three rows in the figure show the phase-space samples generated for the two initial conditions in 100, 1000, and 10,000 iterations, respectively. While the sampling graphs produced in 100 iterations are quite different for the two initial conditions, those produced in 1000 iterations are considerably more similar, and those produced in 10,000 iterations are almost identical.

Finally, we note that in the reported simulations, the neurons involved were allowed to form self-connections. When self-disconnections were enforced by nullifying the corresponding synaptic weights, the simulation results were of a similar nature.
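The statistical convergence observed in Figure 5 can be reproduced in miniature on a one-dimensional chaotic map (the logistic map at *r* = 4, an illustrative stand-in for the two-neuron circuit): empirical occupancy histograms computed from two different initial conditions disagree for short orbits but become nearly identical for long ones.

```python
# Empirical occupancy histograms of a chaotic orbit (logistic map, r = 4,
# an illustrative stand-in): short orbits sample the attractor differently
# for different initial conditions; long orbits yield matching statistics.
def histogram(x0, n, bins=20, r=4.0):
    counts = [0] * bins
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / n for c in counts]

def l1(h1, h2):
    return sum(abs(u - v) for u, v in zip(h1, h2))

short = l1(histogram(0.2, 100), histogram(0.7, 100))
long_ = l1(histogram(0.2, 100000), histogram(0.7, 100000))
print(short, long_)   # the long-orbit discrepancy is much smaller
```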

## 6. Discussion

We have analytically shown that, depending on parameter values, neural firing can become noninvertible and chaotic. As the first property implies an inability to infer past from present states and the second an inability to infer future from past states, one may wonder what the utility of these seemingly peculiar properties might be. While noninvertibility and chaos have been identified with ambiguity and deteriorated performance in certain applications involving artificial feedforward networks (Bertels, Neuberg, Vassiliadis, & Pechanek, 1998; Gicquel, Anderson, & Kevrekidis, 1998; Rico-Martinez, Adomaitis, & Kevrekidis, 2000; Verschure, 1991) and in biological neural networks (Fell et al., 1993), several works have noted the utility of chaos in learning and problem solving by annealing (Lin, 2001; Ohta, 2002; Liu et al., 2006; Verschure, 1991; Sato, Akiyama, & Farmer, 2002). Moreover, it has been claimed (Langton, 1990) and disputed (Mitchell, Hraber, & Crutchfield, 1993) that the boundary between order and chaos provides favorable conditions for universal computation. Chaos has been incorporated into neuron models through a bimodal logistic function (Aihara, Takabe, & Toyoda, 1990), simplified phenomenological models (Lin, Ruen, & Zhao, 2002; Shilnikov & Rulkov, 2003), or negative self-feedback (Lin, 2001; Ohta, 2002; Liu et al., 2006). Our analysis shows that map noninvertibility and chaotic behavior apply not only to networks of self-connected neurons but also to networks of self-disconnected neurons. However, the problem-solving and computation capabilities of biological neural networks are rather vague notions, and the possible role of chaos in such networks has remained a mystery.

The analytic approach taken in this work is particularly noteworthy in view of the fact that empirical measures, such as the Lyapunov exponents, often fail to provide conclusive evidence of chaos. We suggest that the highly patterned geometric and statistical structures of the chaotic attractors associated with neural networks make such attractors an integral part of the neural code. While our mathematical analysis has primarily led to the consideration of symmetric neural circuits, resulting, as in the cases considered by Field and Golubitsky (2009), in symmetric chaotic attractors, we have also noticed in simulation that highly patterned, albeit nonsymmetric, chaotic attractors arise in nonsymmetric neural circuits. Providing a transcendental prescription for mixing different firing rates, a chaotic attractor constitutes a natural mechanism for statistical encoding and temporal multiplexing of neural information. Signal multiplexing is known to enhance information transmission capacity and is widely used in communication systems (Li & Stuber, 2010). However, in contrast to technological applications, which require rhythmic and synchronous multiplexing, especially for the purpose of decoding (Keller, Piazzo, Mandarini, & Hanzo, 2001), biological multiplexing does not appear to require temporal precision (Panzeri et al., 2009). It should be noted that the application of chaos for the purpose of synchronization in technological communication systems (Itoh & Chua, 1997) is quite different from our context, where there is no synchronization. Since the model considered in our work deals with firing rates, amplitude corresponds to firing frequency. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by band-pass filtering. Neuronal low-pass (Pettersen & Einevoll, 2008) and high-pass (Poon, Young, & Siniaia, 2000) filtering have been reported.
Yet the raw multiplexed signal, representing, as implied by the models under consideration, locally smoothed (averaged) information, can be useful in sensory systems. (Multiplexed red, green, and blue color coding is a known example found in both biological and technological vision systems; Hunt, 2004). Combined with empirical evidence on multiplexed firing rates in sensory cortices, this study suggests a constructive role for chaos in biological neural networks.

## Appendix: From Continuous to Discrete Time Models

The firing rate υ_{i}(*t*) of the *i*th of *n* neurons in a neural network has been represented by the spike response model (Gerstner, 1995; Dayan & Abbott, 2001), in which τ_{υ} is a time constant, κ_{i}(*t*) is a function representing the membrane response to the input current, and β_{i}(*t*) is the conductance-based activation input (Jolivet, Lewis, & Gerstner, 2004). The basic Hebb rule of synaptic potentiation (Hertz, Krogh, & Palmer, 1991) may be written as

τ_{ω} d**ω**_{i}(*t*)/d*t* = υ_{i}(*t*)(**υ**(*t*) * **s**_{i}),

where **ω**_{i} is the presynaptic weights vector of the *i*th neuron, τ_{ω} is the synaptic time constant, and **s**_{i} is the *n*-dimensional vector having 1 in all positions corresponding to the preneurons connected to the neuron and 0 elsewhere. Because the firing rates are positive quantities, the basic Hebb rule can only produce growth of the synaptic weights, not allowing for weight reduction. A modification of the basic rule, replacing the postsynaptic rate υ_{i}(*t*) by the difference υ_{i}(*t*) − θ_{i}(*t*), where θ_{i}(*t*) is a threshold, will allow both processes to take place. It has been suggested to set θ_{i} to the average value of the firing activity; however, a constant θ_{i} deems the plasticity rule unstable. A smoothing effect can be achieved by integrating the modulated activity over a window governed by some constant τ_{θ}. Since synaptic potentiation is thought to be a slower, smoother process than the firing activity, a common modification of the Hebb model replaces the right-hand side by its average value over the inputs used during training (Dayan & Abbott, 2001); similar smoothing can be performed by leaky integration of the update term. The BCM plasticity rule (Bienenstock, Cooper, & Munro, 1982), supported by experimental evidence, has the basic form in which the weight update is modulated by the factor υ_{i}(*t*)(υ_{i}(*t*) − θ_{i}(*t*)). We have found this instantaneous update version, much like the basic Hebb rule, to be unstable; a smoothed version is obtained in the same manner. The BCM rule further suggests a quadratic dependence of the threshold θ_{i}(*t*) on the activity (Bienenstock et al., 1982). Immediate dependence of θ_{i}(*t*) on υ_{i}(*t*) would cause abrupt changes in θ_{i}(*t*), which is physiologically unrealistic (Cooper, Intrator, Blais, & Shouval, 2004). The following smoothed dependence of θ_{i}(*t*) on the activity has been suggested (Intrator & Cooper, 1992):

τ_{θ} dθ_{i}(*t*)/d*t* = −θ_{i}(*t*) + υ_{i}^{2}(*t*),

where τ_{θ} is some constant.

Sampling the firing rate model and substituting *p* = *k* − ℓ yields a representation of the firing rate as an infinite sum over past activity. Since υ_{i} can be assumed to be bounded and the summation kernel decays exponentially, the error incurred by truncating the infinite sum decays exponentially as well. Consequently, the firing rate model can be approximated to any desired accuracy by an *N*-term model for some integer *N*. The value of *N* may be selected as one that yields a sufficiently small value for the relative truncation error ρ. The discrete model for the firing rates can be written in the vector form of equation 2.1, where **K**_{υ}, **T**_{υ}, and **E**_{p} are diagonal matrices of the corresponding elements, **f**(**x**) = vec[*f*_{i}(*x*_{i})]_{i=1,…,n} and **β**(*k*) = vec[β_{i}(*k*)]_{i=1,…,n} are vectors of the corresponding elements, and **W**(*k*) = [**ω**_{1}(*k*) **ω**_{2}(*k*) ⋅⋅⋅ **ω**_{n}(*k*)] is a matrix whose columns are **ω**_{i}(*k*), *i* = 1, …, *n*.

In discrete time, θ_{i}(*k*) becomes a window average of the activity for an appropriate choice of the smoothing parameters, *i* = 1, …, *n*. The discrete-time version of the smoothed BCM rule follows in the same manner. A generalization of the Hebb and the BCM synaptic plasticity rules takes the form of equations 2.3 and 2.4, where **υ**^{q}(*k* − *p*) is the vector whose *i*th element is υ_{i}^{q}(*k* − *p*).
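The exponential decay of the truncation error invoked above can be verified with a short computation; the kernel *e*^{−p/τ} and the parameter values are assumptions for illustration.

```python
# The appendix truncates an exponentially weighted history sum at N terms.
# With an assumed kernel e^(-p/tau) and a bounded input, the fraction of the
# kernel mass discarded by the truncation decays exponentially in N.
import math

def tail_fraction(N, tau):
    """Fraction of the kernel mass sum_{p>=0} e^(-p/tau) lying beyond p = N - 1."""
    a = math.exp(-1.0 / tau)
    total = 1.0 / (1.0 - a)      # geometric series sum
    tail = a ** N / (1.0 - a)    # geometric tail
    return tail / total          # algebraically equals a**N = e^(-N/tau)

for N in (1, 5, 10):
    print(N, tail_fraction(N, tau=3.0))
```

Since the tail fraction equals *e*^{−N/τ} exactly, doubling *N* squares the relative truncation error.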

## References
