## Abstract

Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks of arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete-time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extended to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically the groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks.
For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and we analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.

## 1  Introduction

In networks of neurons in real biological brains, the vast majority of synapses are asymmetric, meaning that the synapse from one neuron to another has a different strength from the synapse in the reverse direction (DeFelipe, Marco, Busturia, & Merchán-Pérez, 1999). In many cases, a synapse may be present between one neuron and another along one direction but not in the other direction. Because of this, the most biologically relevant mathematical models of neural networks are those with asymmetric synaptic weights. These models are widespread and have been intensively studied numerically (see, e.g., Bray & Moore, 1984; Mattia & Del Giudice, 2000; Amit & Mongillo, 2003; Markram, 2006; Morrison, Straube, Plesser, & Diesmann, 2007; Brette et al., 2007; Paik, Kumar, & Glaser, 2009; Yucesoy, Katzgraber, & Machta, 2012; Baños et al., 2012; Sanz Leon et al., 2013; Wang, Machta, & Katzgraber, 2015; Zhu, Ochoa, & Katzgraber, 2015). Yet neural network models with asymmetric synaptic connections that can be mathematically solved either exactly in closed form or through approximations are rare and typically are made of model neurons with discrete, spin-like outputs (see, e.g., Derrida, 1980; Crisanti & Sompolinsky, 1988; Roudi & Hertz, 2011; Sakellariou, Roudi, Mezard, & Hertz, 2012; Dahmen, Bos, & Helias, 2016). Developing closed-form analytical solutions of such neural network models is important because asymmetric networks typically exhibit a rich set of oscillatory behaviors that do not occur in models with symmetric connections. In symmetric networks, only a limited set of oscillations can occur, and only if the network is synchronously updated (Goles-Chacc, Fogelman-Soulie, & Pellegrin, 1985) or if time delays are present (Marcus & Westervelt, 1989). Closed-form solutions of asymmetric models would thus make it possible to thoroughly study the formation and the functional role of a wide range of neural oscillations.
In particular, a full analytical understanding of asymmetric neural network models would help us to better understand how the emergence of neural oscillations depends on the network parameters in systems composed of excitatory and inhibitory populations, to understand how oscillations can be stored in the neural tissue, and to study how oscillations influence the integration of information among neurons.

An added difficulty in studying asymmetric neural networks is that the powerful methods of equilibrium statistical mechanics cannot be applied, because no energy function exists for these systems. Due to their complexity, models with spin-like discrete neurons generally admit exact analytical solutions only for specific network topologies and distributions of the synaptic weights, and only in some limiting cases or mean-field approaches, such as the replica trick (Mézard, Parisi, & Virasoro, 1986), the limit of an infinite number of spin components (Hertz, Grinstein, & Solla, 1987), or the thermodynamic limit of infinite network size (Crisanti & Sompolinsky, 1987; Derrida et al., 1987). Yet many biologically important neural networks have a finite or even small size, such as microcolumns in the primate cortex (Mountcastle, 1997) or the nervous system of some invertebrates (Williams & Herrup, 1988). Thus, it is desirable that methods for the mathematical study of networks with asymmetric connections be able to solve analytically the dynamics of networks of arbitrary size, including small sizes.

Here, we make progress on these issues by showing how a spin-like (discrete output) model of neural networks with asymmetric connections can be solved exactly without resorting to a limiting case or a mean-field approximation. In particular, our solutions are valid for networks composed of an arbitrary number of neurons, do not require additional constraints (such as the synaptic dilution assumption adopted in Derrida, Gardner, & Zippelius, 1987), and can be derived in the presence of noisy inputs with any statistics.

Our approach is based on calculating exact analytical solutions for the conditional probability distributions of the membrane potentials and the firing rates, as well as for the joint probability distributions in the stationary regime (see section 3.1). The ability to study the network analytically enabled us to define a new learning rule that safely stores stationary and oscillating solutions even in the presence of noise (see section 3.2), study the qualitative changes in network dynamics as a function of the input parameters by means of bifurcation theory (see section 3.3), and compute experimentally observable quantities such as correlations among the activity of different neurons (see section 3.4). While all the results are valid for networks composed of an arbitrary number of neurons, for completeness, we also derived the mean-field equations in the thermodynamic limit of infinite network size, and we found exact analytical expressions of the local codimension 1 bifurcations (see section 3.5).

## 2  Materials and Methods

### 2.1  The Stochastic Firing-Rate Network Model

In this section, we describe the neural network model we study.

The activity of each neuron in the network is influenced (through synapses that may have asymmetric weights) by the activity of other neurons in the network, as well by an external input that has both a deterministic and a noisy component. For clarity and simplicity, in the main text, we study and solve a model in which neurons are driven by independent noise sources with gaussian distribution. In the supplemental information, we show how to solve the model with an arbitrary noise distribution.

We assume that time is discrete and that all neurons in the network are synchronously updated, a common assumption in studying network dynamics (Amari, 1972; Little, 1974; Peretto, 1984; Ermentrout, 1998). The basic equation defining the network dynamics is the following one, describing the membrane potential $V_i(t+1)$ of the $i$th neuron at the time instant $t+1$:
$V_i(t+1) = \frac{1}{M_i}\sum_{j=0}^{N-1} J_{ij}\,\mathcal{H}\big(V_j(t)-\theta_j\big) + I_i(t) + \sigma_i^B B_i(t),\quad i=0,\dots,N-1,$
2.1
where $N \ge 2$ represents the number of neurons in the network and is generally finite. $I_i(t)$ is the deterministic component of the external input to the $i$th neuron. $\sigma_i^B B_i(t)$ is the stochastic component of the stimulus, where $B_i(t)$ for $i=0,\dots,N-1$ are independent normally distributed noise sources with unit variance. $\sigma_i^B$ represents the overall intensity (i.e., the standard deviation) of the noisy term. Moreover, $\mathcal{H}(\cdot)$ is the Heaviside step function with threshold $\theta$,
$\mathcal{H}(V-\theta) = \begin{cases} 0 & \text{if } V \le \theta \\ 1 & \text{otherwise,} \end{cases}$
2.2
which converts the membrane potential of the $j$th neuron into its corresponding binary firing rate, $\nu_j(t) = \mathcal{H}\big(V_j(t)-\theta_j\big) \in \{0,1\}$. $J_{ij}$ is the synaptic weight from the $j$th (presynaptic) neuron to the $i$th (postsynaptic) neuron. In this letter, the matrix $J$ is arbitrary; therefore, it may be asymmetric, and self-connections (loops) may be present. $M_i$ is the total number of incoming connections to the $i$th neuron and represents a normalization factor that prevents the divergence of the sum $\sum_{j=0}^{N-1} J_{ij}\,\mathcal{H}\big(V_j(t)-\theta_j\big)$ for large networks.
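The synchronous update of equation 2.1 can be iterated directly. The following is a minimal sketch, not code from the letter: the function name `simulate` is ours, we count the nonzero incoming weights to obtain $M_i$, and we assume a constant stimulus and gaussian noise as in the main text.

```python
import numpy as np

def simulate(J, theta, I, sigma, V0, T, rng=None):
    """Iterate equation 2.1: V_i(t+1) = (1/M_i) sum_j J_ij H(V_j(t) - theta_j)
    + I_i + sigma_i^B B_i(t), with H(V - theta) = 0 if V <= theta, 1 otherwise."""
    rng = rng or np.random.default_rng(0)
    V = np.array(V0, dtype=float)
    N = V.size
    # Assumption: M_i counted as the number of nonzero incoming weights.
    M = np.count_nonzero(J, axis=1)
    M[M == 0] = 1  # guard against neurons with no incoming connections
    trace = [V.copy()]
    for _ in range(T):
        nu = (V > theta).astype(float)  # binary firing rates H(V - theta)
        V = (J @ nu) / M + I + sigma * rng.standard_normal(N)
        trace.append(V.copy())
    return np.array(trace)
```

With an antisymmetric two-neuron coupling and zero noise, this map settles on a periodic orbit, illustrating the oscillations discussed in section 3.1.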
In section S1 of the supplemental information, we prove that under the following change of variables between the (continuous) membrane potentials $V_i(t)$ and the (discrete) neural activity $A_i(t)$,
$V_i(t+1) = \frac{1}{M_i}\sum_{j=0}^{N-1} J_{ij} A_j(t) + I_i(t) + \sigma_i^B B_i(t),$
2.3
and in the special case when the matrix
$\hat{J} \overset{\mathrm{def}}{=} \begin{bmatrix} \frac{J_{0,0}}{M_0} & \cdots & \frac{J_{0,N-1}}{M_0} \\ \vdots & \ddots & \vdots \\ \frac{J_{N-1,0}}{M_{N-1}} & \cdots & \frac{J_{N-1,N-1}}{M_{N-1}} \end{bmatrix}$
2.4
is invertible, equation 2.1 implies
$A_i(t+1) = \mathcal{H}\!\left(\frac{1}{M_i}\sum_{j=0}^{N-1} J_{ij} A_j(t) + I_i(t) + \sigma_i^B B_i(t) - \theta_i\right),\quad i=0,\dots,N-1.$
2.5
From equation 2.5, it follows that the probability distribution of the neural activities $A_i(t)$ can be obtained by the theory of Markov processes (see, e.g., Coolen 2001a, 2001b), which can be used to calculate, through equation 2.3, the probability distribution of the membrane potentials $V_i(t)$. This provides a direct connection between our model and the theory of Markov processes for networks with invertible matrices $\hat{J}$. To complement this work, in sections S1 and S2 of the supplemental information, we introduce an alternative and more general approach for deriving the probability distributions that does not require any constraint on $\hat{J}$.

To conclude, we observe that in the special case when $\hat{J}$ is symmetric with zeros on the main diagonal ($\hat{J}_{ii}=0$) and $I_i=\sigma_i^B=0$ $\forall i$, equation 2.5 describes the synchronous version of the network model introduced by Hopfield (1982). Note, however, that when the connections are asymmetric, our model will exhibit important differences in dynamics and behavior with respect to the Hopfield model, as we show in section 3.

### 2.2  Numerical Simulations

To further validate our analytical results, we compared them with numerical simulations (see section 3, where all the results in the figures are expressed in simulation units).

The joint probability distributions of the membrane potentials and the firing rates in the stationary regime have been obtained through a Monte Carlo method by iteratively solving equation 2.1 in the temporal interval $t \in [0, 100]$. Then the probability distributions have been evaluated from the data collected at $t=100$ through a kernel density estimator. We assume that at $t=100$ the probability distributions have already reached a stationary regime, which is confirmed by the good agreement between the analytical and numerical results. The network statistics have been calculated numerically in a similar way, by means of the sample variance and sample (distance) correlation over the Monte Carlo repetitions.
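The Monte Carlo scheme above can be sketched as follows. The function name, the random initial condition, and the encoding of the final binary state by its decimal index are our choices, and the trial and time-step counts are placeholders rather than the exact values used for the figures.

```python
import numpy as np
from collections import Counter

def stationary_rate_histogram(J, theta, I, sigma, trials=1000, T=100, seed=1):
    """Estimate the stationary distribution of the firing rates by running
    `trials` independent realizations of equation 2.1 and recording the
    binary state at the final time step, indexed by its decimal value D(nu)."""
    rng = np.random.default_rng(seed)
    N = len(theta)
    M = np.maximum(np.count_nonzero(J, axis=1), 1)  # assumed M_i: nonzero weights
    counts = Counter()
    for _ in range(trials):
        V = rng.standard_normal(N)  # random initial membrane potentials
        for _ in range(T):
            nu = (V > theta).astype(float)
            V = (J @ nu) / M + I + sigma * rng.standard_normal(N)
        # MSB-first binary state, matching the convention of section 3.1
        counts[int("".join(str(int(b)) for b in (V > theta)), 2)] += 1
    return {state: n / trials for state, n in counts.items()}
```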

To conclude, the bifurcation diagrams of the network in the thermodynamic limit have been obtained numerically by the MatCont Matlab toolbox (Dhooge, Govaerts, & Kuznetsov, 2003).

## 3  Results

In this section, we study the dynamical and statistical properties of the network defined above. In section 3.1, we report the exact solutions of the conditional probability distributions of the membrane potentials and the firing rates, as well as the joint probability distributions in the stationary regime. In section 3.2, we invert the formula of the conditional probability distribution of the firing rates, which allows us to define a new learning rule to safely store stationary and oscillatory patterns of neural activity even in the presence of noise. In section 3.3, we study analytically the bifurcations of the network dynamics in the zero-noise limit $\sigma^B \to 0$, in terms of its codimension 1 and codimension 2 bifurcation diagrams. In section 3.4, we derive exact analytical expressions for the groupwise cross-correlations of the membrane potentials and the firing rates for any noise intensity. Finally, in section 3.5, we report the mean-field equations of the network, which are exact in the thermodynamic limit $N \to \infty$, and we analytically study their local bifurcations. All the analytical results are extensively validated by numerical simulations.

### 3.1  Conditional and Joint Probability Distributions

We begin by studying the probability distribution of network activity. We call $V = [V_0,\dots,V_{N-1}]^T$ the vector containing the membrane potentials of all $N$ neurons at time $t+1$ and $V'$ the vector of the membrane potentials of all neurons at time $t$. As proven in the supplemental information (see section S2.1.3), the conditional probability distribution of the membrane potentials $V$ at time $t+1$, given the membrane potential values $V'$ at the previous time step, is
$p(V \,|\, V') = \frac{1}{(2\pi)^{\frac{N}{2}} \prod_{i=0}^{N-1} \sigma_i^B} \prod_{m=0}^{N-1} e^{-\frac{1}{2}\left(\frac{V_m - \frac{1}{M_m}\sum_{n=0}^{N-1} J_{mn}\mathcal{H}(V_n'-\theta_n) - I_m(t)}{\sigma_m^B}\right)^2}.$
3.1
Equation 3.1 holds for any $t$ and also for time-varying stimuli $I_i(t)$. Moreover, if the network statistics reach a stationary regime (typically this occurs for $t \to \infty$ and for constant stimuli $I_i(t)=I_i$ $\forall t$), the joint probability distribution of the membrane potentials is
$p(V) = \frac{1}{(2\pi)^{\frac{N}{2}} \prod_{i=0}^{N-1} \sigma_i^B} \sum_{j=0}^{2^N-1} F_j \prod_{m=0}^{N-1} e^{-\frac{1}{2}\left(\frac{V_m - \frac{1}{M_m}\sum_{n=0}^{N-1} J_{mn}\mathcal{B}_{j,n}^N - I_m}{\sigma_m^B}\right)^2}.$
3.2
In equation 3.2, $\mathcal{B}_{j,n}^N \overset{\mathrm{def}}{=} \left[\mathcal{B}_j^N\right]_n$ is the $n$th component of $\mathcal{B}_j^N$, the latter being the $N \times 1$ vector whose entries are the digits of the binary representation of $j$ (e.g., $\mathcal{B}_5^4 = [0,1,0,1]^T$, so that $\mathcal{B}_{5,0}^4 = \mathcal{B}_{5,2}^4 = 0$ and $\mathcal{B}_{5,1}^4 = \mathcal{B}_{5,3}^4 = 1$). By taking advantage of the piecewise constant activation function $\mathcal{H}(\cdot)$, we showed (see section S2.1 of the supplemental information) that the coefficient $F_j$ represents the probability that the firing rate vector $[\nu_0(t),\dots,\nu_{N-1}(t)]^T \overset{\mathrm{def}}{=} [\mathcal{H}(V_0(t)-\theta_0),\dots,\mathcal{H}(V_{N-1}(t)-\theta_{N-1})]^T$ equals the binary representation of the index $j$ in the stationary regime. For example, in a network of size $N=4$, we get $F_{12} = P\big([\nu_0(t),\nu_1(t),\nu_2(t),\nu_3(t)]^T = [1,1,0,0]^T\big)$ in the limit $t \to \infty$. It is important to observe that the number of coefficients $F_j$ increases as $2^N$; therefore, the calculation of the stationary probability distribution (see equation 3.2) is characterized by an exponential complexity in the network size $N$.
From equation 3.2, we obtain the following expression of the single-neuron marginal probability distribution:
$p_i(V) = \frac{1}{\sqrt{2\pi}\,\sigma_i^B} \sum_{j=0}^{2^N-1} F_j\, e^{-\frac{1}{2}\left(\frac{V - \frac{1}{M_i}\sum_{k=0}^{N-1} J_{ik}\mathcal{B}_{j,k}^N - I_i}{\sigma_i^B}\right)^2},\quad i=0,\dots,N-1.$
3.3
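Once the coefficients $F_j$ are known, equation 3.3 is a mixture of $2^N$ gaussians and can be evaluated directly. A minimal sketch (the function name and argument layout are ours, and we again assume $M_i$ is the number of nonzero incoming weights):

```python
import numpy as np

def marginal_pdf(V, i, F, J, I, sigma):
    """Evaluate the single-neuron marginal p_i(V) of equation 3.3: a mixture
    of 2^N gaussians whose means are set by the binary vectors B_j^N and
    whose mixture weights are the stationary coefficients F_j."""
    N = J.shape[0]
    M = max(np.count_nonzero(J[i]), 1)  # assumption: M_i = nonzero weights
    p = 0.0
    for j in range(2 ** N):
        # Digits of the binary representation of j, MSB-first as in the text.
        B = np.array([int(b) for b in np.binary_repr(j, width=N)])
        mu = (J[i] @ B) / M + I[i]
        p += F[j] * np.exp(-0.5 * ((V - mu) / sigma[i]) ** 2)
    return p / (np.sqrt(2.0 * np.pi) * sigma[i])
```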
Examples of comparison between the analytical and numerical probability distributions of the membrane potentials for $N=2$ and $N=5$ are shown in Figure 1 and in Figures 2A and 2B, respectively (see also Tables 1 and 2). The correctness of equation 3.3 is illustrated more rigorously in Figure 2C, where we showed that the Kullback-Leibler divergence between the analytical single-neuron distributions and the corresponding numerical distributions decreases with the number of Monte Carlo repetitions of the network dynamics.
Figure 1:

Examples of probability distributions for $N=2$. This figure reports examples of probability distributions of the membrane potentials obtained for an asymmetric network of two neurons. (A, B) The red curves show the single-neuron marginal probability distributions of the two neurons, as given by equation 3.3. The blue dots represent the numerical evaluation of the same distributions. (C) Two-neuron joint probability distribution, as given by equation 3.2 (left) and numerically evaluated (right). The numerical distributions of this figure have been obtained as described in section 2.2 through a Monte Carlo method over $10^6$ repetitions of the network dynamics. All panels of this figure are obtained for the model parameter values in Table 1.

Table 1:
Set of Parameters Used for Generating Figures 1 and 3.
 $J=\begin{bmatrix} 0 & -11 \\ 11 & 0 \end{bmatrix},\quad I=\begin{bmatrix} 1 \\ -1 \end{bmatrix},\quad \sigma^B=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$
Table 2:
Set of Parameters Used for Generating Figure 2.
 $J=\begin{bmatrix} 0 & 40 & -36 & 60 & -36 \\ 10 & 40 & -40 & 32 & -40 \\ 40 & 80 & 0 & 40 & -85 \\ 26 & 0 & -56 & 0 & -84 \\ 36 & 64 & -44 & 48 & 0 \end{bmatrix},\quad I=\begin{bmatrix} -1 \\ 0 \\ -2 \\ 2 \\ 0 \end{bmatrix},\quad \sigma^B=\begin{bmatrix} 2 \\ 1 \\ 1 \\ 2 \\ 3 \end{bmatrix}$
Neuroscientists often make use of measures of correlation between firing rates rather than between membrane potentials. For this reason, it is interesting to also evaluate the probability distributions of the firing rates, which in our model are binary quantities defined as $\nu_i(t) \overset{\mathrm{def}}{=} \mathcal{H}(V_i(t)-\theta_i)$. If we introduce the vector $\nu = [\nu_0,\dots,\nu_{N-1}]^T$ containing the firing rates of all $N$ neurons at time $t+1$ and call $\nu'$ the vector of the firing rates of all neurons at time $t$, then from equations 3.1 to 3.3 it is easy to prove (see section S2.2.1 of the supplemental information) that the conditional probability distribution and the stationary joint and single-neuron marginal distributions of the firing rates are
$P(\nu \,|\, \nu') = \frac{1}{2^N} \prod_{m=0}^{N-1} \left[1 + (-1)^{\nu_m}\,\mathrm{erf}\!\left(\frac{\theta_m - \frac{1}{M_m}\sum_{n=0}^{N-1} J_{mn}\nu_n' - I_m(t)}{\sqrt{2}\,\sigma_m^B}\right)\right],$
3.4
$P(\nu) = \frac{1}{2^N} \sum_{j=0}^{2^N-1} F_j \prod_{m=0}^{N-1} \left[1 + (-1)^{\nu_m}\,\mathrm{erf}\!\left(\frac{\theta_m - \frac{1}{M_m}\sum_{n=0}^{N-1} J_{mn}\mathcal{B}_{j,n}^N - I_m}{\sqrt{2}\,\sigma_m^B}\right)\right],$
3.5
$P_i(\nu) = \frac{1}{2} \sum_{j=0}^{2^N-1} F_j \left[1 + (-1)^{\nu}\,\mathrm{erf}\!\left(\frac{\theta_i - \frac{1}{M_i}\sum_{n=0}^{N-1} J_{in}\mathcal{B}_{j,n}^N - I_i}{\sqrt{2}\,\sigma_i^B}\right)\right],\quad i=0,\dots,N-1,$
3.6
respectively. Since $P(\nu) = F_{\mathcal{D}(\nu)}$, where $\mathcal{D}(\nu)$ is the decimal representation of the binary vector $\nu$ (e.g., $\mathcal{D}([1,0,1,1,0]^T) = 22$ in a network of size $N=5$), from equation 3.5 it finally follows that
$F_{\mathcal{D}(\nu)} = \frac{1}{2^N} \sum_{j=0}^{2^N-1} F_j \prod_{m=0}^{N-1} \left[1 + (-1)^{\nu_m}\,\mathrm{erf}\!\left(\frac{\theta_m - \frac{1}{M_m}\sum_{n=0}^{N-1} J_{mn}\mathcal{B}_{j,n}^N - I_m}{\sqrt{2}\,\sigma_m^B}\right)\right].$
By writing this identity for all the $2^N$ firing rate states $\nu$, we obtain a linear system of algebraic equations in the unknowns $F_j$. This system can be solved by taking into account the constraint $\sum_{j=0}^{2^N-1} F_j = 1$, leading to an analytical expression of the coefficients $F_j$ in terms of the network parameters.
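This linear system can also be assembled and solved numerically from equation 3.4. The sketch below uses our own function names and solves the constrained system by least squares, one of several equivalent choices; $M_i$ is again assumed to be the number of nonzero incoming weights.

```python
import numpy as np
from math import erf
from itertools import product

def transition_matrix(J, theta, I, sigma):
    """State-to-state matrix T with T[D(nu), D(nu')] = P(nu | nu') from
    equation 3.4, over all 2^N binary firing-rate states (MSB-first order)."""
    N = len(theta)
    M = np.maximum(np.count_nonzero(J, axis=1), 1)
    states = [np.array(s) for s in product([0, 1], repeat=N)]
    T = np.zeros((2 ** N, 2 ** N))
    for jp, nu_p in enumerate(states):      # column: previous state nu'
        u = (J @ nu_p) / M + I              # mean membrane potential per neuron
        # P(nu_m = 1) = (1/2) [1 - erf((theta_m - u_m) / (sqrt(2) sigma_m))]
        p1 = 0.5 * (1 - np.array([erf((theta[m] - u[m]) / (np.sqrt(2) * sigma[m]))
                                  for m in range(N)]))
        for j, nu in enumerate(states):
            T[j, jp] = np.prod(np.where(nu == 1, p1, 1 - p1))
    return T

def stationary_F(T):
    """Solve F = T F together with sum(F) = 1 for the coefficients F_j."""
    n = T.shape[0]
    A = np.vstack([T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    F, *_ = np.linalg.lstsq(A, b, rcond=None)
    return F
```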

We also observe that while equations 3.1 to 3.3 represent probability density functions (pdfs), namely, probability distributions of continuous random variables (the membrane potentials), equations 3.4 to 3.6 represent probability mass functions (pmfs), namely, probability distributions of discrete random variables (the firing rates). In particular, $P(\nu\,|\,\nu')$ is known as a state-to-state transition probability matrix in the context of the theory of Markov processes (Feller, 1971). Examples of $P(\nu)$ for $N=5$ and $N=2$ are shown in Figures 2D and 3D, which report the comparison between the analytical and numerical probability distribution of the firing rates over the $2^5=32$ and $2^2=4$ states of the network, respectively.

Figure 2:

Examples of probability distributions for $N=5$. Examples of probability distributions of the membrane potentials obtained for an asymmetric network of five neurons. (A, B) Examples of the single-neuron marginal probability distributions of the membrane potentials. (C) Kullback-Leibler divergence between the analytical single-neuron distributions and the corresponding numerical distributions for an increasing number of Monte Carlo repetitions (trials) of the network dynamics. The divergence decreases with the number of repetitions; therefore, the numerical distribution is better approximated by its analytical counterpart when the statistical error (due to the finite number of repetitions) decreases. (D) Comparison between the analytical joint probability distribution of the firing rates (red bars) and the corresponding numerical distribution (blue bars) over the $2^N=32$ states of the network activity. All panels of this figure are obtained for the model parameter values in Table 2.

Figure 3:

An example of oscillatory dynamics for $N=2$. An example of oscillatory dynamics obtained for an asymmetric network of two neurons that corresponds to the stationary probability distributions of Figure 1. (A) An example of temporal dynamics of the membrane potentials during the oscillation. (B) The dynamics in the phase space of the model is illustrated by plotting the membrane potential of the first neuron as a function of the potential of the second neuron. (C) Plot of the transition probabilities between the four possible states of the firing rates $ν$, computed according to equation 3.4. (D) Comparison between the analytical joint probability distribution of the firing rates (red bars, calculated by equation 3.5) and the corresponding numerical distribution (blue bars). All panels of this figure are obtained for the model parameter values in Table 1.

It is important to observe that while the synchronous version of the Hopfield network (which has symmetric connections) can sustain only oscillations with period $T=2$ (known as two-cycles; Goles-Chacc et al., 1985), Figure 3 (see also Table 1) shows that the asymmetric network can undergo oscillations with larger period. In this figure, the oscillations are perturbed by the presence of the noisy terms $\sigma_i^B B_i(t)$, so that the network switches randomly between an oscillation with $T=3$, $00 \to 10 \to 01 \to 00$, and an oscillation with $T=4$, $00 \to 10 \to 11 \to 01 \to 00$. In particular, Figure 3A shows an example of oscillatory temporal evolution of the stochastic membrane potentials, while Figure 3B represents the corresponding dynamics in the phase space of the network. Then, Figure 3C describes the formation of neural oscillations in terms of the state-to-state transition probability matrix.

### 3.2  A New Learning Rule for Storing Point and Cyclic Attractors

At time $t$, the state of the neural network is described by the vector of the firing rates, $\nu(t) = [\nu_0(t),\dots,\nu_{N-1}(t)]^T$, which represents the activity pattern of the system at time $t$. In the context of content-addressable memories, one aims to determine a synaptic connectivity matrix $J$ that stores one or more desired sequences of activity patterns. The way such a matrix is built defines a learning rule for storing these patterns.

In particular, we suppose we want to store $D$ pattern sequences $\nu^{(i)}(t_0) \to \dots \to \nu^{(i)}(t_{L_i})$ of length $L_i+1$, for $i=0,\dots,D-1$. By inverting equation 3.4, in section S3 of the supplemental information we prove that if, for example, the network is fully connected without loops, the matrix $J$ that stores these pattern sequences satisfies the following sets of linear algebraic equations:
$\Omega_j J_j = u_j,\quad j=0,\dots,N-1.$
3.7
In equation 3.7, $J_j$ is the $(N-1) \times 1$ vector with entries $J_{jk}$ for $k \ne j$ (the weights $J_{jj}$ are equal to zero by hypothesis; therefore, they are already known). Moreover, if we define $L \overset{\mathrm{def}}{=} \sum_{i=0}^{D-1} L_i$, $\mathcal{L}_i \overset{\mathrm{def}}{=} \sum_{k=1}^{i} L_{k-1}$ (for $i>0$), and $\mathcal{L}_0 \overset{\mathrm{def}}{=} 0$, then $u_j$ is an $L \times 1$ vector with entries
$u_{j,\mathcal{L}_i+n_i} = (N-1)\left[\theta_j - (-1)^{\nu_j^{(i)}(t_{n_i+1})} K_j^{(i,n_i)} \sqrt{2}\,\sigma_j^B - I_j\right],\quad n_i=0,\dots,L_i-1,\quad i=0,\dots,D-1,$
where $K_j^{(i,n_i)}$ is any sufficiently large and positive constant. Moreover, $\Omega_j$ is the $L \times (N-1)$ matrix obtained by removing the $j$th column of the matrix
$\Omega = \begin{bmatrix} \nu_0^{(0)}(t_0) & \cdots & \nu_{N-1}^{(0)}(t_0) \\ \vdots & \ddots & \vdots \\ \nu_0^{(0)}(t_{L_0-1}) & \cdots & \nu_{N-1}^{(0)}(t_{L_0-1}) \\ \vdots & \ddots & \vdots \\ \nu_0^{(D-1)}(t_0) & \cdots & \nu_{N-1}^{(D-1)}(t_0) \\ \vdots & \ddots & \vdots \\ \nu_0^{(D-1)}(t_{L_{D-1}-1}) & \cdots & \nu_{N-1}^{(D-1)}(t_{L_{D-1}-1}) \end{bmatrix}.$
Generally, each system in equation 3.7 may be solved through the pseudoinverse of the matrix $\Omega_j$, providing the set of synaptic weights that store the $D$ pattern sequences $\nu^{(i)}(t_0) \to \dots \to \nu^{(i)}(t_{L_i})$ (even though solutions to equation 3.7 do not always exist, depending on $\Omega_j$ and $u_j$).
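A minimal sketch of learning rule 3.7 for a fully connected network without self-loops follows. The function name and the data layout of the sequences are our conventions, and for simplicity we use the same constant $K$ for all patterns and neurons.

```python
import numpy as np

def store_sequences(seqs, theta, I, sigma, K=10.0):
    """Learning rule 3.7: for each neuron j, solve Omega_j J_j = u_j by the
    pseudoinverse of Omega_j. `seqs` is a list of pattern sequences, each a
    list of binary firing-rate vectors nu(t_0), ..., nu(t_L). Self-weights
    J_jj are fixed to zero (fully connected network without loops)."""
    N = len(theta)
    # Stack all transitions nu(t_n) -> nu(t_{n+1}) across the D sequences.
    Omega = np.array([nu for seq in seqs for nu in seq[:-1]], dtype=float)
    targets = np.array([nu for seq in seqs for nu in seq[1:]], dtype=float)
    J = np.zeros((N, N))
    for j in range(N):
        # u entries: (N-1) * (theta_j - (-1)^{target} K sqrt(2) sigma_j - I_j)
        u = (N - 1) * (theta[j]
                       - (-1.0) ** targets[:, j] * K * np.sqrt(2.0) * sigma[j]
                       - I[j])
        Omega_j = np.delete(Omega, j, axis=1)  # drop column j (J_jj = 0)
        J[j, np.arange(N) != j] = np.linalg.pinv(Omega_j) @ u
    return J
```

A stationary state is stored as a length-2 sequence that repeats the same pattern, and period-$T$ oscillations as sequences with $\nu(t_0)=\nu(t_T)$.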

In particular, we observe that oscillations of period $T$ correspond to the special case $L_i=T$ with $\nu^{(i)}(t)=\nu^{(i)}(t+T)$. For $T=1$, this condition allows us to store stationary states. Examples of $M$-stable systems obtained through this method are shown in Figure 4 (see also Table 3) for $M=2,3,4$, while examples of oscillatory dynamics with period $T=2,3,4$ are shown in Figure 5 (see also Table 4).

Figure 4:

Neuronal multistability. Examples of stationary states stored in a network of size $N=4$ by means of learning rule 3.7. $M$ represents the degree of multistability of the network, namely, the total number of stationary states. This figure plots the transitions between the 16 states of the network, from $[0,0,0,0]^T$ to $[1,1,1,1]^T$, and the color of the arrows is determined by $P(\nu\,|\,\nu')$ (see equation 3.4) for the values of the parameters in Table 3. The stationary states are highlighted in red.

Figure 5:

Neuronal oscillations. Examples of oscillatory states with period $T$ stored in a network of size $N=4$ by means of learning rule 3.7. This figure plots the conditional probability $P(\nu\,|\,\nu')$ for the values of the parameters in Table 4. The oscillatory states are highlighted in red.

Table 3:
Set of Parameters Used for Generating Figure 4.
 $J=\begin{bmatrix} 0 & -39 & 93 & -39 \\ -84 & 0 & -84 & 171 \\ 123 & -66 & 0 & -66 \\ -15 & 54 & -15 & 0 \end{bmatrix},\quad J=\begin{bmatrix} 0 & -123 & 48 & 48 \\ -510 & 0 & 171 & 171 \\ 378 & 123 & 0 & -255 \\ 138 & 54 & -84 & 0 \end{bmatrix},\quad J=\begin{bmatrix} 0 & 93 & 93 & -171 \\ 171 & 0 & -339 & 171 \\ 123 & -255 & 0 & 123 \\ -84 & 54 & 54 & 0 \end{bmatrix}$ $I=\begin{bmatrix} -2 \\ 1 \\ 2 \\ -3 \end{bmatrix},\quad \sigma^B=\begin{bmatrix} 2 \\ 4 \\ 3 \\ 1 \end{bmatrix}$

Note: The synaptic connectivity matrices $J$ have been obtained from equation 3.7 for $K_j^{(i,n_i)}=10$ $\forall\, i,j,n_i$.

Table 4:
Set of Parameters Used for Generating Figure 5 ($I$ and $\sigma^B$ as in Table 3).
 $J=\begin{bmatrix} 0 & 30 & 63 & 30 \\ -84 & 0 & -84 & 339 \\ 84 & 42 & 0 & 42 \\ -15 & 84 & -15 & 0 \end{bmatrix},\quad J=\begin{bmatrix} 0 & -93 & 93 & 93 \\ 171 & 0 & 3 & -339 \\ 123 & -255 & 0 & 3 \\ -30 & 3 & 84 & 0 \end{bmatrix},\quad J=\begin{bmatrix} 0 & -84 & 93 & -84 \\ -171 & 0 & 171 & -171 \\ 42 & 42 & 0 & 42 \\ -42 & -42 & 54 & 0 \end{bmatrix}$

Note: The synaptic connectivity matrices $J$ have been obtained from equation 3.7 for $K_j^{(i,n_i)}=10$ $\forall\, i,j,n_i$.

Learning rule equation 3.7 can be easily extended by including further constraints. For example, it is possible to relax the full-connectivity assumption and set a given portion of the synaptic weights to zero. This allows us to define a learning rule for sparse neural networks. The storage capacity is expected to depend on the network topology and the noise intensity, but a detailed investigation is beyond the purpose of this letter.

To conclude, we observe that in order to store the $D$ sequences of patterns, according to equation 3.7, we need to solve $N$ linear systems of size $L \times (N-1)$. Therefore, the learning rule has polynomial complexity in the network size $N$.

### 3.3  Bifurcations in the Deterministic Network

Bifurcation analysis is a mathematical technique for investigating the qualitative change in the neuronal dynamics produced by varying model parameters. Therefore, it represents a fundamental tool for performing a systematic analysis of the complexity of the neuronal activity patterns. In particular, in this section, we study how the dynamics of the network depends on the external stimuli $I_i$. Moreover, we perform our analysis in the zero-noise limit $\sigma^B \to 0$, as is common practice in bifurcation theory (see, e.g., Borisyuk & Kirillov, 1992; Grimbert & Faugeras, 2006; Haschke & Steil, 2005; Fasoli, Cattani, & Panzeri, 2016). We observe that local bifurcation analysis in graded (i.e., smooth) models is performed through the calculation of the eigenvalues of the Jacobian matrix (Kuznetsov, 1998). However, the Jacobian matrix of our model is not defined at the discontinuity of the activation function, equation 2.2, thus preventing the use of the powerful methods of bifurcation analysis developed for graded models. The bifurcation analysis of nonsmooth dynamical systems has been developed only for continuous-time models, described by nonsmooth differential equations or differential inclusions (Leine, Van Campen, & Van De Vrande, 2000; Awrejcewicz & Lamarque, 2003; Leine & Van Campen, 2006; Makarenkov & Lamb, 2012). For this reason, we need to define an alternative approach for studying bifurcations in our binary network model.

Since we study the bifurcations in terms of formation or destruction of stationary and oscillatory solutions, the bifurcation diagrams of equation 2.1 can be obtained analytically from the conditional probability distribution of the firing rates $P(\nu\,|\,\nu')$ (see equation 3.4), which in the zero-noise limit $\sigma^B \to 0$ becomes
$P(\nu\,|\,\nu') = \frac{1}{2^N} \prod_{j=0}^{N-1} \left[1 + (-1)^{\nu_j}\,\mathrm{sgn}\!\left(\theta_j - \frac{1}{M_j}\sum_{k=0}^{N-1} J_{jk}\nu_k' - I_j(t)\right)\right].$
3.8
In equation 3.8, $\mathrm{sgn}(\cdot)$ is the sign function, which is defined as
$\mathrm{sgn}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 1 & \text{otherwise.} \end{cases}$
In particular, the stationary states $\nu$ satisfy the condition $P(\nu\,|\,\nu)=1$. Therefore, if the stationary states were known, it would be possible to invert the condition $P(\nu\,|\,\nu)=1$ with respect to the currents $I_i$, obtaining analytical expressions of the range of the stimuli where the network admits the stationary solution $\nu$. In a similar way, each oscillatory solution of the network has to satisfy the condition $P(\nu\,|\,\nu')=1$ at the same time for each of its transitions $\nu' \to \nu$. For example, given the oscillation $\nu^{(0)} \to \nu^{(1)} \to \nu^{(2)} \to \nu^{(0)}$, we must have $P(\nu^{(1)}\,|\,\nu^{(0)}) = P(\nu^{(2)}\,|\,\nu^{(1)}) = P(\nu^{(0)}\,|\,\nu^{(2)}) = 1$ for the same combination of stimuli. Again, by analytically inverting these conditions, we get the range of the stimuli where the network admits this oscillatory solution.

This approach requires prior knowledge of the stationary and oscillatory solutions of the network. One possibility is to determine these solutions numerically, for example, by solving equation 2.1 iteratively, for $\sigma_i^B=0$, for all the $2^N$ initial conditions of the firing rates and for all the combinations of the currents $I_0,\ldots,I_{N-1}$ on a sufficiently dense discretization of the stimulus space. Another possibility is to perform a numerical calculation of the conditional probability distribution (similarly to Figures 4 and 5), from which the stationary and oscillatory solutions can be detected through a search for the simple cycles of length $L$ of the $2^N\times 2^N$ binary matrix $P(\boldsymbol{\nu}\,|\,\boldsymbol{\nu}')$. In this approach, the stationary states correspond to loops of the matrix $P(\boldsymbol{\nu}\,|\,\boldsymbol{\nu}')$ (i.e., to cycles of length $L=1$). In a similar way, oscillations correspond to simple cycles of length $L=T>1$, where $T$ represents the period of the oscillation. More efficient techniques are considered in Fasoli and Panzeri (2017).
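The first strategy above (iterating equation 2.1 from all $2^N$ initial conditions and collecting the cycles that are eventually reached) can be sketched as follows. The threshold convention (a neuron fires iff its net input exceeds $\theta_j$) is the one implied by equation 3.8, and the two-neuron parameters are again hypothetical:

```python
import numpy as np
from itertools import product

def transition(nu, J, theta, I, M):
    """Deterministic zero-noise update: a neuron fires at the next step
    iff its net input exceeds the threshold (cf. equation 3.8)."""
    return ((J @ nu) / M + I > theta).astype(int)

def attractors(J, theta, I, M):
    """Iterate the map from every one of the 2^N initial conditions and
    collect the cycles that are eventually reached (length-1 cycles are
    the stationary states)."""
    N = len(theta)
    cycles = set()
    for bits in product((0, 1), repeat=N):
        nu, seen = np.array(bits), {}
        while (key := tuple(map(int, nu))) not in seen:
            seen[key] = len(seen)
            nu = transition(nu, J, theta, I, M)
        first = seen[tuple(map(int, nu))]   # index where the cycle starts
        cycles.add(tuple(sorted(s for s, k in seen.items() if k >= first)))
    return cycles

# Toy two-neuron network with mutual inhibition (hypothetical parameters).
# It has two stationary states, (0, 1) and (1, 0), and one period-2
# oscillation (0, 0) <-> (1, 1).
J = np.array([[0.0, -2.0], [-2.0, 0.0]])
theta = np.array([0.5, 0.5])
I = np.array([1.0, 1.0])
for cyc in sorted(attractors(J, theta, I, 1.0)):
    print(cyc)
```

This brute-force enumeration is exponential in $N$ and is meant only to make the procedure concrete; the more efficient techniques of Fasoli and Panzeri (2017) are preferable for larger networks.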

The bifurcation diagrams of the network strongly depend on its connectivity matrix $J$. However, a detailed analysis of the relation between the bifurcation structure and the network topology is beyond the scope of this letter. As an example, we apply our method to a fully connected network composed of $N_E=3$ excitatory and $N_I=3$ inhibitory neurons, even though this technique can be easily employed for calculating the bifurcation diagrams of networks with any size and topology. We suppose that each excitatory (resp., inhibitory) neuron receives an external stimulus $I_E$ (resp., $I_I$), and we derive the codimension 2 bifurcation diagram of the network in the $I_E$-$I_I$ plane. The remaining parameters of the network are reported in Table 5.

Table 5:
Set of Parameters Used for Generating Figures 6 and 7.
$\theta=\begin{bmatrix}1\\\vdots\\1\end{bmatrix},\quad J=\begin{bmatrix}\mathbf{J}_{EE}&\mathbf{J}_{EI}\\\mathbf{J}_{IE}&\mathbf{J}_{II}\end{bmatrix},\quad \mathbf{J}_{\alpha\alpha}=J_{\alpha\alpha}\begin{bmatrix}0&1&1\\1&0&1\\1&1&0\end{bmatrix},\quad \mathbf{J}_{\alpha\beta}=J_{\alpha\beta}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}\ \text{for }\alpha\neq\beta$
$J_{EE}=-J_{II}=80,\qquad J_{IE}=-J_{EI}=70$

By representing the firing rates of the excitatory neurons through the three top entries of the vector $\boldsymbol{\nu}$, we found numerically that the network admits the stationary states 0 to 7 and 56 to 63 (in decimal representation) for particular combinations of $(I_E, I_I)$. Thus, for example, the state $\boldsymbol{\nu}=[0,0,0,1,0,1]^T$ (i.e., the state 5 in decimal representation), which is characterized by two active inhibitory neurons (while the remaining neurons in the network are not firing), is a stationary state for some values of the stimuli. Moreover, we found numerically that the network undergoes the oscillations $0\to7\to0$, $56\to63\to56$, $0\to56\to63\to0$, $0\to63\to7\to0$, and $0\to56\to63\to7\to0$. Now, by inverting the conditions provided by the conditional probability distribution, equation 3.8, we get the portions of the $I_E$-$I_I$ plane where each stationary state and oscillation occurs. The details of the analytical calculations are shown in sections S4.1 and S4.2 of the supplemental information for the stationary and oscillatory solutions, respectively. The resulting codimension 2 bifurcation diagram is shown in Figure 6 (see also Table 5).

Figure 6:

An example of codimension 2 bifurcation diagram. The bifurcation diagram in the stimulus space for an asymmetric fully connected network composed of $N_E=3$ excitatory and $N_I=3$ inhibitory neurons. Each excitatory (resp., inhibitory) neuron receives an external stimulus $I_E$ (resp., $I_I$), while the remaining parameters of the network are reported in Table 5. The two panels of the figure refer to the same bifurcation diagram but have been separated for clarity, in order to show how multistability (A) and oscillations (B) depend on the external stimuli. In more detail, panel A shows how different combinations of the currents $(I_E, I_I)$ give rise, for this specific network architecture, to a number of stationary solutions $M$ that changes from 1 (monostability) to 4 (quadristability). Moreover, panel B shows how, for different values of the stimuli, the network undergoes oscillations with period $T=2$, 3, or 4. The blue lines in the diagram represent the combinations of the currents $(I_E, I_I)$ at which a bifurcation occurs. Here, the stationary states lose their stability, turning into different stationary states or oscillatory solutions. In a similar way, the red lines represent the bifurcations at which oscillations turn into new oscillations or stationary solutions. For this reason, some blue and red lines in the diagram overlap (see sections S4.1 and S4.2 of the supplemental information for the derivation of their analytical formulas). The shaded areas represent the regions of the $I_E$-$I_I$ plane where the symmetry of the inhibitory neurons is spontaneously broken, despite the symmetry of the underlying neural equations. This occurs, for example, with the stationary state $\boldsymbol{\nu}=[0,0,0,1,0,0]^T$, since in this case only two of the three inhibitory neurons do not fire. On the contrary, in this example, the symmetry in the excitatory population is never broken, although this may occur with different topologies of the synaptic connections.
The horizontal dashed line in the panels corresponds to the value of the current to the inhibitory population ($I_I=-20$) that we have chosen for the calculation of the codimension 1 bifurcation diagrams of Figure 7.


The codimension 1 bifurcation diagrams can be derived analytically from equation 2.1 for $\sigma_i^B=0$ by substituting the firing rates of the stationary or oscillatory solutions and then calculating $\boldsymbol{V}$ as a function of the stimuli. More explicitly, in our example of a fully connected network, we obtain:
$V_E(I_E,I_I)=\frac{N_E-1}{N-1}\,J_{EE}\,\nu_E(I_E,I_I)+\frac{J_{EI}}{N-1}\sum_{j=0}^{N_I-1}\nu_{I,j}(I_E,I_I)+I_E,$
$V_{I,i}(I_E,I_I)=\frac{N_E}{N-1}\,J_{IE}\,\nu_E(I_E,I_I)+\frac{J_{II}}{N-1}\sum_{\substack{j=0\\ j\neq i}}^{N_I-1}\nu_{I,j}(I_E,I_I)+I_I,\qquad i=0,1,2,$
3.9
where $V_E(I_E,I_I)$ and $V_{I,i}(I_E,I_I)$ represent the (stimulus-dependent) membrane potentials in the excitatory and inhibitory populations, respectively. Moreover, $\nu_E(I_E,I_I)$ and $\nu_{I,i}(I_E,I_I)$ are the excitatory and inhibitory firing rates of the stationary/oscillatory solutions, which we obtained numerically as described above. The relation between $\nu_{E,I}$ and $I_{E,I}$ is calculated analytically in section S4 of the supplemental information. In particular, we observe that in the excitatory population, the stationary firing rates (the three top entries of the binary representation of the states 0 to 7 and 56 to 63) and the corresponding membrane potentials are homogeneous ($\nu_i=\nu_E(I_E,I_I)$ and $V_i=V_E(I_E,I_I)$ for $i=0,1,2$), while in the inhibitory population, they are generally heterogeneous ($\nu_{N_E+i}=\nu_{I,i}(I_E,I_I)$ and $V_{N_E+i}=V_{I,i}(I_E,I_I)$ for $i=0,1,2$). This is an example of the spontaneous symmetry breaking that occurs in the inhibitory population (see the shaded areas in Figure 6A). On the contrary, according to the numerical solutions, there is no spontaneous symmetry breaking during the oscillatory dynamics in this example (see Figure 6B); therefore, in this case, the firing rates are homogeneous in both neural populations. Equation 3.9 is plotted in Figures 7A and 7B (see also Table 5). Figures 7C to 7F show the variations of the state-to-state transition probability matrix caused by the bifurcation points of the model. In particular, the panels show that for $I_I=-20$ and for an increasing value of $I_E$, the network exhibits monostability (C), quadristability (D), tristability and an oscillation with period $T=3$ (E), and tristability and an oscillation with period $T=2$ (F). Compare Figures 7C to 7F with the four ranges of the stimulus $I_E$ delimited by the vertical dashed lines in Figures 7A and 7B (for a derivation of the boundaries of these ranges, see section S4 in the supplemental information).
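As a numerical cross-check of equation 3.9, the membrane potentials of the example network can be evaluated directly from the firing rates of a candidate state. Only the synaptic weights and population sizes below are taken from Table 5; the stimulus values are illustrative:

```python
# Parameters of the example network (Table 5): N_E = N_I = 3, N = 6.
NE, NI = 3, 3
N = NE + NI
JEE, JII = 80.0, -80.0   # J_EE = -J_II = 80
JIE, JEI = 70.0, -70.0   # J_IE = -J_EI = 70

def potentials(nuE, nuI, IE, II):
    """Membrane potentials from equation 3.9, given the common excitatory
    rate nuE and the list of inhibitory rates nuI."""
    VE = (NE - 1) / (N - 1) * JEE * nuE + JEI / (N - 1) * sum(nuI) + IE
    VI = [NE / (N - 1) * JIE * nuE
          + JII / (N - 1) * sum(r for j, r in enumerate(nuI) if j != i) + II
          for i in range(NI)]
    return VE, VI

# State 7 (binary 000111): excitatory neurons silent, all inhibitory
# neurons firing; illustrative stimuli I_E = 10, I_I = -20.
VE, VI = potentials(0, [1, 1, 1], 10.0, -20.0)
print(VE, VI)  # → -32.0 [-52.0, -52.0, -52.0]
```

Because the state is homogeneous within each population here, the three inhibitory potentials coincide; for the symmetry-broken states discussed above they would differ.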
Figure 7:

Examples of codimension 1 bifurcation diagrams. The codimension 1 bifurcation diagrams obtained in the case of the fully connected network discussed in section 3.3 for $I_I=-20$ (see the horizontal dashed line in Figure 6). (A, B) Codimension 1 bifurcation diagrams of the excitatory (A) and inhibitory (B) neurons, obtained from equation 3.9 as a function of the stimulus $I_E$. In particular, from the firing rates $\nu_E$ and $\nu_{I,j}$ of the stationary states, equation 3.9 provides the fixed-point solutions of the membrane potentials (black lines). In a similar way, from the firing rates of the oscillatory states, we obtain the fixed-point cycle solutions (brown lines). (C–F) Bifurcations of the stationary and oscillatory solutions for increasing $I_E$. The graphs have been obtained from equation 3.8 and highlight in red all the stationary and oscillatory solutions (compare with the areas crossed by the dashed line in Figure 6 when moving from left to right).


### 3.4  Groupwise Cross-Correlations

The study of correlations among neurons is a topic of central importance to systems neuroscience. Pairwise and groupwise cross-neuron correlations are key to understanding the information encoding capabilities of neural populations (Abbott & Dayan, 1999; Pola, Thiele, Hoffmann, & Panzeri, 2003; Pillow et al., 2008; Cohen & Maunsell, 2009; Moreno-Bote et al., 2014) and to making inferences about how neurons exchange and integrate information (Singer, 1993; Tononi, Sporns, & Edelman, 1994; David, Cosmelli, & Friston, 2004; Rogers, Morgan, Newton, & Gore, 2007; Friston, 2011).

Fasoli, Faugeras, and Panzeri (2015) introduced the following normalized coefficient for quantifying the groupwise correlations among an arbitrary number $n$ of neurons in a network of size $N$ (with $2≤n≤N$):
$\mathrm{Corr}_n\big(x_{i_0}(t),\ldots,x_{i_{n-1}}(t)\big)=\frac{\overline{\prod_{m=0}^{n-1}\left(x_{i_m}(t)-\overline{x_{i_m}}(t)\right)}}{\sqrt[n]{\prod_{m=0}^{n-1}\overline{\left(x_{i_m}(t)-\overline{x_{i_m}}(t)\right)^{n}}}}.$
3.10
The bar represents the statistical mean over trials computed at time $t$. The variables $x$ in equation 3.10 can be either the membrane potentials or the firing rates. In the first case, in section S5.1 of the supplemental information, we prove that in the stationary regime,
$\mathrm{Corr}_n\big(V_{i_0}(t),\ldots,V_{i_{n-1}}(t)\big)=\frac{\sqrt{\pi}\,\sum_{j=0}^{2^N-1}F_j\prod_{m=0}^{n-1}R^{(N)}_{j,i_m}}{2^{\frac{n}{2}}\,\Gamma\!\left(\frac{n+1}{2}\right)\prod_{m=0}^{n-1}\sigma^B_{i_m}\sqrt[n]{\sum_{j=0}^{2^N-1}F_j\,\Phi\!\left(-\frac{n}{2},\frac{1}{2};-\frac{1}{2}\left(\frac{R^{(N)}_{j,i_m}}{\sigma^B_{i_m}}\right)^{\!2}\right)}},$
$R^{(N)}_{j,i_m}=\frac{1}{M_{i_m}}\sum_{l=0}^{N-1}\left(\mathcal{B}^{(N)}_{j,l}-\sum_{k=0}^{2^N-1}F_k\,\mathcal{B}^{(N)}_{k,l}\right)J_{i_m l},$
3.11
where $Γ$ and $Φ$ are the gamma function and Kummer's confluent hypergeometric function of the first kind, respectively. Moreover, in section S5.2 of the supplemental information, we prove that the groupwise correlation structure of the firing rates is given by the following formula:
$\mathrm{Corr}_n\big(\nu_{i_0}(t),\ldots,\nu_{i_{n-1}}(t)\big)=\frac{\sum_{j=0}^{2^N-1}F_j\prod_{m=0}^{n-1}\left(1-2\bar{\nu}_{i_m}-E_{j,i_m}\right)}{2^{n}\prod_{m=0}^{n-1}\sqrt[n]{Z_n\!\left(\bar{\nu}_{i_m}\right)}},$
$Z_n(x)=x^{n}(1-x)+x(1-x)^{n},\qquad \bar{\nu}_{i_m}=\frac{1}{2}\left(1-\sum_{j=0}^{2^N-1}F_j\,E_{j,i_m}\right),$
$E_{j,i_m}=\operatorname{erf}\!\left(\frac{\theta_{i_m}-\frac{1}{M_{i_m}}\sum_{l=0}^{N-1}J_{i_m l}\,\mathcal{B}^{(N)}_{j,l}-I_{i_m}}{\sqrt{2}\,\sigma^B_{i_m}}\right).$
3.12
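Equation 3.10 (and hence the closed-form results 3.11 and 3.12) can be validated by a straightforward Monte Carlo estimator, in which the bar (average over trials) is replaced by a sample mean. A minimal NumPy sketch:

```python
import numpy as np

def corr_n(samples):
    """Monte Carlo estimate of the groupwise correlation, equation 3.10.

    samples : array of shape (trials, n), one column per neuron; the bar
    (average over trials) is replaced by the sample mean.
    """
    centered = samples - samples.mean(axis=0)
    n = centered.shape[1]
    num = np.prod(centered, axis=1).mean()
    den = np.prod(np.mean(centered ** n, axis=0)) ** (1.0 / n)
    return num / den

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
# Two identical columns: Corr_2 reduces to Pearson's coefficient, i.e., 1.
print(round(corr_n(np.column_stack([x, x])), 3))   # → 1.0
```

For two independent columns, the estimate fluctuates around 0 with a standard error of order $1/\sqrt{\text{trials}}$, which is the comparison carried out numerically in Figure 8.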
In Figure 8 (see also Table 6), we show an example of cross-correlations between pairs of neurons (i.e., $n=2$, in which case equation 3.10 corresponds to Pearson's correlation coefficient). In the same figure, we also show the corresponding standard deviations of the membrane potentials and the firing rates:
$\sigma^V_i=\sqrt{\left(\sigma^B_i\right)^2+\sum_{m=0}^{2^N-1}F_m\left(R^{(N)}_{m,i}\right)^2},\qquad \sigma^\nu_i=\sqrt{\bar{\nu}_i-\bar{\nu}_i^2},$
3.13
(derived in equations S34 and S38 of the supplemental information).
Figure 8:

An example of cross-correlation structure for $N=4$. The variance and pairwise cross-correlation obtained for an asymmetric network of four neurons. (A–B) Comparison between the analytical standard deviations of the membrane potentials (A) and the firing rates (B) (given by equation 3.13, for $I\in[-12,12]$) and the corresponding numerical approximations. For each value of the stimulus $I$, we calculated the network statistics as described in section 2.2 through a Monte Carlo method over $10^5$ repetitions. (C–D) Comparison of the cross-correlations between neurons 0 and 1 (the red curves are described by equations 3.11, panel C, and 3.12, panel D). (E–F) Examples of highly correlated activity (synchronous states) between the membrane potentials (for $I=0$, panel E) and between the firing rates (for $I=2$, panel F). All panels of this figure are obtained for the model parameter values in Table 6.


Table 6:
Set of Parameters Used for Generating Figures 8 and 9.
$J=\begin{bmatrix}0&-15&45&-120\\15&0&90&-285\\90&-105&0&-31\\65&-60&75&0\end{bmatrix},\qquad I=I\begin{bmatrix}1\\1\\1\\1\end{bmatrix},\qquad \sigma=\begin{bmatrix}1\\3\\2\\2\end{bmatrix}$

Note: The values of the stimulus $I$ are specified in the figures.

Examples of cross-correlations for $n>2$ are shown in Figure S3 of the supplemental information. Our analysis shows that the external stimulus $I$ dynamically switches the network between synchronous (i.e., highly correlated) and asynchronous (i.e., uncorrelated) states. The conditions under which these states may occur are discussed in sections S5.1.1, S5.1.2, and S5.2.1 of the supplemental information. In general, we did not observe a fixed relation between firing rate and membrane potential correlations, and low (resp., high) correlations between the membrane potentials did not necessarily correspond to low (resp., high) correlations between the firing rates. In Figure 9, we consider the network reported in Table 6 for $I=-3$, and we show an example of a state characterized by highly correlated membrane potentials and weakly correlated firing rates. The membrane potentials of neurons 0 and 1 fluctuate randomly around the same mean, which for $I=-3$ is close to the firing threshold $\theta=1$ (represented by the dashed line in the figure). The sources of noise have different standard deviations ($\sigma^B_0=1$, $\sigma^B_1=3$), so that the membrane potentials fluctuate with standard deviations $\sigma^V_1>\sigma^V_0$. For this reason, the membrane potential of neuron 1 is more likely to cross the firing threshold than the potential of neuron 0. Therefore, the firing rate of neuron 1 switches often and randomly between the rate states 0 (neuron not firing) and 1 (neuron firing at the maximum rate). On the contrary, neuron 0 does not fire very often; therefore, the correlation between the firing rates is small even though the corresponding membrane potentials are highly correlated.

Figure 9:

Membrane potential versus firing rate dynamics. An example of network dynamics characterized by high correlation between the membrane potentials and low correlation between the firing rates. All panels are obtained for the model parameter values in Table 6 and for $I=-3$. (A) Highly correlated temporal dynamics of the membrane potentials ($\mathrm{Corr}_2(V_0(t),V_1(t))\approx 0.97$), generated through the mechanism described in section S5.1.1. (B) Weakly correlated temporal dynamics of the firing rates ($\mathrm{Corr}_2(\nu_0(t),\nu_1(t))\approx 0.41$). See the text.


### 3.5  Mean-Field Limit

In this section, we study equation 2.1 in the thermodynamic limit $N\to\infty$ by means of Sznitman's mean-field theory (see Touboul, Hermann, & Faugeras, 2012; Baladron, Fasoli, Faugeras, & Touboul, 2012). As discussed in Fasoli et al. (2015), Sznitman's theory can generally be applied only to networks with sufficiently dense synaptic connections. For this reason, we suppose that the network is composed of $P$ neural populations $\alpha$ (for $\alpha=0,\ldots,P-1$) and that, within each population, the neurons have a fully connected topology, as well as homogeneous parameters and identical (marginal) probability distributions. From this assumption it follows, for example, that the stimuli are organized into $P$ vectors $\boldsymbol{I}_\alpha$, one for each population, such that
$\boldsymbol{I}_\alpha(t)=I_\alpha(t)\,\boldsymbol{1}_{N_\alpha},$
3.14
where $1Nα$ is the $Nα×1$ all-ones vector and $Nα$ is the size of the population $α$. In a similar way, the synaptic connectivity matrix can be written as
$J=\begin{bmatrix}\mathbf{J}_{00}&\mathbf{J}_{01}&\cdots&\mathbf{J}_{0,P-1}\\ \mathbf{J}_{10}&\mathbf{J}_{11}&\cdots&\mathbf{J}_{1,P-1}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{J}_{P-1,0}&\mathbf{J}_{P-1,1}&\cdots&\mathbf{J}_{P-1,P-1}\end{bmatrix},\qquad \mathbf{J}_{\alpha\beta}=\begin{cases}J_{\alpha\alpha}\left(\mathbb{I}_{N_\alpha}-\mathrm{Id}_{N_\alpha}\right), & \text{for }\alpha=\beta,\\ J_{\alpha\beta}\,\mathbb{I}_{N_\alpha,N_\beta}, & \text{for }\alpha\neq\beta,\end{cases}$
3.15
for $\alpha,\beta=0,\ldots,P-1$. The real numbers $J_{\alpha\beta}$ are free parameters that describe the strength of the synaptic connections from population $\beta$ to population $\alpha$. Moreover, $\mathbb{I}_{N_\alpha,N_\beta}$ is the $N_\alpha\times N_\beta$ all-ones matrix (here we use the simplified notation $\mathbb{I}_{N_\alpha}\overset{\mathrm{def}}{=}\mathbb{I}_{N_\alpha,N_\alpha}$), while $\mathrm{Id}_{N_\alpha}$ is the $N_\alpha\times N_\alpha$ identity matrix.
Even if the membrane potentials were statistically independent at the time instant $t=0$, they would generally be correlated at any $t>0$ when the network is composed of a finite number of neurons (see section 3.4). However, in the thermodynamic limit, the neurons become independent at any $t>0$, given independent initial conditions. In the mathematical literature, this phenomenon is known as propagation of chaos (Touboul et al., 2012; Baladron et al., 2012; Fasoli et al., 2015; Fasoli, Cattani, & Panzeri, 2017). Statistical dependence can be measured by the distance correlation $\mathrm{dCorr}$ (Székely, Rizzo, & Bakirov, 2007), which is defined in terms of the distance covariance $\mathrm{dCov}$ and the distance variance $\mathrm{dVar}$ as follows:
$\mathrm{dCorr}\big(V_i(t),V_j(t)\big)=\frac{\mathrm{dCov}\big(V_i(t),V_j(t)\big)}{\sqrt{\mathrm{dVar}\big(V_i(t)\big)\,\mathrm{dVar}\big(V_j(t)\big)}},\qquad \mathrm{dVar}\big(V_i(t)\big)=\mathrm{dCov}\big(V_i(t),V_i(t)\big).$
Given $n$ samples of both the membrane potentials $V_i(t)$ and $V_j(t)$ (which we call $V_i^k(t)$ and $V_j^k(t)$, for $k=0,\ldots,n-1$), the distance covariance is calculated through the following formula:
$\mathrm{dCov}\big(V_i(t),V_j(t)\big)=\frac{1}{n}\sqrt{\sum_{k,l=0}^{n-1}A_i^{k,l}(t)\,A_j^{k,l}(t)},$
where
$A_i^{k,l}(t)=a_i^{k,l}(t)-\bar{a}_i^{k\cdot}(t)-\bar{a}_i^{\cdot l}(t)+\bar{a}_i^{\cdot\cdot}(t),\qquad a_i^{k,l}(t)=\left|V_i^k(t)-V_i^l(t)\right|,$
$\bar{a}_i^{k\cdot}(t)=\frac{1}{n}\sum_{l=0}^{n-1}a_i^{k,l}(t),\qquad \bar{a}_i^{\cdot l}(t)=\frac{1}{n}\sum_{k=0}^{n-1}a_i^{k,l}(t),\qquad \bar{a}_i^{\cdot\cdot}(t)=\frac{1}{n^2}\sum_{k,l=0}^{n-1}a_i^{k,l}(t),$
and similarly for $A_j^{k,l}(t)$. In the thermodynamic limit, the distance correlation between the membrane potentials tends to zero, meaning that the neurons become independent (see Figure 10C in the case $P=2$; see also Table 7). Moreover, for $N\to\infty$, the neurons become normally distributed, according to the law $V_i\sim\mathcal{N}\big(\bar{V}_\alpha(t),(\sigma^B_\alpha)^2\big)$ for every neuron $i$ in population $\alpha$ (see Figure 10A). As we show in section S6.1 of the supplemental information, the mean membrane potentials $\bar{V}_\alpha$ satisfy the following set of equations:
$\bar{V}_\alpha(t+1)=\frac{1}{2}\sum_{\beta=0}^{P-1}R_\beta\,J_{\alpha\beta}\left[1-\operatorname{erf}\!\left(\frac{\theta_\beta-\bar{V}_\beta(t)}{\sqrt{2}\,\sigma^B_\beta}\right)\right]+I_\alpha(t),\qquad \alpha=0,\ldots,P-1,$
3.16
where $R_\alpha=\lim_{N\to\infty}\frac{N_\alpha}{M_\alpha}$. Therefore, in the thermodynamic limit, the stochastic network model can be reduced to a set of $P$ deterministic equations in the unknowns $\bar{V}_\alpha$.
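The sample distance covariance and distance correlation defined above translate directly into NumPy. The following is a minimal sketch of that estimator; for large samples, dedicated implementations (e.g., the `dcor` Python package) are preferable:

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance of two scalar samples of size n,
    following the double-centering construction given in the text."""
    def centered(z):
        a = np.abs(z[:, None] - z[None, :])          # distances a^{k,l}
        return (a - a.mean(axis=1, keepdims=True)
                  - a.mean(axis=0, keepdims=True) + a.mean())
    return np.sqrt(np.sum(centered(x) * centered(y))) / len(x)

def dcorr(x, y):
    """Distance correlation: dCov normalized by the distance variances."""
    return dcov(x, y) / np.sqrt(dcov(x, x) * dcov(y, y))

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
print(round(dcorr(x, x), 3))                  # → 1.0
print(dcorr(x, rng.normal(size=2000)) < 0.2)  # independent: near 0 → True
```

Unlike the Pearson coefficient, the distance correlation vanishes (in the large-sample limit) only for truly independent variables, which is why it is the appropriate numerical probe of the propagation of chaos in Figure 10C.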
Figure 10:

Network statistics in the thermodynamic limit. Examples of probability distributions, correlations, and distance correlations of the membrane potentials as a function of the network size $N$, for an asymmetric fully connected network composed of one excitatory and one inhibitory population, with $N_E=N_I=N/2$ and $I_E=-I_I=10$ (the remaining parameters are reported in Table 7). (A) Fast convergence of the single-neuron marginal probability distributions of the membrane potentials to the mean-field normal distribution $\mathcal{N}\big(\bar{V}_\alpha(t),(\sigma^B_\alpha)^2\big)$ for increasing network size $N$ (see the text). In particular, the panel shows the evolution of the probability distribution of the excitatory neurons, but this result also holds for the inhibitory population. (B–C) Corresponding decrease of the pairwise correlation between the membrane potentials (B) and of their distance correlation (C). $\mathrm{Corr}_{\alpha\beta}\overset{\mathrm{def}}{=}\mathrm{Corr}\big(V_i(t),V_j(t)\big)$ (resp., $\mathrm{dCorr}_{\alpha\beta}\overset{\mathrm{def}}{=}\mathrm{dCorr}\big(V_i(t),V_j(t)\big)$), for any neuron $i$ in population $\alpha$ and any neuron $j$ in population $\beta$, represents the correlation (resp., the distance correlation) between the membrane potentials of the populations $\alpha$ and $\beta$ calculated at the time instant $t=100$. In particular, the decrease of the distance correlation with the network size numerically confirms that the neurons become increasingly independent in the thermodynamic limit (Székely et al., 2007). In this figure, for each value of $N$, we calculated the numerical probability distributions, the correlations, and the distance correlations through a Monte Carlo method over $10^4$ repetitions, so that the statistical error is of the order of $10^{-2}$.


Table 7:
Set of Parameters Used for Generating Figure 10, Figures 11A and 11B, and Figure 12A.
$J_{EE}=-J_{II}=80,\qquad J_{IE}=-J_{EI}=70,\qquad \theta_E=\theta_I=1,\qquad \sigma^B_E=1,\ \sigma^B_I=2$

Unlike the original network equations 2.1, the activation functions in the mean-field equations 3.16 are continuous and differentiable everywhere for $\sigma^B>0$. Networks with real-valued activation functions such as that described by equation 3.16 (and the corresponding activity-based versions, which can be obtained through the change of variables, equation 2.3) are known in neural modeling as discrete-time recurrent neural networks (Jin, Nikiforuk, & Gupta, 1994; Tsoi & Back, 1997; Haschke & Steil, 2005). Thus, when noise is present in every population, the bifurcation structure of equation 3.16 can be studied through the bifurcation theory of graded systems (Kuznetsov, 1998). As an example, we focus on the case of a network composed of two neural populations, one excitatory and one inhibitory (even though the bifurcation structure may be studied for any $P$). From now on, it is convenient to change the notation slightly and consider $\alpha=E,I$ rather than $\alpha=0,1$. Beyond global bifurcations, which generally cannot be studied analytically, the mean-field network may undergo limit-point, period-doubling, and Neimark-Sacker local bifurcations. Limit points correspond to the formation of network bistability. We observe that in this model, bistability coexists with hysteresis phenomena, which have been proposed as a candidate neurophysiological mechanism of working memory (Wang, 2001). Period-doubling and Neimark-Sacker bifurcations correspond to the formation of neural oscillations with period $T=2$ and $T>2$, respectively.

By applying the technique developed in Haschke and Steil (2005) and Fasoli et al. (2016), in section S6.2 of the supplemental information, we prove that these local codimension 1 bifurcations are analytically described by the following set of parametric equations:
$I_E(v)=v-\frac{1}{2}R_E J_{EE}\left[1+\operatorname{erf}\!\left(\frac{v-\theta_E}{\sqrt{2}\,\sigma^B_E}\right)\right]-\frac{1}{2}R_I J_{EI}\left[1+\operatorname{erf}\!\left(\frac{\mu_I(v)-\theta_I}{\sqrt{2}\,\sigma^B_I}\right)\right],$
$I_I(v)=\mu_I(v)-\frac{1}{2}R_E J_{IE}\left[1+\operatorname{erf}\!\left(\frac{v-\theta_E}{\sqrt{2}\,\sigma^B_E}\right)\right]-\frac{1}{2}R_I J_{II}\left[1+\operatorname{erf}\!\left(\frac{\mu_I(v)-\theta_I}{\sqrt{2}\,\sigma^B_I}\right)\right],$
3.17
in the parameter $v$, where
$\mu_I(v)=\theta_I\pm\sqrt{-2\left(\sigma^B_I\right)^2\ln\!\left(\sqrt{2\pi}\,\sigma^B_I\,g_I(\mu_I)\right)}.$
For the limit-point and period-doubling bifurcations, we get
$g_I(\mu_I)=\frac{s\,R_E J_{EE}\,g_E(v)-s^2}{R_E R_I\left(J_{EE}J_{II}-J_{EI}J_{IE}\right)g_E(v)-s\,R_I J_{II}},$
3.18
with $s=1$ and $s=-1$, respectively, and for the Neimark-Sacker bifurcation, we obtain
$g_I(\mu_I)=\frac{1}{R_E R_I\left(J_{EE}J_{II}-J_{EI}J_{IE}\right)g_E(v)},$
3.19
where
$g_E(v)=\frac{1}{\sqrt{2\pi}\,\sigma^B_E}\,e^{-\frac{(v-\theta_E)^2}{2\left(\sigma^B_E\right)^2}}.$
Equations 3.17 to 3.19 describe the local codimension 1 bifurcations in the $I_E$-$I_I$ parameter space (i.e., in the codimension 2 bifurcation diagram of the mean-field network). Finally, in order to validate our analytical results, we calculated the bifurcation structure of the mean-field network numerically with the MatCont Matlab toolbox (Dhooge et al., 2003; see Figure 11), and we verified its agreement with the bifurcation curves described by equations 3.17 to 3.19. Examples of network dynamics corresponding to these bifurcation diagrams are shown in Figure 12.
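For completeness, iterating the mean-field map, equation 3.16, requires only one update per time step. The sketch below uses the synaptic and noise parameters of Table 7 with $R_E=R_I=0.5$ and illustrative (constant) stimulus values; for these values the iteration settles, after a short transient, onto a period-2 cycle:

```python
import numpy as np
from math import erf, sqrt

def mean_field_step(V, J, R, theta, sigma, I):
    """One step of the mean-field map, equation 3.16 (P populations)."""
    rates = np.array([0.5 * (1.0 - erf((theta[b] - V[b]) / (sqrt(2.0) * sigma[b])))
                      for b in range(len(V))])
    return J @ (R * rates) + I

# Two populations (alpha = E, I) with the parameters of Table 7,
# R_E = R_I = 0.5, and illustrative stimuli I_E = 10, I_I = 0.
J = np.array([[80.0, -70.0],    # [[J_EE, J_EI],
              [70.0, -80.0]])   #  [J_IE, J_II]]
R = np.array([0.5, 0.5])
theta = np.array([1.0, 1.0])
sigma = np.array([1.0, 2.0])
I = np.array([10.0, 0.0])

V = np.zeros(2)
for _ in range(200):
    V = mean_field_step(V, J, R, theta, sigma, I)
# After the transient, (V_E, V_I) alternates between two values (T = 2).
print(V)
```

Replacing the constant stimulus by a slowly increasing $I_E$ reproduces the kind of sweep shown in Figure 12.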
Figure 11:

Bifurcation diagrams of the mean-field model. Codimension 2 (A, C) and codimension 1 (B, D) bifurcation diagrams of equation 3.16, obtained numerically with the MatCont Matlab toolbox (Dhooge et al., 2003). (A, B) Formation of the limit-point ($LP$) and period-doubling ($PD$) bifurcations, obtained for $R_E=R_I=0.5$ and the parameters of Table 7. These local codimension 1 bifurcations correspond to the blue and red curves in panel A and are described analytically by equations 3.17 and 3.18. In this panel, we also show other bifurcations of the mean-field model. In particular, the $LPPD$ bifurcation is a local codimension 2 bifurcation, which represents the simultaneous occurrence of the limit-point and period-doubling bifurcations. Again, this bifurcation can be described analytically, by intersecting the $LP$ and $PD$ curves obtained from equations 3.17 and 3.18. Moreover, $LPC$, $CP$, and $GPD$ are limit-point-of-cycles, cusp, and generalized period-doubling bifurcations, respectively. For more details, see Kuznetsov (1998). (B) We show the fixed-point curves (black) and the fixed-point cycle curves (brown) obtained for $I_I=0$, which are described mathematically by equations S48 and S57 in the supplemental information, respectively. (C, D) Formation of the supercritical Neimark-Sacker ($NS$) bifurcation, obtained for $J_{EE}=-J_{II}=10$. This local codimension 1 bifurcation corresponds to the green curve in panel C, which is described analytically by equations 3.17 and 3.19. In panel D, obtained again for $I_I=0$, we omitted the fixed-point cycle curve from our analysis because of its high complexity. This curve may be calculated numerically as explained in section S6.2.3 of the supplemental information. In all panels of the figure, for simplicity, we also omitted other bifurcations that are not relevant for this study.


Figure 12:

Examples of network dynamics in the mean-field model. Examples of dynamics of the mean of the membrane potentials $\overline{V}_{E,I}$, obtained by iteratively solving equation 3.16. The stimulus $I_E$ increases linearly with time, while the stimulus to the inhibitory population is fixed to $I_I=0$. (A) An example of oscillation with period $T=2$, generated from a period-doubling bifurcation ($PD$). Network parameters as in Figure 11B. The amplitude of the membrane potentials during the oscillation corresponds to the fixed-point cycle curves of the model (compare $\overline{V}_E$ with the brown curves in Figure 11B). (B) An example of oscillation with period $T=4$, generated from a Neimark-Sacker bifurcation ($NS$). Network parameters as in Figure 11D.

## 4  Discussion

### 4.1  Comparison with Previous Analytical Solutions of Networks with Asymmetric Synaptic Weights

We studied a synchronously updated firing-rate neural network model with asymmetric synaptic weights and discrete-time evolution that can be exactly solved for any network size and topology. Network models with asymmetric synaptic weights are important because real neural networks in the brain have asymmetric synapses. Exact analytical treatment, however, has been limited to a small number of studies valid only in limiting cases or under mean-field approaches, such as the replica trick (Mézard et al., 1986), the limit of an infinite number of spin components (Hertz et al., 1987), or the thermodynamic limit of infinite network size (Crisanti & Sompolinsky, 1987; Derrida et al., 1987).

The main mathematical difficulty in studying asymmetric neural networks is the impossibility of applying the powerful methods of equilibrium statistical mechanics because no energy function exists for these networks. Our formalism allows exact solutions of such network models without resorting to a limiting case or a mean-field approximation. In particular, our results are exact for any network size. We used the stochastic recurrence relation, equation 2.1, rather than Little's (1974) definition of temperature (in particular, compare our equation 3.6 with Little's equation 4). In this respect, our work follows the approach described in Wang, Pichler, and Ross (1990), but with some important differences. Wang et al. (1990) specifically considered a diluted network with gaussian noise and studied the dynamical evolution of the overlap between the state of the network and the stored point attractors in the thermodynamic limit. In our work, we considered a network with an arbitrary (generally nondiluted) synaptic connectivity matrix and arbitrary noise statistics, and in section S2 of the supplemental information, we derived exact solutions for any network size $N$ without further assumptions.

In this letter (see section S1 of the supplemental information), we showed the relationship between our approach and the mathematical techniques developed for Markovian systems (Coolen, 2001a, 2001b). In particular, we showed that the original voltage-based equation 2.1 can be transformed into the activity-based equation 2.5, which can be solved with standard methods from the theory of Markovian processes. This approach requires the normalized connectivity matrix $\widehat{J}$ (defined in equation 2.4) to be invertible. Our approach generalizes this previous work in that it does not rely on this transformation, and therefore it can be applied to networks with any matrix $\widehat{J}$.

In particular, through this technique, we derived exact solutions for the conditional probability distributions of the membrane potentials and the firing rates, as well as for the joint probability distributions in the stationary regime. Due to the asymmetry of the synaptic weights, the network we studied can undergo oscillations with period $T≥2$, while synchronous Hopfield networks (which have symmetric connections) can sustain only oscillations with period $T=2$, known as two-cycles (Goles-Chacc et al., 1985). Going from activity-based to voltage-based equations through equation 2.3 requires the hypothesis of synchronous updates of the neural activity. For this reason, we cannot extend the formalism in a straightforward way to asynchronous dynamics.

Compared to small-size graded networks (Fasoli et al., 2015, 2017), where the impossibility of using statistical methods restricts the derivation of (approximate) analytical formulas for the joint probability distributions and the cross-correlations to simple network topologies, here we derived exact solutions that are valid for any connectivity matrix $J$.

### 4.2  Implications of Analytical Solvability for Learning Rules

The analytical solution of the conditional probability distribution of the firing rates allowed us to define a new learning rule to store point and periodic attractors. Point attractors correspond to stable states, while periodic attractors represent oscillatory solutions of the network activity. The synchronously updated Hopfield network (which can be obtained from equation 2.1 in the special case of symmetric connections) can store only point attractors and two-cycles, while our learning rule for asymmetric networks can be used to store any oscillatory solution or temporal pattern that is compatible with the network topology and size. From a biological point of view, learning specific temporal patterns of neural activity is a general problem involved in several cognitive and sensory-motor tasks (Grossberg & Pearson, 2008), and therefore it represents an essential ingredient in advancing our understanding of cortical activity.

Learning rules for deterministic networks are important theoretical tools for investigating pattern storage in idealized, noise-free models (Hagan et al., 1996). However, cortical activity is often characterized by the presence of noise (Deco, Rolls, & Romo, 2009). This underlines the importance of developing new learning rules to safely store activity patterns in a noisy medium such as the cortical tissue. The rule that we introduced belongs to this class of learning algorithms for stochastic networks. Therefore, our rule ensures that a stored sequence of activity patterns will not be disrupted by the noise, and it does so by setting the membrane potentials far away from the firing thresholds during their temporal evolution. This distance is proportional to the noise intensity and ensures that undesired random switching of the firing rates will be very unlikely (compare with Figure 3B, where the membrane potentials fluctuate near the firing threshold, causing the random switching of the firing rates described by Figure 3C).

For simplicity, in section 3.2, we showed the learning rule for a fully connected network driven by gaussian noise; however, our derivation can be easily extended to networks with any topology of the synaptic connections and driven by noise with any distribution. The storage capacity depends on the topology of the network (random or regular), the sparseness of the synaptic connections, and the intensity and distribution of the noise, but a detailed investigation is beyond the purpose of this letter.

Several learning rules have been proposed for storing activity patterns in networks of model neurons with discrete, spin-like output. One of the first studies was proposed by Amari (1972), who analyzed the effects of random spin flips on Hebbian learning of activity patterns. Sompolinsky and Kanter (1986) introduced an infinite-size network where the response of the neurons is not instantaneous but has a dynamic memory that slows the transition between patterns. This ad hoc assumption proves sufficient for generating arbitrarily long pattern sequences by preventing the network state from becoming progressively mixed among all the activity patterns to be stored. A similar model was introduced independently by Kleinfeld (1986). A different approach was followed by Buhmann and Schulten (1987), who proved that noise and sufficiently asymmetric connections can induce random transitions between patterns. Interestingly, our learning rule does not rely on any of these assumptions, since it can also be used for storing pattern sequences if the neurons have instantaneous response and the network is deterministic. Our result can be considered an extension of the associating learning rule introduced by Personnaz, Guyon, and Dreyfus (1986), which is also known as the pseudoinverse learning rule (Zhang, Dangelmayr, & Oprea, 2013). While Personnaz and coworkers derived the learning rule by making implicit use of the state-to-state transition probability matrix of their deterministic network model (see equation 10 in Personnaz et al., 1986), we generalized this approach by taking advantage of the transition probability matrix described by equation 3.4 to define a new learning rule for storing arbitrary sequences of activity patterns in stochastic networks.
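As a minimal sketch of this pseudoinverse construction with a noise margin (the patterns, the margin, and the zero firing threshold below are illustrative assumptions, not the exact formulation of equation 3.4), one can store a cyclic sequence of binary patterns as follows:

```python
import numpy as np

# Four linearly independent binary patterns (one column per time step),
# to be stored as a cycle. Patterns and margin are illustrative choices.
S = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
S_next = np.roll(S, -1, axis=1)          # desired successor of each pattern
margin = 3.0                              # distance from the firing threshold
V_target = margin * (2 * S_next - 1)      # +margin -> fire, -margin -> silent
J = V_target @ np.linalg.pinv(S)          # pseudoinverse (Personnaz-style) rule

def step(s, sigma=0.0, rng=None):
    """One synchronous update; optional gaussian noise on the potentials."""
    noise = sigma * rng.standard_normal(len(s)) if rng is not None else 0.0
    return (J @ s + noise > 0).astype(float)

# Noise-free check: the network cycles through the stored sequence.
s = S[:, 0]
for t in range(1, 9):
    s = step(s)
    assert np.array_equal(s, S[:, t % 4])
```

Choosing the margin several noise standard deviations wide makes a spurious threshold crossing exponentially unlikely, which is the mechanism by which the rule protects a stored cycle against noise.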

### 4.3  Bifurcations in Neural Activity

We performed an analytical study of the bifurcations to understand how the formation of stable and oscillatory solutions is affected by a variation of the external stimuli to the network. The method we proposed can be applied to networks with any size and topology, but as an example, we considered the case of a network composed of six fully connected neurons. As is common practice, we performed the bifurcation analysis in the zero-noise limit. We derived analytical expressions for the codimension 1 and codimension 2 bifurcation diagrams, showing how the external stimuli affect the neuronal dynamics. It is important to observe that the bifurcation analysis of graded (i.e., smooth) networks is based on differential analysis—in particular, on the Jacobian matrix of the system. However, the Jacobian matrix of the model we studied in this letter is not defined everywhere due to the discontinuity of the activation function. For this reason, bifurcation theory of smooth dynamical systems cannot be applied directly to the binary model we discussed. Although the bifurcation analysis of nonsmooth dynamical systems has recently received increased attention, it has been developed only for continuous-time models, described by nonsmooth differential equations or differential inclusions (Leine et al., 2000; Awrejcewicz & Lamarque, 2003; Leine & Van Campen, 2006; Makarenkov & Lamb, 2012). For this reason, in our work we followed another approach. By combining the analytical formula of the conditional probability distribution of the firing rates with a numerical brute-force search of the stationary and oscillatory solutions, we determined the combinations of the external stimuli for which the network undergoes multistability, oscillations, or symmetry breaking. In particular, for a fully connected network composed of one excitatory and one inhibitory population in a stationary state, we observed the formation of spontaneous symmetry breaking only in the inhibitory population.
Intuitively, this phenomenon is similar to the magnetic spin ordering (i.e., neighboring spins pointing in opposite directions) observed in antiferromagnetic materials in a weak external field below their Néel temperature (Néel, 1952). In antiferromagnetic materials, the interaction weights between spins are negative, as in the inhibitory neural population of our network model. Moreover, when the network is noise free (as in section 3.3), it can be interpreted as a spin model at zero temperature. Thus, the noise-free network we considered is below its Néel temperature, which intuitively explains the formation of an “antiferromagnetic” phase characterized by heterogeneous activity in the inhibitory population. In a similar way, the absence of symmetry breaking observed in the excitatory population is similar to the spin alignment observed in ferromagnetic materials (which are characterized by positive interaction weights) below their Curie temperature (Néel, 1952). However, it is important to bear in mind that, unlike these spin systems, our network model has two distinct populations, whose neurons may interact in more complex ways than purely ferromagnetically or antiferromagnetically. Moreover, while magnetic materials are usually described by models where spins are arranged on a lattice, here we considered a neural network model with fully connected topology.
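The brute-force search over the noise-free dynamics can be sketched as follows for small $N$; the two-neuron weights and stimuli are toy values chosen to reproduce the qualitative picture (symmetric weights yield only fixed points and two-cycles, while asymmetric weights can sustain longer cycles), not the six-neuron example of the text:

```python
import numpy as np
from itertools import product

def attractors(J, I):
    """Enumerate all 2^N states of the noise-free map s -> H(J s + I)
    and return its attractors (cycles; period-1 cycles are fixed points)."""
    J, I = np.asarray(J, float), np.asarray(I, float)
    N = len(I)
    def step(s):
        return tuple(int(v > 0) for v in J @ np.array(s) + I)
    found = set()
    for s0 in product((0, 1), repeat=N):
        seen, s = {}, s0
        while s not in seen:          # iterate until the orbit repeats
            seen[s] = len(seen)
            s = step(s)
        cyc, cur = [], s
        for _ in range(len(seen) - seen[s]):   # cycle period
            cyc.append(cur)
            cur = step(cur)
        k = cyc.index(min(cyc))       # canonical rotation: count each cycle once
        found.add(tuple(cyc[k:] + cyc[:k]))
    return found

# Symmetric (Hopfield-like) weights: two fixed points and one two-cycle.
sym = attractors([[0, -2], [-2, 0]], [1, 1])
print(sorted(len(c) for c in sym))    # -> [1, 1, 2]

# Asymmetric weights: a four-cycle, impossible with symmetric connections.
asym = attractors([[0, -2], [2, 0]], [1, -1])
print(sorted(len(c) for c in asym))   # -> [4]
```

The exponential cost of the enumeration is the reason this search is practical only for small networks, as discussed in section 4.6.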

### 4.4  Analytical Estimates of Correlations of Neural Activity in Asymmetric Networks

Thanks to the analytical solvability of the model, we derived exact expressions of the groupwise correlation structure of noisy networks at any order (beyond the simple pairwise correlations often considered in both theoretical and empirical studies of neural correlations) in the stationary regime for both the membrane potentials and the firing rates. In the case of the time-continuous graded version of the network model, Fasoli et al. (2015, 2017) found analytical (approximate) solutions of the correlation structure through a perturbation analysis of the neural equations in the small-noise limit. A consequence of this approximation is that the correlations between the membrane potentials and those between the firing rates have the same mathematical expression in the graded model. On the contrary, in this letter, we derived exact expressions of the correlations for any noise intensity. Due to the discontinuous activation function, equation 2.2, the two correlation structures are never identical, even in the small-noise limit. Moreover, similarly to the case of graded networks (Fasoli et al., 2017), we found that the external stimuli can dynamically switch the neural activity from asynchronous (i.e., uncorrelated) to synchronous (i.e., highly correlated) states, with two important differences. The first is that low (resp. high) correlations between the membrane potentials do not necessarily correspond to low (resp. high) correlations between the firing rates. The second is that while in graded networks synchronous states may occur through critical slowing down (Fasoli et al., 2017), the discrete network considered here relies on different mechanisms for generating highly correlated activity that we have only partially covered.
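For very small networks, the stationary statistics can be made concrete by enumerating the $2^N$-state Markov chain of the firing rates; the sketch below assumes independent gaussian noise on the membrane potentials and a zero firing threshold, which are simplifying assumptions rather than the general setting of our derivation:

```python
import numpy as np
from itertools import product
from math import erf

Phi = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0)))   # standard normal CDF

def stationary_rate_covariance(J, I, sigma):
    """Exact stationary covariance of the firing rates of the binary network
    s_i(t+1) = H(sum_j J_ij s_j(t) + I_i + xi_i), with xi_i ~ N(0, sigma^2)
    i.i.d. Builds the full 2^N x 2^N transition matrix (small N only)."""
    J, I = np.asarray(J, float), np.asarray(I, float)
    states = [np.array(s, float) for s in product((0, 1), repeat=len(I))]
    P = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        v = J @ s + I                                       # potentials given s
        p1 = np.array([1.0 - Phi(-vi / sigma) for vi in v])  # P(rate_i = 1 | s)
        for b, r in enumerate(states):
            P[a, b] = np.prod(np.where(r == 1, p1, 1.0 - p1))
    w, V = np.linalg.eig(P.T)                       # stationary distribution:
    pi = np.real(V[:, np.argmin(np.abs(w - 1))])    # left eigenvector at 1
    pi /= pi.sum()
    S = np.array(states)
    mean = pi @ S
    return (S - mean).T * pi @ (S - mean)  # sum_a pi_a (s_a - m)(s_a - m)^T
```

For two uncoupled neurons with zero stimulus, for instance, the cross-covariance vanishes and each variance equals 1/4; switching on a synaptic coupling makes the off-diagonal term nonzero for any noise intensity.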

### 4.5  Behavior of Asymmetric Networks in the Large-Population Limit

For completeness, and to link our results to previous work on asymmetric models, we derived the mean-field equations of the network in the thermodynamic limit of large network size (see equation 3.16). Due to the limitations of Sznitman's mean-field theory, we derived these equations only for sufficiently dense multipopulation networks driven by independent sources of noise. These hypotheses allowed us to obtain a dimensionally reduced set of equations describing neural activity, which proved very convenient in studying the dynamical behavior of the multipopulation network in the thermodynamic limit.

The presence of a noisy component in the external stimulus smoothed out the activation function of the network in the thermodynamic limit. Unlike the case of finite-size networks, this allowed us to study the local bifurcations of the infinite-size network for any nonzero noise intensity by applying standard techniques developed for smooth dynamical systems (Kuznetsov, 1998; Haschke & Steil, 2005; Fasoli et al., 2016). In particular, we derived exact analytical expressions for the local codimension 1 bifurcations of the mean-field model in terms of the external stimuli. This method can be applied to networks composed of an arbitrary number of populations, but as an example, we considered the simple case of two populations. We also derived analytically part of the codimension 2 bifurcation diagram of the network and compared it with the bifurcation structure obtained numerically by the MatCont Matlab toolbox (Dhooge et al., 2003).
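The smoothing effect of the noise, and the resulting applicability of standard linearization, can be illustrated on a one-population caricature of the mean-field map; the map, its parameters, and the damped fixed-point iteration below are illustrative assumptions, not equation 3.16 itself:

```python
import numpy as np
from math import erf

# One-population discrete-time caricature (illustrative, not eq. 3.16):
#   V(t+1) = J * Phi(V(t) / sigma) + I,
# where Phi, the standard normal CDF, is the noise-smoothed binary activation.
Phi = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
phi = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)   # Phi'

def fixed_point(J, I, sigma, v0=0.0, iters=5000):
    v = v0
    for _ in range(iters):
        v = 0.5 * v + 0.5 * (J * Phi(v / sigma) + I)   # damped iteration
    return v

def multiplier(J, I, sigma):
    """Eigenvalue of the linearized map at the fixed point: a flip
    (period-doubling) bifurcation occurs where it crosses -1."""
    v = fixed_point(J, I, sigma)
    return J * phi(v / sigma) / sigma

print(multiplier(-2.0, 1.0, 1.0))   # ~ -0.80: the fixed point is stable
print(multiplier(-5.0, 2.5, 1.0))   # ~ -1.99: the fixed point has lost stability
```

Sweeping the stimulus and locating where the multiplier crosses -1 traces out a period-doubling curve, in the same spirit as the analytical $PD$ curves of Figure 11.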

Equation 3.16 can be considered the discrete-time version of the mean-field equations derived in Touboul et al. (2012) for a continuous-time stochastic network composed of an arbitrary number of neural populations. While Touboul et al. (2012) were able to derive simple mean-field equations for a graded network with error-function activation functions, in our work we considered a network with binary firing rates (see equation 2.2). Despite these differences, through the bifurcation analysis tools that we developed in this letter, it is possible to prove that our network model exhibits phenomena similar to those studied by Touboul et al. (2012), such as noise-induced formation and destruction of stationary and oscillatory solutions (results not shown).

It is important to observe that the use of Sznitman's mean-field theory is limited to networks driven by independent sources of noise and with sufficiently dense synaptic connectivity. In Fasoli et al. (2015), the authors proved that in a graded firing-rate network model, propagation of chaos does not occur in the thermodynamic limit whenever the sources of noise are correlated or the synaptic connections are too sparse. The same results can be proven for the discrete network described by equation 2.1. The study of infinite-size neural networks driven by correlated sources of noise typically requires more advanced techniques such as large deviations theory, which is discussed in Faugeras and MacLaurin (2015).

We also observe that in Sznitman's mean-field theory, the neurons are identically distributed within each population. For this reason, the theory does not describe intrapopulation symmetry breaking of neural activity, as we already observed in Fasoli et al. (2016) for a graded firing-rate network model. This proves that the dimensionally reduced set of mean-field equations generally oversimplifies the dynamical properties of the neural network; therefore, it may neglect important neurophysiological processes that can be captured only by finite-size network models.

### 4.6  Computational Complexity and Limits of Applicability of the Arbitrary-Network-Size Theory

Although we set out to develop a formalism that can be applied to arbitrary network sizes, the complexity of the various analyses makes them applicable in practice to selected regimes. In the following, we discuss the practical domain of applicability of different aspects of our techniques.

The analytical calculations of the stationary distributions and the cross-correlation structure require a number of operations that grows exponentially with the network size $N$. These formulas become intractable for networks of a few tens of neurons. However, our theoretical framework provides new insights into the behavior of cross-correlations that are expected to hold for any network size $N$ and therefore are not restricted to small-size circuits. Indeed, several properties of cross-correlations between the membrane potentials or the firing rates can be shown to hold for any $N$ (see section S5 of the supplemental information). Therefore, our analytical solutions complement the knowledge that can be obtained from numerical simulations of arbitrary-sized neural networks. In contrast, our learning rule (see section 3.2) has polynomial complexity; therefore, its applicability is not restricted to small-size networks. Finally, efficient algorithms for the bifurcation analysis of noiseless networks of spin-like neurons may in principle be applicable also to large networks, provided the density of their synaptic connections is small enough (Fasoli & Panzeri, 2017).

### 4.7  Concluding Remarks

To conclude, we observe that solvable finite-size network models are invaluable theoretical tools for studying the brain at its multiple scales of spatial organization. Studying how the complexity of neuronal activity changes with increasing network size is of fundamental importance for unveiling the emergent properties of the brain. In this letter, we made an effort in this direction, trying to fill this gap in the current neuroscientific literature. Asymmetric synaptic connections, which are widely considered mathematically challenging for a rigorous understanding of network dynamics, increase the biological plausibility of the model and allow a more complete description of neural oscillations. In future work, we will rigorously determine how the two main assumptions of the model, the discrete-time evolution and the binary firing rates, affect its capability to describe realistic neuronal activity.

## Acknowledgments

We thank Davide Corti for contributing to the initial stages of this work. This research was supported by the Autonomous Province of Trento, Call “Grandi Progetti 2012,” project “Characterizing and Improving Brain Mechanisms of Attention—ATTEND.” The funders had no role in study design, data collection and analysis, decision to publish, interpretation of results, or preparation of the manuscript.

## References

Abbott, L. F., & Dayan, P. (1999). The effect of correlated variability on the accuracy of a population code. Neural Comput., 11, 91–101.

Amari, S.-I. (1972). Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans. Comput., C-21, 1197–1206.

Amit, D. J., & Mongillo, G. (2003). Spike-driven synaptic dynamics generating working memory states. Neural Comput., 15, 565–596.

Awrejcewicz, J., & Lamarque, C. H. (2003). Bifurcation and chaos in nonsmooth mechanical systems. Singapore: World Scientific.

Baladron, J., Fasoli, D., Faugeras, O., & Touboul, J. (2012). Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J. Math. Neurosci., 2, 10.

Baños, R. A., Cruz, A., Fernandez, L. A., Gil-Narvion, J. M., Gordillo-Guerrero, A., Guidetti, M., … & Yllanes, D. (2012). Thermodynamic glass transition in a spin glass without time-reversal symmetry. Proc. Natl. Acad. Sci. USA, 109, 6452–6456.

Borisyuk, R. M., & Kirillov, A. B. (1992). Bifurcation analysis of a neural network model. Biol. Cybern., 66, 319–325.

Bray, A. J., & Moore, M. A. (1984). Lower critical dimension of Ising spin glasses: A numerical study. J. Phys. C, 17, L463.

Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., … & Destexhe, A. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. J. Comput. Neurosci., 23, 349–398.

Buhmann, J., & Schulten, K. (1987). Noise-driven temporal association in neural networks. Europhys. Lett., 4, 1205.

Cohen, M. R., & Maunsell, J. H. (2009). Attention improves performance primarily by reducing interneuronal correlations. Nature Neurosci., 12, 1594–1600.

Coolen, A. C. C. (2001a). Statistical mechanics of recurrent neural networks I: Statics. In F. Moss & S. Gielen (Eds.), Handbook of biological physics (vol. 4, pp. 553–618). Amsterdam: North-Holland.

Coolen, A. C. C. (2001b). Statistical mechanics of recurrent neural networks II: Dynamics. In F. Moss & S. Gielen (Eds.), Handbook of biological physics (vol. 4, pp. 619–684). Amsterdam: North-Holland.

Crisanti, A., & Sompolinsky, H. (1987). Dynamics of spin systems with randomly asymmetric bonds: Langevin dynamics and a spherical model. Phys. Rev. A, 36, 4922–4939.

Crisanti, A., & Sompolinsky, H. (1988). Dynamics of spin systems with randomly asymmetric bonds: Ising spins and Glauber dynamics. Phys. Rev. A, 37, 4865–4874.

Dahmen, D., Bos, H., & Helias, M. (2016). Correlated fluctuations in strongly coupled binary networks beyond equilibrium. Phys. Rev. X, 6, 031024.

David, O., Cosmelli, D., & Friston, K. J. (2004). Evaluation of different measures of functional connectivity using a neural mass model. NeuroImage, 21, 659–673.

Deco, G., Rolls, E. T., & Romo, R. (2009). Stochastic dynamics as a principle of brain function. Prog. Neurobiol., 88, 1–16.

DeFelipe, J., Marco, P., Busturia, I., & Merchán-Pérez, A. (1999). Estimation of the number of synapses in the cerebral cortex: Methodological considerations. Cereb. Cortex, 9, 722.

Derrida, B. (1980). Random-energy model: Limit of a family of disordered models. Phys. Rev. Lett., 45, 79–82.

Derrida, B., Gardner, E., & Zippelius, A. (1987). An exactly solvable asymmetric neural network model. Europhys. Lett., 4, 167–173.

Dhooge, A., Govaerts, W., & Kuznetsov, Y. A. (2003). MATCONT: A Matlab package for numerical bifurcation analysis of ODEs. ACM Trans. Math. Softw., 29, 141–164.

Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys., 61, 353–430.

Fasoli, D., Cattani, A., & Panzeri, S. (2016). The complexity of dynamics in small neural circuits. PLoS Comput. Biol., 12, e1004992.

Fasoli, D., Cattani, A., & Panzeri, S. (2017). Transitions between asynchronous and synchronous states: A theory of correlations in small neural circuits. J. Comput. Neurosci., 44, 25–43.

Fasoli, D., Faugeras, O., & Panzeri, S. (2015). A formalism for evaluating analytically the cross-correlation structure of a firing-rate network model. J. Math. Neurosci., 5, 6.

Fasoli, D., & Panzeri, S. (2017). Optimized brute-force algorithms for the bifurcation analysis of a spin-glass-like neural network model. arXiv preprint.

Faugeras, O., & MacLaurin, J. (2015). Asymptotic description of neural networks with correlated synaptic weights. Entropy, 17, 4701–4743.

Feller, W. (1971). An introduction to probability theory and its applications (vol. 1). New York: Wiley.

Friston, K. J. (2011). Functional and effective connectivity: A review. Brain Connect., 1, 13–36.

Goles-Chacc, E., Fogelman-Soulie, F., & Pellegrin, D. (1985). Decreasing energy functions as a tool for studying threshold networks. Discrete Appl. Math., 12, 261–277.

Grimbert, F., & Faugeras, O. (2006). Bifurcation analysis of Jansen's neural mass model. Neural Comput., 18, 3052–3068.

Grossberg, S., & Pearson, L. R. (2008). Laminar cortical dynamics of cognitive and motor working memory, sequence learning and performance: Toward a unified theory of how the cerebral cortex works. Psychol. Rev., 115, 677–732.

Hagan, M. T., Demuth, H. B., & Beale, M. (1996). Neural network design. Boston: PWS.

Haschke, R., & Steil, J. J. (2005). Input space bifurcation manifolds of recurrent neural networks. Neurocomputing, 64, 25–38.

Hertz, J. A., Grinstein, G., & Solla, S. A. (1987). Irreversible spin glasses and neural networks. Berlin: Springer.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79, 2554–2558.

Jin, L., Nikiforuk, P. N., & Gupta, M. M. (1994). Absolute stability conditions for discrete-time recurrent neural networks. IEEE Trans. Neural Netw., 5, 954–964.

Kleinfeld, D. (1986). Sequential state generation by model neural networks. Proc. Natl. Acad. Sci. USA, 83, 9469–9473.

Kuznetsov, Y. A. (1998). Elements of applied bifurcation theory. New York: Springer-Verlag.

Leine, R. I., & Van Campen, D. H. (2006). Bifurcation phenomena in non-smooth dynamical systems. Eur. J. Mech. A-Solid, 25, 595–616.

Leine, R. I., Van Campen, D. H., & Van De Vrande, B. L. (2000). Bifurcations in nonlinear discontinuous systems. Nonlinear Dyn., 23, 105–164.

Little, W. A. (1974). The existence of persistent states in the brain. Math. Biosci., 19, 101–120.

Makarenkov, O., & Lamb, J. S. W. (2012). Dynamics and bifurcations of nonsmooth systems: A survey. Physica D, 241, 1826–1844.

Marcus, C. M., & Westervelt, R. M. (1989). Stability of analog neural networks with delay. Phys. Rev. A, 39, 347–359.

Markram, H. (2006). The Blue Brain Project. Nat. Rev. Neurosci., 7, 153–160.

Mattia, M., & Del Giudice, P. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Comput., 12, 2305–2329.

Mézard, M., Parisi, G., & Virasoro, M. (1986). Spin glass theory and beyond: An introduction to the replica method and its applications. Singapore: World Scientific.

Moreno-Bote, R., Beck, J., Kanitscheider, I., Pitkow, X., Latham, P., & Pouget, A. (2014). Information-limiting correlations. Nature Neurosci., 17, 1410–1417.

Morrison, A., Straube, S., Plesser, H. E., & Diesmann, M. (2007). Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Comput., 19, 47–79.

Mountcastle, V. B. (1997). The columnar organization of the neocortex. Brain, 120, 701–722.

Néel, L. (1952). Antiferromagnetism and ferrimagnetism. Proc. Phys. Soc. Sec. A, 65, 869.

Paik, S.-B., Kumar, T., & Glaser, D. A. (2009). Spontaneous local gamma oscillation selectively enhances neural network responsiveness. PLoS Comput. Biol., 5, e1000342.

Peretto, P. (1984). Collective properties of neural networks: A statistical physics approach. Biol. Cybern., 50, 51–62.

Personnaz, L., Guyon, I., & Dreyfus, G. (1986). Collective computational properties of neural networks: New learning mechanisms. Phys. Rev. A, 34, 4217–4228.

Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., & Simoncelli, E. P. (2008). Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454, 995–999.

Pola, G., Thiele, A., Hoffmann, K. P., & Panzeri, S. (2003). An exact method to quantify the information transmitted by different mechanisms of correlational coding. Network: Comput. Neural Syst., 14, 35–60.

Rogers, B. P., Morgan, V. L., Newton, A. T., & Gore, J. C. (2007). Assessing functional connectivity in the human brain by FMRI. Magn. Reson. Imaging, 25, 1347–1357.

Roudi, Y., & Hertz, J. (2011). Mean field theory for nonequilibrium network reconstruction. Phys. Rev. Lett., 106, 048702.

Sakellariou, J., Roudi, Y., Mezard, M., & Hertz, J. (2012). Effect of coupling asymmetry on mean-field solutions of the direct and inverse Sherrington-Kirkpatrick model. Philos. Mag., 92, 272–279.

Sanz Leon, P., Knock, S. A., Woodman, M. M., Domide, L., Mersmann, J., McIntosh, A. R., & Jirsa, V. (2013). The Virtual Brain: A simulator of primate brain network dynamics. Front. Neuroinform., 7, 10.

Singer, W. (1993). Synchronization of cortical activity and its putative role in information processing and learning. Annu. Rev. Physiol., 55, 349–374.

Sompolinsky, H., & Kanter, I. (1986). Temporal association in asymmetric neural networks. Phys. Rev. Lett., 57, 2861–2864.

Székely, G. J., Rizzo, M. L., & Bakirov, N. K. (2007). Measuring and testing dependence by correlation of distances. Ann. Stat., 35, 2769–2794.

Tononi, G., Sporns, O., & Edelman, G. M. (1994). A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA, 91, 5033–5037.

Touboul, J., Hermann, G., & Faugeras, O. (2012). Noise-induced behaviors in neural mean field dynamics. SIAM J. Appl. Dyn. Syst., 11, 49–81.

Tsoi, A. C., & Back, A. (1997). Discrete time recurrent neural network architectures: A unifying review. Neurocomputing, 15, 183–223.

Wang, L., Pichler, E. E., & Ross, J. (1990). Oscillations and chaos in neural networks: An exactly solvable model. Proc. Natl. Acad. Sci. USA, 87, 9467–9471.

Wang, W., Machta, J., & Katzgraber, H. G. (2015). Chaos in spin glasses revealed through thermal boundary conditions. Phys. Rev. B, 92, 094410.

Wang, X.-J. (2001). Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci., 24, 455–463.

Williams, R. W., & Herrup, K. (1988). The control of neuron number. Ann. Rev. Neurosci., 11, 423–453.

Yucesoy, B., Katzgraber, H. G., & Machta, J. (2012). Evidence of non-mean-field-like low-temperature behavior in the Edwards-Anderson spin-glass model. Phys. Rev. Lett., 109, 177204.

Zhang, C., Dangelmayr, G., & Oprea, I. (2013). Storing cycles in Hopfield-type networks with pseudoinverse learning rule: Admissibility and network topology. Neural Netw., 46, 283–298.

Zhu, Z., Ochoa, A. J., & Katzgraber, H. G. (2015). Efficient cluster algorithm for spin glasses in any space dimension. Phys. Rev. Lett., 115, 077201.

## Author notes

D.F. and S.P. designed the study. D.F. performed the study through analytical calculations and numerical simulations. A.C. performed the numerical study of the bifurcations of the mean-field network model. D.F., S.P., and A.C. wrote the paper.