Abstract

Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as “permitted sets” of the network. We introduce a simple encoding rule that selectively turns “on” synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states (“on” or “off”), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended “spurious” states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced—i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.

1.  Introduction

Recurrent networks in cortex and hippocampus exhibit highly constrained patterns of neural activity, even in the absence of sensory inputs (Kenet, Bibitchkov, Tsodyks, Grinvald, & Arieli, 2003; Yuste, MacLean, Smith, & Lansner, 2005; Luczak, Bartho, & Harris, 2009; Berkes, Orban, Lengyel, & Fiser, 2011). These patterns are strikingly similar in both stimulus-evoked and spontaneous activity (Kenet et al., 2003; Luczak et al., 2009), suggesting that cortical networks store neural codes consisting of a relatively small number of allowed activity patterns (Yuste et al., 2005; Berkes et al., 2011). What is the relationship between the stored patterns of a network and its underlying connectivity? More specifically, given a prescribed set of binary patterns (e.g., a binary neural code), how can one arrange the connectivity of a network such that precisely those patterns are encoded as fixed-point attractors of the dynamics, while minimizing the emergence of unwanted “spurious” states? This problem, which we refer to as the network encoding (NE) problem, dates back at least to 1982 and has been most commonly studied in the context of the Hopfield model (Hopfield, 1982, 1984; Amit, 1989b; Hertz, Krogh, & Palmer, 1991). A major challenge in this line of work has been to characterize the spurious states (Amit, Gutfreund, & Sompolinsky, 1985, 1987; Amit, 1989a; Hertz et al., 1991; Roudi & Treves, 2003).

In this letter, we take a new look at the NE problem for networks of threshold-linear neurons whose computational function is to learn and store binary neural codes. Following Xie, Hahnloser, and Seung (2002) and Hahnloser, Seung, and Slotine (2003), we regard stored patterns of a threshold-linear network as “permitted sets” (aka “stable sets”; Curto, Degeratu, & Itskov, 2012), corresponding to subsets of neurons that may be coactive at stable fixed points of the dynamics in the presence of one or more external inputs. Although our main results do not make any special assumptions about the prescribed sets of patterns to be stored, many commonly observed neural codes are sparse and have a rich internal structure, with correlated patterns reflecting similarities among represented stimuli. Our perspective thus differs somewhat from the traditional Hopfield model (Hopfield, 1982, 1984), where binary patterns are typically assumed to be uncorrelated and dense (Amit, 1989b; Hertz et al., 1991).

To tackle the NE problem, we introduce a simple learning rule, called the encoding rule, that constructs a network W from a set of prescribed binary patterns $\mathcal{C}$. The rule selectively turns “on” connections between neurons that co-appear in one or more of the presented patterns and uses synapses that are binary (in the sense of having only two states—“on” or “off”), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main result, theorem 2, precisely characterizes the full set of permitted sets $\mathcal{P}(W)$ for any network constructed using the encoding rule, and shows explicitly the dependence on S. In particular, we find that binary patterns can be successfully stored in these networks if and only if the strengths of excitatory connections among co-active neurons in a pattern are geometrically balanced, that is, they satisfy a set of geometric constraints. Theorem 3 shows that any set of binary patterns that can be exactly encoded as $\mathcal{C} = \mathcal{P}(W)$ for symmetric W can in fact be exactly encoded using our encoding rule. Furthermore, when a set of binary patterns is not encoded exactly, we are able to completely describe the spurious states and find that they correspond to cliques in the “cofiring” graph $G(\mathcal{C})$.

An important consequence of these findings is that certain neural codes are natural in the context of symmetric threshold-linear networks; that is, the structure of the code closely matches the structure of emerging spurious states via the encoding rule, allowing the full code to be accurately learned from a highly undersampled set of patterns. Interestingly, using Helly's theorem (Barvinok, 2002), we can show that many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small and randomly sampled fraction of patterns in the code.

The organization of this letter is as follows. In section 2 we introduce some necessary background on binary neural codes, threshold-linear networks, and permitted sets. In section 3, we introduce the encoding rule and present our results. The proofs of our main results are given in section 4 and use ideas from classical distance and convex geometry, such as Cayley-Menger determinants (Blumenthal, 1953), establishing a novel connection between these areas of mathematics and neural network theory. Section 5 contains the discussion. The appendixes follow. Table 1 provides frequently used notation in this letter.

Table 1:
Frequently Used Notation.
Notation | Meaning
[n] | The set {1, 2, ..., n}
$2^{[n]}$ | The set of all subsets of [n]
$\sigma \subset [n]$ | A subset of neurons; a binary pattern; a codeword; a permitted set
$|\sigma|$ | Number of elements (neurons) in the set $\sigma$
$\mathcal{C} \subset 2^{[n]}$ | A prescribed set of binary patterns, for example, a binary neural code
$G(\mathcal{C})$ | The cofiring graph of $\mathcal{C}$; $(ij) \in G(\mathcal{C})$ iff $\{i, j\} \subset \sigma$ for some $\sigma \in \mathcal{C}$
$X(G)$, $X(G(\mathcal{C}))$ | The clique complex of the graph G or $G(\mathcal{C})$, respectively
supp(x) | $\{i \in [n] \mid x_i > 0\}$, for a nonnegative vector x
W | An $n \times n$ connectivity matrix; the network with dynamics 2.1
D | Fixed diagonal matrix of inverse time constants
$\mathcal{P}(W)$ | $\{\sigma \subset [n] \mid \sigma \text{ is a permitted set of } W\}$; set of all permitted sets of W
A | An $n \times n$ matrix
$A_\sigma$ | The principal submatrix of A with index set $\sigma$
stab(A) | $\{\sigma \subset [n] \mid A_\sigma \text{ is a stable matrix}\}$
cm(A) | Cayley-Menger determinant of A
$\mathbf{1}$ | The column vector with all entries equal to 1
$-11^T$ | The rank 1 matrix with all entries equal to −1

2.  Background

2.1.  Binary Neural Codes.

A binary pattern on n neurons is simply a string of 0s and 1s, with a 1 for each active neuron and a 0 denoting silence; equivalently, it is a subset of (active) neurons,
$$\sigma \subset \{1, \ldots, n\} \stackrel{\mathrm{def}}{=} [n].$$
A binary neural code (aka a combinatorial neural code; Curto, Itskov, Morrison, Roth, & Walker, 2013; Osborne, Palmer, Lisberger, & Bialek, 2008) is a collection of binary patterns $\mathcal{C} \subset 2^{[n]}$, where $2^{[n]}$ denotes the set of all subsets of [n].

Experimentally observed neural activity in cortical and hippocampal areas suggests that neural codes are sparse (Hromádka, Deweese, & Zador, 2008; Barth & Poulet, 2012), meaning that relatively few neurons are coactive in response to any given stimulus. Correspondingly, we say that a binary neural code is k-sparse, for k<n, if all patterns $\sigma \in \mathcal{C}$ satisfy $|\sigma| \leq k$. Note that in order for a code to have good error-correcting capability, the total number of code words must be considerably smaller than $2^n$ (MacWilliams & Sloane, 1983; Huffman & Pless, 2003; Curto et al., 2013), a fact that may account for the limited repertoire of observed neural activity.

Important examples of binary neural codes are classical population codes, such as receptive field codes (RF codes) (Curto et al., 2013). A simple yet paradigmatic example is the hippocampal place field code (PF code), where single neuron activity is characterized by place fields (O'Keefe, 1976; O'Keefe & Nadel, 1978). We consider general RF codes in section 3.6 and specialize to sparse PF codes in section 3.7.

2.2.  Threshold-Linear Networks.

A threshold-linear network (Hahnloser et al., 2003; Curto et al., 2012) is a firing rate model for a recurrent network (Dayan & Abbott, 2001; Ermentrout & Terman, 2010), where the neurons all have threshold nonlinearity, $[\cdot]_+ \stackrel{\mathrm{def}}{=} \max\{\cdot, 0\}$. The dynamics are given by
$$\frac{dx_i}{dt} = -\frac{1}{\tau_i}\, x_i + \left[\sum_{j=1}^{n} W_{ij} x_j + e_i - \theta_i\right]_+, \qquad i = 1, \ldots, n,$$
where n is the number of neurons, xi(t) is the firing rate of the ith neuron at time t, ei is the external input to the ith neuron, and $\theta_i$ is its threshold. The matrix entry Wij denotes the effective strength of the connection from the jth to the ith neuron, and the timescale $\tau_i > 0$ gives the rate at which a neuron's activity decays to zero in the absence of any inputs (see Figure 1).
Figure 1:

A recurrent network receiving an input vector $b = (b_1, \ldots, b_n)$. The firing rate of each neuron is given by xi=xi(t) and evolves in time according to equation 2.1. The strengths of recurrent connections are captured by the matrix W.


Although sigmoids more closely match experimentally measured input-output curves for neurons, the above-threshold nonlinearity is often a good approximation when neurons are far from saturation (Dayan & Abbott, 2001; Shriki, Hansel, & Sompolinsky, 2003). Assuming that encoded patterns of a network are in fact realized by neurons that are firing far from saturation, it is reasonable to approximate them as stable fixed points of the threshold-linear dynamics.

These dynamics can be expressed more compactly as
$$\dot{x} = -Dx + [Wx + b]_+, \qquad (2.1)$$
where $D \stackrel{\mathrm{def}}{=} \mathrm{diag}(1/\tau_1, \ldots, 1/\tau_n)$ is the diagonal matrix of inverse time constants, W is the synaptic connectivity matrix, $b = (b_1, \ldots, b_n)$ with $b_i = e_i - \theta_i$, and $[\cdot]_+$ is applied elementwise. Note that unlike in the Hopfield model, the “input” to the network comes in the form of a constant (in time) external drive b rather than an initial condition x(0). We think of equation 2.1 as describing the fast-timescale dynamics of the network and b as representing the effect of an external stimulus. So long as b changes slowly as compared to the fast network dynamics, the neural responses to individual stimuli are captured by the steady states of equation 2.1 in the presence of a constant input vector b.

In the encoding rule (see section 3.1), we assume homogeneous timescales and use D=I (the identity matrix). Nevertheless, all results apply equally well to heterogeneous timescales (i.e., for any diagonal D having strictly positive diagonal). We also assume that −D+W has a strictly negative diagonal, so that the activity of an individual neuron always decays to zero in the absence of external or recurrent inputs. Although we consider responses to the full range of inputs $b \in \mathbb{R}^n$, the possible steady states of equation 2.1 are sharply constrained by the connectivity matrix W. Assuming fixed D, we refer to a particular threshold-linear network simply as W.
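For readers who wish to experiment with these dynamics, the following is a minimal sketch that integrates equation 2.1 numerically, assuming D = I and forward-Euler integration; the weights, input, and step size below are arbitrary illustrative choices, not values used elsewhere in this letter.

```python
import numpy as np

def simulate(W, b, x0, dt=1e-3, T=10.0):
    """Forward-Euler integration of dx/dt = -x + [Wx + b]_+ (equation 2.1 with D = I)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
    return x  # approximate steady state for the constant input b

# Arbitrary 3-neuron example with inhibition-dominated connectivity.
W = np.array([[ 0.0, -1.5, -0.2],
              [-1.5,  0.0, -0.2],
              [-0.2, -0.2,  0.0]])
b = np.array([1.0, 0.8, 0.5])
x_star = simulate(W, b, x0=np.zeros(3))
print("steady state:", x_star, "support:", set(np.flatnonzero(x_star > 1e-6)))
```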

2.3.  Permitted Sets of Threshold-Linear Networks.

We consider threshold-linear networks whose computational function is to encode a set of binary patterns. These patterns are stored as “permitted sets” of the network. The theory of permitted (and forbidden) sets was introduced in Xie et al. (2002) and Hahnloser et al. (2003), and many interesting results were obtained in the case of symmetric threshold-linear networks. Here we review some definitions and results that apply more generally, though later we will also restrict ourselves to the symmetric case.

Informally, a permitted set of a recurrent network is a binary pattern $\sigma \subset [n]$ that can be activated. This means there exists an external input to the network such that the neural activity converges to a steady state $x^*$ (i.e., $x^*$ is a stable fixed point with all firing rates nonnegative) having support $\sigma$:
$$\sigma = \mathrm{supp}(x^*) \stackrel{\mathrm{def}}{=} \{i \in [n] \mid x^*_i > 0\}.$$
Definition 1. 

A permitted set of the network 2.1 is a subset of neurons $\sigma \subset [n]$ with the property that for at least one external input $b \in \mathbb{R}^n$, there exists an asymptotically stable fixed point $x^*$ such that $\sigma = \mathrm{supp}(x^*)$ (Hahnloser et al., 2003). For a given choice of network dynamics, the connectivity matrix W determines the set of all permitted sets of the network, denoted $\mathcal{P}(W)$.

For threshold-linear networks of the form 2.1, it has been previously shown that permitted sets of W correspond to stable principal submatrices of −D+W (Hahnloser et al., 2003; Curto et al., 2012). Recall that a stable matrix is one whose eigenvalues all have strictly negative real part. For any $n \times n$ matrix A, the notation $A_\sigma$ denotes the principal submatrix obtained by restricting to the index set $\sigma \subset [n]$; if $\sigma = \{s_1, \ldots, s_k\}$, then $A_\sigma$ is the $k \times k$ matrix with $(A_\sigma)_{ij} = A_{s_i s_j}$. We denote the set of all stable principal submatrices of A as
$$\mathrm{stab}(A) \stackrel{\mathrm{def}}{=} \{\sigma \subset [n] \mid A_\sigma \text{ is a stable matrix}\}.$$
With this notation we can now restate our prior result, which generalizes an earlier result of Hahnloser et al. (2003) to nonsymmetric networks.

Theorem 1 
(Curto et al., 2012, theorem 1.2).1  Let W be a threshold-linear network on n neurons (not necessarily symmetric) with dynamics given by equation 2.1, and let $\mathcal{P}(W)$ be the set of all permitted sets of W. Then
$$\mathcal{P}(W) = \mathrm{stab}(-D + W).$$
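Theorem 1 gives a direct, if brute force, recipe for computing all permitted sets of a small network: check the stability of every principal submatrix of −D + W. A minimal sketch, assuming nothing beyond the theorem statement:

```python
import numpy as np
from itertools import combinations

def is_stable(M):
    """A matrix is stable if all eigenvalues have strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))

def permitted_sets(W, D=None):
    """P(W) = stab(-D + W), enumerated by brute force (feasible for small n only)."""
    n = W.shape[0]
    M = -(np.eye(n) if D is None else D) + W
    return [set(sigma)
            for k in range(1, n + 1)
            for sigma in combinations(range(n), k)
            if is_stable(M[np.ix_(sigma, sigma)])]
```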

Theorem 1 implies that a binary neural code $\mathcal{C}$ can be exactly encoded as the set of permitted sets in a threshold-linear network if and only if there exists a pair of matrices (D, W) such that $\mathcal{C} = \mathrm{stab}(-D+W)$. From this observation, it is not difficult to see that not all codes are realizable by threshold-linear networks. This follows from a simple lemma:

Lemma 1. 

Let A be an $n \times n$ real-valued matrix (not necessarily symmetric) with strictly negative diagonal and $n \geq 2$. If A is stable, then there exists a $2 \times 2$ principal submatrix of A that is also stable.

Proof. 
We use the formula for the characteristic polynomial in terms of sums of principal minors:
$$p_A(X) = \det(XI - A) = X^n - m_1(A)X^{n-1} + m_2(A)X^{n-2} - \cdots + (-1)^n m_n(A),$$
where $m_k(A)$ is the sum of the $k \times k$ principal minors of A. Writing the characteristic polynomial in terms of symmetric polynomials in the eigenvalues $\lambda_1, \ldots, \lambda_n$, and assuming A is stable, we have $m_2(A) = \sum_{i<j} \lambda_i \lambda_j > 0$. This implies that at least one $2 \times 2$ principal minor is positive. Since the corresponding $2 \times 2$ principal submatrix has negative trace, it must be stable.
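The argument can also be checked numerically. The sketch below draws random matrices with strictly negative diagonal, keeps only the stable ones, and verifies that each has at least one stable 2 x 2 principal submatrix; the matrix size and sampling scheme are arbitrary choices for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 5
for _ in range(2000):
    A = rng.standard_normal((n, n))
    np.fill_diagonal(A, -np.abs(np.diag(A)) - 1.0)       # strictly negative diagonal
    if np.all(np.linalg.eigvals(A).real < 0):              # keep only stable samples
        has_stable_pair = any(
            np.all(np.linalg.eigvals(A[np.ix_(p, p)]).real < 0)
            for p in combinations(range(n), 2))
        assert has_stable_pair, "lemma 1 violated (should never happen)"
```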

Combining lemma 1 with theorem 1 then gives:

Corollary 1. 

Let $\mathcal{C} \subset 2^{[n]}$ be a binary neural code. If there exists a pattern $\sigma \in \mathcal{C}$ such that no order 2 subset of $\sigma$ belongs to $\mathcal{C}$, then $\mathcal{C}$ is not realizable as $\mathcal{P}(W)$ for any threshold-linear network W.

Here we will not pay attention to the relationship between the input to the network b and the corresponding permitted sets that may be activated, as it is beyond the scope of this letter. In prior work, however, we were able to understand with significant detail the relationship between a given b and the set of resulting fixed points of the dynamics (Curto et al., 2012, proposition 2.1). For completeness, we summarize these findings in appendix  D.

2.4.  Structure of Permitted Sets of Symmetric Threshold-Linear Networks.

In the remainder of this work, we restrict attention to the case of symmetric networks. With this assumption, we can immediately say more about the structure of permitted sets $\mathcal{P}(W)$. Namely, if W is symmetric, then the permitted sets $\mathcal{P}(W)$ have the combinatorial structure of a simplicial complex.

Definition 2. 

An (abstract) simplicial complex $\Delta \subset 2^{[n]}$ is a set of subsets of $[n]$ such that the following two properties hold: (1) $\{i\} \in \Delta$ for each $i \in [n]$, and (2) if $\sigma \in \Delta$ and $\tau \subset \sigma$, then $\tau \in \Delta$.

Lemma 2. 

If W is a symmetric threshold-linear network, then $\mathcal{P}(W)$ is a simplicial complex.

In other words, if W is symmetric, then every subset of a permitted set is permitted, and every superset of a set that is not permitted is also not permitted. This was first observed in Hahnloser et al. (2003), using an earlier version of theorem 1 for symmetric W. It follows from the fact that $\mathcal{P}(W) = \mathrm{stab}(-D+W)$, by theorem 1, and that stab(A) is a simplicial complex for any symmetric matrix A having strictly negative diagonal (see corollary 7 in appendix  A). The proof of this fact is a straightforward application of Cauchy's interlacing theorem (appendix A), which applies only to symmetric matrices.

We are not currently aware of any simplicial complex that is not realizable as $\mathcal{P}(W)$ for a symmetric threshold-linear network W, although we believe such examples are likely to exist.

3.  Results

Theorem 1 allows one to find all permitted sets of a given network W. Our primary interest, however, is in the inverse problem:

NE problem: Given a set of binary patterns $\mathcal{C} \subset 2^{[n]}$, how can one construct a network W such that $\mathcal{C} \subset \mathcal{P}(W)$, while minimizing the emergence of unwanted spurious states?

Note that spurious states are elements of $\mathcal{P}(W)$ that were not in the prescribed set of patterns $\mathcal{C}$ to be stored; these are precisely the elements of $\mathcal{P}(W) \setminus \mathcal{C}$. If $\mathcal{C} \subset \mathcal{P}(W)$, so that all patterns in $\mathcal{C}$ are stored as permitted sets of W but $\mathcal{P}(W)$ may contain additional spurious states, then we say that $\mathcal{C}$ has been encoded by the network W. If $\mathcal{C} = \mathcal{P}(W)$, so that there are no spurious states, then we say that $\mathcal{C}$ has been exactly encoded by W.

We tackle the NE problem by analyzing a novel learning rule, called the encoding rule. In what follows, the problem is broken into four motivating questions that address (1) the learning rule, (2) the resulting structure of permitted sets, (3) binary codes that are exactly encodable, and (4) the structure of spurious states when codes are not encoded exactly. In section 3.6 we use our results to uncover “natural” codes for symmetric threshold-linear networks and illustrate this phenomenon in the case of hippocampal PF codes in section 3.7.

3.1.  The Encoding Rule.

Question 1:  Is there a biologically plausible learning rule that allows arbitrary neural codes to be stored as permitted sets in threshold-linear networks?

In this section we introduce a novel encoding rule that constructs a network W from a prescribed set of binary patterns $\mathcal{C}$. The rule is similar to the classical Hopfield learning rule (Hopfield, 1982) in that it updates the weights of the connectivity matrix W following sequential presentation of binary patterns, and strengthens excitatory synapses between coactive neurons in the patterns. In particular, the rule is Hebbian and local: each synapse is updated only in response to the coactivation of the two adjacent neurons, and the updates can be implemented by presenting only one pattern at a time (Hopfield, 1982; Dayan & Abbott, 2001). A key difference from the Hopfield rule is that the synapses are binary: once a synapse (ij) has been turned “on,” the value of Wij stays the same irrespective of the remaining patterns.2 A new ingredient is that synapses are allowed to be heterogeneous: in other words, the actual weights of connections are varied among “on” synapses. These weights are assigned according to a predetermined synaptic strength matrix S, which is considered fixed and reflects the underlying architecture of the network. For example, if no physical connection exists between neurons i and j, then Sij=0, indicating that no amount of cofiring can cause a direct excitatory connection between those neurons. On the other hand, if two neurons have multiple points of physical contact, then Sij will be greater than if there are only a few anatomical contacts. There is, in fact, experimental evidence in hippocampus for synapses that appear binary and heterogeneous in this sense (Petersen, Malenka, Nicoll, & Hopfield, 1998), with individual synapses exhibiting potentiation in an all-or-nothing fashion, but having different thresholds for potentiation and heterogeneous synaptic strengths.

Here we describe the encoding rule in general, with minimal assumptions on S. Later, in sections 3.4 and 3.5, we investigate the consequences of various choices of S on the network's ability to encode different types of binary neural codes.

Encoding rule. This is a prescription for constructing (i.e., “learning”) a network W from a set of binary patterns on n neurons, $\mathcal{C} \subset 2^{[n]}$ (e.g., $\mathcal{C}$ is a binary neural code). It consists of three steps: two initialization steps, followed by an update step:

  1. Fix an $n \times n$ synaptic strength matrix S and an $\varepsilon > 0$. We think of S and $\varepsilon$ as intrinsic properties of the underlying network architecture, established prior to learning. Because S contains synaptic strengths for symmetric excitatory connections, we require that $S_{ij} = S_{ji} \geq 0$ and Sii=0.

  2. The network W is initialized to be symmetric with effective connection strengths Wij=Wji<−1 for $i \neq j$, and Wii=0. (Beyond this requirement, the initial values of W do not affect the results.)

  3. Following presentation of each pattern $\sigma \in \mathcal{C}$, we turn “on” all excitatory synapses between neurons that coappear in $\sigma$.3 This means we update the relevant entries of W as follows:
    $$W_{ij} = W_{ji} = -1 + \varepsilon S_{ij} \quad \text{if } i, j \in \sigma \text{ and } i \neq j.$$
    Note that the order of presentation does not matter; once an excitatory connection has been turned “on,” the value of Wij stays the same irrespective of remaining patterns.

To better understand what kinds of networks result from the encoding rule, observe that any initial W in step 2 can be written as $W_{ij} = -1 - \varepsilon R_{ij}$, where $R_{ij} = R_{ji} > 0$ for $i \neq j$ and $R_{ii} = -1/\varepsilon$, so that Wii=0. Assuming a threshold-linear network with homogeneous timescales (i.e., fixing D=I), the final network W obtained from $\mathcal{C}$ after step 3 satisfies
$$W_{ij} = \begin{cases} 0 & \text{if } i = j, \\ -1 + \varepsilon S_{ij} & \text{if } (ij) \in G(\mathcal{C}), \\ -1 - \varepsilon R_{ij} & \text{if } (ij) \notin G(\mathcal{C}), \end{cases} \qquad (3.1)$$
where $G(\mathcal{C})$ is the graph on n vertices (neurons) having an edge $(ij)$ for each pair of neurons that coappears in one or more patterns of $\mathcal{C}$. We call this graph the cofiring graph of $\mathcal{C}$. In essence, the rule allows the network to “learn” $G(\mathcal{C})$, selecting which excitatory synapses are turned “on” and assigned to their predetermined weights.
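In code, the encoding rule amounts to initializing every off-diagonal entry below −1 and then overwriting the entries for each pair of coactive neurons, as in equation 3.1. A minimal sketch (the helper name and the default choice R_ij = 1 are ours, for illustration only):

```python
import numpy as np

def encoding_rule(patterns, S, eps, R=None):
    """Build W from a list of patterns (iterables of neuron indices), per eq. 3.1.

    S: fixed synaptic strength matrix (S_ij = S_ji >= 0, S_ii = 0).
    eps: scale of excitation (eps > 0).
    R: positive off-diagonal entries setting the initial inhibitory weights;
       they are irrelevant for pairs that coappear in some pattern.
    """
    n = S.shape[0]
    R = np.ones((n, n)) if R is None else R
    W = -1.0 - eps * R            # step 2: all synapses start "off" (W_ij < -1)
    np.fill_diagonal(W, 0.0)      # W_ii = 0
    for sigma in patterns:        # step 3: turn "on" synapses within each pattern
        sigma = list(sigma)
        for i in sigma:
            for j in sigma:
                if i != j:
                    W[i, j] = -1.0 + eps * S[i, j]
    return W
```

Combining this with the permitted_sets sketch from section 2.3 reproduces, for small examples, the sets described by theorem 2 below.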
Consequently, any matrix −D+W obtained via the encoding rule has the form
$$-D + W = -11^T + \varepsilon A,$$
where $-11^T$ denotes the $n \times n$ matrix of all −1s and A is a symmetric matrix with zero diagonal and off-diagonal entries $A_{ij} = S_{ij} \geq 0$ or $A_{ij} = -R_{ij} < 0$, depending on whether or not $(ij) \in G(\mathcal{C})$. It then follows from theorem 1 that the permitted sets of this network are
$$\mathcal{P}(W) = \mathrm{stab}(-11^T + \varepsilon A).$$
Furthermore, it turns out that $\mathcal{P}(W)$ for any symmetric W is of this form, even if −D+W is not of the form $-11^T + \varepsilon A$.

Lemma 3. 

If W is a symmetric threshold-linear network (with D not necessarily equal to the identity matrix I), then there exists a symmetric $n \times n$ matrix A with zero diagonal such that $\mathcal{P}(W) = \mathrm{stab}(-11^T + A)$.

The proof is given in appendix  B (see lemma 14).

In addition to being symmetric, the encoding rule (for small enough $\varepsilon$) generates “lateral inhibition” networks where the matrix −D+W has strictly negative entries. In particular, this means that the matrix D−W is copositive—that is, $x^T(D-W)x > 0$ for all nonnegative x except x = 0. It follows from (Hahnloser et al., 2003, theorem 1) that for all input vectors $b \in \mathbb{R}^n$ and for all initial conditions, the network dynamics of equation 2.1 converge to an equilibrium point. This was proven by constructing a Lyapunov-like function, similar to the strategy in Cohen and Grossberg (1983).4

3.2.  Main Result.

Question 2:  What is the full set of permitted sets stored in a network constructed using the encoding rule?

Our main result, theorem 2, characterizes the full set of permitted sets obtained using the encoding rule, revealing a detailed understanding of the structure of spurious states. Recall from lemma 3 that the set of permitted sets of any symmetric network on n neurons has the form $\mathcal{P}(W) = \mathrm{stab}(-11^T + \varepsilon A)$ for some $\varepsilon > 0$ and A a symmetric matrix with zero diagonal.5 Describing $\mathcal{P}(W)$ thus requires understanding the stability of the principal submatrices $(-11^T + \varepsilon A)_\sigma$ for each $\sigma \subset [n]$. Note that these submatrices all have the same form: $-11^T + \varepsilon A_\sigma$, where $-11^T$ is the all −1s matrix of size $|\sigma| \times |\sigma|$. Proposition 1 (below) provides an unexpected connection between the stability of these matrices and classical distance geometry.6 We first present proposition 1 and then show how it leads to theorem 2.

For $2 \times 2$ symmetric matrices of the form $-11^T + \varepsilon A$, with A having zero diagonal, it is easy to identify the conditions for the matrix to be stable. One needs the determinant to be positive, so A12>0 and $0 < \varepsilon < 2/A_{12}$. For $3 \times 3$ matrices, the conditions are more interesting, and the connection to distance geometry emerges.

Lemma 4. 
Consider the $3 \times 3$ matrix $-11^T + \varepsilon A$, for a fixed symmetric A with zero diagonal:
$$-11^T + \varepsilon A = \begin{pmatrix} -1 & -1 + \varepsilon A_{12} & -1 + \varepsilon A_{13} \\ -1 + \varepsilon A_{12} & -1 & -1 + \varepsilon A_{23} \\ -1 + \varepsilon A_{13} & -1 + \varepsilon A_{23} & -1 \end{pmatrix}.$$
There exists an $\varepsilon > 0$ such that this matrix is stable if and only if $\sqrt{A_{12}}$, $\sqrt{A_{13}}$, and $\sqrt{A_{23}}$ are valid edge lengths for a nondegenerate triangle in $\mathbb{R}^2$.

In other words, the numbers $\sqrt{A_{ij}}$ must satisfy the triangle inequalities $\sqrt{A_{ij}} < \sqrt{A_{ik}} + \sqrt{A_{jk}}$ for distinct i, j, k. This can be proven by straightforward computation, using Heron's formula and the characteristic polynomial of the matrix. The upper bound on $\varepsilon$, however, is not so easy to identify.

Remarkably, the above observations completely generalize to $n \times n$ matrices of the form $-11^T + \varepsilon A$, and the precise limits on $\varepsilon$ can also be computed for general n. This is the content of proposition 1, below. To state it, however, we first need a few notions from distance geometry.

Definition 3. 

An $n \times n$ matrix A is a (Euclidean) square distance matrix if there exists a configuration of points $p_1, \ldots, p_n \in \mathbb{R}^{n-1}$ (not necessarily distinct) such that $A_{ij} = \|p_i - p_j\|^2$. A is a nondegenerate square distance matrix if the corresponding points are affinely independent, that is, if the convex hull of $p_1, \ldots, p_n$ is a simplex with nonzero volume in $\mathbb{R}^{n-1}$.

Clearly, all square distance matrices are symmetric and have zero diagonal. Furthermore, a $2 \times 2$ matrix A is a nondegenerate square distance matrix if and only if the off-diagonal entry satisfies the additional condition A12>0. For a $3 \times 3$ matrix A, the necessary and sufficient condition to be a nondegenerate square distance matrix is that the entries $\sqrt{A_{12}}$, $\sqrt{A_{13}}$, and $\sqrt{A_{23}}$ are valid edge lengths for a nondegenerate triangle in $\mathbb{R}^2$ (this was precisely the condition in lemma 4). For larger matrices, however, the conditions are less intuitive. A key object for determining whether an $n \times n$ matrix A is a nondegenerate square distance matrix is the Cayley-Menger determinant,
$$\mathrm{cm}(A) \stackrel{\mathrm{def}}{=} \det \begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & A \end{pmatrix},$$
where $\mathbf{1} \in \mathbb{R}^n$ is the column vector of all ones. If A is an $n \times n$ square distance matrix, then cm(A) is proportional to the square volume of the simplex obtained as the convex hull of the points $p_1, \ldots, p_n$ (see lemma 11 in appendix  A). In particular, $\mathrm{cm}(A) \neq 0$ (and hence |cm(A)|>0) if A is a nondegenerate square distance matrix, while cm(A)=0 for any other (degenerate) square distance matrix.
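Both quantities are easy to test numerically. The sketch below computes the Cayley-Menger determinant from the bordered matrix above, and checks definition 3 via the classical embedding trick (the Gram matrix of the points relative to the first one); the tolerance is an arbitrary choice.

```python
import numpy as np

def cayley_menger(A):
    """cm(A): determinant of A bordered by a row and column of ones (and a 0 corner)."""
    n = A.shape[0]
    B = np.ones((n + 1, n + 1))
    B[0, 0] = 0.0
    B[1:, 1:] = A
    return np.linalg.det(B)

def is_nondeg_square_distance(A, tol=1e-9):
    """Is A the squared-distance matrix of affinely independent points (definition 3)?"""
    n = A.shape[0]
    if n == 1:
        return True
    # Gram matrix of p_2 - p_1, ..., p_n - p_1 recovered from squared distances:
    # G_ij = (A_1i + A_1j - A_ij) / 2.
    G = 0.5 * (A[0, 1:][None, :] + A[1:, 0][:, None] - A[1:, 1:])
    eig = np.linalg.eigvalsh((G + G.T) / 2)
    # Square distance matrix iff G is positive semidefinite;
    # nondegenerate iff G also has full rank n - 1, i.e., all eigenvalues positive.
    return bool(eig.min() > tol)
```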

Proposition 1. 
Let A be an $n \times n$ symmetric matrix with zero diagonal and $\varepsilon > 0$. Then the matrix
$$-11^T + \varepsilon A$$
is stable if and only if the following two conditions hold:
  • (a) A is a nondegenerate square distance matrix, and
  • (b) $0 < \varepsilon < \left|\mathrm{cm}(A)/\det(A)\right|$.

Proposition 1 is essentially a special case of theorem 4—our core technical result—whose statement and proof are given in section 4.1. The proof of proposition 1 is then given in section 4.2. To our knowledge, theorem 4 is novel, and connections to distance geometry have not previously been used in the study of neural networks or, more generally, the stability of fixed points in systems of ODEs.

The ratio $|\mathrm{cm}(A)/\det(A)|$ has a simple geometric interpretation in cases where condition (a) of proposition 1 holds. Namely, if A is an $n \times n$ nondegenerate square distance matrix (with n>1), then $|\mathrm{cm}(A)/\det(A)| = 1/(2\rho^2)$, where $\rho$ is the radius of the unique sphere circumscribed on any set of points in Euclidean space that can be used to generate A (see remark 6 in appendix  C). Moreover, since |cm(A)|>0 whenever A is a nondegenerate square distance matrix, there always exists an $\varepsilon$ small enough to satisfy the second condition, provided the first condition holds. Combining proposition 1 with Cauchy's interlacing theorem yields:

Lemma 5. 
If A is an $n \times n$ nondegenerate square distance matrix, then every principal submatrix $A_\sigma$ is also a nondegenerate square distance matrix, and
$$0 < \left|\frac{\mathrm{cm}(A)}{\det(A)}\right| \leq \left|\frac{\mathrm{cm}(A_\sigma)}{\det(A_\sigma)}\right| \quad \text{for all } \sigma \subset [n] \text{ with } |\sigma| \geq 2.$$

Given any symmetric $n \times n$ matrix A with zero diagonal and $\varepsilon > 0$, it is now natural to define the following simplicial complexes:
$$\mathrm{geom}_\varepsilon(A) \stackrel{\mathrm{def}}{=} \left\{\sigma \subset [n] \;\middle|\; A_\sigma \text{ is a nondegenerate square distance matrix and } \varepsilon < \left|\tfrac{\mathrm{cm}(A_\sigma)}{\det(A_\sigma)}\right|\right\},$$
$$\mathrm{geom}(A) \stackrel{\mathrm{def}}{=} \left\{\sigma \subset [n] \;\middle|\; A_\sigma \text{ is a nondegenerate square distance matrix}\right\}.$$
Lemma 5 implies that $\mathrm{geom}_\varepsilon(A)$ and geom(A) are simplicial complexes. Note that if $|\sigma| = 1$, we have $A_\sigma = [0]$; by our convention, such $A_\sigma$ is considered a nondegenerate square distance matrix with $|\mathrm{cm}(A_\sigma)/\det(A_\sigma)| = \infty$, so that singletons belong to $\mathrm{geom}_\varepsilon(A)$ and geom(A) for all $\varepsilon > 0$. Also, $\mathrm{geom}_\varepsilon(A) = \mathrm{geom}(A)$ if and only if $\varepsilon$ is smaller than the minimum value of $|\mathrm{cm}(A_\sigma)/\det(A_\sigma)|$ over all $\sigma \in \mathrm{geom}(A)$. If A is a nondegenerate square distance matrix, then $\mathrm{geom}(A) = 2^{[n]}$.
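With the two tests from the previous sketch in hand, geom(A) and geom_ε(A) can be enumerated directly from their definitions. A brute force sketch, practical only for small n (it reuses cayley_menger and is_nondeg_square_distance from above):

```python
import numpy as np
from itertools import combinations

def geom(A, eps=None):
    """geom(A) if eps is None, otherwise geom_eps(A), as a list of index sets."""
    n = A.shape[0]
    faces = [{i} for i in range(n)]              # singletons are always included
    for k in range(2, n + 1):
        for sigma in combinations(range(n), k):
            A_sigma = A[np.ix_(sigma, sigma)]
            if not is_nondeg_square_distance(A_sigma):
                continue
            if eps is not None:
                bound = abs(cayley_menger(A_sigma) / np.linalg.det(A_sigma))
                if eps >= bound:
                    continue
            faces.append(set(sigma))
    return faces
```

By theorem 2 below, intersecting geom_ε(S) with the cliques of the cofiring graph recovers the permitted sets of the encoded network.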

To state our main result, theorem 2, we also need a few standard notions from graph theory. A clique in a graph G is a subset of vertices that is all-to-all connected.7 The clique complex of G, denoted X(G), is the set of all cliques in G; this is a simplicial complex for any G. Here we are primarily interested in the graph $G(\mathcal{C})$, the cofiring graph of a set of binary patterns $\mathcal{C}$.

Theorem 2. 
Let S be an $n \times n$ synaptic strength matrix satisfying $S_{ij} = S_{ji} \geq 0$ and Sii =0 for all $i, j \in [n]$, and fix $\varepsilon > 0$. Given a set of prescribed patterns $\mathcal{C} \subset 2^{[n]}$, let W be the threshold-linear network (see equation 3.1) obtained from $\mathcal{C}$ using S and $\varepsilon$ in the encoding rule. Then,
$$\mathcal{P}(W) = \mathrm{geom}_\varepsilon(S) \cap X(G(\mathcal{C})).$$
If we further assume that $\varepsilon$ is smaller than the minimum value of $|\mathrm{cm}(S_\sigma)/\det(S_\sigma)|$ over all $\sigma \in \mathrm{geom}(S)$, then
$$\mathcal{P}(W) = \mathrm{geom}(S) \cap X(G(\mathcal{C})).$$

In other words, a binary pattern $\sigma \subset [n]$ is a permitted set of W if and only if $S_\sigma$ is a nondegenerate square distance matrix, $\varepsilon < |\mathrm{cm}(S_\sigma)/\det(S_\sigma)|$, and $\sigma$ is a clique in the graph $G(\mathcal{C})$.

The proof is given in section 4.2. Theorem 2 answers question 2 and makes explicit how $\mathcal{P}(W)$ depends on S, $\varepsilon$, and $\mathcal{C}$. One way of interpreting this result is to observe that a binary pattern $\sigma \in \mathcal{C}$ is successfully stored as a permitted set of W if and only if the excitatory connections between the neurons in $\sigma$, given by $W_{ij} = -1 + \varepsilon S_{ij}$ for $i, j \in \sigma$, are geometrically balanced:

  • $S_\sigma$ is a nondegenerate square distance matrix.

  • $\varepsilon < \left|\mathrm{cm}(S_\sigma)/\det(S_\sigma)\right|$.

The first condition ensures a certain balance among the relative strengths of excitatory connections in the clique $\sigma$, while the second condition bounds the overall excitation strengths relative to inhibition (which has been normalized to −1 in the encoding rule).

We next turn to an example that illustrates how this theorem can be used to solve the NE problem explicitly for a small binary neural code. In the following section, section 3.4, we address more generally the question of what neural codes can be encoded exactly and what the structure of spurious states is when a code is encoded inexactly.

3.3.  An Example.

Suppose is a binary neural code on n = 6 neurons, consisting of maximal patterns
formula
corresponding to subsets and , together with all subpatterns (smaller subsets) of the maximal ones, thus ensuring that is a simplicial complex. This is depicted in Figure 2A, using a standard method of illustrating simplicial complexes geometrically. The four maximal patterns correspond to the shaded triangles, while patterns with only one or two coactive neurons comprise the vertices and edges of the cofiring graph .8
Figure 2:

An example on n = 6 neurons. (A) The simplicial complex $\mathcal{C}$ consists of 4 two-dimensional facets (shaded triangles). The graph $G(\mathcal{C})$ contains the 6 vertices and 12 depicted edges; these are also included in $\mathcal{C}$, so the code consists of all vertices, edges, and shaded triangles. (B) A configuration of points that can be used to exactly encode $\mathcal{C}$. Lines indicate triples of points that are collinear. From this configuration, we construct a synaptic strength matrix S with $S_{ij} = \|p_i - p_j\|^2$ and choose $\varepsilon$ sufficiently small. The geometry of the configuration implies that geom(S) does not contain any patterns of size greater than 3 or the triples corresponding to collinear points. It is straightforward to check that $\mathrm{geom}(S) \cap X(G(\mathcal{C})) = \mathcal{C}$. (C) Another solution for exactly encoding $\mathcal{C}$ is provided by choosing the matrix S with Sij given by the labeled edges in the figure. The square distances in Sij were chosen to satisfy the triangle inequalities for shaded triangles but to violate them for empty triangles.


Without theorem 2, it is difficult to find a network W that encodes exactly—that is, such that . This is in part because each connection strength Wij belongs to two matrices that must satisfy opposite stability properties. For example, subset must be a permitted set of , while is not permitted, imposing competing conditions on the entry W12. In general, it may be difficult to patch together local ad hoc solutions to obtain a single matrix W having all the desired stability properties.

Using theorem 2, however, we can easily construct many exact solutions for encoding $\mathcal{C}$ as a set of permitted sets $\mathcal{P}(W)$. The main idea is as follows. Consider the encoding rule with synaptic strength matrix S and $\varepsilon$ sufficiently small. Applying the rule to $\mathcal{C}$ yields a network with permitted sets
$$\mathcal{P}(W) = \mathrm{geom}(S) \cap X(G(\mathcal{C})).$$
The goal is thus to find S so that $\mathrm{geom}(S) \cap X(G(\mathcal{C})) = \mathcal{C}$. From the cofiring graph $G(\mathcal{C})$, we see that the clique complex $X(G(\mathcal{C}))$ contains all triangles depicted in Figure 2A, including the empty (nonshaded) triangles. The matrix S must therefore be chosen so that these triples are not in geom(S), while ensuring that the shaded triangles are included. In other words, to obtain an exact solution, we must find S such that $S_\sigma$ is a nondegenerate square distance matrix for each shaded triangle $\sigma$ but not for any $\sigma$ corresponding to an empty triangle.

Solution 1. Consider the configuration of points $p_1, \ldots, p_6 \in \mathbb{R}^2$ in Figure 2B, and let S be the square distance matrix with entries $S_{ij} = \|p_i - p_j\|^2$. Because the points lie in the plane, the largest principal submatrices of S that can possibly be nondegenerate square distance matrices are $3 \times 3$. This means geom(S) has no elements of size greater than 3. Because no two points have the same position, geom(S) contains the complete graph with all edges (ij). It remains only to determine which triples are in geom(S). The only $3 \times 3$ principal submatrices of S that are nondegenerate square distance matrices correspond to triples of points in general position. From Figure 2B (left), we see that geom(S) includes all triples except those corresponding to collinear points, indicated by lines in the figure (which thus yield degenerate square distance matrices). Although neither geom(S) nor $X(G(\mathcal{C}))$ equals $\mathcal{C}$, it is now easy to check that $\mathrm{geom}(S) \cap X(G(\mathcal{C})) = \mathcal{C}$. Using theorem 2, we conclude that $\mathcal{P}(W) = \mathcal{C}$ exactly, where W is the network obtained using the encoding rule with this S and any sufficiently small $\varepsilon$.

Solution 2. Let S be the symmetric matrix defined by the following equations for i<j: Sij=1 if i=1; S24=S35=1; $S_{23}=S_{26}=S_{36}=3^2$; and $S_{ij}=5^2$ if i = 4 or 5. Here we have only assigned values corresponding to each edge in $G(\mathcal{C})$ (see Figure 2C); remaining entries may be chosen arbitrarily, as they play no role after we intersect with $X(G(\mathcal{C}))$. Note that S is not a square distance matrix at all, not even a degenerate one. Nevertheless, $S_\sigma$ is a nondegenerate square distance matrix for each shaded triangle $\sigma$, because the corresponding distances form nondegenerate triangles. For example, a triple with pairwise distances (1, 1, 1) satisfies the triangle inequality. In contrast, a triple with pairwise distances (1, 1, 3) violates the triangle inequality; hence, the corresponding $S_\sigma$ is not a square distance matrix. Similarly, the triangle inequality is violated for each of the remaining empty triangles. It is straightforward to check that among all cliques of $X(G(\mathcal{C}))$, only the desired patterns in $\mathcal{C}$ are also elements of geom(S), so $\mathcal{P}(W) = \mathcal{C}$.

By construction, solutions 1 and 2 produce networks W (obtained using the encoding rule with the same $\mathcal{C}$ and $\varepsilon$, but different choices of S) with exactly the same set of permitted sets $\mathcal{P}(W) = \mathcal{C}$. Nevertheless, the solutions are functionally different in that the resulting input-output relationships associated with the equation 2.1 dynamics are different, as they depend on further details of W not captured by $\mathcal{P}(W)$ (see appendix  D).

3.4.  Binary Neural Codes That Can Be Encoded Exactly.

Question 3:  What binary neural codes can be encoded exactly as $\mathcal{C} = \mathcal{P}(W)$ for a symmetric threshold-linear network W?

Question 4: If encoding is not exact, what is the structure of spurious states?

From theorem 2, it is clear that if the set of patterns to be encoded happens to be of the form $\mathcal{C} = \mathrm{geom}(S) \cap X(G(\mathcal{C}))$, then $\mathcal{C}$ can be exactly encoded as $\mathcal{P}(W)$ for small enough $\varepsilon$ and the same choice of S. Similarly, if the set of patterns has the form $\mathcal{C} = \mathrm{geom}_\varepsilon(S) \cap X(G(\mathcal{C}))$, then $\mathcal{C}$ can be exactly encoded as $\mathcal{P}(W)$ using our encoding rule (see section 3.1) with the same S and $\varepsilon$. Can any other sets of binary patterns be encoded exactly via symmetric threshold-linear networks? The next theorem assures us that the answer is no. This means that by focusing attention on networks constructed using our encoding rule, we are not missing any binary neural codes that could arise as $\mathcal{P}(W)$ for other symmetric networks.

Theorem 3. 
Let $\mathcal{C} \subset 2^{[n]}$ be a binary neural code. There exists a symmetric threshold-linear network W such that $\mathcal{C} = \mathcal{P}(W)$ if and only if $\mathcal{C}$ is a simplicial complex of the form
$$\mathcal{C} = \mathrm{geom}_\varepsilon(S) \cap X(G(\mathcal{C})), \qquad (3.2)$$
for some $\varepsilon > 0$ and S an $n \times n$ matrix satisfying $S_{ij} = S_{ji} \geq 0$ and Sii=0 for all $i, j \in [n]$. Moreover, W can be constructed using the encoding rule on $\mathcal{C}$, using this choice of S and $\varepsilon$.

The proof is given in section 4.2. Theorem 3 allows us to make a preliminary classification of binary neural codes that can be encoded exactly, giving a partial answer to question 3. To do this, it is useful to distinguish three different types of S matrices that can be used in the encoding rule:

  • Universal S. We say that a matrix S is universal if it is an $n \times n$ nondegenerate square distance matrix. In particular, any principal submatrix $S_\sigma$ is also a nondegenerate square distance matrix, so if we let $\varepsilon < |\mathrm{cm}(S)/\det(S)|$, then any $\sigma \subset [n]$ has corresponding excitatory connections that are geometrically balanced (see section 3.2). Furthermore, $\mathrm{geom}_\varepsilon(S) = 2^{[n]}$, and hence $\mathcal{P}(W) = X(G(\mathcal{C}))$, irrespective of S. It follows that if $\mathcal{C} = X(G)$ for any graph G, then $\mathcal{C}$ can be exactly encoded using any universal S and any such $\varepsilon$ in the encoding rule.9 Moreover, since $\mathcal{C} \subset X(G(\mathcal{C}))$ for any code $\mathcal{C}$, it follows that any code can be encoded—albeit inexactly—using a universal S in the encoding rule. Finally, the spurious states can be completely understood: they consist of all cliques in the graph $G(\mathcal{C})$ that are not elements of $\mathcal{C}$.

  • k-sparse universal S. We say that a matrix S is k-sparse universal if it is a (degenerate) square distance matrix for a configuration of n points that are in general position10 in $\mathbb{R}^{k-1}$, for k<n (otherwise S is universal). Let $\varepsilon$ be small enough that $\mathrm{geom}_\varepsilon(S) = \mathrm{geom}(S)$. Then $\mathrm{geom}_\varepsilon(S) = \{\sigma \subset [n] \mid |\sigma| \leq k\}$; this is the (k−1)–skeleton11 of the complete simplicial complex $2^{[n]}$. This implies that $\mathcal{P}(W) = X_{k-1}(G(\mathcal{C}))$, where $X_k$ denotes the k-skeleton of the clique complex X:
    $$X_k(G) \stackrel{\mathrm{def}}{=} \{\sigma \in X(G) \mid |\sigma| \leq k+1\}.$$
    It follows that any k-skeleton $X_k(G)$ of a clique complex, for any graph G, can be encoded exactly. Furthermore, since any k-sparse code satisfies $\mathcal{C} \subset X_{k-1}(G(\mathcal{C}))$, any k-sparse code can be encoded using this type of S matrix in the encoding rule. The spurious states in this case are cliques of $G(\mathcal{C})$ that are not in $\mathcal{C}$ and have size no greater than k. (A sketch showing how universal and k-sparse universal matrices S can be generated from random point configurations appears after this list.)
  • Specially tuned S. We will refer to all S matrices that do not fall into the universal or k-sparse universal categories as specially tuned. In this case, we cannot say anything general about the codes that are exactly encodable without further knowledge about S. If we let $\varepsilon$ be sufficiently small, as above, theorem 3 tells us that the binary codes that can be encoded exactly (via the encoding rule) are of the form $\mathcal{C} = \mathrm{geom}(S) \cap X(G(\mathcal{C}))$. Unlike in the universal and k-sparse universal cases, the encodable codes depend on the precise form of S. Note that the example code discussed in section 3.3 was not a clique complex or the k-skeleton of a clique complex. Nevertheless, it could be encoded exactly for the “specially tuned” choices of S exhibited in solutions 1 and 2 (see Figures 2B and 2C).
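The first two types of S are easy to generate from point configurations. A minimal sketch follows; note that random points are only generically affinely independent (respectively, in general position), so for a given draw the property should be verified with the tests from section 3.2.

```python
import numpy as np

def square_distances(points):
    """S_ij = squared Euclidean distance between points i and j (S_ii = 0)."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def universal_S(n, rng=np.random.default_rng(0)):
    """n random points in R^(n-1): generically affinely independent, so S is a
    nondegenerate square distance matrix (a universal S)."""
    return square_distances(rng.standard_normal((n, n - 1)))

def k_sparse_universal_S(n, k, rng=np.random.default_rng(0)):
    """n random points in R^(k-1): generically in general position, so every subset
    of at most k points is affinely independent while no larger subset is.
    This is the k-sparse universal case (a degenerate square distance matrix)."""
    return square_distances(rng.standard_normal((n, k - 1)))
```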

A summary of what codes are encodable and exactly encodable for each type of S matrix is shown in Table 2, under the assumption that $\varepsilon$ is sufficiently small in the encoding rule.

Table 2:
Classification of S Matrices, Together with Encodable Codes and Spurious States.
Type of S matrix | $\mathcal{C}$ that can be exactly encoded | $\mathcal{C}$ that can be encoded | Spurious states
Universal S | Any clique complex X(G) | All codes | Cliques of $G(\mathcal{C})$ that are not in $\mathcal{C}$
k-sparse universal S | Any (k−1)–skeleton $X_{k-1}(G)$ of a clique complex | All k-sparse codes ($|\sigma| \leq k$ for all $\sigma \in \mathcal{C}$) | Cliques of $G(\mathcal{C})$ of size $\leq k$ that are not in $\mathcal{C}$
Specially tuned S | $\mathcal{C}$ of the form $\mathrm{geom}(S) \cap X(G(\mathcal{C}))$ | Depends on S | Cliques of $G(\mathcal{C})$ that are in geom(S) but not in $\mathcal{C}$

Notes: The above assumes using the encoding rule on the code $\mathcal{C}$ with synaptic strength matrix S and sufficiently small $\varepsilon$. Additional codes may be exactly encodable for other choices of $\varepsilon$.

We end this section with several technical remarks, along with some open questions for further mathematical investigation.

Remark 1. 

Fine-tuning? It is worth noting here that solutions obtained by choosing S to be a degenerate square distance matrix, as in the k-sparse universal S or the specially tuned S of Figure 2B, are not as finely tuned as they might first appear. This is because the ratio $|\mathrm{cm}(S_\sigma)/\det(S_\sigma)|$ approaches zero as subsets of points used to generate S become approximately degenerate, allowing elements to be eliminated from $\mathrm{geom}_\varepsilon(S)$ because of violations to condition (b) in proposition 1, even if condition (a) is not quite violated. This means the appropriate $S_\sigma$ matrices do not have to be exactly degenerate, but only approximately degenerate (see remark 7 in appendix  C). In particular, the collinear points in Figure 2B need not be exactly collinear for solution 1 to hold.

Remark 2. 

Controlling spurious cliques in sparse codes. If the set of patterns to be encoded is a k-sparse code, that is, if $|\sigma| \leq k$ for all $\sigma \in \mathcal{C}$, then any clique of size k+1 or greater in $G(\mathcal{C})$ is potentially a spurious clique. We can eliminate these spurious states, however, by choosing a k-sparse universal S in the encoding rule. This guarantees that $\mathrm{geom}_\varepsilon(S)$ does not include any element of size greater than k, and hence neither does $\mathcal{P}(W)$.

Remark 3. 
Uniform S. To use truly binary synapses, we can choose S in the encoding rule to be the uniform synaptic strength matrix having Sij=1 for $i \neq j$ and Sii=0 for all $i \in [n]$. In fact, S is a nondegenerate square distance matrix (its points form a regular simplex with unit side lengths), so this is a special case of a “universal” S. Here the critical ratio turns out to have a very simple form:
$$\left|\frac{\mathrm{cm}(S)}{\det(S)}\right| = \frac{n}{n-1}.$$
Similarly, any principal submatrix $S_\sigma$, with $|\sigma| = m \geq 2$, satisfies $|\mathrm{cm}(S_\sigma)/\det(S_\sigma)| = m/(m-1)$. This implies that $\mathrm{geom}_\varepsilon(S)$ is the k-skeleton of the complete simplicial complex on n vertices if and only if
$$\frac{k+2}{k+1} \leq \varepsilon < \frac{k+1}{k}.$$
It follows that for this choice of S and $\varepsilon$ (note that $\varepsilon > 1$), the encoding rule yields $\mathcal{P}(W) = X_k(G(\mathcal{C}))$, just as in the case of k-sparse universal S. If, on the other hand, we choose $\varepsilon < 1$, then $\mathrm{geom}_\varepsilon(S) = 2^{[n]}$, and we have the usual properties for universal S.
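The ratio in this remark is easy to check numerically with the cayley_menger helper sketched in section 3.2; for each size m, the stability bound for the uniform submatrix comes out to m/(m−1):

```python
import numpy as np

# Numerical check: the uniform matrix (1s off the diagonal, 0s on it) of size m
# has |cm(S_sigma)/det(S_sigma)| = m/(m-1); uses cayley_menger from the earlier sketch.
for m in range(2, 9):
    S_sigma = np.ones((m, m)) - np.eye(m)
    bound = abs(cayley_menger(S_sigma) / np.linalg.det(S_sigma))
    print(m, bound, m / (m - 1))   # the last two values agree
```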

Remark 4. 
Matroid complexes. In the special case where S is a square distance matrix, geom(S) is a representable matroid complex—the independent set complex of a real-representable matroid (Oxley, 2011). Moreover, all representable matroid complexes are of this form and can thus be encoded exactly. To see this, take any code $\mathcal{C}$ having $G(\mathcal{C}) = K_n$, the complete graph on n vertices. Then $X(G(\mathcal{C})) = 2^{[n]}$, and the encoding rule (for sufficiently small $\varepsilon$) yields
$$\mathcal{P}(W) = \mathrm{geom}(S) \cap X(G(\mathcal{C})) = \mathrm{geom}(S).$$
Note that although the example code of section 3.3 is not a matroid complex (in particular, it violates the independent set exchange property; Oxley, 2011), geom(S) for the matrix S given in solution 1 (see Figure 2B) is a representable matroid complex, showing that $\mathcal{C}$ is the intersection of a representable matroid complex and the clique complex $X(G(\mathcal{C}))$.

Remark 5. 

Open questions. Can a combinatorial description be found for all simplicial complexes that are of the form $\mathrm{geom}_\varepsilon(S)$ or geom(S), where S and $\varepsilon$ satisfy the conditions in theorem 3? For such complexes, can the appropriate S and $\varepsilon$ be obtained constructively? Does every simplicial complex admit an exact solution to the NE problem via a symmetric network W? That is, is every simplicial complex of the form $\mathrm{geom}_\varepsilon(S) \cap X(G(\mathcal{C}))$, as in equation 3.2? If not, what are the obstructions? More generally, does every simplicial complex admit an exact solution (not necessarily symmetric) to the NE problem? We have seen that all matroid complexes for representable matroids can be exactly encoded as geom(S). Can nonrepresentable matroids also be exactly encoded?

3.5.  Spurious States and “Natural” Codes.

Although it may be possible, as in the example of section 3.3, to precisely tune the synaptic strength matrix S to exactly encode a particular neural code, this is somewhat contrary to the spirit of the encoding rule, which assumes S to be an intrinsic property of the underlying network. Fortunately, as seen in section 3.4, theorem 2 implies that certain “universal” choices of S enable any code $\mathcal{C}$ to be encoded. The price to pay, however, is the emergence of spurious states.

Recall that spurious states are permitted sets that arise in $\mathcal{P}(W)$ that were not in the prescribed list of binary patterns $\mathcal{C}$ to be encoded. Theorem 2 immediately implies that all spurious states lie in $X(G(\mathcal{C}))$—that is, every spurious state is a clique of the cofiring graph $G(\mathcal{C})$. We can divide them into two types:

  • Type 1: Spurious subsets. These are permitted sets $\sigma \in \mathcal{P}(W) \setminus \mathcal{C}$ that are subsets of patterns in $\mathcal{C}$. Note that if $\mathcal{C}$ is a simplicial complex, there will not be any spurious states of this type. But if $\mathcal{C}$ is not a simplicial complex, then type 1 spurious states are guaranteed to be present for any symmetric encoding rule, because $\mathcal{P}(W)$ is a simplicial complex for symmetric W (see lemma 2).

  • Type 2: Spurious cliques. These are permitted sets $\sigma \in \mathcal{P}(W) \setminus \mathcal{C}$ that are not of the first type. Note that technically, the type 1 spurious states are also cliques in $G(\mathcal{C})$, but we will use the term spurious clique to refer only to spurious states that are not spurious subsets.

Perhaps surprisingly, some common neural codes have the property that the full set of patterns to be encoded naturally contains a large fraction of the cliques in the code's cofiring graph. In such cases, $\mathcal{C}$ is equal to, or very close to, $X(G(\mathcal{C}))$ or its k-skeleton. These neural codes therefore have very few spurious states when encoded using a universal or k-sparse universal S, even though S has not been specially tuned for the given code. We will refer to these as natural codes for symmetric threshold-linear networks because they have two important properties that make them particularly fitting for these networks:

  1. Natural codes can be encoded exactly or nearly exactly, using any universal or k-sparse universal matrix S in the encoding rule.

  2. Natural codes can be fully encoded following presentation of only a small (randomly sampled) fraction of the patterns in the code.

In other words, not only can natural codes be generically encoded with very few spurious states, but they can also be encoded from a highly undersampled set of codewords. This is because the network naturally fills in the missing elements via spurious states that emerge after encoding only part of the code. In the next two sections, we explain why RF codes are “natural” in this sense, and illustrate the above two properties with a concrete application of encoding two-dimensional PF codes, an important example of RF codes.

3.6.  Receptive Field Codes Are Natural Codes.

RF codes are binary neural codes consisting of activity patterns of populations of neurons that fire according to receptive fields.12 Abstractly, a receptive field is a map from a space of stimuli to the average firing rate fi(s) of a single neuron i in response to each stimulus s. Receptive fields are computed from experimental data by correlating neural responses to external stimuli. We follow a common abuse of language, where both the map and its support (i.e., the subset where fi takes on strictly positive values) are referred to as receptive fields. If the stimulus space is d-dimensional (i.e., a subset of $\mathbb{R}^d$), we say that the receptive fields have dimension d. The paradigmatic examples of neurons with receptive fields are orientation-selective neurons in visual cortex (Ben-Yishai, Bar-Or, & Sompolinsky, 1995) and hippocampal place cells (McNaughton, Battaglia, Jensen, Moser, & Moser, 2006). Orientation-selective neurons have tuning curves that reflect a neuron's preference for a particular angle. Place cells are neurons that have place fields (O'Keefe, 1976; O'Keefe & Nadel, 1978); that is, each neuron has a preferred (convex) region of the animal's physical environment where it has a high firing rate. Both tuning curves and place fields are examples of low-dimensional receptive fields, having typical dimension d=1 or d=2.

The elements of an RF code correspond to subsets of neurons that may be coactivated in response to a stimulus (see Figure 3). Here we define two variations of this notion, which we refer to as RF codes and coarse RF codes.

Figure 3:

Two-dimensional receptive fields for six neurons. The RF code has a codeword for each overlap region. For example, the shaded region corresponds to the binary pattern 001011; equivalently, we denote it as $\sigma = \{3, 5, 6\}$. The corresponding coarse RF code also includes all subsets, such as $\{3, 5\}$, even if they are not part of the original RF code.


Definition 4. 
Let $\{U_1, \ldots, U_n\}$ be a collection of convex open sets in $\mathbb{R}^d$, where each Ui is the receptive field corresponding to the ith neuron. To such a set of receptive fields, we associate a d-dimensional RF code $\mathcal{C}$, defined as follows: for each $\sigma \subset [n]$,
$$\sigma \in \mathcal{C} \;\text{ if and only if }\; \Big(\bigcap_{i \in \sigma} U_i\Big) \setminus \bigcup_{j \notin \sigma} U_j \neq \emptyset.$$

This definition was previously introduced in Curto et al. (2013) and Curto, Itskov, Veliz-Cuba, and Youngs (in press). A coarse RF code is obtained from an RF code by including all subsets of code words, so that for each $\sigma \subset [n]$,
$$\sigma \text{ is in the coarse RF code} \;\text{ if and only if }\; \bigcap_{i \in \sigma} U_i \neq \emptyset.$$

Note that the codeword $\{3, 5, 6\}$ in Figure 3 corresponds to stimuli in the shaded region, not to the full intersection $U_3 \cap U_5 \cap U_6$. Moreover, the subset $\{3, 5\}$ is not an element of the RF code, since $U_3 \cap U_5 \subset U_6$. Nevertheless, it often makes sense to also consider such subsets as codewords; for example, the cofiring of neurons 3 and 5 may still be observed, as neuron 6 may fail to fire even if the stimulus is in its receptive field. This is captured by the corresponding coarse RF code.

Coarse RF codes carry less detailed information about the underlying stimulus space (Curto & Itskov, 2008; Curto et al., in press), but turn out to be more “natural” in the context of symmetric threshold-linear networks because they have the structure of a simplicial complex.13 This implies that coarse RF codes do not yield any type 1 spurious states—the spurious subsets—when encoded in a network using the encoding rule. Furthermore, both RF codes and coarse RF codes with low-dimensional receptive fields contain surprisingly few type 2 spurious states—the spurious cliques. This follows from Helly's theorem, a classical theorem in convex geometry:

Helly's theorem (Barvinok, 2002). Suppose that $U_1, \ldots, U_k$ is a finite collection of convex subsets of $\mathbb{R}^d$, for d<k. If the intersection of any d+1 of these sets is nonempty, then the full intersection $\bigcap_{i=1}^{k} U_i$ is also nonempty.

To see the implications of Helly's theorem for RF codes, we define the notion of Helly completion:

Definition 5. 

Let $\Delta$ be a d-dimensional simplicial complex on n vertices. The Helly completion of $\Delta$ is the largest simplicial complex on n vertices that has $\Delta$ as its d-skeleton.

In other words, the Helly completion of a d-dimensional simplicial complex is obtained by adding in all higher-dimensional faces in a way that is consistent with the existing lower-dimensional faces. In particular, the Helly completion of any graph G is the clique complex X(G). For a two-dimensional simplicial complex $\Delta$, the Helly completion includes only cliques of the underlying graph that are consistent with $\Delta$. For example, the Helly completion of the two-dimensional example code in section 3.3 does not include the 3-cliques corresponding to empty (nonshaded) triangles in Figure 2A. With this notion, Helly's theorem can now be reformulated:

Lemma 6. 

Let $\mathcal{C}$ be a coarse d-dimensional RF code, corresponding to a set of receptive fields $\{U_1, \ldots, U_n\}$ where each Ui is a convex open set in $\mathbb{R}^d$. Then $\mathcal{C}$ is the Helly completion of its own d-skeleton.

This lemma indicates that low-dimensional RF codes, whether coarse or not, have a relatively small number of spurious cliques, since most cliques in $X(G(\mathcal{C}))$ are also in the Helly completion for small d. In particular, it implies that coarse RF codes of dimensions d=1 and d=2 are very natural codes for symmetric threshold-linear networks.

Corollary 2. 

If $\mathcal{C}$ is a coarse one-dimensional RF code, then it is a clique complex: $\mathcal{C} = X(G(\mathcal{C}))$. Therefore, $\mathcal{C}$ can be exactly encoded using any universal S in the encoding rule.

Corollary 3. 

If $\mathcal{C}$ is a coarse two-dimensional RF code, then it is the Helly completion of its own 2-skeleton, which can be obtained from knowledge of all pairwise and triple intersections of receptive fields.
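Corollary 3 suggests a simple combinatorial procedure for recovering a coarse two-dimensional RF code from its pairwise and triple intersection data: add every larger set whose 3-element subsets are all present. A brute force sketch, practical only for small n (the function name is ours):

```python
from itertools import combinations

def helly_completion_2d(two_skeleton, n):
    """Helly completion (definition 5) of a two-dimensional simplicial complex.

    two_skeleton: set of frozensets of size <= 3, assumed closed under subsets.
    A set of size >= 4 is added iff every one of its 3-element subsets is present."""
    completion = set(two_skeleton)
    for k in range(4, n + 1):
        for sigma in combinations(range(n), k):
            if all(frozenset(t) in two_skeleton for t in combinations(sigma, 3)):
                completion.add(frozenset(sigma))
    return completion
```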

For coarse two-dimensional RF codes, the only possible spurious cliques are therefore spurious triples and the larger cliques of $G(\mathcal{C})$ that contain them. The spurious triples emerge when three receptive fields Ui, Uj, and Uk have the property that each pair intersects, but $U_i \cap U_j \cap U_k = \emptyset$. For generic arrangements of receptive fields, this is relatively rare, allowing these codes to be encoded nearly exactly using any universal S in the encoding rule. In the next section, we illustrate this phenomenon in the case of two-dimensional place field codes.

3.7.  Encoding Sparse Place Field Codes in Threshold-Linear Networks.

As seen in the previous section, Helly's theorem sharply limits the number of spurious cliques that result from encoding low-dimensional RF codes. Here we illustrate this phenomenon explicitly in the case of sparse place field codes (PF codes). In particular, we find that PF codes can be encoded nearly exactly from a very small, randomly selected sample of patterns. The near-exact encoding of PF codes from highly undersampled data shows that they are “natural” codes for symmetric threshold-linear networks, as defined in section 3.5.

PF codes. Let $\{U_1, \ldots, U_n\}$ be a collection of convex open sets in $\mathbb{R}^d$, where each Ui is the place field corresponding to the ith neuron (O'Keefe, 1976; O'Keefe & Nadel, 1978). To such a set of place fields, we associate a d-dimensional PF code, defined as follows: for each $\sigma \subset [n]$, $\sigma$ is a codeword if and only if the intersection $\bigcap_{i \in \sigma} U_i$ is nonempty.
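Appendix E describes the codes used in the simulations below; as an illustrative stand-in (not the authors' exact procedure), the following sketch builds a two-dimensional PF code from circular place fields with random centers, approximating codewords by sampling the environment on a grid and then closing under subsets.

```python
import numpy as np
from itertools import combinations

def random_pf_code(n=50, radius=0.15, grid=100, rng=np.random.default_rng(0)):
    """Approximate 2D PF code for n circular place fields in the unit square."""
    centers = rng.random((n, 2))
    maximal = set()
    for x in np.linspace(0, 1, grid):
        for y in np.linspace(0, 1, grid):
            d2 = np.sum((centers - np.array([x, y])) ** 2, axis=1)
            sigma = frozenset(np.flatnonzero(d2 < radius ** 2))
            if sigma:
                maximal.add(sigma)
    code = set()                      # close under subsets: the coarse (PF) code
    for sigma in maximal:
        for k in range(1, len(sigma) + 1):
            for tau in combinations(sorted(sigma), k):
                code.add(frozenset(tau))
    return code
```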

Note that in this definition, PF codes are coarse RF codes. PF codes are experimentally observed in recordings of neural activity in rodent hippocampus (McNaughton et al., 2006). The elements of a PF code correspond to subsets of neurons that may be coactivated as the animal's trajectory passes through a corresponding set of overlapping place fields. Typically d=1 or d=2, corresponding to the standard “linear track” and “open field” environments (Muller, 1996); recently, it has also been shown that flying bats possess d=3 place fields (Yartsev & Ulanovsky, 2013).

It is clear from corollary 2 above that one-dimensional PF codes can be encoded exactly (i.e., without any spurious states) using any universal S matrix in the encoding rule. Two-dimensional PF codes have no type 1 spurious states, but may have type 2 spurious cliques. For sparse PF codes, however, the spurious cliques can be further restricted (beyond what is expected from Helly's theorem) by choosing a k-sparse universal S.

Near-Exact Encoding of Sparse PF Codes

Consider a two-dimensional PF code that is k-sparse, so that no more than k neurons can cofire in a single pattern, even if there are higher-order overlaps of place fields. Experimental evidence suggests that the fraction of active neurons is typically on the order of 5% to 10% (Andersen, Morris, Amaral, Bliss, & O'Keefe, 2006), so we make the conservative choice of k=n/10 (our results improve with smaller k). In what follows, S was chosen to be k-sparse universal, in order to control spurious cliques of size greater than k. We also assume the worst-case scenario in which every clique of the underlying graph of the code is stored as a permitted set, providing an upper bound on the number of spurious cliques resulting from our encoding rule. What fraction of the stored patterns is spurious? This can be quantified by the following error probability,

Perror = (number of spurious patterns)/(total number of stored patterns),

which assumes all permitted sets are equally likely to be retrieved from among the stored patterns. For exact encoding, Perror = 0, while large numbers of spurious states push Perror close to 1.
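As a toy numerical illustration of this quantity (the two sets below are made-up stand-ins, not data from the experiments):

# Toy stand-ins: a prescribed set of patterns and the patterns actually stored
# after encoding (here with one extra, spurious, clique).
code = {(0, 1), (1, 2), (0, 2), (2, 3)}
stored = code | {(0, 1, 2)}

p_error = len(stored - code) / len(stored)
print(p_error)   # 1 spurious pattern out of 5 stored -> 0.2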

To investigate how “exactly” two-dimensional PF codes are encoded, we generated random k-sparse PF codes with circular place fields, n = 80–100 neurons, and k=n/10 (see appendix E). Because experimentally observed place fields do not have precise boundaries, we also generated “jittered” codes, where spurious triples were eliminated from the 2-skeleton of the code if they did not survive after enlarging the place field radii from r0 to r1 by a jitter ratio, (r1 − r0)/r0. This has the effect of eliminating spurious cliques that are unlikely to be observed in neural activity, as they correspond to very small regions in the underlying environment. For each code and each jitter ratio (up to 0.1), we computed Perror using the formula above. Even without jitter, the error probability was small, and Perror decreased quickly to values near zero for 10% jitter (see Figure 4A).
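The sketch below mimics the first step of such an experiment in simplified form: it drops random circular place fields in the unit square, reads the resulting codewords off a sampling grid, and counts spurious triples (triples of fields that cofire pairwise but never all three together). The parameters and the grid-based intersection test are illustrative and differ from the procedure of appendix E.

import numpy as np
from itertools import combinations

# Illustrative parameters; the actual experiments used n = 80-100 neurons and
# radii tuned so that the resulting codes are k-sparse with k = n/10.
rng = np.random.default_rng(0)
n, radius, res = 40, 0.12, 250

centers = rng.uniform(0.0, 1.0, size=(n, 2))
xs = np.linspace(0.0, 1.0, res)
X, Y = np.meshgrid(xs, xs)

# membership[i] marks the grid points covered by place field i
membership = np.array([(X - cx) ** 2 + (Y - cy) ** 2 < radius ** 2
                       for cx, cy in centers])

# Codewords: sets of neurons simultaneously active at some sample point
codewords = set()
for a in range(res):
    for b in range(res):
        active = tuple(int(i) for i in np.flatnonzero(membership[:, a, b]))
        if active:
            codewords.add(active)

pairs = {p for w in codewords for p in combinations(w, 2)}
triples = {t for w in codewords for t in combinations(w, 3)}

# Spurious triples: every pair cofires somewhere, but the triple never does
spurious = [t for t in combinations(range(n), 3)
            if t not in triples and all(p in pairs for p in combinations(t, 2))]
print(len(codewords), len(pairs), len(triples), len(spurious))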

Figure 4:

PF encoding is near-exact and can be achieved by presenting a small fraction of patterns. (A) Perror was computed for randomly generated k-sparse PF codes having n=80, 90, and 100 neurons and k=n/10. For each jitter ratio, the average value of Perror over 100 codes is shown. (B) For n=90, 100, and 110 neurons, k-sparse PF codes with jitter ratio 0.1 were randomly generated and then randomly subsampled to contain a small fraction of the total number of patterns. After applying the encoding rule to the subsampled code, the number of encoded cliques was computed. In each case, the fraction of encoded cliques for the subsampled code (as compared to the full PF code) was averaged over 10 codes. Cliques were counted using Cliquer (Niskanen & Ostergard, 2010), together with custom-made Matlab software.


Encoding Full PF Codes from Highly Undersampled Sets of Patterns

To investigate what fraction of patterns is needed to encode a two-dimensional PF code using the encoding rule, we generated randomly subsampled codes from k-sparse PF codes. We then computed the number of patterns that would be encoded by a network if a subsampled code was presented. Perhaps surprisingly, network codes obtained from highly subsampled PF codes (having only 1% to 5% of the patterns) are nearly identical to those obtained from full PF codes (see Figure 4B). This is because large numbers of “spurious” states emerge when encoding subsampled codes, but most correspond to patterns in the full code. The spurious states of subsampled PF codes can therefore be advantageous, allowing networks to quickly encode full PF codes from only a small fraction of the patterns.
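A minimal sketch of this subsampling comparison is shown below. It uses a toy random code only to demonstrate the mechanics (a random code lacks the geometric structure of a PF code, so the recovered fraction will be far lower than in Figure 4B), and networkx's clique enumeration stands in for Cliquer. Cliques of the cofiring graph are used as a stand-in for the encoded patterns.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n = 30
code = {tuple(sorted(int(i) for i in rng.choice(n, size=rng.integers(2, 4), replace=False)))
        for _ in range(100)}

def encoded_cliques(codewords):
    """Cliques of the cofiring graph (edge (i, j) whenever i and j coappear in
    some codeword), used here as a stand-in for the network's stored patterns."""
    G = nx.Graph()
    for w in codewords:
        G.add_nodes_from(w)
        for a in range(len(w)):
            for b in range(a + 1, len(w)):
                G.add_edge(w[a], w[b])
    return {tuple(sorted(c)) for c in nx.enumerate_all_cliques(G)}

full = encoded_cliques(code)
words = sorted(code)
keep = set(rng.choice(len(words), size=max(1, len(words) // 20), replace=False))
sub = encoded_cliques([w for i, w in enumerate(words) if i in keep])
print(len(sub & full) / len(full))   # fraction of encoded cliques recovered from ~5% of patterns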

The results summarized in Figure 4 confirm that sparse PF codes are natural codes, as they satisfy both properties P1 and P2 outlined in section 3.5. These codes can be encoded nearly exactly because they have very few spurious states. The spurious cliques are limited by two factors: the implications of Helly's theorem (see section 3.6) and the sparsity of the codes, which enables the choice of a k-sparse universal S that automatically eliminates spurious cliques of size greater than k.

4.  Proofs

To the best of our knowledge, all proofs in this section are original, as are the results presented in theorems 2, 3, and 4. Theorem 4 is our core technical result, which we state and prove in section 4.1. It appears to be closely related to some relatively recent results in convex geometry involving correlation matrices and the geometry of the “elliptope” (Deza & Laurent, 1997). Our proof, however, relies only on classical distance geometry and well-known facts about stable symmetric matrices; these are summarized in appendix A. The key new insight that allows us to connect stability of matrices of the form −vvT + tAv to Cayley-Menger determinants is lemma 7. In section 4.2 we give the proofs of proposition 1, theorem 2, and theorem 3, which all rely on theorem 4.

4.1.  Statement of Theorem 4 and Its Proof.

The statement of theorem 4 uses the following definition and some new notation.

Definition 6. 

A Hebbian matrix A is an n × n matrix satisfying Aij = Aji ≥ 0 and Aii = 0 for all i, j ∈ {1, …, n}.

The name reflects the fact that these are precisely the types of matrices that arise when synaptic weights are modified by a Hebbian learning rule. We also need notation for the set of vectors in Rn having all nonzero entries; note that for any such v, −vvT is a symmetric rank 1 matrix with strictly negative diagonal. Next, given any such v and any n × n matrix A, we write Av for the matrix with entries (Av)ij = vivjAij. We are now ready to state theorem 4.

Theorem 4. 
Let A be an n × n Hebbian matrix and let v be a vector with all nonzero entries. For t > 0, consider the perturbed matrix
M = −vvT + tAv.
The following are equivalent:
  1. A is a nondegenerate square distance matrix.

  2. There exists a t > 0 such that M is stable.

  3. There exists a δ > 0 such that M is stable for all 0 < t < δ.

  4. −cm(A)/det(A) > 0, where cm(A) is the Cayley-Menger determinant of A; and M is stable if and only if 0 < t < −cm(A)/det(A).
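The following numerical sketch (a check, not a proof) illustrates the statement: it builds A as the matrix of pairwise squared distances among randomly chosen points (hence, generically, a nondegenerate square distance matrix), forms M = −vvT + tAv, and scans t to locate the empirical stability range, comparing it with −cm(A)/det(A). Here cm(A) is computed as the determinant of A bordered by a first row and column of ones with a zero corner entry; the dimensions and random seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 5
points = rng.normal(size=(n, n - 1))        # n random points (generically in general position)
A = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)   # squared distances
v = rng.uniform(0.5, 1.5, size=n) * rng.choice([-1, 1], size=n)      # all entries nonzero

def cm(A):
    """Cayley-Menger determinant: det of A bordered by ones, with a zero corner."""
    m = A.shape[0]
    B = np.ones((m + 1, m + 1))
    B[0, 0] = 0.0
    B[1:, 1:] = A
    return np.linalg.det(B)

def is_stable(A, v, t):
    M = -np.outer(v, v) + t * (np.outer(v, v) * A)   # M = -vv^T + t A_v
    return np.linalg.eigvalsh(M).max() < 0

t_crit = -cm(A) / np.linalg.det(A)
assert t_crit > 0          # positive for a nondegenerate square distance matrix
ts = np.linspace(1e-3, 2.0 * t_crit, 400)
stable_ts = ts[[is_stable(A, v, t) for t in ts]]
print(t_crit, stable_ts.min(), stable_ts.max())   # stable range is approximately (0, t_crit)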

The rest of this section is devoted to proving theorem 4. A cornerstone of the proof is the following lemma, which allows us to connect perturbations of rank 1 matrices to Cayley-Menger determinants:

Lemma 7 
(determinant lemma). Let v be a vector with all nonzero entries. For any real-valued n × n matrix A and any t,
det(−vvT + tAv) = (v1v2⋯vn)^2 [t^(n−1) cm(A) + t^n det(A)].
In particular, if t > 0, then
sgn det(−vvT + tAv) = sgn[cm(A) + t det(A)],
where sgn is the sign function. Moreover, taking v = 1 (the column vector of all ones) yields
det(−11T + tA) = t^(n−1) cm(A) + t^n det(A).

Proof of Lemma 7. 
Note that for any n × n matrix A, any vector v with all nonzero entries, and any t, we have
det(−vvT + tAv) = (v1v2⋯vn)^2 det(−11T + tA),
where −11T is, as usual, the rank 1 matrix of all −1s (the identity follows by conjugating −11T + tA with the diagonal matrix diag(v)). It thus suffices to show that
det(−11T + tA) = t^(n−1) cm(A) + t^n det(A),
where cm(A) is the Cayley-Menger determinant of A, that is, the determinant of the (n+1) × (n+1) matrix CM(A) obtained by bordering A with a first row and first column of the form (0, 1, …, 1).
Let u be any vector in Rn, let Q be any n × n matrix, and let B denote the (n + 1) × (n + 1) matrix with block form B = [1, uT; u, Q]; write B0 for the same matrix with the corner entry 1 replaced by 0. We have
det(B) = det(Q − uuT),
where we have used the well-known formula for computing the determinant of a block matrix.14 On the other hand, the usual cofactor expansion along the first row gives
det(B) = det(Q) + det(B0).
Therefore,
det(Q − uuT) = det(Q) + det(B0).
In particular, taking u = 1 (the column vector of all ones) and Q = tA, we have B0 = CM(tA), so that
det(−11T + tA) = t^n det(A) + cm(tA) = t^n det(A) + t^(n−1) cm(A),
where the last equality holds because each nonzero term in the expansion of cm(tA) contains exactly n − 1 entries of tA. This proves the desired identity.
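
The two identities used above can be checked numerically for an arbitrary real matrix; the snippet below does so, again computing cm(A) as the determinant of the bordered matrix CM(A).

import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.normal(size=(n, n))                       # an arbitrary real matrix
v = rng.uniform(0.5, 2.0, size=n) * rng.choice([-1, 1], size=n)
t = 0.37

CM = np.ones((n + 1, n + 1)); CM[0, 0] = 0.0; CM[1:, 1:] = A
cm_A = np.linalg.det(CM)                          # Cayley-Menger determinant of A

ones_mat = np.ones((n, n))
lhs = np.linalg.det(-np.outer(v, v) + t * (np.outer(v, v) * A))   # det(-vv^T + t A_v)
mid = np.prod(v) ** 2 * np.linalg.det(-ones_mat + t * A)          # (v1...vn)^2 det(-11^T + tA)
rhs = np.prod(v) ** 2 * (t ** (n - 1) * cm_A + t ** n * np.linalg.det(A))
print(np.allclose(lhs, mid), np.allclose(mid, rhs))               # True True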

Finally, to prove theorem 4 we will need the following technical lemma:

Lemma 8. 

Fix a vector v with all nonzero entries, and let A be an n × n Hebbian matrix. If (−1)ncm(A) ≤ 0, then −vvT+tAv is not stable for any t>0. In particular, if there exists a t>0 such that −vvT+tAv is stable, then (−1)ncm(A)>0.

For its proof, we will need a simple convexity result.

Lemma 9. 

Let M, N be real symmetric n × n matrices such that M is negative semidefinite (i.e., all eigenvalues are ≤ 0) and N is strictly negative definite (i.e., stable, with all eigenvalues < 0). Then tM+(1−t)N is strictly negative definite (i.e., stable) for all 0 ≤ t < 1.

Proof. 

M and N satisfy xTMx ≤ 0 and xTNx < 0 for all nonzero x, so we have xT(tM+(1−t)N)x < 0 for all nonzero x if 0 ≤ t < 1.

The proof of lemma 8 relies on lemmas 7 and 9, which we have just proven, and also on some well-known results from classical distance geometry that are collected in appendix  A. These include facts about stable symmetric matrices (Cauchy's interlacing theorem, corollary 6, and lemma 10) as well as facts about square distance matrices (lemma 12, proposition 2, and corollary 8). These facts are also used in the proof of theorem 4.

Proof of Lemma 8. 

Since A is symmetric, so are Av and −vvT+tAv for any t. Hence, if any principal submatrix of −vvT+tAv is unstable, then −vvT+tAv is also unstable, by corollary 6. Therefore, without loss of generality, we can assume that (−1)|σ|cm(Aσ) > 0 for all proper principal submatrices Aσ with |σ| < n (otherwise, we use this argument on a smallest principal submatrix such that (−1)|σ|cm(Aσ) ≤ 0). By lemma 12, this implies that Aσ is a nondegenerate square distance matrix for all σ with |σ| < n, and so we know by proposition 2 that cm(Aσ) ≠ 0 and that each such Aσ with |σ| ≥ 2 has one positive eigenvalue and all other eigenvalues are negative.

We prove the lemma by contradiction. Suppose there exists a t0>0 such that −vvT+t0Av is stable. Applying lemma 9 with M=−vvT and N=−vvT+t0Av, we have that −vvT+(1−t)t0Av is stable for all 0 ≤ t < 1. It follows that −vvT+tAv is stable for all 0 < t ≤ t0. Now lemma 10 implies that (−1)ndet(−vvT+tAv) > 0 for all 0 < t ≤ t0. By lemma 7, this is equivalent to having (−1)n(cm(A) + t det(A)) > 0 for all 0 < t ≤ t0. By assumption, (−1)ncm(A) ≤ 0. But if (−1)ncm(A)<0, then there would exist a small enough t>0 such that (−1)n(cm(A) + t det(A)) < 0. Therefore, we conclude that cm(A)=0 and hence (−1)ndet(A)>0.

Next, consider the eigenvalues of the Cayley-Menger matrix CM(A), and observe that A, A[n−1], and CM(A[n−1]) are all principal submatrices of CM(A). Since everything is symmetric, Cauchy's interlacing theorem applies. We have seen above that A[n−1] has one positive eigenvalue and all others negative, so by Cauchy interlacing the largest eigenvalue of CM(A) is positive and its smallest n−2 eigenvalues are negative. Because cm(A)=0, CM(A) must have a zero eigenvalue, while cm(A[n−1]) ≠ 0 implies that it is unique. We thus have two cases, according to whether the zero eigenvalue of CM(A) is its second or third largest.

Case 1: Suppose the zero eigenvalue of CM(A) is its third largest, so that the second largest is positive. Since we assume (−1)n−1cm(A[n−1])>0, the matrix CM(A[n−1]) must have an odd number of positive eigenvalues. But by Cauchy interlacing its top two eigenvalues must be positive, while interlacing with A[n−1] shows it can have at most two, so we have a contradiction.

Case 2: Suppose the zero eigenvalue of CM(A) is its second largest, so that the third largest is negative. Then by Cauchy interlacing A has exactly one positive eigenvalue. On the other hand, the fact that (−1)ndet(A)>0 implies that A has an even number of positive eigenvalues, which is a contradiction.

We can now prove theorem 4.

Proof of Theorem 4. 

We prove (4) ⇒ (3) ⇒ (2) ⇒ (1) ⇒ (4).

(4) ⇒ (3) ⇒ (2) is obvious.

(2) ⇒ (1): Suppose there exists a t>0 such that −vvT+tAv is stable. Then, by corollary 6 and lemma 8, (−1)|σ|cm(Aσ) > 0 for all principal submatrices Aσ. By lemma 12, it follows that A is a nondegenerate square distance matrix.

(1) ⇒ (4): Suppose A is a nondegenerate square distance matrix. By lemma 12, we have for all , while proposition 2 implies for all with . This implies that for we have (by corollary 8), and that if ,
formula
Applying now lemma 7,
formula
For , we have diagonal entries and , so for all . Using lemma 10, we conclude (assuming ):
formula
where
formula
It remains only to show that this critical value equals −cm(A)/det(A). Note that we cannot use lemma 5 from the main text, because that lemma follows from proposition 1 and hence is a consequence of theorem 4.
On the other hand, because the matrix −vvT+tAv changes from stable to unstable at the critical value of t, and the eigenvalues are continuous functions of t, the determinant det(−vvT+tAv) must vanish at that value. Using lemma 7, it follows that cm(A) + t det(A) = 0 there, which implies that the critical value is t = −cm(A)/det(A).

4.2.  Proofs of Proposition 1, Theorem 2, and Theorem 3.

Here we prove our main results from sections 3.2 and 3.4. We begin with the proof of proposition 1.

Proof of Proposition 1. 
Setting v = 1 (the column vector of all ones) in theorem 4 yields a slightly weaker version of proposition 1, as the hypothesis in theorem 4 is that A is Hebbian, which is more constrained than the proposition 1 hypothesis that A is symmetric with zero diagonal. To see why proposition 1 holds more generally, suppose A is symmetric with zero diagonal but not Hebbian. Then there exists an off-diagonal pair of negative entries, Aij=Aji<0, and the corresponding 2 × 2 principal submatrix of −11T + εA,
[ −1   −1+εAij ; −1+εAij   −1 ],
is unstable, as it has negative trace and negative determinant. It follows from Cauchy's interlacing theorem (see corollary 6 in appendix A) that −11T + εA is unstable for any ε > 0. Correspondingly, condition (a) in proposition 1 is violated, as the existence of negative entries guarantees that A cannot be a nondegenerate square distance matrix.

To prove theorems 2 and 3, we will need the following two corollaries of proposition 1. First, recall the relevant definitions from section 3.2. Applying proposition 1 to each of the principal submatrices of the perturbed matrix −11T + εA, we obtain:

Corollary 4. 
If A is an n × n symmetric matrix with zero diagonal and ε > 0, then
formula
For,

Next, recall that X(G) is the clique complex of the graph G.

Corollary 5. 
Let A be an n × n symmetric matrix with zero diagonal, and ε > 0. Let G be the graph on n vertices having if and only if . For any matrix S with and Sii=0, if S “matches” A on G (i.e., if Sij=Aij for all (ij) ∈ G), then
formula
In particular,

We can now prove theorems 2 and 3.

Proof of Theorem 2. 
Any network W obtained via the encoding rule (see equation 3.1) is built from a matrix A that is symmetric with zero diagonal and “matches” the (nonnegative) synaptic strength matrix S precisely on the entries Aij for which neurons i and j coappear in some pattern of the prescribed code; all other off-diagonal entries of A are negative. It follows that
formula
where the last two equalities are due to corollaries 4 and 5, respectively.

Proof of Theorem 3.