## Abstract

Place cells in the rat hippocampus play a key role in creating the animal’s internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally during both sleep and awake states, reactivating the place cells that fired during its exploration in the same sequence in which they were originally activated. Although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. We propose a model that can account for this phenomenon and suggest that a spatially consistent replay requires a number of constraints on the hippocampal network that affect its synaptic architecture and the statistics of synaptic connection strengths.

## 1 Introduction

In the course of learning a spatial environment, an animal forms an internal representation of space that enables spatial navigation and planning (Schmidt & Redish, 2013). The hippocampus plays a key role in producing this map through the activity of location-specific place cells (O’Keefe & Nadel, 1978). At the neurophysiological level, these place cells exhibit spatially selective spiking activity. As the animal navigates its environment, the place cell fires only at a discrete location—its place field (see Figures 1A and 1B). It is believed that the entire ensemble of place cells serves as a neuronal basis of the animal’s spatial awareness (McNaughton, Battaglia, Jensen, Moser, & Moser, 2006; Best, White, & Minai, 2001).

Remarkably, place cells spike not only during active navigation but also during quiescent wake states (Pfeiffer & Foster, 2013; Davidson, Kloosterman, & Wilson, 2009) and even during sleep (Louie & Wilson, 2001; Skaggs & McNaughton, 1996; Wilson & McNaughton, 1994). For example, the animal can “replay” place cells in sequences that correspond to the physical routes traversed during active navigation (Foster & Wilson, 2006; Diba & Buzsaki, 2008; Hasselmo, 2008) or “preplay” sequences that represent possible future trajectories, in either direct or reversed order, while pausing at a decision point (Johnson & Redish, 2007; Pastalkova, Itskov, Amarasingham, & Buzsaki, 2008). This phenomenon implies that after learning, the animal can explore and retrieve spatial information by cuing the hippocampal network (Tsao, Moser, & Moser, 2013; Dragoi & Tonegawa, 2011), which may in turn be viewed as a physiological correlate of mental exploration (Hopfield, 2010; Hasselmo, Giocomo, Brandon, & Yoshida, 2010).

It bears noting, however, that the actual functional units for spatial information processing in the hippocampal network are not individual cells but repeatedly activated groups of place cells known as cell assemblies (see Buzsaki, 2010, and Figure 1C). Although the physiological properties of the place cell assemblies remain largely unknown, it is believed that the cells constituting an assembly synaptically drive a certain readout unit downstream from the hippocampus. In the reader-centric view, this readout neuron—a small network or, most likely, a single neuron—is what actually defines the cell assembly by actualizing the information provided by its activity (Buzsaki, 2010). The identity of the readout neurons in some cases is suggested by the network’s anatomy. For example, there are direct many-to-one projections from the CA3 region of the hippocampus to the CA1 region. Since replays are believed to be initiated in CA3 (Carr, Jadhav, & Frank, 2011; Johnson & Redish, 2007), this implies that the CA1 place cells may serve as the readout neurons for the activity of the CA3 place cells. Assuming that contemporaneous spiking of place cells implies overlap of their respective place fields (see Figures 1A and 1B), it is possible to decode the rat’s current location from the ongoing spiking activity of a mere 40 to 50 neurons (Brown, Frank, Tang, Quirk, & Wilson, 1998). This suggests that the readout neurons may be wired to encode spatial connectivity between place fields by responding to place cell coactivity (see Figures 1A to 1C and Jarsky, Roxin, Kath, & Spruston, 2005; Katz, Kath, Spruston, & Hasselmo, 2007; Dabaghian, Brandt, & Frank, 2014).

A natural assumption underlying both the trajectory reconstructing algorithms (Brown et al., 1998) and various path integration models (McNaughton et al., 2006; Samsonovich & McNaughton, 1997; Issa & Zhang, 2012) is that the representation of spatial locations during physical navigation is reproducible. If the rat begins locomotion at a certain location and at a certain moment of time, *t*_{0}, and then returns to the same location at a later time, *t*_{1}, then the population activity of the place cells at *t*_{0} and *t*_{1} is the same. Similarly, if spatial information is consistently represented during replays, then the activity packet in the hippocampal network should be restored on “replaying” a closed path. Whereas the correspondence between place cell activity and spatial locations (i.e., place fields) during physical navigation is enforced by sensory and proprioceptive inputs (Samsonovich & McNaughton, 1997), the consistency of spatial representation during replay must be attributable solely to the network’s internal dynamics (Gupta, van der Meer, Touretzky, & Redish, 2010).

Here we develop a model that accounts for how a neuronal network could maintain consistency of spatial information over the course of multiple replays or preplays. This model is based on the discrete differential geometry theory developed in Novikov (2004), which reveals that key geometric concepts can be expressed in purely combinatoric terms. The choice of this theory is driven in part by recent work that indicates that the hippocampus provides a topological framework for spatial information rather than a geometric or Cartesian map (Dabaghian et al., 2014; Alvernhe, Sargolini, & Poucet, 2012; Wu & Foster, 2014).

The results suggest that to maintain consistency of spatial information during path replay, the synaptic connections between the place cells and the readout neurons must adhere to a zero holonomy principle.

## 2 The Model

### 2.1 The Simplicial Model of the Cell Assembly Network

A convenient framework for representing a population of place cell assemblies is provided by simplicial topology (Bredon, 1997; Dubrovin, Fomenko, & Novikov, 1992; Aleksandrov, 1965). In this approach, an assembly of place cells is represented by a *d*-dimensional abstract simplex (not to be confused with a geometric simplex) whose vertexes *v*_{i} correspond to the place cells *c*_{i}; in the following, the same symbol will be used to denote a cell assembly and the simplex that represents it (Dabaghian, Mémoli, Frank, & Carlsson, 2012; Arai, Brandt, & Dabaghian, 2014). The entire network can then be represented by a purely combinatorial simplicial complex (Aleksandrov, 1965; Prasolov, 2006) whose maximal simplexes correspond to place cell assemblies (Babichev, Mémoli, & Dabaghian, 2015). Simplexes in this complex may overlap: physiological studies demonstrate that a given place cell may be part of many cell assemblies (Georgopoulos, Schwartz, & Kettner, 1986; Tudusciuc & Nieder, 2007). Many authors have suggested that place cell assemblies should overlap significantly in order to better represent contiguous spatial locations (Curto & Itskov, 2008; Jahans-Price, Gorochowski, Wilson, Jones, & Bogacz, 2014; Maurer, Cowen, Burke, Barnes, & McNaughton, 2006; Gupta, van der Meer, Touretzky, & Redish, 2012): the more cells shared by two assemblies, the closer the encoded locations are to one another. The most detailed representation of the environment is produced by a population of maximally overlapping cell assemblies, which differ by a single cell. In such a case, a transition of activity from one cell assembly to another occurs when one place cell in the current assembly turns off and another cell in the new assembly turns on. The resulting simplicial complex has the structure of a combinatorial *d*-dimensional simplicial manifold (in the literature also referred to as a "pure complex" or "pseudomanifold"; Prasolov, 2006).
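As an illustration, the combinatorial structure described above can be sketched in a few lines of Python. The assemblies below are hypothetical, chosen only to show how a complex is assembled from maximal simplexes and how maximal overlap (assemblies differing by a single cell) can be detected:

```python
from itertools import combinations

# Hypothetical cell assemblies (maximal simplexes), each a frozenset of
# place-cell indices; the sizes and memberships are purely illustrative.
assemblies = [frozenset(s) for s in ({1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 1})]

def faces(simplex):
    """All nonempty subsimplexes (faces) of an abstract simplex."""
    return {frozenset(f) for k in range(1, len(simplex) + 1)
            for f in combinations(sorted(simplex), k)}

# The cell assembly complex: the union of all faces of the maximal simplexes.
complex_ = set().union(*(faces(a) for a in assemblies))

def maximally_overlapping(a, b):
    """Adjacent assemblies of equal order that differ by a single cell."""
    return len(a) == len(b) and len(a & b) == len(a) - 1

# Pairs along which the activity packet can move in a single cell swap.
adjacent_pairs = [(a, b) for a, b in combinations(assemblies, 2)
                  if maximally_overlapping(a, b)]
```

Here each 2-dimensional simplex (triple of cells) contributes its vertexes, edges, and the triangle itself to the complex, and activity can pass between any two assemblies listed in `adjacent_pairs`.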

### 2.2 Population Activity in the Cell Assembly Complex

The components *f*_{i} of the population activity vector, equation 2.1, describe the spiking of the individual place cells *c*_{i} within the assembly. Roughly speaking, *f*_{i} can be viewed as the firing rate of *c*_{i} at the location where the place fields of the cells constituting the assembly overlap, which we refer to as the cell assembly field (the domain *l*_{123} in Figure 1B). A given place cell *c*_{i} is a part of many cell assemblies, whose fields are contained in *c*_{i}'s place field; thus, the higher the orders of the cell assemblies, the (statistically) smaller the assembly fields (Babichev et al., 2015). Since the individual place cell spiking rates are well approximated by smooth gaussian functions of the rat's coordinates (Eden, Frank, Barbieri, Solo, & Brown, 2004), the rates *f*_{i} remain approximately constant over a given assembly field. The components of the population activity vector, equation 2.1, in a given cell assembly can then be related to the corresponding place cells' maximal firing rates by a set of factors *h*_{i} that are specific to a given cell and a given cell assembly and may be viewed as measures of the separation between the assembly field and the respective place fields' centers (see Figure 1B). In other words, the coefficients *h*_{i} provide a discrete description of the place field map's geometry.

In other words, this is a rate model: the activity of each cell is described by a single parameter, the firing rate *f*, related via the coefficients *h* to the maximal rate, equation 2.2. If the network is trained—the synaptic architecture is fixed and the place fields are stable—then each cell assembly fires when the rat visits (or replays) the specific spot where the respective place fields overlap. Because this spot is very small compared to the size of the place fields, the left side of equation 2.4 is essentially the same every time.
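A minimal numerical sketch of this rate-model picture, assuming gaussian place fields; the centers, field width, and peak rate below are illustrative values, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gaussian place fields: three cells forming one assembly.
centers = np.array([[0.3, 0.5], [0.5, 0.5], [0.4, 0.7]])
f_max, width = 25.0, 0.15   # peak rate (Hz) and field width: assumed values

def rates(pos):
    """Gaussian rate map: each cell's firing rate at position `pos`."""
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return f_max * np.exp(-d2 / (2 * width ** 2))

# The cell assembly field: the small spot where all three fields overlap
# (approximated here by the centroid of the field centers).  The factors
# h_i = f_i / f_max discretize the geometry of the place field map.
assembly_field = centers.mean(axis=0)
h = rates(assembly_field) / f_max

# Within the small assembly field the rate vector is nearly constant, so
# the summed input to the readout neuron is nearly the same on each visit.
visits = assembly_field + 0.01 * rng.standard_normal((100, 2))
inputs = np.array([rates(p).sum() for p in visits])
```

Because the visited spot is tiny relative to `width`, the relative variation of `inputs` across visits is small, mirroring the claim that the left side of the ignition condition is essentially reproduced on every pass.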

### 2.3 Dressed Cell Assembly Complex

The coefficients *h*_{i} can be regarded as characteristics of the maximal simplexes of the cell assembly complex and the firing rates as characteristics of its vertexes. Together, these parameters produce a "dressing" of the cell assembly complex with physiological information about the cells' spiking and the network's synaptic architecture. Equation 2.5 singles out the set of valid dressings (those for which equation 2.5 is satisfied), which enable the readout neurons to respond to presynaptic activity, and thus defines the scope of working synaptic architectures of the place cell assembly networks.

### 2.4 Replays

Consider a cell assembly whose readout neuron fires in response to the joint activity of three place cells, *c*_{1}, *c*_{2}, and *c*_{3} (equation 2.7). Suppose that equation 2.7 also holds for an adjacent (maximally overlapping) cell assembly, represented by an adjacent simplex, so that the second readout neuron fires at the rate given by equation 2.10. A key observation here is that since the second simplex shares vertexes *v*_{1} and *v*_{2} with the first, the corresponding firing rates *f*_{1} and *f*_{2} in equation 2.10 uniquely define the firing rate *f*_{3} of the remaining cell required to activate the readout neuron (see Figure 2A). Similarly, if there is another simplex adjacent to the second one, then once the value *f*_{3} is found from equation 2.11, the firing rate at *v*_{4} can be obtained from *f*_{2} and *f*_{3}, and so on (see Figure 2B). In other words, once the synaptic connections are specified for all simplexes, equation 2.7 can be used to describe the conditions for transferring the activity vector over the entire complex (Novikov, 2004). Notice, however, that equations 2.9 to 2.11 do not specify the mechanism responsible for generating place cell activity; they only describe the conditions required to ignite the cell assemblies in a particular sequence. While subsequent simplexes in the simplicial path, equation 2.3, are not necessarily adjacent, the activity according to equation 2.7 is propagated along a sequence of adjacent maximal simplexes, such as the one depicted in Figure 2B.
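The propagation rule just described can be sketched numerically, assuming the ignition condition takes the linear form h₁f₁ + h₂f₂ + h₃f₃ = F for each assembly (a stand-in for equation 2.7). All vertex labels, weights, rates, and the activation level `F` below are illustrative:

```python
# A sketch of activity transfer along adjacent 2D simplexes (triangles).
# Assumption: each assembly ignites its readout when the weighted sum of
# its cells' rates equals a common level F (a stand-in for equation 2.7).

F = 1.0  # readout activation level, assumed common to all assemblies

# Each assembly: its vertex ids and the synaptic factors h_i per vertex.
path = [
    ((1, 2, 3), (0.4, 0.3, 0.5)),
    ((2, 3, 4), (0.5, 0.2, 0.6)),
    ((3, 4, 5), (0.3, 0.4, 0.5)),
]

def propagate(known, path, F=F):
    """Carry firing rates along the simplicial path: in each assembly the
    two shared rates determine the third via the ignition condition."""
    rates = dict(known)
    for verts, h in path:
        missing = [v for v in verts if v not in rates]
        if len(missing) != 1:
            continue  # nothing to solve, or underdetermined
        v_new = missing[0]
        partial = sum(hi * rates[v] for v, hi in zip(verts, h) if v != v_new)
        rates[v_new] = (F - partial) / h[verts.index(v_new)]
    return rates

# Two known rates at the shared vertexes fix every rate down the path.
rates = propagate({1: 0.8, 2: 0.9}, path)
```

Each step solves a single linear equation, so specifying the synaptic factors for all simplexes determines how the activity vector is transferred across the whole chain.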

### 2.5 Discrete Holonomy

As the activity packet moves between adjacent cell assemblies, the vertex *v*_{s} of the current simplex shuts off and the vertex *v*_{t} of the adjacent simplex activates. If there is a total of *n* simplexes in the path (as in the closed simplicial path shown in Figure 2B), then the corresponding chain of *n* equations, 2.13, will produce the net transformation of the population activity vector accumulated over the full path.

Mathematically, a mismatch between the starting and the ending orientation of the population activity vector is akin to the differential-geometric notion of holonomy, which, on Riemannian manifolds, measures the change of a vector's orientation as a result of a parallel transport around a closed loop (Novikov, 2004; Bredon, 1997; Dubrovin et al., 1992; Sternberg, 1964). Hence, the requirement (see equation 2.15) that the activity vector should be the same after completing a closed simplicial trajectory implies that the discrete holonomy along closed paths in the cell assembly complex should vanish.
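The discrete holonomy can be sketched as an ordered product of per-step transfer matrices. The matrices below are random perturbations of the identity, illustrative stand-ins for the matrices defined by the chain of equations 2.13:

```python
import numpy as np

def holonomy(transfer_matrices):
    """Ordered product M_n ... M_2 M_1 of a loop's transfer matrices."""
    H = np.eye(transfer_matrices[0].shape[0])
    for M in transfer_matrices:
        H = M @ H
    return H

# Six illustrative transfer matrices for a closed loop of six simplexes.
rng = np.random.default_rng(1)
loop = [np.eye(3) + 0.05 * rng.standard_normal((3, 3)) for _ in range(6)]
H = holonomy(loop)

# Zero holonomy would mean H equals the identity: the population activity
# vector returns to its starting value after a closed replayed path.
deviation = np.linalg.norm(H - np.eye(3))
```

For a generic dressing the deviation is nonzero: the replayed loop hands back a distorted activity vector, which is precisely what the zero holonomy requirement forbids.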

### 2.6 Discrete Curvature

In differential geometry, zero holonomy on a Riemannian manifold is achieved by requiring that the Riemannian curvature tensor associated with the connection vanishes at every point *x* (Dubrovin et al., 1992; Sternberg, 1964). This condition is established by contracting closed paths to infinitesimally small loops encircling a point *x* and translating a unit vector in parallel around each loop. The difference between the starting and the ending orientations of the vector defines the curvature at the point *x* (Sternberg, 1964). An analogous procedure can be performed on a discrete manifold. However, there is a natural limit to shrinking simplicial paths: in a *d*-dimensional complex, the tightest simplicial paths consist of *d*-dimensional simplexes that intersect over a common face (see Figure 3B). Such a path we will call an "elementary closed path," following Novikov (2004). The order of such a path is defined by the number of *d*-dimensional simplexes encircling the shared face, which we will refer to as the pivot simplex; the elementary simplicial path encircling a given pivot will be denoted accordingly.

These curvatures can be computed for any *d*-dimensional dressed cell assembly simplicial complex. For example, an elementary 2D closed path encircling a vertex *v*_{0} with *n* simplexes enumerated as shown in Figure 3C yields the holonomy matrix of equation 2.16. The values of the bottom row that distinguish this matrix from the unit matrix should be considered as discrete curvatures defined at the pivot vertex *v*_{0} (see Figure 3C and Novikov, 2004), which need to vanish in order to ensure a consistent representation of space during replays.

Since there exists a finite number of pivot simplexes, the number of constraints (see equation 2.17) on a given dressing is finite. Thus, the scope of nontrivial zero holonomy conditions (see equation 2.15) is drastically reduced, and the task of ensuring consistency of translations of the population activity vectors over the complex becomes tractable. Nevertheless, the zero curvature conditions (see equation 2.17) are in general quite restrictive and impose nontrivial constraints on the synaptic architecture of the place cell assemblies. As the simplest illustration, consider the case when the firing rates of all the place cells and readout neurons are the same and all the connection strengths from the place cells to the readout neurons in all cell assemblies are identical, giving a constant connection dressing. It can be shown that in this case the resulting transfer matrix squares to the identity, so that the zero curvature condition, equation 2.17, is satisfied identically for elementary closed paths of even order and cannot be satisfied if the path's order is odd. Under more general and physiologically more plausible assumptions, equation 2.17 does not necessarily restrict the order of the cell assemblies. However, the domain of permissible dressings is significantly restricted by equation 2.17, compared to the domain occupied by the synaptic parameters of unconstrained cell assembly networks.
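The even/odd effect for the constant dressing can be checked with a toy matrix that squares to the identity, a stand-in for the actual constant-dressing transfer matrix whose explicit form is given in the text:

```python
import numpy as np

# Toy stand-in for the constant-dressing transfer matrix: a reflection,
# which satisfies M @ M = I while M != I.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def loop_holonomy(M, n):
    """Holonomy of an elementary closed path of order n with identical
    per-step transfer matrices."""
    return np.linalg.matrix_power(M, n)

# Zero curvature (holonomy equal to the identity) holds exactly when the
# order of the elementary closed path is even.
even_ok = np.allclose(loop_holonomy(M, 6), np.eye(2))
odd_ok = np.allclose(loop_holonomy(M, 5), np.eye(2))
```

Any matrix with M² = I and M ≠ I behaves the same way, which is why a constant dressing can satisfy the zero curvature condition only around even-order pivots.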

## 3 Statistics of Synaptic Weights in the Limit of Weak Synaptic Noise

Here, *C* is the normalization constant, and the integral is taken over all synaptic weights.

Understanding the effects produced by zero curvature constraints, equation 2.17, on a wider range of fluctuations is mathematically more challenging. The qualitative results obtained here, however, may generalize beyond the limit of small multiplicative synaptic noise and could eventually be experimentally verified. A physiological implication of the result, equation 3.5, is that the distribution of the unconstrained synaptic weights in a network that does not encode a representation of space (e.g., measured in vitro) should be broader than the distribution measured in vivo in healthy animals, which can be tested once such measurements become technically possible.
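The predicted narrowing can be illustrated with a Monte Carlo sketch in which gaussian synaptic weights are conditioned on a simple linear constraint, a toy stand-in for the zero curvature conditions of equation 2.17; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unconstrained ("in vitro") weights: independent standard gaussians.
w = rng.normal(0.0, 1.0, size=(200_000, 3))

# Toy constraint standing in for equation 2.17: w1 + w2 + w3 ≈ 0.
tol = 0.05
mask = np.abs(w.sum(axis=1)) < tol

# Marginal widths of one weight with and without the constraint.
sigma_free = w[:, 0].std()
sigma_constrained = w[mask, 0].std()
# Conditioning on the constraint narrows the marginal distribution:
# analytically, the conditional variance is 1 - 1/3 = 2/3 here.
```

The constrained ("in vivo") marginal is narrower than the unconstrained one, which is the qualitative signature the text proposes testing once such measurements become feasible.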

## 4 Discussion

The task of encoding a consistent map of the environment imposes a system of constraints on the hippocampal network (i.e., on the dressing coefficients) that enforce the correspondence between place cell activity and the animal's location in the physical world. Here we show that zero holonomy is the key condition, implemented by requiring that the curvatures vanish at the pivot simplexes. This approach works within a combinatorial framework, but a similar intuition guided a geometric approach (Issa & Zhang, 2012), in which the place cells' ability to encode the location of the animal—but not the path leading to that location—was achieved by imposing the conditions of Stokes' theorem (Dubrovin et al., 1992) on the synaptic weights of the hippocampal network, which were viewed as functions of Cartesian coordinates. Our model is based on the same requirement of path invariance of place cell population activity, implemented on a discrete representation of space (a dressed abstract simplicial complex) without involving geometric information about the animal's environment.

In particular, note that the concepts of curvature and holonomy are defined here in combinatorial, not geometric, terms. This is an advantage in light of (and indeed was motivated by) recent work indicating that the hippocampus provides a topological framework for spatial experience rather than a Cartesian map of the environment (Dabaghian et al., 2014), and it also makes our model somewhat more realistic. It does, however, lead to a number of technical complications. For example, discrete connections (see equation 2.6) defined over the cell assembly complex are nonabelian (Novikov, 2004), so using the approach of Issa and Zhang (2012) would require a nontrivial generalization of Stokes' theorem, which is valid only in spaces with abelian differential-geometric connections (Broda, 2002). Our approach is based on the analysis of discrete holonomies suggested in the pioneering work of Novikov (2004), which, in fact, explains the mathematical underpinnings of the Stokes' theorem approach in both the abelian and nonabelian cases (Bredon, 1997; Dubrovin et al., 1992; Sternberg, 1964). Indeed, the zero holonomy constraint ensures that no matter what direction the activity is propagated in the network (forward, backward, or skipping over some cell assemblies), the integrity of the spatial information remains intact.

### 4.1 Generality of the Approach

A key instrument of our analyses is equation 2.7, which describes the conditions necessary for propagating spiking activity over the cell assembly network. The exact form of this equation is not essential; a physiologically more detailed description of near-threshold neuronal spiking (Poirazi, Brannon, & Mel, 2003a, 2003b; Wallach, Eytan, Gal, Zrenner, & Marom, 2011) could be used to establish more accurate zero holonomy and zero curvature constraints on the hippocampal network's synaptic architecture, which should be viewed as a general requirement for any spatial replay model.

The assumption of maximally overlapping place cell assemblies may also be relaxed, since equation 2.7 can be applied in cases where the order of the cell assemblies varies, that is, when the simplicial complex is not a manifold but a quasimanifold (see Figure 4 and De Floriani, Mesmoudi, Morando, & Puppo, 2002; Lienhardt, 1994). Unfortunately, implementing the zero holonomy principle in this case would require rather arduous combinatorial analysis. For example, propagating the activity packets using equation 2.14 would impose relationships between the dimensionalities of the maximal simplexes and their placement in the complex (i.e., it would require a particular cell assembly network architecture).

### 4.2 Learning the Constraints

Physiologically, the network may be trained by "wringing out" the violations of the conditions (see equation 2.17) in the neuronal circuit: replaying sequences and adapting the synaptic weights until the centers of nonvanishing holonomy disappear. Curiously, the role played by the discrete curvature in equation 4.1 resembles the role played by the curvature term in the Hilbert–Einstein action of general relativity theory (Dubrovin et al., 1992), which ensures that in the absence of gravitational field sources, the solution of the Hilbert–Einstein equations describes a flat space-time. By analogy, the constraints imposed by equation 2.17 may be viewed as conditions that enforce synaptic flatness of the hippocampal cognitive map.
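The "wringing out" procedure can be caricatured as gradient descent on a holonomy penalty. This toy uses numerical gradients and random 2×2 transfer matrices; it is not the network dynamics or the learning rule of the text, only an illustration that loop holonomy can be driven to zero by local weight adjustments:

```python
import numpy as np

rng = np.random.default_rng(3)

# Four illustrative per-step transfer matrices of a closed loop, slightly
# perturbed away from the identity (i.e., with nonzero initial holonomy).
loop = [np.eye(2) + 0.1 * rng.standard_normal((2, 2)) for _ in range(4)]

def penalty(loop):
    """Squared deviation of the loop holonomy from the identity."""
    H = np.eye(2)
    for M in loop:
        H = M @ H
    return np.sum((H - np.eye(2)) ** 2)

eta, eps = 0.05, 1e-5
for _ in range(500):                 # repeated training "replays"
    for i in range(len(loop)):       # numerical gradient for each matrix
        g = np.zeros_like(loop[i])
        for r in range(2):
            for c in range(2):
                loop[i][r, c] += eps
                p_plus = penalty(loop)
                loop[i][r, c] -= 2 * eps
                p_minus = penalty(loop)
                loop[i][r, c] += eps       # restore the entry
                g[r, c] = (p_plus - p_minus) / (2 * eps)
        loop[i] -= eta * g           # adapt the "synaptic" weights
```

After training, the loop's holonomy is essentially the identity: the activity vector returns unchanged, the discrete analog of a flat connection.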

(It is worth noting that the mechanism suggested here implements the zero holonomy condition in the simplest, reader-centric version of the cell assembly theory that is consistent with physiology. The place cell readout might involve, instead of a single neuron, a small network of a few neurons (not yet identified experimentally), which might require a different implementation of the zero holonomy principle, depending on the specific architecture of such a network. If the readout network is a cluster of synchronously activated downstream neurons, then this cluster could be viewed as a meta-neuron, and the proposed approach would apply to this case as well. More complicated architectures would require modifications, but it is reasonable to expect that reproducibility of the population activity vector would require zero holonomy in all cases.)

## Appendix

The first operation transforms the activity vector at the incoming edge of the *p*th simplex into the activity vector at the outgoing edge of the same simplex (e.g., from one edge to another of the simplex shown in Figure 2A), producing vector A.1. To ignite the readout neuron of the next cell assembly, which shares an edge with the current simplex, vector A.1 needs to be transformed by a diagonal matrix. Together, these two operations produce the transfer matrix. A direct verification shows that a product of *n* transfer matrices that start and end at the same simplex has the form of equation 2.16, in which the entries are *n*th-order polynomials of the coefficients, equation 2.12.

### A.1 Tuning of the Fluctuation Distribution

The coefficients *V*_{lp} are obtained by collecting the terms proportional to the synaptic fluctuations produced by equation A.2. Completing the square and integrating yields a gaussian integral over a positive quadratic form, where *v*_{p} is the *p*th row of the matrix *V*. Evaluating equation A.4 yields equation A.6. Since the second term in the parentheses is positive, the resulting distribution is narrower than the uncoupled distribution, equation 3.2. The magnitude of the correction in equation A.6 depends on the topological structure of the coactivity complex (e.g., its dimensionality *d* and the statistics of the pivots' orders, *n*) and on the dressing parameters. In the approximation of equation 3.1, the diagonal matrix elements of the matrix *A* are small, and hence so is the resulting correction to the distribution's width.

## Acknowledgments

I thank V. Brandt and R. Phenix for their critical reading of the manuscript and the reviewers for helpful comments. The work was supported in part by the Houston Bioinformatics Endowment Fund, the W. M. Keck Foundation grant for pioneering research, and NSF grant 1422438.

## References

Lienhardt, P. (1994). *N*-dimensional generalized combinatorial maps and cellular quasi-manifolds. *International Journal of Computational Geometry & Applications*, 4(3), 275–324.