## Abstract

A place cell is a neuron that fires whenever the animal traverses a particular location of the environment—the place field of the cell. Place cells are found in two regions of the rodent hippocampus: CA3 and CA1. Motivated by the anatomical connectivity between these two regions and by the evidence for synaptic plasticity at these connections, we study how a place field in CA1 can be inherited from an upstream region such as CA3 through a Hebbian learning rule, in particular, through spike-timing-dependent plasticity (STDP). To this end, we model a population of CA3 place cells projecting to a single CA1 cell, and we assume that the CA1 input synapses are plastic according to STDP. With both numerical and analytical methods, we show that in the case of overlapping CA3 input place fields, the STDP learning rule leads to the formation of a place field in CA1. We then investigate the roles of the hippocampal theta modulation and phase precession on the inheritance process. We find that theta modulation favors the inheritance and leads to faster place field formation whereas phase precession changes the drift of CA1 place fields over time.

## 1 Introduction

Place cells are neurons that selectively increase their activity at specific locations of the environment, the so-called place field of the cell (O’Keefe & Dostrovsky, 1971). Since their discovery, place cells have been the object of a great number of experimental and theoretical studies, being probably the best example of a neural correlate for a high-level cognitive concept. Nevertheless, to date, there is no widely accepted theory for the formation of place fields (Moser, Kropff, & Moser, 2008).

Place cells are typically recorded from the rat hippocampus, where they were originally found by O’Keefe & Dostrovsky (1971). As in human and nonhuman primates, the rodent hippocampus is formed by two U-shaped interlocking sectors, the dentate gyrus (DG) and the cornu ammonis (CA), which are further subdivided into four anatomical subfields (CA1-4) (Lorente de Nó, 1934). Place fields are observed in the activity of pyramidal cells in regions CA3 and CA1. The CA1 pyramidal cells receive excitatory inputs from the CA3 pyramidal cells through the Schaffer collaterals and from layer III of entorhinal cortex (EC) through the perforant path (Andersen, Morris, Amaral, Bliss, & O’Keefe, 2006). Region CA3 is characterized by place-field activity (Leutgeb & Leutgeb, 2007), while EC consists of a mix of grid cells, head-direction cells and nonspatially tuned cells (Fyhn, Molden, Witter, Moser, & Moser, 2004; Hargreaves, Rao, Lee, & Knierim, 2005; Hafting, Fyhn, Molden, Moser, & Moser, 2005; Sargolini et al., 2006). As reviewed by Ahmed and Mehta (2009), either of these two inputs is sufficient to generate CA1 place fields, but both are required for a sharp spatial tuning. Indeed, Brun et al. (2002) showed that after lesioning the CA3-CA1 connections, CA1 pyramidal cells maintained their place fields. However, CA1 place cells in lesioned animals fired more diffusely, had larger place fields, and had lower peak firing rates compared to controls. Similarly, when Brun et al. (2008) chemically lesioned layer III of entorhinal cortex to see whether CA3 inputs alone could sustain CA1 place field activity, CA1 place fields became larger and had reduced peak firing rates. 
Furthermore, in young developing animals, place cells are observed before grid cells (Langston et al., 2010; Wills, Cacucci, Burgess, & O’Keefe, 2010), and place cells persist after disruption of grid cell periodicity by septal inactivation (Koenig, Linder, Leutgeb, & Leutgeb, 2011; Brandon, Koenig, Leutgeb, & Leutgeb, 2014).

The possibility of place field formation from entorhinal grid field input has been evaluated in a number of theoretical studies (see Cheng & Frank, 2011, for a review). For example, Solstad, Moser, and Einevoll (2006) showed that in small two-dimensional environments, single place fields can be formed by summing the activity of a modest number of grid cells. Additionally, the choice of grid cells contributing to a given place field can result from competitive Hebbian learning (Rolls, Stringer, & Elliot, 2006; Savelli & Knierim, 2010; Pilly & Grossberg, 2012), and Franzius, Vollgraf, and Wiskott (2007) obtained place cells from grid cells by applying a linear transformation that maximizes sparseness.

In this letter, we focus on the CA3 input to CA1. This simplification allows us to isolate, and study analytically, different properties of the CA3 input, and complements the previous studies on the transformation from grid to place cells. We investigate how a CA1 place field can spontaneously emerge from the activity of spatially tuned CA3 neurons if synapses obey a Hebbian synaptic plasticity rule (McHugh, Blum, Tsien, Tonegawa, & Wilson, 1996; Mehta, Barnes, & McNaughton, 1997). Indeed, CA3 represents the largest source of excitatory input to a CA1 pyramidal cell (Megias, Emri, Freund, & Gulyas, 2001), and intact CA3-CA1 synaptic plasticity is required for both high spatial specificity (McHugh et al., 1996) and long-term stability (Kentros et al., 1998) of CA1 place fields.

The synaptic plasticity rule we adopt is spike-timing-dependent plasticity (STDP) (Gerstner, Kempter, Van Hemmen, & Wagner, 1996; Markram, Lübke, Frotscher, & Sakmann, 1997; Kempter, Gerstner, & Van Hemmen, 1999; Song, Miller, & Abbott, 2000; Dan & Poo, 2004). Synaptic modifications are sensitive to the ordering of presynaptic and postsynaptic activation: a synaptic efficacy increases if presynaptic firing precedes a postsynaptic spike and decreases otherwise. A fundamental property of such a learning rule resides in the asymmetry of the so-called STDP learning window: a real-valued function defining the amount and the direction of synaptic changes as a function of the time difference between pre- and postsynaptic spikes. STDP is an attractive model for our study since in its additive form, it can induce a competition between the inputs for the control of the postsynaptic output, enabling a neuron to learn input correlations in an unsupervised manner (Kempter et al., 1999; Song et al., 2000; Abbott & Nelson, 2000).

From an experimental point of view, STDP was shown to be a valid model to explain synaptic changes of hippocampal neurons in dissociated slice cultures (Bi & Poo, 1998) and at the CA3-CA1 connection in vitro (Kwag & Paulsen, 2009). Yet synaptic plasticity at the Schaffer collaterals may also depend on other activity parameters such as postsynaptic bursting, which are not considered by STDP alone (Wittenberg & Wang, 2006; Buchanan & Mellor, 2010). Additionally, CA1 place fields emerge rapidly after the first exposure to a novel environment, sometimes even in the absence of previous postsynaptic spiking (Hill, 1978; Wilson & McNaughton, 1993; Frank, Stanley, & Brown, 2004). In this respect, we suggest that more biophysically detailed models of place field inheritance could consider plasticity rules acting within the dendritic tree of a CA1 cell (see Golding, Staff, & Spruston, 2002). These rules could generate spatially tuned dendritic inputs to a CA1 cell over behaviorally long timescales (hours, days, or weeks of an animal exploring different environments), independent of somatic activation. Dendritic tuning (Sheffield & Dombeck, 2015) could then be uncovered through nonlinear gating mechanisms dependent on the somatic potential (Lee, Lin, & Lee, 2012), leading to place-specific somatic spiking as soon as the animal enters a novel environment. Despite a difference in the biophysical details, we expect dendritic plasticity rules to produce qualitatively similar results as STDP, which is well grounded theoretically and easier to study analytically.

Since STDP is sensitive to the timing of the input spikes, it is interesting to consider the temporal features of hippocampal neural activity, such as its modulation by the theta rhythm (see Buzsáki, 2002, for a review). The hippocampal theta rhythm, in the frequency band 4 to 12 Hz, is one of the largest and most regular EEG oscillations in the rat brain and is particularly prominent during locomotion, exploratory behavior, and REM sleep (Vanderwolf, 1969; Winson, 1974). Hippocampal place cell activity is strongly modulated in the theta-frequency range (O’Keefe, 1976). In addition, a peculiar relation exists between the firing of place cells and the EEG theta rhythm: as a rat moves across a place field, the associated place cell tends to spike at earlier and earlier phases of successive theta cycles (O’Keefe & Recce, 1993). This phenomenon, known as phase precession, has inspired many theories on the storage and recall of memory sequences in the hippocampus (Skaggs, McNaughton, Wilson, & Barnes, 1996; Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996; Jensen & Lisman, 1996; Lisman & Otmakhova, 2001).

Given that CA1 place fields can be inherited from CA3 via Hebbian synaptic plasticity, what are the roles of theta modulation and phase precession in this scenario? Do they favor or disfavor the inheritance process? We tackle these questions from a theoretical perspective. Using a mathematical model of the neural activity in CA3 and CA1, we study the dynamics of the CA3-CA1 synaptic efficacies to test the place field inheritance hypothesis and investigate the roles of theta modulation and phase precession in the inheritance process.

## 2 Model of Place-Field Inheritance

We model the activity of a population of *N* CA3 cells projecting to a CA1 cell, assuming that place fields are already formed in CA3 and that the CA3-CA1 synaptic efficacies are plastic according to STDP. We evaluate the possibility of CA3-CA1 place field inheritance considering a hypothetical experiment of a rat running at a constant speed on a circular track. Our hypothesis is that starting from random connectivity, the activity of the CA3 place cells, combined with the STDP learning rule, can induce a structure in the distribution of the CA1 input synapses that gives rise to a CA1 place field (see Figure 1).

### 2.1 Model of Spike-Timing-Dependent Plasticity

We adopt the STDP rule proposed by Kempter et al. (1999). We consider *N* presynaptic input neurons projecting to one postsynaptic output neuron, where:

- Each input spike at synapse *n* changes its synaptic efficacy $J_n$ by an amount $\eta\, w^{\mathrm{in}}$.
- Each output spike changes all input synaptic efficacies by an amount $\eta\, w^{\mathrm{out}}$.
- Each pair of an output spike with an input spike at synapse *n* changes $J_n$ by an amount $\eta\, W(s)$.

Input synapses are indexed by $n = 1, \ldots, N$; $\eta$ is the learning rate ($0 < \eta \ll 1$); $w^{\mathrm{in}}$ and $w^{\mathrm{out}}$ are constant parameters ($w^{\mathrm{in}} > 0$, $w^{\mathrm{out}} < 0$); and $W(s)$ is the learning window as a function of the time difference $s$ between input and output spikes. Additionally, hard bounds are imposed to limit individual synaptic efficacies between 0 and $J_{\max}$. To ease analytical calculations and better understand the effect of bounds, we also study the unbounded case.
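The update rule above can be sketched in code. This is a minimal illustration, assuming the parameterization of Kempter et al. (1999): a constant term per input spike, a constant term per output spike, and a pair term given by a double-exponential learning window. All names and parameter values (`w_in`, `w_out`, `J_max`, window amplitudes and time constants) are placeholders for illustration, not the values used in our simulations.

```python
import numpy as np

def stdp_window(s, A_plus=1.0, A_minus=-0.5, tau_plus=0.01, tau_minus=0.02):
    """Double-exponential learning window; s = t_out - t_in (seconds)."""
    return np.where(s >= 0, A_plus * np.exp(-s / tau_plus),
                    A_minus * np.exp(s / tau_minus))

def update_weights(J, in_spikes, out_spikes, eta=1e-3, w_in=1.0, w_out=-1.0,
                   J_max=0.1):
    """Apply the three STDP update terms, then enforce hard bounds.
    in_spikes: list of spike-time arrays, one per synapse.
    out_spikes: array of output spike times."""
    J = J.copy()
    J += eta * w_out * len(out_spikes)                 # per output spike, all synapses
    for n, t_in in enumerate(in_spikes):
        J[n] += eta * w_in * len(t_in)                 # per input spike at synapse n
        if len(t_in) and len(out_spikes):
            s = out_spikes[:, None] - np.asarray(t_in)[None, :]  # all spike pairs
            J[n] += eta * stdp_window(s).sum()         # pair-based term
    return np.clip(J, 0.0, J_max)                      # hard saturation bounds

# synapse 0: pre before post (potentiated); synapse 1: post before pre (depressed)
J_new = update_weights(np.full(2, 0.05),
                       [np.array([0.0]), np.array([0.02])],
                       np.array([0.01]))
```

Because the rate-based terms ($w^{\mathrm{in}}$, $w^{\mathrm{out}}$) affect both synapses equally here, the difference between the two updated efficacies comes entirely from the asymmetric window.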

We make four assumptions:

1. The learning rate $\eta$ is small enough such that the timescale of learning is much slower than the neural spiking dynamics and the timescale of behavior, that is, the time taken for one round of the circular track.

2. The presynaptic inputs are inhomogeneous Poisson processes with intensities $\lambda_n(t)$.

3. The output neuron generates an inhomogeneous Poisson process whose intensity depends linearly on the EPSP-filtered input intensities, weighted by the synaptic efficacies.

4. The input intensities, averaged over a time interval $\mathcal{T}$, are identical and constant: $\langle \lambda_n \rangle_{\mathcal{T}} = r$ for all $n$, where $\mathcal{T}$ is much larger than typical interspike intervals.
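The Poisson-input assumption can be realized numerically with the standard thinning method (Lewis–Shedler): draw candidate spikes at a rate that upper-bounds the intensity, then keep each candidate with probability $\lambda(t)/\lambda_{\max}$. The intensity function below is an illustrative theta-modulated stand-in, not one of the place-field intensities defined in section 2.2.

```python
import numpy as np

def sample_inhomogeneous_poisson(lam, lam_max, t_end, rng):
    """Draw spike times in [0, t_end) from intensity lam(t) <= lam_max by thinning."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate interspike interval
        if t >= t_end:
            return np.array(spikes)
        if rng.uniform() < lam(t) / lam_max:       # accept with prob lam(t)/lam_max
            spikes.append(t)

rng = np.random.default_rng(0)
lam = lambda t: 10.0 * (1.0 + np.cos(2 * np.pi * 8.0 * t))  # 8 Hz modulation, mean 10 Hz
spikes = sample_inhomogeneous_poisson(lam, 20.0, 100.0, rng)
```

Averaged over many modulation cycles, the empirical spike rate approaches the mean intensity, consistent with assumption 4.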

Under assumptions 1 to 4, the dynamics of the synaptic efficacies, averaged over the slow timescale of learning, can be written as (Kempter et al., 1999)

$$\dot{J}_n(t) = k_1 + \sum_{m=1}^{N} \big[\,k_2 + k_3 + Q_{nm}(t)\,\big]\, J_m(t). \tag{2.4}$$

The constants $k_1$, $k_2$, and $k_3$ are

$$k_1 = \eta\, w^{\mathrm{in}} r, \qquad k_2 = \eta\, w^{\mathrm{out}} r, \qquad k_3 = \eta\, r^2 \tilde{W},$$

where $\tilde{W} = \int W(s)\, ds$ is the integral of the learning window and the matrix $Q$ is given by

$$Q_{nm}(t) = \eta \int ds\; W(s)\, \big[\lambda_n(t-s)\, \bar{\lambda}_m(t) - r^2\big],$$

where $\bar{\lambda}_m$ is the EPSP-filtered input intensity $\bar{\lambda}_m(t) = \int_0^{\infty} ds\; \epsilon(s)\, \lambda_m(t-s)$.

*T* is the time taken by the rat to complete one round of the track. Considering a rat running at constant speed, the spatially tuned CA3 input intensities are periodic with fundamental period *T*, and in this setting, we can eliminate the time dependence from the expression of $Q(t)$ in equation 2.4 by averaging over one period, and rewrite the learning dynamics as a linear dynamical system with constant coefficients:

$$\dot{J}_n(t) = k_1 + \sum_{m=1}^{N} \big[\,k_2 + k_3 + Q_{nm}\,\big]\, J_m(t). \tag{2.6}$$

As STDP learning window, we adopt a double-exponential function (Song et al., 2000),

$$W(s) = \begin{cases} A_+\, e^{-s/\tau_+} & \text{if } s \ge 0, \\ A_-\, e^{\,s/\tau_-} & \text{if } s < 0, \end{cases}$$

where $s$ is the time difference between a pair of pre- and postsynaptic spikes, $\tau_+$ and $\tau_-$ define the intervals over which synaptic strengthening and weakening occur, and $A_+$ and $A_-$ determine the maximum amount of synaptic modification per spike pair ($A_+ > 0$, $A_- < 0$). The parameters $A_+$, $A_-$, $\tau_+$, and $\tau_-$ are chosen such that the integral of the learning window is positive: $\tilde{W} = A_+\tau_+ + A_-\tau_- > 0$. Additionally, we impose $w^{\mathrm{in}} > 0$ and $w^{\mathrm{out}} < 0$, and we assume that input correlations are weak, that is, $|\bar{Q}| \ll |k_2 + k_3|$, where $\bar{Q}$ is the average value of the correlation matrix $Q$. In this case, the learning rule leads to an intrinsic normalization of the average synaptic efficacy $J_{\mathrm{av}} = \frac{1}{N}\sum_n J_n$ to the fixed point (Kempter et al., 1999)

$$J_{\mathrm{av}}^{*} = \frac{-w^{\mathrm{in}}}{N\,\big(w^{\mathrm{out}} + r\tilde{W}\big)}. \tag{2.8}$$

For convenience, we further impose $-w^{\mathrm{in}}/(w^{\mathrm{out}} + r\tilde{W})$ close to 1 such that $J_{\mathrm{av}}^{*}$ is close to $1/N$. Finally, we model the EPSP as an alpha function normalized to unit integral: $\epsilon(s) = H(s)\,(s/\tau_\epsilon^2)\, e^{-s/\tau_\epsilon}$, where $H$ is the Heaviside step function and $\tau_\epsilon$ is the decay time constant.

### 2.2 Model of the CA3 Input Neurons

We model the activity of *N* CA3 place cells projecting to a single CA1 cell by considering a rat running at constant speed *v* on a circular track of length *L* (parameter values in appendix A). The activity of a CA3 place cell is periodic with period $T = L/v$, and it can be expressed with an infinite periodic summation,

$$\lambda_n(t) = \sum_{j=-1}^{\infty} \varphi_n(t - jT),$$

where $\varphi_n$ is the Poisson intensity of cell *n* for an infinite linear track. We start the summation from $j = -1$ rather than from $j = 0$ to avoid boundary effects at $t = 0$.

We model the activity of the CA3 neurons in four scenarios: no modulation, in-phase modulation, phase precession, and, for completeness, phase recession. Experimental findings (e.g., Harris et al., 2002; Kjelstrup et al., 2008; Royer, Sirota, Patel, & Buzsáki, 2010) indicate that the phase-precession scenario is closest to biology. Nonetheless, the remaining three scenarios are interesting from a theoretical point of view, allowing the disentangling of different effects of neural activity in CA3 on the learning dynamics. The input-intensity function takes a different form in each of the four scenarios considered.

#### 2.2.1 No Modulation

In this scenario, the intensity $\varphi_n$ is a gaussian function *G* (see Figure 2A):

$$\varphi_n(t) = \frac{r\,T}{\sqrt{2\pi}\,\sigma}\, \exp\!\left[-\frac{(t - t_n)^2}{2\sigma^2}\right], \tag{2.10}$$

where $t_n$ is the time taken by the virtual rat to reach the center of place field *n* in the first round of the track and $\sigma$ determines the time it takes to traverse a place field, that is, its size. Here, as in all the remaining cases, we assume place fields that are regularly spaced on the track: $t_n = nT/N$. The normalization factor in equation 2.10 is chosen such that the activity of the CA3 inputs, averaged over the period *T*, is equal to $r$:

$$\frac{1}{T}\int_0^T \lambda_n(t)\, dt = r,$$

as is assumed by the STDP model we adopt (see section 2.1, assumption 4).
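The normalization can be checked numerically. The sketch below assumes the notation $r$ (mean rate), $T$ (track period), and $\sigma$ (field width), a gaussian amplitude of $rT/(\sqrt{2\pi}\,\sigma)$, and regularly spaced field centers; all numbers are illustrative. The intensity of one cell, averaged over one period, should then equal $r$.

```python
import numpy as np

r, T, sigma, N = 5.0, 40.0, 1.0, 20               # illustrative values
t_n = T * np.arange(1, N + 1) / N                 # regularly spaced field centers

def lam(t, n, n_rounds=50):
    """Periodic summation of gaussians for cell n (finitely many rounds)."""
    j = np.arange(n_rounds)[:, None]
    g = np.exp(-((t[None, :] - t_n[n] - j * T) ** 2) / (2 * sigma**2))
    return r * T / (np.sqrt(2 * np.pi) * sigma) * g.sum(axis=0)

t = np.linspace(0.0, T, 100001)
avg = np.trapz(lam(t, 0), t) / T                  # time average over one period
print(avg)                                        # ~r
```

The gaussian tails that leave the interval $[0, T]$ are compensated by the neighboring periodic copies, so the average comes out at $r$ up to numerical error.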

#### 2.2.2 In-Phase Modulation

In this scenario, each CA3 input is modulated by a sinusoid with a frequency *f*_{0} that is equal to the frequency of the theta oscillation (see Figure 2B). As a consequence, the input intensities of different CA3 inputs always peak at the same phase of the theta rhythm; that is, neither phase precession nor phase recession is modeled. For convenience, the input oscillations peak at the peaks of the theta rhythm. In this scenario, we model a CA3 place field as

$$\varphi_n(t) = \frac{r\,T}{\sqrt{2\pi}\,\sigma}\, \exp\!\left[-\frac{(t - t_n)^2}{2\sigma^2}\right]\big[1 + \cos(\omega_0 t)\big], \tag{2.13}$$

where $\omega_0 = 2\pi f_0$ is the angular velocity of the modulating sinusoid. The time-averaged input intensity is

$$\frac{1}{T}\int_0^T \lambda_n(t)\, dt = r\,\big[1 + e^{-\sigma^2\omega_0^2/2} \cos(\omega_0 t_n)\big], \tag{2.14}$$

which follows from the continuous Fourier transform of the gaussian $G$. If the width $\sigma$ of the gaussian is large enough compared to the fundamental period of the modulating sinusoid, then $e^{-\sigma^2\omega_0^2/2} \approx 0$, and the time average in equation 2.14 can be approximated by $r$. Finally, to maintain the circularity of the system, we impose $f_0 T \in \mathbb{N}$.

#### 2.2.3 Phase Precession

In this scenario, each CA3 input is modulated by a sinusoid whose frequency *f*_{0} is slightly higher than the frequency $f_\theta$ of the theta oscillation (Geisler, Robbe, Zugaro, Sirota, & Buzsáki, 2007). We assume

$$f_\theta = (1 - c)\, f_0,$$

where *c* is the so-called compression factor (Geisler et al., 2010), which assumes a small positive value ($0 < c \ll 1$). Additionally, following Geisler et al. (2010), we assume that at the center of each firing field ($t = t_n$ for cell *n*), the phase of the input oscillation equals the phase of the theta rhythm, meaning that high discharge probability occurs at a preferred theta phase (Huxter, Burgess, & O'Keefe, 2003; Dragoi & Buzsáki, 2006). For convenience, and in line with the in-phase-modulation scenario, we assume this phase to be zero (in Figure 2C, high firing-rate peaks occur around the peak of the theta cycle). Mathematically, this requires input *n* to be modulated with a phase offset $-c\,\omega_0 t_n$, which leads us to the Poisson input intensities

$$\varphi_n(t) = \frac{r\,T}{\sqrt{2\pi}\,\sigma}\, \exp\!\left[-\frac{(t - t_n)^2}{2\sigma^2}\right]\big\{1 + \cos\!\big[\omega_0 (t - c\, t_n)\big]\big\}. \tag{2.15}$$

In this case, the distance between two place fields is related to the compressed distance between the spike timings within a theta cycle; that is, the order of spiking within a theta cycle encodes the sequence of place fields traversed (Skaggs et al., 1996). In other words, if the rat takes a time $\Delta t$ to move from one place field center to another, the corresponding place cells spike, within a theta cycle, with an average time lag $c\,\Delta t$ (see Figure 2C). By setting *c* = 0 in equation 2.15, one recovers the case of modulation without phase precession in equation 2.13. Similar to this previous case, the time-averaged input intensities can be approximated by $r$ because the width of the gaussian is large compared to the period of the modulating sinusoid. Finally, to maintain the circularity of the system, we impose $f_0 T \in \mathbb{N}$ and $c f_0 T \in \mathbb{N}$, which implies $f_\theta T \in \mathbb{N}$.

#### 2.2.4 Phase Recession

In this scenario, the frequency of the theta oscillation is slightly higher than the frequency *f*_{0} of the input oscillations ($f_\theta = (1 - c)\, f_0$ for $c < 0$). Similar to the previous case, the time-averaged input intensities are approximated by $r$, and we impose $f_0 T \in \mathbb{N}$ and $|c| f_0 T \in \mathbb{N}$ to maintain the circularity of the system.

## 3 Numerical Results on Place-Field Inheritance

To test whether place fields could be inherited from CA3 to CA1, and to study how this inheritance depends on the properties of the input activity, we simulate the average dynamics of the CA3-CA1 synaptic efficacies by integrating equation 2.6 numerically (forward Euler method; the integration step is reported in appendix A). The initial condition is a random vector whose components are uniformly distributed around a mean that is close to the intrinsic normalization level of the average synaptic efficacy (see section 2.1, equation 2.8). The initial condition can therefore be interpreted as random synaptic wiring of the CA3-CA1 connections with the assumption that the synaptic efficacies are normalized. The synaptic efficacies are hard-bounded between the two saturation levels 0 and $J_{\max}$, and the correlation matrix $Q$ in equation 2.6 is estimated numerically using the input intensities introduced in section 2.2 for the four cases considered. The parameter values for all numerical simulations are reported in appendix A.
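The integration scheme can be sketched as follows. The linear dynamics `dJ/dt = k1 + A @ J` is a schematic stand-in for the full averaged system, and the system matrix, learning terms, and parameter values are illustrative placeholders rather than the values of appendix A.

```python
import numpy as np

def integrate(J0, A, k1, J_max, dt, n_steps):
    """Forward Euler integration of dJ/dt = k1 + A @ J with hard bounds."""
    J = J0.copy()
    for _ in range(n_steps):
        J = J + dt * (k1 + A @ J)                  # forward Euler step
        J = np.clip(J, 0.0, J_max)                 # hard saturation bounds
    return J

rng = np.random.default_rng(2)
N = 50
A = 0.01 * rng.normal(size=(N, N))                 # illustrative system matrix
J0 = rng.uniform(0.04, 0.06, size=N)               # random initial wiring, mean ~1/N
J = integrate(J0, A, k1=-0.001, J_max=0.1, dt=0.1, n_steps=1000)
```

Because the learning rate is small, the averaged dynamics is slow, and a simple explicit scheme with a modest step size suffices.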

### 3.1 Nonmodulated Inputs

We first study the development of the synaptic efficacies for the simplest case, nonmodulated inputs. Figure 3 shows the results obtained for a random initial condition. The synaptic efficacies diverge toward the two saturation levels until, at the end of the simulation, they are bimodally distributed at 0 and (see Figure 3A). This symmetry breaking derives from the competitive dynamics that is intrinsic to the additive STDP learning rule we adopt. Additionally, the synaptic efficacies cluster in a way that only synapses corresponding to neighboring CA3 inputs are potentiated, while all the others are depressed (see Figures 3B and 3C), such that a place field emerges in CA1 (see Figure 3D).

The location where a CA1 place field emerges depends only on the initial condition, which is random. Therefore, in our model, random synaptic wiring is sufficient to explain a spatial distribution of place fields in a population of CA1 cells (see Figure 4A). The size of a CA1 place field is proportional to the number of strongly potentiated CA3-CA1 synapses, which in turn depends on the upper saturation boundary $J_{\max}$. Since STDP tends to keep the average synaptic efficacy normalized, the number of strongly potentiated synapses decreases as the upper saturation boundary increases (see Figure 4B). We choose $J_{\max}$ small enough to avoid the trivial case where only a single synapse survives. This case is obtained numerically for large $J_{\max}$ because the number of synapses is discrete and the learning rule normalizes the average synaptic efficacy to a value slightly higher than the fixed point in equation 2.8.

### 3.2 Theta-Modulated Inputs

To investigate the roles of theta modulation and phase precession in the inheritance process, we study the development of the CA3-CA1 synaptic efficacies in the three remaining scenarios: in-phase modulation, phase precession, and phase recession. For better comparison, we adopt the same initial condition and the same parameter values in all scenarios (see appendix A). We run the simulations both without and with imposing the saturation boundaries 0 and to the synaptic efficacies. The case of unbounded synapses, although less biologically realistic, is easier to study analytically (see section 4) and allows disentangling the fundamental system dynamics from the boundary effects. Example simulations for all cases are reported in Figure 5. First, we note that in all cases a place field emerges in CA1: a dark stripe in a plot indicates a cluster of CA3-CA1–potentiated synapses associated to neighboring CA3 input place fields (see Figure 3B). Additionally, the four scenarios show interesting differences concerning two main points: the time taken for a CA1 place field to emerge and the tendency of a CA1 place field to drift over time.

To quantify the emergence of a CA1 place field, we define a tuning index $q$ as the $k = 1$ component of the discrete Fourier transform (DFT) of the vector $J$ of synaptic efficacies, where the DFT is defined as $\tilde{J}_k = \sum_{n=1}^{N} J_n\, e^{-2\pi i k n/N}$ for $k = 0, \ldots, N-1$. The tuning index is a complex number that can be interpreted as follows: its modulus $|q|$, the vector strength of $J$, indicates the degree of spatial tuning of a CA1 neuron; its phase indicates the position of a CA1 place field on the track (see Figure 6).
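The tuning index can be computed with a fast Fourier transform. The sign and normalization conventions below (normalizing the DFT component by $N$, mapping the negated phase to a synapse index) are assumptions for illustration; a flat weight vector should give vanishing vector strength, while a tuned one should give a clearly positive vector strength and recover the field position.

```python
import numpy as np

N = 100
n = np.arange(N)
J_flat = np.full(N, 0.05)                                # untuned weight vector
J_tuned = 0.05 * (1 + np.cos(2 * np.pi * (n - 30) / N))  # field around synapse 30

def tuning_index(J):
    """k = 1 DFT component of the weight vector, normalized by N."""
    return np.fft.fft(J)[1] / len(J)

q_flat, q_tuned = tuning_index(J_flat), tuning_index(J_tuned)
print(abs(q_flat), abs(q_tuned))                   # ~0 vs clearly positive
pos = (-np.angle(q_tuned)) % (2 * np.pi) / (2 * np.pi) * N
print(pos)                                         # ~30, the field position
```

Tracking $|q|$ and the phase of $q$ over learning time gives the place-field formation curve and the drift trajectory, respectively.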

In Figure 7 we show the dynamics of the tuning index for the simulations in Figure 5. In all cases, the vector strength initially increases exponentially, and for bounded synapses, it saturates (see Figures 7B and 7C, left panels). Comparing the vector strengths in the four scenarios, we note that for theta-modulated CA3 inputs, a CA1 place field emerges faster than for no modulation. This is true for both bounded and unbounded synapses, and the presence of the synaptic saturation boundaries does not considerably affect the time of place field formation (see Figures 7A and 7B, insets). Among the three scenarios with theta-modulated inputs, the in-phase modulation is the one where a CA1 place field is formed fastest, while phase precession and phase recession lead to slower place field formation. These results are validated by analytical calculations (see equation 4.11 and Figure 12C).

With unbounded synapses, the system dynamics is unstable and a CA1 place field drifts over time (see Figures 5A and 7A, right panel). The drift of a CA1 place field is backward with respect to the direction of movement in the first three cases and forward in the fourth one (phase recession). Additionally, the phase of the input modulation relative to the theta rhythm (phase precession or phase recession) affects the drift velocity of a CA1 place field. While the drift velocity is almost equal in the no-modulation and in-phase modulation scenarios, it increases considerably with phase precession, and it decreases, and becomes even negative, with phase recession. These results are also validated analytically (see equation 4.12 and Figure 12D).

The saturation boundaries have a stabilizing effect on the tendency of a CA1 place field to drift over time. With a small number of CA3 inputs, bounded synapses can stabilize CA1 place fields: in this case, stable CA1 place fields are obtained in all input scenarios (see Figures 5B and 7B). As the number of CA3 inputs increases, CA1 place fields start drifting in all input scenarios (see Figures 5C and 7C). Nevertheless, with bounded synapses, CA1 place fields always drift more slowly compared to the unbounded case (see Figure 8B, right panel). This can be understood as follows: since the saturated synaptic efficacies cannot exceed their boundaries, at each time point only a fraction of the available weights can change and thereby contribute to the drift of a CA1 place field. Note that simulation results with different values of *N* (as in Figures 5B and 5C, 7B and 7C, and 8A and 8B) are best compared by keeping constant the fraction of input synapses that are expected to be saturated at the upper bound by the intrinsic normalization of the learning rule; hence, we scale the upper saturation boundary $J_{\max}$ inversely with *N* (see equation 2.8).

The larger the number *N* of CA3 input synapses, the faster a CA1 place field emerges and the faster it drifts over time (see Figures 8A and 8B, see also equations 4.11, 4.12, and 4.22). Indeed, for a fixed track length , the spacing between two input place fields decreases as *N* increases, which results in a higher overlap between input and output activities. However, both the time of place field formation and the drift speed also scale with the learning rate (see equation 4.22), which was not constrained by experimental data. Hence, our model cannot predict the actual time of learning or provide accurate drift-speed estimations, yet these two quantities can be compared across different input scenarios.

With bounded synapses and a large number of CA3 inputs, the steady-state distributions of the synaptic efficacies are qualitatively different in the four input scenarios (see Figure 5D). Backward-drifting CA1 place fields (no modulation, in-phase modulation, and phase precession) are negatively skewed, whereas forward-drifting CA1 place fields (phase recession) are positively skewed. Additionally, the higher the drift speed, the higher the skewness of the steady-state CA1 fields. Interestingly, backward-drifting and negatively skewed place fields have also been reported experimentally (Mehta et al., 1997; Shen, Barnes, McNaughton, Skaggs, & Weaver, 1997; Mehta, Quirk, & Wilson, 2000; Frank, Eden, Solo, Wilson, & Brown, 2002; Lee, Rao, & Knierim, 2004), and the blockade of NMDAR-dependent synaptic plasticity was found to abolish the effect (Ekstrom, Meltzer, McNaughton, & Barnes, 2001). Experimental data are thus consistent with our place field inheritance model in the phase-precession scenario, which is also the most biologically plausible one.

In the three scenarios with theta-modulated inputs, the input-modulation frequency *f*_{0} was kept constant, while the frequency of the theta rhythm changed accordingly, that is, $f_\theta = f_0$ for in-phase modulation, $f_\theta = (1 - c)\, f_0$ with $c > 0$ for phase precession, and $f_\theta = (1 - c)\, f_0$ with $c < 0$ for phase recession (see section 2.2). This choice made analytical calculations easier, but similar results are obtained when the theta frequency is kept constant (not shown), and neither the time of place field formation nor the drift speed critically depends on the input-modulation frequency *f*_{0} (see Figure 8C).

Next, we tested the robustness of place field inheritance against several sources of variability in the system: the stochasticity of spike timing and the variability in the size and the location of the CA3 input place fields (see Figure 9). For the STDP rule we adopt, the effect of stochastic spiking on structure formation has been studied by Kempter et al. (1999). It was shown that given a sufficiently small learning rate, the diffusion of the synaptic efficacies due to noise is slow as compared to structure formation, and input correlations, however small, can always be learned. We demonstrate this result in our place field inheritance model by means of simulations with spiking neurons.

Input and output spikes are drawn from inhomogeneous Poisson processes, and the synaptic efficacies are updated according to the STDP rule described in section 2.1. Input intensities are modeled as in section 2.2 (see equations 2.10, 2.13, 2.15, and 2.17), but with variable place field sizes and randomly distributed place field centers (see appendix A; see also Figure 9A for examples in the phase-precession scenario). We consider the same number of CA3 synaptic inputs and the same upper saturation boundary as in Figures 5C and 7C, so that the same fraction of synapses is expected to be saturated at the upper bound (see equation 2.8). Finally, the learning rate is lowered compared to Figures 5 and 7, such that the diffusion of the synaptic efficacies due to noise is much slower than structure formation (see Kempter et al., 1999, for an approximation of the diffusion time constant).

Figure 9B shows the evolution of the weight vector for a random initial condition in the phase precession scenario. As shown for the averaged system (see Figure 5), a CA1 place field emerges and drifts backward over time. The CA1 cell also phase-precesses (see Figure 9C), that is, phase precession is inherited from CA3 to CA1, which is consistent with the work of Jaramillo, Schmidt, and Kempter (2014). Hence, place field inheritance through STDP is robust to stochastic spike timing and to variability in size and location of the CA3 input place fields. Finally, we computed the CA1 tuning index in the four input scenarios (see Figures 9D and 9E) and confirmed two main findings obtained with the averaged system: theta modulation leads to faster place field formation, and phase precession leads to a faster drift of the inherited place fields.

We now provide an intuitive interpretation of the results presented in this section. The fact that a CA1 place field emerges faster in the case of in-phase modulation compared to no modulation can be understood as follows. Since in both scenarios, the time-averaged input rates have the same value , the effect of theta modulation is to redistribute the input spikes in time windows with the length of a theta period (see Figure 10). This redistribution increases the probability for a pair of input-output spikes to be separated by a time lag that is effective for the plasticity rule and thus favors structure formation. A similar redistribution of spikes is produced also in the phase-precession and phase-recession scenarios, leading to faster structure formation with respect to the base case. However, as a result of the phase shifts, the overlap between input and output is now reduced with respect to the in-phase modulation case, and fewer pairs of input-output spikes interact with the learning window, leading to slower structure formation.

The drift of a CA1 place field can be understood as follows. Consider that a place field has emerged in a CA1 cell and that a set of CA3 inputs with close-by place fields equally contributes to its firing. In this case, the center of the CA1 field, that is, the time at which the cell fires maximally, is given by the mean of the input CA3 place field centers plus the time delay introduced by axonal transmission and synaptic filtering. In this scenario, for nonmodulated CA3 inputs and a rat running always in the same direction on the circular track, about half of the CA3 inputs are preferentially activated before the CA1 cell, whereas the other half are preferentially activated after the CA1 cell. This activation pattern, combined with the STDP learning rule, induces synaptic-efficacy changes that are biased toward LTP for the first half of the inputs and toward LTD for the second half, leading to a CA1 place field that drifts backward with respect to the animal’s direction of motion (see Figure 10, top panel).

Similarly, for theta-modulated inputs, the drift of a CA1 place field can be understood in terms of the times at which input and output activities peak within a cycle of the theta oscillation. Moreover, the dynamics of the synaptic changes, and hence of the CA1 place-field drift, is dominated by the activation patterns occurring in the theta cycles where input and output activities overlap maximally. For in-phase modulated inputs, all CA3 inputs peak before the CA1 output (see Figure 10, second panel), and the order of input activation is consistently preserved across theta cycles. In this case, early-activated inputs undergo net LTP, whereas late-activated inputs undergo net LTD, leading to a backward-drifting CA1 place field. This is due to both the asymmetry of the STDP window and the non-Hebbian normalization terms in the learning rule (see section 2.1).

In the phase-precession case, the input-activity peaks are ordered as in the previous case, but due to the phase shifts, they occur further apart in time, and late-activated inputs tend to peak after the CA1 output (see Figure 10, third panel). In this scenario, early-activated (late-activated) inputs undergo stronger LTP (LTD) compared to the in-phase modulation case, and the CA1 place field drifts faster. Finally, with phase recession, the ordering of the input-activity peaks is reversed compared to the previous two scenarios (see Figure 10, bottom panel), leading to a CA1 place field that drifts in the opposite direction, that is, forward with respect to the direction of motion.
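The backward-drift mechanism described above can be illustrated with a minimal rate-based sketch. This is not the paper's model: all names, kernel shapes, and parameter values below are illustrative assumptions. Gaussian-tuned inputs on a circular track drive a weighted output; an asymmetric STDP-like window potentiates inputs whose fields precede the output field and depresses those that follow it, so an existing output field should shift opposite to the direction of motion.

```python
import numpy as np

# Minimal rate-based sketch of STDP-driven place-field drift on a circular
# track. All parameter values are illustrative, not taken from the paper.
N, T = 20, 200                                  # inputs, time bins per lap
t = np.arange(T)
centers = np.linspace(0, T, N, endpoint=False)  # homogeneously spaced fields
d = (t[None, :] - centers[:, None] + T / 2) % T - T / 2
rates = np.exp(-d**2 / (2 * 8.0**2))            # gaussian input tuning, N x T

# start from an already-formed output field (a bump in the weights)
w = 1.0 + np.exp(-(np.arange(N) - 10.0)**2 / (2 * 3.0**2))
target = w.sum()                                # normalization target

lags = np.arange(-30, 31)
window = np.sign(lags) * np.exp(-np.abs(lags) / 10.0)  # LTP for pre-before-post

def field_center(out):
    """Circular mean of the output profile, in time bins."""
    z = np.sum(out * np.exp(2j * np.pi * t / T))
    return (np.angle(z) % (2 * np.pi)) * T / (2 * np.pi)

c0 = field_center(w @ rates)
for _ in range(100):                            # laps around the track
    out = w @ rates
    # rate-based STDP update: correlate each input with the output at
    # every lag of the learning window
    dw = np.array([sum(window[i] * np.sum(rates[a] * np.roll(out, -lag))
                       for i, lag in enumerate(lags)) for a in range(N)])
    w = np.clip(w + 1e-5 * dw, 0.0, None)
    w *= target / w.sum()                       # crude weight normalization
c1 = field_center(w @ rates)

shift = (c1 - c0 + T / 2) % T - T / 2           # signed drift of the field center
print(shift)
```

With this sign convention (positive lags mean the input fires before the output), the printed shift should come out negative, that is, a backward drift of the output field.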

## 4 Analytical Results on Place-Field Inheritance

Since the matrix is circulant (see appendix B), it has *N* independent eigenvectors and *N* distinct eigenvalues such that and where are the *N*th roots of unity, that is, , and the vector is the first column of . Therefore, all the solutions of the dynamical system can be expressed in the eigenvectors’ basis, where the coefficients depend on the initial condition, and the vector is the fixed point of the dynamical system, which is given by (see equation 2.8 and appendix C). Substituting the eigenvectors in equation 4.2, we obtain equation 4.3. From equation 4.3, rewriting the coefficients in polar form and expanding the eigenvalues into their real and imaginary parts, we obtain equation 4.4. Note that the imaginary part of vanishes because, within the sum in equation 4.5, term *k* cancels out with term *N* − *k* and the term equals 0. Indeed, both the coefficients and the eigenvalues are DFTs of real vectors: (see equation 4.2) and (see equation 4.13), from which it follows that and , where denotes complex conjugation. Equation 4.4 shows that the solutions of the dynamical system are linear combinations of exponentially modulated sinusoids: the real parts of the eigenvalues determine the exponential growth or decay, whereas the imaginary parts determine the frequency of the oscillations.
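The decomposition just described rests on a standard fact that is easy to verify numerically: a circulant matrix is diagonalized by the discrete Fourier transform, its eigenvalues being the DFT of its first column and its eigenvectors the Fourier modes built from the *N*th roots of unity. A generic sketch (not the paper's actual matrix):

```python
import numpy as np

# A circulant matrix is diagonalized by the DFT: eigenvalues are the DFT of
# the first column, eigenvectors are the roots-of-unity Fourier modes.
# The matrix here is generic, not the one studied in the text.
N = 8
c = np.random.default_rng(0).normal(size=N)              # first column
C = np.stack([np.roll(c, k) for k in range(N)], axis=1)  # C[i, j] = c[(i - j) % N]

lam = np.fft.fft(c)                                  # candidate eigenvalues
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N)          # column m: powers of omega_m

# verify C v_m = lam_m v_m for every Fourier mode
err = max(np.linalg.norm(C @ F[:, m] - lam[m] * F[:, m]) for m in range(N))
print(err)
```

The residual `err` is at machine precision, confirming that all *N* Fourier modes are eigenvectors with the DFT values as eigenvalues.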

In the next section, we estimate the eigenvalues of the system in terms of the gaussian place-field profile *G*, the STDP learning window *W*, and the EPSP .

### 4.1 Estimation of the System Eigenvalues

Each element of the correlation matrix depends only on the distance *d* between two place fields. From equations 2.5 and 2.9, we derive , where . To characterize the input intensity , we need to distinguish the four input scenarios considered: no modulation, in-phase modulation, phase precession, and phase recession.

Consider two place fields *a* and *b*. By using equation 4.16, we rewrite *Q*, equation 4.15, as a function of the distance *d*_{ab} only: hence, , where and denote, respectively, the convolution and the cross-correlation of *f* and *g* evaluated at point *t*. Equation 4.18 can be rewritten as , where we define and is the so-called effective learning window (Sprekeler, Michaelis, & Wiskott, 2007).

For *N* even, we defined , where is the indicator function ( for , for ). The approximation in equation 4.22 holds when the gaussian width is much larger than both the width of the learning window *W* and the spacing of the input place fields, that is, when the input place fields largely overlap (see appendix D).
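The effective learning window introduced above can be sketched numerically as the STDP window filtered by the EPSP kernel, in the spirit of Sprekeler et al. (2007). The kernel shapes and time constants below are assumptions for illustration (only the 4 ms EPSP time constant follows Table 1); they are not the paper's actual definitions.

```python
import numpy as np

# Sketch: an asymmetric STDP window W filtered by a causal exponential EPSP.
# Kernel shapes and time constants are illustrative assumptions.
dt = 1e-4                                     # 0.1 ms resolution
t = np.arange(-0.1, 0.1, dt)                  # lag axis, seconds

tau_p, tau_m = 10e-3, 20e-3                   # assumed LTP / LTD time constants
W = np.where(t > 0, np.exp(-t / tau_p), -np.exp(t / tau_m))

tau_eps = 4e-3                                # EPSP time constant (Table 1)
eps = np.where(t >= 0, np.exp(-t / tau_eps), 0.0)
eps /= eps.sum() * dt                         # unit-area kernel

# effective window: each lag of W is averaged under the EPSP kernel
W_eff = np.correlate(W, eps, mode="same") * dt

# filtering smooths the discontinuity at zero lag but preserves the integral
print(W.sum() * dt, W_eff.sum() * dt)
```

Because the EPSP kernel has unit area, the filtering preserves the integral of the window (here net LTD-dominated) while smearing the sharp transition at zero lag over the EPSP time scale.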

### 4.2 Interpretation of the Analytical Results

We showed that the eigenvalues can be approximated by sampling the continuous Fourier transform of (see equation 4.22), where takes a different form in each of the four scenarios considered (see equations 4.20 and 4.21). To elucidate the meaning of this result in terms of the place field inheritance, we study qualitatively in each scenario.

In this case, the real part of the lowest-frequency eigenvalue dominates, and a CA1 place field emerges (see equations 4.6 and 4.10). Conversely, a CA1 place field would not emerge if the gaussian were much wider and, as a result, had a local minimum at . Such a broad gaussian in the frequency domain corresponds to a narrow gaussian in the time domain; in this case, the CA3 input place fields would be narrow compared to the width of the effective learning window, a scenario that is biologically unrealistic. Additionally, a CA1 place field emerges in a time that is inversely proportional to (see equation 4.11), and it drifts with a velocity proportional to (see equation 4.12).
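The time/frequency trade-off invoked here follows from the fact that the Fourier transform of a gaussian of width σ is a gaussian of width 1/(2πσ): narrow place fields give broad spectra and vice versa. A quick numerical check (the grid sizes and widths below are arbitrary):

```python
import numpy as np

# The Fourier transform of a gaussian of width sigma is a gaussian of
# width 1/(2*pi*sigma): narrow in time <=> broad in frequency.
def gauss_spectrum_width(sigma, T=10.0, n=4096):
    t = np.linspace(-T / 2, T / 2, n, endpoint=False)
    g = np.exp(-t**2 / (2 * sigma**2))
    G = np.abs(np.fft.fft(g))                 # magnitude spectrum
    f = np.fft.fftfreq(n, d=T / n)            # frequency grid
    p = G / G.sum()                           # treat |G| as a density over f
    return np.sqrt(np.sum(p * f**2))          # spectral standard deviation

for sigma in (0.1, 0.2, 0.4):
    print(sigma, gauss_spectrum_width(sigma), 1 / (2 * np.pi * sigma))
```

Each measured spectral width matches the analytical value 1/(2πσ), so doubling the place-field width halves the spectral width.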

Finally, for phase recession, the expression of is obtained as in the phase-precession case but with the compression factor *c* replaced by : This expression is analogous to the one obtained for phase precession (see equation 4.25), but with the direction of the frequency shifts inverted. As shown in Figure 12 (gray dashed lines), this inversion mainly affects the imaginary part of , meaning that in passing from phase precession to phase recession, only the drift velocity of a CA1 place field is affected. Specifically, is now smaller than in all the other cases, and it even assumes a small negative value. Therefore, in this scenario, a CA1 place field slowly drifts in the animal’s direction of motion, as observed in the numerical simulations (see Figures 5A and 7A).

## 5 Discussion

We studied the inheritance of place fields from region CA3 to region CA1 of the hippocampus through a Hebbian learning rule: spike-timing-dependent plasticity (STDP). We proposed a minimal computational model of a population of CA3 place cells projecting to a single CA1 cell, considering a virtual rat running on a circular track. The activity of sequentially activated CA3 place cells with overlapping place fields, combined with the STDP learning rule, induced a symmetry breaking in the distribution of the CA3-CA1 synaptic efficacies such that only synapses associated with neighboring CA3 place fields were potentiated. In this setting, a place field emerged in CA1. Then we studied the effects of theta modulation and phase precession on the place field inheritance process. We found that theta modulation favors the inheritance and leads to faster place field formation, whereas phase precession favors the drift of CA1 place fields over time.

Since a CA1 place field emerged from the integration of neighboring CA3 input place fields, place fields could be larger in CA1 than in CA3, which is consistent with experimental findings (Barnes, McNaughton, Mizumori, Leonard, & Lin, 1990; Mizumori, Ragozzino, Cooper, & Leutgeb, 1999; Mizuseki, Royer, Diba, & Buzsáki, 2012). Additionally, the location of a CA1 place field depended only on the initial configuration of the CA3-CA1 synaptic efficacies, which was random. Therefore, random synaptic wiring between regions CA3 and CA1 could be sufficient to explain the presence of spatially distributed CA1 place fields.

Our model, although too abstract to predict the actual time needed for learning, suggests that learning is a slow process requiring multiple traversals of the same place to be effective. This seems in contrast with the fact that place fields develop rapidly in novel environments (Hill, 1978; Wilson & McNaughton, 1993; Frank et al., 2004). In this respect, we suggest that spatially tuned CA3-CA1 connections may not need to be relearned in each environment, but are rather shaped and refined throughout the entire life of an animal exploring different environments. Additionally, this learning process may not even require postsynaptic firing. Indeed, Hebbian plasticity mechanisms in the absence of postsynaptic firing exist: Golding et al. (2002) showed that when somatic spiking was blocked, large-amplitude EPSPs (2–4 mV) and locally generated dendritic spikes were sufficient to induce LTP at both distal and proximal synapses of a CA1 pyramidal neuron. Moreover, Remy and Spruston (2007) showed that even a single burst of Schaffer-collateral activation was sufficient for this type of plasticity to occur. Since large-amplitude EPSPs can result from the spatiotemporal integration of correlated synaptic inputs, this was proposed as a Hebbian learning mechanism independent of somatic action-potential firing and backpropagation. We suggest that such dendritic Hebbian learning mechanisms could induce spatial tuning in a CA1 pyramidal neuron in the same way as Hebbian STDP did in our model.

The hypothesis of place field inheritance implies that spatially tuned CA1 firing fields are produced by spatially tuned synaptic inputs. This assumption is in agreement with a subthreshold ramp-like depolarization of the membrane potential observed within the firing fields of CA1 place cells (Harvey, Collman, Dombeck, & Tank, 2009; Epsztein, Brecht, & Lee, 2011). Interestingly, Lee et al. (2012) showed that a spatially tuned subthreshold depolarization appears also in nonspiking CA1 cells (i.e., silent cells) when the soma is slightly depolarized with a spatially uniform current. The authors proposed that silent cells, like place cells, receive spatially tuned input, but that this input may not propagate to the soma because of a nonlinear gating mechanism dependent on the somatic potential. This hypothesis was recently supported by the work of Sheffield and Dombeck (2015), who provided direct evidence for spatially tuned regenerative dendritic signals in CA1 place cells. Additionally, it was shown that the fraction of dendritic branches with detectable regenerative events correlates with both place field consistency across traversals and place field stability over days (Sheffield & Dombeck, 2015), two features previously linked to NMDA-receptor mediated plasticity. Taken together, these observations suggest that spatial tuning may be learned within the dendritic tree of a CA1 cell without the need of somatic activation and that a nonlinear gating mechanism could uncover this tuning, and lead to somatic spiking, as soon as the animal enters a novel environment.

The scenario of a preexisting spatial tuning that is abruptly uncovered in a novel space relates to the phenomenon of preplay reported by Dragoi and Tonegawa (2011, 2013): the temporal order of neuronal firing during rest or sleep predicts the sequence of future place cell activation in a previously unknown environment. In both cases, a preconfigured state is uncovered as the animal is exposed to a novel space, but whether preplay is related to spatially tuned silent cells and to what extent this preconfigured state is shaped by previous spatial experiences across environments is yet to be determined. Regardless of how spatial tuning is initially uncovered in a novel space, new place fields could be stabilized by synaptic plasticity as the animal is repeatedly exposed to the same environment across days (Kentros et al., 1998). When the environment becomes familiar, spatial tuning can be then readily expressed at the beginning of each new session of spatial exploration (Muller, Kubie, & Ranck, 1987; Thompson & Best, 1990; Mehta et al., 1997; Lever, Wills, Cacucci, Burgess, & O’Keefe, 2002; Ziv et al., 2013).

We also found that synaptic plasticity causes CA1 place fields to drift over time, where the drift is backward with respect to the animal’s direction of motion. This result is consistent with backward-drifting place fields observed experimentally in linear tracks (Mehta et al., 1997, 2000; Shen et al., 1997; Frank et al., 2002; Lee et al., 2004), and with the fact that blocking NMDAR-dependent plasticity abolishes the effect (Ekstrom et al., 2001). Such experience-dependent changes of hippocampal place fields were first predicted by Abbott and Blum (1996), and were also later modeled by Mehta et al. (2000), Mehta and Wilson (2000), and Yu, Knierim, Lee, and Shouval (2006). Our work agrees with these models in showing that Hebbian learning leads to a backward drift of sequentially activated place fields in linear tracks. Additionally, these models produced an asymmetric expansion of place fields, which is consistent with experimental data (Mehta et al., 1997, 2000; Shen et al., 1997; Frank et al., 2002; Lee et al., 2004). We also obtained asymmetric distributions of CA3-CA1 synaptic efficacies, which could produce asymmetric CA1 place fields. In our model, however, CA1 firing fields do not expand over time: they increase their peak rate and shrink in size during formation (see Figures 3 and 6), and they remain constant in size at steady state. This is due to the strong and fast normalization imposed by the STDP rule, the presence of the synaptic saturation boundaries, and the initially homogeneous distribution of the synaptic efficacies. Yet asymmetrically expanding CA1 place fields could be obtained for a different initial condition of the CA3-CA1 synaptic efficacies, that is, when the initial synaptic efficacy vector is already spatially tuned and the total synaptic weight is below the target normalization level of the learning rule. 
This initial condition, however, is better suited to study place field plasticity through experience, as done by Abbott and Blum (1996), Mehta et al. (2000), Mehta and Wilson (2000), and Yu et al. (2006), rather than to study the initial formation of CA1 place fields. Also note that the initial sharpening of CA1 fields produced by our model is consistent with synaptic plasticity being required for sharp spatial tuning in CA1 (McHugh et al., 1996).

Finally, we studied the effects of theta modulation and phase precession on the place field inheritance process. First, we found that theta modulation favors the inheritance and leads to faster place field formation in CA1. Although a decrease in theta modulation was found to correlate with spatial memory deficits (Givens & Olton, 1990; Robbe & Buzsáki, 2009), it remains difficult to disentangle the different effects of theta modulation on hippocampal neural activity. Second, we found that phase precession favors the drift of CA1 place fields; that is, phase-precessing CA3 place cells cause CA1 place fields to drift faster. Therefore, we make the following prediction: if phase precession were abolished in the CA3 region, place fields should drift more slowly in CA1 (see Robbe & Buzsáki, 2009). It should also be noted that the place field inheritance hypothesis addressed in this letter is tightly related to the hypothesis of phase-precession inheritance recently proposed by Jaramillo et al. (2014), who showed that for spatially tuned CA3-CA1 connections, phase-precessing CA3 inputs lead to a phase-precessing CA1 output. In this regard, our model explains how a spatial tuning of the CA3-CA1 connections could be achieved in a Hebbian framework.

We now discuss some limitations of our model. First, the entorhinal input to CA1 has been ignored, although experimental findings suggest that CA1 place fields are maintained by this input alone (Brun et al., 2002). We hypothesize that Hebbian learning can also lead to CA1 place fields from spatially tuned grid-like entorhinal input. Indeed, grid-to-place models based on Hebbian learning exist (see Cheng & Frank, 2011, for a review), and CA3 and entorhinal inputs may interact such that associations are learned between the two. Second, only excitatory connections have been considered in our model, although a dense network of interneurons provides strong inhibitory input to hippocampal pyramidal cells (Klausberger & Somogyi, 2008). This strong inhibition may indeed explain the low firing rate of CA1 pyramidal cells. Finally, several simplifications have been made in our model: hippocampal pyramidal cells were assumed to be simple linear Poisson units, synaptic efficacies were modeled as continuous variables ignoring the quantal nature of synaptic transmission (Martin, 1966), and the synaptic saturation boundaries, as well as the parameters of the STDP learning window, were not directly related to measurable biophysical quantities. Nevertheless, the minimal model presented in this letter allowed us to study the place field inheritance hypothesis and to understand, with both analytical and numerical means, how the temporal structure of hippocampal neural activity may affect this inheritance. We also expect that place field inheritance could be obtained with more detailed neuron models and with other plasticity rules of the Hebbian type (Linsker, 1986; Guetig, Aharonov, Rotter, & Sompolinsky, 2003; Cooper, Intrator, Blais, & Shouval, 2004; Pfister & Gerstner, 2006; Morrison, Aertsen, & Diesmann, 2007; Clopath, Büsing, Vasilaki, & Gerstner, 2010).

To conclude, while the mechanisms underlying place field formation remain unclear, we showed that the place field activity in one hippocampal region can be inherited from an upstream region by simple Hebbian learning, with theta modulation leading to faster place field inheritance and phase precession leading to a faster drift of the inherited place fields.

### Appendix A: Model Parameters

Table 1 reports the parameter values used for numerical simulations of the averaged system (see Figures 3–8) and for analytical and numerical calculation of the eigenvalues (see Figures 11 and 12). Note that in Figures 5C, 5D, and 7C, the number of CA3 input synapses was and the upper saturation boundary was . For the simulations with spiking neurons (see Figure 9), we set , , and . Additionally, input place fields had variable widths and randomly distributed centers: the place field width was normally distributed with mean s and variance s, and place field centers were drawn from a uniform distribution in the interval .

| Parameter | Description |
| --- | --- |
| L = 1 m | Length of the track |
| v = 0.5 m/s | Running speed of the virtual rat |
| T = 2 s | Duration of one round of the track |
| N = 20 | Number of CA3 input place cells |
| = 10 spikes/s | Average input intensity in one round of the track |
| v_{0} = 0 spikes/s | Spontaneous output rate of the CA1 neuron |
| = 4 ms | EPSP time constant |
| = s | Place field gaussian standard deviation |
| f_{0} = Hz | Frequency of input oscillations |
| c = | Compression factor for phase precession/recession |
| = | Range of the synaptic efficacies at the initial condition |
| = | Upper saturation boundary for the synaptic efficacies () |
| = | Learning rate |
| = 1 | Amount of synaptic change for input spikes |
| = | Amount of synaptic change for output spikes |
| = | Learning window amplitude for LTP |
| = 1 | Learning window amplitude for LTD |
| = ms | Learning window time constant for LTP |
| = ms | Learning window time constant for LTD |
| = s | Integral of the learning window (derived) |
| k_{1} = | Constant for the learning equation (derived) |
| k_{2} = | Constant for the learning equation (derived) |
| k_{3} = | Constant for the learning equation (derived) |


### Appendix B: Proof of the Circularity of the Matrix

A circulant matrix is obtained from an *n*-vector by cyclically permuting its entries; that is, an matrix is circulant if and only if . We recall that , where for all , and is the identity (see equation 4.1). Since and are circulant, and since the set of circulant matrices is closed under summation, if is circulant, then is also circulant. In section 4.1 and in appendix E, we show that in the four scenarios considered, each element of the correlation matrix *Q* depends only on the distance *d*_{ab} between place fields *a* and *b* centered in and : . Additionally, since place fields are homogeneously spaced on the track, it holds that . Combining these two facts, we prove that is circulant:
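The argument above can be checked numerically with an assumed (and deliberately asymmetric) distance kernel: for homogeneously spaced centers on a circular track, any matrix whose entries depend only on the signed circular distance between centers is circulant, even when it is not symmetric.

```python
import numpy as np

# If Q[a, b] is any function of the signed circular distance between
# homogeneously spaced place-field centers, Q is circulant.
# The shifted gaussian kernel below is an arbitrary illustrative choice.
N, T = 12, 2.0
centers = np.arange(N) * T / N
# signed circular distance d_ab in [-T/2, T/2)
d = (centers[:, None] - centers[None, :] + T / 2) % T - T / 2
Q = np.exp(-(d - 0.1)**2 / (2 * 0.3**2))      # asymmetric kernel, Q != Q.T

# circulant <=> row a is row 0 cyclically shifted by a
is_circulant = all(np.allclose(Q[a], np.roll(Q[0], a)) for a in range(N))
print(is_circulant)
```

Note that the kernel need not be even in the distance: circularity follows from homogeneous spacing alone, which is why the proof applies to all four input scenarios.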

### Appendix C: Computation of the Fixed Point

### Appendix D: Estimation of the Eigenvalues from

We express the *T*-periodic function *F* as the Fourier series , where the Fourier coefficients are given by and is the continuous Fourier transform of . From equation D.3, we compute *f* as , and by rewriting the infinite sum in equation D.6 as two nested sums, we obtain . By applying the DFT to both sides of equation D.8, one finds .
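The approximation derived in this appendix can be verified numerically for an assumed gaussian kernel: the DFT of *N* samples of the *T*-periodized function matches samples of the continuous Fourier transform at multiples of 1/*T*, up to aliasing terms that are negligible when the function is smooth and wide.

```python
import numpy as np

# DFT of samples of a T-periodized gaussian vs. samples of its continuous
# Fourier transform at frequencies k/T (the gaussian width is illustrative).
N, T, sigma = 20, 2.0, 0.2
t = np.arange(N) * T / N

# T-periodization of f(t) = exp(-t^2 / (2 sigma^2)) (a few wrapped copies)
F = sum(np.exp(-(t + m * T)**2 / (2 * sigma**2)) for m in range(-5, 6))

dft = np.fft.fft(F) * T / N                   # Riemann-sum approximation of T*c_k
freqs = np.fft.fftfreq(N, d=T / N)            # frequencies k/T
ft = sigma * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * sigma * freqs)**2)

err = np.max(np.abs(dft - ft))                # aliasing + discretization error
print(err)
```

Here the spectrum is essentially zero beyond the Nyquist frequency N/(2T), so the aliasing error is tiny; for a much narrower gaussian the approximation would degrade, mirroring the validity condition stated in section 4.1.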