Abstract

A place cell is a neuron that fires whenever the animal traverses a particular location of the environment—the place field of the cell. Place cells are found in two regions of the rodent hippocampus: CA3 and CA1. Motivated by the anatomical connectivity between these two regions and by the evidence for synaptic plasticity at these connections, we study how a place field in CA1 can be inherited from an upstream region such as CA3 through a Hebbian learning rule, in particular, through spike-timing-dependent plasticity (STDP). To this end, we model a population of CA3 place cells projecting to a single CA1 cell, and we assume that the CA1 input synapses are plastic according to STDP. With both numerical and analytical methods, we show that in the case of overlapping CA3 input place fields, the STDP learning rule leads to the formation of a place field in CA1. We then investigate the roles of hippocampal theta modulation and phase precession in the inheritance process. We find that theta modulation favors inheritance and leads to faster place field formation, whereas phase precession changes the drift of CA1 place fields over time.

1  Introduction

Place cells are neurons that selectively increase their activity at specific locations of the environment, the so-called place field of the cell (O’Keefe & Dostrovsky, 1971). Since their discovery, place cells have been the object of a great number of experimental and theoretical studies, being probably the best example of a neural correlate for a high-level cognitive concept. Nevertheless, to date, there is no widely accepted theory for the formation of place fields (Moser, Kropff, & Moser, 2008).

Place cells are typically recorded from the rat hippocampus, where they were originally found by O’Keefe & Dostrovsky (1971). As in humans and nonhuman primates, the rodent hippocampus is formed by two U-shaped interlocking sectors, the dentate gyrus (DG) and the cornu ammonis (CA), the latter being further subdivided into four anatomical subfields (CA1-4) (Lorente de Nó, 1934). Place fields are observed in the activity of pyramidal cells in regions CA3 and CA1. The CA1 pyramidal cells receive excitatory inputs from the CA3 pyramidal cells through the Schaffer collaterals and from layer III of entorhinal cortex (EC) through the perforant path (Andersen, Morris, Amaral, Bliss, & O’Keefe, 2006). Region CA3 is characterized by place-field activity (Leutgeb & Leutgeb, 2007), while EC consists of a mix of grid cells, head-direction cells and nonspatially tuned cells (Fyhn, Molden, Witter, Moser, & Moser, 2004; Hargreaves, Rao, Lee, & Knierim, 2005; Hafting, Fyhn, Molden, Moser, & Moser, 2005; Sargolini et al., 2006). As reviewed by Ahmed and Mehta (2009), either of these two inputs is sufficient to generate CA1 place fields, but both are required for sharp spatial tuning. Indeed, Brun et al. (2002) showed that after lesioning the CA3-CA1 connections, CA1 pyramidal cells maintained their place fields. However, CA1 place cells in lesioned animals fired more diffusely, had larger place fields, and had lower peak firing rates compared to controls. Similarly, when Brun et al. (2008) chemically lesioned layer III of entorhinal cortex to see whether CA3 inputs alone could sustain CA1 place field activity, CA1 place fields became larger and had reduced peak firing rates.
Furthermore, in young developing animals, place cells are observed before grid cells (Langston et al., 2010; Wills, Cacucci, Burgess, & O’Keefe, 2010), and place cells persist after disruption of grid cell periodicity by septal inactivation (Koenig, Linder, Leutgeb, & Leutgeb, 2011; Brandon, Koenig, Leutgeb, & Leutgeb, 2014).

The possibility of place field formation from entorhinal grid field input has been evaluated in a number of theoretical studies (see Cheng & Frank, 2011, for a review). For example, Solstad, Moser, and Einevoll (2006) showed that in small two-dimensional environments, single place fields can be formed by summing the activity of a modest number of grid cells. Additionally, the choice of grid cells contributing to a given place field can result from competitive Hebbian learning (Rolls, Stringer, & Elliot, 2006; Savelli & Knierim, 2010; Pilly & Grossberg, 2012), and Franzius, Vollgraf, and Wiskott (2007) obtained place cells from grid cells by applying a linear transformation that maximizes sparseness.

In this letter, we focus on the CA3 input to CA1. This simplification allows us to isolate, and study analytically, different properties of the CA3 input, and complements the previous studies on the transformation from grid to place cells. We investigate how a CA1 place field can spontaneously emerge from the activity of spatially tuned CA3 neurons if synapses obey a Hebbian synaptic plasticity rule (McHugh, Blum, Tsien, Tonegawa, & Wilson, 1996; Mehta, Barnes, & McNaughton, 1997). Indeed, CA3 represents the largest source of excitatory input to a CA1 pyramidal cell (Megias, Emri, Freund, & Gulyas, 2001), and intact CA3-CA1 synaptic plasticity is required for both high spatial specificity (McHugh et al., 1996) and long-term stability (Kentros et al., 1998) of CA1 place fields.

The synaptic plasticity rule we adopt is spike-timing-dependent plasticity (STDP) (Gerstner, Kempter, Van Hemmen, & Wagner, 1996; Markram, Lübke, Frotscher, & Sakmann, 1997; Kempter, Gerstner, & Van Hemmen, 1999; Song, Miller, & Abbott, 2000; Dan & Poo, 2004). Synaptic modifications are sensitive to the ordering of presynaptic and postsynaptic activation: a synaptic efficacy increases if presynaptic firing precedes a postsynaptic spike and decreases otherwise. A fundamental property of such a learning rule resides in the asymmetry of the so-called STDP learning window: a real-valued function defining the amount and the direction of synaptic changes as a function of the time difference between pre- and postsynaptic spikes. STDP is an attractive model for our study since in its additive form, it can induce a competition between the inputs for the control of the postsynaptic output, enabling a neuron to learn input correlations in an unsupervised manner (Kempter et al., 1999; Song et al., 2000; Abbott & Nelson, 2000).

From an experimental point of view, STDP was shown to be a valid model to explain synaptic changes of hippocampal neurons in dissociated slice cultures (Bi & Poo, 1998) and at the CA3-CA1 connection in vitro (Kwag & Paulsen, 2009). Yet synaptic plasticity at the Schaffer collaterals may also depend on other activity parameters such as postsynaptic bursting, which are not considered by STDP alone (Wittenberg & Wang, 2006; Buchanan & Mellor, 2010). Additionally, CA1 place fields emerge rapidly after the first exposure to a novel environment, sometimes even in the absence of previous postsynaptic spiking (Hill, 1978; Wilson & McNaughton, 1993; Frank, Stanley, & Brown, 2004). In this respect, we suggest that more biophysically detailed models of place field inheritance could consider plasticity rules acting within the dendritic tree of a CA1 cell (see Golding, Staff, & Spruston, 2002). These rules could generate spatially tuned dendritic inputs to a CA1 cell over behaviorally long timescales (hours, days, or weeks of an animal exploring different environments), independent of somatic activation. Dendritic tuning (Sheffield & Dombeck, 2015) could then be uncovered through nonlinear gating mechanisms dependent on the somatic potential (Lee, Lin, & Lee, 2012), leading to place-specific somatic spiking as soon as the animal enters a novel environment. Despite a difference in the biophysical details, we expect dendritic plasticity rules to produce qualitatively similar results to STDP, which is well grounded theoretically and easier to study analytically.

Since STDP is sensitive to the timing of the input spikes, it is interesting to consider the temporal features of hippocampal neural activity, such as its modulation by the theta rhythm (see Buzsáki, 2002, for a review). The hippocampal theta rhythm, in the frequency band 4 to 12 Hz, is one of the largest and most regular EEG oscillations in the rat brain and is particularly prominent during locomotion, exploratory behavior, and REM sleep (Vanderwolf, 1969; Winson, 1974). Hippocampal place cell activity is strongly modulated in the theta-frequency range (O’Keefe, 1976). In addition, a peculiar relation exists between the firing of place cells and the EEG theta rhythm: as a rat moves across a place field, the associated place cell tends to spike at earlier and earlier phases of successive theta cycles (O’Keefe & Recce, 1993). This phenomenon, known as phase precession, has inspired many theories on the storage and recall of memory sequences in the hippocampus (Skaggs, McNaughton, Wilson, & Barnes, 1996; Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996; Jensen & Lisman, 1996; Lisman & Otmakhova, 2001).

Given that CA1 place fields can be inherited from CA3 via Hebbian synaptic plasticity, what are the roles of theta modulation and phase precession in this scenario? Do they favor or disfavor the inheritance process? We tackle these questions from a theoretical perspective. Using a mathematical model of the neural activity in CA3 and CA1, we study the dynamics of the CA3-CA1 synaptic efficacies to test the place field inheritance hypothesis and investigate the roles of theta modulation and phase precession in the inheritance process.

2  Model of Place-Field Inheritance

We model the activity of a population of N CA3 cells projecting to a CA1 cell, assuming that place fields are already formed in CA3 and that the CA3-CA1 synaptic efficacies are plastic according to STDP. We evaluate the possibility of CA3-CA1 place field inheritance considering a hypothetical experiment of a rat running at a constant speed on a circular track. Our hypothesis is that starting from random connectivity, the activity of the CA3 place cells, combined with the STDP learning rule, can induce a structure in the distribution of the CA1 input synapses that gives rise to a CA1 place field (see Figure 1).

Figure 1:

Model of place field inheritance through STDP. The model consists of a CA1 cell that receives synaptic inputs from many CA3 place cells. The CA3 place cells are drawn according to the topography of the associated place fields: neighboring cells in the plot have neighboring place fields on the track. Initially the CA3-CA1 synaptic connections (black filled dots) are randomly distributed, and no clear place field is present in CA1 (A). We hypothesize that after learning, a structure has emerged in the configuration of the input synapses, and a place field is formed in CA1 (B).

2.1  Model of Spike-Timing-Dependent Plasticity

We adopt the STDP plasticity rule proposed by Kempter et al. (1999). We consider N presynaptic input neurons projecting to one postsynaptic output neuron where:

  1. Each input spike at synapse n changes its synaptic efficacy Jn by an amount ηw_in.

  2. Each output spike changes all input synaptic efficacies by an amount ηw_out.

  3. Each pair of an output spike with an input spike at synapse n changes Jn by an amount ηW(t_in − t_out), where t_in and t_out are the times of the input and output spikes.

Input synapses are indexed by n = 1, …, N; η is the learning rate (0 < η ≪ 1); w_in and w_out are constant parameters; and W(τ) is the learning window as a function of the time difference τ = t_in − t_out between input and output spikes. Additionally, hard bounds are imposed to limit individual synaptic efficacies between 0 and Jmax. To ease analytical calculations and better understand the effect of bounds, we also study the unbounded case.
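
As a concrete illustration, the three update rules and the hard bounds can be sketched in a few lines of code. All parameter values below are hypothetical placeholders, not the values used in this letter.

```python
import numpy as np

# Illustrative sketch of the three additive STDP update rules with
# hard bounds. All parameter values are hypothetical placeholders.
N = 50                        # number of input synapses
eta = 0.01                    # learning rate (0 < eta << 1)
w_in, w_out = 1.0, -1.0       # non-Hebbian update amounts
A_p, A_d = 1.0, 0.4           # max potentiation / depression per pair
tau_p, tau_d = 0.010, 0.020   # window time constants (s)
J_max = 0.2                   # upper hard bound on each efficacy

def W(tau):
    """Learning window; tau = t_in - t_out. Potentiation if the
    input spike precedes the output spike (tau <= 0)."""
    return np.where(tau <= 0, A_p * np.exp(tau / tau_p),
                    -A_d * np.exp(-tau / tau_d))

J = np.full(N, 0.1)           # synaptic efficacies

def on_input_spike(J, n):
    # rule 1: each input spike at synapse n changes J[n] by eta*w_in
    J[n] = np.clip(J[n] + eta * w_in, 0.0, J_max)

def on_output_spike(J, t_out, last_in_times):
    # rule 2: each output spike changes all efficacies by eta*w_out
    # rule 3: each in/out pair changes J[n] by eta*W(t_in - t_out)
    J += eta * (w_out + W(last_in_times - t_out))
    np.clip(J, 0.0, J_max, out=J)
```

For simplicity, rule 3 is applied here only to the most recent input spike at each synapse; in the full model, all spike pairs contribute.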

We make four assumptions:

  1. The learning rate η is small enough such that the timescale of learning is much slower than the neural spiking dynamics and the timescale of behavior, that is, the time taken for one round of the circular track.

  2. The presynaptic inputs are inhomogeneous Poisson processes with intensities λn(t).

  3. The postsynaptic output is an inhomogeneous Poisson process with intensity
    νout(t) = ν0 + Σ_{n=1}^{N} Jn(t) Σ_k ε(t − t_n^k),
    where ν0 is the spontaneous output rate, ε(t) is the excitatory postsynaptic potential (EPSP), and t_n^k is the arrival time of the kth input spike at synapse n. For convenience, the EPSP function is normalized to unit integral: ∫ ε(t) dt = 1.
  4. The input intensities, averaged over a time interval T, are identical and constant: (1/T) ∫_t^{t+T} λn(t′) dt′ = λ̄ for all n, where T is much larger than typical interspike intervals.
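
A minimal sketch of assumptions 2 and 3: input spike trains drawn from inhomogeneous Poisson processes, and an output intensity equal to the spontaneous rate plus the EPSP-filtered, efficacy-weighted input spikes. All rates and parameters below are illustrative, not the letter's values.

```python
import numpy as np

# Sketch of assumptions 2 and 3; all parameter values are illustrative.
rng = np.random.default_rng(2)

def poisson_spikes(rate_fn, t_max, rate_max):
    """Inhomogeneous Poisson spike train by thinning: propose spikes at
    rate_max, keep each with probability rate_fn(t)/rate_max."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

def output_rate(t, nu0, J, input_spikes, tau_eps=0.005):
    """Linear Poisson neuron: nu0 plus EPSP-filtered weighted inputs.
    The EPSP is an alpha function normalized to unit integral."""
    nu = nu0
    for J_n, s_n in zip(J, input_spikes):
        dts = t - s_n[s_n < t]                       # past spikes only
        nu += J_n * np.sum((dts / tau_eps**2) * np.exp(-dts / tau_eps))
    return nu
```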

Under these assumptions, Kempter et al. (1999) showed that the dynamics of the synaptic efficacies Jn(t), averaged over the time interval T, obey the following differential equation:
dJn(t)/dt = k1 + k2 Σ_{m=1}^{N} Jm(t) + k3 Σ_{m=1}^{N} Qnm(t) Jm(t)
2.2
The three constants k1, k2, and k3 are
k1 = η (w_in λ̄ + w_out ν0 + W̃ λ̄ ν0),  k2 = η w_out λ̄,  k3 = η
2.3
where W̃ = ∫ W(τ) dτ is the integral of the learning window and the matrix Qnm(t) is given by
Qnm(t) = ∫ ds W(s) λn(t + s) (ε ∗ λm)(t)
2.4
where (ε ∗ λm)(t) is the EPSP-filtered input intensity.
Under the assumptions above, we use equation 2.2 to model the dynamics of the CA3-CA1 synaptic efficacies, choosing an averaging time window of length T, where T is the time taken by the rat to complete one round of the track. Considering a rat running at constant speed, the spatially tuned CA3 input intensities λn(t) are periodic with fundamental period T, and in this setting, we can eliminate the time dependence from the expression of Qnm in equation 2.4,
Qnm = (1/T) ∫_0^T dt ∫ ds W(s) λn(t + s) (ε ∗ λm)(t),
2.5
and rewrite the learning dynamics as a linear dynamical system with constant coefficients:
dJn(t)/dt = k1 + Σ_{m=1}^{N} (k2 + k3 Qnm) Jm(t).
2.6
As STDP learning window, we adopt a double-exponential function (Song et al., 2000),
W(τ) = A_p e^{τ/τ_p} for τ ≤ 0,  W(τ) = −A_d e^{−τ/τ_d} for τ > 0,
2.7
where τ = t_in − t_out is the time difference between a pair of pre- and postsynaptic spikes, τ_p and τ_d define the intervals over which synaptic strengthening and weakening occur, and A_p and A_d determine the maximum amount of synaptic modification per spike pair (A_p > 0, A_d > 0). The parameters A_p, A_d, τ_p, and τ_d are chosen such that the integral of the learning window is positive: W̃ = A_p τ_p − A_d τ_d > 0. Additionally, we impose w_in > 0 and w_out < 0, and we assume that input correlations are weak, that is, k3 |Q̄| ≪ |k2|, where Q̄ is the average value of the correlation matrix Qnm. In this case, the learning rule leads to an intrinsic normalization of the average synaptic efficacy (1/N) Σn Jn to the fixed point (Kempter et al., 1999)
J̄* = −k1 / (N k2).
2.8
For convenience, we further choose the parameters such that the time-averaged output rate at the fixed point stays close to the spontaneous rate ν0. Finally, we model the EPSP as an alpha function normalized to unit integral: ε(t) = Θ(t) (t/τ_ε²) e^{−t/τ_ε}, where Θ(t) is the Heaviside step function and τ_ε is the decay time constant.
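
As a quick numerical sanity check (with made-up time constants, not the letter's parameter values), one can verify that the window integral W̃ is positive for a suitable parameter choice and that the alpha-function EPSP integrates to 1:

```python
import numpy as np

# Numerical check of the window integral and the EPSP normalization.
# A_p, A_d, tau_p, tau_d, and tau_eps are hypothetical values.
A_p, A_d = 1.0, 0.4
tau_p, tau_d = 0.010, 0.020      # strengthening/weakening intervals (s)
tau_eps = 0.005                  # EPSP decay time constant (s)

dt = 1e-6
tau = np.arange(-0.2, 0.2, dt)
W = np.where(tau <= 0, A_p * np.exp(tau / tau_p),
             -A_d * np.exp(-tau / tau_d))
W_tilde = W.sum() * dt           # ~ A_p*tau_p - A_d*tau_d > 0

t = np.arange(0.0, 0.2, dt)
eps = (t / tau_eps**2) * np.exp(-t / tau_eps)   # alpha function
eps_integral = eps.sum() * dt    # ~ 1 by construction
```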

2.2  Model of the CA3 Input Neurons

We model the activity of a population of N CA3 place cells projecting to a single CA1 cell by considering a rat running at constant speed v on a circular track of length L (parameter values in appendix A). The activity of a CA3 place cell is periodic with period T = L/v, and it can be expressed as an infinite periodic summation,
λn(t) = Σ_{j=−∞}^{+∞} λ̂n(t − jT),
2.9
where λ̂n(t) is the Poisson intensity of cell n for an infinite linear track. We start the summation from j = −∞ rather than from j = 0 to avoid boundary effects at t = 0.

We model the activity of the CA3 neurons in four scenarios: no modulation, in-phase modulation, phase precession, and, for completeness, phase recession. Experimental findings (e.g., Harris et al., 2002; Kjelstrup et al., 2008; Royer, Sirota, Patel, & Buzsáki, 2010) indicate that the phase-precession scenario is closest to biology. Nonetheless, the remaining three scenarios are interesting from a theoretical point of view because they allow us to disentangle the different effects of CA3 neural activity on the learning dynamics. The function λ̂n(t) takes a different form in each of the four scenarios considered.

2.2.1  No Modulation

In the first scenario, CA3 place fields are modeled as gaussian functions G (see Figure 2A):
λ̂n(t) = λ̄ T G(t − tn),  with  G(x) = (2πσ²)^{−1/2} e^{−x²/(2σ²)},
2.10
where tn is the time taken by the virtual rat to reach the center of place field n in the first round of the track and σ determines the time it takes to traverse a place field, that is, its size. Here, as in all the remaining cases, we assume place fields that are regularly spaced on the track:
tn = nT/N  for n = 1, …, N.
2.11
The normalization factor λ̄T in equation 2.10 is chosen such that the activity of the CA3 inputs, averaged over the period T, is equal to λ̄:
(1/T) ∫_0^T λn(t) dt = λ̄,
2.12
as is assumed by the STDP model we adopt (see section 2.1, assumption 4).
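
The construction in equations 2.9 to 2.12 can be checked numerically. The sketch below uses illustrative values for N, T, σ, and λ̄ (not those of the letter) and truncates the periodic summation to a few neighboring laps:

```python
import numpy as np

# Periodic Gaussian CA3 intensities; parameter values are illustrative.
N, T = 20, 5.0          # number of cells, lap duration (s)
sigma = 0.2             # place field width (s)
lam_bar = 5.0           # mean input rate (spikes/s)

t = np.linspace(0.0, T, 5001)[:-1]      # one lap, uniform grid
t_n = np.arange(1, N + 1) * T / N       # regularly spaced field centers

def gauss(x):
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# periodic summation; |j| <= 3 laps is ample for sigma << T
lam = np.zeros((N, t.size))
for j in range(-3, 4):
    lam += lam_bar * T * gauss(t[None, :] - t_n[:, None] - j * T)
```

Averaging each row over the lap recovers λ̄, confirming the normalization in equation 2.12.
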
Figure 2:

Four examples of the Poisson input intensity λ̂n(t): (A) no modulation, (B) in-phase modulation, (C) phase precession, and (D) phase recession. In panels B–D, the input-modulation frequency f0 is kept constant, while the theta oscillation frequency fθ equals f0 in panel B, f0/(1 + c) in panel C, and (1 + c)f0 in panel D, where c is the compression factor (gray shades indicate rising phases of the theta cycle). In panel B the inputs peak in phase with the theta cycle, while in panels C (D), the inputs peak at earlier and earlier (later and later) phases in successive theta cycles. Δt denotes the time taken by the virtual rat to move between two neighboring place field centers. Within a theta cycle, neighboring inputs peak with an average time lag that is zero in panel B, cΔt/(1 + c) in panel C, and −cΔt in panel D; that is, in panel D the inputs peak in reverse order compared to the direction of place field traversal.

2.2.2  In-Phase Modulation

In the second scenario, we consider that the hippocampal theta rhythm imposes a common modulation on the inputs. We assume the inputs to be modulated with a frequency f0 that is equal to the frequency of the theta oscillation (see Figure 2B). As a consequence, the input intensities of different CA3 inputs always peak at the same phase of the theta rhythm; that is, neither phase precession nor phase recession is modeled. For convenience, the input oscillations peak at the peaks of the theta rhythm. In this scenario, we model a CA3 place field as
λ̂n(t) = λ̄ T G(t − tn) [1 + cos(ω0 t)],
2.13
where ω0 = 2πf0 is the angular velocity of the modulating sinusoid. The time-averaged input intensity is
(1/T) ∫_0^T λn(t) dt = λ̄ [1 + Re(G̃(ω0) e^{i ω0 tn})],
2.14
where G̃(ω) = ∫ G(x) e^{−iωx} dx is the continuous Fourier transform of G and i is the imaginary unit. If the width σ of the gaussian is large enough compared to the fundamental period of the modulating sinusoid, then G̃(ω0) = e^{−σ²ω0²/2} ≈ 0, and the integral in equation 2.14 can be approximated by λ̄. Finally, to maintain the circularity of the system, we impose f0 = k/T with k ∈ ℕ.
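
The approximation of the time-averaged intensity by λ̄ can be illustrated numerically (the values below are made up): for a field width σ much larger than the modulation period, the Gaussian's Fourier transform at ω0 is vanishingly small.

```python
import numpy as np

# Check that the time-averaged modulated intensity deviates from
# lam_bar by cos(w0*t_n) * exp(-sigma^2*w0^2/2). Values illustrative.
T, sigma, lam_bar = 5.0, 0.2, 5.0
f0 = 8.0                        # modulation frequency (Hz); f0*T integer
w0 = 2 * np.pi * f0
t_n = 2.0                       # example place field center (s)

t = np.linspace(0.0, T, 200001)[:-1]
g = np.exp(-(t - t_n)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
lam = lam_bar * T * g * (1.0 + np.cos(w0 * t))

avg = lam.mean()                # time average over one lap
pred = lam_bar * (1.0 + np.cos(w0 * t_n) * np.exp(-0.5 * (sigma * w0)**2))
```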

2.2.3  Phase Precession

In the third scenario, we consider modulated CA3 inputs that show phase precession: as the animal moves across a place field, the associated place cell spikes preferentially at earlier and earlier phases of successive theta cycles (see Figure 2C) (O’Keefe & Recce, 1993). To generate such phase precession in a firing rate model, the input-modulating frequency f0 is slightly higher than the frequency fθ of the theta oscillation (Geisler, Robbe, Zugaro, Sirota, & Buzsáki, 2007). We assume f0 = (1 + c) fθ, where c is the so-called compression factor (Geisler et al., 2010), which assumes a small positive value (0 < c ≪ 1). Additionally, following Geisler et al. (2010), we assume that at the center of each firing field (t = tn for cell n), the phase of the input oscillation equals the phase ωθ tn of the theta rhythm, meaning that high discharge probability occurs at a preferred theta phase (Huxter, Burgess, & O’Keefe, 2003; Dragoi & Buzsáki, 2006). For convenience, and in line with the in-phase-modulation scenario, we assume this phase to be zero (in Figure 2C, high firing rate peaks occur around the peak of the theta cycle). Mathematically, this requires the input n to be modulated with a phase offset −cωθtn, which leads us to the Poisson input intensities:
λ̂n(t) = λ̄ T G(t − tn) [1 + cos(ω0 t − c ωθ tn)],
2.15
where ω0 = 2πf0 and ωθ = 2πfθ. In this case, the distance between two place fields is related to the compressed distance between the spike timings within a theta cycle; that is, the order of spiking within a theta cycle encodes the sequence of place fields traversed (Skaggs et al., 1996). In other words, if the rat takes a time Δt to move from one place field center to another, the corresponding place cells spike, within a theta cycle, with an average time lag cΔt/(1 + c) (see Figure 2C). By setting c = 0 in equation 2.15, one recovers the case of modulation without phase precession in equation 2.13. Similar to this previous case, the time-averaged input intensities can be approximated by λ̄ because
(1/T) ∫_0^T λn(t) dt = λ̄ [1 + cos(ωθ tn) G̃(ω0)] ≈ λ̄.
2.16
Finally, to maintain the circularity of the system, we impose fθ = k/T and f0 = k′/T with k, k′ ∈ ℕ, which implies c = k′/k − 1.
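
The compression of time lags under phase precession can be made concrete with a small calculation (all values below are illustrative, not the letter's): within a theta cycle, the modulation peaks of two cells whose field centers are Δt apart are separated by cΔt/(1 + c).

```python
import numpy as np

# Within-theta-cycle time lag under phase precession;
# c, f_theta, and the field centers below are illustrative values.
c = 0.05                          # compression factor (0 < c << 1)
f_theta = 8.0                     # theta frequency (Hz)
w_theta = 2 * np.pi * f_theta
w0 = (1 + c) * w_theta            # input modulation is slightly faster

t_n, t_m = 1.00, 1.25             # two neighboring field centers (s)
dt_centers = t_m - t_n            # time to move between the centers

def nearest_peak(t_center, t0):
    """Modulation peak of a cell nearest to time t0: solve
    w0*t - c*w_theta*t_center = 2*pi*k for integer k."""
    k = np.round((w0 * t0 - c * w_theta * t_center) / (2 * np.pi))
    return (2 * np.pi * k + c * w_theta * t_center) / w0

lag = nearest_peak(t_m, 3.0) - nearest_peak(t_n, 3.0)
```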

2.2.4  Phase Recession

In the fourth scenario, we consider the hypothetical case in which the CA3 inputs show phase recession: as the animal moves across a place field, the associated place cell spikes at later and later phases of successive theta cycles, and the order of spiking within a theta cycle encodes the sequence of place fields traversed in reverse order (see Figure 2D). In this scenario, the CA3 input intensities are modeled in the same way as in the phase-precession case, but the direction of the phase shifts in the modulating sinusoid is inverted:
λ̂n(t) = λ̄ T G(t − tn) [1 + cos(ω0 t + c ω0 tn)].
2.17
Thus, we obtain a theta frequency fθ = (1 + c) f0 that is slightly higher than the frequency f0 of the input oscillations (fθ > f0 for c > 0). Similar to the previous case, the time-averaged input intensities are approximated by λ̄, and we impose f0 = k/T and fθ = k′/T with k, k′ ∈ ℕ to maintain the circularity of the system.

3  Numerical Results on Place-Field Inheritance

To test whether place fields could be inherited from CA3 to CA1, and to study how this inheritance depends on the properties of the input activity, we simulate the average dynamics of the CA3-CA1 synaptic efficacies by integrating equation 2.6 numerically (forward Euler method; the integration step is given in appendix A). The initial condition is a random vector whose components are uniformly distributed around a mean J̄0 (range given in appendix A). Since J̄0 is close to the intrinsic normalization level J̄* of the average synaptic efficacy (see section 2.1, equation 2.8), the initial condition can be interpreted as random synaptic wiring of the CA3-CA1 connections with the assumption that the synaptic efficacies are normalized. The synaptic efficacies are hard-bounded between the two saturation levels 0 and Jmax, and the correlation matrix Qnm in equation 2.6 is estimated numerically using the input intensities λn(t) introduced in section 2.2 for the four cases considered. The parameter values for all numerical simulations are reported in appendix A.
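
The simulation loop can be sketched as follows. The parameter values and the correlation matrix below are hypothetical stand-ins (a circulant matrix with positive local correlations and zero row sums), not the quantities computed from equations 2.3 to 2.5; the sketch only illustrates the forward-Euler step and the hard bounds.

```python
import numpy as np

# Forward-Euler integration of linear learning dynamics with hard
# bounds. k1, k2, and Q are hypothetical stand-ins, chosen so that the
# mean efficacy is normalized while local correlations are amplified.
rng = np.random.default_rng(1)

N, J_max = 100, 0.05
k1, k2 = 1e-3, -5e-4            # drive and normalization (k2 < 0)
sigma_idx = 5.0                 # correlation width (input indices)

d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
d = np.minimum(d, N - d)        # circular distance between inputs
C = np.exp(-d**2 / (2 * sigma_idx**2))
Q = 0.01 * (C - C.mean())       # circulant, zero row sums

J = 0.02 + rng.uniform(-0.002, 0.002, N)   # random initial wiring
J0 = J.copy()

dt, steps = 0.1, 3000
for _ in range(steps):
    J += dt * (k1 + k2 * J.sum() + Q @ J)  # Euler step
    np.clip(J, 0.0, J_max, out=J)          # saturation bounds
```

With these stand-in values, the efficacies split toward the two bounds while the mean stays moderate, qualitatively mirroring the competition described below.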

3.1  Nonmodulated Inputs

We first study the development of the synaptic efficacies for the simplest case, nonmodulated inputs. Figure 3 shows the results obtained for a random initial condition. The synaptic efficacies diverge toward the two saturation levels until, at the end of the simulation, they are bimodally distributed at 0 and Jmax (see Figure 3A). This symmetry breaking derives from the competitive dynamics that is intrinsic to the additive STDP learning rule we adopt. Additionally, the synaptic efficacies cluster in such a way that only synapses corresponding to neighboring CA3 inputs are potentiated while all the others are depressed (see Figures 3B and 3C), such that a place field emerges in CA1 (see Figure 3D).

Figure 3:

Emergence of a CA1 place field for nonmodulated inputs and bounded synaptic efficacies. (A) Efficacies Jn(t) of individual synapses as a function of time t. (B) Temporal evolution of the synaptic efficacy vector (gray scale). The synaptic efficacies are ordered according to the topography of the associated CA3 place fields; that is, consecutive indices in the efficacy vector correspond to neighboring CA3 place fields. (C) Snapshots of the synaptic efficacies at the four time points indicated (arrows in B). In the first panel (t = 0), the horizontal dashed line corresponds to the normalization level J̄*. (D) Ensemble-averaged rate of the CA1 output neuron, computed at the same time points as in panel C.

The location where a CA1 place field emerges depends only on the initial condition, which is random. Therefore, in our model, random synaptic wiring is sufficient to explain a spatial distribution of place fields in a population of CA1 cells (see Figure 4A). The size of a CA1 place field is proportional to the number of strongly potentiated CA3-CA1 synapses, which in turn depends on the upper saturation boundary Jmax. Since STDP tends to keep the average synaptic efficacy normalized, the number of strongly potentiated synapses decreases as the upper saturation boundary Jmax increases (see Figure 4B). We choose Jmax small enough to avoid the trivial case in which only a single synapse survives. This case is obtained numerically for large Jmax because the number of synapses is discrete and the learning rule normalizes the average synaptic efficacy to a value slightly higher than J̄* (see equation 2.8).

Figure 4:

Steady-state CA1 place fields. (A) Different CA1 place fields emerge from different random initial conditions. (B) Number of synapses that reached the upper saturation boundary Jmax as a function of Jmax (nonmodulated inputs). The three insets show the efficacy vector at steady state for different values of Jmax (dashed lines).

3.2  Theta-Modulated Inputs

To investigate the roles of theta modulation and phase precession in the inheritance process, we study the development of the CA3-CA1 synaptic efficacies in the three remaining scenarios: in-phase modulation, phase precession, and phase recession. For ease of comparison, we adopt the same initial condition and the same parameter values in all scenarios (see appendix A). We run the simulations both without and with imposing the saturation boundaries 0 and Jmax on the synaptic efficacies. The case of unbounded synapses, although less biologically realistic, is easier to study analytically (see section 4) and allows disentangling the fundamental system dynamics from the boundary effects. Example simulations for all cases are reported in Figure 5. First, we note that in all cases a place field emerges in CA1: a dark stripe in a plot indicates a cluster of potentiated CA3-CA1 synapses associated with neighboring CA3 input place fields (see Figure 3B). Additionally, the four scenarios show interesting differences concerning two main points: the time taken for a CA1 place field to emerge and the tendency of a CA1 place field to drift over time.

Figure 5:

Emergence of a CA1 place field in the four input scenarios: no modulation, in-phase modulation, phase precession, and phase recession (from left to right). (A, B) The efficacy vector is coded on a gray scale and plotted over time, both without (A) and with (B) the two saturation boundaries 0 and Jmax. In (A), the efficacy vector is normalized to unit norm at each time point for visualization purposes (the actual values diverge). The dashed vertical lines indicate the time points at which a CA1 place field is formed (see main text). In (B), steady-state CA1 place fields are stable in all input scenarios. (C) Same as in (B) but with a larger number of input synapses (see appendix A). Steady-state CA1 place fields drift at constant speed in all input scenarios. (D) Efficacy vector for the simulations in (C) at a late time point. Efficacy distributions are negatively skewed for backward-drifting CA1 place fields (no modulation, in-phase modulation, and phase precession) and positively skewed for forward-drifting CA1 fields (phase recession).

To quantify these effects, we define the CA1 tuning index
a(t) = J̃1(t) / J̃0(t),
3.1
where J̃k(t) is the kth component of the discrete Fourier transform (DFT) of the efficacy vector J(t), and the DFT was defined as
J̃k(t) = Σ_{n=1}^{N} Jn(t) e^{−i 2π k n / N}
3.2
for k = 0, …, N − 1. The tuning index is a complex number that can be interpreted as follows: its modulus |a|, the vector strength of J, indicates the degree of spatial tuning of a CA1 neuron; its phase arg(a) indicates the position of a CA1 place field on the track (see Figure 6).
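
A compact implementation of a tuning index of this kind (assuming, as an illustration, the normalization a = J̃1/J̃0):

```python
import numpy as np

# Tuning index from the DFT of the efficacy vector, assuming the
# normalization a = J_tilde_1 / J_tilde_0 (an illustrative choice).
def tuning_index(J):
    Jf = np.fft.fft(J)     # Jf[k] = sum_n J[n] * exp(-2j*pi*k*n/N)
    return Jf[1] / Jf[0]

N = 100
uniform = np.full(N, 0.02)        # untuned: vector strength ~ 0
bump = np.zeros(N)
bump[40:50] = 0.05                # localized cluster: strength ~ 1
```

For the uniform vector, |a| vanishes; for the localized cluster, |a| is close to 1, and arg(a) encodes the cluster position on the ring.
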
Figure 6:

Interpretation of the CA1 tuning index a. The emergence of a CA1 place field is shown in the phase-precession scenario with bounded synapses (see Figures 5B and 7B). The rates of the CA1 output neuron (top), the synaptic efficacy vector (middle), and the tuning index a (bottom) are shown at five time points (A–E). The triangles in the top panels indicate the center of the place field (the location where the CA1 rate is at its maximum). (A) At the initial condition (t = 0), the CA1 neuron is not spatially tuned, and the vector strength |a| is close to zero. (B–E) As the simulation time increases, the lowest-frequency component of the efficacy vector dominates, the CA1 neuron becomes more spatially tuned, and the vector strength increases. Panel C illustrates the vector strength that we consider as a threshold for the presence of a place field. In panel E, the synaptic efficacies reach saturation, and the vector strength approaches 1. During place field formation, the CA1 place field center (black triangle in A) shifts backward in space, resulting in a corresponding shift in the phase of a (panels B–E).

In Figure 7 we show the dynamics of the tuning index for the simulations in Figure 5. In all cases, the vector strength initially increases exponentially, and for bounded synapses, it saturates at about (see Figures 7B and 7C, left panels). Comparing the vector strengths in the four scenarios, we note that for theta-modulated CA3 inputs, a CA1 place field emerges faster than for no modulation. This is true for both bounded and unbounded synapses, and the presence of the synaptic saturation boundaries does not considerably affect the time of place field formation (see Figures 7A and 7B, insets). Among the three scenarios with theta-modulated inputs, the in-phase modulation is the one where a CA1 place field is formed fastest, while phase precession and phase recession lead to slower place field formation. These results are validated by analytical calculations (see equation 4.11 and Figure 12C).
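The tuning index of equation 3.1 is, in essence, a circular population-vector average of the CA1 rate profile. As a minimal sketch of such a vector-strength measure (the function name and rate profiles below are ours, for illustration only):

```python
import numpy as np

def tuning_index(rates):
    """Vector strength of a firing-rate profile on a circular track.

    `rates` holds nonnegative rates at equally spaced track positions.
    Returns (modulus, phase): the modulus is 0 for a flat profile and
    approaches 1 for a sharply tuned one; the phase points at the
    place-field center (in radians).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, len(rates), endpoint=False)
    z = np.sum(rates * np.exp(1j * theta)) / np.sum(rates)
    return np.abs(z), np.angle(z)

# Hypothetical profiles: a flat rate map and a von-Mises-like bump at 1 rad.
positions = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
flat = np.ones_like(positions)
tuned = np.exp(5.0 * np.cos(positions - 1.0))
```

For the flat profile the vector sum cancels and the modulus is (numerically) zero; for the bump the modulus is large and the phase recovers the field center.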

Figure 7:

Dynamics of place field formation. Modulus (left) and phase (right) of the tuning index for the simulations in Figure 5 in the four input scenarios (legend in panel C). The modulus of the tuning index (logarithmic scale) indicates the tuning strength of the CA1 neuron, and its phase indicates the position of a CA1 place field on the circular track. The insets (linear scale) show the time points at which a CA1 place field emerges (dotted lines). (A,B) Simulations with input synapses and with (A) unbounded and (B) bounded synaptic efficacies. In the unbounded case, numerical results perfectly match the analytical calculations (see equations 4.7 and 4.8). (C) Simulations with input synapses and bounded synaptic efficacies.

With unbounded synapses, the system dynamics are unstable, and a CA1 place field drifts over time (see Figures 5A and 7A, right panel). The drift of a CA1 place field is backward with respect to the direction of movement in the first three cases and forward in the fourth one (phase recession). Additionally, the phase of the input modulation relative to the theta rhythm (phase precession or phase recession) affects the drift velocity of a CA1 place field. While the drift velocity is almost equal in the no-modulation and in-phase modulation scenarios, it increases considerably with phase precession, and it decreases, even becoming negative, with phase recession. These results are also validated analytically (see equation 4.12 and Figure 12D).

The saturation boundaries have a stabilizing effect on the tendency of a CA1 place field to drift over time. With a small number of CA3 inputs, bounded synapses can stabilize CA1 place fields. For example, with CA3 inputs, stable CA1 place fields are obtained in all input scenarios (see Figures 5B and 7B). As the number of CA3 inputs increases, CA1 place fields start drifting. For example, with , drifting CA1 place fields are obtained in all input scenarios (see Figures 5C and 7C). Nevertheless, with bounded synapses, CA1 place fields always drift more slowly compared to the unbounded case (see Figure 8B, right panel). This can be understood as follows: since the saturated synaptic efficacies cannot exceed their boundaries, at each time point only a fraction of the available weights can change and thereby contribute to the drift of a CA1 place field. Note that simulation results with different values of N (as in Figures 5B and 5C, 7B and 7C, and 8A and 8B) are best compared by keeping the fraction of strongly potentiated synapses constant. Hence, we set , where is the fraction of input synapses that are expected to be saturated at the upper bound by the intrinsic normalization of the learning rule (see equation 2.8).

Figure 8:

Emergence and stability of CA1 place fields. (A–B) Time of emergence (left) and steady-state drift speed (right) as a function of the number N of input synapses. Numerically, the time of emergence is estimated as the time at which the tuning index crosses its threshold, and the drift speed is estimated from the phase change of the tuning index in a time window of length 1 s. Analytically, with unbounded synapses, the two quantities are given by equations 4.11 and 4.12, respectively (see also Figure 12). (A) Analytical and numerical results match in the case of unbounded synapses. The saturation boundaries do not affect the time of place field formation (B, left) but decrease the steady-state drift speed of a CA1 place field (B, right). (C) The time of emergence (left) and the steady-state drift speed (right) as a function of the input-modulation frequency f0 with unbounded synaptic efficacies. The dotted vertical lines indicate the frequency used in panels A and B and in Figure 5.

The larger the number N of CA3 input synapses, the faster a CA1 place field emerges and the faster it drifts over time (see Figures 8A and 8B; see also equations 4.11, 4.12, and 4.22). Indeed, for a fixed track length, the spacing between two neighboring input place fields decreases as N increases, which results in a higher overlap between input and output activities. However, both the time of place field formation and the drift speed also scale with the learning rate (see equation 4.22), which was not constrained by experimental data. Hence, our model cannot predict the actual time of learning or provide accurate estimates of the drift speed, yet these two quantities can be compared across different input scenarios.

With bounded synapses and a large number of CA3 inputs, the steady-state distributions of the synaptic efficacies are qualitatively different in the four input scenarios (see Figure 5D). Backward-drifting CA1 place fields (no modulation, in-phase modulation, and phase precession) are negatively skewed, whereas forward-drifting CA1 place fields (phase recession) are positively skewed. Additionally, the higher the drift speed, the higher the skewness of the steady-state CA1 fields. Interestingly, backward-drifting and negatively skewed place fields have also been reported experimentally (Mehta et al., 1997; Shen, Barnes, McNaughton, Skaggs, & Weaver, 1997; Mehta, Quirk, & Wilson, 2000; Frank, Eden, Solo, Wilson, & Brown, 2002; Lee, Rao, & Knierim, 2004), and the blockade of NMDAR-dependent synaptic plasticity was found to abolish the effect (Ekstrom, Meltzer, McNaughton, & Barnes, 2001). Experimental data are thus consistent with our place field inheritance model in the phase-precession scenario, which is also the most biologically plausible one.

In the three scenarios with theta-modulated inputs, the input-modulation frequency f0 was kept constant, while the frequency of the theta rhythm changed accordingly, that is, for in-phase modulation, for phase precession, and for phase recession (see section 2.2). This choice made the analytical calculations easier, but similar results are obtained when the theta frequency is kept constant instead (not shown). Moreover, neither the time of place field formation nor the drift speed depends critically on the input-modulation frequency f0 (see Figure 8C).

Next, we tested the robustness of place field inheritance against several sources of variability in the system: the stochasticity of spike timing and the variability in the size and the location of the CA3 input place fields (see Figure 9). For the STDP rule we adopt, the effect of stochastic spiking on structure formation has been studied by Kempter et al. (1999). It was shown that, given a sufficiently small learning rate, the diffusion of the synaptic efficacies due to noise is slow compared to structure formation, and input correlations, however small, can always be learned. We demonstrate this result in our place field inheritance model by means of simulations with spiking neurons.

Figure 9:

Place field formation in the presence of noise. A pool of CA3 inputs projects to a CA1 neuron. For each input, spike times are drawn from an inhomogeneous Poisson process, and the STDP rule is applied (see section 2.1). (A) Top: Three examples of CA3 input intensities with variable place field sizes (phase-precession scenario). Place field centers are randomly distributed on the circular track. Gray shades indicate rising phases of the theta cycle. Bottom: Phase-time plots of the spikes generated from the intensities above for one traversal of the circular track. The gray lines are circular-linear fits. (B) Efficacy vector coded on a gray scale and plotted over time. (C) Expected rate (top) and phase-time plot of spikes (bottom) for the CA1 neuron with the efficacy vector at the end of the simulation. (D,E) Modulus and phase of the tuning index for the four scenarios considered.

Input and output spikes are drawn from inhomogeneous Poisson processes, and the synaptic efficacies are updated according to the STDP rule described in section 2.1. Input intensities are modeled as in section 2.2 (see equations 2.10, 2.13, 2.15, and 2.17), but with variable place field sizes and randomly distributed place field centers (see appendix A; see also Figure 9A for examples in the phase-precession scenario). We consider CA3 synaptic inputs, and the upper saturation boundary is set as in Figures 5C and 7C, so that about synapses are expected to be saturated at the upper bound (see equation 2.8). Finally, the learning rate is lowered compared to Figures 5 and 7, such that the synaptic efficacies diffuse much more slowly than structure formation proceeds (see Kempter et al., 1999, for an approximation of the diffusion time constant).
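Inhomogeneous Poisson spike trains of this kind can be generated, for example, by thinning. The sketch below uses a made-up theta-modulated gaussian intensity; the peak rate, field width, and 8 Hz modulation frequency are illustrative values, not the parameters of section 2.2:

```python
import numpy as np

def poisson_spikes(intensity, t_max, rate_max, rng):
    """Spike times from an inhomogeneous Poisson process by thinning:
    draw candidate spikes at the constant rate `rate_max`, then keep
    each candidate at time t with probability intensity(t) / rate_max."""
    n = rng.poisson(rate_max * t_max)
    t = np.sort(rng.uniform(0.0, t_max, n))
    keep = rng.uniform(0.0, 1.0, n) < intensity(t) / rate_max
    return t[keep]

def intensity(t):
    """Theta-modulated gaussian place field (illustrative parameters)."""
    envelope = 20.0 * np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)   # peak 20 Hz at t = 5 s
    return envelope * (1.0 + np.cos(2.0 * np.pi * 8.0 * t))   # 8 Hz theta modulation

rng = np.random.default_rng(0)
spikes = poisson_spikes(intensity, t_max=10.0, rate_max=40.0, rng=rng)
```

Thinning is exact as long as `rate_max` bounds the intensity from above, which holds here since the modulated intensity never exceeds twice the envelope peak.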

Figure 9B shows the evolution of the weight vector for a random initial condition in the phase precession scenario. As shown for the averaged system (see Figure 5), a CA1 place field emerges and drifts backward over time. The CA1 cell also phase-precesses (see Figure 9C), that is, phase precession is inherited from CA3 to CA1, which is consistent with the work of Jaramillo, Schmidt, and Kempter (2014). Hence, place field inheritance through STDP is robust to stochastic spike timing and to variability in size and location of the CA3 input place fields. Finally, we computed the CA1 tuning index in the four input scenarios (see Figures 9D and E) and confirmed two main findings obtained with the averaged system: theta modulation leads to faster place field formation, and phase precession leads to a faster drift of the inherited place fields.

We now provide an intuitive interpretation of the results presented in this section. The fact that a CA1 place field emerges faster with in-phase modulation than with no modulation can be understood as follows. Since in both scenarios the time-averaged input rates have the same value, the effect of theta modulation is to redistribute the input spikes within time windows the length of a theta period (see Figure 10). This redistribution increases the probability for a pair of input-output spikes to be separated by a time lag that is effective for the plasticity rule and thus favors structure formation. A similar redistribution of spikes is also produced in the phase-precession and phase-recession scenarios, leading to faster structure formation relative to the unmodulated case. However, as a result of the phase shifts, the overlap between input and output activities is reduced relative to the in-phase modulation case, and fewer pairs of input-output spikes interact with the learning window, leading to slower structure formation.
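This redistribution argument can be probed numerically: for the same time-averaged rate, theta-modulated Poisson trains yield more spike pairs at short lags, where the learning window is effective, than unmodulated trains. A sketch with arbitrary rates, duration, and lag window:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0                      # time step (s), duration (s)
t = np.arange(0.0, T, dt)
rate, f_theta = 10.0, 8.0                # mean rate (Hz), theta frequency (Hz)

def sample(intensity):
    """Bernoulli approximation of a Poisson train (at most one spike per ms bin)."""
    return t[rng.uniform(size=t.size) < intensity * dt]

def close_pairs(pre, post, window=0.02):
    """Count pre-post spike pairs with |lag| below `window` seconds."""
    lags = post[None, :] - pre[:, None]
    return int(np.sum(np.abs(lags) < window))

flat = np.full(t.size, rate)                              # unmodulated
theta = rate * (1.0 + np.cos(2.0 * np.pi * f_theta * t))  # same time average

pairs_flat = close_pairs(sample(flat), sample(flat))
pairs_theta = close_pairs(sample(theta), sample(theta))
```

With the common theta phase concentrating spikes into the same part of each cycle, `pairs_theta` typically exceeds `pairs_flat` by a large margin.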

Figure 10:

Schematic representation of emergence and drift of CA1 place fields. Right column: Four CA3 cells with close-by place fields and equal synaptic efficacies project to a single CA1 cell. Left column: The activities of two CA3 inputs are shown, that is, the inputs whose place fields are crossed first (black) and last (light gray) as the virtual rat runs from left to right (dashed arrow) on the circular track. The dark gray shade depicts the output activity of the CA1 cell (flipped). For theta-modulated inputs, the vertical lines depict the times at which input and output activities peak within a theta cycle (only for the cycles where input-output activities largely overlap). In the right-hand-side schemata, upward (downward) pointing arrows indicate net LTP (LTD) at the corresponding input synapses, and the horizontal arrow indicates the drift direction of the CA1 place field.

The drift of a CA1 place field can be understood as follows. Consider that a place field has emerged in a CA1 cell and that a set of CA3 inputs with close-by place fields contributes equally to its firing. In this case, the center of the CA1 field, that is, the time at which the cell fires maximally, is given by the mean of the CA3 input place field centers plus the time delay introduced by axonal transmission and synaptic filtering. For nonmodulated CA3 inputs and a rat always running in the same direction on the circular track, about half of the CA3 inputs are preferentially activated before the CA1 cell, whereas the other half are preferentially activated after it. This activation pattern, combined with the STDP learning rule, induces synaptic-efficacy changes that are biased toward LTP for the first half of the inputs and toward LTD for the second half, leading to a CA1 place field that drifts backward with respect to the animal's direction of motion (see Figure 10, top panel).

Similarly, for theta-modulated inputs, the drift of a CA1 place field can be understood in terms of the times at which input and output activities peak within a cycle of the theta oscillation. Moreover, the dynamics of the synaptic changes, and hence of the CA1 place-field drift, is dominated by the activation patterns occurring in the theta cycles where input and output activities overlap maximally. For in-phase modulated inputs, all CA3 inputs peak before the CA1 output (see Figure 10, second panel), and the order of input activation is consistently preserved across theta cycles. In this case, early-activated inputs undergo net LTP, whereas late-activated inputs undergo net LTD, leading to a backward-drifting CA1 place field. This is due to both the asymmetry of the STDP window and the non-Hebbian normalization terms in the learning rule (see section 2.1).

In the phase-precession case, the input-activity peaks are ordered as in the previous case, but due to the phase shifts, they occur further apart in time, and late-activated inputs tend to peak after the CA1 output (see Figure 10, third panel). In this scenario, early-activated (late-activated) inputs undergo stronger LTP (LTD) compared to the in-phase modulation case, and the CA1 place field drifts faster. Finally, with phase recession, the ordering of the input-activity peaks is reversed compared to the previous two scenarios (see Figure 10, bottom panel), leading to a CA1 place field that drifts in the opposite direction, that is, forward with respect to the direction of motion.

4  Analytical Results on Place-Field Inheritance

In this section, we study analytically the place field inheritance model presented in section 2. The equation describing the dynamics of the synaptic efficacies, equation 2.6, can be solved using classical dynamical-systems theory if the saturation boundaries are ignored, which is reasonable for describing the formation of place fields (see Figure 7). Rewriting equation 2.6 in matrix form, we obtain
formula
4.1
where is an matrix , is the identity matrix of size , and is the constant vector . We observe that is a circulant matrix, that is, a Toeplitz matrix whose columns are obtained by cyclic permutations of a single vector (see appendix  B for a proof). A circulant matrix has a set of N independent eigenvectors and N distinct eigenvalues such that and where are the Nth roots of unity, that is, , and the vector is the first column of . Therefore, all the solutions of the dynamical system can be expressed in the eigenvectors’ basis,
formula
4.2
where the coefficients depend on the initial condition, and the vector is the fixed point of the dynamical system, which is given by (see equation 2.8 and appendix  C). Substituting the eigenvectors in equation 4.2, we obtain
formula
4.3
From equation 4.3, rewriting the coefficients in polar form and expanding the eigenvalues into their real and imaginary parts, we obtain
formula
4.4
Note that the imaginary part of vanishes,
formula
4.5
because within the sum in equation 4.5, term k cancels out with term N − k, and the unpaired term equals 0. Indeed, both the coefficients and the eigenvalues are DFTs of real vectors (see equations 4.2 and 4.13), from which it follows that they obey conjugate symmetry: the kth and (N − k)th components are complex conjugates of each other. Equation 4.4 shows that the solutions of the dynamical system are linear combinations of exponentially modulated sinusoids. The real parts of the eigenvalues determine the exponential growth or decay, whereas the imaginary parts determine the frequency of the oscillations.
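The diagonalization of a circulant matrix by the DFT can be verified directly. A sketch with an arbitrary first column (not the model's correlation matrix):

```python
import numpy as np

# Build a circulant matrix from an arbitrary first column c:
# column j is c cyclically shifted down by j, so C[i, j] = c[(i - j) mod N].
c = np.array([2.0, -1.0, 0.5, 0.0, 0.5, -1.0])
N = len(c)
C = np.column_stack([np.roll(c, j) for j in range(N)])

# Eigenvalues: the DFT of the first column.
eigvals = np.fft.fft(c)

# Eigenvectors: the Fourier modes v_m with components exp(2*pi*i*m*k/N).
k = np.arange(N)
modes = [np.exp(2j * np.pi * m * k / N) for m in range(N)]
```

Each mode satisfies C v_m = λ_m v_m, so any initial efficacy vector decomposes into independently evolving Fourier components, which is what makes the solution 4.2 possible.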
To study the emergence of a CA1 place field, we defined the CA1 tuning index (see equation 3.1), whose modulus indicates the strength of spatial tuning of a CA1 neuron and whose phase indicates the position of a CA1 place field on the track. We now show that the modulus (phase) of the CA1 tuning index can be computed from the real (imaginary) part of the corresponding eigenvalue. By taking the DFT on both sides of equation 4.3, we derive
formula
4.6
and by using equation 4.6 in equation 3.1 and observing that is real, we obtain
formula
4.7
formula
4.8
For and , if the real part of dominates, that is, for , a CA1 place field emerges, and the modulus of the CA1 tuning index increases with time. Note that sets the time constant for the normalization of the average synaptic efficacy (see equation 4.6),
formula
4.9
that is, for , the average synaptic efficacy approaches the fixed point with time constant . We assume that the normalization dynamics is much faster than the dynamics of structure formation: . In this case, the modulus of the CA1 tuning index is approximated by (equation 4.7),
formula
4.10
meaning that a CA1 place field emerges () at a time
formula
4.11
that is inversely proportional to . Finally, from equation 4.8, we observe that a CA1 place field drifts with a velocity
formula
4.12
where the factor converts the velocity from angular to linear units. In the following section, we estimate and analytically from three basic functions defined in our inheritance model: the CA3 place fields’ gaussian function G, the STDP learning window W, and the EPSP .
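Given the dominant eigenvalue, the two observables follow directly: the tuning-index modulus grows exponentially with rate Re(λ), and its phase advances at rate Im(λ). A numerical sketch with made-up values (the eigenvalue, initial modulus, threshold, and track length below are all assumptions, not the paper's parameters):

```python
import numpy as np

lam = 0.02 + 0.005j      # dominant eigenvalue (1/s), assumed
a0 = 0.01                # initial tuning-index modulus, assumed
threshold = 0.3          # modulus at which a place field is said to have emerged
track_length = 3.0       # circular-track length (m), assumed

# |A|(t) = a0 * exp(Re(lam) * t)  =>  emergence when |A|(t) = threshold
t_emerge = np.log(threshold / a0) / lam.real

# The phase advances at Im(lam) rad/s; convert angular to linear drift speed.
v_drift = track_length / (2.0 * np.pi) * lam.imag
```

The emergence time is inversely proportional to the real part, and the drift speed is proportional to the imaginary part, matching the qualitative statements around equations 4.11 and 4.12.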

4.1  Estimation of the System Eigenvalues

The eigenvalues of the circulant matrix can be computed from the DFT of its first column :
formula
4.13
and using the definition of in equation 4.1, we obtain
formula
4.14
where is the DFT of the first column of the correlation matrix .
To calculate , we first show that each element of the matrix is obtained by sampling a continuous function that depends only on the distance d between two place fields. From equations 2.5 and 2.9, we derive
formula
4.15
where . To characterize the input intensity , we need to distinguish the four input scenarios considered: no modulation, in-phase modulation, phase precession, and phase recession.
In case of nonmodulated inputs, using the definition of in equation 2.10 and defining the rescaled gaussian function , we obtain
formula
4.16
where is the distance between place fields a and b. By using equation 4.16, we rewrite Qab, equation 4.15, as a function of the distance d only:
formula
4.17
hence,
formula
4.18
where and denote, respectively, the convolution and the cross-correlation of f and g evaluated at point t. Equation 4.18 can be rewritten as
formula
4.19
where we define
formula
4.20
and is the so-called effective learning window (Sprekeler, Michaelis, & Wiskott, 2007).
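The effective learning window is the STDP window filtered by the EPSP kernel. As a numerical sketch with generic exponential shapes and illustrative time constants (not the parameters of appendix A):

```python
import numpy as np

dt = 1e-4                               # time resolution (s)
t = np.arange(-0.2, 0.2, dt)            # lag axis (s)

# Raw STDP window: LTP for positive lags, weaker LTD for negative lags
# (exponential shapes with illustrative time constants).
tau_p, tau_m = 0.017, 0.034
W = np.where(t >= 0, np.exp(-t / tau_p), -0.5 * np.exp(t / tau_m))

# Causal EPSP kernel (normalized exponential, illustrative time constant).
tau_eps = 0.005
eps = np.where(t >= 0, np.exp(-t / tau_eps) / tau_eps, 0.0)

# Effective window: the STDP window filtered by the EPSP kernel.
W_eff = np.convolve(W, eps, mode="same") * dt
```

Filtering by a causal EPSP smooths the discontinuity of the raw window and shifts its peak toward positive lags, which is the kind of reshaping the effective-window formalism captures.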
In appendix  E we show that equation 4.19 holds also for the three remaining scenarios, in-phase modulation, phase precession, and phase recession, when the function is appropriately defined. In these cases, in appendix  E (see equation E.23), we derive
formula
4.21
where reads as a minus (plus) for phase precession (recession), by setting one recovers the in-phase modulation scenario, and we defined and . The approximation in equation 4.21 holds when the gaussian width is much larger than the period of the modulating sinusoid (see appendix  E). Finally, using equations 4.14 and 4.19, we compute the eigenvalues from the continuous Fourier transform of (see appendix  D, equation D.14):
formula
4.22
Assuming N even, we defined , where is the indicator function ( for , for ). The approximation in equation 4.22 holds when the gaussian width is much larger than both the width of the learning window W and the spacing of the input place fields, that is, when the input place fields largely overlap (see appendix  D).
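The final approximation, obtaining the eigenvalues by sampling a continuous Fourier transform, can be checked on a periodic gaussian bump (the width and period below are made-up values):

```python
import numpy as np

L = 1.0                                   # period of the track (arbitrary units)
N = 512                                   # number of samples
sigma = 0.05                              # gaussian width, much smaller than L
x = np.arange(N) * L / N
d = np.minimum(x, L - x)                  # circular distance to 0
q = np.exp(-0.5 * (d / sigma) ** 2)       # periodic gaussian bump

# The DFT of the samples, scaled by the spacing, approximates the continuous
# Fourier transform of q at the frequencies k / L.
lam = np.fft.fft(q) * (L / N)

# Continuous FT of a gaussian: sigma*sqrt(2*pi)*exp(-2*(pi*sigma*f)^2).
f1 = 1.0 / L
predicted = sigma * np.sqrt(2.0 * np.pi) * np.exp(-2.0 * (np.pi * sigma * f1) ** 2)
```

Because the bump is smooth and narrow relative to the period, the lowest-frequency DFT coefficient agrees with the continuous transform to high accuracy, mirroring the approximation of equation 4.22.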

4.2  Interpretation of the Analytical Results

We showed that the eigenvalues can be approximated by sampling the continuous Fourier transform of (see equation 4.22), where takes a different form in each of the four scenarios considered (see equations 4.20 and 4.21). To elucidate the meaning of this result in terms of the place field inheritance, we study qualitatively in each scenario.

In case of no modulation, applying the continuous Fourier transform to equation 4.20, we derive
formula
4.23
Recalling that the Fourier transform of a gaussian is real, the real (imaginary) part of is given by the product between the real (imaginary) part of and the gaussian function (see Figure 11). The real part of has a maximum at if, for and increasing , the gaussian decreases faster than the rate of change of (see Figure 11).
Figure 11:

Estimation of the eigenvalues in case of nonmodulated inputs. Parameter values are reported in appendix  A. Panel A (B) shows the real (imaginary) part of (black lines) and the gaussian function (gray lines, equation 4.23). Both functions are normalized to values between and 1 for easy graphical comparison. The inset in panel A is a zoom on the area where the two functions intersect. Panel C (D) shows the real (imaginary) part of (gray line) and the real (imaginary) parts of the eigenvalues (dots, equation 4.22). The eigenvalues were computed numerically from the system matrix (see equation 4.1). The lowest-frequency eigenvalue (frequency Hz) is indicated by a dotted vertical line, whereas is not shown. The real part of sets the time of place field formation (see equation 4.11), whereas the imaginary part of determines the drift speed of a CA1 place field (see equation 4.12). See also Figure 8A ().

In this case, the real part of the lowest-frequency eigenvalue dominates, and a CA1 place field emerges (see equations 4.6 and 4.10). Conversely, a CA1 place field would not emerge if the gaussian were much wider and, as a result, had a local minimum rather than a maximum at low frequencies. Such a broad gaussian in the frequency domain corresponds to a narrow gaussian in the time domain; in this case, the CA3 input place fields would be narrow compared to the width of the effective learning window, a scenario that is biologically unrealistic. Additionally, a CA1 place field emerges in a time that is inversely proportional to the real part of the lowest-frequency eigenvalue (see equation 4.11), and it drifts with a velocity proportional to its imaginary part (see equation 4.12).

In case of in-phase modulation, applying the continuous Fourier transform to equation 4.21 with , we derive (see appendix  F, equation F.6, for ):
formula
4.24
Comparing this result with the one obtained for nonmodulated inputs, we note that two terms are added to the expression in equation 4.23. These two terms are given by the Fourier transform of the effective learning window shifted by the modulation frequency and scaled by a constant factor. Since the real part of this transform is positive over the whole spectrum (see Figure 11A), the two additional terms in equation 4.24 lead to a larger real part of the lowest-frequency eigenvalue compared to no modulation (see Figures 12A and 12C, black solid lines), meaning that a CA1 place field emerges faster. Interestingly, this effect is enhanced by the fact that the real part of the transformed learning window has its maxima around the theta frequency, meaning that the STDP window is most effective for inputs in the theta frequency range, as also observed by Albers, Schmiedt, and Pawelzik (2013). Additionally, since the imaginary part is slightly smaller compared to no modulation (see Figures 12B and 12D, black solid lines), a CA1 place field drifts more slowly. These two effects are in accordance with the numerical simulations (see Figures 5A and 7A).
Figure 12:

Estimation of the eigenvalues in the case of modulated inputs. Parameter values are reported in appendix  A. Columns of A (B) show the three summands of the real (imaginary) parts of (see equations 4.24 to 4.26 for in-phase modulation, phase precession, and phase recession, respectively). All functions in panels A and B are normalized to values between and 1 for easy graphical comparison. Panel C (D) shows the real (imaginary) part of (lines) and the real (imaginary) parts of the eigenvalues (dots, squares, and triangles, equation 4.22). The eigenvalues were computed numerically from the system matrix (see equation 4.1). The lowest-frequency eigenvalue (frequency Hz) is indicated by a dotted vertical line, whereas is not shown. The function and the eigenvalues are also shown in the no-modulation case for comparison (gray solid lines and gray dots). The real part of sets the time of place field formation (see equation 4.11), whereas the imaginary part of determines the drift speed of a CA1 place field (see equation 4.12). See also Figure 8A ().

In case of phase precession, we derive (see appendix  F, equation F.6)
formula
4.25
As before, the transform is obtained as a sum of three terms that include shifted versions of the Fourier transform of the effective learning window. However, in contrast to the previous case, the Fourier transform of the effective learning window is now shifted by a slightly smaller amount, and, most important, the gaussian function is shifted in the opposite direction. As shown in Figures 12A and 12C (black dashed lines), this results in the real part having a maximum at zero that is lower than for in-phase modulation but higher than for no modulation. Therefore, a CA1 place field emerges more slowly than with in-phase modulation but more quickly than with no modulation. Moreover, since the imaginary part is larger than in the other two scenarios (see Figures 12B and 12D, black dashed lines), a CA1 place field also drifts faster. Again, these two effects are in accordance with the numerical simulations (see Figures 5A and 7A).
Finally, in the case of phase recession, we derive (see appendix  F, equation F.6 with c replaced by ):
formula
4.26
This expression is analogous to the one obtained for phase precession (see equation 4.25), but with the direction of the frequency shifts inverted. As shown in Figure 12 (gray dashed lines), this inversion affects mainly the imaginary part, meaning that in passing from phase precession to phase recession, only the drift velocity of a CA1 place field is affected. Specifically, the imaginary part is now smaller than in all other cases and even assumes a small negative value. Therefore, in this scenario, a CA1 place field slowly drifts in the animal's direction of motion, as observed in the numerical simulations (see Figures 5A and 7A).

5  Discussion

We studied the inheritance of place fields from region CA3 to region CA1 of the hippocampus through a Hebbian learning rule: spike-timing-dependent plasticity (STDP). We proposed a minimal computational model of a population of CA3 place cells projecting to a single CA1 cell, considering a virtual rat running on a circular track. The activity of sequentially activated CA3 place cells with overlapping place fields, combined with the STDP learning rule, induced a symmetry breaking in the distribution of the CA3-CA1 synaptic efficacies such that only synapses associated with neighboring CA3 place fields were potentiated. In this setting, a place field emerged in CA1. Then we studied the effects of theta modulation and phase precession on the place field inheritance process. We found that theta modulation favors the inheritance and leads to faster place field formation, whereas phase precession favors the drift of CA1 place fields over time.
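The symmetry-breaking mechanism summarized above can be reproduced with a minimal numerical sketch. The code below is not the letter's model: it approximates the averaged STDP dynamics by dw/dt = Qw with a gaussian, distance-dependent correlation matrix, hard saturation boundaries, and multiplicative weight normalization; all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch (illustrative parameters): N CA3 place cells with overlapping
# gaussian tuning on a circular track project to a single CA1 cell.
N, sigma = 20, 2.0
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                    # circular distance between field centers
Q = np.exp(-d ** 2 / (2 * sigma ** 2))      # overlap-driven input correlations

rng = np.random.default_rng(0)
w = 0.5 + 0.01 * rng.standard_normal(N)     # nearly homogeneous random initial efficacies
W_total = w.sum()                           # target of the weight normalization
w_max, dt = 1.0, 0.01

for _ in range(5000):
    dw = Q @ w
    w = np.clip(w + dt * (dw - dw.mean()), 0.0, w_max)  # saturation boundaries
    w = w * (W_total / w.sum())                         # multiplicative normalization

# A contiguous subset of synapses saturates while the rest decay toward zero:
# a single CA1 place field has formed at a location set by the initial noise.
```

Starting from a nearly homogeneous random weight vector, only synapses associated with a group of neighboring CA3 place fields are potentiated, while the total synaptic weight stays fixed.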

Since a CA1 place field emerged from the integration of neighboring CA3 input place fields, place fields could be larger in CA1 than in CA3, which is consistent with experimental findings (Barnes, McNaughton, Mizumori, Leonard, & Lin, 1990; Mizumori, Ragozzino, Cooper, & Leutgeb, 1999; Mizuseki, Royer, Diba, & Buzsáki, 2012). Additionally, the location of a CA1 place field depended only on the initial configuration of the CA3-CA1 synaptic efficacies, which was random. Therefore, random synaptic wiring between regions CA3 and CA1 could be sufficient to explain the presence of spatially distributed CA1 place fields.

Our model, although too abstract to predict the actual time needed for learning, suggests that learning is a slow process requiring multiple traversals of the same place to be effective. This seems in contrast with the fact that place fields develop rapidly in novel environments (Hill, 1978; Wilson & McNaughton, 1993; Frank et al., 2004). In this respect, we suggest that spatially tuned CA3-CA1 connections may not need to be relearned in each environment, but are rather shaped and refined throughout the entire life of an animal exploring different environments. Additionally, this learning process may not even require postsynaptic firing. Indeed, Hebbian plasticity mechanisms in the absence of postsynaptic firing exist: Golding et al. (2002) showed that when somatic spiking was blocked, large-amplitude EPSPs (2–4 mV) and locally generated dendritic spikes were sufficient to induce LTP at both distal and proximal synapses of a CA1 pyramidal neuron. Moreover, Remy and Spruston (2007) showed that even a single burst of Schaffer-collateral activation was sufficient for this type of plasticity to occur. Since large-amplitude EPSPs can result from the spatiotemporal integration of correlated synaptic inputs, this was proposed as a Hebbian learning mechanism independent of somatic action-potential firing and backpropagation. We suggest that such dendritic Hebbian learning mechanisms could induce spatial tuning in a CA1 pyramidal neuron in the same way as Hebbian STDP did in our model.

The hypothesis of place field inheritance implies that spatially tuned CA1 firing fields are produced by spatially tuned synaptic inputs. This assumption is in agreement with a subthreshold ramp-like depolarization of the membrane potential observed within the firing fields of CA1 place cells (Harvey, Collman, Dombeck, & Tank, 2009; Epsztein, Brecht, & Lee, 2011). Interestingly, Lee et al. (2012) showed that a spatially tuned subthreshold depolarization appears also in nonspiking CA1 cells (i.e., silent cells) when the soma is slightly depolarized with a spatially uniform current. The authors proposed that silent cells, like place cells, receive spatially tuned input, but that this input may not propagate to the soma because of a nonlinear gating mechanism dependent on the somatic potential. This hypothesis was recently supported by the work of Sheffield and Dombeck (2015), who provided direct evidence for spatially tuned regenerative dendritic signals in CA1 place cells. Additionally, it was shown that the fraction of dendritic branches with detectable regenerative events correlates with both place field consistency across traversals and place field stability over days (Sheffield & Dombeck, 2015), two features previously linked to NMDA-receptor mediated plasticity. Taken together, these observations suggest that spatial tuning may be learned within the dendritic tree of a CA1 cell without the need of somatic activation and that a nonlinear gating mechanism could uncover this tuning, and lead to somatic spiking, as soon as the animal enters a novel environment.

The scenario of a preexisting spatial tuning that is abruptly uncovered in a novel space relates to the phenomenon of preplay reported by Dragoi and Tonegawa (2011, 2013): the temporal order of neuronal firing during rest or sleep predicts the sequence of future place cell activation in a previously unknown environment. In both cases, a preconfigured state is uncovered as the animal is exposed to a novel space, but whether preplay is related to spatially tuned silent cells and to what extent this preconfigured state is shaped by previous spatial experiences across environments is yet to be determined. Regardless of how spatial tuning is initially uncovered in a novel space, new place fields could be stabilized by synaptic plasticity as the animal is repeatedly exposed to the same environment across days (Kentros et al., 1998). When the environment becomes familiar, spatial tuning can be then readily expressed at the beginning of each new session of spatial exploration (Muller, Kubie, & Ranck, 1987; Thompson & Best, 1990; Mehta et al., 1997; Lever, Wills, Cacucci, Burgess, & O’Keefe, 2002; Ziv et al., 2013).

We also found that synaptic plasticity causes CA1 place fields to drift over time, where the drift is backward with respect to the animal’s direction of motion. This result is consistent with backward-drifting place fields observed experimentally in linear tracks (Mehta et al., 1997, 2000; Shen et al., 1997; Frank et al., 2002; Lee et al., 2004), and with the fact that blocking NMDAR-dependent plasticity abolishes the effect (Ekstrom et al., 2001). Such experience-dependent changes of hippocampal place fields were first predicted by Abbott and Blum (1996), and were also later modeled by Mehta et al. (2000), Mehta and Wilson (2000), and Yu, Knierim, Lee, and Shouval (2006). Our work agrees with these models in showing that Hebbian learning leads to a backward drift of sequentially activated place fields in linear tracks. Additionally, these models produced an asymmetric expansion of place fields, which is consistent with experimental data (Mehta et al., 1997, 2000; Shen et al., 1997; Frank et al., 2002; Lee et al., 2004). We also obtained asymmetric distributions of CA3-CA1 synaptic efficacies, which could produce asymmetric CA1 place fields. In our model, however, CA1 firing fields do not expand over time: they increase their peak rate and shrink in size during formation (see Figures 3 and 6), and they remain constant in size at steady state. This is due to the strong and fast normalization imposed by the STDP rule, the presence of the synaptic saturation boundaries, and the initially homogeneous distribution of the synaptic efficacies. Yet asymmetrically expanding CA1 place fields could be obtained for a different initial condition of the CA3-CA1 synaptic efficacies, that is, when the initial synaptic efficacy vector is already spatially tuned and the total synaptic weight is below the target normalization level of the learning rule. 
This initial condition, however, is better suited to study place field plasticity through experience, as done by Abbott and Blum (1996), Mehta et al. (2000), Mehta and Wilson (2000), and Yu et al. (2006), rather than to study the initial formation of CA1 place fields. Also note that the initial sharpening of CA1 fields produced by our model is consistent with synaptic plasticity being required for sharp spatial tuning in CA1 (McHugh et al., 1996).

Finally, we studied the effects of theta modulation and phase precession on the place field inheritance process. First, we found that theta modulation favors the inheritance and leads to faster place field formation in CA1. Although a decrease in theta modulation was found to correlate with spatial memory deficits (Givens & Olton, 1990; Robbe & Buzsáki, 2009), it remains difficult to disentangle the different effects of theta modulation on hippocampal neural activity. Second, we found that phase precession favors the drift of CA1 place fields; that is, phase-precessing CA3 place cells cause CA1 place fields to drift faster. Therefore, we make the following prediction: if phase precession were abolished in the CA3 region, place fields should drift more slowly in CA1 (see Robbe & Buzsáki, 2009). It should also be noted that the place field inheritance hypothesis addressed in this letter is tightly related to the hypothesis of phase-precession inheritance recently proposed by Jaramillo et al. (2014), who showed that for spatially tuned CA3-CA1 connections, phase-precessing CA3 inputs lead to a phase-precessing CA1 output. In this regard, our model explains how a spatial tuning of the CA3-CA1 connections could be achieved in a Hebbian framework.

We now discuss some limitations of our model. First, the entorhinal input to CA1 has been ignored, although experimental findings suggest that CA1 place fields are maintained by this input alone (Brun et al., 2002). We hypothesize that Hebbian learning can also lead to CA1 place fields from spatially tuned grid-like entorhinal input. Indeed, grid-to-place models based on Hebbian learning exist (see Cheng & Frank, 2011, for a review), and CA3 and entorhinal inputs may interact such that associations are learned between the two. Second, only excitatory connections have been considered in our model, although a dense network of interneurons provides strong inhibitory input to hippocampal pyramidal cells (Klausberger & Somogyi, 2008). This strong inhibition may indeed explain the low firing rate of CA1 pyramidal cells. Finally, several simplifications have been made in our model: hippocampal pyramidal cells were assumed to be simple linear Poisson units, synaptic efficacies were modeled as continuous variables ignoring the quantal nature of synaptic transmission (Martin, 1966), and the synaptic saturation boundaries, as well as the parameters of the STDP learning window, were not directly related to measurable biophysical quantities. Nevertheless, the minimal model presented in this letter allowed us to study the place field inheritance hypothesis and to understand, with both analytical and numerical means, how the temporal structure of hippocampal neural activity may affect this inheritance. We also expect that place field inheritance could be obtained with more detailed neuron models and with other plasticity rules of the Hebbian type (Linsker, 1986; Guetig, Aharonov, Rotter, & Sompolinsky, 2003; Cooper, Intrator, Blais, & Shouval, 2004; Pfister & Gerstner, 2006; Morrison, Aertsen, & Diesmann, 2007; Clopath, Büsing, Vasilaki, & Gerstner, 2010).

To conclude, while the mechanisms underlying place field formation remain unclear, we showed that the place field activity in one hippocampal region can be inherited from an upstream region by simple Hebbian learning, with theta modulation leading to faster place field inheritance and phase precession leading to a faster drift of the inherited place fields.

Appendix A:  Model Parameters

Table 1 reports the parameter values used for numerical simulations of the averaged system (see Figures 3–8) and for analytical and numerical calculation of the eigenvalues (see Figures 11 and 12). Note that in Figures 5C, 5D, and 7C, the number of CA3 input synapses was and the upper saturation boundary was . For the simulations with spiking neurons (see Figure 9), we set , , and . Additionally, input place fields had variable widths and randomly distributed centers: the place field width was normally distributed with mean  s and variance s, and place field centers were drawn from a uniform distribution in the interval .

Table 1:
Parameter Values for the Numerical Simulations.
L = 1 m Length of the track 
v = 0.5 m/s Running speed of the virtual rat 
T = 2 s Duration of one round of the track 
N = 20 Number of CA3 input place cells 
= 10 spikes/s Average input intensity in one round of the track 
v0 = 0 spikes/s Spontaneous output rate of the CA1 neuron 
= 4 ms EPSP time constant 
= Place field gaussian standard deviation 
f0 = Hz Frequency of input oscillations 
c =  Compression factor for phase precession/recession 
=  Range of the synaptic efficacies at the initial condition 
=  Upper saturation boundary for the synaptic efficacies (
=  Learning rate 
= 1 Amount of synaptic change for input spikes 
=  Amount of synaptic change for output spikes 
=  Learning window amplitude for LTP 
= 1 Learning window amplitude for LTD 
= ms Learning window time constant for LTP 
= ms Learning window time constant for LTD 
= Integral of the learning window (derived) 
k1 =  Constant for the learning equation (derived) 
k2 =  Constant for the learning equation (derived) 
k3 =  Constant for the learning equation (derived) 

Appendix B:  Proof of the Circularity of the System Matrix

Here we show that the system matrix is circulant (see section 4). An n × n circulant matrix is obtained from any n-vector by cyclically permuting its entries; that is, an n × n matrix is circulant if and only if
formula
B.1
We recall that , where for all , and is the identity (see equation 4.1). Since and are circulant and since the set of circulant matrices is closed under summation, if is circulant, then is also circulant. In section 4.1 and in appendix  E, we show that in the four scenarios considered, each element Qab of the correlation matrix depends only on the distance dab between place fields a and b centered at and :
formula
B.2
Additionally, since place fields are homogeneously spaced on the track, it holds that
formula
B.3
Combining these two facts, we prove that is circulant:
formula
B.4
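The argument of equations B.2 to B.4 can be checked numerically: for uniformly spaced place-field centers on a circular track, any matrix whose entries depend only on the circular distance between centers is circulant, and its eigenvalues are then given by the DFT of its first column, the property used in section 4. The snippet below is a self-contained check; the gaussian distance function g and its width are arbitrary choices, not the model's.

```python
import numpy as np

# Numerical check of equations B.2 to B.4 with an arbitrary distance function g.
N, L = 12, 1.0
centers = np.arange(N) * L / N          # uniformly spaced place-field centers

def circ_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, L - d)         # circular distance on a track of length L

g = lambda dist: np.exp(-dist ** 2 / 0.02)   # any function of distance will do
Q = g(circ_dist(centers[:, None], centers[None, :]))

# Each row of Q is the first row cyclically shifted, i.e., Q is circulant...
is_circulant = all(np.allclose(Q[a], np.roll(Q[0], a)) for a in range(N))

# ...and therefore its eigenvalues are the DFT of its first column (here the
# matrix is also symmetric, so all eigenvalues are real).
eig_direct = np.sort(np.linalg.eigvals(Q).real)
eig_dft = np.sort(np.fft.fft(Q[0]).real)
```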

Appendix C:  Computation of the Fixed Point

Here we compute the fixed point for the dynamics of the synaptic efficacies (see equation 4.1). The fixed point is obtained by setting the left-hand side of equation 4.1 to zero:
formula
C.1
Since is circulant, equation C.1 can be rewritten as
formula
C.2
where is the first column of and denotes discrete circular convolution. By taking the DFT of both sides of equation C.2 and solving for , we obtain
formula
C.3
and from the definitions of and (see equation 4.1),
formula
C.4
formula
C.5
By substituting and into equation C.3, we have
formula
C.6
formula
C.7
where we used the definition of the normalization level (see equation 2.8) and the relation , which holds because the matrix is circulant. Finally, by taking the inverse DFT of equation C.7, we obtain
formula
C.8
meaning that the fixed point of the synaptic-efficacy dynamics is the constant vector whose components are all equal to .
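The DFT argument of equations C.1 to C.8 can be reproduced numerically: for a circulant matrix, the linear system is solved componentwise in Fourier space, and a right-hand side containing only a DC component yields a constant solution. In the sketch below, the gaussian first column and the constant right-hand side are arbitrary stand-ins; the specific normalization constants of equation 4.1 are omitted.

```python
import numpy as np

# Solving a circulant system Q w = b via the DFT, as in appendix C.
N = 16
k = np.arange(N)
q = np.exp(-np.minimum(k, N - k) ** 2 / 8.0)     # first column of Q (symmetric)
Q = np.array([np.roll(q, a) for a in range(N)])  # circulant matrix built from q

b = np.full(N, 3.0)                              # DC-only right-hand side
w = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(q)))  # componentwise solve

# All non-DC Fourier components of w vanish: w is the constant vector with
# entries b0 / sum(q), in analogy with the fixed point of equation C.8.
```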

Appendix D:  Estimation of the Eigenvalues from the Fourier Transform of the Correlation Function

Here we use equations 4.14 and 4.19 to estimate the system eigenvalues from the continuous Fourier transform of the function (see section 4.1). Assuming that the first CA3 input place field is centered at zero (), that place fields are uniformly spaced on the track (), and defining , we compute the first column of the correlation matrix as
formula
D.1
We define and take the DFT of both sides of equation D.1:
formula
D.2
To compute , we expand the T-periodic function F as a Fourier series:
formula
D.3
where the Fourier coefficients are given by
formula
D.4
formula
D.5
and is the continuous Fourier transform of . From equation D.3, we compute fn as
formula
D.6
and by rewriting the infinite sum in equation D.6 as two nested sums, we obtain
formula
D.7
formula
D.8
By applying the DFT to both sides of equation D.8, one finds