Abstract
The hippocampus plays a critical role in the compression and retrieval of sequential information. During wakefulness, it achieves this through theta phase precession and theta sequences. Subsequently, during periods of sleep or rest, the compressed information reactivates through sharp-wave ripple events, manifesting as memory replay. However, how these sequential neuronal activities are generated and how they store information about the external environment remain unknown. We developed a hippocampal cornu ammonis 3 (CA3) computational model based on anatomical and electrophysiological evidence from the biological CA3 circuit to address these questions. The model comprises theta rhythm inhibition, place input, and plastic CA3-CA3 recurrent connections. The model can compress the sequence of the external inputs, reproduce theta phase precession and replay, learn additional sequences, and reorganize previously learned sequences. A gradual increase in synaptic inputs, controlled by interactions between theta-paced inhibition and place inputs, explained the mechanism of sequence acquisition. This model highlights the crucial role of plasticity in the CA3 recurrent connections and theta oscillatory dynamics and offers a hypothesis for how the CA3 circuit acquires, compresses, and replays sequential information.
1 Introduction
The hippocampus contributes to episodic memory formation (Gordon, 1988; Scoville & Milner, 1957). Rodent spatial memory has been extensively studied in animal models to elucidate the mechanisms of episodic memory (Buzsáki & Moser, 2013; O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Tolman, 1948). In the hippocampus, place information is represented as a phase code within theta cycles of local field potential (LFP) through a phenomenon known as theta phase precession, where individual neuron firing timings within theta oscillations advance as an animal walks through the place fields of neurons (O'Keefe & Recce, 1993; Skaggs et al., 1996). Moreover, sequential activity is manifested as the firing order within a theta cycle, forming theta sequences (Dragoi & Buzsáki, 2006; Foster & Wilson, 2007). The sequential activities are replayed during sharp-wave ripple (SWR) in sleep and quiet resting (Diba & Buzsáki, 2007; Foster & Wilson, 2006). It is hypothesized that sequential activity serves to compress temporal relationships within the external world, transforming them from the scale of several seconds into neural sequences operating at the scale of tens of milliseconds. This phenomenon facilitates efficient processing, storage, and retrieval of information while also integrating such information into cognitive processes, including memory consolidation, abstraction, planning, and inference (Buzsáki, 2015; Drieu & Zugaro, 2019).
The hippocampus can be divided into several subregions, primarily the dentate gyrus (DG) and cornu ammonis 1 and 3 (CA1 and CA3), each of which plays an essential and distinct role in episodic and place memory. In particular, the CA3 is integral to generating sequential activity. The CA3 is a starting point for the ripple (Buzsáki, 1986; Buzsáki et al., 1983; Suzuki & Smith, 1988) and replay (Middleton & McHugh, 2016) activities. The CA3 region receives input from the DG and sends output to the CA1. Input from the DG is extremely sparse in the CA3; however, each input exhibits substantial strength (Henze et al., 2000, 2002). There are more recurrent connections between the excitatory neurons in the CA3 than in the DG and CA1. Synapses between CA3 excitatory neurons are weaker but have a higher connection probability than synapses from the DG to CA3 excitatory neurons. Both DG-CA3 and CA3-CA3 synapses exhibit short- and long-term plasticity (Rebola et al., 2017). These anatomical features of CA3 circuits seem optimal for sequence learning (Banino et al., 2018; Jaeger & Haas, 2004; Laje & Buonomano, 2013; Sussillo & Abbott, 2009; Wang et al., 2018).
GABAergic neurons of the medial septum (MS) provide theta-paced inhibition to the hippocampal interneurons, which in turn control the firing timing of the excitatory neurons (Freund & Antal, 1988; King et al., 1998; Petsche et al., 1962; Zutshi et al., 2018). Inhibitory neurons also receive input from nearby excitatory neurons and provide feedback inhibition in return (Rebola et al., 2017).
Several computational models for generating theta phase precessions (Chadwick et al., 2016; Geisler et al., 2010) and compressing sequences (Ecker et al., 2022; Jahnke et al., 2015; Nicola & Clopath, 2019; Reifenstein et al., 2021) have been proposed. However, no model has provided a unified explanation for both phenomena. Therefore, this study focused on the CA3 structure, which has many recurrent connections and is a source of SWRs, as a critical region for acquiring sequence compression, and aimed to elucidate how the CA3 encodes and replays an external place sequence.
2 Results
In this model, we set plasticity only in the CA3 recurrent synapses, not in the DG-CA3 synapses, to simplify the learning process. We selected the CA3 recurrent synapses because recurrent networks such as CA3-CA3 connections are suitable for learning sequential activities (Banino et al., 2018; Jaeger & Haas, 2004; Laje & Buonomano, 2013; Sussillo & Abbott, 2009; Wang et al., 2018). The Hebbian plasticity rule was incorporated in this study. Activated synapses are tagged for much longer than membrane potential changes (referred to as the plasticity-related factor; see section 4.3) (Chang et al., 2019; Rogerson et al., 2014), which makes sequence learning with the Hebbian rule more efficient (Reifenstein et al., 2021). Furthermore, when inputs from the DG and another CA3 neuron arrive simultaneously, the neuronal response is considerably higher than the simple summation of the responses to these two inputs, known as superlinear coincident detection (Brandalise & Gerber, 2014; London & Häusser, 2005). We implemented these synaptic properties in the model (see section 4.3). We also added a regularization term on the synaptic weights (keeping the summed postsynaptic weights of each unit at a constant value; see section 4.3) to prevent overexcitation. Self-connections in the CA3-CA3 recurrent synapses were excluded.
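The superlinear coincident detection can be sketched as follows; the multiplicative form and the `gain` value are our assumptions for illustration, not the model's exact formulation (see section 4.3).

```python
# Sketch of superlinear coincident detection: a DG input arriving together
# with a CA3 recurrent input evokes more than the sum of the two separate
# responses. The multiplicative boost and gain value are assumptions.

def response(g_dg, g_ca3, gain=2.0):
    """Postsynaptic drive from a DG input and a CA3 recurrent input."""
    # Linear summation plus a superlinear term that is nonzero only when
    # both inputs are active in the same time step.
    return g_dg + g_ca3 + gain * g_dg * g_ca3

separate = response(1.0, 0.0) + response(0.0, 1.0)  # inputs arriving apart
coincident = response(1.0, 1.0)                     # simultaneous arrival
assert coincident > separate
```

With coincident inputs, the product term adds drive that the separate responses lack, which is the property the learning rule exploits.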
The model simulated a run along a linear space at constant velocity for 10 trials (running session). During the running session, plasticity continuously modified the CA3-CA3 recurrent synaptic weights. Figures 1D to 1F show CA3 unit spikes (see Figures 1D and 1E) and membrane potentials (see Figure 1F) of both excitatory and inhibitory units in the first, middle, and last trials. Notably, as the trials progressed, the firing started earlier than in the previous trials, which is consistent with a previous experimental report (Mehta et al., 2000). The emergence of new spikes coincided with the synaptic weight modifications in the CA3-CA3 recurrent synapses.
We analyzed the results of additional sequence learning in the large-scale model (see supplementary Figures 1B to 1E). As in the small model, replays were observed, but the directions were not unidirectional in either the initial or additional sequence sets (see supplementary Figures 1B and 1C). In the small model, reduction of the synaptic weight could cause loss of the replay directionality in the initial sequence after the additional sequence learning (see Figure 5G and supplementary Figure 1A). However, the synaptic weights among the responding units in the large model were not reduced after additional learning (see supplementary Figures 1D and 1E) in contrast to those in the small model. Thus, probabilistic sparse synaptic connection can induce bidirectional replay and prevent memory interference.
During the resting session, the model exhibited synchronized activities among excitatory units (see Figure 7I). To verify this, we analyzed the population signal, which revealed a synchronization frequency of approximately 5 Hz (see supplementary Figures 2A to 2C). Additionally, we analyzed high-frequency oscillations during the bursts but did not identify clear peaks in the power spectrum within the gamma or ripple range (50–400 Hz) in the population signal (see supplementary Figures 2D to 2F).
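The frequency analysis of the population signal can be sketched as follows. A synthetic 5 Hz signal stands in for the summed membrane potentials (an assumption; the paper analyzed the model's own population signal), and the dominant spectral peak estimates the synchronization frequency.

```python
import numpy as np

# Stand-in population signal: a 5 Hz oscillation sampled at the model's
# 0.5 ms time step over a 10 s resting session (illustrative values).
fs = 2000.0                                # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5.0 * t)

# Power spectrum via FFT; the largest peak gives the sync frequency.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
peak_freq = freqs[np.argmax(power)]
print(peak_freq)  # dominant frequency, ~5 Hz
```

The same spectrum, inspected over the 50–400 Hz band, is how one would check for gamma- or ripple-band peaks.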
Without the MS theta inhibitory input (baseline, I = 100), the inhibitory unit had an unstable point and exhibited bursting firing (see Figure 8B, solid black line and white circle). When the inhibitory theta input was received from the MS and the external input fell below I = 73.7 (see Figure 8D), the unstable point became stable (Andronov–Hopf bifurcation; e.g., see Figure 8B, dotted black line and black circle). When the point was stable, the dynamics of the unit converged on it and the unit stopped firing. However, the stable point became unstable again during the theta phase in which the MS inhibitory input was weak, resulting in the reemergence of burst firing.
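This stability analysis can be sketched numerically: the fixed point of a two-dimensional Izhikevich-type unit is classified by the eigenvalues of its Jacobian, and scanning the drive I locates the Andronov–Hopf threshold. The parameter values below are illustrative assumptions, not the paper's, so the threshold found here (97.75) differs from the reported I = 73.7.

```python
import numpy as np

# 2-D Izhikevich-type unit:  C v' = k (v - vr)(v - vt) - u + I,
#                            u' = a (b (v - vr) - u).
# Illustrative parameters chosen so a Hopf bifurcation occurs.
C, k, vr, vt, a, b = 20.0, 1.0, -55.0, -40.0, 0.1, 5.0

def max_real_eig(I):
    """Largest eigenvalue real part at the lower (resting) fixed point."""
    # Fixed point: k (v - vr)(v - vt) - b (v - vr) + I = 0, u = b (v - vr).
    roots = np.roots([k, -k * (vr + vt) - b, k * vr * vt + b * vr + I])
    v = min(roots.real)                       # lower branch
    J = np.array([[k * (2 * v - vr - vt) / C, -1.0 / C],
                  [a * b, -a]])
    return np.linalg.eigvals(J).real.max()

# Below the threshold the point is stable (firing stops); above, unstable.
assert max_real_eig(90.0) < 0 and max_real_eig(99.0) > 0

# Bisect for the Hopf threshold (analytically 97.75 for these parameters).
lo, hi = 90.0, 99.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_real_eig(mid) < 0 else (lo, mid)
print(round(lo, 2))  # -> 97.75
```

The theta input effectively sweeps I back and forth across this threshold within each cycle, turning firing off and on.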
The theta-paced inhibitory input controls the spike timings of the CA3 excitatory units; however, the theta phase precession pattern should be symmetric (no bias in the phase position of spikes before and after the peak position) because it depends solely on place input strength, which is symmetric in this model. Thus, additional synaptic interactions should shape the negative slope precession pattern. Before learning, the theta phase precession pattern of a CA3 excitatory unit was symmetric (see Figures 3C and 9E, orange shadow, referred to as the current-place unit). The learning process enhanced recurrent synaptic weights (see Figure 2A), connecting the current-place units with the units representing the places that were previously traversed (previous) and those that will be encountered soon (next) (see Figure 9E, blue and green lines, referred to as previous-place and next-place units, respectively). The differences in firing timing between the current-place and previous-place units, and between the current-place and next-place units, were also symmetric (see Figure 9E, arrows). The synaptic potentiation increases input amplitude and advances spike timing within a theta cycle in the postsynaptic unit (current-place unit) while also causing the postsynaptic unit (current-place unit) to spike in the late phase of the earlier theta cycle. The effect of synaptic input on the postsynaptic (current-place) unit occurred slightly later than the presynaptic (previous- and next-place units) spike timings due to a delay in the accumulation of external inputs and a state transition to trigger the firing of the postsynaptic unit, creating an asymmetric effect where the previous-place unit has more time to overlap with the activated timing of the current-place unit than the next-place unit does (see Figure 9F, purple-orange and yellow-orange shadow overlapping areas).
Furthermore, the previous-place unit interacts with the current-place unit during the latter part of the previous-place unit activity, while the next-place unit interacts during the earlier part of the next-place unit. This causes the plasticity-related factor (see section 4.3) to accumulate more for the previous-place unit, resulting in greater synaptic potentiation in the synapse from the previous- to current-place units than that from the next- to current-place units (see Figures 2C and 9F, blue and green opaque shadows, respectively). The absence of a previous-place unit for unit 1 likely accounted for the abnormal precession shown in Figure 3. These mechanisms lead to an asymmetric theta phase precession pattern. Additionally, the strengthened synapses induced sequential spikes by random noise input in the resting session, resulting in replays after place learning in our model.
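The accumulation asymmetry can be illustrated with a toy calculation: the plasticity-related factor integrates a unit's activity with a long time constant, so it is large late in a unit's activity and small early on. The previous-place unit interacts with the current-place unit late in its own activity, the next-place unit early in its own; the activity envelopes, times, and time constant below are illustrative assumptions.

```python
import numpy as np

dt, tau = 0.001, 0.3                 # s; tau mimics the slow factor decay
t = np.arange(0.0, 1.0, dt)

def factor(t0, t_sample):
    """Plasticity-related factor of a unit with Gaussian activity at t0,
    integrated up to the interaction time t_sample."""
    act = np.exp(-0.5 * ((t - t0) / 0.05) ** 2)
    p = 0.0
    for i in range(len(t)):
        p += dt * (-p / tau + act[i])
        if t[i] >= t_sample:
            return p

# Both units interact with the current-place unit around t = 0.5 s.
p_prev = factor(0.40, 0.50)   # previous-place unit: late in its activity
p_next = factor(0.60, 0.50)   # next-place unit: early in its activity
assert p_prev > p_next        # stronger potentiation prev -> current
```

The larger accumulated factor for the previous-place unit yields the stronger previous-to-current synapse that shapes the asymmetric precession pattern.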
As supplemental simulations, we tested models in which elements of the original small model were partially altered (see section 4.5) to evaluate which aspects of the original small model are involved in the theta phase precession and replay. A model with self-connections in the CA3 excitatory units (self-connection model) displayed theta phase precession. However, unlike the original model, it did not demonstrate sequential replay (see supplementary Figures 3A to 3G). Compared to the original model, the self-connection model showed a relative weakening of synaptic connections with other units, accompanied by an enhancement of the self-connected synaptic weight (see supplementary Figure 3H). In the large model, self-connected units arose probabilistically, and we selected the units with self-connections and compared them with units without such connections. These self-connected units displayed theta phase precession but not sequential replay (see supplementary Figures 3I and 3J). Synaptic weights, excluding the self-connections in the self-connected units, were smaller than those in the non-self-connected units (see supplementary Figure 3K).
We simulated the models using altered coincident detection rules. In a model without coincident detection (nonCD model), the synaptic weights weakened, and the model did not exhibit theta phase precession or replays (see supplementary Figures 4A to 4F). Furthermore, in the model with coincident detection of membrane potential (mpCD model), the synaptic weights also weakened, and the model exhibited unclear theta phase precession and replay (see supplementary Figures 5A to 5F). These deficits were partially recovered by increasing the learning rate or the number of trials (see supplementary Figures 4G to 4L and 5G to 5L).
The firing timing difference between DG inputs and CA3 excitatory neurons is crucial for the coincident detection (Brandalise & Gerber, 2014). Thus, we modified the DG inputs from continuous values to probabilistic spikes according to Poisson distribution (see the spiking DG input model in supplementary Figure 6). Although randomizing the DG input made the theta phase precession and replay patterns more ambiguous, the synaptic connections still formed patterns that were similar to those in the original model.
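The spiking DG input variant can be sketched as follows: the continuous place input becomes Poisson spikes whose rate follows the place field. The field center, width, and peak rate below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.5e-3                                  # 0.5 ms step, as in the model
t = np.arange(0.0, 12.0, dt)                 # one 12 s trial
center, width, peak_rate = 3.0, 0.5, 50.0    # s, s, Hz (assumed)

# Place-field firing rate, then Bernoulli-per-bin approximation of a
# Poisson process: spike with probability rate * dt in each bin.
rate = peak_rate * np.exp(-0.5 * ((t - center) / width) ** 2)
spikes = rng.random(t.size) < rate * dt

# The spike count should be close to the integral of the rate (~63 here).
expected = rate.sum() * dt
print(spikes.sum(), round(expected))
```

Randomizing spike times this way preserves the place tuning on average while jittering the input–output timing that the coincident detection depends on.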
To better assess the roles of synaptic plasticity, we simulated models with different plasticity rules. With both standard spike-timing-dependent plasticity (STDP; Bi & Poo, 1998, standard STDP model) and symmetric STDP (Mishra et al., 2016, symmetric STDP model), patterns of theta phase precession and replay were altered (see supplementary Figure 7). Furthermore, a model employing a short time constant for a factor related to plasticity in the original learning rule (short learning time constant model) also failed to exhibit theta phase precession and directional replays (see supplementary Figure 8). A model without the theta input from the MS (non-theta input model) still displayed the replay but not the theta phase precession (see supplementary Figure 9).
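The two alternative STDP windows tested above can be sketched as follows; the amplitudes and the 20 ms time constant are illustrative assumptions.

```python
import numpy as np

def standard_stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Asymmetric window (Bi & Poo, 1998): potentiation when pre precedes
    post (dt > 0), depression otherwise."""
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau),
                    -a_minus * np.exp(dt_ms / tau))

def symmetric_stdp(dt_ms, a=1.0, tau=20.0):
    """Symmetric window (Mishra et al., 2016): potentiation for
    near-coincident spikes regardless of order."""
    return a * np.exp(-np.abs(dt_ms) / tau)

dts = np.array([-10.0, 10.0])
assert standard_stdp(dts)[0] < 0 < standard_stdp(dts)[1]  # sign flips
assert symmetric_stdp(dts)[0] == symmetric_stdp(dts)[1]   # order-blind
```

Neither window carries the long pre-to-post time constant of the original rule, which is the property the short learning time constant model also removes.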
Furthermore, we randomized the order of place inputs (random DG input model). The patterns of theta phase precession and the directionality of replays were eliminated (see supplementary Figure 10). The synaptic connections formed a cluster within the responding units.
3 Discussion
In this study, we developed a neural circuit model based on the anatomical evidence of biological CA3 circuits. The model could learn and compress the order of external inputs from sparse DG projections using the Hebbian rule and displayed theta phase precession and replay-like activity. The model could also perform additional learning and relearning. In this computational model, both theta phase precession and replay were generated by a shared underlying mechanism, namely, synaptic enhancement of the CA3 recurrent connections. Synaptic modulation is governed by the interplay between theta rhythm inhibition and the strength and temporal order of the place inputs. These findings underscore the pivotal role of CA3 circuits in the acquisition of sequence information, significantly enriching our understanding of the mechanistic underpinnings of memory encoding processes.
Sequence learning within the hippocampal region remains a much-debated topic (Drieu & Zugaro, 2019). This study primarily hypothesized that the dense recurrent connections in CA3 play a pivotal role in sequence learning, with supporting evidence from both anatomy and electrophysiology, such as its role as an SWR source (Buzsáki, 2015; Hájos et al., 2013; Rebola et al., 2017; Schlingloff et al., 2014). The potential of CA3 as the source of replay is reinforced by observations that it influences SWR events in CA1 (Buzsáki, 2015; Csicsvari et al., 2000) and that disruptions in its function affect CA1 replay (Davoudi & Foster, 2019; Guan et al., 2021; Middleton & McHugh, 2016; Yamamoto & Tonegawa, 2017). However, CA1 and CA2 are also considered candidates for sequence learning. CA1, for instance, has a unique connectivity profile; each neuron receives approximately 5000 inputs from CA3 pyramidal neurons (Amaral et al., 1990). Additionally, it receives dual inputs from both the entorhinal cortex (EC) and CA3, which send maximum input at distinct theta phases (Mizuseki et al., 2009), thereby distinctly influencing CA1 theta phase precession (Fernández-Ruiz et al., 2017). Although the intrinsic CA1 network is less recurrent than that of CA3, it can generate spiking sequences when optogenetic stimulation is applied to the local circuit (Stark et al., 2014). CA2, exhibiting similarities in circuitry to CA3, is also a notable SWR source (Oliva et al., 2016). The distinct advantage of CA3 might stem from its sparse yet potent inputs from the DG, an aspect absent in CA2 (Tamamaki et al., 1988). While all hippocampal subregions contribute to sequence learning, the CA3, due to its unique features, seems to play a leading role.
We intentionally excluded EC inputs to simplify the model in our study, though they are known to influence hippocampal activity (Chenani et al., 2019; Fernández-Ruiz et al., 2017; Schlesiger et al., 2015; Yamamoto & Tonegawa, 2017). It should be noted that EC inputs and CA3 recurrent activity show distinct theta phase preferences (Fernández-Ruiz et al., 2017; Mizuseki et al., 2009), and synaptic modification occurs between presynaptic EC neurons and postsynaptic CA3 neurons (McMahon & Barrionuevo, 2002; Tsukamoto et al., 2003). Moreover, even when considering our larger model, the network sizes remain significantly smaller than those of the actual CA3 circuit. In our models, a CA3 cell receives only one or a few DG inputs. In contrast, it is estimated that approximately 50 DG cells input into a single CA3 pyramidal cell (Rebola et al., 2017). Additionally, the place representation in the CA3 is distinct and not merely a replica of those in the DG (Senzai & Buzsáki, 2017). Future models incorporating diverse inputs and more accurate network scales would provide a more comprehensive understanding of hippocampal dynamics.
Models that explain theta phase precession can be divided into three main categories (Drieu & Zugaro, 2019): detuned oscillators model (Geisler et al., 2010; O'Keefe & Recce, 1993), somato-dendritic interference model (Harris et al., 2002; Kamondi et al., 1998; Mehta et al., 2002), and network connectivity model (Jensen & Lisman, 1996; Romani & Tsodyks, 2015; Skaggs et al., 1996; Tsodyks et al., 1996). The detuned oscillators model posits that theta phase precession arises from a single place cell oscillating at a slightly faster frequency than the population signal (LFP) theta oscillation, with both oscillators gradually shifting due to their frequency difference. Our findings do not support this model because the frequency of the LFP oscillations (summation of all excitatory membrane potentials) was identical to that of the theta input (7 Hz) to individual neuronal units and exhibited no signs of slowing down (see Figure 3B).
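For intuition, the detuned-oscillators mechanism itself is easy to sketch: a cell oscillating slightly faster than the LFP theta fires at a steadily earlier LFP phase on every cycle. The frequencies below are illustrative assumptions; in our model, by contrast, the unit and LFP frequencies were identical, so no such drift occurred.

```python
import numpy as np

f_lfp, f_cell = 7.0, 7.5              # Hz; the cell is slightly faster
k = np.arange(10)                     # ten spikes, one per cell cycle
spike_t = k / f_cell                  # spike times (s)
phase = (spike_t * f_lfp % 1.0) * 360.0   # LFP theta phase of each spike

# Per-spike phase shift: each spike lands 24 deg earlier in the LFP cycle
# (a shift of 336 deg mod 360 is equivalent to -24 deg), i.e., precession.
shift = np.diff(phase) % 360.0
assert np.allclose(shift, 336.0)
```

The constancy of the shift follows directly from the fixed frequency mismatch, which is why the model predicts a strictly linear phase advance.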
In contrast, the somato-dendritic interference model suggests that a combination of oscillatory somatic inhibition and transient ramp-like dendritic excitation determines action potential timing. Our results supported this model because the place input strength dictated the spike timing in space without recurrent inputs (see Figures 8 and 9). Furthermore, the expansion of the place fields (see Figures 1D to 1F) also supports the development of theta phase precession under the somato-dendritic interference model (Kamondi et al., 1998; Mehta et al., 2002, 2000). Network connectivity models propose that theta phase precession results from transmission delays between asymmetrically connected neurons (Jensen & Lisman, 1996; Romani & Tsodyks, 2015; Skaggs et al., 1996; Tsodyks et al., 1996). Our results align with those obtained in these models, as our model exhibited additional spikes in the late phase of the early theta cycle after learning (see Figures 1 to 3). These findings highlight the potential roles of somato-dendritic interference and network connectivity in theta phase precession, although our model does not exhaustively explore these mechanisms.
In our model, theta phase precession gradually formed through learning, consistent with the results of Mehta et al. (2002). However, this does not align with the results of a previous study (Feng et al., 2015), which reported that theta phase precession of individual place cells was present from the initial trial and became more aligned across cells as the number of trials increased, forming theta sequences. Furthermore, it has been shown that theta phase precession (and the consequent theta sequence) enhances sequence learning (George et al., 2023; Molter et al., 2007; Reifenstein et al., 2021; Theodoni et al., 2018). This reverses the causality suggested by our findings, in which theta phase precession emerges as a result of learning. In this study, our model assumed nearly uniform membrane properties and synaptic connections in its initial state. By introducing variability in these values, such as distinct cellular properties and prewired connections for each cell, it might be possible to reproduce the results of Feng et al. (2015) within our model. Another possibility is that the difference arises from the distinct characteristics of CA3 and CA1, as these reports focus on CA1. Whether sequence learning precedes theta phase precession or vice versa is still a subject of debate.
Regarding the replay, there is a debate as to whether it is preexisting (Dragoi & Tonegawa, 2011) or acquired (Silva et al., 2015). Many computational mechanisms for achieving replay sequences have been proposed in computational modeling studies (August & Levy, 1999; Cona & Ursino, 2015; Cutsuridis & Hasselmo, 2011; Ecker et al., 2022; Haga & Fukai, 2018; Jahnke et al., 2015; Jensen et al., 1996; Molter et al., 2007; Nicola & Clopath, 2019; Reifenstein et al., 2021). The results of our study explained how a recurrent network could acquire the replay sequences, incorporating both preexisting and acquired aspects. Specifically, task-related and strongly connected synapses before learning tend to be enhanced through repeated experience.
Models similar to ours include those designed by Jensen et al. (1996), Jensen and Lisman (1996), Ecker et al. (2022), and Cona and Ursino (2015). Like ours, these models learned the order in which place cells are active in the recurrent connections using the Hebbian rule or similar learning rules (symmetric STDP; Mishra et al., 2016). As a result of learning, all models had a nongaussian, asymmetric distribution of synaptic weights. Experimental results from biological brains also suggest the occurrence of similar phenomena (Choi et al., 2018). However, several distinctive features of our model set it apart. First, in contrast to the other models, our model included theta input as inhibitory input to inhibitory units. This is supported by optogenetic experiments showing that theta rhythm input is mediated by inhibitory projections from the MS to PV-positive inhibitory neurons in the hippocampus (Zutshi et al., 2018). Second, our model also reproduced the decrease in inhibitory neuron activity during sleep or rest (Alfonsa et al., 2022; Miyawaki & Diba, 2016; Mizuseki & Buzsáki, 2013), which leads to increased random firing and the potential induction of replay during resting sessions. Third, Jensen et al. (1996) focused on theta phase precession (with no mention of replay), and Ecker et al. (2022) demonstrated the occurrence of SWR-like activity and the accompanying forward and reverse replay (with no mention of theta phase precession). Cona and Ursino (2015) did not assume MS theta-paced inhibitory input (the theta oscillation in their model was generated by local recurrent interaction of GABA-B-like slow inhibition). Our model aimed to reproduce both the characteristic theta phase precession and replay activity in the hippocampus through learning in the CA3 local circuit and the interaction between place input and theta-paced inhibition.
These unique aspects of our model provide new insights and a more comprehensive understanding of the CA3 circuit's role in theta phase precession and replay activity in the hippocampus.
In our study, we could detect neither clear gamma oscillation nor SWR in the population signal (see supplementary Figure 2). Recurrent connections among inhibitory units, which were missing in our model, can be critical to generate the gamma- or ripple-band oscillation (Ecker et al., 2022; Tiesinga & Sejnowski, 2009).
In the self-connection model (see supplementary Figure 3), we observed theta phase precession but with fewer replay events. In contrast, the other altered models, such as those with alternative coincident detection mechanisms (nonCD, mpCD, and spiking DG input model; see supplementary Figures 4, 5, and 6), with different plasticity rules (standard and symmetric STDP model and short learning time constant model; see supplementary Figures 7 and 8), in the absence of theta input (non-theta input model; see supplementary Figure 9), and with randomized (spike timing or input order) DG inputs (spiking DG input and random DG input model; see supplementary Figures 6 and 10) exhibited the capability for replay. However, these model alterations affected the characteristics of the theta phase precession and the directionality of the replay. Furthermore, the additional learning and relearning had mild effects on the initially learned replay sequences (see Figures 5F and 6F).
In the self-connection model, the self-connections appeared to substantially decrease the synaptic strength of other non-self-connections. However, the bias in the connections with previous- and next-place units appears to be preserved from the original (see Figure 2A and supplementary Figure 3B). These synaptic properties were preserved in the large model (see supplementary Figure 3K). In contrast, the connection changes in other alternative models varied (see supplementary Figures 4 to 8). These observations suggest that the occurrence of replay depends on absolute synaptic weights. Conversely, the replay direction and theta phase precession patterns seem to hinge on the relative synaptic strengths between current-place and previous-place versus next-place connections.
Learning conditions and their interactions with theta oscillations are critical for the characteristics of sequence learning (see supplementary Figures 4 to 8) (Ecker et al., 2022; Jensen et al., 1996; Jensen & Lisman, 1996; Reifenstein et al., 2021; Theodoni et al., 2018). In particular, our results suggest that a pre- to post-directional long time constant, reminiscent of the time decay associated with calcium or plasticity-related proteins (Chang et al., 2019; Jensen & Lisman, 1996; Rogerson et al., 2014), may be responsible for establishing the theta phase precession pattern and directional replay sequence in this study. This aligns with the findings of previous computational studies, suggesting that long learning time constants aid in sequence learning (Reifenstein et al., 2021).
The primary limitations of this study arise from addressing only minimal elements of the CA3 region, such as the recurrent connectivity and plasticity in the CA3, place input from the DG, and inhibitory theta rhythms from the MS. This focus resulted in a less robust depiction of several phenomena, such as theta phase precession in both slope and range and a dominance of forward-ordered replay, compared with empirical biological observations. The alignment with experimental data could potentially be improved by incorporating elements such as inputs from EC (Fernández-Ruiz et al., 2017; Hafting et al., 2008; Molter et al., 2007; Schlesiger et al., 2015) or inhibitory connections (Cona & Ursino, 2015; Cutsuridis & Hasselmo, 2011; Ecker et al., 2022; Tiesinga & Sejnowski, 2009), alternative synaptic plasticity rules (Haga & Fukai, 2018), and variability in neuronal or synaptic properties.
In this study, we proposed a shared mechanism, synaptic enhancement, for compressing the sequence of external events into theta phase precession and replay. The synaptic modulation is governed by the interaction between theta inhibitory and place excitatory inputs. However, several points still need to be elucidated. Is the formation of theta phase precession and replay preexisting or acquired? How does the neuronal synchronization state of the hippocampus change during activity and rest? How does the replay occur? Does replay in the CA3 have different functions compared to replay in the CA1 and other brain regions? Is synaptic plasticity activated during replay? What are the contributions of circuits other than CA3 to the theta phase precession and replay? What are the effects of learning on replay? Are the replayed sequences varied and evaluated during each replay for abstraction, planning, and inference? How do these sequences interact with cortical networks to facilitate their functions? Further research is needed to investigate these questions from experimental and computational approaches.
4 Materials and Methods
4.1 Small Model
A small model was used to deeply analyze the dynamics of the system under well-controlled conditions. The CA3 model consisted of eight excitatory units and one inhibitory unit (see Figure 1A). The excitatory units received the DG place inputs, recurrent CA3 connections, and inhibitory inputs from the CA3 inhibitory unit. The inhibitory unit received the theta MS inhibitory input and excitatory inputs from the CA3 excitatory units. The activity of the units was modeled using the Izhikevich model (Izhikevich, 2010), and the parameters were chosen according to sections “8.4.1 Hippocampal CA1 Pyramidal Neurons” and “8.2.6 Fast Spiking (FS) Interneurons” in Izhikevich for the excitatory and inhibitory units, respectively.
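A single Izhikevich unit can be integrated with forward Euler as sketched below. The parameter values mirror the excitatory-unit entries in the Methods parameter table, with the membrane-potential signs restored per the standard Izhikevich formulation (an assumption); the drive I is an arbitrary test current, not a model input.

```python
# Minimal Euler integration of one Izhikevich unit (Izhikevich, 2010):
#   C v' = k (v - vr)(v - vt) - u + I,  u' = a (b (v - vr) - u),
#   with reset v -> c, u -> u + d when v reaches vpeak.
C, k, vr, vt = 50.0, 0.5, -60.0, -45.0        # pyramidal-like parameters
vpeak, a, b, c, d = 40.0, 0.02, 0.5, -45.0, 50.0
dt, I = 0.5, 300.0                            # ms, pA (I is a test current)

v, u, n_spikes = vr, 0.0, 0
for _ in range(int(1000 / dt)):               # simulate 1 s
    v += dt * (k * (v - vr) * (v - vt) - u + I) / C
    u += dt * a * (b * (v - vr) - u)
    if v >= vpeak:                            # spike: reset v, bump u
        v, u, n_spikes = c, u + d, n_spikes + 1
print(n_spikes)  # number of spikes in 1 s
```

The inhibitory unit uses the same scheme with the fast-spiking parameter set.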
The DG-CA3 excitatory unit connection (place input) was assumed to be sparse, meaning that each CA3 excitatory unit received only one place input. The amplitude of the place inputs was set high enough to induce spikes in the CA3 excitatory units with only one place input. Of the eight DG inputs, four place inputs were used in each session. The CA3-CA3 recurrent connections were all-to-all and plastic (see section 4.3). All CA3 excitatory units sent spikes to the CA3 inhibitory unit, which in turn sent feedback inhibition to all CA3 excitatory units.
4.2 External Inputs
In this model, we assumed that the place inputs were from the DG to CA3 excitatory units and that the theta inputs were from the MS to CA3 inhibitory units.
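The two external drives can be sketched as follows, using values from the Methods parameter table (place amplitude 30 pA, peak positions 2–5 cm, window width 1 cm, velocity 1 cm/s; theta 80 pA at 7 Hz). The Gaussian window shape and the raised-cosine theta waveform are our assumptions.

```python
import numpy as np

v_run = 1.0                                   # running velocity, cm/s
peaks = np.array([2.0, 3.0, 4.0, 5.0])        # place-field centers, cm
width = 1.0                                   # place-field window width, cm

def place_input(t):
    """30 pA Gaussian place drive to each of the four active DG inputs."""
    x = v_run * t                             # position at time t (s)
    return 30.0 * np.exp(-0.5 * ((x - peaks) / (width / 2)) ** 2)

def theta_input(t):
    """80 pA, 7 Hz inhibitory theta drive from the MS (assumed waveform)."""
    return 80.0 * 0.5 * (1.0 + np.cos(2 * np.pi * 7.0 * t))

# At t = 3 s the animal sits at the second field's center (3 cm).
drive = place_input(3.0)
assert drive.argmax() == 1 and abs(drive.max() - 30.0) < 1e-9
```

Spike timing in the model emerges from the interplay of these two drives: the place drive sets which units can fire, and the theta drive gates when.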
4.3 Plasticity
The synaptic weights of the CA3-CA3 recurrent connection were plastically changed according to Hebbian and regularization rules (Kim et al., 2020; Weber & Sprekeler, 2018). Plasticity-related factors, such as calcium ion (Jensen & Lisman, 1996) and calcium/calmodulin-dependent kinase IIα (Chang et al., 2019), are assumed to have a longer time constant than synaptic inputs based on the synaptic tagging hypothesis (Rogerson et al., 2014). Furthermore, we implemented a superlinear coincident detection; when two inputs from different sources arrive simultaneously, the response is considerably higher than the simple summation of the responses to these two inputs (Brandalise & Gerber, 2014; London & Häusser, 2005).
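The learning rule can be sketched under simplifying assumptions: each spike bumps a slowly decaying plasticity-related factor (time constant 300 ms, as in the parameter table), the Hebbian update is proportional to postsynaptic spiking times the presynaptic factor, and regularization rescales each unit's incoming weights to a fixed sum (1 nS). The variable names and the stand-in random spike trains are ours, not the paper's.

```python
import numpy as np

n, dt, tau_p, eta, w_sum = 8, 0.5, 300.0, 0.0083, 1.0   # table values

rng = np.random.default_rng(1)
W = rng.random((n, n)) * 0.1          # W[i, j]: weight from unit j to i
np.fill_diagonal(W, 0.0)              # no self-connections
p = np.zeros(n)                       # plasticity-related factor per unit

for step in range(2000):              # 1 s of 0.5 ms steps
    spikes = rng.random(n) < 0.01     # stand-in spike trains
    p += -dt / tau_p * p + spikes     # slow decay, bump on each spike
    W += eta * np.outer(spikes, p)    # Hebbian: post spike x pre factor
    np.fill_diagonal(W, 0.0)
    W *= w_sum / W.sum(axis=1, keepdims=True)   # regularize incoming sums

assert np.allclose(W.sum(axis=1), w_sum)        # sums held at 1 nS
```

Because p outlives the membrane potential change, presynaptic activity from hundreds of milliseconds earlier can still be credited, which is what makes Hebbian sequence learning efficient here.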
| Parameter Name | Symbol | Value |
| --- | --- | --- |
| Number of trials | | 10 |
| Time of one trial | T | 12,000 ms |
| Simulation time step | | 0.5 ms |
| **CA3 excitatory neuronal unit** | | |
| Number of units | | 8 |
| Parameters of the neuronal unit | C | 50 μF |
| | k | 0.5 nS·mV⁻¹ |
| | v_r | −60 mV |
| | v_t | −45 mV |
| | v_peak | 40 mV |
| | a | 0.02 ms⁻¹ |
| | b | 0.5 nS |
| | c | −45 mV |
| | d | 50 pA |
| **CA3 inhibitory neuronal unit** | | |
| Number of units | | 1 |
| Parameters of the neuronal unit | C | 20 μF |
| | k | 1 nS·mV⁻¹ |
| | v_r | −55 mV |
| | v_t | −40 mV |
| | v_peak | 25 mV |
| | a | 0.2 ms⁻¹ |
| | b | 0.025 nS·mV⁻² |
| | c | −45 mV |
| | d | 20 pA |
| **Synapse** | | |
| Reversal potential (Exc) | | 0 mV |
| Reversal potential (Inh) | | −65 mV |
| Time constant (Exc) | | 10 ms |
| Time constant (Inh) | | 20 ms |
| Synaptic weight (CA3 Inh → CA3 Exc) | | 1 nS |
| Synaptic weight (CA3 Exc → CA3 Inh) | | 1 nS |
| Synaptic weight (DG j → CA3 Exc i) | | 1 (i = j) or 0 (i ≠ j) nS |
| Synaptic weight (MS → CA3 Inh) | | 1 nS |
| **External input** | | |
| Amplitude of place input | | 30 pA |
| Peak position of place input | | 2, 3, 4, 5 cm |
| Window width of place input | | 1 cm |
| Amplitude of theta input | | 80 pA |
| Frequency of theta input | | 7 Hz |
| Baseline of excitatory input | | 20 pA |
| Baseline of inhibitory input | | 100 pA |
| Noise level of excitatory input | | 0 pA |
| Noise level of inhibitory input | | 0 pA |
| **Plasticity** | | |
| Learning rate for Hebbian plasticity | | 0.0083 (100/T) |
| Time constant of the plasticity-related factor | | 300 ms |
| Summation limit for regularization | | 1 nS |
| Velocity in place | | 1 cm/s |
4.4 Resting Session
4.5 Altered Models
4.5.1 The Self-Connection Model
For the model with self-connections in the recurrent synapse (see supplementary Figure 3), we allowed the self-connection weights, which were excluded in the original model.
4.5.2 The Noncoincident Detection Model (nonCD)
4.5.3 The Membrane Potential Coincident Detection Model (mpCD)
4.5.4 The Spiking DG Input Model
4.5.5 The Standard STDP Model and Symmetric STDP Model
4.5.6 The Short Learning Time Constant Model
4.5.7 The Non-Theta Input Model
We set and pA for the model without theta input from the MS (see supplementary Figure 9).
4.5.8 The Random DG Input Model
For the model with randomized input, the order of the DG inputs Ip was randomized in every trial, and the simulation was repeated for 20 trials (see supplementary Figure 10).
4.6 Large Model
To analyze dynamics under conditions that are similar to the actual brain, specifically with sparse stochastic connections, we used a large model. In the large model, the synaptic connections between the CA3 units were modeled as probabilistic, indicating that the connections in the CA3 circuit are not predetermined (Rebola et al., 2017). The large model was simulated using the Brian 2 neural simulator (Stimberg et al., 2019). The number of CA3 excitatory units was , and the number of CA3 inhibitory units was . The probability of the DG (place input)-CA3 excitatory connection was (Claiborne et al., 1986; Rebola et al., 2017), CA3 excitatory-CA3 recurrent excitatory connection was (Guzman et al., 2016), CA3 inhibitory-CA3 excitatory connection was , CA3 excitatory-CA3 inhibitory connection was , and MS (theta input)-CA3 inhibitory connection was . These probabilities were based on the results of previous studies (Ecker et al., 2022; Rebola et al., 2017). The synaptic weights were adjusted with (if connected) and . There were three trials in the running session, and a resting session was performed before and after the running session. The number of inputs from the DG for the initial learning was four (see Figure 7) and eight for the additional learning (consisting of index 1 to 4 for the initial sequence set and index 4 to 7 for the additional sequence set; see supplementary Figure 1). Other parameters were the same as in the small model.
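The probabilistic wiring can be sketched with NumPy as below; in the actual simulation this is what Brian 2's `Synapses.connect(p=...)` performs. The unit counts and connection probabilities here are placeholder values for illustration, not the paper's exact numbers.

```python
import numpy as np

def bernoulli_weights(n_pre, n_post, p, w, rng):
    """Sparse random connectivity: each pre -> post synapse exists
    independently with probability p and has weight w (nS)."""
    return np.where(rng.random((n_post, n_pre)) < p, w, 0.0)

rng = np.random.default_rng(1)
N_E, N_I = 400, 40                      # placeholder unit counts
p_EE = 0.1                              # placeholder connection probability
W_EE = bernoulli_weights(N_E, N_E, p_EE, w=0.1, rng=rng)
np.fill_diagonal(W_EE, 0.0)             # exclude self-connections
```

The realized density of `W_EE` fluctuates around `p_EE`, so individual network instances differ, which is why the large-model results are reported over multiple random seeds.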
The simulation represented in Figures 7H and 7L was executed for 15 sessions with different random seeds. The units for the theta phase precession analysis were selected according to two criteria: they responded to the place input and exhibited an expansion of firing through learning (more than 0.2 s earlier in the last trial than in the first).
Units with self-connections were extracted and compared with units without self-connections, specifically within units that had recurrent connections among the responding units (see supplementary Figures 3I to 3K; see also sections 4.7 and 4.9).
For the additional learning (see supplementary Figures 1B to 1E), three more trials were performed with the additional learning sequence set after the initial learning, followed by the additional resting session as in the small model. Units responding to the initial or additional sequence set were collected separately and analyzed with the synaptic weight changes and replay properties (see also sections 4.9 and 4.10).
For the analysis of burst activities during resting sessions (see supplementary Figure 2), the LFP was computed as the sum of all excitatory unit activities. The burst frequency was identified by applying the fast Fourier transform (FFT) to the LFP signal. To analyze high-frequency oscillations during the burst, the LFP signal was subjected to bandpass filtering between 50 and 400 Hz, followed by an application of FFT.
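The two-step spectral analysis can be sketched as follows; the sampling rate of 2 kHz matches the 0.5 ms time step, while the filter order is our choice.

```python
import numpy as np
from scipy import signal

def burst_spectra(lfp, fs=2000.0):
    """Peak frequency of the raw LFP (burst recurrence) and of the
    50-400 Hz band-passed LFP (high-frequency oscillation)."""
    freqs = np.fft.rfftfreq(lfp.size, d=1.0 / fs)
    burst_peak = freqs[np.argmax(np.abs(np.fft.rfft(lfp - lfp.mean())))]
    sos = signal.butter(4, [50.0, 400.0], btype="bandpass", fs=fs, output="sos")
    ripple = signal.sosfiltfilt(sos, lfp)           # zero-phase bandpass
    ripple_peak = freqs[np.argmax(np.abs(np.fft.rfft(ripple)))]
    return burst_peak, ripple_peak
```

On a synthetic LFP composed of a slow burst rhythm plus a ripple-band component, the two peaks separate cleanly.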
4.7 Theta Phase Precession
The LFP was calculated as the summation of the membrane potentials of all excitatory units. The LFP signal was then band-pass filtered (5–12 Hz), and a Hilbert transform was applied; the phase component was extracted from the transformed signal. Finally, linear regression was applied to the place-versus-phase points to estimate the slope.
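The pipeline can be sketched as below; `spike_idx` and `spike_pos` are hypothetical arrays of per-spike sample indices and animal positions, and the exact regression used in the paper (e.g., any circular handling of phase) may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import linregress

def precession_slope(lfp, spike_idx, spike_pos, fs=2000.0):
    """Theta phase at each spike, regressed against position.
    Returns the regression slope (rad per unit position)."""
    sos = butter(2, [5.0, 12.0], btype="bandpass", fs=fs, output="sos")
    theta = sosfiltfilt(sos, lfp)              # zero-phase theta-band filter
    phase = np.angle(hilbert(theta))           # instantaneous theta phase (rad)
    return linregress(spike_pos, phase[spike_idx]).slope
```

A negative slope then corresponds to phase precession: spikes occur at earlier theta phases as the animal advances through the place field.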
4.8 Evaluation of Theta Sequence
The theta sequence was evaluated using cross-correlograms of the spike trains of CA3 excitatory units 2 to 4. A positive lag indicates the spike order of (between unit and unit , ), and a negative lag represents the spike order of . For the statistical test, we accumulated histograms over all cell pairs and sessions and compared the mean over the −70 to 70 ms lag range with the value at 0 ms lag.
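A minimal cross-correlogram sketch (the bin width is our choice; spike times are in ms):

```python
import numpy as np

def cross_correlogram(spikes_i, spikes_j, max_lag=70.0, bin_ms=5.0):
    """Histogram of pairwise lags t_j - t_i within +/- max_lag ms.
    Positive lags mean unit i fired before unit j."""
    lags = (np.asarray(spikes_j)[None, :] - np.asarray(spikes_i)[:, None]).ravel()
    lags = lags[np.abs(lags) <= max_lag]
    bins = np.arange(-max_lag, max_lag + bin_ms, bin_ms)
    counts, edges = np.histogram(lags, bins=bins)
    return counts, edges
```

An asymmetry of the histogram toward positive lags then indicates a consistent i-before-j firing order within the theta cycle.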
4.9 Detection of Coactivation during Rest Sessions
The spike timing differences in resting sessions were collected for each unit pair (unit and unit , ). The positive lag indicates the spike order of , and the negative one represents the spike order of , except for Figure 6, where the positive lag of relearning sequence pairs indicates the spike order of place inputs (unit 8 → 1, 8 → 3, 8 → 6, 1 → 3, 1 → 6, and 3 → 6). These timing differences were grouped according to the units' activation status during the running session: both units in the pair were responding units, one unit was a responding unit while the other was not, or both were nonresponding units. In the analysis of additional learning and relearning, pairs of initial and new sequence sets were also plotted. The densities were calculated over a range of −600 to 600 ms.
4.10 Replay Sequence
Linear regression was performed between the differences in peak positions of place inputs during learning and the lags between spikes during replay after learning, within a lag range of −200 to 200 ms for the responding unit pairs. The unit that responded to the first place input was excluded.
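For example, with sklearn (the data values are hypothetical, chosen only to show the fit; the slope then quantifies the temporal compression of the replayed sequence):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: distance between place-input peaks during learning (cm)
# vs. spike lag during replay (ms), within the +/- 200 ms window.
place_diff = np.array([[1.0], [2.0], [3.0]])   # cm
replay_lag = np.array([20.0, 40.0, 60.0])      # ms

reg = LinearRegression().fit(place_diff, replay_lag)
slope_ms_per_cm = reg.coef_[0]                 # ms of replay lag per cm of place
```

A positive slope indicates forward replay in the learned order; a slope near zero would indicate coactivation without sequential structure.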
4.11 Statistical Analyses
This study was simulated and analyzed with Python scripts. The scipy.stats module and the sklearn.linear_model.LinearRegression class were used to conduct the paired t-tests and linear regressions, respectively.
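For instance, a paired t-test over matched measurements is a one-liner with scipy (the values below are hypothetical, e.g., a per-session statistic in the first versus the last trial):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired per-session measurements.
first = np.array([0.010, 0.020, 0.000, 0.015, 0.010])
last = np.array([0.040, 0.070, 0.035, 0.060, 0.050])
t_stat, p_value = ttest_rel(first, last)  # paired t-test on the differences
```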
4.12 Dynamical Analyses
Both eigenvalues must be negative for the equilibrium point to be stable, corresponding to and . If these conditions are not met, the equilibrium point is unstable.
The system had two equilibria when the constant input current was less than 24.5: the negative-side equilibrium was stable, while the positive-side one was unstable (Izhikevich, 2010). The equilibria disappeared in a saddle-node bifurcation above I = 24.5.
The equilibrium point was stable when and unstable when , as is always positive. The bifurcation point occurred at I = 73.7.
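The equilibrium analysis can be reproduced numerically: setting du/dt = 0 gives u* = b(v − v_r), and substituting into dv/dt = 0 leaves a quadratic in v whose roots are the equilibria; stability follows from the Jacobian eigenvalues. The parameter values below are assumed Izhikevich-style values for illustration, so the saddle-node here occurs at a different current than the paper's I = 24.5, which follows from the paper's exact parameter set.

```python
import numpy as np

def equilibria(I_ext, C=50.0, k=0.5, v_r=-60.0, v_t=-45.0, a=0.02, b=0.5):
    """Equilibria of the Izhikevich model for constant input I_ext, with
    stability from the Jacobian (stable iff both eigenvalues have
    negative real part). Returns a sorted list of (v, stable) pairs."""
    # u* = b (v - v_r); substituting into dv/dt = 0 gives the quadratic
    # k (v - v_r)(v - v_t) - b (v - v_r) + I_ext = 0.
    coeffs = [k, -k * (v_r + v_t) - b, k * v_r * v_t + b * v_r + I_ext]
    points = []
    for v in np.roots(coeffs):
        if abs(v.imag) > 1e-9:
            continue            # complex roots: equilibria gone (saddle-node)
        v = v.real
        J = np.array([[k * (2.0 * v - v_r - v_t) / C, -1.0 / C],
                      [a * b, -a]])
        stable = bool(np.all(np.linalg.eigvals(J).real < 0.0))
        points.append((v, stable))
    return sorted(points)
```

With these illustrative parameters, a small input current yields a stable lower equilibrium and an unstable upper one, and a sufficiently large current annihilates both, reproducing the saddle-node scenario described above.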
Code Availability Statement
The scripts for this study can be found in the Google Colaboratory: https://colab.research.google.com/drive/1mwl6ti3lAFtVwtfoZBPhdbSUXrUoCd7v?usp=sharing.
Appendix: Supplementary Figures
Acknowledgments
We thank Hiroyuki Miyawaki and Takuya Isomura for their valuable contributions to the discussion. This work was supported by JSPS KAKENHI (23H02788 to K.M.) and Takeda Science Foundation (K.M.).