CA3 Circuit Model Compressing Sequential Information in Theta Oscillation and Replay

Abstract The hippocampus plays a critical role in the compression and retrieval of sequential information. During wakefulness, it achieves this through theta phase precession and theta sequences. Subsequently, during periods of sleep or rest, the compressed information reactivates through sharp-wave ripple events, manifesting as memory replay. However, how these sequential neuronal activities are generated and how they store information about the external environment remain unknown. We developed a hippocampal cornu ammonis 3 (CA3) computational model based on anatomical and electrophysiological evidence from the biological CA3 circuit to address these questions. The model comprises theta rhythm inhibition, place input, and plastic CA3-CA3 recurrent connections. The model can compress the sequence of external inputs, reproduce theta phase precession and replay, learn additional sequences, and reorganize previously learned sequences. A gradual increase in synaptic inputs, controlled by interactions between theta-paced inhibition and place inputs, explained the mechanism of sequence acquisition. This model highlights the crucial role of plasticity in the CA3 recurrent connections and theta oscillatory dynamics, and it suggests how the CA3 circuit acquires, compresses, and replays sequential information.


Introduction
The hippocampus contributes to episodic memory formation (Gordon, 1988; Scoville & Milner, 1957). Rodent spatial memory has been extensively studied in animal models to elucidate the mechanisms of episodic memory (Buzsáki & Moser, 2013; O'Keefe & Dostrovsky, 1971; O'Keefe & Nadel, 1978; Tolman, 1948). In the hippocampus, place information is represented as a phase code within theta cycles of the local field potential (LFP) through a phenomenon known as theta phase precession, in which the firing timings of individual neurons within theta oscillations advance as an animal walks through the neurons' place fields (O'Keefe & Recce, 1993; Skaggs et al., 1996). Moreover, sequential activity is manifested as the firing order within a theta cycle, forming theta sequences (Dragoi & Buzsáki, 2006; Foster & Wilson, 2007). These sequential activities are replayed during sharp-wave ripples (SWRs) in sleep and quiet rest (Diba & Buzsáki, 2007; Foster & Wilson, 2006). It is hypothesized that sequential activity serves to compress temporal relationships within the external world, transforming them from the scale of several seconds into neural sequences operating at the scale of tens of milliseconds. This compression facilitates efficient processing, storage, and retrieval of information while also integrating such information into cognitive processes, including memory consolidation, abstraction, planning, and inference (Buzsáki, 2015; Drieu & Zugaro, 2019).
The hippocampus can be divided into several subregions, primarily the dentate gyrus (DG) and cornu ammonis 1 and 3 (CA1 and CA3), each of which plays an essential and distinct role in episodic and place memory. In particular, the CA3 is integral to generating sequential activity. The CA3 is a starting point for ripple (Buzsáki, 1986; Buzsáki et al., 1983; Suzuki & Smith, 1988) and replay (Middleton & McHugh, 2016) activities. The CA3 region receives input from the DG and sends output to the CA1. Input from the DG to the CA3 is extremely sparse; however, each input exhibits substantial strength (Henze et al., 2000, 2002). There are more recurrent connections between the excitatory neurons in the CA3 than in the DG and CA1. Synapses between CA3 excitatory neurons are weaker, but have a higher connection probability, than synapses from the DG to CA3 excitatory neurons. Both DG-CA3 and CA3-CA3 synapses exhibit short- and long-term plasticity (Rebola et al., 2017). These anatomical features of CA3 circuits seem optimal for sequence learning (Banino et al., 2018; Jaeger & Haas, 2004; Laje & Buonomano, 2013; Sussillo & Abbott, 2009; Wang et al., 2018).
GABAergic neurons of the medial septum (MS) provide theta-paced inhibition to hippocampal interneurons, which in turn control the firing timing of the excitatory neurons (Freund & Antal, 1988; King et al., 1998; Petsche et al., 1962; Zutshi et al., 2018). Inhibitory neurons also receive input from nearby excitatory neurons and provide them with feedback inhibition (Rebola et al., 2017).
Several computational models for generating theta phase precession (Chadwick et al., 2016; Geisler et al., 2010) and compressing sequences (Ecker et al., 2022; Jahnke et al., 2015; Nicola & Clopath, 2019; Reifenstein et al., 2021) have been proposed. However, no model has provided a unified explanation for both phenomena. Therefore, this study focused on the CA3, which has many recurrent connections and is a source of SWRs, as a critical region for sequence compression, and aimed to elucidate how the CA3 encodes and replays an external place sequence.

Results
We developed a minimal CA3 neural network based on anatomical evidence (Andersen et al., 2006; Rebola et al., 2017) to understand the fundamental mechanisms by which the CA3 recurrent circuit compresses sequential information (see Figure 1A). Using a minimal model allowed for a detailed examination of the dynamics of individual neuronal units and their interactions (Chadwick et al., 2016; Izhikevich, 2010; Jensen et al., 1996; Reifenstein et al., 2021). The network comprises eight excitatory and one inhibitory neuronal units. The excitatory neuronal units receive inputs from the DG, CA3 recurrent synapses, and the inhibitory unit. In a biological brain, the input from the DG is both sparse and strong (Henze et al., 2000, 2002). The DG activity is also sparse, demarcating the input pattern more clearly, a property known as pattern separation (Yassa & Stark, 2011). To simplify the network, we assumed that each CA3 excitatory unit received only one DG input, although the synaptic weight was large (see Figure 1A). Half of the DG inputs had place fields (in the order of units 1-2-3-4), and the others were silent (see Figure 1B). The CA3 excitatory units were connected to each other, and the recurrent synapses were plastic. The CA3 excitatory units also received feedback inhibition from the CA3 inhibitory unit. The inhibitory unit received excitatory inputs from the CA3 excitatory units and theta-modulated inhibitory inputs from the MS (Freund & Antal, 1988; King et al., 1998; Petsche et al., 1962; Zutshi et al., 2018; see Figures 1A and 1C). We tested whether the network model can learn sequential activity by experiencing the place order and whether it can replay the sequence without external inputs.
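The wiring just described can be sketched in a few lines of NumPy. The unit counts follow the text, but all weight values, the theta amplitude, and the random seed are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

N_EXC, N_INH = 8, 1          # eight excitatory units, one inhibitory unit
N_PLACE = 4                  # units 1-4 receive place-field input from the DG

rng = np.random.default_rng(0)

# One strong, sparse DG input per excitatory unit (diagonal mapping);
# only the first N_PLACE DG inputs carry place fields, the rest are silent.
W_dg = np.eye(N_EXC) * 20.0           # illustrative weight value

# Plastic all-to-all CA3-CA3 recurrent weights, no self-connections.
W_rec = rng.uniform(0.0, 1.0, (N_EXC, N_EXC))
np.fill_diagonal(W_rec, 0.0)

# Feedback inhibition: every excitatory unit projects to the inhibitory
# unit, which inhibits all excitatory units (illustrative weights).
w_ei = np.full(N_EXC, 2.0)            # E -> I
w_ie = np.full(N_EXC, 5.0)            # I -> E

def theta_inhibition(t, f=7.0, amp=50.0):
    """MS -> CA3 inhibitory drive, modulated at theta frequency (7 Hz)."""
    return amp * (1.0 + np.cos(2.0 * np.pi * f * t)) / 2.0
```

A simulation loop would then drive each excitatory unit with its row of `W_dg`, the recurrent input through `W_rec`, and inhibition gated by `theta_inhibition`.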
In this model, we set plasticity only in the CA3 recurrent synapses, not in the DG-CA3 synapses, to simplify the learning process. We selected the CA3 recurrent synapses because recurrent networks such as CA3-CA3 connections are suitable for learning sequential activities (Banino et al., 2018; Jaeger & Haas, 2004; Laje & Buonomano, 2013; Sussillo & Abbott, 2009; Wang et al., 2018). A Hebbian plasticity rule was incorporated in this study. Activated synapses are tagged for much longer than membrane potential changes last (referred to as the plasticity-related factor; see section 4.3) (Chang et al., 2019; Rogerson et al., 2014), which makes sequence learning with the Hebbian rule more efficient (Reifenstein et al., 2021). Furthermore, when inputs from the DG and another CA3 neuron arrive simultaneously, the neuronal response is considerably higher than the simple summation of the responses to the two inputs, a property known as superlinear coincidence detection (Brandalise & Gerber, 2014; London & Häusser, 2005). We implemented these synaptic properties in the model (see section 4.3). We also added a regularization term for the synaptic weights (maintaining the summation of postsynaptic weights at a constant value; see section 4.3) to prevent overexcitation. Self-connections in the CA3-CA3 recurrent synapses were excluded. The model was assumed to run on a linear track at constant velocity for 10 trials (running session). During the running session, plasticity at the CA3-CA3 recurrent synapses modified the synaptic weights. Figures 1D to 1F show CA3 unit spikes (see Figures 1D and 1E) and membrane potentials (see Figure 1F) of both excitatory and inhibitory units in the first, middle, and last trials. Notably, as the trials progressed, firing started earlier than in the previous trials, which is consistent with a previous experimental report (Mehta et al., 2000). The emergence of new spikes coincided with the synaptic weight modifications in the CA3-CA3 recurrent synapses.
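A minimal sketch of this learning rule, assuming a slow exponential trace for the plasticity-related factor, a multiplicative boost for DG coincidence, and row-wise weight normalization as the regularization. The function names and all constants (`tau`, `eta`, `cd_gain`) are hypothetical stand-ins, not the paper's section 4.3 definitions.

```python
import numpy as np

def update_traces(trace, spikes, dt, tau=2.0):
    """Plasticity-related factor: a slow exponential trace of each unit's
    spiking that outlives membrane-potential changes (tau in seconds)."""
    return trace + dt * (-trace / tau) + spikes

def hebbian_step(W, pre_trace, post_spikes, dg_spikes, eta=0.01, cd_gain=3.0):
    """One learning step: potentiate synapses whose presynaptic trace is
    active when the postsynaptic unit fires; coincident DG input boosts the
    update superlinearly. Each row (incoming weights of one postsynaptic
    unit) is then rescaled so its sum stays constant (regularization), and
    self-connections are kept at zero."""
    coincidence = 1.0 + cd_gain * dg_spikes          # superlinear coincidence boost
    dW = eta * np.outer(post_spikes * coincidence, pre_trace)
    np.fill_diagonal(dW, 0.0)                        # no self-connections
    W_new = W + dW
    row_sums = W_new.sum(axis=1, keepdims=True)
    target = W.sum(axis=1, keepdims=True)            # keep summed input weight fixed
    W_new *= np.where(row_sums > 0, target / row_sums, 1.0)
    np.fill_diagonal(W_new, 0.0)
    return W_new
```

With this rule, a synapse from an earlier-firing unit onto a currently firing unit grows, while the other incoming weights of that unit shrink through the normalization, matching the competition described in the text.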
Synaptic weights between the neuronal units that received adjacent place inputs were increased by Hebbian plasticity, while those of other synapses onto the same postsynaptic unit were decreased by the regularization. The synaptic weights in the forward order (from earlier to later activated units) were more potentiated than those in the reverse order (from later to earlier activated units; see Figures 2A and 2B). The time course of synaptic weight potentiation differed between the order directions (see Figure 2C). These plastic changes in synaptic weights accelerated the timing of the spikes in Figures 1D to 1F.
This model demonstrated theta oscillation as a population signal (see Figures 3A and 3B) and a theta phase precession-like firing pattern in the last trial (see Figures 3D to 3F), except in the unit receiving the first place-field input (see Figure 3D, blue dots, and Figures 3E and 3F, unit 1). Conversely, the theta phase precession-like firing did not appear in the first trial (see Figures 3C, 3E, and 3F). During the last trial, the excitatory units that received place inputs began firing at earlier positions in the late phase than in the first trial, resulting in a negative regression slope in units 2 to 4 (see Figures 3C to 3F; two-tailed paired t-test, n = 10; unit 1, p = 0.005, t = 3.667; unit 2, p = 0.003, t = −4.083; unit 3, p = 0.001, t = −4.704; unit 4, p = 0.006, t = −3.529; see Figure 3F). To verify the theta sequence, we examined the cross-correlograms among the spike trains of CA3 excitatory units 2 to 4 (see Figure 3G). We excluded unit 1 from this analysis because of its distinct theta phase precession pattern compared with the other units. The peak lags within a theta cycle shifted from 0 ms to positive lags after learning, suggesting that theta sequences were established (two-tailed, one-sample t-test, p < 0.001, t = 9.82; see Figure 3H, right), whereas no such shift was observed before learning (two-tailed, one-sample t-test, p = 0.23, t = −1.21; see Figure 3H, left). The enhanced synaptic weights of the recurrent connections, shown in Figure 2, caused these additional spikes. The model thus generated the theta phase precession pattern by learning the sequence.
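The peak-lag measure underlying this theta-sequence test (Figures 3G and 3H) can be sketched with plain NumPy; the bin width and the lag window (roughly half a 7 Hz theta cycle) are illustrative choices, not the paper's exact analysis parameters.

```python
import numpy as np

def crosscorr_peak_lag(spikes_a, spikes_b, max_lag=0.07, bin_width=0.005):
    """Cross-correlogram peak lag between two spike trains (times in s).
    Histograms all pairwise lags (t_b - t_a) within +/- max_lag and
    returns the lag of the tallest bin; a positive value means train b
    tends to fire after train a (a leads b in the sequence)."""
    lags = np.subtract.outer(np.asarray(spikes_b), np.asarray(spikes_a)).ravel()
    lags = lags[np.abs(lags) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(lags, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[np.argmax(counts)]
```

After learning, a unit-2 train that consistently leads unit 3 by ~20 ms within each theta cycle yields a positive peak lag, as in Figure 3H (right).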
Next, we tested whether the model could replay the sequential activity without external place inputs (see Figure 4). To simulate a resting state, we added a noise input independently to each neuronal unit, cut off the external place and theta inputs, and shut off plasticity in the recurrent connections (resting session; see section 4.4). After learning, the noise input occasionally induced random spikes, followed by sequential spikes from the associated units (see Figure 4B). Notably, this phenomenon was not observed before learning (see Figure 4A). The lags between spikes among units that had received place inputs (responding units) were smaller than those among other unit pairs after learning (see Figure 4D) but not before learning (see Figure 4C). By running multiple sessions for statistical testing, we confirmed that the densities of spike pairs within a 100 ms lag were significantly higher among responding unit pairs (two-tailed paired t-test, n = 10, p < 0.001, t = 20.4; Figure 4E, upper) but not among other types of pairs (p = 0.98, t = 0.024, Figure 4E, middle; p = 0.35, t = 0.97, Figure 4E, lower). The differences in the peak positions of place inputs during learning and the lags between spikes during replay after learning for the responding unit pairs were significantly positively correlated in the −200 to 200 ms lag range (see Figure 4F, right; linear regression: n = 104, slope = 47.42, intercept = 4.43, r = 0.44, p < 0.001), indicating that this model mainly exhibited forward replay. The emergence of replay was also due to the strengthening of synaptic weights between neighboring place-field units (see Figure 2). The model successfully reproduced replay spike sequences after learning the place sequence.
We tested the ability of the model to learn an additional sequence (order of units 4-5-6-7; Figure 5A) following the initial sequence learning (see Figures 1-4). The first DG input of the new sequence matched the last one of the initial sequence set (unit 4; Figure 1B, red line), while the other DG inputs differed from those in the initial sequence set. As in the initial sequence learning, the model demonstrated a shift in spike timing across trials in the additional learning stage (see Figure 5B). The weights of the pairs within the additional sequence were strengthened, while the weights within the initial sequence were maintained to some extent (see Figure 5C). In addition, the model exhibited theta phase precession in the units that had learned the additional sequence (see Figure 5D). During the resting session, the model displayed replay-like activity of the additional sequence set, mainly in the forward order (slope = 54.2, intercept = −10.7, r = 0.43, p < 0.001, n = 122), and of the initial sequence set, although without a preserved order direction (slope = −36.8, intercept = 70.3, r = −0.26, p = 0.11, n = 40) (see Figures 5E to 5G). The synaptic weights among the units of the initial sequence were reduced after the additional learning (see supplementary Figure 1A in the appendix). These results suggest that the model can learn additional sequences while retaining the associative information of the initial sequence.
We also examined the ability of the model to relearn a new sequence that mixed previous and new place fields (order of units 8-1-3-6; see Figure 6A) after the initial sequence learning (see Figures 1 to 4). The second and third place inputs in the relearning sequence set were the same as the first and third place inputs (units 1 and 3, respectively) in the initial sequence set. During the earlier trials, the units of the initial sequence set fired (see the yellow and red dots in Figure 6B, left), although these activities disappeared after relearning (see Figure 6B, right). The synaptic weights underwent reorganization to accommodate the new relearning sequence (see Figure 6C). The connections within the initial sequence set remained, although their synaptic weights were weakened. Theta phase precession of the relearning sequence was observed (see Figure 6D). During the resting session, the model showed replay-like activity of the relearning sequence set in the forward order (slope = 72.1, intercept = −27.6, r = 0.63, p < 0.001, n = 80) and of the initial sequence set without a preserved order direction (slope = 22.1, intercept = −0.54, r = 0.17, p = 0.114, n = 86; see Figures 6E to 6G). These results indicate that the model can adaptively reorganize new sequence sets.
So far, we have examined the minimal network model, in which synaptic connections were predetermined. Notably, in the actual CA3 circuit, many more neurons are involved, and synaptic connections are probabilistic. To recapitulate these features in a more realistic simulation, we constructed a large model using the Brian 2 simulator (Stimberg et al., 2019). Similar to the small network, this model received two types of external input: place input from the DG to the CA3 excitatory units (see Figure 7A) and theta inhibitory input from the MS to the CA3 inhibitory units (see Figure 7B). A tiny population of CA3 excitatory units was connected with, and made to spike by, the DG place inputs (probability of a DG-CA3 excitatory connection p_pe = 0.005), while all CA3 inhibitory units showed theta-locked burst spikes (see Figure 7C). The CA3-CA3 recurrent synaptic connections were also probabilistic (probability of a recurrent connection p_rec = 0.1) but much denser than the DG-CA3 synapses, and plasticity was implemented using the Hebbian rule. We selected the excitatory units that had received DG place inputs (referred to as responding units) to observe the formation of the sequential activity (see Figure 7D). Some units exhibited an earlier onset of activity during the last trial due to the learning (see Figure 7E), and some synaptic weights between the responding units were enhanced (see Figure 7F). Theta phase precession was also observed in the units after learning (see Figures 7G and 7H). The model demonstrated replay activity during the resting session (see Figures 7J to 7L). Thus, the results produced by the large network model were consistent with those of the small network model.
We analyzed the results of additional sequence learning in the large-scale model (see supplementary Figures 1B to 1E). As in the small model, replays were observed, but their directions were not unidirectional in either the initial or the additional sequence set (see supplementary Figures 1B and 1C). In the small model, the reduction of the synaptic weights could cause the loss of replay directionality in the initial sequence after the additional sequence learning (see Figure 5G and supplementary Figure 1A). However, the synaptic weights among the responding units in the large model were not reduced after additional learning (see supplementary Figures 1D and 1E), in contrast to those in the small model. Thus, probabilistic sparse synaptic connectivity can induce bidirectional replay and prevent memory interference.
During the resting session, the model exhibited synchronized activities among the excitatory units (see Figure 7I). To verify this, we analyzed the population signal, which revealed a synchronization frequency of approximately 5 Hz (see supplementary Figures 2A to 2C). Additionally, we analyzed high-frequency oscillations during the bursts but did not identify clear peaks in the power spectrum within the gamma or ripple range (50-400 Hz) in the population signal (see supplementary Figures 2D to 2F).
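A population-signal spectrum of this kind (summing excitatory membrane potentials and searching for the ~5 Hz synchronization peak or gamma/ripple-band power) can be computed with a plain FFT. This sketch assumes `v_exc` is a units-by-time array of membrane potentials; the exact preprocessing used in the paper is not specified here.

```python
import numpy as np

def population_spectrum(v_exc, dt):
    """Power spectrum of the population signal.
    v_exc: array of shape (n_units, n_samples), membrane potentials.
    dt: sampling interval in seconds.
    Returns (freqs in Hz, power) for the mean-subtracted summed signal."""
    sig = v_exc.sum(axis=0)
    sig = sig - sig.mean()                    # remove DC before the FFT
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=dt)
    return freqs, power
```

A peak near 5 Hz in `power` would correspond to the synchronization reported in supplementary Figures 2A to 2C, while a flat 50-400 Hz band matches the absence of gamma/ripple peaks in 2D to 2F.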
Next, we analyzed the dynamics of the CA3 excitatory and inhibitory units when the external inputs (the place input I_p^exc and the theta input I_th^inh) were supplied. We used the parameters of the excitatory and inhibitory units described by Izhikevich (2010). The dynamics of the Izhikevich model can be depicted as a phase portrait in the v-u coordinate plane, which helps visualize the behavior of the model over time (see Figures 8A and 8B). Nonlinear neural models, such as the Izhikevich model, exhibit bifurcations (see Figures 8C and 8D). These bifurcation points are crucial for describing the activity of the neural model. The excitatory units had a stable point (see Figure 8A, black circle) in the absence of external input (see Figure 8A, solid black line). A stable point indicates that the neuron remains in a steady state and does not generate action potentials. If the external input I exceeds the bifurcation point I = 24.5 (see Figure 8C) with the DG place input, the stable point is lost and bursting occurs (saddle-node bifurcation; e.g., see Figure 8A, dotted black line). The stable point reappeared when the external input decreased, and the unit fell silent (e.g., see Figure 8A, solid black line and black circle). When inhibitory input was received from the CA3 inhibitory unit, the quadratic v-nullcline shifted in the negative direction along the u-axis, resulting in a delay in firing (if the stable point did not appear; e.g., see Figure 8A, dotted black line) or a complete cessation of firing (if the stable point had appeared; e.g., see Figure 8A, solid black line and black circle). The outcome depended on the magnitude of the inhibitory input, which was determined by the theta phase. These dynamics mirror the behavior of the excitatory units in the simulated scenarios discussed in the previous sections.
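The saddle-node behavior can be illustrated with a minimal Izhikevich simulation. One caveat: the bifurcation value I = 24.5 belongs to the paper's own parameter set; with the textbook regular-spiking parameters used below (a = 0.02, b = 0.2, c = −65, d = 8), the saddle-node lies near I = 4, so this sketch is qualitative only — silence below the bifurcation, tonic firing above it.

```python
def izhikevich_spike_count(I, T_ms=1000.0, dt=0.25,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich model integrated with forward Euler (time in ms):
        dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        du/dt = a (b v - u)
    with reset v -> c, u -> u + d whenever v >= 30 mV.
    Returns the number of spikes over T_ms at constant input I."""
    v, u = c, b * c                      # start at rest
    count = 0
    for _ in range(int(T_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: reset and count
            v, u = c, u + d
            count += 1
    return count
```

Below the saddle-node a stable point exists and the trajectory settles onto it (no spikes); above it, the stable and saddle points merge and disappear, so the membrane potential drifts through the nullcline gap and fires repeatedly, exactly the transition described for the excitatory units.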
Without the MS theta inhibitory input (baseline, I = 100), the inhibitory unit had an unstable point and exhibited burst firing (see Figure 8B, solid black line and white circle). When the inhibitory theta input from the MS was received and the external input fell below I = 73.7 (see Figure 8D), the unstable point became stable (Andronov-Hopf bifurcation; e.g., see Figure 8B, dotted black line and black circle). When the point was stable, the dynamics of the unit converged on it and firing stopped. However, the stable point became unstable again during the theta phases at which the MS inhibitory input was weak, resulting in the reemergence of burst firing.
The amplitude of the external input affected the spike timing of the excitatory units in the theta cycle (see Figures 9A to 9D). The MS inhibitory input controlled the theta cycle (see Figure 9A). As described in the previous sections, the CA3 excitatory unit did not spike without the external DG place input (see Figure 8A, solid line, and Figure 9C, c). When the external input amplitude increased slightly above the bifurcation point, the units began firing late in the theta cycle (see Figure 9C, a). With a small amplitude of the external input, the transition speed around the closest area between the v- and u-nullclines was very slow (see Figure 9C, a, lower). As the external input amplitude increased, the equilibrium disappeared, the transition speed around that area increased, the firing timing in the theta cycle started earlier, and the unit began to burst (see Figure 9C, b). The firings were restricted and reset by inhibitory inputs from the CA3 inhibitory unit, which spiked at specific phases of the theta cycle (see Figure 9D). This resetting by the inhibitory input aligned the start timings of the state in the v-u coordinates of each excitatory unit with the theta cycle.

[Figure 9, panels E and F caption: (E) Before learning, the firing timings of a CA3 excitatory unit (orange shadow and line) on the phase-place (time) plane depend on place input strength, displaying a symmetric shape. Synaptic potentiation occurs (black arrow) with the current unit (orange shadow and line) responding to the place inputs from the previous-place and next-place units (blue and green lines, respectively). (F) The synaptic potentiation takes effect slightly later (colored arrows) than the firing of the presynaptic units (purple and yellow shadows), either accelerating the spike timing or inducing spikes in an additional theta cycle. The difference in plasticity-related factor accumulation between the previous-place (blue opaque shadow) and next-place (green opaque shadow) units, and the difference in overlapping area between the current-place unit (orange) and the previous-place (purple shadow) versus next-place (yellow shadow) units, result in a stronger effect from the previous-place unit, leading to an asymmetric precession shape (orange lines). Synaptic potentiation also affects replay.]
The theta-paced inhibitory input controls the spike timings of the CA3 excitatory units; however, under this control alone the theta phase precession pattern should be symmetric (no bias in the phase position of spikes before versus after the peak position), because it depends solely on place input strength, which is symmetric in this model. Thus, additional synaptic interactions must shape the negative-slope precession pattern. Before learning, the theta phase precession pattern of a CA3 excitatory unit was symmetric (see Figures 3C and 9E, orange shadow; this unit is referred to as the current-place unit). The learning process enhanced the recurrent synaptic weights (see Figure 2A), connecting the current-place unit with the units representing the places that were previously traversed and those that will be encountered soon (see Figure 9E, blue and green lines, referred to as the previous-place and next-place units, respectively). The differences in firing timing between the current-place and previous-place units, and between the current-place and next-place units, were also symmetric (see Figure 9E, arrows). Synaptic potentiation increases the input amplitude and advances the spike timing of the postsynaptic (current-place) unit within a theta cycle, while also causing it to spike in the late phase of the preceding theta cycle. The effect of synaptic input on the postsynaptic (current-place) unit occurred slightly later than the presynaptic (previous- and next-place) spike timings, owing to a delay in the accumulation of external inputs and in the state transition that triggers firing of the postsynaptic unit. This creates an asymmetric effect in which the previous-place unit has more time to overlap with the activated timing of the current-place unit than the next-place unit does (see Figure 9F, purple-orange and yellow-orange shadow overlapping areas).
Furthermore, the previous-place unit interacts with the current-place unit during the latter part of the previous-place unit's activity, whereas the next-place unit interacts during the earlier part of its own activity. This causes the plasticity-related factor (see section 4.3) to accumulate more for the previous-place unit, resulting in greater synaptic potentiation at the synapse from the previous-place to the current-place unit than at that from the next-place to the current-place unit (see Figures 2C and 9F, blue and green opaque shadows, respectively). The absence of a previous-place unit for unit 1 likely accounts for its atypical precession shown in Figure 3. These mechanisms lead to an asymmetric theta phase precession pattern. Additionally, the strengthened synapses induced sequential spikes in response to random noise input in the resting session, resulting in replays after place learning in our model.
As supplemental simulations, we tested models in which elements of the original small model were partially altered (see section 4.5) to evaluate which aspects of the original small model are involved in theta phase precession and replay. A model with self-connections in the CA3 excitatory units (self-connection model) displayed theta phase precession. However, unlike the original model, it did not demonstrate sequential replay (see supplementary Figures 3A to 3G). Compared with the original model, the self-connection model showed a relative weakening of synaptic connections with other units, accompanied by an enhancement of the self-connected synaptic weights (see supplementary Figure 3H). In the large model, self-connected units occurred probabilistically (p_rec = 0.1), and we compared the units with self-connections against the units without them. These self-connected units displayed theta phase precession but not sequential replay (see supplementary Figures 3I and 3J). Synaptic weights, excluding the self-connections, in the self-connected units were smaller than those in the non-self-connected units (see supplementary Figure 3K).
We also simulated models with altered coincidence detection rules. A model without coincidence detection (nonCD model) weakened the synaptic weights and did not exhibit theta phase precession or replays (see supplementary Figures 4A to 4F). Furthermore, a model with coincidence detection based on membrane potential (mpCD model) weakened the synaptic weights and exhibited unclear theta phase precession and replay (see supplementary Figures 5A to 5F). These deficits were partially recovered by increasing the learning rate or the number of trials (see supplementary Figures 4G to 4L and 5G to 5L).
The firing timing difference between DG inputs and CA3 excitatory neurons is crucial for coincidence detection (Brandalise & Gerber, 2014). Thus, we modified the DG inputs from continuous values to probabilistic spikes generated by a Poisson process (see the spiking DG input model in supplementary Figure 6). Although randomizing the DG input made the theta phase precession and replay patterns more ambiguous, the synaptic connections still formed patterns similar to those in the original model.
To better assess the roles of synaptic plasticity, we simulated models with different plasticity rules. With both standard spike-timing-dependent plasticity (STDP; Bi & Poo, 1998; standard STDP model) and symmetric STDP (Mishra et al., 2016; symmetric STDP model), the patterns of theta phase precession and replay were altered (see supplementary Figure 7). Furthermore, a model employing a short time constant for the plasticity-related factor in the original learning rule (short learning time constant model) also failed to exhibit theta phase precession and directional replays (see supplementary Figure 8). A model without the theta input from the MS (non-theta input model) still displayed replay but not theta phase precession (see supplementary Figure 9). Finally, we randomized the order of the place inputs (random DG input model); the theta phase precession patterns and the directionality of replays were eliminated, although the synaptic connections still formed a cluster within the responding units (see supplementary Figure 10).

Discussion
In this study, we developed a neural circuit model based on anatomical evidence from biological CA3 circuits. The model could learn and compress the order of external inputs from sparse DG projections using a Hebbian rule and displayed theta phase precession and replay-like activity. The model could also perform additional learning and relearning. In this computational model, both theta phase precession and replay were generated by a shared underlying mechanism, namely, synaptic enhancement of the CA3 recurrent connections. Synaptic modulation is governed by the interplay between theta rhythm inhibition and the strength and temporal order of the place inputs. These findings underscore the pivotal role of CA3 circuits in the acquisition of sequence information, significantly enriching our understanding of the mechanistic underpinnings of memory encoding.
Sequence learning within the hippocampal region remains a much-debated topic (Drieu & Zugaro, 2019). This study primarily hypothesized that the dense recurrent connections in the CA3 play a pivotal role in sequence learning, with supporting evidence from both anatomy and electrophysiology, such as its role as an SWR source (Buzsáki, 2015; Hájos et al., 2013; Rebola et al., 2017; Schlingloff et al., 2014). The potential of the CA3 as the source of replay is reinforced by observations that it influences SWR events in the CA1 (Buzsáki, 2015; Csicsvari et al., 2000) and that disruptions of its function affect CA1 replay (Davoudi & Foster, 2019; Guan et al., 2021; Middleton & McHugh, 2016; Yamamoto & Tonegawa, 2017). However, the CA1 and CA2 are also candidates for sequence learning. The CA1, for instance, has a unique connectivity profile; each CA1 neuron receives approximately 5,000 inputs from numerous CA3 pyramidal neurons (Amaral et al., 1990). Additionally, it receives dual inputs from both the entorhinal cortex (EC) and the CA3, which deliver their maximal input at distinct theta phases (Mizuseki et al., 2009), thereby distinctly influencing CA1 theta phase precession (Fernández-Ruiz et al., 2017). Although the intrinsic network of the CA1 is less recurrent than that of the CA3, it can generate spiking sequences when optogenetic stimulation is applied to the local circuit (Stark et al., 2014). The CA2, whose circuitry resembles that of the CA3, is also a notable SWR source (Oliva et al., 2016). The distinct advantage of the CA3 might stem from its sparse yet potent inputs from the DG, an aspect absent in the CA2 (Tamamaki et al., 1988). While all hippocampal subregions contribute to sequence learning, the CA3, owing to its unique features, seems to play a leading role.
We intentionally excluded EC inputs to simplify the model, although they are known to influence hippocampal activity (Chenani et al., 2019; Fernández-Ruiz et al., 2017; Schlesiger et al., 2015; Yamamoto & Tonegawa, 2017). It should be noted that EC inputs and CA3 recurrent activity show distinct theta phase preferences (Fernández-Ruiz et al., 2017; Mizuseki et al., 2009) and that synaptic modification occurs between presynaptic EC neurons and postsynaptic CA3 neurons (McMahon & Barrionuevo, 2002; Tsukamoto et al., 2003). Moreover, even our larger model remains significantly smaller than the actual CA3 circuit. In our models, a CA3 cell receives only one or a few DG inputs, whereas approximately 50 DG cells are estimated to project to a single CA3 pyramidal cell (Rebola et al., 2017). Additionally, the place representation in the CA3 is distinct and not merely a replica of that in the DG (Senzai & Buzsáki, 2017). Future models incorporating diverse inputs and more accurate network scales would provide a more comprehensive understanding of hippocampal dynamics.
Models that explain theta phase precession can be divided into three main categories (Drieu & Zugaro, 2019): the detuned oscillators model (Geisler et al., 2010; O'Keefe & Recce, 1993), the somato-dendritic interference model (Harris et al., 2002; Kamondi et al., 1998; Mehta et al., 2002), and the network connectivity model (Jensen & Lisman, 1996; Romani & Tsodyks, 2015; Skaggs et al., 1996; Tsodyks et al., 1996). The detuned oscillators model posits that theta phase precession arises from a single place cell oscillating at a slightly faster frequency than the population-signal (LFP) theta oscillation, with the two oscillators gradually shifting apart due to their frequency difference. Our findings do not support this model because the frequency of the LFP oscillations (summation of all excitatory membrane potentials) was identical to that of the theta input (7 Hz) to individual neuronal units and exhibited no signs of slowing down (see Figure 3B).
In contrast, the somato-dendritic interference model suggests that a combination of oscillatory somatic inhibition and transient ramp-like dendritic excitation determines action potential timing. Our results supported this model because the place input strength dictated the spike timing in space without recurrent inputs (see Figures 8 and 9). Furthermore, the expansion of the place fields (see Figures 1D to 1F) also supports the development of theta phase precession under the somato-dendritic interference model (Kamondi et al., 1998; Mehta et al., 2000, 2002). Network connectivity models propose that theta phase precession results from transmission delays between asymmetrically connected neurons (Jensen & Lisman, 1996; Romani & Tsodyks, 2015; Skaggs et al., 1996; Tsodyks et al., 1996). Our results align with those obtained in these models, as our model exhibited additional spikes in the late phase of the early theta cycle after learning (see Figures 1 to 3). These findings highlight the potential roles of somato-dendritic interference and network connectivity in theta phase precession, although our model does not exhaustively explore these mechanisms.
In our model, theta phase precession gradually formed through learning, consistent with the results of Mehta et al. (2002). However, this does not align with the results of a previous study (Feng et al., 2015), which reported that theta phase precession of individual place cells was present from the initial trial and became more aligned across cells as the number of trials increased, forming theta sequences. Furthermore, it has been shown that theta phase precession (and the consequent theta sequence) enhances sequence learning (George et al., 2023; Molter et al., 2007; Reifenstein et al., 2021; Theodoni et al., 2018). This reverses the causality suggested by our findings, which indicated that theta phase precession occurs as a result of learning. In this study, our model assumed nearly uniform membrane properties and synaptic connections in its initial state. By introducing variability in these values, such as distinct cellular properties and prewired connections for each cell, it might be possible to reproduce the results of Feng et al. (2015) within our model. Another possibility is that the difference arises from the distinct characteristics of CA3 and CA1, as these reports focus on CA1. Whether sequence learning precedes theta phase precession or vice versa is still a subject of debate.
Models similar to ours include those designed by Jensen et al. (1996), Jensen and Lisman (1996), Ecker et al. (2022), and Cona and Ursino (2015). Ours and these other models learned the order in which place cells are active in the recurrent connection using the Hebbian rule or similar learning rules (symmetric STDP; Mishra et al., 2016). As a result of learning, all models had a nongaussian, asymmetric distribution of synapse weights. Experimental results from biological brains also suggest the occurrence of similar phenomena (Choi et al., 2018). However, several distinctive features of our model set it apart. First, in contrast to the other models, our model included theta input as inhibitory input to inhibitory units. This is supported by optogenetic experiments showing that theta rhythm input is mediated by inhibitory projections from the MS to PV-positive inhibitory neurons in the hippocampus (Zutshi et al., 2018). Second, our model also reproduced the decrease in inhibitory neuron activity during sleep or rest (Alfonsa et al., 2022; Miyawaki & Diba, 2016; Mizuseki & Buzsáki, 2013), which leads to increased random firing and the potential induction of replay during resting sessions. Third, Jensen et al. (1996) focused on theta phase precession (with no mention of replay), and Ecker et al.
(2022) demonstrated the occurrence of SWR-like activity and the accompanying forward and reverse replay (with no mention of theta phase precession). Cona and Ursino (2015) did not assume MS theta-paced inhibitory input (the theta oscillation of their model was generated by local recurrent interaction of GABA-B-like slow inhibition). Our model aimed to reproduce both the characteristic theta phase precession and replay activity in the hippocampus through learning in the CA3 local circuit and the interaction between place inputs and theta-paced inhibition. These unique aspects of our model provide new insights and a more comprehensive understanding of the CA3 circuit's role in theta phase precession and replay activity in the hippocampus.
In our study, we could detect neither clear gamma oscillation nor SWR in the population signal (see supplementary Figure 2). Recurrent connections among inhibitory units, which were missing in our model, can be critical for generating gamma- or ripple-band oscillations (Ecker et al., 2022; Tiesinga & Sejnowski, 2009).
In the self-connection model (see supplementary Figure 3), we observed theta phase precession but with fewer replay events. In contrast, the other alternative models, such as those with alternative coincident detection mechanisms (nonCD, mpCD, and spiking DG input models; see supplementary Figures 4, 5, and 6), with different plasticity rules (standard and symmetric STDP models and short learning time constant model; see supplementary Figures 7 and 8), in the absence of theta input (non-theta input model; see supplementary Figure 9), and with randomized (spike timing or input order) DG inputs (spiking DG input and random DG input models; see supplementary Figures 6 and 10), exhibited the capability for replay. However, these model alterations affected the characteristics of the theta phase precession and the directionality of the replay. Furthermore, the additional learning and relearning had mild effects on the initially learned replay sequences (see Figures 5F and 6F).
In the self-connection model, the self-connections appeared to substantially decrease the synaptic strength of the other, non-self connections. However, the bias in the connections with previous- and next-place units appears to be preserved from the original (see Figure 2A and supplementary Figure 3B). These synaptic properties were maintained in the large model (see supplementary Figure 3K). In contrast, the connection changes in the other alternative models varied (see supplementary Figures 4 to 8). These observations suggest that the occurrence of replay depends on the absolute synaptic weights. Conversely, the replay direction and theta phase precession patterns seem to hinge on the relative synaptic strengths between current-place and previous-place versus next-place connections.
Learning conditions and their interactions with theta oscillations are critical for the characteristics of sequence learning (see supplementary Figures 4 to 8) (Ecker et al., 2022; Jensen et al., 1996; Jensen & Lisman, 1996; Reifenstein et al., 2021; Theodoni et al., 2018). In particular, our results suggest that a pre-to-post directional long time constant, reminiscent of the time decay associated with calcium or plasticity-related proteins (Chang et al., 2019; Jensen & Lisman, 1996; Rogerson et al., 2014), may be responsible for establishing the theta phase precession pattern and the directional replay sequence in this study. This aligns with the findings of previous computational studies suggesting that long learning time constants aid sequence learning (Reifenstein et al., 2021).
The primary limitations of this study arise from addressing only minimal elements of the CA3 region, such as the recurrent connectivity and plasticity in CA3, place input from the DG, and inhibitory theta rhythms from the MS. This focus resulted in a less robust depiction of several phenomena, such as theta phase precession in both slope and range and a dominance of forward-ordered replay, compared with empirical biological observations. The alignment with experimental data could potentially be improved by incorporating elements such as inputs from the EC (Fernández-Ruiz et al., 2017; Hafting et al., 2008; Molter et al., 2007; Schlesiger et al., 2015) or inhibitory connections (Cona & Ursino, 2015; Cutsuridis & Hasselmo, 2011; Ecker et al., 2022; Tiesinga & Sejnowski, 2009), alternative synaptic plasticity rules (Haga & Fukai, 2018), and variability in neuronal or synaptic properties.
In this study, we proposed a shared mechanism, synaptic enhancement, for compressing the sequence of external events into theta phase precession and replay. The synaptic modulation is governed by the interaction between theta inhibitory and place excitatory inputs. However, several points still need to be elucidated. Is the formation of theta phase precession and replay preexisting or acquired? How does the neuronal synchronization state of the hippocampus change during activity and rest? How does the replay occur? Does replay in CA3 have different functions compared to replay in CA1 and other brain regions? Is synaptic plasticity activated during replay? What are the contributions of circuits other than CA3 to theta phase precession and replay? What are the effects of learning on replay? Are the replayed sequences varied and evaluated during each replay for abstraction, planning, and inference? How do these sequences interact with cortical networks to facilitate their functions? Further research is needed to investigate these questions from experimental and computational approaches.

4.1 Small Model.
A small model was used to deeply analyze the dynamics of the system under well-controlled conditions. The CA3 model consisted of eight excitatory units and one inhibitory unit (see Figure 1A). The excitatory units received the DG place inputs, recurrent CA3 connections, and inhibitory inputs from the CA3 inhibitory unit. The inhibitory unit received the theta MS inhibitory input and excitatory inputs from the CA3 excitatory units. The activity of the units was modeled using the Izhikevich model (Izhikevich, 2010), and the parameters were chosen according to sections "8.4.1 Hippocampal CA1 Pyramidal Neurons" and "8.2.6 Fast Spiking (FS) Interneurons" in Izhikevich (2010) for the excitatory and inhibitory units, respectively.
The Izhikevich model is described as follows:

C dv/dt = k(v − v_r)(v − v_t) − u + I, (4.1)

du/dt = a{b(v − v_r) − u}, (4.2)

where v is the membrane potential, u is the recovery variable that allows for the replication of various neural cell types, v_r is the resting potential, v_t is the threshold potential for spike initiation, C is the membrane capacitance, a is the recovery time constant of u, k and b are scaling coefficients, I is the input to the neuronal unit, and v_peak is the peak potential. If v exceeds v_peak, v and u are reset to v ← c and u ← u + d, respectively. In the inhibitory unit, u follows the fast spiking interneuron variant of the model (Izhikevich, 2010):

du/dt = a{U(v) − u}, where U(v) = 0 (v < v_b) and U(v) = b(v − v_b)^3 (v ≥ v_b). (4.3)

The CA3 excitatory unit receives excitatory DG place inputs, recurrent CA3 connections, and inhibitory inputs from the CA3 inhibitory unit:

I^exc = I^exc_p + I^exc_rec + I^exc_ie + I^exc_base + I^exc_noise,

where I^exc_p is the place input from the DG, I^exc_rec is the recurrent input from the CA3 excitatory units, I^exc_ie is the input from the CA3 inhibitory unit to the CA3 excitatory unit, I^exc_base is a constant baseline input, and I^exc_noise is a gaussian noise input with N(0, σ²_noise). Similarly, the CA3 inhibitory unit receives excitatory inputs from the CA3 excitatory units and inhibitory theta input from the MS:

I^inh = I^inh_th + I^inh_ei + I^inh_base + I^inh_noise,

where I^inh_th is the theta input from the MS, I^inh_ei is the input from the CA3 excitatory units, I^inh_base is a constant baseline input, and I^inh_noise is a gaussian noise input.
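As a concrete illustration, the excitatory unit's dynamics with the reset rule can be integrated with a simple forward-Euler scheme. This is a minimal sketch: the parameter values below are the regular-spiking example values from Izhikevich (2010), not the values of Table 1, and the function name is ours.

```python
def simulate_izhikevich(I, dt=0.5, T=500.0,
                        C=100.0, k=0.7, v_r=-60.0, v_t=-40.0,
                        a=0.03, b=-2.0, c=-50.0, d=100.0,
                        v_peak=35.0):
    """Forward-Euler integration of the Izhikevich model with reset.
    I is a constant input (pA); returns spike times in ms.
    Parameters are illustrative (Izhikevich's regular-spiking example),
    not the Table 1 values of the paper."""
    v, u = v_r, 0.0
    spikes = []
    for step in range(int(T / dt)):
        dv = (k * (v - v_r) * (v - v_t) - u + I) / C
        du = a * (b * (v - v_r) - u)
        v += dt * dv
        u += dt * du
        if v >= v_peak:            # spike: reset v and increment u
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes
```

With a suprathreshold constant input, the unit fires; with I = 0 it stays at rest, since v = v_r, u = 0 is an equilibrium of the deterministic dynamics.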
In the simulation, σ_noise is scaled by the square root of the time step (σ_noise √dt). The synaptic inputs I^syn (I^exc_rec, I^exc_ie, I^inh_ei) of unit i are described as follows:

I^syn_i(t) = Σ_j g_ij(t){E − v_i(t)},

dg_ij/dt = −g_ij/τ + w_ij s_j(t),

where g_ij(t) is the synaptic conductance between presynapse j and postsynapse i, E is the equilibrium potential (E_ampa = 0 mV for excitatory synapses and E_gaba = −65 mV for inhibitory synapses), τ is the time decay constant of the synaptic conductance (τ_ampa = 10 ms, τ_gaba = 20 ms), s_j(t) is a spike (0 or 1) of the presynaptic neuronal unit, and w_ij is the synaptic weight between the pre- and postsynaptic neuronal units. Notably, I^exc_rec and I^inh_ei represent excitatory synapses (AMPA), and I^exc_ie represents inhibitory synapses (GABA-A). The DG-CA3 excitatory unit connection (place input) was assumed to be sparse, meaning that each CA3 excitatory unit received only one place input. The amplitude of the place inputs was set high enough to induce spikes in the CA3 excitatory units with only one place input. Of the eight DG inputs, four place inputs were used in each session. The CA3-CA3 recurrent connections were all-to-all and plastic (see section 4.3). All CA3 excitatory units sent spikes to the CA3 inhibitory unit, which in turn sent feedback inhibition to all CA3 excitatory units.
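The exponentially decaying conductance and driving-force current described above can be sketched as follows; the function names and the explicit Euler step are ours.

```python
def step_synapse(g, spiked, w, tau, dt=0.5):
    """One Euler step of dg/dt = -g/tau + w * s(t),
    where s(t) is the 0/1 presynaptic spike indicator."""
    return g + dt * (-g / tau + w * spiked)

def synaptic_current(g, v, E):
    """Conductance-based synaptic current I = g * (E - v)."""
    return g * (E - v)

# With no presynaptic spikes, the conductance decays toward zero:
g = 1.0
for _ in range(100):                      # 50 ms at dt = 0.5 ms, tau = 10 ms
    g = step_synapse(g, spiked=0, w=1.0, tau=10.0)
```

For an AMPA synapse (E = 0 mV) onto a unit at v = −60 mV, the current g(E − v) is depolarizing; for GABA-A (E = −65 mV) it is hyperpolarizing near rest.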

4.2 External Inputs.
In this model, we assumed that the place inputs were from the DG to CA3 excitatory units and that the theta inputs were from the MS to CA3 inhibitory units.
The place inputs from the DG were modeled as a gaussian function of position:

I^exc_p,i(t) = a_p exp{−(x(t) − x_peak,i)² / (2σ_p²)},

where a_p is the amplitude of the place input, x(t) is the current position of the agent, x_peak is the peak position of the place field, and σ_p is the width of the place field. The agent's speed is assumed to be constant in this simulation; thus, x(t) = t cm (velocity is 1 cm/s). The input orders were as follows: initial learning, units 1-2-3-4; additional learning, units 4-5-6-7; and relearning, units 8-1-3-6.
The theta oscillation inputs from the MS were modeled as a sinusoid,

I^inh_th(t) = a_th sin(2π f_th t),

where a_th is the amplitude of the theta input and f_th is the frequency of the theta oscillation. As the speed of the agent was assumed to be constant in this simulation, f_th was also constant.
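The two external drives can be sketched as below. The gaussian place tuning follows the text; the exact sinusoidal form (phase, offset) of the theta input and all default parameter values are our assumptions for illustration.

```python
import numpy as np

def place_input(t, a_p=1.0, x_peak=4.0, sigma_p=1.0):
    """Gaussian place input; position x(t) = t under the constant
    1 cm/s assumption. Parameter values are illustrative."""
    x = t
    return a_p * np.exp(-(x - x_peak) ** 2 / (2.0 * sigma_p ** 2))

def theta_input(t, a_th=1.0, f_th=7.0):
    """Sinusoidal theta drive at f_th (Hz); the exact phase/offset
    used in the paper is assumed here."""
    return a_th * np.sin(2.0 * np.pi * f_th * t)
```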
4.3 Plasticity. The synaptic weights of the CA3-CA3 recurrent connection were plastically changed according to Hebbian and regularization rules (Kim et al., 2020; Weber & Sprekeler, 2018). Plasticity-related factors, such as calcium ions (Jensen & Lisman, 1996) and calcium/calmodulin-dependent kinase IIα (Chang et al., 2019), are assumed to have a longer time constant than synaptic inputs based on the synaptic tagging hypothesis (Rogerson et al., 2014). Furthermore, we implemented superlinear coincident detection: when two inputs from different sources arrive simultaneously, the response is considerably higher than the simple summation of the responses to these two inputs (Brandalise & Gerber, 2014; London & Häusser, 2005).
The Hebbian plasticity between postsynaptic unit i and presynaptic unit j was modeled as follows:

dw^rec_ij/dt = a_Hebb s_i(t) g^l_ij(t),

dg^l_ij/dt = −g^l_ij/τ_l + w^rec_ij s_j(t) I^exc_p,i(t),

where a_Hebb is the learning rate of the Hebbian learning, s_i and s_j are the spikes (0 or 1) of the postsynaptic and presynaptic neuronal units, respectively, and g^l_ij(t) is the assumed activity of plasticity-related factors, which has dynamics similar to those of the synaptic conductance g_ij(t) but with a longer decay time constant τ_l. The coincident detection was implemented by multiplying the place input I^exc_p,i(t) from the DG with s_j(t). Self-connections were excluded, meaning that w^rec_ij = 0 (if i = j). The regularization rule was modeled as

w^rec_ij ← a^rec_tot w^rec_ij / Σ_j w^rec_ij,

where a^rec_tot is the gain of the regularization, by which the regularization rule ensures that the summation of the postsynaptic weights is permanently restricted. The parameter values used in the original small model are summarized in Table 1. The parameters of the other models were partially adjusted from the values in Table 1.
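A sketch of one plasticity step as we read the rule above: the slow trace g^l integrates presynaptic spikes gated by the place input (coincident detection), the weight grows on postsynaptic spikes, and each row of the weight matrix is rescaled so a unit's incoming weights sum to a^rec_tot. The explicit update forms and function names are our assumptions.

```python
import numpy as np

def plasticity_step(w, g_l, s_post, s_pre, I_place, a_hebb, tau_l, dt=0.5):
    """One Euler step of the Hebbian rule with coincident detection:
    the trace grows only when a presynaptic spike coincides with place
    input, and the weight grows only when the postsynaptic unit spikes."""
    g_l = g_l + dt * (-g_l / tau_l + w * s_pre * I_place)
    w = w + a_hebb * s_post * g_l
    return w, g_l

def regularize(W, a_tot):
    """Rescale each row (incoming weights of one postsynaptic unit)
    so that it sums to a_tot."""
    row = W.sum(axis=1, keepdims=True)
    row[row == 0] = 1.0              # leave all-zero rows untouched
    return a_tot * W / row
```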

4.4 Resting Session.
In the resting session, it is feasible to assume the absence of place input, considering that the animal is not in the place of the running session, as well as the absence of theta input, considering that the animal is not running, concurrent with the presence of spontaneous activity, including replay (Buzsáki, 2015; Drieu & Zugaro, 2019). Neural inhibition is weakened during sleep or rest (Alfonsa et al., 2022; Miyawaki & Diba, 2016; Mizuseki & Buzsáki, 2013). Thus, we removed the external place and theta inputs and added independent noise inputs into each CA3 excitatory unit to induce spontaneous spiking. In addition, the synaptic plasticity was turned off. The specific parameter changes for the resting session are as follows: I^exc_base = 24.4 pA, I^inh_base = 60 pA, a_th = 0, a_p = 0, σ^exc_noise = 10 pA, a_Hebb = 0.
4.5.1 The Self-Connection Model. For the model with self-connections in the recurrent synapse (see supplementary Figure 3), we allowed nonzero self-connection weights w^rec_ij (i = j), which were excluded in the original model.

4.5.2 The Noncoincident Detection Model (nonCD).
For the small model without coincident detection (see supplementary Figure 4), we replaced dg^l_ij/dt with

dg^l_ij/dt = −g^l_ij/τ_l + w_ij s_j Ī^exc_p,i,

where Ī^exc_p,i is the average of I^exc_p,i(t) over the entire trial. We set a_Hebb = 0.025 (three times larger than that of the original model) to simulate a higher learning rate (see supplementary Figures 4F to 4J).

4.5.3 The Membrane Potential Coincident Detection Model (mpCD).
For the small model with coincident detection of the membrane potential (see supplementary Figure 5), I^exc was replaced by a form in which α_cd = 0.3 determines the efficiency of the coincident detection. dg^l_ij/dt was also replaced as in the small model without coincident detection (see supplementary Figure 4). The running session was extended until trial 20 (see supplementary Figures 5G to 5L).

4.5.4 The Spiking DG Input Model. For the model with spiking DG inputs (see supplementary Figure 6), the place input from the DG to a CA3 excitatory unit is as follows:

I^exc_p,i(t) = Σ_j g_p,ij(t){E_ampa − v_i(t)},

dg_p,ij/dt = −g_p,ij/τ_ampa + a_Ip s_p,j(t),

where g_p,ij is the synaptic conductance from presynaptic DG input j to postsynaptic CA3 excitatory unit i, a_Ip = 1.5 is the synaptic input gain, and s_p(t) is a spike of the DG place input at time t. s_p(t) spikes according to a Poisson distribution with firing rate λ, and the probability of one spike in the time window Δt (0.5 ms in the simulation) is λΔt.
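The Poisson spike generation of the DG inputs can be sketched as below, assuming the standard small-bin approximation P(spike per bin) ≈ λΔt with Δt = 0.5 ms; the function name and seeding are ours.

```python
import numpy as np

def poisson_spikes(rate_hz, duration_s, dt_s=0.0005, seed=None):
    """0/1 spike train: each 0.5 ms bin spikes independently with
    probability rate * dt (small-bin Poisson approximation)."""
    rng = np.random.default_rng(seed)
    n_bins = int(duration_s / dt_s)
    return (rng.random(n_bins) < rate_hz * dt_s).astype(int)
```

For long trains, the empirical firing rate converges to λ.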

4.5.5 The Standard STDP Model and Symmetric STDP Model. For the models with the standard and symmetric STDP learning rules (see supplementary Figures 7 and 8), the synaptic weights between presynaptic unit j and postsynaptic unit i, w^rec_ij, were updated when unit i or j spiked:

w^rec_ij ← w^rec_ij + a_stdp W(Δt) w^rec_ij,

W(Δt) = A_+ exp(−Δt/τ_+) (Δt ≥ 0), A_− exp(Δt/τ_−) (Δt < 0),

where Δt = t_s,i − t_s,j, and t_s,j and t_s,i are the latest spike times of presynaptic unit j and postsynaptic unit i, respectively. For the standard STDP rule, A_+ = 1, A_− = −0.3, τ_+, τ_− = 20, a_stdp = 0.166. For the symmetric STDP rule, A_+, A_− = 1, τ_+, τ_− = 65.5, a_stdp = 0.083.
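The two STDP variants differ only in their kernel parameters. A sketch, assuming an exponential kernel W(Δt) and a multiplicative weight update (our reading of the rule above):

```python
import numpy as np

def stdp_kernel(dt_ms, A_plus, A_minus, tau_plus, tau_minus):
    """W(dt), dt = t_post - t_pre: the A_plus branch applies when the
    presynaptic spike precedes the postsynaptic one, the A_minus branch
    otherwise (A_minus is negative for the standard rule, positive for
    the symmetric rule)."""
    if dt_ms >= 0:
        return A_plus * np.exp(-dt_ms / tau_plus)
    return A_minus * np.exp(dt_ms / tau_minus)

def stdp_update(w, dt_ms, a_stdp, **kernel):
    """Multiplicative update: w <- w + a_stdp * W(dt) * w."""
    return w + a_stdp * stdp_kernel(dt_ms, **kernel) * w

standard = dict(A_plus=1.0, A_minus=-0.3, tau_plus=20.0, tau_minus=20.0)
symmetric = dict(A_plus=1.0, A_minus=1.0, tau_plus=65.5, tau_minus=65.5)
```

With the standard parameters, pre-before-post pairs potentiate and post-before-pre pairs depress; with the symmetric parameters, both orderings potentiate.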

4.5.6 The Short Learning Time Constant Model. For the model with a short learning time constant (see supplementary Figure 8), we set τ_l = 20 ms and a_Hebb = 0.083 to obtain a higher learning rate (see supplementary Figures 8F to 8J).

4.5.7 The Non-Theta Input Model. We set a_th = 0 and I^inh_base = 60 pA for the model without theta input from the MS (see supplementary Figure 9).

4.5.8 The Random DG Input Model. For the model with randomized input, the order of the DG place inputs I_p was randomized in every trial and repeated for 20 trials (see supplementary Figure 10).
Other settings and processes were the same as in the small CA3 model (see Figures 1 to 4).

4.6 Large Model.
To analyze dynamics under conditions that are similar to the actual brain, specifically with sparse stochastic connections, we used a large model. In the large model, the synaptic connections between the CA3 units were modeled as probabilistic, reflecting that the connections in the CA3 circuit are not predetermined (Rebola et al., 2017). The large model was simulated using the Brian 2 neural simulator (Stimberg et al., 2019). The number of CA3 excitatory units was N_exc = 800, and the number of CA3 inhibitory units was N_inh = 200. The probability of the DG (place input)-CA3 excitatory connection was p_pe = 0.005 (Claiborne et al., 1986; Rebola et al., 2017), that of the CA3 excitatory-CA3 excitatory recurrent connection was p_rec = 0.1 (Guzman et al., 2016), that of the CA3 inhibitory-CA3 excitatory connection was p_ie = 0.25, that of the CA3 excitatory-CA3 inhibitory connection was p_ei = 0.25, and that of the MS (theta input)-CA3 inhibitory connection was p_ti = 1. These probabilities were based on the results of previous studies (Ecker et al., 2022; Rebola et al., 2017). The synaptic weights were adjusted with w^pe_ij = 10 (if connected) and a^rec_tot = 7. There were three trials in the running session, and a resting session was performed before and after the running session. The number of inputs from the DG was four for the initial learning (see Figure 7) and eight for the additional learning (consisting of indices 1 to 4 for the initial sequence set and indices 4 to 7 for the additional sequence set; see supplementary Figure 1). Other parameters were the same as in the small model.
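The probabilistic wiring of the large model can be sketched as an independent Bernoulli draw per potential synapse; the helper below is illustrative, not the Brian 2 code actually used in the study.

```python
import numpy as np

def random_connectivity(n_pre, n_post, p, exclude_self=False, seed=None):
    """Boolean connection mask: each pre->post synapse exists with
    probability p, drawn independently (rows = postsynaptic units)."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n_post, n_pre)) < p
    if exclude_self and n_pre == n_post:
        np.fill_diagonal(mask, False)   # no self-connections
    return mask

# e.g., the CA3-CA3 recurrent mask with p_rec = 0.1:
rec_mask = random_connectivity(800, 800, p=0.1, exclude_self=True, seed=1)
```

The realized connection density fluctuates around p; with 800 × 800 potential synapses, it is very close to 0.1.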
The simulation represented in Figures 7H and 7L was executed for 15 sessions with different random seeds.The units for the theta phase precession analysis were selected according to two criteria: they responded to the place input and exhibited an expansion of firing through learning (more than 0.2 s earlier in the last trial than in the first).
Units with self-connections were extracted and compared with units without self-connections, specifically within units that had recurrent connections among the responding units (see supplementary Figures 3I to 3K; see also sections 4.7 and 4.9).
For the additional learning (see supplementary Figures 1B to 1E), three more trials were performed with the additional learning sequence set after the initial learning, followed by the additional resting session as in the small model. Units responding to the initial or additional sequence set were collected separately and analyzed with respect to the synaptic weight changes and replay properties (see also sections 4.9 and 4.10).
For the analysis of burst activities during resting sessions (see supplementary Figure 2), the LFP was computed as the sum of all excitatory unit activities. The burst frequency was identified by applying the fast Fourier transform (FFT) to the LFP signal. To analyze high-frequency oscillations during the burst, the LFP signal was subjected to bandpass filtering between 50 and 400 Hz, followed by an application of the FFT.

4.7 Theta Phase Precession. The LFP was calculated as the summation of the membrane potentials of all excitatory units. Next, the LFP signal was filtered using a bandpass filter (5-12 Hz), and a Hilbert transform was applied. The phase component was extracted from the transformed signal. Finally, a linear regression analysis was applied to the place versus phase points to estimate the slope value.
4.8 Evaluation of Theta Sequence. The theta sequence was evaluated using cross-correlograms among the spike trains of CA3 excitatory units 2 to 4. A positive lag indicates the spike order i → j (between unit i and unit j, i < j), and a negative lag represents the spike order i ← j. For the statistical test, we accumulated histograms of all cell pairs and sessions, comparing the mean in the range from −70 to 70 ms lag with that at 0 ms lag.
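The cross-correlogram used here can be computed as a histogram of pairwise spike-time lags; a minimal sketch (the bin width is our choice):

```python
import numpy as np

def cross_correlogram(spikes_i, spikes_j, max_lag=150.0, bin_ms=5.0):
    """Histogram of pairwise lags t_j - t_i between two spike trains (ms).
    Positive lags mean unit i fires before unit j."""
    lags = []
    for ti in spikes_i:
        for tj in spikes_j:
            d = tj - ti
            if -max_lag <= d <= max_lag:
                lags.append(d)
    edges = np.arange(-max_lag, max_lag + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges
```

An asymmetry of the histogram toward positive lags indicates that unit i tends to fire before unit j, the signature of a preserved sequence.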

4.9 Detection of Coactivation during Rest Sessions.
The spike timing differences in resting sessions were collected for each unit pair (unit i and unit j, i < j). A positive lag indicates the spike order i → j, and a negative lag represents the spike order i ← j, except in Figure 6, where a positive lag for relearning sequence pairs indicates the spike order of the place inputs (units 8 → 1, 8 → 3, 8 → 6, 1 → 3, 1 → 6, and 3 → 6). These timing differences were grouped according to the activation status of the units during the running session: whether both units were responding units, one unit in the pair was a responding unit while the other was not, or both units were nonresponding units. In the analysis of additional learning and relearning, pairs of the initial and new sequence sets were also plotted. The densities were calculated over a range of −600 to 600 ms.

4.10 Replay Sequence. Linear regression was performed on the difference between the peak positions of place inputs during learning and the lags between spikes during replay after learning, with a lag range from −200 to 200 ms for the responding unit pairs. The unit that responded to the first place input was excluded.
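The replay-sequence regression can be sketched as an ordinary least-squares fit of spike lag against place-field separation, restricted to lags within ±200 ms; the function is illustrative.

```python
import numpy as np

def replay_slope(place_diffs_cm, spike_lags_ms, lag_range=200.0):
    """Least-squares slope of replay lag vs. place-field separation,
    restricted to |lag| <= lag_range."""
    x = np.asarray(place_diffs_cm, float)
    y = np.asarray(spike_lags_ms, float)
    keep = np.abs(y) <= lag_range
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    return slope, intercept
```

A significant positive slope indicates forward-ordered replay: pairs that were farther apart on the track are replayed with proportionally longer lags.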

4.11 Statistical Analyses. This study was simulated and analyzed with Python scripts. The Python libraries scipy.stats and sklearn.linear_model.LinearRegression were used to conduct the paired t-tests and linear regressions, respectively.

4.12 Dynamical Analyses. The nullclines were defined as the points where equations 4.1 and 4.2 (for the excitatory unit) and equations 4.1 and 4.3 (for the inhibitory unit) were equal to 0:

u = k(v − v_r)(v − v_t) + I (v-nullcline),

u = b(v − v_r) (u-nullcline of the excitatory unit).

The equilibrium points were determined as the intersections between the v-nullcline and the u-nullcline. The stability of each equilibrium point was determined by the eigenvalues of the Jacobian matrix of the system represented by equations 4.1 and 4.2. If the two eigenvalues are both negative, the equilibrium point is stable; otherwise, it is unstable (Izhikevich, 2010). When equations 4.1 and 4.2 are redefined as dv/dt = F(v, u) and du/dt = G(v, u), the eigenvalues λ and eigenvectors x of the Jacobian matrix L = (∂F/∂v, ∂F/∂u; ∂G/∂v, ∂G/∂u) can be computed by solving Lx = λx. The eigenvalues are obtained by solving the characteristic equation λ² − (trL)λ + detL = 0. Both eigenvalues should be negative for the equilibrium point to be stable, corresponding to trL < 0 and detL > 0. If these conditions are not met, the equilibrium point is unstable.
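The trace/determinant test described above can be written directly; a small sketch (the Jacobian entries for a given equilibrium would come from differentiating the model equations at that point):

```python
import numpy as np

def classify_equilibrium(J):
    """2D linear stability: both eigenvalues have negative real part
    iff tr(J) < 0 and det(J) > 0 (stable); otherwise unstable."""
    stable = np.trace(J) < 0 and np.linalg.det(J) > 0
    return "stable" if stable else "unstable"
```

For example, a diagonal Jacobian with entries −1 and −2 is stable, while flipping one sign gives det(J) < 0 (a saddle), which is unstable.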
The equilibrium points of the excitatory units, v*_exc, were obtained by solving the steady-state equation that combines equations 4.1 and 4.2:

k(v − v_r)(v − v_t) − b(v − v_r) + I = 0.

The system exhibited a saddle-node bifurcation when the constant input current I was less than 24.5. In this case, the negative-side equilibrium was stable, while the positive-side one was unstable (Izhikevich, 2010). The equilibrium points disappeared above I = 24.5.
The equilibrium points of the inhibitory units, v*_inh, were obtained by solving the corresponding steady-state equation for v (see also equation 4.3) using the scipy.optimize.fsolve method with a moving constant I. The dynamics of the inhibitory unit exhibited an Andronov-Hopf bifurcation depending on I. trL_inh and detL_inh were then defined from the Jacobian of the inhibitory unit's equations. The equilibrium point was stable when trL_inh ≤ 0 and unstable when trL_inh > 0, as detL_inh is always positive. The bifurcation point occurred at I = 73.7.

Figure 1. Small CA3 model. (A) Schematic of the model construction. The white triangle indicates the CA3 pyramidal neuronal unit; the large white circle indicates the CA3 inhibitory neuronal unit. Lines indicate axons and dendrites. The small black and white circles on the lines represent excitatory and inhibitory synapses, respectively. (B) Place inputs from the DG. (C) Theta input from the MS. (D) Spike raster plot during trials. The colored plots are spikes of the excitatory units, and the black plots are those of the inhibitory unit. (E) Magnified view of panel D from 3 to 5 s. (F) The membrane potential of each neuronal unit. The x-axis and color of each unit correspond to panel E. CA, cornu ammonis; DG, dentate gyrus; Ex, excitatory; Inh, inhibitory; MS, medial septum.

Figure 2. Synaptic weight change. (A) Synapse weights of the CA3 excitatory-CA3 excitatory recurrent connections in each trial. (B) Synapse weight as a function of trial number. The green lines represent synapses in the forward order. The red lines represent those in reverse order. The black lines represent the other connections. (C) Synaptic weight changes (before regularization) during trial 1. The line colors indicate the same items as in panel B. CA, cornu ammonis.

Figure 3. Theta phase precession. (A) Local field potential (LFP): summation of the membrane potential traces of the CA3 excitatory units. (B) Frequency spectrogram of the LFP. The dotted line represents the frequency of the theta input (7 Hz). (C) Plot of theta phase versus time (corresponding to the position) in trial 1. The colors of the points correspond to those in Figures 1D to 1F. (D) Same plot as in panel C for trial 10. (E) Spiking phase versus time plots in trials 1 (gray) and 10 (blue) of the individual units, overlaid. (F) Slope values of trials 1 and 10 in individual units. The order of panels is the same as in panel E. (G) Cross-correlogram among the spike trains of CA3 excitatory units 2 to 4. (H) Accumulation of 10 sessions of the cross-correlogram in the range of −150 to 150 ms lag among the spike trains of CA3 excitatory units 2 to 4 in trial 1 (left) and trial 10 (right). CA, cornu ammonis. n.s. (not significant) p > 0.1, **p < 0.01, ***p < 0.001.

Figure 4. Replay. (A) Top: Spike plot in the resting session with noise input before learning. The colored dots indicate the excitatory units, and the black dots indicate the inhibitory unit. Middle: Magnified view of the upper panel for 2 s. Bottom: Membrane potentials of each neuronal unit. The x-axis and color of each unit correspond to the middle panel. (B) The same as panel A in the resting session after learning. (C) Cross-correlogram of the spike trains of CA3 excitatory unit pairs during the resting session before learning. (D) Cross-correlogram of the spike trains of unit pairs during the resting session after learning. Blue shows the pairs among the responding (receiving I_p) units during the running session. Yellow shows the pairs between the responding and nonresponding units. Gray shows the pairs among the nonresponding units. (E) Densities of spike pairs within 100 ms lag among the responding unit pairs (top), between the responding and nonresponding pairs (middle), and among the nonresponding pairs (bottom). (F) Scatter plot of the difference in peak positions of place inputs during learning and the lags between spikes during replay before (left) and after (right) learning for the responding unit pairs. The blue points show the pairs excluding unit 1, and the gray points show those including unit 1. The blue lines show the linear regression results of the blue points. CA, cornu ammonis. ***p < 0.001, n.s. (not significant) p > 0.1.

Figure 5. Additional learning. (A) Place inputs from the dentate gyrus. (B) Top: Spike plots of trials 1 and 10. Bottom: Magnified view of the upper panels for 3 to 5 s. The colored dots show excitatory spikes, and the black dots show inhibitory spikes. The colors correspond to those in Figures 1 to 4. (C) Synapse weights before (left) and after (right) the additional learning. (D) Theta phase precession after learning. (E) Spike plot of replay in the resting session with noise input after learning. The lower panel is a magnified view of the upper panel for 2 s. (F) Cross-correlogram of the spike trains of unit pairs during the resting session after learning. Purple shows the pairs of the additional sequence. Blue shows the pairs of the initial sequence. Orange shows the pairs between the initial and additional sequences. Gray shows the other pairs. (G) Scatter plot of the difference in peak positions of place inputs during initial and additional learning and the lags between spikes during replay after the additional learning. The purple points show the pairs of the additional sequence (excluding unit 4), and the blue points show the pairs of the first sequence (excluding unit 1). The purple and blue lines show the linear regression results for each set of colored points. ***p < 0.001. n.s. (not significant) p > 0.1.

Figure 6. Relearning. (A) Place inputs from the dentate gyrus. (B) Top: Spike plot of trials 1 and 10. Bottom: Magnified view of the upper panels for 3 to 5 s. The colored dots show excitatory spikes, and the black dots show inhibitory spikes. The colors correspond to those in Figures 1 to 4. (C) Synapse weights before (left) and after (right) the relearning. The upper panels are in the original order, and the lower panels are in the order of the responding units in the relearning. (D) Theta phase precession after relearning. (E) Spike plot of replay in the resting session with noise input after learning. Top: Spike plot from 0 to 12 s. Bottom: Magnified view of the upper panels for 4 to 6 s. (F) Cross-correlogram of the spike trains of unit pairs during the resting session after relearning. Green shows the pairs of the relearning sequence. Blue shows the pairs of the initial sequence. Magenta shows the pairs between the initial and the relearning sequences. Gray shows the other pairs. (G) Scatter plot of the difference in peak positions of place inputs during initial learning/relearning and the lags between spikes during replay after the relearning. The green points show the pairs of the relearning sequence (excluding unit 8), and the blue points show the pairs of the initial learning sequence (excluding unit 1). The green and blue lines show the linear regression results for each colored point. *** p < 0.001; n.s. (not significant), p > 0.1.

Figure 7. Large-scale network. (A) Place input from the dentate gyrus. (B) Theta input from the medial septum. (C) Spike raster plot of the excitatory (blue) and inhibitory (black) units. Left: Spike plot of trial 3 (36-48 s). Right: Magnified view of the left panels for 41 to 42 s. (D) Color plot of the spike rate of the excitatory units that responded to place inputs. They are sorted by peak response position and concatenated from trials 1 to 3. (E) Spike raster plots of the responding units from trial 1 (left) to 3 (right). Notably, the first spikes of some units occurred earlier in later trials. (F) Synaptic weights between the responding units from trial 1 (left) to 3 (right). (G) Theta phase precession in trials 1 (top) and 3 (bottom). (H) Slope values of the responding units in trials 1 and 3 accumulated over 15 sessions (paired t-test, n = 57, t = −3.535, p = 0.001). (I) Spike raster plot of all units in the resting session with noise input before (left) and after (right) learning. The colors are the same as in panel C. (J) Spike raster plot of the responding excitatory units during the resting session. (K) Cross-correlogram of the spike trains of unit pairs in the resting session. Blue shows the unit pairs between the responding units. Yellow shows the unit pairs between the responding and nonresponding units. Gray shows the unit pairs between the nonresponding units. (L) Densities of spike pairs within a 100 ms lag among responding unit pairs before and after learning accumulated over 15 sessions (paired t-test, n = 15, t = 4.283, p = 0.001). ** p < 0.01.
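Panels H and L report paired t-tests across sessions. As a reminder of the statistic, a minimal stdlib sketch follows; the slope values here are made up for illustration and are not the paper's data:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for matched samples: t = mean(d) / (sd(d)/sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1  # t value, degrees of freedom

# Hypothetical precession slopes per unit in trial 1 vs trial 3
# (steeper, i.e., more negative, in trial 3, as in panel H):
slopes_t1 = [-0.5, -0.7, -0.4, -0.6, -0.8]
slopes_t3 = [-0.9, -1.0, -0.7, -1.1, -1.2]
t, dof = paired_t(slopes_t1, slopes_t3)
```

In practice one would use a library routine such as `scipy.stats.ttest_rel`, which also returns the p value; the sign of t simply reflects the order in which the two trial conditions are passed.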

Figure 8. Bifurcation properties of excitatory and inhibitory units. (A) Phase portrait of the excitatory unit. The black solid line indicates the v-nullcline without external input (I_p = 0, I_rec = 0, I_ie = 0, I_base = 20). The dotted line indicates the v-nullcline with maximum I_p (I_p = 30, I_rec = 0, I_ie = 0, I_base = 20). The gray line is the u-nullcline. The black and white circles indicate stable and unstable equilibria, respectively. (B) Phase portrait of the inhibitory unit. The black solid line indicates the v-nullcline without external input (I_th = 0, I_ei = 0, I_base = 100). The dotted line indicates the v-nullcline with minimum I_th (I_th = −80, I_ei = 0, I_base = 100). The gray line represents the u-nullcline. Black and white circles indicate stable and unstable equilibria, respectively. (C) Bifurcation diagram of the excitatory unit depending on the external input I. The solid and dotted lines indicate stable and unstable equilibria, respectively. (D) Bifurcation diagram of the inhibitory unit depending on the external input I. The solid and dotted lines indicate the same as in panel C.
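The equilibria plotted in these phase portraits are the intersections of the v- and u-nullclines. Assuming an Izhikevich-type unit, dv/dt = k(v − v_r)(v − v_t) − u + I with u-nullcline u = b(v − v_r), the fixed points come from a quadratic whose discriminant decides whether equilibria exist. The sketch below uses the textbook regular-spiking parameter values, which are illustrative and not necessarily the paper's:

```python
import math

# Izhikevich-type unit (assumed form; illustrative parameters, not the paper's):
# dv/dt = k(v - v_r)(v - v_t) - u + I, with u-nullcline u = b(v - v_r)
k, v_r, v_t, b = 0.7, -60.0, -40.0, -2.0

def equilibria(I):
    """Intersections of the v- and u-nullclines: k(v-v_r)(v-v_t) - b(v-v_r) + I = 0."""
    A = k
    B = -k * (v_r + v_t) - b
    C = k * v_r * v_t + b * v_r + I
    D = B * B - 4 * A * C  # discriminant: its sign decides whether fixed points exist
    if D < 0:
        return []  # nullclines no longer intersect: the unit fires tonically
    sq = math.sqrt(D)
    return sorted([(-B - sq) / (2 * A), (-B + sq) / (2 * A)])

low = equilibria(0.0)    # two equilibria: stable rest and unstable threshold
high = equilibria(200.0) # strong input: equilibria vanish (saddle-node bifurcation)
```

Sweeping I and plotting the two roots against it reproduces the fold shape of the bifurcation diagrams in panels C and D: the stable and unstable branches merge and annihilate where the discriminant crosses zero.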

Figure 9. Firing timing in the theta cycle with place inputs. (A) Correspondence between one cycle of theta oscillation and color in I_th. (B) Single place input I_p. The time windows between the dotted lines marked a, b, and c correspond to the theta cycles plotted in panel C. (C) The upper panels show the trajectory of the excitatory unit in the v-u phase portrait during theta cycles a, b, and c in panel B. The theta phase is color-coded as represented in panel A. The solid and dotted quadratic lines indicate the v-nullclines with minimal and maximal theta inhibition, respectively. The lower panels are expansions of the upper panels around the closest or crossing area between the v- and u-nullclines. (D) The upper panel shows the trajectory of the inhibitory unit without place input I_p, corresponding to c in panel B. The lower panel expands the upper panel around the closest area between the v- and u-nullclines. (E, F) Schematic illustrations of the acquisition process of this model for theta phase precession and replay. (E) Before learning, the firing timings of a CA3 excitatory unit (orange shadow and line) on the phase-place (time) plane depend on place input strength, displaying a symmetric shape. Synaptic potentiation occurs (black arrow) as the current unit (orange shadow and line) responds to the place inputs from the previous-place and next-place units (blue and green lines, respectively). (F) The synaptic potentiation takes effect slightly later (colored arrows) than the firing of the presynaptic units (purple and yellow shadows), either accelerating the spike timing or inducing spikes in an additional theta cycle. The difference in plasticity-related factor accumulation between the previous-place (blue opaque shadow) and next-place (green opaque shadow) units, together with the difference in overlapping area between the current-place unit (orange) and the previous-place (purple shadow) versus next-place (yellow shadow) units, results in a stronger effect from the previous-place unit, leading to an asymmetric precession shape (orange lines). Synaptic potentiation also affects replay.
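The timing asymmetry sketched in panels E and F can be illustrated with a toy pair-based STDP kernel. This is not the model's actual plasticity-related-factor rule, only a standard stand-in with made-up constants, showing how pre-before-post spike ordering potentiates forward connections while the reverse ordering does not:

```python
import math

# Toy pair-based STDP kernel (illustrative constants, not the model's rule):
# potentiation when the presynaptic spike precedes the postsynaptic one.
A_PLUS, A_MINUS, TAU = 0.1, 0.05, 20.0  # amplitudes and time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one spike pair; dt > 0 means pre fired before post."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)   # causal pair -> potentiation
    return -A_MINUS * math.exp(dt / TAU)      # anti-causal pair -> depression

# Previous-place unit fires 10 ms before the current unit; next-place 10 ms after:
forward = stdp_dw(t_pre=0.0, t_post=10.0)   # previous -> current: strengthened
backward = stdp_dw(t_pre=20.0, t_post=10.0) # next -> current: weakened
```

Because forward connections are strengthened more than backward ones are, spikes driven through recurrent input arrive progressively earlier in the theta cycle, yielding the asymmetric precession shape described above.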

D = b_exc² − 4k_exc(−k_exc v_t,exc)