Changes in oscillatory brain activity are strongly correlated with performance in cognitive tasks, and modulations in specific frequency bands are associated with working memory tasks. Mesoscale network models allow the study of oscillations as an emergent feature of neuronal activity. Here we extend a previously developed attractor network model, shown to faithfully reproduce single-cell activity during retention and memory recall, with synaptic augmentation. This enables the network to function as a multi-item working memory by cyclic reactivation of up to six items. The reactivation happens at theta frequency, consistent with recent experimental findings, with increasing theta power for each additional item loaded into the network's memory. Furthermore, each memory reactivation is associated with gamma oscillations. Thus, single-cell spike trains as well as gamma oscillations in local groups are nested in the theta cycle. The network also exhibits an idling rhythm in the alpha/beta band associated with a noncoding global attractor. Put together, the resulting effect is increasing theta and gamma power and decreasing alpha/beta power with growing working memory load, making the underlying network mechanisms a plausible explanation for this often reported behavior.
Active retention and encoding of one or more items in working memory have generally been associated with an increase in theta (Jacobs & Kahana, 2009; Meltzer et al., 2008; Jokisch & Jensen, 2007; Sederberg, Kahana, Howard, Donner, & Madsen, 2003; Jensen & Tesche, 2002; Mölle, Marshall, Fehm, & Born, 2002; Raghavachari et al., 2001) and gamma power (van Vugt, Schulze-Bonhage, Litt, Brandt, & Kahana, 2010; Jacobs & Kahana, 2009; Fries, Womelsdorf, Oostenveld, & Desimone, 2008; Meltzer et al., 2008; Jokisch & Jensen, 2007; Howard et al., 2003; Pesaran, Pezaris, Sahani, Mitra, & Andersen, 2002; Raghavachari et al., 2001) and a decrease in alpha/beta power (Fries et al., 2008; Jokisch & Jensen, 2007; Sederberg et al., 2003; Mölle et al., 2002). Moreover, there are several reports of this effect being load dependent (van Vugt et al., 2010; Meltzer et al., 2008; Michels, Moazami-Goudarzi, Jeanmonod, & Sarnthein, 2008; Axmacher et al., 2007; Howard et al., 2003; Jensen & Tesche, 2002). Still, the neural mechanisms underlying these phenomena remain rather elusive. Several different spiking working memory models have been proposed (Lundqvist, Compte, & Lansner, 2010; Barbieri & Brunel, 2008; Mongillo, Barak, & Tsodyks, 2008; Lundqvist, Rehn, Djurfeldt, & Lansner, 2006; Macoveanu, Klingberg, & Tegner, 2006; Compte, Brunel, Goldman-Rakic, & Wang, 2000; Amit & Brunel, 1997; Lisman & Idiart, 1995) with the aim of reproducing various features of working memory, such as storage capacity or experimental constraints on activity. Here we extend one of these models (Lundqvist et al., 2006)—an attractor model previously shown to account for rate modulations and single-cell statistics measured experimentally during working memory tasks.
By adding synaptic augmentation (Wang et al., 2006) and finite attractor dwell time (Lundqvist et al., 2006; Treves, 2005; Fransén & Lansner, 1995), we enable the network to periodically reactivate stored items (Mongillo et al., 2008; Sandberg, Tegnér, & Lansner, 2003; Lisman & Idiart, 1995) and, thus, to operate as a multi-item working memory model. Analyzing local field potentials (LFPs) synthesized from the network's neural activity, we find, for the first time to the best of our knowledge, an increase in theta and gamma power as well as a simultaneous decrease in alpha/beta power with growing memory load in simulated working memory tasks. Additionally, we reproduce the effect of gamma–theta phase amplitude coupling (Axmacher et al., 2010; Sirota et al., 2008; Canolty et al., 2006; Chrobak & Buzsáki, 1998). In the following, we explain key mechanisms underlying these findings and discuss their biological relevance.
The Network Model
We used a biophysically detailed network model of Layer 2/3 developed earlier (Lundqvist et al., 2006, 2010; Djurfeldt et al., 2008; Fransén & Lansner, 1995). It had both a hypercolumnar and minicolumnar organization. Neurons within a hypercolumn were organized in 49 nonoverlapping subpopulations (minicolumns). The network was composed of nine hypercolumns. Such a columnar organization has anatomical (Goldman-Rakic, 1995) and functional (Rao, Williams, & Goldman-Rakic, 1999) support in data from PFC. The minicolumns were spread out on a two-dimensional square grid with 1.5-mm side and each minicolumn had a diameter of 30 μm. All pyramidal cells in a minicolumn shared the same x and y coordinates but were uniquely spread out on the z axis along 500 μm. Interneurons were placed near the center of each minicolumn with respect to the z axis.
The cells included were Layer 2/3 pyramidal cells and two different types of inhibitory interneurons: horizontally projecting, soma-targeting basket cells, assumed to correspond to fast-spiking cells, and vertically projecting, dendrite-targeting regular spiking nonpyramidal (RSNP) cells, for example, double bouquet cells (Markram et al., 2004; Kawaguchi & Kubota, 1993). Each Layer 2/3 portion of a minicolumn contained 30 pyramidal cells (Peters & Yilmaz, 1993), one basket cell, and two RSNP cells.
As in our previous studies (Lundqvist et al., 2006, 2010), the connectivity was set up to store nonoverlapping memory patterns (Figure 1)—in this work, 49 different patterns each comprising nine equal selectivity minicolumns in different hypercolumns. However, our recent findings have demonstrated that overlapping patterns can also be stored robustly in this network (unpublished observations).
Pyramidal-to-pyramidal connectivity within a minicolumn was at the level of 25% (Thomson, West, Wang, & Bannister, 2002). In addition, pyramidal cells were connected to the eight closest basket cells in their own hypercolumn, and remaining connections targeted pyramidal cells or RSNP neurons in other hypercolumns. The RSNP cells projected locally to the dendrites of the pyramidal cells in their own minicolumn. They provided disynaptic long-range inhibition from minicolumns of dissimilar selectivity in the other hypercolumns. Basket cells provided feedback inhibition targeting the cell soma of 70% of all the pyramidal neurons within their hypercolumn nonselectively. Connections between pairs of neurons were randomly generated according to the connection densities. A connection was always carried out by a single bouton, and all connections of a neuron onto itself were removed.
For all local connections within a hypercolumn, we constrained the network connectivity and the sizes of excitatory postsynaptic potentials (EPSPs) with biological data, mostly from Thomson et al. (2002). For long-range connections, data are rather scarce, as this type of connectivity is quite difficult to measure quantitatively. We, therefore, extrapolated the available experimental data, based on theoretical considerations, to arrive at a plausible amount of global excitation. From levels of activity and the total number of pyramidal synapses onto a Layer 2/3 pyramidal cell, we estimated the number of connections from other active pyramidal cells in the same attractor to be on the order of 100. This resulted from the fact that a Layer 2/3 pyramidal cell receives, on average, 10,000 synapses and that roughly 1% of pyramidal cells in the cortex fire at an elevated rate whereas the rest are almost silent (Waydo, Kraskov, Quiroga, Fried, & Koch, 2006; Cossart, Aronov, & Yuste, 2003; Lennie, 2003; Abeles, Vaadia, & Bergman, 1990), which gives at least (for totally nonspecific connectivity) around 100 active synapses from other pyramidal cells onto a Layer 2/3 pyramidal cell. Of these, fewer than 10, on average (assuming 25% connectivity [Thomson et al., 2002] and 30 cells in the Layer 2/3 portion of a minicolumn [Peters & Yilmaz, 1993]), are from within the cell's own minicolumn. Connections originating outside a local minicolumn were thus estimated to be about 10 times more numerous. Interestingly, their EPSPs are also significantly weaker than those of the local ones (Gilbert, Hirsch, & Wiesel, 1990). In the current model, the ratio of local to global EPSP magnitudes was 3:1. Each pyramidal cell had 90 excitatory synapses from other distant pyramidal cells with the same selectivity as the minicolumn it was part of. An RSNP cell had 30 excitatory synapses from distant pyramidal cells for each pattern with different selectivity than its own minicolumn.
With these parameters, global and local excitation onto the foreground pyramidal cells in the active state was approximately balanced.
Parameters were tuned to mimic spiking behavior of the respective neuron type. Pyramidal cells were strongly adapting, basket cells were almost nonadapting, and RSNP cells had intermediate adaptation. For individual cells of a certain type, all parameters were fixed, except for size, which varied ±10% according to a uniform distribution. This introduced a moderate variability in cell excitability.
Pyramidal-to-pyramidal and pyramidal-to-RSNP synaptic transmission was mediated by both AMPA and NMDA, the latter also being voltage dependent. Synapses formed by pyramidal cells onto basket cells were purely of AMPA type. Excitatory inputs (including noise) were placed on the second apical and basal dendritic compartment, whereas the inhibitory basket cells connected to the soma. The inhibitory synapses from RSNP cells connected to the second apical dendritic compartment. The synapses formed by pyramidal cells were fully saturating in the sense that the conductance Gsyn during repetitive firing could only sum up to the peak conductance resulting from a single presynaptic spike. After a synaptic event, conductance decays back to zero with a time constant τsyn, which is characteristic of each synapse type. Here, τAMPA = 6 msec, τGABA = 6 msec, as AMPA and GABAA have been reported to have similar time constants (Destexhe, Mainen, & Sejnowski, 1998), and τNMDA = 150 msec. Reversal potential was 0 for AMPA and −85 mV for GABA. Robustness was tested for τGABA up to 25 msec and GABA reversal potential up to −70 mV. The axonal conduction speed was 0.2 m/sec, and the synaptic delay was 0.5 msec. Synaptic plasticity between pyramidal cells was implemented according to an established model (Tsodyks, Pawelzik, & Markram, 1998). Depression was multiplicative, decreasing the synaptic conductance of the synapse 25% with each incoming spike and decaying back to the initial conductance with the time constant of 0.7 sec. Augmentation was additive, adding a constant 10% of the initial conductance with each spike and decaying with the time constant of 7 sec (Wang et al., 2006).
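As an illustration, the interplay of depression and augmentation described above can be sketched as follows. This is a simplified, stand-alone Python sketch assuming event-driven updates between spikes, not part of the actual simulation code; the function name is our own, and the way depression and augmentation are combined multiplicatively here is our own simplification of the published model (Tsodyks et al., 1998; Wang et al., 2006). The cap on augmentation (∼60%) follows the value given in the Results.

```python
import numpy as np

def effective_conductance(spike_times, g0, t_eval,
                          d_frac=0.25, tau_rec=0.7,
                          a_step=0.10, tau_aug=7.0, a_max=0.60):
    """Synaptic conductance at time t_eval (s) after a train of presynaptic
    spikes, combining multiplicative depression (25% drop per spike,
    recovery tau 0.7 s) and additive augmentation (+10% of the initial
    conductance per spike, decay tau 7 s, capped at 60%)."""
    depress, augment, t_prev = 1.0, 0.0, None
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            # between spikes: depression recovers toward 1, augmentation decays toward 0
            depress = 1.0 - (1.0 - depress) * np.exp(-dt / tau_rec)
            augment *= np.exp(-dt / tau_aug)
        depress *= 1.0 - d_frac                  # 25% drop per spike
        augment = min(augment + a_step, a_max)   # +10% of g0, capped at ~60%
        t_prev = t
    if t_prev is not None:
        dt = t_eval - t_prev
        depress = 1.0 - (1.0 - depress) * np.exp(-dt / tau_rec)
        augment *= np.exp(-dt / tau_aug)
    return g0 * depress * (1.0 + augment)
```

After a single spike, depression dominates (0.75 × 1.10 ≈ 0.83 of the initial conductance); once depression has recovered but augmentation has not yet decayed, the conductance is transiently boosted, which is the window exploited for spontaneous reactivation.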
Noise and Input from Layer 4
The pyramidal cells received noise input through excitatory AMPA synapses activated by simulated Poisson spike trains with an average firing rate of 300 sec−1 but with a very small conductance (0.08 nS, ∼10 times smaller than the local pyramidal-to-pyramidal conductance). This source alone made the pyramidal cells spike at ∼8 sec−1. Single minicolumns could be selectively stimulated by pyramidal cells mimicking Layer 4 input cells. Each minicolumn had five such cells, and these were activated (60–100 sec−1) by spike trains generated by Poisson processes and innervated the 30 Layer 2/3 cells with feedforward connections (50% connectivity). This setup was found adequate for activating our Layer 2/3 model, although more elaborate models of Layer 4 to Layer 2/3 connectivity exist.
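Such Poisson spike trains can be generated, for instance, by drawing exponential interspike intervals. The following is a minimal Python sketch with our own function name; the actual simulator generates these inputs internally:

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, rng):
    """Spike times of a homogeneous Poisson process, via exponential ISIs."""
    n_draws = int(rate_hz * duration_s * 2) + 10  # generous upper bound on spike count
    times = np.cumsum(rng.exponential(1.0 / rate_hz, size=n_draws))
    return times[times < duration_s]

rng = np.random.default_rng(0)
noise_spikes = poisson_spike_train(300.0, 10.0, rng)  # background drive at 300 sec-1
```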
We used the Split simulator developed for simulations of large, biophysically detailed network models, which can run on a single processor as well as on massively parallel machines (Hammarlund & Ekeberg, 1998). The model presented here has been scaled up to the size of 22 million neurons and 11 billion synapses on a Blue Gene/L supercomputer (Djurfeldt et al., 2008). The network simulated here typically consisted of 14,553 cells and 1.8 million synapses. Simulations were typically performed on 128 nodes of the Blue Gene/L computer at the Center for Parallel Computers at the Royal Institute of Technology, Stockholm, Sweden. It took 81 sec to simulate 1 sec of network activity.
LFP was estimated by calculating the average soma potential for all cells in a local population at every time step, similar to the approach adopted by Ursino and La Cara (2006). Although LFP is more directly linked to the synaptic activity, the averaged membrane potentials have been reported to be correlated with LFPs and to have similar information content (Okun, Naim, & Lampl, 2010).
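Computed this way, the LFP proxy is simply a population average over membrane potentials at each time step (a trivial sketch; the array layout is our own convention):

```python
import numpy as np

def lfp_proxy(vm):
    """Synthetic LFP: mean soma membrane potential over a local population.
    vm has shape (n_cells, n_samples), in mV."""
    return vm.mean(axis=0)
```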
Analysis of LFP Signals
All analyses in this study were performed using MATLAB. The data generated in model simulations were LFPs and spiking activity (a point process). LFPs were produced at a sampling frequency of 1 kHz. Rare sharp spike artifacts occurring in the synthesized LFP were identified and removed first. Then a low-pass filter with a cutoff at 250 Hz was applied to the resultant signals in the forward and reverse directions to avoid any phase distortions. The analyses carried out in this work were spectral quantification and phase locking.
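The zero-phase low-pass step can be reproduced, for example, with forward-backward Butterworth filtering. This is a Python/SciPy sketch of the same idea (the original analyses were done in MATLAB; the filter order here is our own illustrative choice):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # LFP sampling frequency (Hz)

def zero_phase_lowpass(lfp, cutoff_hz=250.0, order=4):
    """Low-pass filter applied forward and then backward (filtfilt),
    so the net filter has zero phase shift."""
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    return filtfilt(b, a, lfp)
```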
Spectral Analysis and Phase Coupling
The power spectra of LFP trials were obtained using the multitaper method (Thomson, 1982), with a family of orthogonal tapers produced by Slepian functions (Slepian, 1978), commonly utilized in the analysis of neural data recorded experimentally. The number of tapers, determined by the amount of time and frequency smoothing, depended on the frequency range being examined. On average, for low frequencies up to 5 Hz, the time window was set to fit at least three cycles. For the midrange, roughly from 5 to 15 Hz, at least five cycles were fit within the window span. Finally, for the beta and gamma range the time windows were adjusted to account for 10 or more full cycles. For the analysis of time courses of power over the entire LFP trial (up to 10 sec), full time-frequency maps were used. However, to obtain power spectra estimates, the time-frequency representations were averaged over quasi-stationary time intervals.
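A minimal version of the multitaper estimate, averaging periodograms over Slepian tapers, might look as follows. This Python/SciPy sketch is illustrative only: the original analyses were performed in MATLAB, and the time-bandwidth product NW = 3 here is an assumption, not the value used for every band in this work.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.0):
    """Multitaper power spectrum: average the periodograms obtained
    with K = 2*NW - 1 orthogonal Slepian (DPSS) tapers."""
    n = len(x)
    k = int(2 * nw) - 1
    tapers = dpss(n, nw, Kmax=k)                       # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs
```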
The procedure employed to test phase–amplitude coupling involved four key steps (cf. Penny, Duzel, & Ojemann, 2008). At first, LFPs were band-pass filtered to extract theta and gamma rhythms. Then, a Hilbert transform was performed on the gamma component to obtain its envelope (analytic amplitude). Next, the extracted gamma envelope was filtered using the same filter that was applied to separate the theta oscillations. Finally, PLV1:1 between the instantaneous phase of the resultant theta-filtered gamma envelope and that of the theta rhythm was estimated (referred to in the remainder of the article as PLVPAC) to quantify the theta-phase modulation of gamma amplitude.
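The four steps above can be sketched as follows (Python/SciPy; the band edges follow the theta and gamma ranges analyzed in this work, but the function names and the filter order are our own, and the original analyses used MATLAB):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase band-pass filtering (forward-backward, no phase distortion)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def plv_pac(lfp, fs, theta=(2.0, 6.0), gamma=(28.0, 40.0)):
    """Theta-phase modulation of gamma amplitude, quantified as a 1:1 PLV."""
    theta_sig = bandpass(lfp, theta[0], theta[1], fs)                   # step 1
    gamma_env = np.abs(hilbert(bandpass(lfp, gamma[0], gamma[1], fs)))  # step 2
    env_theta = bandpass(gamma_env, theta[0], theta[1], fs)             # step 3
    dphi = np.angle(hilbert(env_theta)) - np.angle(hilbert(theta_sig))
    return np.abs(np.mean(np.exp(1j * dphi)))                           # step 4: PLV
```

Applied to a signal in which a gamma carrier is amplitude modulated by the theta wave, this measure approaches 1; for unrelated rhythms it stays near 0.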
As far as the phase-to-phase locking between theta and gamma oscillations is concerned, PLVn:m (for different pairs of n and m) was directly evaluated on the instantaneous phase series of the extracted theta and gamma rhythms (in the first step of the procedure described above). In the analyses, we adopted two approaches in estimating the instantaneous phase of the LFP components under consideration. Both of them involved narrow-band filtering of the signals with low time domain spread finite impulse response filters. The phase of the filtered components was then extracted either from the analytic signal representation obtained using a Hilbert transform or from the convolution of the narrow-band signal with a complex Morlet wavelet.
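Given two instantaneous phase series, the n:m phase-locking value reduces to a one-line estimate (a sketch assuming phases in radians; the function name is our own):

```python
import numpy as np

def plv_nm(phase_x, phase_y, n, m):
    """n:m phase-locking value: |<exp(i*(n*phase_x - m*phase_y))>|.
    Equals 1 for perfect n:m locking, ~0 for unrelated phases."""
    phase_x, phase_y = np.asarray(phase_x), np.asarray(phase_y)
    return np.abs(np.mean(np.exp(1j * (n * phase_x - m * phase_y))))
```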
The techniques adopted in our work to estimate the power spectra and to examine phase-coupling phenomena for synthetic LFP have been extensively employed in a range of neural data analyses to date. This additionally facilitates comparisons and references to a substantial body of empirical literature.
Synaptic Augmentation as a Mechanism for Working Memory
We ran simulations on a previously developed Layer 2/3 attractor network with a hypercolumnar and minicolumnar structure (Figure 1) (Lundqvist et al., 2010) representing a cortical patch with an area of 1.5 × 1.5 mm2 and comprising ∼15,000 Hodgkin–Huxley type multicompartmental neurons. Briefly, the model consisted of nine hypercolumns, each containing 49 minicolumns. Pyramidal cells sharing the same minicolumn had dense (25%) recurrent connections and were viewed as the basic computational units of the network. Attractors, however, were stored by means of sparse long-range connectivity between minicolumns from separate hypercolumns. Each of the 49 attractors was composed of nine active minicolumns, one from every hypercolumn. Each hypercolumn acted as a local winner-take-all module by means of basket cell feedback inhibition, that is, the network consisted of several connected winner-take-all modules. Another set of interneurons, RSNP cells, provided disynaptic long-range inhibition supporting competition between attractors. Orthogonal memory items were stored off-line, as would be the result of slow Hebbian learning, and synaptic weights were fixed during simulations except for fast depression and augmentation (for details of the model, see Methods).
The network could either be in a state where one of the stored attractors was active or in a ground state where all minicolumns exhibited low-level activity (Amit & Brunel, 1997), here referred to as a noncoding state. This noncoding state was stable because of the modular structure of the network and because of the fact that it operated in an oscillatory regime (Lundqvist et al., 2010). The oscillatory activity in the ground state was the result of high levels of excitatory noise and feedback inhibition, whereas the oscillations in the attractor states were the result of strong feedback inhibition. Consequently, the network produced alpha/beta oscillations during the noncoding state and faster gamma-like oscillations in the coding states. The gamma oscillations were generated by a feedback loop, referred to as pyramidal-interneuron network gamma (PING; Brunel & Wang, 2003; Whittington, Traub, Kopell, Ermentrout, & Buhl, 2000), where pyramidal cells activated basket cells, which in turn innervated the pyramidal cells, temporarily shutting them down. Recurrent excitation within the network could be altered by more than 60% without destabilizing the noncoding state. We modified this network (Lundqvist et al., 2010) by increasing cellular adaptation so that the coding attractor states had a finite lifetime of ∼200–300 msec and by adding synaptic augmentation (Wang et al., 2006) so that recently activated attractors spontaneously reactivated after a short refractory period (Mongillo et al., 2008). We implemented augmentation as an additive postsynaptic facilitation and matched it to experimental data (Wang et al., 2006). The decay time constant was 7 sec, and we introduced an upper limit of ∼60% on the maximal effective potentiation (Figure 2A), occurring roughly 700 msec after the last synaptic event. Thus, the recurrent connectivity within one recently activated attractor was temporarily boosted once synaptic depression had worn off.
This allowed for its spontaneous reactivation during a time window of several seconds at the cost of the noncoding state attractor. In consequence, we could sequentially stimulate several attractors, which would then periodically replay (Figure 2B and C). Yet the noncoding state prevented unwanted attractors from becoming active, and the network robustly replayed only recently activated and encoded memory items. With an attractor dwell time of 200–300 msec, the storage capacity was about five to six items.
Theta and Gamma Power Increases and Alpha/Beta Power Decreases with Memory Load
Because the power of cortical theta (Meltzer et al., 2008; Michels et al., 2008; Jensen & Tesche, 2002; Gevins, Smith, McEvoy, & Lu, 1997) and gamma oscillations (van Vugt et al., 2010; Meltzer et al., 2008; Axmacher et al., 2007; Howard et al., 2003) has been reported to increase with working memory load, we examined the network's behavior for a varying number of stored items. To this end, we presented a list of known items to the network by providing adequate stimuli for 120 msec, each in 1-sec intervals. Overall, we arrived at qualitatively similar findings (Figure 3) as in the literature discussed above. In particular, the analysis of synthetic LFP obtained as the average membrane potential of all pyramidal cells in one hypercolumn demonstrated elevated power in lower-range theta (2–6 Hz) and gamma (28–40 Hz) bands during active retention along with lowered alpha/beta (10–18 Hz) power when compared with the noncoding state. Each activation of an attractor item produced a theta cycle during which the network exhibited nested (phase-coupled) gamma oscillations (Figure 4), as mentioned earlier. During this time, the network did not generate alpha/beta oscillations associated with the noncoding state. Over different loads, the PLV was typically strongest between theta component ∼5 Hz and gamma range of ∼25–30 Hz (5:1 and 6:1 phase locking). More importantly, phase–amplitude coupling evaluated as PLV between the dominant theta component around 3 Hz and the theta-filtered envelope of gamma (around 25–35 Hz) power (see Methods) provided strong evidence (PLVPAC in the range of 0.75–0.85) for nested theta–gamma activity. We saw no clear trends over different load conditions, that is, both phase coupling (phase–amplitude coupling and phase–phase locking) and peak frequencies were at a comparable level. Similar to iEEG data (Canolty et al., 2006), the amplitude of the gamma oscillations was the highest at the trough of the theta wave (Figure 4C). 
This is explained by the fact that the theta amplitude was the highest in the switches between assemblies, when feedback inhibition was the lowest. In consequence, firing occurred preferentially around the trough of the theta cycle (Figure 4D), although the network was more hyperpolarized than at the peak. The level of power modulation in gamma, alpha/beta, and theta bands was load dependent, strengthening with the number of items encoded in working memory. The trend of the power increase for theta and gamma oscillations saturated at around four stored items (Figure 3), at which point the theta cycles filled the entire temporal domain, that is, the rhythm was not interrupted with ground state characteristic alpha/beta band oscillatory activity, and the gamma activity was continuous rather than bursting. This, together with the decay time constant of augmentation, explains the limit in the storage capacity of the network. By increasing the peak frequency of the theta rhythm, that is, shortening the attractor lifetime, it was possible to extend the capacity up to about eight items (not shown). This raises the question why in vivo theta oscillations are not faster, as this would imply faster processing and larger working memory according to our functional interpretation. We address this issue among others below (see Discussion and Previous Work on Modeling Theta Oscillations).
We extended a previously developed cortical Layer 2/3 attractor model (Lundqvist et al., 2006, 2010) by adding the mechanism of synaptic augmentation (Wang et al., 2006) and demonstrated its function as a multi-item working memory network, consistent with previous findings (Mongillo et al., 2008). The key outcome of our study is the observation that such functionality brings about increased gamma and theta power along with decreased alpha power with growing memory load because of the functional correlations of oscillations in the model. We also observe phase locking of individual neurons to the theta rhythm (Sirota et al., 2008; Jacobs, Kahana, Ekstrom, & Fried, 2007; Lee, Simpson, Logothetis, & Rainer, 2005; Siapas, Lubenov, & Wilson, 2005) and report coupling between theta and gamma oscillations (Axmacher et al., 2010; Sirota et al., 2008; Canolty et al., 2006; Chrobak & Buzsáki, 1998). Additionally, we show that the mechanism is valid for biologically plausible values of synaptic augmentation, connectivity, and firing rates. Finally, in the electrophysiological frequency range of theta oscillations (3–8 Hz), the working memory capacity of the network is five to seven items, similar to that reported in the literature (Miller, 1956).
Oscillation Amplitude Modulations with Working Memory Load in Experimental Studies
The modulation effects we observe in our network across the spectrum of oscillatory patterns as a function of working memory load are consistent with experimental data. Although a general increase in cortical theta power regardless of load in multi-item working memory has been reported (Raghavachari et al., 2001), there has been strong evidence that cortical theta power is indeed modulated by memory load (Meltzer et al., 2008; Michels et al., 2008; Jensen & Tesche, 2002) or task complexity (Gevins et al., 1997). In our model, theta power rises with memory load because the network spends increasingly more time in the active retrieval state, exhibiting switching dynamics between different coding attractors associated with memory items at the rate of two to five times per second at the cost of the noncoding attractor ground state. Thus, the average, but not the peak of instantaneous theta power, increases with the number of stored items. In addition, because gamma oscillations that manifest themselves in our model during reactivation of each memory item are nested on every theta cycle and the noncoding ground state is accompanied by stable idling alpha/beta rhythms, we obtain the simultaneous effect of increasing gamma power and decreasing alpha/beta power with growing content of working memory. These findings are in accordance with experimental evidence—there have been several studies reporting, based on intracranial recordings, increased cortical and hippocampal gamma power with increased number of letters, digits, or faces to remember (van Vugt et al., 2010; Meltzer et al., 2008; Axmacher et al., 2007; Howard et al., 2003). Typically, in multi-item working memory tasks, items are presented one by one with interstimulus intervals in between, as in our simulation. In the study by Howard et al.
(2003), where varying-length lists of letters were to be held in memory, a continuous buildup of gamma power already during the presentation of the list was reported. We observe this effect in our network too, because the periodic reactivation is initiated already after the first item is presented and continues during interstimulus intervals. Additionally, the size of this particular effect of gamma power increase per item in our network is similar to that reported in Howard et al. (2003). In the work by Axmacher et al. (2007), gamma power increases are of a transient nature, as in our network, where gamma power arises from gamma bursts due to periodic memory reactivations. Finally, a drop in alpha/beta power during encoding and retrieval of episodic memories has been commonly observed both in intracranial (Sederberg et al., 2003) and surface EEG (Mölle et al., 2002), as well as in magnetoencephalogram data (Jokisch & Jensen, 2007), in different cortical areas. It is important to point out that these studies often observe simultaneous synchronization of alpha oscillations, that is, an increase in alpha band power, in other cortical areas, commonly interpreted as task irrelevant. Here we only model a task-specific part of the working memory network, but idling/inhibited parts of the network would presumably produce strong alpha activity.
Theta Cycle as Time Scale for Memory Retrieval
As mentioned earlier, we associate theta oscillations with the process of cyclic memory reactivation suggested as an underlying mechanism for multi-item storage (Mongillo et al., 2008; Sandberg et al., 2003). This link has been gaining growing experimental support. For instance, single-cell activity has been reported to be phase locked to theta oscillations of LFP during working memory tasks in rats (Sirota et al., 2008; Siapas et al., 2005) and monkeys (Lee et al., 2005). Along the same line, it has recently been demonstrated that memory content is replayed at theta frequency during retrieval periods in humans (Fuentemilla, Penny, Cashdollar, Bunzeck, & Duzel, 2010). Additionally, there is growing evidence that working memory-related gamma oscillations are coupled to the underlying theta rhythm (Axmacher et al., 2010; Jacobs & Kahana, 2009; Sirota et al., 2008; Canolty et al., 2006). For instance, intracranial EEG recordings in humans have revealed that spatial patterns of gamma oscillations code for distinct letters and that these patterns are locked to the theta cycle (Jacobs & Kahana, 2009). It has also been demonstrated that in human EEG recordings theta oscillations are linked to the retrieval of lexical semantic information (Bastiaansen, Oostenveld, Jensen, & Hagoort, 2008). The key assumption adopted in our work is that cortical memories are represented by cell assemblies based on Hebb's principle (Hebb, 1949). More specifically, in our view, the Hebbian process of cells wiring together occurs because of coincident firing at theta time scales. Studies of cortical plasticity support this view; long-term synaptic potentiation (LTP) has been shown to be modulated by the hippocampal theta phase (Hölscher, Anwyl, & Rowan, 1997; Huerta & Lisman, 1993) and to require postsynaptic bursting (Pike, Meredith, Olding, & Paulsen, 1999).
Moreover, single-cell phase locking to theta oscillations has been demonstrated to predict learning of novel objects in humans (Rutishauser, Ross, Mamelak, & Schuman, 2010).
Cortical Layer 2/3 as an Attractor Network
We map the hypothesized cell assembly or attractor dynamics described here to neural activity in Layer 2/3. Pyramidal cells in Layer 2/3 are known to have the extensive horizontal long-range connections needed to store sparse and distributed patterns and to exhibit sparse and low-rate activity (de Kock & Sakmann, 2009; Sakata & Harris, 2009) favoring high storage capacity in attractor networks. Furthermore, poststimulus frequency shifts from alpha to gamma oscillations appear as a specific behavior observed in these layers (Fries et al., 2008; Chrobak & Buzsáki, 1998). This effect is in our network associated with increased specificity in the neural activity, that is, activation of a specific cell assembly (Lundqvist et al., 2010). Finally, theta and gamma oscillations, as well as evidence for attractor-like dynamics, are typically reported in higher-order areas such as PFC or hippocampus. In these areas, sparse neural activity in the superficial layers is to a lesser degree masked by the dense activity in the deeper layers, as the deeper layers gradually diminish in the cortical hierarchy and are completely missing in hippocampus.
Previous Approaches to Modeling of Multi-item Working Memory
There have been several approaches to modeling working memory storage reported in the computational neuroscience literature. Most commonly, persistent activity has been employed as an underlying mechanism (Barbieri & Brunel, 2008; Compte et al., 2000; Wang, 1999; Amit & Brunel, 1997; Goldman-Rakic, 1995) but typically only for single-item storage (for exception, see Macoveanu et al., 2006). The periodic activation mechanism employed here does not suffer from such a limitation. A range of alternative implementations of periodic reactivation has been proposed (Mongillo et al., 2008; Sandberg et al., 2003; Lisman & Idiart, 1995). Our implementation is similar to that of Mongillo et al. (2008), relying on LTP for memory storage and selective boosting of a subset of the resultant long-term memories with synaptic augmentation. The limitation of this mechanism is that working memory can only maintain items that have already been stored in long-term memory. This can be avoided by relying on fast Hebbian plasticity instead of postsynaptic augmentation so that new memory items could be formed on short time scales (Sandberg et al., 2003). However, the results presented here are not contingent on which one of these mechanisms is used but only on the fact that the reactivation occurs at a theta time scale. Another approach was adopted by Lisman and Idiart (1995), where periodic reactivation occurred on a gamma-rhythm time scale instead and relied on cellular mechanisms so that novel items could be stored. The implications of this interpretation in relation to our work are discussed in the next subsection.
Previous Work on Modeling Theta Oscillations
The two key phases of the switching dynamics underlying the generation of theta oscillations in our model are the activation and termination of attractors. The termination process relies on cellular adaptation, whereas the spontaneous activation of an attractor relies on a temporary increase in recurrent excitability caused by the interplay between fast synaptic depression and slow augmentation. Theta oscillations have also been shown to arise in networks with two types of interneurons (Smerieri, Rolls, & Feng, 2010; Rotstein et al., 2005). Rotstein et al. (2005) motivated this concept with experimental findings indicating that some cell types intrinsically produce theta oscillations and that hippocampal slices can produce theta oscillations without excitation. They demonstrated that a network of inhibitory cells with an H current, synchronized by a fast inhibitory network, could generate coherent theta oscillations. This view is compatible with our results, provided that the oscillatory activity in the inhibitory cells is sufficient to periodically terminate the attractor activity. If such an inhibitory network is used for termination, it can provide periodically recurring windows of opportunity for switching between assemblies. In this scenario, pyramidal cell adaptation could still mediate the selection of which attractor to restart but would no longer be required to terminate activity. For low levels of adaptation, items would be replayed in consecutive theta cycles without the network returning to the noncoding ground state; an attractor could then be silenced only through competition with other items stored in working memory. If only a single item was stored, it would be continuously activated and deactivated, and theta power would consequently not depend on working memory load (Axmacher et al., 2010; Raghavachari et al., 2001).
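The activation–termination cycle described above can be illustrated with a minimal firing-rate sketch (not the full spiking model of this article): a single assembly with recurrent excitation, spike-frequency adaptation, and fast synaptic depression. All parameters below are illustrative assumptions chosen to place the system in a relaxation-oscillation regime; they are not fitted to the network studied here.

```python
import math

def simulate(T=3000.0, dt=0.5, w=1.3, I=0.35, g_a=0.6,
             tau_r=10.0, tau_a=100.0, tau_x=400.0, U=0.008):
    """Euler integration of a minimal assembly with adaptation and depression.

    r: firing rate (0..1), a: adaptation variable, x: synaptic resources.
    Onset: x recovered and a decayed, so the recurrent gain suffices to
    ignite the attractor. Offset: x depleted and a built up, so the
    attractor activity collapses back to the ground state.
    """
    f = lambda h: 1.0 / (1.0 + math.exp(-10.0 * (h - 0.5)))  # rate gain
    r, a, x = 0.0, 0.0, 1.0
    trace = []
    for _ in range(int(T / dt)):
        h = w * x * r + I - g_a * a                 # net drive to the assembly
        r += dt * (-r + f(h)) / tau_r               # fast rate dynamics
        a += dt * (-a + r) / tau_a                  # slow adaptation build-up
        x += dt * ((1.0 - x) / tau_x - U * r * x)   # depression / recovery
        trace.append(r)
    return trace

def count_onsets(trace, hi=0.6, lo=0.15):
    """Count attractor activations as upward crossings with hysteresis."""
    onsets, armed = 0, True
    for r in trace:
        if armed and r > hi:
            onsets, armed = onsets + 1, False
        elif r < lo:
            armed = True
    return onsets
```

With these assumed time constants the assembly switches on and off every few hundred milliseconds, that is, in or near the theta range; the silent intervals correspond to the windows of opportunity during which a competing assembly could be activated instead.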
The activity patterns in such a network would resemble those arising in Smerieri et al.'s (2010) model, wherein the activity of pyramidal cells is modulated at theta frequency by a specific type of inhibitory cell. This modulation does not, however, terminate attractor retrieval, so the attractor dwell time would extend over several theta cycles (Smerieri et al., 2010).
The aforementioned model by Lisman and Idiart (1995) suggests switching between neuronal assemblies at gamma frequency and replay of an entire list of items at theta frequency. In our view, however, there are functional advantages to periodic activation of individual items at theta rate. In our model, the windows of opportunity for reactivation lasted for several seconds, and the network was robust to variations in the timing of encoding of new items: if a new item was presented while an existing item was being replayed, the existing item was hardly ever lost. This robustness was also facilitated by the fact that we relied on synaptic mechanisms and network effects rather than intrinsic cellular properties. The concept of several items being replayed within each theta cycle (Lisman & Idiart, 1995) predicts a lower theta frequency and a larger number of nested gamma cycles within each theta wave. Recent findings both supported and contradicted their model: theta frequency was found to decrease with memory load, but the number of gamma cycles per theta wave remained constant (Axmacher et al., 2010). In our simulations, we do not identify a trend of decreasing theta peak frequency with memory load but rather observe constant n:m theta–gamma phase locking and phase–amplitude coupling. We conjecture that both results of Axmacher et al. (2010) could be reproduced in a network similar to the one presented here but relying on fast Hebbian learning (see above) rather than on augmentation. With Hebbian learning, the storage of new patterns weakens the connectivity of existing patterns, thus lowering the average excitatory connection strength. A decrease in excitation, in turn, increases theta frequency in the network (not shown). Neither our model nor Lisman and Idiart's (1995) model predicts that phase coding occurs on the gamma scale, contrary to what has been suggested by conceptual models (Fries, Nikolić, & Singer, 2007).
Experimental evidence for such phase coding was recently found (Siegel, Warden, & Miller, 2009): cells coding for different objects preferentially fired at distinct gamma phases. However, because more excitable cells fire earlier in the gamma cycle (Fries et al., 2007), the effect reported by Siegel et al. (2009) could result from averaging over gamma cycles, given that, as mentioned earlier, cell assemblies representing separate memory objects acquire different levels of excitability through Hebbian encoding.
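The phase–amplitude coupling referred to above can be quantified with, for example, a mean-vector-length measure in the style of Canolty and colleagues: each sample's gamma-band amplitude weights the corresponding theta phase, and a nonzero mean of the resulting complex vectors indicates that gamma power clusters at a particular theta phase. The sketch below applies this measure to a synthetic signal whose phases are generated analytically (rather than extracted from data with, e.g., a Hilbert transform); all signal parameters are illustrative.

```python
import cmath
import math

def mean_vector_length(theta_phase, gamma_amp):
    """Phase-amplitude coupling: |sum A(t) e^{i phi(t)}| / sum A(t).

    Returns a value in [0, 1]; 0 means gamma amplitude is uniform over the
    theta cycle, larger values mean gamma power concentrates at one phase.
    """
    num = sum(a * cmath.exp(1j * p) for p, a in zip(theta_phase, gamma_amp))
    return abs(num) / sum(gamma_amp)

# Synthetic example: 5 Hz theta phase sampled at 1 kHz for 1 s (5 full cycles).
dt, n = 0.001, 1000
phase = [2.0 * math.pi * 5.0 * k * dt for k in range(n)]

# Gamma amplitude peaking at the theta peak (phase 0) vs. a flat control.
coupled = [1.0 + math.cos(p) for p in phase]
flat = [1.0 for _ in phase]

print(mean_vector_length(phase, coupled))  # ~0.5: strong coupling
print(mean_vector_length(phase, flat))     # ~0.0: no coupling
```

Applied to model output, a load-independent value of this measure would correspond to the constant theta–gamma coupling reported in our simulations.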
With respect to the model of Lisman and Idiart (1995), we further argue that the large distributed cortico-hippocampal network employed in working memory, with its axonal delays of up to ∼100 msec (Wang et al., 2008), could only robustly store and retrieve coherent information on time scales not shorter than the theta period (∼100–150 msec). This view is supported by studies showing that neocortical cells are entrained by hippocampal theta rhythms (Rutishauser et al., 2010; Sirota et al., 2008; Siapas et al., 2005) rather than by gamma, and that memory formation through LTP occurs at this range of time scales (Hölscher et al., 1997; Huerta & Lisman, 1993). Long axonal delays in a working memory network also offer a possible explanation of why, given our interpretation of theta as the time scale for memory retrieval, theta frequency is not faster (thus imposing a limit on processing speed and working memory storage capacity). We argue that there is a tradeoff between speed and capacity on the one hand and coherence in memory representation on the other. The resulting prediction is that the theta rhythm could be slightly faster in animals with smaller brains and shifted toward the delta range in humans. However, this would have to be tested in full-scale brain models and supported by experimental data.
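The timing argument above reduces to simple arithmetic: a distributed loop with conduction delays of up to ∼100 msec can complete one round of activation within a theta (or delta) period but not within a gamma period. A back-of-the-envelope check, using the approximate delay quoted above and representative band frequencies (the exact frequencies chosen are illustrative):

```python
# Approximate values: max cortico-hippocampal loop delay (from the text)
# and representative frequencies for gamma, theta, and delta rhythms.
max_delay_ms = 100.0

def period_ms(freq_hz):
    """Oscillation period in milliseconds for a given frequency in Hz."""
    return 1000.0 / freq_hz

bands = {"gamma (40 Hz)": 40.0, "theta (7 Hz)": 7.0, "delta (2 Hz)": 2.0}
for name, freq in bands.items():
    p = period_ms(freq)
    print(f"{name}: period {p:.0f} ms, fits a 100 ms loop delay: {p >= max_delay_ms}")
```

Only the theta and delta periods accommodate the full loop delay, consistent with the tradeoff between processing speed and coherent memory representation argued for above.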
We would like to thank Simon Benjaminsson for interesting and valuable discussions around this work. This work was partly supported by grants from the Swedish Science Council (Vetenskapsrådet, VR-621-2004-3807), VINNOVA (Swedish Governmental Agency for Innovation Systems), the Swedish Foundation for Strategic Research (through the Stockholm Brain Institute), and the European Union (FACETS project, FP6-2004-IST-FETPI-015879).
Reprint requests should be sent to Mikael Lundqvist, Department of Computational Biology, Roslagstullsbacken 35, Stockholm, 11421, Sweden, or via e-mail: email@example.com.