Spatial learning and memory are important for navigation and formation of episodic memories. The hippocampus and medial entorhinal cortex (MEC) are key brain areas for spatial learning and memory. Place cells in hippocampus fire whenever an animal is located in a specific region in the environment. Grid cells in the superficial layers of MEC provide inputs to place cells and exhibit remarkably regular hexagonal spatial firing patterns. They also exhibit a gradient of spatial scales along the dorsoventral axis of the MEC, with neighboring cells at a given dorsoventral location having different spatial phases. A neural model shows how a hierarchy of self-organizing maps, each obeying the same laws, responds to realistic rat trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with unimodal firing fields that fit neurophysiological data about their development in juvenile rats. The hippocampal place fields represent much larger spaces than the grid cells to support navigational behaviors. Both the entorhinal and hippocampal self-organizing maps amplify and learn to categorize the most energetic and frequent co-occurrences of their inputs. Top–down attentional mechanisms from hippocampus to MEC help to dynamically stabilize these spatial memories in both the model and neurophysiological data. Spatial learning through MEC to hippocampus occurs in parallel with temporal learning through lateral entorhinal cortex to hippocampus. These homologous spatial and temporal representations illustrate a kind of “neural relativity” that may provide a substrate for episodic learning and memory.
Tolman (1948) suggested that both animals and humans learn cognitive maps or allocentric spatial representations of their environments to intelligently navigate through them. Spatial learning and memory are important not only for navigation but also for the formation of episodic memories. The role of the hippocampus in episodic memory was first highlighted by studies of patient HM (Scoville & Milner, 1957). Many experiments have since been done to elucidate the mechanisms of episodic memory, leading investigators to suggest that each episode in memory consists of a specific spatio-temporal combination of stimuli and behavior (e.g., Tulving & Markowitsch, 1998). The current article describes a neural model of key processes that are needed to understand how spatial learning and memory occur (Figure 1). Together with available results concerning how temporal learning and memory may occur in the hippocampus (see Discussion), these results may provide new insights into the neural mechanisms whereby episodic memories are learned and remembered.
The hippocampus and medial entorhinal cortex (MEC) are key brain areas for spatial learning and memory. Lesions of the hippocampus (Morris, Garrud, Rawlins, & O'Keefe, 1982) and blockage of hippocampal N-methyl-d-aspartate receptors (Davis et al., 1992) impair the ability of rats to learn the Morris water navigation task. It has also been observed that lesions of the MEC in rats impair navigation behavior that is based just on path integration (Parron & Save, 2004). Pyramidal cells in rat hippocampal area CA1 (O'Keefe & Dostrovsky, 1971) fire whenever the animal is located in single, compact regions or “places” of an environment and are, hence, called place cells. Different place cells are activated in different places of the environment and together provide an ensemble code for spatial self-localization. The spatial selectivity of these cells, which are also found in rat hippocampal area CA3, is typically independent of head direction as a rat navigates an open field. Furthermore, at least for the rat, place fields of anatomically neighboring place cells do not exhibit any systematic relationship (Redish et al., 2001).
How spatial orientation is tracked by the brain as an animal moves through space was clarified with the report of head direction cells in the rat postsubiculum by Taube, Muller, and Ranck (1990) and Ranck (1984). Head direction cells have also been found in many other brain areas, including anterodorsal thalamic nuclei, lateral dorsal thalamic nuclei, lateral mammillary nuclei of hypothalamus, retrosplenial cortex, and deeper layers of MEC (e.g., Taube, 2007). As the name suggests, a head direction cell fires maximally whenever the animal's head points in a preferred direction irrespective of position in an environment. The head direction system, comprising cells with various preferred directions, thus functions as a compass in a two-dimensional environment.
O'Keefe (1976) proposed that “each place cell receives two different inputs, one conveying information about a large number of environmental stimuli or events, and the other from a navigational system which calculates where an animal is in an environment independently of the stimuli impinging on it at that moment…. One possible basis for the navigation system…could be calculated from the animal's movements…the hippocampus could calculate…on the basis of how far, and in what direction, the animal had moved.” The effects of visual and path integration inputs on hippocampal place cells have since been studied intensively.
A breakthrough into how path integration inputs are preprocessed before they reach hippocampus came with the discovery of grid cells in Layer II of rat MEC (Hafting, Fyhn, Molden, Moser, & Moser, 2005), which provide major inputs to the place cells. Grid cells are so named because each of them, unlike a place cell, fires at multiple spatial regions that form a regular hexagonal grid during navigation in an open field. Grid cells also exhibit a gradient of spatial scales along the dorsoventral axis of the MEC, with anatomically neighboring cells sharing similar grid spacings and orientations but having different spatial phases that are not topographically organized (Hafting et al., 2005).
It has been suggested that multiple spatial scales and phases of grid cells exist because their outputs can be combined to create the single spatial fields of hippocampal place cells (McNaughton, Battaglia, Jensen, Moser, & Moser, 2006; O'Keefe & Burgess, 2005). Some researchers have provided a proof of concept by proposing self-organizing models based on inputs from idealized or hand-crafted grid cells to learn place cells as spatial categories (Savelli & Knierim, 2010; Molter & Yamaguchi, 2008; Gorchetchnikov & Grossberg, 2007; Rolls, Stringer, & Elliot, 2006). It has also been suggested that unimodal spatial fields of place cells allow them to represent larger environments than the entorhinal grid cells (Gorchetchnikov & Grossberg, 2007; McNaughton et al., 2006; Rolls et al., 2006; O'Keefe & Burgess, 2005). Specifically, Gorchetchnikov and Grossberg (2007) showed how place cells can be learned with a spatial scale that is the least common multiple of the spatial scales of their inducing grid cell inputs. Mhatre, Gorchetchnikov, and Grossberg (2012) proposed the GRIDSmap model to demonstrate how grid cells can themselves be self-organized through learning based on linear velocity path integration signals during navigation in an open environment.
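The least-common-multiple argument can be checked with a few lines of code. The spacings below are illustrative; they happen to match the stripe spacings used later in the simulations:

```python
from math import lcm

# Periodic grid inputs realign only at the least common multiple of
# their periods, so a place cell that conjunctively combines them can
# have a single firing field in any smaller environment.
spacings_cm = (20, 35, 50)          # illustrative grid spacings
combined_period = lcm(*spacings_cm)
print(combined_period)              # 700 cm, far larger than a 100 cm box
```

With these values, the combined pattern repeats only every 700 cm, so within a 100 cm enclosure the place cell has a single field.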
The current article unifies and extends these separate contributions to understanding how grid and place cells arise by showing how a hierarchy of self-organizing maps in the entorhinal and hippocampal cortices, each obeying the same circuit and learning laws, can respond to realistic rat trajectories to learn hippocampal place fields that emerge from inputs from simultaneously learned grid cell hexagonal firing fields. Development of grid cells and place cells as rat pups begin to navigate their environments has been reported by Langston et al. (2010) and Wills, Cacucci, Burgess, and O'Keefe (2010).
These results combine several conceptual and technical advances. First, they show that, despite their different receptive field structures, both grid cells and place cells may be learned using the same laws. Said in another way, both grid cell periodic hexagonal firing fields and place cell unimodal firing fields, despite their different appearance, can arise from the same neural mechanisms because of the different inputs that they receive at their respective stages in the entorhinal–hippocampal hierarchy. Second, these shared laws are those of self-organizing maps, which have been used to explain development in many other parts of the brain, notably the visual cortex. Third, place cell learning can occur incrementally in real time in response to inputs from grid cells, whose outputs to the place cells change through time because of their own learning in response to path integration inputs. Fourth, the path integration inputs that drive model learning in this hierarchy are derived from realistic trajectories of rats in navigation experiments. Fifth, the path integration of both linear velocity and angular velocity estimates is proposed to be accomplished by the same type of neural circuit; namely, a ring attractor. Sixth, the process whereby model grid and place cells are learned through time exhibits statistical properties that match neurophysiological data recorded from rat pups through their juvenile period.
In summary, this model system exhibits a remarkable parsimony and unity, both in its use of similar ring attractor mechanisms to compute the linear and angular path integration inputs that drive learning and in its use of the same self-organizing map mechanisms to learn the grid cell and place cell receptive fields. Even more striking is the fact that both grid and place cells emerge by detecting, learning, and remembering the most frequent and energetic co-occurrences of their inputs, properly understood. This co-occurrence property is naturally computed in response to data, such as navigational signals, that take on contextual meaning through time. These results were earlier reported in abstract form in Pilly and Grossberg (2010, 2011).
The co-occurrence property seems to be qualitatively different from the property of oscillatory interference that some other models have championed (e.g., Burgess, Barry, & O'Keefe, 2007). See the Methods section for a key difference between oscillatory interference and self-organizing map models. In addition, although oscillatory interference models have been used to explain properties of grid cells, they have not yet been used to show how grid cells can be learned. Nor have they shown how place cells might arise from grid cells from the same oscillatory interference mechanism, and how place cells might be learned.
Top–down attentive matching processes are needed to dynamically stabilize the memories that are learned by self-organizing maps. The Discussion section describes how the current results may be embedded into a larger entorhinal–hippocampal architecture that combines hierarchical bottom–up categorization and top–down attentional processes to stabilize spatial memories. This enhanced architecture can explain additional data about attentional modulation and the organization of beta and gamma rhythms in the hippocampal system. The possibility of such an extension provides additional evidence for the relevance of the current modeling approach.
Stripe Cells and Ring Attractors
The starting point for our hierarchical GridPlaceMap learning model is the GRIDSmap model (Mhatre et al., 2012), which is the only model to date that simulates how grid cells may learn to fire at hexagonal grid positions as an animal explores an open environment. In particular, the GRIDSmap model showed how grid cells in Layer II of MEC can emerge as learned spatial categories of a self-organizing map whose inputs come from groups of hypothesized “stripe cells” in the deeper layers of MEC, each of which performs linear velocity path integration along a different direction of movement in space. In other words, two-dimensional grid fields self-organize in response to inputs from groups of stripe cells, each with a different directional selectivity. The name “stripe cell” acknowledges that the spatial firing pattern of each such cell exhibits parallel stripes as the environment is navigated. Burgess et al. (2007) introduced an analogous concept of “band cells,” but they are formed by the different proposed mechanism of oscillatory interference.
In the GRIDSmap and GridPlaceMap models, stripe cells for linear path integration and head direction cells for angular path integration are both realized by ring attractor circuits (Figure 2A). Several authors have earlier proposed that head direction cells may be modeled as ring attractors in which angular head velocity signals are integrated through time into displacements of an activity bump along the ring (Fortenberry, Gorchetchnikov, & Grossberg, in press; Boucheny, Brunel, & Arle, 2005; Song & Wang, 2005; Goodridge & Touretzky, 2000; Blair & Sharp, 1995, 1996; Redish, Elga, & Touretzky, 1996; Skaggs, Knierim, Kudrimoti, & McNaughton, 1995). Similarly, in the current model, linear velocity along a prescribed direction is integrated in a ring attractor into displacement of an activity bump along the ring. The outputs of head direction cells modulate the linear velocity signal to multiple directionally selective stripe cell ring attractor circuits. This modulation is sensitive to the cosine of the difference between the current heading direction of movement and the ring attractor's directional preference. Each stripe cell ring attractor is sensitive to a different direction and spatial scale. Stripe cells are the individual cells within each such ring attractor circuit and are activated at different spatial phases as the activity bump moves across their ring locations. They may be activated periodically as the activity bump moves around the ring attractor more than once in response to the navigational movements of the animal.
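A minimal sketch of this scheme, assuming cosine-modulated velocity integration and periodic Gaussian tuning of stripe cells (the function names and parameter choices here are ours, not the model's Equations 1–4):

```python
import numpy as np

def integrate_displacement(speed, heading, theta_pref, dt):
    """Algorithmic stand-in for a stripe-cell ring attractor: linear
    velocity is projected onto the ring's preferred direction (cosine
    modulation by head direction) and integrated into the position of
    the activity bump."""
    v_along = speed * np.cos(heading - theta_pref)  # projected velocity
    return np.cumsum(v_along) * dt                  # displacement over time

def stripe_activation(displacement, spacing, phase, sigma):
    """Periodic Gaussian firing as the bump passes the cell's ring location."""
    offset = (displacement - phase) % spacing       # wrap into one period
    dist = np.minimum(offset, spacing - offset)     # circular distance
    return np.exp(-dist**2 / (2 * sigma**2))

# Example: animat runs due east (heading 0) at 30 cm/sec for 2 sec.
dt = 0.002
t = np.arange(0, 2, dt)
d_east = integrate_displacement(np.full_like(t, 30.0), np.zeros_like(t),
                                0.0, dt)            # 0-degree ring: bump moves
d_north = integrate_displacement(np.full_like(t, 30.0), np.zeros_like(t),
                                 np.pi / 2, dt)     # 90-degree ring: bump stays
```

The 90°-preferring ring integrates zero projected velocity, so a stripe cell whose field intersects the trajectory stays continually active, as described for the perpendicular stripe cell in the Trigonometry of Spatial Navigation section below.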
In this conception, the head direction cells and stripe cells that receive angular velocity and linear velocity inputs, respectively, parsimoniously utilize a similar neural circuit design (Figure 2A). The model's assumption that both of these cell types are available to drive spatial experience-based grid and place cell development is consistent with data showing that adultlike head direction cells already exist in parahippocampal regions of rat pups when they actively move out of their nests for the first time at around 2 weeks of age (Langston et al., 2010; Wills et al., 2010).
As in the GRIDSmap model, the current GridPlaceMap model instantiates a simple algorithmic realization of a ring attractor for performing linear path integration, because the focus in this article is on grid cell and place cell learning (see Equations 1–4 below). Multiple calibration and learning problems need to be solved to develop a more complete ring attractor model. Fortenberry et al. (in press) have developed a dynamic, as opposed to algorithmic, ring attractor model wherein motor, vestibular, and landmark-based visual signals may be adaptively calibrated for purposes of computing self-consistent head direction estimates using any combination of these input sources. This model may provide guidance for future efforts to construct a dynamic model of stripe cell development.
The GRIDSmap model simulated the development of grid cells with observed properties using a suitably defined self-organizing map model. GRIDSmap exploits a simple property of the trigonometry of spatial navigation in a two-dimensional open environment. By this property, stripe cells are most frequently coactivated at positions that happen to form a regular hexagonal grid across the environment. GRIDSmap learns spatial categories that fire selectively to these most frequent coactivations of stripe cells. GRIDSmap grid cells self-organized in response to a real rat trajectory, obtained from Sargolini et al. (2006), with stripe cell directions in different simulations chosen to differ by 7, 10, 15, 20, or random numbers of degrees.
This property of the GRIDSmap model overcame a deficiency of the oscillatory interference-based models of grid cell firing (e.g., Burgess et al., 2007; Hasselmo, Giocomo, & Zilli, 2007), which required that there be just three band cells with direction preferences separated by 60° and which did not show how hexagonal grid firing patterns could be learned. This interference mechanism implies that combining band cells whose direction preferences have other angular separations can generate markedly different grid cell firing patterns (Hasselmo et al., 2007). In particular, if the angular separations do not equal 60°, then oscillatory interference can create a wide range of nonhexagonal patterns. However, these other patterns are not observed in the firing fields of experimentally recorded cells. Thus, a mechanism that naturally selects the directions that are 60° apart over all other possible directional combinations must exist. The ability of a self-organizing map model to select and learn preferred subsets of coactivation patterns overcomes this problem.
Although the GRIDSmap model clarified how hexagonal grid fields could be learned, it simulated only the case where there are just five cells in the self-organizing map for a given stripe spacing and used the same rat trajectory on each learning trial. Moreover, for smaller separations between adjacent stripe directions (<20°) or for larger stripe spacings (>20 cm), the learned spatial firing patterns of most map cells looked increasingly fuzzy or stripe-like, rather than hexagonal (see Figures 8 and 9 in Mhatre et al., 2012). One goal of the current work is to overcome these weaknesses of the GRIDSmap model by identifying self-organizing map mechanisms that can learn clear hexagonal grids in response to a broader range of stripe cell choices. A second goal is to show that this refinement of the self-organizing map design, when replicated identically at the next processing stage, can also automatically learn unimodal place cell receptive fields in response to inputs from the grid cells as they are learned with multiple spatial scales.
The GridPlaceMap model is described in the Model Description section, followed by simulations exhibiting the statistical properties of the developing grid and place cell receptive fields that match corresponding neurophysiological data from rat pups. A comparative analysis with other models is provided in the Discussion section along with discussions of attentional modulation and temporal learning.
The navigated trajectory determines the activities of all stripe cells via Equations 1–4. The model assumes that the spatial scales s of stripe cells in the deeper layers of MEC gradually increase along the dorsoventral axis, thereby inducing learning of grid cells with increasing spacing and field size along this axis in the superficial layers of MEC (Hafting et al., 2005). See the Simulation Settings section for details.
Medial Entorhinal Map Cells
Hippocampal Map Cells
The parameter values used in all the simulations are A = 10, α = 100, β = 30, Γ = 0.25, and λw = 0.01. Also, the model assumes stripe cells with three spacings (s = 20 cm, 35 cm, 50 cm), 18 directions (equally spaced in steps of 10°), and five spatial phases (p = [0, s/5, 2s/5, 3s/5, 4s/5] for the stripe spacing s) per direction. The standard deviation σ of stripe field widths is set to 7% of the stripe spacing s (see Figure 2B and C). The values for the three stripe spacings were chosen to be consistent with the observed constant ratios of grid spacings (∼1.7 and ∼2.5) with respect to the shortest spacing across rats (Barry et al., 2007: Figure 3B and Supplementary Figure 5B). That is, the choice for the shortest stripe spacing in our simulations (20 cm) constrained the values for the other longer-stripe spacings (35 cm [1.75 × 20 cm] and 50 cm [2.5 × 20 cm]).
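As a hypothetical reconstruction, the stripe-cell bank just described can be assembled and queried at a given position, assuming ideal path integration from the origin and Gaussian stripe-field tuning with σ = 7% of the spacing:

```python
import numpy as np

spacings = [20.0, 35.0, 50.0]                   # stripe spacings s (cm)
directions = np.deg2rad(np.arange(0, 180, 10))  # 18 directions, 10-deg steps

def stripe_bank(x, y):
    """Activation of every stripe cell at position (x, y), assuming
    noise-free path integration from the origin."""
    acts = []
    for s in spacings:
        sigma = 0.07 * s                        # field width, 7% of spacing
        phases = np.arange(5) * s / 5           # 5 phases per direction
        for theta in directions:
            # displacement projected along the cell's preferred direction
            d = x * np.cos(theta) + y * np.sin(theta)
            for p in phases:
                offset = (d - p) % s
                dist = min(offset, s - offset)  # circular distance on ring
                acts.append(np.exp(-dist**2 / (2 * sigma**2)))
    return np.array(acts)

a = stripe_bank(0.0, 0.0)
print(a.shape)  # 3 spacings x 18 directions x 5 phases = 270 cells
```

At the origin, every zero-phase stripe cell is maximally active, which is the kind of coactivation pattern that drives map learning.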
The stripe cells for each spacing s project to a corresponding population of 200 medial entorhinal map cells, which will gradually transform into grid cells with the proportional grid spacing. The entorhinal populations, for the three grid scales, in turn all drive a single population of 101 hippocampal map cells, which will concurrently transform into place cells as a result of the same computational rules of self-organization used for the previous level in the hierarchy (see Figure 3). The choice for the number of map cells was made to demonstrate learning using larger self-organizing maps.
Stripe cells are driven by the spatial movements of a virtual animal or animat based on a real trajectory of a laboratory rat randomly foraging for food in a square (100 cm × 100 cm; ∼9.93 min) environment (Sargolini et al., 2006; see Figure 2D). The spatial positions comprising the trajectory were linearly interpolated to increase their temporal resolution from 20 msec to match with the time step of numerical integration of the model dynamics (2 msec). The Results section shows that GridPlaceMap can learn good hexagonal grids using a larger number of map cells and a novel trajectory on each learning trial, rather than using the same trajectory on all trials, as in the GRIDSmap simulations of Mhatre et al. (2012). GridPlaceMap can also learn good hexagonal grids in response to larger stripe spacings and a smaller separation between stripe directions than could GRIDSmap.
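The interpolation step can be sketched as follows; `np.interp` stands in for whatever routine was actually used:

```python
import numpy as np

def upsample(positions, dt_in=0.020, dt_out=0.002):
    """Linearly interpolate (x, y) samples from a 20 msec sampling
    interval down to the model's 2 msec integration time step."""
    t_in = np.arange(len(positions)) * dt_in
    t_out = np.arange(0, t_in[-1] + dt_out / 2, dt_out)
    x = np.interp(t_out, t_in, positions[:, 0])
    y = np.interp(t_out, t_in, positions[:, 1])
    return np.column_stack([x, y])

raw = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0]])  # toy 20 msec samples
fine = upsample(raw)
print(len(fine))  # 21 samples: 10x finer temporal resolution
```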
Each trial comprises one run across the environment. A novel trajectory was created on each trial by rotating the original trajectory by a random angle about the starting position. Because the model is currently based only on path integration inputs, each trial in these simulation experiments began at the same starting position so that the degree of spatial stability of firing rate maps across trials could be observed. The starting position of the original trajectory is ∼44 cm from the midpoint of the 100 × 100 cm environment (origin); so a random rotation of the trajectory about this starting position would likely result in a trajectory that takes the animat significantly outside the environment. To ensure that the derived trajectories go beyond the environment only minimally, the original trajectory was prefixed by a short straight trajectory from the origin to the actual starting position at a running speed of 30 cm/sec (see Figure 2D). The remaining minimal outer excursions were bounded by environment limits.
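The per-trial trajectory construction can be sketched like this (function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rotated_trial(traj, half_width=50.0):
    """Rotate the original trajectory by a random angle about its fixed
    starting position, then bound any remaining minimal excursions by
    the 100 x 100 cm environment limits."""
    start = traj[0]
    ang = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(ang), np.sin(ang)
    R = np.array([[c, -s], [s, c]])
    rot = (traj - start) @ R.T + start          # rotate about the start
    return np.clip(rot, -half_width, half_width)

traj = np.array([[-30.0, -32.0], [-20.0, -25.0], [0.0, 0.0]])  # toy path
trial = rotated_trial(traj)
```

Because the rotation is about the starting position, every trial begins at the same place, which is what permits the intertrial stability comparisons described above.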
Trigonometry of Spatial Navigation
Hexagonal grids are learned because of a property of the trigonometry of spatial navigation to which the self-organizing map dynamics are sensitive. This property was first described in Mhatre et al. (2012). Here, we refine its analysis. This trigonometric property controls the sets of coactive stripe cells that the model experiences as each trajectory is traversed. The self-organizing map model is capable of detecting and learning grid cells in response to the most energetic and frequent coactivations of these stripe cell sets through time.
To understand how this works, consider an animal moving straight along the horizontal axis from the origin. This situation illustrates a key point, although the average length of piecewise linear segments in the real rat trajectory shown in Figure 2D is 0.88 cm, which is much less than the smallest stripe spacing of 20 cm in the simulations. A stripe cell tuned to 0° will become active periodically as the animal moves forward. However, the stripe cell tuned to the perpendicular direction of 90° and with one of its stripe fields intersecting the trajectory will be continually active, because the velocity projected along its direction, being zero, does not move the activity bump around in its ring attractor. Thus, if s is the stripe spacing, then the set of stripe cells whose directions differ by 90° will be coactive every s units (Figure 4A), but the set of stripe cells whose directions differ by 60° will be coactive every 2s units (Figure 4B). For this reason, in a two-dimensional environment, sets of two stripe cells with the same spacing whose preferred directions differ by 90° have the maximal coactivation frequency. By comparison, sets of three stripe cells whose directions differ from each other by 60°, with two degrees of freedom for the corresponding spatial phases, exhibit only near-to-maximal coactivation frequency. However, they provide maximally energetic input patterns to drive learning at the map cells, via the gating signals Gjs in Equation 6, because three, rather than two, inputs summate at them in the 60° case. For example, in a 100 cm × 100 cm environment, considering a stripe spacing of 20 cm, pairs of stripe cells differing in their preferred directions by 90° have 25 coactivation points each in the environment, but summate inputs from only two cells to their corresponding map cells. 
On the other hand, triplets of stripe cells differing in their preferred directions by 60° have 23 coactivation points each, but summate inputs from three stripe cells to their corresponding map cells (see Figure 4C and D). Because (25 × 2) < (23 × 3) and map cell activities increase with the total input that they receive, self-organized learning favors hexagonal grids as opposed to rectangular grids. This explanation refines that given in Mhatre et al. (2012), who only commented about the most frequent coactivations.
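The frequency-energy trade-off can be stated numerically using the counts above:

```python
# With 20 cm stripes in a 100 x 100 cm box, 90-degree pairs are coactive
# more often, but 60-degree triplets deliver more total input per
# coactivation, and total input is what drives map-cell learning.
pair_coactivations, pair_inputs = 25, 2         # 90-degree pairs
triplet_coactivations, triplet_inputs = 23, 3   # 60-degree triplets

pair_drive = pair_coactivations * pair_inputs           # 50
triplet_drive = triplet_coactivations * triplet_inputs  # 69
assert triplet_drive > pair_drive  # learning favors hexagonal grids
```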
Coordinated Entorhinal Grid Cell and Hippocampal Place Cell Learning
Model simulations based on 30 learning trials are presented in Figures 5–14. The quality of the spatial fields of the emerging grid and place cells was assessed by computing the standard grid score and spatial information at the end of each trial. Grid score measures how periodic and hexagonal a grid pattern is (Sargolini et al., 2006), and spatial information measures how predictive of an animal's location a cell's activation is (Skaggs, McNaughton, Gothard, & Markus, 1993). Grid cells are defined as those medial entorhinal map cells whose grid score > 0.3, and place cells as those hippocampal map cells whose spatial information > 0.5 (see Supplementary Figures S6 and S11 in Langston et al., 2010, and Supplementary Figure S7 in Wills et al., 2010, for how these thresholds are calculated). The Appendix summarizes the computation of other spatial measures such as grid orientation, grid spacing, spatial coherence, and intertrial stability.
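The spatial information measure of Skaggs et al. (1993) can be sketched as follows, in bits per spike; this is the standard formulation, although the article's exact binning and smoothing details live in the Appendix:

```python
import numpy as np

def spatial_information(occupancy, rates):
    """Skaggs et al. (1993) spatial information (bits/spike) from
    per-bin occupancy times and mean firing rates."""
    p = occupancy / occupancy.sum()     # occupancy probability per bin
    mean_rate = np.sum(p * rates)       # overall mean firing rate
    nz = rates > 0                      # 0 * log(0) terms contribute 0
    return np.sum(p[nz] * (rates[nz] / mean_rate)
                  * np.log2(rates[nz] / mean_rate))

# A cell firing in 1 of 4 equally occupied bins carries 2 bits/spike;
# a cell firing uniformly everywhere carries 0 bits/spike.
occ = np.ones(4)
assert np.isclose(spatial_information(occ, np.array([4.0, 0, 0, 0])), 2.0)
assert np.isclose(spatial_information(occ, np.ones(4)), 0.0)
```

A sharply localized field thus clears the 0.5 bits/spike place-cell threshold easily, whereas diffuse firing does not.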
Figure 5 presents the spatial responses of representative learned grid and place cells on the 30th learning trial. Spatial autocorrelograms of the rate maps are also shown for the grid cells, which in this case were learned from a stripe spacing of 35 cm. These grid and place cells were selected based on uniform sampling of the corresponding population distributions of grid score (ranging from −0.5 to 1.41) and spatial information (ranging from 0.9 to 23.06), respectively. Note the distributed spatial phases of the learned spatial fields at either level in the model hierarchy; namely, the spatially offset multiple fields of entorhinal map cells (Figure 5A) and the spatially offset unimodal fields of hippocampal map cells (Figure 5B).
Figure 6 summarizes the distributed spatial phases of the learned grid and place cells in the last trial. The firing fields of two grid cells with the same spacing are formally defined to have different spatial phases if the cross-correlogram of their rate maps does not yield a local maximum at the origin. Moreover, the cross-correlogram exhibits a hexagonal grid pattern if the grid fields of the two cells share nearly the same orientation. If an ensemble of grid cells share the same grid orientation and spacing but different spatial phases, the phase difference for any pair of cells within the ensemble, defined as the distance from the origin to the nearest peak in the cross-correlogram of their rate maps, will lie between zero and half the grid spacing. In this regard, the model simulations in Figure 6C and D closely match the data from grid cells in the adult rat MEC (Hafting et al., 2005) shown in Figure 6A and B. Distributed spatial encoding by the learned grid and place fields, respectively, can also be seen from the histograms of all pairwise spatial correlation coefficients shown in Figure 6E (mean = −0.0084 for the spatial scale of 20 cm) and Figure 6F (mean = −0.13). Moderate recurrent competition and the lack of winner-take-all cell dynamics in the map hierarchy (see Equations 5 and 11) help in the distributed learning of overlapping but offset spatial firing fields of grid and place cells.
Figure 7 presents the learned grid orientation distributions for each spatial scale on the last trial. Only the orientations of unique grid structures were included in the respective distributions. For each spatial scale, the model grid cells were clustered into different groups using the criterion that two grid firing patterns are defined to be similar if their spatial correlation coefficient r ≥ 0.7 and their orientation difference < 5°. There were 96 grid groups (out of 200 map cells) for the spatial scale of 20 cm, 104 groups for 35 cm, and 85 groups for 50 cm. It can be seen that, although the learned grid orientations for each scale do not form a relatively tight cluster, their distribution is nonetheless tuned and can be approximated by a Gaussian function. A chi-square goodness-of-fit test for normality (null hypothesis) with 10 bins failed to reject normality for any spatial scale (20 cm: χ2(7,97) = 11.93, p = .10; 35 cm: χ2(7,106) = 8.41, p = .3; 50 cm: χ2(7,85) = 6.05, p = .53). Our model predicts a tuned, rather than a uniform, distribution for grid orientations because hexagonal grid patterns with the same spacing but different orientations that are equally sampled between 0° and 60° cannot be arbitrarily translated in space to pack them such that there is minimal overlap among them. In other words, a uniform grid orientation distribution cannot be stably learned in the presence of recurrent competition among entorhinal map cells. The above statistical results are relevant because data suggest that neighboring grid cells with the same spacing share similar orientations even as their firing fields are spatially offset (Barry, Hayman, Burgess, & Jeffery, 2007; Hafting et al., 2005).
Tighter grid orientation distributions could be obtained if the entorhinal network included some degree of recurrent excitatory connections among map cells that would not hamper the learning of nontopographically organized distributed spatial phases of hexagonal grid firing patterns. One goal of future work is to understand how best to reconcile these competing data constraints.
Figure 8 shows the gradual evolution of spatial fields across trials for two entorhinal map cells with the highest grid score in the last trial, one corresponding to the input stripe spacing of 20 cm (Figure 8A) and the other to that of 50 cm (Figure 8B). Comparing the grid fields in any trial for these two cells, it can be seen that both the grid field size and spacing increase with the spatial scale of input stripe cells. Moreover, the time course of hexagonal grid emergence, its quality, and its stability across trials for a particular entorhinal cell depend on its pattern of initial weights from stripe cells, its activity dynamics, and the density and regularity with which various spatial regions in the environment are visited.
Figure 9 presents the gradual evolution of spatial fields across trials for four representative hippocampal map cells. These place cells gradually learn to forget various nonpreferred spatial fields and eventually represent unimodal spatial fields. As for grid cells, place field development depends on a cell's pattern of initial weights from entorhinal cells and its activity dynamics and on the spatio-temporal coverage statistics of the environment.
Figure 10 shows the bottom–up adaptive weights from stripe cells to grid cells (see Equation 8) with the highest grid score in the three entorhinal self-organizing maps for input spatial scales of 20 cm (Figure 10A), 35 cm (Figure 10B), and 50 cm (Figure 10C) at the end of the last trial. The weight strengths are grouped by stripe direction, with the various colored bars representing the five spatial phases in each group. These results show that the model's learning law enables grid cells to become selectively tuned to coactivations of appropriate stripe cells, such that a better grid score correlates with an average separation between the local peaks in the distribution of maximal weights from the various stripe groups that is closer to 60°. Local peaks with weight values of <0.0225 are not considered. For example, these local peaks for the grid cell shown in Figure 10A, which has a grid score of 1.41, have stripe directions of −70°, −10°, and 50°, which differ from each other by 60°.
Mhatre et al. (2012) found that “most of the weight sets [from stripe cells] are less readable” in estimating grid orientation. In contrast, simulations with the GridPlaceMap model show that the grid orientation of each learned model grid cell can be extracted from the stripe directions of the above-defined local peaks in the distribution of weights from stripe cells. In particular, given the 10° resolution in stripe directions (see Simulation Settings), the grid orientation can be predicted with a ±5° margin of error as the stripe direction midway between the local peaks that lies in the range between 0° and 60°. For example, the grid orientation for the cell shown in Figure 10C, namely 27.72°, is near midway between the local peaks at 0° and 60°.
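The midway-between-peaks rule for predicting grid orientation reduces to a midpoint computation folded into the range [0°, 60°); `predicted_grid_orientation` is a hypothetical helper, not the authors' code:

```python
def predicted_grid_orientation(peak_dirs):
    # Midpoints between adjacent peak stripe directions, folded into [0, 60) deg.
    mids = [(a + b) / 2.0 for a, b in zip(peak_dirs, peak_dirs[1:])]
    return sorted(set(m % 60 for m in mids))

# Figure 10C cell: peaks at 0 and 60 deg predict an orientation of 30 deg,
# within +/-5 deg of the measured orientation of 27.72 deg.
print(predicted_grid_orientation([0, 60]))         # -> [30.0]
print(predicted_grid_orientation([-70, -10, 50]))  # -> [20.0]
```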
Figure 11 shows firing rate maps in the last trial of grid cells that project maximally to a model place cell from the three entorhinal self-organizing populations for the spatial scales of 20 cm (Figure 11A), 35 cm (Figure 11B), and 50 cm (Figure 11C). These three rate maps are summed across scale to create an ensemble rate map (Figure 11D), which highlights the spatial region where the developing grid fields are in phase, thereby explaining the place cell's receptive field (Figure 11E).
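The ensemble rate map of Figure 11D is a sum of the three single-scale rate maps. A one-dimensional toy version (illustrative cosine fields, not the model's learned maps) shows why summing periodic fields of different spacings highlights the one location where they are all in phase:

```python
import numpy as np

def ensemble_rate_map(rate_maps):
    """Sum the rate maps of the grid cells projecting maximally to one place cell."""
    return np.sum(rate_maps, axis=0)

x = np.arange(100)  # positions along a 100 cm track, 1 cm bins
maps = [np.cos(2 * np.pi * x / s) + 1.0 for s in (20.0, 35.0, 50.0)]
combined = ensemble_rate_map(maps)
print(int(np.argmax(combined)))  # -> 0: the only position where all peaks align
```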
The environment occupancy map (Figure 12A) is obtained by tracking the amount of time spent on the movement trajectories in each discrete spatial bin of the environment across all trials (see Appendix). Comparing the ensemble rate map in the last trial of all hippocampal cells (Figure 12B) with the environment occupancy map shows that the learned spatial field distribution is more selective for more frequently visited spatial regions (linear correlation: r(1598) = 0.88, p < .001). In other words, model hippocampal cells at the top of the self-organizing map hierarchy, with their mostly unimodal spatial fields, are able to provide an ensemble code for predicting self-location in two-dimensional space. Given that the learned hippocampal cells have different peak and mean firing rates, we verified that this conclusion holds even when their firing rate maps are normalized to equalize either peak rates (r(1598) = 0.77, p < .001) or mean rates (r(1598) = 0.78, p < .001) across the cells.
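The occupancy-correlation analysis can be sketched with NumPy. The synthetic maps below are hypothetical stand-ins for the model's 40 × 40-bin ensemble and occupancy maps (1,600 bins, matching the degrees of freedom above); `normalize_peak` illustrates the peak-rate normalization used in the control analysis:

```python
import numpy as np

def map_correlation(map_a, map_b):
    # Pearson r between the flattened spatial maps (1,600 bins for 40 x 40).
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

def normalize_peak(rate_maps):
    # Equalize peak rates across cells before summing into an ensemble map.
    return [m / m.max() for m in rate_maps]

# Hypothetical maps in which ensemble activity tracks occupancy.
rng = np.random.default_rng(0)
occ = rng.random((40, 40))
ens = 2.0 * occ + 0.1 * rng.random((40, 40))
print(map_correlation(ens, occ) > 0.95)  # -> True
```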
Langston et al. (2010) and Wills et al. (2010) studied the development of spatial representation in rat pups as they begin to actively explore their environments. Figure 13 shows that the model can replicate how the average quality of the hexagonal grid fields (grid score) tends to gradually improve with experience (data: Figure 13A and B; model: Figure 13D), whereas the average grid spacing or interfield distance does not exhibit any appreciable change (data: Figure 13C; model: Figure 13E). The former illustrates gradual learning; the latter shows that grid spacing is inherited from the spacing of the already developed (hard-wired) stripe cells.
Figure 14 shows that the model can simulate how the average quality of the place fields (spatial information) tends to gradually improve with experience, whereas the spatial information of the grid cells is relatively lower and does not change noticeably (data: Figure 14A; model: Figure 14C). This is intuitive because hippocampal cells, unlike entorhinal cells, develop unimodal spatial selectivities as a result of self-organization. The model also qualitatively replicates the small, gradual improvement in the intertrial stability for place cells during development (data: Figure 14B; model: Figure 14D), which follows the gradual convergence in the weights of entorhinal projections to hippocampal cells.
Figure 15 compares model learning using the same trajectory on each trial (Figure 15A and 15C) with that using a novel trajectory on each trial (Figure 15B and 15D). Grid scores and intertrial stability measures of learned entorhinal grid cells are considered for each spatial scale. Table 1 summarizes more results for these two simulation conditions regarding the number of entorhinal map cells that have become grid cells (grid score > 0.3) for each spatial scale, the number of hippocampal map cells that have become place cells (spatial information > 0.5), the number of unique grid groups for each spatial scale, the number of unique place groups, and the average group sizes. The rate maps of two place cells are formally defined to be different if their spatial correlation coefficient r is < 0.7. Figure 15 and Table 1 together show that, while presenting the same trajectory on each trial improves stability of the learned grid fields (compare Figure 15C and 15D), it results in the development of a smaller proportion of grid cells (compare the corresponding entries for the two conditions in Table 1). Furthermore, it leads to reduced hexagonal gridness within the subset of cells that pass the criterion for grid cells (compare Figure 15A and 15B) and also increased redundancy among the grid fields (compare the average grid and place group sizes for the two conditions in Table 1). All these results, which represent experimentally testable predictions, can be understood as a reflection of how fewer hexagonal grid exemplars are experienced when navigated trajectories generate sparser environmental coverage. It would be interesting to study the proportion and quality of grid cells in juvenile rats that experience space for the first time in (underground) piecewise linear tunnels, instead of open two-dimensional fields as in the recent development studies (Langston et al., 2010; Wills et al., 2010).
| (In Last Trial) | Same Trajectory in Each Trial | Novel Trajectory in Each Trial |
| --- | --- | --- |
| No. of grid cells (20 cm) | 39 (19.5%) | 131 (65.5%) |
| No. of grid cells (35 cm) | 85 (42.5%) | 146 (73%) |
| No. of grid cells (50 cm) | 137 (68.5%) | 176 (88%) |
| No. of unique grid groups (20 cm) | 22 | 97 |
| No. of unique grid groups (35 cm) | 32 | 106 |
| No. of unique grid groups (50 cm) | 48 | 85 |
| Average grid group size (20 cm) | 1.77 | 1.35 |
| Average grid group size (35 cm) | 2.13 | 1.38 |
| Average grid group size (50 cm) | 2.85 | 2.07 |
| No. of place cells | 101 (100%) | 101 (100%) |
| No. of unique place groups | 70 | 91 |
| Average place group size | 1.44 | 1.11 |
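The group counts and average group sizes in Table 1 rest on the criterion that two rate maps are different if their spatial correlation r falls below 0.7. The text does not spell out the grouping procedure, so the greedy sketch below is one plausible reading, with toy 40 × 40 maps standing in for learned fields:

```python
import numpy as np

def unique_groups(rate_maps, r_threshold=0.7):
    """Greedily group cells whose rate maps correlate at r >= r_threshold."""
    groups = []
    for i, m in enumerate(rate_maps):
        for g in groups:
            r = np.corrcoef(m.ravel(), rate_maps[g[0]].ravel())[0, 1]
            if r >= r_threshold:
                g.append(i)  # redundant with the group's first member
                break
        else:
            groups.append([i])  # a new unique field
    return groups

# Two near-duplicate fields plus one distinct field -> 2 unique groups.
rng = np.random.default_rng(1)
a = rng.random((40, 40))
maps = [a, a + 0.01 * rng.random((40, 40)), rng.random((40, 40))]
groups = unique_groups(maps)
print(len(groups))                                # -> 2
print(sum(len(g) for g in groups) / len(groups))  # -> 1.5 average group size
```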
Top–Down Attentive Matching Helps to Stabilize Learned Grid and Place Cell Firing Fields
Figure 15 confirms, in the context of spatial representation, that the associative and competitive mechanisms of a self-organizing map are not sufficient to stabilize memories in response to densely covered environments. A novel trajectory on each trial can drag the spatial firing fields away from their immediately prior arrangement. Because place cell selectivity can develop within seconds to minutes and can remain stable for months (Frank, Stanley, & Brown, 2004; Muller, 1996; Wilson & McNaughton, 1993; Thompson & Best, 1990), the hippocampus needs additional mechanisms to ensure this long-term stability. This combination of fast learning and stable memory is often called the stability–plasticity dilemma (Grossberg, 1980, 1999). Self-organizing maps are themselves insufficient to solve the stability–plasticity dilemma in environments whose input patterns are dense and nonstationary through time, as occurs regularly during real-world navigation. However, self-organizing maps augmented by learned top–down expectations that focus attention upon expected combinations of features can do so.
Adaptive resonance theory (ART) was introduced in Grossberg (1976a, 1976b) to show how the learned memories of self-organizing maps can be dynamically stabilized. Grossberg (2009) applied to the hippocampus general ART predictions about how learned top–down expectations match bottom–up signal patterns to focus attention upon expected critical features, drive fast learning of new or refined recognition categories, and dynamically stabilize established memories. In particular, he proposed how attentive matching mechanisms from hippocampal cortex to MEC may stabilize learned grid and place cell receptive fields. These top–down connections are realized by modulatory on-center, off-surround networks (Grossberg, 1980, 1999; Carpenter & Grossberg, 1987, 1991). The key role of competition in focusing attention has been called “biased competition” (Desimone, 1998).
Experimental data about the hippocampus are compatible with this predicted role of top–down expectations and attentional matching in memory stabilization. Kentros, Agnihotri, Streater, Hawkins, and Kandel (2004) reported that “conditions that maximize place field stability greatly increase orientation to novel cues. This suggests that storage and retrieval of place cells is modulated by a top–down cognitive process resembling attention and that place cells are neural correlates of spatial memory.” In addition, N-methyl-d-aspartate receptors mediate long-lasting hippocampal place field memory in novel environments (Kentros et al., 1998). Morris and Frey (1997) proposed that hippocampal plasticity reflects an “automatic recording of attended experience.” Bonnevie et al. (2010) showed that hippocampal inactivation causes grid cells to lose their spatial firing patterns. These experiments clarify how cognitive processes like attention may play a role in entorhinal–hippocampal spatial learning and memory stability.
In ART, when the cells that represent a learned category are activated, their activity triggers read-out of top–down signals that interact with adaptive weights to generate a learned expectation that is matched against the activation pattern that bottom–up inputs induce. A good enough match can focus attention on expected input features, while suppressing unexpected input features, and can trigger a synchronous resonant state between the attended features and the active category, indeed synchronized oscillations, via the reciprocal exchange of bottom–up and top–down signals. Such a resonant state can trigger fast learning, hence the name adaptive resonance, and can also prevent unattended features from being learned. A sufficiently big mismatch because of a novel or unexpected event can reset the active category cells and drive a memory search for and learning of better matching category cells. Novel events such as novel spatial environments or novel routes within a familiar environment can mismatch previously learned top–down expectations that are read out from hippocampal place cells.
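The match/reset cycle just described can be illustrated with the classic ART 1 match rule; this is a generic sketch with an assumed vigilance parameter, not the GridPlaceMap model's equations:

```python
import numpy as np

def art_match_cycle(bottom_up, expectation, vigilance=0.8):
    """One matching cycle in the style of ART 1 (a generic sketch). A good
    enough match focuses attention on the expected features and resonates;
    a big enough mismatch resets the active category.
    """
    attended = np.minimum(bottom_up, expectation)  # expected features survive
    match = attended.sum() / bottom_up.sum()       # fraction of input matched
    if match >= vigilance:
        return "resonate", attended  # fast learning of the attended pattern
    return "reset", None             # drive search for a better category

x = np.array([1.0, 1.0, 0.0, 1.0])       # bottom-up input pattern
expect = np.array([1.0, 1.0, 0.0, 0.0])  # learned top-down expectation
state, attended = art_match_cycle(x, expect)
print(state)  # -> reset (match ratio 2/3 falls below vigilance 0.8)
```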
Grossberg and Versace (2008) predicted that a sufficiently good match can trigger fast gamma oscillations that enable spike-timing-dependent plasticity, whereas a big enough mismatch can trigger slow beta oscillations that do not. They predicted, moreover, how a mismatch-based reset or shift of attentional focus could act in deeper cortical layers, thereby suggesting that beta oscillations might be found more in deeper layers than superficial layers. Their simulation study was carried out in a laminar model of visual cortex, but the dynamic principles behind the gamma–beta distinction seem to be general. The Grossberg–Versace prediction is consistent with data of Buffalo, Fries, Landman, Buschman, and Desimone (2011), who reported that superficial layers in visual cortical areas V1, V2, and V4 show stimulus-induced synchrony in the gamma range, whereas deeper layers show beta oscillations. Grossberg (2009) illustrated the potential generality of this prediction by showing how the beta oscillations that were measured by Berke, Hetrick, Breck, and Green (2008) during spatial learning of place cells in novel environments can be explained by the same top–down match-mismatch mechanism. Buschman and Miller (2009) reported additional consistent data from the FEF wherein covert attention shifts exhibit beta oscillations during visual search. The different oscillation frequencies that occur during match versus mismatch are emergent properties because of changes in the balance of excitation and inhibition in these two conditions. Further study of the mathematical basis for these changes is warranted.
In summary, top–down learned expectations, attention focusing, and novelty-sensitive memory search, because of feedback from hippocampal cortex to MEC, can ensure the combined benefits of fast learning and memory stability of the learned spatial category maps. These are core mechanisms of cognition. The fact that the GridPlaceMap model of coordinated learning of grid cells and place cells is defined by a hierarchy of self-organizing maps opens the way to augment the model with top–down connections and the match–mismatch mechanisms that are needed to rapidly learn and dynamically stabilize spatial memories.
Top–Down Matching May Control Grid Orientation, Grid Realignment, and Place Remapping
In addition to facilitating fast learning and self-stabilization of spatial memories, top–down attentive matching may have several other observable effects on grid and place cell dynamics, such as remapping. Remapping could occur, for example, when the path integration inputs activate one combination of grid and place cells via bottom–up pathways from MEC to hippocampus, but visual inputs activate a different combination by top–down pathways from hippocampus to MEC. In any ART system, a big enough mismatch can cause reset. In this way, top–down attentive matching may underlie the phenomena of global remapping in the hippocampus and grid realignment in the MEC (Fyhn, Hafting, Treves, Moser, & Moser, 2007).
Top–down matching from place cells to grid cells can also help to align grid orientations. Such global alignment by top–down matching has earlier been simulated in other brain systems. For example, as part of a proposed solution of the global aperture problem, top–down matching from area MST (medial superior temporal) to area MT (medial temporal) in the motion system may align perceived motion directions across spatial locations to conform to a higher-level choice of object motion direction (Berzhanskaya, Grossberg, & Mingolla, 2007; Grossberg, Mingolla, & Viswanathan, 2001; Chey, Grossberg, & Mingolla, 1997).
These proposed effects can be quantitatively studied only by implementing a more complete entorhinal–hippocampal model that includes top–down attentive matching.
Space and Time in Entorhinal–Hippocampal Circuits and Episodic Memory
The GridPlaceMap model illustrates how spatial memories can be learned in circuits passing through MEC to the hippocampus. In particular, multiple small spatial scales developing through the MEC are fused through learning in the hippocampus to generate place cells capable of representing the large spaces that animals navigate. Previous modeling has shown how multiple small temporal scales through the lateral entorhinal cortex (LEC) can be adaptively combined in the hippocampus to generate temporal scales that are large enough to bridge behaviorally relevant temporal gaps between stimulus and response, such as occurs during trace conditioning and delayed nonmatch to sample experiments (Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1989). In both cases, multiple small spatial or temporal scales give rise to larger and behaviorally useful spatial or temporal scales.
Grossberg and Pilly (submitted) have suggested how these spatial and temporal scales can both be generated by homologous neural mechanisms. The mechanistic homology between the spatial and temporal mechanisms suggests why they may occur side by side in the medial and lateral streams through entorhinal cortex into the hippocampus; indeed, spatial representations in the “where” cortical stream go through parahippocampal cortex and medial entorhinal cortex on their way to hippocampal cortex, and object representations in the “what” cortical stream go through perirhinal cortex and LEC on their way to hippocampal cortex (e.g., Eichenbaum & Lipton, 2008), where they are merged. This mechanistic homology, joining space and time mechanisms into a unified spatio-temporal design, may embody a type of “neural relativity.” The existence of such a neural relativity of spatio-temporal representations may clarify how the hippocampus contributes to episodic learning and memory, which include both spatial and temporal information.
Why Can Some Place Cells Develop before Grid Cells?
The current model studies how grid and place cell receptive fields can develop in a coordinated way. However, it is also known that, in young rat pups by around P16, hippocampal area CA1 already has a greater proportion of spatially modulated cells that exhibit more stability and more spatial information when compared with MEC (Langston et al., 2010; Wills et al., 2010). In other words, some place cell development occurs before stable grid cells appear. The place cells in the current model develop only from outputs of emerging grid cells; that is, based only on path integration information. In our simulations, although the first grid cells appear before the earliest place cells (Trial 1 vs. Trial 2), the proportion of place cells becomes greater than that of grid cells already by Trial 3.
In rats, adultlike grid cell firing and the emergence of running behavior suggestive of some degree of spatial cognition, in terms of reduction in both mean number of pauses and mean pause duration, first occur around P28 (see Figure 3E and Supplementary Figure 1B and C, respectively, in Langston et al., 2010). Our model is consistent with these data and also suggests predictions that could further test their mechanistic basis. In particular, our simulations and those of Gorchetchnikov and Grossberg (2007) for the one-dimensional case show how grid cells enable hippocampal place cells to be learned that can represent the large spaces in which adult rats typically navigate. While rat pups remain in and near their nests, before exploration of large spaces begins (P16), such large space representations may be unnecessary, although some spatial representation using place cells may be of use. Environmental sensory inputs, such as boundaries and visual landmarks, may mediate the learning (Barry et al., 2006) of these early place representations (Wills et al., 2010) via spatially modulated nongrid cells in MEC such as border/boundary cells (Savelli, Yoganarasimha, & Knierim, 2008; Solstad, Boccara, Kropff, Moser, & Moser, 2008) and also nonspatial sensory signals from LEC (Witter & Amaral, 2004). This is consistent with the observation that eye opening in rat pups happens around P14 (Tan, Wills, O'Keefe, & Cacucci, 2010). Two conclusions follow from this discussion: First, grid cells may not be necessary to learn unimodal place fields per se but may enable learning of place cells capable of representing large enough spaces suitable for adult navigation. Second, the spatial information of early place cells is gradually improved, and more place cells are learned by inputs from developing grid cells, over Postnatal Weeks 3 and 4 (P16–P28).
Additional insights may derive from correlating aspects of the spatial representations in CA1 that are learned via the direct and indirect pathways and the gamma rhythms that these pathways seem to support. In particular, fast gamma oscillations seem to be mediated by the direct pathway, whereas slow gamma oscillations seem to be facilitated by the indirect pathway (Colgin et al., 2009). It would be of interest to study when these different gamma rhythms first arise in CA1 and whether they are correlated with the maximum spatial scale of the spatial representations that they support. Interestingly, Colgin et al. (2009) reported that the spatial firing fields of CA1 place cells constructed using spikes recorded during fast gamma periods are sharper and less dispersed compared with those based on spikes during slow gamma periods.
Previous Models of Grid Cells and Place Cells
Previous self-organizing models for the development of grid cells or place cells have used a variety of neuronal activation dynamics and synaptic learning laws. Molter and Yamaguchi (2008) and Gorchetchnikov and Grossberg (2007) employed different versions of spiking neurons and spike-timing-dependent plasticity. Mhatre et al. (2012) and Rolls et al. (2006) used different kinds of rate-based neuronal dynamics and synaptic learning laws. Savelli and Knierim (2010) simulated integrate-and-fire neurons and yet another kind of rate-based synaptic learning law. There have also been several models that directly construct either grid fields (Burgess, 2008; Burgess et al., 2007; Hasselmo et al., 2007; Solstad, Moser, & Einevoll, 2006) or place fields (Fuhs & Touretzky, 2006; McNaughton et al., 2006; O'Keefe & Burgess, 2005), without testing whether the model mechanisms can support learning of the corresponding receptive fields. The first model that simulated learning of grid cell periodic hexagonal receptive fields in response to spatial movement signals is the GRIDSmap model (Mhatre et al., 2012). The GridPlaceMap model shows how grid cell receptive fields can develop under much broader input conditions and how the same activation and learning laws can induce coordinated learning of both grid cell and place cell receptive fields as the model animal navigates realistic trajectories.
The dynamics of map cells in the GRIDSmap model are governed by a recurrent on-center off-surround shunting network with adaptive bottom–up excitatory inputs from stripe cells. The recurrent on-center provides grid cells with self-excitatory feedback to contrast enhance the activity of the currently winning cell and to facilitate suppression of losing cells. In the GridPlaceMap model, there is no recurrent on-center for either grid cell or place cell dynamics. Previous winning map cells are shut off, and new cells are activated, as different positions in the environment are navigated, because their only excitation comes from bottom–up inputs. Given self-excitatory feedback in GRIDSmap, an additional mechanism is needed to prevent the activity of winning map cells from perseverating indefinitely through time and learning multiple positions in the environment during navigation, thereby learning no stable map properties, while preventing other map cells from learning at all.
To overcome this problem, both the bottom–up and recurrent excitatory inputs of each GRIDSmap grid cell are multiplicatively gated by an activity-dependent habituative transmitter (see Equations 7 and 9 in Mhatre et al., 2012). Each habituative transmitter is depleted or inactivates after its winning cell is active for a controllable time interval and thereby gates off the inputs to that cell. This forces the winning cell to inactivate. Multiple map cells can thus briefly win and learn to represent environmental positions as they are explored.
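The qualitative effect of habituative gating, a winner that rises and then shuts itself off, can be illustrated with a minimal Euler-integrated sketch. The shunting form and all parameter values below are illustrative assumptions, not Equations 7 and 9 of Mhatre et al. (2012):

```python
def simulate_habituative_gate(inputs, dt=0.002, tau_x=0.01, eps=0.5, lam=20.0):
    """Euler-integrated sketch of a self-exciting cell whose inputs are gated
    by an activity-dependent habituative transmitter z (all parameters are
    illustrative assumptions).
    """
    x, z, trace = 0.0, 1.0, []
    for inp in inputs:
        # Shunting cell: bottom-up input and self-excitation, both gated by z.
        dx = -x + (1.0 - x) * z * (inp + 2.0 * x)
        # Transmitter recovers toward 1 and is depleted by cell activity.
        dz = eps * ((1.0 - z) - lam * z * x)
        x += (dt / tau_x) * dx
        z += dt * dz
        trace.append(x)
    return trace

# Constant input: activity first rises, then collapses as z depletes,
# so the winner cannot perseverate indefinitely.
trace = simulate_habituative_gate([1.0] * 2000)
peak = max(trace)
print(0 < trace.index(peak) < 1999 and trace[-1] < peak)  # -> True
```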
Such a combination of self-excitatory feedback and habituative gating has successfully enabled self-organizing map models to simulate complex properties of visual cortical map development (e.g., Grossberg & Seitz, 2003; Grossberg & Williamson, 2001; Olson & Grossberg, 1998). In the visual cortex, map development begins in utero, before any visual inputs are experienced. Recurrent excitation and habituative gating enable map cells to learn ocular dominance and orientation columns. They also help to explain many data about adult visual perception as consequences of mechanisms for cortical development, including data about visual persistence, afterimages, apparent motion, bistable perception, and binocular rivalry (Grossberg, Yazdanbakhsh, Cao, & Swaminathan, 2008; Grossberg & Swaminathan, 2004; Francis & Grossberg, 1996a, 1996b; Francis, Grossberg, & Mingolla, 1994). In addition, activity-dependent habituation, also called synaptic depression, has been reported in neurophysiological data collected in cortical area V1 (Abbott, Varela, Sen, & Nelson, 1997).
Given that models of visual cortex successfully use self-excitatory feedback and habituative gating, why does GRIDSmap learn low-quality or fuzzy grid fields in a significant number of cases in which the grid fields learned by the GridPlaceMap model are clearly defined? One problem seems to be that the combination of self-excitatory feedback and habituative gating may not be sufficiently sensitive to the dynamics of changes in the input pattern as navigation proceeds along realistic trajectories. For such trajectories (see Figure 2D), both head directions and running speeds are time-variant and do not exhibit stationary statistics. Given the inertia of activation for a cell that excites itself, if the habituation is too slow (or weak), the cell may continue to fire at an appreciable level even when the animal leaves its spatial field. If the habituation is too fast (or strong), cell firing may fall to a low level even when the animal does not move out of the spatial field immediately.
Perhaps better grid fields would be learned if the rate of habituation dynamics was made to covary with running speed. There is a precedent for using speed-dependent gain control of processing rates in, for example, models of variable-rate speech perception (Boardman, Grossberg, Myers, & Cohen, 1999; Grossberg, Boardman, & Cohen, 1997). However, this solution could be fooled if the animal decides to make movements within the spatial field of the cell or returns to the spatial field before the habituative transmitter replenishes. The effects of such movements on grid and place cell firing in vivo would be interesting to know. In this direction, Czurkó, Hirase, Csicsvari, and Buzsáki (1999) reported that when a rat moves its limbs in a running wheel intending to translate its body with respect to the world, yet remains in the same “clamped” place, hippocampal CA1 place cells that show activity in the running wheel continue to persistently fire (based on data that excluded the first second of running; see their Figure 6). Hirase, Czurkó, Csicsvari, and Buzsáki (1999) showed that the firing rates of these so-called “wheel cells” vary dynamically in proportion to running speed (see their Figures 1 and 4).
Another possible way to prevent cell activities from perseverating for too long when their inputs shut off is to require that the self-excitatory feedback multiplicatively modulate the bottom–up input, because when the bottom–up input shuts off, the effects of self-excitatory feedback would end as well.
Notwithstanding these possible future extensions, the current work focuses on defining the minimal mechanisms that can be used at both stages of the entorhinal-hippocampal hierarchy to self-organize behaviorally useful grid and place cells.
All cell activity and weight dynamics were numerically computed using Euler's forward method with a time step of 2 msec. The 100 cm × 100 cm environment was divided into 2.5 cm × 2.5 cm bins. In each learning trial, the amount of time the animat spent in each bin was tracked and, for each map cell, its real-time output activity (Gjs for medial entorhinal cells and Pk for hippocampal cells; see Equations 6 and 12, respectively) was accumulated in the corresponding spatial bin whenever the animat visited it. The occupancy map and the various output activity maps were each smoothed using a 5 × 5 Gaussian kernel with a standard deviation of one. Smoothed and unsmoothed rate maps for each map cell were obtained by dividing the cumulative activity variable by the cumulative occupancy variable for each bin. For each entorhinal map cell, spatial measures such as grid score, grid orientation, grid spacing, and grid field width were derived from the spatial autocorrelogram of its smoothed rate map. In particular, the six local maxima (with r > 0.3) closest to the origin in the autocorrelogram were identified, and grid spacing was defined as the median of their distances from the origin (Wills et al., 2010; Hafting et al., 2005). Grid orientation was defined as the smallest positive angle with the horizontal axis (0° direction) made by the line segments connecting the origin to each of these six local maxima. Grid field width was estimated by computing the width of the central peak in the autocorrelogram at the point where r ≤ 0 or there is a local minimum, whichever is closer to the central peak (Langston et al., 2010). For all map cells, intertrial stability was computed from the smoothed rate maps, and spatial coherence from the unsmoothed rate maps. Spatial information measures were calculated using rate maps that were adaptively smoothed (Skaggs, McNaughton, Wilson, & Barnes, 1996). Readers are referred to the supplementary material of Langston et al. (2010) and Wills et al. (2010) for details on computing these statistical properties of developing grid and place cells.
S. G. is supported in part by the SyNAPSE program of DARPA (HR0011-09-C-0001).
Reprint requests should be sent to Stephen Grossberg, Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, or via e-mail: email@example.com.