The neural system that encodes heading direction in humans can be found in the medial and superior parietal cortex and the entorhinal-retrosplenial circuit. However, it is still unclear whether heading direction in these different regions is represented within an allocentric or egocentric coordinate system. To investigate this problem, we first asked whether regions encoding (putatively) allocentric facing direction also encode (unambiguously) egocentric goal direction. Second, we assessed whether directional coding in these regions scaled with the preference for an allocentric perspective during everyday navigation. Before the experiment, participants learned different object maps in two geometrically similar rooms. In the MRI scanner, their task was to retrieve the egocentric position of a target object (e.g., Front, Left) relative to an imagined facing direction (e.g., North, West). Multivariate analyses showed, as predicted, that facing direction was encoded bilaterally in the superior parietal lobule (SPL), the retrosplenial complex (RSC), and the left entorhinal cortex (EC), a result that could be interpreted both allocentrically and egocentrically. Crucially, we found that the same voxels in the SPL and RSC also coded for egocentric goal direction but not for allocentric goal direction. Moreover, when facing directions were expressed as egocentric bearings relative to a reference vector, activity patterns for facing direction and egocentric goal direction were correlated, suggesting a common reference frame. Finally, only the left EC coded allocentric goal direction, and it did so as a function of the subject’s propensity to use allocentric strategies. Altogether, these results suggest that heading direction in the superior and medial parietal cortex is mediated by an egocentric code, whereas the entorhinal cortex encodes directions according to an allocentric reference frame.

To navigate successfully, an organism must know its current position and heading direction. In rodents, head direction cells are thought to constitute the neural substrates of facing direction. Indeed, these neurons fire in relation to the organism’s facing direction with respect to the environment (Taube et al., 1990), working as a neural compass. This neural compass is suggested to be involved in the representation of goal direction during navigation (Bicanski & Burgess, 2018; Byrne et al., 2007; Erdem & Hasselmo, 2012; Schacter et al., 2012), allowing the computation of the movements required to reach a goal from the current location and orientation.

The neural compass in humans has been studied using fMRI with both univariate (adaptation) and multivariate (MVPA) approaches. These methods enabled the isolation of the brain regions representing imagined heading direction. Previous studies have revealed a number of brain regions that represent heading direction in familiar environments or virtual reality. These include the entorhinal cortex, the retrosplenial complex, and superior parietal regions (Baumann & Mattingley, 2010; Chadwick et al., 2015; Chrastil et al., 2016; Kim & Maguire, 2019; Marchette et al., 2014; Shine et al., 2016, 2019; Vass & Epstein, 2013, 2017). Whereas some of these studies focused on facing or goal direction in isolation (Baumann & Mattingley, 2010; Chrastil et al., 2016; Shine et al., 2016, 2019; Vass & Epstein, 2013, 2017), others found that the same areas represented both types of heading direction (Chadwick et al., 2015; Marchette et al., 2014).

Since heading directions can be aligned to environmental or absolute allocentric landmarks (e.g., north, south), previous work has suggested that these regions encode directions in an allocentric reference frame independent from the agent’s vantage point (Marchette et al., 2014; Shine et al., 2016; Vass & Epstein, 2017; Weisberg et al., 2018). However, in most cases, putatively allocentric heading can be accounted for by egocentric bearings to specific landmarks (Marchette et al., 2014). For instance, when entering a new environment, it is possible to choose a principal reference vector (e.g., directed to a specific landmark or from the first encountered perspective) and to code all directions as an egocentric bearing relative to this vector (Shelton & McNamara, 2001). Hence, since any putative allocentric direction can be expressed both as an allocentric heading and as the egocentric angle required to rotate from the principal reference vector to this direction (see Fig. 1), it is still unclear whether such a directional code is encoded according to an egocentric or an allocentric reference frame.

Fig. 1.

Example of how any allocentric direction (in red) can be expressed as an egocentric bearing (in blue) relative to a reference vector.


The present study aimed to disentangle these two frames of reference using fMRI and Representational Similarity Analysis (RSA; Kriegeskorte, Mur, & Bandettini, 2008). To this end, participants were asked to remember the location of the objects within two virtual rooms (see Fig. 2A-B). In the MRI scanner, participants were first cued to imagine themselves facing one of the four walls, and then to recall the egocentric (goal) direction of a target object (left, right, back, front) given the current imagined facing direction.

Fig. 2.

Materials. (A) Views of the two context rooms. Room 1 had a zigzag pattern on the upper part of the walls, while Room 2 had circles. Participants were in the middle of the room and studied it rotating their point of view clockwise or counterclockwise using the right and the left arrow, respectively. (B) Examples of the two versions of each context room. (C) Sequence of events of the fMRI task. ISI, interstimulus interval; ITI, intertrial interval.


We hypothesized that heading direction is indeed encoded in the three areas of interest mentioned above: the entorhinal cortex (EC), retrosplenial complex (RSC), and superior parietal lobule (SPL). However, since the reference frame underlying the coding of heading direction is uncertain, the present study was designed to disentangle the two alternatives in two different ways. First, we investigated whether the very same regions that encoded (putatively) allocentric facing direction also encoded (unambiguously) egocentric goal direction. Indeed, while allocentric directions can be expressed egocentrically with respect to a principal reference vector, an egocentric direction like left, back, or right can only be formulated with respect to the current vantage point. Second, because of the intrinsically ambiguous nature of allocentric directions, we assessed whether these regions encode direction in an allocentric reference frame using an external validity method; namely, whether a region encoded allocentric heading direction as a function of the subject’s propensity to use allocentric navigation strategies in daily life.

2.1 Participants

Thirty-four right-handed native Italian speakers with no history of neurological or psychiatric disorders participated in this experiment (17 women and 17 men, mean age = 23.94, SD = 3.90, range 18-35). The ethical committee of the University of Trento approved the experimental protocol. All participants provided informed written consent before the start of the experiment, and they received a payment as compensation for their time. We discarded one participant whose catch-trial accuracy was more than 2 SDs below the mean.

2.2 Materials

2.2.1 3D rooms

Two virtual rectangular context rooms (Room 1 and Room 2) were created with Unity. These rooms could be distinguished based on the frieze pattern on the upper part of the walls: Room 1 had a zigzag pattern, while Room 2 had circles (see Fig. 2A). In these rooms, one of the two short walls and one of the two long walls were white, while the two remaining walls were blue. Hence, a wall could be recognized by its specific combination of length (short or long) and color (blue or white). The participants’ point of view was placed in the middle of the room and was stationary. Four tables surrounded this point of view, forming a square parallel to the room’s walls, and one object was placed in the middle of each of these four tables. Two different versions of each context room were further created (Room 1 versions a and b; Room 2 versions a and b), containing two different object layouts to dissociate object identities and object locations. In sum, each version of a room contained a different layout with different objects. Each layout consisted of four objects randomly assigned to four slots, each located in front of one of the four walls (see Fig. 2B).

2.2.2 SDSR questionnaire

The SDSR (Sense of Direction and Spatial Representation) questionnaire allowed us to assess the participants’ propensity to use the survey perspective (i.e., bird’s-eye view representations of the environment) during spatial navigation in everyday life (Pazzaglia et al., 2000). This questionnaire comprises 11 items on a 5-point scale (except for three items with only three alternatives). Although the questionnaire provides several scores measuring the propensity to use different perspectives, we were primarily interested in the survey perspective, which is usually associated with an allocentric frame of reference. The survey items comprised four questions that assessed how much participants tend to use a representation “from above” (i.e., a “map-like representation”) while navigating a city. The scores could range from -2 to 14. Although the questionnaire addresses navigation in a large-scale environment, it has been validated by its authors using a pointing task in a room (Pazzaglia et al., 2000) and has been shown to be informative in other studies using smaller spaces (Lhuillier et al., 2018).

2.3 Behavioral procedures

The experiment was organized in three sessions: two training sessions—the familiarization and the rehearsal session—and the scanning session. These sessions will now be described in detail in turn. All tasks were developed with Python 3.7 using the Neuropsydia package (Makowski & Dutriaux, 2017).

2.3.1 Session 1: Familiarization session

This first training session was conducted around a week before the scanning session. All the necessary programs and files required to conduct the experiment were first sent to the participants, who performed it on their personal laptop or desktop computer using their mouse/touchpad and keyboard. Throughout the experiment, the participants shared their screens with the experimenter via a video call. This allowed the experimenter to give guidance, provide instructions, and monitor the experiment’s progress. This session was designed to familiarize participants with the rooms, with the aim of constructing a mental representation of the environments prior to performing the main task.

Participants were presented with a room inside a virtual reality setting. They could only rotate their point of view from the middle of the room by pressing the left or right arrow. No other movement inside the virtual environment was possible. When entering each version (a and b) of a context room (1 or 2) for the first time, the vantage point was oriented towards the short blue wall (see Fig. 2A). For simplicity, we refer to this wall as the North wall. Consistently, we associated the other walls with their corresponding cardinal directions.

In addition to studying the rooms, participants completed three tasks assessing their allocentric knowledge of the rooms (see Fig. S1). Task a assessed participants’ knowledge of the spatial location of the objects relative to the walls of the room, while task b assessed their knowledge of the spatial location of the objects relative to the other objects. Finally, the test task was very similar to the fMRI task and was designed to ensure that the participant could easily perform it in the scanner. The detailed sequence of tasks of this session is shown in Figure S2, and a detailed description of the training can be found in the Supplementary Material.

2.3.2 Session 2: Rehearsal session

The second training session was conducted in the lab, no earlier than 2 days before the scanning session. After re-studying the rooms, participants were asked to perform the final task of the first session on a laboratory laptop computer. This session allowed participants to rehearse their memory of the rooms before the scanning session and ensured that they could still easily perform the fMRI task.

2.3.3 Session 3: Scanning session

Because the fMRI task was slightly different from the test task of the training sessions, participants were first trained to perform it before entering the scanner. The sequence of events of the fMRI task is shown in Figure 2C.

The timing and jitter of each trial followed the “quick” event-related fMRI design optimized for condition-rich experiments and RSA (Kriegeskorte, Mur, & Bandettini, 2008; Kriegeskorte, Mur, Ruff, et al., 2008). Each experimental trial started with a 500 ms fixation cross. Then, a reference map with a character facing one of the four walls was shown on the screen in one of the four possible orientations for 500 ms, followed by a 3000 ms interval. In this interval, participants were instructed to imagine facing the wall cued by the character on the screen. At this point, an interstimulus interval (ISI) of 4000 ms was present in 25% of the trials (Zeithamova et al., 2017). Next, another fixation cross was presented for 500 ms. The target object was then displayed for 500 ms, followed by a 3000 ms interval. Participants were instructed to indicate when they were ready to answer and could do so from the onset of the target object until the end of the 3000 ms interval. At this point, an intertrial interval (ITI) of 4000 ms was present in 25% of the trials. We jittered both the ITI and the ISI (0 or 4000 ms) to better dissociate the BOLD signal related to the reference and the target events (Zeithamova et al., 2017). Catch trials represented 20% of total trials. These trials were identical to the experimental trials, except that at the end, an egocentric directional word (Front, Right, Back, Left) appeared on the screen for 2000 ms. Participants had to indicate whether this word matched the actual egocentric direction of the object. Fifty percent of the catch trials matched the actual direction of the object.
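For concreteness, the following minimal Python sketch reproduces the event sequence of a single trial as described above (durations in ms). It is illustrative only: names such as trial_events and catch_probe are ours, not taken from the original task code, and the 25% jitter probabilities are drawn independently here, whereas the actual design counterbalanced them across trials.

```python
import random

def trial_events(is_catch=False):
    """Event sequence (name, duration in ms) for one trial, per the design above."""
    events = [
        ("fixation", 500),
        ("reference_map", 500),       # character cues the imagined facing direction
        ("imagine_facing", 3000),
    ]
    if random.random() < 0.25:        # 25% of trials include a 4000 ms ISI
        events.append(("isi", 4000))
    events += [
        ("fixation", 500),
        ("target_object", 500),
        ("response_window", 3000),    # respond from target onset to end of window
    ]
    if is_catch:
        events.append(("catch_probe", 2000))  # directional word: match / no match
    if random.random() < 0.25:        # 25% of trials include a 4000 ms ITI
        events.append(("iti", 4000))
    return events
```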

In the experimental trials, each object was presented in all 16 conditions resulting from the combination of the four allocentric goal directions (i.e., the allocentric position of the target object: North, South, East, or West) and the four egocentric goal directions (i.e., the egocentric position of the object: Front, Back, Right, or Left) as a function of the current facing direction. The environmental facing direction, indicated by the orientation of the character relative to the wall (allocentric—North, East, South, West), was dissociated from the screen-facing direction of the character, namely, the orientation of the character relative to the screen (egocentric—top, right, bottom, and left). In other words, the maps were presented in four different orientations across trials. The four screen-facing directions were counterbalanced across the 16 trials of each of the 16 allocentric × egocentric goal direction levels. Therefore, each map orientation was presented four times for each of these 16 levels. This resulted in 256 experimental trials, to which 64 catch trials were added. These 320 trials were arranged in 8 runs of 40 trials (32 experimental trials and 8 catch trials). Each run included trials for only one version per context room (e.g., 1a-2a, or 1b-2a), which means that there were two runs for each context room × version combination (1a-2a, 1a-2b, 1b-2a, and 1b-2b). Each object appeared twice in each run. Within a run, there was a catch trial every four experimental trials on average, placed in a random position within these four trials. This was done to spread the catch trials along the whole run. The 8 catch trials within a run were not included in the analyses and were chosen such that each of the 8 objects presented in a given run was presented once, and each of the 4 egocentric target positions was tested twice. Fifty percent of the catch trials matched the actual egocentric goal direction of the object. All this was done to ensure that participants could not anticipate which trials were catch trials. For the same reason, similar to the experimental trials (see above), 25% of catch trials comprised an ITI and 25% comprised an ISI. Participants were given the opportunity of a break between runs. After the fMRI session, they completed the SDSR scale. To sum up, brain activity was recorded in a within-participant design: each participant was presented with eight trials for each of the 32 levels resulting from the combination of the 2 context rooms × 4 allocentric goal directions × 4 egocentric goal directions, and the SDSR scale was used as a covariate in our analyses.
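The factorial structure of the experimental trials can be made explicit with a short sketch. This is a minimal reconstruction under the assumptions stated in the comments; the helper facing_from and all variable names are ours, and the actual counterbalancing across runs was more constrained than this flat enumeration.

```python
import itertools

DIRS = ["North", "East", "South", "West"]            # allocentric directions
EGO = ["Front", "Right", "Back", "Left"]             # egocentric goal directions
SCREEN = ["top", "right", "bottom", "left"]          # map orientation on screen

def facing_from(allo_goal, ego_goal):
    """Imagined facing direction implied by an allocentric x egocentric pair;
    e.g., a target that is East and Right implies facing North."""
    steps = {"Front": 0, "Right": 1, "Back": 2, "Left": 3}
    return DIRS[(DIRS.index(allo_goal) - steps[ego_goal]) % 4]

trials = [
    {"allo_goal": allo, "ego_goal": ego, "facing": facing_from(allo, ego),
     "screen_facing": screen}
    for allo, ego in itertools.product(DIRS, EGO)    # 16 allo x ego levels
    for screen in SCREEN                             # each orientation 4x per level
    for _ in range(4)
]
assert len(trials) == 256                            # 16 levels x 4 orientations x 4
```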

2.4 MRI procedures

2.4.1 MRI data acquisition

MRI data were acquired using a MAGNETOM Prisma 3T MR scanner (Siemens) with a 64-channel head-neck coil at the Centre for Mind/Brain Sciences, University of Trento. Functional images were acquired using a simultaneous multislice echoplanar imaging sequence (multiband factor = 5). The angle of the scanning plane was set to 15° towards the chest from the anterior commissure–posterior commissure plane to maximize the signal in the MTL. The phase encoding direction was from anterior to posterior, repetition time (TR) = 1000 ms, echo time (TE) = 28 ms, flip angle (FA) = 59°, field of view (FOV) = 200 mm × 200 mm, matrix size = 100 × 100, 65 axial slices, slice thickness (ST) = 2 mm, gap = 0.2 mm, and voxel size = 2 × 2 × (2 + 0.2) mm. Three-dimensional T1-weighted images were acquired using the magnetization-prepared rapid gradient-echo sequence, sagittal plane, TR = 2140 ms, TE = 2.9 ms, inversion time = 950 ms, FA = 12°, FOV = 288 mm × 288 mm, matrix size = 288 × 288, 208 continuous sagittal slices, ST = 1 mm, and voxel size = 1 × 1 × 1 mm. B0 fieldmap images, including the two magnitude images associated with the first and second echoes and the phase-difference image, were also collected for distortion correction (TR = 768 ms, TE = 4.92 and 7.38 ms).

2.4.2 fMRI preprocessing

Preprocessing was conducted using SPM12 for MATLAB (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/). First, we computed each participant’s Voxel Displacement Map (VDM) using the FieldMap toolbox (Jenkinson, 2003; Jezzard & Balaban, 1995). Second, functional images in each run were realigned to the first image of the first run; the VDM was also coregistered to this first image and was used to resample the voxel values of the images in each run to correct for EPI distortions caused by inhomogeneities of the static magnetic field in the vicinity of air/tissue interfaces. Third, the functional images were coregistered onto the structural image in each individual’s native space with six rigid-body parameters. Lastly, minimal spatial smoothing was applied to the functional images with a full width at half maximum (FWHM) of 2 mm.
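A pipeline of this kind could be scripted, for example, through the Nipype interfaces to SPM. The sketch below is a schematic under that assumption, not the authors’ actual batch: it covers realignment, coregistration, and smoothing only (the fieldmap/VDM step is omitted for brevity), and run_files / t1_file are hypothetical placeholders.

```python
from nipype.interfaces import spm

# run_files: list of EPI images (one set per run); t1_file: the T1-weighted image.
# Both are placeholders for a participant's actual data.

# Realign all runs to the first image of the first run (not to the mean image).
realign = spm.Realign(in_files=run_files, register_to_mean=False, jobtype="estwrite")
realign_res = realign.run()

# (The VDM computation and unwarping performed with the FieldMap toolbox in the
# actual pipeline would take place around this step; it is omitted here.)

# Coregister the functional images onto the structural image (rigid body, 6 dof).
coreg = spm.Coregister(
    target=t1_file,
    source=realign_res.outputs.mean_image,
    apply_to_files=realign_res.outputs.realigned_files,
    jobtype="estimate",
)
coreg_res = coreg.run()

# Minimal spatial smoothing with a 2 mm FWHM Gaussian kernel.
smooth = spm.Smooth(in_files=coreg_res.outputs.coregistered_files, fwhm=[2, 2, 2])
smooth.run()
```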

2.4.3 Regions of interest

Multiple ROI masks were used in the analysis. The entorhinal and the superior parietal cortex were segmented in each subject’s native space with the FreeSurfer image analysis suite. The entorhinal cortex masks were thresholded at a probability of 0.5, as recommended by FreeSurfer. The location estimates for the EC were based on a cytoarchitectonic definition (Fischl et al., 2009). Given the importance of the hippocampus in spatial cognition, we ran complementary analyses using a mask for the hippocampus that was also based on a cytoarchitectonic definition (Iglesias et al., 2015). Since the hippocampus proper was not among the predefined ROIs, and no significant effect was found in this region, we report these results in the Supplementary Material. Masks for the SPL were based on the Destrieux atlas (Destrieux et al., 2010). Because activation in the RSC in previous studies was not found in the anatomical retrosplenial cortex, we also used masks of the retrosplenial cortex defined functionally as category-specific regions for scene perception (Julian et al., 2012). The anatomical RSC was defined as the combination of BA29 and BA30 using MRIcron. These last masks were in MNI space and were therefore coregistered onto the structural image in each individual’s native space.

2.4.4 First-level analysis

We analyzed the patterns instantiated during the appearance of both the reference map and the target object. Therefore, there were 16 experimental conditions related to the reference map (4 screen-facing directions × 4 environmental facing directions) and 32 related to the target object (4 allocentric goal directions × 4 egocentric goal directions × 2 context rooms). The first-level analysis was computed using the SPM12 package. The brain signal related to the reference was modeled as stick functions convolved with the canonical HRF, and the brain signal related to the target was modeled as a boxcar function (with event duration equal to the reaction time) convolved with the canonical HRF in the time window between the presentation of the target and the response. We then used the resulting T images (48 volumes, one for each condition) in the following analyses.
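To make the two event models concrete, here is a minimal numpy sketch of how a stick and a duration-modulated boxcar regressor can be built with a double-gamma canonical HRF. It illustrates the modeling logic only, not SPM’s internal code; ref_onsets, tgt_onsets, rts, and n_scans are hypothetical inputs, and SPM’s canonical HRF includes scaling conventions not reproduced here.

```python
import numpy as np
from scipy.stats import gamma

TR, DT = 1.0, 0.1  # TR of 1 s; regressors are built on a 0.1 s grid

def canonical_hrf(dt=DT, length=32.0):
    """Double-gamma canonical HRF (peak ~6 s, undershoot ~16 s), sampled every dt s."""
    t = np.arange(0.0, length, dt)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
    return h / h.sum()

def make_regressor(onsets, durations, n_scans, dt=DT, tr=TR):
    """Convolve events with the HRF. Zero durations yield stick functions;
    positive durations yield boxcars (here, boxes lasting the trial's RT)."""
    grid = np.zeros(int(n_scans * tr / dt))
    for onset, dur in zip(onsets, durations):
        start = int(onset / dt)
        grid[start : start + max(int(dur / dt), 1)] = 1.0
    conv = np.convolve(grid, canonical_hrf())[: len(grid)]
    return conv[:: int(tr / dt)]  # resample to one value per TR

# Reference events as sticks; target events as boxcars lasting the reaction time.
ref_regressor = make_regressor(ref_onsets, np.zeros_like(ref_onsets), n_scans)
tgt_regressor = make_regressor(tgt_onsets, rts, n_scans)
```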

2.4.5 Representational similarity analysis

2.4.5.1 RDM models

Representational Similarity Analysis (RSA) uses a correlation measure to compare a brain-based Representational Dissimilarity Matrix (RDM), obtained by calculating the pairwise correlation between the patterns instantiated in all pairs of conditions, with a model-based RDM of experimental interest (Kriegeskorte, Mur, & Bandettini, 2008). A brain-based RDM is a square matrix containing the Spearman correlation distance (1 − r) between the brain patterns instantiated during two different conditions. Thus, its dimensions were 16 × 16 in the case of the reference window and 32 × 32 in the target window. The different RDMs were designed to detect coding for (environmental) facing direction, egocentric goal direction, and allocentric goal direction. They are described in turn in the Results section. Throughout the paper, “facing direction” refers to environmental facing direction (not screen-facing direction), unless further specified.
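Computing such a brain-based RDM is straightforward; the sketch below shows one way to do it (the function name is ours). Ranking the voxel values within each pattern and then taking the correlation distance yields exactly the Spearman distance 1 − r.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata

def brain_rdm(patterns):
    """Brain-based RDM from an (n_conditions, n_voxels) array of activity patterns
    (e.g., the T values of one ROI for each condition: 16 rows for the reference
    window, 32 for the target window). Entry (i, j) is the Spearman distance 1 - r."""
    ranked = np.apply_along_axis(rankdata, 1, patterns)      # rank voxels per pattern
    return squareform(pdist(ranked, metric="correlation"))   # 1 - Pearson on ranks
```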

2.4.5.2 ROI-based RSA

In the ROI-based RSA, the brain-based RDM was computed using the activity pattern of all voxels inside a given ROI. The second-order correlations between the brain-based RDM of this ROI and each model-based RDM were then computed with a partial Pearson correlation method. Although our RDMs for facing direction and egocentric and allocentric goal directions were only slightly correlated (all rs = -.11), partial correlations were used to regress out confounds. For example, when an effect was significant when computing the correlation with egocentric goal direction, the allocentric goal direction matrix was regressed out. Conversely, the egocentric goal direction matrix was regressed out when the correlations with facing directions and allocentric goal directions were computed. In the reference window, screen-facing direction and map orientation were also regressed out from the correlation with the RDMs for facing directions. The resulting correlations were then tested to be greater than zero with a one-tailed one-sample t-test. To control for potential confounding effects of the RT, we used the duration modulation method by convolving each trial with a boxcar equal to the length of the trial’s RT for each participant (Grinband et al., 2008). Moreover, partial correlation using the RT RDM was implemented as a further control in some analyses.
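A partial correlation between RDMs can be implemented by residualizing the vectorized upper triangles of the two RDMs against the confound RDMs and correlating the residuals. The sketch below does this under our naming assumptions; it is equivalent to a standard partial Pearson correlation.

```python
import numpy as np
from scipy.stats import ttest_1samp

def upper_tri(rdm):
    """Vectorize the upper triangle of an RDM (diagonal excluded)."""
    return rdm[np.triu_indices_from(rdm, k=1)]

def residualize(vec, confounds):
    """Remove the part of vec explained by the confound vectors (plus intercept)."""
    X = np.column_stack([np.ones(len(vec))] + confounds)
    beta, *_ = np.linalg.lstsq(X, vec, rcond=None)
    return vec - X @ beta

def partial_rsa(brain, model, confound_rdms):
    """Partial Pearson correlation between a brain RDM and a model RDM,
    controlling for the confound RDMs (e.g., the competing directional model)."""
    conf = [upper_tri(c) for c in confound_rdms]
    v_brain = residualize(upper_tri(brain), conf)
    v_model = residualize(upper_tri(model), conf)
    return np.corrcoef(v_brain, v_model)[0, 1]

# Group level: one correlation per participant, tested against zero (one-tailed), e.g.:
# rs = [partial_rsa(b, ego_model_rdm, [allo_model_rdm]) for b in subject_brain_rdms]
# t, p = ttest_1samp(rs, 0.0, alternative="greater")
```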

To investigate whether individual RSA results were modulated by the participants’ propensity to use an egocentric or allocentric perspective, we computed the correlation between the individual second-order correlations for a given ROI and the individual SDSR scores. The resulting correlations were then tested to be greater than zero with a one-tailed one-sample t-test.

2.4.5.3 Searchlight-based RSA

In the searchlight RSA, a brain-based RDM was calculated for each voxel using the pattern instantiated in the neighborhood of the voxel of interest within a 6 mm sphere. After calculating the brain-based RDM, we computed the second-order correlations with each RDM model using a partial Pearson correlation method. Similar to the ROI-based RSA, the egocentric RDM was regressed out for second-order correlations with the allocentric RDMs, and vice versa. These second-order correlations were Fisher z-transformed for use in the second-level analysis.
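Reusing brain_rdm and partial_rsa from the sketches above, a bare-bones searchlight could look like the following (our variable names; no edge handling beyond clipping to the volume, and no masking of out-of-brain voxels within each sphere).

```python
import numpy as np

def sphere_offsets(radius_mm=6.0, voxel_mm=2.0):
    """Integer voxel offsets falling within a sphere of the given radius."""
    r = int(np.ceil(radius_mm / voxel_mm))
    grid = np.mgrid[-r : r + 1, -r : r + 1, -r : r + 1].reshape(3, -1).T
    return grid[np.linalg.norm(grid * voxel_mm, axis=1) <= radius_mm]

def searchlight(volumes, mask, model, confounds):
    """volumes: (n_conditions, x, y, z) array of condition T maps; mask: boolean
    brain mask. Returns a map of Fisher z-transformed partial correlations
    between the local brain RDM and the model RDM."""
    offsets = sphere_offsets()
    zmap = np.full(mask.shape, np.nan)
    for center in np.argwhere(mask):
        coords = center + offsets
        inside = np.all((coords >= 0) & (coords < np.array(mask.shape)), axis=1)
        xs, ys, zs = coords[inside].T
        patterns = volumes[:, xs, ys, zs]          # (n_conditions, n_sphere_voxels)
        r = partial_rsa(brain_rdm(patterns), model, confounds)
        zmap[tuple(center)] = np.arctanh(r)        # Fisher z transform
    return zmap
```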

After computing the Searchlight images for each participant, they were normalized using the unified segmentation method and then smoothed with a Gaussian kernel (FWHM of 6 mm) using SPM12. These normalized images were the input of the second-level analysis, which was performed with SnPM 13 (http://warwick.ac.uk/snpm) using the permutation-based nonparametric method (Nichols & Holmes, 2003). No variance smoothing was applied, and 10,000 permutations were performed. A conventional cluster-extent-based inference threshold was used (voxel-level at p < .001; cluster-extent FWE p < .05).

To investigate whether individual differences in allocentric strategy modulated the whole-brain activity, in the second-level general linear models, we used the survey score to predict the correlation between the brain-based RDM and each model-based RDM. The resulting T-score volume for each model-based RDM allowed us to assess where the correlation with the model-based RDM was modulated by an individual’s propensity to use an allocentric perspective.

3.1 Facing direction coding in the reference window is present in SPL and RSC

In the reference window, the facing direction model assumed that trials in which participants had to face the same wall were similar, regardless of the room’s orientation relative to the screen (see Fig. 3A-B). ROI analyses first revealed significant facing direction coding during the reference window in the bilateral SPL and RSC (see Fig. 3C-F; lSPL: t(33) = 3.90, p < .001; rSPL: t(33) = 3.34, p = .001; lRSC: t(33) = 1.80, p < .05; rRSC: t(33) = 2.26, p < .05), but not in the EC (all ps > .05). Whole-brain analysis (whole-brain inferential statistics were computed with a primary voxel-level threshold of p < .001 and a cluster-level FWE-corrected threshold of p < .05) revealed an additional bilateral activation in the occipital place area (OPA; MNI coordinates of the left peak: [-38, -80, 28], t(33) = 4.46, pFWE < .05; MNI coordinates of the right peak: [38, -76, 28], t(33) = 5.82, pFWE = .004; see Fig. S3). This result is consistent with the hypothesis that the OPA plays an important function in spatial reorientation by extracting the structure of an environment or a scene (Julian et al., 2018). However, the exact function of the OPA in navigation (perception vs memory), as well as the frame of reference in which it operates, remains unclear (see the Discussion section for a detailed discussion of the potential role of the OPA in this study and beyond). Importantly, all these results confirm that participants had a representation of the geometry and features of the room that was independent of the arrays of objects in the room. No reliable correlations were found with the SDSR scores in these analyses.

Fig. 3.

RSA for facing direction in the reference window. (A) The 16 × 16 RDM for facing direction, where two trials were considered similar when they shared the same facing direction relative to the map, regardless of the orientation of the North on the screen (U = Up, R = Right, D = Down, L = Left). (B) For instance, in these examples, all these reference maps were cuing the same facing direction (North). (C) Both SPL showed reliable environmental facing direction coding in the reference window. (D) Both RSC showed reliable environmental facing direction coding in the reference window. The error bars represent the standard error of the mean (*p < .05, ***p < .001).


3.2 Facing direction coding is present in the target window in the left EC

We then investigated the encoding of facing direction in the target window. To solve the task, participants had to keep in memory the current facing direction cued in the reference window until the target object appeared (i.e., the target window). Because the target object was presented at this moment, only then could participants encode facing direction in a room- or map-specific way. Thus, for this analysis, we created two model RDMs. In the facing direction model, only conditions with the same facing direction and the same context room were considered similar. In the second RDM, the facing-generalized direction model, facing directions were considered similar regardless of the context room (see Fig. 4A). A significant correlation with the room-specific environmental facing direction was observed in the left EC (t(33) = 2.12, p < .05; throughout the paper, p-values are corrected for multiple comparisons across the two hemispheres; Fig. 4B-C). No other ROI demonstrated room-specific or generalized facing direction coding during the target window (all ps > .05). Whole-brain analysis did not yield any significant clusters in this case. No correlations were found with the SDSR scores in these analyses.

Fig. 4.

RSA for facing direction in the target window. (A) The upper panels present the whole 32 × 32 RDMs, including all Context Rooms × Allocentric goal directions × Egocentric goal directions conditions. These matrices were used on the data of the target window to detect areas still coding for the facing direction during this window. For instance, a trial where the target was North and Front was considered similar to another trial where the target was East and Right, since both imply that the facing direction was North. The black square indicates which quadrant of the matrix is represented in the lower panel. The key difference between the facing and the facing-generalized model is that in the facing-generalized model, two facing directions were considered similar even if they were in different context rooms (lower-right panel), while in the facing model, they were considered similar only if they were in the same context room (lower-left panel) (F = Front, R = Right, B = Back, L = Left). (B) EC ROIs. (C) The left EC showed facing direction coding in the target window. For each RDM, a diamond represents the mean correlation; a box and whisker plot represents the median and interquartile range. (*p < .05).


3.3 The parietal and retrosplenial cortex code for egocentric but not allocentric goal direction

We created three RDMs to disentangle egocentric and allocentric goal direction in the target window (see Fig. 5). In the egocentric model, conditions in which the target object was in the same egocentric position (e.g., to the left) were considered similar. In the allocentric model, only conditions in which the target object was placed in the same allocentric goal direction and in the same context room were considered similar. Lastly, in the allocentric-generalized model, conditions in which the target object was placed in the same allocentric goal position independently of the context room were considered similar. This last RDM was designed to test whether allocentric goal direction coding generalized across rooms with identical geometrical layouts.
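These three model RDMs can be generated mechanically from the condition labels. The sketch below shows one way (our names), with 0 marking pairs treated as similar and 1 elsewhere, following the binary-RDM logic described above.

```python
import itertools
import numpy as np

ROOMS = [1, 2]
DIRS = ["North", "East", "South", "West"]   # allocentric goal directions
EGO = ["Front", "Right", "Back", "Left"]    # egocentric goal directions
conditions = list(itertools.product(ROOMS, DIRS, EGO))  # the 32 target conditions

def model_rdm(similar):
    """Binary model RDM: 0 where `similar` holds for a pair of conditions, 1 elsewhere."""
    n = len(conditions)
    rdm = np.ones((n, n))
    for i, j in itertools.product(range(n), repeat=2):
        if similar(conditions[i], conditions[j]):
            rdm[i, j] = 0.0
    return rdm

# room = c[0], allocentric goal = c[1], egocentric goal = c[2]
ego_rdm = model_rdm(lambda a, b: a[2] == b[2])                    # same egocentric goal
allo_rdm = model_rdm(lambda a, b: a[0] == b[0] and a[1] == b[1])  # same allo goal + room
allo_gen_rdm = model_rdm(lambda a, b: a[1] == b[1])               # same allo goal, any room
```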

Fig. 5.

Model RDMs for target direction. The upper panels present the whole 32 × 32 RDMs, including all Context Rooms × Allocentric goal directions × Egocentric goal directions conditions. The black square indicates which quadrant of the matrix is represented in the lower panel. The key difference between the allocentric and the allocentric-generalized model is that in the allocentric-generalized model, two objects sharing the same allocentric goal direction are considered similar even if they are in different context rooms (right panel), while in the allocentric model, they are considered similar only if they are in the same context room (middle panel). (F = Front, R = Right, B = Back, L = Left).


ROI analyses revealed strong bilateral egocentric coding in both the SPL and RSC (see Fig. 6A-B; lSPL: t(33) = 7.52, p < .001; rSPL: t(33) = 6.27, p < .001; lRSC: t(33) = 5.25, p < .001; rRSC: t(33) = 3.23, p = .001), but no allocentric coding (all ps > .05). No correlations were found with the SDSR scores in these ROIs, suggesting that spatial coding in the parietal cortex did not change as a function of the propensity for a particular reference frame. Notably, this effect remained significant when we excluded the “front” condition (which, contrary to the other conditions, did not require reorientation) and controlled for RTs (see Fig. S4).

Fig. 6.

RSA for egocentric goal direction. (A) Both SPL showed reliable egocentric goal direction coding in the target window. (B) Both RSC showed reliable egocentric goal direction coding in the target window. (C) Whole-brain searchlight RSA confirmed the large parietal involvement in representing egocentric goal direction, showing further activations in the dorsal premotor area, the left posterior middle frontal gyrus, the left posterior cingulate cortex, and the left pars triangularis. (***p < .001; voxel level at p < .001; cluster-extent FWE p < .05).


Whole-brain searchlight RSA confirmed that the parietal cortex overall coded for egocentric goal direction (Fig. 6C), showing a very large bilateral cluster with a peak in the left AG (peak voxel MNI coordinates: [-48, -62, 44], t(33) = 8.17, pFWE < .001) extending in the left hemisphere to the superior parietal lobule and the precuneus, and ventrally into the inferior part of the occipitotemporal cortex (BA 37). It also spread in the right hemisphere to the AG, the superior parietal lobule, and the precuneus. Further, two clusters were found bilaterally in the dorsal premotor area (BA 6; left peak: t(33) = 7.56, pFWE = .002; right peak: t(33) = 8.98, pFWE = .004). Other clusters included the right and left posterior middle frontal gyrus (left: t(33) = 4.96, pFWE < .01; right: t(33) = 5.54, pFWE < .01), the left posterior cingulate cortex (t(33) = 6.80, pFWE < .01), and the left pars triangularis (t(33) = 4.51, pFWE < .01) (see Table S1 for details).

Next, we wanted to check whether the same voxels coding for facing direction in the reference window also coded for egocentric goal direction in the target window. For that purpose, we used the whole-brain activation maps at a lower threshold to extract four masks corresponding to the bilateral SPL and RSC clusters sensitive to facing direction during the reference window (see Fig. 7A-B). We then used these masks to conduct ROI analyses of the egocentric and allocentric goal direction. We found that the voxels coding for facing direction during the reference window in the SPL and the RSC also coded for egocentric goal direction in the target window (lSPL: t(33) = 3.46, p < .001; rSPL: t(33) = 4.87, p < .001; lRSC: t(33) = 3.08, p = .002; rRSC: t(33) = 2.88, p = .004).

Fig. 7.

RSA Results for egocentric goal direction using functionally defined masks where a voxel was included if it was encoding facing direction in the reference window. (A) Voxels showing the coding of facing direction in the reference window in both SPL (extracted at p < .001) showed reliable egocentric goal direction coding in the target window. (B) Voxels showing the coding of facing direction in the reference window in both RSC (extracted at p < .05 for left RSC and p < .005 for right RSC) showed reliable egocentric goal direction coding in the target window. (C) Comparisons of the location of the RSC and SPL clusters extracted from the reference window with the peak activation coordinates in Baumann and Mattingley (2010) and Marchette et al. (2014). (D) BA29/30 Masks used in the complementary analyses. (**p < .01, ***p < .001).


We compared the exact coordinates of our brain activations with those reported in other studies observing putatively allocentric facing direction in the superior parietal lobule (Marchette et al., 2014) and the retrosplenial complex (Baumann & Mattingley, 2010; Marchette et al., 2014). Our activation in the SPL overlaps with the one previously reported by Marchette et al. (2014), and one of the masks in the retrosplenial complex overlaps with the peak of activity reported by Baumann and Mattingley (2010) (Fig. 7C). These results substantiate the comparability of our results with previous studies reporting putatively allocentric heading direction signals. However, our RSC masks were more lateral than the RSC activity reported by Marchette and colleagues. Indeed, the functionally defined RSC used here as an ROI mask (see Julian et al., 2012 and the Methods section) comprises a large portion of the medial parietal lobe, and different studies have reported different functional localizations of the retrosplenial cortex (Baumann & Mattingley, 2010; Marchette et al., 2014; Vass & Epstein, 2017). In some studies, however, heading direction coding has been reported in the anatomically defined RSC (BA 29/30), which is outside the functional RSC mask used in ours and many other studies (Baumann & Mattingley, 2010). In an exploratory analysis, we tested whether our results generalize to this region of interest. We found that facing direction in the reference window was encoded in BA 29/30 (see Fig. 7D for representations of the ROIs) in the left hemisphere (t(33) = 2.25, p < .05; corrected for multiple comparisons across hemispheres). Crucially, the same region also encoded egocentric goal direction in the target window (t(33) = 1.81, p = .04).

In sum, we performed a series of analyses using three different types of ROIs: predefined masks of the RSC and the SPL, functionally defined masks encoding facing direction in the reference window, and anatomical masks of the RSC proper (BA 29/30). In all these cases, regions encoding putatively allocentric facing direction in the reference window also encoded unambiguously egocentric goal direction in the target window.

3.4 SPL and RSC encode both facing and goal directions relative to a principal reference vector

The results presented above suggest that the SPL and the RSC code heading direction in an egocentric fashion. One possibility is that these areas compute both facing and goal direction through the egocentric bearing relative to a principal reference vector. In the case of goal direction, this reference vector would be the current imagined facing direction. Concerning the facing direction (reference window), it has been shown that the first experienced vantage point in a new environment tends to be used as a reference vector from which bearings are computed (Shelton & McNamara, 2001). In the present experiment, this vantage point pointed toward the short blue wall, which we call North in this article (the wall was never referred to as “North” to the participants). It is then possible that, in the SPL and RSC, both facing direction (reference window) and egocentric goal direction (target window) are computed egocentrically from a given reference vector. If that is the case, the representation of the facing direction “North” should be similar to that of the egocentric goal direction “Front.” Consequently, we should expect the following similarity pattern between the reference and the target window: North = Front, South = Back, East = Right, and West = Left.

To explore this idea, we ran a new ROI-based RSA in which we computed, for each participant and each ROI, the pattern similarity between the activity observed for facing directions in the reference window and the activity observed for egocentric goal directions in the target window. This resulted in a 4 × 4 matrix (see Fig. 8A) where the North, East, South, and West facing directions on one side matched the Front, Right, Back, and Left egocentric goal directions on the other side. Following the hypothesis of the principal reference vector, we expected higher average pattern similarity between matching directions (on the diagonal: North-Front, East-Right, South-Back, and West-Left) than between non-matching directions (off-diagonal). Because we wanted to see whether voxels coding for facing direction coded egocentric goal direction in a similar way, we used the brain masks extracted in the previous analyses of the reference window (Fig. 7A-B). It is important to note that the results are very similar when the a priori anatomical/functional ROIs are used instead. Consistent with our hypothesis, average pattern similarity was higher when directions matched than when they did not, in all parietal areas (see Fig. 8B; lSPL: t(33) = 1.86, p = .03; rSPL: t(33) = 4.69, p < .001; lRSC: t(33) = 2.43, p = .01; rRSC: t(33) = 2.13, p = .02). Importantly, we did not observe this effect in the left EC (t(33) = 0.32, p = .38). These results suggest that the same egocentric representation, anchored to a specific vantage point (North in the reference window and Front in the target window), underlies facing direction and egocentric goal direction encoding in the SPL and RSC. No correlations were detected with the SDSR scores in these analyses.
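The diagonal-versus-off-diagonal contrast can be sketched as follows (illustrative only; the names are ours, and the averaging over the remaining factors is assumed to have been done upstream when extracting the per-direction patterns).

```python
import numpy as np

DIRS = ["North", "East", "South", "West"]  # reference-window facing directions
EGO = ["Front", "Right", "Back", "Left"]   # matching target-window goal directions

def cross_window_similarity(ref_patterns, tgt_patterns):
    """4 x 4 Pearson similarity between facing-direction patterns (rows, ordered
    as DIRS) and egocentric goal-direction patterns (columns, ordered as EGO)."""
    sim = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            sim[i, j] = np.corrcoef(ref_patterns[i], tgt_patterns[j])[0, 1]
    return sim

def matching_effect(sim):
    """Mean similarity for matching pairs (North-Front, East-Right, South-Back,
    West-Left; the diagonal) minus the mean for non-matching (off-diagonal) pairs.
    A positive value across participants supports the reference-vector hypothesis."""
    return np.diag(sim).mean() - sim[~np.eye(4, dtype=bool)].mean()
```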

Fig. 8.

SPL and RSC both encode facing and goal directions relative to a principal reference vector. (A) In this analysis, the facing direction North and the egocentric goal direction Front are both hypothesized to act as reference vectors. This means that, across reference and target windows, Front is considered as matching North, Right as matching East, Back as matching South, and Left as matching West. (B) Brain RDMs used to test the hypothesis that North in the reference window and Front in the target window are both used as principal reference vectors. In this analysis, we averaged the correlations of the matching directions (in purple) and the non-matching conditions (in grey) for each participant to compare these average correlations across participants. (C) RSA results for the comparison between on-diagonal and off-diagonal facing and egocentric target directions in the SPL and RSC. Results all showed a more positive average correlation for matching conditions (*p < .05, ***p < .001).


3.5 Participants’ propensity for allocentric perspective modulates goal-direction coding in the EC

The ROI analysis did not yield any reliable group-level allocentric or allocentric-generalized goal direction coding in either the EC or the parietal ROIs (see Fig. S5B). On the other hand, we observed a significant modulation of the allocentric and allocentric-generalized coding in the left EC by the allocentric (survey) score measured with the SDSR questionnaire (see Fig. 9B-D; allocentric: r = .33, t(32) = 1.98, p < .05; allocentric-generalized: r = .38, t(32) = 2.32, p = .01). This suggests that, in our experiment, the allocentric coding in the left EC depended on participants’ propensity to use an allocentric perspective during everyday navigation.

Fig. 9.

Allocentric coding in the left EC is modulated by the participant’s propensity for the allocentric perspective. (A) Left EC ROI. (B) Correlations between the ROI results in the left EC and the survey score (*p < .05). (C) Scatterplot of the correlation between the allocentric coding in the left EC and the individual score at the survey scale of the SDSR. (D) Scatterplot of the correlation between the allocentric-generalized coding in the left EC and the individual scores at the survey scale of the SDSR.


The whole-brain analysis yielded exclusively bilateral occipital V1 activations (Fig. S5C), both with the allocentric (left: t(33) = 8.36, pFWE < .001; right: t(33) = 5.54, pFWE < .001) and the allocentric-generalized model (left: [-20, -98, 12], t(33) = 5.23, pFWE = .003; right: t(33) = 5.27, pFWE = .002) (see Table S2 for details). This was likely due either to the reactivation of the visual information related to the wall or to the fact that, in each allocentric direction, the same objects were presented several times throughout the trials (although different objects appeared in the same allocentric direction). Moreover, contrary to the left EC, activity in V1 was not correlated with the propensity to use an allocentric reference frame in everyday life (all ps > .10).

The reference frame underlying the representation of heading direction in different regions of the brain remains largely ambiguous. Although previous studies found that the entorhinal cortex (EC), the retrosplenial cortex (RSC), and the superior parietal lobule (SPL) coded for facing and/or goal direction, they generally could not disentangle egocentric and allocentric reference frames. The present study used a reorientation task, which allowed us to address this question by testing (i) whether the same regions that encoded (putatively) allocentric facing direction also encoded (unambiguously) egocentric goal direction, and (ii) whether the activity in these regions was modulated by the subject’s propensity to use allocentric strategies in daily life. Our results confirmed first that the EC, the RSC, and the SPL all represent environmental facing direction. On its own, this effect could result from either allocentric or egocentric processing. However, we found that the RSC and the SPL also encoded egocentric goal direction (whether an object is on the left/right/front/back independently of its position on the map), a result that could not emerge from an allocentric coding. Crucially, the RSC and SPL did not encode the allocentric position of the target (allocentric goal direction). This result raises the possibility that these regions represent both facing and goal direction according to an egocentric reference frame, not an allocentric one. On the other hand, the EC did not demonstrate any egocentric coding, whereas allocentric goal direction coding in this region was uniquely modulated by participants’ propensity for an allocentric perspective. Thus, in agreement with previous findings (Chadwick et al., 2015; Shine et al., 2019), the entorhinal cortex seems to encode heading direction in an allocentric reference frame. Overall, these results suggest that the neural compass operates in different brain regions, but using different reference frames.

The present study replicated the results of previous studies finding the involvement of the EC, RSC, and SPL in facing direction coding (Baumann & Mattingley, 2010; Chadwick et al., 2015; Marchette et al., 2014; Vass & Epstein, 2013, 2017). Our finding that the EC heading direction system seems to operate within an allocentric reference frame is in keeping with the hypothesis that the fMRI signal is driven, at least in part, by the activity of head-direction cells (Taube et al., 1990). However, the fact that an egocentric reference frame provides a better account for the heading-related activity in medial and superior parietal cortices suggests that the neural compass in these regions arises from a different neural mechanism than the allocentric direction coded by HD cells.

One possibility is that the neural activity observed in the SPL and RSC comes from hypothetical reference vector cells, which would code for the egocentric bearing relative to a principal reference vector (Marchette et al., 2014). For instance, during the reference window, participants may take one of the walls as the principal reference vector (Shelton & McNamara, 2001) and compute the facing direction egocentrically in reference to that wall. In the target window, the current facing direction (the Front direction) could be defined as the new principal vector, and all directions would then be coded as an egocentric bearing from this principal reference vector. The analysis of the similarity between the brain activity across the reference and the target window supported this idea. Indeed, in both the SPL and RSC, we observed higher average correlations between directions that matched according to the reference-vector model (North = Front, East = Right, South = Back, and West = Left) than between non-matching directions. These findings suggest that, in the SPL and RSC, an egocentric representation anchored to a specific direction (e.g., North or Front) is used to guide reorientation for both facing direction in the reference window and egocentric goal direction in the target window. In line with these results, a previous study that used a reorientation task in a larger natural environment (a university campus) showed that putatively allocentric heading directions (North, South, East, West) were encoded in the RSC both when the starting point and the target buildings were indicated with realistic pictures and when they were conveyed verbally. However, when the similarity between brain activity in the RSC was compared across the two tasks (visual and verbal), only the North heading direction showed a similar pattern across conditions (Vass & Epstein, 2017). Vass and colleagues hypothesized that the RSC preference for representing north-facing headings arose because the RSC represents environments according to a particular reference direction (McNamara et al., 2003; Mou et al., 2004; Waller & Hodgson, 2006). Although such a direction was suggested to be computed allocentrically, Vass and colleagues could not establish which frame of reference was actually utilized. Here, we observed that the reference vector is updated depending on the imagined position of the body. Thus, this study not only supports the idea that, in the RSC and SPL, heading is derived relative to a reference vector, but also indicates that this computation is done within an egocentric frame of reference.

In this paper, we also showed that the representation of allocentric goal direction in the left EC was modulated by participants’ propensity for the allocentric perspective in everyday life. This, together with the presence of facing direction coding and the absence of egocentric coding, suggests that the EC coded for heading direction in an allocentric frame of reference (see also Chadwick et al., 2015). Consistently, the entorhinal cortex has been strongly associated with allocentric representation in the literature, particularly through the presence of grid cells (Hafting et al., 2005), which are thought to provide the scaffolding of allocentric representations (Buzsáki & Moser, 2013). Contrary to previous results (Chadwick et al., 2015; Shine et al., 2019), we did not find a consistent representation of allocentric goal direction in the entorhinal cortex across subjects (i.e., independently of their everyday navigation style). One possible reason for this discrepancy is that we did not explicitly ask subjects to provide the allocentric location of the target object (North, South, East, West) during the task, but only the egocentric one (Front, Back, Right, Left). Thus, participants could solve the task relying solely on egocentric information. Our result suggests that the activation of an allocentric map to retrieve the position of objects is not automatic. This interpretation is in line with previous studies showing that different cognitive styles in spatial strategies lead to the activation of partially different neural networks during the same spatial task (Iaria et al., 2003; Jordan et al., 2004). We might have failed to observe allocentric goal direction coding in the RSC for similar reasons. Indeed, according to a prominent spatial memory model (Bicanski & Burgess, 2018; Byrne et al., 2007), the RSC should serve as a hub where spatial information is transformed across reference frames. If that is the case, one should expect to find both allocentric and egocentric goal direction coding in this region. Nevertheless, if the activation of an allocentric map is indeed not necessary for the task, reference frame transformation might not have been necessary either.

The absence of locomotion might have impaired our capacity to detect genuine HD cell signals; in the paradigm used by Chadwick et al. (2015), for instance, participants were required to face various allocentric directions from different viewpoints. Additionally, the constraints on head movement within the MRI setting might have hindered our ability to detect HD cell activity, given that vestibular input was not informative during the task. However, the HD signal has been shown to update based on visual landmarks even when head movement is restricted (Jeffery et al., 2016; Yoder et al., 2011), suggesting that the present experimental design should, in theory, have enabled us to detect typical HD signals.

The environments used in this experiment did not allow us to disentangle the coding of allocentric direction from the coding of environmental boundaries, because each allocentric direction was associated with a specific wall (a confound that is not uncommon in fMRI studies on heading direction; e.g., Chadwick et al., 2015; Shine et al., 2019). This could explain why facing direction in the reference window was also encoded in the Occipital Place Area (OPA), which is involved in representing environmental boundaries during visually guided navigation (Julian et al., 2016). However, additional unplanned analyses showed that the OPA also encoded egocentric goal direction during the target window (Fig. S6), but not allocentric goal direction (which would be akin to encoding the position of an object tethered to a specific boundary). Moreover, the RSA across the two temporal windows showed the same mapping of facing direction onto egocentric goal direction as found for the RSC and SPL (Fig. S6). The representation of egocentric environmental structure in this region is consistent with previous fMRI studies in which the OPA showed different levels of activation for a picture of a scene and its mirror image (left vs. right sensitivity; Dilks et al., 2011) and encoded the egocentric distance (Persichetti & Dilks, 2016) and egocentric perspective motion (Kamps et al., 2016) implied by visual scenes. The role of the OPA beyond visually guided navigation, as well as the reference frame in which it operates, nevertheless remains unclear. Although our study was not designed to address this question, the fact that, in our paradigm, the OPA encodes memory-retrieved egocentric goal direction (but not boundary-tethered allocentric goal direction), and encodes both goal and facing direction relative to an egocentric reference vector, suggests that the OPA is involved in spatial reorientation from memory and operates in an egocentric reference frame.

Finally, it is important to note that, in the present paradigm, participants studied the environment from a single vantage point, without any locomotion. Such spaces are often referred to as vista spaces, as opposed to environmental spaces, in which locomotion is necessary to explore the environment (Montello, 1993). Previous research on the neural compass has made use of both vista spaces (Chadwick et al., 2015; Shine et al., 2016) and environmental spaces (Baumann & Mattingley, 2010; Kim & Maguire, 2019; Marchette et al., 2014; Shine et al., 2019; Vass & Epstein, 2013, 2017). However, there are both qualitative and quantitative differences between vista and environmental spaces. For instance, spatial memory in vista space is more sensitive to the intrinsic properties of the layout (Meilinger et al., 2016) and leads to better pointing performance (He et al., 2019). Further, while allocentric-related activity in the EC has been observed in both environmental (Shine et al., 2019) and vista spaces (Chadwick et al., 2015), the presence of visual barriers in a room has been shown to modulate the grid-like signal in the EC (from a 6-fold to a 4-fold symmetry; He & Brown, 2019). A similar argument can be made for large versus small environments, which can also induce different navigational strategies (Burgess, 2006; Hegarty et al., 2006; but see Lawton, 1996). Altogether, these studies suggest that our conclusions, drawn from a small vista space, should be extrapolated to other types of environments with caution, although there is evidence that principal reference vectors are used in both vista and environmental spaces (Meilinger et al., 2014).

Overall, the present work allowed us to disentangle the different reference frames supporting the representation of heading direction across brain regions. We showed that superior and medial parietal regions encode not only facing direction, as suggested in previous studies (Baumann & Mattingley, 2010; Marchette et al., 2014; Vass & Epstein, 2017), but also egocentric goal direction, which points to a common egocentric reference frame for representing heading direction in these regions. In contrast, no egocentric coding emerged in the entorhinal cortex, which, beyond representing facing direction, also represented allocentric goal direction as a function of the individual propensity to use allocentric navigational strategies in everyday life. Although limited to a particular spatial setting (small environments without translation or actual head rotation of the observer; Shine et al., 2016), our study highlights the need to investigate how different brain regions may encode similar spatial features by means of different computations across reference frames. Beyond space, one might ask whether the same sort of mechanism applies in non-spatial domains. Indeed, recent work has suggested that the EC and the parietal cortex can be used to "navigate" non-spatial, particularly conceptual, domains (Bellmund et al., 2018) across complementary reference frames (Bottini & Doeller, 2020; Viganò et al., 2023).

Our code is publicly available at https://github.com/BottiniLab/allo-ego, and data are available from the corresponding author upon request, without restriction.

Léo Dutriaux: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing—Original Draft, and Visualization. Yangwen Xu: Conceptualization, Methodology, Formal analysis, and Writing—Original Draft. Nicola Sartorato: Formal analysis, Investigation, Writing—Original Draft, and Visualization. Simon Lhuillier: Methodology, Software, Writing—Review & Editing, and Visualization. Roberto Bottini: Conceptualization, Methodology, Resources, Writing—Original Draft, Visualization, Supervision, and Funding acquisition.

This research was supported by the European Research Council (ERC-StG NOAM 804422) and the Italian Ministry of Education, University and Research (MIUR-FARE Ricerca, Modget 40103642).

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

We thank Simone Viganò for suggestions on the early draft and Alexander Eperon for proofreading the manuscript.

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00149.

Baumann, O., & Mattingley, J. B. (2010). Medial parietal cortex encodes perceived heading direction in humans. Journal of Neuroscience, 30(39), 12897–12901. https://doi.org/10.1523/JNEUROSCI.3077-10.2010
Bellmund, J. L. S., Deuker, L., & Doeller, C. F. (2018). Mapping sequence structure in the human lateral entorhinal cortex. bioRxiv, 1–20. https://doi.org/10.1101/458133
Bicanski, A., & Burgess, N. (2018). A neural-level model of spatial memory and imagery. eLife, 7, e33752. https://doi.org/10.7554/eLife.33752
Bottini, R., & Doeller, C. F. (2020). Knowledge across reference frames: Cognitive maps and image spaces. Trends in Cognitive Sciences, 24(8), 606–619. https://doi.org/10.1016/j.tics.2020.05.008
Burgess, N. (2006). Spatial memory: How egocentric and allocentric combine. Trends in Cognitive Sciences, 10(12), 551–557. https://doi.org/10.1016/j.tics.2006.10.005
Buzsáki, G., & Moser, E. I. (2013). Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature Neuroscience, 16(2), 130–138. https://doi.org/10.1038/nn.3304
Byrne, P., Becker, S., & Burgess, N. (2007). Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychological Review, 114(2), 340–375. https://doi.org/10.1037/0033-295X.114.2.340
Chadwick, M. J., Jolly, A. E. J., Amos, D. P., Hassabis, D., & Spiers, H. J. (2015). A goal direction signal in the human entorhinal/subicular region. Current Biology, 25(1), 87–92. https://doi.org/10.1016/j.cub.2014.11.001
Chrastil, E. R., Sherrill, K. R., Hasselmo, M. E., & Stern, C. E. (2016). Which way and how far? Tracking of translation and rotation information for human path integration. Human Brain Mapping, 37(10), 3636–3655. https://doi.org/10.1002/hbm.23265
Destrieux, C., Fischl, B., Dale, A., & Halgren, E. (2010). Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. Neuroimage, 53(1), 1–15. https://doi.org/10.1016/j.neuroimage.2010.06.010
Dilks, D. D., Julian, J. B., Kubilius, J., Spelke, E. S., & Kanwisher, N. (2011). Mirror-image sensitivity and invariance in object and scene processing pathways. Journal of Neuroscience, 31(31), 11305–11312. https://doi.org/10.1523/JNEUROSCI.1935-11.2011
Erdem, U. M., & Hasselmo, M. (2012). A goal-directed spatial navigation model using forward trajectory planning based on grid cells. European Journal of Neuroscience, 35(6), 916–931. https://doi.org/10.1111/j.1460-9568.2012.08015.x
Fischl, B., Stevens, A. A., Rajendran, N., Yeo, B. T. T., Greve, D. N., Van Leemput, K., Polimeni, J. R., Kakunoori, S., Buckner, R. L., Pacheco, J., Salat, D. H., Melcher, J., Frosch, M. P., Hyman, B. T., Grant, P. E., Rosen, B. R., van der Kouwe, A. J. W., Wiggins, G. C., Wald, L. L., & Augustinack, J. C. (2009). Predicting the location of entorhinal cortex from MRI. Neuroimage, 47(1), 8–17. https://doi.org/10.1016/j.neuroimage.2009.04.033
Grinband, J., Wager, T. D., Lindquist, M., Ferrera, V. P., & Hirsch, J. (2008). Detection of time-varying signals in event-related fMRI designs. Neuroimage, 43(3), 509–520. https://doi.org/10.1016/j.neuroimage.2008.07.065
Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806. https://doi.org/10.1038/nature03721
He, Q., & Brown, T. I. (2019). Environmental barriers disrupt grid-like representations in humans during navigation. Current Biology, 29(16), 2718–2722.e3. https://doi.org/10.1016/j.cub.2019.06.072
He, Q., McNamara, T. P., & Brown, T. I. (2019). Manipulating the visibility of barriers to improve spatial navigation efficiency and cognitive mapping. Scientific Reports, 9(1), 1–12. https://doi.org/10.1038/s41598-019-48098-0
Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34(2), 151–176. https://doi.org/10.1016/j.intell.2005.09.005
Iaria, G., Petrides, M., Dagher, A., Pike, B., & Bohbot, V. D. (2003). Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: Variability and change with practice. Journal of Neuroscience, 23(13), 5945–5952. https://doi.org/10.1523/jneurosci.23-13-05945.2003
Iglesias, J. E., Augustinack, J. C., Nguyen, K., Player, C. M., Player, A., Wright, M., Roy, N., Frosch, M. P., McKee, A. C., Wald, L. L., Fischl, B., & Van Leemput, K. (2015). A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI. Neuroimage, 115, 117–137. https://doi.org/10.1016/j.neuroimage.2015.04.042
Jeffery, K. J., Page, H. J. I., & Stringer, S. M. (2016). Optimal cue combination and landmark-stability learning in the head direction system. Journal of Physiology, 594(22), 6527–6534. https://doi.org/10.1113/JP272945
Jenkinson, M. (2003). Fast, automated, N-dimensional phase-unwrapping algorithm. Magnetic Resonance in Medicine, 49(1), 193–197. https://doi.org/10.1002/MRM.10354
Jezzard, P., & Balaban, R. S. (1995). Correction for geometric distortion in echo planar images from B0 field variations. Magnetic Resonance in Medicine, 34(1), 65–73. https://doi.org/10.1002/MRM.1910340111
Jordan, K., Schadow, J., Wuestenberg, T., Heinze, H. J., & Jäncke, L. (2004). Different cortical activations for subjects using allocentric or egocentric strategies in a virtual navigation task. Neuroreport, 15(1), 135–140. https://doi.org/10.1097/00001756-200401190-00026
Julian, J. B., Fedorenko, E., Webster, J., & Kanwisher, N. (2012). An algorithmic method for functionally defining regions of interest in the ventral visual pathway. Neuroimage, 60(4), 2357–2364. https://doi.org/10.1016/j.neuroimage.2012.02.055
Julian, J. B., Keinath, A. T., Marchette, S. A., & Epstein, R. A. (2018). The neurocognitive basis of spatial reorientation. Current Biology, 28(17), R1059–R1073. https://doi.org/10.1016/j.cub.2018.04.057
Julian, J. B., Ryan, J., Hamilton, R. H., & Epstein, R. A. (2016). The occipital place area is causally involved in representing environmental boundaries during navigation. Current Biology, 26(8), 1104–1109. https://doi.org/10.1016/j.cub.2016.02.066
Kamps, F. S., Julian, J. B., Kubilius, J., Kanwisher, N., & Dilks, D. D. (2016). The occipital place area represents the local elements of scenes. Neuroimage, 132, 417–424. https://doi.org/10.1016/j.neuroimage.2016.02.062
Kim, M., & Maguire, E. A. (2019). Encoding of 3D head direction information in the human brain. Hippocampus, 29(7), 619–629. https://doi.org/10.1002/hipo.23060
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis—Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 1–28. https://doi.org/10.3389/neuro.06.004.2008
Kriegeskorte, N., Mur, M., Ruff, D. A., Kiani, R., Bodurka, J., Esteky, H., Tanaka, K., & Bandettini, P. A. (2008). Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6), 1126–1141. https://doi.org/10.1016/j.neuron.2008.10.043
Lawton, C. A. (1996). Strategies for indoor wayfinding: The role of orientation. Journal of Environmental Psychology, 16(2), 137–145. https://doi.org/10.1006/jevp.1996.0011
Lhuillier, S., Gyselinck, V., Dutriaux, L., Grison, E., & Nicolas, S. (2018). “Like a ball and chain”: Altering locomotion effort perception distorts spatial representations. Journal of Environmental Psychology, 60, 63–71. https://doi.org/10.1016/j.jenvp.2018.10.008
Makowski, D., & Dutriaux, L. (2017). Neuropsydia.py: A python module for creating experiments, tasks and questionnaires. Journal of Open Source Software, 2(19), 259. https://doi.org/10.21105/joss.00259
Marchette, S. A., Vass, L. K., Ryan, J., & Epstein, R. A. (2014). Anchoring the neural compass: Coding of local spatial reference frames in human medial parietal lobe. Nature Neuroscience, 17(11), 1598–1606. https://doi.org/10.1038/nn.3834
McNamara, T. P., Rump, B., & Werner, S. (2003). Egocentric and geocentric frames of reference in memory of large-scale space. Psychonomic Bulletin and Review, 10(3), 589–595. https://doi.org/10.3758/BF03196519
Meilinger, T., Riecke, B. E., & Bülthoff, H. H. (2014). Local and global reference frames for environmental spaces. Quarterly Journal of Experimental Psychology, 67(3), 542–569. https://doi.org/10.1080/17470218.2013.821145
Meilinger, T., Strickrodt, M., & Bülthoff, H. H. (2016). Qualitative differences in memory for vista and environmental spaces are caused by opaque borders, not movement or successive presentation. Cognition, 155, 77–95. https://doi.org/10.1016/j.cognition.2016.06.003
Montello, D. (1993). Scale and multiple psychologies of space. In A. U. Frank & I. Campari (Eds.), Spatial information theory: A theoretical basis for GIS. COSIT 1993. Lecture Notes in Computer Science (Vol. 716, pp. 312–321). Springer. https://doi.org/10.1007/3-540-57207-4_21
Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 142–157. https://doi.org/10.1037/0278-7393.30.1.142
Nichols, T., & Holmes, A. (2003). Nonparametric permutation tests for functional neuroimaging. In Human brain function (2nd ed., pp. 887–910). https://doi.org/10.1016/B978-012264841-0/50048-2
Pazzaglia, F., Cornoldi, C., & De Beni, R. (2000). Differenze individuali nella rappresentazione dello spazio: Presentazione di un questionario autovalutativo [Individual differences in spatial representation: A self-rating questionnaire]. Giornale Italiano di Psicologia, 3, 241–264. https://doi.org/10.1421/310
Persichetti, A. S., & Dilks, D. D. (2016). Perceived egocentric distance sensitivity and invariance across scene-selective cortex. Cortex, 77, 155–163. https://doi.org/10.1016/j.cortex.2016.02.006
Schacter, D. L., Addis, D. R., Hassabis, D., Martin, V. C., Spreng, R. N., & Szpunar, K. K. (2012). The future of memory: Remembering, imagining, and the brain. Neuron, 76(4), 677–694. https://doi.org/10.1016/j.neuron.2012.11.001
Shelton, A. L., & McNamara, T. P. (2001). Systems of spatial reference in human memory. Cognitive Psychology, 43(4), 274–310. https://doi.org/10.1006/cogp.2001.0758
Shine, J. P., Valdés-Herrera, J. P., Hegarty, M., & Wolbers, T. (2016). The human retrosplenial cortex and thalamus code head direction in a global reference frame. Journal of Neuroscience, 36(24), 6371–6381. https://doi.org/10.1523/JNEUROSCI.1268-15.2016
Shine, J. P., Valdés-Herrera, J. P., Tempelmann, C., & Wolbers, T. (2019). Evidence for allocentric boundary and goal direction information in the human entorhinal cortex and subiculum. Nature Communications, 10(1), 1–10. https://doi.org/10.1038/s41467-019-11802-9
Taube, J. S., Muller, R. U., & Ranck, J. B. (1990). Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10(2), 420–435. https://doi.org/10.1523/jneurosci.10-02-00420.1990
Vass, L. K., & Epstein, R. A. (2013). Abstract representations of location and facing direction in the human brain. Journal of Neuroscience, 33(14), 6133–6142. https://doi.org/10.1523/JNEUROSCI.3873-12.2013
Vass, L. K., & Epstein, R. A. (2017). Common neural representations for visually guided reorientation and spatial imagery. Cerebral Cortex, 27(2), 1457–1471. https://doi.org/10.1093/cercor/bhv343
Viganò, S., Bayramova, R., Doeller, C. F., & Bottini, R. (2023). Mental search of concepts is supported by egocentric vector representations and restructured grid maps. Nature Communications, 14(1), 8132. https://doi.org/10.1038/s41467-023-43831-w
Waller, D., & Hodgson, E. (2006). Transient and enduring spatial representations under disorientation and self-rotation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 867–882. https://doi.org/10.1037/0278-7393.32.4.867
Weisberg, S. M., Marchette, S. A., & Chatterjee, A. (2018). Behavioral and neural representations of spatial directions across words, schemas, and images. Journal of Neuroscience, 38(21), 4996–5007. https://doi.org/10.1523/JNEUROSCI.3250-17.2018
Yoder, R. M., Clark, B. J., & Taube, J. S. (2011). Origins of landmark encoding in the brain. Trends in Neurosciences, 34(11), 561–571. https://doi.org/10.1016/j.tins.2011.08.004
Zeithamova, D., de Araujo Sanchez, M. A., & Adke, A. (2017). Trial timing and pattern-information analyses of fMRI data. Neuroimage, 153, 221–231. https://doi.org/10.1016/j.neuroimage.2017.04.025
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.