The mechanisms linking perceptual and memory representations in the brain are not yet fully understood. In the early visual cortex, perception and memory are known to share similar neural representations, but how they interact beyond early visual cortex is less clear. Recent work identified that scene-perception and scene-memory areas on the lateral and ventral surfaces of the brain are linked via a shared but opponent visuospatial coding scheme, with spatially specific visual responses in the absence of traditionally defined retinotopic maps. This shared visuospatial coding may provide a framework for perceptual-memory interactions. Here, we test whether the pattern of visuospatial coding within category-selective memory areas of the medial parietal cortex structures responses during memory recall and visual perception. Using functional magnetic resonance imaging, we observe signatures of visuospatial coding in the form of population receptive fields (pRFs) with both positive and negative response profiles within medial parietal cortex. Crucially, the more dissimilar the timeseries of a pair of positive/negative pRFs within a region, the more dissimilar their responses during both memory recall and visual perception. These are tasks that place very different demands on these regions: internally oriented memory recall versus externally oriented visual perception. These data extend recent work to suggest that the interplay between pRFs with opponent visuospatial coding may play a vital role in integrating information across different representational spaces.

Interaction with the complex visual world around us requires the constant integration of perceived visual information with the retrieval of previously encoded information from memory. Understanding how perceptual and memory representations interact is, therefore, an enduring puzzle in both psychology and neuroscience (Dijkstra et al., 2020; Favila et al., 2020; Groen et al., 2021; Steel et al., 2024). Prior computational (Marr, 1970; Norman & O’Reilly, 2003) and neuroimaging work (Klein et al., 2004; Polyn et al., 2005; Slotnick, 2009; Thirion et al., 2006) suggests that perception and memory may share similar neural representations, as the recall of previously encoded stimuli recruits or reactivates retinotopically specific portions of the early visual cortex (i.e., neural reinstatement: Klein et al., 2004; Slotnick, 2009; Thirion et al., 2006). However, how perception and memory representations integrate beyond the early visual cortex is less clear. For example, the visual cortex is known to encode perceptual information within a visuospatial reference frame, according to locations in the visual field relative to one’s gaze position (Groen et al., 2021; Wandell et al., 2007). In contrast, anterior memory-driven regions (e.g., hippocampus) are usually considered to be governed by more abstract coding schemes, such that visuospatial information is abstracted away as information propagates through the cortical hierarchy (Güçlü & van Gerven, 2015; Margulies et al., 2016; Popham et al., 2021). Thus, how perception and memory representations are integrated beyond the early visual cortex remains a matter of current debate.

Work spanning the last decade demonstrates that the visuospatial coding scheme is more extensively represented in the brain than previously thought and may offer a plausible framework for such perceptual-memory integration. For instance, over twenty-five separate visual field maps have been identified throughout the dorsal and ventral visual streams (Amano et al., 2009; Arcaro et al., 2009; Brewer et al., 2005; Larsson & Heeger, 2006; Wandell et al., 2007; Wang et al., 2015), and signatures of visuospatial coding, such as biases for the contralateral visual field and spatially sensitive population receptive fields (pRFs, Dumoulin & Wandell, 2008), have been identified within category-selective regions of the visual cortex once thought to be position invariant (Silson et al., 2015; Silson, Groen, et al., 2016; Silson, Steel, et al., 2016; Silson, Groen, et al., 2021), within the default-mode network (Szinte & Knapen, 2020), and even within the hippocampus (Knapen, 2021; Silson, Zeidman, et al., 2021). Together, this body of work suggests that the brain retains the visuospatial coding scheme throughout the cortical hierarchy and may make use of it to aid different cognitive processes (Groen et al., 2021; Szinte & Knapen, 2020).

Very recent work explored this possibility within regions involved in the perceptual processing and mental recall of scenes, finding shared access to the same visuospatial coding scheme (Steel et al., 2024). Specifically, it was shown that whereas pRFs in scene-perception areas (Occipital Place Area [OPA] and Parahippocampal Place Area [PPA]; Dilks et al., 2013; Epstein & Kanwisher, 1998) had positive response profiles (+ve pRFs: increased response to a stimulus within the receptive field), a significant percentage of pRFs in their paired scene-memory areas (Lateral Place Memory Area [LPMA] and Ventral Place Memory Area [VPMA]; Steel et al., 2021; Steel, Silson, et al., 2024) had negative response profiles (-ve pRFs: decreased response to a stimulus within the receptive field). Moreover, during memory recall, there was a push-pull relationship between these two sets of pRFs across regions: when activity of +ve pRFs was high in perceptual regions (i.e., OPA/PPA), activity of -ve pRFs was low in memory regions (i.e., LPMA/VPMA), and vice-versa (Steel, Silson, et al., 2024). This finding suggests that these coupled scene-perception and scene-memory areas make use of the same visuospatial coding scheme, but with opponent profiles, in order to represent bottom-up sensory information and top-down memory information while limiting interference.

While this prior work represents a novel observation, two important considerations are that this process was (A) only tested within perception and memory areas on the lateral and ventral surfaces of the brain, and (B) only explored within the context of scene-perception and scene-memory, not other stimulus categories such as people. Crucially, prior work has identified additional regions of the brain that are critical to memory recall (Silson, Gilmore, et al., 2019; Silson, Steel, et al., 2019). Indeed, within the medial parietal cortex (MPC) there exists an alternating pattern of four regions selectively engaged during the mental recall of either people or places (Fig. 1). So, while it has been shown that scene-perception and scene-memory areas on the lateral and ventral surfaces are linked via visuospatial coding, it is currently unknown whether such an organisation persists into the people and place memory regions of MPC.

Fig. 1.

People and place memory regions in medial parietal cortex. Group-level results from a whole-brain linear mixed model contrast for the recall of people versus place stimuli. The results are displayed on partially inflated cortical surfaces of a sample subject, separately for the left and right hemispheres. Activation was thresholded at a minimum F value of 12 for the main effect of category (p < 7.1 × 10⁻⁴). Regions with greater activation for places are displayed in cold colours, and those with greater activation for people in hot colours. Place-selective regions (Places1, Places2) from prior work (Silson, Steel, et al., 2019), outlined in purple, and people-selective regions (People1, People2), outlined in green, show a high degree of overlap with the current ROIs.


Here, we asked whether people and place memory regions of MPC also show signatures of visuospatial coding (i.e., spatially specific visual responses in the form of +ve and -ve pRFs). We took advantage of two existing datasets from our lab containing a mixture of pRF mapping (Dataset1 and Dataset2, n = 36), scene-face localisers (Dataset1 and Dataset2, n = 36), scene perception (Dataset1, n = 24), and memory recall data (Dataset2, n = 12). We had two main goals: (1) quantify the presence of visuospatial coding throughout MPC using pRF modelling, and (2) examine the extent to which the pattern of visuospatial coding between +ve and -ve pRFs structures responses across different cognitive processes: memory recall and visual perception. To preview, we show that memory recall regions of MPC exhibit visuospatial coding (spatially specific responses) and contain significant proportions of -ve pRFs. Crucially, we show that the pattern of visuospatial coding between +ve and -ve pRFs structures responses across different cognitive tasks, such that the more dissimilar the timeseries between pairs of positive/negative pRFs within a region during pRF mapping, the more dissimilar their responses during both memory recall and visual perception.

2.1 Dataset descriptions

Here, we used two independent fMRI datasets collected by our team at the University of Edinburgh Imaging Facility RIE, Edinburgh Royal Infirmary. Both MRI datasets were acquired using the same Siemens 3T Prisma scanner and 32-channel head coil and very similar sequence parameters. Independent results from Dataset1 can be found in Scrivener, Zamboni, et al. (2024), and from Dataset2 in Scrivener, Teed, et al. (2024).

Dataset1 had 24 participants (mean age: 24, 7 males), and Dataset2 had 12 participants (mean age: 27, 3 males). Participants included students from the University of Edinburgh and individuals from the surrounding areas. Participants had normal or corrected-to-normal vision, and were free from neurological or psychiatric conditions. Written consent was obtained from all participants in accordance with the Declaration of Helsinki and a consent form approved by the School of Philosophy, Psychology and Language Sciences ethics committee of the University of Edinburgh. Participants were given monetary compensation for their time.

For both datasets, participants completed 2 hours of scanning with a break in between (ranging from 15–60 minutes, depending on scanner timetabling constraints). In the first hour of scanning for both datasets, we acquired T1- and T2-weighted structural scans, followed by functional runs of population receptive field (pRF) mapping (Dumoulin & Wandell, 2008), each lasting six minutes (Dataset1 = 6 runs per participant, Dataset2 = 4 runs). In the second hour of scanning for both datasets, we first acquired two runs of a scene-face localiser, each lasting 5 minutes. In Dataset1, we next acquired two runs of an event-related visual perception task, each lasting 8 minutes. In Dataset2, we instead acquired six runs of a recall task, each lasting 6 minutes. One participant completed only 4 of the 6 recall runs.

Using both datasets, we have a mixed sample of pRF mapping (both datasets, n = 36), scene-face localisers (both datasets, n = 36), scene perception (Dataset1, n = 24), and memory recall data (Dataset2, n = 12).

2.2 MRI recording parameters

For both datasets we acquired two structural images: T1-weighted (TR = 2.5 s, TE = 4.37 ms, flip angle = 7 deg, FOV = 256 mm x 256 mm x 192 mm, resolution = 1 mm isotropic, acceleration factor = 3) and T2-weighted (TR = 3.2 s, TE = 408 ms, flip angle = 9 deg, FOV = 256 mm x 240 mm x 192 mm, resolution = 0.9 mm isotropic, acceleration factor = 2). The majority of the functional scans across both datasets were acquired using the same multiecho multiband echo planar imaging sequence with the following parameters: TR = 2 s, TEs = 14.6 ms, 35.79 ms, 56.98 ms, MB factor = 2, acceleration factor = 2, 48 interleaved slices, phase encoding anterior to posterior, transversal orientation, slice thickness = 2.7 mm, in-plane voxel size = 2.7 mm x 2.7 mm, distance factor = 0%, flip angle = 70 degrees. The first five participants in Dataset1 were recorded with a larger number of slices (52) and slightly different echo times (14.6 ms, 32.84 ms, 51.08 ms). To accommodate larger heads, we reduced the number of slices from 52 to 48 from participant 6 onwards, which provided greater coverage in the anterior-posterior direction. Although we were able to achieve whole-brain coverage for participants 1–5, who had smaller head sizes, we made this change as a precaution.

2.3 Stimuli and tasks

2.3.1 Population receptive field modelling

During pRF mapping sessions, a bar aperture traversed gradually through the visual field while revealing randomly selected scene fragments from 90 possible scenes. During each 36 s sweep, the aperture took 18 evenly spaced steps, one every 2 s (1 TR), to traverse the entire screen. Across the 18 aperture positions, all 90 possible scene images were displayed once. A total of eight sweeps were made during each run (four orientations, two directions). Specifically, the bar aperture progressed in the following order for all runs: Left to Right, Bottom Right to Top Left, Top to Bottom, Bottom Left to Top Right, Right to Left, Top Left to Bottom Right, Bottom to Top, and Top Right to Bottom Left. The bar stimuli covered a circular aperture (diameter = 12 degrees of visual angle). Participants performed a colour detection task at fixation, indicating via button press when the white fixation dot changed to red. Fixation colour changes occurred semi-randomly, with approximately two colour changes per sweep (Silson et al., 2015).
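For concreteness, the aperture design above can be expressed as one binary stimulus mask per TR. The following minimal sketch (in Python) illustrates this; the grid resolution, bar width, and the reduction of each sweep to a motion angle are our assumptions, not the presentation code:

import numpy as np

GRID = 100     # pixels per screen dimension (assumed)
DIAM = 12.0    # circular aperture diameter, degrees of visual angle
N_STEPS = 18   # evenly spaced bar positions per sweep
BAR_W = 1.0    # bar width in degrees (assumed)

xs = np.linspace(-DIAM / 2, DIAM / 2, GRID)   # coordinates centred on fixation
X, Y = np.meshgrid(xs, xs)
circle = (X ** 2 + Y ** 2) <= (DIAM / 2) ** 2

def sweep(angle_deg):
    """Binary masks for one 18-step sweep; the bar moves along angle_deg."""
    theta = np.deg2rad(angle_deg)
    axis = X * np.cos(theta) + Y * np.sin(theta)  # position along the motion axis
    centres = np.linspace(-DIAM / 2, DIAM / 2, N_STEPS)
    return np.stack([circle & (np.abs(axis - c) <= BAR_W / 2) for c in centres])

# Motion angles for: L-R, BR-TL, T-B, BL-TR, R-L, TL-BR, B-T, TR-BL
angles = [0, 135, 270, 45, 180, 315, 90, 225]
masks = np.concatenate([sweep(a) for a in angles])  # (144, GRID, GRID): 1 mask per TR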

2.3.2 Scene-face localiser

During each run, colour images of scenes and faces were presented at fixation (10 × 10 degrees of visual angle) in 16 s blocks (20 images per block; 300 ms per image, 500 ms blank). Participants responded via an MRI-compatible button box whenever the same image appeared sequentially (randomly occurring twice per run).

2.3.3 Scene perception: Dataset1

During two event-related scene perception scans, participants were presented with 96 complex scenes (12 x 9 degrees of visual angle, 500 ms each) in a randomised order as in Kravitz et al. (2011). Interstimulus intervals (3–7 s) were chosen to optimise the ability of the later deconvolution to extract the responses to each scene. Participants fixated centrally and performed an orthogonal fixation colour-detection task, pressing a button (via MRI compatible button box) every time the green fixation cross turned red (randomly occurring nine times per run).

2.3.4 Recall: Dataset2

Before attending their first session, participants in Dataset2 were asked to provide the researchers with images of six personally familiar people and six personally familiar places that they were confident they could recall easily. We asked participants to save these images with a name that would serve as a cue for them to recall the stimuli, such as ‘Sophie’ or ‘Office’. During each trial, participants were presented with a randomly chosen word cue for 500 ms, followed by 9.5 s of a green fixation cross, during which they were asked to create a mental image of the cued stimulus. Each recall period was followed by a jittered inter-trial interval (ITI) of 2.5–7 s during which a white fixation cross was presented. Each recall cue was presented twice per run across 6 runs (12 total repetitions). We specifically asked participants to generate images that closely matched the pictures they had provided, to make the recall as similar as possible across repetitions.

2.4 Analysis

2.4.1 Pre-processing

MRI scans were processed using AFNI (Cox, 1996), FreeSurfer, and SUMA (Saad & Reynolds, 2012). Dummy scans were removed from the start of each run (3dTcat). Large deviations in signal were removed (3dDespike). Slice time correction was then performed (3dTshift), aligning each slice with a time offset of zero. The skull was removed from the first echo-1 scan, which was used to create a brain mask (3dSkullStrip and 3dAutomask). The first echo-2 scan was extracted (3dbucket) and used as a base for motion correction and registration with the T1 structural scan. Motion parameters were estimated for the echo-2 scans (3dVolreg) and applied to the other echoes (3dAllineate). After completing the standard pre-processing, the data were also processed using tedana (Evans et al., 2015; Kundu et al., 2012) to denoise the multi-echo scans (version 0.0.12, using default options). The tedana optimally combined and denoised output was then scaled: we divided the signal in each voxel by its mean value and multiplied the signal by 100 (3dTstat and 3dcalc). This means that the fMRI values can be interpreted as a percentage of the mean signal, and effect estimates can be viewed as percentage change from baseline (Chen et al., 2017). For the pRF data, the runs were averaged to leave a single timeseries for further analysis.

The session 1 structural scans were aligned to the functional data collected in session 1 (align_epi_anat.py) with multiple cost functions (including LPC, NMI, and LPA) and manually checked for accuracy. For almost all subjects we used the default LPC (local Pearson correlation) output, as this worked sufficiently well. For three subjects in Dataset1 we used the NMI (normalised mutual information) output. Functional data collected in session 2 were aligned to the session 1 structural to ensure that all functional data had the same alignment. FreeSurfer reconstructions were estimated using both the T1 and T2 scans (recon-all) from session 1, and the output was used to create surfaces readable in SUMA (SUMA_Make_Spec_FS). The SUMA structural was then aligned to the session 1 experimental structural to ensure alignment with the functional images (SUMA_AlignToExperiment). Surface-based analyses were conducted using the SUMA standard cortical surface (std.141).

2.4.2 pRF modelling and amplitude definition

Visuospatial organisation refers to regions with spatially specific visual responses in the absence of traditionally defined retinotopic maps (e.g., the polar angle and eccentricity maps of early visual cortex). Such regions may contain many suprathreshold population receptive fields (pRFs), clearly indicating that their response varies with the spatial position of the stimulus, but lack visible gradients along which to draw retinotopic maps (Groen et al., 2021). Here, we estimated pRFs using AFNI’s non-linear fitting algorithm (3dNLfim) and the GAM basis function (Silson et al., 2015). For every voxel in the brain, the model initially estimates the centre of the pRF on an X, Y grid with 200 samples across both the height and the width of the screen. For each point in the grid, sigma (pRF size) values are sampled at the same resolution, but over a default range of 0 to half the screen width, sampled at 100 even intervals. These default parameters result in 4 million possible pRFs (200 × 200 × 100 combinations of X, Y location and size). Given the position of the stimulus in the visual field at every TR, the estimated time series for a receptive field of a given location (X, Y) and size (sigma) can be modelled. As in standard pRF implementations, the model makes use of a 2-D stimulus time series containing binary masks of the stimulus location at each TR, convolved with a standard hemodynamic response function, to produce 4 million predicted time series, each reflecting the predicted response of a voxel’s pRF modelled as a 2-D Gaussian located in the visual field.
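To make the model explicit, the sketch below (continuing the stimulus-mask sketch from Section 2.3.1, which defines X, Y, and masks) generates one predicted timeseries for a candidate pRF. The gamma-variate HRF parameters are assumed for illustration and only approximate AFNI’s GAM basis:

def gaussian_prf(x0, y0, sigma):
    """2-D Gaussian pRF over the (X, Y) grid from the stimulus sketch."""
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

def gam_hrf(tr=2.0, length=30.0, b=8.6, c=0.547):
    """Gamma-variate HRF; b and c are assumed values approximating AFNI's GAM."""
    t = np.arange(0, length, tr)
    return (t / (b * c)) ** b * np.exp(b - t / c)

def predicted_timeseries(x0, y0, sigma, masks, tr=2.0):
    # Neural drive = overlap between the pRF and the binary mask at each TR
    drive = (masks * gaussian_prf(x0, y0, sigma)).sum(axis=(1, 2))
    bold = np.convolve(drive, gam_hrf(tr))[: len(drive)]
    return bold / bold.max()  # unit peak; the fitted amplitude scales this

# 3dNLfim effectively compares each voxel against ~4 million such predictions,
# e.g., a pRF centred 3 degrees left of fixation with sigma = 2 degrees:
pred = predicted_timeseries(-3.0, 0.0, 2.0, masks)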

A typical pRF model output provides the associated location (X and Y) and size (sigma) of the best-fitting predicted timeseries for each voxel, as well as the model fit (R²). Suprathreshold pRFs were here defined as those with R² > 0.08, as in prior work (Steel et al., 2024). pRFs with an estimated size larger than the visual presentation were also excluded (sigma > 0.95). A feature of the AFNI pRF implementation is that it allows the best predicted time series to be either positive (i.e., a positive BOLD response to stimulation of its receptive field) or negative (i.e., a negative BOLD response to stimulation of its receptive field). Here, pRFs with an amplitude greater than 0 were defined as positive (+ve), whereas pRFs with an amplitude less than 0 were defined as negative (-ve). We calculated the percentage of suprathreshold -ve pRFs within each ROI and entered these values (averaged across hemispheres) into a linear mixed model (LMM) with Category (People, Places) and Position (Posterior, Anterior) as factors. LMMs were conducted in R (v1.3; lme4 v27.1; lmerTest v3.1).
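As a minimal sketch of this selection and classification step (the array names are hypothetical; one value per voxel):

def classify_prfs(r2, sigma, amplitude, r2_thresh=0.08, sigma_max=0.95):
    """Return +ve/-ve pRF masks and the percentage of -ve pRFs in an ROI."""
    keep = (r2 > r2_thresh) & (sigma <= sigma_max)  # suprathreshold pRFs
    positive = keep & (amplitude > 0)               # +ve pRFs
    negative = keep & (amplitude < 0)               # -ve pRFs
    pct_negative = 100.0 * negative.sum() / keep.sum()
    # pct_negative (averaged across hemispheres) then enters the LMM in R,
    # e.g., lmer(pct ~ Category * Position + (1 | subject)) (assumed formula)
    return positive, negative, pct_negative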

2.4.3 Recall univariate activation

The activity associated with each stimulus in the recall scans was deconvolved using a BLOCK basis function (BLOCK(9.5,1), 3dDeconvolve and 3dREMLfit) aligned to the onset of the recall period. Each run was modelled separately, and estimates of subject motion were included as regressors of no interest. The output of the model was projected onto the SUMA standard cortical surface (3dVol2Surf). Group-level analysis was conducted on the surface data using a linear mixed model (3dLME). We included subject as a random effect and modelled the main effect of category (people vs. places), the main effect of run (runs 1 to 6), and an interaction between category and run effects.
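For illustration, one column of this recall design can be sketched as a 9.5 s boxcar at each cue onset convolved with an HRF (here reusing gam_hrf from the pRF sketch; AFNI’s actual BLOCK basis differs in its exact shape):

import numpy as np

def block_regressor(onsets_s, n_trs, tr=2.0, duration=9.5):
    """One recall regressor: boxcar at each onset convolved with the HRF."""
    boxcar = np.zeros(n_trs)
    for onset in onsets_s:
        start = int(round(onset / tr))
        boxcar[start:start + int(np.ceil(duration / tr))] = 1.0
    reg = np.convolve(boxcar, gam_hrf(tr))[:n_trs]
    return reg / reg.max()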

2.4.4 Defining medial parietal regions of interest

Medial parietal ROIs were taken from previous work (Silson, Steel, et al., 2019) to avoid issues of circularity in our analysis (Kriegeskorte et al., 2009). In that work, two ROIs on the medial parietal surface with increased BOLD activation during recall of place stimuli were named Places1 and Places2, and a further two ROIs with greater activation for people stimuli were named People1 and People2. To compare the spatial location of the prior ROIs with the group-level recall data from Dataset2, we calculated the Dice coefficient of overlap between the two sets of ROIs, given as: 2 × (overlap between ROIs) / (size of ROI A + size of ROI B). The independent ROIs from prior work were transformed from surface space into each individual participant’s volumetric space using the same parameters that were used to project their data to the surface (3dSurf2Vol). All subsequent analyses were conducted using the voxels within these volumetric ROIs.
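A one-line sketch of this Dice coefficient, assuming each ROI is a boolean mask over surface nodes:

import numpy as np

def dice(roi_a, roi_b):
    """Dice coefficient: 2 x |A intersect B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(roi_a, roi_b).sum() / (roi_a.sum() + roi_b.sum())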

2.4.5 Visual perception univariate analysis

The activity associated with each stimulus in the visual perception scans was deconvolved using a GAM basis function aligned to the onset of each stimulus (3dDeconvolve and 3dREMLfit). The two runs were modelled together, and each stimulus regressor included two onsets. Estimates of subject motion were included as regressors of no interest.

2.4.6 Representational dissimilarity analysis

Representational dissimilarity analysis was conducted in the volume within each MPC ROI (Places1, Places2, People1, and People2), in the left and right hemispheres separately. For each participant and ROI, we constructed a representational dissimilarity matrix (RDM) of the pairwise dissimilarity (1-Pearson’s r) between the pRF timeseries of all pairs of +ve and -ve pRFs. As a concrete example, consider a single ROI (e.g., Places1) that contains 300 suprathreshold pRFs (R² > 0.08). Of those 300 pRFs, 175 are identified as +ve pRFs, with the remaining 125 identified as -ve pRFs. Computing the pairwise dissimilarity between all +ve and all -ve pRFs thus produces a 175 × 125 RDM for this ROI. This RDM represents the dissimilarity in the timeseries during pRF mapping. As a pair of +ve and -ve pRFs become more similar in visuospatial terms (i.e., more overlapping in the visual field), their responses during pRF mapping become more dissimilar. For example, perfectly overlapping +ve/-ve pRFs will have perfectly anticorrelated, and therefore maximally dissimilar, timeseries. The pRF RDM thus represents the degree of opponent visuospatial coding within each ROI.
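A minimal sketch of this RDM construction, assuming pos_ts and neg_ts are (n_voxels, n_timepoints) arrays of pRF-mapping timeseries for the +ve and -ve pRFs of one ROI:

import numpy as np

def zscore_rows(ts):
    return (ts - ts.mean(1, keepdims=True)) / ts.std(1, keepdims=True)

def cross_rdm(pos_ts, neg_ts):
    """Pairwise 1 - Pearson's r between every +ve and every -ve timeseries."""
    r = zscore_rows(pos_ts) @ zscore_rows(neg_ts).T / pos_ts.shape[1]
    return 1.0 - r  # (n_pos, n_neg); e.g., 175 +ve x 125 -ve gives a 175 x 125 RDM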

Next, we asked whether the similarity in visuospatial coding between +ve and -ve pRFs (i.e., the pRF timeseries RDM) was also present during memory recall (n = 12). Put simply, we asked whether the voxels with more dissimilar timeseries during pRF mapping also had more dissimilar timeseries during recall. To do this, we correlated the pRF timeseries RDM (as described above) with a similar RDM constructed using the recall timeseries, hypothesising that there would be a positive correlation. For each participant, we computed one RDM per recall run before using the average RDM in the correlation analysis. We tested for a significant positive correlation across participants using the normalised correlation values.
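A sketch of this comparison, reusing cross_rdm above; we read “normalised correlation” as a Fisher z-transform, an assumption consistent with the z values reported in the figures:

import numpy as np

def prf_task_correlation(prf_rdm, run_rdms):
    """Correlate the pRF RDM with the run-averaged task RDM (Fisher z)."""
    task_rdm = np.mean(run_rdms, axis=0)  # average RDM across runs
    r = np.corrcoef(prf_rdm.ravel(), task_rdm.ravel())[0, 1]
    return np.arctanh(r)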

Lastly, we tested whether the visuospatial coding present in our ROIs also structured the responses during visual perception (n = 24). For this, we correlated the pRF timeseries RDMs with RDMs constructed using the scene-perception timeseries. We again tested for significant positive correlations across participants using the normalised correlation values, as above.

2.4.7 Test-retest analysis

To assess the test-retest reliability of the positive correlation between the pRF and Recall RDMs, we leveraged the six runs of the memory recall task. For each run, we calculated a Recall RDM (as described above) and correlated that with the pRF RDM.

2.4.8 Visuospatial matching analysis

We performed a visuospatial matching analysis to test the prediction that pairs of +ve/-ve pRFs that are more similar in visuospatial terms (i.e., have more anticorrelated pRF timeseries) would show more dissimilar (i.e., less correlated) responses during memory recall. The rationale here is that if the opponent visuospatial coding framework is driving the results we observe, then a pair of +ve/-ve pRFs that represent similar portions of visual space should be more anticorrelated during both the recall and perception tasks than a pair of +ve/-ve pRFs that do not share visual space. To perform this analysis, we took the following steps (a code sketch follows the list):

  1. For every -ve pRF in an ROI we found: (a) the best matching +ve pRF in visuospatial terms (i.e., most anticorrelated +ve pRF in the time series), (b) the worst matching +ve pRF in visuospatial terms (i.e., most positively correlated +ve pRF in the time series), and (c) a randomly selected pRF regardless of its amplitude (1000 random iterations). Note, ‘best’ and ‘worst’ matched pRFs are categorised on the basis that perfectly matched +ve/-ve pRFs have perfectly anticorrelated time series.

  2. Next, for each of these three pairs of pRFs we computed the correlation in the time series during recall or perception (i.e., an independent task) and stored those correlation coefficients, before averaging across pRFs. This analysis produced three correlation coefficients for every participant, ROI, and hemisphere:

    • Correlation between matched -ve/+ve pRFs.

    • Correlation between unmatched -ve/+ve pRFs.

    • Correlation between -ve pRFs and random pRFs.

  3. For each hemisphere separately, these values were submitted to a Linear Mixed Model with ROI (Places1, Places2, People1 & People2) and MatchType (Matched, Unmatched & Random) as factors. If the positive correlation between the pRF RDMs and recall RDMs was driven by opponent visuospatial coding, and not solely the result of some voxels being inherently anticorrelated (regardless of task), then we predicted that the correlation in the recall time series would be weaker (i.e., more dissimilar) between matched -ve/+ve pRFs than either unmatched -ve/+ve pRFs or -ve pRFs and random pRFs (averaged over 1000 iterations).
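The sketch below illustrates steps 1–2 under the same array assumptions as before (prf_ts and task_ts are (n_voxels, n_timepoints) arrays; pos and neg are integer indices of the +ve and -ve pRFs; cross_rdm is defined in Section 2.4.6):

import numpy as np

rng = np.random.default_rng(0)

def corr_rows(a, b):
    """Row-wise Pearson correlation between two (n, t) arrays."""
    az = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    bz = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return (az * bz).mean(1)

def matching_analysis(prf_ts, task_ts, pos, neg, n_iter=1000):
    # Correlation of every -ve pRF with every +ve pRF during pRF mapping
    r_map = 1.0 - cross_rdm(prf_ts[neg], prf_ts[pos])  # (n_neg, n_pos)
    best = pos[np.argmin(r_map, axis=1)]               # most anticorrelated +ve pRF
    worst = pos[np.argmax(r_map, axis=1)]              # most positively correlated
    matched = corr_rows(task_ts[neg], task_ts[best]).mean()
    unmatched = corr_rows(task_ts[neg], task_ts[worst]).mean()
    all_prfs = np.concatenate([pos, neg])              # random partner: any amplitude
    random_ = np.mean([corr_rows(task_ts[neg],
                                 task_ts[rng.choice(all_prfs, size=len(neg))]).mean()
                       for _ in range(n_iter)])
    return matched, unmatched, random_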

2.4.9 Bootstrapping analysis

To evaluate the robustness of the positive correlations observed between the pRF and Recall RDMs, we performed a bootstrapping analysis. For each participant, ROI, and hemisphere, we randomly shuffled the pRF labels (i.e., +ve/-ve) prior to computing the pRF and Recall RDMs and their corresponding correlation. This was performed 1000 times. We then compared the observed pRF-Recall RDM correlation relative to these distributions, which reflected the correlation one could expect between the pRF and Recall RDMs at random. The same visuospatial matching and bootstrapping analyses were also applied to the visual perception data.
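A sketch of this shuffle, again reusing cross_rdm from above; permuting the +ve/-ve labels breaks the opponent structure while preserving everything else about the data:

import numpy as np

rng = np.random.default_rng(0)

def shuffled_null(prf_ts, task_ts, pos, neg, n_iter=1000):
    """Null distribution of pRF-task RDM correlations under shuffled labels."""
    labels, n_pos = np.concatenate([pos, neg]), len(pos)
    null = np.empty(n_iter)
    for i in range(n_iter):
        perm = rng.permutation(labels)
        p, n = perm[:n_pos], perm[n_pos:]  # shuffled +ve / -ve assignments
        null[i] = np.corrcoef(cross_rdm(prf_ts[p], prf_ts[n]).ravel(),
                              cross_rdm(task_ts[p], task_ts[n]).ravel())[0, 1]
    return null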

Here, we focus on quantifying the presence of visuospatial coding within people and place memory regions of MPC. Prior work identified an alternating pattern of people and place memory recall regions within MPC along the posterior-anterior axis (Places1, People1, Places2 & People2, see Fig. 1). As such, we initially sought to replicate this finding by comparing the magnitude of activity during people versus place recall (n = 12, see Section 2). At the group level, we were able to identify all four regions-of-interest (ROIs) in both hemispheres. Moreover, the spatial location and topography of these ROIs was largely consistent with prior work, except for People2 in the left hemisphere (Dice coefficient of overlap between prior and current ROIs; LH: Places1 = 0.85, People1 = 0.86, Places2 = 0.45, People2 = 0.02; RH: Places1 = 0.84, People1 = 0.80, Places2 = 0.59, People2 = 0.35). Given the high degree of similarity between the two datasets, and in order to maximise the power of our sample while avoiding circular analyses (Kriegeskorte et al., 2009), all subsequent analyses made use of ROIs from independent prior work (Silson, Steel, et al., 2019).

3.1 Visuospatial coding in people and place memory areas of MPC

Next, we examined whether people and place memory areas of MPC showed signs of visuospatial coding using pRF modelling. Here, we combined data from two separate pRF datasets acquired with the same experimental paradigm (Dataset1 n = 24, Dataset2 n = 12, see Section 2). For consistency with prior work, we calculated the percentage of suprathreshold (R² > 0.08) -ve pRFs within each ROI (Fig. 2). All people and place memory regions of MPC contained significant percentages of -ve pRFs (t-test versus zero, all t-values > 15.13, all p-values < 0.001, see Supplementary Table S1 for a full statistical breakdown), which did not differ between hemispheres (t(245) = 1.95, p = 0.05). The prevalence of -ve pRFs in MPC appeared relatively constant along the posterior-anterior axis. Importantly, the prevalence of -ve pRFs in MPC was highly consistent across the two independent datasets, with, on average, only ~4% separating the two datasets (mean difference = 3.86, SD = 4.30).

Fig. 2.

Visuospatial coding within people and place regions of MPC. Bars represent the mean percentage of suprathreshold (R²>0.08) -ve pRFs in each region (left hemisphere = light grey, right hemisphere = dark grey). Individual participant data points are included from both datasets (Dataset1 = orange dots, Dataset2 = purple dots). Error bars represent the standard error of the mean (SEM). Each ROI contains a significant percentage of -ve pRFs (*p < 0.001).


To quantify these effects, we submitted these values (averaged across hemispheres) to a linear mixed model (LMM) with Category (People, Places) and Position (Posterior, Anterior) as factors. Neither the main effect of Position (F(1, 245) = 1.80, p = 0.18) nor that of Category (F(1, 245) = 3.68, p = 0.06) was significant. There was, however, a significant Category by Position interaction (F(1, 245) = 6.70, p = 0.01), reflecting a higher prevalence of -ve pRFs posteriorly for people (i.e., People1 > People2) but anteriorly for places (i.e., Places2 > Places1). People and place memory areas within MPC thus had significant levels of -ve pRFs, but importantly they were not universally negative; all ROIs contained a mixture of both +ve and -ve pRFs, with neither an obvious spatial topography within an ROI nor overall differences in explained variance (paired t-tests of mean R²: p > 0.05 in all cases except lhPlaces1, p = 0.03).

Recent work identified a push-pull relationship between scene-perception and scene-memory areas of the brain during memory recall, such that when activity in +ve pRFs within scene-perception areas was high, activity in -ve pRFs within scene-memory areas was low, and vice-versa—a relationship that was strengthened by the similarity in visuospatial coding between +ve/-ve pRF pairs (Steel, Silson, et al., 2024). Crucially, this analysis was conducted between regions, which followed logically from the anatomical yoking of scene-perception and scene-memory areas on the lateral and ventral surfaces (Steel et al., 2021). An important difference between this perception-memory yoking and MPC is that within MPC there is no clear perception-to-memory gradient for either category. Also, as demonstrated above, the prevalence of -ve pRFs did not change significantly throughout MPC (i.e., no effect of Position). Given this, we took a different, within-region approach and asked whether the similarity in visuospatial coding between +ve/-ve pRFs within a region structures the responses during memory recall.

We start by considering a theoretical relationship between two pRFs within a region that are perfectly overlapping, but with opposite amplitudes (i.e., one +ve and one -ve, Fig. 3). Here, the two pRFs have identical centre locations and sizes (left panel, Fig. 3) but differ in that one responds positively to a stimulus within its receptive field (+ve pRF) and the other responds negatively (-ve pRF), with a perfect negative correlation between their timeseries (middle & right panels, Fig. 3). This reflects an idealised scenario, but we can look for a similar pattern in real data (accepting that real data will be noisier).
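This idealised case can be reproduced with the toy pRF sketch from Section 2.4.2 (which defines predicted_timeseries and masks): negating a predicted timeseries yields a perfectly anticorrelated pair, exactly as in Figure 3. This is an illustration, not an analysis of real data:

import numpy as np

pos_ts = predicted_timeseries(-3.0, 0.0, 2.0, masks)  # +ve pRF
neg_ts = -pos_ts                                      # perfectly matched -ve pRF
print(np.corrcoef(pos_ts, neg_ts)[0, 1])              # -1.0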

Fig. 3.

Opponent responses of matched +ve and -ve pRFs. Left: Idealised scenario of two perfectly overlapping pRFs with opposite amplitudes. Middle: The timeseries of the +ve pRF (red) and -ve pRF (blue) during pRF mapping. As activity in the +ve pRF increases in response to the bar stimulus within its receptive field, the activity of the -ve pRF decreases and vice-versa. Right: The two timeseries have a perfect negative correlation (r = -1.00).


3.2 Similarity in visuospatial coding between +ve/-ve pRFs structures responses during memory recall in MPC

The indicative example in Figure 3 demonstrates the inverse relationship expected between the timeseries during pRF mapping and the visual field coverage of +ve/-ve pRF pairs. That is, the more dissimilar the timeseries during pRF mapping between a pair of +ve and -ve pRFs, the more similar they are in visual space. Next, we sought to test whether the pattern of opponent visuospatial coding between +ve/-ve pRFs within each region could structure the responses during memory recall.

Here, we computed RDMs for each run of the recall task separately before averaging across runs in each participant and ROI (see Section 2). These RDMs reflect the pairwise dissimilarity in response during recall between all pairs of +ve and -ve pRFs. Crucially then, if the dissimilarity in visuospatial coding between +ve and -ve pRFs (i.e., pRF timeseries RDM) within a region structures the responses during memory recall, then we expected positive correlations between these two RDMs across participants. Figure 4 depicts this relationship for left hemisphere Places1 of a representative participant. The pattern of dissimilarity in visuospatial coding between +ve/-ve pRFs (Fig. 4A) and the pattern of dissimilarity in memory recall response (Fig. 4B) are positively correlated (Fig. 4C). These data highlight that the more dissimilar the timeseries between a pair of +ve/-ve pRFs during pRF mapping, the more dissimilar their timeseries during memory recall—a task that places very different demands on this region.

Fig. 4.

Positive relationship between pattern of visuospatial coding and memory recall response for an example participant. (A) An RDM representing the pattern of dissimilarity (1-Pearson’s r) in the timeseries of all +ve and -ve pRFs during pRF mapping (navy = more similar, yellow = more dissimilar). This RDM is constructed by computing the pairwise dissimilarity in the pRF timeseries between all +ve pRFs (1 to Nth) and all -ve pRFs (1 to Nth). (B) An RDM representing the average pattern of dissimilarity (1-Pearson’s r) in the timeseries of all +ve and -ve pRFs during memory recall (purple = more similar, yellow = more dissimilar). (C) A scatter plot depicting the positive relationship between these two RDMs (z = 0.36).


As predicted, there were, on average, significant positive correlations between the pattern of dissimilarity in the pRF timeseries and the pattern of dissimilarity in memory recall between +ve/-ve pRFs within each ROI (all t-values > 2.87, all p-values < 0.015; see Figure 5 and Supplementary Table S2 for a full statistical breakdown). Performing the same analysis on each memory recall run separately also confirmed a significant positive relationship (i.e., p < 0.05) between the pRF and Recall RDMs in 47/48 cases (4 ROIs * 2 Hemispheres * 6 Runs = 48 cases). This finding indicates that +ve and -ve pRFs with similar visuospatial coding (i.e., dissimilar timeseries but similar visual field positions) within an ROI may represent different patterns of information during memory recall.

Fig. 5.

Pattern of visuospatial coding between +ve/-ve pRFs structures responses during memory recall. Bars represent the mean normalised correlation coefficients between the pRF timeseries RDM and memory recall RDM in each MPC ROI (left hemisphere = green, right hemisphere = purple). Each data point represents an individual participant. Error bars represent SEM. The pattern of visuospatial coding between +ve/-ve pRFs was positively correlated with the pattern of responses during memory recall in each ROI (*p < 0.05).


To better evaluate the effects reported above, we performed a bootstrapping analysis (see Section 2). Here, for each participant, ROI, and hemisphere, we randomly shuffled (1000 times) the voxel labels (i.e., +ve/-ve) prior to computing the pRF and Recall RDMs and their corresponding correlation. We then compared the observed pRF-Recall RDM correlations for each ROI against these distributions, which reflect the relationship between pRF and Recall RDMs one might expect by chance (Fig. 6). In each ROI, the observed pRF-Recall RDM correlation falls well outside of the random distribution and was significantly different from the mean of its null distribution (all t-values > 5, all p-values < 0.0001).

Fig. 6.

Bootstrapping analysis. Histograms depict the distributions of pRF-Recall RDM correlation coefficients (1000 iterations) computed between shuffled +ve/-ve pRF labels. In each plot, the red line represents the observed pRF-Recall RDM correlation across participants within the opponent visuospatial coding framework. In all cases, the observed correlation falls well outside of what could be expected by chance.


Is this relationship really structured by the similarity in visuospatial coding, or could it be explained by less interesting effects? For example, are there inherent anticorrelations between voxels, regardless of whether their pRFs are +ve or -ve or the degree to which they represent similar locations of visual space? To rule out this possibility, we performed a pRF matching analysis (see Section 2). Here, we compared the correlation in the recall timeseries between three populations of pRFs identified using the pRF data: (1) between -ve pRFs and their best matching +ve pRFs in visuospatial terms (i.e., the most anticorrelated timeseries), (2) between -ve pRFs and their worst matching +ve pRFs in visuospatial terms (i.e., the most positively correlated timeseries), and (3) between -ve pRFs and a random pRF that could be either +ve or -ve (Fig. 7). As a reminder, +ve and -ve pRFs that represent overlapping portions of visual space will have more dissimilar timeseries than +ve and -ve pRFs that represent different portions of space (see Fig. 3 for an indicative example).

Fig. 7.

Visuospatial matching analysis. Bars represent the average recall timeseries correlations in each ROI for three different types of pRF matching. Matched: each -ve pRF was paired with its best matching +ve pRF (i.e., the most anticorrelated timeseries). Unmatched: each -ve pRF was paired with its worst matching +ve pRF (i.e., the most positively correlated timeseries). Random: each -ve pRF was paired with a random pRF, which could be either +ve or -ve. Matched +ve/-ve pRF pairs produced the lowest correlation in all ROIs, as their timeseries are more dissimilar. Error bars represent the standard error of the mean.


If the positive correlation between the pRF RDMs and recall RDMs reported above was driven by opponent visuospatial coding, and not solely the result of some voxels being inherently anticorrelated (regardless of task), then we predicted that the correlation in the recall time series would be weaker (i.e., more dissimilar) between matched -ve/+ve pRFs than either unmatched -ve/+ve pRFs or -ve pRFs and random pRFs (averaged over 1000 iterations).

For each hemisphere separately, the correlation values were submitted to a Linear Mixed Model with ROI (Places1, Places2, People1 & People2) and MatchType (Matched, Unmatched & Random) as factors. Critically, we observed a significant main effect of MatchType in both hemispheres (LH; F(2, 104) = 33.52, p = 5.7 × 10⁻¹²; RH; F(2, 100) = 37.96, p = 5.11 × 10⁻¹³). This main effect was driven by lower correlation values, on average, for the Matched versus either the Unmatched or Random conditions. Post-hoc comparisons (Bonferroni corrected) confirmed significantly larger dissimilarity values for Matched versus Unmatched (LH; t(104) = 8.13, p < 0.0001; RH; t(101) = 8.64, p < 0.0001), Matched versus Random (LH; t(104) = 3.28, p = 0.004; RH; t(101) = 3.34, p = 0.003), and Random versus Unmatched (LH; t(104) = 4.85, p < 0.0001; RH; t(101) = 5.29, p < 0.0001) comparisons. We also observed significant main effects of ROI in both hemispheres (LH; F(3, 105) = 10.65, p = 3.56 × 10⁻⁶; RH; F(3, 102) = 9.62, p = 1.18 × 10⁻⁵), which reflected higher correlation values on average in People1. No ROI by MatchType interactions were significant (p > 0.05 in both hemispheres).

Across ROIs, the recall time series was less correlated (i.e., more dissimilar) between pairs of matched +ve/-ve pRFs than between either Unmatched or Random pairs. In other words, manipulating the degree of opponent visuospatial coding from strong (i.e., overlapping pRFs) to weak (i.e., non-overlapping pRFs) significantly altered the similarity in the time series during recall—a task entirely independent of how these pRFs were defined.

3.3 Similarity in visuospatial coding between +ve/-ve pRFs structures responses during visual perception

Our main analyses presented above focused on memory recall, given the recruitment of MPC during mental imagery and the higher prevalence of -ve pRFs in MPC relative to lateral and ventral memory regions. However, it is also possible that the similarity in visuospatial coding between +ve and -ve pRFs reflects a representational principle that extends beyond a single cognitive process. That is, despite our ROIs being principally engaged during memory recall, the visuospatial coding between +ve and -ve pRFs might also structure responses during other cognitive tasks, such as visual perception.

To test this possibility, we used data from a second group of participants (n = 24) who performed a slow event-related visual scene-perception task (see Section 2). First, we computed RDMs for each ROI based on the pRF timeseries. As above, these RDMs represented the pairwise dissimilarity in the pRF response between +ve and -ve pRFs. Next, we computed RDMs based on the event-related timeseries (i.e., visual perception). If the similarity in visuospatial coding between +ve and -ve pRFs reflects a broad representational principle that structures responses across different cognitive tasks, then we expected to find positive correlations between the pRF RDMs and the perceptual RDMs. Figure 8 depicts this analysis for Places1 of a representative participant. As above, the similarity in visuospatial coding between +ve/-ve pRFs (Fig. 8A) and the responses during visual perception (Fig. 8B) were positively correlated (Fig. 8C).

Fig. 8.

Positive relationship between pattern of visuospatial coding and pattern during visual perception for an example participant. (A) An RDM representing the pattern of dissimilarity (1-Pearson’s r) in the timeseries of all positive (+ve) and negative (-ve) pRFs during pRF mapping (navy = more similar, yellow = more dissimilar). This RDM is constructed by computing the pairwise dissimilarity in the pRF timeseries between all +ve pRFs (1 to Nth) and all -ve pRFs (1 to Nth). (B) An RDM representing the average pattern of dissimilarity (1-Pearson’s r) in the timeseries of all positive and negative pRFs during visual perception (dark blue = more similar, light blue = more dissimilar). (C) A scatter plot depicting the positive relationship between these two RDMs (z = 0.34).


As anticipated, we observed significant positive correlations between the pattern of dissimilarity in the pRF timeseries and the pattern of dissimilarity in the responses during visual perception (all t-values > 3.88, all p-values < 0.001, see Supplementary Table S3) between +ve/-ve pRFs within each ROI (Fig. 9). This suggests that the visuospatial similarity between +ve and -ve pRFs is not simply a feature specific to pRF modelling but extends to structure the relationship between voxels across different cognitive tasks. That is, the pattern of similarity between +ve/-ve pRFs during pRF mapping remains when participants engage in either a cued mental imagery task (internally orientated) or a visual perception task (externally orientated), suggesting that the visuospatial similarity between +ve and -ve pRFs (i.e., how similar they are in visual terms) likely has functional relevance beyond pRF mapping itself.

Fig. 9.

Pattern of visuospatial coding between +ve/-ve pRFs structures responses during visual perception. Bars represent the mean normalised correlation coefficients between the pRF timeseries RDM and visual perception RDM in each MPC ROI (left hemisphere = red, right hemisphere = orange). Each data point represents an individual participant. Error bars represent the standard error of the mean. The pattern of visuospatial coding between +ve/-ve pRFs was positively correlated with the pattern of responses during visual perception in each ROI (*p < 0.05).


For consistency with the analyses above, we also performed a bootstrapping analysis (see Section 2). Here, for each participant, ROI, and hemisphere, we randomly shuffled (1000 times) the voxel labels (i.e., +ve/-ve) prior to computing the pRF and Perception RDMs and their corresponding correlation. We then compared the observed pRF-Perception RDM correlations for each ROI against these distributions, which reflect the relationship between pRF and Perception RDMs one might expect by chance (Fig. 10). In each ROI, the observed pRF-Perception RDM correlation falls outside of the random distribution. The one exception was People2 in the left hemisphere, for which the observed pRF-Perception RDM correlation falls within the shuffled distribution, albeit towards its rightward tail. The observed pRF-Perception RDM correlations were significantly different from the means of their null distributions (all t-values > 5, all p-values < 0.0001).

Fig. 10.

Bootstrapping analysis. Histograms depict the distributions of pRF-Perception RDM correlation coefficients (1000 iterations) computed between shuffled +ve/-ve pRF labels. In each plot, the red line represents the observed pRF-Perception RDM correlation across participants within the opponent visuospatial coding framework. In all but one case, the observed correlation falls outside of what could be expected by chance.


We also applied the visuospatial matching analysis to the perception data. As above, we compared the correlation in the perception timeseries between three populations of pRFs: (1) between -ve pRFs and their best matching +ve pRFs in visuospatial terms (i.e., the most anticorrelated timeseries), (2) between -ve pRFs and their worst matching +ve pRFs in visuospatial terms (i.e., the most positively correlated timeseries), and (3) between -ve pRFs and a random pRF that could be either +ve or -ve (Fig. 11).

Fig. 11.

Visuospatial matching analysis. Bars represent the average visual perception timeseries correlations in each ROI for three different types of pRF matching. Matched: each -ve pRF was paired with its best matching +ve pRF (i.e., the most anticorrelated timeseries). Unmatched: each -ve pRF was paired with its worst matching +ve pRF (i.e., the most positively correlated timeseries). Random: each -ve pRF was paired with a random pRF, which could be either +ve or -ve. Matched +ve/-ve pRF pairs produced the lowest correlation in all ROIs, as their timeseries are more dissimilar. Error bars represent the standard error of the mean.


As with the recall data, these values were submitted to an LMM with factors ROI and MatchType. Importantly, we again observed a significant main effect of MatchType in each hemisphere (LH; F(2, 210) = 43.83, p = 2.2 × 10⁻¹⁶; RH; F(2, 229) = 24.53, p = 2.2 × 10⁻¹⁰). This main effect was driven by lower correlation values, on average, for the Matched versus either the Unmatched or Random conditions. Post-hoc comparisons (Bonferroni corrected) confirmed significantly lower correlation values for Matched versus Unmatched (LH; t(211) = 7.91, p < 0.0001; RH; t(229) = 6.61, p < 0.0001) and Matched versus Random (LH; t(211) = 8.28, p = 0.004; RH; t(229) = 5.29, p = 0.003), but not Random versus Unmatched (LH; t(211) = 0.37, p = 1.00; RH; t(229) = 1.39, p = 0.55) comparisons. We also observed significant main effects of ROI in both hemispheres (LH; F(3, 218) = 5.36, p = 0.001; RH; F(3, 231) = 5.77, p = 0.0008), reflecting higher correlation values on average in Places2 of the left hemisphere and People2 of the right hemisphere. We also observed significant ROI by MatchType interactions in both hemispheres (LH; F(6, 210) = 3.32, p = 0.003; RH; F(6, 229) = 5.77, p = 0.01).

We observed that the similarity in visuospatial coding between pRFs with opposite amplitudes structures their responses across different cognitive tasks, here within people and place memory areas of MPC. These data add to the growing body of literature demonstrating pervasive visuospatial coding throughout the brain (Groen et al., 2021), including the visual cortex (Arcaro & Livingstone, 2017, 2021; Grill-Spector & Weiner, 2014; Silson et al., 2015; Wandell et al., 2007), the default mode network (Szinte & Knapen, 2020), and the hippocampus (Knapen, 2021; Silson, Zeidman, et al., 2021). Our data extend recent work (Steel, Silson, et al., 2024) by showing that visuospatial coding is present within people and place memory areas of MPC (Silson, Steel, et al., 2019).

Prior work using data from the Human Connectome Project first identified -ve pRFs within the default-mode network (Andrews-Hanna et al., 2010; Raichle et al., 2001), including its MPC component (Szinte & Knapen, 2020). The authors speculated on, but did not demonstrate, the potential functional significance of these -ve pRFs. Within the context of MPC, we replicate this finding (Szinte & Knapen, 2020) by showing significant proportions of -ve pRFs within the people and place recall regions. Moreover, we demonstrate that the relationship between +ve/-ve pRFs is not simply an inherited feature of the pRF modelling technique but persists during tasks that place very different demands on the default-mode network (i.e., internally oriented memory recall versus externally oriented visual perception).
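To illustrate why this control matters, the following minimal sketch (all stimulus and pRF parameters are illustrative assumptions, not values from the study) uses the standard isotropic Gaussian pRF model (Dumoulin & Wandell, 2008) to show that a +ve/-ve pair with identical visuospatial tuning produces perfectly anticorrelated model predictions by construction. It is therefore the task data, acquired independently of the pRF mapping, that carry the evidential weight.

import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """Isotropic 2D Gaussian pRF over a visual-field grid."""
    gx, gy = grid
    return np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * sigma ** 2))

# Illustrative visual-field grid (degrees) and a bar aperture sweeping
# left to right, as in a typical pRF-mapping stimulus.
lin = np.linspace(-10, 10, 101)
grid = np.meshgrid(lin, lin)
bars = np.stack([(np.abs(grid[0] - pos) < 1.0).astype(float)
                 for pos in np.linspace(-9, 9, 40)])

# A +ve and a -ve pRF with identical position and size, opposite amplitude.
rf = gaussian_prf(x0=3.0, y0=0.0, sigma=2.0, grid=grid)
overlap = bars.reshape(len(bars), -1) @ rf.ravel()  # stimulus-pRF overlap per frame
pred_pos, pred_neg = +1.0 * overlap, -1.0 * overlap

# Model predictions for a perfectly matched pair are anticorrelated (r = -1).
print(np.corrcoef(pred_pos, pred_neg)[0, 1])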

The functional role of MPC is often considered within the context of the default-mode network (Andrews-Hanna et al., 2010, 2014; Raichle et al., 2001; Shirer et al., 2012), where much of MPC is thought to act as a ‘core’ region that integrates information between separate dorsal and ventral subnetworks (Andrews-Hanna et al., 2010). The original delineation of people and place recall areas within MPC (Silson et al., 2019) challenged this conceptualisation by suggesting that this core is itself fractionated along the same lines as the dorsal and ventral subnetworks: Places1/2, like the ventral component, are associated with scene construction or contextual association, whereas People1/2, like the dorsal component, are associated with more social and potentially semantic processing (Andrews-Hanna et al., 2010; Bar & Aminoff, 2003; Hassabis et al., 2007). The current data add further complexity to this picture by highlighting not only that these regions contain a mixture of +ve and -ve visually sensitive pRFs, but also that the relationship between pairs of +ve/-ve pRFs within a region structures the responses of those voxels more generally.

Why do these regions contain a mixture of +ve/-ve pRFs? One possibility (already discussed in the context of -ve pRFs in the default-mode network) is that there are computational benefits to representing the same signal with both activation and deactivation (Szinte & Knapen, 2020). For example, the efficiency of sensory processing can be increased through the interplay of activation and deactivation via predictive coding (Srinivasan et al., 1982), and deactivations can help discount erroneous computational outcomes by explicitly representing ‘what is not’ (Goncalves & Welchman, 2017). The fact that each region contains a mixture of +ve/-ve pRFs raises the possibility that such an interplay occurs within MPC. For example, one interpretation of -ve pRFs is that they serve to store perceptual signals before those signals are used for other forms of cognition (Groen et al., 2021; Szinte & Knapen, 2020). The current data are consistent with this idea but suggest that such signals are potentially stored and accessed within a region: that responses during both memory recall and visual perception become more dissimilar as a pair of +ve/-ve pRFs becomes more similar in visuospatial terms hints strongly at just such an interplay.

The finding that the more similar a pair of +ve and -ve pRFs are in terms of visuospatial coding, the more dissimilar their responses during different cognitive tasks, is consistent with the idea that the default-mode network is well situated to integrate information across different functional modalities (Margulies et al., 2016; Popham et al., 2021). Matched visual and language areas have recently been identified at the anterior border of the visual cortex (Popham et al., 2021). Within MPC specifically, two regions in the approximate locations of Places1 and People1 were found to have aligned visual and semantic representations. It is possible that such alignment across different representations is facilitated via the opponent dynamics of +ve/-ve pRFs within these regions, in the same way that responses during memory recall and visual perception were structured via these interactions here. Considering this prior work, it is interesting to note that although the regions we describe as Places1 and People1 appear commensurate with the regions showing aligned visual and semantic representations (Popham et al., 2021), the latter do not extend anteriorly to cover the regions we describe as Places2 and People2. This suggests that the more anterior pair of recall regions in MPC may differ from their posterior counterparts at some as-yet-unidentified functional level. Indeed, in our data we do not see clear evidence of a posterior-anterior shift in either the prevalence of -ve pRFs or the degree to which the pattern of visuospatial coding relates to recall or perceptual responses. Future work should aim to establish what (if any) functional component differentiates the posterior and anterior pairs of recall regions within MPC.

Several important distinctions between the current data and recent prior work (Steel, Silson et al., 2024) are noteworthy. First, the spatial scale of the visuospatial coding interactions differs. Prior work observed a push-pull relationship between +ve and -ve pRFs in separate, albeit adjacent, scene-perception and scene-memory areas, respectively (Steel, Silson et al., 2024): as activity in +ve pRFs increased, activity in -ve pRFs decreased, and vice versa. That visuospatial interaction therefore occurred at the between-region level. Here, we observe within-region interactions between +ve and -ve pRFs that structure the responses of those voxels across multiple cognitive tasks. Second, prior work focused exclusively on scene-perception and scene-memory areas on the lateral and ventral surfaces, and on responses during scene perception and scene memory, respectively. Here, we extend the scope of visuospatial coding to include both people and place memory regions of MPC and measure responses during memory recall of people and places, as well as during visual perception.

A limitation of the current study is that it does not provide evidence for the functional relevance of the opponent visuospatial coding framework within MPC beyond its presence in structuring responses across different tasks. We believe the current study is an important initial step in identifying the presence of this framework, and future studies should seek to link it to, for example, the strength of memory recall. In addition, while the visuospatial matching analysis demonstrates that the overall pattern of responses is more dissimilar (i.e., more weakly correlated) for visuospatially matched pairs of +ve/-ve pRFs than for either Unmatched or Random pairs, we cannot entirely rule out the possibility that within each ROI there are populations of voxels with inherently anticorrelated timeseries regardless of their pRF properties.

Taken together, these data add to the growing body of work demonstrating the pervasive influence of visuospatial coding throughout the brain. Within MPC specifically, we show that people and place memory regions exhibit signatures of visuospatial coding in the form of +ve and -ve pRFs. Moreover, we show that the similarity in visuospatial representation between +ve/-ve pRFs structures the similarity of responses during both internally oriented memory recall and externally oriented visual perception. The interplay between +ve/-ve pRFs in MPC may play a vital role in integrating information across different representational spaces.

Data and code are available via the Open Science Framework (https://osf.io/ypk6c/).

C.L.S. & E.H.S. jointly conceived of the project. C.L.S. collected the data. C.L.S. & E.H.S. jointly analysed the data and wrote the manuscript.

This work was funded by the Biotechnology and Biological Sciences Research Council (BB/V003887/1). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.

The authors declare no competing interests.

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00507.

Amano, K., Wandell, B. A., & Dumoulin, S. O. (2009). Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of Neurophysiology, 102(5), 2704–2718. https://doi.org/10.1152/jn.00102.2009
Andrews-Hanna, J. R., Reidler, J. S., Sepulcre, J., Poulin, R., & Buckner, R. L. (2010). Functional-anatomic fractionation of the brain’s default network. Neuron, 65(4), 550–562. https://doi.org/10.1016/j.neuron.2010.02.005
Andrews-Hanna, J. R., Smallwood, J., & Spreng, R. N. (2014). The default network and self-generated thought: Component processes, dynamic control, and clinical relevance. Annals of the New York Academy of Sciences, 1316(1), 29–52. https://doi.org/10.1111/nyas.12360
Arcaro, M. J., & Livingstone, M. S. (2017). A hierarchical, retinotopic proto-organization of the primate visual system at birth. eLife, 6, e26196. https://doi.org/10.7554/eLife.26196
Arcaro, M. J., & Livingstone, M. S. (2021). On the relationship between maps and domains in inferotemporal cortex. Nature Reviews Neuroscience, 22(9), 573–583. https://doi.org/10.1038/s41583-021-00490-4
Arcaro, M. J., McMains, S. A., Singer, B. D., & Kastner, S. (2009). Retinotopic organization of human ventral visual cortex. Journal of Neuroscience, 29(34), 10638–10652. https://doi.org/10.1523/JNEUROSCI.2807-09.2009
Bar, M., & Aminoff, E. (2003). Cortical analysis of visual context. Neuron, 38(2), 347–358. https://doi.org/10.1016/S0896-6273(03)00167-3
Brewer, A. A., Liu, J., Wade, A. R., & Wandell, B. A. (2005). Visual field maps and stimulus selectivity in human ventral occipital cortex. Nature Neuroscience, 8(8), 1102–1109. https://doi.org/10.1038/nn1507
Chen, G., Taylor, P. A., & Cox, R. W. (2017). Is the statistic value all we should care about in neuroimaging? NeuroImage, 147, 952–959. https://doi.org/10.1016/j.neuroimage.2016.09.066
Cox, R. W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29(3), 162–173. https://doi.org/10.1006/cbmr.1996.0014
Dijkstra, N., Ambrogioni, L., Vidaurre, D., & van Gerven, M. (2020). Neural dynamics of perceptual inference and its reversal during imagery. eLife, 9, e53588. https://doi.org/10.7554/eLife.53588
Dilks, D. D., Julian, J. B., Paunov, A. M., & Kanwisher, N. (2013). The occipital place area is causally and selectively involved in scene perception. Journal of Neuroscience, 33(4), 1331–1336. https://doi.org/10.1523/JNEUROSCI.4081-12.2013
Dumoulin, S. O., & Wandell, B. A. (2008). Population receptive field estimates in human visual cortex. NeuroImage, 39(2), 647–660. https://doi.org/10.1016/j.neuroimage.2007.09.034
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392(6676), 598–601. https://doi.org/10.1038/33402
Evans, J. W., Kundu, P., Horovitz, S. G., & Bandettini, P. A. (2015). Separating slow BOLD from non-BOLD baseline drifts using multi-echo fMRI. NeuroImage, 105, 189–197. https://doi.org/10.1016/j.neuroimage.2014.10.051
Favila, S. E., Lee, H., & Kuhl, B. A. (2020). Transforming the concept of memory reactivation. Trends in Neurosciences, 43(12), 939–950. https://doi.org/10.1016/j.tins.2020.09.006
Goncalves, N. R., & Welchman, A. E. (2017). “What Not” detectors help the brain see in depth. Current Biology, 27(10), 1403–1412.e8. https://doi.org/10.1016/j.cub.2017.03.074
Grill-Spector, K., & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15(8), 536–548. https://doi.org/10.1038/nrn3747
Groen, I. I. A., Dekker, T. M., Knapen, T., & Silson, E. H. (2021). Visuospatial coding as ubiquitous scaffolding for human cognition. Trends in Cognitive Sciences, 26(1), 81–96. https://doi.org/10.1016/j.tics.2021.10.011
Guclu, U., & van Gerven, M. A. J. (2015). Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience, 35(27), 10005–10014. https://doi.org/10.1523/JNEUROSCI.5023-14.2015
Hassabis, D., Kumaran, D., & Maguire, E. A. (2007). Using imagination to understand the neural basis of episodic memory. Journal of Neuroscience, 27(52), 14365–14374. https://doi.org/10.1523/JNEUROSCI.4549-07.2007
Klein, I., Dubois, J., Mangin, J. F., Kherif, F., Flandin, G., Poline, J. B., Denis, M., Kosslyn, S. M., & Le Bihan, D. (2004). Retinotopic organization of visual mental images as revealed by functional magnetic resonance imaging. Cognitive Brain Research, 22(1), 26–31. https://doi.org/10.1016/j.cogbrainres.2004.07.006
Knapen, T. (2021). Topographic connectivity reveals task-dependent retinotopic processing throughout the human brain. Proceedings of the National Academy of Sciences, 118(2), e2017032118. https://doi.org/10.1073/pnas.2017032118
Kravitz, D. J., Peng, C. S., & Baker, C. I. (2011). Real-world scene representations in high-level visual cortex: It’s the spaces more than the places. Journal of Neuroscience, 31(20), 7322–7333. https://doi.org/10.1523/JNEUROSCI.4588-10.2011
Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. F., & Baker, C. I. (2009). Circular analysis in systems neuroscience: The dangers of double dipping. Nature Neuroscience, 12(5), 535–540. https://doi.org/10.1038/nn.2303
Kundu, P., Inati, S. J., Evans, J. W., Luh, W.-M., & Bandettini, P. A. (2012). Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI. NeuroImage, 60(3), 1759–1770. https://doi.org/10.1016/j.neuroimage.2011.12.028
Larsson, J., & Heeger, D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. Journal of Neuroscience, 26(51), 13128–13142. https://doi.org/10.1523/JNEUROSCI.1657-06.2006
Margulies, D. S., Ghosh, S. S., Goulas, A., Falkiewicz, M., Huntenburg, J. M., Langs, G., Bezgin, G., Eickhoff, S. B., Castellanos, F. X., Petrides, M., Jefferies, E., & Smallwood, J. (2016). Situating the default-mode network along a principal gradient of macroscale cortical organization. Proceedings of the National Academy of Sciences, 113(44), 12574–12579. https://doi.org/10.1073/pnas.1608282113
Marr, D. (1970). A theory for cerebral neocortex. Proceedings of the Royal Society of London. Series B. Biological Sciences, 176(1043), 161–234. https://doi.org/10.1098/rspb.1970.0040
Norman, K. A., & O’Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110(4), 611–646. https://doi.org/10.1037/0033-295X.110.4.611
Polyn, S. M., Natu, V. S., Cohen, J. D., & Norman, K. A. (2005). Category-specific cortical activity precedes retrieval during memory search. Science, 310(5756), 1963–1966. https://doi.org/10.1126/science.1117645
Popham, S. F., Huth, A. G., Bilenko, N. Y., Deniz, F., Gao, J. S., Nunez-Elizalde, A. O., & Gallant, J. L. (2021). Visual and linguistic semantic representations are aligned at the border of human visual cortex. Nature Neuroscience, 24(11), 1628–1636. https://doi.org/10.1038/s41593-021-00921-6
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, 98(2), 676–682. https://doi.org/10.1073/pnas.98.2.676
Saad, Z. S., & Reynolds, R. C. (2012). SUMA. NeuroImage, 62(2), 768–773. https://doi.org/10.1016/j.neuroimage.2011.09.016
Scrivener, C. L., Zamboni, E., Morland, A. B., & Silson, E. H. (2024). Retinotopy drives the variation in scene responses across visual field map divisions of the occipital place area. Journal of Vision, 24(8), 10. https://doi.org/10.1167/jov.24.8.10
Scrivener, C. L., Teed, J. A., & Silson, E. H. (2024). Visual imagery of familiar people and places in category selective cortex. bioRxiv, 2024-09. https://doi.org/10.1101/2024.09.13.612846
Shirer, W. R., Ryali, S., Rykhlevskaia, E., Menon, V., & Greicius, M. D. (2012). Decoding subject-driven cognitive states with whole-brain connectivity patterns. Cerebral Cortex, 22(1), 158–165. https://doi.org/10.1093/cercor/bhr099
Silson, E. H., Chan, A. W.-Y., Reynolds, R. C., Kravitz, D. J., & Baker, C. I. (2015). A retinotopic basis for the division of high-level scene processing between lateral and ventral human occipitotemporal cortex. Journal of Neuroscience, 35(34), 11921–11935. https://doi.org/10.1523/JNEUROSCI.0137-15.2015
Silson, E. H., Gilmore, A. W., Kalinowski, S. E., Steel, A., Kidder, A., Martin, A., & Baker, C. I. (2019). A posterior–anterior distinction between scene perception and scene construction in human medial parietal cortex. Journal of Neuroscience, 39(4), 705–717. https://doi.org/10.1523/JNEUROSCI.1219-18.2018
Silson, E. H., Groen, I. I. A., & Baker, C. I. (2021). Direct comparison of contralateral bias and face/scene selectivity in human occipitotemporal cortex. Brain Structure and Function, 227(4), 1405–1421. https://doi.org/10.1007/s00429-021-02411-8
Silson, E. H., Groen, I. I. A., Kravitz, D. J., & Baker, C. I. (2016). Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex. Journal of Vision, 16(6), 1–21. https://doi.org/10.1167/16.6.14
Silson, E. H., Steel, A. D., & Baker, C. I. (2016). Scene-selectivity and retinotopy in medial parietal cortex. Frontiers in Human Neuroscience, 10, 412. https://doi.org/10.3389/fnhum.2016.00412
Silson, E. H., Steel, A., Kidder, A., Gilmore, A. W., & Baker, C. I. (2019). Distinct subdivisions of human medial parietal cortex support recollection of people and places. eLife, 8, 1–25. https://doi.org/10.7554/elife.47391
Silson, E. H., Zeidman, P., Knapen, T., & Baker, C. I. (2021). Representation of contralateral visual space in the human hippocampus. The Journal of Neuroscience, 41(11), 2382–2392. https://doi.org/10.1523/JNEUROSCI.1990-20.2020
Slotnick, S. D. (2009). Rapid retinotopic reactivation during spatial memory. Brain Research, 1268, 97–111. https://doi.org/10.1016/j.brainres.2009.02.056
Srinivasan, M. V., Laughlin, S. B., & Dubs, A. (1982). Predictive coding: A fresh view of inhibition in the retina. Proceedings of the Royal Society of London. Series B. Biological Sciences, 216(1205), 427–459. https://doi.org/10.1098/rspb.1982.0085
Steel, A., Billings, M. M., Silson, E. H., & Robertson, C. E. (2021). A network linking scene perception and spatial memory systems in posterior cerebral cortex. Nature Communications, 12(1), 2632. https://doi.org/10.1038/s41467-021-22848-z
Steel, A., Silson, E. H., Garcia, B. D., & Robertson, C. E. (2024). A retinotopic code structures the interaction between perception and memory systems. Nature Neuroscience, 27(2), 339–347. https://doi.org/10.1038/s41593-023-01512-3
Szinte, M., & Knapen, T. (2020). Visual organization of the default network. Cerebral Cortex, 30(6), 3518–3527. https://doi.org/10.1093/cercor/bhz323
Thirion, B., Duchesnay, E., Hubbard, E., Dubois, J., Poline, J. B., Lebihan, D., & Dehaene, S. (2006). Inverse retinotopy: Inferring the visual content of images from brain activation patterns. NeuroImage, 33(4), 1104–1116. https://doi.org/10.1016/j.neuroimage.2006.06.062
Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366–383. https://doi.org/10.1016/j.neuron.2007.10.012
Wang, L., Mruczek, R. E. B., Arcaro, M. J., & Kastner, S. (2015). Probabilistic maps of visual topography in human cortex. Cerebral Cortex, 25(10), 3911–3931. https://doi.org/10.1093/cercor/bhu277
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
