Abstract
We shift our gaze even when we orient attention internally to visual representations in working memory. Here, we show that the bodily orienting response associated with internal selective attention is widespread, extending to the head. In three virtual reality experiments, participants remembered two visual items. After a working memory delay, a central color cue indicated which item needed to be reproduced from memory. After the cue, head movements became biased in the direction of the memorized location of the cued memory item, despite there being no items to orient toward in the external environment. This heading-direction bias had a temporal profile distinct from that of the gaze bias. Our findings reveal that directing attention within the spatial layout of visual working memory bears a strong relation to the overt head-orienting response we engage when directing attention to sensory information in the external environment. The heading-direction bias further demonstrates that common neural circuitry is engaged during external and internal orienting of attention.
INTRODUCTION
We often move our head when orienting attention overtly to sensory information in our environment. We can also orient attention covertly to items in the external world, in the absence of large head movements. For example, you may be watching a film while directing your attention toward your phone when you are expecting a phone call. Covertly orienting attention to items in the environment is accompanied by subtle overt manifestations of orienting behavior, including directional biases in eye movements (Yuval-Greenberg, Merriam, & Heeger, 2014; Hafed, Lovejoy, & Krauzlis, 2011; Engbert & Kliegl, 2003; Hafed & Clark, 2002).
We can also orient attention internally to items maintained in the spatial layout of visual working memory (van Ede & Nobre, 2021; Manohar, Zokaei, Fallon, Vogels, & Husain, 2019; Souza & Oberauer, 2016; Murray, Nobre, Clark, Cravo, & Stokes, 2013; Olivers, Peters, Houtkamp, & Roelfsema, 2011; Griffin & Nobre, 2003). Similar to attentional selection in the external world, internal selective attention within visual working memory is associated with small directional eye-movement biases toward the memorized locations of attended items (Draschkow, Nobre, & van Ede, 2022; van Ede, Deden, & Nobre, 2021; van Ede, Board, & Nobre, 2020; van Ede, Chekroud, & Nobre, 2019; see also: Ferreira, Apel, & Henderson, 2008; Spivey & Geng, 2001). This overt manifestation of internal selective attention occurs despite the external absence of the attended memory items and even when memorized item location is not required for task performance.
Head movements are also affected by covert attentional selection. Covert attention activates neck muscles (Corneil & Munoz, 2014a; Corneil, Munoz, Chapman, Admans, & Cushing, 2007), and the lag between head and eye movements is affected by the congruency of covert attentional cues (Khan, Blohm, McPeek, & Lefèvre, 2009), suggesting that both the head and the eyes may be modulated when covert attention is directed toward items in the external environment. Their involvement may nevertheless be separable, given the differences in the neurophysiological pathways controlling head and eye movements (Gandhi & Sparks, 2007). It is therefore important to examine both the head and the eyes when asking questions about bodily orienting behavior, because each may contribute in distinct ways to a broader bodily orienting response (Corneil & Munoz, 2014b).
If the overt ocular traces of covert selective attention in memory (Draschkow et al., 2022; van Ede et al., 2020, 2021; van Ede, Chekroud, & Nobre, 2019) are part of a more widespread bodily orienting response, then directing internal selective attention to items in working memory should also be accompanied by head movements. That is, internally directed selective attention in working memory may be associated with subtle orienting behavior not only of the eyes but also of the head.
To test whether such an embodied orienting response of eyes and head occurs during internally directed spatial attention, we analyzed head- and eye-tracking data from a virtual reality (VR) study investigating selective attention in visual working memory (Draschkow et al., 2022). The head-tracking data, which were not interrogated previously, allowed us to address whether head movements are similarly biased toward the memorized locations of selectively attended items in visual working memory.
METHODS
The data were collected as part of a study that used VR to examine different spatial frames of working memory in immersive environments (Draschkow et al., 2022). To answer the current research question, we focused on the head-movement data, which were not analyzed in the previous study. In this section, we describe the experimental materials and methods relevant to this question; information on additional manipulations that were not the focus of the current study can be found in Draschkow et al. (2022).
Participants
We analyzed data from three experiments (1–3), each with a sample size of 24 human volunteers. Sample size was based on our prior study, which contained four experiments using a similar outcome measure (van Ede, Chekroud, & Nobre, 2019) and revealed robust results with 20–25 participants. To address our new research question and further increase power and sensitivity, we combined the samples from the individual experiments to create a larger data set with 48 participants and 72 experimental runs. The participants in Experiments 1–2 were the same and were recruited separately from the participants in Experiment 3 (Experiments 1–2: mean age 25.8 years, age range 18–40 years, all right-handed, 20 women; Experiment 3: mean age 25.5 years, age range 19–37 years, 1 left-handed, 13 women). All participants had normal or corrected-to-normal vision. Participants provided written consent prior to the experiments and were compensated £10 per hour. Protocols were approved by the local ethics committee (Central University Research Ethics Committee #R64089/RE001 and #R70562/RE001).
Materials and Apparatus
Participants wore an HTC Vive Tobii Pro VR headset. Participants held the controller in their dominant hand, using their index finger and thumb to press response buttons. The positions of the headset and hand controller were recorded by two Lighthouse base stations, using 60 infrared pulses per second. These pulses interacted with 37 sensors on the headset and 24 sensors on the controller, providing submillimeter tracking accuracy. The headset contained a gyroscope and accelerometer, allowing for the precise recording of head rotational positions (accuracy < 0.001°). The headset contained a binocular eye tracker (approximately 0.5° visual angle accuracy, sampling rate 90 Hz). Two organic light-emitting diode screens displayed the environment in the headset (refresh rate 90 Hz, 1080 × 1200 pixels, field of view 100° horizontal × 110° vertical). We used Vizard (Version 6) to render and run the VR experimental environment on a Windows desktop computer.
In the VR environment, participants stood in the center of a virtual room (4.2 m long, 4.2 m wide, 2.5 m tall) with a gray concrete texture applied to the four walls (Figure 1A). The working memory items were two colored bars (length 0.5 m, 14.25° visual angle; diameter 0.05 m, 1.425° visual angle), which appeared 2 m in front of the participant on the front wall. One item appeared 1 m to the left of the fixation cross (28.7° visual angle) and the other 1 m to the right, so the centers of the items were 2 m apart.
Figure 1. Heading direction tracks attentional selection in visual working memory. (A) Participants remembered the orientations of two colored items in a VR environment. After a delay, the fixation cross retrospectively cued the color of the target item. Participants then reported the orientation of the target item using the controller. (B) We recorded the shift in the projected location (in cm) of the "heading direction" onto the virtual wall in front of the participant. (C) Average heading direction for left (L) and right (R) item trials as a function of time after cue. Shading indicates ±1 SEM. (D) Towardness of heading direction as a function of time after cue. Horizontal line indicates a significant difference from zero, using cluster-based permutation testing. Shading indicates ±1 SEM. (E) Density map showing the difference in heading-direction density between right minus left item trials (500–2000 msec after cue). Circles indicate the locations of the items during encoding. Centers of items are at 100 cm (28.7° of visual angle). (C–E) Data aggregated from Experiments 1–3. See Figure A1 for separate plots of heading direction and heading-direction towardness as functions of time after cue for Experiments 1–3.
Procedure and Tasks
Participants were given time to get used to the headset, controller, laboratory room, and virtual environment before the experiments began. This included 24 practice trials in which participants learned how to make responses and became familiar with the trial sequence.
In all experiments, each trial consisted of the same main steps (Figure 1A). At the beginning of each trial, participants stood upright in the center of the room and were instructed to fixate on a fixation cross with their eyes (size 12 cm × 12 cm, ∼3.4° visual angle). During the task, participants were free to hold their heads as they liked. After 500 msec of fixation, two items appeared (as described in the Materials and Apparatus section). Both items were slanted at independently drawn random orientations (ranging 0–180°). One item was red, and the other was blue. The color of each item was allocated randomly on each trial. Participants were instructed to remember the orientations of the items during a delay.
All three experiments included conditions in which participants turned 90° to the left or right during the delay between the presentation of the items and the cue (“turning trials”). These turning trials were part of a separate study addressing a distinct question regarding how selection dynamics in visual working memory are influenced by self-movement (Draschkow et al., 2022) and were not included in our analyses.
Because the turning trials differed between experiments, task timings also differed. In Experiment 1, the items disappeared after 500 msec, whereas in Experiments 2–3 they remained present for 1600 msec. After the items disappeared, participants remembered their orientations across a delay lasting 1935 msec (Experiment 1) or 835 msec (Experiments 2–3).
Following the delay, the fixation cross changed to a blue or red color, matching the color of the left or right item in working memory. This color cue indicated which item's orientation needed to be reproduced (the target item) and signaled that participants could initiate their response when ready. The target item was selected at random on each trial, irrespective of orientation, location, and color. Participants had unlimited time to recall the orientation of the target item and initiate a response.
Once a response was initiated, participants had 2000 msec to dial in the orientation of the target item using the controller. Initiating the response generated a dial made of two handles (diameter 0.06 m) on a circular torus (diameter 0.5 m, tube diameter 0.02 m), centered on the fixation cross. The dial was present only during the response stage. The handles moved along the torus according to the controller's orientation, allowing participants to reproduce the orientation of the target item. Participants confirmed their response by pressing the trigger button of the controller. Immediately after confirmation, the dial disappeared and participants received feedback on their performance on a 0–100 scale, with 100 being perfect reproduction of the target item's orientation. This number was presented above the fixation cross for 500 msec. Feedback was followed by a 700-msec delay and then an intertrial interval randomly selected between 1500 and 2000 msec.
There were 100 stationary trials in each experiment (50 with a left target item, 50 with a right target item), presented in five blocks of 20 trials each. Gaze tracking was recalibrated at the beginning of each block. Participants completing Experiments 1 and 2 performed both tasks in the same session, in counterbalanced order. Each experiment lasted approximately 1 hr, and the full session lasted approximately 2 hr.
Data Analysis
Tracking and behavioral recordings were stored in a comma-separated values (CSV) file for each participant. We used RStudio (Version 1.3.1093, 2020) to analyze the data. The data files and analysis scripts are available online at https://doi.org/10.17605/OSF.IO/24U9M.
The “heading direction” variable refers to the projected location (in cm) of the heading direction onto the virtual wall in front of the participant. The “gaze direction” variable was the horizontal distance between the fixation cross and the gaze-fixation point on the virtual wall (averaged between both eyes). For an illustration of the heading direction variable, see Figure 1B.
We also recorded the yaw, roll, and translation of the headset (Figure 2) to examine the contributions of these individual components to the heading-direction vector. Head yaw is the rotational position around the head's vertical axis; head roll is the rotational position around the head's longitudinal axis. For example, turning your head to read a large sign from left to right would change yaw values, whereas tilting your head to read a slanted sign would change roll values. Translation refers to horizontal movement of the entire headset (e.g., the participant moving their entire head to the left while looking straight ahead). Together, yaw, roll, and translation are the components that can influence the horizontal heading direction.
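For intuition, the following sketch illustrates one plausible way in which yaw rotation and lateral translation could combine into the projected heading direction. The 2-m wall distance comes from the Materials and Apparatus section, but the exact projection computed in Vizard may differ, so this is an approximation rather than the published implementation.

```r
# Illustrative reconstruction (an assumption, not the exact Vizard computation):
# a yaw rotation shifts the projection on a wall d cm ahead by d * tan(yaw),
# and lateral translation shifts it directly by the distance the head moves.
project_heading <- function(yaw_deg, translation_x_cm, wall_distance_cm = 200) {
  wall_distance_cm * tan(yaw_deg * pi / 180) + translation_x_cm
}

project_heading(yaw_deg = 0.5, translation_x_cm = 0)  # ~1.75 cm from rotation alone
project_heading(yaw_deg = 0,   translation_x_cm = 1)  # 1 cm from translation alone
```

Under this approximation, the yaw rotations of less than 0.5° reported in the Results correspond to heading-direction shifts of under roughly 2 cm on the wall.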
Figure 2. Biased movement in yaw, roll, and translation. (A) Left: Yaw as a component of heading direction. Center: Average yaw for left (L) and right (R) item trials as a function of time after cue. Shading indicates ±1 SEM. Right: Towardness of yaw as a function of time after cue. Horizontal line indicates a significant difference from zero, using cluster-based permutation testing. Shading indicates ±1 SEM. (B) Same as (A), using roll instead of yaw. (C) Same as (A), using translation instead of yaw. (B–C) The absence of horizontal lines in the towardness plots on the right indicates that no significant difference from zero was found, using cluster-based permutation testing with a threshold of p < .05.
We epoched the data from 500 msec before cue to 2000 msec after cue. We smoothed all time-course data over four samples (44-msec smoothing window average). In each trial, the mean value between 200 and 0 msec before the cue was used as a baseline and subtracted from all values in the trial. To remove large outliers, we excluded trials in which heading direction or gaze direction exceeded 0.5 m (half the distance to the locations of the memoranda) on either side of the fixation cross during the epoch (−500 to 2000 msec). This cutoff was set a priori in accordance with our previous work (Draschkow et al., 2022). We also excluded trials with a yaw or roll of over 20° in either direction (average percentage of excluded trials per participant: M = 5.96%, SE = 0.01; total percentage of excluded trials: 16.58%). Importantly, however, not applying any cutoff did not change the findings presented in the Results section.
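For concreteness, a minimal preprocessing sketch in R is given below. The column names (trial, time_ms, heading_cm) and the simulated input are hypothetical stand-ins; the published scripts at the OSF repository above implement the actual pipeline.

```r
library(dplyr)

# Simulated stand-in for one participant's epoched 90-Hz samples
# (hypothetical column names; see the OSF scripts for the real pipeline)
trial_data <- tibble(
  trial      = rep(1:2, each = 226),
  time_ms    = rep(seq(-500, 2000, length.out = 226), times = 2),
  heading_cm = rnorm(452, sd = 0.5)
)

preprocessed <- trial_data %>%
  group_by(trial) %>%
  mutate(
    # 4-sample moving average (~44 msec at 90 Hz)
    heading_smooth = as.numeric(stats::filter(heading_cm, rep(1 / 4, 4), sides = 2)),
    # subtract the mean of the -200 to 0 msec pre-cue window as the baseline
    heading_bl = heading_smooth -
      mean(heading_smooth[time_ms >= -200 & time_ms <= 0], na.rm = TRUE)
  ) %>%
  filter(max(abs(heading_bl), na.rm = TRUE) <= 50) %>%  # 0.5-m (50-cm) outlier cutoff
  ungroup()
```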
We compared behavior between right- and left-item trials in the three experiments separately to check whether the side of the target item affected performance. We used within-subject ANOVAs to test for effects of target side on error and RT. To follow up findings (including null findings), we conducted Bayesian t tests (Rouder, Speckman, Sun, Morey, & Iverson, 2009) with the default settings of the BayesFactor package (Morey et al., 2021). Bayes-factor values indicated evidence in favor of the alternative hypothesis (B01 > 3), evidence in favor of the null hypothesis (B01 < 0.33), or inconclusive evidence (0.33 < B01 < 3; Kass & Raftery, 1995).
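A minimal sketch of such a follow-up test with the BayesFactor package is shown below, using simulated per-participant means in place of the real data and the package's default prior settings.

```r
library(BayesFactor)

set.seed(1)
# Simulated per-participant mean errors (illustrative values, not the real data)
err_left  <- rnorm(24, mean = 12, sd = 4)  # left-item trials
err_right <- rnorm(24, mean = 12, sd = 4)  # right-item trials

# Paired Bayesian t test with default settings
bf <- ttestBF(x = err_left, y = err_right, paired = TRUE)
extractBF(bf)$bf  # Bayes factor, compared against the 1/3 and 3 bounds above
```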
Next, we plotted the change in the time-course data (heading direction, yaw, roll, translation, gaze direction) from baseline (−200 to 0 msec before cue), separately for left- and right-item trials. To increase sensitivity and interpretability, we constructed a single measure of "towardness." Towardness aggregated horizontal movement toward the target item on each trial, combining leftward movement in left-item trials and rightward movement in right-item trials; a positive towardness indicated a horizontal position in the direction of the target item. Towardness at each time step was computed as the trial-averaged horizontal position in right-item trials minus the trial-averaged horizontal position in left-item trials (with positions left of fixation coded as negative), divided by two. The same procedure for calculating towardness was used for all time-course head and gaze data. We used this towardness variable to determine the significance of the biased movements (compared with zero), using "cluster-depth" (Frossard & Renaud, 2022) cluster-based permutation tests (Sassenhagen & Draschkow, 2019; Maris & Oostenveld, 2007). We ran the cluster-based permutation testing in R with the "permuco" package (Frossard & Renaud, 2021, 2022).
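In code, this towardness definition reduces to a few lines; the inputs below are illustrative values rather than data from the experiments.

```r
# Towardness at one time step, per the definition above: (mean horizontal
# position in right-item trials minus mean in left-item trials) / 2, with
# positions left of fixation coded as negative.
towardness <- function(position, item_side) {
  (mean(position[item_side == "right"]) - mean(position[item_side == "left"])) / 2
}

# Illustrative trial values: on average +0.7 cm on right-item trials and
# -0.3 cm on left-item trials, giving 0.5 cm toward the target
towardness(position  = c(0.6, 0.8, -0.4, -0.2),
           item_side = c("right", "right", "left", "left"))
```

Applying this computation at every time step yields the towardness time courses shown in Figures 1D, 2, and 3B.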
To gain a better understanding of the scale and variance of the heading direction, we plotted a density map of all heading-direction values between 500 and 2000 msec postcue over all trials and all participants (including excluded trials). We used color to code the side of the target item on each trial and to highlight differences in the directionality of heading direction between item sides.
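A density-difference map of this kind can be sketched as follows, assuming a 2-D kernel density estimate (MASS::kde2d) over simulated heading-direction coordinates; the bandwidths, grid, and plotting details are illustrative and need not match the published figure.

```r
library(MASS)  # for kde2d

set.seed(1)
# Simulated horizontal/vertical heading-direction samples (cm), 500-2000 msec postcue
x_right <- rnorm(5000, mean =  1, sd = 3); y_right <- rnorm(5000, sd = 2)
x_left  <- rnorm(5000, mean = -1, sd = 3); y_left  <- rnorm(5000, sd = 2)

lims <- c(-120, 120, -40, 40)  # shared grid so the two densities subtract cleanly
d_right <- kde2d(x_right, y_right, n = 200, lims = lims)
d_left  <- kde2d(x_left,  y_left,  n = 200, lims = lims)

# Positive values: heading density higher on right-item than on left-item trials
diff_map <- d_right$z - d_left$z
image(d_right$x, d_right$y, diff_map, xlab = "Horizontal (cm)", ylab = "Vertical (cm)")
```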
RESULTS
Participants performed a visual working memory task in a VR environment while we tracked their head and gaze. In the task, participants remembered the orientations of two colored bars, one on the left and one on the right, for a short delay (Figure 1A). After the working memory delay, a color cue indicated the bar for which participants needed to reproduce the orientation on a dial.
Heading Direction Tracks Internal Selective Attention in Visual Working Memory
After the color change in the fixation cross (cue onset), horizontal heading direction became biased in the direction of the memorized external location of the cued memory item (Figures 1B–1E). This heading-direction bias occurred although there was no information present or expected at the external location corresponding to the memorized item after the color cue.
The bias in horizontal heading movement was leftward in trials in which the color cue corresponded with the memory item that had been encoded on the left ("left item"), and rightward in trials in which the color cue corresponded with the memory item that had been encoded on the right ("right item"). Figure 1B illustrates the nature of the heading-direction bias in left- and right-item trials. The average heading direction after the color cue is plotted separately for left- and right-item trials in Figure 1C. To quantify this heading-direction bias and express it as a single measure, we combined the heading-direction bias from left- and right-item trials into a measure of towardness (van Ede, Chekroud, & Nobre, 2019). The towardness of the heading direction became evident starting at approximately 500 msec after the onset of the cue (Figure 1D; cluster p < .05; largest cluster ranging between 1167 and 1367 msec).
To explore the scale of the heading-direction bias, we calculated density maps of single-trial heading-direction values and subtracted density maps between left- and right-item trials. To focus on the window of interest, we considered all heading-direction values when the heading-direction bias was most pronounced (500–2000 msec; Figure 1E). This revealed the subtle nature of the heading-direction bias. Participants did not move their heading direction all the way to the memorized locations of the items (circles in Figure 1E). Instead, participants subtly moved their heading direction toward the memorized item locations (< 0.5° of rotation), with heading-direction biases remaining close to fixation—akin to the type of directional biases we have recently observed in gaze (Draschkow et al., 2022; van Ede et al., 2020, 2021; van Ede, Chekroud, & Nobre, 2019). The properties of the heading-direction bias were similar across three slightly different versions of the task (Experiments 1–3) and are plotted separately in Figure A1. There were no significant effects of target side (left vs. right) on behavioral performance (error and RT) in any of the experiments (see Figure A2).
The Heading-Direction Bias Is Driven by Movement along the Head's Yaw Axis
To determine which head-movement components contributed to the heading-direction bias, we separately analyzed yaw, roll, and translation. Yaw followed the movement pattern of the heading direction in left- and right-item trials, which was confirmed by a significantly positive towardness cluster (Figure 2A; p < .05, cluster-corrected). Roll showed a nonsignificant towardness trend (Figure 2B; p > .999 for all clusters in the full time window), and translation did not move toward the memorized locations of the cued memory items (Figure 2C; p > .257 for all clusters in the full time window). We also investigated all components making up the heading-direction measure (x-, y-, and z-translation, yaw, pitch, and roll) during the critical 500- to 1500-msec postcue period (Figure A3), which shows how head rotation around the yaw axis closely tracks heading direction. Thus, leftward–rightward rotation around the head's yaw axis was the primary factor contributing to the directional heading-direction bias when selectively attending items in our visual working memory task.
Figure 3. The gaze bias and the heading-direction bias. (A) Average gaze direction for left (L) and right (R) item trials as a function of time after cue. Shading indicates ±1 SEM. (B) Towardness of gaze direction (gaze) and heading direction (heading) as a function of time after cue. Horizontal line indicates a significant difference from zero, using cluster-based permutation testing. Shading indicates ±1 SEM.
The Heading-Direction Bias Is Accompanied by a Gaze Bias in the Same Direction
Like the heading direction, gaze direction moved toward the location of the cued item during internal selective attention, as we have previously reported in this data set (Draschkow et al., 2022) and in prior data sets (van Ede & Nobre, 2021; van Ede et al., 2020, 2021; van Ede, Chekroud, & Nobre, 2019). Figure 3A shows the leftward and rightward movement of gaze direction in left- and right-item trials. The gaze towardness was significantly different from zero after the cue (p < .05 between 400 and 1244 msec, cluster-corrected; Figure 3B). We focused our statistical analyses on the data aggregated across the individual experiments to improve sensitivity, noting that the gaze direction for left and right trials and its towardness over time were similar across all three experiments (Experiments 1–3; Figure A1).
In Figure 3B, we also overlay the heading-direction bias for a descriptive comparison. Whereas the heading-direction bias and the gaze bias were both directed toward the memorized location of the cued item, Figure 3B shows that the heading-direction bias lags behind the gaze bias. The significant cluster (Frossard & Renaud, 2021, 2022) for the gaze bias emerged from approximately 400 msec, whereas the significant time window for the heading-direction bias started more than a full second after the cue (p < .05; heading: 1167–1367 msec; gaze: 400–1244 msec).
DISCUSSION
Our results reveal that, like gaze direction, heading direction tracks internally directed selective attention inside visual working memory. This manifests as directionally biased head movements toward the memorized location of attended memory items. Although the heading-direction bias is small (Figures 1 and A1), we were able to capture it by calculating the relative change in heading direction triggered by the cue and by aggregating the data from multiple experiments. The heading-direction bias in our task was predominantly driven by the head's rotation around its yaw axis and accompanied a gaze bias in the same direction. The observed heading-direction bias suggests a general bodily orienting response during internal selective attention, implying that brain structures involved in orienting the eyes and head are also engaged when orienting within the internal space of working memory.
The heading-direction and gaze biases may reflect bodily signatures that are part of a widespread orienting response activating brain areas involved in both overt and covert attentional selection. Indeed, there is good evidence that the brain's oculomotor system is also involved in covert orienting of spatial attention (Yuval-Greenberg et al., 2014; Hafed et al., 2011; Moore & Fallah, 2004; Engbert & Kliegl, 2003; Moore & Armstrong, 2003; Hafed & Clark, 2002; Nobre et al., 1997; Deubel & Schneider, 1996). Moreover, from an evolutionary perspective, it is conceivable that our ability to orient internally evolved gradually from external orienting behaviors of the head and eyes, perhaps relying on overlapping neural circuitry (Cisek, 2019). From this perspective, the observed subtle bias in head and eye movements may reflect an inevitable "spill over" from activating neural circuitry that evolved to orient both internally and externally (Strauss et al., 2020).
It may seem surprising to find this heading-direction bias when attention is directed internally, without any items in the environment toward which to orient. However, in natural settings, there may be a behavioral benefit to orienting the head and eyes toward the locations of selected memory items. In our task, no subsequent behavioral goal benefited from orienting toward the memorized location of the attended memory item. In daily life, however, items rarely disappear from their location in the external environment as they do in our task. Thus, orienting the eyes and head toward the memorized locations of selected items may serve to guide future behavior, such as resampling items. In fact, people often resample items in naturalistic working memory tasks when it is easy to do so (Draschkow, Kallmayer, & Nobre, 2021; Ballard, Hayhoe, & Pelz, 1995). For example, imagine you are with a friend in a café, and they comment on the barista's hat. You may attend to the barista in memory, attempting to recall what their hat looked like. At the same time, your head and eyes may be preparing for you to shift your gaze and look at the barista's hat again. In this way, the small heading-direction and gaze biases toward selected items in working memory may reflect a natural tendency to engage in action in relation to selected memoranda (Boettcher, Gresch, Nobre, & van Ede, 2021; Heuer, Ohl, & Rolfs, 2020; Olivers & Roelfsema, 2020; van Ede, 2020; van Ede, Chekroud, Stokes, & Nobre, 2019), even though there was no incentive for this in our task.
In natural behavior, head and eye movements are intrinsically functionally linked (Solman, Foulsham, & Kingstone, 2016; Foulsham, Walker, & Kingstone, 2011; Land, 2009), and head movements can even compensate for eye movements when people cannot make saccades (Ceylan, Henriques, Tweed, & Crawford, 2000; Gilchrist, Brown, & Findlay, 1997). This coordinated relationship between head and eye movements motivated us to look at both the head and the eyes when exploring bodily orienting responses. The heading-direction bias revealed here implies that the neural circuitry that controls head movements, at least around the yaw axis, is recruited by, and potentially overlaps with, the circuitry that directs internal selective attention. In fact, previous research has found overlap between brain areas thought to process spatial attention and eye and head movements. For example, the frontal eye fields (FEF) play a role in directing attention and controlling eye movements (Taylor, Nobre, & Rushworth, 2007; Moore & Fallah, 2004; Grosbras & Paus, 2002; Bruce & Goldberg, 1984; Robinson & Fuchs, 1969). Alongside attentional selection and eye movements, the FEF also contributes to head movements: hemodynamic activity in the FEF responds to head movement (Petit & Beauchamp, 2003), and microstimulation of the FEF in primates results in head movement (Elsley, Nagy, Cushing, & Corneil, 2007; Chen & Walton, 2005). In addition, modulation of activity in the superior colliculus, an area shown to process not only eye (Wurtz & Albano, 1980; Schiller & Stryker, 1972; Wurtz & Goldberg, 1971) but also head movements (Corneil, Olivier, & Munoz, 2002; Bizzi, Kalil, & Morasso, 1972), also affects the deployment of covert attention (Krauzlis, Lovejoy, & Zénon, 2013; Lovejoy & Krauzlis, 2009; Müller, Philiastides, & Newsome, 2005). Our results complement these findings, with the heading-direction and gaze biases suggesting overlap between the neural circuitry governing attentional selection inside working memory, eye movements, and head movements.
However, control of the head and the eyes is not entirely linked, as shown by differences in the neurophysiological pathways controlling eye and head movements (Horn et al., 2012; Oommen & Stahl, 2005; Bizzi et al., 1972). This is also demonstrated by the distinct temporal profiles of the heading-direction and gaze biases presented here, which highlight the value of examining multiple components of what may be a widespread bodily orienting response involving the head and eyes. Comparisons between the temporal profiles of the head and gaze biases should nevertheless be made with caution, because of differences in the mass and musculature of the head and eyes and in the signal-to-noise ratios of the two measures.
It is worth noting the apparent asymmetry in the magnitude and time course of the heading-direction bias in left versus right trials and across experiments (as seen in Figures 1 and A1). On the basis of our previous work on gaze biases (Draschkow et al., 2022; van Ede et al., 2020, 2021; van Ede, Chekroud, & Nobre, 2019), we decided a priori to focus on a single measure of "towardness," which represents horizontal movement toward the target item on each trial. This aggregated measure not only benefits from increased sensitivity but also removes any potential drifts in the measure that are not due to selective attention (and that could potentially contribute to the apparent asymmetry we observed here). In future studies, it would be interesting to further investigate these potential asymmetries and how they relate to behavioral performance, for example, by increasing trial numbers and introducing a neutral condition in which no item is cued.
Finally, by using VR, we were able to measure the heading-direction bias alongside the gaze bias while participants' heads, eyes, and bodies were unconstrained. To date, the benefits of VR have been appreciated most prominently by researchers studying naturalistic human navigation, ethology, and long-term memory (Mobbs et al., 2021; Helbing, Draschkow, & Võ, 2020; Stangl et al., 2020; Topalovic et al., 2020; Draschkow & Võ, 2017; Li, Aivar, Kit, Tong, & Hayhoe, 2016). Our present findings further highlight the benefits of using VR (combined with eye and head tracking) to study bodily orienting behavior (Draschkow et al., 2021, 2022) related to internal cognitive processes, as showcased here for internal attentional focusing in working memory.
APPENDIX
Figure A1. The heading-direction and gaze biases for Experiments 1–3. (A) Left: Average heading direction for left (L) and right (R) item trials as a function of time after cue, using data from Experiment 1. Center: Towardness of heading direction as a function of time after cue, using data from Experiment 1. Right: Distribution of the mean towardness between 500 and 1500 msec across participants. (B) Same as (A), using gaze direction instead of heading direction. (C) Same as (A), using data from Experiment 2. (D) Same as (B), using data from Experiment 2. (E) Same as (A), using data from Experiment 3. (F) Same as (B), using data from Experiment 3. (A–F) Shading indicates ±1 SEM.
Figure A2. Similar performance in left- and right-item trials. (A) Left: Mean RT in left-item (Item L) and right-item (Item R) trials for each participant in Experiment 1. Connected pairs of points are the means of the same participant. Error bars represent a 95% confidence interval. Right: Same as Left, for error instead of RT. (B) Same as (A), using data from Experiment 2. (C) Same as (A), using data from Experiment 3. There was no significant effect of target side on mean error in any of the experiments, Experiment 1: F(1, 23) = 0.01, p = .934; Experiment 2: F(1, 23) = 0.02, p = .881; Experiment 3: F(1, 23) = 2.04, p = .166. For Experiments 1–2, the follow-up Bayes t tests supported the null hypothesis, suggesting that errors were similar between left- and right-item trials, Experiment 1: B01 = 0.22; Experiment 2: B01 = 0.22. Similarly, there was no significant effect of target side on mean RT in any of the experiments, Experiment 1: F(1, 23) = 0.19, p = .671; Experiment 2: F(1, 23) = 0.23, p = .633; Experiment 3: F(1, 23) = 0.07, p = .793. For Experiments 1–3, the follow-up Bayes t tests supported the null hypothesis, suggesting that RTs were similar between left- and right-item trials, Experiment 1: B01 = 0.23; Experiment 2: B01 = 0.24; Experiment 3: B01 = 0.22.
Figure A3. Distributions of measures making up heading direction. The measures that make up heading direction (x-, y-, and z-translation, yaw, pitch, and roll) were z-score normalized before calculating their mean between 500 and 1500 msec. These mean values were averaged across blocks and trials for each participant and split by item side. Boxplots indicate the median and interquartile range. The figure shows how head rotation around the yaw axis closely tracks heading direction. Yaw, roll, and x-translation are analyzed separately in the main text (Figure 2).
Reprint requests should be sent to Jude L. Thom, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, or via e-mail: [email protected]; or Dejan Draschkow, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom, or via e-mail: [email protected].
Data Availability Statement
The data files and analysis scripts are available online at https://osf.io/24u9m/.
Author Contribution
Jude L. Thom: Formal analysis; Investigation; Visualization; Writing—Original draft; Writing—Review & editing. Anna C. Nobre: Funding acquisition; Project administration; Resources; Supervision; Writing—Original draft; Writing—Review & editing. Freek van Ede: Funding acquisition; Investigation; Methodology; Project administration; Resources; Supervision; Writing—Original draft; Writing—Review & editing. Dejan Draschkow: Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Supervision; Writing—Original draft; Writing—Review & editing.
Funding Information
This research was funded by a Wellcome Trust Senior Investigator Award (https://dx.doi.org/10.13039/100010269), grant number: 104571/Z/14/Z, and a James S. McDonnell Foundation Understanding Human Cognition Collaborative Award, grant number: 220020448 to A. C. N., an ERC Starting Grant from the European Research Council (https://dx.doi.org/10.13039/100010663), grant number: 850636 to F. v. E., and by the NIHR Oxford Health Biomedical Research Centre. The Wellcome Centre for Integrative Neuroimaging is supported by core funding from the Wellcome Trust (https://dx.doi.org/10.13039/100010269), grant number: 203139/Z/16/Z. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
Diversity in Citation Practices
Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.
REFERENCES
Author notes
These authors contributed equally.