Abstract

In everyday life, we often make judgments regarding the sequence of events, for example, deciding whether a baseball runner's foot hit the plate before or after the ball hit the glove. Numerous studies have examined the functional correlates of temporal processing using variations of the temporal order judgment and simultaneity judgment (SJ) tasks. To perform temporal order judgment tasks, observers must bind temporal information with identity and/or spatial information relevant to the task itself. SJs, on the other hand, require observers to detect stimulus asynchrony but not the order of stimulus presentation and represent a purer measure of temporal processing. Some previous studies suggest that these temporal decisions rely primarily on right-hemisphere parietal structures, whereas others provide evidence that temporal perception depends on bilateral TPJ or inferior frontal regions (inferior frontal gyrus). Here, we report brain activity elicited by a visual SJ task. Our methods are unique given our use of two orthogonal control conditions, discrimination of spatial orientation and color, which were used to control for brain activation associated with the classic dorsal (“where/how”) and ventral (“what”) visual pathways. Our neuroimaging experiment shows that performing the SJ task selectively activated a bilateral network in the parietal (TPJ) and frontal (inferior frontal gyrus) cortices. We argue that SJ tasks are a purer measure of temporal perception because they do not require observers to process either identity or spatial information, both of which may activate separate cognitive networks.

INTRODUCTION

Human perception requires the integration of multiple sensory signals that are often presented in rapid sequential order or simultaneously. The ability to accurately perceive the temporal properties of sensory signals is critical to everyday human behavior, as this ability helps us reorient attention, prioritize responses, and act appropriately within our environment. For example, temporal information helps referees call sports games, dancers coordinate their actions, and musicians stay in sync. Additional evidence is necessary to understand the precise nature of brain networks underlying visual temporal information processing in healthy adults.

The significance of temporal processing may be seen when we observe deficits in individuals with neurological impairments such as visual extinction (Karnath & Zihl, 2003). These individuals report only a single item when two are presented simultaneously or report that the contralesional item appeared with an artificial delay. These individuals are often unaware of their impairments, exhibiting a potentially dangerous level of anosognosia (Vossel, Weiss, Eschenbeck, & Fink, 2013). The consequences of these impairments are varied and may include inaccurate duration judgments, poor temporal order judgments, and an inability to accurately engage attention to temporally sequenced stimuli (Rorden, Li, & Karnath, 2018). Here, we focus on visual temporal judgments made by neurologically healthy individuals performing a simultaneity task.

Neuroimaging research on visual temporal perception has been partly motivated by data from patients with lesions to parietal or ventral frontal brain regions (Roberts, Lau, Chechlacz, & Humphreys, 2012). Patient studies provide us with valuable insight into which brain regions may be involved in various behaviors. Of these studies, most have focused on patients who are unable to interpret the order of sensory events or to engage their attention with contemporaneous stimuli after injuries such as unilateral stroke (i.e., exaggerated attentional blink; Husain, Shapiro, Martin, & Kennard, 1997). Previous research has suggested that the temporo-parietal junction (TPJ) may play a crucial role in integrating information across space and time (for a review, see Husain & Rorden, 2003). Battelli and colleagues boldly suggest that only the right hemisphere TPJ forms a dedicated “when” pathway (Battelli, Alvarez, Carlson, & Pascual-Leone, 2009; Battelli, Walsh, Pascual-Leone, & Cavanagh, 2008; VanRullen, Pascual-Leone, & Battelli, 2008), located between the heavily studied dorsal “where”/“how” pathway and the ventral “what” pathway (Creem & Proffitt, 2001; Goodale & Milner, 1992). However, there are conflicting reports as to which brain regions support visual temporal percepts such as order and simultaneity. For example, seminal work by Coull and Nobre (1998) suggests that temporal tasks preferentially engage the left hemisphere, whereas others have produced results in favor of the right hemisphere being most involved (Agosta et al., 2017). Still others argue that temporal perception engages a bilateral system in parietal and/or inferior frontal regions (Davis, Christie, & Rorden, 2009; Lux, Marshall, Ritzl, Zilles, & Fink, 2003). It is important to note that the wide range of anatomical discrepancies might be explained by the various designs of behavioral tasks employed (Agosta et al., 2017).

Two previous studies with designs similar to ours support the view that bilateral parietal regions are activated by visual temporal judgment tasks. Lux et al. (2003) used a simultaneity judgment in which visual shapes were presented to the left and right visual hemifields. Their task required participants to judge either whether the shapes were presented at the same time or whether the shapes (rhombuses) were tilted in the same manner (an orientation judgment control task). In the simultaneity task of Lux et al., the stimuli were shown simultaneously on half of the trials; on the other half, one stimulus was delayed by 119 msec relative to the first stimulus onset. The brain activation results of Lux et al. show involvement of bilateral parietal regions and left frontal regions (e.g., inferior frontal gyrus [IFG]). The Davis et al. (2009) task was a temporal order judgment (TOJ) design, also with a shape control task. However, as with Lux et al. (2003), the timing task was quite simple and used only two stimulus onset asynchronies (SOAs; 33 and 67 msec). The shape task in Davis et al. (2009) was also simple in nature, with only two designed differences, which were roughly matched for difficulty to the timing task via pilot testing. Note that both Lux et al. (2003) and Davis et al. (2009) report regions that were more active in a timing task relative to a single control condition. It is logically possible that identical results could arise from inhibition of these regions during the control task. Here, we extend their work by contrasting timing with two different control tasks to provide stronger evidence for regions that are specifically engaged by timing demands.

We hypothesize that visual temporal information is encoded bilaterally in both inferior parietal and inferior frontal regions. This idea builds on previous work mentioned above but also assumes a more inclusive, bilateral network of brain regions supporting the perception of visual temporal information. The present article extends our knowledge by reporting on the brain correlates of visual processing during a simultaneity judgment (SJ) task, while explicitly controlling for extraneous perceptual processes in the spatial and identity processing domains. Parsing the contribution of many perceptual processes (e.g., spatial processing, identity processing, temporal processing) is a difficult task in many experiments. We chose the SJ task for its simplicity and because it allowed us to keep instructions and stimuli as similar as possible across all tasks.

SJ tasks demand that the participant describe whether the trial stimuli were presented simultaneously or not. In contrast to our study, most previous visual temporal perception work has made use of TOJ tasks, or paradigms such as the attentional blink, where the individual is asked to report the correct sequence of events or the presence of a target stimulus. TOJ requires binding the temporal discrimination process with an object's physical or spatial properties (García-Pérez & Alcalá-Quintana, 2012). For example, participants might be asked to report if the red or green object appeared first or, alternatively, if the left or right object appeared first. One could argue that the color TOJ task may require communication with the ventral visual “what” stream, whereas the spatial task would require more integration with the dorsal “where” stream. Therefore, the choice of task in TOJ paradigms might induce brain activation that is potentially unrelated to visual temporal information in neuroimaging studies (Keetels & Vroomen, 2012). We argue that the SJ task provides a purer measure of temporal perception because successfully performing the task does not rely on linking identity or spatial information to the decision criteria, thus avoiding potential integration with other neural modules (Binder, 2015). Furthermore, the SJ task can be viewed as measuring detection performance, rather than discrimination performance.

Furthermore, our SJ experiment (including control tasks) was designed to be adaptive on a trial-by-trial basis. This crucial element is novel compared to similar experiments mentioned above. The adaptive nature of our task design maintains participants at their individual perceptual threshold throughout the entire experiment.

The second major innovation in our study design is using two different control tasks: one that requires a decision about stimuli orientation (presumably relying on the dorsal “where/how” system) and a second task that requires a decision about stimuli color (presumably relying on the ventral “what” identification system). The SJ task was our main experimental condition, and the color and orientation tasks were chosen as control tasks so that low-level visual features of the stimuli could be subtracted from the temporal perception components in the fMRI analysis. Typical fMRI analyses rely on looking for the difference between one task and another, where the change from one condition to another is kept at a minimum. In imaging studies, the choice of control task plays a pivotal role (Manly et al., 2003). By using two different control tasks, rather than comparing to rest, we can identify and account for any brain activation biases associated with each of these tasks. Here, the stimuli in each of the three tasks are perceptually similar and require an identical motoric response. We use a standard general linear model (GLM) analysis of the fMRI data (see Methods section) in combination with multivariate machine learning analyses to investigate brain regions activated by our visual tasks and measure the predictive value of those activations in an out-of-sample data set with a similar experimental design. Taken together, these innovations extend prior work on visual temporal information processing and provide compelling evidence for a more inclusive bilateral network.

METHODS

Participants

The current experiment included 35 participants (28 women, mean age = 22 ± 4 years) who were recruited from the University of South Carolina (Columbia, South Carolina) and surrounding areas. All study procedures were reviewed and approved by the local institutional review board, and each participant provided written consent using documents approved by the institutional review board. All participants self-reported right-hand dominance, had normal or corrected-to-normal visual acuity, and were neurologically healthy by self-report. All participants were screened for MRI compatibility. Participants were paid $20 for their participation and were told that the top performer would receive an additional $80 to increase motivation.

We also included a previously collected data set consisting of 26 participants (16 women, mean age = 23 ± 5.84 years) who were recruited and consented in the same manner as above. Six of the participants in this data set were left-handed via self-report. Going forward, this data set of 26 participants is referred to as the out-of-sample data used in the machine learning classification analysis.

Stimuli and Procedure

Stimuli were created and presented using Psychtoolbox (Kleiner et al., 2007) on a Windows 7 computer connected to a projector with a long throw lens aimed at a screen inside the scanner room (1024 × 768 px). Stimuli were made isoluminant for each participant with the use of a visual flicker fusion thresholding procedure where they adjusted stimulus color intensities until flickering ceased. The lighting and conditions inside the MRI room were consistent throughout the entire study.

Participants completed three tasks while imaging was conducted: SJs, color judgments (CL), and orientation judgments (OR). All tasks were presented visually, and stimuli consisted of two colored Gabor patches (145 px, 2.44° of visual angle, phase = 1.0, frequency = 0.055, sigma = 24.17, contrast = 0.7) presented on the left and right of a central fixation letter (0.4° of visual angle). For each task, participants were instructed to make “same” or “different” judgments immediately after presentation of the stimuli. Responses were made using the right index (same) and middle (different) finger buttons on an MRI-compatible response glove. For the timing task (SJ), participants made SJs and responded “same” if stimuli appeared to have the same onset time or “different” if they perceived them as coming on asynchronously. During the color task (CL), participants responded “same” if the stimuli were perceived as having the same color or “different” if they were not. Finally, for the orientation task (OR), participants responded “same” if the two stimuli appeared to be oriented (angled) the same way or “different” if they were not. Crucially, all stimuli appeared perceptually similar across tasks (size, shape, color, Gabor grating properties), had identical motor responses, and were equated for difficulty by adaptively adjusting stimulus levels to match accuracies across tasks. See Figure 1 for an illustration of the task and details of stimulus presentation.

Figure 1. 

Actual trial screenshots from the behavioral tasks performed during MRI scanning (modified for print). Participants performed each of the tasks in a pseudorandom block design. Each trial lasted 1.8 sec, with a 500-msec stimulus duration, 1100 msec for response, and a 200-msec intertrial interval. Timing parameters were constant across tasks. Before each block, a word appeared on-screen for 1000 msec to indicate the task: time, angle, or color. A centrally located character (“T,” “A,” or “C”) provided a fixation point to reduce eye movement. This character remained on-screen during each block and did not provide any information relevant to the judgment task other than the current block type.


Stimuli were presented in a pseudorandomized block order so that one task would never repeat more than twice in a row. Each block lasted 29 sec and contained 16 individual trials each lasting 1.8 sec. Total stimulus duration from onset to offset was ∼500 msec (30 frames at 60-Hz presentation). Each block condition was shown seven times, totaling 21 stimulus blocks per fMRI run, with an accompanying rest block after each stimulus block that lasted 15 sec. For each participant, two fMRI runs were performed while in the scanner, with each block condition being presented 14 times across fMRI runs. Each participant performed a training session in the scanner during the T1 scan before the fMRI runs. This training session was used to build the starting logistic model for each participant used in the adaptive thresholding procedure.

During the training session (T1 scan), 64 trials of linearly spaced values ranging from extremely difficult to extremely easy were presented to the participants: SJ, 0.01–0.49 (rate of change); OR, 0.1–10 (degrees); and CL, 0.001–2 (color difference: 0–1 scale). The adaptive thresholding procedure fit a logistic regression model to a participant's data to find the values at which the participant performed each task at a 75% accuracy level. The color and orientation judgments used straightforward stimulus difference values (e.g., rotation difference or RGB values, corrected for luminance differences). Specifically, the color RGB difference from the “target” was computed as R = (R − stimVal); G = stimVal × (0.299/0.587); B = 0. Note that this equation makes the colors approximately isoluminant. The variable stimVal is the value computed from the adaptive thresholding algorithm. A lower stimVal therefore results in a closer match to the template color (i.e., a harder discrimination), and a higher stimVal results in a larger difference from the template (i.e., an easier discrimination, because the color is more dramatically altered).

The SJ task, however, required the use of a ramped illumination algorithm in which the stimuli increase in brightness from zero to full intensity and then ramp off to zero again in an inverse pattern on every trial. In other words, stimuli were not switched from “on” to “off” in a single frame; rather, the rate of stimulus illumination was manipulated. Specifically, the rate of intensity change per frame was the differentiating value that was adaptively changed while participants completed the SJ task. Because this value is based on each individual's performance, there is no set range of SOAs across participants. Rather, the rate of intensity change acts as an alternative to SOA that is adapted on every trial to attain the desired accuracy level. To be clear, both stimuli were presented to the retina at the same time, but one was illuminated more quickly than the other, giving the impression of a traditional SOA-based design. Furthermore, both stimuli were removed from the screen at the same time (the last frame of presentation). This method allowed us to circumvent the frame rate limitation of presentation hardware operating at a maximum of 60 frames per second and gave us more degrees of freedom in stimulus manipulation.

During fMRI runs, the program maintained participants' accuracy level by updating the logistic regression model after each stimulus presentation. If a participant began to perform at a level above 75%, the program produced more difficult stimuli. In contrast, if a participant began to perform poorly (scoring below 75%), the program made the tasks less challenging so as to return their accuracy to 75%. This adaptive program controlled for an individual's perceived difficulty across tasks, which is important when comparing fMRI data because it prevents brain regions from appearing spuriously active simply because one task is more difficult than its comparison.
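For illustration, the following is a minimal sketch of the adaptive thresholding step, written in Python with scikit-learn rather than the Psychtoolbox code actually used; the function and variable names (next_stimulus_value, stim_history, correct_history) are illustrative. It refits a logistic accuracy model after each trial and inverts it to obtain the stimulus value expected to yield 75% correct.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def next_stimulus_value(stim_history, correct_history, target_acc=0.75):
    """Refit a logistic accuracy model after every trial and invert it to get
    the stimulus value expected to yield the target accuracy (here 75%)."""
    X = np.asarray(stim_history, dtype=float).reshape(-1, 1)
    y = np.asarray(correct_history, dtype=int)      # 1 = correct, 0 = incorrect
    model = LogisticRegression().fit(X, y)
    b0, b1 = model.intercept_[0], model.coef_[0, 0]
    # p(correct) = 1 / (1 + exp(-(b0 + b1 * x))); solve for x at p = target_acc
    return (np.log(target_acc / (1 - target_acc)) - b0) / b1

# Example: after each trial, append the value shown and whether the response
# was correct, then ask the model for the next stimulus level.
vals, hits = [0.05, 0.10, 0.20, 0.30, 0.40], [0, 0, 1, 1, 1]
print(next_stimulus_value(vals, hits))
```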

For the additional study performed before this experiment (the out-of-sample data), the tasks were similar but differed in a few key ways. First, the visual stimuli were rectangles of size 1.38° × 2.76° of visual angle. As with the current experiment, they were presented bilaterally to both visual fields. Second, the temporal task was still an SJ but used a more traditional SOA design. On asynchronous trials, one stimulus would appear first, followed by the second stimulus that was delayed by a variable number of screen refreshes. This task was also adaptive but used a 2-down 1-up parameter estimation by sequential testing (PEST) algorithm to keep participants at their perceptual threshold. Third, in this version of the visual simultaneity task, each of the other tasks (CL and OR) could be either congruent (e.g., same color, same orientation) or incongruent from trial to trial. In other words, the unattended physical properties of the stimuli changed although those properties were not required to make a correct response. This was done by programmatically simulating responses to the unattended tasks while maintaining values near the last known threshold for each task respectively. Likewise, if performing the CL task, the SJ and OR tasks were programmatically simulated to be congruent or incongruent. The congruency ratio was balanced across all trials within a block. Total stimulus duration was 134 msec including the SOA time for any particular trial. In this version of the experiment, tasks were presented in a blocked manner with twelve 28.8-sec-long blocks, each containing 16 trials (with a 1.8-sec intertrial interval), and alternated in their presentation using an interleaved design to maintain simplicity for participants. For example, if the two tasks for the first fMRI run were SJ and CL, a block of 16 SJ trials was shown in which participants made SJs (same or different onset time), followed by a 10-sec rest block and then a block of CL trials in which the participants made color discriminations (same or different color). The same procedure was used for fMRI runs that included SJ and OR task blocks.

Both of these experiments make use of an SJ task but differ in notable ways. The early experiment (with rectangles as stimuli) is referred to as our out-of-sample data. The two experiments do not contain any of the same participants. The data from this early experiment were only used to test the functional model developed from the later experiment via the machine learning classification procedures described later.

The current version of the SJ paradigm was designed with great care to control for difficulty across the different tasks. The early experiment was limited in that task difficulty was not as rigorously controlled via the PEST procedure. However, the early data set is still rich with information related to visual temporal information processing, and therefore we chose to use it as our out-of-sample data to predict task based on the functional activation model derived from the newer, current data set.

Imaging Protocol and Analysis

All imaging took place at the McCausland Center for Brain Imaging located at the Palmetto Health Richland Hospital (Columbia, SC) using a Siemens Prisma 3-T MRI system fitted with a 32-channel head coil. High-resolution structural images were obtained using a T1-weighted 3-D magnetized prepared rapid gradient echo scan with the following parameters: repetition time (TR) = 2400 msec, echo time (TE) = 2.24 msec, inversion time = 1060 msec, flip angle = 8°, field of view (FOV) = 167 × 240 × 256 mm, voxel size = 0.8 mm isotropic, bandwidth = 210 Hz/px, iPAT factor of 2, and acquisition time = 6:38 min. Two fMRI series, each with 1,360 volumes, were collected using x8 multiband (Xu et al., 2013), gradient-echo EPI, TR = 720 msec, TE = 37 msec, flip angle = 52°, FOV = 208 × 208 × 144 mm, slice thickness = 2.0 mm (72 slices, 2.0-mm isotropic voxels), echo spacing = 0.58 msec, and bandwidth = 2290 Hz/px. Single-band fMRI data were also acquired at the beginning of each run. These single-band images had the same acquisition settings as the task fMRI, with the exception that they do not use multiband speed enhancements. In addition, to correct for spatial distortions in the fMRI data, spin-echo images were acquired with the following parameters: TR = 7700 msec, TE = 58 msec, flip angle = 90°, FOV = 208 × 208 × 144 mm, slice thickness = 2.0 mm (72 slices, 2.0-mm isotropic voxels), multiband factor = 1, echo spacing = 0.59 msec, bandwidth = 2290 Hz/px, and acquisition time = 31 sec. fMRI data acquisition for the earlier data set was identical in every way except that 1,370 volumes were collected versus the 1,360 for the current experiment.

All neuroimaging data were analyzed using a combination of FMRIB Software Library (FSL; Jenkinson, Beckmann, Behrens, Woolrich, & Smith, 2012; Smith et al., 2004) and SPM routines. Specifically, the spin-echo images were used in FSL's topup and applytopup to compute and correct for spatial distortions in the fMRI data (Andersson, Skare, & Ashburner, 2003). Each participant's functional data (via single-band reference images) were then coregistered to their T1 anatomical image using the FSL implementation of the boundary-based registration method (Greve & Fischl, 2009). Motion correction was applied using the rigid body MCFLIRT routines in FSL (Jenkinson, Bannister, Brady, & Smith, 2002), and functional data were smoothed using a 5-mm Gaussian kernel. The last preprocessing step was linearly warping each participant's data to the 2-mm Montreal Neurological Institute template brain distributed with FSL using FLIRT with 12 degrees of freedom (Jenkinson & Smith, 2001).

Statistical analyses of the fMRI data were carried out using the GLM as implemented in SPM. Specifically, each participant's normalized functional data were modeled using a boxcar block design with block duration timing parameters specified in the task description section and a high-pass filter of 116 sec. Motion parameters (translation and rotation) were included in the GLM as nuisance regressors. At the participant level, each participant's two fMRI runs were included in the model, along with an additional nuisance regressor to account for each fMRI run, and each pairwise combination of task activation contrasts was modeled. At the group level, the same contrasts were modeled. Thresholded conjunction images were made to indicate voxels significant in each task compared with both of the others, using the intersection of suprathreshold voxels (see Equations 1–3) as described in Nichols, Brett, Andersson, Wager, and Poline (2005). See Figure 2 caption for thresholding information.
SJ_conjunction = (SJ > CL) ∩ (SJ > OR)  (1)

CL_conjunction = (CL > SJ) ∩ (CL > OR)  (2)

OR_conjunction = (OR > SJ) ∩ (OR > CL)  (3)
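As an illustration of Equations 1–3, a conjunction mask can be formed by intersecting suprathreshold voxels from the two contrasts. The following minimal Python sketch assumes thresholded group statistic maps are already available as arrays; the array names and the threshold value are placeholders, not our actual analysis code.

```python
import numpy as np

def conjunction(stat_map_a, stat_map_b, threshold):
    """Voxels that are suprathreshold in both contrasts (cf. Equations 1-3)."""
    return (stat_map_a > threshold) & (stat_map_b > threshold)

# Placeholder group t-maps for the two SJ contrasts
t_sj_gt_cl = np.random.randn(91, 109, 91)   # SJ > CL
t_sj_gt_or = np.random.randn(91, 109, 91)   # SJ > OR
sj_conjunction = conjunction(t_sj_gt_cl, t_sj_gt_or, threshold=3.1)
```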
Figure 2. 

(A) 3-D rendering of results from the group-level conjunction analysis shown overlaid on a standard anatomical template image. (B) Axial slices showing the same activation as in A. Red indicates voxels in which BOLD signal was significantly greater during the simultaneity task relative to the color task and significantly greater in the simultaneity task relative to the orientation task (conjunction analysis). These regions included TPJ (including SMG, pSTG, and MTG) and IFG (pars opercularis). Similarly, a conjunction analysis of BOLD signal during the orientation task (green) relative to the other two tasks revealed activation at sites in the LOC and SPL. Finally, the conjunction analysis for the color task (blue) revealed activity in the PCC and ANG. All images for the conjunction analyses were thresholded at the cluster level using p < .05, Bonferroni FWE corrected. Cluster extent thresholds for each group contrast were as follows: SJ > CL (k = 155), SJ > OR (k = 207), OR > SJ (k = 138), OR > CL (k = 126), CL > OR (k = 137), and CL > SJ (k = 136). MTG = middle temporal gyrus; LOC = lateral occipital cortex; SPL = superior parietal lobule; PCC = posterior cingulate cortex; ANG = angular gyrus.


Support Vector Machine Analysis

We performed an additional multivariate analysis where we predicted the task from the BOLD signal measured at the ROIs defined by the SJ conjunction analysis. We carried out this multivariate analysis for the SJ versus CL and SJ versus OR contrasts separately. The network was defined as follows: From the SJ conjunction map, the center of mass coordinates of each cluster was computed and used to make spherical ROIs at each cluster location with a 10-mm radius. These ROIs were combined into a single image, which represented a visual temporal perception network and served to select the voxels (features) that were entered into support vector machine (SVM) classification. Within the task blocks, the BOLD signal for each volume was divided by the mean BOLD signal from the last half of the preceding resting block. This normalization procedure was used to normalize the task-related BOLD signal relative to the resting baseline (Schmah et al., 2010; McIntosh & Lobaugh, 2004). After this signal normalization, we computed the mean volume for the entire task block (Ku, Gretton, Macke, & Logothetis, 2008); the values for these averages within the spatial mask were used as observations for classification analysis.
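A minimal sketch of this feature extraction step, assuming the preprocessed BOLD data are available as a 4-D NumPy array; all names (block_features, bold_4d, roi_mask) are illustrative rather than the code actually used.

```python
import numpy as np

def block_features(bold_4d, task_vol_idx, rest_vol_idx, roi_mask):
    """Return one feature vector for a task block: divide each task volume by
    the mean of the last half of the preceding rest block, average the
    normalized volumes over the block, and keep only ROI voxels."""
    rest_half = rest_vol_idx[len(rest_vol_idx) // 2:]     # last half of the rest block
    baseline = bold_4d[..., rest_half].mean(axis=-1)
    normalized = bold_4d[..., task_vol_idx] / baseline[..., np.newaxis]
    block_mean = normalized.mean(axis=-1)                 # mean volume for the block
    return block_mean[roi_mask]                           # voxels inside the sphere ROIs

# bold_4d: (x, y, z, t) array; task_vol_idx / rest_vol_idx: lists of volume indices;
# roi_mask: boolean (x, y, z) mask combining the 10-mm spheres (names illustrative).
```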

To predict the task, we used SVM with linear kernel, as implemented in the LIBSVM package (Chang & Lin, 2011). The relative BOLD signal from the observations was scaled to [0–1] range: We computed the minimum and maximum values for each feature, subtracted the minimum value from the block averages, and divided by the range (maximum minus minimum). The value for the slack parameter C was selected to optimize the classification accuracy using 10-fold cross-validation on the training set (no data from the test set were used in this optimization procedure). The candidate values for the optimal C were 0.00001, 0.0001, 0.001, 0.01, 0.1, 10, 100, 250, 500, 750, 1,000, 1,500, 2,500, 5,000, and 10,000.
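The following sketch illustrates this training procedure using scikit-learn's SVC (which wraps LIBSVM) in place of the LIBSVM package itself; the helper name fit_linear_svm is illustrative, but the C grid and 10-fold cross-validation mirror the description above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def fit_linear_svm(train_X, train_y):
    """Scale features to [0, 1] using the training data only, then select the
    slack parameter C by 10-fold cross-validation on the training set."""
    lo, hi = train_X.min(axis=0), train_X.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)                 # avoid division by zero
    c_grid = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 10, 100, 250,
              500, 750, 1000, 1500, 2500, 5000, 10000]
    search = GridSearchCV(SVC(kernel="linear"), {"C": c_grid}, cv=10)
    search.fit((train_X - lo) / rng, train_y)
    return search.best_estimator_, lo, rng                # lo/rng reused for test data
```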

We performed two types of classification analysis: within participant and across participants. For within-participant analysis, prediction was performed using leave-one-block-per-condition-out procedure. To test our prediction, we selected one block average from the first task (SJ) and another block average from the second task (CL or OR) and used the remaining block averages to train the SVM model. This procedure was repeated exhaustively for each possible pairing of block averages from the two tasks. Participant-specific classification accuracy was computed as the average proportion of correctly predicted test cases. This value was then averaged across participants.
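A sketch of the leave-one-block-per-condition-out procedure, reusing the hypothetical fit_linear_svm helper from the previous sketch; the block arrays and label codes are illustrative.

```python
from itertools import product
import numpy as np

def within_participant_accuracy(blocks_sj, blocks_ctrl):
    """Every pairing of one SJ block average and one control block average is
    held out once; the remaining block averages train the classifier."""
    blocks_sj, blocks_ctrl = np.asarray(blocks_sj), np.asarray(blocks_ctrl)
    hits, total = 0, 0
    for i, j in product(range(len(blocks_sj)), range(len(blocks_ctrl))):
        test_X = np.vstack([blocks_sj[i], blocks_ctrl[j]])
        test_y = np.array([0, 1])                         # 0 = SJ, 1 = control task
        train_X = np.vstack([np.delete(blocks_sj, i, axis=0),
                             np.delete(blocks_ctrl, j, axis=0)])
        train_y = np.array([0] * (len(blocks_sj) - 1) + [1] * (len(blocks_ctrl) - 1))
        model, lo, rng = fit_linear_svm(train_X, train_y)   # helper from the sketch above
        hits += int((model.predict((test_X - lo) / rng) == test_y).sum())
        total += 2
    return hits / total
```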

For the second type of analysis (across participants), we left out all the observations (block averages) for a particular participant to use for testing, and all block averages for the remaining participants were used for training. After training the SVM model, we predicted the task for each block average of the test participant (28 blocks per participant, 14 for each task). An additional step was performed during the scaling of features into [0–1] range: The minimum and maximum values (described above) were computed across the training participants, and the observations from the test participants were scaled into this range.
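Likewise, a sketch of the across-participants procedure, including the additional scaling step in which the held-out participant's block averages are scaled using the training participants' minimum and maximum values; again, names are illustrative.

```python
import numpy as np

def across_participant_accuracy(features_by_subj, labels_by_subj):
    """Leave one participant out: block averages from all other participants
    train the model; the held-out participant's blocks are scaled into the
    training range and then predicted."""
    accuracies = []
    for s in range(len(features_by_subj)):
        train_X = np.vstack([f for i, f in enumerate(features_by_subj) if i != s])
        train_y = np.concatenate([l for i, l in enumerate(labels_by_subj) if i != s])
        model, lo, rng = fit_linear_svm(train_X, train_y)   # helper from the sketch above
        preds = model.predict((features_by_subj[s] - lo) / rng)
        accuracies.append(float((preds == labels_by_subj[s]).mean()))
    return float(np.mean(accuracies))
```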

RESULTS

Behavioral

A one-way within-participant ANOVA was performed to assess the effect of Task on overall accuracy. Before the ANOVA, a Fisher exact test was used to screen for outlier participants with significantly different hit–miss ratios for any pairwise combination of tasks; no participants were excluded by this filter. Task had no significant effect on accuracy, F(2, 68) = 1.95, p = .15, indicating that the adaptive algorithm was able to maintain equal objective difficulty across tasks and within participants. The mean and standard deviation of accuracy across tasks were 0.81 (0.04) for SJs, 0.82 (0.04) for color judgments, and 0.82 (0.04) for orientation judgments. Accuracy means include all task trials with a valid response (including catch trials).

Neuroimaging

The conjunction analysis revealed that the timing task (SJ) activated a ventral frontoparietal network including bilateral supramarginal gyrus (SMG), posterior superior temporal gyrus (pSTG), bilateral IFG, and bilateral temporo-occipital areas (still connected to the larger TPJ cluster). The conjunction map for the OR task showed significant activity in bilateral inferior lateral occipital cortex and right superior parietal lobule. Finally, the conjunction result for the CL task showed activation of bilateral posterior cingulate cortex and right angular gyrus. Detailed cluster information for all neuroimaging results is listed in Table 1, and Figure 2 shows all conjunction analysis results overlaid on surface- and volume-based brain renderings.

Table 1. 
Center of Mass MNI Coordinates for SJ Activation Clusters
Region    x         y         z        Cluster Size
L IFG     −50.54      8.18    12.73    320
L TPJ     −54.32    −48.55    19.00    691
R IFG      50.22     17.63     8.21    531
R TPJ      56.46    −40.77    22.60    961

Here, we report the center of mass for each ROI that resulted from the conjunction procedure. To create a custom ROI mask, spheres with a 10-mm radius were generated at each of these points. The voxels from these ROIs were used as predictors in the SVM model. MNI = Montreal Neurological Institute.
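As an illustration of how such a network mask can be built from the Table 1 coordinates, the following Python sketch uses nibabel to convert voxel indices to MNI millimeter coordinates and keeps voxels within 10 mm of each center of mass; the template file name is illustrative, and this is not the code actually used.

```python
import numpy as np
import nibabel as nib
from nibabel.affines import apply_affine

def sphere_mask(template_img, center_mm, radius_mm=10.0):
    """Boolean mask of voxels within radius_mm of an MNI coordinate."""
    shape = template_img.shape[:3]
    ijk = np.indices(shape).reshape(3, -1).T              # all voxel indices
    xyz = apply_affine(template_img.affine, ijk)          # voxel -> MNI mm
    dist = np.linalg.norm(xyz - np.asarray(center_mm), axis=1)
    return (dist <= radius_mm).reshape(shape)

template = nib.load("MNI152_T1_2mm_brain.nii.gz")         # illustrative path
centers = [(-50.54, 8.18, 12.73), (-54.32, -48.55, 19.0),
           (50.22, 17.63, 8.21), (56.46, -40.77, 22.6)]   # Table 1 centers of mass
network_mask = np.any([sphere_mask(template, c) for c in centers], axis=0)
```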

SVM fMRI Classification

For the within-participant classification analysis, we achieved an accuracy of 99.4% for the SJ versus CL contrast and an accuracy of 99.1% for the SJ versus OR contrast. To see which spatial features were driving this highly accurate prediction, we computed the ROI-specific average of the squared voxel-wise weights from the linear SVM model. These voxel-wise weights represent the importance of each voxel to task prediction; higher values indicate more important voxels. The ROIs were left and right TPJ and left and right IFG. For both the within-participant contrasts of SJ versus CL and SJ versus OR, the most predictive ROI was the right IFG (see Figure 3). In both left and right hemispheres, the IFG contributed more than TPJ. In addition, left and right TPJ regions were similar in their contribution to task prediction.
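A minimal sketch of this weight summary, assuming a fitted linear SVM and a per-voxel ROI label array; the names are illustrative.

```python
import numpy as np

def roi_importance(fitted_linear_svm, roi_labels):
    """ROI-specific average of the squared voxel-wise weights of a fitted
    linear SVM. roi_labels assigns each feature (voxel) to an ROI name."""
    w2 = np.asarray(fitted_linear_svm.coef_).ravel() ** 2
    return {roi: float(w2[roi_labels == roi].mean()) for roi in np.unique(roi_labels)}
```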

Figure 3. 

Bar chart showing the average absolute feature weight for each ROI obtained from the SJ conjunction map. The range of these values is relative to the scaled fMRI data. No statistical comparisons were made. TPJ ROIs had numerically similar feature weights in the linear SVM model, and the right IFG (RIFG) was the strongest feature in both comparison conditions of SJ versus CL and SJ versus OR. Error bars represent 95% CI of mean within participant weights. LTPJ = left TPJ; LIFG = left IFG; RTPJ = right TPJ.


For the across-participants classification analysis, we could predict the task with 61.3% accuracy in the SJ versus CL contrast (the task was predicted correctly for 598 of 980 total block averages). As expected, this value is lower than the accuracy achieved in the within-participant analysis. However, it is still significantly higher than the chance probability of 50% (p = 3e-12, assuming a binomial distribution with equal likelihood of both tasks). The tasks in the other contrast (SJ vs. OR) could not be predicted successfully; accuracy was 44.4%.
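A p value of this kind can be reproduced with a one-sided binomial test; the following is a minimal SciPy sketch (the software used for this particular calculation is not specified above).

```python
from scipy.stats import binomtest

# 598 of 980 block averages classified correctly; chance = 0.5
result = binomtest(598, n=980, p=0.5, alternative="greater")
print(result.pvalue)   # on the order of 1e-12, consistent with the reported value
```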

Finally, we used the ROIs generated from the group GLM results of the current data set (n = 35) to predict task from fMRI volumes in a separate out-of-sample data set (n = 26) with similar experimental procedures and identical fMRI acquisition parameters. As a group, individuals in this prior study found the simultaneity task easier than the color task (72% vs. 65% accuracy, respectively), indicating that accuracy was not as well balanced as in the current study. However, the average within-participant classification accuracy for this second data set (using the ROIs from our conjunction analysis as features) was 99.8% for predicting the task (timing judgments or color judgments). Therefore, although this previous study differed in a few ways and was not as well balanced for task difficulty, classifiers restricted to ROIs defined in an independent, controlled data set still performed exceptionally well at predicting the task, given that the task demands were similar.

DISCUSSION

The experiment presented above aimed to investigate the functional anatomy underlying visual temporal perception. We employed a temporal perception task using an SJ paradigm in which participants were required to judge whether two visual stimuli appeared simultaneous or not when presented via our custom ramped illumination procedure. The neuroimaging data from the temporal task were then compared with data from the two control tasks of color and orientation discrimination. Crucially, these control tasks used perceptually similar stimuli, with the exception that only the attended stimulus property was manipulated from one block to the next and the unattended properties remained constant (e.g., if attending to SJ for a block, then CL and OR were not manipulated). Note that the prior version of this experiment (out-of-sample data) was different in that it did manipulate the unattended stimulus properties to be either congruent (e.g., same color) or incongruent. One could argue that the former paradigm reduces unattended congruence/incongruence biases, whereas the latter preserves identical stimulus properties across trials (albeit with these differences near perceptual threshold). Regardless of this difference in experimental design, the results of our conjunction analysis revealed ROIs that were used to reliably predict unmodeled, out-of-sample data with an average within-participant accuracy of 99%. Our statistical analysis of the imaging data revealed that performing the SJ task selectively activated bilateral TPJ (including SMG and pSTG) and bilateral frontal (IFG) regions after using a conjunction analysis to account for the activation induced by our comparison tasks of CL and OR.

Researchers have devised a myriad of paradigms for investigating the temporal properties of perception (e.g., TOJs, subjective duration judgments, interval estimates, attentional blink). Our decision to use SJs over alternative tests was motivated by two key concerns. First, to maintain control and continuity across tasks, we needed the set of instructions to be the same for all three tasks. Specifically, we used a same or different paradigm. The typical decision criterion for a TOJ task, “which came first,” does not translate intuitively to color or spatial orientation judgments, whereas our same or different criterion does. Second, we have encountered few previous studies of the functional anatomy of visual-only SJs in healthy participants in the fMRI literature. The most similar visual experiment to ours is the work of Lux et al. (2003), who compared fMRI results of an SJ task with those of an orientation task. Our results show similar patterns of frontal and parietal BOLD response for the SJ condition when compared to their findings. Our experiment clearly extends that of Lux et al. (2003) and Davis et al. (2009), in that we have presented data from an experimental design that controls for multiple physical properties of the visual stimuli in our scenes. Furthermore, we designed our task to be individually adaptive for each participant such that they performed at their perceptual threshold for the duration of the experiment across all tasks. Combined, we believe these enhancements provide more control of participant performance and provide a compelling updated account of brain regions involved in visual temporal information processing. Other prior work has focused on tactile (Miyazaki et al., 2016) or audiovisual (Binder, 2015; Zampini, Guest, Shore, & Spence, 2005) asynchrony paradigms, which are difficult to relate directly to our visual-only experiment. However, there are overlapping anatomical results from these studies congruent with ours, namely, the involvement of inferior parietal regions. Finally, in a TOJ study similar to our current SJ work, Davis et al. (2009) used the task of “which came first, red or green,” which undoubtedly required binding temporal information with visual identity information. Davis et al.'s decision criteria presumably relied on links between the theoretical “when” system and the well-studied “what” system for visual information processing. We suggest that visual SJs are better suited because of their arguably simpler decision criteria, a view supported by others in the field (García-Pérez & Alcalá-Quintana, 2012).

Our findings are particularly timely but also somewhat at odds with the work of Agosta et al. (2017). Their recent study examined whether brain injury and brain stimulation involving the right versus left TPJ affected participants' performance on a flickering SJ task. They found that right hemisphere injury as well as inhibitory stimulation over the right TPJ impaired task performance. They argue that prior results indicating bilateral activation in tasks like SJs might be due to the variability of tasks used. However, here, we report robust bilateral activations despite also using an SJ task. In addition, we can reliably predict which task was performed from brain activation in an unrelated sample of participants given the “visual timing” ROIs resulting from our main conjunction analysis. One way to reconcile these apparently discrepant findings is that we used a brain activation measure, so it is theoretically possible that the observed left-hemisphere activity reflects involvement in, rather than a requirement for, the task. On the other hand, Agosta et al. (2017) are effectively arguing from a null result, as their left-hemisphere stimulation condition shows the predicted trend for poorer temporal perception in the right visual field. This result of poorer performance cannot completely rule out a bilateral network supporting visual temporal perception. Our experiment provides precise coordinates for future brain stimulation work and uses a task we prefer because it is hard to contaminate with strategies such as blinking, which can create stroboscopic effects that make flicker judgments (as used in Agosta et al., 2017) easier to solve or cheat.

Our current study was also able to overcome the limitation of variable difficulty across tasks for the duration of the experiment, which may have influenced previous studies on timing behavior (Miyazaki et al., 2016; Battelli et al., 2009; Davis et al., 2009). The solution to the variable task difficulty problem was to construct an individual logistic regression response model for each participant on each task. During the fMRI runs, every trial added a new data point that was used to continuously update the participant's individual model and to suggest the next stimulus value to use for each task. This method was used for all three tasks. Our results presented earlier suggest that this method was able to maintain comparable difficulty across tasks within each participant. Furthermore, participants' anecdotal reports of which task was harder were varied, suggesting no clear trend.

The present experiment, like its most similar counterparts (Davis et al., 2009; Lux et al., 2003), identifies bilateral regions activated by visual temporal processing using two different tasks (SJs here vs. TOJs in the case of Davis et al.). Although we may view SJs as the purer of the two visual timing tasks, both behaviors undoubtedly show some similarity in their functional anatomy. Specifically, inferior frontal regions and TPJ are often reported in the fMRI temporal perception literature (at least within the visual domain). It is possible that the bilateral patterns observed in fMRI activation measurements reflect the strong homologous connections present (e.g., the left hemisphere is involved but not required for temporal decisions). This hypothesis could be directly addressed by future experiments using brain stimulation (in conjunction with an fMRI functional localizer task) to disrupt performance in the novel behavioral paradigm we describe here.

Finally, we wish to point out that the temporal perception network we report here (bilateral TPJ regions and IFG) is nearly identical to the ventral frontoparietal attention network. This network is largely thought to be involved in the detection of unattended or low-frequency events, regardless of their spatial location or modality of sensation (Corbetta & Shulman, 2002). It is possible that our neuroimaging results can be explained by differences in the extent to which our task (SJs) and the other two tasks (color and orientation discriminations) activate this system. It has been hypothesized that this network is one of the possible anatomical loci for the common brain injury deficit known as extinction (Corbetta & Shulman, 2011). On the basis of clinical evidence (de Haan, Karnath, & Driver, 2012; Karnath, Himmelbach, & Küker, 2003; Mort et al., 2003), SJs are a relevant temporal task because they dissociate from spatial and identity characteristics (compared with TOJ) and because they represent a behavior that is fundamentally impaired in patients with extinction. Here, we provide evidence that activation of the ventral stimulus-driven attention network in healthy participants supports behavioral processes that are fundamentally impaired in patients who have suffered damage to the same underlying areas (see Karnath et al., 2003).

Furthermore, our neuroimaging results may corroborate this explanation, because our task, like the popular TOJ task, can inherently induce a stimulus-driven shift in attention when the SOA is large enough. Detecting the appearance of one visual stimulus immediately followed by a temporally delayed and spatially adjacent second stimulus is certainly a behavior that has been demonstrated to activate the ventral attention network (Downar, Crawley, Mikulis, & Davis, 2000). Most investigations of the ventral attention network involve TPJ regions and frontal regions (including IFG) and tend to support the claim that this particular network is largely right hemisphere lateralized. However, the reliable bilateral activation of the ventral attention network reported in the current study suggests that regions in both hemispheres may play more active roles than previously thought. In fact, our SVM results show that the left and right TPJ contribute comparably to task classification in our study (Figure 3). One possibility is that the hypothesized visual “when” system actually represents only a unilateral subset of the ventral attention network. It is conceivable that regions in both hemispheres contribute to supporting behavior related to temporal perception in the visual domain but that the right hemisphere may, in general, have become specialized in this role. Variations in visual temporal perception task dependencies may also activate different neural networks. It cannot be said that only the right hemisphere is responsible when relevant visual deficits are observed after unilateral left hemisphere damage as well (Suchan, Rorden, & Karnath, 2012). Incongruencies in the proposed anatomical loci may be explained by the large variability in experimental paradigms and the diffuse patterns of damage seen in brain injury studies. Additional study of the proposed visual timing system is certainly warranted and may benefit from task paradigms similar to the one presented here, in which tasks are continuously adjusted to match difficulty, both within and across other sensory modalities. Brain stimulation will be a particularly useful technique for examining the necessity of both left/right and frontal/parietal components of the network identified in the present article. Virtual lesions induced by inhibitory TMS could be used to further delineate the extent of left- and right-hemisphere contributions to temporal processing, specifically in the ventral attention network regions of both hemispheres while performing relevant tasks. Causal statistical procedures such as dynamic causal modeling may also prove insightful. Results from such studies could have important implications for prognosis and therapeutic intervention when temporal perception is impaired after traumatic brain injury or stroke.

Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft (DFG) KA 1258/23-1 and the National Institutes of Health (P50 DC014664).

Reprint requests should be sent to Taylor Hanayik, PhD, Department of Psychology, University of South Carolina, 1512 Pendleton St., Barnwell College, Suite #220, Columbia, SC 29208, or via e-mail: hanayik@gmail.com.

REFERENCES

Agosta, S., Magnago, D., Tyler, S., Grossman, E., Galante, E., Ferraro, F., et al. (2017). The pivotal role of the right parietal lobe in temporal attention. Journal of Cognitive Neuroscience, 29, 805–815.

Andersson, J. L. R., Skare, S., & Ashburner, J. (2003). How to correct susceptibility distortions in spin-echo echo-planar images: Application to diffusion tensor imaging. Neuroimage, 20, 870–888.

Battelli, L., Alvarez, G. A., Carlson, T., & Pascual-Leone, A. (2009). The role of the parietal lobe in visual extinction studied with transcranial magnetic stimulation. Journal of Cognitive Neuroscience, 21, 1946–1955.

Battelli, L., Walsh, V., Pascual-Leone, A., & Cavanagh, P. (2008). The “when” parietal pathway explored by lesion studies. Current Opinion in Neurobiology, 18, 120–126.

Binder, M. (2015). Neural correlates of audiovisual temporal processing—Comparison of temporal order and simultaneity judgments. Neuroscience, 300, 432–447.

Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2, 1–27.

Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.

Corbetta, M., & Shulman, G. L. (2011). Spatial neglect and attention networks. Annual Review of Neuroscience, 34, 569–599.

Coull, J. T., & Nobre, A. C. (1998). Where and when to pay attention: The neural systems for directing attention to spatial locations and to time intervals as revealed by both PET and fMRI. Journal of Neuroscience, 18, 7426–7435.

Creem, S. H., & Proffitt, D. R. (2001). Defining the cortical visual systems: “What,” “where,” and “how.” Acta Psychologica, 107, 43–68.

Davis, B., Christie, J., & Rorden, C. (2009). Temporal order judgments activate temporal parietal junction. Journal of Neuroscience, 29, 3182–3188.

de Haan, B., Karnath, H.-O., & Driver, J. (2012). Mechanisms and anatomy of unilateral extinction after brain injury. Neuropsychologia, 50, 1045–1053.

Downar, J., Crawley, A. P., Mikulis, D. J., & Davis, K. D. (2000). A multimodal cortical network for the detection of changes in the sensory environment. Nature Neuroscience, 3, 277–283.

García-Pérez, M. A., & Alcalá-Quintana, R. (2012). On the discrepant results in synchrony judgment and temporal-order judgment tasks: A quantitative model. Psychonomic Bulletin & Review, 19, 820–846.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.

Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. Neuroimage, 48, 63–72.

Husain, M., & Rorden, C. (2003). Non-spatially lateralized mechanisms in hemispatial neglect. Nature Reviews Neuroscience, 4, 26–36.

Husain, M., Shapiro, K., Martin, J., & Kennard, C. (1997). Abnormal temporal dynamics of visual attention in spatial neglect patients. Nature, 385, 154–156.

Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage, 17, 825–841.

Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W., & Smith, S. M. (2012). FSL. Neuroimage, 62, 782–790.

Jenkinson, M., & Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5, 143–156.

Karnath, H.-O., Himmelbach, M., & Küker, W. (2003). The cortical substrate of visual extinction. NeuroReport, 14, 437–442.

Karnath, H.-O., & Zihl, J. (2003). Disorders of spatial orientation. In T. Brandt, L. R. Caplan, J. Dichgans, H. C. Diener, & C. Kennard (Eds.), Neurological disorders: Course and treatment (2nd ed., pp. 277–286). San Diego, CA: Academic Press.

Keetels, M., & Vroomen, J. (2012). Perception of synchrony between the senses. In M. M. Murray & M. T. Wallace (Eds.), The neural bases of multisensory processes (pp. 147–178). Boca Raton, FL: CRC Press/Taylor & Francis.

Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What's new in Psychtoolbox-3. Perception, 36, 1–16.

Ku, S.-P., Gretton, A., Macke, J., & Logothetis, N. K. (2008). Comparison of pattern recognition methods in classifying high-resolution BOLD signals obtained at high magnetic field in monkeys. Magnetic Resonance Imaging, 26, 1007–1014.

Lux, S., Marshall, J. C., Ritzl, A., Zilles, K., & Fink, G. R. (2003). Neural mechanisms associated with attention to temporal synchrony versus spatial orientation: An fMRI study. Neuroimage, 20(Suppl. 1), S58–S65.

Manly, T., Owen, A. M., McAvinue, L., Datta, A., Lewis, G. H., Scott, S. K., et al. (2003). Enhancing the sensitivity of a sustained attention task to frontal damage: Convergent clinical and functional imaging evidence. Neurocase, 9, 340–349.

McIntosh, A. R., & Lobaugh, N. J. (2004). Partial least squares analysis of neuroimaging data: Applications and advances. Neuroimage, 23(Suppl. 1), S250–S263.

Miyazaki, M., Kadota, H., Matsuzaki, K. S., Takeuchi, S., Sekiguchi, H., Aoyama, T., et al. (2016). Dissociating the neural correlates of tactile temporal order and simultaneity judgements. Scientific Reports, 6, 23323.

Mort, D. J., Malhotra, P., Mannan, S. K., Rorden, C., Pambakian, A., Kennard, C., et al. (2003). The anatomy of visual neglect. Brain, 126, 1986–1997.

Nichols, T., Brett, M., Andersson, J., Wager, T., & Poline, J.-B. (2005). Valid conjunction inference with the minimum statistic. Neuroimage, 25, 653–660.

Roberts, K. L., Lau, J. K. L., Chechlacz, M., & Humphreys, G. W. (2012). Spatial and temporal attention deficits following brain injury: A neuroanatomical decomposition of the temporal order judgement task. Cognitive Neuropsychology, 29, 300–324.

Rorden, C., Li, D., & Karnath, H.-O. (2018). Biased temporal order judgments in chronic neglect influenced by trunk position. Cortex, 99, 273–280.

Schmah, T., Yourganov, G., Zemel, R. S., Hinton, G. E., Small, S. L., & Strother, S. C. (2010). Comparing classification methods for longitudinal fMRI studies. Neural Computation, 22, 2729–2762.

Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E. J., Johansen-Berg, H., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage, 23(Suppl. 1), S208–S219.

Suchan, J., Rorden, C., & Karnath, H.-O. (2012). Neglect severity after left and right brain damage. Neuropsychologia, 50, 1136–1141.

VanRullen, R., Pascual-Leone, A., & Battelli, L. (2008). The continuous Wagon Wheel Illusion and the “when” pathway of the right parietal lobe: A repetitive transcranial magnetic stimulation study. PLoS One, 3, e2911.

Vossel, S., Weiss, P. H., Eschenbeck, P., & Fink, G. R. (2013). Anosognosia, neglect, extinction and lesion site predict impairment of daily living after right-hemispheric stroke. Cortex, 49, 1782–1789.

Xu, J., Moeller, S., Auerbach, E. J., Strupp, J., Smith, S. M., Feinberg, D. A., et al. (2013). Evaluation of slice accelerations using multiband echo planar imaging at 3 T. Neuroimage, 83, 991–1001.

Zampini, M., Guest, S., Shore, D. I., & Spence, C. (2005). Audio-visual simultaneity judgments. Perception & Psychophysics, 67, 531–544.