Sensory processing is strongly influenced by prior expectations. Valid expectations have been shown to lead to improvements in perception as well as in the quality of sensory representations in primary visual cortex. However, very little is known about the neural correlates of the expectations themselves. Previous studies have demonstrated increased activity in sensory cortex following the omission of an expected stimulus, yet it is unclear whether this increased activity constitutes a general surprise signal or rather has representational content. One intriguing possibility is that top–down expectation leads to the formation of a template of the expected stimulus in visual cortex, which can then be compared with subsequent bottom–up input. To test this hypothesis, we used fMRI to noninvasively measure neural activity patterns in early visual cortex of human participants during expected but omitted visual stimuli. Our results show that prior expectation of a specific visual stimulus evokes a feature-specific pattern of activity in the primary visual cortex (V1) similar to that evoked by the corresponding actual stimulus. These results are in line with the notion that prior expectation triggers the formation of specific stimulus templates to efficiently process expected sensory inputs.
Prior expectations affect sensory processing at the earliest stages of the cortical hierarchy (Kok, Rahnev, Jehee, Lau, & De Lange, 2012; Todorovic & De Lange, 2012; Arnal, Wyart, & Giraud, 2011; Todorovic, Van Ede, Maris, & De Lange, 2011; Alink, Schwiedrzik, Kohler, Singer, & Muckli, 2010; Den Ouden, Friston, Daw, McIntosh, & Stephan, 2009; Summerfield, Trittschuh, Monti, Mesulam, & Egner, 2008). Valid expectations have been shown to lead to improvements in perception (Bar, 2004) as well as in the quality (Kok, Jehee, & De Lange, 2012) and content (Kok, Brouwer, Van Gerven, & De Lange, 2013) of sensory representations in early visual cortex. These findings are in line with a framework in which perception is cast as a process of inference, wherein bottom–up stimulus inputs and prior knowledge jointly determine the contents of perception (Fiser, Berkes, Orbán, & Lengyel, 2010; Yuille & Kersten, 2006; Helmholtz, 1867). In this framework, prior knowledge is expected to affect sensory processing not only when this is required for optimal task performance (Ma, 2012), but even when stimuli are fully task irrelevant (Yaron, Hershenhoren, & Nelken, 2012; Alink et al., 2010; Den Ouden et al., 2009). However, despite numerous demonstrations of prior expectations affecting stimulus processing, very little is known about the neural correlates of the expectations themselves.
Previous studies have demonstrated increased activity in sensory cortex following the omission of an expected stimulus compared with when a stimulus was not expected (SanMiguel, Widmann, Bendixen, Trujillo-Barreto, & Schröger, 2013; Kok, Rahnev, et al., 2012; Todorovic et al., 2011; Wacongne et al., 2011; Den Ouden et al., 2009). Although this has sometimes been interpreted as a “pure expectation” signal, uncontaminated by bottom–up signals, it is unclear whether this increased activity constitutes a general surprise signal or has representational content. For example, early visual cortex shows an overall activity increase when an auditory target is selected (Swallow, Makovski, & Jiang, 2012), showing that temporal selection of behaviorally relevant items can enhance sensory activity in other modalities in a rather unspecific manner. It is also possible that the omission of an expected event draws spatial attention but does not have any feature specificity. However, another intriguing possibility is that top–down expectation leads to a template of the expected stimulus being set up in visual cortex, which can then be compared with subsequent bottom–up input (Friston, 2005; Rao & Ballard, 1999; Mumford, 1992). In line with this, feature-specific top–down effects on visual cortex have been shown as a consequence of preparatory attention (Stokes, Thompson, Nobre, & Duncan, 2009), mental imagery (Albers, Kok, Toni, Dijkerman, & De Lange, 2013; Lee, Kravitz, & Baker, 2012; Stokes, Thompson, Cusack, & Duncan, 2009), and working memory (Harrison & Tong, 2009; Serences, Ester, Vogel, & Awh, 2009).
To test this hypothesis, we used fMRI to investigate whether the neural activity in early visual cortex elicited by the expectation of a particular stimulus is specific to the feature that is being expected. Such feature specificity would suggest that prior expectation leads to the generation of a stimulus template in visual cortex.
Twenty-six healthy right-handed individuals (17 women, mean age = 22 years, SD = 2 years) with normal or corrected-to-normal vision gave written informed consent to participate in this study in accordance with the institutional guidelines of the local ethics committee (CMO region Arnhem-Nijmegen, The Netherlands). Data from three participants were excluded because of excessive (>5 mm) head movements.
Grayscale luminance-defined sinusoidal Gabor grating stimuli were generated using MATLAB (MathWorks, Natick, MA) and the Psychophysics Toolbox (Brainard, 1997). In the behavioral session, stimuli were presented on a Samsung SyncMaster 940BF screen (1024 × 768 resolution, 60 Hz refresh rate). In the fMRI session, they were projected on a rear projection screen using a luminance-calibrated SONY VPL FX40 projector (1024 × 768 resolution, 60 Hz). A set of two gratings (20% Michelson contrast) was displayed in succession in an annulus (outer diameter: 15° of visual angle, inner diameter: 3°) surrounding a fixation point. The two gratings differed from each other in phase and spatial frequency: the first grating had a random spatial phase, and the second grating was in counterphase to the first. The two gratings had spatial frequencies of 1.0 and 1.5 cpd; their order was pseudorandomized and counterbalanced over conditions. The grating stimuli were presented for 100 msec each, separated by a blank screen (500 msec).
Each trial started with an auditory cue (200 msec), consisting of either a low-frequency (450 Hz) or high-frequency (1000 Hz) tone, which predicted the orientation of the first grating stimulus of the pair with 100% validity (45° or 135°; see Figure 1). On 75% of trials, participants were then presented with a set of gratings, the first of which had the (expected) orientation, and the second was tilted a few degrees clockwise or anticlockwise with respect to the first (Stimulus trials; Figure 1A). On these trials, participants performed an orientation discrimination task, wherein they judged whether the second grating was rotated clockwise or anticlockwise. On the remaining 25% of trials, no gratings were presented (Omission trials; Figure 1B). Therefore, on these trials, participants had an expectation of a particular visual stimulus, but no actual visual input. Participants had no task on these trials, except for holding central fixation. The intertrial interval (ITI) was jittered between 3250 and 5250 msec. A central fixation point was presented throughout the trial and ITI. All participants completed two runs, consisting of two blocks of 64 trials each, yielding 256 trials. The contingencies between cues and gratings were counterbalanced over participants.
After the main experiment, participants performed a functional localizer task in the magnetic resonance scanner. This consisted of flickering gratings (2 Hz), presented at 100% contrast, in blocks of 12 sec. Each block contained gratings with a fixed orientation (45° or 135°) and random spatial phase and spatial frequency (either 1.0 or 1.5 cpd). The two orientations were presented consecutively (order was pseudorandomized), followed by a 12-sec blank screen, containing only a fixation point. To ensure central fixation, participants were required to press a button whenever the white fixation dot turned black (four to eight times per 36-sec block, at unpredictable times). All participants were presented with 16 blocks, yielding ∼10 min of scanning.
The same task was used during the retinotopic mapping run, in which participants viewed a wedge, consisting of a flashing black-and-white checkerboard pattern (3 Hz), first rotating clockwise for nine cycles and then anticlockwise for another nine cycles (at a rotation speed of 24 sec/cycle).
To familiarize participants with the task and to assess whether the predictive cue had behavioral consequences for perceptual processing, the fMRI session was preceded by a behavioral experiment, which was identical to the imaging experiment except for the following changes.
In the behavioral session, expectations were violated (25% of trials) not by omitting the grating stimuli but by presenting gratings with the nonpredicted orientation (e.g., 45° when 135° was predicted). During this session, participants performed the orientation discrimination task both on expected and unexpected trials. Note that the response on the orientation discrimination task is orthogonal to the orientation expectation, thereby avoiding potential confounds with respect to response bias. During the behavioral session, the orientation difference between the two gratings was determined by an adaptive staircase procedure (Watson & Pelli, 1983), which was set to an overall correct response percentage of ∼75%. The orientation differences for expected and unexpected trials were controlled by a single, joint staircase to prevent differences in physical stimulus attributes between conditions. Furthermore, orientation differences were updated only between blocks to prevent potential differences between conditions because of fluctuations in orientation difference over the course of a block. The final staircase thresholds obtained during this behavioral session were used in the fMRI session, where orientation differences were kept fixed. Effects of Expectation (Expected vs. Unexpected) and Orientation (45° vs. 135°) on accuracy and RTs were tested using two-way repeated-measures ANOVAs.
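The staircase logic can be sketched as follows. The study used the QUEST procedure of Watson and Pelli (1983); the simpler weighted up-down rule below is only a stand-in to illustrate how an adaptive procedure converges on ~75% correct, and all parameter values are hypothetical:

```python
class WeightedStaircase:
    """Weighted up-down staircase (Kaernbach-style) converging near 75% correct.

    Illustrative substitute for the QUEST procedure used in the study;
    the starting level, step size, and floor are assumed values.
    """

    def __init__(self, start=6.0, step=0.2, floor=0.5):
        self.level = start  # orientation difference between gratings (deg)
        self.step = step
        self.floor = floor  # smallest allowed orientation difference

    def update(self, correct):
        # A correct response makes the task harder (smaller tilt); an error
        # makes it easier with a 3x larger step, so the staircase settles
        # where p(correct) * step == p(error) * 3 * step, i.e. p = 0.75.
        if correct:
            self.level -= self.step
        else:
            self.level += 3 * self.step
        self.level = max(self.level, self.floor)
        return self.level
```

In the study, a single such staircase was shared by the expected and unexpected conditions and updated only between blocks, so that physical stimulus differences could not differ between conditions.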
Before the actual task, participants were given instructions and performed a short training run, consisting of two blocks of 32 trials. After this, participants completed six blocks of 64 trials, yielding a total of 384 trials. The ITI was fixed at 1700 msec.
fMRI Acquisition and Analysis
Functional images were acquired using a 1.5-T Avanto MRI system (Siemens, Erlangen, Germany) with a T2*-weighted gradient-echo EPI sequence (repetition time/echo time = 2000/40 msec, 33 transversal slices, voxel size = 3 × 3 × 3 mm, 80° flip angle). Anatomical images were acquired with a T1-weighted MP-RAGE sequence (repetition time/echo time = 2250/2.95 msec, voxel size = 1 × 1 × 1 mm, 15° flip angle). SPM8 (www.fil.ion.ucl.ac.uk/spm, Wellcome Trust Centre for Neuroimaging, London, UK) was used for image preprocessing. The first four volumes of each run were discarded to allow T1 equilibration. All functional images were spatially realigned to the mean image (yielding head movement parameters that were later used as nuisance regressors in the general linear model) and temporally aligned to the first slice of each volume. The structural image was coregistered with the functional volumes.
Data of each participant were modeled using an event-related approach within the framework of the general linear model. Regressors representing the four different conditions (the two trial types and the two expected orientations) were constructed by convolving the (expected) onsets of the first grating in each trial (i.e., 750 msec postcue) with a canonical hemodynamic response function. The auditory cues were not modeled separately, in view of the short SOA between cues and stimuli. Instead, the modeled BOLD response was taken to be an aggregate of cue-evoked and stimulus-evoked activity (or, in the case of omission trials, cue-evoked activity only). Instruction screens at the beginning of each block were included as regressors of no interest, as were head motion parameters (Lund, Norgaard, Rostrup, Rowe, & Paulson, 2005). Finally, the data were high-pass filtered (cutoff = 128 sec) to remove low-frequency signal drifts. A similar model was constructed for the data of the localizer run, with separate regressors for the two grating orientations (45° and 135°). Additionally, to visualize the time course of the BOLD signal evoked by stimulus and omission trials, the data from the experiment were modeled using a finite impulse response approach. Here, the two trial types were each modeled by eight 2-sec time bins, covering a 16-sec post-stimulus period.
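The construction of such condition regressors can be sketched as follows, using a common double-gamma approximation to the canonical hemodynamic response function (the HRF shape parameters below are widely used defaults, assumed here rather than taken from the paper):

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # repetition time in seconds, as in the study

def canonical_hrf(tr, duration=32.0):
    """Double-gamma approximation of the canonical HRF, sampled at the TR.

    The shape parameters (peak ~6 s, undershoot ~16 s, 1:6 amplitude
    ratio) are common defaults and an assumption of this sketch.
    """
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def condition_regressor(onsets_sec, n_scans, tr=TR):
    """Build one GLM regressor: stick functions at the (expected) onsets
    of the first grating, convolved with the canonical HRF."""
    sticks = np.zeros(n_scans)
    for onset in onsets_sec:
        scan = int(round(onset / tr))
        if scan < n_scans:
            sticks[scan] += 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]
```

In the study, one such regressor would be built per condition (two trial types × two expected orientations), with instruction screens and head motion parameters entering the design matrix as nuisance regressors.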
Freesurfer (surfer.nmr.mgh.harvard.edu/) was used to identify the boundaries of retinotopic areas in early visual cortex using established methods (Sereno et al., 1995). To probe the orientation specificity of the signals in visual cortex, data from the independent functional localizer run were used to split the voxels in each ROI into two populations: those responding more strongly to 45° grating stimuli and those responding more strongly to 135° grating stimuli. Specifically, for each ROI (V1, V2, and V3), we selected the 100 voxels that showed the most reliable preference for 45° (highest t value for the contrast 45°–135° gratings) and the 100 voxels that showed the most reliable preference for 135°. BOLD responses were averaged over voxels for these two populations separately. In the main experiment, we expected BOLD responses to be larger for stimuli with the orientation that voxels responded preferentially to during the functional localizer than for stimuli with the nonpreferred orientation (Kamitani & Tong, 2005; Haxby et al., 2001). Similarly, if neural responses evoked by prior expectations are feature specific, we would expect the BOLD response in voxels preferring 45° gratings to be larger for omissions of 45° stimuli than for omissions of 135° stimuli, and vice versa for voxels preferring 135° gratings. These effects were assessed by two-way repeated-measures ANOVAs, with the factors Stimulus/Omission Orientation and Voxel Orientation Preference. Feature-specific activity would be reflected by an interaction between the two factors.
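The voxel-selection step can be sketched as follows, assuming a per-voxel t-statistic for the localizer contrast (45° − 135°) has already been computed:

```python
import numpy as np

def split_by_orientation_preference(t_45_minus_135, n=100):
    """Split ROI voxels into the n most reliably 45-degree-preferring and
    the n most reliably 135-degree-preferring voxels, based on localizer
    t-values for the contrast (45 deg - 135 deg), as in the ROI analysis."""
    order = np.argsort(t_45_minus_135)
    prefer_135 = order[:n]   # most negative t: reliable 135-degree preference
    prefer_45 = order[-n:]   # most positive t: reliable 45-degree preference
    return prefer_45, prefer_135
```

BOLD responses from the main experiment are then averaged within each population separately, so that the selection is fully independent of the data being tested.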
Additionally, to assess the overall BOLD amplitude and its time course in these ROIs, we averaged parameter estimates from the finite impulse response model (Figure 2A) and the model incorporating the canonical hemodynamic response function (Figure 2B, bars) over the selected voxels.
Finally, to establish the retinotopic specificity of the neural response to omissions (and stimuli), we probed the BOLD response in visual cortex voxels that did not correspond to the retinotopic locations at which the gratings were presented. To this end, we selected voxels that did not show a positive BOLD response during the functional localizer. To ensure that we were probing visually responsive voxels, only voxels with a significant response to the rotating wedge stimulus (p < .01) were included. These voxels therefore responded to visual stimulation, but not at the retinotopic locations at which the gratings were presented (and expected) in the current study. For each ROI and each participant, we included all voxels that met these criteria (V1: mean = 95 voxels, SD = 33 voxels; V2: mean = 83 voxels, SD = 30 voxels; V3: mean = 52 voxels, SD = 24 voxels). As a comparison, we selected the same number of stimulus-responsive voxels for each ROI and each participant.
We expected a valid orientation expectation to sharpen the neural representation of the orientation of the first grating (Kok, Jehee, et al., 2012), facilitating comparison of this grating to the subsequent second (slightly tilted) grating. In this way, an improved representation of the first grating can improve performance on the orientation discrimination task. To test whether the expectation cues indeed resulted in perceptual benefits, we carried out a behavioral experiment, in which participants performed an orientation discrimination task on gratings that had either an expected or an unexpected orientation. Participants were faster on trials in which the orientation of the first grating was expected than when it was unexpected (mean RT = 612 msec vs. 622 msec, F(1, 22) = 6.2, p = .020). This indicates that a valid expectation of the orientation of the first grating led to a behavioral benefit when discriminating this grating from the second one, although the direction of rotation of the second grating (clockwise or anticlockwise) was wholly unpredictable. There was no significant difference in accuracy between the two expectation conditions (79.9% vs. 80.1%, F(1, 22) = 0.06, p = .81). There were no effects of grating orientation per se on task performance (RT: F(1, 22) = 0.2, p = .63; accuracy: F(1, 22) = 0.02, p = .90). During the behavioral session, orientation differences between the two gratings were determined by a staircase procedure (see Methods) and resulted in a final mean angle difference of 3.7° ± 0.8° (mean ± SE). These final angle differences were used for all trials during the fMRI session. Accuracy during the fMRI session was similar to that during the behavioral session (mean = 80.1%, SE = 1.8%), although RTs were slightly longer (mean = 656 msec, SE = 19 msec). There was no significant difference in task accuracy (79.3% vs. 81.0%, t(22) = 1.24, p = .23) or RT (655 msec vs. 657 msec, t(22) = 0.42, p = .68) between the two orientations.
Effects of expectation on task performance could not be probed during the fMRI session, because “unexpected” trials consisted of omitted gratings, rather than gratings with opposite orientation (Figure 1B, see Methods).
To probe representational content of the BOLD signal in visual cortex (Figure 2A, B), we estimated the BOLD response evoked in voxels preferring 45° and 135° separately, on the basis of an independent localizer data set (see Methods). As expected, there was a significant interaction between stimulus orientation and voxel orientation preference in V1, F(1, 22) = 18.97, p < .001, demonstrating that neural activity in V1 was orientation specific (Kamitani & Tong, 2005). There was no overall stronger response for either of the two grating orientations, F(1, 22) = 0.39, p > .50, and no overall stronger response in either the 45° or 135° preferring voxels, F(1, 22) < 0.1, p > .50 (Figure 2B). Next, we turned to the omission trials. If the omission-induced activity represents an unspecific surprise signal, we would not expect a significant interaction between the expected orientation and voxel orientation preference. On the other hand, if the omission-induced activity represents the prior expectation of the visual stimulus, induced by the auditory cue, we would expect such an interaction between the expected orientation and voxel orientation preference. This is indeed what we found in V1, F(1, 22) = 7.28, p = .013 (Figure 2B). This indicates that expectations about upcoming visual stimuli, induced by an auditory cue, evoked patterns of activity in primary visual cortex with similar feature specificity as those evoked by actual stimuli, despite the absence of any visual stimulation. As for the stimulus trials, there were no main effects of stimulus orientation, F(1, 22) < 0.1, p > .50, and voxel orientation preference, F(1, 22) = 0.20, p > .50 (Figure 2B), ruling out an explanation in terms of nonspecific effects of arousal.
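For a 2 × 2 repeated-measures design such as this, the interaction term can be computed as a one-sample t-test on each participant's difference of differences, with F(1, n − 1) = t². A sketch on simulated per-participant BOLD amplitudes (the data below are synthetic, not the study's):

```python
import numpy as np
from scipy import stats

def interaction_test(c45_v45, c45_v135, c135_v45, c135_v135):
    """Interaction of a 2x2 repeated-measures ANOVA (expected orientation x
    voxel preference). Each argument holds one mean BOLD amplitude per
    participant; the interaction F(1, n-1) equals t^2 from a one-sample
    t-test on the per-participant difference of differences."""
    contrast = (c45_v45 - c45_v135) - (c135_v45 - c135_v135)
    t, p = stats.ttest_1samp(contrast, 0.0)
    return t**2, p

# Simulated data: voxels respond more when their preferred orientation
# is expected (a feature-specific pattern, as found in V1).
rng = np.random.default_rng(0)
n = 23  # participants, as in the study
F, p = interaction_test(
    1.0 + 0.1 * rng.standard_normal(n),  # 45 expected, 45-preferring voxels
    0.5 + 0.1 * rng.standard_normal(n),  # 45 expected, 135-preferring voxels
    0.5 + 0.1 * rng.standard_normal(n),  # 135 expected, 45-preferring voxels
    1.0 + 0.1 * rng.standard_normal(n),  # 135 expected, 135-preferring voxels
)
```

A pure surprise signal would raise all four cells equally and yield no interaction; only a feature-specific pattern produces a nonzero difference of differences.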
In V2, we also found a significant interaction between orientation and voxel orientation preference for both the stimulus, F(1, 22) = 11.05, p = .0031, and the omission trials, F(1, 22) = 5.03, p = .035, but neither effect was significant in V3 (stimuli: F(1, 22) = 2.46, p = .13; omissions: F(1, 22) = 1.05, p = .32).
The difference in BOLD response between voxels preferring the currently presented or expected orientation and those preferring the orthogonal orientation (see Figure 2B) was largely independent of the number of voxels selected for the analysis (Figure 3), speaking to the robustness of the effects.
Finally, we probed the retinotopic specificity of the neural response to unexpected omissions. To this end, we selected separate sets of voxels that did or did not correspond to the retinotopic locations at which the grating stimuli were presented and expected (see Methods). We found that, although there was a significant positive response to both stimuli and omissions in voxels corresponding to the retinotopic grating locations in V1 (stimuli: t(22) = 8.90, p < .001; omissions: t(22) = 5.73, p < .001), V2 (stimuli: t(22) = 10.00, p < .001; omissions: t(22) = 6.46, p < .001), and V3 (stimuli: t(22) = 9.81, p < .001; omissions: t(22) = 6.52, p < .001), there were no significant responses to either stimuli or omissions in voxels that corresponded to retinotopic locations at which gratings were not expected (and presented) in V1 (stimuli: t(22) = 1.00, p = .33; omissions: t(22) = 0.99, p = .33), V2 (stimuli: t(22) < 0.1, p = .94; omissions: t(22) = 0.62, p = .54), and V3 (stimuli: t(22) = 0.18, p = .86; omissions: t(22) = 0.76, p = .45; Figure 4). This suggests that the neural response evoked by unexpected omissions was retinotopically specific.
Prior expectations affect sensory processing at the earliest stages of the cortical hierarchy (Kok, Rahnev, et al., 2012; Wyart, Nobre, & Summerfield, 2012; Arnal et al., 2011; Todorovic et al., 2011; Alink et al., 2010; Arnal, Morillon, Kell, & Giraud, 2009; Den Ouden et al., 2009; Summerfield et al., 2008), yet the neural mechanisms underlying the instantiation of the expectations themselves have remained elusive. Here, we showed that prior expectation of a specific visual stimulus evokes a pattern of activity in the primary visual cortex (V1) with similar feature specificity as that evoked by the expected stimulus, even when the stimulus itself is omitted.
The separation of voxels into populations preferring 45° and 135° gratings relied on activity patterns obtained from an independent functional localizer during which participants performed a task at fixation, rendering explanations of our results in terms of eye movements or task-related components such as attention or arousal unlikely. Specifically, because the grating stimuli were task-irrelevant during the localizer run, activity patterns in early visual cortex are not confounded with potential orientation-specific differences in task performance. During the main experiment, there were no orientation-specific differences in behavioral task performance or overall BOLD amplitude evoked by the two orientations. Finally, the design of the analysis avoided potentially confounding effects of the auditory cues, because no such cues were presented during the functional localizer run.
Previous studies (Esterman & Yantis, 2010; Puri, Wojciulik, & Ranganath, 2009) have shown that manipulating the likelihood (expectation) of an upcoming stimulus category (i.e., faces vs. houses) leads to an upregulation of activity in areas of visual cortex selective for the expected category (FFA vs. PPA). Other studies have shown that preparing for a specific stimulus category (people vs. cars; Peelen & Kastner, 2011) or exemplar (letter “X” vs. “O”; Stokes, Thompson, Nobre, et al., 2009) leads to distinguishable patterns of activity in higher-order visual cortex (object-selective and shape-selective areas, respectively). These results suggest that the locus of preparatory signals is determined by the features to be attended or expected. Until now, it was unknown whether such effects could occur in the early visual cortex, given that top–down effects are generally far more prominent in higher-order visual cortex (Lee et al., 2012; Buffalo, Fries, Landman, Liang, & Desimone, 2010; Mohr, Linder, Linden, Kaiser, & Sireteanu, 2009; Johnson, Mitchell, Raye, D'Esposito, & Johnson, 2007; Kastner, De Weerd, Desimone, & Ungerleider, 1998). Indeed, with the exception of Peelen and Kastner (2011), none of the studies mentioned above uncovered preparatory signals in V1, and in Peelen and Kastner (2011), the preparatory signal in V1 was in fact anticorrelated with detection performance (of people or cars). In this study, expectation pertained to the low-level features of the stimulus: Participants knew exactly where to expect the stimulus, and the cue informed them of the likely grating orientation. These low-level features (orientation, location, size) map well onto the receptive field properties of V1 neurons, making V1 an a priori likely locus for the stimulus templates uncovered by this study. In general, it seems likely that expectation affects cortical areas whose receptive fields match the expected features. 
The fact that participants knew not only where but also when (750 msec postcue) stimuli would appear may have boosted the effects of expectation further, analogous to the synergistic interaction reported between temporal and spatial expectation (Morillon & Barbot, 2013; Doherty, Rao, Mesulam, & Nobre, 2005).
Given that there was no explicit baseline condition included in our study (e.g., “expected omissions”), the current data cannot distinguish whether expectation enhanced the baseline activity of V1 neurons preferring the expected orientation or inhibited the activity of neurons preferring the nonexpected orientation. However, previous studies (Kok, Rahnev, et al., 2012; Todorovic et al., 2011; Wacongne et al., 2011; Den Ouden et al., 2009) have reported increased activity in sensory cortex for unexpected omissions, compared with expected omissions, arguing for an excitatory effect of expectation in the absence of bottom–up input. Additionally, modeling work has shown that expectation amplifies the baseline response of signal-selective units (Wyart et al., 2012). Together, these results suggest that expectation evokes stimulus templates in visual cortex through top–down excitation of neurons preferring the expected orientation. Note that such a prestimulus increase in neurons representing the expected stimulus may give them an advantage in the competition with units preferring other orientations once the bottom–up stimulus input arrives, leading to a relative suppression of neurons preferring nonexpected orientations. Such a mechanism could yield representational sharpening (reduced overall amplitude, but increased stimulus information) of expected stimuli, in line with predictive coding theories (Friston, 2005; Rao & Ballard, 1999), and recently demonstrated empirically (Kok, Jehee, et al., 2012; see below for further discussion).
The mechanisms through which prior expectations evoked stimulus templates in the current study may be similar to those of feature-based attention. In previous studies, feature-based attention has been shown to modulate activity in early visual cortex in a feature-specific fashion (Kamitani & Tong, 2005), an effect that spreads across the visual field to nonstimulated regions of cortex (Serences & Boynton, 2007). However, the current study differs from these previous studies in several respects. First, in the studies by Serences and Boynton (2007) and Kamitani and Tong (2005), a stimulus was always presented on the screen. Spreading of effects of feature-based attention to the nonstimulated hemisphere (Serences & Boynton, 2007) may be mediated by callosal connections between hemispheres (Kennedy & Dehay, 1988). In other words, unlike this study, these studies cannot distinguish between top–down modulation and top–down driving of activity in V1. Second, in this study, the expectation cue (pertaining to the likely orientation of the first grating) was orthogonal to the response on the task (pertaining to the direction the second grating was tilted in). In other words, participants could have ignored the expectation cues and still have performed the task successfully. Therefore, our manipulation of prior expectation is more implicit than in studies explicitly manipulating participants' task (e.g., “detect cars”). The fact that expectation resulted in behavioral improvement and detectable stimulus templates in visual cortex suggests that preparatory activity can be induced by factors more implicit than directly changing the task set. Future studies will be needed to investigate whether expectation results in the instantiation of stimulus templates even when the expected feature is fully task irrelevant, in line with findings of task-irrelevant expectation effects on stimulus-evoked activity (Alink et al., 2010; Den Ouden et al., 2009).
Third, the neural response to unexpected omissions was retinotopically specific (Figure 4), whereas the effects of feature-based attention have been shown to spread across the visual field (Serences & Boynton, 2007). It should be noted, however, that the current data did not allow us to assess the orientation specificity of activity for retinotopic locations at which no gratings were presented or expected, because the functional localizer contained only gratings overlapping with the locations where stimuli were presented in the experiment. Future research may shed further light on this interesting issue.
One may wonder how the effects of expectation reported here relate to those of mental imagery. Mental imagery has been shown to modulate both perception (Winawer, Huk, & Boroditsky, 2010; Pearson, Clifford, & Tong, 2008; Finke & Schmidt, 1977) and activity in sensory cortex (Albers et al., 2013; Lee et al., 2012; Mohr et al., 2009) in a feature-specific way, and this study shows that expectation does so as well. Therefore, it seems plausible that explicit mental imagery and implicit perceptual expectations result in similar sensory templates being set up in visual cortex. An exciting avenue for further research would be to directly compare the effects of prior expectation and visual imagery on subsequent stimulus-evoked activity.
Overall, one intriguing explanation of our results is that expectation leads to a stimulus template being set up in visual cortex before the stimulus is presented, allowing subsequent input matching this template to be processed efficiently. In a recent study with a very similar design, we showed that a valid expectation (i.e., matching templates) leads to reduced overall activity but improved stimulus representation (Kok, Jehee, et al., 2012), presumably reflecting a sharpened population response. Traditional models of attention would rather predict a boosted stimulus response as a result of attention (Sylvester, Shulman, Jack, & Corbetta, 2009; Maunsell & Treue, 2006; Mangun & Hillyard, 1991), suggesting that expectation and attention may have separable effects (Kok, Rahnev, et al., 2012; Summerfield & Egner, 2009). Previous studies on predictive stimulus templates have reported effects in the medial frontal cortex, suggesting that these regions may be the source of the templates evoked in early visual cortex reported here (Bar et al., 2006; Summerfield et al., 2006). Alternatively, the patterns we observed may have been instantiated only after it had become apparent that the expected stimulus would not appear, signaling the mismatch between expected and actual outcome (i.e., prediction error; Arnal & Giraud, 2012; Den Ouden, Kok, & De Lange, 2012; Summerfield & Koechlin, 2008; Friston, 2005; Rao & Ballard, 1999). The stimulus templates uncovered in the current study are compatible with either “prediction” or “prediction error” signals, because the temporal resolution of the BOLD signal does not allow us to distinguish between pre- and poststimulus signals.
Electrophysiological studies in humans have shown increased responses to unexpected omissions compared with expected omissions in auditory cortex as early as 100–150 msec after a tone was expected (SanMiguel, Saupe, & Schröger, 2013; SanMiguel, Widmann, et al., 2013; Todorovic et al., 2011; Wacongne et al., 2011). These results cannot distinguish whether the activity is related to the prediction itself or the prediction error, because prediction error responses to actually presented stimuli have also been observed as early as 100 msec poststimulus (Todorovic & De Lange, 2012; Todorovic et al., 2011; Wacongne et al., 2011). Interestingly, neurons in the anterior temporal cortex of macaques have been shown to fire in anticipation of their preferred stimulus during paired-association tasks (Meyer & Olson, 2011; Erickson & Desimone, 1999; Sakai & Miyashita, 1991). Furthermore, electrophysiological studies in monkeys (Xu, Jiang, Poo, & Dan, 2012) and humans (De Lange, Rahnev, Donner, & Lau, 2013) suggest that expectation leads to prestimulus activity changes in early visual cortex. Together, these results hint at the possibility of forming predictive stimulus templates in sensory cortex before stimulus onset.
In summary, the current study shows that prior expectation of a specific visual stimulus evokes a feature-specific pattern of activity in the primary visual cortex that correlates positively with that evoked by the actual stimulus, possibly reflecting the formation of a stimulus template to efficiently process expected sensory inputs (Kok, Jehee, et al., 2012).
This study was supported by the Netherlands Organisation for Scientific Research (NWO VENI 451-09-001 awarded to F. P. d. L.).
Reprint requests should be sent to Peter Kok, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands, or via e-mail: email@example.com.