Abstract

Perceptual judgments can be based on two kinds of information: state-based perception of specific, detailed visual information, or strength-based perception of global or relational information. State-based perception is discrete in the sense that it either occurs or fails, whereas strength-based perception is continuously graded from weak to strong. The functional characteristics of these types of perception have been examined in some detail, but whether state- and strength-based perception are supported by different brain regions has been largely unexplored. A consideration of empirical work and recent theoretical proposals suggests that parietal and occipito-temporal regions may be differentially associated with state- and strength-based signals, respectively. We tested this parietal/occipito-temporal state/strength hypothesis using fMRI and a visual perception task that allows separation of state- and strength-based perception. Participants made same/different judgments on pairs of faces and scenes using a 6-point confidence scale where “6” responses indicated a state of perceiving specific details that had changed, and “1” to “5” responses indicated judgments based on varying strength of relational match/mismatch. Regions in the lateral and medial posterior parietal cortex (supramarginal gyrus, posterior cingulate cortex, and precuneus) were sensitive to state-based perception and were not modulated by varying levels of strength-based perception. In contrast, bilateral fusiform gyrus activation was increased for strength-based “different” responses compared with misses and did not show state-based effects. Finally, the lateral occipital complex showed increased activation for state-based responses and additionally showed graded activation across levels of strength-based perception. These results offer support for a state/strength distinction between parietal and temporal regions, with the lateral occipital complex at the intersection of state- and strength-based processing.

INTRODUCTION

Introspection suggests that we make use of two different kinds of visual information when making perceptual decisions. For example, imagine you were given two pictures of the same person and had to decide if the person was the same in both pictures or if something about their appearance had changed. In some cases, you might be able to pick out specific details that are different—for example, the person may have shorter hair in one picture compared with the other. In other situations, you may be completely unable to pinpoint any specific changes but nevertheless have a feeling that something is different. This feeling of difference may vary from weak to strong (i.e., you think there may be a difference, or you are sure something is different). This example suggests that some perceptual discriminations are associated with discrete states in which we have conscious access to specific, detailed information that supports the decision, whereas in other cases, perceptual discriminations are made on the basis of the strength of match/mismatch between two items.

Recent research has shown that these two kinds of judgments jointly and independently contribute to performance on perceptual tasks. In a series of same/different visual discrimination experiments, Aly and Yonelinas (2012) found evidence that strength-based perception was affected by manipulations of global featural relationships, was characterized by gradual evidence accumulation over time, and was associated with a feeling of knowing that something had changed, but with little to no ability to identify what the change was. In contrast, state-based perception was driven by manipulations of discrete features (e.g., a window in one scene that is absent in another), was characterized by a sudden onset over time, and was associated with consciously perceiving specific details that had changed. These experiments suggested that visual change detection is not a unitary phenomenon but can be decomposed into very different kinds of perceptual judgments.

This work joins a growing body of literature that suggests the utility of differentiating between different types of perceptual judgments (e.g., Rensink, 2000, 2004; see also Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006), rather than assuming that individuals are simply aware or unaware of perceptual information (e.g., Mitroff, Simons, & Levin, 2004; Rensink, O'Regan, & Clark, 2000; O'Regan, Rensink, & Clark, 1999; Levin & Simons, 1997; Rensink, O'Regan, & Clark, 1997). For example, Rensink (2000, 2004) has argued that, in addition to consciously identifying specific visual changes, individuals may “sense” a change in the absence of being visually aware of what the change is; that is, detection without identification (see also Galpin, Underwood, & Chapman, 2008; but see Simons, Nevarez, & Boot, 2005; we will return to these issues in the Discussion).

Neural Correlates of Visual Change Detection—Evidence and a New Hypothesis

Despite the evidence suggesting the importance of differentiating types of perceptual judgments, research on the neural underpinnings of visual change detection has focused primarily on dichotomous same/different or aware/unaware assessments of performance (e.g., Schwarzkopf, Silvanto, Gilaie-Dotan, & Rees, 2010; Tseng et al., 2010; Large, Cavina-Pratesi, Vilis, & Culham, 2008; Beck, Muggleton, Walsh, & Lavie, 2005; Turatto, Sandrini, & Miniussi, 2004; Beck, Rees, Frith, & Lavie, 2001; Huettel, Güzeldere, & McCarthy, 2001).1 Two exceptions are recent ERP studies, which have examined the electrophysiological signatures associated with detection, identification, and localization of visual changes, and found partially distinct ERP components for perceptual responses based on these different kinds of information (Busch, Dürschmid, & Herrmann, 2010; Busch, Fründ, & Herrmann, 2009). Insofar as state- and strength-based perception entail different levels of detection, identification, and localization of changes (Aly & Yonelinas, 2012; also see Busch et al., 2009, 2010; Rensink, 2000, 2004), these ERP results lend support to the prediction that state- and strength-based visual perception may have at least partially distinct neural signatures.

Additional evidence in favor of separable state- and strength-based neural processing comes from the findings that (1) damage to the hippocampus impairs strength- but not state-based scene perception and (2) hippocampal activation in healthy adults linearly tracks strength-based perceptual judgments on scenes and is not differentially related to state-based perception (Aly, Ranganath, & Yonelinas, 2013). These data therefore indicate that the hippocampus plays a selective role in strength-based perception. In that study, we also found that activation in the parahippocampal cortex linearly tracks strength-based scene perception, but we did not find any state-based effects in the medial temporal lobe. Thus, given the finding of selective strength-based processing in the medial temporal lobe for scene stimuli, an open question is which regions support state-based perception. Our aims in the current study were to determine which regions are involved in state-based perception and, more broadly, to test specific predictions about potentially dissociable roles of parietal and occipito-temporal areas in state- and strength-based perception.

On the basis of a consideration of existing empirical and theoretical work, we predicted that parietal and occipito-temporal areas would be differentially associated with state- and strength-based perception, which we refer to here as the parietal/occipito-temporal state/strength hypothesis. For example, a change detection study by Beck et al. (2001) found that conscious change detection, compared with change blindness, was associated with activation in parieto-frontal regions as well as occipito-temporal areas, but change blindness, compared with no change, was associated only with occipito-temporal activation. Thus, the occipito-temporal activation, because it was present in weak form when individuals reported no change and present in stronger form when a change was consciously detected, might be related to strength-based perceptual responses. In contrast, the parietal activation, because it was present only when individuals were consciously aware of a change and absent when a change was unreported, may be related to discrete, state-based perception. That is, parietal areas may be correlated with perceptual judgments based on conscious access to specific details, but activation in these areas may not vary with strength-based perceptual decisions.

Several other studies support the idea that occipito-temporal regions provide graded perceptual signals, whereas parietal regions are associated with more discrete perceptual experiences. For example, graded levels of activation in the fusiform gyrus and lateral occipital complex (LOC) are correlated with the ability to recognize masked objects (Bar et al., 2001; Grill-Spector, Kushnir, Hendler, & Malach, 2000; see also Sligte, Scholte, & Lamme, 2009). In contrast, right parietal regions have been implicated in discrete perceptual switches associated with bistable perception of ambiguous figures (Knapen, Brascamp, Pearson, van Ee, & Blake, 2011; Britz, Landis, & Michel, 2009; Kleinschmidt, Buchel, Zeki, & Frackowiak, 1998) and binocular rivalry (Britz, Pitts, & Michel, 2011; Knapen et al., 2011; Zaretskaya, Thielscher, Logothetis, & Bartels, 2010; Lumer, Friston, & Rees, 1998). Like state-based visual change detection, bistable perception and binocular rivalry are characterized by discrete visual experiences, rather than continuously graded ones. These findings therefore offer further evidence that parietal regions may be involved in perceptual experiences that are discrete or state based.

Our hypothesis is also in line with the “global neuronal workspace” model, according to which graded activation in occipito-temporal areas and the all-or-none engagement of an extended parietal system are associated with different kinds or levels of conscious awareness (Dehaene et al., 2006; see also Lamme, 2003; Kanwisher, 2001). Together, this empirical and theoretical work suggests that parietal and occipito-temporal areas may be associated with discrete/state-based and graded/strength-based perception, respectively.

The Current Study

We investigated perception in a same/different discrimination task; individuals viewed sequentially presented pairs of faces or scenes that were either identical or differed in that the images were slightly contracted or expanded relative to one another (Figure 1; see also Aly et al., 2013; Aly & Yonelinas, 2012). The manipulation was a "pinching" or "spherizing" of the face or scene, which keeps the size of the images the same but contracts ("pinches") or expands ("spherizes") the image, with the largest changes at the center and gradually decreasing changes toward the periphery. These changes alter the configural or relational information within the faces and scenes (i.e., the relative distance or position between different components or features) without adding or removing any specific objects or details. Previous work (Aly & Yonelinas, 2012) has shown that individuals make perceptual judgments on these stimuli with a combination of strength-based assessments of relational match/mismatch and state-based detection and identification of specific differences. The basis for these state-based responses may be relatively local changes, such as changes in the orientation or size of specific features when the images are expanded or contracted.

Figure 1. 

Example trial for the change detection task. Scenes are shown here, but faces were also used. The manipulations consisted of contracting or expanding the images relative to one another, keeping the size of the image the same. This manipulation changes the configural or relational information within the images without adding or removing any components or features. In the example shown here, the first image is contracted inward at the center whereas the second image is expanded outward. The confidence scale was shown on the screen while the second image was presented and then removed. Responses could be made while the second image was on the screen or in the 1950 msec before the onset of the next (experimental or null) trial.

On each trial, participants made same/different judgments using a 6-point confidence scale, which allowed us to separate state- and strength-based perception (Aly & Yonelinas, 2012). To rate the strength of perceptual match, individuals rated their confidence from 1 to 5, with higher confidence levels indicating increasing confidence in difference. If a perceptual decision was based on identification of specific, detailed differences, individuals reported on the occurrence of this state by using a “6” response. We predicted that (1) parietal regions would be sensitive to state-based identification of difference and would not be modulated by varying levels of confidence in strength-based responses and (2) occipito-temporal regions would be sensitive to strength-based detection of difference.

METHODS

Part of the current data set was reported in Aly et al. (2013). In that article, we analyzed data only from a subset of trials (scene trials; see below) within medial temporal lobe ROIs to complement the patient study that examined how medial temporal lobe lesions affect scene perception. The medial temporal lobe is not the focus of the current article, but we will return to those results in the Discussion in the broader context of the current findings. In the current article, we analyze both face and scene trials and focus on regions outside the medial temporal lobe.

Participants

Eighteen healthy individuals (nine men; age: M = 27 years, SD = 4.4 years; education: M = 17.2 years, SD = 2.3 years) took part in the study, which was approved by the University of California, Davis, Institutional Review Board. Informed consent was obtained before the study, and participants were paid for their time. All but one participant were right-handed.

Behavioral Paradigm

Stimuli, Design, and Procedure

The stimuli and behavioral paradigm were modified from a behavioral study used previously (Aly & Yonelinas, 2012). Stimuli were grayscale scenes and faces. For each stimulus, two altered versions were created in Adobe Photoshop. The first version was expanded outward slightly using the “spherize” option, and the second version was contracted inward slightly using the “pinch” option. As noted above, these manipulations keep the sizes of the images the same, but contract (“pinch”) or expand (“spherize”) the images with the largest changes at the center and gradually decreasing changes toward the periphery. On “same” trials, two identical stimuli were presented (i.e., two presentations of the same “pinched” stimulus or two presentations of the same “spherized” stimulus, with these trial types occurring equally often). On “different” trials, the two altered versions of the stimulus were presented (i.e., “pinched” followed by “spherized” or vice versa, with these trial types occurring equally often). Pinched and spherized stimuli occurred equally frequently as the first and second stimuli.

Stimuli were projected on a screen that participants viewed on a mirror attached to the head coil. Sequential stimulus presentations were used to reduce excessive eye movements that may impact the BOLD response (Kimmig et al., 2001). Each trial consisted of a 1000-msec presentation of the first image, followed by a 50-msec dynamic noise mask, then the corresponding “same” or “different” image for 1000 msec (Figure 1). This was followed by a fixation screen for 1950 msec. Individuals responded with a 1–6 confidence judgment either while the second image was on the screen or during the fixation period following. The confidence scale was shown on the screen while the second image was presented, and then both the image and the confidence scale were removed at the same time. Participants made confidence responses with two button boxes. All participants used the left hand for “same” responses (1–3) and the right hand for “different” responses (4–6).

Participants were told to respond with a “6” (perceive different) judgment only if they experienced a mental state in which they were able to provide specific, qualitative details about how the two images differed. If individuals were confident that the images were different but were not able to provide such details (i.e., if the discrete state did not occur), they were told to respond with a “5” response. A “4” indicated “guess different,” “3” was “guess same,” “2” was “maybe same,” and “1” was “sure same.”

We used these “perceive” instructions for the “6” judgment so that we could treat these responses as primarily reflecting state-based perception. In a previous study, we found that, with these instructions, individuals made very few false alarms for “perceive different” judgments and for hits they were typically able to go on to correctly identify the specific aspect of the images that had changed (Aly & Yonelinas, 2012). If a 1–6 scale were used without this instruction for the “6” responses, however, “6” could reflect a mixture of state-based perception and high-confidence strength-based perception; this would complicate the interpretation of brain activation differences for “6” compared with other responses. Additionally, the use of this “perceive different” response allowed us to identify state-based perceptual trials on a trial-by-trial basis, which we could not do if we estimated the occurrence of state-based perception (across trials) based on a receiver operating characteristic (ROC) analysis (see Aly & Yonelinas [2012] and Behavioral Data Analysis section below).

The experiment was divided into eight runs of 90 trials each. Each run lasted approximately 5 min and consisted of 30 face trials (half “different”), 30 scene trials (half “different”), and 30 null trials. Null trials were 2-sec presentations of the fixation cross. Each run began with 10 sec of fixation to allow time for signal normalization and ended with 12 sec of fixation to allow time for the response to the final trial to be collected.

Order of trial types (scene “same,” scene “different,” face “same,” face “different,” null) was optimized using optseq2 (Dale, 1999; surfer.nmr.mgh.harvard.edu/optseq/). The duration of null events ranged from 2 sec (i.e., one null trial) to 10 sec (i.e., five null trials in a row). The mean duration of null events was 3 sec, with a standard deviation of 1.5 sec. Eight trial sequences were chosen and assigned to each of the eight runs to form eight different orders, so that each sequence was used in each run across participants. Each of these eight orders was run in two counterbalancing conditions, allowing each item to be tested as both “same” and “different” for different participants.

Before the experiment, participants were given instructions, looked at sample images, and did a short practice phase while in the scanner (practice was not scanned).

Behavioral Data Analysis

Performance was assessed by plotting confidence-based ROCs, in which hits are plotted against false alarms at different levels of response confidence (Macmillan & Creelman, 2005; Green & Swets, 1966). The left-most point on the ROC reflects the probability of a “1” (sure “same”) response for “same” trials (x axis) and “different” trials (y axis). The next point is the (cumulative) probability of a “1” or “2” response for “same” and “different” trials. Subsequent points add the other confidence responses one at a time.
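
For illustration, a minimal sketch of this cumulative construction is given below (in Python, with hypothetical response counts rather than the reported data). Following the description above, the x axis cumulates responses on "same" trials and the y axis cumulates responses on "different" trials, starting from the "1" (sure "same") bin.

```python
# Minimal sketch of constructing confidence-based ROC points from response counts.
# Counts are hypothetical; bins are ordered 1 ("sure same") to 6 ("perceive different").
import numpy as np

same_counts = np.array([40, 60, 70, 50, 18, 2])    # "same" trials, bins 1..6
diff_counts = np.array([15, 30, 45, 60, 55, 35])   # "different" trials, bins 1..6

# Cumulate one confidence bin at a time: the first point is the probability of a
# "1" response, the next of a "1" or "2" response, and so on.
p_same_cum = np.cumsum(same_counts) / same_counts.sum()   # x axis ("same" trials)
p_diff_cum = np.cumsum(diff_counts) / diff_counts.sum()   # y axis ("different" trials)

for x, y in zip(p_same_cum, p_diff_cum):
    print(f"x = {x:.2f}, y = {y:.2f}")
```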

For each individual, ROCs were fit using maximum likelihood estimation to obtain parameter estimates of state- and strength-based perception (Aly & Yonelinas, 2012; also see Yonelinas, 1994, 2001). There were seven free parameters for each ROC (five criterion points and estimates of state- and strength-based perception). The parameter for state-based perception reflects a discrete threshold that is either exceeded or not and is estimated by the upper x intercept of the ROC. The parameter for strength-based perception reflects the curvilinearity of the ROC (measured as d′), which is related to the distance between two overlapping Gaussian strength distributions for “same” and “different” trials.
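
The sketch below illustrates one way such a model could be fit by maximum likelihood (again in Python, with hypothetical counts). It assumes a specific parameterization that is consistent with the description above but is not necessarily identical to the authors' implementation: on "different" trials, state-based perception occurs with probability p_state and always yields the highest-confidence "different" response; otherwise, responses are governed by an equal-variance Gaussian strength model with sensitivity d′ and five criteria.

```python
# Minimal sketch of a maximum likelihood fit of a state + strength ROC model.
# Assumed parameterization: p_state = probability of state-based perception on
# "different" trials (always yielding a "6"); d_prime = distance between the
# Gaussian strength distributions for "same" and "different" trials.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def response_probs(p_state, d_prime, criteria):
    """Predicted probability of each confidence response (bins 1..6)."""
    c = np.concatenate(([-np.inf], np.sort(criteria), [np.inf]))
    p_same = np.diff(norm.cdf(c, loc=0.0))            # strength distribution, "same" trials
    p_strength = np.diff(norm.cdf(c, loc=d_prime))    # strength distribution, "different" trials
    p_diff = (1.0 - p_state) * p_strength
    p_diff[-1] += p_state                             # state-based trials yield a "6"
    return p_same, p_diff

def neg_log_likelihood(params, same_counts, diff_counts):
    p_state, d_prime, criteria = params[0], params[1], params[2:]
    if not 0.0 <= p_state <= 1.0:
        return np.inf
    p_same, p_diff = response_probs(p_state, d_prime, criteria)
    eps = 1e-9                                        # avoid log(0)
    return -(np.sum(same_counts * np.log(p_same + eps)) +
             np.sum(diff_counts * np.log(p_diff + eps)))

# Hypothetical response counts for bins 1..6.
same_counts = np.array([40, 60, 70, 50, 18, 2])
diff_counts = np.array([15, 30, 45, 60, 55, 35])

x0 = np.array([0.1, 1.0, -1.0, -0.5, 0.0, 0.5, 1.0])  # p_state, d', five criteria
fit = minimize(neg_log_likelihood, x0, args=(same_counts, diff_counts),
               method="Nelder-Mead")
print(f"state estimate = {fit.x[0]:.3f}, strength (d') estimate = {fit.x[1]:.3f}")
```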

fMRI Acquisition and Preprocessing

Participants were scanned at the University of California, Davis, MRI Facility for Integrative Neuroscience. fMRI data were collected on a 3T Siemens Skyra scanner with a 32-channel head coil. Functional images were obtained with a gradient-echo EPI sequence (repetition time = 2000 msec, echo time = 25 msec, flip angle = 90 degrees, field of view = 205 mm, voxel size = 3.2 mm isotropic). Each functional volume consisted of 34 slices oriented parallel to the AC–PC line, which were acquired in an interleaved sequence. Coplanar high-resolution (1.0 mm isotropic) T1-weighted structural images were also acquired for each participant using an MP-RAGE sequence.

All preprocessing and data analysis were conducted using Statistical Parametric Mapping software (SPM8; www.fil.ion.ucl.ac.uk/spm/software/spm8). Preprocessing for all participants included, in order, slice-timing correction, motion correction, coregistration of the structural image to the mean EPI, and segmentation of the structural image. All of the participants' segmented gray and white matter images were then imported into the DARTEL toolbox (Ashburner, 2007) in SPM8 to create an average gray and white matter template for this group of participants. The template and individual participant flow fields were then used to normalize each participant's EPIs and structural image to MNI space. The EPIs were also resampled to 1.5 mm isotropic voxel dimensions and smoothed with a 6-mm FWHM Gaussian kernel. The structural images were then averaged together for the purpose of displaying the functional data.

fMRI Data Analysis

Event-related BOLD responses were analyzed using a general linear model (GLM). Activity related to each trial was modeled with a stick function representing the onset of the first image, convolved with the canonical hemodynamic response function. Serial correlations in the time series were accounted for using the autoregressive model [AR(1)]. A high-pass filter of 128 sec was used. Each of the eight runs was modeled separately.
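
As an illustration of this kind of design matrix (not the SPM8 pipeline actually used), the sketch below builds a comparable event-related design with the nilearn package: trial onsets are modeled as stick functions (zero duration) convolved with the canonical HRF, with a cosine drift model implementing the 128-sec high-pass filter. Onset times and condition labels are hypothetical.

```python
# Minimal sketch of an event-related design matrix analogous to the one described
# above, using nilearn instead of SPM8. The AR(1) serial-correlation model would be
# applied at the model-estimation stage, not when building the design matrix.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR = 2.0                                   # repetition time (sec)
n_scans = 150                              # volumes in one hypothetical run
frame_times = np.arange(n_scans) * TR

events = pd.DataFrame({
    "onset": [12.0, 16.0, 20.0, 24.0],     # onset of the first image (sec)
    "duration": [0.0, 0.0, 0.0, 0.0],      # stick functions
    "trial_type": ["scene_state_different", "face_correct_same",
                   "scene_strength_different", "face_miss"],
})

design = make_first_level_design_matrix(
    frame_times,
    events=events,
    hrf_model="spm",                       # canonical HRF
    drift_model="cosine",
    high_pass=1.0 / 128,                   # 128-sec high-pass filter
)
print(design.columns.tolist())
```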

We ran three GLMs to examine the BOLD activation related to state- and strength-based perception. For the first GLM, the covariates of interest were the following, separately for scenes and faces: state-based “different” responses (“6” responses on “different” trials), strength-based “different” responses (4–5), misses (1–3 on “different” trials), correct “same” responses (1–3 on “same” trials), and false “different” responses (4–6 on “same” trials). For the second GLM, there was a covariate of interest for each confidence bin (i.e., 1–6) for each of the four kinds of trials (i.e., scene different/same and face different/same). Finally, because RT can significantly modulate BOLD activation (Grinband, Wager, Lindquist, Ferrera, & Hirsch, 2008), we reran the first GLM including RT as a covariate. One participant had to be excluded from that analysis because RTs for state-based perception were not recorded. For all analyses, covariates of no interest were the six motion covariates for each run estimated during the realignment step of preprocessing.
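
A minimal sketch of how trials might be sorted into the covariates of interest for the first GLM is shown below (condition names, trial records, and onsets are hypothetical, not the labels used in the actual analysis).

```python
# Minimal sketch of binning trials into the five covariates of interest of the
# first GLM, separately for faces and scenes, based on the 1-6 confidence response.
from collections import defaultdict

def classify_trial(is_different, response):
    """Map an objective trial type and a 1-6 response onto a GLM condition."""
    if is_different:
        if response == 6:
            return "state_different"       # "6" on "different" trials
        if response in (4, 5):
            return "strength_different"    # "4"/"5" on "different" trials
        return "miss"                      # "1"-"3" on "different" trials
    if response in (1, 2, 3):
        return "correct_same"              # "1"-"3" on "same" trials
    return "false_different"               # "4"-"6" on "same" trials

# Hypothetical trial records: (category, is_different, response, onset_in_sec)
trials = [("scene", True, 6, 12.0), ("face", False, 2, 16.0),
          ("scene", True, 4, 20.0), ("face", True, 3, 24.0)]

onsets = defaultdict(list)
for category, is_different, response, onset in trials:
    condition = f"{category}_{classify_trial(is_different, response)}"
    onsets[condition].append(onset)        # stick-function onsets for this regressor

for condition, onset_list in sorted(onsets.items()):
    print(condition, onset_list)
```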

Contrast coefficients for each run were weighted to account for different numbers of trial types in each run. Contrast images from first-level analyses were then entered into second-level analyses. The AFNI program 3DClustSim (Cox, 1996; afni.nimh.nih.gov/pub/dist/doc/program_help/3dClustSim.html) was used to determine the cluster correction for p < .05 across the whole brain (p < .001 and k = 86 voxels).

Given our hypotheses about parietal and occipito-temporal regions, we extracted parameter estimates for parietal and occipito-temporal clusters that were significantly active in the contrasts of interest. For these ROI analyses, MarsBaR (Brett, Anton, Valabregue, & Poline, 2002) was used to save SPM clusters as ROIs. Parameter estimates were then extracted and averaged for the voxels within the cluster, separately for each confidence bin. The contrasts for which parameter estimates were extracted were "different" trials in each confidence bin versus the baseline null trials. "1" responses ("sure same") were not included when extracting parameter estimates separately for each confidence bin because of insufficient trial numbers; we used a criterion of at least 10 responses in a confidence bin for a participant to be included, and 13 of the 18 participants did not meet this criterion for "1" responses on "different" trials (summed across scenes and faces; see behavioral data in Figure 2A).
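
The per-bin inclusion rule described above can be made concrete with a short sketch (participant counts below are hypothetical): a participant contributes a confidence bin only if they made at least 10 responses in that bin on "different" trials, summed across scenes and faces.

```python
# Minimal sketch of the minimum-trial inclusion criterion for the ROI analyses.
import numpy as np

MIN_TRIALS = 10

# rows = participants, columns = confidence bins 1..6 (counts on "different" trials,
# summed across scenes and faces); values are hypothetical
different_counts = np.array([
    [3, 14, 22, 30, 25, 26],
    [1, 11, 18, 28, 31, 31],
    [12, 15, 20, 26, 24, 23],
])

included = different_counts >= MIN_TRIALS   # boolean mask: participant x bin
print("participants meeting the criterion per bin:", included.sum(axis=0))
```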

Figure 2. 

Behavioral data. Average number of responses for each of the six confidence bins for “same” and “different” trials are shown on the left, for faces (A) and scenes (B). Aggregate receiver operating characteristics and parameter estimates of state- and strength-based perception are shown on the right, for faces (A) and scenes (B). The upper x intercept of the ROC provides an estimate of the probability of state-based perception, and the curvilinearity in the ROC is used to estimate strength-based perception. Note that state- and strength-based perception are on different scales (probability for state-based perception and d′ for strength-based perception), so the magnitude of the estimates for state- and strength-based perception are not comparable. All error bars depict SEM.

Only “different” trials were used when extracting parameter estimates across confidence to avoid confounding objective stimulus changes with subjective changes in response confidence (i.e., because “different” trials will contribute a declining proportion of trials, and “same” trials an increasing proportion of trials, as confidence in difference decreases). Additionally, scenes and faces were weighted equally in all contrasts, so that differences in scene/face trial proportions across confidence bins would not confound interpretation of the results.

RESULTS

Behavioral Data

The average number of responses in each of the six confidence bins is shown on the left in Figure 2 (faces in Figure 2A and scenes in Figure 2B). Replicating our previous study with a similar procedure (Aly & Yonelinas, 2012), when individuals made state-based “6” responses, they were very accurate, making almost no false alarms (1% for both faces and scenes). This suggests that individuals were following instructions and reserving those responses for when they were able to report specific differences between images.

Aggregate ROC curves and parameter estimates are shown on the right in Figure 2 (faces in Figure 2A and scenes in Figure 2B). Analyses of the ROCs showed that performance was based on a combination of discrete states of perceiving specific differences (upper x intercept of the ROC and “state” parameter estimate in the inset) and assessments of the strength of overall match (degree of curvilinearity in the ROC and “strength” parameter estimate in the inset).

fMRI Data

Overall Change Detection

First, to ensure that the current task recruited a change detection network similar to that observed in prior studies (e.g., Large et al., 2008; Beck et al., 2001; see also Pessoa & Ungerleider, 2004), we conducted a preliminary analysis in which we examined the regions associated with successful change detection, without differentiating between state- and strength-based perception. Specifically, we examined which areas were more active when individuals correctly identified a "different" trial (i.e., "4," "5," and "6" responses on "different" trials) than when they correctly identified a "same" trial (i.e., "1," "2," and "3" responses on "same" trials). We found activation in a network of occipito-temporal and parieto-frontal regions (Figure 3 and Table 1). These areas included bilateral intraoccipital sulcus, fusiform gyrus and LOC, inferior parietal cortex (including and extending beyond the supramarginal gyrus), and inferior frontal gyrus. We can therefore be reasonably confident that our task engaged similar areas to those recruited by other change detection paradigms (e.g., Large et al., 2008; Beck et al., 2001; see also Pessoa & Ungerleider, 2004).

Figure 3. 

A network of occipito-temporal and parieto-frontal regions is associated with successful change detection. These regions included bilateral intraoccipital sulcus, fusiform gyrus and LOC, inferior parietal cortex, and inferior frontal gyrus. See Table 1 for a complete list of activated regions (different > same).

Table 1. 

MNI Coordinates for the Peak Voxels in Each Cluster, along with Their t Values

Region    Coordinates x y z (mm)    t(17)
Correct Different > Correct Same 
L precentral gyrus −33 −10 63 17.66 
R cerebellum 20 −51 −20 15.11 
L inferior frontal gyrus −48 30 −2 11.01 
L LOCa −44 −49 −12 9.65 
L thalamus −17 −22 1 8.81 
R parahippocampal cortex 31 −48 −16 8.34 
R LOCa 42 −72 −9 8.31 
L putamen −31 −12 −3 8.05 
L intraoccipital sulcus −27 −69 28 7.75 
R intraoccipital sulcus 31 −74 27 7.57 
L cingulate gyrus −6 −21 43 6.67 
R inferior parietal cortex 62 −25 48 6.56 
L medial frontal gyrus −3 38 42 6.51 
L insula −36 −4 12 6.38 
L posterior hippocampus −15 −34 2 6.05 
R middle frontal gyrus 48 9 22 5.88 
L parahippocampal cortex −27 −45 −10 5.76 
R inferior frontal gyrus 51 35 0 5.20 
L orbitofrontal cortex −26 24 −21 5.03 
L precuneus −11 −70 30 4.84 
L inferior parietal cortex −54 −42 43 4.65 
L amygdala −16 −4 −15 4.49 
 
Different State > Different Strength 
L LOC −36 −43 −20 7.16 
L supramarginal gyrus −60 −31 34 7.02 
L amygdala −30 −9 −12 6.33 
R supramarginal gyrus 59 −21 39 5.98 
R LOC 53 −66 −6 5.95 
L postcentral gyrus −41 −22 54 5.62 
R inferior frontal gyrus 36 29 15 5.00 
 
Different Strength > Different State 
R posterior cingulate cortex 15 −55 19 7.28 
R precuneus 11 −64 46 6.75 
L posterior cingulate cortex −17 −57 22 5.65 
R superior frontal gyrus 23 −1 46 5.07 
 
Different Strength > Miss 
L precentral gyrus −45 −21 51 14.63 
R cerebellum 21 −51 −22 7.63 
L fusiform gyrus −43 −51 −12 6.27 
L putamen −28 −6 −3 5.02 
R fusiform gyrus 43 −52 −21 4.61 
L cingulate gyrus −4 −9 49 4.50 

a These lateral occipital complex clusters extended medially to the fusiform gyrus.

State- and Strength-based Perception

The “state” contrast was correct state-based “different” responses (i.e., “6” responses on “different” trials) greater than correct strength-based “different” responses (i.e., “4” and “5” responses on “different” trials). The “strength” contrast was correct strength-based “different” responses (i.e., “4” and “5” responses on “different” trials) greater than misses (i.e., “1,” “2,” and “3” responses on “different” trials).2

We first looked for interactions between “state” and “strength” contrasts and stimulus type. A 2 (Contrast: state or strength) × 2 (Stimulus Type: scene or face) ANOVA did not yield any evidence for an interaction in occipito-temporal or parietal regions. For the analyses reported below, we therefore included both scenes and faces in each contrast, with each stimulus weighted equally. Additionally, we conducted follow-up exploratory analyses in which we examined face and scene trials separately for each ROI, but there was no evidence for interactions in any of the regions for any contrast.

First, to identify regions involved in state-based perception, we examined the “state” contrast [correct state-based “different” responses (i.e., “6” responses on “different” trials) greater than correct strength-based “different” responses (i.e., “4” and “5” responses on “different” trials)]. Consistent with our hypothesis, this contrast revealed activation in the posterior parietal cortex, specifically the supramarginal gyrus bilaterally (Figure 4A). Additionally, this contrast yielded activation in bilateral LOC (Figure 4B). The complete list of activated clusters and peak voxels is listed in Table 1.

Figure 4. 

Activation in bilateral posterior parietal cortex is correlated with state-based perception, and activation in bilateral LOC is associated with both state- and strength-based perception. The left and right supramarginal gyrus (A) and LOC (B) were more active for correct state-based “different” responses than correct strength-based “different” responses. The graphs for this and subsequent figures show parameter estimates for “different” trials only. Parameter estimates for the parietal ROIs were suggestive of a role in state-based perception, with activation not significantly modulated by varying levels of strength-based perception. In contrast, parameter estimates for the LOC ROIs suggested a role in both state- and strength-based perception, with activation not only higher for state-based responses but also modulated by varying levels of strength-based responses (see main text for statistical analyses). All error bars depict SEM. See Table 1 for a complete list of activated regions (different state > different strength).

We then examined the parameter estimates across different confidence responses for the parietal and occipito-temporal clusters, that is, the supramarginal gyrus and LOC ROIs. These regions were selected based on higher activation for “6” than “4” and “5” responses; therefore, a difference between “6” and other responses is to be expected; of most interest is whether activation for responses “2” to “5” is relatively stable (i.e., not modulated by varying levels of strength-based perception) or increases (i.e., tracks strength-based perception). The left and right supramarginal gyrus (Figure 4A) showed the former pattern; activity in these regions was sensitive to state-based perception but was not significantly modulated by variations in strength-based perception (i.e., an increasing linear trend from “2” to “5” was not statistically significant, both ts < 1). The left and right LOC (Figure 4B), in contrast, showed the latter pattern, such that activity was sensitive to both state- and strength-based perception; the increasing linear trend from “2” to “5” was statistically significant for the left LOC, t(17) = 1.97, p = .03, and marginal for the right LOC, t(17) = 1.55, p = .07.
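
For concreteness, the sketch below shows one way such a linear-trend test could be run (parameter estimates are hypothetical, and the specific contrast weights and one-tailed test are assumptions consistent with the directional prediction, not necessarily the exact procedure used).

```python
# Minimal sketch of a linear-trend test across confidence bins 2-5 on "different"
# trials: a centered linear contrast is applied to each participant's ROI parameter
# estimates, and the contrast scores are tested against zero across participants.
import numpy as np
from scipy import stats

# rows = participants, columns = mean ROI parameter estimates for bins 2, 3, 4, 5
betas = np.array([
    [0.10, 0.15, 0.22, 0.30],
    [0.05, 0.04, 0.12, 0.18],
    [0.20, 0.18, 0.25, 0.31],
    [0.02, 0.09, 0.08, 0.16],
])

linear_weights = np.array([-3, -1, 1, 3])      # centered linear contrast
scores = betas @ linear_weights                # one contrast score per participant
t_stat, p_two_tailed = stats.ttest_1samp(scores, 0.0)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
print(f"t({len(scores) - 1}) = {t_stat:.2f}, one-tailed p = {p_one_tailed:.3f}")
```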

We then examined whether any regions showed greater activation for strength-based perception than state-based perception. The contrast of interest was correct strength-based “different” responses (i.e., “4” and “5” responses on “different” trials) greater than correct state-based “different” responses (i.e., “6” responses on “different” trials). This contrast revealed activation in bilateral posterior cingulate cortex and right precuneus (Figure 5A; the complete list of activated clusters and peak voxels is listed in Table 1). Parameter estimates across confidence showed that activation in these regions was not significantly modulated by variations in strength-based perception (i.e., the linear trends from “2” to “5” were not statistically significant; all ts < 1.1, ps > .14; Figure 5B). That is, although these regions showed increased activation during strength- compared with state-based perception, they were not particularly sensitive to different levels of strength-based perception. These medial parietal regions therefore show the mirror image of the activation patterns of the lateral parietal regions (Figure 4A), but both sets of activation profiles show state- but not strength-based response characteristics.

Figure 5. 

Deactivation in bilateral posterior cingulate cortex and right precuneus is associated with state-based perception. These regions were less active for correct state-based “different” responses than correct strength-based “different” responses. Parameter estimates for these ROIs showed that activation in these regions was not modulated by varying levels of strength-based perception, suggesting that activation in these regions changes with the occurrence of state-based perception (see main text for statistical analyses). All error bars depict the SEM. See Table 1 for a complete list of activated regions (different strength > different state).

We next examined whether any regions are related to strength-based detection of difference (i.e., in addition to the LOC reported above). The “strength” contrast (correct strength-based “different” responses [i.e., “4” and “5” responses on “different” trials] greater than misses [i.e., “1,” “2,” and “3” responses on “different” trials]3) did not yield any significant clusters of activation other than left motor cortex and right cerebellum (because individuals used their right hands for Responses 4–6 and left hands for Responses 1–3). We therefore relaxed the threshold to examine whether there was any weak evidence for strength-based effects; we lowered the threshold from p < .001 and k = 86 to p < .005 and k = 86. This relaxed threshold yielded activation in bilateral fusiform gyrus (Table 1; Figure 6A). Figure 6B shows parameter estimates across confidence for these ROIs. These regions were selected because they showed strength effects, so we do not report results from a linearity analysis, as that analysis would not be independent of the criteria used to identify the regions (Kriegeskorte, Simmons, Bellgowan, & Baker, 2009). Nonetheless, the fusiform gyrus activation evident in the “strength” contrast is consistent with our occipito-temporal “strength” prediction.

Figure 6. 

Activation in the fusiform gyrus is related to strength-based detection of differences. Activation in the left and right fusiform gyrus was higher when individuals correctly detected “different” trials on the basis of strength-based perception than when they missed a difference. Parameter estimates across confidence for the left and right fusiform gyrus are shown. All error bars depict SEM. See Table 1 for a complete list of activated regions (different strength > miss).

Finally, including RT as a covariate in the state/strength GLM led to the same pattern of results for all of the contrasts examined (see Appendix; RTs are shown in Table 2). The similarity of the results with and without the RT covariate helps validate the current results and argues against a time-on-task explanation for the parietal and occipito-temporal differences we observed for state- and strength-based perception.

Table 2. 

Mean RTs (in msec) for Each Confidence Bin for Faces and Scenes, Separately for Different and Same Trials

Trial Type            1            2            3            4            5            6
Faces (Different)     1428 (104)   1521 (110)   1628 (90)    1564 (84)    1292 (75)    1114 (68)
Faces (Same)          1259 (67)    1417 (73)    1579 (91)    1585 (79)    1332 (85)    1348 (143)
Scenes (Different)    1206 (113)   1516 (73)    1632 (99)    1603 (81)    1347 (82)    1051 (63)
Scenes (Same)         1314 (70)    1444 (74)    1623 (93)    1589 (78)    1423 (91)    1316 (177)

SEM is shown in parentheses.

The current data indicate that lateral and medial posterior parietal regions are sensitive to state-based perception but are not significantly modulated by varying levels of strength-based perception, showing a dissociation with our earlier report of hippocampal involvement in strength- but not state-based perception (Aly et al., 2013), as well as the current findings of marginally significant fusiform gyrus activation sensitive to strength- but not state-based detection of difference. Moreover, the LOC exhibited evidence for both state- and strength-based signals.

DISCUSSION

This study examined the neural correlates of visual change detection based on discrete states of identifying specific differences and strength-based assessments of relational match/mismatch. We used a same/different discrimination task and collected confidence responses indexing varying levels of strength-based perception as well as the occurrence of state-based perception. This paradigm allowed us to test the prediction that parietal regions would show state-based response characteristics, and occipito-temporal regions would be sensitive to strength-based perception.

In support of this parietal/occipito-temporal state/strength hypothesis, posterior parietal regions (bilateral supramarginal gyrus, bilateral posterior cingulate cortex, and right precuneus) were sensitive to state-based perception and were not significantly modulated by varying levels of strength-based perception. In contrast, activation in bilateral fusiform gyrus was increased for strength-based detection of difference and was not related to state-based perception. Finally, the LOC exhibited activation patterns that tracked strength-based perception and, additionally, also increased for state-based perception.

State- and Strength-based Information Processing

According to the global neuronal workspace model of Dehaene and colleagues (see Del Cul, Baillet, & Dehaene, 2007; Dehaene et al., 2006; Dehaene, Sergent, & Changeux, 2003; Dehaene, Kerszberg, & Changeux, 1998), activation in occipito-temporal areas may vary continuously depending on the strength of visual input and the focus of attention; these graded signals may in turn be related to graded levels of awareness that vary from subliminal to conscious (Dehaene et al., 2006). In contrast, parieto-frontal regions are proposed to be associated with a discrete threshold for conscious access, showing an all-or-none neural “ignition”; that is, an extended parietal network may be engaged only when individuals are consciously aware of specific visual information (i.e., the threshold for conscious access is exceeded) and not otherwise (Del Cul et al., 2007; Dehaene et al., 2001, 2006; see also Del Cul, Dehaene, Reyes, Bravo, & Slachevsky, 2009). The current results are broadly consistent with this framework; although we did not find frontal activation related to state-based perception, we observed discrete activation patterns related to state-based perception in parietal regions. Additionally, in support of this model, occipito-temporal regions (i.e., fusiform gyrus and LOC) were sensitive to strength-based perception (see also Dehaene et al., 2006; Lamme, 2003; Bar et al., 2001; Kanwisher, 2001; Grill-Spector et al., 2000). Finally, we found that LOC exhibits a further increase in activation associated with state-based perception, which may be related to the proposal that conscious awareness involves amplification of occipito-temporal signals by parieto-frontal regions (Dehaene et al., 2006; Lamme, 2003).

The current findings suggest that examining the neural correlates of conscious visual perception without respect to the kind of information that forms the basis for a perceptual decision may mask important differences in the relationship between parietal and occipito-temporal activation and different kinds of visual awareness. A confidence scale that separates state- and strength-based perception, as used in the current study, offers a simple and particularly useful method for examining different kinds of visual awareness and their neural underpinnings. Below, we explore how strength- and state-based processing can shed light on the different contributions of occipito-temporal and parietal regions to visual perception.

Strength-based Perception and Occipito-temporal Cortex

There is evidence that some occipito-temporal regions are differentially sensitive to different kinds of visual content, for example, faces, scenes, and/or objects (Epstein & Kanwisher, 1998; Kanwisher, McDermott, & Chun, 1997; Malach et al., 1995; Aguirre, Zarahn, & D'Esposito, 1998; see Milner, 2012). Graded signals in occipito-temporal areas may therefore be related to the strength of sensory input or the subjective strength of a percept (Dehaene et al., 2006; Bar et al., 2001; Grill-Spector et al., 2000; see also Sligte et al., 2009). For example, graded levels of activation in the LOC are correlated with graded levels of identification of masked objects (Grill-Spector et al., 2000; see also Bar et al., 2001). The graded signals in the LOC that we observed may therefore reflect varying levels of fidelity of visual representations that can be used in the service of change detection or object recognition. The disproportionate increase in LOC activation for visual change detection based on state-based access to specific detailed information may reflect a particularly strong visual representation of detailed shape information (Kanwisher, Woods, Iacoboni, & Mazziotta, 1997; Malach et al., 1995) that is useful for identifying specific changes in scenes or faces. As noted above, this increased state-based activation, and the potential improvement in the fidelity of the visual representation, may be a result of top–down amplification by posterior parietal regions (Silvanto, Muggleton, Lavie, & Walsh, 2009; Dehaene et al., 2006; Lamme, 2003; Pessoa, Kastner, & Ungerleider, 2003).

The current data suggest that activation in the fusiform gyrus and LOC may reflect the strength of perceptual “evidence” in these regions. Importantly, there are two potential sources of variation in perceptual “strength”: bottom–up, physical stimulus strength (i.e., signal or noise level; see Dehaene et al., 2006) and subjective assessment of perceptual strength. Several studies examining the relationship between occipito-temporal activation and perceptual performance have manipulated perceptual strength by varying, for example, the duration of stimulus and/or mask presentations (e.g., Bar et al., 2001; Grill-Spector et al., 2000; see Dehaene et al., 2006).

“Strength” in the current context refers to how weak or strong the match is between paired items. We did not parametrically vary the amount of objective stimulus differences in this task, but some stimulus pairs may have been more “different” than others, for example, because of how the visual features in each image were affected by the distortions used. Thus, we cannot disentangle the contribution of objective “strength” of differences from subjective assessment of those differences. Variations in the latter may arise from different sources, such as how well the first image was encoded, the ability to maintain a detailed visual representation of the first image when the next image appears, and the process of comparing the first image to the second.

It is important to note that previous studies, even when varying bottom–up stimulus strength, have found evidence for a relationship between recognition level and brain activation that is not entirely attributable to changes in physical signal or noise levels (e.g., Bar et al., 2001; Grill-Spector et al., 2000; see also Sligte et al., 2009; Dehaene et al., 2006). Thus, both objective and subjective variations in perceptual strength may be related to modulation of occipito-temporal regions.

Strength-based perception may seem to be related to variations in perceptual uncertainty, in that both are related to the extent of visual evidence for a particular decision. That is, high uncertainty could be related to moderately low levels of strength-based perception, and uncertainty may decrease with greater levels of strength-based perception. For example, under conditions in which object recognition is made difficult by brief exposure durations and masking (i.e., object identity is uncertain), activation in ventral temporal areas is correlated with recognition performance (Bar et al., 2001; Grill-Spector et al., 2000). In these cases, it is likely that the representation of sensory evidence in these regions varies from weak to strong and is therefore associated with high or low uncertainty (see Heekeren, Marrett, & Ungerleider, 2008).

Nevertheless, there are some differences between perceptual uncertainty and strength-based perception. For example, uncertainty should be high for the 3–4 responses and lower for the 1–2 and 5–6 responses—and indeed, this is consistent with the pattern observed in RTs in the current study (Table 2). In contrast, strength-based perception of difference is proposed to increase monotonically across the confidence range.

Additionally, frontal and parietal regions have also been linked to perceptual uncertainty, but we did not find evidence for strength-based perceptual processing in these regions. For example, in tasks in which individuals have to learn to discriminate novel perceptual categories, perceptual uncertainty is associated not only with activation in the inferior temporal gyrus and fusiform gyrus (Li & Yang, 2012; Daniel et al., 2011) but also parietal, frontal, and frontostriatal-thalamic networks (Li & Yang, 2012; Daniel et al., 2011; Grinband, Hirsch, & Ferrera, 2006; see Heekeren et al., 2008). The modulation of occipito-temporal areas by strength-based perception and perceptual uncertainty may be related to the amount of sensory evidence accumulated in those regions (see Li & Yang, 2012; Heekeren et al., 2008). In contrast, the modulation of parietal (Bollimunta, Totten, & Ditterich, 2012) or frontostriatal-thalamic networks (Daniel et al., 2011; Grinband et al., 2006) by uncertainty may be related to a decision-making process whereby sensory evidence from occipito-temporal regions is integrated and evaluated before a decision (Heekeren et al., 2008).

State-based Perception and Parietal Cortex

In contrast to occipito-temporal regions, where activation may be related to the representation of visual content, the state-based activation in the lateral posterior parietal cortex may be related to perceptual orienting or changing the focus of attention (Corbetta & Shulman, 2002; Corbetta, Kincade, Ollinger, McAvoy, & Shulman, 2000; but see Geng & Vossel, 2013, for an alternative) rather than carrying information about visual content per se (Milner, 2012). The current results raise the possibility that attention-related effects in the supramarginal gyrus may consist of discrete, state-based signals that either occur or do not. Right parietal regions have been implicated in discrete perceptual switches associated with bistable perception and binocular rivalry (e.g., Britz et al., 2009, 2011; Knapen et al., 2011; Zaretskaya et al., 2010; Kleinschmidt et al., 1998; Lumer et al., 1998); it is possible that the dynamics of parietal (and specifically, supramarginal gyrus) patterns of activation are generally discrete or state-based across many perceptual phenomena and may in turn be associated with state-based changes in visual awareness. These changes in activation may switch the focus of attention or amplify activation in posterior occipito-temporal areas that themselves represent specific visual content (Milner, 2012; Silvanto et al., 2009; Dehaene et al., 2006; Lamme, 2003; Pessoa et al., 2003).

Interestingly, we found that lateral and medial parietal regions showed different response profiles. The supramarginal gyrus showed greater activation for state- compared with strength-based perception, whereas the posterior cingulate cortex and precuneus showed reduced activation for state- compared with strength-based perception. Although this interpretation is speculative, the difference may reflect the relative involvement of these regions in outward versus inward attention. The supramarginal gyrus and other regions at the TPJ have been associated with exogenous attentional capture by salient environmental changes and reorientation to those changes (Corbetta & Shulman, 2002; Corbetta et al., 2000). In contrast, the posterior cingulate cortex and precuneus are components of the "default mode" network, which may be associated with internally focused attention, and these regions show reduced activation during outwardly focused attention (Buckner, Andrews-Hanna, & Schacter, 2008; Raichle et al., 2001). To the extent that state-based perception, more than strength-based perception, is associated with the identification of specific visual changes (Aly & Yonelinas, 2012), this outward focus on environmental features may lead to opposite changes in activation in these lateral and medial parietal regions.

Although we did not find evidence for strength-based processing in parietal regions, our data do not rule out a role for the posterior parietal cortex in strength-based perception. For example, although the response characteristics in these regions were state-based, lateral parietal regions may indirectly influence strength-based perception by modulating activation in occipito-temporal areas that track strength-based perception. Such top-down modulation may amplify the signal in these posterior regions (Silvanto et al., 2009; Dehaene et al., 2006; Lamme, 2003; Pessoa et al., 2003), thus enhancing strength-based perceptual responses. This modulation may be a constant amplification signal that increases the magnitude of the existing graded signals in occipito-temporal regions. The end result would be an effect on graded perceptual responses despite the absence of a graded signal in posterior parietal regions.
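As a rough illustration of this point, the sketch below applies a constant, non-graded gain to a set of hypothetical graded occipito-temporal responses; the specific values are assumptions chosen only to show that the output remains graded even though the modulating signal is not.

    # Minimal sketch of a constant (non-graded) top-down gain signal amplifying
    # graded occipito-temporal responses without itself carrying graded information.
    # Values are illustrative assumptions.

    graded_sensory_signal = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical strength-based evidence
    parietal_gain = 1.5                                # constant amplification, not graded

    amplified = [parietal_gain * s for s in graded_sensory_signal]
    print(amplified)  # the output remains graded even though the modulator is constant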

It is important to consider alternative explanations of our finding of state-based responses in the supramarginal gyrus. For example, several studies have implicated the supramarginal gyrus in motor attention (Rushworth, Ellison, & Walsh, 2001; Rushworth, Krams, & Passingham, 2001) and response selection (Bunge, Hazeltine, Scanlon, Rosen, & Gabrieli, 2002), raising the question of whether the state-based response characteristics observed in the current task might instead reflect motor attention or the need to select a response. We think this is unlikely, because individuals had to select a response on every trial, and it is not clear why state-based perceptual responses should place greater demands on motor attention or response selection than other responses. Furthermore, those studies implicated the left supramarginal gyrus in these motor functions, whereas we found very similar activation bilaterally.

State/Strength and Other Dichotomies

The state/strength distinction is related to the difference between detection of change on the one hand (strength-based perception) and both detection and identification of change on the other (state-based perception). Previous studies have shown that state-based perception is associated with detection and identification of specific changes, whereas strength-based perception is related to detection of changes with a poor or absent ability to identify what the change is (Aly & Yonelinas, 2012, Experiment 4B). Importantly, however, state- and strength-based perception differ from one another in ways that go beyond detection versus identification. Strength-based perception is continuously graded from weak to strong, is associated with a representation of the overall relational match/mismatch between items, and is associated with graded evidence accumulation over time (Aly et al., 2013; Aly & Yonelinas, 2012). In contrast, state-based perception is a signal that either occurs or does not, is associated with a representation of detailed local or qualitative features that differ between items, and is associated with a sudden onset over time (Aly & Yonelinas, 2012).
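One illustrative way to see how a discrete state signal and a graded strength signal could jointly drive "different" responses, in the spirit of dual-process signal detection models (e.g., Yonelinas, 1994), is sketched below; the parameter values and the simple combination rule are assumptions for illustration, not a model fitted to our data.

    # Illustrative sketch (not the authors' fitted model) of how a discrete state
    # signal and a graded strength signal could jointly yield "different" responses.
    # All parameter values are hypothetical assumptions.

    import random

    P_STATE = 0.25          # probability of a discrete state of detecting a specific change
    STRENGTH_MEAN = 0.5     # mean of the graded mismatch signal on "different" trials
    CRITERION = 0.4         # response criterion applied to the graded signal

    def respond_different():
        """Return True if a simulated trial yields a 'different' response."""
        if random.random() < P_STATE:                # state-based route: occurs or fails
            return True
        strength = random.gauss(STRENGTH_MEAN, 1.0)  # strength-based route: graded evidence
        return strength > CRITERION

    trials = [respond_different() for _ in range(10_000)]
    # Approximates P_STATE + (1 - P_STATE) * P(strength > criterion).
    print(sum(trials) / len(trials))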

The state/strength distinction also bears resemblance to that between "sensing" and "seeing" (Rensink, 2004), which are argued to reflect, respectively, detection of change without a corresponding visual experience and conscious visual experience of change. There are, however, important differences between these distinctions. In the current framework, state- and strength-based perception are both associated with conscious visual experiences; the critical difference is the kind of information on which each is based. In state-based perception, individuals have access to the local, specific details that differentiate two similar images and can report on those details. In strength-based perception, individuals have a conscious visual experience of difference, but it is based on an assessment of global or relational match/mismatch (Aly & Yonelinas, 2012).

In addition, one criticism of the sensing/seeing distinction has been that these may not reflect different routes to change detection but rather differences in response criteria (Simons et al., 2005). That is, individuals may report "sensing" a change when they are relatively unsure and, after verifying that it is in fact present, report "seeing" the change. In contrast, state- and strength-based perception have been found to independently affect perceptual sensitivity rather than simply reflecting differences in response criterion (Aly & Yonelinas, 2012). For example, when individuals have to detect global or relational changes in images, as in the current study, the contributions of state-based perception are decreased and those of strength-based perception increased, compared with an equally difficult task with discrete or local changes. Moreover, damage to the hippocampus selectively impairs strength-based perception (Aly et al., 2013). These dissociations cannot be accounted for by differences in response criteria for state- versus strength-based perception.

Utility of the State/Strength Distinction: Predictions

The current study, along with recent behavioral and ERP studies (Aly & Yonelinas, 2012; Busch et al., 2009, 2010; Rensink, 2000, 2004), suggests that visual perceptual decisions can be based on different types of information, and these are in turn associated with distinct neural signatures (see also Dehaene et al., 2006). The functional characteristics of state- and strength-based perception have been explored in some detail (Aly & Yonelinas, 2012), and the insights from those studies can be used to make predictions about the contributions of occipito-temporal and parietal regions to visual awareness.

For example, Aly and Yonelinas (2012) found that strength-based perception plays a larger role in performance when visual changes consist of subtle relational or global distortions (as in the current study) whereas state-based perception makes a larger contribution to performance when visual changes are discrete in nature (e.g., a window is present in one scene but absent in the other). This leads to the prediction that the contribution of parietal regions to change detection will be greater for discrete than relational/global changes, and the opposite should be true for occipito-temporal regions. It is important to note, however, that these tasks are not process-pure: The relative contributions of state- and strength-based perception are affected by these manipulations; similarly, the relative contributions of occipito-temporal and parietal regions may be affected by the use of relational/global or discrete changes.

Aly and Yonelinas (2012) also found that state- and strength-based perception were associated with different temporal dynamics. State-based perception showed an abrupt onset over time, whereas strength-based perception was associated with a gradual accumulation of evidence over time. The temporal dynamics of these behavioral responses mirror the proposed dynamics of occipito-temporal and parieto-frontal activation within the global neuronal workspace framework; that is, occipito-temporal areas may accumulate information in a graded manner over time, whereas parieto-frontal regions may be associated with a rapid onset of activation related to conscious awareness of detailed information (neural “ignition”; Del Cul et al., 2007; Dehaene et al., 2006; Appendix to Del Cul et al., 2009; Sergent, Baillet, & Dehaene, 2005). This leads to the prediction that regions associated with state-based perception should be associated with abrupt changes in activation whereas regions associated with strength-based perception should show graded evidence accumulation over time.
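As a schematic illustration of these two predicted time courses, the sketch below contrasts a gradually accumulating signal with a step-like "ignition" signal; the time constants and ignition latency are arbitrary assumptions chosen only to show the qualitative difference.

    # Minimal sketch of the two proposed temporal profiles: gradual evidence
    # accumulation (strength-based / occipito-temporal) versus an abrupt,
    # step-like onset (state-based / parieto-frontal "ignition").
    # Time constants and the ignition latency are illustrative assumptions.

    import numpy as np

    t = np.linspace(0, 1.0, 11)  # time in seconds over a hypothetical trial window

    # Strength-based signal: graded accumulation toward an asymptote.
    strength_signal = 1.0 - np.exp(-t / 0.3)

    # State-based signal: negligible until a threshold time, then a rapid jump.
    ignition_time = 0.5
    state_signal = np.where(t < ignition_time, 0.05, 1.0)

    for ti, s, g in zip(t, state_signal, strength_signal):
        print(f"t={ti:.1f}s  state={s:.2f}  strength={g:.2f}")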

Finally, the distinction between state- and strength-based perception has proven useful in elucidating the role of the hippocampus in scene perception (Aly et al., 2013). Individuals with focal lesions to the hippocampus exhibited selective deficits in strength-based scene perception, with state-based perception remaining intact. Furthermore, in scene perception trials in the current data set, activation in the hippocampus varied in a linear manner with increasing perceptual decision confidence and was not differentially sensitive to state-based perception (Aly et al., 2013). These data therefore provide evidence that strength-based perception of scenes can be neurally dissociated from state-based perception, with only the former dependent on the hippocampus. Together with the current study, this patient and fMRI evidence shows that state- and strength-based perception are supported by at least partially distinct neural systems. On a broader level, the patient study additionally shows the utility of the distinction between state- and strength-based perception; this distinction can prove useful in better characterizing the perceptual impairments that result from different types of brain lesions.

Conclusions

Prior studies have shown that visual change detection can be based either on discrete states of identifying specific details or on assessments of the graded strength of relational match/mismatch. In the current study, we show that activation in the fusiform gyrus is increased for strength-based detection of differences; that the supramarginal gyrus, posterior cingulate cortex, and precuneus are sensitive to state-based perception and are not modulated by strength-based perception; and that LOC activation is related to both state- and strength-based perception. Parietal and occipito-temporal regions therefore play dissociable roles in visual change detection, and their different roles in state- and strength-based perception may provide important insights into how these regions contribute to conscious visual awareness.

APPENDIX

Figure A1. Activation associated with correct "different" greater than correct "same" judgments, after RTs were included as a covariate in the analysis. Refer to Figure 3.

Figure A2. Activation associated with the different state > different strength contrast after RTs were included as a covariate in the analysis. Refer to Figure 4.

Figure A3. Activation associated with the different strength > different state contrast after RTs were included as a covariate in the analysis. Refer to Figure 5.

Figure A4. Activation associated with the different strength > miss contrast after RTs were included as a covariate in the analysis. Refer to Figure 6.

Acknowledgments

This research was supported by grants MH59352 and MH083734. We would like to thank Maureen Ritchey, Luke Jenkins, and the University of California, Davis, Memory Group for helpful advice on the project.

Reprint requests should be sent to Mariam Aly, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, or via e-mail: aly@princeton.edu.

Notes

1. Huettel et al. (2001) included both "sensing" and "seeing" response options for their participants, but because only two participants provided a sufficient number of "sensing" responses, these trials could not be analyzed.

2. We used misses on "different" trials rather than correct "same" responses for the strength analysis to (1) avoid redundancy with the overall change detection contrast and (2) avoid confounding subjective strength-based perception with objective stimulus differences.

3. We opted to collapse across Response Bins 1–3 rather than conduct a parametric modulator analysis to examine strength-based perception because there were too few trials in some response bins for many participants. For example, using our criterion of at least 10 responses in a confidence bin for a participant to be included in an analysis, 13 of the 18 participants would have had to be excluded for not meeting this criterion for "1" responses on "different" trials. Although we could have collapsed across "same" and "different" trials to avoid this issue of low trial numbers, such an analysis would confound changes in subjective confidence with changes in the kinds of stimuli (i.e., because "same" pairs would contribute gradually increasing trial numbers as confidence in difference decreases). Thus, we felt that analyzing only "different" trials and collapsing across Response Bins 1–3 was the most appropriate way to analyze the data.
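As a concrete illustration of this inclusion rule, the sketch below applies the 10-trial criterion to hypothetical per-bin counts and shows how collapsing Bins 1–3 can satisfy it; the counts are invented for illustration and are not the study's data.

    # Minimal sketch of the inclusion rule described above: a participant contributes
    # to a bin-level analysis only if they have at least 10 trials in that confidence
    # bin, which motivated collapsing Response Bins 1-3 on "different" trials.
    # The trial counts below are hypothetical.

    MIN_TRIALS = 10

    def usable_bins(counts, min_trials=MIN_TRIALS):
        """Return the confidence bins with enough trials for this participant."""
        return {b for b, n in counts.items() if n >= min_trials}

    # Hypothetical per-bin counts on "different" trials for one participant.
    different_trial_counts = {1: 4, 2: 7, 3: 12, 4: 15, 5: 20, 6: 18}

    print(usable_bins(different_trial_counts))            # bins analyzable separately
    collapsed_1_to_3 = sum(different_trial_counts[b] for b in (1, 2, 3))
    print(collapsed_1_to_3 >= MIN_TRIALS)                 # collapsing bins 1-3 meets the criterion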

REFERENCES

Aguirre, G. K., Zarahn, E., & D'Esposito, M. (1998). An area within human ventral cortex sensitive to "building" stimuli: Evidence and implications. Neuron, 21, 373–383.
Aly, M., Ranganath, C. R., & Yonelinas, A. P. (2013). Detecting changes in scenes: The hippocampus is critical for strength-based perception. Neuron, 78, 1127–1137.
Aly, M., & Yonelinas, A. P. (2012). Bridging consciousness and cognition in memory and perception: Evidence for both state and strength processes. PLoS One, 7, e30231, 1–16.
Ashburner, J. (2007). A fast diffeomorphic image registration algorithm. Neuroimage, 38, 95–113.
Bar, M., Tootell, R. B. H., Schacter, D. L., Greve, D. N., Fischl, B., Mendola, J. D., et al. (2001). Cortical mechanisms specific to explicit visual object recognition. Neuron, 29, 529–535.
Beck, D. M., Muggleton, N., Walsh, V., & Lavie, N. (2005). Right parietal cortex plays a critical role in change blindness. Cerebral Cortex, 16, 712–717.
Beck, D. M., Rees, G., Frith, C. D., & Lavie, N. (2001). Neural correlates of change detection and change blindness. Nature Neuroscience, 4, 645–650.
Bollimunta, A., Totten, D., & Ditterich, J. (2012). Neural dynamics of choice: Single-trial analysis of decision-related activity in parietal cortex. Journal of Neuroscience, 32, 12684–12701.
Brett, M., Anton, J.-L., Valabregue, R., & Poline, J.-B. (2002). Region of interest analysis using an SPM toolbox [abstract]. Presented at the 8th International Conference on Functional Mapping of the Human Brain, June 2–6, 2002, Sendai, Japan. Available on CD-ROM in Neuroimage, 16.
Britz, J., Landis, T., & Michel, C. M. (2009). Right parietal brain activity precedes perceptual alternation of bistable stimuli. Cerebral Cortex, 19, 55–65.
Britz, J., Pitts, M. A., & Michel, C. M. (2011). Right parietal brain activity precedes perceptual alternation during binocular rivalry. Human Brain Mapping, 32, 1432–1442.
Buckner, R. L., Andrews-Hanna, J. R., & Schacter, D. L. (2008). The brain's default network: Anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124, 1–38.
Bunge, S. A., Hazeltine, E., Scanlon, M. D., Rosen, A. C., & Gabrieli, J. D. E. (2002). Dissociable contributions of prefrontal and parietal cortices to response selection. Neuroimage, 17, 1562–1571.
Busch, N. A., Durschmid, S., & Hermann, C. S. (2010). ERP effects of change localization, change identification, and change blindness. NeuroReport, 21, 371–375.
Busch, N. A., Frund, I., & Hermann, C. S. (2009). Electrophysiological evidence for different types of change detection and change blindness. Journal of Cognitive Neuroscience, 22, 1852–1869.
Corbetta, M., Kincade, J. M., Ollinger, J. M., McAvoy, M. P., & Shulman, G. L. (2000). Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nature Neuroscience, 3, 292–297.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
Cox, R. W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29, 162–173.
Dale, A. M. (1999). Optimal experimental design for event-related fMRI. Human Brain Mapping, 8, 109–114.
Daniel, R., Wagner, G., Koch, K., Reichenbach, J. R., Sauer, H., & Schlosser, R. G. M. (2011). Assessing the neural basis of uncertainty in perceptual category learning through varying levels of distortion. Journal of Cognitive Neuroscience, 23, 1781–1793.
Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J., & Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends in Cognitive Sciences, 10, 204–211.
Dehaene, S., Kerszberg, M., & Changeux, J.-P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences, 95, 14529–14534.
Dehaene, S., Naccache, L., Cohen, L., Le Bihan, D., Mangin, J.-F., Poline, J.-B., et al. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience, 4, 752–758.
Dehaene, S., Sergent, C., & Changeux, J.-P. (2003). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proceedings of the National Academy of Sciences, 100, 8520–8525.
Del Cul, A., Baillet, S., & Dehaene, S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biology, 5, 2408–2423.
Del Cul, A., Dehaene, S., Reyes, P., Bravo, E., & Slachevsky, A. (2009). Causal role of prefrontal cortex in the threshold for access to consciousness. Brain, 132, 2531–2540.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Galpin, A., Underwood, G., & Chapman, P. (2008). Sensing without seeing in comparative visual search. Consciousness and Cognition, 17, 658–673.
Geng, J., & Vossel, S. (2013). Re-evaluating the role of TPJ in attentional control: Contextual updating? Neuroscience and Biobehavioral Reviews.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Grill-Spector, K., Kushnir, T., Hendler, T., & Malach, R. (2000). The dynamics of object-selective activation correlates with recognition performance in humans. Nature Neuroscience, 3, 837–843.
Grinband, J., Hirsch, J., & Ferrera, V. P. (2006). A neural representation of categorization uncertainty in the human brain. Neuron, 49, 757–763.
Grinband, J., Wager, T. D., Lindquist, M., Ferrera, V. P., & Hirsch, J. (2008). Detection of time-varying signals in event-related fMRI designs. Neuroimage, 43, 509–520.
Heekeren, H. R., Marrett, S., & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9, 467–479.
Huettel, S. A., Guzeldere, G., & McCarthy, G. (2001). Dissociating the neural mechanisms of visual attention in change detection using functional MRI. Journal of Cognitive Neuroscience, 13, 1006–1018.
Kanwisher, N. (2001). Neural events and perceptual awareness. Cognition, 79, 89–113.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kanwisher, N., Woods, R. P., Iacoboni, M., & Mazziotta, J. C. (1997). Human extrastriate cortex for visual shape analysis. Journal of Cognitive Neuroscience, 9, 133–142.
Kimmig, H., Greenlee, M. W., Gondan, M., Schira, M., Kassubek, J., & Mergner, T. (2001). Relationship between saccadic eye movements and cortical activity as measured by fMRI: Quantitative and qualitative aspects. Experimental Brain Research, 141, 184–194.
Kleinschmidt, A., Buchel, C., Zeki, S., & Frackowiak, R. S. J. (1998). Human brain activity during spontaneously reversing perception of ambiguous figures. Proceedings of the Royal Society of London, Series B, 265, 2427–2433.
Knapen, T., Brascamp, J., Pearson, J., van Ee, R., & Blake, R. (2011). The role of frontal and parietal brain areas in bistable perception. Journal of Neuroscience, 31, 10293–10301.
Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. F., & Baker, C. I. (2009). Circular analysis in systems neuroscience: The dangers of double dipping. Nature Neuroscience, 12, 535–540.
Lamme, V. A. F. (2003). Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12–18.
Large, M.-E., Cavina-Pratesi, C., Vilis, T., & Culham, J. C. (2008). The neural correlates of change detection in the face perception network. Neuropsychologia, 46, 2169–2176.
Levin, D. T., & Simons, D. J. (1997). Failure to detect changes to attended objects in motion pictures. Psychonomic Bulletin & Review, 4, 501–506.
Li, S., & Yang, F. (2012). Task-dependent uncertainty modulation of perceptual decisions in the human brain. European Journal of Neuroscience, 36, 3732–3739.
Lumer, E. D., Friston, K. J., & Rees, G. (1998). Neural correlates of perceptual rivalry in the human brain. Science, 280, 1930–1934.
Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide (2nd ed.). New York: Cambridge University Press.
Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences, U.S.A., 92, 8135–8138.
Milner, A. D. (2012). Is visual processing in the dorsal stream accessible to consciousness? Proceedings of the Royal Society of London, Series B, 279, 2289–2298.
Mitroff, S. R., Simons, D. J., & Levin, D. T. (2004). Nothing compares 2 views: Change blindness can occur despite preserved access to the change information. Perception & Psychophysics, 66, 1268–1281.
O'Regan, J. K., Rensink, R. A., & Clark, J. J. (1999). Change-blindness as a result of "mudsplashes." Nature, 398, 34.
Pessoa, L., Kastner, S., & Ungerleider, L. G. (2003). Neuroimaging studies of attention: From modulation of sensory processing to top-down control. Journal of Neuroscience, 23, 3990–3998.
Pessoa, L., & Ungerleider, L. G. (2004). Neural correlates of change detection and change blindness in a working memory task. Cerebral Cortex, 14, 511–520.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, 98, 676–682.
Rensink, R. A. (2000). Seeing, sensing, and scrutinizing. Vision Research, 40, 1469–1487.
Rensink, R. A. (2004). Visual sensing without seeing. Psychological Science, 15, 27–32.
Rensink, R. A., O'Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8, 368–373.
Rensink, R. A., O'Regan, J. K., & Clark, J. J. (2000). On the failure to detect changes in scenes across brief interruptions. Visual Cognition, 7, 127–145.
Rushworth, M. F. S., Ellison, A., & Walsh, V. (2001). Complementary localization and lateralization of orienting and motor attention. Nature Neuroscience, 4, 656–661.
Rushworth, M. F. S., Krams, M., & Passingham, R. E. (2001). The attentional role of the left parietal cortex: The distinct lateralization and localization of motor attention in the human brain. Journal of Cognitive Neuroscience, 13, 698–710.
Schwarzkopf, D. S., Silvanto, J., Gilaie-Dotan, S., & Rees, G. (2010). Investigating object representations during change detection in human extrastriate cortex. European Journal of Neuroscience, 32, 1780–1787.
Sergent, C., Baillet, S., & Dehaene, S. (2005). Timing of the brain events underlying access to consciousness during the attentional blink. Nature Neuroscience, 8, 1391–1400.
Silvanto, J., Muggleton, N., Lavie, N., & Walsh, V. (2009). The perceptual and functional consequences of parietal top-down modulation on the visual cortex. Cerebral Cortex, 19, 327–330.
Simons, D. J., Nevarez, G., & Boot, W. R. (2005). Visual sensing is seeing: Why mindsight, in hindsight, is blind. Psychological Science, 16, 520–524.
Sligte, I. G., Scholte, H. S., & Lamme, V. A. F. (2009). V4 activity predicts the strength of visual short-term memory representations. Journal of Neuroscience, 29, 7432–7438.
Tseng, P., Hsu, T.-Y., Muggleton, N. G., Tzeng, O. J. L., Hung, D. L., & Juan, C.-H. (2010). Posterior parietal cortex mediates encoding and maintenance processes in change blindness. Neuropsychologia, 48, 1063–1070.
Turatto, M., Sandrini, M., & Miniussi, C. (2004). The role of the right dorsolateral prefrontal cortex in visual change awareness. Cognitive Neuroscience and Neuropsychology, 15, 2549–2552.
Yonelinas, A. P. (1994). Receiver-operating characteristics in recognition memory: Evidence for a dual-process model. Journal of Experimental Psychology: Learning, Memory, & Cognition, 20, 1341–1354.
Yonelinas, A. P. (2001). Components of episodic memory: The contribution of recollection and familiarity. Proceedings of the Royal Society of London, Series B, 356, 1363–1374.
Zaretskaya, N., Thielscher, A., Logothetis, N., & Bartels, A. (2010). Disrupting parietal function prolongs dominance durations in binocular rivalry. Current Biology, 20, 1–6.