A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.
The neural representation of a visual stimulus must code many dimensions, and so, the similarity space of a set of objects is multidimensional, even in a single brain region. For example, the similarity of the neural responses in V1 reflects both stimulus orientation and spatial frequency (Mazer, Vinje, McDermott, Schiller, & Gallant, 2002), and the similarity of neural codes in V4 reflects both stimulus color and shape (Roe et al., 2012). The voxel-level BOLD response can reflect multiple stimulus dimensions that are coded, at the neural level, either independently or conjointly (Drucker, Kerr, & Aguirre, 2009).
For these basic visual dimensions and beyond, it is clear that neural responses early in the visual pathway are shaped by learning, both of categorical boundaries along visual stimulus dimensions (Folstein, Palmeri, & Gauthier, 2012) and of abstract information about objects (e.g., biological class structure of living things; Connolly et al., 2012). The goal of the current study is to explore whether (and where) neural responses to novel visual stimuli reflect the abstract but behaviorally relevant variable of value. We show that early in the visual processing pathway nonvisual information is coded in the neural response, even when (a) the abstract dimension is orthogonal to all visual dimensions, (b) the abstract dimension is newly learned, and (c) responses are measured during a task that makes no reference to the abstract (value) dimension.
Many recent experiments have sought to identify neural signals associated with the values of choice alternatives (see Bartra, McGuire, & Kable, 2013, for a review). It has been suggested that choosing between items is a two-stage process, in which values are first assigned to each option and then compared to yield a choice (Levy, Lazzaro, Rutledge, & Glimcher, 2011; Kable & Glimcher, 2009). This two-stage account implies that tracking the values of items is independent of choosing between them (Lebreton, Jorge, Michel, Thirion, & Pessiglione, 2009). Several studies have shown brain responses that engage automatically to different kinds of valuations, including monetary value (Tallon-Baudry, Meyniel, & Bourgeois-Gironde, 2011), facial attractiveness (e.g., Chatterjee, Thomas, Smith, & Aguirre, 2009), houses and paintings (e.g., Lebreton et al., 2009), consumer goods (e.g., Levy et al., 2011), and faces with learned associations to monetary values (Rothkirch, Schmack, Schlagenhauf, & Sterzer, 2012). These results address whether values are represented independently of a choice task, but their use of familiar objects makes it difficult to disentangle the value of a stimulus from its cultural significance and familiarity (Rangel, Camerer, & Montague, 2008; Erk, Spitzer, Wunderlich, Galley, & Walter, 2002). Moreover, Rothkirch et al. (2012) did not provide a baseline measure of the brain response to the face stimuli before value learning against which to compare the posttraining fMRI results, leaving their findings ambiguous.
Finally, prior work has suggested that coupling reward with visual stimuli may modulate the visual representation of the reward-predicting stimuli (Arsenault, Nelissen, Jarraya, & Vanduffel, 2013; Seitz, Kim, & Watanabe, 2009) and improve performance during perceptual tasks (Nomoto, Schultz, Watanabe, & Sakagami, 2010; Serences, 2008; Engelmann & Pessoa, 2007; Pessiglione, Seymour, Flandin, Dolan, & Frith, 2006). Stanisor, van der Togt, Pennartz, and Roelfsema (2013) showed that V1 neurons that exhibited a strong response to value also exhibited a strong attention effect. We add to this literature by showing that these behaviorally relevant changes to visual representations of reward-related stimuli are present even when attention is diverted away from value and engaged instead in a perceptual task that is not reward related. Using fMRI, we measured neural responses to novel objects with learned values while participants performed an unrelated perceptual task. We calculated the degree of fMRI adaptation (Grill-Spector & Malach, 2001) as a measure of neural similarity between objects along the value dimension to determine if the response of neurons in visual cortex is modulated by the newly learned value of these objects.
Thirteen right-handed participants (mean age = 24.3 years, nine women) with normal or contact-lens-corrected vision participated in the study for monetary compensation. Informed consent was obtained from each participant as approved by the University of Pennsylvania institutional review board.
Design and Procedure
Participants learned values for a set of novel stimuli over the course of a four-session training protocol. Before and after training, participants were scanned while performing a visual decision task unrelated to value. The total span of the experiment for each participant was 1 week.
The novel stimuli were nine closed contours, or “moon” shapes, that varied across three dimensions: color, shape, and monetary value. The nine objects were created by pairing each of three shapes with each of three colors (Figure 1A), and they ranged in value from −$10 to +$10, in increments of $2.50. For any pair of objects, we could assign a distance along the color dimension (0–2 “steps”), the shape dimension (also 0–2 steps), and the value dimension (1–8 steps; 0 would occur only for two identical stimuli). So, for example, the stimuli in the top left and bottom left of Figure 1A would differ in 0 shape steps, 2 color steps, and 2 value steps. The values were assigned to the objects such that these three distances were uncorrelated across the set of stimuli, allowing us to measure the effect of value similarity independently of the effects of color or shape similarity.
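The constraint that value distance be uncorrelated with color and shape distance can be met by searching over assignments of the nine values to the 3 × 3 grid of shapes and colors. The following Python sketch is our reconstruction for illustration only (the search procedure and names are ours; the specific assignment used in the experiment is not reproduced here):

```python
import itertools
import math
import random

# The nine objects form a 3 x 3 grid of shape (rows) x color (columns);
# ranks 0-8 stand in for the nine values from -$10 to +$10 in $2.50 steps.
objects = [(s, c) for s in range(3) for c in range(3)]
pairs = list(itertools.combinations(range(9), 2))  # all 36 object pairs

shape_d = [abs(objects[i][0] - objects[j][0]) for i, j in pairs]
color_d = [abs(objects[i][1] - objects[j][1]) for i, j in pairs]

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Randomly search value-to-object assignments until pairwise value
# distance is essentially uncorrelated with both visual distances.
random.seed(0)
ranks = list(range(9))
assignment = None
for _ in range(20000):
    random.shuffle(ranks)
    value_d = [abs(ranks[i] - ranks[j]) for i, j in pairs]
    if (abs(pearson(value_d, shape_d)) < 0.1
            and abs(pearson(value_d, color_d)) < 0.1):
        assignment = list(ranks)
        break
```

Any assignment passing this check yields the desired property: across the 36 object pairs, knowing how far apart two objects are in value tells you nothing about how far apart they are in shape or color.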
During each trial of the training sessions, the participant saw two objects next to each other on a computer screen. Participants were asked to choose one object in an effort to maximize the total amount of money in their bank. On each trial during the choice task, the value of each object was drawn from a Gaussian distribution with a standard deviation of $0.25 that was centered on the mean value of the object. This variation served to present the stimuli and their associated values during the learning phase while preventing participants from simply memorizing a fixed number for each object (rather than thinking of the objects as having worth). After an object was chosen, it was highlighted, and the amounts for both objects were displayed on the bottom of the screen. Next, a screen appeared showing the participant's total bank up to that point. Each participant completed 20 blocks of 72 trials each over a 4-day period. Each object was paired with every other object an equal number of times, and identical objects were never presented together during a trial. Each participant received payment after the final scan. This payment included 10% of the final bank value of a randomly chosen block, excluding blocks on the first day, from the training sessions. The average of this bonus payment across participants was $29.
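The per-trial payoff jitter can be sketched as follows (a minimal illustration of the sampling procedure described above; the function name is ours):

```python
import random

def trial_payoff(mean_value, sd=0.25, rng=random):
    """Draw the payoff shown on one training trial: a Gaussian sample
    centered on the object's true mean value, with SD $0.25, rounded
    to the nearest cent."""
    return round(rng.gauss(mean_value, sd), 2)
```

Under this scheme, an object whose mean value is +$10 might display $9.87 on one trial and $10.22 on the next, so participants must track the object's worth rather than a single memorized number.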
The primary dependent variable was a measure of fMRI activity obtained while participants viewed these objects while performing a difficult cover task unrelated to value (Figure 1B). On each trial, we presented one object on a gray background. A line, randomly tilted between 10° and 40° from vertical, divided the object such that 65% of it fell on either the left or the right side. The participant indicated by button press on each trial whether more of the shape was to the left or to the right of the line. We chose this cover task because it required the participant to attend to the appearance of the stimulus shown on each trial but did not involve an explicit comparison between sequential stimuli nor their respective monetary values (Drucker et al., 2009). The stimuli were backprojected onto a screen viewed by the participant through a mirror mounted on the head coil and subtended 5° of visual angle. Each stimulus was presented for 1300 msec, with a 300-msec ISI consisting of the mean gray background.
We employed a pseudorandom and counterbalanced, continuous carry-over design (Aguirre, 2007) that controlled the influence of stimulus order upon neural response. This allowed us to measure both fMRI adaptation and the direct effect for each item in a continuous sequence. The order of the stimuli was determined using a de Bruijn sequence (Aguirre, Mattar, & Magis-Weinberg, 2011). Given the nine stimuli and a null trial, a k = 10, n = 3 de Bruijn sequence was sought, using a trial duration of 1600 msec and three separate neural models (i.e., value, color, and shape; Figure 1C). The sequence was optimized to detect adaptation responses predicted by the value dimension (and, specifically, by the dimension of what we will refer to, below, as the signed value, in contrast to the unsigned or absolute value), but detection powers for effects of the unsigned value dimension and each of the visual dimensions were also taken into consideration while optimizing the sequence. Additional blank trials were added to the sequence to increase the power of the main effect (all stimuli vs. null trials) and to increase the total length of the sequence to an integer multiple of repetition times (TRs). This sequence was then broken into five runs of 128 TRs each. The last eight TRs from each run were added to the beginning of the subsequent run to ensure that our sequence counterbalancing was not affected by breaking it into smaller runs.
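For reference, a standard de Bruijn sequence generator is sketched below. This is the classic Lyndon-word construction, not the model-guided, detection-power-optimized variant used in the published design (Aguirre et al., 2011); it illustrates only the counterbalancing property. With k = 10 labels (nine stimuli plus a null trial) and n = 3, every three-item subsequence occurs exactly once in a cyclic sequence of k³ = 1000 trials:

```python
def de_bruijn(k, n):
    """Generate a cyclic de Bruijn sequence over alphabet {0, ..., k-1}
    in which every length-n subsequence appears exactly once
    (standard Lyndon-word concatenation algorithm)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence
```

Because every stimulus follows every possible two-stimulus history exactly once, first-order (and here second-order) carry-over effects are balanced across the sequence, which is what permits separating adaptation effects from direct item effects.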
In addition to the cover task, we collected fMRI data during two additional tasks. At the end of the first scanning session, participants completed a one-back task with faces, objects, and scrambled objects. We included this functional localizer so that we could define ROIs corresponding to early visual cortex (EVC), using the contrast of scrambled objects greater than objects, and to lateral occipital cortex (LOC), using the contrast of objects greater than scrambled objects. The EVC region corresponds to the foveal confluence of visual areas V1–V3, which we confirmed by projecting a cortical surface template of these early visual areas (Benson, Butt, Brainard, & Aguirre, 2014) onto the volumetric data (Figure 2). During the second scanning session, after completing the cover task, participants completed two short runs of a choice task in which they saw each object on the screen for 1300 msec and were asked to choose whether they would prefer to have the value of that object or one dollar. This choice task differed from the training task because we needed to present one stimulus at a time in the scanner so that we could later conduct item-specific analyses. Each run of the choice task was 68 TRs.
Scans were collected on a 3-T Siemens Trio (Berlin) using a 32-channel surface array coil. Echo-planar BOLD fMRI data were collected at a TR of 3 sec, with 3 × 3 × 3 mm voxels covering the entire brain. A high-resolution anatomical image (3-D magnetization prepared rapid gradient echo) was also collected with 1 × 1 × 1 mm voxels for each participant. The stimuli were presented using a Sanyo (Moriguchi, Japan) SXGA 4200 lumens projector with a Buhl long-throw lens for rear projection onto Mylar screens, which participants viewed through a mirror mounted on the head coil.
Image preprocessing and analyses were conducted using FMRIB Software Library (Smith et al., 2004). The first eight TRs from each scan were removed before analysis. The data were smoothed with a FWHM Gaussian kernel of 5 mm. The functional images were aligned to the middle image of the time series with MCFLIRT (Jenkinson, Bannister, Brady, & Smith, 2002) and then transformed to standard Montreal Neurological Institute space. Within-subject statistical models were created using a general linear model (Figure 1D). Experimental conditions were convolved with a canonical hemodynamic response function, and spikes caused by head motion were included as covariates in the model. Beta estimates from the model were then averaged across each ROI.
When using the continuous carry-over approach to measure neural adaptation, the relationship of each stimulus to the prior stimulus forms the basis of the covariates (Aguirre, 2007). In our design, we used both positive and negative values to distinguish changes in responses due to the actual value of the objects from those due to the behavioral salience of each object (i.e., large vs. small values, as reflected by the absolute value, or what we will call the unsigned value). With this in mind, we modeled the distance between the value of each stimulus and the prior stimulus in two ways: One covariate models the “signed effect,” with nine different transition sizes, such that the stimulus associated with wins of $10 was maximally different from the one associated with losses of $10; the second covariate models the “unsigned effect,” with five transition sizes such that those two stimuli (positive and negative $10) would be maximally similar. The inclusion of the unsigned covariate was added to the analysis to demonstrate that our effect of interest, namely, the signed adaptation effect in EVC, was about value per se and not simply behavioral salience.
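The two distance codings described above can be sketched as follows (our reconstruction for illustration; the function and variable names are ours, not from the authors' analysis code). The signed coding yields nine possible transition sizes (0–8 steps of $2.50), whereas the unsigned coding yields five (0–4 steps):

```python
def carryover_covariates(trial_values):
    """For each trial after the first, code the distance (in $2.50 steps)
    between the current and previous stimulus value, on both the signed
    and unsigned (absolute-value) scales."""
    signed, unsigned = [], []
    for prev, cur in zip(trial_values, trial_values[1:]):
        # Signed: +$10 vs. -$10 is maximally different (8 steps).
        signed.append(abs(cur - prev) / 2.5)
        # Unsigned: +$10 vs. -$10 have equal salience (0 steps).
        unsigned.append(abs(abs(cur) - abs(prev)) / 2.5)
    return signed, unsigned
```

For example, for the stimulus sequence [−$10, +$10, +$2.50], the signed covariate is [8, 3] and the unsigned covariate is [0, 3]: the transition from a $10 loss to a $10 gain is maximal on the signed scale but null on the unsigned (salience) scale.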
Our model included covariates for signed adaptation, unsigned adaptation, shape, color, blank trials, and trials after a blank trial. When using the continuous carry-over approach to measure adaptation, a trial in the adaptation covariates represents the difference along a given dimension between the present stimulus and the one that precedes it. Therefore, we included trials that followed blank screens as a covariate of no interest in the model because a blank screen trial does not have a value to serve as a comparison for the stimulus it precedes. For each participant, the five main task runs were first modeled individually and then combined using a higher-level fixed effects model. Data were then combined across participants using a random effects model.
We report analyses in five ROIs. We included two functionally defined ROIs in visual cortex, EVC and LOC, as described above. Although we were primarily interested in the effects of value on object representations in EVC (and potentially LOC), we included value-related ROIs to provide a point of reference for our findings in EVC. For example, if we find that the signed value covariate does explain variance in EVC after training, how does that compare with more traditionally value-related ROIs? Is there dissociation between EVC and these other regions, or are the effects similar across these different cortical regions? Because there is currently not an agreed-upon way to functionally localize value-related brain areas, we defined value-related ROIs based on the anatomical coordinates reported in a meta-analysis of fMRI experiments that examined subjective value (Bartra et al., 2013); we defined ROIs in ventromedial pFC (VMPFC), dorsomedial pFC (DMPFC), and striatum.
During each day of the training and at the end of the second scan, participants performed a choice task that required them to use their knowledge of the values of the objects to optimize their winnings. Participants chose the object with the greater associated reward more frequently on the final training day (mean ± SEM correct = 90.1 ± 3.4%) than on the first training day (mean ± SEM correct = 69.7 ± 2.7%). The scores at the end of training were above chance (50%) for all participants (Figure 3) and significantly different from the scores on Day 1 (t(12) = 9.7, p < .001). Participants were also able to identify the values associated with the stimuli during the alternate choice task (i.e., object vs. $1) at the end of the second scan (mean ± SEM correct = 89.7 ± 2.3%).
We measured the degree to which the learned value of an object modulates the neural response to the presentation of subsequent objects of greater or lesser value. This neural adaptation effect could be manifested as proportional to the signed value of an object, meaning that an object associated with a loss of $10 would be treated as maximally different from an object associated with a gain of $10. Alternatively, neural populations could encode the absolute value of an object and thus reflect behavioral salience. In this case, an object associated with either a gain or a loss of $10 would be treated as maximally different from an object with a small relative value (e.g., $2.50). We tested for both of these possible effects in each of five ROIs. In addition, we examined these effects before and after learning of object value.
The primary question of this experiment is whether nonvisual value information is coded in the neural response early in the visual processing pathway even when it is orthogonal to all visual dimensions and the responses are measured during a task that makes no reference to the abstract (value) dimension. As expected, before training, no significant adaptation effect related to either signed or unsigned value was found in any ROI (Figure 4). In contrast, significant neural adaptation related to shape similarity was found in multiple regions. This confirms that, before training, neural similarity tracked shape but not the to-be-learned object values.
After training, we observed an adaptation effect for the signed model in EVC (t(12) = 2.87, p < .02; Figure 5A). This effect was greater than that observed during the pretraining scan (marginally significant interaction, t(12) = 2.11, p = .057). Signed value was not associated with adaptation in any of the other ROIs, nor did we find significant effects with the unsigned model in any of our ROIs.
Prior studies of value representation have examined the overall magnitude of neural response associated with item value (Bartra et al., 2013; Levy et al., 2011; Tusche, Bode, & Haynes, 2010). We tested for similar item effects in our data by determining whether the magnitude of the BOLD fMRI signal was proportional to object value, independent of the similarity of value to the preceding or following stimuli. We did not observe a significant item effect for signed value in any of our ROIs during either scan session. Given that prior studies have shown monotonic increases of activity in pFC and striatum as a function of stimulus value during value-related decision tasks, we also measured these effects during the value task at the end of the second scan session. We analyzed the data from these scans in the same way as the data from the main task. Whereas we did not find adaptation effects of value during this task, we did find direct effects of relative value in DMPFC (t = 2.77, p < .02) but not in striatum or VMPFC (both ps > .1). Although the absence of a significant value response in striatum and VMPFC may seem surprising, it could reflect the fact that participants did not expect a reward based on their performance in this task (i.e., participants were aware that we were simply testing their knowledge of the values).
Finally, we examined the adaptation effect of shape similarity. Although this effect has no direct bearing on the goals of this study, we report it here as it establishes the sensitivity of our ROIs to object similarity, beyond value. We observed an adaptation effect of shape similarity in LOC (t(12) = 2.96, p < .02), EVC (t(12) = 2.31, p < .05), DMPFC (t(12) = 3.05, p = .01), and VMPFC (t(12) = 3.02, p = .01) and a marginal effect of shape similarity in striatum (t(12) = 2.06, p = .06; Figure 5B). There were no reliable differences in the magnitude of the shape adaptation effect pretraining versus posttraining in any of these ROIs. We also tested for neural adaptation related to the sequential effect of color change. This effect was not significant during either scan in the EVC (both ps > .3) or the LOC (both ps > .2).
In summary, EVC showed adaptation to shape before and after training and adaptation to value after training. In contrast, LOC, DMPFC, and VMPFC showed adaptation to shape (at both time points) but no adaptation to value.
We scanned participants while they viewed a set of nine novel objects that varied across two visual dimensions (color and shape) and one orthogonal, nonvisual dimension (monetary value). We measured fMRI adaptation to characterize the neural similarity of these novel objects before and after a training period during which participants associated monetary values with the objects. We found a recovery from adaptation along the monetary value dimension in EVC only after training. We interpret this as evidence that object representations in the visual cortex are affected by value learning even when value is orthogonal to visual features and even when participants are engaged in a perceptual task unrelated to object value.
This value effect may be related to other effects of learning that have been reported in the visual system. Folstein et al. (2012) mention that in previous studies in which shape spaces were created by morphing complex objects, the dimensions that define those spaces are unclear and may not have existed before category learning (Gureckis & Goldstone, 2008; Goldstone & Steyvers, 2001). The authors further posit that because the objects that differ along dimensions relevant to the learned categories are more perceptually discriminable after learning, category learning may create representations of those relevant object dimensions. In this study, we believe that a similar process is at work. Specifically, learning the abstract property of value has modified the representations within certain assemblies of color- and shape-responsive neurons in EVC to better discriminate along the value dimension.
Our results go beyond the evidence for attention effects of value in the visual system (Serences, 2008), or what others have called visual perceptual learning (Sasaki, Nanez, & Watanabe, 2009). These studies have shown that reward history associated with a specific feature of a set of stimuli will modulate responses to that feature in EVC and out into higher-level visual areas. Our results add to this literature by showing that even when reward history is orthogonal to the low-level features of objects, neural activity in EVC tracks the value of each stimulus. Previous studies have suggested that coupling reward with visual stimuli may improve performance during perceptual tasks (Nomoto et al., 2010; Serences, 2008; Engelmann & Pessoa, 2007; Pessiglione et al., 2006).
Whereas prior studies of the representation of value have examined the bulk neural response evoked by the stimulus, here we measured neural adaptation induced by the similarity of object values. We used neural adaptation to characterize the similarity space represented within separate neural populations. We chose this method because it allowed us to manipulate and measure each stimulus dimension separately and to observe neural sensitivity to each stimulus dimension across our ROIs. We did not observe a modulation of overall level of neural activity driven by object value in any of our ROIs. Interestingly, we also found that brain regions commonly found to be associated with value (DMPFC and VMPFC) encoded the shapes of objects, but not their values, indicating that when value is irrelevant to the present task, these regions will instead track task-relevant dimensions of the presented stimuli. This finding is not surprising because it has been reported that regions of frontal cortex, including dorsal and medial PFC, play a crucial role in the active biasing of task-relevant processes against strong competing alternatives (Chadick, Zanto, & Gazzaley, 2014; Chadick & Gazzaley, 2011; Miller & Cohen, 2001). Furthermore, evidence from several species suggests that the striatum contributes directly to decision-making, action selection, and initiation (Green, Biele, & Heekeren, 2012; Balleine, Delgado, & Hikosaka, 2007). This finding suggests that the process of tracking values occurs early in perception and that frontal brain regions, involved in executive functions, are not recruited in value representation unless a choice between items is necessary.
Taken together, our findings suggest that the sensory system plays a large role in the brain's valuation process: value learning modified the representations within certain neurons in EVC to better discriminate along the value dimension, whereas neurons in brain regions commonly associated with value were not tuned to the value dimension when it was irrelevant to the task. This top–down facilitation in visual areas serves to assist in learning and in disambiguating input data, which can lead to faster RTs and better accuracy in important situations (Hsieh, Vul, & Kanwisher, 2010; Nomoto et al., 2010; Bar, 2003). If neurons in executive brain regions are not tuned to dimensions irrelevant to task demands, then where does value information reside when not in use? Our data support the idea that neural tuning in EVC, as measured by fMRI adaptation, contains information that could facilitate faster processing during value-based decision tasks. However, the lack of a main effect of value in EVC suggests that value is represented in visual cortex differently from putative "value regions." Indeed, we did find an item-level effect of value in DMPFC after training, during the explicit value task, indicating that a different representation of value is associated with this brain region.
An interesting topic for further examination is whether altered neural representations in EVC give rise to altered performance on simple perceptual tasks. Speeded discrimination judgments might reflect the value of the objects, suggesting that learning to associate a value with an object influences perceptual similarity, in addition to neural similarity.
In conclusion, we found that neural populations in EVC encode value information. The response in EVC was measured during a demanding task that diverted attention away from the object value. The stimuli used were novel objects with no prior reward history, cultural significance, or familiarity, and the value dimension was orthogonal to the low-level visual features of the stimuli. These findings suggest that the learned value of objects penetrates to the earliest level of cortical sensory representation.
This work was funded by National Institutes of Health grant R01EY021717. We would like to thank the Kable laboratory at the University of Pennsylvania for their insightful comments.
Reprint requests should be sent to Andrew S. Persichetti, Department of Psychology, Emory University, 36 Eagle Row, Room 410, Atlanta, GA 30322, or via e-mail: email@example.com.