Good translatability of behavioral measures of affect (emotion) between human and nonhuman animals is core to comparative studies. The judgment bias (JB) task, which measures “optimistic” and “pessimistic” decision-making under ambiguity as indicators of positive and negative affective valence, has been used in both human and nonhuman animals. However, one key disparity between human and nonhuman studies is that the former typically use secondary reinforcers (e.g., money) whereas the latter typically use primary reinforcers (e.g., food). To address this deficiency and shed further light on JB as a measure of affect, we developed a novel version of a JB task for humans using primary reinforcers. Data on decision-making and reported affective state during the JB task were analyzed using computational modeling. Overall, participants grasped the task well, and as anticipated, their reported affective valence correlated with trial-by-trial variation in offered volume of juice. In addition, previous findings from monetary versions of the task were replicated: More positive prediction errors were associated with more positive affective valence, a higher lapse rate was associated with lower affective arousal, and affective arousal decreased as a function of number of trials completed. There was no evidence that more positive valence was associated with greater “optimism,” but instead, there was evidence that affective valence influenced the participants' decision stochasticity, whereas affective arousal tended to influence their propensity for errors. This novel version of the JB task provides a useful tool for investigation of the links between primary reward and punisher experience, affect, and decision-making, especially from a comparative perspective.

An important goal in cognitive neuroscience, psychopharmacology, and affective science is the development of translational tasks that can be used to assess affective (emotional) states. For example, the development of novel treatments for affective disorders in humans depends on the use of animal models of affect (Rupniak, 2003). Because we cannot simply ask animals to report their affective state, proxy indicators such as tests of anhedonia (Van der Harst & Spruijt, 2007) or cognitive bias (Mendl, Burman, Parker, & Paul, 2009) are often designed to simulate and assess behavioral characteristics observed in humans experiencing affective states such as depression or anxiety (American Psychiatric Association, 2013; Williams, Mathews, & MacLeod, 1996; Wright & Bower, 1992; MacLeod, Mathews, & Tata, 1986). The judgment bias task has been demonstrated to provide a measure of affective valence (positivity or negativity) in a range of nonhuman animals, with relatively "optimistic" (risk-seeking) decision-making being associated with environmental or pharmacological manipulations designed to induce positive affect and relatively "pessimistic" (risk-averse) decision-making being associated with manipulations designed to induce more negative affect (Lagisz et al., 2020; Neville, Nakagawa, et al., 2020). Since its conception by Harding, Paul, and Mendl (2004), several studies have used judgment bias in nonhuman animals as a translational tool to investigate human affective disorders and pharmacological treatments for such disorders (Hales, Robinson, & Houghton, 2016; Anderson, Munafò, & Robinson, 2013; Papciak, Popik, Fuchs, & Rygula, 2013; Enkel et al., 2010).

Conducting judgment bias studies with humans, who can report how they feel, may help to provide a better insight into how performance in judgment bias tasks reflects affective state. It might also help to elucidate the putative adaptive function of affect-modulated decision-making, specifically, leading to an understanding of precisely how and why rewarding experiences might lead to positive affect and then to “optimistic” decision-making, and vice versa. However, human judgment bias studies to date have painted a mixed picture. Whereas some studies have found that “pessimism” correlates with subjective reports of more negative affect (Positive and Negative Affect Schedule [Paul et al., 2010], State–Trait Anxiety Inventory [Aylward, Hales, Robinson, & Robinson, 2020; Anderson, Hardcastle, Munafò, & Robinson, 2012], Beck Depression Inventory [Daniel-Watanabe, McLaughlin, Gormley, & Robinson, in press], Visual Analog Scale for anxiety [Anderson et al., 2012]), other studies have found no relationship between reported affect and judgment bias (affect grid and Positive and Negative Affect Schedule [Iigaya et al., 2016], State–Trait Anxiety Inventory [Daniel-Watanabe et al., in press]) or even that “optimistic” decision-making is associated with more negative reported affect (affect grid [Neville, Dayan, Gilchrist, Paul, & Mendl, 2021]).

There are several differences between the human and nonhuman animal judgment bias studies that may have led to these inconsistencies; humans are required to learn the task in a shorter period (i.e., within an hour-long session as opposed to over several days for most nonhuman animals), humans do not live under the highly controlled conditions typical of laboratory animals, and there may be social factors influencing human decision-making (e.g., wanting to perform well to satisfy the experimenter). However, one difference that might be of particular significance is that judgment bias studies in nonhuman animals are typically conducted using primary reinforcers such as food or electric shocks, whereas all judgment bias testing in humans has used secondary reinforcers such as monetary gain or loss.

Many studies have identified differences between the neural processing of primary and secondary reinforcers (Sescousse, Caldú, Segura, & Dreher, 2013; Delgado, Jou, & Phelps, 2011; Beck, Locke, Savine, Jimura, & Braver, 2010; Grimm & See, 2000). For example, it has been demonstrated that primary rewards are more strongly represented in evolutionarily older brain regions, such as the anterior insula, whereas secondary rewards are more strongly represented in evolutionarily newer brain regions, such as the anterior OFC (Sescousse et al., 2013; Delgado et al., 2011). Given these differences, it is important to develop tasks for humans that utilize primary reinforcers, and hence more closely resemble animal tasks, and to investigate whether they yield similar findings to the typical secondary-reinforcer human task. The use of primary reinforcers in this way is particularly pertinent to our understanding of judgment bias from a functional and evolutionary perspective, given that affect is hypothesized both to reflect ongoing and prior experience of rewards and punishers and to mediate the way this experience guides decision-making (Bach & Dayan, 2017; Marshall, Trimmer, Houston, & McNamara, 2013; Nettle & Bateson, 2012; Mendl, Burman, & Paul, 2010). Moreover, primary reinforcers may tap into the fundamental affective processes and pathologies that underlie affective disorders more reliably than secondary reinforcers, given their more direct relevance to our ability to survive and reproduce. Consequently, to better understand judgment bias as a measure of affect, we need a task for humans in which we administer primary reinforcers and can obtain a fine-scale picture of the sequential effects that might influence decision-making and affective state.

To this end, we aimed to develop a translation of the automated rat judgment bias task (Neville, King, et al., 2020; Jones et al., 2018) for humans that uses primary reinforcers: apple juice and cold salty tea (Pauli et al., 2015). Using this novel variant of the judgment bias task alongside computational modeling, we additionally aimed to elucidate the extent to which latent processes underlying decision-making might relate to subjective experiences of affect, reward, and punishment within the task. We hypothesized that more positive reported affect would be associated with model parameters characterizing biases toward the “optimistic” response and that, consistent with previous research (Neville et al., 2021; Rutledge, Skandali, Dayan, & Dolan, 2014), the absolute favorability (i.e., average earning rate) of within-task experience would inform decision-making and the relative favorability (i.e., reward prediction error) of within-task experience would inform reported affective valence.

Participants

Thirty-three people from the Bristol Veterinary School community participated in the study. All participants provided written informed consent, and the study was approved by the Faculty of Science Research Ethics Committee at the University of Bristol. This sample size was based on a previous human judgment bias study (39 participants across two conditions; Neville et al., 2021) and a previous study using primary reinforcers as part of a conditioning paradigm (29 participants; Pauli et al., 2015).

The inclusion criteria were that the participant enjoyed apple juice; was not allergic to apple juice, salt, or black tea; was not hypertensive and did not have any medical condition requiring them to limit their salt intake; and was over the age of 18 years. Participants were asked to abstain from drinking anything (except water) or eating in the hour before the study. To cover their time and expenses, participants were paid £7 for the hour-long session. To encourage full engagement with the study, participants were informed that the three participants with the highest accuracy would receive an additional £7 bonus.

Apparatus

The task was conducted on a laptop (Dell Latitude), which was connected to two syringe pumps (SPLG100, World Precision Instruments) set to pump liquid at a rate of 2 mL per minute. Sterilized food-grade PVC tubing (outer diameter: 11 mm, inner diameter: 8 mm) was attached to syringes (Becton Dickinson; 50-mL Plastipak) that were driven by the syringe pumps. This tubing was connected via a tube connector to smaller sterilized food-grade PVC tubing (outer diameter: 6 mm, inner diameter: 4 mm), which was held in place in front of the participant using a retort clamp and stand, the height of which could be adjusted by the participant at the start of the experiment. The participants placed the end of these tubes in their mouth. Individuals made responses on a keyboard connected to the laptop. The code for the task was written in MATLAB (MathWorks, Inc.) using the Psychtoolbox extensions (Kleiner et al., 2007; Brainard, 1997).

Procedure

The methodology was adapted from that of the human monetary judgment bias task (Neville et al., 2021), which itself was a translation of a rodent judgment bias task (Jones et al., 2018). On each trial of the task, participants were instructed to press and hold the enter key before being presented with a fixation cross for 1000 msec followed by a random dot kinematogram (RDK) displayed for 2000 msec, which across trials varied in direction of motion (leftward or rightward) and coherence (proportion of dots moving in a coherent direction: 0.01, 0.02, 0.16, or 0.32). Participants had two options when the RDK was presented: to release the key before the end of the 2-sec RDK presentation ("leave") or to continue to hold the key for 2 sec ("stay"). The outcome associated with either response depended on the stage of training and the direction of the RDK: one direction (leftward or rightward, counterbalanced across participants) was favorable and required a "stay" response to gain reward, whereas the other was unfavorable and required a "leave" response to avoid punishment.

Participants were provided with written instructions about the rules of the task and were then provided with practice trials that were composed of two blocks of 48 trials. The aim of the first practice block was to introduce the participants to the task and train them on the correct responses to the RDK. In this first practice block, the word “correct” was shown on screen for correct responses (i.e., those where “stay” was executed when the RDK direction was favorable, and “leave” was executed when the RDK direction was unfavorable), and “incorrect” was shown on screen for incorrect responses. The direction of motion of the RDK was very easy to detect (coherence = 0.32) on all trials, and the direction of motion was leftward on 50% of trials and rightward on the remaining 50%.

The aim of the second practice block was to acquaint participants with the delivery and taste of the apple juice (Apple Juice from Concentrate, Morrisons) and salty tea. Salty tea was prepared each morning as per Pauli et al. (2015) with two black tea bags (Bettys & Taylors of Harrogate; Yorkshire Tea) and 29 g of salt per liter of boiling water; the solution was chilled before data collection. We opted to use salty tea as opposed to electric shocks because we considered it to be a milder punisher (hence less of an ethical concern) and so that the modality of the punisher was the same as that of the reward (i.e., both gustatory). Importantly, liquid reinforcers such as juice and salty tea have successfully been used in fMRI studies (Pauli et al., 2015; Metereau & Dreher, 2013; Kim, Shimojo, & O'Doherty, 2011); hence, their use in our task paves the way for future investigations of the neural underpinnings of judgment bias in humans. In this block, the direction of the RDK was easy to detect (coherence = 0.16) on 50% of trials, of which half were leftward and half were rightward, and moderately difficult to detect (coherence = 0.02) on the remaining 50%, of which half were leftward and half were rightward. The volume of juice received for "stay" responses when the direction of the RDK was favorable was 0.457 mL, and likewise, the volume of salty tea received for "stay" responses when the RDK was unfavorable was 0.457 mL. While juice was delivered and for 3000 msec after delivery, the words "Juice delivered" were displayed on screen, whereas during salty tea delivery and for 3000 msec after this, the words "Salty tea delivered" were displayed on screen. The potential volume of juice on each trial (i.e., "0.457 mL") was shown above an orange-colored bar with a height proportional to the potential juice volume; similarly, the potential volume of salty tea (i.e., "0.457 mL") was displayed below a brown bar, positioned directly below the orange-colored bar, with a height proportional to the potential tea volume. These bars were shown before the instruction to press and hold the enter key. The participant received nothing for making the "leave" response, and the words "Nothing delivered" were displayed on screen for 3000 msec. Across all blocks, the directions and coherence levels of the RDKs were randomized across trials before the start of the study so that the order of these was identical for all participants.

The test session (see Figure 1) comprised 90 trials on which the RDK moved leftward and 90 trials on which it moved rightward. For each direction, there were 30 trials at each coherence level: low (0.01), moderate (0.02), and high (0.16). RDKs with low and moderate coherence levels were the ambiguous probe cues, whereas those with a high coherence level were the reference cues. Hence, each possible stimulus was shown on 16.7% of trials. The potential volume of juice fluctuated across trials according to a noisy sine wave with a mean volume of 0.457 mL and a standard deviation of 0.216 mL, ranging from a minimum of 0.021 mL to a maximum of 0.781 mL. As a result of ethical concerns about the effect of large quantities of salty tea, the potential volume of salty tea remained at 0.457 mL throughout testing. As in the second practice block, the potential volumes of juice and salty tea were displayed on screen both as text and graphically using two colored bars with heights proportional to the volumes of juice and salty tea. Before the first trial and after every subsequent 10 trials, participants were asked to report how they were feeling using a computerized affect grid (Killgore, 1998). To complete the affect grid, participants had to move a cross that was initially central in the grid to the location that best described their current affective state using the arrow keys on a keyboard. Horizontal movements represented changes in affective valence, with movements to the right reporting a more positively valenced affect. Vertical movements represented arousal, with upward movement reporting higher levels of arousal.
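As an illustration of the kind of offered-juice schedule described above, the following R sketch generates a noisy sine wave with approximately the reported mean and standard deviation, clipped to the reported range. The period, noise level, and seed are assumptions for illustration; the published schedule was fixed in advance and identical for all participants.

```r
# Illustrative noisy sine-wave schedule for the offered juice volume:
# mean ~0.457 mL, SD ~0.216 mL, clipped to [0.021, 0.781] mL over 180 trials.
# The period and noise level are assumptions, not the published schedule.
set.seed(3)
n_trials <- 180
juice <- 0.457 + 0.27 * sin(2 * pi * (1:n_trials) / 45) + rnorm(n_trials, 0, 0.10)
juice <- pmin(pmax(juice, 0.021), 0.781)
round(c(mean = mean(juice), sd = sd(juice)), 3)
```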

Figure 1.

Structure of the human primary reward and punisher judgment bias test session: (1) Participants are shown the potential outcomes of the "stay" response (juice volume: orange bar; tea volume: brown bar) and then must press "enter"; (2) participants are instructed to press and hold the enter key; (3) participants are shown a fixation cross for 1000 msec; (4) participants are presented with an RDK for 2000 msec, during which they must either continue holding the enter key ("stay") or release the enter key ("leave"); (5) participants receive the reward or punishment and are shown the outcome of their action (which is also determined by the true direction of the RDK); and (6) after at least 3000 msec, either the next trial starts or the participant is asked to complete an affect grid (after every 10 trials).


Data Analysis

Two participants were excluded from data analysis because of poor performance at the reference cues, which we defined as not making the correct response significantly above chance across both reference cues according to a one-tailed binomial test (Participant 12, p = .18; Participant 29, p = .99).
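As an illustration, this exclusion criterion can be computed with a one-tailed exact binomial test in R. This is a minimal sketch assuming the 60 reference-cue trials of the test session; the count of correct responses below is hypothetical.

```r
# Minimal sketch of the exclusion criterion: a one-tailed exact binomial test
# of above-chance accuracy pooled across both reference cues (assuming the
# 60 reference-cue trials of the test session; the count here is hypothetical).
n_correct <- 41
n_trials  <- 60
binom.test(n_correct, n_trials, p = 0.5, alternative = "greater")$p.value
```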

We conducted both a model-dependent and model-agnostic analysis of the data. The aim of the model-dependent analysis was to investigate the latent processes involved in decision-making and how these might be modulated by different aspects of reward and punisher experience. Judgment bias RT data (“stay”: 2 sec; “leave”: <2 sec) were fitted to the partially observable Markov decision process (POMDP) model described in full by Neville et al. (2021). Briefly, we consider that participants transition through a 2-D state space in which they accumulate evidence about the true direction of the presented stimulus as informed by their observations (Dimension 1) across the discretized duration of the trial (Dimension 2). The probability on a given trial that an individual will opt for the safe “leave” response, and the speed with which they do so, will depend on their transitions through the state space and the value of occupying each state.

In the model, the movement through the state space and the values are determined by a number of parameters including those characterizing the psychometric function, namely, σ, which reflects the ability of the participant to discriminate between the stimuli, and λref and λamb, which are lapse rates for the reference and ambiguous stimuli, respectively; those characterizing the hazard function (probability of “timeout,” i.e., time elapsed > 2 sec, given “timeout” has not already occurred) to account for the possibility of making the “stay” response by default because of inaccurate timekeeping (φ and ζ); and an inverse temperature parameter (B) to reflect decision stochasticity. In addition, we included a set of parameters that allowed for biases toward or away from the “optimistic” response. This set of parameters was composed of a baseline bias parameter (β0δ) capturing overall tendencies for risk-seeking or risk aversion, as well as the overall dislike for the salty tea and enjoyment of the apple juice, and parameters that encompassed the potential effect of the average earning rate (βR¯δ), most recent outcome (βOδ), weighted prediction error (βwPEδ), and squared weighted prediction error (βwPE2δ) on bias, as well as a parameter that allowed for decision-making to vary as a function of the number of trials completed (βn) to account for fatigue. The average earning rate reflects the learnt value of the test session from previous juice and salty tea intake and updates according to a Rescorla–Wagner learning model, with learning rate αR¯ (following Neville et al., 2021; Guitart-Masip, Beierholm, Dolan, Duzel, & Dayan, 2011), whereas the weighted reward prediction error is the difference between the model-predicted outcome and the actual outcome across trials weighted such that the influence of past prediction errors attenuates over trials, with forgetting factor γwPE (following Neville et al., 2021; Rutledge et al., 2014).
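For reference, the general form of these two experiential quantities can be written as follows. This is a sketch in standard notation following Rutledge et al. (2014) and Neville et al. (2021); the exact discretization and initialization used for fitting may differ.

$$\bar{R}_{n} = \bar{R}_{n-1} + \alpha_{\bar{R}}\,\bigl(R_{n-1} - \bar{R}_{n-1}\bigr), \qquad \mathrm{wPE}_{n} = \sum_{j < n} \gamma_{\mathrm{wPE}}^{\,n-j}\,\bigl(R_{j} - \hat{R}_{j}\bigr),$$

where $R_{j}$ is the outcome on trial $j$ and $\hat{R}_{j}$ is the model-predicted outcome on that trial.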

As the task used food rewards, we included a parameter (κ) that captured the potential effect of satiation as juice intake increased. Specifically, the subjective worth (Rn) of the juice depended on the offered juice volume (in milliliters) on trial n via an exponent that varied as a function of the total juice intake up to that trial (the sum of Ri over trials i = 0 to n − 1):
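The displayed equation is not reproduced in this text. One form consistent with the description above and with the interpretation of κ given below (an illustrative assumption, not necessarily the authors' exact parameterization) is

$$R_{n} = v_{n}^{\,1 - \kappa \sum_{i=0}^{n-1} R_{i}},$$

where $v_{n}$ is the offered juice volume (in milliliters) on trial $n$.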
An exponent was chosen to reflect the nonlinearity of utility functions and, more specifically, to capture their typical concavity; increases in reward gain lead to diminishing marginal increases in the subjective value of those gains (Hsee & Rottenstreich, 2004; Kahneman & Tversky, 1979). As the offered volume of juice is always lower than 1 mL, a negative value of κ represents adherence to the law of diminishing marginal utility, whereas a value greater than zero leads to the opposite (i.e., a steep increase in the subjective value of the juice with additional consumption).

Models were fitted to the RT data using maximum likelihood, with multiple starting values (including the values found to provide the best fit for the core model) because of nonconvexity. Parameters that characterized decision-making in the absence of biases (i.e., B, λamb, λref, σ, φ, and ζ) were included in all models to account for their potential influence on behavior. We then added parameters in a stepwise manner, first assessing whether the parameter characterizing constant biases in decision-making improved model parsimony and then assessing those characterizing within-task variation in decision-making. We then assessed whether the addition of single parameters to the best-fitting model would further improve the model fit. Models were compared using Bayesian information criterion (BIC) values, and we additionally compared the final set of models using the Akaike information criterion (AIC). A stepwise model-fitting procedure was employed because fitting all possible models was not feasible given the large number of possible models and the computational intensiveness of model fitting. Model fitting was carried out using the computational facilities of the Advanced Computing Research Centre at the University of Bristol.

The parameter estimates from the most parsimonious model were analyzed using permutation tests to assess whether they varied significantly from zero, where this was meaningful (i.e., for β0δ and κ).
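The permutation scheme is not detailed in this section; one common implementation for testing whether a set of per-participant estimates differs from zero is a sign-flipping permutation test, sketched below in R with simulated, illustrative values (the scheme actually used may differ).

```r
# Sign-flipping permutation test of whether the mean of per-participant
# parameter estimates (e.g., the baseline bias) differs from zero.
# The estimates below are simulated, illustrative values.
set.seed(1)
estimates <- rnorm(31, mean = 0.05, sd = 0.03)
observed  <- mean(estimates)
null_dist <- replicate(10000, {
  mean(estimates * sample(c(-1, 1), length(estimates), replace = TRUE))
})
p_value <- mean(abs(null_dist) >= abs(observed))  # two-sided p value
p_value
```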

The aim of the model-agnostic analysis was to investigate the relationship, first, between primary reward and punisher experience and reported affect and, second, between the parameters characterizing decision-making and reported affect. These analyses were conducted in R (R Core Team, 2015) using the nlme package (Pinheiro, Bates, DebRoy, Sarkar, & R Core Team, 2017). Likelihood ratio tests were used to assess whether the difference in model deviance was significant after removal of a parameter from a model.

The reported valence (affect grid x coordinate) and reported arousal (affect grid y coordinate) were analyzed using general linear mixed models (GLMMs) in which both the intercept and slope were allowed to vary among participants, with the predictor variables for which a random slope was included determined using BIC comparison (i.e., comparing a model with the variable included in the random effects structure to a model without it). The predictor variables were those encompassing reward and punisher experience, including those derived from the best-fitting POMDP model: the most recent volume of offered juice, weighted reward prediction error (wPE), squared weighted reward prediction error (wPE2), the previous outcome O, total juice consumed, the number of trials completed, and the average earning rate R¯. Because of correlations between wPE, wPE2, and O, and also between total juice consumed, number of trials completed, and R¯, these predictor variables were not included in the same GLMM but instead were included separately, and the resulting models were compared using their BIC values. The GLMMs that provided the best fit were analyzed further.
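A minimal sketch of this model-comparison step using the nlme package is shown below. The synthetic data, column names, and the specific formula are illustrative assumptions rather than the exact models fitted.

```r
library(nlme)

# Synthetic stand-in data: one row per affect-grid report (19 reports from
# each of 31 participants); all columns and values are illustrative.
set.seed(1)
d <- expand.grid(participant = factor(1:31), report = 1:19)
d$n_trials      <- (d$report - 1) * 10
d$offered_juice <- runif(nrow(d), 0.021, 0.781)
d$wPE           <- rnorm(nrow(d), 0, 0.1)
slopes          <- rnorm(31, -0.5, 0.2)          # participant-specific trial slopes
d$valence       <- 14 * d$offered_juice + 40 * d$wPE +
                   slopes[d$participant] * d$n_trials / 10 + rnorm(nrow(d), 0, 5)

# Does a random slope for trial number improve on a random intercept only?
m_slope <- lme(valence ~ offered_juice + n_trials + wPE,
               random = ~ n_trials | participant, data = d, method = "ML")
m_int   <- lme(valence ~ offered_juice + n_trials + wPE,
               random = ~ 1 | participant, data = d, method = "ML")
BIC(m_int) - BIC(m_slope)   # positive values favor the random-slope model

# Likelihood ratio test for a fixed effect (e.g., dropping wPE)
anova(m_slope, update(m_slope, . ~ . - wPE))
```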

In addition, the parameter estimates from the POMDP model were analyzed using a general linear model with mean reported arousal and mean reported valence as fixed effects. The exceptions were ζ and φ, for which the mean timeout probability jointly determined by these two parameters was used instead, to allow a more intuitive interpretation of the results, as in Neville et al. (2021). The values of B were log-transformed because of the presence of extreme outliers exerting undue influence on the model; the model was also run after removal of these outliers.
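A corresponding sketch for the per-participant parameter analysis is below, again using nlme (here nlme::gls fitted with maximum likelihood so that a likelihood ratio test is available); the data frame, column names, and values are illustrative, and the authors' exact fitting routine is not specified in this section.

```r
library(nlme)

# Illustrative per-participant summary data (one row per participant).
set.seed(2)
params <- data.frame(log_B        = rnorm(31, 5.8, 1.5),
                     mean_valence = runif(31, -50, 50),
                     mean_arousal = runif(31, -50, 50))

full    <- gls(log_B ~ mean_valence + mean_arousal, data = params, method = "ML")
reduced <- gls(log_B ~ mean_arousal, data = params, method = "ML")
anova(full, reduced)   # likelihood ratio test for the effect of valence
```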

Judgment Bias

The most parsimonious model of judgment bias RT, according to the BIC values, included the following core parameters: B, λamb, λref, σ, φ, and ζ, which characterized decision-making in the absence of biases, in addition to β0δ, characterizing baseline biases, and κ, characterizing the rate of satiation (see Table 1). This model fit the observed data well (see Figure 2). The AIC values indicated that models additionally including βR¯δ or βwPEδ outperformed the BIC-best model. However, permutation tests revealed that the estimates of these parameters did not differ significantly from zero (βR¯δ: mean = −0.13, SE = 0.11, p = .28; βwPEδ: mean = 0.005, SE = 0.01, p = .72). Hence, there was no strong evidence for the inclusion of these parameters, and we selected the BIC-best model as our final model for further analysis.

Table 1.

Comparison of POMDP Models

Model                                              AIC          BIC
σ, λamb, λref, ζ, φ, B                             15024.49     15656.70
σ, λamb, λref, ζ, φ, B, β0δ                        13652.29     14389.86
σ, λamb, λref, ζ, φ, B, β0δ, βR¯δ                  13600.18     14443.12
σ, λamb, λref, ζ, φ, B, β0δ, βR¯δ, αR¯             13560.34     14508.65
σ, λamb, λref, ζ, φ, B, β0δ, βwPEδ                 13646.15     14489.09
σ, λamb, λref, ζ, φ, B, β0δ, βwPEδ, γwPE           13704.29     14652.60
σ, λamb, λref, ζ, φ, B, β0δ, βwPE2δ                13662.85     14505.80
σ, λamb, λref, ζ, φ, B, β0δ, βwPE2δ, γwPE          13590.04     14538.35
σ, λamb, λref, ζ, φ, B, β0δ, βOδ                   13676.61     14519.55
σ, λamb, λref, ζ, φ, B, β0δ, βn                    13528.49     14371.43
σ, λamb, λref, ζ, φ, B, β0δ, κ                     12641.70     13484.64*
σ, λamb, λref, ζ, φ, B, β0δ, κ, βR¯δ               12639.33     13587.64
σ, λamb, λref, ζ, φ, B, β0δ, κ, βwPEδ              12600.02*    13548.33
σ, λamb, λref, ζ, φ, B, β0δ, κ, βwPE2δ             12991.77     13940.08
σ, λamb, λref, ζ, φ, B, β0δ, κ, βOδ                12700.81     13649.12
σ, λamb, λref, ζ, φ, B, β0δ, κ, βn                 12700.61     13648.92

Asterisks highlight the minimum value for each comparison method.

Figure 2.

Observed versus generated RT data. The mean discretized RT and mean proportion of stay responses for each stimulus level for both the model-generated and observed judgment bias data. Error bars represent 1 SE.


Decision-Making and Reported Affect

Visual inspection of the parameter estimates revealed several potential outliers. These were formally defined as estimates that were either above the third quartile or below the first quartile by more than 1.5 times the interquartile range (i.e., below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR). All analyses were conducted both before and after the removal of these outliers. The presented results are from the analyses including the outliers, except where the removal of outliers changed the results qualitatively, in which case both results are presented.
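A minimal sketch of this outlier rule in R (the example values below are illustrative, not the actual parameter estimates):

```r
# Flag values outside [Q1 - 1.5 * IQR, Q3 + 1.5 * IQR]; example values are illustrative.
x   <- c(336, 410, 120, 890, 77, 4100, 5309, 15655)
q   <- quantile(x, c(0.25, 0.75))
iqr <- q[2] - q[1]
is_outlier <- x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr
x[is_outlier]
```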

The parameter characterizing satiation was significantly lower than zero (κ: mean = −5.84, SE = 4.23, p < .001), reflecting the fact that participants valued juice less with increasing juice consumption. Overall, the subjective value of the juice depreciated across the test session and, for much of the test session, remained less appetitive than the salty tea was aversive (Figure 3). This decrease in the subjective value of the juice exerted a strong influence on behavior; participants were more risk-averse than would be expected if there was no effect of satiation (Figure 4). The effect of satiation was also reflected in the observed decrease in RTs and proportion of “stay” responses made as the number of trials increased (Figure 5). The estimates of κ were not associated with either reported valence (beta weight = −5.72, SE = 4.64, likelihood ratio test [LRT] = 2.03, p = .16) or arousal (beta weight = 0.98, SE = 4.64, LRT = 0.075, p = .79).

Figure 3.

The value of juice across trials; the black line reflects the offered value, and the gray line reflects the mean satiation-dependent value across participants as determined by the POMDP model. The dashed line shows the threatened volume of salty tea. The shaded error bars represent 1 SE.

Figure 4.

Observed versus generated RT data where there is full satiation or no satiation, and generated RT data where there is no bias. The mean discretized RT for each stimulus level for both the model-generated (with κ = 0, β0δ = 0, and cumulative juice intake equal to the maximum juice intake for the participant) and observed judgment bias data. Error bars represent 1 SE.

Figure 5.

Observed versus generated RT and decision data. The mean discretized RT and mean proportion of stay responses for each stimulus level for both the model-generated and observed judgment bias data split by trials in the first and second half of the test session. Error bars represent 1 SE.


The bias parameter was significantly greater than zero (β0δ: mean = 0.05, SE < 0.01, p < .001), reflecting the participants' overall bias toward persisting with the “stay” response for a longer duration on a trial than would otherwise be anticipated (Figure 4), but this parameter was not predicted by reported affective valence (beta weight < 0.01, SE < 0.01, LRT < 0.01, p = .95) or arousal (beta weight < 0.01, SE < 0.01, LRT = 0.55, p = .46). Thus, although the value of the juice depreciated considerably over trials (Figure 3), participants occasionally continued to risk making the “stay” response, which was driven by this fixed bias. There was no evidence that the extent of this bias, or the speed of satiation, was influenced by affective state.

Participants who reported lower arousal tended to make more errors to the ambiguous stimulus; the association between the lapse rate and reported arousal was marginally nonsignificant (λamb: beta weight = −0.02, SE = 0.01, LRT = 3.77, p = .052). The reference lapse rate did not depend on reported arousal (λref: beta weight < 0.01, SE < 0.01, LRT = 2.04, p = .15). Neither λamb (beta weight < 0.01, SE = 0.01, LRT < 0.01, p = .93) nor λref (beta weight < 0.01, SE < 0.01, LRT = 2.13, p = .15) depended on reported valence. The slope parameter, σ, did not significantly depend on reported affective valence (beta weight = 0.04, SE = 0.04, LRT = 1.10, p = .30) or arousal (beta weight = 0.03, SE = 0.04, LRT = 0.78, p = .38). Likewise, the mean timeout probability (determined by ζ and φ) was not significantly associated with reported affective valence (beta weight < 0.01, SE = 0.03, LRT < 0.01, p = .95) or arousal (beta weight = 0.01, SE = 0.03, LRT = 0.41, p = .52).

Participants who reported more positive valence had significantly higher inverse temperature parameter values (B: beta weight = 1.05, SE = 0.48, LRT = 4.92, p = .027; Figure 6), and there was no significant relationship between affective arousal and B (beta weight = 0.22, SE = 0.48, LRT = 0.23, p = .63). However, after outlier removal, no significant effect of valence (beta weight = −0.21, SE = 0.30, LRT = 0.52, p = .47) or arousal (beta weight = −0.46, SE = 0.28, LRT = 2.97, p = .085) on inverse temperature was observed (Figure 6). The median value of the inverse temperature parameters was 336.4297 with an interquartile range of 1224.997, and any value over 3172.95 was deemed to be an outlier. The six outliers identified, whose removal altered the results qualitatively, had values of 4099.67, 5309.31, 82248.43, 15654.76, 121346.67, and 2779806.37 (Figure 6).

Figure 6.

The relationship between the log-transformed inverse temperature parameter (with unit mL⁻¹) and reported affective valence, with regression lines for the data including (solid line) and excluding (dashed line) outliers.


Reported Affect and Reward and Punisher Experience

The best-fitting GLMM of reported valence included the potential outcome solely as a fixed effect (ΔBIC = 12.27 vs. inclusion as a random linear effect), the number of trials completed with a slope coefficient that varied among participants (ΔBIC = 26.91 vs. inclusion solely as a fixed effect; ΔBIC = 26.75 vs. the average earning rate; ΔBIC = 1.10 vs. total juice consumed), and the weighted prediction error solely as a fixed effect (ΔBIC = 14.98 vs. inclusion as a random linear effect; ΔBIC = 54.25 vs. the squared weighted prediction error; ΔBIC = 38.14 vs. the previous outcome). Reported valence was positively correlated with both the prediction error (beta weight = 40.70, SE = 5.34, LRT = 52.14, p < .001) and the potential outcome (beta weight = 14.30, SE = 4.24, LRT = 11.33, p < .001). The random slope coefficients for trial did not differ significantly from zero according to a permutation test (mean = −0.03, SE = 0.12, p = .81).

The best-fitting GLMM of reported arousal included the potential outcome solely as a fixed effect (ΔBIC = 15.34 vs. inclusion as a random linear effect), the number of trials completed with a slope coefficient that varied among participants (ΔBIC = 71.70 vs. inclusion solely as a fixed effect; ΔBIC = 83.80 vs. the average earning rate; ΔBIC = 0.95 vs. total juice consumed), and the squared prediction error solely as a fixed effect (ΔBIC = 18.37 vs. inclusion as a random linear effect; ΔBIC = 1.11 vs. the weighted prediction error; ΔBIC = 1.12 vs. the previous outcome). A higher potential outcome tended to be associated with greater reported arousal (beta weight = 7.06, SE = 4.66, LRT = 2.89, p = .09). The squared weighted prediction error had no significant effect on affective arousal (beta weight = 4.96, SE = 4.66, LRT = 1.14, p = .26). The random slope coefficients for trial were overall significantly lower than zero (mean = −0.52, SE = 0.17, p = .004), indicating a decrease in reported arousal as the test session progressed.

In this study, we developed a human judgment bias task that closely mirrored nonhuman animal versions of the task by using primary, rather than secondary, reinforcers. We did this to increase the translatability of the task and to allow more effective use of results from the human task to shed light on judgment bias as a measure of affect in nonhuman animals. To achieve this, we modified a human judgment bias task, which was itself originally designed as a translation of a rodent judgment bias task, by replacing monetary gain with apple juice and monetary loss with salty tea. We then sought to investigate the relationship between prior experience, affect, and decision-making using computational modeling. The task itself was largely successful, with more than 90% of participants performing better than chance in their responses to the reference cues to which they had been trained. Furthermore, the data produced were highly amenable to computational modeling.

Our initial analyses demonstrated that the favorability of the potential outcome of participants' responses modulated self-reported affective valence, with a greater volume of offered juice leading to more positive affective valence. There was also a tendency for a greater volume of offered juice to lead to greater reported affective arousal. This supports the definition of “emotion” (i.e., short-term affect) as a state elicited by (anticipated) rewards or punishers (Mendl & Paul, 2020; Rolls, 2013). It also indicates that participants were sensitive to the fluctuating volume of potential apple juice and were, indeed, perceiving larger volumes of apple juice as more rewarding. This demonstrates that the manipulation of reward experience worked well, having induced the anticipated shifts in affective states. The finding that positive affective valence was related to recently experiencing rewards that were greater than anticipated (positive reward prediction errors) aligns with the results of a number of other studies (Neville et al., 2021; Otto & Eichstaedt, 2018; Rutledge et al., 2014).

Our results provide evidence for a complex relationship between affect and decision-making that could only be teased apart through use of computational modeling. Contrary to our expectations, there was no evidence in this study that an "optimistic" bias was directly associated with more positive reported affect. This conflicts with some (but not all) previous findings from judgment bias studies in humans and with the meta-analytic findings in animal studies that relatively positive affective states are associated with more "optimistic" decision-making (Lagisz et al., 2020; Neville, Nakagawa, et al., 2020). Importantly, the meta-analyses identified a small-to-moderate effect size with high heterogeneity, using manipulations including those designed to induce substantial shifts in affective valence (e.g., enriched vs. barren housing; administration of pharmacological substances; restraint stress). It is also possible that the primary reinforcers used in these animal studies may have been far more salient to their nonhuman subjects than the juice/salty tea was for the humans in this study (e.g., animals were food restricted; animals were maintained on a nonvaried diet). Given this, it is perhaps not surprising that no significant relationship was found between the parameter estimates encompassing "optimism" and affective valence in our nonclinical population of human participants. These findings, first, raise the question of whether judgment bias is sufficiently sensitive to detect more subtle variation in baseline affect and, second, highlight that contextual differences in how humans and animals experience the tasks may present significant hurdles to the development of truly translational and directly comparable tests, even when, as here, such translation is the explicit aim.

We did find that, when looking at the data as a whole, the inverse temperature parameter was associated with reported valence; lower decision stochasticity was associated with more positive reported affect. This result is consistent with findings that depression, a clinically negatively valenced affective state, has been associated with more stochastic decision-making (Harlé, Guo, Zhang, Paulus, & Yu, 2017; Huys, Pizzagalli, Bogdan, & Dayan, 2013). However, this result appeared to be driven by a subset of participants whose data revealed large outlying estimates of the inverse temperature parameter and who tended to report affective valence at the more positive end of the scale. The cause of the outlying values is unclear; speculatively, it could reflect a trait characterized by heightened sensitivity to rewards and punishers and overall more positive affect. It could also reflect that the model fits particularly well to these participants (given that the inverse temperature parameter can also capture model misfit), although the reasons for this are not apparent. Nonetheless, the results of our study imply a relationship between affective valence and the cognitive processes underlying judgment bias. The potential link between decision stochasticity and affect requires further investigation.

We also identified a weak relationship between affective arousal and the cognitive processes underlying judgment bias. In particular, a higher ambiguous stimulus lapse rate tended to be associated with lower arousal, which corroborates results from the monetary version of the task (Neville et al., 2021) and likely reflects that higher arousal may lead to higher engagement with the task and result in better performance. It would therefore be useful to assess whether lapse rates, extracted using computational modeling, could provide a measure of arousal in judgment bias tasks for nonhuman animals.

Our results also highlighted the influence of reward and punisher experience on both affect and decision-making. First, the total juice consumed by the participant was the largest experiential contributor to their decision-making. The value of the juice decreased as a function of increasing juice intake, which is consistent with participants becoming satiated. In accordance with the findings of the monetary version of the task (Neville et al., 2021), we also found decreasing arousal as more trials had been completed, which may indicate that participants found the task to be tiring. Greater consideration should thus be given to the potential for satiation and potential fatigue (and individual differences that lead to variation in how rapidly an individual becomes sated) when conducting nonhuman animal judgment bias tasks.

The degree of satiation indicated in our human participants was arguably extreme; the absolute subjective value (i.e., subjective magnitude) of the juice remained largely below the value of the salty tea after approximately 20 trials. Yet, despite this, and even after accounting for errors that were characterized by a lapse rate, participants continued to make the "optimistic" response more often than would be anticipated; they exhibited an overall "optimistic" bias. The precise nature of this bias is unclear. One possibility is that this bias, particularly at the favorable and near-favorable cues, reflects the intrinsic reward associated with making accurate (correct) responses. Similarly, it may reflect that the feedback provided by the "stay" response was itself rewarding, given that the "leave" response provides no feedback about the correct action. This would be in line with studies that have shown that being correct is in itself rewarding (Satterthwaite et al., 2012; Han, Huettel, Raposo, Adcock, & Dobbins, 2010). An alternative is that the "gamble" of the "stay" response may have elicited a rewarding frisson of excitement, particularly given the repetitive and likely dull nature of the task. Studies have shown that risky decision-making can induce excitement and that this increases under boredom (Kılıç, van Tilburg, & Igou, 2020; Binde, 2013; Mercer & Eastwood, 2010). This further highlights issues in developing truly translational versions of the judgment bias task; nonhuman animals may not experience an intrinsic reward associated with being correct, and judgment bias tasks for nonhuman animals have been suggested to be enriching for them (Krakenberg et al., 2021; Mallien et al., 2016).

The extent to which recent outcomes were better or worse than anticipated was found to determine self-reported affect. This is in agreement with previous studies demonstrating that prediction error is a key determinant of subjectively experienced affective valence in humans (Neville et al., 2021; Otto & Eichstaedt, 2018; Rutledge et al., 2014), at least in relation to short-term rewarding experiences.

As outlined above, several findings were consistent between this task and the monetary version from which it was adapted: A higher lapse rate is associated with lower affective arousal, more positive prediction errors are associated with more positive reported affective valence, and reported affective arousal decreases as a function of number of trials completed. However, there were also a couple of inconsistencies. First, we found no evidence for a relationship between time estimation and affective valence, and second, we found no effect of the average earning rate or previous outcome on judgment bias (Neville et al., 2021). It is unclear whether this might stem from differences in the processing of primary and secondary rewards, or other factors such as satiation having an overriding effect on behavior. This therefore warrants further examination.

In summary, we have developed a novel version of the judgment bias task for humans that uses primary as opposed to secondary reinforcers and, as a result, is more comparable to versions of the task designed for nonhuman animals. We identified a relationship between decision stochasticity and affective valence. Specifically, we generally observed higher decision stochasticity in participants who reported more negatively valenced affect, although only when a cluster of outlying participants was included. We also found that lower reported affective arousal tended to be associated with a greater propensity for errors. The decision-making processes underlying judgment bias on this task are hence linked to both reported affective valence and arousal. In addition, we confirmed some, but not all, previous findings that had been observed in a monetary version of the task, such as the positive association between prediction errors and reported valence. We conclude that our novel version of the judgment bias task provides a means to investigate the relationship between affect and decision-making in greater depth and in a manner that is more comparable to animal versions of the task.

Reprint requests should be sent to Vikki Neville, Bristol Veterinary School, University of Bristol, Langford BS40 5DU, United Kingdom, or via e-mail: [email protected].

Vikki Neville: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Writing—Original draft; Writing—Review & editing. Peter Dayan: Conceptualization; Formal analysis; Methodology; Supervision; Writing—Original draft; Writing—Review & editing. Iain D. Gilchrist: Conceptualization; Methodology; Supervision; Writing—Review & editing. Elizabeth S. Paul: Conceptualization; Methodology; Supervision; Writing—Review & editing. Michael Mendl: Conceptualization; Methodology; Supervision; Writing—Original draft; Writing—Review & editing.

Vikki Neville, Biotechnology and Biological Sciences Research Council (https://dx.doi.org/10.13039/501100000268), grant number: BB/M009122/1. Peter Dayan, Alexander von Humboldt-Stiftung (https://dx.doi.org/10.13039/100005156). Peter Dayan, Max-Planck-Gesellschaft (https://dx.doi.org/10.13039/501100004189). All authors, Biotechnology and Biological Sciences Research Council (https://dx.doi.org/10.13039/501100000268), grant number: BB/T002654/1.

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: American Psychiatric Association.
Anderson, M. H., Hardcastle, C., Munafò, M. R., & Robinson, E. S. J. (2012). Evaluation of a novel translational task for assessing emotional biases in different species. Cognitive, Affective & Behavioral Neuroscience, 12, 373–381.
Anderson, M. H., Munafò, M. R., & Robinson, E. S. J. (2013). Investigating the psychopharmacology of cognitive affective bias in rats using an affective tone discrimination task. Psychopharmacology, 226, 601–613.
Aylward, J., Hales, C., Robinson, E., & Robinson, O. J. (2020). Translating a rodent measure of negative bias into humans: The impact of induced anxiety and unmedicated mood and anxiety disorders. Psychological Medicine, 50, 237–246.
Bach, D. R., & Dayan, P. (2017). Algorithms for survival: A comparative perspective on emotions. Nature Reviews Neuroscience, 18, 311–319.
Beck, S. M., Locke, H. S., Savine, A. C., Jimura, K., & Braver, T. S. (2010). Primary and secondary rewards differentially modulate neural activity dynamics during working memory. PLoS One, 5, e9251.
Binde, P. (2013). Why people gamble: A model with five motivational dimensions. International Gambling Studies, 13, 81–97.
Brainard, D. H. (1997). The Psychophysics toolbox. Spatial Vision, 10, 433–436.
Daniel-Watanabe, L., McLaughlin, M., Gormley, S., & Robinson, O. J. (in press). Association between a directly translated cognitive measure of negative bias and self-reported psychiatric symptoms. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1–9.
Delgado, M. R., Jou, R. L., & Phelps, E. A. (2011). Neural systems underlying aversive conditioning in humans with primary and secondary reinforcers. Frontiers in Neuroscience, 5, 71.
Enkel, T., Gholizadeh, D., von Bohlen und Halbach, O., Sanchis-Segura, C., Hurlemann, R., Spanagel, R., et al. (2010). Ambiguous-cue interpretation is biased under stress- and depression-like states in rats. Neuropsychopharmacology, 35, 1008–1015.
Grimm, J. W., & See, R. E. (2000). Dissociation of primary and secondary reward-relevant limbic nuclei in an animal model of relapse. Neuropsychopharmacology, 22, 473–479.
Guitart-Masip, M., Beierholm, U. R., Dolan, R., Duzel, E., & Dayan, P. (2011). Vigor in the face of fluctuating rates of reward: An experimental examination. Journal of Cognitive Neuroscience, 23, 3933–3938.
Hales, C. A., Robinson, E. S. J., & Houghton, C. J. (2016). Diffusion modelling reveals the decision making processes underlying negative judgement bias in rats. PLoS One, 11, e0152592.
Han, S., Huettel, S. A., Raposo, A., Adcock, R. A., & Dobbins, I. G. (2010). Functional significance of striatal responses during episodic decisions: Recovery or goal attainment? Journal of Neuroscience, 30, 4767–4775.
Harding, E. J., Paul, E. S., & Mendl, M. (2004). Animal behaviour: Cognitive bias and affective state. Nature, 427, 312.
Harlé, K. M., Guo, D., Zhang, S., Paulus, M. P., & Yu, A. J. (2017). Anhedonia and anxiety underlying depressive symptomatology have distinct effects on reward-based decision-making. PLoS One, 12, e0186473.
Hsee, C. K., & Rottenstreich, Y. (2004). Music, pandas, and muggers: On the affective psychology of value. Journal of Experimental Psychology: General, 133, 23–30.
Huys, Q. J., Pizzagalli, D. A., Bogdan, R., & Dayan, P. (2013). Mapping anhedonia onto reinforcement learning: A behavioural meta-analysis. Biology of Mood & Anxiety Disorders, 3, 12.
Iigaya, K., Jolivald, A., Jitkrittum, W., Gilchrist, I. D., Dayan, P., Paul, E., et al. (2016). Cognitive bias in ambiguity judgements: Using computational models to dissect the effects of mild mood manipulation in humans. PLoS One, 11, e0165840.
Jones, S., Neville, V., Higgs, L., Paul, E. S., Dayan, P., Robinson, E. S. J., et al. (2018). Assessing animal affect: An automated and self-initiated judgement bias task based on natural investigative behaviour. Scientific Reports, 8, 12400.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–292.
Kılıç, A., van Tilburg, W. A. P., & Igou, E. R. (2020). Risk-taking increases under boredom. Journal of Behavioral Decision Making, 33, 257–269.
Killgore, W. D. S. (1998). The Affect Grid: A moderately valid, nonspecific measure of pleasure and arousal. Psychological Reports, 83, 639–642.
Kim, H., Shimojo, S., & O'Doherty, J. P. (2011). Overlapping responses for the expectation of juice and money rewards in human ventromedial prefrontal cortex. Cerebral Cortex, 21, 769–776.
Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What's new in Psychtoolbox-3? Perception, 36, 1–16.
Krakenberg, V., Wewer, M., Palme, R., Kaiser, S., Sachser, N., & Richter, S. H. (2021). Regular touchscreen training affects faecal corticosterone metabolites and anxiety-like behaviour in mice. Behavioural Brain Research, 401, 113080.
Lagisz, M., Zidar, J., Nakagawa, S., Neville, V., Sorato, E., Paul, E. S., et al. (2020). Optimism, pessimism and judgement bias in animals: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 118, 3–17.
MacLeod, C., Mathews, A., & Tata, P. (1986). Attentional bias in emotional disorders. Journal of Abnormal Psychology, 95, 15–20.
Mallien, A. S., Palme, R., Richetto, J., Muzzillo, C., Richter, S. H., Vogt, M. A., et al. (2016). Daily exposure to a touchscreen-paradigm and associated food restriction evokes an increase in adrenocortical and neural activity in mice. Hormones and Behavior, 81, 97–105.
Marshall, J. A. R., Trimmer, P. C., Houston, A. I., & McNamara, J. M. (2013). On evolutionary explanations of cognitive biases. Trends in Ecology & Evolution, 28, 469–473.
Mendl, M., Burman, O. H. P., Parker, R. M. A., & Paul, E. S. (2009). Cognitive bias as an indicator of animal emotion and welfare: Emerging evidence and underlying mechanisms. Applied Animal Behaviour Science, 118, 161–181.
Mendl, M., Burman, O. H. P., & Paul, E. S. (2010). An integrative and functional framework for the study of animal emotion and mood. Proceedings of the Royal Society of London, Series B, Biological Sciences, 277, 2895–2904.
Mendl, M., & Paul, E. S. (2020). Animal affect and decision-making. Neuroscience and Biobehavioral Reviews, 112, 144–163.
Mercer, K. B., & Eastwood, J. D. (2010). Is boredom associated with problem gambling behaviour? It depends on what you mean by 'boredom.' International Gambling Studies, 10, 91–104.
Metereau, E., & Dreher, J.-C. (2013). Cerebral correlates of salient prediction error for different rewards and punishments. Cerebral Cortex, 23, 477–487.
Nettle, D., & Bateson, M. (2012). The evolutionary origins of mood and its disorders. Current Biology, 22, R712–R721.
Neville, V., Dayan, P., Gilchrist, I. D., Paul, E. S., & Mendl, M. (2021). Dissecting the links between reward and loss, decision-making, and self-reported affect using a computational approach. PLoS Computational Biology, 17, e1008555.
Neville, V., King, J., Gilchrist, I. D., Dayan, P., Paul, E. S., & Mendl, M. (2020). Reward and punisher experience alter rodent decision-making in a judgement bias task. Scientific Reports, 10, 11839.
Neville, V., Nakagawa, S., Zidar, J., Paul, E. S., Lagisz, M., Bateson, M., et al. (2020). Pharmacological manipulations of judgement bias: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 108, 269–286.
Otto, A. R., & Eichstaedt, J. C. (2018). Real-world unexpected outcomes predict city-level mood states and risk-taking behavior. PLoS One, 13, e0206923.
Papciak, J., Popik, P., Fuchs, E., & Rygula, R. (2013). Chronic psychosocial stress makes rats more 'pessimistic' in the ambiguous-cue interpretation paradigm. Behavioural Brain Research, 256, 305–310.
Paul, E. S., Cuthill, I., Kuroso, G., Norton, V., Woodgate, J., & Mendl, M. (2010). Mood and the speed of decisions about anticipated resources and hazards. Evolution and Human Behavior, 32, 21–28.
Pauli, W. M., Larsen, T., Collette, S., Tyszka, J. M., Seymour, B., & O'Doherty, J. P. (2015). Distinct contributions of ventromedial and dorsolateral subregions of the human substantia nigra to appetitive and aversive learning. Journal of Neuroscience, 35, 14220–14233.
Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D., & R Core Team. (2017). nlme: Linear and nonlinear mixed effects models (R package version 3.1–131).
R Core Team. (2015). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
Rolls, E. T. (2013). What are emotional states, and why do we have them? Emotion Review, 5, 241–247.
Rupniak, N. M. J. (2003). Animal models of depression: Challenges from a drug development perspective. Behavioural Pharmacology, 14, 385–390.
Rutledge, R. B., Skandali, N., Dayan, P., & Dolan, R. J. (2014). A computational and neural model of momentary subjective well-being. Proceedings of the National Academy of Sciences, U.S.A., 111, 12252–12257.
Satterthwaite, T. D., Ruparel, K., Loughead, J., Elliott, M. A., Gerraty, R. T., Calkins, M. E., et al. (2012). Being right is its own reward: Load and performance related ventral striatum activation to correct responses during a working memory task in youth. Neuroimage, 61, 723–729.
Sescousse, G., Caldú, X., Segura, B., & Dreher, J.-C. (2013). Processing of primary and secondary rewards: A quantitative meta-analysis and review of human functional neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37, 681–696.
Van der Harst, J. E., & Spruijt, B. M. (2007). Tools to measure and improve animal welfare: Reward-related behaviour. Animal Welfare, 16(Suppl. 1), 67–73.
Williams, J. M. G., Mathews, A., & MacLeod, C. (1996). The emotional Stroop task and psychopathology. Psychological Bulletin, 120, 3–24.
Wright, W. F., & Bower, G. H. (1992). Mood effects on subjective probability assessment. Organizational Behavior and Human Decision Processes, 52, 276–291.

Author notes

* These authors contributed equally.