Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.

Animals, humans, and reinforcement learning agents alike may safely assume that the world is stochastic and stationary during some intervals of time, interrupted by change points. The position of leaves on a tree, a stock market index, or the time it takes to travel from A to B in a crowded city is often well captured by stationary stochastic processes for extended periods of time. Then sudden changes may happen, such that the distribution of leaf positions becomes different due to a storm, the stock market index is affected by the enforcement of a new law, or a blocked road causes additional traffic jams. The violation of an agent's expectation caused by such sudden changes is perceived by the agent as surprise, which can be seen as a measure of how much the agent's current belief differs from reality.

Surprise, with its physiological manifestations in pupil dilation (Nassar et al., 2012; Preuschoff, 't Hart, & Einhäuser, 2011) and EEG signals (Mars et al., 2008; Modirshanechi, Kiani, & Aghajan, 2019; Ostwald et al., 2012), is believed to modulate learning, potentially through the release of specific neurotransmitters (Gerstner, Lehmann, Liakoni, Corneil, & Brea, 2018; Yu & Dayan, 2005) so as to allow animals and humans to adapt quickly to sudden changes. The quick adaptation to novel situations has been demonstrated in a variety of learning experiments (Behrens, Woolrich, Walton, & Rushworth, 2007; Glaze, Kable, & Gold, 2015; Heilbron & Meyniel, 2019; Nassar et al., 2012; Nassar, Wilson, Heasly, & Gold, 2010; Yu & Dayan, 2005). The bulk of computational work on surprise-based learning can be separated into two groups. Studies in the field of computational neuroscience have focused on biological plausibility with little emphasis on the accuracy of learning (Behrens et al., 2007; Bogacz, 2017; Faraji, Preuschoff, & Gerstner, 2018; Friston, 2010; Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2017; Nassar et al., 2012; Nassar, Wilson, Heasly, & Gold, 2010; Ryali, Reddy, & Yu, 2018; Schwartenbeck, FitzGerald, Dolan, & Friston, 2013; Yu & Dayan, 2005), whereas exact and approximate Bayesian online methods (Adams & MacKay, 2007; Fearnhead & Liu, 2007) for change point detection and parameter estimation have been developed without any focus on biological plausibility (Aminikhanghahi & Cook, 2017; Cummings, Krehbiel, Mei, Tuo, & Zhang, 2018; Lin, Sharpnack, Rinaldo, & Tibshirani, 2017; Masegosa et al., 2017; Wilson, Nassar, & Gold, 2010).

In this work, we take a top-down approach to surprise-based learning. We start with a generative model of change points similar to the one that has been the starting point of multiple experiments (Behrens et al., 2007; Findling, Chopin, & Koechlin, 2019; Glaze et al., 2015; Heilbron & Meyniel, 2019; Nassar et al., 2012, 2010; Yu & Dayan, 2005). We demonstrate that Bayesian inference on such a generative model can be interpreted as modulation of learning by surprise; we show that this modulation leads to a natural definition of surprise that is different from but closely related to the Shannon Surprise (Shannon, 1948). Moreover, we derive three novel approximate online algorithms with update rules that inherit the surprise-modulated adaptation rate of exact Bayesian inference. The overall goal of this article is to give a Bayesian interpretation for surprise-based learning in the brain and to find approximate methods that are computationally efficient and biologically plausible while maintaining a high level of learning accuracy. As a by-product, our approach provides theoretical insights on commonalities and differences among existing surprise-based and approximate Bayesian approaches. Importantly, our approach makes specific experimental predictions.

In section 2, we introduce the generative model and we present our surprise-based interpretation of Bayesian inference and our three approximate algorithms. Next, we use simulations to compare our algorithms with existing ones on two different tasks inspired by and closely related to real experiments (Behrens et al., 2007; Mars et al., 2008; Nassar et al., 2012, 2010; Ostwald et al., 2012). At the end of section 2, we formalize two experimentally testable predictions of our theory and illustrate them with simulations. A brief review of related studies as well as a few directions for further work are supplied in section 3.

In order to study learning in an environment that exhibits occasional and abrupt changes, we consider a hierarchical generative model (see Figure 1A) in discrete time, similar to existing model environments (Behrens et al., 2007; Nassar et al., 2012, 2010; Yu & Dayan, 2005). At each time point t, the observation Yt=y comes from a distribution with the time-invariant likelihood PY(y|θ) parameterized by Θt=θ, where both y and θ can be multidimensional. In general, we indicate random variables by capital letters and values by lowercase letters. Whenever there is no risk of ambiguity, we drop the explicit notation of random variables to simplify notation. Abrupt changes of the environment correspond to sudden changes of the parameter θt. At every time t, there is a change probability pc ∈ (0,1) for the parameter θt to be drawn from its prior distribution π(0) independently of its previous value, and a probability 1-pc to stay the same as θt-1. A change at time t is specified by the event Ct=1; otherwise, Ct=0. Therefore, the generative model can be formally defined, for any T ≥ 1, as a joint probability distribution over Θ1:T ≡ (Θ1, …, ΘT), C1:T, and Y1:T as
$$P(\Theta_{1:T}=\theta_{1:T}, C_{1:T}=c_{1:T}, Y_{1:T}=y_{1:T}) = P(\theta_1)\,P(c_1)\,P_Y(y_1\mid\theta_1)\prod_{t=2}^{T} P(c_t)\,P(\theta_t\mid\theta_{t-1},c_t)\,P_Y(y_t\mid\theta_t), \tag{2.1}$$
where P(θ1)=π(0)(θ1), P(c1)=δ(c1-1), and
$$P(c_t) = p_c^{\,c_t}\,(1-p_c)^{1-c_t}, \tag{2.2}$$
$$P(\theta_t\mid\theta_{t-1},c_t) = (1-c_t)\,\delta(\theta_t-\theta_{t-1}) + c_t\,\pi^{(0)}(\theta_t), \tag{2.3}$$
$$P(y_t\mid\theta_t) = P_Y(y_t\mid\theta_t). \tag{2.4}$$
P stands for either probability density function (for the continuous variables) or probability mass function (for the discrete variables), and δ is the Dirac or Kronecker delta distribution, respectively.
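To make the generative model concrete, the following is a minimal sketch of sampling from equations 2.1 to 2.4, assuming the gaussian case used later in section 2.3 (prior π(0)=N(0,1) and likelihood PY(y|θ)=N(θ,σ2)); the function name and default parameter values are illustrative choices, not part of the model definition.

```python
import numpy as np

def sample_environment(T=1000, p_c=0.01, sigma=1.0, seed=0):
    """Sample theta_{1:T}, c_{1:T}, y_{1:T} from equations 2.1 to 2.4,
    assuming pi^(0) = N(0, 1) and P_Y(y | theta) = N(theta, sigma^2)."""
    rng = np.random.default_rng(seed)
    thetas = np.empty(T)
    cs = np.empty(T, dtype=int)
    ys = np.empty(T)
    for t in range(T):
        # C_1 = 1 by definition; afterward a change occurs with probability p_c (eq. 2.2).
        cs[t] = 1 if t == 0 else int(rng.random() < p_c)
        # Equation 2.3: redraw theta from the prior on a change, otherwise keep it.
        thetas[t] = rng.standard_normal() if cs[t] == 1 else thetas[t - 1]
        # Equation 2.4: the observation depends only on the current parameter.
        ys[t] = rng.normal(thetas[t], sigma)
    return thetas, cs, ys
```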
Figure 1:

Nonstationary environment. (A) The generative model. At each time point t there is a probability pc ∈ (0,1) for a change in the environment. When there is a change in the environment, that is, Ct=1, the parameter Θt is drawn from its prior distribution π(0), independent of its previous value. Otherwise Θt retains its value from the previous time step t-1. Given a parameter value Θt=θ, the observation Yt=y is drawn from a probability distribution PY(y|θ). We indicate random variables by capital letters, and values by lowercase letters. (B) Example of a nonstationary environment. Your friend meets you every day at the coffee shop (blue dot), starting after work from her office (orange dot) and crossing a river. The time of arrival of your friend is the observed variable Yt, which due to the traffic or your friend's workload may exhibit some variability but has a stable expectation (θ). If, however, a new bridge is opened (Ct=1, where t is the moment of change), your friend no longer needs to take a detour. There is, then, a sudden change in her observed daily arrival times.


Given a sequence of observations y1:t, the agent's belief π(t)(θ) about the parameter Θt at time t is defined as the posterior probability distribution P(Θt=θ|y1:t). In the online learning setting studied here, the agent's goal is to update the belief π(t)(θ) to the new belief π(t+1)(θ), or an approximation thereof, upon observing yt+1.

A simplified real-world example of such an environment is illustrated in Figure 1B. Imagine that every day, a friend meets you at the coffee shop, starting after work from her office (see Figure 1B, left). To do so, she needs to cross a river via a bridge. The time of arrival of your friend (yt) exhibits some variability due to various sources of stochasticity (e.g., traffic and your friend's daily workload), but it has a stable average over time (θt). However, if a new bridge is opened, your friend arrives earlier, since she no longer has to take a detour (see Figure 1B, right). The moment of opening the new bridge is indicated by ct+1=1 in our framework and the sudden change in the average arrival time of your friend by a sudden change from θt to θt+1. Even without any explicit discussion with your friend about this situation and only by observing her actual arrival time, you can notice the abrupt change and hence adapt your schedule to the new situation.

2.1  Online Bayesian Inference Modulated by Surprise

According to the definition of the hierarchical generative model (see Figure 1A and equations 2.1 to 2.4), the value yt+1 of the observation at time t+1 depends only on the parameters θt+1, and is (given θt+1) independent of earlier observations and earlier parameter values. We exploit this Markovian property and update, using Bayes' rule, the belief π(t)(θ)P(Θt=θ|y1:t) at time t to the new belief at time t+1:
$$\pi^{(t+1)}(\theta) = P(\Theta_{t+1}=\theta\mid y_{1:t+1}) = \frac{P_Y(y_{t+1}\mid\theta)\,P(\Theta_{t+1}=\theta\mid y_{1:t})}{P(y_{t+1}\mid y_{1:t})}. \tag{2.5}$$
So far, equation 2.5 remains rather abstract. The aim of this section is to rewrite it in the form of a surprise-modulated recursive update. The first term in the numerator of equation 2.5 is the likelihood of the current observation given the parameter Θt+1=θ, and the second term is the agent's estimated probability distribution of Θt+1 before observing yt+1. Because there is always the possibility of an abrupt change, the second term is not the agent's previous belief π(t) but P(Θt+1=θ|y1:t)=(1-pc)π(t)(θ)+pcπ(0)(θ). As a result, it is possible to find a recursive formula for updating the belief. For the derivation of this recursive rule, we define the following terms.
Definition 1.
The probability or density (for discrete and continuous variables respectively) of observing y with a belief π(t') is denoted as
$$P(y;\pi^{(t')}) \triangleq \int P_Y(y\mid\theta)\,\pi^{(t')}(\theta)\,d\theta. \tag{2.6}$$

Note that if π is the exact Bayesian belief defined as above in equation 2.5, then P(y;π(t'))=P(Yt'+1=y|y1:t',ct'+1=0). In section 2.2 we will also use P(y;π^(t')) for an arbitrary π^(t'). Two particularly interesting cases of equation 2.6 are P(yt+1;π(t)), the probability of a new observation yt+1 with the current belief π(t), and P(yt+1;π(0)), the probability of a new observation yt+1 with the prior belief π(0).

Definition 2.
The Bayes Factor Surprise SBF of the observation yt+1 is defined as the ratio of the probability of observing yt+1 given ct+1=1 (i.e., when there is a change), to the probability of observing yt+1 given ct+1=0 (i.e., when there is no change), that is,
$$S_{\mathrm{BF}}(y_{t+1};\pi^{(t)}) \triangleq \frac{P(y_{t+1}\mid c_{t+1}=1, y_{1:t})}{P(y_{t+1}\mid c_{t+1}=0, y_{1:t})} = \frac{P(y_{t+1};\pi^{(0)})}{P(y_{t+1};\pi^{(t)})}. \tag{2.7}$$

This definition of surprise measures how much more probable the current observation is under the naive prior π(0) relative to the current belief π(t) (see section 3 for further interpretation). This probability ratio is the Bayes factor (Efron & Hastie, 2016; Kass & Raftery, 1995) that tests the prior belief π(0) against the current belief π(t). We emphasize that our definition of surprise is not arbitrary, but essential in order to write the exact inference in equation 2.5 on the generative model in the compact recursive form indicated in the proposition that follows. Moreover, as we show later, this term can be identified in multiple learning algorithms (among them, Nassar et al., 2010, 2012), but it has never been interpreted as a surprise measure. In the following sections we establish the generality of this computational mechanism and identify it as a common feature of many learning algorithms.

Definition 3.
Under the assumption of no change ct+1=0, and using the most recent belief π(t) as prior, the exact Bayesian update for π(t+1) is denoted as
$$\pi_B^{(t+1)}(\theta) \triangleq \frac{P_Y(y_{t+1}\mid\theta)\,\pi^{(t)}(\theta)}{P(y_{t+1};\pi^{(t)})}. \tag{2.8}$$

πB(t+1)(θ) describes the incorporation of the new information into the current belief via Bayesian updating.

Definition 4.
The surprise-modulated adaptation rate is a function γ: R+ × R+ → [0,1] specified as
$$\gamma(S,m) \triangleq \frac{mS}{1+mS}, \tag{2.9}$$
where S ≥ 0 is a surprise value and m ≥ 0 is a parameter controlling the effect of surprise on learning.

Using the above definitions and equation 2.5, we have for the generative model of Figure 1A and equations 2.1 to 2.4 the following proposition:

Proposition.
Exact Bayesian inference on the generative model is equivalent to the recursive update rule,
$$\pi^{(t+1)}(\theta) = (1-\gamma_{t+1})\,\pi_B^{(t+1)}(\theta) + \gamma_{t+1}\,P(\theta\mid y_{t+1}), \qquad \gamma_{t+1} = \gamma\!\left(S_{\mathrm{BF}}^{(t+1)}, \frac{p_c}{1-p_c}\right), \tag{2.10}$$
where SBF(t+1)=SBF(yt+1;π(t)) is the Bayes Factor Surprise, and
$$P(\theta\mid y_{t+1}) = \frac{P_Y(y_{t+1}\mid\theta)\,\pi^{(0)}(\theta)}{P(y_{t+1};\pi^{(0)})} \tag{2.11}$$
is the posterior if we take yt+1 as the only observation.
The proposition indicates that the exact Bayesian inference on the generative model discussed above (see Figure 1) leads to an explicit trade-off between (1) integrating a new observation ynew (corresponding to yt+1) with the old belief πold (corresponding to π(t)) into a distribution πintegration (corresponding to πB(t+1)) and (2) forgetting the past observations, so as to restart with the belief πreset (corresponding to P(θ|yt+1)) which relies only on the new observation and the prior π(0):
$$\pi_{\mathrm{new}}(\theta) = (1-\gamma)\,\pi_{\mathrm{integration}}(\theta) + \gamma\,\pi_{\mathrm{reset}}(\theta). \tag{2.12}$$
This trade-off is governed by a surprise-modulated adaptation rate γ(S,m) ∈ [0,1], where S = SBF ≥ 0 (corresponding to the Bayes Factor Surprise) can be interpreted as the surprise of the most recent observation, and m = pc/(1-pc) ≥ 0 is a parameter controlling the effect of surprise on learning. Because the parameter of modulation m is equal to pc/(1-pc), for a fixed value of surprise S, the adaptation rate γ is an increasing function of pc. Therefore, in more volatile environments, the same value of surprise S leads to a higher adaptation rate than in a less volatile environment; in the limit pc → 1, any surprise value leads to full forgetting, γ = 1.

As a conclusion, our first main result is that a split as in equation 2.12 with a weighting factor (“adaptation rate” γ) as in equation 2.9 is exact and always possible for the class of environments defined by our hierarchical generative model. This surprise modulation gives rise to specific testable experimental predictions discussed later.
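For the gaussian example above, the marginals in equation 2.7 have closed forms, so the Bayes Factor Surprise and the adaptation rate of equation 2.9 can be computed directly. The following is a minimal sketch of definitions 1, 2, and 4, assuming gaussian beliefs N(μ, s2) and a known observation variance σ2; it illustrates the quantities themselves, not a full inference algorithm.

```python
import numpy as np

def marginal(y, mu, s2, sigma2):
    """P(y; pi) of equation 2.6 for a gaussian belief pi = N(mu, s2) and
    likelihood N(y; theta, sigma2): the marginal is N(y; mu, s2 + sigma2)."""
    var = s2 + sigma2
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def bayes_factor_surprise(y, mu_t, s2_t, mu_0=0.0, s2_0=1.0, sigma2=1.0):
    """S_BF of equation 2.7: prior predictive over current predictive."""
    return marginal(y, mu_0, s2_0, sigma2) / marginal(y, mu_t, s2_t, sigma2)

def gamma(S, m):
    """Surprise-modulated adaptation rate of equation 2.9."""
    return m * S / (1.0 + m * S)
```

With m = pc/(1-pc), an observation that is much better explained by the prior than by the current belief (SBF ≫ 1) drives γ toward 1 and hence toward a reset, in line with equation 2.12.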

2.2  Approximate Algorithms Modulated by Surprise

Despite the simplicity of the recursive formula in equation 2.10, the updated belief π(t+1) is generally not in the same family of distributions as the previous belief π(t); for example, the weighted average of two normal densities is not a normal density. Hence it is in general impossible to find a simple and exact update rule for, say, some sufficient statistic. As a consequence, the memory demands for π(t+1) scale linearly in time, and updating π(t+1) using π(t) needs O(t) operations. In the following sections, we investigate three approximations, algorithms 1 to 3, that have simple update rules and finite memory demands, so that the updated belief remains tractable over a long sequence of observations.

As our second main result, we show that all our three approximate algorithms inherit the surprise-modulated adaptation rate from the exact Bayesian approach, equations 2.9 and 2.12. The first algorithm adapts an earlier algorithm of surprise minimization learning (SMiLe, Faraji et al., 2018) to variational learning. We refer to our novel algorithm as Variational SMiLe and abbreviate it as VarSMiLe (see algorithm 1). The second algorithm is based on message passing (Adams & MacKay, 2007) restricted to a finite number of messages N. We refer to this algorithm as MPN (see algorithm 2). The third algorithm uses the ideas of particle filtering (Gordon, Salmond, & Smith, 1993) for an efficient approximation for our hierarchical generative model. We refer to our approximate algorithm as Particle Filtering with N particles and abbreviate it by pfN (see algorithm 3). All algorithms are computationally efficient, have finite memory demands, and are biologically plausible; Particle Filtering has possible neuronal implementations (Huang & Rao, 2014; Kutschireiter, Surace, Sprekeler, & Pfister, 2017; Legenstein & Maass, 2014; Shi & Griffiths, 2009), MPN can be seen as a greedy version of pfN without sampling, and Variational SMiLe may be implemented by neo-Hebbian (Gerstner et al., 2018; Lisman, Grace, & Duzel, 2011) update rules. Simulation results show that the performance of the three approximate algorithms is comparable to and more robust across environments than other state-of-the-art approximations.

2.2.1  Variational SMiLe Rule: Algorithm 1

A simple heuristic approximation to keep the updated belief in the same family as the previous beliefs consists in applying the weighted averaging of the exact Bayesian update rule (see equation 2.10) to the logarithm of the beliefs rather than the beliefs themselves:
$$\log \hat{\pi}^{(t+1)}(\theta) = (1-\gamma_{t+1})\log \hat{\pi}_B^{(t+1)}(\theta) + \gamma_{t+1}\log P(\theta\mid y_{t+1}) + \mathrm{const.}, \tag{2.13}$$

where γt+1=γ(SBF(yt+1;π^(t)), m) is given by equation 2.9 with a free parameter m>0, which can be tuned to each environment. By doing so, we keep the explicit trade-off between two terms as in equation 2.10, now applied to the logarithms of the beliefs. An advantageous consequence of averaging over logarithms is that if the likelihood function PY is in the exponential family and the initial belief π(0) is its conjugate prior, then π^(t+1) and π(0) are members of the same family. In this particular case, we arrive at a simple update rule for the parameters of π^(t+1) (see algorithm 1 for pseudocode and section 4 for details). As is common in variational approaches (Beal, 2003), the price of this simplicity is that, except for the trivial cases of pc=0 and pc=1, there is no evidence other than simulations that the update rule of equation 2.13 will end up at an approximate belief close to the exact Bayesian belief.
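For the gaussian case, the log-space average of equation 2.13 amounts to a convex combination of natural parameters (precision and precision-weighted mean). A minimal sketch of one VarSMiLe step under this assumption, reusing bayes_factor_surprise and gamma from the sketch at the end of section 2.1, could look as follows; the exact pseudocode is given in algorithm 1.

```python
def var_smile_step(y, mu_t, s2_t, m, mu_0=0.0, s2_0=1.0, sigma2=1.0):
    """One update of equation 2.13 for gaussian beliefs: combine the natural
    parameters of the Bayesian update pi_B (eq. 2.8) and of the reset
    posterior P(theta | y) (eq. 2.11) with weights (1 - gamma) and gamma."""
    S = bayes_factor_surprise(y, mu_t, s2_t, mu_0, s2_0, sigma2)
    g = gamma(S, m)
    # Natural parameters (precision, precision * mean) of the two candidates.
    lam_B = 1.0 / s2_t + 1.0 / sigma2      # precision of pi_B
    eta_B = mu_t / s2_t + y / sigma2       # precision-weighted mean of pi_B
    lam_r = 1.0 / s2_0 + 1.0 / sigma2      # precision of the reset posterior
    eta_r = mu_0 / s2_0 + y / sigma2       # its precision-weighted mean
    lam = (1 - g) * lam_B + g * lam_r      # log-space convex combination
    eta = (1 - g) * eta_B + g * eta_r
    return eta / lam, 1.0 / lam            # new belief (mean, variance)
```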

One way to interpret the update rule of equation 2.13 is to rewrite it as the solution of a constraint optimization problem. The new belief π^(t+1) is a variational approximation of the Bayesian update π^B(t+1) (see section 4)
$$\hat{\pi}^{(t+1)} = \arg\min_{q}\; D_{\mathrm{KL}}\big[q(\theta)\,\|\,\hat{\pi}_B^{(t+1)}(\theta)\big], \tag{2.14}$$
with a family of functions q(θ) constrained by the Kullback-Leibler divergence,
$$D_{\mathrm{KL}}\big[q(\theta)\,\|\,P(\theta\mid y_{t+1})\big] \le B_{t+1}, \tag{2.15}$$

where the bound Bt+1 ∈ [0, DKL[π^B(t+1)(θ)||P(θ|yt+1)]] is a decreasing function of the Bayes Factor Surprise SBF(yt+1;π^(t)) (see section 4 for the proof), and P(θ|yt+1) is given by equation 2.11.

Because of the similarity of the constraint optimization problem in equations 2.14 and 2.15 to the Surprise Minimization Learning rule (SMiLe; Faraji et al., 2018), we call this algorithm the Variational Surprise Minimization Learning rule, or Variational SMiLe for short. The differences between SMiLe and Variational SMiLe are discussed in section 4.

Our variational method, and particularly its surprise-modulated adaptation rate, is complementary to earlier studies (Masegosa et al., 2017; Özkan, Šmídl, Saha, Lundquist, & Gustafsson, 2013) in machine learning that assumed different generative models and used additional assumptions and different approaches for deriving the learning rule.

2.2.2  Message-Passing N: Algorithm 2

For a hierarchical generative model similar to ours, a message-passing algorithm has been used to perform exact Bayesian inference (Adams & MacKay, 2007), where the algorithm's memory demands and computational complexity scale linearly in time t. In this section, we first explain the idea of the message-passing algorithm of Adams and MacKay (2007) and its relation to our proposition. We then present our approximate version of this algorithm, which has a constant (in time) computational complexity and memory demands.

The history of change points up to time t is a binary sequence, for example, c1:t={1,0,0,1,0,1,1}, where the value 1 indicates a change in the corresponding time step. Following the idea of Adams and MacKay (2007), we define the random variable Rt = min{n ∈ N : Ct-n+1 = 1} in order to describe the time since the last change point, which takes values between 1 and t. We can write the exact Bayesian expression for π(t)(θ) by marginalizing P(Θt=θ,rt|y1:t) over the t possible values of rt in the following way:
$$\pi^{(t)}(\theta) = \sum_{r_t=1}^{t} P(\Theta_t=\theta\mid r_t, y_{1:t})\,P(r_t\mid y_{1:t}). \tag{2.16}$$
For consistency with algorithm 3 (i.e., Particle Filtering), we call each term in the sum of equation 2.16 a “particle” and denote as πk(t)(θ)=P(Θt=θ|Rt=t-k,y1:t) the belief of the particle corresponding to Rt=t-k, and wt(k)=P(Rt=t-k|y1:t) its corresponding weight at time t:
$$\pi^{(t)}(\theta) = \sum_{k=0}^{t-1} w_t^{(k)}\,\pi_k^{(t)}(\theta). \tag{2.17}$$
For each particle, the term πk(t)(θ) is simple to compute, because when rt is known, inference depends only on the observations after the last change point. Therefore, the goal of online inference is to find an update rule for the evolution of the weights wt(k) over time.
We can apply the exact update rule of our proposition (see equation 2.10) to the belief expressed in the form of equation 2.17. Upon each observation of a new sample yt+1, a new particle is generated and added to the set of particles, corresponding to P(θ|yt+1) (that is, πreset), modeling the possibility of a change point occurring at t+1. According to the proposition, the weight of the new particle (k=t) is equal to
$$w_{t+1}^{(t)} = \gamma_{t+1}, \tag{2.18}$$
where γt+1=γ(SBF(yt+1;π(t)), pc/(1-pc)) (cf. equation 2.9). The other t particles coming from π(t) correspond to πB(t+1) (that is, πintegration) in the proposition. The update rule (see section 4 for the derivation) for the weights of these particles (0 ≤ k ≤ t-1) is
$$w_{t+1}^{(k)} = (1-\gamma_{t+1})\,\frac{P(y_{t+1};\pi_k^{(t)})}{P(y_{t+1};\pi^{(t)})}\,w_t^{(k)}. \tag{2.19}$$
So far, we used the idea of Adams and MacKay (2007) to write the belief as in equation 2.17 and used our proposition to arrive at the surprise-modulated update rules in equations 2.18 and 2.19.

The computational complexity and memory requirements of the complete message-passing algorithm increase linearly with time t. To deal with this issue and to have constant computational and memory demands over time, we implemented a message-passing algorithm of the form of equations 2.17 to 2.19, but with a fixed number N of particles, chosen as those with the highest weights wt(k). Therefore, our second algorithm adds a new approximation step to the full message-passing algorithm of Adams and MacKay (2007): Whenever t>N, after adding the new particle with the weight as in equation 2.18 and updating the previous weights as in equation 2.19, we discard the particle with the smallest weight (i.e., set its weight equal to 0) and renormalize the weights. By doing so, we always keep the number of particles with nonzero weights equal to N. Note that for t ≤ N, our algorithm is exact and identical to the message-passing algorithm of Adams and MacKay (2007). We call our modification of the message-passing algorithm of Adams and MacKay (2007) “Message Passing N (MPN).”
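A minimal sketch of one MPN step for the gaussian case, reusing marginal and gamma from the sketch at the end of section 2.1, is given below; each particle carries a gaussian belief (mean and variance), the new particle receives weight γt+1 as in equation 2.18, and the old weights shrink as in equation 2.19. This is an illustration of the update, not the full algorithm 2.

```python
import numpy as np

def mpn_step(y, mus, s2s, ws, N, p_c, mu_0=0.0, s2_0=1.0, sigma2=1.0):
    """One MPN update (equations 2.17 to 2.19) for gaussian particle beliefs,
    keeping at most the N particles with the highest weights."""
    mus, s2s, ws = map(np.asarray, (mus, s2s, ws))
    p_k = marginal(y, mus, s2s, sigma2)           # P(y; pi_k^(t)), vectorized
    p_full = np.sum(ws * p_k)                     # P(y; pi^(t)), equation 2.6
    g = gamma(marginal(y, mu_0, s2_0, sigma2) / p_full, p_c / (1 - p_c))
    # Old particles: conjugate Bayesian update of each belief, weights per eq. 2.19.
    lam = 1 / s2s + 1 / sigma2
    mus_new = (mus / s2s + y / sigma2) / lam
    ws_new = (1 - g) * ws * p_k / p_full
    # New particle (k = t): reset posterior P(theta | y) with weight gamma (eq. 2.18).
    lam_r = 1 / s2_0 + 1 / sigma2
    mus_new = np.append(mus_new, (mu_0 / s2_0 + y / sigma2) / lam_r)
    s2s_new = np.append(1 / lam, 1 / lam_r)
    ws_new = np.append(ws_new, g)
    if ws_new.size > N:                           # discard the smallest weight
        keep = np.argsort(ws_new)[-N:]
        mus_new, s2s_new, ws_new = mus_new[keep], s2s_new[keep], ws_new[keep]
    return mus_new, s2s_new, ws_new / ws_new.sum()
```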

To deal with the computational complexity and memory requirements, one may alternatively keep only the particles with weights greater than a cut-off threshold (Adams & MacKay, 2007). However, such a constant cut-off leads to a varying number (smaller than or equal to t) of particles over time. Our approximation MPN can therefore be seen as a variation of the thresholding algorithm in Adams and MacKay (2007) with a fixed number of particles N, and hence a variable cut-off threshold. The work of Fearnhead and Liu (2007) follows the same principle as Adams and MacKay (2007) but employs stratified resampling to eliminate particles with negligible weights in order to reduce the total number of particles. Their resampling algorithm involves solving a complicated nonlinear equation at each time step, which makes it unsuitable for a biological implementation. In addition, we observed that in some cases the small errors introduced in the resampling step of the algorithm of Fearnhead and Liu (2007) accumulated and led to worse performance than our MPN algorithm, which simply keeps the N particles with the highest weights at each time step.

For the case where the likelihood function PY(y|θ) is in the exponential family and π(0) is its conjugate prior, the resulting MPN algorithm has a simple update rule for the belief parameters (see algorithm 2 and section 4 for details). For comparison, we also implemented in our simulations the full message-passing algorithm of Adams and MacKay (2007) with an almost zero cut-off (machine precision), which we consider as our benchmark “Exact Bayes,” as well as the stratified optimal resampling algorithm of Fearnhead and Liu, which we call “SORN.”

2.2.3  Particle Filtering: Algorithm 3

Equation 2.17 demonstrates that the exact Bayesian belief π(t) can be expressed as the marginalization of P(Θt=θ,rt|y1:t) over the time since the last change point rt. Equivalently, one can compute the exact Bayesian belief as the marginalization of P(Θt=θ,c1:t|y1:t) over the history of change points c1:t:
$$\pi^{(t)}(\theta) = \sum_{c_{1:t}} P(\Theta_t=\theta\mid c_{1:t}, y_{1:t})\,P(c_{1:t}\mid y_{1:t}) = \mathbb{E}_{P(C_{1:t}\mid y_{1:t})}\big[P(\Theta_t=\theta\mid C_{1:t}, y_{1:t})\big]. \tag{2.20}$$
The idea of our third algorithm is to approximate this expectation by particle filtering, that is, sequential Monte Carlo sampling (Doucet, Godsill, & Andrieu, 2000; Gordon et al., 1993) from P(C1:t|y1:t).
We then approximate π(t) by
$$\hat{\pi}^{(t)}(\theta) = \sum_{i=1}^{N} w_t^{(i)}\,\hat{\pi}_i^{(t)}(\theta), \tag{2.21}$$
where {c1:t(i)}i=1N is a set of N realizations (or samples) of c1:t (i.e., N particles) drawn from a proposal distribution Q(c1:t|y1:t), {wt(i)}i=1N are their corresponding weights at time t, and π^i(t)(θ)=P(Θt=θ|c1:t(i),y1:t) is the approximate belief corresponding to particle i.
Upon observing yt+1, the update procedure for the approximate belief π^(t+1) of equation 2.21 includes two steps: (1) updating the weights and (2) sampling the new hidden state ct+1 for each particle. The two steps are coupled together through the choice of the proposal distribution Q, for which we choose the optimal proposal function (Doucet et al., 2000; see section 4). As a result, given this choice of proposal function, we show (see section 4) that the first step amounts to
$$w_{t+1}^{(i)} = (1-\gamma_{t+1})\,w_{B,t+1}^{(i)} + \gamma_{t+1}\,w_t^{(i)}, \tag{2.22}$$
where γt+1=γ(SBF(yt+1;π^(t)), m) with m=pc/(1-pc) (cf. equation 2.9), and {wB,t+1(i)}i=1N are the weights corresponding to the Bayesian update π^B(t+1) of equation 2.8 (see section 4). In the second step, we update each particle's history of change points by going from the sequence {c1:t(i)}i=1N to {c1:t+1(i)}i=1N, for which we always keep the old sequence up to time t, and for each particle i, we add a new element ct+1(i) ∈ {0,1} representing no change, c1:t+1(i)=[c1:t(i),0], or change, c1:t+1(i)=[c1:t(i),1]. Note, however, that there is no need to keep the whole sequences c1:t+1(i) in memory; instead one can use ct+1(i) to update π^i(t) to π^i(t+1). We sample the new element ct+1(i) from the optimal proposal distribution Q(ct+1(i)|c1:t(i),y1:t+1) (Doucet et al., 2000), which is given by (see section 4)
$$Q(c_{t+1}^{(i)}=1\mid c_{1:t}^{(i)}, y_{1:t+1}) = \gamma\!\left(S_{\mathrm{BF}}(y_{t+1};\hat{\pi}_i^{(t)}),\, \frac{p_c}{1-p_c}\right). \tag{2.23}$$

Interestingly, the above formulas entail the same surprise modulation and the same trade-off as in the proposition's equation 2.10. For the weight update, there is a trade-off between an exact Bayesian update and keeping the value of the previous time step, controlled by an adaptation rate modulated in exactly the same way as in equation 2.10. Note that in contrast to equation 2.10, the trade-off for the particles' weights is not between forgetting and integrating, but between maintaining the previous knowledge and integrating. However, the change probability used for sampling (see equation 2.23) is equal to the adaptation rate and is an increasing function of surprise. As a result, although the weights are updated less for surprising events, a higher surprise causes a higher probability of change, indicated by ct+1(i)=1, which implies forgetting: for a particle i with ct+1(i)=1, the associated belief π^i(t+1)=P(Θt+1=θ|ct+1(i)=1,c1:t(i),y1:t+1) is equal to P(Θt+1=θ|ct+1(i)=1,yt+1)=P(θ|yt+1) (see Figure 1A), which is equivalent to a reset of the belief as in equation 2.12. In other words, while in MPN and in the exact Bayesian inference of equation 2.10 the trade-off between integration and reset is accomplished by adding at each time step a new particle with weight γt+1, in Particle Filtering it is accomplished via sampling. In conclusion, the above formulas are essentially the same as the update rules of MPN (see equations 2.18 and 2.19) and have the same spirit as the recursive update of the proposition's equation 2.10.

Equations 2.21 and 2.23 can be applied to the case where the likelihood function PY(y|θ) is in the exponential family and π(0) is its conjugate prior. The resulting algorithm (algorithm 3) has a particularly simple update rule for the belief parameters (see section 4 for details).
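As an illustration, one Particle Filtering step for the same gaussian case might be sketched as follows, again reusing marginal and gamma from the sketch at the end of section 2.1; the weights follow equation 2.22, and each ct+1(i) is sampled from the optimal proposal of equation 2.23. This is a sketch of one update under these assumptions, not the full algorithm 3.

```python
import numpy as np

def pf_step(y, mus, s2s, ws, p_c, rng, mu_0=0.0, s2_0=1.0, sigma2=1.0):
    """One Particle Filtering update (equations 2.21 to 2.23), gaussian beliefs;
    rng is a numpy Generator, e.g. np.random.default_rng(0)."""
    mus, s2s, ws = map(np.asarray, (mus, s2s, ws))
    m = p_c / (1 - p_c)
    p_i = marginal(y, mus, s2s, sigma2)        # per-particle predictive of y
    p_full = np.sum(ws * p_i)                  # predictive under the full belief
    p_prior = marginal(y, mu_0, s2_0, sigma2)  # predictive under the prior
    g = gamma(p_prior / p_full, m)             # adaptation rate from pi-hat^(t)
    # Equation 2.22: blend the Bayesian-updated weights with the previous ones.
    ws = (1 - g) * ws * p_i / p_full + g * ws
    # Equation 2.23: per particle, sample change vs. no change from the proposal.
    g_i = gamma(p_prior / p_i, m)
    change = rng.random(mus.size) < g_i
    mus = np.where(change, mu_0, mus)          # a change resets to the prior ...
    s2s = np.where(change, s2_0, s2s)
    lam = 1 / s2s + 1 / sigma2                 # ... then y is incorporated, so the
    mus = (mus / s2s + y / sigma2) / lam       # belief becomes P(theta | y), eq. 2.11
    return mus, 1 / lam, ws / ws.sum()
```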

The theory of particle filter methods is well established (Doucet et al., 2000; Gordon et al., 1993; Särkkä, 2013). Particle filters in simpler (Brown & Steyvers, 2009) or more complex (Findling et al., 2019) forms have also been employed to explain human behavior. Here we derived a simple particle filter for the general case of generative models of equations 2.2 to 2.4. Our main contribution is to show that the use of the optimal proposal distribution in this particle filter leads to a surprise-based update scheme.

2.2.4  Surprise-Modulation as a Framework for Other Algorithms

Other existing algorithms (Adams & MacKay, 2007; Faraji et al., 2018; Fearnhead & Liu, 2007; Nassar et al., 2012, 2010) can also be formulated in the surprise-modulation framework of equations 2.9 and 2.12 (see section 4). Moreover, in order to allow for a transparent discussion and for fair comparisons in simulations, we extended the algorithms of Nassar et al. (2012, 2010) to a more general setting. Here we give a brief summary of the algorithms we considered. A detailed analysis is provided in section 4.5.

The algorithms of Nassar et al. (2012, 2010) were originally designed for a gaussian estimation task (see section 2.3 for details of the task) with a broad uniform prior. We extended them to the more general case of gaussian tasks with gaussian priors, and we call our extended versions Nas10* and Nas12* for Nassar et al. (2010) and Nassar et al. (2012), respectively (for a performance comparison between our extended algorithms and their original versions see Figures 12 and 13). Both algorithms have the same surprise-modulation as in our proposition (see equation 2.10). There are multiple interpretations of the approaches of Nas10* and Nas12* and links to other algorithms. One such link we identify is in relation to Particle Filtering with a single particle (pf1). More specifically, one can show that pf1 behaves in expectation similar to Nas10* and Nas12* (see section 4 and the appendix).

2.3  Simulations

With the goal of gaining a better understanding of different approximate algorithms, we evaluated the departure of their performance from the exact Bayesian algorithm in terms of mean squared error (MSE) of estimating Θt (see section 4), on two tasks inspired by and closely related to real experiments (Behrens et al., 2007; Maheu, Dehaene, & Meyniel, 2019; Mars et al., 2008; Nassar et al., 2012, 2010; Ostwald et al., 2012): a gaussian and a categorical estimation task.

We compared our three novel algorithms VarSMiLe, Particle Filtering (pfN, where N is the number of particles), and Message Passing with finite number of particles N (MPN) to the online exact Bayesian Message Passing algorithm (Adams & MacKay, 2007) (Exact Bayes), which yields the optimal solution with Θ^t=Θ^tOpt. Furthermore, we included in the comparison the stratified optimal resampling algorithm (Fearnhead & Liu, 2007) (SORN, where N is the number of particles), our variant of Nassar et al. (2010) (Nas10*) and Nassar et al. (2012) (Nas12*), the Surprise-Minimization Learning algorithm of Faraji et al. (2018) (SMiLe), as well as a simple Leaky Integrator (Leaky; see section 4).

The algorithms Exact Bayes and SORN come from the field of change point detection, and whereas the former has high memory demands, the latter has the same memory demands as our algorithms pfN and MPN. The algorithms Nas10*, Nas12*, and SMiLe, on the other hand, come from the human learning literature and are more biologically oriented.

2.3.1  Gaussian Estimation Task

The task is a generalized version of the experiment of Nassar et al. (2012, 2010). The goal of the agent is to estimate the mean θt=μt of observed samples, which are drawn from a gaussian distribution with known variance σ2: yt+1|μt+1 ∼ N(μt+1, σ2). The mean μt+1 is itself drawn from a gaussian distribution μt+1 ∼ N(0,1) whenever the environment changes. In other words, the task is a special case of the generative model of equations 2.2 to 2.4, with π(0)(μt)=N(μt;0,1) and PY(yt|μt)=N(yt;μt,σ2). An example of the task is shown in Figure 2A.

We simulated the task for all combinations of σ ∈ {0.1, 0.5, 1, 2, 5} and pc ∈ {0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. For each combination of σ and pc, we first tuned the free parameter of each algorithm—m for SMiLe and Variational SMiLe, the leak parameter for the Leaky Integrator, and the pc of Nas10* and Nas12*—by minimizing the MSE on three random initializations of the task. For the Particle Filter (pfN), the Exact Bayes, the MPN, and the SORN, we empirically checked that the true pc of the environment was indeed the value that gave the best performance, and we used this value for the simulations. We evaluated the performance of the algorithms on 10 different random task instances of 10⁵ steps each for pc ∈ {0.1, 0.05, 0.01, 0.005} and 10⁶ steps each for pc ∈ {0.001, 0.0001} (in order to sample more change points). Note that the parameter σ is not estimated, and its actual value is used by all algorithms except the Leaky Integrator.

In Figure 2B we show the MSE[Θ^t|Rt=n] in estimating the parameter after n steps since the last change point, for each algorithm, computed over multiple changes, for two exemplar task settings. The Particle Filter with 20 particles (pf20), the VarSMiLe, and the Nas12* have an overall performance very close to that of the Exact Bayes algorithm (MSE[Θ^tOpt|Rt=n]), with much lower memory requirements. VarSMiLe sometimes slightly outperforms the other two early after an environmental change (see Figure 2B, right), but shows slightly higher error values at later phases. The MPN algorithm is the closest one to the optimal solution (i.e., Exact Bayes) for low σ (see Figure 2B, left), but its performance is much worse for the case of high σ and low pc (see Figure 2B, right). For SORN we observe a counterintuitive behavior in the regime of low σ: the inclusion of more particles leads to worse performance (see Figure 2B, left). At higher σ levels, the performance of SOR20 is close to optimal and better than that of MP20 at later time steps. This may be due to the fact that MPN discards particles in a deterministic and greedy way (i.e., the one with the lowest weight), whereas for SORN there is a component of randomness in the process of particle elimination, which may be important for environments with higher stochasticity.

Figure 2:

Gaussian estimation task: Transient performance after changes. (A) At each time step an observation (depicted as a black dot) is drawn from a gaussian distribution with density ∝ exp(-(yt-μt)2/2σ2), with changing mean μt (marked in blue) and known variance σ2 (lower left panels). At every change of the environment (marked with red lines), a new mean μt is drawn from a standard gaussian distribution with density ∝ exp(-μt2/2). In this example, σ=1 and pc=0.01. (B) Mean squared error for the estimation of μt at each time step n after an environmental change, that is, the average of MSE[Θ^t|Rt=n] over time; σ=0.1, pc=0.1 (left panel) and σ=5, pc=0.01 (right panel). The shaded area corresponds to the standard error of the mean. Abbreviations: pfN: Particle Filtering with N particles; MPN: Message Passing with N particles; VarSMiLe: Variational SMiLe; SORN: Stratified Optimal Resampling with N particles (Fearnhead & Liu, 2007); SMiLe: Faraji et al. (2018); Nas10*, Nas12*: variants of Nassar et al. (2010) and Nassar et al. (2012), respectively; Leaky: Leaky Integrator; Exact Bayes: Adams and MacKay (2007).


For the Leaky Integrator, we observe a trade-off between good performance in the transient phase and in the stationary phase; a fixed leak value cannot fulfill both requirements. The SMiLe rule, by construction, never lets its belief π^(θ) narrow below some minimal width, which allows it to have a low error immediately after a change but leads to high errors later on. Its performance deteriorates for higher σ (see Figure 2B, right). The Nas10* performs well for low but not for higher values of σ. Despite the fact that a Particle Filter with one particle (pf1) is in expectation similar to Nas10* and Nas12* (see section 4), it performs worse than these two algorithms on trial-by-trial measures. Still, it performs better than MP1 and identically to SOR1.

Figure 3:

Gaussian estimation task: Steady-state performance. (A) Mean squared error of the Exact Bayes algorithm (i.e., optimal solution) for each combination of σ and pc averaged over time. (B–F) Difference between the mean squared error of each algorithm and the optimal solution (of panel A), that is, the average of ΔMSE[Θ^t] over time. The color bar of panel A applies to these panels as well. Note that the black color for the MP20 indicates negative values, which are due to the finite sample size for the estimation of MSE. Abbreviations: See the Figure 2 caption.


In Figure 3A, we have plotted the average of MSE[Θ^tOpt] of the Exact Bayes algorithm over the whole simulation time for each of the considered σ and pc levels. The difference between the other algorithms and this benchmark is called ΔMSE[Θ^t] (see section 4) and is plotted in Figures 3B to 3F. All algorithms except for the SOR20 have lower average error values for low σ and low pc than for high σ and high pc. The Particle Filter pf20 and the Message Passing MP20 have the smallest difference from the optimal solution. The average error of MP20 is higher than that of pf20 for high σ and low pc, whereas pf20 is more robust across levels of environmental parameters. The worst-case performance for pf20 is ΔMSE[Θ^t]=0.033 for σ=5 and pc=0.0001, and for SOR20, it is ΔMSE[Θ^t]=0.061 for σ=0.1 and pc=0.1. The difference between these two worst-case scenarios is significant (p-value = 2.79×10⁻⁶, two-sample t-test, 10 random seeds for each algorithm). Next in performance are the algorithms Nas12* and VarSMiLe. VarSMiLe exhibits its largest deviation from the optimal solution for high σ and low pc, but is still more resilient than the MPN algorithms in this type of environment. Among the algorithms with only one unit of memory demands—pf1, MP1, SOR1, VarSMiLe, SMiLe, Leaky, Nas10*, and Nas12*—the winners are VarSMiLe and Nas12*. The SOR20 has low error overall but unexpectedly high error for environmental settings that are presumably more relevant for biological agents (intervals of low stochasticity marked by abrupt changes). The simple Leaky Integrator performs well at low σ and pc but deviates more from the optimal solution as these parameters increase (see Figure 3F). The SMiLe rule performs best at lower σ, that is, in more deterministic environments.

A summary graph, where we collect the ΔMSE[Θ^t] across all levels of σ and pc, is shown in Figure 4. We can see that pf20, Nas12*, and VarSMiLe give the lowest worst case (lowest maximum value) ΔMSE[Θ^t] and are statistically better than the other eight algorithms (the error bars indicate the standard error of the mean across the ten random task instances).

2.3.2  Categorical Estimation Task

The task is inspired by the experiments of Behrens et al. (2007), Maheu et al. (2019), Mars et al. (2008), and Ostwald et al. (2012). The goal of the agent is to estimate the occurrence probability of five possible states. Each observation yt+1 ∈ {1, …, 5} is drawn from a categorical distribution with parameters θt+1=pt+1, that is, yt+1|pt+1 ∼ Cat(yt+1; pt+1). When there is a change Ct+1=1 in the environment, the parameters pt+1 are drawn from a Dirichlet distribution Dir(s·1), where s ∈ (0,∞) is the stochasticity parameter. In relation to the generative model of equations 2.2 to 2.4, we thus have π(0)(pt)=Dir(pt;s·1) and PY(yt|pt)=Cat(yt;pt). An illustration of this task is depicted in Figure 5A.
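In this conjugate Dirichlet-categorical setting, the quantities of section 2.1 again have closed forms: P(y; π) of equation 2.6 for a belief Dir(α) is αy/Σjαj. As an illustration, a minimal sketch of the Bayes Factor Surprise and of one VarSMiLe step for this task could read as follows; categories are indexed 0 to K-1 here, and the default parameter values are illustrative.

```python
import numpy as np

def cat_marginal(y, alpha):
    """P(y; pi) of equation 2.6 for a Dirichlet belief Dir(alpha): the
    predictive probability of category y is alpha[y] / sum(alpha)."""
    return alpha[y] / alpha.sum()

def cat_var_smile_step(y, alpha_t, m, s=1.0, K=5):
    """One VarSMiLe update (equation 2.13) for the categorical task: the
    log-space convex combination of two Dirichlets is again a Dirichlet,
    with parameters combined linearly."""
    alpha_0 = s * np.ones(K)                 # prior Dir(s * 1)
    S = cat_marginal(y, alpha_0) / cat_marginal(y, alpha_t)  # equation 2.7
    g = m * S / (1 + m * S)                  # equation 2.9
    one_hot = np.eye(K)[y]
    # pi_B has parameters alpha_t + e_y; the reset posterior has alpha_0 + e_y.
    return (1 - g) * (alpha_t + one_hot) + g * (alpha_0 + one_hot)
```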

We considered the combinations of stochasticity levels s ∈ {0.01, 0.1, 0.14, 0.25, 1, 2, 5} and change probability levels pc ∈ {0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. The algorithms of Nassar et al. (2012, 2010) were specifically developed for a gaussian estimation task and cannot be applied here. All other algorithms were first optimized for each combination of environmental parameters before an experiment starts, and then evaluated on 10 different random task instances, for 10⁵ steps each for pc ∈ {0.1, 0.05, 0.01, 0.005} and 10⁶ steps each for pc ∈ {0.001, 0.0001}. The parameter s is not estimated, and its actual value is used by all algorithms except the Leaky Integrator.

Figure 4:

Gaussian estimation task: Steady-state performance summary. Difference between the mean squared error of each algorithm and the optimal solution (Exact Bayes), that is, the average of ΔMSE[Θ^t] over time, for all combinations of σ and pc together. For each algorithm we plot the 30 values (5 σ times 6 pc values) of Figure 3 with respect to randomly jittered values in the x-axis. The color coding is the same as in Figure 2. The error bars mark the standard error of the mean across 10 random task instances. The difference between the worst case of SOR20 and pf20 is significant (p-value = 2.79×10⁻⁶, two-sample t-test, 10 random seeds for each algorithm). Abbreviations: See the Figure 2 caption.

The Particle Filter pf20, the MP20, and the SOR20 have a performance closest to that of Exact Bayes, the optimal solution (see Figure 5B). VarSMiLe is the next in the ranking, with a behavior after a change similar to the gaussian task. pf20 performs better for s>2, and MP20 performs better for s ≤ 2 (see Figure 6). For this task, the biologically less plausible SOR20 is the winner in performance, and it behaves most consistently across environmental parameters. Its worst-case performance is ΔMSE[Θ^t]=8.16×10⁻⁵ for s=2 and pc=0.01, and the worst-case performance for pf20 is ΔMSE[Θ^t]=0.0048 for s=0.25 and pc=0.005 (p-value = 1.148×10⁻¹², two-sample t-test, 10 random seeds for each algorithm). For all the other algorithms except MP20, the highest deviations from the optimal solution are observed for medium stochasticity levels (see Figures 6B to 6F). When the environment is nearly deterministic (e.g., s=0.001 so that the parameter vectors pt have almost all mass concentrated in one component), or highly stochastic (e.g., s>1 so that nearly uniform categorical distributions are likely to be sampled), these algorithms achieve higher performance, while the Particle Filter is the algorithm that is most resilient to extreme choices of the stochasticity parameter s. For VarSMiLe in particular, the lowest mean error is achieved for high s and high pc or low s and low pc.
Figure 5:

Categorical estimation task: Transient performance after changes. (A) At each time step, the agent sees one out of five possible categories (black dots) drawn from a categorical distribution with parameters pt. Occasional abrupt changes happen with probability pc and are marked with red lines. After each change, a new pt vector is drawn from a Dirichlet distribution with stochasticity parameter s. In this example, s=1 and pc=0.01. (B) Mean squared error for the estimation of pt at each time step n after an environmental change, that is, the average of MSE[Θ^t|Rt=n] over time; s=0.14, pc=0.01 (left panel) and s=5, pc=0.005 (right panel). The shaded area corresponds to the standard error of the mean. Abbreviations: pfN: Particle Filtering with N particles; MPN: Message Passing with N particles; VarSMiLe: Variational SMiLe; SORN: Stratified Optimal Resampling with N particles (Fearnhead & Liu, 2007); SMiLe: Faraji et al. (2018); Leaky: Leaky Integrator; Exact Bayes: Adams and MacKay (2007).

A summary graph, with the ΔMSE[Θ^t] across all levels of s and pc, is in Figure 7. The algorithms with the lowest “worst case” are SOR20 and pf20. The top four algorithms SOR20, pf20, MP20, and VarSMiLe are significantly better than the others (the error bars indicate the standard error of the mean across the 10 random task instances), whereas MP1 and SMiLe have the largest error with a maximum at 0.53.
Figure 6:

Categorical estimation task: Steady-state performance. (A) Mean squared error of the Exact Bayes algorithm (i.e., optimal solution) for each combination of environmental parameters s and pc averaged over time. (B–F) Difference between the mean squared error of each algorithm and the optimal solution (of panel A), that is, the average of ΔMSE[Θ^t] over time. The color bar of panel A applies to these panels as well. Abbreviations: See the Figure 5 caption.


2.3.3  Summary of Simulation Results

In summary, our simulation results of the two tasks collectively suggest that our Particle Filtering (pfN) and Message Passing (MPN) algorithms achieve a high level of performance, very close to the one of biologically less plausible algorithms with higher (Exact Bayes) and same (SORN) memory demands. Moreover, their behavior is more consistent across tasks. Finally, among the algorithms with memory demands of one unit, VarSMiLe performs best.

Figure 7:

Categorical estimation task: Steady-state performance summary. Difference between the mean squared error of each algorithm and the optimal solution (Exact Bayes), that is, the average of ΔMSE[Θ^t] over time, for all combinations of s and pc together. For each algorithm, we plot the 42 values (7 s times 6 pc values) of Figure 6 with respect to randomly jittered values in the x-axis. The color coding is the same as in Figure 5. The error bars mark the standard error of the mean across 10 random task instances. The difference between the worst case of SOR20 and pf20 is significant (p-value = 1.148×10⁻¹², two-sample t-test, 10 random seeds for each algorithm). Abbreviations: See the Figure 5 caption. Note that MP1 and SMiLe are out of bound with a maximum at 0.53.


2.3.4  Robustness against Suboptimal Parameter Choice

In all algorithms we considered, the environment's hyperparameters are assumed to be known. We can distinguish between two types of hyperparameters in our generative model: the parameters of the likelihood function (e.g., σ in the gaussian task), and the pc and the parameters of the conjugate prior (e.g., s in the categorical task). Hyperparameters of the first type can be added to the parameter vector θ and be inferred with the same algorithm. However, learning the second type of hyperparameters is not straightforward. By assuming that these hyperparameters are learned more slowly than θ, one can fine-tune them after every n (e.g., n=10) change points, while change points can be detected by looking at the particles for the Particle Filter and at the peaks of surprise values for VarSMiLe. Other approaches to hyperparameter estimation can be found in Doucet and Tadić (2003), George and Doss (2017), Liu and West (2001), and Wilson et al. (2010).

When the hyperparameters are fixed, a mismatch between the assumed values and the true values is a possible source of errors. In this section, we investigate the robustness of the algorithms to a mismatch between the assumed and the actual probability of change points. To do so, we first tuned each algorithm's parameter for an environment with a change probability pc and then tested the algorithms in environments with different change probabilities while keeping the parameter fixed. For each new environment with a different change probability, we calculated the difference between the MSE of these fixed parameters and the optimal MSE, that is, the resulting MSE for the case that the Exact Bayes' parameter is tuned for the actual pc.

More precisely, if we denote as MSE[Θ^t;pc',pc] the MSE of an algorithm with parameters tuned for an environment with pc', applied in an environment with pc, we calculated the mean regret, defined as MSE[Θ^t;pc',pc]-MSE[Θ^tOpt,pc], over time; note that the second term is equal to MSE[Θ^t;pc,pc] when the algorithm Exact Bayes is used for estimation. The lower the values and the flatter the curve of the mean regret, the better the performance and the robustness of the algorithm in the face of lacking knowledge of the environment. The slope of the curve indicates the degree of deviation of the performance as we move away from the optimally tuned parameter. We ran three random task initializations (the same for all algorithms) for each pc level.
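As an illustration of this protocol, the following sketch computes one mean-regret curve. It assumes a hypothetical helper mse(algo, pc_assumed, pc_true) (not defined here) that returns the time-averaged MSE of an algorithm tuned for pc_assumed and run in an environment with change probability pc_true.

```python
import numpy as np

def mean_regret_curve(algo, p_c_tuned, p_c_grid, mse):
    """Mean regret MSE[algo; p_c', p_c] - MSE[ExactBayes; p_c, p_c] for each
    tested environment p_c, with the algorithm's parameter fixed at p_c'."""
    return np.array([
        mse(algo, p_c_tuned, p_c) - mse("ExactBayes", p_c, p_c)
        for p_c in p_c_grid
    ])
```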

In Figure 8 we plot the mean regret for each algorithm on the gaussian task for four pairs of σ and pc' levels. For σ=0.1 and pc'=0.04 (see Figure 8A), the Exact Bayes and the MP20 show the highest robustness (smallest regret) and are closely followed by the pf20, VarSMiLe, and Nas12* (note the regret's small range of values). The lower the actual pc, the higher the regret, but the changes are still very small. The curves for the SMiLe and the Leaky Integrator are also relatively flat, but their mean regret is much higher. The SOR20 is the least robust algorithm.

Similar observations can be made for σ=0.1 and pc'=0.004 (see Figure 8B). In this case, the performance of all algorithms deteriorates strongly when the actual pc is higher than the assumed one.

However, for σ=5 (see Figures 8C and 8D), the ranking of algorithms changes. The SOR20 is very robust for this level of stochasticity. The pf20 and MP20 perform similarly for pc'=0.04, but for lower pc', the pf20 is more robust and the MP20 exhibits high fluctuations in its performance. The Nas12* is quite robust at this σ level. Overall for Exact Bayes, SOR20, pf20, VarSMiLe, and Nas12*, a mismatch of the assumed pc from the actual one does not deteriorate the performance dramatically for σ=5, pc'=0.004 (see Figure 8D). The SMiLe and the Leaky Integrator outperform the other algorithms for higher pc' if pc<pc' (see Figure 8C). A potential reason is that the optimal behavior for the Leaky Integrator (according to the tuned parameters) is to constantly integrate new observations into its belief (i.e., to act like a Perfect Integrator) regardless of the pc' level. This feature makes it blind to the pc and therefore very robust against the lack of knowledge of it (see Figure 8C).

Figure 8:

Robustness to mismatch between actual and assumed probability of changes for the gaussian estimation task. The mean regret is the mean squared error obtained with assumed change probability pc' minus the mean squared error obtained with the optimal parameter choice of Exact Bayes for the given actual pc, that is, the average of the quantity MSE[Θ^t;pc',pc]-MSE[Θ^tOpt,pc] over time. A red triangle marks the pc' value each algorithm was tuned for. We plot the mean regret for the following parameter combinations: A, σ=0.1 and pc'=0.04; B, σ=0.1 and pc'=0.004; C, σ=5 and pc'=0.04; D, σ=5 and pc'=0.004. Abbreviations: See the Figure 2 caption.

In summary, most of the time, the mean regret for Exact Bayes and MP20 is less than the mean regret for pf20 and VarSMiLe. However, the variability in the mean regret for pf20 and VarSMiLe is smaller, and their curves are flatter across pc levels, which makes their performance more predictable. The results for the categorical estimation task are similar to those of the gaussian task, with the difference that the SOR20 is very robust for this case (see Figure 9).
Figure 9:

Robustness to mismatch between actual and assumed probability of changes for the categorical estimation task. The mean regret is the mean squared error obtained with assumed change probability pc' minus the mean squared error obtained with the optimal parameter choice of Exact Bayes for the given actual pc, that is, the average of the quantity MSE[Θ^t;pc',pc]-MSE[Θ^tOpt,pc] over time. A red triangle marks the pc' value each algorithm was tuned for. We plot the mean regret for the following parameter combinations: A, s=0.14 and pc'=0.04; B, s=0.14 and pc'=0.004; C, s=5 and pc'=0.04; D, s=5 and pc'=0.004. Abbreviations: See the Figure 5 caption.


2.4  Experimental Prediction

It has been experimentally shown that some important behavioral and physiological indicators statistically correlate with a measure of surprise or a prediction error. Examples of such indicators are the pupil diameter (Joshi & Gold, 2020; Nassar et al., 2012; Preuschoff et al., 2011), the amplitude of the P300, N400, and MMN components of EEG (Kopp & Lange, 2013; Lieder, Daunizeau, Garrido, Friston, & Stephan, 2013; Mars et al., 2008; Meyniel, Maheu, & Dehaene, 2016; Modirshanechi et al., 2019; Musiolek, Blankenburg, Ostwald, & Rabovsky, 2019; Ostwald et al., 2012), the amplitude of MEG in specific time windows (Maheu et al., 2019), BOLD responses in fMRI (Konovalov & Krajbich, 2018; Loued-Khenissi, Pfeuffer, Einhäuser, & Preuschoff, 2020), and reaction time (Huettel, Mack, & McCarthy, 2002; Meyniel et al., 2016). The surprise measure is usually the negative log probability of the observation, known as Shannon Surprise (Shannon, 1948), and denoted here as SSh. However, as we show in this section, as long as there is an uninformative prior over observations, Shannon Surprise SSh is just an invertible function of our modulated adaptation rate γ and hence an invertible function of the Bayes Factor Surprise SBF. Thus, based on the results of previous works (Meyniel et al., 2016; Modirshanechi et al., 2019; Nassar et al., 2012, 2010; Ostwald et al., 2012), which always used uninformative priors, one cannot determine whether the physiological and behavioral indicators we have noted correlate with SSh or SBF.

In this section, we first investigate the theoretical differences between the Bayes Factor Surprise SBF and Shannon Surprise SSh. Then, based on their observed differences, we formulate two experimentally testable predictions, with a detailed experimental protocol. Our predictions make it possible to discriminate between the two measures of surprise and determine whether physiological or behavioral measurements are signatures of SBF or of SSh.

2.4.1  Theoretical Difference between SBF and SSh

Shannon Surprise (Shannon, 1948) is defined as
(2.24)
SSh(yt+1; π(t)) = −log P(yt+1|y1:t),
where for computing P(yt+1|y1:t), one needs to know the structure of the generative model. For the generative model of Figure 1A, we find SSh(yt+1; π(t)) = −log[(1−pc) P(yt+1; π(t)) + pc P(yt+1; π(0))]. While the Bayes Factor Surprise SBF depends on the ratio of the probabilities of the new observation under the prior and the current beliefs, Shannon Surprise depends on a weighted sum of these probabilities. Interestingly, it is possible to express (see section 4 for the derivation) the adaptation rate γt+1 as a function of the “difference in Shannon Surprise,”
(2.25)
γt+1 = pc exp(SSh(yt+1; π(t)) − SSh(yt+1; π(0))),
where γt+1 = γ(SBF(yt+1; π(t)), m) depends on the Bayes Factor Surprise and the saturation parameter m (cf. equation 2.9). Equation 2.25 shows that the modulated adaptation rate is not simply a function of the Shannon Surprise upon observing yt+1, but a function of the difference between the Shannon Surprise of this observation under the current belief and under the prior belief. Note that if the prior is uninformative, the second term in equation 2.25 is constant with respect to observations, and γt+1 is an invertible function of SSh, thus indistinguishable from SSh. We next exploit the differences between SBF and SSh to formulate our experimentally testable predictions.
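The equivalence in equation 2.25 is easy to verify numerically; the following sketch uses illustrative values and, for simplicity, approximates the current belief by a point mass at the current estimate:

```python
import numpy as np
from scipy.stats import norm

pc = 0.1
m = pc / (1 - pc)                  # saturation parameter of equation 2.9
y = 2.5                            # a hypothetical observation
p_cur = norm.pdf(y, loc=0.0, scale=0.5)            # P(y; pi^(t)), point-mass belief at 0
p_pri = norm.pdf(y, loc=0.0, scale=np.sqrt(1.25))  # P(y; pi^(0)) = N(0, 1 + sigma^2)

s_bf = p_pri / p_cur                               # Bayes Factor Surprise
gamma_bf = m * s_bf / (1 + m * s_bf)               # gamma(S_BF, m), equation 2.9

ssh_cur = -np.log((1 - pc) * p_cur + pc * p_pri)   # Shannon Surprise, current belief
ssh_pri = -np.log(p_pri)                           # Shannon Surprise, prior belief
gamma_ssh = pc * np.exp(ssh_cur - ssh_pri)         # equation 2.25

assert np.isclose(gamma_bf, gamma_ssh)
```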

2.4.2  Experimental Protocol

Consider the variant of the gaussian task of Nassar et al. (2012, 2010), which we used in our simulations: PY(y|θ)=N(y;θ,σ2) and π(0)(θ)=N(θ;0,1). Human subjects are asked to predict the next observation yt+1 given what they have observed so far, that is, y1:t. The experimental procedure is as follows:

  1. Fix the hyperparameters σ2 and pc.

  2. At each time t, show the observation yt (produced in the aforementioned way) to the subject, and measure a physiological or behavioral indicator Mt, for example, pupil diameter (Nassar et al., 2012, 2010).

  3. At each time t, after observing yt, ask the subject to predict the next observation y^t+1 and his or her confidence Ct about that prediction.

Note that the only difference between our task and the task of Nassar et al. (2012, 2010) is the choice of prior for θ (i.e., gaussian instead of uniform). Based on the previous studies, the assumption is that there is a positive correlation between Mt and a measure of surprise.
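A minimal simulation of this protocol, assuming the generative model above (function and variable names are ours), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gaussian_task(T=500, sigma=0.5, pc=0.1):
    """Generate observations for the proposed protocol.

    At each step, with probability pc, the hidden mean theta is redrawn
    from the prior N(0, 1); the observation is y_t ~ N(theta, sigma^2).
    """
    theta = rng.normal(0.0, 1.0)
    ys, thetas = [], []
    for _ in range(T):
        if rng.random() < pc:
            theta = rng.normal(0.0, 1.0)   # change point: reset from the prior
        ys.append(rng.normal(theta, sigma))
        thetas.append(theta)
    return np.array(ys), np.array(thetas)
```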

2.4.3  Prediction 1

Based on the results of Nassar et al. (2012, 2010), in such a gaussian task, the best fit for the subjects' prediction ŷt+1 is θ̂t, and the confidence Ct is a monotonic function of the standard deviation of θ̂t, denoted as σ̂t. In order to formalize our experimental prediction, we define, at time t, the prediction error as δt = yt − ŷt and the “sign bias” as st = sign(δt ŷt). The sign bias st is crucial for our analysis: it indicates whether the prediction ŷt is an overestimation in absolute value (st = +1) or an underestimation in absolute value (st = −1). Figure 10A shows a schematic for the case that both the current and the prior belief are gaussian distributions. The two observations indicated by dashed lines have the same absolute error |δt| but differ in the sign bias s.
Figure 10:

Experimental prediction 1. (A) Schematic of the task for the case of a gaussian belief. The distributions of yt+1 under the prior belief π(0) and the current belief π(t) are shown by black and red curves, respectively. Two possible observations with equal absolute prediction error δ but opposite sign bias s are indicated by dashed lines. The two observations are equally probable under π(t) but not under π(0). SBF is computed as the ratio between the red and black dots for a given observation, whereas SSh is a function of the weighted sum of the two. This phenomenon is the basis of our experimental prediction. (B) The average surprise values S¯Sh(θ^=1,δ,s=±1,σC=0.5) and S¯BF(θ^=1,δ,s=±1,σC=0.5) over 20 subjects (each with 500 observations) are shown for two different learning algorithms (Nas12* and pf20). The mean S¯BF is higher for negative sign bias (marked in blue) than for positive sign bias (marked in orange). The opposite is observed for the mean S¯Sh. This effect increases with increasing values of prediction error δ. The shaded area corresponds to the standard error of the mean. The experimental task is the same as the gaussian task we used in the previous section, with σ=0.5 and pc=0.1 (see section 4 for details).


Given an absolute prediction value ŷ > 0, an absolute prediction error δ > 0, a confidence value C > 0, and a sign bias s ∈ {−1, +1}, we can compute the average of Mt over time for the time points with |ŷt| ≈ ŷ, |δt| ≈ δ, Ct ≈ C, and st = s, which we denote as M̄1(ŷ, δ, s, C); the index 1 stands for experimental prediction 1. The approximation notation is used for the continuous variables instead of equality due to practical limitations (i.e., to obtain an adequate number of samples for averaging). Note that for our theoretical proofs we use equality, but in our simulations we include the practical limitations of a real experiment and hence use an approximation. The formal definitions can be found in section 4. It is worth noting that the quantity M̄1(ŷ, δ, s, C) is model independent; its calculation does not require any assumption about the learning algorithm the subject may employ. Depending on whether the measurement M̄1(ŷ, δ, s, C) reflects SSh or SBF, its relationship to the four variables (ŷ, δ, s, C) is qualitatively and quantitatively different.

In order to prove and illustrate our prediction, we consider each subject as an agent equipped with one of the learning algorithms we discussed. Similar to the above, given an absolute prediction θ̂ > 0 (corresponding to the subject's absolute prediction ŷ), an absolute prediction error δ > 0, a standard deviation σC (corresponding to the subject's confidence value C), and a sign bias s ∈ {−1, +1}, we can compute the average Shannon Surprise SSh(yt; π̂(t−1)) and the average Bayes Factor Surprise SBF(yt; π̂(t−1)) over time, for the time points with |θ̂t−1| ≈ θ̂, |δt| ≈ δ, σ̂t ≈ σC, and st = s, which we denote as S̄Sh(θ̂, δ, s, σC) and S̄BF(θ̂, δ, s, σC), respectively. We can show theoretically (see section 4) and in simulations (see Figure 10B and section 4) that for any value of θ̂, δ, and σC, we have S̄Sh(θ̂, δ, s=+1, σC) > S̄Sh(θ̂, δ, s=−1, σC) for the Shannon Surprise, and exactly the opposite relation, S̄BF(θ̂, δ, s=+1, σC) < S̄BF(θ̂, δ, s=−1, σC), for the Bayes Factor Surprise. Moreover, this effect increases with increasing δ.
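As a sketch of this analysis (binning over θ̂, δ, and σC is omitted for brevity, and all names are ours), the sign-bias-conditioned averages can be computed as:

```python
import numpy as np

def sign_bias_averages(y, y_hat, surprise):
    """Average a per-trial surprise series conditioned on the sign bias.

    y, y_hat: observations and predictions; surprise: S_BF or S_Sh values.
    Returns (mean for s = +1, mean for s = -1); in a full analysis, trials
    would additionally be binned by |y_hat|, |delta|, and confidence.
    """
    delta = y - y_hat
    s = np.sign(delta * y_hat)      # sign bias s_t = sign(delta_t * y_hat_t)
    return surprise[s > 0].mean(), surprise[s < 0].mean()
```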

It should be noted that this effect is due to the essential difference between SSh and SBF in how they use the prior belief π(0)(θ). Our experimental prediction is theoretically provable for the cases in which each subject's belief π̂(t) is a gaussian distribution, which is the case if they employ VarSMiLe, Nas10*, Nas12*, pf1, MP1, or the Leaky Integrator as their learning rule (see section 4). For learning rules (e.g., pf20) whose posterior belief is a weighted sum of gaussians, the theoretical analysis is more complicated, but our simulations show the same results (see Figure 10B and section 4). Therefore, independent of the learning rule, we have the same experimental prediction for the manifestation of the different surprise measures in physiological signals such as pupil dilation. Our first experimental prediction can be summarized as the set of hypotheses shown in Table 1.

2.4.4  Prediction 2

Our second prediction follows the same experimental procedure as the first. The main difference is that for the second prediction, we need to fit a model to the experimental data. Given one of the learning algorithms, the fitting procedure can be done by tuning the free parameters of the algorithm with the goal of minimizing the mean squared error between the model's prediction θ̂t and the subject's prediction ŷt+1 (similar to Nassar et al., 2012, 2010) or with the goal of maximizing the likelihood of the subject's prediction π̂(t)(ŷt+1). Our prediction is independent of the learning algorithm, but in an actual experiment, we recommend using model selection to find the model that best fits the human data.

Table 1:
Experimental Hypotheses and Predictions 1.

Hypothesis | Prediction
The indicator reflects SBF | ΔM̄1(θ̂, δ, C) < 0 and ∂ΔM̄1(θ̂, δ, C)/∂δ < 0
The indicator reflects SSh | ΔM̄1(θ̂, δ, C) > 0 and ∂ΔM̄1(θ̂, δ, C)/∂δ > 0
The prior is not used for inference | ΔM̄1(θ̂, δ, C) = 0

Note: ΔM̄1(θ̂, δ, C) stands for M̄1(θ̂, δ, s = +1, C) − M̄1(θ̂, δ, s = −1, C).

Having a fitted model, we can compute the probabilities P(yt+1;π^(t)) and P(yt+1;π^(0)). For the case that these probabilities are equal, P(yt+1;π^(t))=P(yt+1;π^(0))=p, the Bayes Factor Surprise SBF is equal to 1, independent of the value of p (see equation 2.7). However, the Shannon Surprise SSh is equal to -logp and varies with p. Figure 11A shows a schematic for the case that both current and prior beliefs are gaussian distributions. Two cases for which we have P(yt+1;π^(t))=P(yt+1;π^(0))=p, for two different p values, are marked by black dots at the intersections of the curves.

Given a probability p > 0, we can compute the average of Mt over time for the time points with P(yt+1; π̂(t)) ≈ p and P(yt+1; π̂(0)) ≈ p, which we denote as M̄2(p); the index 2 stands for experimental prediction 2. Analogous to the first prediction, the approximation notation is used due to practical limitations. Then, if M̄2(p) is independent of p, its behavior is consistent with SBF, whereas if it decreases with increasing p, it can be a signature of SSh. Our second experimental prediction can be summarized as the two hypotheses shown in Table 2. Note that in contrast to our first prediction, under the assumption that the standard deviation of the prior belief is fitted using the behavioral data, we do not consider the hypothesis that the prior is not used for inference, because this case is indistinguishable from a very large variance of the prior belief.

In order to illustrate the possible results and the feasibility of the experiment, we ran a simulation and computed S̄BF(p) and S̄Sh(p) for the time points with P(yt+1; π̂(t)) ≈ p and P(yt+1; π̂(0)) ≈ p (see section 4 for details). The results of the simulation are shown in Figure 11B.
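A sketch of this selection-and-averaging step (the tolerance and all names are our own choices):

```python
import numpy as np

def m2_average(p_cur, p_pri, indicator, p, tol=0.05):
    """Average an indicator over trials where both probabilities are near p.

    p_cur, p_pri: per-trial P(y_{t+1}; pi_hat^(t)) and P(y_{t+1}; pi_hat^(0)).
    Selects time points with both probabilities within tol of p, mirroring
    the approximate equality used in the definition of M2(p).
    """
    mask = (np.abs(p_cur - p) < tol) & (np.abs(p_pri - p) < tol)
    return float(indicator[mask].mean()) if mask.any() else float("nan")
```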

Figure 11:

Experimental prediction 2. (A) Schematic of the task for the case of a gaussian belief. The probability distribution of observations under the prior belief is shown by the solid black curve. Two different possible current beliefs (determined by the letters A and B) are shown by dashed red curves. The intersections of the dashed red curves with the prior belief determine observations whose SBF is same and equal to one, but their SSh is a function of their probabilities under the prior belief p. (B) The average surprise values S¯Sh(p) and S¯BF(p) over 20 subjects (each with 500 observations) are shown for two different learning algorithms (Nas12* and pf20). The mean SBF is constant (equal to 1) and independent of p, whereas the mean SSh is a decreasing function of p. The shaded area corresponds to the standard error of the mean. The experimental task is the same as the gaussian task we used in the previous section. Observations yt are drawn from a gaussian distribution with σ=0.5, whose mean changes with change point probability pc=0.1 (see section 4 for details).


3  Discussion

We have shown that performing exact Bayesian inference on a generative world model naturally leads to a definition of surprise and a surprise-modulated adaptation rate. We have proposed three approximate algorithms (VarSMiLe, MPN, and pfN) for learning in nonstationary environments, which all exhibit the surprise-modulated adaptation rate of the exact Bayesian approach and are biologically plausible. Empirically, we observed that our algorithms achieve levels of performance comparable to approximate Bayesian methods with higher memory demands (Adams & MacKay, 2007) and are more resilient across different environments than methods with similar memory demands (Faraji et al., 2018; Fearnhead & Liu, 2007; Nassar et al., 2012, 2010).

Table 2:
Experimental Hypotheses and Predictions 2.

Hypothesis | Prediction
The indicator reflects SBF | ∂M̄2(p)/∂p = 0
The indicator reflects SSh | ∂M̄2(p)/∂p < 0

Learning in a volatile environment has been studied for a long time in the fields of Bayesian learning, neuroscience, and signal processing. In the following, we discuss the biological relevance of our work and briefly review some previously developed algorithms, focusing on those that address environments that can be modeled with a generative model similar to the one in Figure 1. We then discuss our results further and propose directions for future work on surprise-based learning.

3.1  Biological Interpretation

Humans are able to quickly adapt to changes (Behrens et al., 2007; Nassar et al., 2012, 2010), but human behavior is also often observed to be suboptimal compared to the normative approach of exact Bayesian inference (Glaze et al., 2015; Mathys, Daunizeau, Friston, & Stephan, 2011; Nassar et al., 2010; Prat-Carrabin, Wilson, Cohen, & Da Silveira, 2020; Wilson, Nassar, & Gold, 2013). In general, biological agents have limited resources and possibly inaccurate assumptions about hyperparameters, yielding suboptimal behavior, as we also observe with our algorithms, whose accuracy degrades with a suboptimal choice of hyperparameters. Performance also deteriorates with a decreasing number of particles in the sampling-based algorithms, which may be another explanation of suboptimal human behavior. Previously, Particle Filtering has been shown to explain the behavior of human subjects in changing environments: Daw and Courville (2008) use a single particle, Brown and Steyvers (2009) use a simple heuristic form of particle filtering based on direct simulation, Findling et al. (2019) combine Particle Filtering with noisy inference, and Prat-Carrabin et al. (2020) use it for a task with temporal structure.

At the level of neuronal implementation, we do not make a specific proposal. However, there are several hypotheses about neural implementations of related particle filters (Huang & Rao, 2014; Kutschireiter et al., 2017; Legenstein & Maass, 2014; Shi & Griffiths, 2009) on which a neural model of pfN, and of its greedy version MPN, could be based. In a similar spirit, the updating scheme of Variational SMiLe may be implemented in biological neural networks (for distributions in the exponential family).

Our theoretical framework for modulation of learning by the Bayes Factor Surprise SBF is related to the body of literature on neo-Hebbian three-factor learning rules (Frémaux & Gerstner, 2016; Gerstner et al., 2018; Lisman et al., 2011), where a third factor indicating reward or surprise enables or modulates a synaptic change or a belief update (Yu, 2012; Yu & Dayan, 2005). We have shown how Bayesian or approximate Bayesian inference naturally leads to such a third factor that modulates learning via the surprise modulated adaptation rate γ(SBF,m). This may offer novel interpretations of behavioral and neurophysiological data and help in understanding how three-factor learning computations may be implemented in the brain.

3.2  Related Work

3.2.1  Exact Bayesian Inference

As already described in section 2.2.2, for the generative model in Figure 1, it is possible to find an exact online Bayesian update of the belief using a message-passing algorithm (Adams & MacKay, 2007). The space and time complexity of the algorithm increases linearly with t, which makes it unsuitable for an online learning setting. However, approximations like dropping messages below a certain threshold (Adams & MacKay, 2007) or stratified resampling (Fearnhead & Liu, 2007) reduce the computational complexity. The former results in a number of particles that varies over time, and the latter requires solving a complicated nonlinear equation at each time step in order to reduce the number of particles to N (the algorithm called SORN in section 2).

Our message-passing algorithm with a finite number N of particles (messages) (MPN; see algorithm 2) is closely related to these algorithms and can be seen as a biologically more plausible variant of the other two. All three algorithms have the same update rules, given by equations 2.18 and 2.19. Hence, the algorithms of both Adams and MacKay (2007) and Fearnhead and Liu (2007) have the same surprise modulation as our MPN. The difference lies in their approaches to eliminating less “important” particles.

In the literature of switching state-space models (Barber, 2012), the generative models of the kind in Figure 1 are known as “reset models,” and the message-passing algorithm of Adams and MacKay (2007) is known to be the standard algorithm for inference over these models (Barber, 2012). See Barber (2006, 2012), and Ghahramani & Hinton (2000) for other variations of switching state-space models and examples of approximate inference over them.

3.2.2  Leaky Integration and Variations of Delta Rules

For estimating summary statistics, leaky integration of new observations is a particularly simple form of trade-off between integrating and forgetting. After a transient phase, the update of a leaky integrator takes the form of a delta rule that can be seen as an approximation of exact Bayesian updates (Heilbron & Meyniel, 2019; Meyniel et al., 2016; Ryali et al., 2018; Yu & Cohen, 2009). This update rule was found to be biologically plausible and consistent with human behavioral data (Meyniel et al., 2016; Yu & Cohen, 2009). However, Behrens et al. (2007) and Heilbron and Meyniel (2019) demonstrated that in some situations, the exact Bayesian model is significantly better than leaky integration in explaining human behavior. The inflexibility of leaky integration with a single, constant leak parameter can be overcome by a weighted combination of multiple leaky integrators (Wilson et al., 2013), where the weights are updated in a similar fashion as in the exact online methods (Adams & MacKay, 2007; Fearnhead & Liu, 2007), or by considering an adaptive leak parameter (Nassar et al., 2012, 2010). We have shown that the two algorithms of Nassar et al. (2012, 2010) can be generalized to gaussian prior beliefs (Nas10* and Nas12*). Our results show that these algorithms also inherit the surprise modulation of exact Bayesian inference. Our surprise-dependent adaptation rate γ can be interpreted as a surprise-modulated leak parameter.

3.2.3  Other Approaches

Learning in the presence of abrupt changes has also been considered without explicit assumptions about the underlying generative model. One approach uses a surprise-modulated adaptation rate (Faraji et al., 2018) similar to equation 2.9. The Surprise-Minimization Learning (SMiLe) algorithm of Faraji et al. (2018) has an updating rule similar to the one of VarSMiLe (see equations 2.14 and 2.15). The adaptation rate modulation, however, is based on the Confidence Corrected Surprise (Faraji et al., 2018) rather than the Bayes Factor Surprise, and the trade-off in its update rule is between resetting and staying with the latest belief rather than between resetting and integrating (see section 4).

Other approaches use different generative models, such as conditional sampling of the parameters also when there is a change (Glaze et al., 2015; Yu & Dayan, 2005), a deeper hierarchy without fixed change probability pc (Wilson et al., 2010), or drift in the parameters (Gershman, Radulescu, Norman, & Niv, 2014; Mathys et al., 2011). A recent work shows that inference on a simpler version of the generative model of Figure 1, with no change points but with noisy inference, can explain human behavior well even when the true generative model of the environment is different and more complicated (Findling et al., 2019). The authors develop a heuristic approach to add noise in the inference process of a Particle Filter. Their algorithm can be interpreted as a surprise-modulated Particle Filter, where the added noise scales with a measure of surprise that is conceptually equivalent to Bayesian surprise (Itti & Baldi, 2006; Schmidhuber, 2010; Storck, Hochreiter, & Schmidhuber, 1995). Moreover, another recent work (Prat-Carrabin et al., 2020) shows that approximate sampling algorithms (like Particle Filtering) can explain human behavior better than their alternatives in tasks closely related to the generative model of Figure 1. The signal processing literature provides further methods to address the problem of learning in nonstationary environments with abrupt changes; see Aminikhanghahi and Cook (2017) for a review, and Cummings et al. (2018), Lin et al. (2017), Masegosa et al. (2017), and Özkan et al. (2013) for a few recent examples.

3.3  Surprise Modulation as a Generic Phenomenon

Learning rate modulation similar to the one in equation 2.9 has been proposed previously in the neuroscience literature, with either heuristic arguments (Faraji et al., 2018) or Bayesian arguments for a particular experimental task, for example, when samples are drawn from a gaussian distribution (Nassar et al., 2012, 2010). The fact that the same form of modulation is at the heart of Bayesian inference for our relatively general generative model, that it is derived without any further assumptions, and that it is not defined a priori is, in our view, an important contribution to the field of adaptive learning algorithms in computational neuroscience.

Furthermore, the results of our three approximate methods (Particle Filtering, Variational SMiLe, and Message Passing with a fixed number N of messages) as well as of some previously developed ones (Adams & MacKay, 2007; Fearnhead & Liu, 2007; Nassar et al., 2012, 2010) demonstrate that the surprise-based modulation of the learning rate is a generic phenomenon. Therefore, regardless of whether the brain uses Bayesian inference or an approximate algorithm (Bogacz, 2017, 2019; Findling et al., 2019; Friston, 2010; Gershman, 2019; Gershman et al., 2014; Mathys et al., 2011; Nassar et al., 2012, 2010; Prat-Carrabin et al., 2020), the notion of the Bayes Factor Surprise and the way it modulates learning (see equations 2.12 and 2.9) appear to be generic.

The generality of the way surprise should modulate learning depends on an agent's inductive biases about its environment and is directly associated with the assumed generative model of the world. The generative model we considered in this work involves abrupt changes. However, one can think of other realistic situations where an improbable observation does not indicate a persistent change but a singular event or an outlier, similar to d'Acremont and Bossaerts (2016) and Nassar, Bruckner, and Frank (2019). In such situations, the belief should not be changed, and surprise should attenuate learning rather than accelerate it. Interestingly, we can show that exact and approximate Bayesian inference on such a generative model naturally leads to a surprise-modulated adaptation rate γ(SBF, m), with the same definition of SBF, where the trade-off is not between integrating and resetting but between integrating and ignoring the new observation.

An aspect that the generative model we considered does not capture is the potential return to a previous state of the environment rather than a change to a completely new situation. If, in our example of Figure 1B, the bridge with the shortest path is temporarily closed for repairs, your friend again has to take the longer detour; her arrival times therefore return to their previous (i.e., longer) values. In such cases, an agent should infer whether the surprising observation stems from a new hidden state or from an old state stored in memory. Relevant generative models have been studied in Fox, Sudderth, Jordan, and Willsky (2011), Gershman, Monfils, Norman, and Niv (2017), and Gershman et al. (2014) and are beyond the scope of this article.

3.4  Bayes Factor Surprise as a Novel Measure of Surprise

In view of a potential application in the neurosciences, a definition of surprise should reflect how unexpected an event is and should modulate learning. Surprising events indicate that our belief is far from the real world and suggest updating our model of the world, or, for a large surprise, simply forgetting it. Forgetting is the same as returning to the prior belief. However, an observation yt+1 can be unexpected under both the prior π(0) and the current beliefs π(t). In these situations, it is not obvious whether forgetting helps. Therefore, the modulation between forgetting or not should be based on a comparison between the probability of an event under the current belief P(yt+1;π(t)) and its probability under the prior belief P(yt+1;π(0)).

The definition of the Bayes Factor Surprise SBF as the ratio of P(yt+1;π(t)) and P(yt+1;π(0)) exploits this insight. The Bayes Factor Surprise appears as a modulation factor in the recursive form of the exact Bayesian update rule for a hierarchical generative model of the environment. When two events are equally probable under the prior belief, the one that is less expected under the current belief is more surprising, satisfying the first property. At the same time, when two events are equally probable under the current belief, the one that is more expected under the prior belief is more surprising, signaling that forgetting may be beneficial.

SBF can be written (using equation 2.6) in a more explicit way as
(3.1)
SBF(yt+1; π(t)) = Eπ(0)[PY(yt+1|Θ)] / Eπ(t)[PY(yt+1|Θ)].
Note that the definition by itself is independent of the specific form of the generative model. In other words, even in cases where data are generated by another generative model (e.g., the real world), SBF remains a candidate surprise measure for interpreting brain activity or pupil dilation.

We formally discussed the connections between the Bayes Factor Surprise and Shannon Surprise (Shannon, 1948) and showed that they are closely linked. We showed that the modulated adaptation rate (γ) used in (approximate) Bayesian inference is a function of the difference between the Shannon Surprise under the current and the prior beliefs, but cannot be expressed solely by the Shannon Surprise under the current one. Our formal comparisons between these two different measures of surprise lead to specific experimentally testable predictions.

The Bayesian Surprise SBa (Itti & Baldi, 2006; Schmidhuber, 2010; Storck et al., 1995) and the Confidence Corrected Surprise SCC (Faraji et al., 2018) are two other measures of surprise in neuroscience. The learning modulation derived in our generative model cannot be expressed as a function of SBa and SCC. However, one can hypothesize that SBa is computed after the update of the belief to measure the information gain of the observed event and is therefore not a good candidate for online learning modulation. The Confidence Corrected Surprise SCC takes into account the shape of the belief and therefore includes the effects of confidence, but it does not consider any information about the prior belief. Hence, a result of M¯1(θ^,δ,s=+1,C)=M¯1(θ^,δ,s=-1,C) in our first experimental prediction would be consistent with the corresponding behavioral or physiological indicator reflecting the SCC.

3.5  Difference in Shannon Surprise, an Alternative Perspective

Following our formal comparison in section 2.4, SBF can be expressed as a deterministic function of the difference in Shannon Surprise ΔSSh(yt+1) = SSh(yt+1; π(t)) − SSh(yt+1; π(0)) as
(3.2)
SBF(yt+1; π(t)) = (1 − pc) exp(ΔSSh(yt+1)) / (1 − pc exp(ΔSSh(yt+1))).
All of our theoretical results can be rewritten by replacing SBF with this function of ΔSSh. Moreover, because there is a one-to-one mapping between SBF and ΔSSh, from a systemic point of view, it is not possible to determine by analysis of behavioral data and biological signals whether the brain computes the former or the latter. This suggests an alternative interpretation of surprise-modulated learning as an approximation of Bayesian inference: what the brain computes and perceives as surprise or prediction error may be Shannon Surprise, but the modulating factor in a three-factor synaptic plasticity rule (Frémaux & Gerstner, 2016; Gerstner et al., 2018; Lisman et al., 2011) may be implemented by comparing the Shannon Surprise values under the current and the prior beliefs.

3.6  Future Directions

A natural continuation of our study is to test our experimental predictions in human behavior and physiological signals in order to investigate which measures of surprise are used by the brain. In a similar direction, our approximate learning algorithms can be evaluated on human behavioral data from experiments that use a similar generative model (Behrens et al., 2007; Glaze et al., 2015; Heilbron & Meyniel, 2019; Nassar et al., 2012, 2010; Wilson et al., 2013; Yu & Dayan, 2005) in order to assess if our proposed algorithms achieve similar or better performance in explaining data.

Finally, our methods can potentially be applied to model-based reinforcement learning in nonstationary environments. In recent years, there has been growing interest in adaptive or continually learning agents in changing environments in the form of continual learning and meta-learning (Lomonaco, Desai, Culurciello, & Maltoni, 2019; Traoré et al., 2019). Many continual learning model-based approaches make use of some procedure to detect changes (Lomonaco et al., 2019; Nagabandi et al., 2018). Integrating SBF and a learning rate γ(SBF) into a reinforcement learning agent would be an interesting future direction.

4.1  Proof of the Proposition

By definition,
(4.1)
We exploit the Markov property of the generative model in equations 2.2 to 2.4, condition on the fixed past y1:t and rewrite
(4.2)
By marginalization over the hidden state Ct+1, the second factor in the numerator of equation 4.2 can be written as
(4.3)
The denominator in equation 4.2 can be written as
(4.4)
where we used the definition in equation 2.6. Using these two expanded forms, equation 4.2 can be rewritten as
(4.5)
We define P(θ|yt+1), the posterior given a change in the environment, as
(4.6)
Then, we can write equation 4.5 as
(4.7)
where πB(t+1)(θ) is defined in equation 2.8, and
(4.8)
with SBF defined in equation 2.7 and γ(S, m) defined in equation 2.9. Thus, our calculation yields a specific choice of surprise (S = SBF) and a specific value m = pc/(1−pc) for the saturation parameter.
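On a discrete grid of θ values, the resulting recursive update can be sketched numerically as follows (our own discretization for illustration, not the algorithm used in the article):

```python
import numpy as np

def exact_bayes_step(prior, belief, lik, pc):
    """One step of the surprise-modulated exact update (equation 4.7).

    prior, belief: pi^(0) and pi^(t) as probability vectors on a theta grid.
    lik:           P_Y(y_{t+1} | theta) evaluated on the same grid.
    Returns the new belief pi^(t+1) and the adaptation rate gamma_{t+1}.
    """
    p_cur = float(np.dot(lik, belief))   # P(y_{t+1}; pi^(t))
    p_pri = float(np.dot(lik, prior))    # P(y_{t+1}; pi^(0))
    s_bf = p_pri / p_cur                 # Bayes Factor Surprise, equation 2.7
    m = pc / (1 - pc)
    gamma = m * s_bf / (1 + m * s_bf)    # gamma(S_BF, m), equation 2.9
    pi_b = lik * belief / p_cur          # integrate with the current belief
    pi_reset = lik * prior / p_pri       # P(theta | y_{t+1}): reset to the prior
    return (1 - gamma) * pi_b + gamma * pi_reset, gamma
```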

4.2  Derivation of the Optimization-Based Formulation of VarSMiLe: Algorithm 1

To derive the optimization-based update rule for the Variational SMiLe rule and the relation of the bound Bt+1 to surprise, we follow the same approach as Faraji et al. (2018).

4.2.1  Derivation of the Update Rule

Consider the general form of the following variational optimization problem:
(4.9)
where B ∈ [0, DKL[p1(θ)||p2(θ)]]. At the extremes of B, we have the trivial solutions
(4.10)
Note that the Kullback-Leibler divergence is a convex function with respect to its first argument (q in our setting). Therefore, both the objective function and the constraints of the optimization problem in equation 4.9 are convex. For convenience, we assume that the parameter space for θ is discrete, but the final results can also be generalized to the continuous case with some considerations (see Beal, 2003, and Faraji et al., 2018). For the discrete setting, the optimization problem in equation 4.9 can be rewritten as
(4.11)
For solving the mentioned problem, one should find a q that satisfies the Karush-Kuhn-Tucker (KKT) conditions (Boyd & Vandenberghe, 2004) for
(4.12)
(4.13)
where λ and α are the parameters of the dual problem. Defining γ = λ/(1+λ) and setting the partial derivative to zero, we have
(4.14)
where α is always chosen such that Const(α, γ) is the normalization factor:
(4.15)
According to the KKT conditions, λ ≥ 0, and as a result, γ ∈ [0, 1]. Therefore, considering p1(θ) = π̂B(t+1)(θ) and p2(θ) = P(θ|yt+1), the solution to the optimization problem of equations 2.14 and 2.15 is equation 2.13.
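On a discrete grid, the resulting geometric mixture can be written compactly (a sketch; p1 and p2 are probability vectors):

```python
import numpy as np

def varsmile_mixture(p1, p2, gamma):
    """Solution of the variational problem: q* proportional to
    p1^(1-gamma) * p2^gamma, with p1 = pi_B_hat^(t+1) and p2 = P(theta|y_{t+1})."""
    q = p1 ** (1.0 - gamma) * p2 ** gamma
    return q / q.sum()               # normalization, cf. equation 4.15
```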

4.2.2  Proof of the Claim That B Is a Decreasing Function of Surprise

According to the KKT conditions,
(4.16)
For the case that λ ≠ 0 (i.e., γ ≠ 0), we have B as a function of γ:
(4.17)
Now we show that the derivative of B(γ) with respect to γ is always nonpositive. To do so, we first compute the derivative of Z(γ) as
(4.18)
and the derivative of Eq*[h(θ)] for an arbitrary h(θ) as
(4.19)
Using the previous equations, we have
(4.20)
which means that B is a decreasing function of γ. Because γ is an increasing function of surprise, B is also a decreasing function of surprise.

4.3  Derivations of Message Passing N: Algorithm 2

For the sake of clarity and coherence, we repeat here some steps performed in section 2.

Following the idea of Adams and MacKay (2007), we first define the random variable Rt = min{n ∈ ℕ : Ct−n+1 = 1}, the time elapsed since the last change point. Then the exact Bayesian form of π(t)(θ) can be written as
(4.21)
To have a formulation similar to the one of Particle Filtering, we rewrite the belief as
(4.22)
where πk(t)(θ)=P(Θt=θ|Rt=t-k,y1:t) is the term corresponding to Rt=t-k, and wt(k)=P(Rt=t-k|y1:t) is its corresponding weight at time t.
To update the belief after observing yt+1, one can use the exact Bayesian recursive formula, equation 2.10, for which one needs to compute πB(t+1)(θ) as
(4.23)
Using Bayes' rule and the conditional independence of observations, we have
(4.24)
and once again, by using Bayes' rule and the conditional independence of observations, we find
(4.25)
This gives us
(4.26)
and finally,
(4.27)
Using the recursive formula, the update rule for the weights for 0 ≤ k ≤ t−1 is
(4.28)
and for the newly added particle t,
(4.29)
where γt+1 = γ(SBF(yt+1; π(t)), m) with m = pc/(1−pc), as in equation 2.9.

The MPN algorithm uses equations 4.22, 4.28, and 4.29 to compute the belief for t ≤ N, which is the same as exact Bayesian inference. For t > N, it first updates the weights in the same fashion as equations 4.28 and 4.29, keeps the N largest weights, and sets the remaining weights to 0. After normalizing the new weights, it uses equation 4.22 (but only over the particles with nonzero weights) to compute the belief π̂(t). (For the particular case of the exponential family, see algorithm 2 for the pseudocode.)
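In code, one MPN step then amounts to the following sketch (names are ours; the marginal probabilities P(yt+1; πk(t)) are assumed given, e.g., by the exponential family formulas of section 4.6):

```python
import numpy as np

def mpn_step(weights, p_y_parts, p_y_prior, pc, N):
    """One MPN weight update (equations 4.28 and 4.29) with truncation.

    weights:   current weights w_t^(k), summing to 1.
    p_y_parts: P(y_{t+1}; pi_k^(t)) for each current particle k.
    p_y_prior: P(y_{t+1}; pi^(0)) for the newly added particle.
    """
    p_y_cur = float(np.dot(weights, p_y_parts))   # P(y_{t+1}; pi^(t))
    s_bf = p_y_prior / p_y_cur
    m = pc / (1 - pc)
    gamma = m * s_bf / (1 + m * s_bf)             # equation 2.9
    w = np.append((1 - gamma) * weights * p_y_parts / p_y_cur, gamma)
    if len(w) > N:                                # keep the N largest weights
        w[np.argsort(w)[:-N]] = 0.0
    return w / w.sum()
```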

4.4  Derivation of the Weight Update for Particle Filtering: Algorithm 3

We derive here the weight update for the particle filter. The difference of our formalism from a standard derivation (Särkkä, 2013) is the absence of the Markov property of the conditional observations (i.e., P(yt+1|c1:t+1, y1:t) ≠ P(yt+1|ct+1)). Our goal is to perform the approximation
(4.30)
Given a proposal sampling distribution Q, for the weight of particle i at time t+1, we have
(4.31)
where the only assumption for the proposal distribution Q is that the previous hidden states c1:t(i) are independent of the next observation, yt+1, which allows keeping the previous samples c1:t(i) when going from c1:t(i) to c1:t+1(i) and writing the update of the weights in a recursive way (Särkkä, 2013).
Notice that wt(i) ∝ P(c1:t(i)|y1:t) / Q(c1:t(i)|y1:t) are the weights calculated at the previous time step. Therefore,
(4.32)
For the choice of Q, we use the optimal proposal function in terms of variance of the weights (Doucet et al., 2000):
(4.33)
Using Bayes' rule and equations 4.32 and 4.33, after a few steps of algebra, we have
(4.34)
Using the definition in equation 2.6, we have P(yt+1|c1:t(i),y1:t,ct+1(i)=0)=P(yt+1;π^i(t)) and P(yt+1|c1:t(i),y1:t,ct+1(i)=1)=P(yt+1;π(0)). Therefore, we have
(4.35)
where Z is the normalization factor,
(4.36)
where we have
(4.37)
We now compute the weights corresponding to πB(t+1) as defined in equation 2.8
(4.38)
Combining equations 4.35, 4.36, and 4.38, we can then rewrite the weight update rule as
(4.39)
where γt+1 = γ(SBF(yt+1; π̂(t)), m) with m = pc/(1−pc), as in equation 2.9.
At every time step t+1 we sample each particle's hidden state ct+1 from the proposal distribution. Using equation 4.33, we have
(4.40)
We implemented the Sequential Importance Resampling algorithm (Doucet et al., 2000; Gordon et al., 1993), where the particles are resampled when their effective number falls below a threshold. The effective number of the particles is defined as (Doucet et al., 2000; Särkkä, 2013)
(4.41)
When Neff is below a critical threshold, the particles are resampled with replacement from the categorical distribution defined by their weights, and all weights are set to wt(i) = 1/N. We did not optimize the threshold and, following Doucet and Johansen (2009), performed resampling whenever Neff ≤ N/2.
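A sketch of this criterion, using the standard estimate Neff = 1/Σi (wt(i))² (array names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def maybe_resample(weights, states):
    """Resample particles with replacement when N_eff <= N/2.

    N_eff = 1 / sum_i w_i^2 is the effective number of particles;
    after resampling, all weights are reset to 1/N.
    """
    n = len(weights)
    n_eff = 1.0 / float(np.sum(weights ** 2))
    if n_eff <= n / 2:
        idx = rng.choice(n, size=n, p=weights)   # sample with replacement
        return np.full(n, 1.0 / n), states[idx]
    return weights, states
```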

4.5  Surprise-Modulation as a Framework for Other Algorithms

4.5.1  SMiLe Rule

The Confidence Corrected Surprise (Faraji et al., 2018) is
(4.42)
where P˜(θ|yt+1) is the scaled likelihood defined as
(4.43)

Note that this becomes equal to P(θ|yt+1) if the prior belief π(0) is a uniform distribution (see equation 2.11).

With the aim of minimizing the Confidence Corrected Surprise by updating the belief over time, Faraji et al. (2018) suggested an update rule solving the optimization problem,
(4.44)
where Bt+1 ∈ [0, DKL[P(θ|yt+1) || π̂(t)(θ)]] is an arbitrary bound. The authors showed that the solution to this optimization problem is
(4.45)
where γt+1 ∈ [0, 1] is specified so that it satisfies the constraint in equation 4.44.

Although equation 4.45 looks very similar to equation 2.13, it signifies a trade-off between the latest belief π^(t) and the belief updated by only the most recent observation P˜(θ|yt+1), that is, a trade-off between adherence to the current belief and reset. While SMiLe adheres to the current belief π^(t), Variational SMiLe integrates the new observation with the current belief to get π^B(t), which leads to a trade-off similar to the one of the exact Bayesian inference (see equations 2.12 and 2.10).

To modulate the learning rate by surprise, Faraji et al. (2018) considered the boundary Bt+1 as a function of the Confidence Corrected Surprise,
(4.46)
where m is a free parameter. Then, γt+1 is found by satisfying the constraint of the optimization problem in equation 4.44 using equations 4.45 and 4.46.

4.5.2  Nassar's Algorithm

For the particular case that observations are drawn from a gaussian distribution with known variance and unknown mean, yt+1|μt+1 ∼ N(μt+1, σ²) and θt = μt, Nassar et al. (2012, 2010) considered the problem of estimating the expected μt and its variance rather than a probability distribution (i.e., a belief) over it, implicitly assuming that the belief is always a gaussian distribution. The algorithms of Nassar et al. (2012, 2010) were developed for the case that, whenever the environment changes, the mean μt+1 is drawn from a uniform prior with a range of values much larger than the width of the gaussian likelihood function. The authors showed that in this case, the expected μt+1 (i.e., μ̂t+1) estimated by the agent upon observing a new sample yt+1 is
(4.47)
with αt+1 the adaptive learning rate given by
(4.48)
where r̂t is the estimated time since the last change point (i.e., the estimate of Rt = min{n ∈ ℕ : Ct−n+1 = 1}) and Ωt+1 = P(ct+1 = 1|y1:t+1) is the probability of a change given the observations. Note that this quantity (i.e., the posterior change point probability) is the same as our adaptation rate γt+1 of equation 2.9.
Figure 12:

Gaussian estimation task: Transient performance after changes for original algorithms of Nassar et al. (2010) and Nassar et al. (2012). Mean squared error for the estimation of μt at each time step n after an environmental change, that is, the average of MSE[Θ^t|Rt=n] over time; σ=0.1,pc=0.1 (left panel) and σ=5,pc=0.01 (right panel). The shaded area corresponds to the standard error of the mean. Nas10*, Nas12*: variants of Nassar et al. (2010) and Nassar et al. (2012), respectively, Nas10 Original, Nas12 Original: Original algorithms of Nassar et al. (2010) and Nassar et al. (2012), respectively.


We next extend their approach to the more general case where the prior is a gaussian distribution with arbitrary variance, μt+1 ∼ N(μ0, σ0²). We then discuss the relation of this method to Particle Filtering. A performance comparison between our extended algorithms Nas10* and Nas12* and their original versions Nas10 and Nas12 is depicted in Figures 12 and 13.

4.5.3  Nas10* and Nas12* Algorithms

Let us consider that y1:t are observed, the time since the last change point rt is known, and the agent's current estimation of μt is μ^t. It can be shown (see the appendix for the derivation) that the expected μt+1 (i.e., μ^t+1) upon observing the new sample yt+1 is
(4.49)
where ρ = σ²/σ0², μ0 is the mean of the prior distribution, and γt+1 is the adaptation rate of equation 2.9.
Figure 13:

Gaussian estimation task: Steady-state performance for original algorithms of Nassar et al. (2010) and Nassar et al. (2012). Difference between the mean squared error of each algorithm and the optimal solution (Exact Bayes), that is, the average ΔMSE[Θ^t] over time for each combination of environmental parameters σ and pc. Abbreviations: Nas10*, Nas12*: Variants of Nassar et al. (2010) and Nassar et al. (2012) respectively; Nas10 Original, Nas12 Original: Original algorithms of Nassar et al. (2010) and Nassar et al. (2012) respectively.


We can see that the updated mean is a weighted average, with surprise-modulated weights, between integrating the new observation with the current mean μ^t and integrating it with the prior mean μ0, in the same spirit as the other algorithms we considered here. Equation 4.49 can also be seen as a surprise-modulated weighted sum of two delta rules: one including a prediction error between the new observation and the current mean (yt+1-μ^t) and one including a prediction error between the observed sample and the prior mean (yt+1-μ0).

In order to obtain a form similar to the one of Nassar et al. (2012, 2010), we can rewrite the above formula as
(4.50)
where we have defined αt+1 = (ρ + γt+1 rt + 1)/(ρ + rt + 1). Hence, the update rule takes the form of a weighted average, with fixed weights, between two delta rules: one including a prediction error between the prior mean and the current mean (μ0 − μ̂t) and one including a prediction error between the observed sample and the current mean (yt+1 − μ̂t), both with surprise-modulated learning rates.

In Nassar et al. (2012, 2010), the true new mean after a change point is drawn from a uniform distribution with a range of values much larger than the width of the gaussian likelihood. Their derivations implicitly approximate the uniform distribution by a gaussian distribution with σ0 ≫ σ. Note that if σ0 ≫ σ, then ρ → 0, so that the first term of equation 4.50 disappears and αt+1 = (1 + γt+1 rt)/(1 + rt). This results in the delta rule of the original algorithm in equations 4.47 and 4.48, with γt+1 = Ωt+1.

All of the calculations so far assumed that rt is known. However, in a nonstationary regime with a history of change points, the time interval rt is not known. Nassar et al. (2012, 2010) used the expected time interval r̂t as an estimate. We make a distinction here between Nassar et al. (2012) and Nassar et al. (2010):

In Nassar et al. (2010), r̂t is calculated recursively on each trial in the same spirit as equation 2.10: r̂t+1 = (1 − γt+1)(r̂t + 1) + γt+1; that is, at each time step, there is a probability (1 − γt+1) that r̂t increments by 1 and a probability γt+1 that it is reset to 1, and r̂t+1 is the weighted sum of these two outcomes. Hence, equation 4.50 combined with the expected time interval r̂t constitutes a generalization of the update rule of Nassar et al. (2010) to the case of a gaussian prior N(μ0, σ0²). We call this algorithm Nas10* (see the appendix for pseudocode).
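The following sketch combines equation 4.49 with this recursive interval estimate (γ is assumed to be given by equation 2.9, and ρ = σ²/σ0²):

```python
def nas10_star_step(mu_hat, r_hat, y, gamma, mu0, rho):
    """One Nas10* update: equation 4.49 plus the recursive estimate of r_hat.

    The new mean is a gamma-weighted average between integrating y with the
    current mean (interval r_hat) and integrating it with the prior mean.
    """
    integrate = mu_hat + (y - mu_hat) / (rho + r_hat + 1)   # no change
    reset = mu0 + (y - mu0) / (rho + 1)                     # change: start from the prior
    mu_new = (1 - gamma) * integrate + gamma * reset        # equation 4.49
    r_new = (1 - gamma) * (r_hat + 1) + gamma               # expected interval
    return mu_new, r_new
```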

In Nassar et al. (2012), the variance σ̂t+1² = Var[μt+1|y1:t+1] is estimated given μ̂t, r̂t, and σ̂t². Based on this variance, r̂t+1 = σ²/σ̂t+1² − σ²/σ0² is computed. The derivation of the recursive computation of σ̂t+1² for the case of gaussian priors can be found in the appendix. We call the combination of equation 4.50 with this way of computing the expected time interval r̂t Nas12* (see the appendix for pseudocode). These two ways of calculating r̂t in Nassar et al. (2010) and Nassar et al. (2012) give different results, and we compare our algorithms with both Nas10* and Nas12* in our simulations. Note that, as discussed in section 2.1, the posterior belief at time t+1 does not generally belong to the same family of distributions as the belief at time t. Therefore, for both algorithms, we approximate the posterior belief P(θ|y1:t+1) by a gaussian.

4.5.4  Nassar's Algorithm and Particle Filtering with One Particle

In the case of Particle Filtering (see equation 2.23) with only one particle, at each time step we sample the particle's hidden state with change probability Q(ct+1(1) = 1|c1:t(1), y1:t+1) = γt+1, generating a posterior belief that takes two possible values, with probabilities (according to the proposal distribution)
(4.51)
So, in expectation, the updated belief will be
(4.52)
If we apply equation 4.52 to μ̂t+1, we find that EQ[μ̂t+1] is identical to the generalization of Nassar et al. (2010) (see equation 4.49).
Moreover, in Particle Filtering with a single particle, we sample the particle's hidden state, which is equivalent to sampling the interval R̂t+1. Because R̂t+1 takes the value r̂t + 1 with probability (1 − γt+1) and the value 1 (i.e., a reset) with probability γt+1, the expected value of R̂t+1 is
(4.53)
In other words, in Nassar et al. (2010), the belief is updated based on the expected r̂t, whereas in Particle Filtering with one particle, the belief is updated using the sampled r̂t.

In summary, the two methods will give different estimates on a trial-per-trial basis, but the same result in expectation. The pseudocode for Particle Filtering with one particle for the particular case of the gaussian estimation task is in the appendix.

4.6  Application to the Exponential Family

For our three algorithms—Variational SMiLe, Message Passing with fixed number N of particles, and Particle Filtering—we derive compact update rules for π^(t+1)(θ) when the likelihood function PY(y|θ) is in the exponential family and π(0)(θ) is its conjugate prior. In that case, the likelihood function has the form
(4.54)
where θ is the vector of natural parameters, h(y) is a positive function, ϕ(y) is the vector of sufficient statistics, and A(θ) is the normalization factor. Then the conjugate prior π(0) has the form
(4.55)
where χ(0) and ν(0) are the distribution parameters, h˜(θ) is a positive function, and fχ(0),ν(0) is the normalization factor. For this setting and while π(t)=PπΘ=θ;χ(t),ν(t), the Bayes Factor Surprise has the compact form
(4.56)
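The gaussian case is a convenient concrete instance: both marginals are gaussians, so SBF can be computed in closed form (a sketch derived from the definition in equation 2.7; the helper name is ours):

```python
import numpy as np
from scipy.stats import norm

def s_bf_gaussian(y, mu_hat, var_hat, mu0, var0, sigma2):
    """Bayes Factor Surprise for a gaussian likelihood with known variance.

    For a gaussian belief N(mu, var) over the mean, the marginal probability
    of the next observation is N(y; mu, var + sigma2), so S_BF is the ratio
    of the prior-belief marginal to the current-belief marginal.
    """
    p_prior = norm.pdf(y, loc=mu0, scale=np.sqrt(var0 + sigma2))
    p_current = norm.pdf(y, loc=mu_hat, scale=np.sqrt(var_hat + sigma2))
    return p_prior / p_current
```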

The pseudocode for Variational SMiLe, MPN, and Particle Filtering can be seen in algorithms 1, 2, and 3, respectively.

4.7  Simulation Task

In this section, we first argue why the mean squared error is a proper measure for comparing different algorithms with one another, and then we explain the version of the leaky integrator that we used in our simulations.

4.7.1  Mean Squared Error as an Optimality Measure

Consider the case that at each time point t, the goal of an agent is to estimate the parameter Θt as a function of the observations Y1:t, that is, Θ̂t = f(Y1:t). The estimator that minimizes the mean squared error MSE[Θ̂t] = EP(Y1:t, Θt)[(Θ̂t − Θt)²] is
(4.57)
which is the expected value of Θt conditioned on the observations Y1:t, or, in other words, under the Bayes-optimal current belief (see Papoulis & Saunders, 1989, for a proof). The MSE for any other estimator Θ̂t can be written as (see below for the proof)
(4.58)
This means that the MSE for any arbitrary estimator Θ̂t includes two terms: the optimal MSE and the mismatch between the actual estimator and the optimal estimator Θ̂tOpt. As a result, if the estimator we are interested in is the expected value of Θt under the approximate belief π̂(t) computed by each of our algorithms (i.e., Θ̂t' = Eπ̂(t)[Θt]), the second term in equation 4.58, that is, the deviation from optimality, is a measure of how good the approximation is.
Proof for the Algorithms without Sampling. Consider Θ^tOpt=fOpt(Y1:t). Then, for any other arbitrary estimator Θ^t=f(Y1:t) (except for the ones with sampling), we have
(4.59)
The quadratic term in the last line can be expanded and written as
(4.60)
The random variables in the expectation of the first line do not depend on Θt, so that expectation can be taken over Y1:t alone. The expected value of the second line is equal to MSE[Θ^tOpt]. It can also be shown that the expected value of the third line is equal to 0,
(4.61)
where in the last line, we used the definition of the optimal estimator. Together, we have
(4.62)
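The decomposition of equation 4.58 is easy to verify numerically. The following sketch (ours, for illustration only) does so in the simplest stationary conjugate-gaussian setting, where the optimal estimator is the posterior mean in closed form and the sample mean serves as an example of a suboptimal f(Y1:t):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, t, n_runs = 0.5, 10, 200_000

theta = rng.normal(0.0, 1.0, size=n_runs)            # Theta ~ N(0, 1)
y = theta[:, None] + np.sqrt(sigma2) * rng.normal(size=(n_runs, t))

theta_opt = (y.sum(axis=1) / sigma2) / (1.0 + t / sigma2)  # posterior mean (optimal)
theta_alt = y.mean(axis=1)                                 # suboptimal estimator

mse_opt = np.mean((theta_opt - theta) ** 2)   # optimal MSE
mse_alt = np.mean((theta_alt - theta) ** 2)   # MSE of the suboptimal estimator
gap = np.mean((theta_alt - theta_opt) ** 2)   # deviation from optimality

# The two sides of equation 4.58 agree up to Monte Carlo error:
print(mse_alt, mse_opt + gap)
```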

Proof for the Algorithms with Sampling. For particle filtering (and any kind of estimator with sampling), the estimator is not a deterministic function of the observations Y1:t. Rather, it is a function of the observations as well as of a set of random variables (samples) drawn from a distribution that itself depends on Y1:t. In our case, the samples are the sequences of hidden states C1:t. The estimator can be written as
(4.63)
where C1:t(1:N) are N independent and identically distributed samples drawn from the proposal distribution Q(C1:t|Y1:t). The MSE of this estimator should also be averaged over the samples, which leads to
(4.64)
Similar to what we did before, the MSE for particle filtering can be written as
(4.65)
which can be written in terms of bias and variance over samples as
(4.66)

4.7.2  Leaky Integration

For the gaussian task, the goal is to estimate the mean of the gaussian distribution at each time t, denoted by θ^t. Given a leak parameter ω ∈ (0, 1], the leaky integrator estimate is
θ^t = (Σ_{t'=1}^{t} ω^{t−t'} y_{t'}) / (Σ_{t'=1}^{t} ω^{t−t'}).  (4.67)
For the categorical task, the goal is to estimate the parameters of the categorical distribution at each time t, denoted by θ^t = [θ^i,t] for categories i = 1, ..., N. Given a leak parameter ω ∈ (0, 1], the leaky integrator estimate is
θ^i,t = (Σ_{t'=1}^{t} ω^{t−t'} δ(y_{t'}, i)) / (Σ_{t'=1}^{t} ω^{t−t'}),  (4.68)
where δ is the Kronecker delta function.
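Both estimates can be computed online with two leaky accumulators, a discounted sum and a discounted count. A minimal sketch, assuming the normalized discounted form of equations 4.67 and 4.68:

```python
import numpy as np

def leaky_gaussian(ys, omega):
    """Leaky-integrator estimate of the gaussian mean: at each t, the
    omega-discounted average of y_1, ..., y_t (equation 4.67)."""
    s, n, est = 0.0, 0.0, []
    for y in ys:
        s = omega * s + y        # discounted sum of observations
        n = omega * n + 1.0      # discounted count (normalizer)
        est.append(s / n)
    return np.array(est)

def leaky_categorical(ys, omega, n_cat):
    """Same discounting for a categorical distribution over n_cat categories;
    the Kronecker delta of equation 4.68 becomes a one-hot count update."""
    counts, n, est = np.zeros(n_cat), 0.0, []
    for y in ys:                 # y is a category index in {0, ..., n_cat - 1}
        counts *= omega
        counts[y] += 1.0         # delta(y_t', i)
        n = omega * n + 1.0
        est.append(counts / n)
    return np.array(est)
```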

4.8  Derivation of the Formula Relating Shannon Surprise to the Modulated Learning Rate

Given the defined generative model, the Shannon surprise upon observing yt+1 can be written as
(4.69)
where γt+1 = γ(SBF(yt+1; π(t)), m) with m = pc/(1 − pc), as defined in equation 2.9. As a result, the modulated adaptation rate can be written as in equation 2.25 and the Bayes Factor Surprise as in equation 3.2.

4.9  Experimental Predictions

4.9.1  Setting

Consider a gaussian task where Yt can take values in R. The likelihood function PY(y|θ) is defined as
PY(y|θ) = N(y; θ, σ²),  (4.70)
where σ ∈ R+ is the standard deviation and θ ∈ R is the mean of the distribution, that is, the parameter of the likelihood. Whenever there is a change in the environment (with probability pc ∈ (0, 1)), the value θ is drawn from the prior distribution π(0)(θ) = N(θ; 0, 1).

4.9.2  Theoretical Proofs for Prediction 1

For our theoretical derivations for our first prediction, we make the specific but relatively mild assumption that the subjects' belief π(t) at each time is a gaussian distribution,
π(t)(θ) = N(θ; θ^t, σ^t²),  (4.71)
where θ^t and σ^t are determined by the learning algorithm and the sequence of observations y1:t. This is the case when the subjects use VarSMiLe, Nas10*, Nas12*, pf1, MP1, or Leaky Integration as their learning rule. Under this assumption, the inferred probability distribution P(y; π(t)) can be written as
P(y; π(t)) = N(y; θ^t, σ² + σ^t²).  (4.72)
As mentioned in section 2, we define, at time t, the prediction error as δt+1 = yt+1 − θ^t and the “sign bias” as st+1 = sign(δt+1 · θ^t). Then, given an absolute prediction θ^ > 0, an absolute prediction error δ > 0, a standard deviation σC, and a sign bias s ∈ {−1, +1}, the average Bayes Factor Surprise is computed as
(4.73)
It can easily be shown that the value SBF(yt; π^(t−1)) is the same for all t ∈ T, and hence the average surprise is the same as the surprise at each time point. For example, the average surprise for s = +1 is equal to
(4.74)
Similar formulas can be computed for S¯BF(θ^,δ,s=-1,σC). Then the difference ΔS¯BF(θ^,δ,σC)=S¯BF(θ^,δ,s=+1,σC)-S¯BF(θ^,δ,s=-1,σC) can be computed as
(4.75)
It can be shown that
(4.76)
for all θ^>0, δ>0, and σC>0. The first inequality is trivial, and the proof for the second inequality is given below.
The average Shannon Surprise can be computed in a similar way. For example, for s=+1, we have
(4.77)
and then the difference ΔS¯Sh(θ^,δ,σC)=S¯Sh(θ^,δ,s=+1,σC)-S¯Sh(θ^,δ,s=-1,σC) can be computed as
(4.78)
where m = pc/(1 − pc). Then, using the results for the Bayes Factor Surprise, we have
(4.79)
for all θ^>0, δ>0, and σC>0.
Proof of the Second Inequality for SBF. Let us define the variables
(4.80)
as well as the functions
(4.81)
The following inequalities hold true:
(4.82)
Then, the derivative of f(δ) can be computed as
(4.83)
Therefore, we have ∂ΔS̄BF(θ^, δ, σC)/∂δ = df(δ)/dδ < 0.
Proof of the Second Inequality for SSh. Using the functions we defined for the previous proof and after computing the partial derivative of equation 4.78, we have
(4.84)
The derivative of the last term can be written in terms of the derivatives of f1 and f2, denoted by f1' and f2', respectively,
(4.85)
The first term is always positive based on the proof for SBF. The second term is also always positive because
(4.86)
As a result, we have ∂ΔS̄Sh(θ^, δ, σC)/∂δ > 0.

4.9.3  Simulation Procedure for Prediction 1

In order to relax the main assumption of our theoretical proofs (that the belief is always a gaussian distribution), to include the practical constraints of a real experiment (e.g., using |δt| ≈ δ instead of |δt| = δ), and to obtain an estimate of the effect size, we also performed simulations for our first experimental prediction.

For each simulated subject, the procedure of our simulation was as follows:

  1. We fixed the hyperparameters σ2 and pc for producing samples.

  2. We selected a learning algorithm (e.g., pf20) and fixed its corresponding tuned parameters (based on our simulations in section 2).

  3. We applied the learning algorithm to a sequence of observations y1:T. Note that in a real experiment, this step can be split across a few episodes, which makes it possible to have a long sequence of observations (i.e., large T).

  4. At each time t, we saved the values yt, θ^t, σ^t, δt, st, SSh(yt;π^(t-1)), and SBF(yt;π^(t-1)).

Then, given an absolute prediction θ^ > 0, an absolute prediction error δ > 0, a standard deviation σC > 0, and a sign bias s ∈ {−1, +1}, we defined the set of time points,
(4.87)
where ||θ^t−1| − θ^| < Δθ, ||δt| − δ| < Δδ, and |σ^t − σC| < ΔσC are equivalent to |θ^t−1| ≈ θ^, |δt| ≈ δ, and σ^t ≈ σC, respectively. Δθ, Δδ, and ΔσC are positive real values that should be determined based on practical limitations (mainly the length of the observation sequence T). We then computed the average surprise values as
(4.88)
We repeated this procedure for N different simulated subjects (with different random seeds). The average of S̄BF and S̄Sh over N = 20 subjects, for two learning algorithms (Nas12* and pf20) and for T = 500, θ^ = 1, σC = 0.5, Δθ = 0.25, Δδ = 0.1, and ΔσC = 1, is shown in Figure 10B. The results match our theoretical predictions.
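In code, the binning of equations 4.87 and 4.88 amounts to a masked average over the saved quantities. A sketch of this analysis step (ours; the array names are hypothetical) is:

```python
import numpy as np

def average_surprises(log, theta_hat, delta, sigma_c,
                      d_theta=0.25, d_delta=0.1, d_sigma=1.0):
    """Empirical version of equations 4.87 and 4.88: average S_BF and S_Sh
    over the time points whose saved quantities fall in the requested bins,
    separately for the two sign biases s = +1 and s = -1.  `log` is a dict
    of arrays saved while running the learning algorithm, with keys
    'theta_prev' (theta_hat at t-1), 'sigma_hat', 'delta', 's', 's_bf', 's_sh'."""
    out = {}
    for s in (+1, -1):
        mask = ((np.abs(np.abs(log["theta_prev"]) - theta_hat) < d_theta)
                & (np.abs(np.abs(log["delta"]) - delta) < d_delta)
                & (np.abs(log["sigma_hat"] - sigma_c) < d_sigma)
                & (log["s"] == s))
        out[s] = (log["s_bf"][mask].mean(), log["s_sh"][mask].mean())
    return out
```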

4.9.4  Simulation Procedure for Prediction 2

For our second prediction, the theoretical proof is trivial. However, in order to have a setting similar to a real experiment (e.g., using P(yt+1; π^(t)) ≈ p instead of P(yt+1; π^(t)) = p) and to obtain an estimate of the effect size, we also used simulations for our second experimental prediction.

We followed the same procedure as for the simulation of the first prediction. For each simulated subject and at each time t, we saved the quantities P(yt+1; π^(t)), P(yt+1; π^(0)), SSh(yt; π^(t−1)), and SBF(yt; π^(t−1)). Then, for a given probability value p > 0, we defined the set of time points,
(4.89)
where |P(yt+1; π^(t)) − p| < Δp and |P(yt+1; π^(0)) − p| < Δp are equivalent to P(yt+1; π^(t)) ≈ p and P(yt+1; π^(0)) ≈ p, respectively. Δp is a positive real value that should be determined based on practical limitations (mainly the length of the observation sequence T). We then computed the average surprises S̄BF(p) and S̄Sh(p) over T for each value of p. We repeated this procedure for N different simulated subjects (with different random seeds). The average of S̄BF and S̄Sh over N = 20 subjects, for two learning algorithms (Nas12* and pf20) and for T = 500 and Δp = 0.0125, is shown in Figure 11B.
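The analysis step is the prediction-2 analogue of the binning above, conditioning on both predictive probabilities being close to p (equation 4.89); again a sketch with hypothetical array names:

```python
import numpy as np

def average_surprises_p(log, p, d_p=0.0125):
    """Average S_BF and S_Sh over the time points where both the current and
    the prior predictive probabilities are close to p (equation 4.89).
    `log` holds arrays 'p_curr' = P(y_{t+1}; pi_t), 'p_prior' = P(y_{t+1}; pi_0),
    's_bf', and 's_sh' saved during the run."""
    mask = (np.abs(log["p_curr"] - p) < d_p) & (np.abs(log["p_prior"] - p) < d_p)
    return log["s_bf"][mask].mean(), log["s_sh"][mask].mean()
```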

A.1  Modified Algorithm of Nassar et al. (2012, 2010): Adaptation for Gaussian Prior

A.1.1  Recursive Update of the Estimated Mean for Gaussian Prior

Let us first consider the case of a stationary regime (no change points), where observed samples are drawn from a gaussian distribution with known variance, yt+1|θ ∼ N(θ, σ²), and the parameter θ is itself drawn from a gaussian distribution, θ ∼ N(μ0, σ0²). After having observed the samples y1, ..., yt+1, it can be shown, using Bayes' rule, that the posterior distribution P(θ|y1:t+1) = πB(t+1)(θ) is
πB(t+1)(θ) = N(θ; μB,t+1, σB,t+1²),  with  μB,t+1 = (μ0/σ0² + Σ_{i=1}^{t+1} yi/σ²) / (1/σ0² + (t+1)/σ²)  and  σB,t+1² = 1 / (1/σ0² + (t+1)/σ²).  (A.1)

An estimate of θ is its expected value E(θ|y1:t+1)=μB,t+1.

In a nonstationary regime where, after having observed y1, ..., yt from the same hidden state, there is the possibility of a change point upon observing yt+1, the posterior distribution is
(A.2)
To facilitate notation, we denote ct+1=0 as “stay” and ct+1=1 as “change” so that
(A.3)
Note that the above is equivalent to the Bayesian recursive formula, equation 2.10 of the main text, where γt+1 is the adaptation rate we saw in equation 2.9 of the main text; it is essentially the probability of a change given the new observation, P(ct+1 = 1 | y1:t+1). In Nassar et al. (2010), this quantity is denoted by Ωt+1. Taking equation A.1 into account, we have
(A.4)
where rt is the time interval of observations coming from the same hidden state, calculated at time t. Taking the expectation of equation A.3, the estimated mean upon observing the new sample yt+1 is
(A.5)
where we dropped the subscript t + 1 in γ to simplify notation. We have
(A.6)
Because μ^t = (μ0/σ0² + Σ_{i=t+1−rt}^{t} yi/σ²) / (1/σ0² + rt/σ²), after a few lines of algebra, we have
(A.7)
We now define ρ = σ²/σ0² and find
(A.8)
A rearrangement of the terms and inclusion of the dependency of γ on time yields
(A.9)
In order to obtain a form similar to the one of Nassar et al. (2012, 2010), we continue and spell out the terms that include the quantities μ^t, μ0, and yt+1:
(A.10)
Using the identity 1/(ρ + rt + 1) = 1/(ρ + 1) − rt / ((ρ + 1)(ρ + rt + 1)), we have
(A.11)
After a further step of algebra, we arrive at
(A.12)
If we define 1 − α = (1 − γ)rt / (ρ + rt + 1), that is, α = 1 − (1 − γ)rt / (ρ + rt + 1) = (ρ + γrt + 1) / (ρ + rt + 1), and rearrange the terms, we have
(A.13)
Adding back the dependency of γ and α on time, we finally have
(A.14)
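The resulting recursion is easy to state in code. The sketch below (our condensation, not algorithm A1 itself) implements the expectation of equation A.3 directly, mixing the "stay" and "change" posterior means with weights (1 − γt+1) and γt+1, and tracks the expected interval of equation 4.53; the adaptation rate gamma is assumed to be computed beforehand from the surprise (equations 2.9 and A.4):

```python
def nassar_mean_step(mu_hat, r, y, gamma, mu0=0.0, sigma0=1.0, sigma=0.5):
    """Expected-mean update for the gaussian-prior variant of Nassar et al.
    (2010): a gamma-weighted mixture of the 'stay' posterior mean (all r + 1
    samples come from the current hidden state) and the 'change' posterior
    mean (only y comes from a fresh hidden state); rho = sigma^2 / sigma0^2."""
    rho = sigma**2 / sigma0**2
    mu_stay = ((rho + r) * mu_hat + y) / (rho + r + 1.0)  # integrate y with the past
    mu_change = (rho * mu0 + y) / (rho + 1.0)             # restart from the prior
    mu_new = (1.0 - gamma) * mu_stay + gamma * mu_change  # expectation of eq. A.3
    r_new = (1.0 - gamma) * (r + 1.0) + gamma             # expected interval, eq. 4.53
    return mu_new, r_new
```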

A.1.2  Recursive Update of the Estimated Variance for Gaussian Prior

Nassar et al. (2012) first calculate the variance σ^t+1² = Var(θ | y1:t+1) and then, based on this, compute r^t+1. We derive these calculations here for the case of a gaussian prior. We note again that
(A.15)
Then for the variance σ^t+1² = Var(θ | y1:t+1), we have
(A.16)
where σB,t+1² = 1 / (1/σ0² + (rt + 1)/σ²) and σchange² = 1 / (1/σ0² + 1/σ²).
We have already defined ρ = σ²/σ0², so that
(A.17)
Using, as before, the identity 1/(ρ + rt + 1) = 1/(ρ + 1) − rt / ((ρ + 1)(ρ + rt + 1)), we have
(A.18)
We have already defined the learning rate α = 1 − (1 − γ)rt / (ρ + rt + 1), so we can write
(A.19)
Note that μ^t = (μ0/σ0² + Σ_{i=t+1−rt}^{t} yi/σ²) / (1/σ0² + rt/σ²), so for the calculation of the last term, we have
(A.20)
We now rearrange the terms
(A.21)
and finally we have
(A.22)

A.2  Implementation of Nas10*, Nas12*, and Particle Filtering with One Particle for the Gaussian Estimation Task

We provide here the pseudocode for the algorithms Nas10*, Nas12*, and Particle Filtering with one particle for the gaussian estimation task (see algorithms A1, A2, and A3). Observations are drawn from a gaussian distribution with known variance and unknown mean, yt+1|μt+1 ∼ N(μt+1, σ²), with θt = μt. When there is a change, the parameter μ is drawn anew from a gaussian distribution, μ ∼ N(μ0, σ0²). All three algorithms estimate the expected μt+1 (i.e., μ^t+1) upon observing a new sample yt+1.

After rewriting equation 4.56, we have
(A.23)

Note that the pseudocode for pf1 provided here is a translation of algorithm 3 to the case of a single particle (where there are no weights to calculate) and to the gaussian distribution as a particular instance of the exponential family.

1. Liakoni, V., Lehmann, M. P., Modirshanechi, A., Brea, J., Herzog, M. H., Gerstner, W., & Preuschoff, K. Dissociating brain regions encoding reward prediction error and surprise. Manuscript in preparation.

We thank both reviewers for their constructive and helpful comments. This research was supported by the Swiss National Science Foundation (200020 184615) and the European Union Horizon 2020 Framework Program under grant agreement 785907 (Human Brain Project, SGA2).

References

Adams, R. P., & MacKay, D. J. (2007). Bayesian online changepoint detection. arXiv:0710.3742.
Aminikhanghahi, S., & Cook, D. J. (2017). A survey of methods for time series change point detection. Knowledge and Information Systems, 51(2), 339–367.
Barber, D. (2006). Expectation correction for smoothed inference in switching linear dynamical systems. Journal of Machine Learning Research, 7, 2515–2540.
Barber, D. (2012). Bayesian reasoning and machine learning. Cambridge: Cambridge University Press.
Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. Ph.D. diss., University College London.
Behrens, T. E., Woolrich, M. W., Walton, M. E., & Rushworth, M. F. (2007). Learning the value of information in an uncertain world. Nature Neuroscience, 10(9), 1214.
Bogacz, R. (2017). A tutorial on the free-energy framework for modelling perception and learning. Journal of Mathematical Psychology, 76, 198–211.
Bogacz, R. (2019). Dopamine role in learning and action inference. bioRxiv:837641.
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press.
Brown, S. D., & Steyvers, M. (2009). Detecting and predicting changes. Cognitive Psychology, 58(1), 49–67.
Cummings, R., Krehbiel, S., Mei, Y., Tuo, R., & Zhang, W. (2018). Differentially private change-point detection. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems, 31 (pp. 10825–10834). Red Hook, NY: Curran.
d'Acremont, M., & Bossaerts, P. (2016). Neural mechanisms behind identification of leptokurtic noise and adaptive behavioral response. Cerebral Cortex, 26(4), 1818–1830.
Daw, N., & Courville, A. (2008). The pigeon as particle filter. In J. C. Platt, D. Koller, Y. Singer, & S. T. Roweis (Eds.), Advances in neural information processing systems, 20 (pp. 369–376). Red Hook, NY: Curran.
Doucet, A., Godsill, S., & Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3), 197–208.
Doucet, A., & Johansen, A. M. (2009). A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan & B. Rozovskii (Eds.), The Oxford handbook of nonlinear filtering (pp. 656–704). Oxford: Oxford University Press.
Doucet, A., & Tadić, V. B. (2003). Parameter estimation in general state-space models using particle methods. Annals of the Institute of Statistical Mathematics, 55(2), 409–422.
Efron, B., & Hastie, T. (2016). Computer age statistical inference. Cambridge: Cambridge University Press.
Faraji, M., Preuschoff, K., & Gerstner, W. (2018). Balancing new against old information: The role of puzzlement surprise in learning. Neural Computation, 30(1), 34–83.
Fearnhead, P., & Liu, Z. (2007). On-line inference for multiple changepoint problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(4), 589–605.
Findling, C., Chopin, N., & Koechlin, E. (2019). Imprecise neural computations as source of human adaptive behavior in volatile environments. bioRxiv:799239.
Fox, E., Sudderth, E. B., Jordan, M. I., & Willsky, A. S. (2011). Bayesian nonparametric inference of switching dynamic linear models. IEEE Transactions on Signal Processing, 59(4), 1569–1585.
Frémaux, N., & Gerstner, W. (2016). Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Frontiers in Neural Circuits, 9, 85.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49.
George, C. P., & Doss, H. (2017). Principled selection of hyperparameters in the latent Dirichlet allocation model. Journal of Machine Learning Research, 18, 1–38.
Gershman, S. J. (2019). What does the free energy principle tell us about the brain? arXiv:1901.07945.
Gershman, S. J., Monfils, M.-H., Norman, K. A., & Niv, Y. (2017). The computational nature of memory modification. eLife, 6, e23763.
Gershman, S. J., Radulescu, A., Norman, K. A., & Niv, Y. (2014). Statistical computations underlying the dynamics of memory updating. PLOS Computational Biology, 10(11), e1003939.
Gerstner, W., Lehmann, M., Liakoni, V., Corneil, D., & Brea, J. (2018). Eligibility traces and plasticity on behavioral time scales: Experimental support of neo-Hebbian three-factor learning rules. Frontiers in Neural Circuits, 12.
Ghahramani, Z., & Hinton, G. E. (2000). Variational learning for switching state-space models. Neural Computation, 12(4), 831–864.
Glaze, C. M., Kable, J. W., & Gold, J. I. (2015). Normative evidence accumulation in unpredictable environments. eLife, 4, e08825.
Gordon, N. J., Salmond, D. J., & Smith, A. F. (1993). Novel approach to nonlinear/non-gaussian Bayesian state estimation. IEE Proceedings (Radar and Signal Processing), 140, 107–113.
Heilbron, M., & Meyniel, F. (2019). Confidence resets reveal hierarchical adaptive learning in humans. PLOS Computational Biology, 15(4), e1006972.
Huang, Y., & Rao, R. P. (2014). Neurons as Monte Carlo samplers: Bayesian inference and learning in spiking networks. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 27 (pp. 1943–1951). Red Hook, NY: Curran.
Huettel, S. A., Mack, P. B., & McCarthy, G. (2002). Perceiving patterns in random series: Dynamic processing of sequence in prefrontal cortex. Nature Neuroscience, 5(5), 485–490.
Itti, L., & Baldi, P. F. (2006). Bayesian surprise attracts human attention. In Y. Weiss, B. Schölkopf, & J. Platt (Eds.), Advances in neural information processing systems, 18 (pp. 547–554). Cambridge, MA: MIT Press.
Joshi, S., & Gold, J. I. (2020). Pupil size as a window on neural substrates of cognition. Trends in Cognitive Sciences, 24.
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795.
Konovalov, A., & Krajbich, I. (2018). Neurocomputational dynamics of sequence learning. Neuron, 98(6), 1282–1293.
Kopp, B., & Lange, F. (2013). Electrophysiological indicators of surprise and entropy in dynamic task-switching environments. Frontiers in Human Neuroscience, 7, 300.
Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear Bayesian filtering and learning: A neuronal dynamics for perception. Scientific Reports, 7(1), 8722.
Legenstein, R., & Maass, W. (2014). Ensembles of spiking neurons with noise support optimal probabilistic inference in a dynamically changing environment. PLOS Computational Biology, 10(10).
Lieder, F., Daunizeau, J., Garrido, M. I., Friston, K. J., & Stephan, K. E. (2013). Modelling trial-by-trial changes in the mismatch negativity. PLOS Computational Biology, 9(2).
Lin, K., Sharpnack, J. L., Rinaldo, A., & Tibshirani, R. J. (2017). A sharp error analysis for the fused lasso, with application to approximate changepoint screening. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems, 30 (pp. 6884–6893). Red Hook, NY: Curran.
Lisman, J., Grace, A. A., & Duzel, E. (2011). A neohebbian framework for episodic memory; role of dopamine-dependent late LTP. Trends in Neurosciences, 34(10), 536–547.
Liu, J., & West, M. (2001). Combined parameter and state estimation in simulation-based filtering. In A. Doucet, N. de Freitas, & N. Gordon (Eds.), Sequential Monte Carlo methods in practice (pp. 197–223). Berlin: Springer.
Lomonaco, V., Desai, K., Culurciello, E., & Maltoni, D. (2019). Continual reinforcement learning in 3D non-stationary environments. arXiv:1905.1011.
Loued-Khenissi, L., Pfeuffer, A., Einhäuser, W., & Preuschoff, K. (2020). Anterior insula reflects surprise in value-based decision-making and perception. NeuroImage, 116549.
Maheu, M., Dehaene, S., & Meyniel, F. (2019). Brain signatures of a multiscale process of sequence learning in humans. eLife, 8, e41541.
Mars, R. B., Debener, S., Gladwin, T. E., Harrison, L. M., Haggard, P., Rothwell, J. C., & Bestmann, S. (2008). Trial-by-trial fluctuations in the event-related electroencephalogram reflect dynamic changes in the degree of surprise. Journal of Neuroscience, 28(47), 12539–12545.
Masegosa, A., Nielsen, T. D., Langseth, H., Ramos-López, D., Salmerón, A., & Madsen, A. L. (2017). Bayesian models of data streams with hierarchical power priors. In Proceedings of the 34th International Conference on Machine Learning, vol. 70 (pp. 2334–2343). PMLR.
Mathys, C., Daunizeau, J., Friston, K. J., & Stephan, K. E. (2011). A Bayesian foundation for individual learning under uncertainty. Frontiers in Human Neuroscience, 5, 39.
Meyniel, F., Maheu, M., & Dehaene, S. (2016). Human inferences about sequences: A minimal transition probability model. PLOS Computational Biology, 12(12), e1005260.
Modirshanechi, A., Kiani, M. M., & Aghajan, H. (2019). Trial-by-trial surprise-decoding model for visual and auditory binary oddball tasks. NeuroImage, 196, 302–317.
Musiolek, L., Blankenburg, F., Ostwald, D., & Rabovsky, M. (2019). Modeling the N400 brain potential as semantic Bayesian surprise. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience. https://ccneuro.org/2019/Papers/AcceptedPapers.asp
Nagabandi, A., Clavera, I., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., & Finn, C. (2018). Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv:1803.11347.
Nassar, M. R., Bruckner, R., & Frank, M. J. (2019). Statistical context dictates the relationship between feedback-related EEG signals and learning. eLife, 8, e46975.
Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., & Gold, J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nature Neuroscience, 15(7), 1040–1046.
Nassar, M. R., Wilson, R. C., Heasly, B., & Gold, J. I. (2010). An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment. Journal of Neuroscience, 30(37), 12366–12378.
Ostwald, D., Spitzer, B., Guggenmos, M., Schmidt, T. T., Kiebel, S. J., & Blankenburg, F. (2012). Evidence for neural encoding of Bayesian surprise in human somatosensation. NeuroImage, 62(1), 177–188.
Özkan, E., Šmídl, V., Saha, S., Lundquist, C., & Gustafsson, F. (2013). Marginalized adaptive particle filtering for nonlinear models with unknown time-varying noise parameters. Automatica, 49(6), 1566–1575.
Papoulis, A., & Saunders, H. (1989). Probability, random variables and stochastic processes. American Society of Mechanical Engineers Digital Collection.
Prat-Carrabin, A., Wilson, R. C., Cohen, J. D., & Da Silveira, R. A. (2020). Human inference in changing environments with temporal structure. bioRxiv:720516.
Preuschoff, K., t Hart, B. M., & Einhauser, W. (2011). Pupil dilation signals surprise: Evidence for noradrenaline's role in decision making. Frontiers in Neuroscience, 5, 115.
Ryali, C., Reddy, G., & Yu, A. J. (2018). Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems, 31 (pp. 2781–2790). Red Hook, NY: Curran.
Särkkä, S. (2013). Bayesian filtering and smoothing (Vol. 3). Cambridge: Cambridge University Press.
Schmidhuber, J. (2010). Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3), 230–247.
Schwartenbeck, P., FitzGerald, T., Dolan, R., & Friston, K. (2013). Exploration, novelty, surprise, and free energy minimization. Frontiers in Psychology, 4, 710.
Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.
Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, & A. Culotta (Eds.), Advances in neural information processing systems, 22 (pp. 1669–1677). Red Hook, NY: Curran.
Storck, J., Hochreiter, S., & Schmidhuber, J. (1995). Reinforcement driven information acquisition in non-deterministic environments. In Proceedings of the International Conference on Artificial Neural Networks (Vol. 2, pp. 159–164). Piscataway, NJ: IEEE.
Traoré, R., Caselles-Dupré, H., Lesort, T., Sun, T., Cai, G., Díaz-Rodríguez, N., & Filliat, D. (2019). DisCoRL: Continual reinforcement learning via policy distillation. arXiv:1907.05855.
Wilson, R. C., Nassar, M. R., & Gold, J. I. (2010). Bayesian online learning of the hazard rate in change-point problems. Neural Computation, 22(9), 2452–2476.
Wilson, R. C., Nassar, M. R., & Gold, J. I. (2013). A mixture of delta-rules approximation to Bayesian inference in change-point problems. PLOS Computational Biology, 9(7), e1003150.
Yu, A. J. (2012). Change is in the eye of the beholder. Nature Neuroscience, 15(7), 933.
Yu, A. J., & Cohen, J. D. (2009). Sequential effects: Superstition or rational behavior? In D. Koller, D. Schuurmans, Y. Bengio, & L. Bottou (Eds.), Advances in neural information processing systems, 21 (pp. 1873–1880). Red Hook, NY: Curran.
Yu, A. J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron, 46(4), 681–692.