
Table 1 provides a summary of these uncertainty-reducing processes, where uncertainty is associated with free energy formulations of surprise such that uncertainty-resolving behavior reduces expected free energy. To motivate and illustrate this formalism, we set ourselves the task of simulating a curious agent that spontaneously learned rules—governing the sensory consequences of her action—from limited and ambiguous sensory evidence (Lu et al., 2016; Tervo, Tenenbaum, & Gershman, 2016). We chose abstract rule learning to illustrate how conceptual knowledge could be accumulated through experience (Botvinick & Toussaint, 2012; Zhang & Maloney, 2012; Koechlin, 2015) and how implicit Bayesian belief updating can be accelerated by applying Bayesian principles not to sensory samples but to beliefs based on those samples. This structure learning (Tenenbaum et al., 2011; Tervo et al., 2016) is based on recent developments in Bayesian model selection, namely, Bayesian model reduction (Friston, Litvak et al., 2016). Bayesian model reduction refers to the evaluation of reduced forms of a full model to find simpler (reduced) models using only posterior beliefs (Friston & Penny, 2011). Reduced models furnish parsimonious explanations for sensory contingencies that are inherently more generalizable (Navarro & Perfors, 2011; Lu et al., 2016) and, as we will see, provide for simpler and more efficient inference. In brief, we use simulations of abstract rule learning to show that context-sensitive contingencies, which are manifest in a high-dimensional space of latent or hidden states, can be learned using straightforward variational principles (i.e., minimization of free energy). This speaks to the notion that people “use their knowledge of real-world environmental statistics to guide their search behavior” (Nelson et al., 2014).
We then show that Bayesian model reduction adds an extra level of inference, which rests on testing plausible hypotheses about the structure of internal or generative models. We will see that this process is remarkably similar to physiological processes in sleep, where redundant (synaptic) model parameters are eliminated to minimize model complexity (Hobson & Friston, 2012). We then show that qualitative changes in model structure emerge when Bayesian model reduction operates online during the assimilation of experience. The ensuing optimization of model evidence provides a plausible (Bayesian) account of abductive reasoning that looks very much like an “aha” moment. To simulate something akin to an aha moment requires a formalism that deals explicitly with probabilistic beliefs about states of the world and its causal structure. This contrasts with the sort of structure or manifold learning that predominates in machine learning (e.g., deep learning; LeCun, Bengio, & Hinton, 2015), where the objective is to discover structure in large data sets by learning the parameters of neural networks. This article asks whether abstract rules can be identified using active (Bayesian) inference, following a handful of observations and plausible, uncertainty-reducing hypotheses about how sensory outcomes are generated.
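To make the idea of Bayesian model reduction concrete, the following is a minimal sketch for Dirichlet-distributed model parameters (the conjugate form used for likelihood mappings in this setting). It evaluates the log evidence of a reduced prior relative to the full model using only the full model's prior and posterior concentration parameters, with no need to revisit the data. The numbers below are purely illustrative, not taken from the paper's simulations.

```python
from math import lgamma

def log_beta(a):
    """Log multivariate beta function of Dirichlet concentration parameters a."""
    return sum(lgamma(x) for x in a) - lgamma(sum(a))

def delta_log_evidence(prior, posterior, reduced_prior):
    """Change in log evidence for a reduced Dirichlet prior relative to the
    full model, computed from posterior beliefs alone:
    ln P(y | reduced) - ln P(y | full)."""
    # Posterior implied under the reduced prior (valid when all counts stay > 0).
    reduced_posterior = [q + r - p for q, r, p in zip(posterior, reduced_prior, prior)]
    return (log_beta(prior) + log_beta(reduced_posterior)
            - log_beta(posterior) - log_beta(reduced_prior))

# Illustrative (hypothetical) numbers: a three-outcome likelihood column with
# flat prior counts; experience concentrates the posterior on the first outcome.
prior = [1.0, 1.0, 1.0]
posterior = [8.0, 1.0, 1.0]   # prior plus seven observations of outcome 1
reduced = [1.0, 0.5, 0.5]     # reduced prior that prunes the unused outcomes

d = delta_log_evidence(prior, posterior, reduced)
# d > 0 means the simpler (reduced) model has greater evidence, so the
# redundant parameters can be eliminated post hoc.
```

A positive `d` here plays the role of the redundancy-eliminating step described in the text: the reduced prior is accepted whenever it increases model evidence (i.e., lowers free energy) relative to the full model.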

Table 1:
Sources of Uncertainty Scored by (Expected) Free Energy and the Behaviors Entailed by Its Minimization (Resolution of Uncertainty through Approximate Bayesian Inference).
Source of Uncertainty (Surprise) | Free Energy Minimization | Active Inference
Uncertainty about hidden states given a policy | With respect to expected states | Perceptual inference (state estimation)
Uncertainty about policies in terms of expected: future states (intrinsic value), future outcomes (extrinsic value), model parameters (novelty) | With respect to policies | Epistemic planning: intrinsic motivation, extrinsic motivation, curiosity
Uncertainty about model parameters given a model | With respect to parameters | Epistemic learning (active learning)
Uncertainty about the model | With respect to model | Structure learning (insight and understanding)
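The second row of Table 1 says that uncertainty about policies is resolved by choosing actions that minimize expected free energy. A toy sketch of this quantity, under the standard risk-plus-ambiguity decomposition, is given below; the two-state, two-outcome example and its numbers are illustrative assumptions, not the paper's simulation.

```python
from math import log

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * log(x) for x in p if x > 0)

def kl(p, q):
    """KL divergence (nats) between discrete distributions p and q."""
    return sum(x * log(x / y) for x, y in zip(p, q) if x > 0)

def expected_free_energy(qs, A, C):
    """Expected free energy of a policy as risk plus ambiguity.
    qs   : predicted hidden-state distribution under the policy
    A[s] : outcome distribution given state s (likelihood mapping)
    C    : preferred (prior) outcome distribution"""
    # Predicted outcome distribution under the policy.
    qo = [sum(qs[s] * A[s][o] for s in range(len(qs))) for o in range(len(C))]
    risk = kl(qo, C)                                        # expected vs preferred outcomes
    ambiguity = sum(qs[s] * entropy(A[s]) for s in range(len(qs)))
    return risk + ambiguity

# Illustrative setup: state 0 yields the preferred outcome precisely;
# state 1 yields outcomes at chance (an ambiguous likelihood mapping).
A = [[0.9, 0.1], [0.5, 0.5]]
C = [0.9, 0.1]
g_precise = expected_free_energy([1.0, 0.0], A, C)
g_ambiguous = expected_free_energy([0.0, 1.0], A, C)
# The policy leading to the precise, preferred state has the lower
# expected free energy, so it would be selected under active inference.
```

Minimizing this quantity over policies yields the epistemic, uncertainty-resolving behavior the table summarizes: ambiguity penalizes states with imprecise outcome mappings, and risk penalizes departures from preferred outcomes.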
