Abstract
All biological and artificial agents must act given limits on their ability to acquire and process information. As such, a general theory of adaptive behavior should be able to account for the complex interactions between an agent’s learning history, decisions, and capacity constraints. Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory. This body of work provides an account of capacity-limited Bayesian reinforcement learning, a unifying normative framework for modeling the effect of processing constraints on learning and action selection. Here, we provide an accessible review of recent algorithms and theoretical results in this setting, paying special attention to how these ideas can be applied to studying questions in the cognitive and behavioral sciences.
INTRODUCTION
Cognitive science aims to identify the principles and mechanisms that underlie adaptive behavior. An important part of this endeavor is the development of normative theories that specify the computational goals and constraints of an intelligent system (Anderson, 1990; Gershman et al., 2015; Griffiths et al., 2015; Lewis et al., 2014; Marr, 1982). For example, accounts of learning, cognition, and decision-making often posit a function that an organism is optimizing—e.g., maximizing long-term reward or minimizing prediction error—and test plausible algorithms that achieve this—e.g., a particular learning rule or inference process. Historically, normative theories in cognitive science have been developed in tandem with new formal approaches in computer science and statistics. This partnership has been fruitful even given differences in scientific goals (e.g., engineering artificial intelligence versus reverse-engineering biological intelligence). Normative theories play a key role in facilitating cross-talk between different disciplines by providing a shared set of mathematical, analytical, and conceptual tools for describing computational problems and how to solve them (Ho & Griffiths, 2022).
This paper is written in the spirit of such cross-disciplinary fertilization. Here, we review recent work in computer science (Arumugam & Van Roy, 2021a, 2022) that develops a novel approach for unifying three distinct mathematical frameworks that will be familiar to many cognitive scientists (Figure 1). The first is Bayesian inference, which has been used to study a variety of perceptual and higher-order cognitive processes such as categorization, causal reasoning, and social reasoning in terms of inference over probabilistic representations (Baker et al., 2009; Battaglia et al., 2013; Collins & Frank, 2013; Tenenbaum et al., 2011; Yuille & Kersten, 2006). The second is reinforcement learning (Sutton & Barto, 1998), which has been used to model key phenomena in learning and decision-making including habitual versus goal-directed choice as well as trade-offs between exploring and exploiting (Daw et al., 2011; Dayan & Niv, 2008; Radulescu et al., 2019; Wilson et al., 2014). The third is rate-distortion theory (Berger, 1971; Shannon, 1959), a subfield of information theory (Cover & Thomas, 2012; Shannon, 1948), which in recent years has been used to model the influence of capacity-limitations in perceptual and choice processes (Lai & Gershman, 2021; Sims, 2016; Zaslavsky et al., 2021; Zénon et al., 2019). All three of these formalisms have been used as normative frameworks in the sense discussed above: They provide general design principles (e.g., rational inference, reward-maximization, efficient coding) that explain the function of observed behavior and constrain the investigation of underlying mechanisms.
Although these formalisms have been applied to analyzing individual psychological processes, less work has used them to study learning, decision-making, and capacity limitations holistically. One reason is the lack of principled modeling tools that comprehensively integrate these multiple normative considerations. The framework of capacity-limited Bayesian reinforcement learning, originally developed by Arumugam and Van Roy (2021a, 2022) in the context of machine learning, directly addresses the question of how to combine these perspectives. As its name suggests, the cornerstone of this framework is classic reinforcement learning, which traditionally focuses on idealized decision-making agents determined to synthesize optimal behavior without regard for resource constraints that may adversely impact the efficiency of learning. While the intersection of Bayesian inference and reinforcement learning has also been well-studied in the machine-learning literature (Bellman & Kalaba, 1959; Duff, 2002; Ghavamzadeh et al., 2015) and offers a powerful mechanism for gracefully tackling exploration (Agrawal & Jia, 2017; Osband et al., 2013; Osband & Van Roy, 2017; Strens, 2000), it too only offers consideration for optimal decision-making without regard for agent limitations that may leave optimal behavior highly challenging to obtain or even categorically unachievable. In contrast, while the intersection of rate-distortion theory and reinforcement learning (Abel et al., 2019; Lai & Gershman, 2021; Polani, 2009, 2011; Rubin et al., 2012; Still & Precup, 2012; Tishby & Polani, 2011) does offer one notion of capacity-sensitive behavior, it only specifies an alternative outcome to the traditional optimal policy but fails to prescribe a mechanism for orienting exploration around such a behavior. Consequently, these algorithms only offer insight into the end products of learning but do not clarify how agent limitations impact the dynamics of the learning process itself. By operating at the intersection of these three areas (Figure 1), capacity-limited Bayesian reinforcement learning highlights how capacity constraints impact an agent’s exploration strategy, thereby not only leading to tractable learning outcomes but also influencing the full dynamics of learning over time. Our goal is to review this work and present its key developments in a way that will be accessible to the broader research community and can pave the way for future cross-disciplinary investigations.
Notably, while the capacity constraints accommodated by the work presented in this paper can be quite versatile, a key motivation of this framework is offering a treatment of decision-making subject to constraints on time. Indeed, people often find themselves forced to select from considerably large action spaces with significantly less time than is needed to adequately explore all available decisions. When the disparity between total time allotted for learning and total number of actions available becomes sufficiently large, identifying an optimal action becomes entirely infeasible as a learning objective. One could nevertheless deploy a classic decision-making algorithm in such a setting, acknowledging that it will not reach optimal performance; such agents, however, are designed around one of many strategies for addressing the explore-exploit trade-off. Unfortunately, this exploration mechanism is likely tailored for uncovering information salient to (unachievable) optimal behavior and is not guaranteed to be effective for gathering information about any alternative, feasible behavior. In contrast, capacity-limited Bayesian decision-making offers a mechanism by which an agent may align exploratory decisions to a feasible behavior under the time constraints at hand.
We present the framework in two parts. First, we discuss a formalization of capacity-limited Bayesian decision-making, beginning with a few simple key tenets that underlie the coupling of Bayesian inference, information theory, and decision making. Together, these core principles allow for the introduction of an information bottleneck between an agent’s beliefs about the world and what it aspires to learn from its interactions with the world. To the extent that exploration is a challenge of information acquisition, this bottleneck serves as a targeting mechanism through which a bounded agent can prioritize which pieces of information to seek out. This motivates a novel family of algorithms that consume environmental beliefs together with an information-constrained target to select actions in a manner that optimally trades off between reward and information. Second, through a series of simple toy simulations, we analyze a specific algorithm: a variant of Thompson Sampling (Thompson, 1933) modified to incorporate such an information bottleneck. Afterwards, we turn more fully to capacity-limited Bayesian reinforcement learning, in which a decision-maker is continuously interacting with and adapting to their environment. We report a mixture of novel and previously established simulations and theoretical results in several learning settings, including multi-armed bandits as well as continual and episodic reinforcement learning. One feature of this framework is that it provides tools for analyzing how the interaction between capacity limitations and learning dynamics can influence learning outcomes; in the discussion, we explore how such analyses and our framework can be applied to questions in cognitive science. We also discuss similarities and differences between capacity-limited Bayesian reinforcement learning and existing proposals including information-theoretic bounded rationality (Gottwald & Braun, 2019; Ortega & Braun, 2011), policy compression (Lai & Gershman, 2021), and resource-rational models based on principles separate from information theory (Callaway et al., 2022; Ho et al., 2022; Lieder et al., 2014).
CAPACITY-LIMITED BAYESIAN DECISION-MAKING
This section provides a preliminary account of capacity-limited Bayesian decision-making. As previously discussed, the incorporation of capacity limitations will be realized through rate-distortion theory; accordingly, we organize the section to separately introduce the elements of distortion and rate before turning our attention to the tension between them that a bounded decision-making agent is expected to negotiate. We conclude the section with a discussion and analysis of a practical algorithm for computing capacity-limited Bayesian decision procedures based on Thompson Sampling.
Bayesian Inference & Utility
Bayesian or probabilistic models have been used to characterize a range of psychological phenomena, including perception, categorization, feature learning, causal reasoning, social interaction, and motor control (Goodman & Frank, 2016; Itti & Baldi, 2009; Körding & Wolpert, 2004; Ma, 2012). One distinguishing feature of Bayesian models is that they separate learning and decision-making into two stages: inferring a function or statistic of the environment and choosing an action based on those inferences (Figure 1A). This separation of inference and choice into an independent Bayesian estimator and decision-rule is commonly assumed throughout psychology, economics, and computer science (Kaelbling et al., 1998; Ma, 2019; von Neumann & Morgenstern, 1944). However, even if inference about the environment is exact, exploring to learn good decisions incurs some non-trivial degree of cognitive load, and the associated cost of, or limit on, how much those inferences can inform what an agent learns remains unaccounted for. We now turn to extending the standard Bayesian framework (Arumugam & Van Roy, 2021a, 2022) to incorporate such capacity limitations (Figure 1B). Our focus begins purely with the inference process; later (see the Thompson Sampling: Combining Bayesian Inference and Decision-Making section), we clarify how capacity limitations during inference manifest in the agent’s choice process.
While the environment 𝓔 denotes the cumulative knowledge an agent maintains about the world, the goal or objective an agent aspires to learn about through its interactions within the environment is formalized as a learning target χ. That is, if 𝓔 denotes the information an agent retains, then χ denotes the information an agent seeks out through its interactions (Lu et al., 2023). This target is a (potentially stochastic) function of the unknown environment that can be represented as a conditional probability distribution over actions, given the identity of the environment, δ(χ | 𝓔) = ℙ(χ | 𝓔). Intuitively, for a particular realization of the environment 𝓔 = θ, the learning target χ ∼ δ(· | 𝓔 = θ) characterizes the agent’s beliefs about what it should learn when treating environment θ as reality.
Suppose we have a real-valued utility function U(a, θ) that quantifies the performance or goodness of an action a ∈ 𝒜 for a particular realization of the environment 𝓔 = θ (later we discuss reinforcement learning and will consider specific utility functions that represent reward and/or value). A standard and widely-studied choice of learning target is an optimal action A⋆ ∈ arg max_{a ∈ 𝒜} U(a, 𝓔) that maximizes utility. For an unconstrained agent with unlimited capacity, there is perhaps no reason to entertain any other learning target besides A⋆. In the next section, however, we use information theory to articulate the associated cost of exploring to learn an optimal decision A⋆, which may be infeasible for a capacity-limited decision-making agent.
The Duality Between Uncertainty & Information
While the previous section establishes the desirability of a learning target within some environment through its utility, this section provides a parallel account for the cost of learning through information. As a simple example, suppose an agent wishes to learn about the outcome of a coin flip χ ∼ Bernoulli(𝓔) from a coin with unknown bias 𝓔 ∈ [0, 1]. Note that a trick coin with 𝓔 = 1 would result in a target χ = f(𝓔) = HEADS that is just a deterministic function f always returning HEADS. On the other hand, for a fair coin 𝓔 = 0.5, the target is now a random function χ = g(𝓔) that returns HEADS or TAILS each with probability 0.5. The cumulative randomness present in χ stems not only from g possibly being a non-deterministic function but also from its dependence on 𝓔, which is itself a random variable.
We now turn our attention to the role of information theory (Cover & Thomas, 2012; Shannon, 1948), giving verbal descriptions of the salient quantities and deferring precise mathematical definitions to the appendix (please see Appendix A). The entropy ℍ(χ) of χ quantifies all uncertainty in the agent’s mind about the outcome of the coin flip. Equivalently, an agent that obtains these ℍ(χ) bits of information would have zero uncertainty and identify the flip outcome exactly. However, even if the agent had perfect knowledge of the environment 𝓔 to distinguish between a biased or fair coin, there could still be residual uncertainty left over simply because the coin flip is an inherently random outcome (such as in the fair coin scenario above). We can quantify the uncertainty that remains given such knowledge through conditioning, examining the conditional entropy of the flip outcome given the coin bias ℍ(χ | 𝓔). In general, if the learning target happens to be a deterministic function of the environment (χ = f(𝓔), for deterministic f), then a well-known fact of information theory already establishes that ℍ(χ | 𝓔) = 0. If not, however, then ℍ(χ | 𝓔) > 0 and, due to the conditioning, this residual uncertainty cannot be eliminated by making decisions and collecting more interaction data from the environment 𝓔. Consequently, while the entropy ℍ(χ) quantifies all of the agent’s uncertainty in the learning target, the conditional entropy ℍ(χ | 𝓔) captures only the irreducible or aleatoric uncertainty (Der Kiureghian & Ditlevsen, 2009) the agent has in χ due to random noise. The difference between these two quantities is the mutual information 𝕀(𝓔; χ) = ℍ(χ) − ℍ(χ | 𝓔), which captures the reducible or epistemic uncertainty in χ that an agent can, in principle, resolve by acquiring information about 𝓔.
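To make these quantities concrete, the coin example above can be computed directly. The short Python sketch below is our own illustration rather than material from the reviewed papers; it places a prior over two hypothetical coins, a fair coin and a trick coin, and evaluates ℍ(χ), ℍ(χ | 𝓔), and their difference 𝕀(𝓔; χ):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

# Hypothetical prior over the coin bias E: a fair coin (0.5) or a trick coin (1.0),
# each believed equally likely a priori.
biases = np.array([0.5, 1.0])
prior = np.array([0.5, 0.5])                       # eta(E)

# Marginal distribution of the flip outcome chi under the prior.
p_heads = np.sum(prior * biases)
H_chi = entropy([p_heads, 1.0 - p_heads])          # total uncertainty H(chi)

# Conditional entropy H(chi | E): expected flip entropy under each hypothesis.
H_chi_given_E = np.sum(prior * np.array([entropy([b, 1.0 - b]) for b in biases]))

# Mutual information I(E; chi): the reducible (epistemic) part of the uncertainty.
I_E_chi = H_chi - H_chi_given_E

print(f"H(chi)     = {H_chi:.3f} bits")
print(f"H(chi | E) = {H_chi_given_E:.3f} bits (aleatoric)")
print(f"I(E; chi)  = {I_E_chi:.3f} bits (epistemic)")
```

Running this reports roughly 0.81 bits of total uncertainty, of which 0.5 bits are aleatoric and roughly 0.31 bits are epistemic and could, in principle, be resolved by learning the coin's bias.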
The exploration strategy employed by a decision-making agent is responsible for the acquisition of these 𝕀(𝓔; χ) bits of information over the course of learning. Thus, intuitively, it follows that some targets are easier to learn than others. More concretely, for two targets χ1 and χ2, having 𝕀(𝓔; χ1) ≤ 𝕀(𝓔; χ2) implies that an agent is closer to resolving its uncertainty in target χ1 than χ2, thereby implying χ1 is easier to learn. Of course, if χ2 allows an agent to obtain significantly higher utility relative to what is possible with the knowledge encoded in χ1, then perhaps it is worthwhile for a limited agent to pursue the more challenging target χ2. The next section discusses how such an agent can negotiate this tension between information and utility to reduce cognitive load when deciding what to learn.
Balancing Between Bits & Utility
Thompson Sampling: Combining Bayesian Inference and Decision-Making
Unlike classic information-theory applications in compression and communication, where all bits are created equal and transmitted with identical priority, decision makers take actions to learn about a particular target χ, and not all information about the world revealed by a decision is guaranteed to be target-relevant. Prudent strategies for exploration tailored for a particular χ capitalize on the agent’s current beliefs about the world 𝓔 given the history of interaction thus far Ht to select actions that either succeed in revealing target-relevant information or, when such information has been exhausted from the environment (𝕀t(χ; 𝓔) = 0), allow the agent to exploit what it has learned. In this section, we review an algorithm known as Thompson Sampling, which establishes a powerful link between the agent’s inference process, maintaining beliefs about the world 𝓔, and a learning target χ that together direct the choice process.
Of course, other more-elaborate possibilities do exist in the literature (Russo & Van Roy, 2014, 2018a); however, this paper focuses on Thompson Sampling as a simple yet effective choice among them. Different decision-rules are distinguished by the type of representation they use and the algorithms that operate over those representations. For example, some decision-rules only use a point-estimate of each action’s expected reward, such as reward maximization, ε-greedy reward maximization (Cesa-Bianchi & Fischer, 1998; Kuleshov & Precup, 2014; Vermorel & Mohri, 2005), Boltzmann/softmax action selection (Asadi & Littman, 2017; Kuleshov & Precup, 2014; Littman, 1996), or upper-confidence bound (UCB) action selection (Auer, 2002; Auer et al., 2002; Kocsis & Szepesvári, 2006). Some of these rules also provide parameterized levels of “noisiness” that facilitate random exploration—e.g., the probability of selecting an action at random in ε-greedy, the temperature in a Boltzmann distribution, and the bias factor in UCB. In the Bayesian setting, decision-rules like Thompson Sampling can take advantage of epistemic uncertainty to guide exploration. Additionally, humans often display key signatures of selecting actions via Thompson Sampling (Gershman, 2018; Vulkan, 2000; Wozny et al., 2010). In short, classic Thompson Sampling is a simple, robust, and well-studied Bayesian algorithm that is, by design, tailored to an optimal learning target A⋆; this, however, assumes that a decision-making agent has the unlimited capacity needed to acquire all bits of information relevant to A⋆, 𝕀(𝓔; A⋆).
One instantiation of a capacity-limited Bayesian decision-making agent combines rate-distortion theory and Thompson Sampling by first computing a learning target that optimally strikes some balance between complexity and utility before choosing an action via probability matching with respect to this target. Such an agent employs Blahut-Arimoto Satisficing Thompson Sampling (BLASTS), an algorithm first proposed by Arumugam and Van Roy (2021a). In order to approximate an optimal decision-rule given current beliefs about the world 𝓔 and rate parameter λ ≥ 0, BLASTS (whose pseudocode appears as Algorithm 2) performs three high-level procedures. First, it approximates the environment distribution by drawing Z ∈ ℕ Monte-Carlo samples from η and proceeding with this discrete empirical distribution. Second, it uses Blahut-Arimoto—a classic algorithm from the rate-distortion theory literature (Arimoto, 1972; Blahut, 1972) based on convex optimization (Boyd & Vandenberghe, 2004)—to iteratively compute the (globally) optimal learning target, represented as a channel δ⋆(a | e) from environment realizations to actions. Finally, it uniformly samples one of the Z initially drawn environment configurations e′ and then samples an action a′ from the computed decision-rule conditioned on that realization e′ of the environment.
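For concreteness, the following Python sketch mirrors these three steps for an independent-armed Gaussian bandit. It is our own minimal illustration rather than the authors' reference implementation (see Algorithm 2 in Arumugam & Van Roy, 2021a): the function names (blahut_arimoto, blasts_step) and the particular λ-weighted exponent are our choices, and the rate parameter λ must be strictly positive in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def blahut_arimoto(utilities, weights, lam, n_iters=200):
    """Iteratively compute a channel delta(a | e) that trades utility off against rate.

    utilities: (Z, K) array of U(a, e) for each sampled environment e and action a.
    weights:   (Z,) probabilities of the environment samples (uniform for Monte Carlo draws).
    lam:       cost of information in utils per nat; must be strictly positive here.
    """
    Z, K = utilities.shape
    marginal = np.full(K, 1.0 / K)                     # q(a), the marginal action distribution
    for _ in range(n_iters):
        # Channel update: delta(a | e) is proportional to q(a) * exp(U(a, e) / lambda).
        logits = np.log(np.maximum(marginal, 1e-300))[None, :] + utilities / lam
        channel = np.exp(logits - logits.max(axis=1, keepdims=True))
        channel /= channel.sum(axis=1, keepdims=True)
        # Marginal update: q(a) = sum_e eta(e) * delta(a | e).
        marginal = weights @ channel
    return channel

def blasts_step(posterior_means, posterior_stds, lam, n_samples=500):
    """One BLASTS-style action selection for an independent-armed Gaussian bandit."""
    # (1) Approximate current beliefs eta with Z Monte Carlo samples of the mean rewards.
    samples = rng.normal(posterior_means, posterior_stds, size=(n_samples, len(posterior_means)))
    weights = np.full(n_samples, 1.0 / n_samples)
    # (2) Run Blahut-Arimoto to compute the capacity-limited target channel delta*.
    channel = blahut_arimoto(samples, weights, lam)
    # (3) Uniformly sample one environment configuration, then sample an action from delta*.
    e_idx = rng.integers(n_samples)
    action = rng.choice(len(posterior_means), p=channel[e_idx])
    return int(action), channel, weights

# Example: three arms with believed mean rewards -1, 0, and 1, each with unit uncertainty.
action, channel, weights = blasts_step(np.array([-1.0, 0.0, 1.0]), np.ones(3), lam=1.0)
print("selected action:", action)
```

With λ close to zero, the computed channel concentrates, per environment sample, on the utility-maximizing action, so sampling from it reproduces Thompson Sampling; with very large λ, the channel stays near uniform.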
One can observe that a BLASTS agent with no regard for respecting capacity limitations (λ = 0) will recover Thompson Sampling as a special case. However, as an agent navigates the space of learning targets to find a suitable balance between complexity and utility via a setting of λ, this generalized version of Thompson Sampling offers one prescription for how this shift in learning target should impact the dynamics of exploration. To illustrate this behavior, we conducted two sets of simulations that manipulated these factors in simple three-armed bandit tasks. Our first set of simulations examined the effect of different values of the rate parameter λ, which intuitively corresponds to the cost of information measured in units of utils per nat. We calculated the marginal action distribution, π(a) = ∑eδ⋆(a | e)η(e), where the belief distribution over average rewards for the three arms was represented by three independent Gaussian distributions respectively centered at −1, 0, and 1; all three distributions had a standard deviation of 1 (Figure 2A).
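Reusing the hypothetical blasts_step helper sketched above, the same marginal action distribution can be approximated for a sweep of information costs; the values reported in Figure 2 come from the authors' simulations, whereas this is only a rough qualitative reproduction.

```python
# Marginal action distribution pi(a) = sum_e delta*(a | e) eta(e), reusing blasts_step above,
# swept over information costs (a rough qualitative analogue of the simulation in Figure 2A).
for lam in [1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4]:
    _, channel, weights = blasts_step(np.array([-1.0, 0.0, 1.0]), np.ones(3), lam=lam)
    pi = weights @ channel
    print(f"lambda = {lam:>9.2f}   pi(a) = {np.round(pi, 3)}")
```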
Even on this simple problem, BLASTS displays three qualitatively different regimes of action selection when varying the rate parameter, λ, from 10⁻² to 10⁴. When information is inexpensive (λ < 10⁻¹), the action distribution mimics the exploratory behavior of Thompson Sampling (consistent with theoretical predictions [Arumugam & Van Roy, 2021a]). As information becomes moderately expensive (10⁻¹ ≤ λ ≤ 10¹), BLASTS focuses channel capacity on the actions with higher expected utility by first reducing its selection of the worst action in expectation (a₀) followed by the second-worst/second-best action in expectation (a₁), which results in it purely exploiting the best action in expectation (a₂). Finally, as the util per nat becomes even greater (λ ≥ 10¹), BLASTS produces actions that are uninformed by its beliefs about the environment. This occurs in a manner that resembles a Boltzmann distribution with increasing temperature, eventually saturating at a uniform distribution over actions. These patterns are visualized in Figure 2B–D, which compare action probabilities for Boltzmann, Thompson Sampling, and BLASTS.
Our second set of simulations examines the relationship between the cost of information λ and BLASTS action probabilities for different environment estimates. Specifically, we first examined the effect of changing beliefs about the action gap, the difference between the best and second-best action in expectation (Agrawal & Goyal, 2012, 2013; Auer et al., 2002; Bellemare et al., 2016; Farahmand, 2011). As shown in Figure 3A, when the action gap is lower (corresponding to a more difficult decision-making task), BLASTS chooses the optimal action with lower probability for all values of λ. In addition, we examined the effect of changing uncertainty in the average rewards by setting different standard deviations for beliefs about the arms. Figure 3B shows that as uncertainty increases, BLASTS is less likely to differentially select an arm even in the “exploitation” regime for moderate values of λ. Sensitivity to both the action gap and uncertainty is a key feature of BLASTS that derives from the fact that it uses distributional information to guide decision-making, unlike decision-rules such as ε-greedy or Boltzmann softmax.
Since BLASTS is essentially a parameterized version of Thompson Sampling, it can be used as an alternative decision rule for fitting human data (Wilson & Collins, 2019). Specifically, one approach to using BLASTS would be to jointly fit parameters associated with the inference process (e.g., a participant’s priors about the task) as well as the information cost (λ). An important direction for future work will be to validate such an approach and develop efficient algorithms for parameter estimation from participant data.
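As a purely illustrative sketch of this direction, one could estimate λ by grid search over the (approximate) BLASTS choice likelihood, again reusing the hypothetical blasts_step helper from above; a validated fitting procedure would additionally update beliefs trial by trial and estimate prior parameters jointly, and the choice data below are synthetic placeholders.

```python
import numpy as np

# Purely illustrative: fit lambda by grid search over the approximate BLASTS choice
# likelihood, holding the inferred beliefs fixed and reusing blasts_step from above.
observed_choices = np.array([2, 2, 1, 2, 0, 2, 2, 1, 2, 2])     # synthetic placeholder data
means, stds = np.array([-1.0, 0.0, 1.0]), np.ones(3)

def neg_log_likelihood(lam):
    _, channel, weights = blasts_step(means, stds, lam=lam, n_samples=2000)
    pi = weights @ channel                          # marginal choice probabilities under lambda
    return -np.sum(np.log(pi[observed_choices] + 1e-12))

grid = np.logspace(-2, 2, 17)
best_lam = grid[np.argmin([neg_log_likelihood(lam) for lam in grid])]
print("best-fitting lambda on this grid:", best_lam)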
In the standard formulation of Bayesian decision-making, it is assumed that an agent has unbounded capacity and, therefore, optimal behavior A⋆ is always achievable. By extending ideas from rate-distortion theory, Arumugam and Van Roy (2021a) defined a notion of capacity limitation applicable to a broader space of learning targets as well as an efficient algorithm for finding such optimal, capacity-limited targets through a variant of Thompson Sampling (BLASTS). In this section, we analyzed how choice distributions change as a function of the cost of information and current environment estimates, which provides some intuition for how capacity-limitations affect choice from the agent’s subjective point of view. In the next section, we take a more objective point of view by studying the learning dynamics that arise when capacity-limited agents interact with an environment over time.
CAPACITY-LIMITED BAYESIAN REINFORCEMENT LEARNING
The preceding section provides a cursory overview of how rate-distortion theory accommodates capacity-limited learning within a Bayesian decision-making agent. In this section, we aim to provide mathematically-precise instantiations of the earlier concepts for two distinct problem classes: (1) continual or lifelong learning and (2) multi-armed bandits; we defer a presentation of our framework applied to episodic Markov decision processes to the appendix. Our aim is to provide a coherent narrative for those problem settings that have been examined separately in prior work (Arumugam & Van Roy, 2021a, 2021b, 2022) while also providing a novel extension to the continual learning setting. To keep the exposition clear, we defer details on notation, definitions of information-theoretic quantities, and all theoretical results to the appendix, which the mathematically-inclined reader should consult.
Continual Learning
At the most abstract level, we may think of a decision-making agent faced with a continual or lifelong learning setting (Abel et al., 2018; Brunskill & Li, 2013, 2015; Isele et al., 2016; Konidaris & Barto, 2006; Lazaric & Restelli, 2011; Thrun & Schwartz, 1994; Wilson et al., 2007) within a single, stationary environment, which makes no further assumptions about Markovity or episodicity; such a problem formulation aligns with those of Lu et al. (2023) and Dong et al. (2022), spanning multi-armed bandits and reinforcement-learning problems (Lattimore & Szepesvári, 2020; Sutton & Barto, 1998).
Problem Formulation.
We adopt a generic agent-environment interface where, at each time period t, the agent executes an action At ∈ 𝒜 within an environment 𝓔 ∈ Θ that results in an associated next observation Ot ∈ 𝒪. This sequential interaction between agent and environment yields an associated history at each timestep t, Ht = (O0, A1, O1, …, At−1, Ot−1) ∈ 𝓗, representing the action-observation sequence available to the agent upon selecting its current action At. We may characterize the overall environment as 𝓔 = 〈𝒜, 𝒪, ρ〉 ∈ Θ containing the action set 𝒜, observation set 𝒪, and observation function ρ : 𝓗 × 𝒜 → Δ(𝒪), prescribing the distribution over next observations given the current history and action selection: ρ(Ot | Ht, At) = ℙ(Ot | 𝓔, Ht, At).
An agent’s policy π : 𝓗 → Δ(𝒜) encapsulates the relationship between the history encountered in each timestep Ht and the executed action At such that πt(a) = ℙ(At = a | Ht) assigns a probability to each action a ∈ 𝒜 given the history. Preferences across histories are expressed via a known reward function r : 𝓗 × 𝒜 × 𝒪 → ℝ so that an agent enjoys a reward Rt = r(Ht, At, Ot) on each timestep. Given any finite time horizon T ∈ ℕ, the accumulation of rewards provides a notion of return ∑_{t=1}^{T} Rt = ∑_{t=1}^{T} r(Ht, At, Ot). To develop preferences over behaviors and to help facilitate action selection, it is often natural to associate with each policy π a corresponding expected return or action-value function Qπ : 𝓗 × 𝒜 → ℝ across the horizon T as Qπ(h, a) = 𝔼π[∑_{τ=t}^{T} Rτ | Ht = h, At = a] for a history h encountered at timestep t, where the expectation integrates over the randomness in the policy π as well as the observation function ρ. Traditionally, focus has centered on agents that strive to achieve the optimal value within the confines of some policy class Π ⊆ {𝓗 → Δ(𝒜)}, Q⋆(h, a) = sup_{π ∈ Π} Qπ(h, a), ∀(h, a) ∈ 𝓗 × 𝒜. The optimal policy then follows by acting greedily with respect to this optimal value function: π⋆(h) ∈ arg max_{a ∈ 𝒜} Q⋆(h, a).
Observe that when rewards and the distribution of the next observation Ot depend only on the current observation-action pair (Ot−1, At), rather than the full history Ht, we recover the traditional Markov Decision Process (Bellman, 1957; Puterman, 1994) studied throughout the reinforcement-learning literature (Sutton & Barto, 1998). Alternatively, when these quantities rely solely upon the most recent action At, we recover the traditional multi-armed bandit (Bubeck & Cesa-Bianchi, 2012; Lai & Robbins, 1985; Lattimore & Szepesvári, 2020). Regardless of precisely which of these two problem settings one encounters, a default presumption throughout both literatures is that an agent should always act in pursuit of learning an optimal policy π⋆. Bayesian decision-making agents (Bellman & Kalaba, 1959; Duff, 2002; Ghavamzadeh et al., 2015) aim to achieve this by explicitly representing and maintaining the agent’s current knowledge of the environment, recognizing that it is the uncertainty in the underlying environment 𝓔 that drives uncertainty in optimal behavior π⋆. A Bayesian learner reflects this uncertainty through conditional probabilities ηt(e) ≜ ℙ(𝓔 = e | Ht), ∀e ∈ Θ aimed at estimating the underlying environment. The problem of exploration centers around how an agent operationalizes its beliefs about the world ηt in order to select actions that reveal information salient to good decision-making.
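The following Python sketch renders this generic interface programmatically, with a two-armed Bernoulli bandit as the special case just described; the type aliases and the interact function are our own shorthand rather than notation from the reviewed work.

```python
from typing import Callable, List, Tuple
import random

# A minimal rendering of the generic agent-environment interface described above; the
# names (History, interact, ...) are our own shorthand.
History = List[Tuple[int, object]]                     # past (action, observation) pairs
ObservationFn = Callable[[History, int], object]       # samples O_t ~ rho(. | history, action)
RewardFn = Callable[[History, int, object], float]     # r(history, action, observation)
Policy = Callable[[History], int]                      # pi: history -> action

def interact(rho: ObservationFn, r: RewardFn, pi: Policy, horizon: int) -> float:
    """Run one interaction of length `horizon` and return the accumulated reward (return)."""
    history: History = []
    total_reward = 0.0
    for _ in range(horizon):
        action = pi(history)
        observation = rho(history, action)
        total_reward += r(history, action, observation)
        history.append((action, observation))
    return total_reward

# A two-armed Bernoulli bandit arises as the special case where rho ignores the history.
rho = lambda h, a: float(random.random() < (0.3 if a == 0 else 0.7))
r = lambda h, a, o: o                                   # the observation itself is the reward
pi = lambda h: random.randrange(2)                      # placeholder: uniformly random policy
print(interact(rho, r, pi, horizon=100))
```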
Rate-Distortion Theory for Target Actions.
The core insight of this work is recognizing that a delicate balance between the amount of information an agent seeks out through its interactions (cognitive load) and the quality of decision-making with that information (utility) can be aptly characterized through rate-distortion theory, providing a formal framework for capacity-limited decision making. At each time period t ∈ [T], the agent’s current knowledge about the underlying environment is fully specified by the distribution ηt. An unconstrained agent will attempt to use this knowledge and explore to further acquire information that helps identify an optimal action A⋆ ∈ arg max_{a ∈ 𝒜} Q⋆(Ht, a). By default, however, a capacity-limited agent may not be capable of obtaining all 𝕀t(𝓔; A⋆) bits of information from the world to learn such an optimal action A⋆. To remedy this, it behooves the agent to first determine an alternative learning target χ and then orient exploration to prioritize information gathering about this feasible surrogate. Naively discarding bits of information in each time period to obtain an easily learned target with small 𝕀t(𝓔; χ), however, may result in an agent that is entirely unproductive with respect to the task at hand. Thus, while a good target χ does allow an agent to get away with exploring for less information, some bits have more utility to the task than others.
Rate-distortion theory (Berger, 1971; Shannon, 1959) is a branch of information theory (Cover & Thomas, 2012; Shannon, 1948) dedicated to the study of lossy compression problems, which must necessarily balance the raw amount of information retained in the compression against the utility of those bits for some downstream task; a classic example of this from the information-theory literature is image compression down to a smaller resolution (fewer bits of information) without overly compromising the visual acuity of the content (bounded distortion). A capacity-limited agent will take its current knowledge ηt as the information source to be compressed in each time period t ∈ [T]. The learning target χ(𝓔) ∈ 𝒜 can be interpreted as the result of lossy compression, characterized by a channel or conditional probability distribution p(χ | 𝓔) that maps a potential realization of the unknown environment 𝓔 ∈ Θ to a corresponding distribution over actions. For a given realization of the environment θ ∈ Θ, one should interpret p(χ | 𝓔 = θ) as an agent’s belief about which actions are desirable taking 𝓔 = θ as reality. Naturally, the amount of information contained in the environment about this target action that is not already accounted for by the agent’s interactions Ht thus far is precisely quantified by the mutual information between these two random variables, 𝕀t(𝓔; χ), where the t subscript captures the dependence of the agent’s beliefs ηt on the current random history Ht.
Fact 1 (Lemma 10.4.1 [Cover & Thomas, 2012]). For all t ∈ [T], the rate-distortion function 𝓡t(D) is a non-negative, convex, and non-increasing function in D ≥ 0.
In this section, we have provided a mathematical formulation for how a capacity-limited agent discerns what to learn in each time period so as to limit overall cognitive load in an information-theoretically optimal fashion while incurring bounded sub-optimality. Notably, we have yet to discuss how such an agent ultimately selects actions so as to facilitate efficient learning of the target action Ãt computed via rate-distortion theory. To elucidate this, we dedicate the next section to the simple yet illustrative multi-armed bandit problem, which allows for theoretical as well as empirical analysis.
Multi-Armed Bandit
In this section, we begin with the formal specification of a multi-armed bandit problem (Bubeck & Cesa-Bianchi, 2012; Lai & Robbins, 1985; Lattimore & Szepesvári, 2020) before revisiting Thompson Sampling as a quintessential algorithm for identifying optimal actions. We then present a corresponding generalization of Thompson Sampling that takes an agent’s capacity limitations into account.
Problem Formulation.
We obtain a bandit environment as a special case of the problem formulation given in the Continual Learning section by treating the initial observation as null O0 = ∅ while each subsequent observation denotes a reward signal Rt ∼ ρ(· | At) drawn from an observation function ρ : 𝒜 → Δ(ℝ) that only depends on the most recent action selection At and not the current history Ht = (A1, R1, A2, R2, …, At−1, Rt−1). While the actions 𝒜 and total time periods T ∈ ℕ are known to the agent, the underlying reward function ρ is unknown and, consequently, the environment 𝓔 is itself a random variable such that p(Rt | 𝓔, At) = ρ(Rt | At). We let ρ̄ : 𝒜 → [0, 1] denote the mean reward function ρ̄(a) = 𝔼[Rt | At = a, 𝓔], ∀a ∈ 𝒜, and define an optimal action A⋆ ∈ arg max_{a ∈ 𝒜} ρ̄(a) as achieving the maximal mean reward denoted as R⋆ = ρ̄(A⋆), both of which are random variables due to their dependence on 𝓔.
Observe that, if the agent knew the underlying environment 𝓔 exactly, there would be no uncertainty in the optimal action A⋆; consequently, it is the agent’s epistemic uncertainty (Der Kiureghian & Ditlevsen, 2009) in 𝓔 that drives uncertainty in A⋆. Since learning is a process of acquiring information, an agent explores to learn about the environment and reduce this uncertainty. As there is only a null history at the start H1 = ∅, initial uncertainty in the environment 𝓔 ∈ Θ is given by the prior probabilities η1 ∈ Δ(Θ) while, as time unfolds, updated knowledge of the environment is reflected by posterior probabilities ηt ∈ Δ(Θ).
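As a minimal illustration of how the posterior ηt can be represented in practice, consider a Bernoulli bandit with independent arms and conjugate Beta(1, 1) priors (the same priors used in the experiments later in this section); the uniform action selection below is a placeholder that the Thompson Sampling procedure of the next section replaces.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of maintaining eta_t for a Bernoulli bandit with independent arms
# and conjugate Beta(1, 1) priors over each arm's mean reward.
true_means = np.array([0.3, 0.5, 0.7])     # unknown to the agent; used only to simulate rewards
alpha = np.ones(3)                         # Beta posterior parameters: 1 + number of successes
beta_param = np.ones(3)                    # Beta posterior parameters: 1 + number of failures

for t in range(200):
    action = rng.integers(3)                           # placeholder: uniformly random exploration
    reward = float(rng.random() < true_means[action])  # Bernoulli reward signal
    alpha[action] += reward
    beta_param[action] += 1.0 - reward

print("posterior mean rewards:", np.round(alpha / (alpha + beta_param), 2))
```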
The customary goal within a multi-armed bandit problem is to identify an optimal action A⋆ and, in the next section, we review one such algorithm that is widely used in practice before motivating consideration of satisficing solutions for bandit problems.
Thompson Sampling & Satisficing.
As previously mentioned, a standard choice of algorithm for identifying optimal actions in multi-armed bandit problems is Thompson Sampling (TS) (Russo et al., 2018; Thompson, 1933), which has been well-studied both theoretically (Agrawal & Goyal, 2012, 2013; Auer et al., 2002; Bubeck & Liu, 2013; Russo & Van Roy, 2016) and empirically (Chapelle & Li, 2011; Gopalan et al., 2014; Granmo, 2010; Scott, 2010). For convenience, we provide generic pseudocode for classic TS as Algorithm 3, whereas more granular classes of bandit problems (Bernoulli bandits or Gaussian bandits, for example) can often lead to more computationally explicit versions of TS that leverage special structure like conjugate priors (see Russo et al. (2018) for more detailed implementations). In each time period t ∈ [T], a TS agent proceeds by drawing one sample θt ∼ ηt(𝓔), representing a statistically-plausible hypothesis about the underlying environment based on the agent’s current posterior beliefs from observing the history Ht; the agent then proceeds as if this sample dictates reality and acts optimally with respect to it, drawing the action At to execute this time period uniformly at random from among the optimal actions for this realization 𝓔 = θt of the environment. Executing actions in this manner recovers the hallmark probability-matching principle (Russo & Van Roy, 2016; Scott, 2010) of classic TS whereby, in each time period t ∈ [T], the agent selects actions according to their (posterior) probability of being optimal given everything observed up to this point in Ht or, more formally, πt(a) = pt(A⋆ = a), ∀a ∈ 𝒜.
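A compact, self-contained sketch of classic TS for the Beta-Bernoulli case is given below; it is meant only to make the probability-matching recipe concrete, and the reference implementations in Russo et al. (2018) should be consulted for details.

```python
import numpy as np

rng = np.random.default_rng(2)
true_means = np.array([0.3, 0.5, 0.7])       # unknown to the agent; used only to simulate rewards
alpha, beta_param = np.ones(3), np.ones(3)   # independent Beta(1, 1) priors over mean rewards

def thompson_sampling_step(alpha, beta_param):
    theta = rng.beta(alpha, beta_param)      # one statistically plausible environment from eta_t
    return int(np.argmax(theta))             # act optimally as if that sample were reality

for t in range(500):
    action = thompson_sampling_step(alpha, beta_param)
    reward = float(rng.random() < true_means[action])
    alpha[action] += reward
    beta_param[action] += 1.0 - reward

print("posterior mean rewards under TS:", np.round(alpha / (alpha + beta_param), 2))
```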
Naturally, a core premise of this work is to consider decision-making problems where an agent’s inherent and unavoidable capacity limitations drastically impact the tractability of learning optimal actions. While there are other classes of algorithms for handling multi-armed bandit problems (Auer et al., 2002; Powell & Ryzhov, 2012; Russo & Van Roy, 2014, 2018a; Ryzhov et al., 2012), TS serves as an exemplary representative among them that relentlessly pursues the optimal action A⋆, by design. Consider a human decision maker faced with a bandit problem containing 1,000,000,000 (one billion) arms—does one genuinely expect any individual to successfully identify A⋆ within a reasonable amount of time? Similarly, the Bayesian regret bound for TS scales with the agent’s prior entropy in A⋆ (Russo & Van Roy, 2016), informing us that the performance shortfall of TS will increase as the number of actions tends to ∞.
Satisficing is a longstanding, well-studied idea about how to understand resource-limited cognition (Newell et al., 1958; Newell & Simon, 1972; Simon, 1955, 1956, 1982) in which an agent settles for the first recovered solution that is deemed to be “good enough,” for some suitable notion of goodness. Inspired by this idea, Russo and Van Roy (2018b, 2022) present the Satisficing Thompson Sampling (STS) algorithm, which we present as Algorithm 4, to address the shortcomings of algorithms like TS that relentlessly pursue A⋆. STS employs a minimal adjustment to the original TS algorithm through a threshold parameter ε ≥ 0, which an agent designer may use to communicate that identifying an ε-optimal action would be sufficient for their needs. The use of a minimum over all such ε-optimal actions instead of a uniform distribution reflects the idea of settling for the first solution deemed to be “good enough” according to ε. Naturally, the intuition follows that as ε increases and the STS agent becomes more permissive, such ε-optimal actions can be found in potentially far fewer time periods than what is needed to obtain A⋆ through TS. If we define an analogous random variable to A⋆ as Aε = min({a ∈ 𝒜 | 𝔼t[ρ̄(A⋆) − ρ̄(a) | 𝓔 = θt] ≤ ε}), then STS simply employs probability matching with respect to this alternative target as πt(a) = pt(Aε = a), ∀a ∈ 𝒜 and, as ε ↓ 0, recovers TS as a special case. Russo and Van Roy (2022) go on to prove a complementary information-theoretic regret bound for STS, which depends on the mutual information between the environment and Aε, 𝕀1(𝓔; Aε), rather than the prior entropy in the optimal action A⋆, ℍ1(A⋆).
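The corresponding change needed to turn the TS sketch above into an STS-style agent is small: after sampling θt, the agent settles for the lowest-indexed arm whose expected regret under θt is at most ε rather than insisting on the arg max. The snippet below is our own illustration of this idea, not the authors' implementation of Algorithm 4.

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.5, 0.7])
alpha, beta_param = np.ones(3), np.ones(3)
epsilon = 0.1                                # satisficing threshold; epsilon = 0 recovers TS

def sts_step(alpha, beta_param, epsilon):
    theta = rng.beta(alpha, beta_param)                  # posterior sample of the mean rewards
    regrets = theta.max() - theta                        # expected regret of each arm under theta
    satisficing = np.flatnonzero(regrets <= epsilon)     # all epsilon-optimal arms
    return int(satisficing.min())                        # settle for the first "good enough" arm

for t in range(500):
    action = sts_step(alpha, beta_param, epsilon)
    reward = float(rng.random() < true_means[action])
    alpha[action] += reward
    beta_param[action] += 1.0 - reward

print("arm pull counts:", (alpha + beta_param - 2).astype(int))
```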
While it is clear that STS does embody the principle of satisficing for a capacity-limited decision maker, the Aε action targeted by an STS agent instead of A⋆ only achieves some arbitrary and unspecified trade-off between the simplicity of what the agent sets out to learn and the utility of the resulting solution, as ε varies. Rather than settling for an arbitrary balance between these competing concerns, the next section examines how rate-distortion theory yields a target action that strikes the best trade-off.
Rate-Distortion Theory for Target Actions.
The notion of a target action is based on the observation that A⋆ = f(𝓔) is merely a statistic of the environment whose computation is determined by some function f. It follows that a surrogate action an agent may alternatively prioritize during learning will be some other computable statistic of the environment that embodies a kind of trade-off between two key properties: (1) ease of learnability and (2) bounded sub-optimality or performance shortfall relative to A⋆.
The previous section already gives two concrete examples of potential target actions, A⋆ and Aε, where the former represents an extreme point on the spectrum of potential learning targets as one that demands a potentially intractable amount of information to identify but comes with no sub-optimality. At the other end of the spectrum, there is simply an action drawn uniformly at random from Uniform(𝒜), which requires no learning or sampling on the part of the agent but, in general, will likely lead to a considerably large performance shortfall relative to an optimal solution. While, for any fixed ε > 0, Aε lives in between these extremes, it also suffers from two shortcomings of its own. Firstly, by virtue of satisficing and a willingness to settle for anything that is “good enough,” it is unclear how well Aε balances between the two aforementioned desiderata. In particular, the parameterization of Aε around ε as an upper bound to the expected regret suggests that there could exist an even simpler target action which is also ε-optimal but easier to learn insofar as it requires the agent to obtain fewer bits of information from the environment. Secondly, from a computational perspective, an STS agent striving to learn Aε (just as a TS agent does for learning A⋆) computes the same statistic repeatedly across all T time periods. Meanwhile, with every step of interaction, the agent’s knowledge of the environment 𝓔 is further refined, potentially changing the outlook on what can be tractably learned in subsequent time periods. This would suggest that one may stand to gain considerable performance by designing agents that adapt their learning target as knowledge of the environment accumulates, rather than iterating on the same static computation. From a biological view, this encapsulates a perspective that an organism’s outlook on learning goals adapts with its knowledge of the world.
Experiments.
In order to make the algorithm of the previous section (Algorithm 5) amenable to practical implementation, Arumugam and Van Roy (2021a) look to the classic Blahut-Arimoto algorithm (Arimoto, 1972; Blahut, 1972). Just as TS and STS perform probability matching with respect to A⋆ and Aε in each time period, respectively, the Blahut-Arimoto STS (BLASTS) algorithm (presented as Algorithm 2 where one should recall that reward maximization and regret minimization are equivalent) conducts probability matching with respect to Ãt in each time period to determine the policy: πt(a) = pt(Ãt = a), ∀a ∈ 𝒜. For two discrete random variables representing an uncompressed information source and the resulting lossy compression, the Blahut-Arimoto algorithm computes the channel that achieves the rate-distortion limit (that is, achieves the infimum in Equation 4) by iterating alternating update equations until convergence. More concretely, the algorithm is derived by optimizing the Lagrangian of the constrained optimization (Boyd & Vandenberghe, 2004) that is the rate-distortion function, which is itself known to be a convex optimization problem (Chiang & Boyd, 2004). We refer readers to Arumugam and Van Roy (2021a) for precise computational details of the Blahut-Arimoto algorithm for solving the rate-distortion function 𝓡t(D) that yields Ãt, as well as to Arumugam and Van Roy (2021b) for details on the exact theoretical derivation.
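For intuition, with the (sampled) posterior ηt serving as the information source and expected regret d(e, a) = 𝔼[R⋆ − ρ̄(a) | 𝓔 = e] serving as the distortion, the alternating updates take the standard textbook form: first update the marginal, q(a) ← ∑e ηt(e) δ(a | e), and then update the channel, δ(a | e) ∝ q(a) exp{−β d(e, a)}, iterating both steps until convergence. We state these generic updates only as a sketch for intuition; the exact variant implemented in the cited work may differ in its details.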
One salient detail that emerges from using the Blahut-Arimoto algorithm in this manner is that it no longer depends on a distortion threshold D ∈ ℝ≥0 as input but, instead, takes as input a value of the Lagrange multiplier β ∈ ℝ≥0; lower values of β communicate a preference for rate minimization whereas larger values of β prioritize distortion minimization. To each value of β, there is an associated distortion threshold D, as β represents the desired slope achieved along the corresponding rate-distortion curve (Blahut, 1972; Csiszár, 1974a, 1974b). As, in practice, ηt(𝓔) tends to be a continuous distribution, Arumugam and Van Roy (2021a) induce a discrete information source by drawing a sufficiently large number of Monte-Carlo samples and leveraging the resulting empirical distribution, which is a theoretically-sound estimator of the true rate-distortion function (Harrison & Kontoyiannis, 2008; Palaiyanur & Sahai, 2008).
As these target actions {Ãt}t∈[T] are born out of a need to balance the simplicity and utility of what an agent aims to learn from its interactions within the environment, we can decompose empirical results into those that affirm these two criteria are satisfied in isolation. Since assessing utility or, equivalently, performance shortfall is a standard evaluation metric used throughout the literature, we begin there and offer regret curves in Figure 4 for Bernoulli and Gaussian bandits with 10 independent arms (matching, for example, the empirical evaluation of Russo and Van Roy [2018a]); recall that the former implies Bernoulli rewards Rt ∼ Bernoulli(ρ̄(At)) while the latter yields Gaussian rewards with unit variance Rt ∼ 𝒩(ρ̄(At), 1). For readers unfamiliar with such plots, recall that the regret in a given time period reflects the performance shortfall between an agent’s chosen action and the optimal action. The cumulative regret curves shown in Figure 4 display the sum of all per-period regret up to and including the current time period. A sub-optimal agent will yield linear regret where the slope conveys the degree of the sub-optimality. Meanwhile, optimal agents will eventually incur per-period regret of zero and so will have cumulative regret that eventually converges to a fixed value. We evaluate TS and BLASTS agents where, for the latter, the Lagrange multiplier hyperparameter β ∈ ℝ≥0 is fixed and tested over a broad range of values. All agents begin with a Beta(1, 1) prior for each action of the Bernoulli bandit and a 𝒩(0, 1) prior for the Gaussian bandit. For each individual agent, the cumulative regret incurred by the agent is plotted over each time period t ∈ [T].
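For readers who prefer code to prose, the bookkeeping behind such plots is simply a running sum of per-period regret; the snippet below (with an arbitrary stand-in action sequence) illustrates the computation.

```python
import numpy as np

# Cumulative regret as plotted in Figure 4: per-period regret is the gap between the optimal
# mean reward and the mean reward of the chosen arm, summed over time periods.
true_means = np.array([0.3, 0.5, 0.7])                         # example Bernoulli bandit
chosen = np.random.default_rng(4).integers(3, size=1000)       # arbitrary stand-in action sequence
per_period_regret = true_means.max() - true_means[chosen]
cumulative_regret = np.cumsum(per_period_regret)
print("final cumulative regret:", round(float(cumulative_regret[-1]), 1))  # grows linearly here
```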
Recalling that our distortion function is directly connected to the expected regret of the BLASTS agent, we observe that smaller values of β so aggressively prioritize rate minimization that the resulting agents incur linear regret; in both bandit environments, this trend persists for all values β ≤ 100. Notably, as β ↑ ∞, we observe the resulting agents yield performance more similar to regular TS. This observation aligns with expectations since, for a sufficiently large value of β, the Blahut-Arimoto algorithm will proceed to return a channel that only places probability mass on the distortion-minimizing actions, which are, indeed, the optimal actions A⋆ for each realization of the environment. A notable auxiliary finding in these results, also seen in the original experiments of Arumugam and Van Roy (2021a), is that intermediate values of β manage to yield regret curves converging towards the optimal policy more efficiently than TS; this is, of course, only possible when the distortion threshold D implied by a particular setting of β falls below the smallest action gap of the bandit problem.
While the previous experiments confirm that BLASTS can be used to instantiate a broad spectrum of agents that target actions of varying utilities, it is difficult to assess the simplicity of these targets and discern whether or not less-performant target actions can in fact be identified more quickly than near-optimal ones. As a starting point, one might begin with the agent’s prior over the environment and compute 𝕀1(𝓔; Ã1) to quantify how much information each agent’s initial learning target requires from the environment a priori. In Figure 5, we compare this to 𝕀1(𝓔; Aε) and sweep over the respective β and ε values to generate the resulting rate-distortion curves for Bernoulli and Gaussian bandits with 1000 independent arms. The results corroborate earlier discussion of how an STS agent engages with a learning target Aε that yields some trade-off between ease of learnability and performance, but not necessarily the best trade-off. In contrast, since 𝓡1(D) ≈ 𝕀1(𝓔; Ã1) (where the approximation is due to sampling), we expect and do indeed recover a better trade-off between rate and performance using the Blahut-Arimoto algorithm. To verify that target actions at the lower end of the spectrum (lower rate and higher distortion) can indeed be learned more quickly, we can plot the rate of the channel δt(Ãt | 𝓔) computed by BLASTS across time periods, as shown in Figure 6; for TS, we additionally plot the entropy over the optimal action ℍt(A⋆) as time passes. We observe that smaller values of β lead to learning targets with smaller initial rates that decay much more quickly than their counterparts at larger values of β. Again, as β ↑ ∞, these rate curves concentrate around that of regular TS.
Overall, this section has provided an overview of prior work that moves past the standard goal of finding optimal actions A⋆ in multi-armed bandit problems and towards capacity-limited decision-making agents. Extending beyond the empirical findings observed in these prior works, we provide additional experiments (see Figure 6) that show how the minimization of rate leads to target actions that are simpler to learn, allowing an agent to conclude its learning in fewer time periods and respect limitations on time and computational resources. Crucially, rate-distortion theory emerges as a natural conduit for identifying target actions that respect an agent’s limitations while still being sufficiently useful for the task at hand.
DISCUSSION
In this paper, we have introduced capacity-limited Bayesian reinforcement learning, capturing a novel perspective on lifelong learning under a limited cognitive load while also surveying existing theoretical and algorithmic advances specific to multi-armed bandits (Arumugam & Van Roy, 2021a) and reinforcement learning (Arumugam & Van Roy, 2022). Taking a step back, we now situate our contributions in a broader context by reviewing related work on capacity-limited cognition as well as information-theoretic reinforcement learning. As our framework sits at the intersection of Bayesian inference, reinforcement learning, and rate-distortion theory, we use this opportunity to highlight particularly salient pieces of prior work that sit at the intersection of Bayesian inference and rate-distortion theory as well as the intersection of reinforcement learning and rate-distortion theory, respectively. Furthermore, while the algorithms discussed in this work all operationalize the Blahut-Arimoto algorithm and Thompson Sampling as the primary mechanisms for handling rate-distortion optimization and exploration, respectively, we also discuss opportunities to expand to more sophisticated strategies for computing a target action and exploring once it has been determined. Lastly, we conclude our discussion by returning to a key assumption used throughout this work that an agent consistently maintains idealized beliefs about the environment 𝓔 through perfect Bayesian inference.
Related Work on Learning, Decision-Making, and Rate-Distortion Theory
There is a long, rich literature exploring the natural limitations on time, knowledge, and cognitive capacity faced by human (and animal) decision makers (Amir et al., 2020; Bhui et al., 2021; Binz & Schulz, 2022; Brown et al., 2022; Gershman et al., 2015; Gigerenzer & Goldstein, 1996; Griffiths et al., 2015; Ho et al., 2022; Icard & Goodman, 2015; Lieder & Griffiths, 2020; Newell & Simon, 1972; Newell et al., 1958; Prystawski et al., 2022; Simon, 1956, 1982; Shugan, 1980; Vul et al., 2014). Crucially, our focus is on a recurring theme throughout this literature of modeling these limitations on cognitive capabilities as being information-theoretic in nature (Bari & Gershman, 2022; Botvinick et al., 2015; Gershman, 2020, 2023; Gershman & Lai, 2020; Ho et al., 2020; Jakob & Gershman, 2022; Lai & Gershman, 2021; Mikhael et al., 2021; Parush et al., 2011; Peng, 2005; Sims, 2003, 2016, 2018; Zénon et al., 2019).
Broadly speaking and under the episodic reinforcement learning formulation of Appendix B, these approaches all center around the perspective that a policy πh : 𝒮 → Δ(𝒜) mapping states to distributions over actions should be modeled as a communication channel that, like a human decision-maker with limited information processing capability, is subject to a constraint on the maximal number of bits that may be transmitted across it. Consequently, an agent aspiring to maximize returns must do so subject to this constraint on policy complexity; conversely, an agent ought to transmit the minimum amount of information possible while it endeavors to reach a desired level of performance (Polani, 2009, 2011; Rubin et al., 2012; Tishby & Polani, 2011). Paralleling the distortion-rate function 𝒟(R), the resulting policy-optimization objective follows as maximizing 𝔼[Qπ(S, A)] over policies subject to 𝕀(S; A) ≤ R. It is important to acknowledge that such a formulation sits directly at the intersection of reinforcement learning and rate-distortion theory without invoking any principles of Bayesian inference. Depending on the precise work, subtle variations on this optimization problem arise from choosing a fixed state distribution for the random variable S (Polani, 2009, 2011), incorporating the state visitation distribution of the policy being optimized (Gershman, 2020; Lai & Gershman, 2021; Still & Precup, 2012), or assuming access to the generative model of the MDP and decomposing the objective across a finite state space (Rubin et al., 2012; Tishby & Polani, 2011). In all of these cases, the empirical methodology tends to converge, also making use of variations on the classic Blahut-Arimoto algorithm to solve the Lagrangian associated with the constrained optimization (Boyd & Vandenberghe, 2004) and producing policies that exhibit higher entropy across states under an excessively limited rate R, with a gradual convergence towards the greedy optimal policy as R increases.
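In Lagrangian form, this constrained objective is typically relaxed to maximizing 𝔼[Qπ(S, A)] − (1/β) 𝕀(S; A) over policies for some multiplier β > 0, whose solution takes the familiar self-consistent form π(a | s) ∝ p(a) exp{β Qπ(s, a)}, where p(a) denotes the marginal action distribution under the relevant state distribution. We state this generic form only for orientation, as the precise objective and notation vary across the works cited above.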
The alignment between this optimization problem and that of the distortion-rate function is slightly wrinkled by the non-stationarity of the distortion function (here, Qπ is used as an analogue to distortion, which changes as the policy or channel does) and, when using the policy visitation distribution for S, the non-stationarity of the information source. Despite these slight, subtle mismatches with the core rate-distortion problem, the natural synergy between cognitive and computational decision making (Lake et al., 2017; Tenenbaum et al., 2011) has led to various reinforcement-learning approaches that draw direct inspiration from this line of thinking (Abel et al., 2019; Goyal, Bengio, et al., 2020; Goyal, Sodhani, et al., 2020; Goyal et al., 2019; Klyubin et al., 2005; Lerch & Sims, 2018, 2019; Ortega & Braun, 2011, 2013; Shafieepoorfard et al., 2016; Still & Precup, 2012; Tiomkin & Tishby, 2017), most notably including parallel connections to work on “control as inference” or KL-regularized reinforcement learning (Fox et al., 2016; Galashov et al., 2019; Haarnoja et al., 2017, 2018; Kappen et al., 2012; Levine, 2018; Tirumala et al., 2019; Todorov, 2007; Toussaint, 2009; Ziebart, 2010). Nevertheless, despite their empirical successes, such approaches lack principled mechanisms for addressing the exploration challenge (O’Donoghue et al., 2020). In short, the key reason behind this is that the incorporation of Bayesian inference allows for a separation of reducible or epistemic uncertainty that exists due to an agent’s lack of knowledge versus irreducible or aleatoric uncertainty that exists due to the natural stochasticity that may exist within a random outcome (Der Kiureghian & Ditlevsen, 2009). Without leveraging a Bayesian setting, a random variable denoting an agent’s belief about the environment 𝓔 or underlying MDP 𝓜⋆ no longer exists, and a channel from beliefs to actions like the ones explored throughout this work ceases to exist. That said, the notion of rate preserved by these methods has been shown to constitute a reasonable notion of policy complexity (Lai & Gershman, 2021) and future work may benefit from combining the two approaches.
Similar to human decision making (Gershman, 2018, 2019; Schulz & Gershman, 2019), provably-efficient reinforcement-learning algorithms have historically relied upon one of two possible exploration strategies: optimism in the face of uncertainty (Auer et al., 2009; Azar et al., 2017; Bartlett & Tewari, 2009; Brafman & Tennenholtz, 2002; Dann & Brunskill, 2015; Dann et al., 2017; Dong et al., 2022; Jaksch et al., 2010; Jin et al., 2018; Kakade, 2003; Kearns & Singh, 2002; Strehl et al., 2009; Zanette & Brunskill, 2019) or posterior sampling (Agrawal & Jia, 2017; Lu & Van Roy, 2019; Lu et al., 2023; Osband et al., 2013; Osband & Van Roy, 2017). While both paradigms have laid down solid theoretical foundations, a line of work has demonstrated how posterior-sampling methods can be more favorable both in theory and in practice (Dwaracherla et al., 2020; Osband, Blundell, et al., 2016; Osband, Van Roy, et al., 2016; Osband et al., 2013, 2019; Osband & Van Roy, 2017). The theoretical results discussed in this work advance and further generalize this line of thinking through the concept of learning targets, introduced by Lu et al. (2023), which opens up new avenues for entertaining solutions beyond optimal policies and for conditioning an agent’s exploration on what it endeavors to learn from its environment. Future work may be able to draw a tangential but interesting parallel between such exploratory strategies and, for example, those empirically observed in preschool children (Cook et al., 2011), who are demonstrably capable of designing interventions targeted towards maximizing information gain about particular facets of the environment. While this literature traditionally centers on a single agent interacting within its environment, generalizations to multiple agents acting concurrently while coupled through shared beliefs have been formalized and examined in theory as well as in practice (Chen et al., 2022; Dimakopoulou & Van Roy, 2018; Dimakopoulou et al., 2018); translating the ideas discussed here to further account for capacity limitations in that setting constitutes a promising direction for future work.
Finally, we note that while the work cited thus far was developed in the reinforcement-learning community, the coupling of rate-distortion theory and Bayesian inference to strike a balance between the simplicity and utility of what an agent learns has been studied extensively by Gottwald and Braun (2019), who come from an information-theoretic background studying bounded rationality (Ortega & Braun, 2011, 2013). Perhaps the key distinction between the work surveyed here and theirs is the further incorporation of reinforcement learning, which provides a slightly more precise foundation upon which existing machinery can be repurposed to derive theoretical results like regret bounds. In contrast, the formulation of Gottwald and Braun (2019) follows more abstract utility-theoretic decision making while also leveraging ideas from microeconomics and generalizing beyond standard Shannon information-theoretic quantities; we refer readers to their excellent, rigorous treatment of this topic.
Generalizations to Other Families of Decision Rules
The previous sections demonstrated several concrete implementations of capacity-limited Bayesian decision-making. We focused on BLASTS, an algorithm that generalizes Thompson Sampling, itself a quintessential algorithm for navigating the explore-exploit tradeoff in a principled manner in multi-armed bandit and sequential decision-making problems. We emphasize, however, that BLASTS is only one particular instantiation of the framework espoused by the rate-distortion function of Equation 2. Here, we briefly sketch other directions in which the framework has been or could be applied.
First, the general framework of capacity-limited Bayesian decision-making can, in principle, be applied to any algorithm that, when supplied with beliefs about the environment and a particular target for learning, induces a policy to execute in the environment. For example, in information-directed sampling, choices are made not only based on current beliefs about immediate rewards but also based on how actions produce informative consequences that can guide future behavior (Hao & Lattimore, 2022; Hao et al., 2022; Lu et al., 2023; Russo & Van Roy, 2014, 2018a). This strategy motivates a decision-maker to engage in directed exploration, as opposed to the random exploration exemplified by Thompson Sampling (Wilson et al., 2014), and thereby better resolve the explore-exploit dilemma. Work by Arumugam and Van Roy (2021b) has extended the BLASTS algorithm to develop variants of information-directed sampling that similarly minimize the rate between environment estimates and actions. Future work could explore richer families of decision rules, such as those based on Bayes-optimal solutions over longer time horizons (Duff, 2002), as well as ones that look past the KL-divergence as the core quantifier of information (Lattimore & Gyorgy, 2021; Lattimore & Szepesvári, 2019; Zimmert & Lattimore, 2019).
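To illustrate the flavor of this decision rule, here is a simplified, deterministic sketch of information-directed sampling for a Bernoulli bandit whose posterior is supported on a finite set of environment hypotheses: each arm is scored by the ratio of its squared expected regret to the information its observation would reveal about the optimal action A⋆. This is only an illustrative variant (the formulation of Russo and Van Roy (2014) optimizes this ratio over action distributions), and all names are hypothetical.

```python
import numpy as np

def bernoulli_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def ids_action(post, means):
    """Deterministic information-directed sampling over a finite hypothesis set.
    post: (n_hyp,) posterior over hypotheses; means: (n_hyp, n_actions) Bernoulli means."""
    _, n_actions = means.shape
    a_star = means.argmax(axis=1)                       # optimal action under each hypothesis
    p_astar = np.array([post[a_star == a].sum() for a in range(n_actions)])
    # Expected regret of committing to each action under the current posterior.
    regret = (post * means.max(axis=1)).sum() - post @ means
    # Information gain about A*: I(A*; Y_a) for a Bernoulli observation Y_a.
    gains = np.zeros(n_actions)
    for a in range(n_actions):
        h_marginal = bernoulli_entropy(post @ means[:, a])
        h_conditional = 0.0
        for ast in range(n_actions):
            if p_astar[ast] == 0:
                continue
            cond_post = post * (a_star == ast) / p_astar[ast]
            h_conditional += p_astar[ast] * bernoulli_entropy(cond_post @ means[:, a])
        gains[a] = h_marginal - h_conditional
    ratio = regret ** 2 / np.maximum(gains, 1e-12)      # per-action information ratio
    return int(ratio.argmin())

# Three hypotheses over two arms; the posterior favors hypothesis 0.
post = np.array([0.5, 0.3, 0.2])
means = np.array([[0.7, 0.4], [0.2, 0.9], [0.6, 0.5]])
print("IDS selects arm", ids_action(post, means))
```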
Additionally, BLASTS itself uses a seminal algorithm from the information-theory literature to address the rate-distortion optimization problem and find the decision rule that optimally trades off reward and information: the Blahut-Arimoto algorithm (Arimoto, 1972; Blahut, 1972). However, this standard algorithm, while mathematically sound for random variables taking values on abstract spaces (Csiszár, 1974b), is only computationally tractable for discrete random variables. Extensions to general input distributions (e.g., distributions with continuous or countable support) rely on an estimator with elegant theoretical properties such as asymptotic consistency (Harrison & Kontoyiannis, 2008; Palaiyanur & Sahai, 2008), but even this estimator is limited to output distributions with finite support. This restricts its applicability to problems where the action space is finite and relatively small (even if the environment space is complex). Thus, an important direction for future research will be to develop algorithms for finding capacity-limited decision rules based on versions of Blahut-Arimoto designed for general output distributions (e.g., particle filter-based algorithms [Dauwels, 2005]).
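For reference, the discrete case that remains tractable looks roughly as follows: a sketch of the standard Blahut-Arimoto iteration applied to a channel from posterior samples of candidate environments to a small, finite action set, with per-environment regret as the distortion. Sweeping the Lagrange multiplier traces out an approximation of the rate-distortion frontier. This is in the spirit of how BLASTS uses Blahut-Arimoto, not a reproduction of the authors' implementation; all names are illustrative.

```python
import numpy as np

def blahut_arimoto(p_env, distortion, beta, n_iters=200):
    """Standard Blahut-Arimoto iteration for a discrete source (posterior samples of
    candidate environments) and a finite output alphabet (actions) at multiplier beta.
    p_env: (n_env,) source probabilities; distortion: (n_env, n_actions) matrix."""
    n_actions = distortion.shape[1]
    marginal = np.full(n_actions, 1.0 / n_actions)
    for _ in range(n_iters):
        logits = np.log(marginal)[None, :] - beta * distortion
        channel = np.exp(logits - logits.max(axis=1, keepdims=True))
        channel /= channel.sum(axis=1, keepdims=True)     # q(a | environment)
        marginal = p_env @ channel                        # updated output marginal
    rate = np.sum(p_env[:, None] * channel * (np.log(channel) - np.log(marginal)[None, :]))
    expected_distortion = np.sum(p_env[:, None] * channel * distortion)
    return channel, rate, expected_distortion

# Posterior over three candidate two-armed bandits; distortion is per-environment regret.
p_env = np.array([0.6, 0.3, 0.1])
means = np.array([[0.8, 0.3], [0.4, 0.9], [0.5, 0.5]])
regret = means.max(axis=1, keepdims=True) - means
for beta in (0.5, 5.0, 50.0):                             # sweep the multiplier to trace R(D)
    _, rate, dist = blahut_arimoto(p_env, regret, beta)
    print(f"beta={beta}: rate={rate:.3f} nats, expected regret={dist:.3f}")
```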
Capacity-Limited Estimation and Alternative Information Bottlenecks
Throughout this paper, we have assumed that environment estimation is not directly subject to capacity limitations and that decision-makers perform perfect Bayesian inference. Naturally, however, this idealized scenario is not guaranteed to hold for biological or artificial decision-making agents. One high-level perspective on the core problem addressed in this work is that decision-making agents cannot acquire unbounded quantities of information from the environment; this reality motivates the need to prioritize information, and rate-distortion theory emerges as a natural tool for facilitating such a prioritization scheme.
By the same token, capacity-limited decision-making agents should also seldom find themselves capable of retaining all bits of information uncovered about the underlying environment 𝓔. If this were possible, then maintaining perfect belief estimates about the environment via ηt would be a reasonable supposition. In reality, however, an agent must also be judicious in what pieces of environment information are actually retained. Lu et al. (2023) introduce terminology for discussing this limited corpus of world knowledge as an environment proxy, 𝓔̃. The lack of fidelity between this surrogate and the true environment 𝓔 translates to the approximate nature of an agent’s Bayesian inference when maintaining beliefs about 𝓔̃ in lieu of 𝓔. For biological decision-making agents, the concept of a proxy seems intuitive, as noted by Herbert Simon (Simon, 1956) many decades ago: “we are not interested in describing some physically objective world in its totality, but only those aspects of the totality that have relevance as the ‘life space’ of the organism considered. Hence, what we call the ‘environment’ will depend upon the ‘needs,’ ‘drives,’ or ‘goals’ of the organism.”
Curiously, the relationship between the original environment 𝓔 and this proxy can also be seen as a lossy compression problem where only a salient subset of the cumulative environment information need be retained by the agent for competent decision-making. Consequently, the associated rate-distortion function, and the question of which candidate notions of distortion are suitable, is likely to be an interesting object of study for future work. Practical optimization of such a rate-distortion function would likely benefit from recent statistical advances in empirical distribution compression (Dwivedi & Mackey, 2021) to permit representing the information source via a limited number of Monte-Carlo samples.
Finally, although an in-depth analysis of capacity limits on inference is beyond the scope of the current paper, it is worth noting that recent findings in neuroscience support the possibility of a bottleneck on choice processes even if the bottleneck on inference is minimal. For example, when trained on stimuli presented at different angles, mice have been shown to behaviorally discriminate stimulus orientations that differ by 20°–30° (Abdolrahmani et al., 2019). However, direct neural measurements from visual processing regions reveal sensitivity to orientation differences as small as 0.37° (Stringer et al., 2021). The nearly 100× higher precision of sensory versus behavioral discrimination is consistent with a greater information bandwidth on inference compared to choice, as assumed in the current version of the model.2 Similarly, work tracking the development of decision-making strategies in children provides evidence of capacity limits on choice processes even in the absence of limits on inference. For example, Decker et al. (2016) report that on a task designed to dissociate model-free versus model-based learning mechanisms, 8–12 year olds show signs of encoding changes in transition structure (longer reaction times) but do not appear to use this information to make better decisions, unlike 13–17 year olds and adults.3 This result is consistent with a distinct bottleneck between inference and action that has a developmental trajectory. In short, the analyses developed in this paper provide a starting point for understanding the computational principles that underlie cases in which decision-makers display approximately optimal inference but systematically suboptimal choice.
Conclusion
Our goal in this paper has been to review key insights from work on capacity-limited Bayesian decision-making by Arumugam and Van Roy (2021a, 2022) and situate it within existing work on capacity-limited cognition and decision-making. This discussion naturally leads to a number of questions, in particular, how the general framework presented can be applied to a wider range of algorithms, how other kinds of information bottlenecks could affect learning, and whether humans and other animals are capacity-limited Bayesian decision-makers. We hope that by formally outlining the different components of capacity-limited inference and choice, the current work can facilitate future cross-disciplinary investigations to address such topics.
ACKNOWLEDGMENTS
We thank the action editor and reviewers for their helpful comments and feedback on the article.
AUTHOR CONTRIBUTIONS
D.A.: Conceptualization; Formal analysis; Methodology; Writing – review & editing; M.K.H.: Conceptualization; Formal analysis; Methodology; Writing – review & editing; N.D.G: Conceptualization; Supervision; Writing – review & editing; B.V.R.: Conceptualization; Supervision; Writing – review & editing.
FUNDING INFORMATION
Financial support from Army Research Office (ARO) grant W911NF2010055 (to BVR) is gratefully acknowledged.
Notes
1. At the very first timestep, the initial history only consists of an initial observation H0 = O0 ∈ 𝒪.
2. Special thanks to Harrison Ritz and Jonathan Cohen for pointing out the connection to these findings.
3. Special thanks to Catherine Hartley for pointing out the connection to these findings.
REFERENCES
APPENDIX A: PRELIMINARIES
In this section, we provide details on our notation and information-theoretic quantities used throughout the paper. We encourage readers to consult (Cover & Thomas, 2012; Duchi, 2021; Gray, 2011; Polyanskiy & Wu, 2024) for more background on information theory. We define all random variables with respect to a probability space (Ω, 𝓕, ℙ). For any two random variables X and Y, we use the shorthand notation p(X) ≜ ℙ(X ∈ ·) to denote the law or distribution of the random variable X and, analogously, p(X | Y) ≜ ℙ(X ∈ · | Y) as well as p(X | Y = y) ≜ ℙ(X ∈ · | Y = y) for the associated conditional distributions given Y and a realization of Y = y, respectively. For the ease of exposition, we will assume throughout this work that all random variables are discrete; aside from there being essentially no loss of generality by assuming this (see Equation 2.2.1 of Duchi [2021] or Theorem 4.5 of Polyanskiy and Wu [2024] for the Gelfand-Yaglom-Perez definition of divergence [Gelfand & Yaglom, 1959; Perez, 1959]), extensions to arbitrary random variables taking values on abstract spaces are straightforward and any theoretical results presented follow through naturally to these settings. In the case of any mentioned real-valued or vector-valued random variables, one should think of these as discrete with support obtained from some suitably fine quantization such that the resulting discretization error is negligible. For any natural number N ∈ ℕ, we denote the index set as [N] ≜ {1, 2, …, N}. For any arbitrary set 𝒳, Δ(𝒳) denotes the set of all probability distributions with support on 𝒳. For any two arbitrary sets 𝒳 and 𝒴, we denote the class of all functions mapping from 𝒳 to 𝒴 as {𝒳 → 𝒴} ≜ {f | f : 𝒳 → 𝒴}.
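As a small, self-contained numerical illustration of these quantities for discrete random variables (not tied to any particular model in the paper), the following computes ℍ(X), ℍ(X | Y), and 𝕀(X; Y) directly from a joint probability table.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Joint distribution p(X, Y) over a 2 x 3 alphabet.
joint = np.array([[0.10, 0.20, 0.10],
                  [0.30, 0.05, 0.25]])
p_x = joint.sum(axis=1)                  # marginal p(X)
p_y = joint.sum(axis=0)                  # marginal p(Y)

h_x = entropy(p_x)
# H(X | Y) = sum_y p(y) * H(X | Y = y)
h_x_given_y = sum(p_y[j] * entropy(joint[:, j] / p_y[j]) for j in range(joint.shape[1]))
mutual_information = h_x - h_x_given_y   # I(X; Y) = H(X) - H(X | Y)

print(f"H(X) = {h_x:.3f}, H(X|Y) = {h_x_given_y:.3f}, I(X;Y) = {mutual_information:.3f} (nats)")
```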
APPENDIX B: EPISODIC REINFORCEMENT LEARNING
In this section, we again specialize the general problem formulation of the Continual Learning section, this time by introducing the assumption of episodicity commonly made throughout the reinforcement-learning literature. Thompson Sampling will again reappear as a quintessential algorithm for addressing exploration, under the additional assumption that planning across any world model is always computationally feasible. Under this caveat, we survey existing theoretical results which accommodate capacity-limited agents via rate-distortion theory.
Problem Formulation
We formulate a sequential decision-making problem as an episodic, finite-horizon Markov Decision Process (MDP) (Bellman, 1957; Puterman, 1994) defined by 𝓜 = 〈𝒮, 𝒜, 𝒰, 𝒯, β, H〉. Here 𝒮 denotes a set of states, 𝒜 is a set of actions, 𝒰 : 𝒮 × 𝒜 → [0, 1] is a deterministic reward or utility function providing evaluative feedback signals, 𝒯 : 𝒮 × 𝒜 → Δ(𝒮) is a transition function prescribing distributions over next states, β ∈ Δ(𝒮) is an initial state distribution, and H ∈ ℕ is the maximum length or horizon. Within each one of K ∈ ℕ episodes, the agent acts for exactly H steps beginning with an initial state s1 ∼ β. For each timestep h ∈ [H], the agent observes the current state sh ∈ 𝒮, selects action ah ∼ πh(· | sh) ∈ 𝒜, enjoys a reward rh = 𝒰(sh, ah) ∈ [0, 1], and transitions to the next state sh+1 ∼ 𝒯( · | sh, ah) ∈ 𝒮.
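A minimal sketch of this interaction protocol for a tabular MDP follows; the array representations and names are illustrative conveniences rather than part of the formal definition.

```python
import numpy as np

def run_episode(T, U, beta, policy, H, rng):
    """Roll out one H-step episode in a tabular MDP.
    T: (S, A, S) transition probabilities; U: (S, A) rewards in [0, 1];
    beta: (S,) initial state distribution; policy: list of H (S, A) row-stochastic matrices."""
    n_states, n_actions = U.shape
    s = rng.choice(n_states, p=beta)                 # s_1 ~ beta
    trajectory = []
    for h in range(H):
        a = rng.choice(n_actions, p=policy[h][s])    # a_h ~ pi_h(. | s_h)
        r = U[s, a]                                  # r_h = U(s_h, a_h)
        s_next = rng.choice(n_states, p=T[s, a])     # s_{h+1} ~ T(. | s_h, a_h)
        trajectory.append((s, a, r))
        s = s_next
    return trajectory

# Two-state, two-action example with a uniformly random policy and H = 3.
rng = np.random.default_rng(0)
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])             # T[s, a, s']
U = np.array([[0.0, 1.0], [0.5, 0.2]])               # U[s, a]
init = np.array([1.0, 0.0])
uniform = [np.full((2, 2), 0.5) for _ in range(3)]
print(run_episode(T, U, init, uniform, H=3, rng=rng))
```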
A stationary, stochastic policy for timestep h ∈ [H], πh : 𝒮 → Δ(𝒜), encodes behavior as a mapping from states to distributions over actions. Letting Π ≜ {𝒮 → Δ(𝒜)} denote the class of all stationary, stochastic policies, a non-stationary policy π = (π1, …, πH) ∈ ΠH is a collection of exactly H stationary, stochastic policies whose overall performance in any MDP 𝓜 at timestep h ∈ [H] when starting at state s ∈ 𝒮 and taking action a ∈ 𝒜 is assessed by its associated action-value function Qπ𝓜,h(s, a) ≜ 𝔼[𝒰(sh, ah) + 𝒰(sh+1, ah+1) + ⋯ + 𝒰(sH, aH) | sh = s, ah = a], where the expectation integrates over randomness in the action selections and transition dynamics. Taking the corresponding value function as Vπ𝓜,h(s) ≜ 𝔼a∼πh(·|s)[Qπ𝓜,h(s, a)], we define the optimal policy π⋆ = (π⋆1, …, π⋆H) as achieving supremal value Vπ⋆𝓜,h(s) = supπ∈ΠH Vπ𝓜,h(s) for all s ∈ 𝒮, h ∈ [H].
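Because the analysis in this appendix assumes that planning in any candidate MDP is computationally feasible, the following sketch computes optimal action values and a greedy optimal policy by standard finite-horizon backward induction, using the same tabular conventions as the sketch above.

```python
import numpy as np

def plan(T, U, H):
    """Finite-horizon backward induction for a tabular MDP.
    Returns Q of shape (H, S, A) and a deterministic optimal policy of shape (H, S)."""
    n_states, n_actions = U.shape
    Q = np.zeros((H, n_states, n_actions))
    V = np.zeros((H + 1, n_states))          # V[H] = 0: no value beyond the horizon
    for h in range(H - 1, -1, -1):
        Q[h] = U + T @ V[h + 1]              # Q_h(s, a) = U(s, a) + E[V_{h+1}(s') | s, a]
        V[h] = Q[h].max(axis=1)              # V_h(s) = max_a Q_h(s, a)
    pi_star = Q.argmax(axis=2)               # pi*_h(s) = argmax_a Q_h(s, a)
    return Q, pi_star
```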
We let τk = (s1, a1, r1, …, sH, aH, rH) be the random variable denoting the trajectory experienced by the agent in the kth episode. Meanwhile, Hk = {τ1, τ2, …, τk−1} ∈ 𝓗k is the random variable representing the entire history of the agent’s interaction within the environment at the start of the kth episode. As is standard in Bayesian reinforcement learning (Bellman & Kalaba, 1959; Duff, 2002; Ghavamzadeh et al., 2015), neither the transition function nor the reward function are known to the agent and, consequently, both are treated as random variables.
Just as in the previous section but with a slight abuse of notation, we will use pk(X) = p(X | Hk) as shorthand notation for the conditional distribution of any random variable X given a random realization of an agent’s history Hk ∈ 𝓗k, at any episode k ∈ [K]. Furthermore, we will denote the entropy and conditional entropy conditioned upon a specific realization of an agent’s history Hk, for some episode k ∈ [K], as ℍk(X) ≜ ℍ(X | Hk = Hk) and ℍk(X | Y) ≜ ℍ(X | Y, Hk = Hk), for two arbitrary random variables X and Y. This notation will also apply analogously to mutual information: 𝕀k(X; Y) ≜ 𝕀(X; Y | Hk = Hk) = ℍk(X) − ℍk(X | Y) = ℍk(Y) − ℍk(Y | X). We reiterate that a reader should interpret this as recognizing that, while standard information-theoretic quantities average over all associated random variables, an agent attempting to quantify information for the purposes of exploration does so not by averaging over all possible histories that it could potentially experience, but rather by conditioning based on the particular random history Hk. The dependence on the realization of a random history Hk makes 𝕀k(X; Y) a random variable and the usual conditional mutual information arises by integrating over this randomness: 𝔼[𝕀k(X; Y)] = 𝕀(X; Y | Hk). Additionally, we will also adopt a similar notation to express a conditional expectation given the random history Hk: 𝔼k[X] ≜ 𝔼[X | Hk].
Posterior Sampling for Reinforcement Learning
A natural starting point for addressing the exploration challenge in a principled manner is via Thompson Sampling (Russo et al., 2018; Thompson, 1933). The Posterior Sampling for Reinforcement Learning (PSRL) (Abbasi-Yadkori & Szepesvari, 2014; Agrawal & Jia, 2017; Lu & Van Roy, 2019; Osband et al., 2013; Osband & Van Roy, 2014, 2017; Strens, 2000) algorithm (given as Algorithm 6) does this by, in each episode k ∈ [K], sampling a candidate MDP 𝓜k ∼ pk(𝓜⋆) and executing its optimal policy π(k) = π⋆𝓜k in the environment; notably, such posterior sampling guarantees the hallmark probability-matching principle of Thompson Sampling: pk(𝓜k = M) = pk(𝓜⋆ = M), ∀M ∈ 𝔐, k ∈ [K]. The resulting trajectory τk leads to a new history Hk+1 = Hk ∪ τk and an updated posterior over the true MDP pk+1(𝓜⋆).
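A sketch of this loop for a tabular MDP with a known reward function and independent Dirichlet posteriors over the rows of the transition function appears below; the posterior family, the reuse of the `plan` helper from the previous sketch, and all names are illustrative modeling choices rather than requirements of PSRL.

```python
import numpy as np

def psrl(U, init, H, K, true_T, rng, prior=1.0):
    """Posterior Sampling for RL with known rewards U and Dirichlet posteriors over
    each row of the transition function. Assumes plan(T, U, H) from the
    backward-induction sketch above is in scope."""
    n_states, n_actions = U.shape
    counts = np.full((n_states, n_actions, n_states), prior)    # Dirichlet pseudo-counts
    for _ in range(K):
        # Sample a candidate MDP M_k ~ p_k(M*) and compute its optimal policy.
        T_k = np.array([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                        for s in range(n_states)])
        _, pi_k = plan(T_k, U, H)
        # Execute pi_k in the true environment and fold the trajectory into the posterior.
        s = rng.choice(n_states, p=init)
        for h in range(H):
            a = pi_k[h, s]
            s_next = rng.choice(n_states, p=true_T[s, a])
            counts[s, a, s_next] += 1.0
            s = s_next
    return counts
```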
Unfortunately, for complex environments, pursuit of the exact MDP 𝓜⋆ may be an entirely infeasible goal, akin to pursuing an optimal action A⋆ within a multi-armed bandit problem. For an MDP representing control of a real-world physical system, for example, learning the associated transition function requires that the agent internalize laws of physics and motion with near-perfect accuracy. More formally, identifying 𝓜⋆ demands that the agent obtain exactly ℍ1(𝓜⋆) bits of information from the environment which, under an uninformative prior, may either be prohibitively large, far exceeding the agent’s capacity constraints, or simply impractical under time and resource constraints (Lu et al., 2023).
Rate-Distortion Theory for Target MDPs
To remedy the intractabilities imposed by PSRL when an agent must contend with an overwhelmingly-complex environment, we once again turn to rate-distortion theory as a tool for defining an information-theoretic surrogate that an agent may use to prioritize its information acquisition strategy in lieu of 𝓜⋆. If one were to follow the rate-distortion optimization of Equation 2, this would suggest identifying a channel δt(πχ | 𝓜⋆) that directly maps a bounded agent’s beliefs about 𝓜⋆ to a target policy πχ. For the purposes of analysis, Arumugam and Van Roy (2022) instead perform lossy MDP compression with the interpretation that various facets of the true MDP 𝓜⋆ must be discarded by a capacity-limited agent, who can only hope to identify a simplified world model that strives to retain as many salient details as possible. Implicit to such an approach is an assumption that the act of planning (that is, mapping any MDP M ∈ 𝔐 to its optimal policy π⋆M) can always be done in a computationally-efficient manner irrespective of the agent’s capacity limitations. From a mechanistic perspective, this is likely implausible for both artificial agents in large-scale, high-dimensional environments of interest as well as biological agents (Ho et al., 2022). On the other hand, this construction induces a Markov chain 𝓜⋆ − 𝓜̃ − πχ, where 𝓜̃ denotes the compressed world model; by the data-processing inequality, we have for all k ∈ [K] that 𝕀k(𝓜⋆; πχ) ≤ 𝕀k(𝓜⋆; 𝓜̃), such that minimizing the rate of the lossy MDP compression must also limit the amount of information that flows from the agent’s beliefs about the world to the executed behavior policy.
For the precise details of this MDP compression, we first require (just as with any lossy compression problem) the specification of an information source to be compressed as well as a distortion function that quantifies the loss of fidelity between uncompressed and compressed values. Akin to the multi-armed bandit setting, we will take the agent’s current beliefs pk(𝓜⋆) as the information source to be compressed in each episode. Unlike in the bandit setting, however, the choice of distortion function d : 𝔐 × 𝔐 → ℝ≥0 presents an opportunity for the agent designer to be judicious in specifying which aspects of the environment are preserved in the agent’s compressed view of the world. From a biological perspective, one might hypothesize that some combination of nature and evolutionary pressures adapts suitable distortion functions for biological decision-making agents.
It is fairly well accepted that human beings do not model all facets of the environment when making decisions (Gigerenzer & Goldstein, 1996; Simon, 1956) and the choice of which details are deemed salient enough to warrant retention in the mind of an agent is precisely governed by the choice of distortion function. In the computational reinforcement-learning literature, this reality has called into question longstanding approaches to model-based reinforcement learning (Littman, 2015; Sutton, 1991; Sutton & Barto, 1998) which use standard maximum-likelihood estimation techniques that endeavor to learn the exact model (𝒰, 𝒯) that governs the underlying MDP. The end result has been a flurry of recent work (Abachi et al., 2020; Asadi et al., 2018; Ayoub et al., 2020; Cui et al., 2020; D’Oro et al., 2020; Farahmand, 2018; Farahmand et al., 2017; Grimm et al., 2020, 2021, 2022; Nair et al., 2020; Nikishin et al., 2022; Oh et al., 2017; Schrittwieser et al., 2020; Silver et al., 2017; Voelcker et al., 2022) which eschews the traditional maximum-likelihood objective in favor of various surrogate objectives which restrict the focus of the agent’s modeling towards specific aspects of the environment. As the core goal of endowing a decision-making agent with its own internal model of the world is to facilitate model-based planning (Bertsekas, 1995), central among these recent approaches is the value-equivalence principle (Grimm et al., 2020, 2021, 2022) which provides mathematical clarity on how surrogate models can still enable lossless planning relative to the true model of the environment.
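To make the role of the distortion function concrete, the sketch below contrasts two simple candidates for tabular MDPs that share a state-action space: raw model mismatch versus a value-based distortion in the spirit of the value-equivalence principle. Both are illustrative choices rather than the specific functions analyzed by Arumugam and Van Roy (2022), and the sketch assumes the `plan` helper from earlier in this appendix is in scope.

```python
import numpy as np

def model_mismatch_distortion(M1, M2):
    """Distortion as worst-case disagreement in rewards and transition probabilities."""
    (T1, U1), (T2, U2) = M1, M2
    return max(np.abs(U1 - U2).max(), np.abs(T1 - T2).max())

def value_based_distortion(M1, M2, H):
    """Distortion as the worst-case gap between optimal action-value functions;
    model details that do not change optimal values incur no distortion.
    Assumes plan(T, U, H) from the backward-induction sketch above is in scope."""
    (T1, U1), (T2, U2) = M1, M2
    Q1, _ = plan(T1, U1, H)
    Q2, _ = plan(T2, U2, H)
    return np.abs(Q1 - Q2).max()
```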
In summary, under a technical assumption of episodicity for the purposes of analysis, the theoretical results surveyed in this section parallel those for multi-armed bandits. While computational experiments for this episodic reinforcement-learning setting have not yet been established, owing to the computational challenges of running the Blahut-Arimoto algorithm for such a lossy MDP compression problem, the core takeaway of this section is that there is strong theoretical justification for using these tools from rate-distortion theory to empirically study capacity-limited sequential decision-making agents. We refer readers to the discussion in Appendix B.3 of Arumugam and Van Roy (2022) for consideration of how these ideas might productively scale with deep reinforcement learning to high-dimensional environments that necessitate the use of function approximation.
APPENDIX C: REGRET ANALYSIS FOR RATE-DISTORTION THOMPSON SAMPLING
When D = 0 and the agent designer is not willing to tolerate any sub-optimality relative to A⋆, Fact 1 allows this bound to recover the guarantee of Thompson Sampling exactly. At the other extreme, increasing D to 1 (recall that mean rewards are bounded in [0, 1]) yields 𝓡1(D) = 0, so the agent has nothing to learn from the environment but also suffers regret linear in T. Naturally, the “sweet spot” is to entertain intermediate values of D, where smaller values of D lead to larger amounts of information 𝓡1(D) needed to identify the corresponding target action, though still fewer bits than learning A⋆ necessarily entails.
The semantics of this performance guarantee are identical to those of Theorem 1, only now expressed explicitly through the agent’s capacity R. Namely, when the agent has no capacity for learning (R = 0), 𝒟1(R) = 1 and the agent incurs regret linear in T. Conversely, with sufficient capacity (R = ℍ1(A⋆)), 𝒟1(R) = 0 and we recover the regret bound of Thompson Sampling. Intermediate values of agent capacity will result in an agent that fully utilizes its capacity to acquire no more than R bits of information from the environment, resulting in the minimum possible expected distortion quantified by 𝒟1(R).
We begin our analysis by establishing the following fact, which also appears in the proof of Lemma 3 of Arumugam and Van Roy (2021a):
Lemma 1 shows that the expected amount of information needed from the environment in each successive time period is non-increasing and further highlights two possible sources for this improvement: (1) a change in learning target from χt to χt+1 and (2) information acquired about χt in the current time period, 𝕀t(χt; (At, Ot+1)). With this in hand, we can obtain control over the cumulative information gain of an agent across all time periods using the learning target identified under our prior, following an identical argument as Arumugam and Van Roy (2022).
Intuitively, the information ratio is a “conversion factor” that converts bits of information an agent acquires from interacting with the environment at a given time period into units of squared regret.
Competing Interests
Competing Interests: The authors declare no conflict of interest.
Author notes
Equal contribution.