Bayesian Reinforcement Learning With Limited Cognitive Load

Abstract All biological and artificial agents must act given limits on their ability to acquire and process information. As such, a general theory of adaptive behavior should be able to account for the complex interactions between an agent’s learning history, decisions, and capacity constraints. Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory. This body of work provides an account of capacity-limited Bayesian reinforcement learning, a unifying normative framework for modeling the effect of processing constraints on learning and action selection. Here, we provide an accessible review of recent algorithms and theoretical results in this setting, paying special attention to how these ideas can be applied to studying questions in the cognitive and behavioral sciences.


Introduction
Cognitive science aims to identify the principles and mechanisms that underlie adaptive behavior. An important part of this endeavor is the development of unifying, normative theories that specify "design principles" that guide or constrain how intelligent systems respond to their environment [Marr, 1982, Anderson, 1990, Lewis et al., 2014, Griffiths et al., 2015, Gershman et al., 2015]. For example, accounts of learning, cognition, and decision-making often posit a function that an organism is optimizing (e.g., maximizing long-term reward or minimizing prediction error) and test plausible algorithms that achieve this (e.g., a particular learning rule or inference process). Historically, normative theories in cognitive science have been developed in tandem with new formal approaches in computer science and statistics. This partnership has been fruitful even given differences in scientific goals (e.g., engineering artificial intelligence versus reverse-engineering biological intelligence). Normative theories play a key role in facilitating cross-talk between different disciplines by providing a shared set of mathematical, analytical, and conceptual tools for describing computational problems and how to solve them [Ho and Griffiths, 2022].
This paper is written in the spirit of such cross-disciplinary fertilization. Here, we review recent work in computer science [Arumugam and Van Roy, 2021a, 2022] that develops a novel approach for unifying three distinct mathematical frameworks that will be familiar to many cognitive scientists (Figure 1). The first is Bayesian inference, which has been used to study a variety of perceptual and higher-order cognitive processes such as categorization, causal reasoning, and social reasoning in terms of inference over probabilistic representations [Yuille and Kersten, 2006, Baker et al., 2009, Tenenbaum et al., 2011, Battaglia et al., 2013, Collins and Frank, 2013]. The second is reinforcement learning [Sutton and Barto, 1998], which has been used to model key phenomena in learning and decision-making including habitual versus goal-directed choice as well as trade-offs between exploring and exploiting [Daw et al., 2012, Dayan and Niv, 2008, Radulescu et al., 2019, Wilson et al., 2014]. The third is rate-distortion theory [Shannon, 1959, Berger, 1971], a subfield of information theory [Shannon, 1948, Cover and Thomas, 2012], which in recent years has been used to model the influence of capacity limitations in perceptual and choice processes [Sims, 2016, Lai and Gershman, 2021, Zenon et al., 2019, Zaslavsky et al., 2021]. All three of these formalisms have been used as normative frameworks in the sense discussed above: They provide general design principles (e.g., rational inference, reward-maximization, efficient coding) that explain the function of observed behavior and constrain the investigation of underlying mechanisms.
Although these formalisms have been applied to analyzing individual psychological processes, less work has used them to study learning, decision-making, and capacity limitations holistically. One reason is the lack of principled modeling tools that comprehensively integrate these multiple normative considerations. The framework of capacity-limited Bayesian reinforcement learning, originally developed by Arumugam and Van Roy [2021a, 2022] in the context of machine learning, directly addresses the question of how to combine these perspectives. Our goal is to review this work and present its key developments in a way that will be accessible to the broader research community and can pave the way for future cross-disciplinary investigations.
We present the framework in two parts. First, we discuss a formalization of capacity-limited Bayesian decision-making that introduces an information bottleneck between an agent's beliefs about the world and its actions. This motivates a novel family of algorithms for identifying decision-rules that optimally trade off reward and information. Through a series of simple toy simulations, we analyze a specific algorithm: a variant of Thompson Sampling [Thompson, 1933] that incorporates such an information bottleneck. Afterwards, we turn more fully to capacity-limited Bayesian reinforcement learning, in which a decision-maker is continuously interacting with and adapting to their environment. We report both novel simulations and previously established theoretical results in several learning settings, including multi-armed bandits as well as continual and episodic reinforcement learning. One feature of this framework is that it provides tools for analyzing how the interaction between capacity limitations and learning dynamics can influence learning outcomes. In the discussion, we explore how such analyses and our framework can be applied to questions in cognitive science. We also discuss similarities and differences between capacity-limited Bayesian reinforcement learning and existing proposals, including information-theoretic bounded rationality [Ortega and Braun, 2011, Gottwald and Braun, 2019], policy compression [Lai and Gershman, 2021], and resource-rational models based on principles separate from information theory [Lieder et al., 2014, Callaway et al., 2022, Ho et al., 2022].

Capacity-Limited Bayesian Decision-Making
This section provides a review of Bayesian models before introducing a general account of capacity-limited Bayesian decision-making. We then discuss and analyze a practical algorithm for computing capacity-limited Bayesian decision procedures based on Thompson Sampling.

Bayesian Inference and Decision-Making
Bayesian or probabilistic models have been used to characterize a range of psychological phenomena, including perception, categorization, feature learning, causal reasoning, social interaction, and motor control [Körding and Wolpert, 2004, Itti and Baldi, 2009, Ma, 2012, Goodman and Frank, 2016]. One distinguishing feature of Bayesian models is that they separate learning and decision-making into two stages: inferring a parameter of the environment and choosing an action based on those inferences (Figure 1A).

Figure 1: (A) Bayesian learning and decision-making is typically modularized into distinct stages of inference and choice. That is, the decision-maker is conceptualized as mapping experiences to probabilistic beliefs about the environment (an inference process) and then performing computations based on the resulting beliefs to produce distributions over actions (a choice process). Inference and choice processes are usually specified independently and assume that the channel from one to the other has unlimited capacity (thick solid arrow). (B) In capacity-limited Bayesian decision-making, there exists an information bottleneck between inferences and choices (narrow dotted arrow). Given the results of a fixed inference process (e.g., exact or approximate Bayesian inference), the optimal choice process trades off expected rewards and the mutual information (the rate) between beliefs about the environment and the distribution over desirable actions. (C) Capacity-limited Bayesian reinforcement learning integrates ideas from Bayesian inference [Jaynes, 2003], reinforcement learning [Kaelbling et al., 1996], and rate-distortion theory [Cover and Thomas, 2012].
Inference is formalized in terms of an environment-estimator, a probability distribution over the unknown environment E that is updated based on the experiences of the agent. Formally, given a history of experiences H_t up to time t, an environment-estimator η_t is updated according to Bayes' rule:

η_t(e) = P(E = e | H_t) ∝ P(H_t | E = e) P(E = e),   (1)

where P(H_t | E) is the likelihood and P(E) is the prior probability assigned to E. Note that the environment-estimator η_t takes the form of a probability mass function over environments.
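To make the update in Equation 1 concrete, here is a minimal sketch (ours, not from the paper) of an environment-estimator for a single Bernoulli bandit arm: the unknown environment is the arm's success probability, discretized onto a grid so that η_t is an explicit probability mass function. The grid size and uniform prior are illustrative choices.

```python
import numpy as np

# Candidate environments E = theta: possible success probabilities of one arm.
theta_grid = np.linspace(0.01, 0.99, 99)
eta = np.ones_like(theta_grid)   # uniform prior P(E)
eta /= eta.sum()

def update(eta, outcome):
    """One step of Bayes' rule: eta_{t+1}(e) is proportional to P(outcome | e) * eta_t(e)."""
    likelihood = theta_grid if outcome == 1 else (1.0 - theta_grid)
    posterior = likelihood * eta
    return posterior / posterior.sum()

# After a history of successes and failures, beliefs concentrate on
# environments consistent with that history (here, 4 successes, 1 failure).
for outcome in [1, 1, 0, 1, 1]:
    eta = update(eta, outcome)

print(theta_grid[np.argmax(eta)])  # posterior mode, near 4/5
```

With a uniform prior this grid posterior matches the conjugate Beta(5, 2) posterior up to discretization, so the mode sits at 4/5.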
Choice is formalized as a decision-rule, which bases the selection of actions on the results of the inference process (e.g., Bayesian inference). Concretely, a decision-rule δ lies internal to the agent and is a probability mass function over actions given the identity of the environment E. That is, if at timestep t, the agent samples a plausible environment θ ∼ η_t, then δ(A = a | E = θ) is the probability that any action a ∈ A is a desirable decision for the environment θ. Given an environment-estimator η_t and decision-rule δ, we can then define the joint distribution over environments and actions:

P(E = e, A = a | H_t) = η_t(e) δ(a | e).   (2)

Finally, suppose we have a real-valued utility function U(e, a) that defines the utility of an action a for a particular version of the environment e (later we discuss reinforcement learning and will consider specific utility functions that represent reward and/or value). Then the utility of an environment-estimator and decision-rule pair is the expected utility under the joint distribution they induce:

U(η_t, δ) = Σ_e Σ_a η_t(e) δ(a | e) U(e, a).

This separation of inference and choice into an independent Bayesian estimator and decision-rule is commonly assumed throughout psychology, economics, and computer science [von Neumann and Morgenstern, 1944, Kaelbling et al., 1998, Ma, 2019]. However, even if inference about the environment is exact, discerning what decisions are desirable from it incurs some non-trivial degree of cognitive load, and the associated cost of (or limit on) how much those inferences can inform choices remains unaccounted for. To remedy this, Arumugam and Van Roy [2021a, 2022] developed a framework for Bayesian learning and decision-making given an information bottleneck between inference and choice (Figure 1B). We now turn to how to extend the standard Bayesian framework to incorporate such capacity limitations.

Choice with Capacity Limitations
In capacity-limited Bayesian decision-making, we make two modifications to the standard formulation. First, rather than pre-specifying a fixed decision-rule, we allow for the decision-rule δ_t to be chosen based on the current environment-estimator η_t; intuitively, this allows for a valuation of which decisions are desirable based on the agent's current knowledge of the world, η_t. Second, rather than allowing arbitrarily complex dependencies between environment estimates and actions, we can view the decision-rule δ_t as an estimate-to-action channel that has limited capacity. We can formulate capacity limitations in a general way by considering the mutual information or rate of the estimate-to-action channel* [Cover and Thomas, 2012]. The notion of rate comes from rate-distortion theory, a sub-field of information theory that studies how to design efficient but lossy coding schemes [Shannon, 1959, Berger, 1971]. In particular, the rate of any channel quantifies the number of bits transmitted or communicated on average per data sample; in our context, this gives a precise mathematical form for how much decisions (channel outputs) are impacted by environment beliefs (channel inputs). Intuitively, the rate resulting from a decision-rule captures the amount of coupling between a decision-maker's estimates of the environment and the actions taken. The central assumption of this framework is that greater estimate-to-action coupling is more cognitively costly.
Thus, formally, an optimal agent would use a decision-rule (estimate-to-action channel) that both maximizes utility and minimizes rate. If we additionally assume that the environment-estimator η_t is fixed and exact as in Equation 1 (in Section 4, we consider relaxing this assumption), then the optimal capacity-limited decision-rule at time t is given by:

δ_t ∈ argmax_δ [ U(η_t, δ) − λ C(η_t, δ) ],   (3)

where U(η_t, δ) is the expected utility of the estimate-to-action channel induced by η_t and δ, the cost C(η_t, δ) is the rate of the channel, and λ ≥ 0 is a parameter that trades off utility and rate. Equation 3 defines an optimization target for a capacity-limited Bayesian decision-rule. However, having an optimization target does not tell us how difficult it is to find or approximate a solution, or what the solution is for a specific problem. In the next section, we discuss and analyze one illustrative procedure for finding δ_t, which then tethers the decision-rule to agent learning via Thompson Sampling [Thompson, 1933, Russo et al., 2018].
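For intuition, the utility-minus-rate objective can be evaluated directly in a tiny discrete problem. The two-environment, two-action setup below is an illustrative construction of ours, not an example from the paper.

```python
import numpy as np

# Beliefs over two candidate environments, and a utility table U[e, a]
# where each environment favors a different action.
eta = np.array([0.5, 0.5])
U = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def objective(delta, lam):
    """Evaluate U(eta, delta) - lam * I(E; A) for a channel delta(a|e)."""
    joint = eta[:, None] * delta          # P(E=e, A=a) = eta(e) * delta(a|e)
    marginal = joint.sum(axis=0)          # P(A=a)
    expected_utility = (joint * U).sum()
    ratio = np.where(joint > 0, joint / (eta[:, None] * marginal), 1.0)
    rate = (joint * np.log(ratio)).sum()  # mutual information I(E; A), in nats
    return expected_utility - lam * rate

# A fully informed channel (matched action per environment) earns utility 1
# but pays ln(2) nats of rate; an uninformed channel is free but earns 0.5.
informed = np.eye(2)
uninformed = np.full((2, 2), 0.5)
print(objective(informed, lam=0.1), objective(uninformed, lam=0.1))
```

When information is cheap (small λ) the informed channel scores higher; once λ exceeds the utility gain per nat, the uninformed channel wins, previewing the regimes discussed below.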

Thompson Sampling with Capacity-Limitations
Different decision-rules are distinguished by the type of representation they use and the algorithms that operate over those representations. For example, some decision-rules only use a point-estimate of the environment, discarding the rest of the belief distribution.

* For a joint distribution p(X, Y), the mutual information between random variables X and Y is:

I(X; Y) = Σ_x Σ_y p(X = x, Y = y) log [ p(X = x, Y = y) / (p(X = x) p(Y = y)) ],

where p(X = ·) and p(Y = ·) are the marginal distributions for X and Y, respectively. Intuitively, the mutual information captures the degree to which two random variables are "coupled". For example, if one random variable is a bijective function of the other (i.e., there is a deterministic, one-to-one correspondence between realizations of X and Y), then the mutual information will be a large positive number; conversely, if X and Y are completely independent of one another, then the mutual information is 0.
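The two coupling extremes just described (a bijective relationship versus complete independence) can be checked numerically. The 4 × 4 joint distributions below are illustrative choices of ours.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ], in nats."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(X = .)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(Y = .)
    mask = joint > 0
    return (joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum()

coupled = np.eye(4) / 4                 # Y is a bijective function of X
independent = np.full((4, 4), 1 / 16)   # X and Y independent

print(mutual_information(coupled))      # ln(4), about 1.386 nats
print(mutual_information(independent))  # 0.0
```

For the bijective case the mutual information equals the full entropy of X (ln 4 nats here); for the independent case it is exactly zero.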
Algorithm 2: Blahut-Arimoto STS (BLASTS) [Arumugam and Van Roy, 2021a]

In the Bayesian setting, decision-rules can take advantage of distributional information, which captures epistemic uncertainty [Der Kiureghian and Ditlevsen, 2009] reflected by the agent's knowledge, rather than the aleatoric uncertainty present due to random noise. For example, Thompson Sampling [Thompson, 1933, Russo et al., 2018] makes explicit use of distributional information by first sampling an environment and then selecting the best action under the premise that the sampled version of the environment reflects reality. This specific mapping from the sampled candidate environment to the corresponding best action(s) constitutes a particular decision-rule. Selecting actions to execute within the environment by sampling from this decision-rule constitutes a coherent procedure (formally outlined in Algorithm 1) that implicitly determines an action distribution given the current history of interaction H_t; sampling and executing actions in this manner is often characterized as probability matching [Agrawal and Goyal, 2012, 2013, Russo and Van Roy, 2016], where an action is only ever executed according to its probability of being optimal. Because Thompson Sampling is straightforward to implement and has good theoretical learning guarantees, it is frequently used in machine-learning applications [Chapelle and Li, 2011]. Additionally, humans often display key signatures of selecting actions via Thompson Sampling [Vulkan, 2000, Wozny et al., 2010, Gershman, 2018]. In short, Thompson Sampling is a simple, robust, and well-studied Bayesian algorithm that is, by design, tailored to a particular decision-rule. However, this decision-rule and, by extension, the standard version of Thompson Sampling as a whole, assumes that the estimate-to-action channel has unlimited capacity. What if, instead, we consider a version in which the rate is penalized and the decision-rule is optimized as in Equation 3?
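The Thompson Sampling procedure just described can be sketched compactly. The Gaussian beliefs below mirror the toy example analyzed later (means −1, 0, 1; unit standard deviations); the implementation details are ours, not Algorithm 1 verbatim.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([-1.0, 0.0, 1.0])   # posterior means over average rewards
sigma = np.ones(3)                # posterior standard deviations

def thompson_action(rng):
    """Sample a plausible environment, then act greedily under that sample."""
    sampled_env = rng.normal(mu, sigma)   # theta ~ eta_t
    return int(np.argmax(sampled_env))    # best action if theta were reality

# Probability matching: across many draws, each arm is chosen with
# (approximately) its posterior probability of being optimal.
counts = np.bincount([thompson_action(rng) for _ in range(10_000)],
                     minlength=3)
print(counts / counts.sum())
```

With these beliefs, the highest-mean arm is sampled most often but the others retain nonzero choice probability, which is exactly the exploratory behavior the rate-limited variant below generalizes.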
This consideration motivates Blahut-Arimoto Satisficing Thompson Sampling (BLASTS), an algorithm first proposed by Arumugam and Van Roy [2021a]. In order to approximate an optimal decision-rule given an environment-estimator η and rate parameter λ ≥ 0, BLASTS (whose pseudocode appears as Algorithm 2) performs three high-level procedures. First, it approximates the environment distribution by drawing Z ∈ N Monte-Carlo samples from η and proceeding with the resulting empirical distribution. Second, it uses Blahut-Arimoto, a classic algorithm from the rate-distortion theory literature [Blahut, 1972, Arimoto, 1972], to iteratively compute the (globally) optimal decision-rule, δ*, whose support is a finite action space A. Finally, it uniformly samples one of the Z initially drawn environment configurations e and then samples an action a from the computed decision-rule conditioned on that realization e of the environment. This last step allows for a generalized retention of the probability-matching principle seen in Thompson Sampling; that is, actions are only ever executed according to their probability of striking the right balance in Equation 3. One can observe that a BLASTS agent with no regard for respecting capacity limitations (λ = 0) will recover the Thompson Sampling decision-rule as a special case.
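To make the Blahut-Arimoto step concrete, here is a sketch of the inner loop on a small discrete problem. The belief η is an explicit distribution over three environment configurations, standing in for the Z Monte-Carlo samples, and all numerical values, iteration counts, and helper names are illustrative choices of ours rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.array([0.25, 0.25, 0.5])   # belief over 3 environment configurations
U = np.array([[1.0, 0.0, 0.0],      # U[e, a]: each env favors a different arm
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def blahut_arimoto(eta, U, lam, iters=200):
    """Iterate toward the channel delta(a|e) maximizing E[U] - lam * I(E;A)."""
    n_envs, n_actions = U.shape
    q = np.full(n_actions, 1.0 / n_actions)   # marginal action distribution
    for _ in range(iters):
        delta = q * np.exp(U / lam)           # delta(a|e) prop. to q(a) e^{U/lam}
        delta /= delta.sum(axis=1, keepdims=True)
        q = eta @ delta                       # re-estimate the action marginal
    return delta

def blasts_action(eta, U, lam):
    """Probability matching: sample an environment, then sample from delta."""
    e = rng.choice(len(eta), p=eta)
    delta = blahut_arimoto(eta, U, lam)
    return int(rng.choice(U.shape[1], p=delta[e]))

# Cheap information: the rule is nearly deterministic per environment.
# Expensive information: rows collapse toward one belief-independent
# distribution, so actions become uninformed by the sampled environment.
print(blahut_arimoto(eta, U, lam=0.05).round(2))
print(blahut_arimoto(eta, U, lam=100.0).round(2))
```

The two printed channels preview the regimes analyzed in the simulations below: near-Thompson behavior for small λ and belief-independent action selection for large λ.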
Since BLASTS constructs the estimate-to-action channel that optimally trades off utility and rate, the action distribution it generates is primarily sensitive to the rate parameter, λ, and the environment-estimator, η. To illustrate the behavior of the optimal decision-rule, we conducted two sets of simulations that manipulated these factors in simple three-armed bandit tasks. Our first set of simulations examined the effect of different values of the rate parameter λ, which intuitively corresponds to the cost of information measured in units of utils per nat. We calculated the marginal action distribution, π(a) = Σ_e δ*(a | e) η(e), where the belief distribution over average rewards for the three arms was represented by three independent Gaussian distributions respectively centered at −1, 0, and 1; all three distributions had a standard deviation of 1 (Figure 2A).
Remarkably, even on this simple problem, BLASTS displays three qualitatively different regimes of action selection when varying the rate parameter, λ, from 10^−2 to 10^4. When information is inexpensive (λ < 10^−1), the action distribution mimics the exploratory behavior of Thompson Sampling (consistent with theoretical predictions [Arumugam and Van Roy, 2021a]). As information becomes moderately expensive (10^−1 ≤ λ ≤ 10^1), BLASTS focuses channel capacity on the actions with higher expected utility, first reducing its selection of the worst action in expectation (a_0), followed by the second-worst/second-best action in expectation (a_1), which results in it purely exploiting the best action in expectation (a_2). Finally, as the cost in utils per nat becomes even greater (λ ≥ 10^1), BLASTS produces actions that are uninformed by its beliefs about the environment. This occurs in a manner that resembles a Boltzmann distribution with increasing temperature, eventually saturating at a uniform distribution over actions. These patterns are visualized in Figure 2B-D, which compare action probabilities for Boltzmann, Thompson Sampling, and BLASTS.
Figure 2: Capacity-limited decision-making in a three-armed bandit. (A) Bayesian decision-makers represent probabilistic uncertainty over their environment. Shown are Gaussian beliefs for average rewards for three actions, a_0, a_1, and a_2, with location parameters μ_0 = −1, μ_1 = 0, μ_2 = 1, and standard deviations σ_i = 1 for i = 0, 1, 2. (B) A non-Bayesian decision-rule is the Boltzmann or soft-max distribution [Littman, 1996], which has a temperature parameter α > 0. For the values in panel A, as α → 0, the action with the highest expected reward is chosen more deterministically; as α → ∞, actions are chosen uniformly at random. The Boltzmann decision-rule ignores distributional information. (C) An alternative decision-rule that is sensitive to distributional information is Thompson Sampling [Thompson, 1933], which implements a form of probability matching that is useful for exploration [Russo and Van Roy, 2016]. Shown are the Thompson Sampling probabilities based on N = 10,000 samples. Thompson Sampling has no parameters. (D) In capacity-limited decision-making, action distributions that are more tightly coupled to beliefs about average rewards (i.e., those with higher mutual information or rate) are penalized. The parameter λ ≥ 0 controls the penalty and represents the cost of information in rewards per nat. Blahut-Arimoto Satisficing Thompson Sampling (BLASTS) [Arumugam and Van Roy, 2021a] generalizes Thompson Sampling by finding the estimate-to-action channel that optimally trades off rewards and rate for a given value of λ. In the current example, when 0 < λ ≤ 10^−1, information is cheap and BLASTS implements standard Thompson Sampling; when 10^−1 ≤ λ ≤ 10^1, BLASTS prioritizes information relevant to maximizing rewards and focuses on exploiting arms with higher expected reward, eventually focusing only on the single best; when λ ≥ 10^1, information is too expensive to even exploit, so BLASTS resembles a Boltzmann distribution with increasing temperature, tending towards a uniform action distribution, that is, one that is completely uninformed by beliefs. Solid lines represent action probabilities according to BLASTS (Z = 50,000); dotted lines are standard Thompson Sampling probabilities for reference.
Figure 3: Blahut-Arimoto Satisficing Thompson Sampling (BLASTS) for different beliefs about average rewards in a three-armed bandit. (A) BLASTS is sensitive to the action gap, the difference between the expected rewards of the best and second-best actions. Shown are action-probability by information-cost curves when μ_1 from the example in Figure 2A is set to values in {−1.0, −0.5, 0.0, 0.5, 1.0} and all other belief parameters are held constant. (B) BLASTS is also sensitive to the degree of uncertainty, e.g., the standard deviation of average-reward estimates for each action. Shown are action-probability by information-cost curves when the standard deviation for each arm in Figure 2, σ_i, i = 0, 1, 2, is set to different values.
Our second set of simulations examined the relationship between the cost of information λ and BLASTS action probabilities for different environment-estimates. Specifically, we first examined the effect of changing beliefs about the action gap, the difference between the best and second-best action in expectation [Auer et al., 2002, Agrawal and Goyal, 2012, 2013, Farahmand, 2011, Bellemare et al., 2016]. As shown in Figure 3A, when the action gap is lower (corresponding to a more difficult decision-making task), BLASTS chooses the optimal action with lower probability for all values of λ. In addition, we examined the effect of changing uncertainty in the average rewards by setting different standard deviations for beliefs about the arms. Figure 3B shows that as uncertainty increases, BLASTS is less likely to differentially select an arm, even in the "exploitation" regime for moderate values of λ. Sensitivity to the action gap and uncertainty are key features of BLASTS that derive from the fact that it uses distributional information to guide decision-making, unlike decision-rules such as ε-greedy or Boltzmann softmax.

Summary
In the standard formulation of Bayesian decision-making, we assume an environment-estimator and decision-rule that are specified independently. By extending ideas from rate-distortion theory, Arumugam and Van Roy [2021a] defined a notion of capacity limitations applicable to decision-rules, as well as an efficient algorithm for finding an optimal capacity-limited variant of Thompson Sampling (BLASTS). In this section, we analyzed how choice distributions change as a function of the cost of information and current environment estimates, which provides some intuition for how capacity limitations affect choice from the agent's subjective point of view. In the next section, we take a more objective point of view by studying the learning dynamics that arise when capacity-limited agents interact with an environment over time.

Capacity-Limited Bayesian Reinforcement Learning
The preceding section provides a cursory overview of how rate-distortion theory accommodates capacity-limited learning within a Bayesian decision-making agent. In this section, we aim to provide mathematically precise instantiations of the earlier concepts for three distinct problem classes: (1) continual or lifelong learning, (2) multi-armed bandits, and (3) episodic Markov decision processes. Of these three types of environments, the capacity-limited learning framework we provide for continual learning is a novel contribution of this work, whereas the remaining two classes (which emerge as special cases of continual learning) have been examined in prior work [Arumugam and Van Roy, 2021a,b, 2022].

Preliminaries
In this section, we provide brief details on our notation and the information-theoretic quantities used throughout the remainder of the paper. We encourage readers to consult [Cover and Thomas, 2012, Gray, 2011, Duchi, 2021, Polyanskiy and Wu, 2022] for more background on information theory. We define all random variables with respect to a probability space (Ω, F, P). For any two random variables X and Y, we use the shorthand notation p(X) ≜ P(X ∈ ·) to denote the law or distribution of the random variable X and, analogously, p(X | Y) and p(X | Y = y) for the associated conditional distributions given Y and a realization of Y, respectively. For ease of exposition, we will assume throughout this work that all random variables are discrete; aside from there being essentially no loss of generality in assuming this (see Equation 2.2.1 of [Duchi, 2021] or Theorem 4.5 of [Polyanskiy and Wu, 2022] for the Gelfand-Yaglom-Perez definition of divergence [Gelfand and Yaglom, 1959, Perez, 1959]), extensions to arbitrary random variables taking values on abstract spaces are straightforward, and any theoretical results presented follow through naturally to these settings. In the case of any mentioned real-valued or vector-valued random variables, one should think of these as discrete with support obtained from some suitably fine quantization such that the resulting discretization error is negligible. For any natural number N ∈ N, we denote the index set as [N] ≜ {1, 2, . . ., N}. For any arbitrary set X, ∆(X) denotes the set of all probability distributions with support on X. For any two arbitrary sets X and Y, we denote the class of all functions mapping from X to Y as {X → Y} ≜ {f | f : X → Y}.
We define the mutual information between any two random variables X, Y through the Kullback-Leibler (KL) divergence:

I(X; Y) = D_KL(p(X, Y) || p(X) × p(Y)),   where D_KL(q_1 || q_2) = Σ_x q_1(x) log [ q_1(x) / q_2(x) ]

and q_1, q_2 ∈ ∆(X) are both probability distributions. An analogous definition of conditional mutual information holds through the expected KL-divergence for any three random variables X, Y, Z:

I(X; Y | Z) = E_Z [ D_KL(p(X, Y | Z) || p(X | Z) × p(Y | Z)) ].

With these definitions in hand, we may define the entropy and conditional entropy for any two random variables X, Y as

H(X) = −Σ_x p(X = x) log p(X = x),   H(X | Y) = −E_Y [ Σ_x p(X = x | Y) log p(X = x | Y) ].

This yields the following identities for mutual information and conditional mutual information for any three arbitrary random variables X, Y, and Z:

I(X; Y) = H(X) − H(X | Y),   I(X; Y | Z) = H(X | Z) − H(X | Y, Z).

Finally, for any three random variables X, Y, and Z which form the Markov chain X → Y → Z, we have the following data-processing inequality: I(X; Z) ≤ I(X; Y).

In subsequent sections, the random variable H_t will often appear, denoting the current history of an agent's interaction with the environment. We will use p_t(X) = p(X | H_t) as shorthand notation for the conditional distribution of any random variable X given a random realization of an agent's history H_t, at any timestep t ∈ [T]. Similarly, we denote by H_t(X) and H_t(X | Y) the entropy and conditional entropy conditioned upon a specific realization of an agent's history H_t, for some timestep t ∈ [T], for two arbitrary random variables X and Y. This notation will also apply analogously to the mutual information I_t(X; Y), given an arbitrary third random variable Z in the conditional case. A reader should interpret this as recognizing that, while standard information-theoretic quantities average over all associated random variables, an agent attempting to quantify information for the purposes of exploration does so not by averaging over all possible histories that it could potentially experience, but rather by conditioning on the particular random history H_t that it has currently observed thus far. This dependence on the random realization of history H_t makes all of the aforementioned quantities random variables themselves. The traditional notions of conditional entropy and conditional mutual information given the random variable H_t arise by taking an expectation over histories:

H(X | H_t) = E [ H_t(X) ],   I(X; Y | H_t) = E [ I_t(X; Y) ].

Additionally, we will adopt a similar notation to express a conditional expectation given the random history: E_t [ X ] ≜ E [ X | H_t ].
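As a quick numerical sanity check of the data-processing inequality, one can build a small Markov chain X → Y → Z from stochastic matrices and compare I(X; Y) with I(X; Z). The distributions below are arbitrary illustrative choices.

```python
import numpy as np

# A Markov chain X -> Y -> Z: P(X), a channel P(Y|X), and a channel P(Z|Y).
px = np.array([0.3, 0.7])
p_y_given_x = np.array([[0.9, 0.1],
                        [0.2, 0.8]])
p_z_given_y = np.array([[0.7, 0.3],
                        [0.4, 0.6]])

def mi(joint):
    """Mutual information of a 2-D joint distribution, in nats."""
    px_ = joint.sum(axis=1, keepdims=True)
    py_ = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return (joint[mask] * np.log(joint[mask] / (px_ @ py_)[mask])).sum()

joint_xy = px[:, None] * p_y_given_x   # P(X, Y)
joint_xz = joint_xy @ p_z_given_y      # P(X, Z): Z depends on X only via Y

print(mi(joint_xy), mi(joint_xz))      # I(X;Y) >= I(X;Z)
```

Passing through the second channel can only destroy information about X, so the second value never exceeds the first, which is the data-processing inequality in action.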
More concretely, we adopt a generic agent-environment interface where, at each time period t, the agent executes an action A_t ∈ A within an environment E ∈ Θ that results in an associated next observation O_t ∈ O. This sequential interaction between agent and environment yields an associated history at each timestep t, H_t = (O_0, A_1, O_1, . . ., A_{t−1}, O_{t−1}) ∈ H, representing the action-observation sequence available to the agent upon making its selection of its current action A_t. (At the very first timestep, the initial history consists only of an initial observation: H_1 = (O_0).) We may characterize the overall environment as E = ⟨A, O, ρ⟩ ∈ Θ, containing the action set A, observation set O, and observation function ρ : H × A → ∆(O), prescribing the distribution over next observations given the current history and action selection: ρ(o | H_t, A_t) = P(O_t = o | H_t, A_t). An agent's policy π : H → ∆(A) encapsulates the relationship between the history encountered in each timestep H_t and the executed action A_t, such that π_t(a) = P(A_t = a | H_t) assigns a probability to each action a ∈ A given the history. Preferences across histories are expressed via a known reward function r : H × A × O → R, so that an agent enjoys a reward R_t = r(H_t, A_t, O_t) on each timestep. Given any finite time horizon T ∈ N, the accumulation of rewards provides a notion of return, Σ_{t=1}^T R_t. To develop preferences over behaviors and to help facilitate action selection, it is often natural to associate with each policy π a corresponding expected return or action-value function, Q^π(H_t, a) = E [ Σ_{k=t}^T R_k | H_t, A_t = a ], where the expectation integrates over the randomness in the policy π as well as the observation function ρ. Traditionally, an agent designer focuses on agents that strive to achieve the optimal value within the confines of some policy class Π: Q*(H_t, a) = sup_{π ∈ Π} Q^π(H_t, a). The optimal policy then follows by acting greedily with respect to this optimal value function: π*(H_t) ∈ argmax_{a ∈ A} Q*(H_t, a).

Observe that when rewards and the distribution of the next observation O_t depend only on the current observation-action pair (O_{t−1}, A_t), rather than the full history H_t, we recover the traditional Markov Decision Process [Bellman, 1957, Puterman, 1994] studied throughout the reinforcement-learning literature [Sutton and Barto, 1998]. Alternatively, when these quantities rely solely upon the most recent action A_t, we recover the traditional multi-armed bandit [Lai and Robbins, 1985, Bubeck and Cesa-Bianchi, 2012, Lattimore and Szepesvári, 2020]. Regardless of precisely which of these two problem settings one encounters, a default presumption throughout both literatures is that an agent should always act in pursuit of learning an optimal policy π*. Bayesian decision-making agents [Bellman and Kalaba, 1959, Duff, 2002, Ghavamzadeh et al., 2015] aim to achieve this by explicitly representing and maintaining the agent's current knowledge of the environment, recognizing that it is the uncertainty in the underlying environment E that drives uncertainty in optimal behavior π*. A Bayesian learner reflects this uncertainty through conditional probabilities η_t(e) ≜ P(E = e | H_t), ∀e ∈ Θ, aimed at estimating the underlying environment. Under the prior distribution η_1(E), the entropy of this random variable E implies that a total of H_1(E) bits quantify all of the information needed for identifying the environment and, as a result, synthesizing optimal behavior. For sufficiently rich and complex environments, however, H_1(E) can become prohibitively large or even infinite, making the pursuit of an optimal policy entirely intractable.
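The generic interface above can be sketched as a simple interaction loop. The Bernoulli bandit environment, the uniform placeholder policy, and all constants below are illustrative stand-ins of ours for the abstract quantities ρ, π, and r.

```python
import random

ACTIONS = [0, 1]

def rho(history, action):
    """Observation function: here a Bernoulli bandit, so history-free."""
    success_prob = [0.3, 0.7][action]
    return 1 if random.random() < success_prob else 0

def r(history, action, observation):
    """Known reward function over (history, action, observation)."""
    return float(observation)

def pi(history):
    """Placeholder policy: uniform over actions (deliberately not optimal)."""
    return random.choice(ACTIONS)

random.seed(0)
history = (0,)            # H_1 = (O_0): initial observation only
ret = 0.0
for t in range(100):      # finite horizon T = 100
    a = pi(history)
    o = rho(history, a)
    ret += r(history, a, o)          # accumulate the return, sum of R_t
    history = history + (a, o)       # H_{t+1} extends H_t with (A_t, O_t)

print(ret)
```

Swapping the placeholder policy for one that conditions on the accumulated history (e.g., the Thompson Sampling sketch earlier) is exactly the step that turns this interface into a learning agent.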
The core insight of this work is recognizing that the delicate balance between the amount of information processing that goes into a decision (cognitive load) and the quality of that decision (utility) can be aptly characterized through rate-distortion theory, providing a formal framework for capacity-limited decision making. At each time period t ∈ [T], the agent's current knowledge about the underlying environment is fully specified by the distribution η_t. Whereas the standard Thompson Sampling (TS) agent will, by default, attempt to use this knowledge to identify an optimal action A∗ ∈ arg max_{a∈A} Q∗(H_t, a), a capacity-limited agent may not be capable of operationalizing all bits of information from its beliefs about the world to discern a current action A_t.
Rate-distortion theory [Shannon, 1959, Berger, 1971] is a branch of information theory [Shannon, 1948, Cover and Thomas, 2012] dedicated to the study of lossy compression problems, which must necessarily optimize for a balance between the raw amount of information retained in the compression and the utility of those bits for some downstream task; a classic example from the information-theory literature is an image that must be compressed down to a smaller resolution (fewer bits of information) without overly compromising the visual acuity of the content (bounded distortion). A capacity-limited agent takes its current knowledge η_t as the information source to be compressed in each time period t ∈ [T]. The lossy compression mechanism or channel itself is simply a conditional probability distribution p(A_t | E) that maps a potential realization of the unknown environment E ∈ Θ to a corresponding distribution over actions for the current time period. Naturally, the amount of information used from the environment to identify this action is precisely quantified by the mutual information between these two random variables, I_t(E; A_t), where the t subscript captures the dependence of the agent's beliefs η_t on the current random history H_t.
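When both the environment and the action are discrete, the quantity I_t(E; A_t) can be computed directly from the joint distribution implied by the beliefs and the channel. The helper below is a hypothetical illustration of the definition only, not code from the paper's experiments.

```python
import math

def mutual_information(joint):
    """I(E; A) in bits from a joint distribution given as a dict
    {(e, a): probability}. Purely illustrative of the quantity I_t(E; A_t)."""
    p_e, p_a = {}, {}
    for (e, a), p in joint.items():
        p_e[e] = p_e.get(e, 0.0) + p
        p_a[a] = p_a.get(a, 0.0) + p
    bits = 0.0
    for (e, a), p in joint.items():
        if p > 0:
            bits += p * math.log2(p / (p_e[e] * p_a[a]))
    return bits

# A channel that copies the environment into the action uses H(E) = 1 bit...
print(mutual_information({("e0", "a0"): 0.5, ("e1", "a1"): 0.5}))
# ...while a channel that ignores the environment uses 0 bits.
print(mutual_information({("e0", "a0"): 0.25, ("e0", "a1"): 0.25,
                          ("e1", "a0"): 0.25, ("e1", "a1"): 0.25}))
```

The two extremes printed above bracket the spectrum a capacity-limited agent must navigate: fully environment-determined actions versus actions chosen independently of any beliefs.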
Aside from identifying the data to be compressed, a lossy compression problem also requires the specification of a distortion function d : A × Θ → R_{≥0}, which helps distinguish between useful and irrelevant bits of information contained in the environment. Intuitively, environment-action pairs yielding high distortion are commensurate with achieving high loss and, naturally, a good lossy compression mechanism is one that avoids large expected distortion, E_t[d(A_t, E)]. A bounded decision maker with limited information processing can only hope to make near-optimal decisions; thus, a natural way to quantify distortion is the expected performance shortfall between an optimal decision and the chosen one. Putting these two pieces together, the fundamental limit of lossy compression is given by the rate-distortion function R_t(D) = inf_{p(A_t | E)} { I_t(E; A_t) : E_t[d(A_t, E)] ≤ D }, where we denote the conditional distribution achieving this infimum as δ_t(Ã_t | E), with Ã_t the random variable representing the target action that achieves the rate-distortion limit.
The distortion threshold D ∈ R_{≥0} input to the rate-distortion function is a free parameter, specified by an agent designer, that communicates a preference for the minimization of rate versus the minimization of distortion. This aligns with the perspective that a decision-making agent has a certain degree of tolerance for sub-optimal behavior and, with that degree of error in mind, chooses among the viable near-optimal solutions that incur the least cognitive load to compute actions from beliefs about the world. If one is willing to tolerate significant errors and large amounts of regret, then decision-making should be far simpler, in the sense that very few bits of information from beliefs about the environment are needed to select an action. Conversely, as prioritizing near-optimal behavior becomes more important, each decision requires greater cognitive effort, as measured by the amount of information utilized to compute actions from current beliefs. The power of rate-distortion theory lies, in part, in its ability to give precise mathematical form to this intuitive narrative, as demonstrated by Fact 1.
Fact 1 (Lemma 10.4.1 [Cover and Thomas, 2012]). For all t ∈ [T] and any D > 0, the rate-distortion function R_t(D) is a non-negative, convex, and non-increasing function in its argument.
In particular, Fact 1 establishes the relationship R_t(D) ≤ R_t(0) ≤ H_t(A∗) for any D > 0, confirming that the amount of information used to determine Ã_t is less than what would be needed to identify an optimal action A∗. Alternatively, in lieu of presuming that an agent is cognizant of what constitutes a "good enough" solution, one may instead adopt the perspective that an agent is made aware of its capacity limitations. In this context, agent capacity refers to a bound R ∈ R_{≥0} on the number of bits the agent may operationalize from its beliefs about the world in order to discern its current action selection A_t. Conveniently, the information-theoretically optimal solution is characterized by the Shannon distortion-rate function D_t(R) = inf_{p(A_t | E)} { E_t[d(A_t, E)] : I_t(E; A_t) ≤ R }. Natural limitations on a decision-making agent's time or computational resources can be translated and expressed as limitations on the sheer amount of information that can possibly be leveraged from beliefs about the environment E to execute actions; the distortion-rate function D_t(R) quantifies the fundamental limit on the minimum expected distortion an agent should expect under such a capacity constraint. It is often convenient that the rate-distortion and distortion-rate functions are inverses of one another, such that R_t(D_t(R)) = R.
In this section, we have provided a mathematical formulation for how a capacity-limited agent might go about action selection in each time period, limiting overall cognitive load in an information-theoretically optimal fashion while also leveraging as much of its environmental knowledge as possible to behave with limited sub-optimality. To elucidate the value of this formulation, we dedicate the following sections to simpler and more tractable problem settings that allow for theoretical as well as empirical analysis.

Multi-Armed Bandit
In this section, we begin with the formal specification of a multi-armed bandit problem [Lai and Robbins, 1985, Bubeck and Cesa-Bianchi, 2012, Lattimore and Szepesvári, 2020] before presenting Thompson Sampling as a quintessential algorithm for identifying optimal actions. We then present a corresponding generalization of Thompson Sampling that takes an agent's capacity limitations into account.

Problem Formulation
We obtain a bandit environment as a special case of the problem formulation given in Section 3.2 by treating the initial observation as null, O_0 = ∅, while each subsequent observation denotes a reward signal drawn from an observation function ρ : A → ∆(R) that depends only on the most recent action selection A_t and not the current history H_t = (A_1, R_1, A_2, R_2, . . ., A_{t−1}, R_{t−1}). While the action set A and total time periods T ∈ N are known to the agent, the underlying reward function ρ is unknown and, consequently, the environment E is itself a random variable. Denoting the mean reward of each action as ρ̄(a) = E[R_t | E, A_t = a], ∀a ∈ A, we define an optimal action A∗ ∈ arg max_{a∈A} ρ̄(a) as one achieving the maximal mean reward, denoted R∗ = ρ̄(A∗); both are random variables due to their dependence on E.
Observe that, if the agent knew the underlying environment E exactly, there would be no uncertainty in the optimal action A∗; consequently, it is the agent's epistemic uncertainty [Der Kiureghian and Ditlevsen, 2009] in E that drives uncertainty in A∗ and, since learning is the process of acquiring information, an agent explores to learn about the environment and reduce this uncertainty. As there is only a null history at the start, H_1 = ∅, initial uncertainty in the environment E ∈ Θ is given by the prior probabilities η_1 ∈ ∆(Θ) while, as time unfolds, updated knowledge of the environment is reflected by posterior probabilities η_t ∈ ∆(Θ).
For a fixed choice of environment E, the performance of an agent is assessed through the regret of its policies over T time periods, Regret(T, π, E) = E[ ∑_{t=1}^T (R∗ − ρ̄(A_t)) | E ]. Since the environment is itself a random quantity, we integrate over this randomness with respect to the prior η_1(E) to arrive at the Bayesian regret: BayesRegret(T, π) = E[Regret(T, π, E)]. The customary goal within a multi-armed bandit problem is to identify an optimal action A∗, and provably-efficient bandit learning emerges from algorithms whose Bayesian regret can be bounded from above. In the next section, we review one such algorithm that is widely used in practice before motivating consideration of satisficing solutions for bandit problems.
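A small Monte Carlo sketch can make the Bayesian regret concrete: draw environments from the prior, run a policy in each, and average the per-period shortfall against the optimal arm. All names below are illustrative assumptions, and the policy being evaluated (pull arm 0 forever) is chosen only because its Bayesian regret has a closed form to check against.

```python
import random

def bayes_regret(policy, prior_sample, T, n_envs, seed=0):
    """Monte Carlo estimate of Bayesian regret: draw environments E ~ eta_1,
    run the policy for T periods in each, and accumulate the per-period
    expected shortfall versus the optimal arm. A generic sketch, not tied
    to any particular algorithm from the text."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_envs):
        means = prior_sample(rng)            # one realization of E
        best = max(means)
        history = []
        for _ in range(T):
            arm = policy(history, len(means), rng)
            r = 1 if rng.random() < means[arm] else 0
            history.append((arm, r))
            total += best - means[arm]       # expected per-period regret
    return total / n_envs

# Uniform prior over two Bernoulli arms, evaluated on a policy that ignores
# history and always pulls arm 0. Per-period Bayesian regret should approach
# E[max(U1, U2) - U1] = 1/6 for independent uniform arm means.
est = bayes_regret(lambda h, n, rng: 0,
                   lambda rng: [rng.random(), rng.random()],
                   T=100, n_envs=2000)
print(0.1 < est / 100 < 0.2)
```

Any learning algorithm (TS included) can be dropped in as `policy`, making this harness a simple way to compare Bayesian regret across agents.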

Thompson Sampling & Satisficing
A standard choice of algorithm for addressing multi-armed bandit problems is Thompson Sampling (TS) [Thompson, 1933, Russo et al., 2018], which has been well-studied both theoretically [Auer et al., 2002, Agrawal and Goyal, 2012, 2013, Bubeck and Liu, 2013, Russo and Van Roy, 2016] and empirically [Granmo, 2010, Scott, 2010, Chapelle and Li, 2011, Gopalan et al., 2014]. For convenience, we provide generic pseudocode for TS as Algorithm 3, whereas more granular classes of bandit problems (Bernoulli bandits or Gaussian bandits, for example) can often lead to more computationally explicit versions of TS that leverage special structure like conjugate priors (see [Russo et al., 2018] for more detailed implementations). In each time period t ∈ [T], a TS agent proceeds by drawing one sample θ_t ∼ η_t(E), representing a statistically plausible hypothesis about the underlying environment based on the agent's current posterior beliefs from observing the history H_t; the agent then proceeds as if this sample dictates reality and acts optimally with respect to it, drawing the action A_t to execute this time period uniformly at random among the optimal actions for this realization E = θ_t of the environment. Executing actions in this manner recovers the hallmark probability-matching principle [Scott, 2010, Russo and Van Roy, 2016] of TS whereby, in each time period t ∈ [T], the agent selects actions according to their (posterior) probability of being optimal given everything observed up to this point in H_t or, more formally, π_t(a) = p_t(A∗ = a), ∀a ∈ A.
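The sample-then-act-greedily loop just described is short enough to write out in full. The sketch below is a minimal Bernoulli-bandit instantiation with Beta(1, 1) priors (an illustrative assumption), not the pseudocode of Algorithm 3 itself.

```python
import random

def thompson_sampling(true_means, T, seed=0):
    """Minimal TS for a Bernoulli bandit with Beta(1, 1) priors per arm:
    sample one plausible environment from the posterior, act optimally
    with respect to the sample, then update beliefs. A sketch of the
    loop described in the text, not the authors' implementation."""
    rng = random.Random(seed)
    n = len(true_means)
    alpha, beta = [1.0] * n, [1.0] * n
    total_reward = 0
    for _ in range(T):
        # Draw one statistically plausible hypothesis theta_t ~ eta_t(E)...
        theta = [rng.betavariate(alpha[a], beta[a]) for a in range(n)]
        # ...and act as if the sample dictates reality.
        arm = max(range(n), key=lambda a: theta[a])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        alpha[arm] += reward        # conjugate posterior update
        beta[arm] += 1 - reward
    return total_reward

# With one clearly best arm (mean 0.9), TS concentrates on it quickly and
# should earn close to 0.9 * T total reward over a long horizon.
print(thompson_sampling([0.2, 0.9, 0.5], T=2000))
```

Note how probability matching emerges implicitly: the chance that an arm maximizes the posterior sample is exactly its posterior probability of being optimal.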
Aside from admitting a simple, computationally efficient procedure for learning optimal actions A∗ over time, TS also boasts rigorous theoretical guarantees. While the classic Gittins indices [Gittins, 1979, Gittins et al., 2011] yield the Bayes-optimal policy, they are limited to problems of modest size and, for our finite-horizon setting, are computationally intractable. Nevertheless, Russo and Van Roy [2016] offer a rigorous corroborating analysis of TS that, for our setting, yields an information-theoretic Bayesian regret bound: BayesRegret(T, π^TS) ≤ √( (|A|/2) · H_1(A∗) · T ) ≤ √( (|A|/2) · log(|A|) · T ). This result communicates that the overall Bayesian regret of TS is governed by the entropy over the optimal arm A∗ under the prior η_1(E). When an agent designer has strong prior knowledge about the optimal arm, initializing TS accordingly results in a very small upper bound on Bayesian regret; conversely, in the case of an uninformative prior, the worst-case entropy over the optimal arm is equal to log(|A|) and the second inequality is tight, which still matches the best-known regret lower bound Ω(√(|A| T)) for multi-armed bandit problems up to logarithmic factors [Bubeck and Liu, 2013]. Naturally, a core premise of this work is to consider decision-making problems where an agent's inherent and unavoidable capacity limitations drastically impact the tractability of learning optimal actions. While there are other classes of algorithms for handling multi-armed bandit problems [Auer et al., 2002, Ryzhov et al., 2012, Powell and Ryzhov, 2012, Russo and Van Roy, 2014, 2018a], TS serves as an exemplary representative among them, as it relentlessly pursues the optimal action A∗ by design. Consider a human decision maker faced with a bandit problem containing 1,000,000,000 (one billion) arms: does one genuinely expect any individual to successfully identify A∗? Similarly, the final inequality in the Bayesian regret bound above informs us that the performance shortfall of TS will increase as the number of actions tends to ∞,
quantifying the folly of pursuing A∗ as the agent continuously experiments with untested but potentially optimal actions.

Algorithm 3 Thompson Sampling (TS) [Thompson, 1933]
Algorithm 4 Satisficing TS [Russo and Van Roy, 2022]

Satisficing is a longstanding, well-studied idea about how to understand resource-limited cognition [Simon, 1955, 1956, Newell et al., 1958, Newell and Simon, 1972, Simon, 1982] in which an agent settles for the first recovered solution deemed to be "good enough," for some suitable notion of goodness. Inspired by this idea, Russo and Van Roy [2018b, 2022] present the Satisficing Thompson Sampling (STS) algorithm, which we present as Algorithm 4, to address the shortcomings of algorithms like TS that relentlessly pursue A∗. STS employs a minimal adjustment to the original TS algorithm through a threshold parameter ε ≥ 0, which an agent designer may use to communicate that identifying an ε-optimal action would be sufficient for their needs. The use of a minimum over all such ε-optimal actions, instead of a uniform distribution, reflects the idea of settling for the first solution deemed to be "good enough" according to ε. Naturally, the intuition follows that as ε increases and the STS agent becomes more permissive, such ε-optimal actions can be found in potentially far fewer time periods than what is needed to obtain A∗ through TS. If we define a random variable analogous to A∗ as A_ε = min{a ∈ A : ρ̄(A∗) − ρ̄(a) ≤ ε}, taking the minimum under some fixed ordering of the actions, then STS simply employs probability matching as π_t(a) = p_t(A_ε = a), ∀a ∈ A and, as ε ↓ 0, recovers TS as a special case. Russo and Van Roy [2022] go on to prove a complementary information-theoretic regret bound for STS that depends on I_1(E; A_ε), rather than the entropy of A∗, H_1(A∗).
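The difference between TS and STS amounts to a single line in the action-selection step: rather than maximizing over the sampled environment, the agent settles for the first sampled-ε-optimal arm under a fixed ordering. The sketch below illustrates one such selection step; the concentrated Beta parameters are hypothetical values chosen so the behavior is easy to see, not data from the paper.

```python
import random

def satisficing_ts_step(alpha, beta, epsilon, rng):
    """One action selection of a Satisficing TS sketch (cf. Algorithm 4):
    sample a plausible environment, then settle for the FIRST action,
    under the index ordering, whose sampled mean is within epsilon of the
    sampled optimum. epsilon = 0 recovers ordinary TS (up to tie-breaking)."""
    theta = [rng.betavariate(a, b) for a, b in zip(alpha, beta)]
    best = max(theta)
    # min over epsilon-optimal arms = settle for the first "good enough" one.
    return min(a for a in range(len(theta)) if theta[a] >= best - epsilon)

# Strongly concentrated (hypothetical) beliefs: arm 2 looks best (~0.9),
# arm 0 is nearly as good (~0.85), arm 1 is poor (~0.1).
alpha = [8500.0, 1000.0, 9000.0]
beta = [1500.0, 9000.0, 1000.0]
rng = random.Random(1)
# A permissive epsilon lets the agent settle for arm 0; a strict epsilon
# forces it to insist on arm 2.
print(satisficing_ts_step(alpha, beta, epsilon=0.2, rng=rng))
print(satisficing_ts_step(alpha, beta, epsilon=0.001, rng=rng))
```

The permissive agent never needs to resolve which of arms 0 and 2 is truly better, which is exactly the informational saving that the I_1(E; A_ε) bound formalizes.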
While it is clear that STS embodies the principle of satisficing for a capacity-limited decision maker, the action A_ε targeted by an STS agent in place of A∗ only achieves some arbitrary and unspecified trade-off between the simplicity of what the agent sets out to learn and the utility of the resulting solution, as ε varies. This is in contrast to a resource-rational approach [Anderson, 1990, Griffiths et al., 2015], which aims instead to strike the best trade-off between these two competing interests. One interpretation of the next section is that we provide a mathematically precise characterization of such resource-rational solutions through rate-distortion theory.

Rate-Distortion Theory for Target Actions
To see how the rate-distortion function (Equation 4) fits into the preceding discussion of Thompson Sampling and STS, Arumugam and Van Roy [2021a] replace the A_t of Equation 4 with a target action Ã_t. This notion of a target action is based on the observation that A∗ = f(E) is merely a statistic of the environment whose computation is determined by some abstract function f. It follows that an alternative surrogate action an agent may prioritize during learning is some other computable statistic of the environment, one embodying a trade-off between two key properties: (1) ease of learnability and (2) bounded sub-optimality or performance shortfall relative to A∗.
The previous section already gives two concrete examples of potential target actions, A∗ and A_ε, where the former represents an extreme point on the spectrum of potential learning targets: one that demands a potentially intractable amount of information to identify but comes with no sub-optimality. At the other end of the spectrum, there is simply the uniform random action A ∼ Uniform(A), which requires no learning or sampling on the part of the agent but, in general, will likely lead to considerably large performance shortfall relative to an optimal solution. While, for any fixed ε > 0, A_ε lives in between these extremes, it also suffers from two shortcomings of its own. Firstly, by virtue of satisficing and a willingness to settle for anything that is "good enough," it is unclear how well A_ε balances between the two aforementioned desiderata. In particular, the parameterization of A_ε around ε as an upper bound on expected regret suggests that there could exist an even simpler target action that is also ε-optimal but easier to learn, insofar as it requires the agent to obtain fewer bits of information from the environment. Secondly, from a computational perspective, an STS agent striving to learn A_ε (just as a TS agent does for learning A∗) computes the same statistic repeatedly across all T time periods. Meanwhile, with every step of interaction, the agent's knowledge of the environment E is further refined, potentially changing the outlook on what can be tractably learned in subsequent time periods. This suggests that one stands to gain considerably by designing agents that adapt their learning target as knowledge of the environment accumulates, rather than iterating on the same static computation.
Recall from Equation 4 that a target action Ã_t following the distribution δ_t(Ã_t | E) achieves the rate-distortion limit R_t(D). In order to satisfy the second desideratum of bounded performance shortfall for learning targets and to facilitate a regret analysis, Arumugam and Van Roy [2021a] define the distortion function as the expected per-period performance shortfall of an action under a realization of the environment, d(a, e) = E[R∗ − ρ̄(a) | E = e]. While having bounded expected distortion satisfies our second criterion for a learning target, the fact that Ã_t requires fewer bits of information to learn is immediately given by properties of the rate-distortion function R_t(D) itself, through Fact 1.
Algorithm 5 Rate-Distortion Thompson Sampling (RDTS)

Abstractly, one could consider a procedure like Algorithm 5 that, for an input distortion threshold D, identifies the corresponding target action Ã_t of Equation 6 and then performs probability matching with respect to it. The following theorem provides an information-theoretic Bayesian regret bound that generalizes the performance guarantee of traditional TS by Russo and Van Roy [2016], while also providing a more direct connection to the rate-distortion function than Theorem 3 of Arumugam and Van Roy [2021a], using proof techniques developed by Arumugam and Van Roy [2022].
Theorem 1. For any D ≥ 0, BayesRegret(T, π^RDTS) ≤ √( (|A|/2) · R_1(D) · T ) + DT.

When D = 0 and the agent designer is not willing to tolerate any sub-optimality relative to A∗, Fact 1 allows this bound to recover the guarantee of TS exactly. At the other extreme, increasing D to 1 (recall that mean rewards are bounded in [0, 1]) allows R_1(D) = 0, and the agent has nothing to learn from the environment but also suffers the linear regret of T. Naturally, the "sweet spot" is to entertain intermediate values of D, where smaller values will lead to larger amounts of information R_1(D) needed to identify the corresponding target action, but not as many bits as learning A∗ necessarily entails.
Just as in the previous subsection, it may often be sensible to consider a scenario where an agent designer is unable to precisely specify a reasonable threshold on expected distortion D and can, instead, only characterize a limit R > 0 on the amount of information an agent may acquire from the environment. One might interpret this as a notion of capacity, one which differs quite fundamentally from other notions examined in prior work [Lai and Gershman, 2021, Gershman, 2021] (see Section 4 for a more in-depth comparison). For this, we may consider the distortion-rate function D_t(R), which quantifies the fundamental limit of lossy compression subject to a rate constraint, rather than the distortion threshold of R_t(D). Similar to the rate-distortion function, the distortion-rate function also adheres to the three properties outlined in Fact 1. More importantly, it is the inverse of the rate-distortion function, such that R_t(D_t(R)) = R for any t ∈ [T] and R > 0. Consequently, by selecting D = D_1(R) as input to Algorithm 5, we immediately recover the following corollary to Theorem 1, which provides an information-theoretic Bayesian regret bound in terms of agent capacity, rather than a threshold on expected distortion.
Corollary 1. For any R > 0, BayesRegret(T, π^RDTS) ≤ √( (|A|/2) · R · T ) + D_1(R) · T.

The semantics of this performance guarantee are identical to those of Theorem 1, only now expressed explicitly through the agent's capacity R. Namely, when the agent has no capacity for learning, R = 0, we have D_1(R) = 1 and the agent incurs the linear regret of T. Conversely, with sufficient capacity R = H_1(A∗), D_1(R) = 0 and we recover the regret bound of Thompson Sampling. Intermediate values of agent capacity will result in an agent that fully utilizes its capacity to acquire no more than R bits of information from the environment, resulting in the minimum possible expected distortion, quantified by D_1(R).
While a non-technical reader of this section should remain unencumbered by the mathematical minutiae of these theoretical results, the salient takeaway is an affirmation that rate-distortion theory not only provides an intuitive and mathematically precise articulation of capacity-limited Bayesian decision-making in multi-armed bandits, but also facilitates the design of a complementary algorithm for statistically efficient learning. The next section proceeds to illustrate how these theoretical results hold up in practice.

Experiments
In order to make the algorithm of the previous section (Algorithm 5) amenable to practical implementation, Arumugam and Van Roy [2021a] look to the classic Blahut-Arimoto algorithm [Blahut, 1972, Arimoto, 1972]. Just as TS and STS perform probability matching with respect to A∗ and A_ε in each time period, respectively, the Blahut-Arimoto STS (BLASTS) algorithm (presented as Algorithm 2, where one should recall that reward maximization and regret minimization are equivalent) conducts probability matching with respect to Ã_t in each time period to determine the policy: π_t(a) = p_t(Ã_t = a), ∀a ∈ A. For two discrete random variables representing an uncompressed information source and the resulting lossy compression, the Blahut-Arimoto algorithm computes the channel that achieves the rate-distortion limit (that is, achieves the infimum in Equation 6) by iterating alternating update equations until convergence. More concretely, the algorithm is derived by optimizing the Lagrangian of the constrained optimization problem [Boyd and Vandenberghe, 2004] defining the rate-distortion function, which is itself known to be a convex optimization problem [Chiang and Boyd, 2004]. We refer readers to [Arumugam and Van Roy, 2021a] for precise computational details of the Blahut-Arimoto algorithm for solving the rate-distortion function R_t(D) that yields Ã_t, as well as [Arumugam and Van Roy, 2021b] for details on the exact theoretical derivation.
One salient detail that emerges from using the Blahut-Arimoto algorithm in this manner is that an agent designer no longer specifies a distortion threshold D ∈ R_{≥0} as input but, instead, provides a value of the Lagrange multiplier β ∈ R_{≥0}; lower values of β communicate a preference for rate minimization, whereas larger values of β prioritize distortion minimization. To each value of β there is an associated distortion threshold D, as β represents the desired slope achieved along the corresponding rate-distortion curve [Blahut, 1972, Csiszár, 1974a,b]. As, in practice, η_t(E) tends to be a continuous distribution, Arumugam and Van Roy [2021a] induce a discrete information source by drawing a sufficiently large number of Monte-Carlo samples and leveraging the resulting empirical distribution, which turns out to be a theoretically sound estimator of the true rate-distortion function [Harrison and Kontoyiannis, 2008, Palaiyanur and Sahai, 2008].
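The alternating updates at the heart of Blahut-Arimoto are compact enough to sketch directly. The version below is a hedged illustration, assuming a discrete source built from two posterior samples and a regret-style distortion matrix; it is not the BLASTS implementation of Arumugam and Van Roy [2021a], and all names are ours.

```python
import math

def blahut_arimoto(p_env, distortion, beta_mult, n_actions, iters=200):
    """Blahut-Arimoto sketch: given a discrete source p_env over environment
    samples, a distortion matrix distortion[e][a] (e.g., expected regret of
    action a in environment e), and a Lagrange multiplier beta_mult trading
    rate against distortion, alternate the two standard updates until the
    channel p(a | e) converges."""
    n_env = len(p_env)
    q = [1.0 / n_actions] * n_actions        # marginal over actions
    channel = [[1.0 / n_actions] * n_actions for _ in range(n_env)]
    for _ in range(iters):
        # Channel update: tilt the action marginal by exp(-beta * distortion).
        for e in range(n_env):
            w = [q[a] * math.exp(-beta_mult * distortion[e][a])
                 for a in range(n_actions)]
            z = sum(w)
            channel[e] = [x / z for x in w]
        # Marginal update: average the channel under the source.
        q = [sum(p_env[e] * channel[e][a] for e in range(n_env))
             for a in range(n_actions)]
    return channel

# Two equally likely environment samples; in each, a different arm is optimal.
d = [[0.0, 0.5], [0.5, 0.0]]  # regret of each action in each environment
# Large beta: distortion dominates, so each environment maps to its own
# optimal arm (high rate). Small beta: rate dominates, so the channel
# ignores the environment entirely (zero rate).
sharp = blahut_arimoto([0.5, 0.5], d, beta_mult=50.0, n_actions=2)
blunt = blahut_arimoto([0.5, 0.5], d, beta_mult=0.0, n_actions=2)
print(round(sharp[0][0], 3), round(blunt[0][0], 3))
```

Sweeping `beta_mult` between these extremes traces out the rate-distortion curve, mirroring the role of β in the BLASTS experiments described next.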
As these target actions {Ã_t}_{t∈[T]} are born out of a need to balance the simplicity and utility of what an agent aims to learn from its interactions within the environment, we can decompose our empirical results into those affirming each of these two criteria in isolation. Since assessing utility or, equivalently, performance shortfall is a standard evaluation metric used throughout the literature, we begin there and offer regret curves in Figure 4 for Bernoulli and Gaussian bandits with 10 independent arms (matching, for example, the empirical evaluation of Russo and Van Roy [2018a]); recall that the former implies Bernoulli rewards R_t ∼ Bernoulli(ρ̄(A_t)), while the latter yields Gaussian rewards with unit variance, R_t ∼ N(ρ̄(A_t), 1). We evaluate TS and BLASTS agents where, for the latter, the Lagrange multiplier hyperparameter β ∈ R_{≥0} is fixed and tested over a broad range of values. All agents begin with a Beta(1, 1) prior for each action of the Bernoulli bandit and a N(0, 1) prior for the Gaussian bandit. For each individual agent, the cumulative regret incurred is plotted over each time period t ∈ [T].
Recalling that our distortion function is directly connected to the expected regret of the BLASTS agent, we observe that smaller values of β so aggressively prioritize rate minimization that the resulting agents incur linear regret; in both bandit environments, this trend persists for all values β ≤ 100. Notably, as β ↑ ∞, the resulting agents yield performance more similar to regular TS. This observation aligns with expectations since, for a sufficiently large value of β, the Blahut-Arimoto algorithm will return a channel that only places probability mass on the distortion-minimizing actions, which are indeed the optimal actions A∗ for each realization of the environment. A notable auxiliary finding in these results, also seen in the original experiments of Arumugam and Van Roy [2021a], is that intermediate values of β manage to yield regret curves converging towards the optimal policy more efficiently than TS; this is, of course, only possible when the distortion threshold D implied by a particular setting of β falls below the smallest action gap of the bandit problem. While the previous experiments confirm that BLASTS can be used to instantiate a broad spectrum of agents that target actions of varying utilities, it is difficult to assess the simplicity of these targets and discern whether or not less-performant target actions can in fact be identified more quickly than near-optimal ones. As a starting point, one might begin with the agent's prior over the environment and compute I_1(E; Ã_t) to quantify how much information each agent's initial learning target requires from the environment a priori. In Figure 5, we compare this to I_1(E; A_ε) and sweep over the respective β and ε values to generate the resulting rate-distortion curves for Bernoulli and Gaussian bandits with 1000 independent arms. The results corroborate the earlier discussion of how an STS agent engages with a learning target A_ε that yields some trade-off between ease of learnability and performance, but not necessarily the best trade-off. In contrast, since R_1(D) ≈ I_1(E; Ã_t) (where the approximation is due to sampling), we expect and do indeed recover a better trade-off between rate and performance using the Blahut-Arimoto algorithm. To verify that target actions at the lower end of the spectrum (lower rate and higher distortion) can indeed be learned more quickly, we plot the rate of the channel δ_t(Ã_t | E) computed by BLASTS across time periods, as shown in Figure 6; for TS, we additionally plot the entropy over the optimal action H_t(A∗) as time passes, and observe that smaller values of β lead to learning targets with smaller initial rates that decay much more quickly than their counterparts at larger values of β. Again, as β ↑ ∞, these rate curves concentrate around that of regular TS. Overall, this section has provided an overview of prior work that moves past the standard goal of finding optimal actions A∗ in multi-armed bandit problems and towards capacity-limited decision-making agents. Extending beyond the empirical findings observed in these prior works, we provide additional experiments (see Figure 6) showing how the minimization of rate leads to target actions that are simpler to learn, allowing an agent to curtail its interactions with the environment in fewer time periods and respect limitations on time and computational resources. Crucially, rate-distortion theory emerges as a natural conduit for identifying target actions that balance respecting an agent's limits while still being sufficiently useful for the task at hand. In the next section, we extend this line of thinking to the episodic reinforcement-learning problem and survey recent theoretical results in this space that, analogous to Theorem 1 and Corollary 1, set the stage for subsequent empirical investigations into their practical veracity for both biological and artificial decision-making agents.

Episodic Reinforcement Learning
In this section, we again specialize the general problem formulation of Section 3.2, this time by introducing the assumption of episodicity commonly made throughout the reinforcement-learning literature. Just as in the preceding section, Thompson Sampling will reappear as a quintessential algorithm for addressing exploration, under an additional assumption that planning across any world model is always computationally feasible. Under this caveat, we survey existing theoretical results which accommodate capacity-limited agents via rate-distortion theory.

Problem Formulation
We formulate a sequential decision-making problem as an episodic, finite-horizon Markov Decision Process (MDP) [Bellman, 1957, Puterman, 1994] defined by M = ⟨S, A, U, T, β, H⟩. Here S denotes a set of states, A is a set of actions, U : S × A → [0, 1] is a deterministic reward or utility function providing evaluative feedback signals, T : S × A → ∆(S) is a transition function prescribing distributions over next states, β ∈ ∆(S) is an initial state distribution, and H ∈ N is the maximum episode length or horizon. Within each one of K ∈ N episodes, the agent acts for exactly H steps beginning with an initial state s_1 ∼ β. For each timestep h ∈ [H], the agent observes the current state s_h ∈ S, selects an action a_h ∼ π_h(· | s_h), and transitions to the next state s_{h+1} ∼ T(· | s_h, a_h). A stationary, stochastic policy for timestep h ∈ [H], π_h : S → ∆(A), encodes behavior as a mapping from states to distributions over actions. Letting Π ≜ {S → ∆(A)} denote the class of all stationary, stochastic policies, a non-stationary policy π = (π_1, . . ., π_H) ∈ Π^H is a collection of exactly H stationary, stochastic policies, whose overall performance in any MDP M at timestep h ∈ [H], when starting at state s ∈ S and taking action a ∈ A, is assessed by its associated action-value function Q^π_{M,h}(s, a) = E[ ∑_{h'=h}^H U(s_{h'}, a_{h'}) | s_h = s, a_h = a ], where the expectation integrates over randomness in the action selections and transition dynamics. Taking the corresponding value function as V^π_{M,h}(s) = E_{a∼π_h(·|s)}[Q^π_{M,h}(s, a)], we define the optimal policy π∗ = (π∗_1, π∗_2, . . ., π∗_H) as achieving supremal value V∗_{M,h}(s) = sup_{π∈Π^H} V^π_{M,h}(s) for all s ∈ S and h ∈ [H]. Let τ_k = (s_1, a_1, . . ., s_H, a_H, s_{H+1}) be the random variable denoting the trajectory experienced by the agent in the kth episode. Meanwhile, H_k = {τ_1, τ_2, . . ., τ_{k−1}} ∈ H_k is the random variable representing the entire history of the agent's interaction within the environment at the start of the kth episode.
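Since this section assumes that planning in any given world model is computationally feasible, it may help to see that planning step spelled out. The sketch below computes V∗ for a tiny tabular MDP by backward induction over timesteps; the data structures and the toy two-state chain are our own illustrative assumptions.

```python
def backward_induction(trans, util, horizon):
    """Compute optimal finite-horizon values for a tabular MDP
    M = (S, A, U, T, H) by dynamic programming backwards over timesteps;
    a minimal sketch of the planning oracle this section assumes.
    trans[s][a] is a probability vector over next states, util[s][a] a
    deterministic reward in [0, 1]. Returns (V*_1, greedy policy)."""
    n_states = len(trans)
    n_actions = len(trans[0])
    v = [0.0] * n_states          # value with zero steps remaining
    policy = []
    for h in range(horizon, 0, -1):
        # One-step lookahead: Q_h(s, a) = U(s, a) + E[V_{h+1}(s')].
        q = [[util[s][a] + sum(p * v[s2] for s2, p in enumerate(trans[s][a]))
              for a in range(n_actions)] for s in range(n_states)]
        policy.append([max(range(n_actions), key=lambda a: q[s][a])
                       for s in range(n_states)])
        v = [max(q[s]) for s in range(n_states)]
    policy.reverse()              # policy[h-1][s] = greedy action at timestep h
    return v, policy

# Toy two-state chain: action 1 moves toward state 1, where action 1 pays 1.
trans = [[[1.0, 0.0], [0.0, 1.0]],   # from state 0: stay / go to state 1
         [[1.0, 0.0], [0.0, 1.0]]]   # from state 1: go back / stay
util = [[0.0, 0.0], [0.0, 1.0]]
v, policy = backward_induction(trans, util, horizon=3)
print(v, policy[0])
```

With horizon 3 starting from state 0, the optimal plan forfeits the first reward to reach state 1 and then collects reward twice, so V∗_1(s=0) = 2.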
As is standard in Bayesian reinforcement learning [Bellman and Kalaba, 1959, Duff, 2002, Ghavamzadeh et al., 2015], neither the transition function nor the reward function is known to the agent and, consequently, both are treated as random variables. An agent's initial uncertainty in the (unknown) true MDP M∗ = (U∗, T∗) is reflected by a prior distribution p_1(M∗). As the agent's history of interaction within the environment unfolds, updated knowledge of the underlying MDP is reflected by posterior probabilities p_k(M∗). Since the regret over K episodes, ∑_{k=1}^K ( V∗_{M∗,1}(s_1) − V^{π^(k)}_{M∗,1}(s_1) ), is a random variable due to our uncertainty in M∗, we integrate over this randomness to arrive at the Bayesian regret over K episodes: BayesRegret(K) = E[ ∑_{k=1}^K ( V∗_{M∗,1}(s_1) − V^{π^(k)}_{M∗,1}(s_1) ) ].
Just as in the previous section but with a slight abuse of notation, we will use p_k(X) = p(X | H_k) as shorthand for the conditional distribution of any random variable X given a random realization of an agent's history H_k, at any episode k ∈ [K]. Furthermore, for two arbitrary random variables X and Y, we will denote the entropy and conditional entropy conditioned upon a specific realization of an agent's history H_k, for some episode k ∈ [K], as H_k(X) ≜ H(X | H_k = H_k) and H_k(X | Y) ≜ H(X | H_k = H_k, Y). This notation applies analogously to mutual information: I_k(X; Y) ≜ H_k(X) − H_k(X | Y). We reiterate that a reader should interpret this as recognizing that, while standard information-theoretic quantities average over all associated random variables, an agent attempting to quantify information for the purposes of exploration does so not by averaging over all possible histories that it could potentially experience, but rather by conditioning on the particular random history H_k that it has observed thus far. The dependence on the realization of a random history H_k makes I_k(X; Y) a random variable, and the usual conditional mutual information arises by integrating over this randomness: E[I_k(X; Y)] = I(X; Y | H_k). Additionally, we will adopt a similar notation to express a conditional expectation given the random history: E_k[·] ≜ E[· | H_k].

Posterior Sampling for Reinforcement Learning
A natural starting point for addressing the exploration challenge in a principled manner is Thompson Sampling [Thompson, 1933, Russo et al., 2018]. The Posterior Sampling for Reinforcement Learning (PSRL) algorithm [Strens, 2000, Osband et al., 2013, Osband and Van Roy, 2014, Abbasi-Yadkori and Szepesvari, 2014, Agrawal and Jia, 2017, Osband and Van Roy, 2017, Lu and Van Roy, 2019] (given as Algorithm 6) does this by, in each episode k ∈ [K], sampling a candidate MDP M_k ∼ p_k(M*) and executing its optimal policy in the environment, π^(k) = π*_{M_k}; notably, such posterior sampling guarantees the hallmark probability-matching principle of Thompson Sampling: the policy π^(k) is executed with exactly the posterior probability that it is optimal for the true MDP M*. The resulting trajectory τ_k leads to a new history H_{k+1} = H_k ∪ τ_k and an updated posterior p_{k+1}(M*) over the true MDP.
Algorithm 6 Posterior Sampling for Reinforcement Learning (PSRL) [Strens, 2000]
Input: prior p_1(M*)
for k ∈ [K] do
    Sample MDP M_k ∼ p_k(M*)
    Execute π^(k) = π*_{M_k} and get trajectory τ_k
    Update history H_{k+1} = H_k ∪ τ_k
    Induce posterior p_{k+1}(M*)
end for

Algorithm 7 Value-equivalent Sampling for Reinforcement Learning (VSRL) [Arumugam and Van Roy, 2022]
Input: prior p_1(M*), distortion threshold D ≥ 0
for k ∈ [K] do
    Compute target MDP M̃_k achieving the rate-distortion limit R_k(D)
    Sample MDP M_k ∼ p_k(M̃_k)
    Execute π^(k) = π*_{M_k} and observe trajectory τ_k
    Update history H_{k+1} = H_k ∪ τ_k
    Induce posterior p_{k+1}(M*)
end for

Unfortunately, for complex environments, pursuit of the exact MDP M* may be an entirely infeasible goal, akin to pursuing an optimal action A* within a multi-armed bandit problem. An MDP representing control of a real-world, physical system, for example, suggests that learning the associated transition function requires the agent to internalize laws of physics and motion with near-perfect accuracy. More formally, identifying M* demands that the agent obtain exactly H_1(M*) bits of information from the environment which, under an uninformative prior, may either be prohibitively large, far exceeding the agent's capacity constraints, or simply impractical under time and resource constraints [Lu et al., 2021].
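The PSRL loop of Algorithm 6 can be sketched in a few lines for its simplest special case, a Bernoulli bandit (H = 1 and a single state), where the posterior is Beta and "planning" reduces to an argmax over sampled means. The environment, arm means, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.3, 0.7])           # unknown to the agent
ab = np.ones((2, 2))                        # Beta(1, 1) prior per action

for k in range(500):                        # K episodes of horizon H = 1
    sampled = rng.beta(ab[:, 0], ab[:, 1])  # sample M_k ~ p_k(M*)
    a = int(sampled.argmax())               # execute pi*_{M_k}
    r = rng.random() < true_means[a]        # observe "trajectory" tau_k
    ab[a, 0] += r                           # induce posterior p_{k+1}(M*)
    ab[a, 1] += 1 - r

best_pull_count = ab[1].sum() - 2           # pulls of the better arm
```

As the posterior concentrates, the sampled model increasingly agrees with the true one, so the better arm is selected with growing frequency (probability matching).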

Rate-Distortion Theory for Target MDPs
To remedy the intractabilities imposed by PSRL when an agent must contend with an overwhelmingly complex environment, we once again turn to rate-distortion theory as a tool for defining an information-theoretic surrogate that an agent may use to prioritize its information-acquisition strategy in lieu of M*. If one were to follow the rate-distortion optimization of Equation 4, this would suggest identifying a channel δ_k(π^(k) | M*) that directly maps a bounded agent's beliefs about M* to a behavior policy π^(k) for use in the current episode k ∈ [K]. For the purposes of analysis, Arumugam and Van Roy [2022] instead perform lossy MDP compression, with the interpretation that various facets of the true MDP M* must be discarded by a capacity-limited agent who can only hope to identify a simplified world model that retains as many salient details as possible. Implicit in such an approach is an assumption that the act of planning (that is, mapping any MDP M ∈ M to its optimal policy π*_M) can always be done in a computationally efficient manner irrespective of the agent's capacity limitations. From a mechanistic perspective, this is likely implausible both for artificial agents in large-scale, high-dimensional environments of interest and for biological agents [Ho et al., 2022]. On the other hand, this construction induces a Markov chain M* → M̃ → π^(k), where M̃ denotes the compressed world model; by the data-processing inequality, we have for all k ∈ [K] that I_k(M*; π^(k)) ≤ I_k(M*; M̃), such that minimizing the rate of the lossy MDP compression must also limit the amount of information that flows from the agent's beliefs about the world to the executed behavior policy.
For the precise details of this MDP compression, we first require (just as with any lossy compression problem) the specification of an information source to be compressed as well as a distortion function that quantifies the loss of fidelity between uncompressed and compressed values. Akin to the multi-armed bandit setting, we will take the agent's current beliefs p_k(M*) as the information source to be compressed in each episode. Unlike in the bandit setting, however, the choice of distortion function d : M × M → R≥0 presents an opportunity for the agent designer to be judicious in specifying which aspects of the environment are preserved in the agent's compressed view of the world.
It is fairly well accepted that human beings do not model all facets of the environment when making decisions [Simon, 1956, Gigerenzer and Goldstein, 1996], and the choice of which details are deemed salient enough to warrant retention in the mind of an agent is precisely governed by the choice of distortion function. In the computational reinforcement-learning literature, this reality has called into question longstanding approaches to model-based reinforcement learning [Sutton, 1991, Sutton and Barto, 1998, Littman, 2015] which use standard maximum-likelihood estimation techniques that endeavor to learn the exact model (U*, T*) that governs the underlying MDP. The end result has been a flurry of recent work [Silver et al., 2017, Farahmand et al., 2017, Oh et al., 2017, Asadi et al., 2018, Farahmand, 2018, Grimm et al., 2020, D'Oro et al., 2020, Abachi et al., 2020, Cui et al., 2020, Ayoub et al., 2020, Schrittwieser et al., 2020, Nair et al., 2020, Grimm et al., 2021, Nikishin et al., 2022, Voelcker et al., 2022, Grimm et al., 2022] which eschews the traditional maximum-likelihood objective in favor of various surrogate objectives that restrict the focus of the agent's modeling towards specific aspects of the environment. As the core goal of endowing a decision-making agent with its own internal model of the world is to facilitate model-based planning [Bertsekas, 1995], central among these recent approaches is the value-equivalence principle [Grimm et al., 2020, 2021, 2022], which provides mathematical clarity on how surrogate models can still enable lossless planning relative to the true model of the environment.
For any arbitrary MDP M with model (U, T) and any stationary, stochastic policy π : S → ∆(A), define the Bellman operator B^π_M : {S → R} → {S → R} as (B^π_M V)(s) ≜ E_{a∼π(·|s)}[U(s, a) + E_{s'∼T(·|s,a)}[V(s')]]. The Bellman operator is a foundational tool in dynamic-programming approaches to reinforcement learning [Bertsekas, 1995] and gives rise to the classic Bellman equation: for any MDP M = ⟨S, A, U, T, β, H⟩ and any non-stationary policy π = (π_1, . . ., π_H), the value functions induced by π satisfy V^π_{M,h}(s) = (B^{π_h}_M V^π_{M,h+1})(s) for all h ∈ [H], with V^π_{M,H+1}(s) = 0, ∀s ∈ S. For any two MDPs M = ⟨S, A, U, T, β, H⟩ and M̂ = ⟨S, A, Û, T̂, β, H⟩, Grimm et al. [2020] define a notion of equivalence between them despite their differing models. For any policy class Π ⊆ {S → ∆(A)} and value function class V ⊆ {S → R}, M and M̂ are value equivalent with respect to Π and V if and only if B^π_M V = B^π_{M̂} V for all π ∈ Π and V ∈ V. In words, two different models are deemed value equivalent if they induce identical Bellman updates under every pair of policy and value function from Π × V. Grimm et al. [2020] prove that when Π = {S → ∆(A)} and V = {S → R}, the set of all exactly value-equivalent models is a singleton set containing only the true model of the environment. By recognizing that the ability to plan over all arbitrary behaviors is not necessarily in the agent's best interest and restricting focus to decreasing subsets of policies Π ⊂ {S → ∆(A)} and value functions V ⊂ {S → R}, the space of exactly value-equivalent models is monotonically increasing.
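A minimal sketch of the Bellman operator and of a value-equivalence check over finite policy and value-function classes. The two models below, which agree on Bellman updates only for constant value functions, are a contrived illustration; all instances are assumptions for the sake of the example.

```python
import numpy as np

def bellman(U, T, pi, v):
    """(B^pi_M v)(s) = sum_a pi(a|s) * [U(s,a) + sum_s' T(s'|s,a) v(s')]."""
    return (pi * (U + T @ v)).sum(axis=1)

def value_equivalent(M1, M2, Pis, Vs, tol=1e-9):
    """True iff both models induce identical Bellman updates on Pi x V."""
    return all(
        np.allclose(bellman(*M1, pi, v), bellman(*M2, pi, v), atol=tol)
        for pi in Pis for v in Vs
    )

U = np.array([[0.0, 1.0], [0.5, 0.0]])
T1 = np.array([[[1.0, 0.0], [0.0, 1.0]],
               [[1.0, 0.0], [0.0, 1.0]]])
T2 = np.array([[[0.0, 1.0], [1.0, 0.0]],
               [[0.0, 1.0], [1.0, 0.0]]])   # very different dynamics
pi_a0 = np.array([[1.0, 0.0], [1.0, 0.0]])  # "always take action 0"

# Any two proper models agree on constant value functions, since T @ 1 = 1
# for every stochastic transition matrix:
same_on_constants = value_equivalent((U, T1), (U, T2), [pi_a0], [np.ones(2)])
# ...but they are no longer value equivalent once V distinguishes states:
differs = not value_equivalent((U, T1), (U, T2), [pi_a0], [np.array([1.0, 0.0])])
```

This mirrors the monotonicity noted in the text: shrinking V (here, to constants) enlarges the set of models that count as value equivalent.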
Still, however, exact value equivalence presumes that an agent has the capacity for planning with complete fidelity to the true environment; more plausibly, an agent may only have the resources to plan in an approximately-value-equivalent manner [Grimm et al., 2022]. For brevity, let R ≜ {S × A → [0, 1]} and T ≜ {S × A → ∆(S)} denote the classes of all reward functions and transition functions, respectively. Recall that, with S, A, β, H all known, the uncertainty in a random MDP M* is entirely driven by its model (U*, T*), such that we may think of the support of M* as supp(M*) = M ≜ R × T. We define a distortion function on pairs of MDPs, d_{Π,V} : M × M → R≥0, as d_{Π,V}(M, M̂) ≜ sup_{π∈Π, V∈V} ||B^π_M V − B^π_{M̂} V||²_∞. In words, d_{Π,V} is the supremal squared Bellman error between MDPs M and M̂ across all states s ∈ S, with respect to the policy class Π and value function class V. With an information source and distortion function defined, Arumugam and Van Roy [2022] employ the following rate-distortion function that articulates the lossy MDP compression a capacity-limited decision agent performs to identify a simplified MDP to pursue instead of M*: R_k(D) ≜ inf I_k(M*; M̃) subject to E_k[d_{Π,V}(M*, M̃)] ≤ D, where the infimum runs over all channels from M* to a compressed MDP M̃. By definition, the target MDP M̃_k that achieves this rate-distortion limit will demand that the agent acquire fewer bits of information than what is needed to identify M*. Once again, by virtue of Fact 1, this claim is guaranteed for all k ∈ [K] and any D > 0. Crucially, however, the use of the value-equivalence principle in the distortion function ensures that agent capacity is allocated towards preserving the regions of the world model needed to plan over behaviors as defined through Π and V.
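The distortion d_{Π,V} can be sketched as a brute-force maximization over finite Π and V classes; the models, the single uniform policy, and the single (zero) value function below are all illustrative assumptions.

```python
import numpy as np

def bellman(U, T, pi, v):
    """(B^pi_M v)(s) = sum_a pi(a|s) * [U(s,a) + sum_s' T(s'|s,a) v(s')]."""
    return (pi * (U + T @ v)).sum(axis=1)

def distortion(M1, M2, Pis, Vs):
    """d_{Pi,V}(M1, M2): supremal squared Bellman error over states, Pi, V."""
    return max(
        np.max((bellman(*M1, pi, v) - bellman(*M2, pi, v)) ** 2)
        for pi in Pis for v in Vs
    )

U = np.array([[0.0, 1.0], [0.5, 0.0]])
T = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.5, 0.5], [0.5, 0.5]]])
M_true = (U, T)
M_compressed = (np.zeros((2, 2)), T)     # same dynamics, rewards discarded
Pis = [np.full((2, 2), 0.5)]             # uniform policy only
Vs = [np.zeros(2)]                       # single zero value function
d = distortion(M_true, M_compressed, Pis, Vs)
```

Discarding the reward function entirely incurs distortion equal to the largest squared expected reward under the retained policies, here 0.25.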
Arumugam and Van Roy [2022] establish an information-theoretic Bayesian regret bound for a posterior-sampling algorithm (given as Algorithm 7) that performs probability matching with respect to the target MDP M̃_k, where the bound scales with the rate R_1(D) in place of the entropy H_1(M*) and where Γ < ∞ is a uniform upper bound on the information ratio [Russo and Van Roy, 2016, 2014, 2018a] that emerges as a technical assumption for the analysis; a reader should interpret this Γ as a sort of conversion factor communicating the worst-case number of units of squared regret incurred by the agent per bit of information acquired from the environment.
Just as with the BLASTS algorithm for the multi-armed bandit setting, this VSRL algorithm directly couples an agent's exploratory choices in each episode to the epistemic uncertainty it maintains over the resource-rational learning target M̃_k which it aspires to learn. The bound communicates that an agent with limited capacity must tolerate a higher distortion threshold D and pursue the resulting compressed MDP that bears less fidelity to the original MDP; in exchange, the resulting number of bits needed from the environment to identify such a simplified model of the world is given as R_1(D) and guaranteed to be less than the entropy of M*. Additionally, just as with the regret bound for BLASTS, one can express a near-identical result through the associated distortion-rate function. In particular, this encourages a particular notion of agent capacity as a limit R ∈ R≥0 on the number of bits an agent may obtain from its interactions with the environment. Subject to this constraint, the fundamental limit on the amount of expected distortion incurred is given by the distortion-rate function D_1(R). Embracing this distortion-rate function and taking the VSRL distortion threshold as D = D_1(R) allows for a performance guarantee that explicitly accounts for the agent's capacity limits. In summary, under a technical assumption of episodicity for the purposes of analysis, the theoretical results surveyed in this section parallel those of the preceding section for multi-armed bandits. While computational experiments for this episodic reinforcement-learning setting have not yet been established, owing to the computational inefficiency of running the Blahut-Arimoto algorithm for such a lossy MDP compression problem, the core takeaway of this section is that there is strong theoretical justification for using these tools from rate-distortion theory to empirically study capacity-limited sequential decision-making agents.

Discussion
In this paper, we have introduced capacity-limited Bayesian reinforcement learning, capturing a novel perspective on lifelong learning under a limited cognitive load while also surveying existing theoretical and algorithmic advances specific to multi-armed bandits [Arumugam and Van Roy, 2021a] and reinforcement learning [Arumugam and Van Roy, 2022]. Taking a step back, we now situate our contributions in a broader context by reviewing related work on capacity-limited cognition as well as information-theoretic reinforcement learning. As our framework sits at the confluence of Bayesian inference, reinforcement learning, and rate-distortion theory, we use this opportunity to highlight particularly salient pieces of prior work at the intersection of Bayesian inference and rate-distortion theory as well as at the intersection of reinforcement learning and rate-distortion theory. Furthermore, while the algorithms discussed in this work all operationalize the Blahut-Arimoto algorithm and Thompson Sampling as the primary mechanisms for handling rate-distortion optimization and exploration, respectively, we also discuss opportunities to adopt more sophisticated strategies for computing target actions and for exploring once a target action has been determined. Lastly, we conclude our discussion by returning to a key assumption used throughout this work: that an agent consistently maintains idealized beliefs about the environment E through perfect Bayesian inference.
Broadly speaking, and under the episodic reinforcement-learning formulation of the previous section, these approaches all center around the perspective that a policy π_h : S → ∆(A) should be modeled as a communication channel that, like a human decision-maker with limited information-processing capability, is subject to a constraint on the maximal number of bits that may be sent across it. Consequently, an agent aspiring to maximize returns must do so subject to this constraint on policy complexity; conversely, an agent ought to transmit the minimum amount of information possible while it endeavors to reach a desired level of performance [Polani, 2009, 2011, Tishby and Polani, 2011, Rubin et al., 2012]. Paralleling the distortion-rate function D(R), the resulting policy-optimization objective follows as sup_π E[Q^π(S, A)] subject to I(S; A) ≤ R. It is important to acknowledge that such a formulation sits directly at the intersection of reinforcement learning and rate-distortion theory without invoking any principles of Bayesian inference. Depending on the precise work, subtle variations on this optimization problem exist, from choosing a fixed state distribution for the random variable S [Polani, 2009, 2011], to incorporating the state-visitation distribution of the policy being optimized [Still and Precup, 2012, Gershman, 2020, Lai and Gershman, 2021], to assuming access to the generative model of the MDP and decomposing the objective across a finite state space [Tishby and Polani, 2011, Rubin et al., 2012]. In all of these cases, the empirical results tend to converge by also making use of variations on the classic Blahut-Arimoto algorithm to solve the Lagrangian associated with the constrained optimization [Boyd and Vandenberghe, 2004], producing policies that exhibit higher entropy across states under an excessively limited rate R, with a gradual convergence towards the greedy optimal policy as R increases.
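The rate-constrained policy objective above is typically solved with a Blahut-Arimoto-style alternation in which π(a|s) ∝ ρ(a) exp(β Q(s, a)) and ρ is the marginal action distribution. Below is a minimal sketch; the fixed Q table, state distribution, and β values are illustrative assumptions, with β playing the role of the Lagrange multiplier that trades rate against return.

```python
import numpy as np

def compress_policy(Q, p_s, beta, iters=200):
    """Alternating updates for max E[Q(S, A)] - (1/beta) * I(S; A)."""
    n_actions = Q.shape[1]
    rho = np.full(n_actions, 1.0 / n_actions)   # marginal action distribution
    for _ in range(iters):
        pi = rho * np.exp(beta * Q)             # pi(a|s) ∝ rho(a) exp(beta Q(s,a))
        pi /= pi.sum(axis=1, keepdims=True)
        rho = p_s @ pi                          # re-estimate the marginal
    return pi

Q = np.array([[1.0, 0.0],                       # illustrative action values
              [0.0, 1.0]])
p_s = np.array([0.5, 0.5])                      # illustrative state distribution
pi_tight = compress_policy(Q, p_s, beta=0.1)    # severe rate limit: near-uniform
pi_loose = compress_policy(Q, p_s, beta=10.0)   # loose rate limit: near-greedy
```

The two extremes reproduce the qualitative pattern described in the text: high-entropy policies under a tight rate, converging toward the greedy policy as the constraint relaxes.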
The alignment between this optimization problem and that of the distortion-rate function is slightly wrinkled by the non-stationarity of the distortion function (here, Q^π is used as an analogue to distortion, and it changes as the policy or channel does) and, when using the policy's visitation distribution for S, the non-stationarity of the information source. Despite these slight, subtle mismatches with the core rate-distortion problem, the natural synergy between cognitive and computational decision-making [Tenenbaum et al., 2011, Lake et al., 2017] has led to various reinforcement-learning approaches that draw direct inspiration from this line of thinking [Klyubin et al., 2005, Ortega and Braun, 2011, Still and Precup, 2012, Ortega and Braun, 2013, Shafieepoorfard et al., 2016, Tiomkin and Tishby, 2017, Goyal et al., 2018, Lerch and Sims, 2018, 2019, Abel et al., 2019, Goyal et al., 2020a,b], most notably including parallel connections to work on "control as inference" or KL-regularized reinforcement learning [Todorov, 2007, Toussaint, 2009, Kappen et al., 2012, Levine, 2018, Ziebart, 2010, Fox et al., 2016, Haarnoja et al., 2017, 2018, Galashov et al., 2019, Tirumala et al., 2019]. Nevertheless, despite their empirical successes, such approaches lack principled mechanisms for addressing the exploration challenge [O'Donoghue et al., 2020]. In short, the key reason is that the incorporation of Bayesian inference allows for a separation of reducible or epistemic uncertainty, which exists due to an agent's lack of knowledge, from irreducible or aleatoric uncertainty, which exists due to the natural stochasticity within a random outcome [Der Kiureghian and Ditlevsen, 2009]. Without a Bayesian setting, a random variable denoting an agent's beliefs about the environment E or the underlying MDP M* no longer exists, and a channel like the ones explored throughout this work, from beliefs to actions, ceases to exist. That said, the notion of rate preserved by these methods has been shown to constitute a reasonable notion of policy complexity [Lai and Gershman, 2021], and future work may benefit from combining the two approaches.
Similar to human decision-making [Gershman, 2018, Schulz and Gershman, 2019, Gershman, 2019], provably-efficient reinforcement-learning algorithms have historically relied upon one of two possible exploration strategies: optimism in the face of uncertainty [Kearns and Singh, 2002, Brafman and Tennenholtz, 2002, Kakade, 2003, Auer et al., 2009, Bartlett and Tewari, 2009, Strehl et al., 2009, Jaksch et al., 2010, Dann and Brunskill, 2015, Azar et al., 2017, Dann et al., 2017, Jin et al., 2018, Zanette and Brunskill, 2019, Dong et al., 2022] or posterior sampling [Osband et al., 2013, Osband and Van Roy, 2017, Agrawal and Jia, 2017, Lu and Van Roy, 2019, Lu et al., 2021]. While both paradigms have laid down solid theoretical foundations, a line of work has demonstrated how posterior-sampling methods can be more favorable both in theory and in practice [Osband et al., 2013, 2016a,b, Osband and Van Roy, 2017, Osband et al., 2019, Dwaracherla et al., 2020]. The theoretical results discussed in this work advance and further generalize this line of thinking through the concept of learning targets (referred to in this work as target actions for clarity of exposition), introduced by Lu et al. [2021], which opens up new avenues for entertaining solutions beyond optimal policies and for conditioning an agent's exploration on what it endeavors to learn from its environment, not unlike preschool children [Cook et al., 2011]. While this literature traditionally centers on a single agent interacting within its environment, generalizations to multiple agents acting concurrently while coupled through shared beliefs have been formalized and examined in theory as well as in practice [Dimakopoulou and Van Roy, 2018, Dimakopoulou et al., 2018, Chen et al., 2022]; translating the ideas discussed here to further account for capacity limitations in that setting constitutes a promising direction for future work.
Finally, we note that while the work cited thus far was developed in the reinforcement-learning community, the coupling of rate-distortion theory and Bayesian inference to strike a balance between the simplicity and utility of what an agent learns has been studied extensively by Gottwald and Braun [2019], who come from an information-theoretic background studying bounded rationality [Ortega and Braun, 2011, 2013]. Perhaps the key distinction between the work surveyed here and theirs is the further incorporation of reinforcement learning, which provides a slightly more precise foundation upon which existing machinery can be repurposed to derive theoretical results like regret bounds. In contrast, the formulation of Gottwald and Braun [2019] follows more abstract utility-theoretic decision-making while also leveraging ideas from microeconomics and generalizing beyond standard Shannon information-theoretic quantities; we refer readers to their excellent, rigorous treatment of this topic.

Generalizations to Other Families of Decision Rules
The previous sections demonstrated several concrete implementations of capacity-limited Bayesian decision-making. We focused on BLASTS, an algorithm that generalizes Thompson Sampling, which is itself a quintessential algorithm for navigating the explore-exploit tradeoff in a principled manner in multi-armed bandit and sequential decision-making problems. That said, we emphasize that BLASTS is only one particular instantiation of the framework espoused by the rate-distortion function of Equation 4. Here, we briefly sketch other directions in which the framework has been or could be applied.
First, the general framework of capacity-limited Bayesian decision-making can, in principle, be applied to any algorithm that, when supplied with beliefs about the environment and a particular target for learning, induces a policy to execute in the environment. For example, in information-directed sampling, choices are made not only based on current beliefs about immediate rewards but also based on how actions produce informative consequences that can guide future behavior [Russo and Van Roy, 2014, 2018a, Lu et al., 2021, Hao et al., 2022, Hao and Lattimore, 2022]. This strategy motivates a decision-maker to engage in direct exploration, as opposed to random exploration (Thompson Sampling being one example) [Wilson et al., 2014], and better resolve the explore-exploit dilemma. Work by Arumugam and Van Roy [2021b] has extended the BLASTS algorithm to develop variants of information-directed sampling that similarly minimize the rate between environment estimates and actions. Future work could explore even richer families of decision-rules, such as those based on Bayes-optimal solutions over longer time horizons [Duff, 2002] and even ones that look past the KL-divergence as the core quantifier of information [Lattimore and Szepesvári, 2019, Zimmert and Lattimore, 2019, Lattimore and Gyorgy, 2021].
Additionally, BLASTS itself uses a seminal algorithm from the information-theory literature to address the rate-distortion optimization problem and find the decision-rule that optimally trades off reward and information: the Blahut-Arimoto algorithm [Blahut, 1972, Arimoto, 1972]. However, this standard algorithm, while mathematically sound for random variables taking values on abstract spaces [Csiszár, 1974b], can only be made computationally tractable in the face of discrete random variables. Extending to general input distributions (e.g., distributions with continuous or countable support) occurs through the use of an estimator with elegant theoretical properties such as asymptotic consistency [Harrison and Kontoyiannis, 2008, Palaiyanur and Sahai, 2008]. Despite this, such an estimator is still limited to output distributions with finite support, which restricts its applicability to problems where the action space is finite and relatively small (even if the environment space is complex). Thus, an important direction for future research will be to develop algorithms for finding capacity-limited decision-rules based on versions of Blahut-Arimoto designed for general output distributions (e.g., particle-filter-based algorithms [Dauwels, 2005]).
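For concreteness, here is the classic Blahut-Arimoto iteration for a discrete source at a fixed Lagrange multiplier β. The binary-source/Hamming-distortion instance is a standard textbook example (an illustrative assumption, not drawn from the text), for which the computed (rate, distortion) pair should land on the known curve R(D) = 1 − H_b(D).

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, iters=500):
    """Alternate channel and output-marginal updates at trade-off beta."""
    n, m = d.shape
    r = np.full(m, 1.0 / m)                         # output marginal r(x_hat)
    for _ in range(iters):
        q = r * np.exp(-beta * d)                   # q(x_hat | x), unnormalized
        q /= q.sum(axis=1, keepdims=True)
        r = p_x @ q                                 # marginalize over the source
    D = p_x @ (q * d).sum(axis=1)                   # expected distortion
    rate = p_x @ (q * np.log2(q / r)).sum(axis=1)   # I(X; X_hat) in bits
    return rate, D

p_x = np.array([0.5, 0.5])                          # Bernoulli(1/2) source
d = 1.0 - np.eye(2)                                 # Hamming distortion
rate, D = blahut_arimoto(p_x, d, beta=np.log(3))
```

At β = ln 3 the optimal channel keeps each symbol with probability 3/4, giving D = 1/4 and rate 1 − H_b(1/4).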

Capacity-Limited Estimation and Alternative Information Bottlenecks
Throughout this paper, we have assumed that environment estimation is not directly subject to capacity limitations and that decision-makers perform perfect Bayesian inference. Naturally, however, this idealized scenario is not guaranteed to hold for biological or artificial decision-making agents. One high-level perspective on the core agent-design problem addressed in this work is that decision-making agents cannot acquire unbounded quantities of information from the environment; this reality motivates the need to prioritize information, and rate-distortion theory emerges as a natural tool for facilitating such a prioritization scheme.
By the same token, capacity-limited decision-making agents should also seldom find themselves capable of retaining all bits of information uncovered about the underlying environment E. If this were possible, then maintaining perfect belief estimates about the environment via η_t would be a reasonable supposition. In reality, however, an agent must also be judicious about which pieces of environment information are actually retained. Lu et al. [2021] introduce terminology for discussing this limited corpus of world knowledge as an environment proxy, Ẽ. The lack of fidelity between this surrogate and the true environment E translates to the approximate nature of an agent's Bayesian inference when maintaining beliefs about Ẽ in lieu of E. For biological decision-making agents, the concept of a proxy seems intuitive: as Herbert Simon noted many decades ago, "we are not interested in describing some physically objective world in its totality, but only those aspects of the totality that have relevance as the 'life space' of the organism considered. Hence, what we call the 'environment' will depend upon the 'needs,' 'drives,' or 'goals' of the organism" [Simon, 1956].
Curiously, the relationship between the original environment E and this proxy Ẽ can also be seen as a lossy compression problem in which only a salient subset of the cumulative environment information need be retained by the agent for competent decision-making. Consequently, the associated rate-distortion function, and the question of which candidate notions of distortion are suitable, is likely to be an interesting object of study for future work. Practical optimization of such a rate-distortion function would likely benefit from recent statistical advances in empirical distribution compression [Dwivedi and Mackey, 2022] to get away with representing the information source via a limited number of Monte-Carlo samples.
Finally, although consideration of capacity limits on inference would extend the scope of the current framework, it is worth noting that recent findings in neuroscience support the possibility of a bottleneck on choice processes even if the bottleneck on inference is minimal. For example, when trained on stimuli presented at different angles, mice have been shown to discriminate orientations as low as 20°–30° based on behavioral measures [Abdolrahmani et al., 2019]. However, direct neural measurements from visual processing regions reveal sensitivity to orientations as low as 0.37° [Stringer et al., 2021]. The higher precision (nearly 100× higher) of sensory versus behavioral discrimination is consistent with a greater information bandwidth on inference compared to choice, as assumed in the current version of the model. Similarly, work tracking the development of decision-making strategies in children provides evidence of capacity limits on choice processes even in the absence of limits on inference. For example, Decker et al. [2016] report that, on a task designed to dissociate model-free versus model-based learning mechanisms, 8–12 year olds show signs of encoding changes in transition structure (longer reaction times) but do not appear to use this information to make better decisions, unlike 13–17 year olds and adults. This result is consistent with a distinct bottleneck between inference and action that may have a developmental trajectory. In short, the analyses developed in this paper may shed light on the general computational principles that underlie cases in which decision-makers display optimal inference but suboptimal choice.

Conclusion
Our goal in this paper has been to review key insights from work on capacity-limited Bayesian decision-making by Arumugam and Van Roy [2021a, 2022] and situate it within existing work on resource-rational cognition and decision-making [Griffiths et al., 2015, Lieder and Griffiths, 2020, Gershman et al., 2015]. This discussion naturally leads to a number of questions; in particular, how the general framework presented here can be applied to a wider range of algorithms, how other kinds of information bottlenecks could affect learning, and whether humans and other animals are capacity-limited Bayesian decision-makers. We hope that, by formally outlining the different components of capacity-limited inference and choice, the current work can facilitate future cross-disciplinary investigations of such topics.

A Proof of Theorem 1
We begin our analysis of Rate-Distortion Thompson Sampling by establishing the following fact, which also appears in the proof of Lemma 3 of Arumugam and Van Roy [2021a].

Lemma 1. For any fixed D > 0 and any t ∈ [T], E_t[R_{t+1}(D)] ≤ R_t(D) − I_t(Ã_t; (A_t, O_{t+1})).

Proof. By definition, Ã_t achieves the rate-distortion limit, so that E_t[d(Ã_t, E)] ≤ D. Recall that, by Fact 1, the rate-distortion function is non-increasing in its argument; this implies that, for any D_1 ≤ D_2, R_{t+1}(D_2) ≤ R_{t+1}(D_1). The argument proceeds by applying this fact to the preceding inequality and taking expectations.
Observe by the tower property of expectation that E_t[E_{t+1}[d(Ã_t, E)]] = E_t[d(Ã_t, E)] ≤ D. Moreover, from Fact 1, we recall that the rate-distortion function is convex, so Jensen's inequality applies to the expectation of the rate. Inspecting the definition of the rate-distortion function inside this expectation and noting that, since Ã_t achieves the rate-distortion limit at time period t, we have I_t(E; Ã_t) = R_t(D), applying this fact yields the desired inequality. Lemma 1 shows that the expected amount of information needed from the environment in each successive time period is non-increasing and further highlights two possible sources for this improvement: (1) a change in learning target from Ã_t to Ã_{t+1}, and (2) information acquired about Ã_t in the current time period, I_t(Ã_t; (A_t, O_{t+1})). With this in hand, we can obtain control over the cumulative information gain of an agent across all time periods using the learning target identified under our prior, following an identical argument as Arumugam and Van Roy [2022].
Lemma 2. For any fixed D > 0 and any t ∈ [T], Σ_{k=1}^{t} E[I_k(Ã_k; (A_k, O_{k+1}))] ≤ R_1(D).

Proof. Observe that we can apply Lemma 1 directly to each term of the sum. Applying linearity of expectation and breaking apart the sum, we may then complete the proof by applying the tower property of expectation and re-indexing the last summation.

With all of these tools in hand, we may now establish an information-theoretic regret bound. For each time period t ∈ [T], define the information ratio Γ_t as the squared expected performance shortfall of the chosen action A_t relative to the target Ã_t, divided by the information gained about that target, I_t(Ã_t; (A_t, O_{t+1})). The regret decomposes into two sums, where the first term captures our cumulative performance shortfall by pursuing a learning target Ã_t in each time period, rather than A*, while the second term captures our regret with respect to each target. The latter term is also known as the satisficing regret [Russo and Van Roy, 2022]. Focusing on the first term, we may apply the tower property of expectation to leverage the fact that each target action Ã_t achieves the rate-distortion limit and, therefore, has bounded expected distortion: E_t[d(Ã_t, E)] ≤ D. The remainder of the proof follows as a standard information-ratio analysis [Russo and Van Roy, 2016], only now with the provision of Lemma 2. Namely, the satisficing regret is bounded by first applying our uniform upper bound Γ on the information ratios, then the Cauchy-Schwarz inequality, and finally Lemma 2. Putting everything together establishes the claimed bound. Theorem 1 then follows by Proposition 3 of Russo and Van Roy [2016], which establishes that one may take Γ = |A|/2 for a multi-armed bandit problem with rewards bounded in the unit interval and a finite action space.

Figure 4: Cumulative regret curves for Bernoulli and Gaussian bandits with 10 independent arms comparing traditional Thompson Sampling (TS) against Blahut-Arimoto STS (BLASTS), sweeping over the β hyperparameter of the latter.

Figure 5: Rate-distortion curves for target actions computed via BLASTS (Ã_t) and STS (A_ε) in the first time period of Bernoulli and Gaussian bandits with 1000 independent arms.

Figure 6: Rate curves for Bernoulli and Gaussian bandits with 10 independent arms comparing traditional Thompson Sampling (TS) against Blahut-Arimoto STS (BLASTS), sweeping over the β hyperparameter of the latter.