Marek Grześ
Journal Articles
Publisher: Journals Gateway

Multimodal and Multifactor Branching Time Active Inference
Neural Computation (2024) 36 (11): 2479–2504. Published: 11 October 2024
Abstract
Active inference is a state-of-the-art framework for modeling the brain that explains a wide range of mechanisms. Recently, two versions of branching time active inference (BTAI) have been developed to handle the exponential (space and time) complexity class that occurs when computing the prior over all possible policies up to the time horizon. However, those two versions of BTAI still suffer from an exponential complexity class with regard to the number of observed and latent variables being modeled. We resolve this limitation by allowing each observation to have its own likelihood mapping and each latent variable to have its own transition mapping. The implicit mean field approximation was tested in terms of its efficiency and computational cost using a dSprites environment in which the metadata of the dSprites data set was used as input to the model. In this setting, earlier implementations of branching time active inference (namely, BTAI_VMP and BTAI_BF) underperformed in relation to the mean field approximation (BTAI_3MF) in terms of performance and computational efficiency. Specifically, BTAI_VMP was able to solve 96.9% of the task in 5.1 seconds, and BTAI_BF was able to solve 98.6% of the task in 17.5 seconds. Our new approach outperformed both of its predecessors by solving the task completely (100%) in only 2.559 seconds.
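The gain from giving each observation its own likelihood mapping and each latent variable its own transition mapping can be illustrated with a back-of-the-envelope storage count; the function names and numbers below are illustrative and not taken from the BTAI_3MF implementation:

```python
# Sketch of the factorisation argument: one joint table over all
# observed and latent variables grows exponentially with their number,
# whereas one mapping per modality/factor grows only linearly.

def joint_table_size(n_states, n_modalities, n_factors):
    """Entries in a single joint mapping over all observed and latent variables."""
    return n_states ** (n_modalities + n_factors)

def factorised_size(n_states, n_modalities, n_factors):
    """Entries when each modality and each factor has its own N x N mapping."""
    return (n_modalities + n_factors) * n_states * n_states

# With 3 observation modalities and 3 latent factors of 4 states each:
print(joint_table_size(4, 3, 3))  # 4096 entries, exponential in the variable count
print(factorised_size(4, 3, 3))   # 96 entries, linear in the variable count
```

The contrast between the two counts is the complexity-class difference the abstract refers to, sketched under the simplifying assumption that every variable has the same number of states.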
Deconstructing Deep Active Inference: A Contrarian Information Gatherer
Neural Computation (2024) 36 (11): 2403–2445. Published: 11 October 2024
Abstract
Active inference is a theory of perception, learning, and decision making that can be applied to neuroscience, robotics, psychology, and machine learning. Recently, intensive research has been taking place to scale up this framework using Monte Carlo tree search and deep learning. The goal of this activity is to solve more complicated tasks using deep active inference. First, we review the existing literature and then progressively build a deep active inference agent as follows: we (1) implement a variational autoencoder (VAE), (2) implement a deep hidden Markov model (HMM), and (3) implement a deep critical hidden Markov model (CHMM). For the CHMM, we implemented two versions: one minimizing expected free energy, CHMM[EFE], and one maximizing rewards, CHMM[reward]. Then we experimented with three different action selection strategies: the ε-greedy algorithm as well as softmax and best action selection. According to our experiments, the models able to solve the dSprites environment are the ones that maximize rewards. On further inspection, we found that the CHMM minimizing expected free energy almost always picks the same action, which makes it unable to solve the dSprites environment. In contrast, the CHMM maximizing reward keeps selecting all the actions, enabling it to successfully solve the task. The only difference between those two CHMMs is the epistemic value, which aims to make the outputs of the transition and encoder networks as close as possible. Thus, the CHMM minimizing expected free energy repeatedly picks a single action and becomes an expert at predicting the future when selecting this action, which effectively makes the KL divergence between the outputs of the transition and encoder networks small. Additionally, when selecting the action down, the average reward is zero, while for all the other actions the expected reward is negative. Therefore, if the CHMM has to stick to a single action to keep the KL divergence small, then the action down is the most rewarding. We also show in simulation that the epistemic value used in deep active inference can behave degenerately and in certain circumstances effectively lose, rather than gain, information. As the agent minimizing EFE is not able to explore its environment, the appropriate formulation of the epistemic value in deep active inference remains an open question.
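The three action selection strategies the abstract compares are standard and can be sketched as follows; this is a generic illustration operating on a list of action values, not the paper's own implementation:

```python
import math
import random

def best_action(values):
    """Best action selection: always pick the highest-valued action."""
    return max(range(len(values)), key=lambda a: values[a])

def epsilon_greedy(values, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action, else the best."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))
    return best_action(values)

def softmax_action(values, temperature, rng=random):
    """Sample an action with probability proportional to exp(value / T)."""
    m = max(values)                       # subtract the max for numerical stability
    weights = [math.exp((v - m) / temperature) for v in values]
    r = rng.random() * sum(weights)
    for a, w in enumerate(weights):
        r -= w
        if r <= 0:
            return a
    return len(values) - 1
```

With ε = 0 or a very low softmax temperature, all three collapse to the greedy pick; the stochastic variants differ in how much exploration they inject around it.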
Branching Time Active Inference with Bayesian Filtering
Neural Computation (2022) 34 (10): 2132–2144. Published: 12 September 2022
Abstract
Branching time active inference is a framework proposing to look at planning as a form of Bayesian model expansion. Its roots can be found in active inference, a neuroscientific framework widely used for brain modeling, as well as in Monte Carlo tree search, a method broadly applied in the reinforcement learning literature. Until now, inference over the latent variables was carried out by taking advantage of the flexibility offered by variational message passing, an iterative process that can be understood as sending messages along the edges of a factor graph. In this letter, we harness the efficiency of an alternative method for inference, Bayesian filtering, which does not require iterating the update equations until convergence of the variational free energy. Instead, this scheme alternates between two phases: integration of evidence and prediction of future states. Both phases can be performed efficiently, and this yields a fortyfold speedup over the state of the art.
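The two-phase scheme the abstract describes is the classic discrete Bayesian filter: a prediction step through the transition model alternating with an evidence-integration step through the likelihood. A minimal sketch with toy matrices (illustrative values, not taken from the paper):

```python
# Discrete Bayesian filter: each cycle is one predict + one update,
# with no iteration until convergence.

def predict(belief, transition):
    """Prediction phase: push the belief through P(s' | s)."""
    n = len(transition[0])
    return [sum(belief[s] * transition[s][s2] for s in range(len(belief)))
            for s2 in range(n)]

def update(belief, likelihood, observation):
    """Evidence-integration phase: weight by P(o | s) and renormalise."""
    posterior = [b * likelihood[s][observation] for s, b in enumerate(belief)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Toy two-state example.
T = [[0.9, 0.1], [0.2, 0.8]]   # transition P(s' | s)
L = [[0.8, 0.2], [0.3, 0.7]]   # likelihood P(o | s)
belief = [0.5, 0.5]
belief = predict(belief, T)          # [0.55, 0.45]
belief = update(belief, L, 0)        # ≈ [0.765, 0.235]
```

Because each phase is a single closed-form pass over the state space, the per-step cost is fixed, which is the source of the speedup over iterative variational message passing.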
Realizing Active Inference in Variational Message Passing: The Outcome-Blind Certainty Seeker
Neural Computation (2021) 33 (10): 2762–2826. Published: 16 September 2021
Abstract
Active inference is a state-of-the-art framework in neuroscience that offers a unified theory of brain function. It is also proposed as a framework for planning in AI. Unfortunately, the complex mathematics required to create new models can impede application of active inference in neuroscience and AI research. This letter addresses this problem by providing a complete mathematical treatment of the active inference framework in discrete time and state spaces and the derivation of the update equations for any new model. We leverage the theoretical connection between active inference and variational message passing as described by John Winn and Christopher M. Bishop in 2005. Since variational message passing is a well-defined methodology for deriving Bayesian belief update equations, this letter opens the door to advanced generative models for active inference. We show that using a fully factorized variational distribution simplifies the expected free energy, which furnishes priors over policies so that agents seek unambiguous states. Finally, we consider future extensions that support deep tree searches for sequential policy optimization based on structure learning and belief propagation.
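The expected free energy that furnishes the priors over policies mentioned above is commonly decomposed into risk (divergence of predicted outcomes from preferred outcomes) plus ambiguity (expected entropy of the likelihood); minimising the ambiguity term is what drives agents toward unambiguous states. A minimal sketch of that decomposition for a discrete model, with illustrative matrices not taken from the letter:

```python
import math

def expected_free_energy(q_states, likelihood, preferences, eps=1e-12):
    """G = KL[ Q(o) || C(o) ]  +  E_Q(s)[ H[P(o | s)] ]  (risk + ambiguity)."""
    n_obs = len(likelihood[0])
    # Predicted outcome distribution Q(o) = sum_s P(o | s) Q(s).
    q_obs = [sum(likelihood[s][o] * q_states[s] for s in range(len(q_states)))
             for o in range(n_obs)]
    # Risk: divergence of predicted outcomes from preferences C(o).
    risk = sum(q * math.log((q + eps) / (preferences[o] + eps))
               for o, q in enumerate(q_obs))
    # Ambiguity: expected entropy of the likelihood mapping.
    ambiguity = -sum(q_states[s] * sum(p * math.log(p + eps)
                                       for p in likelihood[s])
                     for s in range(len(q_states)))
    return risk + ambiguity

# A deterministic likelihood (unambiguous states) yields lower G than a
# uniform one, all else equal -- the "certainty seeking" behaviour.
g_clear = expected_free_energy([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])
g_vague = expected_free_energy([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5])
```

Here both likelihoods predict the same outcome distribution, so the risk terms match and the difference in G comes entirely from ambiguity, isolating the certainty-seeking incentive the title alludes to.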