Alexander Tschantz
1-3 of 3 results
Journal Articles
Active Inference and Intentional Behavior
Unavailable
Publisher: Journals Gateway
Neural Computation (2025) 37 (4): 666–700.
Published: 18 March 2025
Figures: 9
Abstract
Recent advances in theoretical biology suggest that key definitions of basal cognition and sentient behavior may arise as emergent properties of in vitro cell cultures and neuronal networks. Such neuronal networks reorganize activity to demonstrate structured behaviors when embodied in structured information landscapes. In this article, we characterize this kind of self-organization through the lens of the free energy principle, that is, as self-evidencing. We do this by first discussing the definitions of reactive and sentient behavior in the setting of active inference, which describes the behavior of agents that model the consequences of their actions. We then introduce a formal account of intentional behavior that describes agents as driven by a preferred end point or goal in latent state-spaces. Next, we investigate these three forms of behavior (reactive, sentient, and intentional) using simulations. First, we simulate the in vitro experiments, in which neuronal cultures modulated their activity to improve gameplay in a simplified version of Pong, by implementing nested, free-energy-minimizing processes. The simulations are then used to deconstruct the ensuing predictive behavior, leading to the distinction between merely reactive, sentient, and intentional behavior, with the latter formalized in terms of inductive inference. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem) that show how quickly and efficiently adaptive behavior emerges under an inductive form of active inference.
Includes: Supplementary data
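The grid-world benchmark mentioned in the abstract above lends itself to a compact illustration. The following is a minimal sketch of our own (not the article's implementation) of one-step active inference in a 1-D grid: because the likelihood here is an identity mapping, the epistemic term of the expected free energy vanishes, and action selection reduces to minimizing risk, the KL divergence between predicted outcomes and prior preferences. All names and parameters are illustrative assumptions.

```python
# Minimal sketch of one-step active inference in a 1-D grid world.
# Assumptions ours: identity likelihood (fully observed state), so the
# epistemic term of expected free energy is zero and G(a) reduces to
# risk, KL[Q(o|a) || P(o)].
import numpy as np

n_states = 5                       # cells 0..4; the goal is cell 4
actions = [-1, 0, +1]              # move left, stay, move right

# Transition model B[a][s', s] = P(s' | s, a): deterministic, edge-clipped.
B = np.zeros((len(actions), n_states, n_states))
for ai, a in enumerate(actions):
    for s in range(n_states):
        B[ai, min(max(s + a, 0), n_states - 1), s] = 1.0

# Prior preferences P(o): a soft distribution peaked on the goal cell.
C = np.exp(-np.abs(np.arange(n_states) - 4.0))
C /= C.sum()

def G(qs, ai):
    """Expected free energy (risk term only) of action ai under beliefs qs."""
    qo = B[ai] @ qs                # predicted outcome distribution
    return np.sum(qo * (np.log(qo + 1e-16) - np.log(C)))

qs = np.eye(n_states)[0]           # start with certainty at cell 0
for t in range(6):
    ai = int(np.argmin([G(qs, a) for a in range(len(actions))]))
    qs = B[ai] @ qs                # beliefs follow the (known) dynamics
    print(f"t={t} action={actions[ai]:+d} cell={int(np.argmax(qs))}")
```

The article's inductive form of active inference goes further than this reactive scheme: policies are constrained by a preferred end point in the latent state-space rather than being scored one step at a time.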
Journal Articles
Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs
Publisher: Journals Gateway
Neural Computation (2022) 34 (6): 1329–1368.
Published: 19 May 2022
Abstract
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation that relies solely on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs but in the concept of automatic differentiation, which allows for the optimization of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice, rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding convolutional neural networks, recurrent neural networks, and the more complex long short-term memory (LSTM) networks, which include a nonlayer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks while using only local and (mostly) Hebbian plasticity. Our method raises the possibility that standard machine learning algorithms could in principle be implemented directly in neural circuitry, and it may also contribute to the development of completely distributed neuromorphic architectures.
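As a rough illustration of the result (a sketch of our own, not the authors' code), the snippet below relaxes the hidden nodes of a tiny two-layer network to a free-energy minimum with the output clamped to a nearby target; the resulting local, Hebbian weight updates approximately recover the backprop gradients, with the match becoming exact in the asymptotic regime the article analyzes. Sizes, learning rates, and variable names are illustrative assumptions.

```python
# Illustrative sketch (assumptions ours): predictive coding on a tiny
# two-layer network, compared against the corresponding backprop gradients.
import numpy as np

rng = np.random.default_rng(0)
f, df = np.tanh, lambda a: 1.0 - np.tanh(a) ** 2

W1 = rng.normal(0.0, 0.5, (4, 3))    # hidden weights (illustrative sizes)
W2 = rng.normal(0.0, 0.5, (2, 4))    # output weights
x0 = rng.normal(size=(3, 1))         # input (clamped)

# Feedforward pass and backprop gradients of L = 0.5 * ||a2 - y||^2.
a1 = W1 @ f(x0)
a2 = W2 @ f(a1)
y = a2 + 0.05 * rng.normal(size=a2.shape)   # nearby target: small-error regime
d2 = a2 - y
d1 = (W2.T @ d2) * df(a1)
gW1_bp, gW2_bp = d1 @ f(x0).T, d2 @ f(a1).T

# Predictive coding: descend the free energy
#   F = 0.5*||x1 - W1 f(x0)||^2 + 0.5*||x2 - W2 f(x1)||^2
# over the hidden nodes x1, with x0 clamped to the input and x2 to the target.
x1, x2 = a1.copy(), y.copy()
for _ in range(500):
    e1 = x1 - W1 @ f(x0)             # prediction error at the hidden layer
    e2 = x2 - W2 @ f(x1)             # prediction error at the output layer
    x1 -= 0.1 * (e1 - (W2.T @ e2) * df(x1))   # gradient step on dF/dx1

e1, e2 = x1 - W1 @ f(x0), x2 - W2 @ f(x1)
gW1_pc, gW2_pc = -e1 @ f(x0).T, -e2 @ f(x1).T  # local, Hebbian weight updates

print(np.max(np.abs(gW1_pc - gW1_bp)), np.max(np.abs(gW2_pc - gW2_bp)))
```

Clamping the target close to the feedforward output keeps the prediction errors small, which is the regime where the correspondence is easiest to see numerically.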
Journal Articles
Whence the Expected Free Energy?
Open Access
Publisher: Journals Gateway
Neural Computation (2021) 33 (2): 447–482.
Published: 01 February 2021
Abstract
The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its importance, the mathematical origins of this quantity and its relation to the variational free energy (VFE) remain unclear. In this letter, we investigate the origins of the EFE in detail and show that it is not simply "the free energy in the future." We present a functional that we argue is the natural extension of the VFE but actively discourages exploratory behavior, thus demonstrating that exploration does not directly follow from free energy minimization into the future. We then develop a novel objective, the free energy of the expected future (FEEF), which possesses both the epistemic component of the EFE and an intuitive mathematical grounding as the divergence between predicted and desired futures.
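For orientation, the quantities named in this abstract are commonly written as follows (notation ours; see the article for exact definitions and derivations): the EFE of a policy \pi at a future time \tau, its approximate decomposition into extrinsic and intrinsic (epistemic) value, and the FEEF as a divergence between predicted and desired futures.

```latex
% Sketch of the standard forms (notation ours).
\begin{align}
G(\pi,\tau)
  &= \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
     \big[ \ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau) \big] \\
  &\approx -\underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)}
       \big[ \ln P(o_\tau) \big]}_{\text{extrinsic value}}
     \;-\;
     \underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)}
       \big[ D_{\mathrm{KL}}\!\left( Q(s_\tau \mid o_\tau, \pi)
       \,\Vert\, Q(s_\tau \mid \pi) \right) \big]}_{\text{intrinsic (epistemic) value}} \\[1ex]
\mathrm{FEEF}(\pi,\tau)
  &= \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
     \Big[ \ln \tfrac{Q(o_\tau, s_\tau \mid \pi)}{\tilde{P}(o_\tau, s_\tau)} \Big]
   = D_{\mathrm{KL}}\!\left( Q(o_\tau, s_\tau \mid \pi)
     \,\Vert\, \tilde{P}(o_\tau, s_\tau) \right)
\end{align}
```

Here $\tilde{P}$ denotes a generative model biased toward preferred outcomes, so the FEEF is literally the divergence between predicted and desired futures, consistent with the abstract's description.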