The free energy principle describes cognitive functions such as perception, action, learning and attention in terms of surprisal minimisation. Under simplifying assumptions, agents are depicted as systems that minimise a weighted sum of prediction errors encoding the mismatch between incoming sensations and the agent's predictions about those sensations. The “dark room” is the state an agent would occupy if it sought only to minimise this sum of prediction errors. This paradoxical state emerges from the contrast between attempts to describe the richness of human and animal behaviour in terms of surprisal minimisation and the trivial solution of a dark room, where the complete absence of sensory stimuli offers the easiest way to minimise prediction errors, i.e., to remain in a perfectly predictable state of darkness with no incoming input. Using active inference, a process theory derived from the free energy principle, we investigate the meaning of the dark room problem with an agent-based model and discuss some of its implications for natural and artificial systems. In this setup, we propose that the paradox arises primarily from the long-standing belief, typical of traditional (computational) theories of cognition, that agents should encode accurate world models.
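For concreteness, the minimised quantity can be sketched in a commonly used simplified form (a minimal illustration under Gaussian assumptions with fixed precisions; the notation here is ours, not taken from the paper):

\[
F \;\approx\; \tfrac{1}{2}\sum_i \pi_i \,\varepsilon_i^{2}, \qquad \varepsilon_i = o_i - \hat{o}_i,
\]

where \(o_i\) denotes an incoming sensation, \(\hat{o}_i\) the agent's prediction of it, and \(\pi_i\) the precision (weight) assigned to that error channel. In a dark room every \(o_i\) is constant and perfectly predicted, so each \(\varepsilon_i\) vanishes and \(F\) attains its trivial minimum.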
