Abstract
Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations develop and whether they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To track how R changes over time, we use a genetic algorithm to evolve two types of networks, an artificial neural network and a network of hidden Markov gates, to solve a categorization task. We find that the capacity to represent increases during evolutionary adaptation and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensory inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks represent, how these representations are logically encoded in the networks, and how they form as an agent behaves to solve the task. We conclude that R should be able to quantify the representations within any cognitive system and should be predictive of an agent's long-term adaptive success.