Abstract

We consider the problem of the evolution of a code within a structured population of agents. Each agent tries to maximize its information about the environment by acquiring information from the outputs of other agents in the population. A naive application of information-theoretic methods would assume that every agent knows how to interpret the information offered by other agents. This presupposes, however, that each agent knows which other agents it is observing, and hence which code they use. In our model we preclude exactly that: an agent does not know which other agents it is observing, so the information it can actually use depends both on how universal the code is and on which agents it happens to listen to. We further investigate whether an agent that does not directly perceive the environment can distinguish environmental states by observing other agents' outputs. For this purpose, we consider a population of different types of agents talking about different concepts, and try to extract new concepts by considering their outputs only.
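The central effect described in the abstract, that the information a listener can actually use depends on whether the agents it observes share a code, can be illustrated with a minimal numerical sketch. This is a toy construction of ours, not the paper's model; the binary environment, the two-symbol codes, and the uniform choice of sender are all assumptions made for illustration:

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) of a joint distribution given as
    a dict mapping (e, y) -> probability."""
    pe, py = {}, {}
    for (e, y), p in joint.items():
        pe[e] = pe.get(e, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (pe[e] * py[y]))
               for (e, y), p in joint.items() if p > 0)

def usable_information(codes):
    """Information a listener obtains about a uniform binary environment
    when it observes a uniformly chosen sender without knowing which one.
    Each code is a dict: environment state -> emitted symbol (a toy stand-in
    for an agent's code)."""
    joint = {}
    for e in (0, 1):                # uniform environment states
        for code in codes:          # sender chosen uniformly at random
            key = (e, code[e])
            joint[key] = joint.get(key, 0.0) + 0.5 / len(codes)
    return mutual_information(joint)

code_a = {0: "a", 1: "b"}           # hypothetical code
code_b = {0: "b", 1: "a"}           # same symbols, opposite meanings

universal = usable_information([code_a, code_a])  # both senders share a code
mixed = usable_information([code_a, code_b])      # conflicting codes
```

With a shared (universal) code the listener recovers the full 1 bit about the environment; when the observed population mixes two codes that assign opposite meanings to the same symbols, the usable information collapses to 0 bits, even though each individual sender is perfectly informative.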