Artificial Life (2016) 22 (2): 196–210.
Published: 01 May 2016
Figures: 13

Abstract
We consider the problem of the evolution of a code within a structured population of agents. The agents try to maximize their information about their environment by acquiring information from the outputs of other agents in the population. A naive use of information-theoretic methods would assume that every agent knows how to interpret the information offered by other agents. However, this presupposes that each agent knows which other agents it observes, and thus which code they use. In our model we preclude precisely that: it is not clear which other agents an agent is observing, so the resulting usable information depends both on the universality of the code used and on which agents an agent is listening to. We further investigate whether an agent that does not directly perceive the environment can distinguish environmental states by observing other agents' outputs alone. For this purpose, we consider a population of different types of agents talking about different concepts, and try to extract new concepts by considering their outputs only.
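The abstract's central claim, that the information usable by a listener depends on the universality of the code, can be illustrated with a toy calculation. The following is a minimal sketch, not the paper's actual model: it assumes a binary environment, agents whose "codes" are simple deterministic maps from environment to output, and a listener who observes a uniformly random agent without knowing its identity. The function names and setup are illustrative inventions.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def listener_information(codes):
    """I(environment; observed output) when the listener observes a
    uniformly random agent whose identity (and hence code) is unknown."""
    envs = [0, 1]  # toy binary environment, uniform prior
    joint = {}
    for e in envs:
        for code in codes:
            o = code(e)
            joint[(e, o)] = joint.get((e, o), 0.0) + 1.0 / (len(envs) * len(codes))
    return mutual_information(joint)

identity = lambda e: e      # one possible code
inverted = lambda e: 1 - e  # the opposite convention

# Universal code: all agents encode the environment the same way.
print(listener_information([identity, identity]))  # -> 1.0 bit
# Non-universal codes: the conventions cancel out and the listener,
# not knowing whom it observes, extracts no usable information.
print(listener_information([identity, inverted]))  # -> 0.0 bits
```

The two extremes bracket the phenomenon the abstract describes: a fully shared code makes every observed output informative, while conflicting codes render the outputs useless to a listener who cannot tell the speakers apart.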