Self-replicating neural networks can be trained to output a representation of themselves, driving them towards non-trivial fixpoints in their weight space. We explore the problem of adding a secondary functionality to the primary task of replication. We find a successful solution in training a single network on both tasks with separate input/output vectors, so that the additional task does not hinder (and even stabilizes) the self-replication task. Furthermore, we observe the interactions of these goal-networks in an artificial chemistry environment. We examine the influence of different action parameters on the population and their effects on the group's learning capability. Lastly, we show that the whole group can be safely guided towards goal-fulfilling weight configurations by including a single, specially developed guiding particle that propagates a secondary task to its peers.
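
To make the central architectural idea concrete, the following minimal sketch shows one way a single network could be trained on self-replication plus a secondary task through separate input/output vectors. It is an illustrative assumption, not the paper's implementation: here the replication objective is quine-style (the network predicts the value of each of its own weights from a fixed positional code), and the layer sizes, encoding scheme, toy secondary task, and loss weighting are all placeholders.

import torch
import torch.nn as nn

class GoalNetwork(nn.Module):
    """One network, two heads: self-replication and a secondary (goal) task,
    each with its own input/output vectors. Sizes are illustrative."""
    def __init__(self, enc_dim=16, aux_dim=4, hidden=32):
        super().__init__()
        self.enc_dim, self.aux_dim = enc_dim, aux_dim
        self.body = nn.Sequential(nn.Linear(enc_dim + aux_dim, hidden), nn.Tanh())
        self.replication_head = nn.Linear(hidden, 1)  # predicts one of its own weights
        self.aux_head = nn.Linear(hidden, 1)          # secondary-task output

    def forward(self, weight_enc, aux_in):
        h = self.body(torch.cat([weight_enc, aux_in], dim=-1))
        return self.replication_head(h), self.aux_head(h)

def flat_weights(net):
    # Current values of every scalar weight, detached to serve as targets.
    return torch.cat([p.detach().flatten() for p in net.parameters()]).unsqueeze(1)

net = GoalNetwork()
n_weights = flat_weights(net).numel()
codes = torch.randn(n_weights, net.enc_dim)           # fixed positional code per weight
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    targets = flat_weights(net)                       # self-replication targets
    aux_x = torch.randn(n_weights, net.aux_dim)       # toy secondary task: predict the sum
    aux_y = aux_x.sum(dim=1, keepdim=True)
    pred_w, pred_aux = net(codes, aux_x)
    loss = (nn.functional.mse_loss(pred_w, targets)
            + 0.1 * nn.functional.mse_loss(pred_aux, aux_y))  # weighting is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the two tasks read and write disjoint vector slots, gradients from the secondary head need not disturb the replication mapping; this separation is one plausible reading of how the additional task can coexist with, and even stabilize, self-replication.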
