Abstract
Self-replicating neural networks can be trained to output a representation of themselves, thereby navigating towards non-trivial fixpoints in their weight space. We explore the problem of adding a secondary functionality to the primary task of replication. We find a successful solution in training a single network on both tasks with separate input/output vectors, so that the additional task does not hinder (and even stabilizes) the self-replication task. Furthermore, we observe the interaction of our goal-networks in an artificial chemistry environment. We examine the influence of different action parameters on the population and their effects on the group's learning capability. Lastly, we show the possibility of safely guiding the whole group to goal-fulfilling weight configurations via the inclusion of one specially developed guiding particle that is able to propagate a secondary task to its peers.