Internal models allow us to simulate and predict the consequences of interacting with the objects in our environment. Applying such models in intelligent robots and machines is a key challenge in increasing their autonomy, robustness, and responsiveness. One obstacle is the need to maintain multiple internal models, corresponding to the multitude of objects in our surroundings, without interference between them. We propose evolving neural networks as a way to generate multiple internal models and study the role of neural modularity in doing so. Intuitively, modularity should help reduce interference between internal models. In a task requiring neural networks to control several different objects, we demonstrate that neuroevolution can produce multiple internal inverse models. The results indicate that modularity may play a role, but the evolved networks reveal an unexpected modular decomposition: rather than separating models of different objects, networks frequently divide into modules that separately process different observed features of the objects.
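
To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: evolving a single fixed-topology neural network to act as an inverse model for several objects at once. The 1-D object dynamics, the network sizes, and the evolutionary parameters are all illustrative assumptions.

```python
# Minimal sketch: neuroevolution of one network serving as an inverse
# model for several hypothetical objects. Everything here (object
# dynamics, architecture, hyperparameters) is an illustrative assumption,
# not the paper's actual experimental setup.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "objects": 1-D point masses responding to a motor command
# with different gains. An inverse model must output, per object, the
# command that moves the object onto a desired position.
OBJECT_GAINS = [0.5, 1.0, 2.0]

N_IN, N_HID, N_OUT = 3, 8, 1  # inputs: position, target, object id

def n_weights():
    return N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def forward(w, x):
    """Tiny fixed-topology MLP; the weights are one flat genome vector."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w, n_trials=20):
    """Negative control error summed over all objects: the same network
    must produce commands that move every object toward its target."""
    err = 0.0
    for obj_id, gain in enumerate(OBJECT_GAINS):
        pos = rng.uniform(-1, 1, n_trials)
        tgt = rng.uniform(-1, 1, n_trials)
        x = np.stack([pos, tgt, np.full(n_trials, obj_id)], axis=1)
        cmd = forward(w, x)[:, 0]
        new_pos = pos + gain * cmd  # object-specific dynamics
        err += np.mean((new_pos - tgt) ** 2)
    return -err

# Plain (mu + lambda) evolution strategy with Gaussian mutation.
MU, LAM, SIGMA, GENS = 10, 40, 0.1, 300
pop = [rng.normal(0, 0.5, n_weights()) for _ in range(MU)]
for g in range(GENS):
    kids = [p + rng.normal(0, SIGMA, p.size)
            for p in pop for _ in range(LAM // MU)]
    pop = sorted(pop + kids, key=fitness, reverse=True)[:MU]

print("best fitness:", fitness(pop[0]))
```

In such a setup, one could then ask the paper's question: whether the evolved weight matrices decompose into modules per object, or, as the authors report, into modules per observed input feature.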
