Internal models allow us to simulate and predict the consequences of interacting with the objects in our environment. Applying such models in intelligent robots and machines is a key challenge in increasing their autonomy, robustness and responsiveness. One obstacle is the need to maintain multiple internal models, corresponding to the multitude of objects in our surroundings, without interference between them. We propose evolving neural networks as a way to generate multiple internal models, and study the role of neural modularity in doing so. Intuitively, modularity should help reduce interference between internal models. In a task requiring neural networks to control multiple different objects, we demonstrate that neuroevolution can produce multiple internal inverse models. Results indicate that modularity may play a role, but the evolved networks reveal an unexpected modular decomposition: rather than separating models of different objects, networks frequently divide into modules that separately process different observed features of objects.
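To make the notion of an internal inverse model concrete, the following is a minimal sketch for a hypothetical 1D point-mass plant (toy analytic dynamics chosen for illustration; the paper's models are evolved neural networks, not closed-form functions). A forward model predicts the consequence of an action; the inverse model computes the action that produces a desired consequence.

```python
# Toy illustration of forward vs. inverse internal models for a
# 1D point mass (hypothetical dynamics, not the evolved networks).

def forward_model(state, force, mass=2.0, dt=0.1):
    """Simulate one step: predict the consequence of applying `force`."""
    x, v = state
    v_next = v + (force / mass) * dt   # semi-implicit Euler update
    x_next = x + v_next * dt
    return (x_next, v_next)

def inverse_model(state, desired_velocity, mass=2.0, dt=0.1):
    """Compute the force that reaches `desired_velocity` in one step."""
    _, v = state
    return mass * (desired_velocity - v) / dt

state = (0.0, 0.0)
force = inverse_model(state, desired_velocity=1.0)
next_state = forward_model(state, force)
print(next_state[1])  # → 1.0 (the desired velocity is reached)
```

Controlling several objects with different masses would require a distinct inverse mapping per object; the question studied here is how a single evolved network can hold such multiple mappings without interference.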