Abstract
Online evolution gives robots the capacity to learn new tasks and to adapt to changing environmental conditions during task execution. Previous approaches to online evolution of neural controllers are typically limited to the optimisation of weights in networks with a prespecified, fixed topology. In this article, we propose a novel approach to online learning in groups of autonomous robots called odNEAT. odNEAT is a distributed and decentralised neuroevolution algorithm that evolves both weights and network topology. We demonstrate odNEAT in three multirobot tasks: aggregation, integrated navigation and obstacle avoidance, and phototaxis. Results show that odNEAT approximates the performance of rtNEAT, an efficient centralised method, and outperforms IM-(μ + 1), a decentralised neuroevolution algorithm. Compared with rtNEAT and IM-(μ + 1), odNEAT’s evolutionary dynamics lead to the synthesis of less complex neural controllers with superior generalisation capabilities. We show that robots executing odNEAT can display a high degree of fault tolerance, as they are able to adapt and learn new behaviours in the presence of faults. We conclude with a series of ablation studies that analyse the impact of each algorithmic component on performance.