Evolving agents that learn to solve complex, multi-stage tasks is a challenging problem. Environments such as the River Crossing Task are used to explore how these agents evolve and what they learn, but it is often difficult to explain why agents behave as they do. We present the Minimal River Crossing (RC-) Task testbed, which reduces the complexity of the original River Crossing Task while retaining its essential components, so that the fundamental learning challenges the task presents can be understood in more detail. To illustrate this, we demonstrate that the RC- environment can be used to investigate the effect of a movement cost on agent evolution and learning, and, more importantly, that the resulting findings generalise back to the original River Crossing Task.
