Abstract
Understanding the structure and evolution of cognition is a topic of broad scientific interest. Computational substrates are ideal for such investigations because they can be incorporated into rapidly evolving Artificial Life systems and are easy to manipulate. However, design differences between existing digital systems make it difficult to identify which manipulations are responsible for broad patterns in evolved behavior, and this difficulty is compounded when we try to disentangle how multiple features interact. Here we systematically analyze components from two evolvable digital neural substrates (Recurrent Artificial Neural Networks (RNNs) and Markov Brains) to develop a proof-of-concept for a comparative hybrid approach. We identified elements of the logic and memory-storage architectures in each substrate, then altered and recombined properties of the original substrates to create hybrid substrates. In particular, we investigated the differences between RNNs and Markov Brains along three axes: network sparsity, whether memory is discrete or continuous, and the basic logic operator in each substrate. We then tested the original substrates and the hybrids across a suite of distinct environments with different logic and memory requirements. While we observed trends across all three axes, we identified discreteness of memory as an especially important determinant of performance across our test conditions. However, the specific effect of discretization varied by environment and by whether the associated task relied on information integration. Our results demonstrate that the comparative hybrid approach can identify structural components that enable cognition and facilitate task performance across multiple computational structures.
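To make the three hybridization axes concrete, the following is a minimal sketch (not the authors' implementation; all class names, parameters, and update rules here are illustrative assumptions) of a substrate whose connection sparsity, memory discretization, and elementary logic operator can each be toggled independently, yielding the kind of hybrid combinations described above.

```python
import numpy as np

# Illustrative sketch of the three hybrid axes: sparsity, discrete vs.
# continuous memory, and the elementary operator (gate-like thresholding
# vs. RNN-style summation and squashing). Names are hypothetical.

class HybridSubstrate:
    def __init__(self, n_nodes=8, sparse=True, discrete_memory=True,
                 gate_logic=True, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.standard_normal((n_nodes, n_nodes))
        if sparse:
            # Markov-Brain-like sparsity: prune most connections.
            mask = rng.random((n_nodes, n_nodes)) < 0.2
            self.weights *= mask
        self.discrete_memory = discrete_memory
        self.gate_logic = gate_logic
        self.state = np.zeros(n_nodes)

    def step(self, inputs):
        x = self.state.copy()
        x[: len(inputs)] = inputs          # write inputs into the first nodes
        pre = self.weights @ x
        if self.gate_logic:
            # Gate-style operator: threshold each node's summed input.
            out = (pre > 0).astype(float)
        else:
            # RNN-style operator: continuous squashing nonlinearity.
            out = np.tanh(pre)
        if self.discrete_memory:
            # Discretize recurrent state to binary values before storing.
            out = np.round(np.clip(out, 0.0, 1.0))
        self.state = out
        return out

if __name__ == "__main__":
    # Instantiate one hybrid per combination of the three axes.
    for sparse in (False, True):
        for discrete in (False, True):
            for gate in (False, True):
                brain = HybridSubstrate(sparse=sparse,
                                        discrete_memory=discrete,
                                        gate_logic=gate)
                print(sparse, discrete, gate, brain.step([1.0, 0.0])[:3])
```

Under these assumptions, the two original substrates correspond to opposite corners of the configuration space (dense, continuous, tanh-like versus sparse, discrete, gate-like), and the remaining combinations are the hybrids evaluated across the test environments.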