Abstract
Understanding the structure and evolution of natural cognition is a topic of broad scientific interest, as is the development of an engineering toolkit for constructing artificial cognitive systems. One open question is which components and techniques such a toolkit should include. To investigate this question, we employ agent-based AI, using simple computational substrates (i.e., digital brains) undergoing rapid evolution. Such systems are an ideal choice as they are fast to process, easy to manipulate, and transparent for analysis. Even in this limited domain, however, hundreds of different computational substrates are in use. While benchmarks exist to compare the quality of different substrates, little work has been done to build a broader theory of how substrate features interact. We propose a technique called the Comparative Hybrid Approach and develop a proof of concept by systematically analyzing components from three evolvable substrates: recurrent artificial neural networks, Markov brains, and Cartesian genetic programming. We study the role and interaction of individual elements of these substrates by recombining them piecewise to form new hybrid substrates that can be empirically tested. Here, we focus on the network sparsity, memory discretization, and logic operators of each substrate. We test the original substrates and their hybrids across a suite of distinct environments with different logic and memory requirements. Although we observe many trends, two components stand out: discreteness of memory and the Markov brain logic gates both correlate with high performance across our test conditions. Our results demonstrate that the Comparative Hybrid Approach can identify structural subcomponents that predict task performance across multiple computational substrates.
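To make the piecewise recombination concrete, the following is a minimal sketch of how hybrid substrates could be enumerated from the three component axes named above (network sparsity, memory discretization, and logic operators). The axis values and function names here are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

# Hypothetical component axes drawn from the three parent substrates
# (recurrent neural networks, Markov brains, Cartesian genetic programming).
# Labels are illustrative placeholders, not the authors' implementation.
CONNECTIVITY = ["dense", "sparse"]          # network sparsity
MEMORY = ["continuous", "discrete"]         # memory discretization
LOGIC = ["weighted_sum", "markov_gate", "cgp_operator"]  # logic operators


def enumerate_hybrids():
    """Yield every piecewise recombination of the three component axes.

    Each yielded dict specifies one hybrid substrate configuration
    that could then be evolved and tested empirically.
    """
    for connectivity, memory, logic in product(CONNECTIVITY, MEMORY, LOGIC):
        yield {"connectivity": connectivity, "memory": memory, "logic": logic}


if __name__ == "__main__":
    for hybrid in enumerate_hybrids():
        print(hybrid)
```

Under these assumptions, the sketch produces 2 x 2 x 3 = 12 configurations, which include the original substrates as special cases alongside the novel hybrids.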