Abstract
One of the major challenges in the field of neurally driven evolved autonomous agents is deciphering the neural mechanisms underlying their behavior. Toward this goal, we have developed the multi-perturbation Shapley value analysis (MSA), the first axiomatic and rigorous method for deducing causal function localization from multiple-perturbation data, substantially improving on earlier approaches. Based on fundamental concepts from game theory, the MSA provides a formal way of defining and quantifying the contributions of network elements, as well as the functional interactions between them. The previously presented versions of the MSA require full knowledge (or at least an approximation) of the network's performance under all possible multiple perturbations, limiting their applicability to systems with a small number of elements. This article presents new scalable MSA variants that enable efficient analysis of large, complex networks, including large-scale neurocontrollers. The successful operation of the MSA and its new variants is demonstrated in the analysis of several neurocontrollers, consisting of up to 100 neural elements, that solve a food foraging task.
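For reference, the Shapley value at the heart of the MSA is the standard game-theoretic one (the notation below is ours, not the article's): given a coalitional game over a set $N$ of $n$ elements with a performance (characteristic) function $v$, the contribution of element $i$ is

\[
\gamma_i(N, v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\bigl( v(S \cup \{i\}) - v(S) \bigr),
\]

that is, the marginal contribution of $i$ to the network's performance, averaged over all orders in which the elements may be recruited.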