Abstract
Computational scientists studying cognition, robotics, and Artificial Intelligence have discovered that variation is beneficial in many problem-solving applications. With the addition of variation to a simple algorithm, local attractors may be avoided (breaking the system out of poor behaviors), generalizations discovered (leading to robustness), and new state spaces explored. But exactly how much variation to apply, and where, remains difficult to generalize across implementations and problems, as there is no guiding theory or broad understanding of why variation should help cognitive systems, or in what contexts. Historically, computational scientists could look to biology for insight, in this case to understand variation and its effect on cognition. However, neuroscientists also struggle to explain the variation observed in neural circuitry (neuronal variation), and so cannot say with confidence whether it originates externally, originates internally, or is merely an artifact of an incomplete neural model. Here, we show preliminary data suggesting that a small amount of internal variation is preferentially selected by evolution in problem domains that demand a balance of cognitive strategies. This finding suggests an evolutionary explanation for the existence of internal neuronal variation, and lays the groundwork for understanding when and why to apply variation in Artificial Intelligences.