Jory Schossau
Proceedings Papers
ALIFE 2022: The 2022 Conference on Artificial Life, 34 (July 18–22, 2022). doi: 10.1162/isal_a_00516
Abstract
The success of deep learning rests in part on our ability to train models quickly using GPU or TPU hardware accelerators. Markov Brains, which are also a form of neural network, could benefit from such acceleration as well. However, Markov Brains are optimized using genetic algorithms, which place an even higher demand on the acceleration hardware: not only must inputs and outputs be communicated to and from the network, but new network configurations must also be loaded and tested repeatedly in large numbers. FPGAs are a natural substrate on which to implement Markov Brains, which are already built from deterministic logic gates. Here a Markov Brain hardware accelerator is implemented and tested, showing that a Markov Brain can be computed within a single clock cycle, the ultimate hardware acceleration. How current FPGA designs and their supporting development toolchains act as limiting factors, and whether a size–speed trade-off lies ahead, are explored here as well.
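The single-clock-cycle result follows from the fact that a Markov Brain update is pure combinational logic. A minimal sketch of that update in software, assuming a common lookup-table formulation of deterministic Markov Brain gates (the gate wiring and names here are illustrative, not taken from the paper):

```python
# Sketch of one deterministic Markov Brain update: each gate is a
# lookup table from its packed input bits to its output bits, so the
# whole update is combinational logic (one FPGA clock cycle).

def step(state, gates):
    """Advance the brain state by one update.

    state: list of 0/1 node values.
    gates: list of (in_idx, out_idx, table), where table maps the
           integer formed by the input bits to a tuple of output bits.
    """
    new_state = list(state)  # nodes not written by any gate persist
    for in_idx, out_idx, table in gates:
        key = 0
        for i in in_idx:                 # pack input bits into an integer
            key = (key << 1) | state[i]
        for o, bit in zip(out_idx, table[key]):
            new_state[o] = bit
    return new_state

# A single NAND-like gate reading nodes 0 and 1 and writing node 2.
gates = [((0, 1), (2,), {0: (1,), 1: (1,), 2: (1,), 3: (0,)})]
print(step([1, 1, 0], gates))  # -> [1, 1, 0]
```

In a genetic algorithm, the `gates` list is the genome that must be reloaded for every candidate brain, which is the communication bottleneck the abstract points to.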
Proceedings Papers
ALIFE 2020: The 2020 Conference on Artificial Life, 350–358 (July 13–18, 2020). doi: 10.1162/isal_a_00275
Abstract
It has been hypothesized that sexual selection, in conjunction with runaway sexual selection effects, is how nature discovers novelty. At the same time, the novelty search algorithm has been proposed as a computational means to explore a solution space effectively without an objective fitness function. Here, the sexual selection algorithm is defined so that it is largely compatible with novelty search and can be used in future applications. Compared with novelty search, the sexual selection algorithm explores the solution space more effectively. This work also supports the idea that sexual selection, setting aside possible confounding effects present in natural organisms, is a very effective way of finding novel adaptations in nature.
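For context on the baseline being compared against, a minimal sketch of the standard novelty-search score (sparseness = mean distance to the k nearest behaviors in an archive); the paper's sexual selection algorithm replaces the selection criterion, and this only illustrates the novelty side:

```python
# Novelty-search sparseness sketch: a behavior is "novel" in
# proportion to its mean distance from its k nearest neighbors
# among previously seen behaviors.

def novelty(behavior, archive, k=3):
    """Mean Euclidean distance from `behavior` to its k nearest
    neighbors in `archive` (all are equal-length vectors)."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
        for other in archive
    )
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(novelty((2.0, 2.0), archive, k=2))
```

Selection then favors high-novelty individuals instead of high-fitness ones, which is what makes the algorithm explore without an objective function.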
Proceedings Papers
ALIFE 2020: The 2020 Conference on Artificial Life, 744–746 (July 13–18, 2020). doi: 10.1162/isal_a_00274
Abstract
One goal of the Artificial Life field is to achieve a computational system with a complex richness similar to that of biological life. Lacking the knowledge to achieve this directly, open-ended evolution is often cited as a promising route. However, this too is not straightforward, because it is unknown how to achieve open-ended evolution in a computational setting. One popular hypothesis is that a continuously changing fitness landscape can drive open-ended evolution toward complex organisms. Here, we test this idea using the neuroevolution of neural network foraging agents in a smoothly and continuously changing environment for 500,000 generations, compared to an unchanging static environment. Surprisingly, we find evidence that the degree to which novel solutions are found is very similar between static and dynamic environments.
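The key experimental ingredient is a landscape that changes smoothly and continuously with time. An assumption-level illustration, not the paper's actual foraging task: a one-dimensional quadratic landscape whose optimum drifts sinusoidally with generation, so the selection target shifts a little every generation:

```python
# Sketch of a smoothly, continuously changing fitness landscape:
# the peak of a quadratic landscape drifts along a sine wave over
# generations. Period and shape are illustrative assumptions.

import math

def fitness(x, generation, period=1000.0):
    """Higher is better; the optimum moves continuously with time."""
    optimum = math.sin(2 * math.pi * generation / period)
    return -(x - optimum) ** 2

# At generation 0 the peak is at x = 0; a quarter period later it
# has drifted to x = 1, so a genotype fixed at x = 0 loses fitness.
print(fitness(0.0, 0))
print(fitness(0.0, 250))
```

A static control environment simply holds `optimum` constant, which is the comparison the abstract describes.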
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, 57–58 (July 23–27, 2018). doi: 10.1162/isal_a_00017
Abstract
Computational scientists studying cognition, robotics, and Artificial Intelligence have found that variation is beneficial for many applications of problem-solving. With the addition of variation to a simple algorithm, local attractors may be avoided (breaking out of poor behaviors), generalizations discovered (leading to robustness), and new state spaces explored. But exactly how much variation to apply, and where, remains difficult to generalize across implementations and problems, as there is no guiding theory or broad understanding of why variation should help cognitive systems and in what contexts. Historically, computational scientists could look to biology for insight, in this case to understand variation and its effect on cognition. However, neuroscientists also struggle to explain the variation observed in neural circuitry (neuronal variation), so they cannot offer strong insight into whether it originates externally, originates internally, or is merely an artifact of an incomplete neural model. Here, we show preliminary data suggesting that a small amount of internal variation is preferentially selected through evolution in problem domains where a balance of cognitive strategies must be used. This finding suggests an evolutionary explanation for the existence of, and reason for, internal neuronal variation, and lays the groundwork for understanding when and why to apply variation in Artificial Intelligences.
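One way to make "evolvable internal variation" concrete, purely as a hedged illustration (the agent model and parameter names are assumptions, not the paper's): give each agent a noise magnitude `sigma` that perturbs its decisions, and let `sigma` mutate alongside the rest of the genome so the amount of internal variation is itself under selection.

```python
# Sketch of internal variation as an evolvable trait: decisions are
# perturbed by Gaussian noise of magnitude sigma, and sigma mutates
# with the weights, so evolution can tune how much variation to keep.

import random

def act(weights, inputs, sigma, rng):
    """Noisy linear decision; sigma = 0 recovers a deterministic agent."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s + rng.gauss(0.0, sigma) > 0 else 0

def mutate(genome, rng, rate=0.1):
    """Mutate weights and sigma together (sigma clamped non-negative)."""
    weights, sigma = genome
    weights = [w + rng.gauss(0.0, rate) for w in weights]
    sigma = max(0.0, sigma + rng.gauss(0.0, rate))
    return weights, sigma

rng = random.Random(0)
print(act([1.0, -1.0], [1.0, 0.0], 0.0, rng))  # sigma=0: deterministic -> 1
```

Under this setup, selection settling on a small but nonzero `sigma` in some task domains is the kind of outcome the abstract's preliminary data point toward.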