Jory Schossau
Proceedings Papers
Towards a Theory of Mind for Artificial Intelligence Agents
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 21 (July 24–28, 2023). doi:10.1162/isal_a_00605
Abstract
In the growing fervor around artificial intelligence (A.I.), old questions have resurfaced regarding its potential to achieve human-like intelligence and consciousness. A proposed path toward human-level cognition involves the development of representations in A.I. systems. This paper focuses on establishing the methods and metrics necessary for developing and studying an A.I. that can “impute the mental states of others” (Theory of Mind). Here we examine existing psychological and robotic research on this subject, then propose an information-theoretic metric to quantify the extent to which agents have a Theory of Mind. The metric is applied to agents trained using a genetic algorithm, demonstrating that an agent-specific Theory of Mind can be achieved without the need for a general Theory of Mind. This framework lays the operational groundwork for development toward a more general Theory of Mind in artificial intelligence.
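The abstract does not reproduce the metric itself. One standard information-theoretic quantity for this kind of question is the mutual information between an observer's internal states and another agent's states; the sketch below is a minimal illustration of that idea, not the paper's actual metric.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits, estimated from paired samples.

    Illustrative only: one way to quantify how much an observer's
    internal state (xs) carries about another agent's state (ys);
    the paper's exact metric is not reproduced here.
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi
```

If the observer's states track the other agent's states perfectly, the estimate is 1 bit per binary state; if they are independent, it is 0.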
A Simple Sparsity Function to Promote Evolutionary Search
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 53 (July 24–28, 2023). doi:10.1162/isal_a_00655
Abstract
This study investigates the relationship between sparse computation and evolution in various models using a simple function we call sparsify. We use the sparsify function to alter the sparsity of arbitrary matrices during evolutionary search. The sparsify function is tested on a recurrent neural network, a gene interaction matrix, and a gene regulatory network in the context of four different optimization problems. We demonstrate that the function positively affects evolutionary adaptation. Furthermore, this study shows that the sparsify function enables automatic meta-adaptation of sparsity for the discovery of better solutions. Overall, the findings suggest that the sparsify function can be a valuable tool to improve the optimization of complex systems.
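The abstract does not spell out sparsify. A minimal sketch of one plausible such function, assuming it zeroes the smallest-magnitude entries of a matrix to reach a target sparsity (the paper's actual definition may differ):

```python
import numpy as np

def sparsify(matrix, sparsity):
    """Zero the smallest-magnitude entries of `matrix`, leaving roughly
    a (1 - sparsity) fraction nonzero.

    Illustrative sketch, not the paper's definition; entries tied at
    the cutoff magnitude are also zeroed.
    """
    flat = np.abs(matrix).ravel()
    k = int(round(sparsity * flat.size))  # how many entries to zero
    if k == 0:
        return matrix.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    result = matrix.copy()
    result[np.abs(result) <= threshold] = 0.0
    return result

# Hypothetical weight matrix; half the entries are zeroed out.
weights = np.array([[0.9, -0.1], [0.05, 0.8]])
sparse_weights = sparsify(weights, sparsity=0.5)
```

Applied each generation to an evolving weight matrix, a function like this lets search operate on a sparse substrate while the large-magnitude structure survives.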
Towards an FPGA Accelerator for Markov Brains
Open Access
ALIFE 2022: The 2022 Conference on Artificial Life, 34 (July 18–22, 2022). doi:10.1162/isal_a_00516
Abstract
The success of deep learning rests in part on our ability to train models quickly using GPU or TPU hardware accelerators. Markov Brains, which are also a form of neural network, could benefit from such acceleration as well. However, Markov Brains are optimized using genetic algorithms, which place an even higher demand on the acceleration hardware: not only must the network's inputs and outputs be communicated, but new network configurations must be loaded and tested repeatedly in large numbers. FPGAs are a natural substrate on which to implement Markov Brains, which are already built from deterministic logic gates. Here a Markov Brain hardware accelerator is implemented and tested, showing that Markov Brains can be computed within a single clock cycle, the ultimate hardware acceleration. How current FPGA designs and their supporting development toolchains act as limiting factors, and whether there is a future size–speed trade-off, are explored here as well.
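The gate-level model that makes Markov Brains a natural fit for FPGAs can be illustrated in a few lines: each gate reads bits from the current state, looks up a deterministic truth table, and writes bits into the next state. The wiring and truth table below are hypothetical, not taken from the paper.

```python
def step(state, gates):
    """One update of a deterministic Markov Brain.

    Each gate is (input_nodes, output_nodes, truth_table); rows of the
    truth table are indexed by the input bits read as a binary number.
    In this sketch the next state starts at all zeros and gate outputs
    are OR-ed in, so nodes no gate writes to are cleared each step.
    """
    nxt = [0] * len(state)
    for inputs, outputs, table in gates:
        idx = 0
        for i in inputs:                 # pack input bits into a row index
            idx = (idx << 1) | state[i]
        for bit, o in zip(table[idx], outputs):
            nxt[o] |= bit                # write the gate's output bits
    return nxt

# Hypothetical wiring: an XOR gate reading nodes 0,1 and writing node 2.
xor_gate = ([0, 1], [2], [[0], [1], [1], [0]])
print(step([1, 0, 0], [xor_gate]))  # -> [0, 0, 1]
```

Because every gate is pure combinational logic over a fixed-width state vector, the whole update maps directly onto FPGA lookup tables, which is what makes single-clock-cycle evaluation plausible.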
Sexual Selection Compared to Novelty Search
Open Access
ALIFE 2020: The 2020 Conference on Artificial Life, 350–358 (July 13–18, 2020). doi:10.1162/isal_a_00275
Abstract
It has been hypothesized that sexual selection, in conjunction with sexual runaway effects, is the way nature discovers novelty. At the same time, the novelty search algorithm has been proposed as the computational means to effectively explore a solution space without using an objective fitness function. Here, the sexual selection algorithm is defined in such a way that it is largely compatible with novelty search so that it can be used in future applications. In comparison to novelty search, the sexual selection algorithm is capable of exploring the solution space more effectively. This work also supports the idea that sexual selection, disregarding possible confounding effects natural organisms might have, is a very effective way of finding novel adaptations in nature.
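The novelty-search side of the comparison scores candidates by their distance to previously seen behaviors rather than by an objective fitness. A minimal sketch of that standard scoring rule follows; the behavior representation and the choice of k are illustrative, not the paper's settings.

```python
import numpy as np

def novelty_score(behavior, archive, k=3):
    """Standard novelty-search score: mean Euclidean distance from
    `behavior` to its k nearest neighbors in the behavior archive.

    Illustrative sketch; an empty archive makes everything maximally
    novel. Selection then favors high scores, and sufficiently novel
    behaviors are appended to the archive.
    """
    if not archive:
        return float("inf")
    dists = sorted(
        np.linalg.norm(np.asarray(b) - np.asarray(behavior), axis=None)
        for b in archive
    )
    return float(np.mean(dists[:k]))
```

A sexual-selection algorithm made "largely compatible" with this scheme would plug into the same loop, replacing the score-based survivor selection with mate choice.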
Neuroevolution in Dynamically Changing Environments
ALIFE 2020: The 2020 Conference on Artificial Life, 744–746 (July 13–18, 2020). doi:10.1162/isal_a_00274
Abstract
One goal of the Artificial Life field is to achieve a computational system with a complex richness similar to that of biological life. In lieu of the knowledge to achieve this directly, open-ended evolution is often cited as a promising method. However, this is also not straightforward, because it is unknown how to achieve open-ended evolution in a computational setting. One popular hypothesis is that a continuously changing fitness landscape can drive open-ended evolution toward the evolution of complex organisms. Here, we test this idea using the neuroevolution of neural network foraging agents in a smoothly and continuously changing environment for 500,000 generations, compared to an unchanging static environment. Surprisingly, we find evidence that the degree to which novel solutions are found is very similar between static and dynamic environments.
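A "smoothly and continuously changing environment" can be modeled as a fitness function whose optimum drifts with the generation count. The sinusoidal drift and scalar genome below are hypothetical stand-ins for the paper's foraging task, not its actual setup.

```python
import math

def fitness(genome, generation, period=1000.0):
    """Fitness in a smoothly changing environment: the optimal genome
    value drifts sinusoidally with the generation count.

    Hypothetical stand-in for the paper's foraging task; a static
    control environment is recovered by fixing `generation`.
    """
    target = math.sin(2 * math.pi * generation / period)
    return -abs(genome - target)  # 0 is best; more negative is worse
```

Comparing evolution under `fitness(g, generation)` against the static control `fitness(g, 0)` is the shape of the experiment the abstract describes.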
Neuronal Variation as a Cognitive Evolutionary Adaptation
ALIFE 2018: The 2018 Conference on Artificial Life, 57–58 (July 23–27, 2018). doi:10.1162/isal_a_00017
Abstract
Computational scientists studying cognition, robotics, and Artificial Intelligence have discovered that variation is beneficial for many applications of problem-solving. With the addition of variation to a simple algorithm, local attractors may be avoided (breaking out of poor behaviors), generalizations discovered (leading to robustness), and new state spaces explored. But exactly how much variation to use, and where to apply it, is still difficult to generalize across implementations and problems, as there is no guiding theory or broad understanding of why variation should help cognitive systems and in what contexts. Historically, computational scientists could look to biology for insights, in this case to understand variation and its effect on cognition. However, neuroscientists also struggle to explain the variation observed in neural circuitry (neuronal variation), so they cannot offer strong insights into whether it originates externally, originates internally, or is merely the result of an incomplete neural model. Here, we show preliminary data suggesting that a small amount of internal variation is preferentially selected through evolution for problem domains where a balance of cognitive strategies must be used. This finding suggests an evolutionary explanation for the existence of, and reason for, internal neuronal variation, and lays the groundwork for understanding when and why to apply variation in Artificial Intelligences.
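"Internal variation" in a neural circuit can be modeled as noise injected before a neuron's threshold. A minimal sketch, where sigma stands in for the evolvable amount of variation (an assumption for illustration, not the paper's model):

```python
import random

def noisy_neuron(weighted_sum, sigma):
    """Threshold neuron with internal variation: Gaussian noise is
    added to the weighted input sum before thresholding.

    Illustrative sketch; `sigma` is the evolvable amount of internal
    variation, and sigma=0 recovers a deterministic neuron.
    """
    return 1 if weighted_sum + random.gauss(0.0, sigma) > 0 else 0
```

Letting evolution tune sigma per neuron is one way to ask the abstract's question computationally: on which problem domains does selection keep sigma above zero?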