Arend Hintze
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 99 (July 22–26, 2024). doi:10.1162/isal_a_00761
Abstract
This paper introduces the Gene Regulatory Neural Cellular Automata (ENIGMA) model, an extension of the Neural Cellular Automata (NCA) framework aimed at modeling biological development with a greater degree of biological fidelity. Traditional NCAs, while capable of generating complex patterns through neural network-driven update rules, lack mechanisms that closely mimic biological processes such as cell-cell signaling and gene regulatory networks (GRNs). Our ENIGMA model addresses these limitations by incorporating update rules based on a simulated gene regulatory network driven by cell-cell signaling, optimized both through backpropagation and genetic algorithms. We demonstrate the structure and functionality of ENIGMA through various experiments, comparing its performance and properties with those of natural organisms. Our findings reveal that ENIGMA can successfully simulate complex cellular networks and exhibit phenomena such as homeotic transformations, pattern maintenance across variable tissue sizes, and the formation of simple regulatory motifs akin to those observed in developmental biology. ENIGMA thus represents a significant step towards bridging the gap between computational models and the intricacies of biological development, offering a versatile tool for exploring developmental and evolutionary questions, with implications for understanding gene regulation, pattern formation, and the emergent behavior of complex systems.
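As a rough illustration of the kind of update rule described in this abstract, the sketch below runs a gene-regulatory-style cellular automaton in which each cell updates its gene-expression vector from its own state and a crude cell-cell signal (the mean expression of its four neighbors). This is not the ENIGMA implementation; the grid size, gene count, weight matrices, and sigmoidal regulation function are all illustrative assumptions.

```python
import numpy as np

# Minimal GRN-style cellular automaton update (illustrative sketch, not ENIGMA code).
N_GENES = 8          # genes expressed per cell
GRID = 16            # cells on one side of a square tissue

rng = np.random.default_rng(0)
expression = rng.random((GRID, GRID, N_GENES))      # current gene expression per cell
W_self = rng.normal(0, 0.5, (N_GENES, N_GENES))     # intra-cell regulatory weights
W_signal = rng.normal(0, 0.5, (N_GENES, N_GENES))   # weights on neighbor signals

def step(expr):
    """One developmental step: each cell integrates its own expression and
    the mean expression of its four neighbors (a simple cell-cell signal)."""
    signal = (np.roll(expr, 1, 0) + np.roll(expr, -1, 0) +
              np.roll(expr, 1, 1) + np.roll(expr, -1, 1)) / 4.0
    activation = expr @ W_self.T + signal @ W_signal.T
    return 1.0 / (1.0 + np.exp(-activation))         # sigmoidal gene regulation

for _ in range(50):                                   # let the pattern develop
    expression = step(expression)
```

In a setup like this, the two weight matrices would be the parameters tuned by backpropagation or a genetic algorithm toward a target pattern.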
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 21 (July 24–28, 2023). doi:10.1162/isal_a_00605
Abstract
In the growing fervor around artificial intelligence (A.I.), old questions have resurfaced regarding its potential to achieve human-like intelligence and consciousness. A proposed path toward human-level cognition involves the development of representations in A.I. systems. This paper focuses on establishing the methods and metrics necessary for developing and studying an A.I. that can “impute the mental states of others” (Theory of Mind). Here we examine existing psychological and robotic research on this subject, then propose an information-theoretic metric to quantify the extent to which agents have a Theory of Mind. The metric is applied to agents trained using a genetic algorithm, demonstrating that an agent-specific Theory of Mind can be achieved without the need for a general Theory of Mind. This framework lays the operational groundwork for development toward a more general Theory of Mind in artificial intelligence.
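The paper's exact metric is not reproduced here, but information-theoretic measures of this kind are typically built on the shared information between one agent's internal states and another agent's (withheld) states. The sketch below shows a plug-in mutual-information estimate over discretized state sequences; the variable names and the toy data are assumptions for illustration only.

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from two aligned discrete sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), count in joint.items():
        pxy = count / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical usage: discretized internal states of an observer agent versus the
# other agent's states, recorded over one evaluation episode.
observer_states = [0, 1, 1, 2, 0, 2, 1, 0]
other_agent_states = [0, 1, 1, 1, 0, 2, 1, 0]
print(mutual_information(observer_states, other_agent_states))
```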
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 95 (July 24–28, 2023). doi:10.1162/isal_a_00604
Abstract
The study of gene regulatory networks (GRNs) is fundamental to the understanding of evolutionary dynamics and artificial life modeling. This paper presents an integration of a GRN into the NK-fitness landscape model and explores the impact of sparsity on epistasis and pleiotropy. As sparsity increases, gene interactions diminish, which is expected to reduce both epistasis and pleiotropy. Our findings corroborate the model's response to such perturbations, demonstrating its potential for investigating a range of GRN adaptations within the NK-fitness landscape framework.
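For readers unfamiliar with the underlying model, the sketch below implements a textbook NK fitness landscape with an explicit gene-interaction table; the partner table is a simple stand-in for the GRN, and zeroing its rows would be one crude way to emulate increasing sparsity. The parameter values and random fitness contributions are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Standard NK landscape: each of N genes contributes a fitness component that
# depends on its own allele and the alleles of K interaction partners.
N, K = 10, 3
rng = np.random.default_rng(1)

partners = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False)
                     for i in range(N)])
tables = {i: {} for i in range(N)}   # lazily filled random fitness contributions

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = tuple(genome[[i, *partners[i]]])
        if key not in tables[i]:
            tables[i][key] = rng.random()
        total += tables[i][key]
    return total / N

genome = rng.integers(0, 2, N)
print(fitness(genome))
```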
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 53 (July 24–28, 2023). doi:10.1162/isal_a_00655
Abstract
This study investigates the relationship between sparse computation and evolution in various models using a simple function we call sparsify. We use the sparsify function to alter the sparsity of arbitrary matrices during evolutionary search. The sparsify function is tested on a recurrent neural network, a gene interaction matrix, and a gene regulatory network in the context of four different optimization problems. We demonstrate that the function positively affects evolutionary adaptation. Furthermore, this study shows that the sparsify function enables automatic meta-adaptation of sparsity for the discovery of better solutions. Overall, the findings suggest that the sparsify function can be a valuable tool to improve the optimization of complex systems.
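One plausible form of such a function, sketched below under that assumption, zeroes out a given fraction of the smallest-magnitude entries of a matrix; the sparsity level could itself be an evolvable parameter. The paper's exact definition of sparsify may differ.

```python
import numpy as np

def sparsify(matrix, sparsity):
    """Zero out the fraction `sparsity` of entries with the smallest magnitude.
    Illustrative sketch; the paper's sparsify function may be defined differently."""
    flat = np.abs(matrix).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return matrix.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    result = matrix.copy()
    result[np.abs(matrix) <= threshold] = 0.0
    return result

# Example: an evolvable sparsity value applied to a recurrent weight matrix.
weights = np.random.default_rng(2).normal(size=(5, 5))
print(sparsify(weights, sparsity=0.6))
```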
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 34 (July 18–22, 2022). doi:10.1162/isal_a_00516
Abstract
The success of deep learning is to some degree based on our ability to train models quickly using GPU or TPU hardware accelerators. Markov Brains, which are also a form of neural network, could benefit from such acceleration as well. However, Markov Brains are optimized using genetic algorithms, which place an even higher demand on the acceleration hardware: not only do the network's inputs and outputs need to be communicated, but new network configurations also have to be loaded and tested repeatedly in large numbers. FPGAs are a natural substrate for implementing Markov Brains, which are themselves built from deterministic logic gates. Here a Markov Brain hardware accelerator is implemented and tested, showing that Markov Brains can be computed within a single clock cycle, the ultimate hardware acceleration. We also explore how current FPGA designs and their supporting development toolchains act as limiting factors, and whether a size-speed trade-off will arise in the future.
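As background to the claim that Markov Brains are built from deterministic logic gates, the sketch below shows one such gate in software: a lookup table mapping the joint state of its input nodes to values written to its output nodes. The node indices and table contents are arbitrary illustrations; on an FPGA, all gates of a brain can evaluate in parallel, which is what makes single-cycle computation plausible.

```python
# Minimal sketch of a deterministic Markov Brain gate (illustrative, not the
# accelerator implementation described in the paper).

def apply_gate(state, in_nodes, out_nodes, table):
    """Read the gate's inputs from `state`, look up the output pattern, and
    return the (node, value) writes this gate wants to perform."""
    address = 0
    for node in in_nodes:
        address = (address << 1) | state[node]
    outputs = table[address]                 # row of 0/1 values, one per output node
    return list(zip(out_nodes, outputs))

# A 2-input XOR-like gate reading nodes 0 and 2 and writing to node 4.
state = [1, 0, 1, 0, 0]
writes = apply_gate(state, in_nodes=[0, 2], out_nodes=[4],
                    table={0b00: [0], 0b01: [1], 0b10: [1], 0b11: [0]})
for node, value in writes:
    state[node] = value
```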
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 13 (July 18–22, 2022). doi:10.1162/isal_a_00491
Abstract
Genome-wide association studies (GWAS) are a powerful tool for identifying genes. They exploit standing genetic variation and correlate phenotypic diversity with genetic markers located close to or within genes of interest. However, their power is limited when it comes to complex phenotypes caused by highly epistatically interacting genes. To improve GWAS and to develop new methods, a computational model system could prove invaluable. In the computational model system presented here, the functionality of every gene in question can be identified using knockouts, which allows the quantitative genetics results to be compared with a functional analysis. The goal here is a pilot study investigating to what degree such a computational model can serve as a positive control for a GWAS. Surprisingly, even though the model used here is relatively simple and uses only a few genes, the GWAS struggles to identify all relevant genes. The advantages and limitations of this approach are discussed with a view to improving the model for future comparisons.
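To make the GWAS side of this comparison concrete, the sketch below runs a generic marker-phenotype association scan: correlate each marker with the phenotype and rank markers by association strength. The toy genotypes, the additive causal loci, and the correlation-based scoring are assumptions for illustration; they are not the specific model system or statistics used in the paper.

```python
import numpy as np

# Toy GWAS-style association scan on simulated standing variation.
rng = np.random.default_rng(3)
n_individuals, n_markers = 200, 50
genotypes = rng.integers(0, 2, (n_individuals, n_markers))
causal = [4, 17, 31]                                          # hypothetical causal loci
phenotype = genotypes[:, causal].sum(axis=1) + rng.normal(0, 0.5, n_individuals)

def association_scan(G, y):
    """Absolute Pearson correlation between each marker column and the phenotype."""
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    r = (Gc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Gc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.abs(r)

scores = association_scan(genotypes, phenotype)
print(np.argsort(scores)[::-1][:5])   # top candidate markers
```

In the paper's setting, knockouts provide the ground truth against which a scan like this can be judged.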
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 113 (July 18–22, 2021). doi:10.1162/isal_a_00457
Abstract
Artificial cognitive systems (e.g., artificial neural networks) have taken an ever more present role in the modern world, providing enhancements to everyday life in our cars, in our phones, and on the internet. In order to produce systems more capable of achieving their designated tasks, previous work has sought to direct the evolution of networks using a process referred to as R-augmentation. This process selects for the maximization of an information-theoretic measure of the agent's stored understanding of the environment, its representation (R), in addition to selecting for task performance. This method was shown to induce increased task performance in a shorter amount of evolutionary time compared to a standard genetic algorithm. Extensions of this work have looked at how R-augmentation affects the distribution of representations across the neurons of the brain “tissue”, or nodes of the network, referred to as smearedness (S). Here we seek to improve upon the prior methods by moving beyond the simple maximization used in the original augmentation formula, using the MAP-Elites algorithm to identify intermediate target values to optimize towards. We also examine, with mixed success, the feasibility of using MAP-Elites itself as an optimization method in place of the traditional selection methods used with R-augmentation. These methods allow us to shape how the network evolves and to produce better-performing artificial cognitive systems.
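The sketch below illustrates the general shape of such an augmented objective, assuming (this is not the paper's formula) that the target-based variant penalizes distance from a chosen target value of R, and that a MAP-Elites variant uses (R, S) as the behavior descriptor. The weights, bin counts, and value ranges are illustrative assumptions.

```python
# Sketch of a target-based augmented objective and a MAP-Elites descriptor
# built from R and S. Illustrative only; not the paper's exact formulation.

def augmented_fitness(task_performance, R, r_target, weight=0.5):
    """Task performance minus a penalty for missing the target representation value."""
    return task_performance - weight * abs(R - r_target)

def descriptor_bin(R, S, r_bins=10, s_bins=10, r_max=4.0, s_max=4.0):
    """Map an agent's (R, S) measurements to a cell of a MAP-Elites archive."""
    r_idx = min(int(R / r_max * r_bins), r_bins - 1)
    s_idx = min(int(S / s_max * s_bins), s_bins - 1)
    return r_idx, s_idx
```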
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 350–358 (July 13–18, 2020). doi:10.1162/isal_a_00275
Abstract
It has been hypothesized that sexual selection, in conjunction with runaway effects, is how nature discovers novelty. At the same time, the novelty search algorithm has been proposed as a computational means to effectively explore a solution space without using an objective fitness function. Here, a sexual selection algorithm is defined in such a way that it is largely compatible with novelty search, so that it can be used in future applications. In comparison to novelty search, the sexual selection algorithm explores the solution space more effectively. This work also supports the idea that sexual selection, disregarding possible confounding effects present in natural organisms, is a very effective way of finding novel adaptations in nature.
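For reference, the novelty-search baseline being compared against typically scores an individual by its mean distance to the k nearest neighbors in behavior space (population plus archive). The sketch below implements that standard score; a sexual-selection variant could reuse the same behavior vectors but let evolved preferences, rather than novelty, decide who reproduces. The data shapes and k value are illustrative assumptions.

```python
import numpy as np

def novelty_scores(behaviors, archive, k=5):
    """Mean distance to the k nearest neighbors in behavior space."""
    pool = np.vstack([behaviors, archive]) if len(archive) else behaviors
    scores = []
    for b in behaviors:
        d = np.linalg.norm(pool - b, axis=1)
        scores.append(np.sort(d)[1:k + 1].mean())   # index 0 is the distance to itself
    return np.array(scores)

rng = np.random.default_rng(5)
behaviors = rng.normal(size=(10, 3))    # behavior descriptors of the current population
archive = rng.normal(size=(30, 3))      # previously archived behaviors
print(novelty_scores(behaviors, archive, k=5))
```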
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 744–746 (July 13–18, 2020). doi:10.1162/isal_a_00274
Abstract
One goal of the Artificial Life field is to achieve a computational system with a complex richness similar to that of biological life. In the absence of the knowledge to achieve this directly, open-ended evolution is often cited as a promising method. However, this is also not straightforward, because it is unknown how to achieve open-ended evolution in a computational setting. One popular hypothesis is that a continuously changing fitness landscape can drive open-ended evolution toward the evolution of complex organisms. Here, we test this idea using the neuroevolution of neural network foraging agents in a smoothly and continuously changing environment over 500,000 generations, compared to an unchanging static environment. Surprisingly, we find evidence that the degree to which novel solutions are found is very similar between static and dynamic environments.
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 247–254 (July 29–August 2, 2019). doi:10.1162/isal_a_00170
Abstract
Sexual selection is a powerful yet poorly understood evolutionary force. Research into sexual selection, whether biological, computational, or mathematical, has tended to take a top-down approach, studying complex natural systems. Many simplifying assumptions must be made in order to make these systems tractable, but it is unclear whether these simplifications result in a system that still represents natural ecological and evolutionary dynamics. Here, we take a bottom-up approach in which we construct simple computational systems from subsets of biologically plausible components and focus on examining the underlying dynamics resulting from the interactions of those components. We use this method to investigate sexual selection in general and the sexy sons theory in particular. The minimally necessary components are therefore genomes, genome-determined displays and preferences, and a process capable of overseeing parent selection and mating. We demonstrate the efficacy of our approach (i.e., we observe the evolution of female preference) and provide support for the sexy sons theory, including illustrating the oscillatory behavior that developed in the presence of multiple costly display traits.
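The mate-choice component named above can be made concrete with a small sketch in which females weight males by how closely their display matches the female's preference. The Gaussian matching rule, the choosiness parameter, and the single-trait displays are assumptions for illustration, not the components used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def choose_mate(female_preference, male_displays, choosiness=2.0):
    """Return the index of the chosen male, sampled by preference-display match."""
    match = np.exp(-choosiness * (male_displays - female_preference) ** 2)
    probs = match / match.sum()
    return rng.choice(len(male_displays), p=probs)

male_displays = rng.normal(0.0, 1.0, size=20)   # one display trait per male
print(choose_mate(female_preference=0.8, male_displays=male_displays))
```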
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 432–439 (July 29–August 2, 2019). doi:10.1162/isal_a_00198
Abstract
Natural environments are full of ambient noise; nevertheless, natural cognitive systems cope well with uncertainty and also have ways to suppress or ignore noise unrelated to the task at hand. For most intelligent tasks, experiences and observations have to be committed to memory, and these representations of reality inform future decisions. We know that deep-learned artificial neural networks (ANNs) often struggle with the formation of representations. This struggle may be due to the ANN's fully interconnected, layered architecture, which forces information to be propagated over the entire system, in contrast to natural brains, which instead have sparsely distributed representations. Here we show how ambient noise causes neural substrates such as recurrent ANNs and long short-term memory neural networks to evolve more representations in order to function in these noisy environments, which also greatly improves their functionality. However, these systems also tend to further smear their representations over their internal states, making them more vulnerable to internal noise. We also show that Markov Brains (MBs) are mostly unaffected by ambient noise, and their representations remain sparsely distributed (i.e., not smeared). This suggests that ambient noise helps to increase the number of representations formed in neural networks, but also requires us to find additional solutions to prevent smearing of said representations.
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 388–395 (July 23–27, 2018). doi:10.1162/isal_a_00076
Abstract
Artificial neural networks (ANNs), while exceptionally useful for classification, are vulnerable to misdirection. Small amounts of noise can significantly affect their ability to correctly complete a task. Instead of generalizing concepts, ANNs seem to focus on surface statistical regularities in a given task. Here we compare how recurrent artificial neural networks, long short-term memory units, and Markov Brains sense and remember their environments. We show that information in Markov Brains is localized and sparsely distributed, while the other neural network substrates “smear” information about the environment across all nodes, which makes them vulnerable to noise.
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 57–58 (July 23–27, 2018). doi:10.1162/isal_a_00017
Abstract
Computational scientists studying cognition, robotics, and Artificial Intelligence have discovered that variation is beneficial for many applications of problem-solving. With the addition of variation to a simple algorithm, local attractors may be avoided (breaking out of poor behaviors), generalizations may be discovered (leading to robustness), and new state spaces may be explored. But exactly how much variation to use, and where it should be applied, is still difficult to generalize across implementations and problems, as there is no guiding theory or broad understanding of why variation should help cognitive systems and in what contexts. Historically, computational scientists could look to biology for insights, in this case to understand variation and its effect on cognition. However, neuroscientists also struggle to explain the variation observed in neural circuitry (neuronal variation), and so cannot offer strong insights into whether it originates externally, internally, or is merely the result of an incomplete neural model. Here, we show preliminary data suggesting that a small amount of internal variation is preferentially selected through evolution for problem domains where a balance of cognitive strategies must be used. This finding suggests an evolutionary explanation for the existence of and reason for internal neuronal variation, and lays the groundwork for understanding when and why to apply variation in Artificial Intelligences.
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 469–476 (July 23–27, 2018). doi:10.1162/isal_a_00087
Abstract
Natural organisms have transitioned from one niche to another over the course of evolution and have adapted accordingly. In particular, if these transitions go back and forth between two niches repeatedly, such as transitioning between diurnal and nocturnal lifestyles, this should over time result in adaptations that are beneficial in both environments. Furthermore, organisms should also adapt to the transitions themselves. Here we examine how Markov Brains, which are an analogue to natural brains, change structurally and functionally when experiencing such periodic changes. We show that if environments change sufficiently fast, the structural components that form the brains become useful in both environments. However, brains evolve to perform different computations while using the same components, and thus have computational structures that are multifunctional.
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 530–535 (July 23–27, 2018). doi:10.1162/isal_a_00098
Abstract
Many evolutionary models that explore the emergence of cooperation rely on either individual-level selection or group-level selection. However, natural systems are often more complex, and selection is never just at the level of the individual or the group alone. Here we explore how systems of collaborating agents evolve when selection is based on a mixture of group and individual performance. It has been suggested that under such conditions free riders thrive and hamper evolution significantly. Here we show that free-rider effects can almost be ignored: sharing resources even with free riders benefits the evolution of cooperators, and in the long run this benefit outweighs the short-term cost.
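A mixed selection scheme of this kind can be sketched as a weighted blend of each agent's own score and its group's average score. The mixing weight and the toy group below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a mixed group/individual selection scheme (illustrative assumptions).

def blended_fitness(individual_scores, alpha=0.5):
    """Return per-agent fitness mixing individual and group-level performance."""
    group_score = sum(individual_scores) / len(individual_scores)
    return [alpha * group_score + (1 - alpha) * s for s in individual_scores]

# Example group containing a free rider (score 0): sharing still lifts its fitness,
# but every cooperator's fitness also rises when the group as a whole does well.
print(blended_fitness([3.0, 2.5, 0.0], alpha=0.7))
```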
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 76–83 (September 4–8, 2017). doi:10.1162/isal_a_016
Abstract
A great deal of effort in digital evolution research is invested in developing experimental tools. Because each experiment is different and because the emphasis is on generating results, the tools that are developed are usually not designed to be extendable or multipurpose. Here we present MABE, a modular and reconfigurable digital evolution research tool designed to minimize the time from hypothesis generation to hypothesis testing. MABE provides an accessible framework that seeks to increase collaboration and to facilitate reuse by implementing only features that are common to most experiments, while leaving experiment-dependent details up to the user. MABE was initially released in August 2016 and has since been used to ask questions related to Evolution, Sexual Selection, Psychology, Cognition, Neuroscience, Cooperation, Spatial Navigation, and Computer Science.
Proceedings Papers
alif2016, ALIFE 2016, the Fifteenth International Conference on the Synthesis and Simulation of Living Systems, 250–257 (July 4–6, 2016). doi:10.1162/978-0-262-33936-0-ch045
Abstract
A common idiom in biology education states, “Eyes in the front, the animal hunts. Eyes on the side, the animal hides.” In this paper, we explore one possible explanation for why predators tend to have forward-facing, high-acuity visual systems. We do so using an agent-based computational model of evolution, where predators and prey interact and adapt their behavior and morphology to one another over successive generations. In this model, we observe a coevolutionary cycle between prey swarming behavior and the predator's visual system, where the predator and prey continually adapt their visual system and behavior, respectively, over evolutionary time in reaction to one another due to the well-known predator confusion effect. Furthermore, we provide evidence that the predator's visual system is what drives this coevolutionary cycle, and suggest that the cycle could be closed if the predator evolves a hybrid visual system capable of narrow, high-acuity vision for tracking prey as well as broad, coarse vision for prey discovery. Thus, the conflicting demands imposed on a predator's visual system by the predator confusion effect could have led to the evolution of complex eyes in many predators.
Proceedings Papers
ecal2015, ECAL 2015: the 13th European Conference on Artificial Life, 595–602 (July 20–24, 2015). doi:10.1162/978-0-262-33027-5-ch103
Proceedings Papers
alife2014, ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 366–367 (July 30–August 2, 2014). doi:10.1162/978-0-262-32621-6-ch058
Proceedings Papers
ecal2013, ECAL 2013: The Twelfth European Conference on Artificial Life, 126–133 (September 2–6, 2013). doi:10.1162/978-0-262-31709-2-ch019