Evolutionary Robotics
1–6 of 6 results
Proceedings Papers
Stochastic Ontogenesis in Evolutionary Robotics
Open Access
ALIFE 2018: The 2018 Conference on Artificial Life, 214–221 (July 23–27, 2018). doi: 10.1162/isal_a_00045
Abstract
This paper investigates the hypothesis that noise in the genotype–phenotype mapping, here called stochastic ontogenesis (SO), is an important consideration in Evolutionary Robotics. This is examined in two ways: first, in the context of seeking to generalise controller performance in an incremental task domain in simulation, and second, in a preliminary study of its effectiveness as a mechanism for crossing the “reality gap” from simulation to physical robots. The performance of evolved neurocontrollers for a fixed-morphology simulated robot is evaluated in both the presence and absence of ontogenic noise, in a task requiring the development of a walking gait that accommodates a varying environment. When SO is applied, evolution of controllers is more effective (replicates achieve higher fitness) and more robust (fewer replicates fail) than evolution using a deterministic mapping. This result is found in a variety of incremental scenarios. For the preliminary study of the utility of SO for moving between simulation and reality, the capacity of evolved controllers to handle unforeseen environmental noise is tested by introducing a stochastic coefficient of friction and evaluating previous populations in the new problem domain. Controllers evolved with deterministic ontogenesis fail to accommodate the new source of noise and show reduced fitness. In contrast, those which experienced ontogenic noise during evolution are not significantly disrupted by the additional noise in the environment. It is argued that SO is a catch-all mechanism for increasing performance of Evolutionary Robotics designs and may have further, more general implications for Evolutionary Computation.
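As a rough illustration of the mechanism described in this abstract, the sketch below injects Gaussian noise into the genotype-to-phenotype mapping each time a controller is "developed", so repeated evaluations of the same genotype yield slightly different phenotypes. The noise model, the sigma value, and the placeholder fitness function are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of stochastic ontogenesis (SO): noise is added inside the
# genotype-phenotype map, so each development of the same genotype can yield
# a slightly different phenotype. Gaussian noise and sigma are assumptions.
import numpy as np

def develop(genotype, sigma=0.05, rng=None):
    """Map a genotype (weight vector) to a phenotype, injecting ontogenic noise."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=genotype.shape)
    return genotype + noise  # a deterministic mapping would just return genotype.copy()

def evaluate(phenotype):
    """Placeholder fitness: stands in for running the walking-gait simulation."""
    return -np.sum(phenotype ** 2)

def noisy_fitness(genotype, n_trials=5, sigma=0.05, rng=None):
    """Average fitness over several developments, since SO makes evaluation stochastic."""
    rng = rng or np.random.default_rng()
    return np.mean([evaluate(develop(genotype, sigma, rng)) for _ in range(n_trials)])

genotype = np.zeros(16)
print(noisy_fitness(genotype))
```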
Proceedings Papers
Behavioral search drivers and the role of elitism in soft robotics
ALIFE 2018: The 2018 Conference on Artificial Life, 206–213 (July 23–27, 2018). doi: 10.1162/isal_a_00044
Abstract
Behavioral search drivers allow more information about the behavior of individuals in an environment to be used during selection. In this paper, we examine several selection methods based on de-aggregating the motion of soft robots into behavior vectors used to drive search. We adapt three behavioral search drivers to this task: ε-lexicase selection, discovery of objectives by clustering, and novelty search. These methods are compared to age-fitness pareto optimization and random search. We analyze how these search drivers affect the diversity and quality of soft robots that are tasked with moving as far as possible. Perhaps the most surprising finding is that random search with elitism is competitive with previously published methods. Overall, we find that elitism plays an important role in the ability to find high-fitness solutions, and that lexicase selection and discovery of objectives by clustering with elitism tend to produce the most fit solutions.
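For readers unfamiliar with ε-lexicase selection, the following sketch shows one common formulation applied to per-case behavioral errors, with ε derived from the median absolute deviation on each case; the error matrix and the case definition are illustrative assumptions rather than the paper's exact implementation.

```python
# Epsilon-lexicase selection over de-aggregated behavior "cases"
# (e.g. per-timestep displacement errors of a soft robot).
import numpy as np

def epsilon_lexicase_select(errors, rng=None):
    """Select one individual index.

    errors: (n_individuals, n_cases) array, lower is better on each case.
    """
    rng = rng or np.random.default_rng()
    n_individuals, n_cases = errors.shape
    # Per-case epsilon from the median absolute deviation (a common choice).
    med = np.median(errors, axis=0)
    eps = np.median(np.abs(errors - med), axis=0)

    candidates = np.arange(n_individuals)
    for case in rng.permutation(n_cases):
        best = errors[candidates, case].min()
        keep = errors[candidates, case] <= best + eps[case]
        candidates = candidates[keep]
        if len(candidates) == 1:
            break
    return rng.choice(candidates)

# Example: 10 soft robots, each scored on 8 behavioral cases.
rng = np.random.default_rng(0)
errors = rng.random((10, 8))
print(epsilon_lexicase_select(errors, rng))
```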
Proceedings Papers
Neural Network Quine
Open Access
ALIFE 2018: The 2018 Conference on Artificial Life, 234–241 (July 23–27, 2018). doi: 10.1162/isal_a_00049
Abstract
Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or non-gradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network’s ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a self-replication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.
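The self-replication setup can be sketched as follows: a small network is queried with an encoding of each of its own weight indices and asked to output the corresponding weight value; regeneration then overwrites each weight with the network's prediction of it. The architecture, the random index encoding, and the regeneration rule here are illustrative assumptions, not the paper's exact quine.

```python
# Tiny numpy sketch of a self-replicating network: for each of its own weight
# indices it predicts that weight's value; the replication loss is the squared
# error between predictions and actual weights.
import numpy as np

rng = np.random.default_rng(0)
N_HIDDEN, ENC_DIM = 8, 4

# Parameters: hidden layer (ENC_DIM -> N_HIDDEN) and readout (N_HIDDEN -> 1).
W1 = rng.normal(0, 0.5, (N_HIDDEN, ENC_DIM))
w2 = rng.normal(0, 0.5, N_HIDDEN)

def flat_weights():
    return np.concatenate([W1.ravel(), w2])

# Fixed random encoding of each weight index (not itself a trainable parameter).
n_params = flat_weights().size
ENC = rng.normal(0, 1, (n_params, ENC_DIM))

def predict_weight(i):
    """The network's prediction of its own i-th weight."""
    h = np.tanh(W1 @ ENC[i])
    return float(w2 @ h)

def replication_loss():
    w = flat_weights()
    return sum((predict_weight(i) - w[i]) ** 2 for i in range(n_params))

def regenerate():
    """Regeneration: overwrite each weight with the network's prediction of it."""
    global W1, w2
    preds = np.array([predict_weight(i) for i in range(n_params)])
    W1 = preds[:W1.size].reshape(W1.shape)
    w2 = preds[W1.size:]

print("loss before:", round(replication_loss(), 4))
regenerate()
print("loss after one regeneration step:", round(replication_loss(), 4))
```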
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, 232–233 (July 23–27, 2018). doi: 10.1162/isal_a_00048
Proceedings Papers
Effects of Selection Preferences on Evolved Robot Morphologies and Behaviors
ALIFE 2018: The 2018 Conference on Artificial Life, 224–231 (July 23–27, 2018). doi: 10.1162/isal_a_00047
Abstract
This paper investigates the evolution of modular robots using different selection preferences (i.e., fitness functions), aiming at novelty, speed of locomotion, number of limbs, and combinations of these. The outcomes are analyzed from different perspectives: sampling of the search space, evolved morphologies, and evolved behaviors. This results in a wealth of findings, including a surprise about the number of sampled regions of the search space and the effect of different fitness functions on the evolved morphologies.
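The selection preferences compared in this paper can be thought of as alternative fitness functions over the same evaluation data; the sketch below shows speed, limb-count, and novelty measures plus a simple weighted combination. The particular measures and equal weights are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of selection preferences as interchangeable fitness functions:
# locomotion speed, limb count, behavioral novelty, or a combination of them.
import numpy as np

def speed_fitness(robot):
    return robot["distance"] / robot["eval_time"]

def limbs_fitness(robot):
    return robot["n_limbs"]

def novelty_fitness(robot, archive, k=3):
    """Mean distance to the k nearest behavior descriptors in an archive."""
    d = sorted(np.linalg.norm(robot["behavior"] - np.asarray(a)) for a in archive)
    return float(np.mean(d[:k]))

def combined_fitness(robot, archive, weights=(1.0, 1.0, 1.0)):
    parts = (speed_fitness(robot), limbs_fitness(robot),
             novelty_fitness(robot, archive))
    return float(np.dot(weights, parts))

robot = {"distance": 3.2, "eval_time": 30.0, "n_limbs": 4,
         "behavior": np.array([0.4, 0.1])}
archive = [np.array([0.0, 0.0]), np.array([0.5, 0.5]),
           np.array([1.0, 0.2]), np.array([0.2, 0.9])]
print(combined_fitness(robot, archive))
```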
Proceedings Papers
Improving performance in distributed embodied evolution: Distributed Differential Embodied Evolution
Open Access
ALIFE 2018: The 2018 Conference on Artificial Life, 222–223 (July 23–27, 2018). doi: 10.1162/isal_a_00046
Abstract
The field of Embodied Evolution has developed strongly during the last ten years, with the yearly number of contributions more than doubling since 2008 (Bredeche et al., 2018). Many different scenarios and tasks have been addressed, and some works have already focused on formalizing and standardizing the paradigm. Little effort, however, has gone into comparing and improving the performance of the algorithms, which is essential for increasing the complexity of the experimental setups and therefore the applicability of the technique. This paper extends the work started in (Trueba, 2017) by comparing different variations of EE algorithms, incorporating a Differential Evolution-based distributed EE algorithm.
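As a rough sketch of how a Differential Evolution operator might be used inside distributed Embodied Evolution, the code below builds a DE/rand/1 trial genome from genomes received from neighbouring robots and keeps it only if it evaluates better on board. The parameters F and CR, the local pool of neighbour genomes, and the replacement rule are illustrative assumptions, not the algorithm from the paper.

```python
# DE/rand/1 mutation with binomial crossover, applied by a single robot to
# genomes gathered from encountered neighbours (a distributed-EE flavour).
import numpy as np

def de_trial(own, neighbours, F=0.5, CR=0.9, rng=None):
    """Build a DE/rand/1 trial vector from the robot's own genome and >=3 neighbours."""
    rng = rng or np.random.default_rng()
    a, b, c = (neighbours[i] for i in rng.choice(len(neighbours), 3, replace=False))
    mutant = a + F * (b - c)
    cross = rng.random(own.size) < CR
    cross[rng.integers(own.size)] = True   # ensure at least one gene crosses over
    return np.where(cross, mutant, own)

def local_step(own, neighbours, fitness, rng=None):
    """Replace the on-board genome only if the trial genome performs better."""
    trial = de_trial(own, neighbours, rng=rng)
    return trial if fitness(trial) > fitness(own) else own

rng = np.random.default_rng(1)
own = rng.normal(size=6)
neighbours = [rng.normal(size=6) for _ in range(4)]
fitness = lambda g: -np.sum(g ** 2)        # placeholder for on-robot evaluation
print(local_step(own, neighbours, fitness, rng))
```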