Justin K. Pugh
Proceedings Papers
ECAL 2017, the Fourteenth European Conference on Artificial Life, 553–560 (September 4–8, 2017). doi: 10.1162/isal_a_088
Abstract
Developing a comprehensive theory of open-ended evolution (OEE) depends critically on understanding the mechanisms underlying the major evolutionary transitions; such periods of rapid innovation, exemplified by the Cambrian explosion, have resulted in exactly the kind of diversity and complexity deemed the hallmarks of strong OEE. This paper introduces a new domain for studying major transitions in an evolutionary robotics context. Inspired by the popular Minecraft video game, the new Voxelbuild domain centers on agents that evolve the capacity to build arbitrarily complex block structures under minimal objectives. Initial experiments demonstrate both the rich expressive potential of the new domain and, intriguingly, the occurrence of major evolutionary transitions in at least some runs, thereby providing a unique opportunity to probe how and why such transitions occur or fail to occur across different runs of the same system.
Proceedings Papers
ALIFE 2016, the Fifteenth International Conference on the Synthesis and Simulation of Living Systems, 382–389 (July 4–6, 2016). doi: 10.1162/978-0-262-33936-0-ch064
Abstract
Hebbian plasticity in artificial neural networks is compelling for both its simplicity and biological plausibility. Changing the weight of a connection based only on the activations of the neurons it connects is straightforward and effective in combination with neuromodulation for reinforcing good behaviors. However, a major obstacle to any ambitious application of Hebbian plasticity is that the performance of a layer of Hebbian neurons is highly sensitive to the choice of inputs. If the inputs do not represent precisely the features of the environment that Hebbian connections must learn to correlate to actions, the network will struggle to learn at all. A recently proposed solution to this problem is the Real-time Autoencoder-Augmented Hebbian Network (RAAHN), which inserts an autoencoder between the inputs and the Hebbian layer. This autoencoder then learns in real time to encode the raw inputs into higher-level features while the Hebbian connections in turn learn to correlate these higher-level features to correct actions. Until now, RAAHN has only been demonstrated to work when it is driven by an autopilot during training (in a robot navigation task), which means its experiences are carefully controlled. Progressing significantly beyond this early demonstration, the present investigation now shows how RAAHN can learn to navigate from scratch entirely on its own, without an autopilot. By removing the need for an autopilot, RAAHN becomes a powerful new Hebbian-centered approach to learning from sparse reinforcement with broad potential applications.
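As a rough illustration of the architecture the abstract describes, the Python/NumPy sketch below feeds raw inputs through an autoencoder that learns online, and updates an output layer with a neuromodulated Hebbian rule. It is a minimal sketch, not the authors' implementation: the layer sizes, learning rates, sigmoid activations, and the scalar modulation signal are assumptions made for illustration; the actual RAAHN update rules are given in the cited papers.

import numpy as np

# Illustrative sketch only: sizes, learning rates, and the modulation
# signal are hypothetical, not taken from the RAAHN papers.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 4, 2                # raw sensors, learned features, actions
W_enc = rng.normal(0, 0.1, (n_hid, n_in))   # autoencoder encoder weights
W_dec = rng.normal(0, 0.1, (n_in, n_hid))   # autoencoder decoder weights
W_heb = np.zeros((n_out, n_hid))            # Hebbian output-layer weights
lr_ae, lr_heb = 0.01, 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, modulation):
    # 1. The autoencoder learns in real time to compress the raw input
    #    into higher-level features.
    h = sigmoid(W_enc @ x)                  # higher-level features
    x_rec = W_dec @ h                       # reconstruction of the raw input
    err = x - x_rec
    W_dec += lr_ae * np.outer(err, h)                           # online reconstruction update
    W_enc += lr_ae * np.outer(W_dec.T @ err * h * (1 - h), x)

    # 2. The Hebbian layer correlates those features with actions; the
    #    scalar modulation signal gates whether the correlation is
    #    reinforced (positive) or weakened (negative).
    y = sigmoid(W_heb @ h)                  # action activations
    W_heb += lr_heb * modulation * np.outer(y, h)
    return y

# Usage: drive the network with a sensor reading and a reinforcement signal.
x = rng.random(n_in)
action = step(x, modulation=+1.0)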