Tanja Katharina Kaiser
Proceedings Papers
Evolving Dynamic Collective Behaviors by Minimizing Surprise
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference (isal2023), 50, July 24–28, 2023. DOI: 10.1162/isal_a_00650
Abstract
Our minimize surprise method evolves swarm robot controllers using a task-independent reward for prediction accuracy. Since no specific task is rewarded during optimization, various collective behaviors can emerge, as has also been shown in previous work. So far, however, all generated behaviors were static or repetitive, allowing for easy sensor predictions due to mostly constant sensor input. Our goal is to generate more dynamic behaviors that change in response to variations in sensor input. We modify environment and agent capabilities, and extend the minimize surprise reward with additional components rewarding homing or curiosity. In preliminary experiments, these modifications produced first dynamic behaviors, providing a promising basis for future work.
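As a rough illustration of the extended reward described above, the following Python sketch combines the task-independent prediction-accuracy term with optional homing and curiosity components. The function names, weighting scheme, and normalization are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def prediction_reward(predicted, observed):
    # Fraction of (binary) sensor values predicted correctly at one timestep.
    return float(np.mean(np.asarray(predicted) == np.asarray(observed)))

def homing_bonus(position, home, arena_diagonal):
    # Reward proximity to a home position, normalized to [0, 1].
    dist = np.linalg.norm(np.asarray(position) - np.asarray(home))
    return 1.0 - dist / arena_diagonal

def curiosity_bonus(predicted, observed):
    # Reward prediction errors, pushing agents toward novel sensor input.
    return 1.0 - prediction_reward(predicted, observed)

def combined_reward(predicted, observed, position=None, home=None,
                    arena_diagonal=1.0, w_home=0.0, w_curiosity=0.0):
    # Task-independent minimize-surprise term plus optional extensions.
    r = prediction_reward(predicted, observed)
    if w_home > 0.0 and position is not None and home is not None:
        r += w_home * homing_bonus(position, home, arena_diagonal)
    if w_curiosity > 0.0:
        r += w_curiosity * curiosity_bonus(predicted, observed)
    return r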
Proceedings Papers
Social Neural Network Soups with Surprise Minimization
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference (isal2023), 65, July 24–28, 2023. DOI: 10.1162/isal_a_00671
Abstract
A recent branch of research in artificial life has constructed artificial chemistry systems whose particles are dynamic neural networks. These particles can be applied to each other and show a tendency towards self-replication of their weight values. We define new interactions for these particles that allow them to recognize one another and learn predictors for each other’s behavior; that is, each particle minimizes its surprise when observing another particle’s behavior. Given a special catalyst particle to exert evolutionary selection pressure on the soup of particles, these ‘social’ interactions are sufficient to produce emergent behavior similar to the stability pattern previously only achieved via explicit self-replication training.
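The following Python sketch gives a rough flavor of such ‘social’ interactions: each particle is a small network with a learned predictor that it fits to another particle’s input-output behavior, reducing its surprise over repeated encounters. The architecture, learning rule, and dimensions are illustrative assumptions; the catalyst particle and the self-replication dynamics of the paper are not modeled here.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # input/output dimensionality of each particle (illustrative)

class Particle:
    # A tiny network (here a single tanh layer) plus a learned predictor
    # of other particles' behavior.
    def __init__(self):
        self.weights = 0.1 * rng.normal(size=(DIM, DIM))  # own behavior
        self.predictor = np.zeros((DIM, DIM))             # model of others

    def act(self, x):
        return np.tanh(self.weights @ x)

    def observe(self, other, lr=0.05):
        # Minimize surprise: fit the predictor to the other particle's
        # response to a random probe input; return the remaining error.
        probe = rng.normal(size=DIM)
        target = other.act(probe)
        pred = np.tanh(self.predictor @ probe)
        err = pred - target
        # Gradient step on 0.5 * ||err||^2 through the tanh nonlinearity.
        self.predictor -= lr * np.outer(err * (1.0 - pred ** 2), probe)
        return 0.5 * float(err @ err)

soup = [Particle() for _ in range(16)]
for _ in range(1000):
    i, j = rng.choice(len(soup), size=2, replace=False)
    soup[i].observe(soup[j])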
Proceedings Papers
Evolution of Diverse Swarm Behaviors with Minimal Surprise
ALIFE 2020: The 2020 Conference on Artificial Life (isal2020), 384–392, July 13–18, 2020. DOI: 10.1162/isal_a_00266
Abstract
for content titled, Evolution of Diverse Swarm Behaviors with Minimal Surprise
Complementary to machine learning, controllers for swarm robotics can also be evolved using methods of evolutionary computation. Approaches such as novelty search and MAP-Elites go beyond mere fitness-based optimization by increasing the time spent on exploration. Instead of optimizing a fitness function, selective pressure towards unexplored behavior space is generated by forcing behavioral distance to previously seen behaviors. Ideally, we would like to define a generic behavioral distance function; however, effective distance functions are usually domain specific. Our minimize surprise approach concurrently evolves two artificial neural networks: one for action selection and one as world model. Selective pressure is implemented by rewarding good predictions of the world model. As an effect, the evolutionary dynamics push towards swarm behaviors that are easy to predict, that is, the robots virtually try to minimize surprise in their environment. Here, we compare minimize surprise to novelty search and, as baseline, a genetic algorithm in simulations of swarm robots. We observe a diversity of collective behaviors, such as aggregation, dispersion, clustering, line formation, etc. We find that minimize surprise is competitive to novelty search for the investigated swarm scenario, although it does not require a cleverly crafted domain-specific behavioral distance function.