Karl Tuyls
1–9 of 9 results
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 467–474 (July 29–August 2, 2019). doi:10.1162/isal_a_00205
Abstract
It is well documented that cooperation may not be achieved in societies where self-interested agents engage in Prisoner’s Dilemma scenarios. In this paper we demonstrate, in contrast, that agent societies whose agents use human-inspired emotions within their decision making can reach stable cooperation. Our work makes use of the Ortony, Clore, and Collins (OCC) model of emotions, and we analyse the evolutionary stability of two different implementations that draw on key emotions from this model. First, we consider an agent society that relies solely on this model of emotions for the agents’ decision making. Second, we look at a model that extends the emotional agents with a representation of mood. We set out a proof showing that our emotional agents form an evolutionarily stable strategy when playing against a worst-case strategy. The proof demonstrates that our established model of emotional agents achieves evolutionary stability without modification. In contrast, the model of moody agents was shown not to be an evolutionarily stable strategy. Our analysis sheds light on the nature of cooperation within agent societies and the useful role that simulated emotions can play in the agents’ decision making and in the society as a whole.
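For context, the evolutionary-stability test invoked in abstracts like this one is usually the standard two-strategy condition: a strategy S is an ESS against an invader T if E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). The sketch below implements that textbook check; the payoff numbers and strategy names are purely illustrative and are not taken from the paper's emotional-agent model.

def is_ess(E, S, T):
    """Standard two-strategy ESS test: does S resist invasion by T?

    E maps (row, column) strategy pairs to the row player's expected
    payoff. The values used below are illustrative only.
    """
    if E[(S, S)] > E[(T, S)]:
        return True
    if E[(S, S)] == E[(T, S)] and E[(S, T)] > E[(T, T)]:
        return True
    return False

# Hypothetical payoffs for an "emotional" strategy against an always-defect invader.
payoffs = {
    ("emotional", "emotional"): 3.0,
    ("defector", "emotional"): 2.5,
    ("emotional", "defector"): 1.0,
    ("defector", "defector"): 1.0,
}
print(is_ess(payoffs, "emotional", "defector"))  # True under these example numbers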
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, 477–484 (July 23–27, 2018). doi:10.1162/isal_a_00088
Abstract
Numerous studies have developed and analysed strategies for maximising utility in social dilemmas, both from an individual agent’s perspective and from the viewpoint of a society. In this paper we bring this body of work together by investigating and comparing the success of a wide range of strategies in environments with varying characteristics. In particular, within agent-based simulations we study different interaction topologies, agents with and without mobility, and strategies with and without adaptation in the form of reinforcement learning, in both competitive and cooperative settings represented by the Prisoner’s Dilemma and the Stag Hunt, respectively. The results of our experiments show that allowing agents mobility decreases the level of cooperation in the society, because one-off interactions with individual opponents limit the possibility for direct reciprocity. Unstructured environments similarly support a greater number of one-off interactions and thus higher levels of defection in the Prisoner’s Dilemma. In the Stag Hunt, strategies that prioritise risk taking are more successful regardless of environment topology. Our range of experiments yields new insights into the role that mobility and interaction topologies play in the study of cooperation in agent societies.
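The two dilemmas named here are defined only by the ordering of their payoffs. A minimal sketch with one common parameterisation is below; the specific numbers are illustrative and are not the values used in the paper's simulations.

# Row player's payoffs, keyed by (row_action, col_action).
# Prisoner's Dilemma ordering T > R > P > S: defection dominates.
prisoners_dilemma = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Stag Hunt: mutual Stag is payoff-dominant, Hare is the safer choice.
stag_hunt = {
    ("Stag", "Stag"): 4, ("Stag", "Hare"): 0,
    ("Hare", "Stag"): 3, ("Hare", "Hare"): 3,
}

def best_response(payoffs, opponent_action):
    """Return the row action that maximises payoff against a fixed opponent action."""
    actions = {a for a, _ in payoffs}
    return max(actions, key=lambda a: payoffs[(a, opponent_action)])

print(best_response(prisoners_dilemma, "C"))  # "D": defection pays even against a cooperator
print(best_response(stag_hunt, "Stag"))       # "Stag": cooperation pays only when matched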
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, 106–113 (September 4–8, 2017). doi:10.1162/isal_a_021
Abstract
Simulating mood within a decision-making process has been shown to allow cooperation to emerge in the Prisoner’s Dilemma. In this paper we propose how to integrate a mood model into the classical reinforcement learning algorithm Sarsa, and show how this addition allows self-interested agents to be successful within a multi-agent environment. The human-inspired moody agent learns to cooperate in social dilemmas without the use of punishments or other external incentives. We use both the Prisoner’s Dilemma and the Stag Hunt as our dilemmas. We show that the model improves both individual payoffs and the level of cooperation within the system when compared to the standard Sarsa model. We also show that the agents’ interaction model and their ability to differentiate between opponents influence how the reinforcement learning process converges.
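As a rough orientation, the underlying Sarsa update is the standard temporal-difference rule Q(s,a) ← Q(s,a) + α[r + γ·Q(s',a') − Q(s,a)]. The sketch below shows one way a mood signal could sit alongside that update; the mood mechanism here (an exponential moving average of recent rewards that colours the perceived payoff) is a hypothetical stand-in, not the mood model defined in the paper.

import random
from collections import defaultdict

class MoodySarsaAgent:
    """Tabular Sarsa with a simple mood signal.

    The mood term is illustrative only: it drifts toward recent rewards
    and biases the perceived reward fed into the Sarsa update.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, mood_rate=0.05):
        self.q = defaultdict(float)          # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.mood = 0.0                      # hypothetical running reward average
        self.mood_rate = mood_rate

    def act(self, state):
        # Epsilon-greedy action selection over the Q-table.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_action):
        # Mood drifts toward recent rewards and colours the perceived payoff.
        self.mood += self.mood_rate * (reward - self.mood)
        perceived = reward + self.mood
        # Standard Sarsa temporal-difference update on the perceived reward.
        target = perceived + self.gamma * self.q[(next_state, next_action)]
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])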
Proceedings Papers
alif2016, ALIFE 2016, the Fifteenth International Conference on the Synthesis and Simulation of Living Systems, 666–673 (July 4–6, 2016). doi:10.1162/978-0-262-33936-0-ch106
Abstract
In this paper we explore a novel perspective on surveillance robotics, based on a coordination principle of honeybees and on the integration of an autonomous telepresence robot into such a system. Coordination principles based on biological systems such as ant, bee, and termite colonies exhibit several properties that are essential to multi-robot surveillance, including low computational load, robustness, scalability, and adaptability. We aim to improve the efficiency of such a robotic swarm by bringing a human into the loop by means of a telepresence robot: the human operator controlling the telepresence robot aims to speed up the convergence of the swarm. The experiments, which evaluate the proposed multi-robot coordination system both in simulation and on real robots, show that the telepresence robot substantially increases the efficiency of the process.
Proceedings Papers
ecal2015, ECAL 2015: the 13th European Conference on Artificial Life, 562–569 (July 20–24, 2015). doi:10.1162/978-0-262-33027-5-ch098
Proceedings Papers
alife2014, ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 761–768 (July 30–August 2, 2014). doi:10.1162/978-0-262-32621-6-ch123
Proceedings Papers
alife2014, ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 376–383 (July 30–August 2, 2014). doi:10.1162/978-0-262-32621-6-ch060
Proceedings Papers
ecal2013, ECAL 2013: The Twelfth European Conference on Artificial Life, 775–782 (September 2–6, 2013). doi:10.1162/978-0-262-31709-2-ch111
Proceedings Papers
ecal2013, ECAL 2013: The Twelfth European Conference on Artificial Life, 462–469 (September 2–6, 2013). doi:10.1162/978-0-262-31709-2-ch066