Search results for Katie Atkinson (1–3 of 3)
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 467–474 (July 29–August 2, 2019). doi: 10.1162/isal_a_00205
Abstract
It is well documented that cooperation may not be achieved in societies where self-interested agents engage in Prisoner’s Dilemma scenarios. In this paper we demonstrate, in contrast, that agent societies whose agents use human-inspired emotions within their decision making can reach stable cooperation. Our work makes use of the Ortony, Clore, and Collins (OCC) model of emotions, and we analyse the evolutionary stability of two different implementations that draw on key emotions from this model. Firstly, we consider an agent society that relies solely on this model of emotions for the agents’ decision making. Secondly, we consider a model that extends the emotional agents with a representation of mood. We set out a proof that our emotional agents form an evolutionarily stable strategy when playing against a worst-case-scenario strategy. The proof demonstrates that our established model of emotional agents achieves evolutionary stability without modification to the model. In contrast, the model of moody agents was shown not to be an evolutionarily stable strategy. Our analysis sheds light on the nature of cooperation within agent societies and the useful role that simulated emotions can play both in the agents’ decision making and in the society as a whole.
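For context, the stability notion invoked here is the standard Maynard Smith criterion. Writing E(X, Y) for the expected payoff to strategy X when played against strategy Y, a strategy S is evolutionarily stable against a mutant strategy M if

    E(S, S) > E(M, S),  or
    E(S, S) = E(M, S)  and  E(S, M) > E(M, M).

The proof described in the abstract establishes inequalities of this form for the emotional agents against the worst-case strategy; the moody agents, by contrast, fail to satisfy them.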
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, pp. 477–484 (July 23–27, 2018). doi: 10.1162/isal_a_00088
Abstract
Numerous studies have developed and analysed strategies for maximising utility in social dilemmas, both from an individual agent’s perspective and, more generally, from the viewpoint of a society. In this paper we bring this body of work together by investigating and comparing the success of a wide range of strategies in environments with varying characteristics. In particular, we study, within agent-based simulations, different interaction topologies, agents with and without mobility, and strategies with and without adaptation in the form of reinforcement learning, in both competitive and cooperative settings represented by the Prisoner’s Dilemma and the Stag Hunt, respectively. The results of our experiments show that allowing agents mobility decreases the level of cooperation in the society, because one-off interactions with individual opponents limit the possibility for direct reciprocity. Unstructured environments similarly support a greater number of one-off interactions and thus higher levels of defection in the Prisoner’s Dilemma. In the Stag Hunt, strategies that prioritise risk taking are more successful regardless of environment topology. Our range of experiments yields new insights into the role that mobility and interaction topologies play in the study of cooperation in agent societies.
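To make the two settings concrete, the Python sketch below encodes one round of each game using canonical payoff values (T=5, R=3, P=1, S=0 for the Prisoner’s Dilemma and a standard Stag Hunt parameterisation). The paper’s exact payoff matrices are not given in the abstract, so these numbers are illustrative assumptions.

    C, D = 0, 1  # action indices: cooperate/stag vs. defect/hare

    # payoffs[game][my_action][opponent_action] -> my payoff
    PAYOFFS = {
        "prisoners_dilemma": ((3, 0),   # cooperate: R against C, S against D
                              (5, 1)),  # defect:    T against C, P against D
        "stag_hunt":         ((4, 0),   # stag: pays off only if both hunt stag
                              (3, 3)),  # hare: safe payoff either way
    }

    def play_round(game, a1, a2):
        # Return the payoff pair for one interaction of the named game.
        m = PAYOFFS[game]
        return m[a1][a2], m[a2][a1]

    print(play_round("prisoners_dilemma", C, C))  # (3, 3): mutual cooperation
    print(play_round("prisoners_dilemma", D, C))  # (5, 0): unilateral defection pays
    print(play_round("stag_hunt", C, C))          # (4, 4): coordinated stag hunt wins

Under this parameterisation defection strictly dominates in the Prisoner’s Dilemma, while in the Stag Hunt the riskier stag choice pays off only when partners coordinate, which is consistent with the finding above that risk-taking strategies fare well there.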
Proceedings Papers
ecal2017, ECAL 2017, the Fourteenth European Conference on Artificial Life, pp. 106–113 (September 4–8, 2017). doi: 10.1162/isal_a_021
Abstract
Simulating mood within a decision-making process has been shown to allow cooperation to occur within the Prisoner’s Dilemma. In this paper we propose how to integrate a mood model into the classical reinforcement learning algorithm Sarsa, and show how this addition allows self-interested agents to be successful within a multi-agent environment. The human-inspired moody agent learns to cooperate in social dilemmas without the use of punishments or other external incentives. We use both the Prisoner’s Dilemma and the Stag Hunt as our dilemmas. We show that the model improves both individual payoffs and levels of cooperation within the system when compared to the standard Sarsa model. We also show that the agents’ interaction model, and their ability to differentiate between opponents, influence how the reinforcement learning process converges.
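The abstract does not specify how mood and Sarsa are coupled, so the following is only a minimal illustrative sketch, in which mood is assumed to be an exponential moving average of recent payoffs that modulates the exploration rate (lower mood, more exploration). The class, constants, and coupling are assumptions for illustration, not the paper’s implementation.

    import random

    ALPHA, GAMMA = 0.1, 0.9   # standard Sarsa learning rate and discount
    MOOD_DECAY = 0.95         # assumed smoothing factor for the mood average
    ACTIONS = (0, 1)          # 0 = cooperate, 1 = defect

    class MoodySarsaAgent:
        def __init__(self):
            self.q = {a: 0.0 for a in ACTIONS}  # stateless Q-table, one value per action
            self.mood = 0.0                     # running average of received payoffs

        def epsilon(self):
            # Assumed coupling: a gloomier agent explores more, a happier one exploits.
            return min(0.5, max(0.05, 0.3 - 0.05 * self.mood))

        def act(self):
            if random.random() < self.epsilon():
                return random.choice(ACTIONS)
            return max(ACTIONS, key=self.q.get)

        def update(self, action, reward, next_action):
            # Standard Sarsa update using the action actually chosen next.
            td_target = reward + GAMMA * self.q[next_action]
            self.q[action] += ALPHA * (td_target - self.q[action])
            # Mood tracks a discounted history of payoffs received so far.
            self.mood = MOOD_DECAY * self.mood + (1 - MOOD_DECAY) * reward

An episode loop would call act(), play the round, choose the next action, then call update(action, reward, next_action), exactly as in ordinary Sarsa; only the mood-driven exploration rate differs in this sketch.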