Kyrill Schmid
ALIFE 2021: The 2021 Conference on Artificial Life, 59 (July 18–22, 2021). https://doi.org/10.1162/isal_a_00369
Abstract
A key challenge in AI is the development of algorithms capable of cooperative behavior in interactions involving multiple independent machines or individuals. Of particular interest are social dilemmas: situations that create tension between an individual's best choice and the outcome that is desirable for the group. Although such scenarios have increasingly been studied within the AI community, many open questions remain about which aspects drive cooperative behavior in a particular situation. Based on insights from behavioral experiments suggesting that penalty mechanisms promote cooperation, in this work we adopt the notion of penalties by enabling independent, adaptive agents to penalize others. To that end, we extend the agents' action spaces with penalty actions and define a negative real-valued punishment value. We use reinforcement learning to simulate repeated interactions between independent agents that learn by trial and error. Our evaluation considers different two-player social dilemmas as well as the N-player Prisoner's Dilemma with up to 128 independent agents, and we demonstrate that the proposed mechanism, combined with decentralized learning, significantly increases cooperation across all experiments.
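To make the described mechanism concrete, the following is a minimal sketch, not the authors' implementation: independent tabular learners play an N-player Prisoner's Dilemma whose action space is extended with a penalty action, and a negative punishment value P is applied to defectors by penalizing agents. The payoff constants, the value of P, the penalty cost, and the stateless Q-learning setup are all assumptions made purely for illustration.

```python
"""Illustrative sketch of penalty actions in an N-player Prisoner's Dilemma.

Assumptions (not taken from the paper): payoff constants, punishment value P,
penalty cost, and stateless epsilon-greedy Q-learning for each agent.
"""
import random

N = 16               # number of independent agents (the paper scales up to 128)
P = -2.0             # assumed negative real-valued punishment applied per penalizer to each defector
PENALTY_COST = 0.2   # assumed small cost paid by an agent that chooses to penalize
EPISODES = 20000
ALPHA, EPSILON = 0.1, 0.1

# Each agent's action combines a dilemma choice with an optional penalty,
# i.e. (cooperate?, penalize?) -> 4 discrete actions per agent.
ACTIONS = [(c, p) for c in (0, 1) for p in (0, 1)]

# One stateless (bandit-style) Q-table per independent agent.
Q = [[0.0] * len(ACTIONS) for _ in range(N)]

def choose(q):
    """Epsilon-greedy action selection over one agent's Q-values."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q[a])

for episode in range(EPISODES):
    picks = [choose(Q[i]) for i in range(N)]
    coop = [ACTIONS[a][0] for a in picks]
    pen = [ACTIONS[a][1] for a in picks]
    n_coop = sum(coop)
    n_pen = sum(pen)

    for i in range(N):
        # Linear public-goods style dilemma: defection dominates individually,
        # but full cooperation beats full defection for everyone.
        reward = 2.0 * n_coop / N - 1.0 * coop[i]
        # Every penalizing agent applies the punishment value to each defector
        # (an agent does not punish itself in this sketch).
        if coop[i] == 0:
            reward += P * (n_pen - pen[i])
        # Choosing to penalize carries a small cost.
        if pen[i]:
            reward -= PENALTY_COST
        a = picks[i]
        Q[i][a] += ALPHA * (reward - Q[i][a])

print(f"cooperation rate in final episode: {sum(coop) / N:.2f}")
```

Under these assumed payoffs, defection is the dominant strategy without penalties; adding the penalty action makes defecting costly once enough agents learn to penalize, which is the intuition behind the reported increase in cooperation under decentralized learning.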