A key challenge in AI is the development of algorithms capable of cooperative behavior in interactions involving multiple independent machines or individuals. Of particular interest are social dilemmas: situations in which there is a tension between an individual's optimal choice and the outcome that is desirable for the group. Although such scenarios have received increasing attention in the AI community, many open questions remain about which factors drive cooperative behavior in a given situation. Motivated by insights from behavioral experiments suggesting that penalty mechanisms promote cooperation, in this work we adopt the notion of penalties by enabling independent, adaptive agents to penalize one another. To that end, we extend agents' action spaces with penalty actions and define a negative, real-valued punishment value. We use reinforcement learning to simulate repeated interactions between independent agents that learn by trial and error. Our evaluation covers several two-player social dilemmas as well as the N-player Prisoner's Dilemma with up to 128 independent agents, and we demonstrate that the proposed mechanism, combined with decentralized learning, significantly increases cooperation across all experiments.
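To make the setup concrete, below is a minimal sketch of the kind of mechanism the abstract describes: independent, stateless Q-learners playing a repeated N-player Prisoner's Dilemma whose action space is extended with a penalty action. The payoff structure, the punishment value P_PENALTY, the punishment cost COST_PUNISH, and all hyperparameters are illustrative assumptions, not the paper's actual values.

```python
import random

# Illustrative constants -- the paper's exact values are not given in the abstract.
N_AGENTS = 8          # the abstract reports experiments with up to 128 agents
P_PENALTY = -2.0      # negative, real-valued punishment applied to penalized agents
COST_PUNISH = 0.5     # assumed cost borne by a punishing agent
ALPHA, EPS = 0.1, 0.1
COOPERATE, DEFECT, PUNISH = 0, 1, 2  # base actions plus the added penalty action


def payoffs(actions):
    """N-player Prisoner's Dilemma payoff plus an assumed penalty mechanism."""
    n = len(actions)
    cooperators = sum(1 for a in actions if a != DEFECT)
    # Linear public-goods-style payoff: defection dominates individually.
    rewards = [cooperators / n + (1.0 if a == DEFECT else 0.0) for a in actions]
    punishers = sum(1 for a in actions if a == PUNISH)
    for i, a in enumerate(actions):
        if a == DEFECT:
            rewards[i] += P_PENALTY * punishers  # penalized by every punisher
        elif a == PUNISH:
            rewards[i] -= COST_PUNISH            # punishing is assumed costly
    return rewards


# Independent, decentralized learners: one Q-vector per agent, no shared state.
Q = [[0.0, 0.0, 0.0] for _ in range(N_AGENTS)]


def choose(q):
    """Epsilon-greedy action selection over the extended action space."""
    if random.random() < EPS:
        return random.randrange(3)
    return max(range(3), key=lambda a: q[a])


for episode in range(20_000):
    actions = [choose(q) for q in Q]
    rewards = payoffs(actions)
    for q, a, r in zip(Q, actions, rewards):
        # Repeated one-shot interaction: bandit-style Q-update, learned by trial and error.
        q[a] += ALPHA * (r - q[a])

rate = sum(1 for q in Q if max(range(3), key=lambda a: q[a]) != DEFECT) / N_AGENTS
print(f"fraction of agents whose greedy action is non-defecting: {rate:.2f}")
```

In this sketch, the penalty action counts as cooperation in the base game but imposes P_PENALTY on every defector at an assumed cost to the punisher; whether punishment stabilizes cooperation then depends on the relative magnitudes of the penalty, its cost, and the defection incentive.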
