The Anh Han
Journal Articles
Publisher: Journals Gateway
Artificial Life (2021) 27 (3–4): 246–276.
Published: 16 March 2022
Pleasing Enhances Indirect Reciprocity-Based Cooperation Under Private Assessment
Figures: 18
Abstract
Indirect reciprocity is an important mechanism for promoting cooperation among self-interested agents. Simplified, it means "you help me; therefore somebody else will help you" (in contrast to direct reciprocity: "you help me; therefore I will help you"). Indirect reciprocity can be achieved via reputation and norms. Strategies relying on these principles, such as the so-called leading eight, can maintain high levels of cooperation and remain stable against invasion, even in the presence of errors. However, this is only the case if the reputation of an agent is modeled as a shared public opinion. If agents have private opinions and hence can disagree as to whether somebody is good or bad, even rare errors can cause cooperation to break apart. We show that most strategies can overcome the private assessment problem by applying pleasing. A pleasing agent acts in accordance with others' expectations of their behaviour (i.e., pleasing them) instead of being guided by their own, private assessment. As such, a pleasing agent can achieve a better reputation than previously considered strategies when there is disagreement in the population. Pleasing is effective even if the opinions of only a few other individuals are considered and when it bears additional costs. Finally, through a more exhaustive analysis of the parameter space than previous studies, we show that some of the leading eight still function under private assessment, i.e., that cooperation rates are well above an objective baseline. Yet, pleasing strategies supersede formerly described ones and enhance cooperation.
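The pleasing rule the abstract describes can be sketched as a single decision function: rather than acting on its own private opinion of the recipient, a pleasing donor follows the majority opinion of a sample of observers, so its action matches what others expect of it. This is a minimal illustrative sketch under that reading; the function and variable names are not from the paper, which embeds the rule in a full reputation dynamic with social norms and assessment errors.

```python
from collections import Counter

def pleasing_action(own_opinion: bool, observer_opinions: list[bool]) -> bool:
    """Decide whether to cooperate with a recipient.

    A conventional discriminator acts on its own private opinion of the
    recipient; a pleasing agent instead follows the majority view of a
    (possibly small) sample of observers.  Names here are illustrative,
    not taken from the paper.
    """
    if not observer_opinions:            # no observers sampled: fall back
        return own_opinion               # to the private assessment
    votes = Counter(observer_opinions)   # missing keys count as zero
    return votes[True] >= votes[False]   # cooperate if most observers
                                         # deem the recipient good
```

For example, a pleasing donor who privately thinks the recipient is bad still cooperates when two of three sampled observers think the recipient is good: `pleasing_action(False, [True, True, False])` returns `True`. This is exactly how pleasing can repair disagreement under private assessment: the donor's action agrees with the majority's expectation, so fewer observers downgrade its reputation.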
Artificial Life (2012) 18 (4): 365–383.
Published: 01 October 2012
Corpus-Based Intention Recognition in Cooperation Dilemmas
Figures: 5
Abstract
Intention recognition is ubiquitous in most social interactions among humans and other primates. Despite this, the role of intention recognition in the emergence of cooperative actions remains elusive. Resorting to the tools of evolutionary game theory, herein we describe a computational model showing how intention recognition coevolves with cooperation in populations of self-regarding individuals. By equipping some individuals with the capacity to assess the intentions of others in the course of a prototypical dilemma of cooperation—the repeated prisoner's dilemma—we show how intention recognition is favored by natural selection, opening a window of opportunity for cooperation to thrive. We introduce a new strategy (IR) that is able to assign an intention to an opponent's actions, on the basis of an acquired corpus of possible plans achieving that intention, and then to make decisions on the basis of the recognized intention. The success of IR is grounded in the free exploitation of unconditional cooperators while remaining robust against unconditional defectors. In addition, we show how intention recognizers do indeed prevail against the best-known successful strategies of iterated dilemmas of cooperation, even in the presence of errors and of the fitness reduction associated with a small cognitive cost of performing intention recognition.
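The behaviour the abstract attributes to IR—exploiting unconditional cooperators while staying robust against unconditional defectors—can be caricatured as classifying the opponent from its play history and responding accordingly. The frequency heuristic below is an assumption for illustration only: the paper's IR strategy matches observed actions against a corpus of plans, which is richer than a simple cooperation rate, and the thresholds used here are invented.

```python
def ir_move(opponent_history: list[str]) -> str:
    """Toy intention-recognition move rule for the repeated prisoner's
    dilemma ('C' = cooperate, 'D' = defect).  Illustrative only."""
    if not opponent_history:
        return "C"                       # open cooperatively
    coop_rate = opponent_history.count("C") / len(opponent_history)
    if coop_rate >= 0.9:                 # reads as an unconditional
        return "D"                       # cooperator: exploit it
    if coop_rate <= 0.1:                 # reads as an unconditional
        return "D"                       # defector: defend by defecting
    return opponent_history[-1]          # otherwise reciprocate (TFT-like)
```

Against an always-cooperating opponent (`["C"] * 10`) this rule defects to exploit, against an always-defecting one (`["D"] * 10`) it defects in defense, and against a conditional opponent it falls back to reciprocation—mirroring, in miniature, why the abstract says IR's success rests on exploiting unconditional cooperators without being exploitable by unconditional defectors.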