1–11 of 11 results for: Francisco C. Santos
Proceedings Papers
ALIFE 2021: The 2021 Conference on Artificial Life, p. 100 (July 19–23, 2021). doi: 10.1162/isal_a_00438
Abstract
Climate action, vaccination resistance, or social coordination in pandemics are among the many social endeavours with uncertain, non-linear, and long-term returns. The collective risk dilemma offers an excellent game-theoretical abstraction of such scenarios. In this dilemma, players can make stepwise contributions to a public good throughout a fixed number of rounds and only observe their payoff once the game ends. The non-linearity of returns is modeled through a threshold that determines the risk of collective loss: players receive zero payoff if the collective threshold is not reached. In an article recently published in the Journal of Simulation Practice and Theory, we introduce a novel population-based learning model in which a group of individuals facing a collective risk dilemma acquire their strategies over time through reinforcement learning, while handling different sources of uncertainty. We show that the strategies learned with the model correspond to those observed in behavioral experiments, even in the presence of environmental uncertainty. Furthermore, we confirm that when participants are unsure about when the game will end, agents become more polarized and the number of fair contributions diminishes. The population-based online learning framework we propose is general enough to be applicable to a wide range of collective action problems and arbitrarily large sets of available policies.
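As a minimal illustration of the threshold payoff structure described in this abstract (the function name, parameters, and the simplified loss rule are assumptions for the sketch, not the paper's exact model):

```python
import random

def crd_payoffs(contributions, endowment, threshold, risk, rng=random):
    """Sketch of collective-risk-dilemma payoffs.

    `contributions` holds each player's total contribution summed over
    all rounds. If the group reaches the threshold, every player keeps
    the endowment minus what they contributed; otherwise the whole
    group loses everything with probability `risk`.
    """
    if sum(contributions) >= threshold or rng.random() > risk:
        return [endowment - c for c in contributions]
    return [0.0] * len(contributions)

# Four players with endowment 40: three fair contributors and one free rider.
print(crd_payoffs([20, 20, 20, 0], endowment=40, threshold=60, risk=0.9))
# → [20, 20, 20, 40]  (threshold met, so payoffs are certain)
```

The free rider earns the most whenever the group succeeds, which is precisely what makes the scenario a dilemma.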
Proceedings Papers
ALIFE 2021: The 2021 Conference on Artificial Life, p. 65 (July 19–23, 2021). doi: 10.1162/isal_a_00385
Abstract
We examine a social dilemma that arises with the advancement of technologies such as AI, where technologists can choose a safe (SAFE) versus a risk-taking (UNSAFE) course of development. SAFE is costlier and takes more time to implement than UNSAFE, allowing UNSAFE strategists to claim significant benefits from reaching supremacy in a given technology first. Collectively, SAFE is the preferred choice when the risk is sufficiently high, while risk-taking is preferred otherwise. Given the advantage of risk-taking behaviour in terms of cost and speed, a social dilemma arises when the risk is not high enough to make SAFE the preferred individual choice, enabling UNSAFE to prevail when it is not collectively preferred (leading to lower population/social welfare). We show that the range of risk probabilities in which the social dilemma arises depends on many factors, the most important among them being the time-scale to reach supremacy in a given domain (i.e. short-term vs long-term AI) and the speed gained by ignoring safety measures. Moreover, given the more complex nature of this scenario, we show that incentives such as reward and punishment (for example, for the purpose of technology regulation) are much more challenging to supply correctly than in the case of cooperation dilemmas such as the Prisoner's Dilemma and the Public Goods Game.
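To make the individual incentive concrete, here is a toy parametrisation of the choice (the payoff functions, parameter names, and numbers are illustrative assumptions, not the paper's model): UNSAFE develops faster, but a disaster wipes out its gain entirely.

```python
def expected_payoffs(benefit, safe_cost, speed_gain, risk):
    """Toy expected payoffs for a single developer.

    SAFE:   earns the benefit minus the extra cost of safety measures.
    UNSAFE: develops `speed_gain` times faster, but a disaster
            (probability `risk`) destroys the benefit entirely.
    """
    safe = benefit - safe_cost
    unsafe = (1.0 - risk) * benefit * speed_gain
    return safe, unsafe

# Scan the risk axis: the dilemma region is where UNSAFE beats SAFE
# individually even though society would prefer everyone to play SAFE.
for risk in (0.1, 0.5, 0.9):
    safe, unsafe = expected_payoffs(benefit=10.0, safe_cost=2.0,
                                    speed_gain=1.5, risk=risk)
    print(f"risk={risk}: SAFE={safe:.1f}  UNSAFE={unsafe:.1f}")
```

With these (assumed) numbers, UNSAFE dominates at low risk and SAFE at high risk; the social dilemma lives in the low-risk region where individual and collective preferences diverge.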
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, pp. 331–332 (July 29–August 2, 2019). doi: 10.1162/isal_a_00183
Proceedings Papers
ALIFE 2019: The 2019 Conference on Artificial Life, pp. 316–323 (July 29–August 2, 2019). doi: 10.1162/isal_a_00181
Abstract
The design of mechanisms that encourage pro-social behaviours in populations of self-regarding agents is recognised as a major theoretical challenge within several areas of the social, life, and engineering sciences. When interference from external parties is considered, several heuristics have been identified as capable of engineering a desired collective behaviour at a minimal cost. However, these studies neglect the diverse nature of contexts and social structures that characterise real-world populations. Here we analyse the impact of diversity by means of scale-free interaction networks with high and low levels of clustering, and test various interference mechanisms using simulations of agents facing a cooperative dilemma. Our results show that interference on scale-free networks is not trivial and that distinct levels of clustering react differently to each interference mechanism. As such, we argue that no tailored response fits all scale-free networks, and we identify which mechanisms are more efficient at fostering cooperation in each type of network. Finally, we discuss the pitfalls of reckless interference mechanisms.
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 286–287 (July 23–27, 2018). doi: 10.1162/isal_a_00057
Proceedings Papers
ALIFE 2018: The 2018 Conference on Artificial Life, pp. 614–615 (July 23–27, 2018). doi: 10.1162/isal_a_00112
Proceedings Papers
ALIFE 2016, the Fifteenth International Conference on the Synthesis and Simulation of Living Systems, p. 20 (July 4–6, 2016). doi: 10.1162/978-0-262-33936-0-ch008
Abstract
When attempting to avoid global warming, individuals often face a social dilemma in which, besides securing future benefits, it is also necessary to reduce the chances of future losses. Unfortunately, individuals, regions, or nations may opt to be free riders, hoping to benefit from the efforts of others while choosing not to make any effort themselves. Moreover, nations and their leaders seek a collective goal that is shadowed by the uncertainty of its achievement. Dilemmas of this kind have recurred throughout human history, from group hunting to the voluntary adoption of public health measures and other prospective choices. In this talk, I will discuss a population dynamics approach to a broad class of cooperation problems in which attempting to minimize future losses turns the risk of failure into a central issue in individual decisions. Our results suggest that global coordination for a common good should be attempted by segmenting tasks into many small to medium-sized groups in which the perception of risk is high. Moreover, whenever the perception of risk is low, as is presently the case, we find that a polycentric approach involving multiple institutions is more effective than one associated with a single, global institution, indicating that a bottom-up approach, set up at a local scale, provides a better ground on which to attempt a solution for such a complex and global dilemma. Finally, I will discuss the impact on public goods dilemmas of uncertainty in collective goals, heterogeneous political networks, obstinate players, and wealth inequality, including a distribution of wealth representative of existing inequalities among nations.
Proceedings Papers
ALIFE 2016, the Fifteenth International Conference on the Synthesis and Simulation of Living Systems, pp. 470–471 (July 4–6, 2016). doi: 10.1162/978-0-262-33936-0-ch076
Abstract
Indirect Reciprocity (IR) is possibly the most elaborate and cognitively demanding mechanism of cooperation discovered so far. It involves status and reputations and has been heralded as providing the biological basis of our morality. Most theoretical models employed to date have studied how IR can lead to the emergence and sustainability of cooperation in infinite populations. However, it is known that cooperation, norms, reciprocity, and the art of managing reputations are features that date back to primitive, small-scale societies, when interactions mostly occurred within tribes. In small populations, stochastic finite-size effects are not only important, but may even render infinite-population analyses misleading. Thus, it remains an open question which norms prevail in small-scale societies and what their influence is on the evolutionary dynamics of IR. With the current extended abstract, we offer a new analysis of this problem. In Santos et al. (2016) we show that population size strongly influences the merits of each social norm, while proposing a new formal tool to assess the evolutionary dynamics of reputation-based systems in finite populations. We show that a single social norm (Stern-Judging) emerges as the leading norm in small-scale societies. This simple norm dictates that only whoever cooperates with good individuals, and defects against bad ones, deserves a good reputation.
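The Stern-Judging rule stated at the end of this abstract reduces to a one-line reputation assignment. A minimal sketch (the boolean encoding and function name are assumptions for illustration):

```python
GOOD, BAD = True, False
COOPERATE, DEFECT = True, False

def stern_judging(action, recipient_reputation):
    """Stern-Judging: a donor earns a GOOD reputation only by
    cooperating with a GOOD recipient or defecting against a BAD one;
    either other combination yields a BAD reputation."""
    return action == recipient_reputation

print(stern_judging(COOPERATE, GOOD))  # → True  (GOOD: helped a good individual)
print(stern_judging(COOPERATE, BAD))   # → False (BAD: helped a bad individual)
print(stern_judging(DEFECT, BAD))      # → True  (GOOD: refused a bad individual)
```

The rule's symmetry — judging the act jointly with the recipient's standing — is what distinguishes it from simpler image-scoring norms that condemn all defection.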
Proceedings Papers
ECAL 2015: The 13th European Conference on Artificial Life, p. 149 (July 20–24, 2015). doi: 10.1162/978-0-262-33027-5-ch032
Proceedings Papers
ECAL 2011: The 11th European Conference on Artificial Life, p. 108 (August 8–12, 2011). doi: 10.7551/978-0-262-29714-1-ch108
Proceedings Papers
ECAL 2011: The 11th European Conference on Artificial Life, p. 93 (August 8–12, 2011). doi: 10.7551/978-0-262-29714-1-ch093