Tom Lenaerts
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 7 (July 18–22, 2022). doi: 10.1162/isal_a_00484
Abstract
With the introduction of Artificial Intelligence (AI) and related technologies into our daily lives, fear and anxiety about their misuse, as well as about the hidden biases in their creation, have led to a demand for regulation to address such issues. Yet blindly regulating an innovation process that is not well understood may stifle it and reduce the benefits that society could gain from the generated technology, even under the best of intentions. Starting from a baseline game-theoretical model that captures the complex ecology of choices associated with a race for domain supremacy using AI technology, we show that socially unwanted outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e., potentially unsafe, behaviours. To resolve the detrimental effect of such over-regulation, we propose an alternative: a voluntary commitment approach, wherein technologists can freely choose between pursuing their course of action independently or establishing binding agreements to act safely, with sanctioning of those who do not abide by what they have pledged. Overall, our work reveals for the first time how voluntary commitments, with sanctions imposed either by peers or by an institution, lead to socially beneficial outcomes in all scenarios that can be envisaged in the short-term race towards domain supremacy through AI technology.
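The contrast between unconditional and commitment-based sanctioning can be illustrated with a toy expected-payoff calculation. The sketch below is only a minimal illustration, not the paper's actual model: every parameter name and value (b, c, s, p_risk, fine) is an assumption chosen to show how an unconditional fine hits every risk-taker, while a commitment-conditioned fine only hits those who break a pledge.

    # Minimal sketch (illustrative assumptions, not the authors' model):
    # expected per-round payoffs for SAFE vs UNSAFE developers under
    # three sanctioning regimes.

    def expected_payoff(action, sanction="none", committed=False,
                        b=4.0,       # per-round development benefit (assumed)
                        c=1.0,       # extra cost of safety precautions (assumed)
                        s=2.0,       # speed gain from skipping safety (assumed)
                        p_risk=0.3,  # chance an UNSAFE round backfires (assumed)
                        fine=3.0):   # sanction on risk-taking (assumed)
        if action == "SAFE":
            return b - c
        # UNSAFE: faster progress, but the round's gain is lost w.p. p_risk
        payoff = (1 - p_risk) * (b + s)
        if sanction == "unconditional":
            payoff -= fine            # every risk-taker is fined
        elif sanction == "commitment" and committed:
            payoff -= fine            # only pledge-breakers are fined
        return payoff

    for regime in ("none", "unconditional", "commitment"):
        print(regime,
              "| SAFE:", expected_payoff("SAFE", regime),
              "| UNSAFE, pledged:", expected_payoff("UNSAFE", regime, committed=True),
              "| UNSAFE, no pledge:", expected_payoff("UNSAFE", regime, committed=False))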
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 45 (July 18–22, 2022). doi: 10.1162/isal_a_00528
Abstract
Regulating the development of advanced technology such as Artificial Intelligence (AI) has become a principal topic, given the potential threat such technology poses to humanity's long-term future. Being the first to deploy such technology promises innumerable benefits, which might lead to the disregard of safety precautions or societal consequences in favour of speedy development, engendering a race narrative among firms and stakeholders and the value erosion that accompanies it. Building upon a previously proposed game-theoretical model describing an idealised technology race, we investigated how various structures of interaction among race participants can alter collective choices and the requirements for regulatory actions. Our findings indicate that strong diversity among race participants, both in terms of connections and peer influence, can reduce the conflicts that arise in purely homogeneous settings, thereby lessening the need for regulation.
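As a rough illustration of what "structures of interaction" means here, the sketch below builds a homogeneous (regular) network and a heterogeneous (scale-free) one with networkx and compares their degree diversity; the generators, sizes, and parameters are illustrative assumptions, not the paper's setup.

    # Minimal sketch of homogeneous vs heterogeneous interaction structures
    # (illustrative assumptions; not the paper's exact networks).
    import statistics
    import networkx as nx

    n = 100
    homogeneous = nx.random_regular_graph(d=4, n=n, seed=1)      # every node has 4 peers
    heterogeneous = nx.barabasi_albert_graph(n=n, m=2, seed=1)   # hub-dominated degrees

    for name, g in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
        degrees = [d for _, d in g.degree()]
        print(name,
              "| mean degree:", round(statistics.mean(degrees), 2),
              "| degree stdev:", round(statistics.stdev(degrees), 2))

In a race game played on such graphs, the heterogeneous network's high degree variance is what creates the "strong diversity among race participants" whose effect the paper studies.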
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 65 (July 18–22, 2021). doi: 10.1162/isal_a_00385
Abstract
We examine a social dilemma that arises with the advancement of technologies such as AI, where technologists can choose a safe (SAFE) or a risk-taking (UNSAFE) course of development. SAFE is costlier and takes more time to implement than UNSAFE, allowing UNSAFE strategists to claim further significant benefits from reaching supremacy in a given technology first. Collectively, SAFE is the preferred choice when the risk is sufficiently high, while risk-taking is preferred otherwise. Given the advantage of risk-taking behaviour in terms of cost and speed, a social dilemma arises when the risk is not high enough to make SAFE the preferred individual choice, enabling UNSAFE to prevail even when it is not collectively preferred (leading to lower social welfare). We show that the range of risk probabilities where this social dilemma arises depends on many factors, the most important of which are the time-scale needed to reach supremacy in a given domain (i.e., short-term vs long-term AI) and the speed gained by ignoring safety measures. Moreover, given the more complex nature of this scenario, we show that incentives such as reward and punishment (for example, for the purpose of technology regulation) are much more challenging to supply correctly than in the case of cooperation dilemmas such as the Prisoner's Dilemma and the Public Goods Game.
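The dilemma region can be made concrete with a two-player toy version. The payoff structure and numbers below are illustrative assumptions, not the paper's exact parameterisation: an UNSAFE player outpaces a SAFE opponent and captures the supremacy benefit B, but loses everything with the disaster probability p; two SAFE players share B and both pay the safety cost c.

    # Minimal two-player sketch of the SAFE/UNSAFE dilemma (assumed payoffs).
    B = 10.0  # benefit of domain supremacy (assumption)
    c = 2.0   # extra cost of the safe, slower course (assumption)

    def payoff(me, other, p):
        """Expected payoff of `me` against `other`; p is the probability
        that ignoring safety causes a disaster that forfeits the benefit."""
        if me == "SAFE" and other == "SAFE":
            return B / 2 - c            # benefit shared, safety cost paid
        if me == "SAFE" and other == "UNSAFE":
            return -c                   # outpaced: cost paid, benefit lost
        if me == "UNSAFE" and other == "SAFE":
            return (1 - p) * B          # wins the race unless disaster strikes
        return (1 - p) * B / 2          # both race unsafely and share

    for p in [i / 20 for i in range(21)]:
        tempted = payoff("UNSAFE", "SAFE", p) > payoff("SAFE", "SAFE", p)
        safe_better = payoff("SAFE", "SAFE", p) > payoff("UNSAFE", "UNSAFE", p)
        if tempted and safe_better:
            print(f"p={p:.2f}: social dilemma (UNSAFE tempts, all-SAFE is better)")

With these assumed numbers the dilemma appears for intermediate p (roughly 0.45 to 0.65): risk-taking is individually tempting even though everyone developing safely yields higher welfare.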
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 100 (July 18–22, 2021). doi: 10.1162/isal_a_00438
Abstract
Climate action, overcoming vaccination resistance, and social coordination in pandemics are some of the many social endeavours with uncertain, non-linear, and long-term returns. The collective risk dilemma offers an excellent game-theoretical abstraction of such scenarios. In this dilemma, players can make stepwise contributions to a public good throughout a fixed number of rounds and only observe their payoff once the game ends. The non-linearity of returns is modeled through a threshold that determines the risk of collective loss, so that players receive zero payoff if the collective threshold is not achieved. In an article recently published in Simulation Modelling Practice and Theory, we introduce a novel population-based learning model wherein a group of individuals facing a collective risk dilemma acquire their strategies over time through reinforcement learning, while handling different sources of uncertainty. We show that the strategies learned with the model correspond to those observed in behavioral experiments, even in the presence of environmental uncertainty. Furthermore, we confirm that when participants are unsure about when the game will end, agents become more polarized and the number of fair contributions diminishes. The population-based on-line learning framework we propose is general enough to be applicable to a wide range of collective action problems and arbitrarily large sets of available policies.
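The payoff rule of the collective risk dilemma described above is compact enough to sketch directly. The group size, endowment, number of rounds, threshold, and risk level below are illustrative assumptions in the spirit of the standard setup, not the paper's experimental values.

    # Minimal sketch of the collective risk dilemma payoff (assumed parameters).
    import random

    def crd_payoffs(contributions_per_round, endowment=40.0, threshold=120.0,
                    risk=0.9, rng=random.Random(0)):
        """contributions_per_round: one per-round contribution list per player.
        Payoffs are only realised once the game ends."""
        spent = [sum(rounds) for rounds in contributions_per_round]
        if sum(spent) >= threshold:
            return [endowment - s for s in spent]   # target met: keep the rest
        if rng.random() < risk:                     # collective loss occurs
            return [0.0 for _ in spent]
        return [endowment - s for s in spent]       # lucky: loss avoided

    # Six players, ten rounds, each contributing 2 per round (a fair strategy):
    group = [[2.0] * 10 for _ in range(6)]
    print(crd_payoffs(group))  # 6 * 10 * 2 = 120 meets the threshold: all keep 20

In the paper's learning model, strategies like the fair one above are not hard-coded but acquired over time by a population of reinforcement learners facing this payoff structure under uncertainty.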