TY - JOUR
T1 - Emergence of cooperation in the one-shot Prisoner's dilemma through Discriminatory and Samaritan AIs
AU - Zimmaro, Filippo
AU - Miranda, Manuel
AU - Fernández, José María Ramos
AU - Moreno López, Jesús A.
AU - Reddel, Max
AU - Widler, Valeria
AU - Antonioni, Alberto
AU - Han, The Anh
N1 - Publisher Copyright:
© 2024 The Author(s).
PY - 2024/9/25
Y1 - 2024/9/25
N2 - As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making and social interactions. Existing theoretical research on the emergence and stability of cooperation, particularly in the context of social dilemmas, has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. Resorting to methods from evolutionary game theory, we study how different forms of AI can influence cooperation in a population of human-like agents playing the one-shot Prisoner's dilemma game. We found that Samaritan AI agents, which help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI agents, which only help those considered worthy/cooperative, especially in slow-moving societies where change based on payoff difference is moderate (small intensities of selection). Only in fast-moving societies (high intensities of selection) do Discriminatory AIs promote higher levels of cooperation than Samaritan AIs. Furthermore, when it is possible to identify whether a co-player is a human or an AI, we found that cooperation is enhanced when human-like agents disregard AI performance. Our findings provide novel insights into the design and implementation of context-dependent AI systems for addressing social dilemmas.
UR - http://www.scopus.com/inward/record.url?scp=85202168925&partnerID=8YFLogxK
U2 - 10.1098/rsif.2024.0212
DO - 10.1098/rsif.2024.0212
M3 - Article
C2 - 39317332
AN - SCOPUS:85202168925
SN - 1742-5689
VL - 21
JO - Journal of the Royal Society Interface
JF - Journal of the Royal Society Interface
IS - 218
M1 - 20240212
ER -