An evolutionary game theory analysis of trust in repeated games and lessons for human-machine interactions

The Anh Han, Cedric Perret, Simon T. Powers

Research output: Contribution to journal › Conference article › peer-review


Abstract

The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user (Beldad et al., 2016; Chung et al., 2017). Consequently, users take the risk that such agents act in ways opposed to the users’ preferences or goals (Luhmann, 1979). It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. In our recent work (Han et al., 2021), we study this by using the methods of evolutionary game theory (EGT) to examine the viability of trust-based strategies in the context of an iterated prisoner’s dilemma (IPD) game (Axelrod, 1984a; Sigmund, 2010). We show that these strategies can reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative, and can out-compete strategies that are always conditional, such as Tit-for-Tat. We argue that the opportunity cost of checking the action of the co-player is likely to be greater when the interaction is between people and intelligent artificial agents, because of the reduced transparency of the agent.
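The opportunity-cost argument can be illustrated with a minimal sketch. The code below is not the model of Han et al. (2021): the payoff values (Axelrod's canonical R, S, T, P), the checking cost CHECK_COST, and the probe_rounds parameter are all illustrative assumptions. It pits a hypothetical trust-based strategy, which pays to verify its co-player's action only during a short probe phase, against Tit-for-Tat, which verifies every round.

```python
# Illustrative sketch only (not the model of Han et al., 2021). Payoffs,
# CHECK_COST, and probe_rounds are assumed values chosen for illustration.

# Axelrod's canonical prisoner's dilemma payoffs
R, S, T, P = 3.0, 0.0, 5.0, 1.0
CHECK_COST = 0.5   # assumed opportunity cost of verifying the co-player's move
ROUNDS = 50

def payoff(my_move, their_move):
    """Row player's payoff for one prisoner's dilemma round."""
    if my_move == "C":
        return R if their_move == "C" else S
    return T if their_move == "C" else P

def play(strat_a, strat_b, rounds=ROUNDS):
    """Play an iterated game; return total payoffs net of checking costs."""
    hist_a, hist_b = [], []
    score_a = score_b = 0.0
    for _ in range(rounds):
        move_a, cost_a = strat_a(hist_b)
        move_b, cost_b = strat_b(hist_a)
        score_a += payoff(move_a, move_b) - cost_a
        score_b += payoff(move_b, move_a) - cost_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opp_history):
    """Always conditional: verifies the co-player's last move every round."""
    if not opp_history:
        return "C", 0.0
    return opp_history[-1], CHECK_COST

def make_truster(probe_rounds=3):
    """Trust-based strategy: verify during a short probe phase, then trust."""
    def truster(opp_history):
        n = len(opp_history)
        if n == 0:
            return "C", 0.0            # nothing to verify yet
        if n < probe_rounds:
            move = "C" if opp_history[-1] == "C" else "D"
            return move, CHECK_COST    # still paying to check
        if "D" in opp_history[:probe_rounds]:
            return "D", 0.0            # trust was broken during probing
        return "C", 0.0                # trust established: cooperate for free
    return truster

if __name__ == "__main__":
    a, b = play(make_truster(), tit_for_tat)
    print(f"truster: {a:.1f}  tit-for-tat: {b:.1f}")
```

Against a cooperative co-player, the truster earns the same cooperation payoffs as Tit-for-Tat but pays the checking cost only during the probe phase, so its total score is higher by exactly the checking costs it avoids; this is the mechanism the abstract describes.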
Original language: English
Pages (from-to): 332-334
Number of pages: 3
Journal: Artificial Life Conference Proceedings
Volume: 33
Publication status: Published - 19 Jul 2021
Event: 2021 Conference on Artificial Life, ALIFE 2021 - Virtual, Online
Duration: 18 Jul 2021 - 22 Jul 2021
