Abstract
The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user (Beldad et al., 2016; Chung et al., 2017). Consequently, users run the risk that such agents act in ways opposed to their preferences or goals (Luhmann, 1979). It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. In our recent work (Han et al., 2021), we study this by using the methods of evolutionary game theory (EGT) to examine the viability of trust-based strategies in the context of an iterated prisoner's dilemma (IPD) game (Axelrod, 1984a; Sigmund, 2010). We show that these strategies can reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative, and can outcompete strategies that are always conditional, such as Tit-for-Tat. We argue that the opportunity cost of checking the co-player's action is likely to be greater when the interaction is between people and intelligent artificial agents, because of the reduced transparency of the agent.
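To make the intuition concrete, here is a minimal Python sketch, not the model from Han et al. (2021), of why a trust-based strategy can save on verification costs in an IPD. The payoff values, the verification cost `COST`, the `TRUST_THRESHOLD`, and the 95%-cooperative opponent are all illustrative assumptions.

```python
# Illustrative sketch (assumed parameters, not the authors' model): a trust-based
# IPD player stops paying the cost of checking its co-player's action once trust
# is established, while an always-conditional player (Tit-for-Tat) checks every round.
import random

R, S, T, P = 3, 0, 5, 1   # standard PD payoffs: reward, sucker, temptation, punishment
COST = 0.5                # assumed opportunity cost of verifying the co-player's move
ROUNDS = 200
TRUST_THRESHOLD = 3       # assumed consecutive cooperations before trust is granted

def payoff(my_move, their_move):
    """One-round PD payoff for me ('C' = cooperate, 'D' = defect)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(my_move, their_move)]

def play(strategy, opponent_coop_prob=0.95):
    """Total payoff over ROUNDS against a mostly cooperative co-player.

    'TFT' verifies the co-player's action every round; 'TRUST' stops verifying
    (and cooperates unconditionally) after TRUST_THRESHOLD observed cooperations.
    """
    total, streak, trusting, last_seen = 0.0, 0, False, 'C'
    for _ in range(ROUNDS):
        # Act on what was observed last round, or unconditionally once trusting.
        my_move = 'C' if (trusting or last_seen == 'C') else 'D'
        their_move = 'C' if random.random() < opponent_coop_prob else 'D'
        total += payoff(my_move, their_move)
        if strategy == 'TFT' or not trusting:
            total -= COST                      # pay to check what the co-player did
            last_seen = their_move
            streak = streak + 1 if their_move == 'C' else 0
            if strategy == 'TRUST' and streak >= TRUST_THRESHOLD:
                trusting = True                # trust established: stop verifying
    return total

random.seed(0)
print('TFT  :', play('TFT'))
print('TRUST:', play('TRUST'))
```

Against a largely cooperative co-player, the trust-based player saves the per-round verification cost, which is the intuition for it outcompeting always-conditional strategies; the paper itself assesses viability through EGT population dynamics rather than a fixed matchup like this one.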
| Original language | English |
| --- | --- |
| Pages (from-to) | 332-334 |
| Number of pages | 3 |
| Journal | Artificial Life Conference Proceedings |
| Volume | 33 |
| DOIs | |
| Publication status | Published - 19 Jul 2021 |
| Event | 2021 Conference on Artificial Life, ALIFE 2021 - Virtual, Online. Duration: 18 Jul 2021 → 22 Jul 2021 |