With the introduction of Artificial Intelligence (AI) and related technologies into our daily lives, fear and anxiety about their misuse, as well as about hidden biases in their creation, have led to demands for regulation to address such issues. Yet blindly regulating an innovation process that is not well understood may stifle that process and reduce the benefits society could gain from the resulting technology, even under the best of intentions. Starting from a baseline game-theoretical model that captures the complex ecology of choices associated with a race for domain supremacy using AI technology, we show that socially unwanted outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e., potentially unsafe, behaviours. As an alternative that resolves the detrimental effect of over-regulation, we propose a voluntary commitment approach, wherein technologists are free to choose between independently pursuing their own course of action and establishing binding agreements to act safely, with sanctioning of those who do not abide by what they have pledged. Overall, our work reveals for the first time how voluntary commitments, with sanctions imposed either by peers or by an institution, lead to socially beneficial outcomes in all scenarios that can be envisaged in the short-term race towards domain supremacy through AI technology.
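The effect of sanctioning on safety behaviour described in the abstract can be illustrated with a minimal replicator-dynamics sketch. All payoff values below (race benefit `b`, safety cost `c`, sanction `s`) are illustrative assumptions, not parameters from the paper, and the two-strategy game omits the commitment stage of the paper's actual model; the sketch only shows the generic evolutionary-game mechanism: once the sanction on unsafe behaviour exceeds b/2 + c, safe development becomes strictly dominant and takes over the population.

```python
import numpy as np

# Hypothetical two-strategy AI race game: SAFE vs UNSAFE development.
# Payoffs are illustrative assumptions, not the paper's actual model.
b = 4.0   # benefit from winning the race
c = 1.0   # extra cost of developing safely
s = 3.5   # sanction on unsafe actors; SAFE dominates when s > b/2 + c

# Payoff matrix A[i][j]: payoff of row strategy i against column strategy j.
# Strategies: 0 = SAFE, 1 = UNSAFE.
A = np.array([
    [b / 2 - c, -c],          # SAFE vs SAFE (split benefit), SAFE vs UNSAFE (loses race)
    [b - s,     b / 2 - s],   # UNSAFE vs SAFE (wins, sanctioned), UNSAFE vs UNSAFE
])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics: x_i' = x_i * (f_i - f_bar)."""
    f = A @ x               # fitness of each strategy
    return x + dt * x * (f - x @ f)

x = np.array([0.5, 0.5])    # start with half SAFE, half UNSAFE
for _ in range(20_000):
    x = replicator_step(x, A)
    x /= x.sum()            # guard against numerical drift

print(f"SAFE share: {x[0]:.3f}, UNSAFE share: {x[1]:.3f}")
```

Lowering `s` below the b/2 + c threshold (e.g. `s = 2.0`) makes UNSAFE strictly dominant and the same dynamics drive the population to all-unsafe, which is the kind of outcome the paper's sanctioning mechanisms are designed to avoid.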
|Title of host publication||ALIFE 2022: The 2022 Conference on Artificial Life|
|Number of pages||3|
|Publication status||Published - 18 Jul 2022|
|Event||ALIFE 2022: The 2022 Conference on Artificial Life - University of Trento, Trento, Italy|
Duration: 18 Jul 2022 → 22 Jul 2022
|Name||The 2022 Conference on Artificial Life|
|Abbreviated title||ALIFE 2022|
|Period||18/07/22 → 22/07/22|
Fingerprint: research topics of 'Voluntary safety pledges overcome over-regulation dilemma in AI development: an evolutionary game analysis'.
Project: Future of Life Institute Safety Grant "Incentives for Safety Agreement Compliance in AI Race" (Finished), 30/11/18 → 31/10/20