Abstract
Rapid technological advancements in Artificial Intelligence (AI), as well as the growing deployment of intelligent technologies in new application domains, have generated serious anxiety and a fear of missing out among different stakeholders, fostering a racing narrative. Whether real or not, the belief in such a race for domain supremacy through AI can make it real simply through its consequences, as put forward by the Thomas theorem. These consequences may be negative, as racing for technological supremacy creates a complex ecology of choices that could push stakeholders to underestimate or even ignore ethical and safety procedures. Different actors are therefore urging that both the normative and the social impact of these technological advancements be considered, contemplating the use of the precautionary principle in AI innovation and research. Yet, given the breadth and depth of AI and its advances, it is difficult to assess which technology needs regulation and when. As there is no easy access to data describing this alleged AI race, theoretical models are necessary to understand its potential dynamics, allowing for the identification of when procedures need to be put in place to favour outcomes beneficial for all. We show that, next to the risks of setbacks and of being reprimanded for unsafe behaviour, the time-scale in which domain supremacy can be achieved plays a crucial role. When supremacy can be achieved in the short term, those who completely ignore safety precautions are bound to win the race, but at a cost to society, seemingly calling for regulatory action. Our analysis reveals, however, that imposing regulations under all risk and timing conditions may not have the anticipated effect: a dilemma between what is individually preferred and what is globally beneficial arises only under specific conditions. Similar observations can be made for the long-term development case. Yet, unlike in the short-term situation, conditions can be identified there in which risk-taking, rather than compliance with safety regulations, should be promoted in order to improve social welfare. These results remain robust both when two or more actors are involved in the race and when risk-taking produces collective rather than individual setbacks. When defining codes of conduct and regulatory policies for applications of AI, a clear understanding of the time-scale of the race is thus required, as it may induce important, non-trivial effects.
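The time-scale effect summarised above can be illustrated with a small simulation. The sketch below is a deliberately simplified toy version of such a race, not the paper's actual formulation: the parameter names (W, c, s, pr, B) and their values are illustrative assumptions, and the intermediate per-round benefits of the original model are omitted. Two players repeatedly either comply with safety precautions, paying a cost per round and advancing one step, or ignore them, advancing faster at no cost but risking a setback that erases all progress; the first to complete W development steps takes the prize.

```python
import random

# A minimal toy race (illustrative parameters, not the paper's formulation):
#   W  -- development steps needed to win (the race's time-scale)
#   c  -- per-round cost of complying with safety precautions
#   s  -- speed gained by skipping safety (steps per round)
#   pr -- per-round probability that an unsafe player suffers a setback
#   B  -- prize awarded to whoever reaches W first (shared on a tie)

def simulate(strategy_a, strategy_b, W=20, c=0.2, s=2, pr=0.1, B=100.0,
             rounds=10_000):
    """Mean payoff of player A (SAFE or UNSAFE) racing against player B."""
    total = 0.0
    for _ in range(rounds):
        pos = {"A": 0, "B": 0}
        pay = {"A": 0.0, "B": 0.0}
        strat = {"A": strategy_a, "B": strategy_b}
        while max(pos.values()) < W:
            for p in ("A", "B"):
                if strat[p] == "SAFE":
                    pay[p] -= c              # safety costs c every round...
                    pos[p] += 1              # ...and development is slower
                elif random.random() < pr:
                    pos[p] = 0               # setback: all progress is lost
                else:
                    pos[p] += s              # faster progress, no safety cost
        winners = [p for p in pos if pos[p] >= W]
        for p in winners:
            pay[p] += B / len(winners)       # prize, shared in case of a tie
        total += pay["A"]
    return total / rounds

if __name__ == "__main__":
    for W in (5, 60):  # short-term vs long-term race
        print(f"W={W:>2}  UNSAFE vs SAFE: {simulate('UNSAFE', 'SAFE', W=W):6.1f}"
              f"  |  SAFE vs SAFE: {simulate('SAFE', 'SAFE', W=W):6.1f}")
```

Under these assumed parameters, the demo should reproduce the qualitative claim: with a short race (W=5) the unsafe strategy dominates, since it is likely to reach the target before a setback occurs, whereas over a long horizon (W=60) repeated setbacks erode its advantage and mutual compliance with safety yields the higher payoff.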
| Original language | English |
| --- | --- |
| Pages (from-to) | 881-921 |
| Number of pages | 41 |
| Journal | Journal of Artificial Intelligence Research |
| Volume | 69 |
| DOIs | |
| Publication status | Published - 22 Nov 2020 |
Bibliographical note
Funding Information: T.A.H., L.M.P. and T.L. are supported by Future of Life Institute grant RFP2-154. L.M.P. is supported by NOVA LINCS (UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia, Portugal, through national funds. F.C.S. acknowledges support from FCT Portugal (grants PTDC/EEI-SII/5081/2014, PTDC/MAT/STA/3358/2014, and UIDB/50021/2020). T.L. acknowledges support of the F.N.R.S. project with grant number 31257234 and the FuturICT2.0 (www.futurict2.eu) project funded by the FLAG-ERA JCT 2016.
Publisher Copyright:
© 2020 AI Access Foundation. All rights reserved.
Projects
- Future of Life Institute Safety Grant "Incentives for Safety Agreement Compliance in AI Race" (Finished)
Han, T. A. (PI)
30/11/18 → 31/07/21
Project: Research