Future of Life Institute funded project "An empirical study of safe development behaviours in a rapid AI development competition"

Project: Research

Project Details

Description

The competitive tension between different development teams (e.g., companies, nations) in the race towards powerful Artificial Intelligence (AI) systems could lead to serious negative consequences, especially when ethical and safety procedures are underestimated or even ignored. Existing work on regulating AI development behaviours and the associated risks has primarily focused on theoretical modelling. This project aims to provide an empirical understanding of AI development competition dynamics and of the impact of particular regulatory actions (such as sanctioning unsafe behaviours, rewarding safe ones, or establishing safety agreements). To that end, we will perform incentivised behavioural experiments studying how human participants behave when presented with an AI competition scenario and a regulation measure. This approach has proven powerful in studying other collective risks, such as climate change and social dilemmas, but has not yet been applied to AI risks. This is important both to validate existing theoretical predictions and to inform the development of more realistic and accurate models of AI safety development dynamics.
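As a minimal illustrative sketch of the kind of theoretical race model the experiments are meant to validate (not part of the project description), the code below sets up a two-player stage game in which unsafe development is faster but risks a disaster, and a regulator may sanction unsafe play. All names and payoff parameters (benefit, safety_cost, disaster_prob, disaster_loss, sanction) are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a minimal two-player "development race" stage game
# in the spirit of theoretical models of AI safety development dynamics.
# All parameter values below are assumptions for illustration, not the
# project's actual experimental design.

from itertools import product

# Strategies: follow safety procedures ("SAFE") or skip them ("UNSAFE").
STRATEGIES = ("SAFE", "UNSAFE")

def stage_payoffs(a, b, benefit=4.0, safety_cost=1.0,
                  disaster_prob=0.1, disaster_loss=5.0, sanction=0.0):
    """Expected payoffs for players choosing strategies a and b.

    UNSAFE players develop faster (higher chance of winning the benefit) but
    incur an expected disaster cost; a regulator may additionally impose a
    `sanction` on UNSAFE play.
    """
    def speed(s):
        # Skipping safety procedures speeds up development.
        return 2.0 if s == "UNSAFE" else 1.0

    def expected(me, other):
        win_prob = speed(me) / (speed(me) + speed(other))
        payoff = win_prob * benefit
        if me == "SAFE":
            payoff -= safety_cost                      # cost of safe development
        else:
            payoff -= disaster_prob * disaster_loss    # expected disaster cost
            payoff -= sanction                         # regulatory sanction
        return payoff

    return expected(a, b), expected(b, a)

if __name__ == "__main__":
    for sanction in (0.0, 2.0):
        print(f"--- sanction on unsafe play: {sanction} ---")
        for a, b in product(STRATEGIES, repeat=2):
            pa, pb = stage_payoffs(a, b, sanction=sanction)
            print(f"{a:6s} vs {b:6s}: {pa:5.2f}, {pb:5.2f}")
```

With these illustrative numbers, unsafe development is the better reply without a sanction, while a sufficiently large sanction makes safe development preferable; predictions of this kind are what the incentivised behavioural experiments are designed to test.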
Short title: An empirical study of safe development behaviours in a rapid AI development competition
Status: Active
Effective start/end date: 31/05/23 → …
