TY - JOUR
T1 - Trust AI regulation? Discerning users are vital to build trust and effective AI regulation
AU - Alalawi, Zainab
AU - Bova, Paolo
AU - Cimpeanu, Theodor
AU - Di Stefano, Alessandro
AU - Duong, Manh Hong
AU - Fernández Domingos, Elias
AU - Han, The Anh
AU - Krellner, Marcus
AU - Ogbo, Ndidi Bianca
AU - Powers, Simon T.
AU - Zimmaro, Filippo
PY - 2025/7/15
Y1 - 2025/7/15
AB - There is general agreement that some form of regulation is necessary both for AI creators to be incentivised to develop trustworthy systems, and for users to actually trust those systems. But there is much debate about what form these regulations should take and how they should be implemented. Most work in this area has been qualitative, and has not been able to make formal predictions. Here, we propose that evolutionary game theory can be used to quantitatively model the dilemmas faced by users, AI creators, and regulators, and provide insights into the possible effects of different regulatory regimes. We show that achieving safe AI and user trust requires regulators to be incentivised to regulate effectively. We demonstrate two effective mechanisms. In the first, governments can recognise and reward regulators that do a good job. In that case, if the AI technology is not too risky, some level of safe development and user trust evolves. In the second mechanism, users can condition their trust decision on the effectiveness of the regulators. This leads to effective regulation, and consequently the development of trustworthy AI and user trust, provided that the cost of implementing regulations is not too high. Our findings highlight the importance of considering the effect of different regulatory regimes from an evolutionary game-theoretic perspective.
UR - https://doi.org/10.1016/j.amc.2025.129627
U2 - 10.1016/j.amc.2025.129627
DO - 10.1016/j.amc.2025.129627
M3 - Article
SN - 0096-3003
VL - 508
JO - Applied Mathematics and Computation
JF - Applied Mathematics and Computation
M1 - 129627
ER -
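
The abstract describes modelling the interaction between users, AI creators, and regulators with evolutionary game theory. As a rough illustration of that style of analysis (kept outside the RIS record above), here is a minimal sketch of two-strategy replicator dynamics coupled across the three populations. The payoff structure and every parameter (B_USER, D_USER, C_SAFE, PENALTY, REWARD, C_REG) are hypothetical placeholders invented for this sketch, not the model or values from the paper.

```python
# Minimal sketch of the kind of model the abstract describes: coupled
# two-strategy replicator dynamics for users, AI creators, and regulators.
# The payoff structure and every parameter value here are hypothetical
# placeholders for illustration, NOT the model or numbers from the paper.

B_USER  = 4.0   # user's benefit from trusting a safe AI system
D_USER  = 5.0   # user's loss from trusting an unsafe one
C_SAFE  = 1.0   # creator's extra cost of developing safely
PENALTY = 2.0   # expected sanction an effective regulator imposes on unsafe creators
REWARD  = 1.5   # government reward to effective regulators ("mechanism 1")
C_REG   = 1.0   # regulator's cost of regulating effectively

def step(x, y, z, dt=0.01):
    """One Euler step of the two-strategy replicator equation
    dx/dt = x(1 - x) * (payoff advantage of strategy A) for each population.
    x: share of users who trust, y: share of creators developing safely,
    z: share of regulators regulating effectively."""
    gain_trust = y * B_USER - (1 - y) * D_USER   # trusting vs. not trusting
    gain_safe  = z * PENALTY - C_SAFE            # safe vs. unsafe development
    gain_eff   = REWARD - C_REG                  # effective vs. lax regulation
    x += dt * x * (1 - x) * gain_trust
    y += dt * y * (1 - y) * gain_safe
    z += dt * z * (1 - z) * gain_eff
    return x, y, z

x, y, z = 0.1, 0.1, 0.5   # start with little trust or safe development
for _ in range(20_000):
    x, y, z = step(x, y, z)
print(f"trusting users: {x:.2f}, safe creators: {y:.2f}, effective regulators: {z:.2f}")
```

With these placeholder numbers, the reward makes effective regulation spread (z grows), which in turn makes safe development pay (z * PENALTY exceeds C_SAFE), after which trusting becomes the better user strategy. Setting REWARD below C_REG collapses all three shares instead, echoing the abstract's point that regulators themselves must be incentivised to regulate effectively.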