TY - UNPB
T1 - Media and responsible AI governance
T2 - a game-theoretic and LLM analysis
AU - Balabanova, Nataliya
AU - Bashir, Adeela
AU - Bova, Paolo
AU - Buscemi, Alessio
AU - Cimpeanu, Theodor
AU - Correia da Fonseca, Henrique
AU - Di Stefano, Alessandro
AU - Duong, Manh Hong
AU - Fernandez Domingos, Elias
AU - Fernandes, Antonio
AU - Han, The Anh
AU - Krellner, Marcus
AU - Ogbo, Ndidi Bianca
AU - Powers, Simon T.
AU - Proverbio, Daniele
AU - Santos, Fernando P.
AU - Shamszaman, Zia Ush
AU - Song, Zhao
PY - 2025/3/12
Y1 - 2025/3/12
N2 - This paper investigates the complex interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes. The research explores two key mechanisms for achieving responsible governance, safe AI development, and the adoption of safe AI: incentivising effective regulation through media reporting, and conditioning user trust on the commentariat's recommendations. The findings highlight the crucial role of the media in providing information to users, potentially acting as a form of "soft" regulation by investigating developers or regulators, as a substitute for institutional AI regulation (which is still absent in many regions). Both game-theoretic analysis and LLM-based simulations reveal the conditions under which effective regulation and trustworthy AI development emerge, emphasising the importance of considering the influence of different regulatory regimes from an evolutionary game-theoretic perspective. The study concludes that effective governance requires managing the incentives and costs of high-quality commentary.
AB - This paper investigates the complex interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes. The research explores two key mechanisms for achieving responsible governance, safe AI development, and the adoption of safe AI: incentivising effective regulation through media reporting, and conditioning user trust on the commentariat's recommendations. The findings highlight the crucial role of the media in providing information to users, potentially acting as a form of "soft" regulation by investigating developers or regulators, as a substitute for institutional AI regulation (which is still absent in many regions). Both game-theoretic analysis and LLM-based simulations reveal the conditions under which effective regulation and trustworthy AI development emerge, emphasising the importance of considering the influence of different regulatory regimes from an evolutionary game-theoretic perspective. The study concludes that effective governance requires managing the incentives and costs of high-quality commentary.
U2 - 10.48550/arXiv.2503.09858
DO - 10.48550/arXiv.2503.09858
M3 - Preprint
BT - Media and responsible AI governance
PB - arXiv
ER -