Abstract

Humans have developed considerable machinery to create policy and to distribute incentives, forming institutions designed to foster pro-social behaviour in society. Constantly faced with decisions on how best to allocate funding, institutions wrestle with limited budgets and a demand for positive outcomes. These issues are compounded when we consider that real populations are diverse in nature and social structure, with certain individuals being more connected or influential than others. Understanding the complex interplay between institutional incentives and social diversity can shed light on human behaviour, allowing us to build mechanisms capable of engineering pro-sociality in social systems.
In this thesis, we develop mathematical and computational formulations to explore the evolutionary dynamics and cost-efficiency of institutional incentives. We achieve this through a systematic study of several economic games and by varying the networks of interaction that underpin these settings. We start by (i) exploring a cooperative dilemma, testing whether previous findings accrued in homogeneous populations still apply in the presence of social diversity. Subsequently, we study (ii) the dynamics and evolution of fairness, using an asymmetric interaction paradigm. When interactions are asymmetric, participants can enact multiple roles, and external decision-makers or institutions must also consider which roles are more suitable candidates for incentives. Moving away from positive incentives, we then propose (iii) an original model of signalling the threat of punishment, asking whether evolutionary dynamics can explain the deterrence of anti-social behaviour by way of fear, and whether such costly signalling can serve as a cost-saving measure for institutions. Finally, we suggest (iv) a timely application domain for our findings: the regulation of advanced technology with potential safety concerns, such as Artificial Intelligence (AI).
Through extensive computer simulations and mathematical analysis, we conclude, with respect to (i), that interference in complex networks is not trivial, that no tailored response can fit all networks, and that reckless interference can lead to the collapse of cooperation. Regarding (ii), we find that strictly targeting specific roles is an effective way of fostering fairness, and that social diversity relaxes these requirements, reducing the burden on institutions. Concerning (iii), signalling the threat of punishment can lead to the evolution of fearful defectors, deterring anti-social behaviour and heightening social welfare. Finally, pertaining to (iv), technology governance and regulation may profit from the world's patent heterogeneity and inequality among firms and nations. This can enable the design and implementation of meticulous interventions on a minority of participants capable of influencing an entire population towards an ethical and sustainable use of AI.
Date of Award: 1 Oct 2022
Supervisors: The Anh Han (Supervisor), Simon Lynch (Supervisor) & Yifeng Zeng (Supervisor)