The Stuff We Swim in: Regulation Alone Will Not Lead to Justifiable Trust in AI

Simon T. Powers, Olena Linnyk, Michael Guckert, Jennifer Hannig, Jeremy Pitt, Neil Urquhart, Aniko Ekárt, Nils Gumpfer, The Anh Han, Peter R. Lewis, Stephen Marsh, Tim Weber

Research output: Contribution to journal › Article › peer-review


Abstract

Recent activity in the field of artificial intelligence (AI) has given rise to large language models (LLMs) such as GPT-4 and Bard. These are undoubtedly impressive achievements, but they raise serious questions about appropriation, accuracy, explainability, accessibility, responsibility, and more. There have been pusillanimous and self-exculpating calls for a halt in development by senior researchers in the field, and largely self-serving comments by industry leaders about the potential of AI systems, for good or bad. Many of these commentaries leverage misguided conceptions, in the popular imagination, of the competence of machine intelligence, based on some sort of Frankenstein or Terminator-like fiction. However, this leaves it entirely unclear what exactly the relationship between human(ity) and AI, as represented by LLMs or whatever comes after, is or could be.
Original language: English
Pages (from-to): 95-106
Number of pages: 12
Journal: IEEE Technology and Society Magazine
Volume: 42
Issue number: 4
DOIs
Publication status: Published - 22 Jan 2024
