What’s It Like to Trust an LLM: The Devolution of Trust Psychology?

  • Simon T. Powers
  • Neil Urquhart
  • Chloe M. Barnes
  • Theodor Cimpeanu
  • Anikó Ekárt
  • The Anh Han
  • Jeremy Pitt
  • Michael Guckert

Research output: Contribution to journal › Comment/debate › peer-review

Abstract

The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raises profound questions about the societal consequences that this might have on both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion about the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development using the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase the awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.

Original language: English
Pages (from-to): 30-37
Number of pages: 8
Journal: IEEE Technology and Society Magazine
Volume: 44
Issue number: 3
DOIs: (available)
Publication status: Published - 12 Sept 2025

Bibliographical note

Publisher Copyright:
© 1982-2012 IEEE.
