TY - JOUR
T1 - What’s It Like to Trust an LLM
T2 - The Devolution of Trust Psychology?
AU - Powers, Simon T.
AU - Urquhart, Neil
AU - Barnes, Chloe M.
AU - Cimpeanu, Theodor
AU - Ekárt, Anikó
AU - Han, The Anh
AU - Pitt, Jeremy
AU - Guckert, Michael
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2025/9/12
Y1 - 2025/9/12
N2 - The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raise profound questions about the societal consequences that this might have on both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion about the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development using the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase the awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.
AB - The advent of large language models (LLMs), their sudden popularity, and their extensive use by an unprepared and, therefore, unskilled public raise profound questions about the societal consequences that this might have on both the individual and collective levels. In particular, the benefits of a marginal increase in productivity are offset by the potential for widespread cognitive deskilling or nonskilling. While there has been much discussion about the trust relationship between humans and generative AI technologies, the long-term consequences that the use of generative AI can have on the human capability to make trust decisions in other contexts, including interpersonal relations, have not been considered. We analyze this development using the functionalist lens of a general trust model and deconstruct the potential loss of the human ability to make informed and reasoned trust decisions. From our observations and conclusions, we derive a first set of recommendations to increase the awareness of the underlying threats, laying the foundation for a more substantive analysis of the opportunities and threats of delegating educative, cognitive, and knowledge-centric tasks to unrestricted automation.
UR - https://www.scopus.com/pages/publications/105016085264
U2 - 10.1109/mts.2025.3583233
DO - 10.1109/mts.2025.3583233
M3 - Comment/debate
AN - SCOPUS:105016085264
SN - 0278-0097
VL - 44
SP - 30
EP - 37
JO - IEEE Technology and Society Magazine
JF - IEEE Technology and Society Magazine
IS - 3
ER -