Trusting Intelligent Machines: Deepening trust within socio-technical systems

Peter Andras, Lukas Esterle, Michael Guckert, The Anh Han, Peter R Lewis, Kristina Milanovic, Terry Payne, Cedric Perret, Jeremy Pitt, Simon T Powers, Neil Urquhart, Simon Wells

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Intelligent machines have reached capabilities that go beyond what a human being can fully comprehend without a sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game of Go generated by DeepMind's AlphaGo Zero [1] is an impressive example of an artificial intelligence system calculating results that even a human expert in the game can hardly retrace [2]. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?
    The concept of trust plays an important role in many contexts [3]–[5]. In the social world, trust is about the expectation of cooperative, supportive, and non-hostile behavior. In psychological terms, trust is the result of cognitive learning from experiences of trusting behavior with others. Philosophically, trust is the taking of risk on the basis of a moral relationship between individuals. In the context of economics and international relations, trust is based on calculated incentives for alternative behaviors, conceptualized through game theory. Game theory is also used in the field of multi-agent systems to model trust between artificial agents. In the management of organizations, trust is about exposing vulnerability while assuming that the other individual will not take advantage of this. In the world of automation, trust is seen as a feature of entities that can be calculated as the probability of reliable behavior in the presence of externally induced uncertainty. In general, perhaps the common summary of the various views of trust is that it expresses a willingness to take risk under uncertainty [6].
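
    To make the automation view concrete, here is a minimal Python sketch (illustrative, not from the article) in the spirit of beta-reputation-style trust models, where trust in an agent is estimated as the expected probability of reliable behavior given its past interactions:

    # Minimal sketch (assumption, not the article's method): trust as the
    # expected probability of reliable behavior, estimated from counts of
    # past reliable and unreliable interactions under a uniform Beta(1, 1)
    # prior, as in beta-reputation-style models.
    def trust_estimate(successes: int, failures: int) -> float:
        # Expected value of Beta(successes + 1, failures + 1).
        return (successes + 1) / (successes + failures + 2)

    # Example: an agent observed behaving reliably in 8 of 10 interactions.
    print(trust_estimate(successes=8, failures=2))  # 0.75

    Under this reading, trusting an agent more after each reliable interaction, and less after each unreliable one, is simply Bayesian updating of the estimated reliability.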
    Original language: English
    Article number: 8558724
    Pages (from-to): 76-83
    Number of pages: 8
    Journal: IEEE Technology and Society Magazine
    Volume: 37
    Issue number: 4
    DOIs
    Publication status: Published - 4 Dec 2018

