In human-machine trust, humans rely on a simple averaging strategy.
Love, Jonathon; Gronau, Quentin F; Palmer, Gemma; Eidels, Ami; Brown, Scott D.
Affiliation
  • Love J; Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia. jonathon.love@uon.edu.au.
  • Gronau QF; Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
  • Palmer G; Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
  • Eidels A; Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
  • Brown SD; Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
Cogn Res Princ Implic; 9(1): 58, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39218841
ABSTRACT
With the growing role of artificial intelligence (AI) in our lives, attention is increasingly turning to the way that humans and AI work together. A key aspect of human-AI collaboration is how people integrate judgements or recommendations from machine agents when they differ from their own judgements. We investigated trust in human-machine teaming using a perceptual judgement task based on the judge-advisor system. Participants (n = 89) estimated a perceptual quantity, then received a recommendation from a machine agent. The participants then made a second response that combined their first estimate and the machine's recommendation. The degree to which participants shifted their second response in the direction of the recommendations provided a measure of their trust in the machine agent. We analysed the role of advice distance in people's willingness to change their judgements. When a recommendation falls a long way from their initial judgement, do people come to doubt their own judgement, trusting the recommendation more, or do they doubt the machine agent, trusting the recommendation less? We found that although some participants exhibited these behaviours, the most common response was neither of these tendencies, and a simple model based on averaging accounted best for participants' trust behaviour. We discuss implications for theories of trust and for human-machine teaming.
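To make the quantities in the abstract concrete, the sketch below (not taken from the article; the function names and the example numbers are illustrative assumptions) computes the standard judge-advisor "weight of advice" measure of trust and the prediction of a simple averaging strategy, under which the revised estimate sits halfway between the initial estimate and the machine's recommendation.

```python
# Minimal sketch of the judge-advisor quantities described in the abstract.
# A weight of 0 means the recommendation was ignored, 1 means it was adopted
# outright, and 0.5 is what a simple averaging strategy predicts on every trial,
# regardless of how far the advice falls from the initial judgement.

def weight_of_advice(first_estimate: float, advice: float, second_estimate: float) -> float:
    """Fraction of the distance toward the advice covered by the revised estimate."""
    if advice == first_estimate:
        return float("nan")  # zero advice distance leaves the weight undefined
    return (second_estimate - first_estimate) / (advice - first_estimate)

def averaging_prediction(first_estimate: float, advice: float) -> float:
    """Revised estimate predicted by a simple averaging strategy."""
    return (first_estimate + advice) / 2.0

# Hypothetical trial: initial judgement 40, machine recommends 60, revision to 50.
print(weight_of_advice(40.0, 60.0, 50.0))   # 0.5 -> consistent with averaging
print(averaging_prediction(40.0, 60.0))     # 50.0
```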

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Trust / Judgment Limits: Adult / Female / Humans / Male Language: English Journal: Cogn Res Princ Implic Year: 2024 Document type: Article Country of affiliation: Australia