In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human-AI Interaction

Cited by: 77
Author
Liu, Bingjie [1]
Affiliation
[1] California State University, Los Angeles, Los Angeles, CA 90032, USA
Source
JOURNAL OF COMPUTER-MEDIATED COMMUNICATION | 2021, Vol. 26, Issue 6
Keywords
Machine Learning; Agency Locus; Agency Attribution; Transparency; Uncertainty; Trust; Information; Motivation; Automation
DOI
10.1093/jcmc/zmab013
CLC number
G2 [Information and knowledge dissemination]
Subject classification
05; 0503
Abstract
Artificial intelligence (AI) is increasingly used to make decisions for humans. Unlike traditional AI, which is programmed to follow human-made rules, machine-learning AI generates rules from data. These machine-generated rules are often unintelligible to humans. Will users feel more uncertainty about decisions governed by such rules? To what extent does rule transparency reduce uncertainty and increase users' trust? In a 2 × 3 × 2 between-subjects online experiment, 491 participants interacted with a website purported to be a decision-making AI system. Three factors of the AI system were manipulated: agency locus (human-made rules vs. machine-learned rules), transparency (no vs. placebic vs. real explanations), and task (detecting fake news vs. assessing personality). Results show that machine-learning AI triggered less social presence, which increased uncertainty and lowered trust. Transparency reduced uncertainty and enhanced trust, but the mechanisms of this effect differed between the two types of AI.

Lay Summary
Machine-learning AI systems are governed by rules the system generates from its analysis of large databases. These rules are not predetermined by humans, and they can be difficult for humans to interpret. In this research, I ask whether users trust the judgments of systems driven by such machine-made rules. The results show that, compared with a traditional system programmed to follow human-made rules, machine-learning AI was perceived as less humanlike. This made users more uncertain about the decisions it produced and decreased both their trust in the system and their intention to use it. Transparency about the rationales for its decisions alleviated users' uncertainty and enhanced their trust, provided that the rationales were meaningful and informative.
Pages: 384-402 (19 pages)