Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human

Cited by: 53
Authors
Corti, Kevin [1 ]
Gillespie, Alex [1 ]
Affiliations
[1] London Sch Econ, London WC2A 2AE, England
Keywords
Common ground; Conversational repair; Echoborg; Human-agent interaction; Intersubjectivity; Psychological benchmarks; INTERFACE; ORGANIZATION; BENCHMARKS; ROBOTS; SELF
DOI
10.1016/j.chb.2015.12.039
Chinese Library Classification (CLC)
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chatbot Cleverbot) spoke to either a text-screen interface (the agent's responses shown on a screen) or a human body interface (the agent's responses vocalized by a human speech shadower via the echoborg method), and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text-screen interface, as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most "intersubjective effort" toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human-human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction. (C) 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Pages: 431-442
Page count: 12