Explanation in artificial intelligence: Insights from the social sciences

Cited: 2272
Authors
Miller, Tim [1 ]
Affiliation
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic, Australia
Funding
Australian Research Council;
Keywords
Explanation; Explainability; Interpretability; Explainable AI; Transparency; STRUCTURAL-MODEL APPROACH; EXPLAIN; PRECONDITIONS; ATTRIBUTION; KNOWLEDGE; INFERENCE; BEHAVIOR; BLAME; GOALS; CONVERSATION;
DOI
10.1016/j.artint.2018.07.007
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people apply certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence. (C) 2018 Elsevier B.V. All rights reserved.
Pages: 1-38
Page count: 38
References
193 entries in total
[31] Charniak, E. AAAI-91: Proceedings of the Ninth National Conference on Artificial Intelligence, 1991, p. 160.
[32] Chen, J. Y. ARL-TR-6905, US Army Research Laboratory, 2014.
[33] Chevaleyre, Y. Lecture Notes in Computer Science, 2007, 4362: 51.
[34] Chin-Parker, Seth; Cantelon, Julie. Contrastive Constraints Guide Explanation-Based Category Learning. Cognitive Science, 2017, 41(6): 1645-1655.
[35] Chin-Parker, Seth; Bradner, Alexandra. Background shifts affect explanatory style: how a pragmatic theory of explanation accounts for background effects in the generation of explanations. Cognitive Processing, 2010, 11(3): 227-249.
[36] Chockler, H.; Halpern, J. Y. Responsibility and blame: A structural-model approach. Journal of Artificial Intelligence Research, 2004, 22: 93-115.
[37] Cimpian, Andrei; Salomon, Erika. The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism. Behavioral and Brain Sciences, 2014, 37(5): 461-480.
[38] Cooper, A. The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, 2004.
[39] Davey, G. C. L. Anxiety Research, 1992, 4: 299.
[40] de Graaf, M. D. AAAI Fall Symposium, 2017, p. 19.