Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Cited by: 2717
Authors
Adadi, Amina [1 ]
Berrada, Mohammed [1 ]
Affiliations
[1] Sidi Mohammed Ben Abdellah Univ, Comp & Interdisciplinary Phys Lab, Fes 30050, Morocco
Source
IEEE ACCESS | 2018, Vol. 6
Keywords
Explainable artificial intelligence; interpretable machine learning; black-box models; decision trees; rules; classifiers; selection
DOI
10.1109/ACCESS.2018.2870052
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily lives, which is accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and that is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn the key aspects of this young and rapidly growing body of research. Through the lens of the literature, we review existing approaches to the topic, discuss the trends surrounding the field, and present major research trajectories.
Pages: 52138-52160
Page count: 23
References: 179