A Survey on Neural Network Interpretability

Cited: 434
Authors
Zhang, Yu [1 ,2 ,3 ]
Tino, Peter [3 ]
Leonardis, Ales [3 ]
Tang, Ke [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Guangdong Key Lab Brain Inspired Intelligent Comp, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol, Res Inst Trust Worthy Autonomous Syst, Shenzhen 518055, Peoples R China
[3] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2021, Vol. 5, Issue 5
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Taxonomy; Deep learning; Tools; Reliability; Decision trees; Training; Task analysis; Machine learning; neural networks; interpretability; survey; EXPLANATIONS; DROPOUT; RULES
DOI
10.1109/TETCI.2021.3100641
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Along with the great success of deep neural networks, there is also growing concern about their black-box nature. This interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers from the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
Pages: 726-742
Number of pages: 17