A Survey on Neural Network Interpretability

Cited by: 434
Authors
Zhang, Yu [1 ,2 ,3 ]
Tino, Peter [3 ]
Leonardis, Ales [3 ]
Tang, Ke [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Guangdong Key Lab Brain Inspired Intelligent Comp, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol, Res Inst Trust Worthy Autonomous Syst, Shenzhen 518055, Peoples R China
[3] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2021, Vol. 5, No. 5
Funding
UK Engineering and Physical Sciences Research Council;
Keywords
Taxonomy; Deep learning; Tools; Reliability; Decision trees; Training; Task analysis; Machine learning; neural networks; interpretability; survey; EXPLANATIONS; DROPOUT; RULES;
DOI
10.1109/TETCI.2021.3100641
Chinese Library Classification code
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Along with the great success of deep neural networks, there is growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers in the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
Pages: 726-742
Page count: 17