A Survey on Neural Network Interpretability

Cited by: 434
Authors
Zhang, Yu [1 ,2 ,3 ]
Tino, Peter [3 ]
Leonardis, Ales [3 ]
Tang, Ke [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Guangdong Key Lab Brain Inspired Intelligent Comp, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol, Res Inst Trust Worthy Autonomous Syst, Shenzhen 518055, Peoples R China
[3] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2021, Vol. 5, No. 5
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Taxonomy; Deep learning; Tools; Reliability; Decision trees; Training; Task analysis; Machine learning; neural networks; interpretability; survey; EXPLANATIONS; DROPOUT; RULES;
DOI
10.1109/TETCI.2021.3100641
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Along with the great success of deep neural networks, there is also growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers from the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
Pages: 726-742 (17 pages)
Related papers
148 in total
  • [41] Deep Learning in Drug Discovery
    Gawehn, Erik
    Hiss, Jan A.
    Schneider, Gisbert
    [J]. MOLECULAR INFORMATICS, 2016, 35 (01) : 3 - 14
  • [42] Ghorbani A, 2019, ADV NEUR IN, V32
  • [43] Ghorbani Amirata, 2019, P AAAI C ART INT, V33
  • [44] Explaining Explanations: An Overview of Interpretability of Machine Learning
    Gilpin, Leilani H.
    Bau, David
    Yuan, Ben Z.
    Bajwa, Ayesha
    Specter, Michael
    Kagal, Lalana
    [J]. 2018 IEEE 5TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2018, : 80 - 89
  • [45] Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672
  • [46] European Union Regulations on Algorithmic Decision Making and a "Right to Explanation"
    Goodman, Bryce
    Flaxman, Seth
    [J]. AI MAGAZINE, 2017, 38 (03) : 50 - 57
  • [47] Goyal Y, 2019, PR MACH LEARN RES, V97
  • [48] A Survey of Methods for Explaining Black Box Models
    Guidotti, Riccardo
    Monreale, Anna
    Ruggieri, Salvatore
    Turini, Franco
    Giannotti, Fosca
    Pedreschi, Dino
    [J]. ACM COMPUTING SURVEYS, 2019, 51 (05)
  • [49] Stable architectures for deep neural networks
    Haber, Eldad
    Ruthotto, Lars
    [J]. INVERSE PROBLEMS, 2018, 34 (01)
  • [50] Global Optimality in Neural Network Training
    Haeffele, Benjamin D.
    Vidal, Rene
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 4390 - 4398