Extracting symbolic rules from trained neural network ensembles

Cited: 7
Authors
Zhou, ZH [1]
Jiang, Y [1]
Chen, SF [1]
Affiliations
[1] Nanjing Univ, Natl Lab Novel Software Technol, Nanjing 210093, Peoples R China
Keywords
neural networks; neural network ensembles; rule extraction; machine learning; comprehensibility;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
A neural network ensemble can significantly improve the generalization ability of neural-network-based systems. However, its comprehensibility is even worse than that of a single neural network, because it comprises a collection of individual networks. In this paper, an approach named REFNE is proposed to improve the comprehensibility of trained neural network ensembles that perform classification tasks. REFNE uses the trained ensemble to generate instances and then extracts symbolic rules from those instances. It gracefully breaks ties among the individual networks' predictions, and it employs a specific discretization scheme, rule form, and fidelity evaluation mechanism. Experiments show that, with different configurations, REFNE can extract rules with good fidelity that explain well the function of trained neural network ensembles, or rules with strong generalization ability that predict even better than the trained ensembles themselves.
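The extraction idea the abstract describes — use the trained ensemble as an oracle to label newly generated instances, then learn symbolic rules from those labelled instances and measure fidelity — can be sketched as below. This is a hedged illustration, not the paper's implementation: a shallow decision tree stands in for REFNE's own rule learner, and all class/parameter choices (scikit-learn's `BaggingClassifier`, `MLPClassifier`, the instance counts) are assumptions for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# 1. Train the neural network ensemble whose behaviour we want to explain
#    (bagged MLPs; majority voting breaks ties among individual networks).
ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0),
    n_estimators=5, random_state=0).fit(X, y)

# 2. Generate new instances inside the attribute ranges and label them
#    with the trained ensemble's predictions.
X_new = rng.uniform(X.min(axis=0), X.max(axis=0), size=(1000, X.shape[1]))
y_new = ensemble.predict(X_new)

# 3. Learn comprehensible rules from the ensemble-labelled instances
#    (a depth-limited tree as a stand-in rule learner).
rules = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_new, y_new)
print(export_text(rules))

# 4. Fidelity: how often the extracted rules agree with the ensemble
#    on the generated instances.
fidelity = (rules.predict(X_new) == y_new).mean()
print(f"fidelity on generated instances: {fidelity:.2f}")
```

The fidelity score in step 4 mirrors the paper's fidelity evaluation: rules are judged by how faithfully they reproduce the ensemble's outputs, separately from how well they predict the true labels.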
Pages: 3 - 15 (13 pages)
Related Papers (49 entries in total)
  • [1] Survey and critique of techniques for extracting rules from trained artificial neural networks
    Andrews, R
    Diederich, J
    Tickle, AB
    [J]. KNOWLEDGE-BASED SYSTEMS, 1995, 8 (06) : 373 - 389
  • [2] [Anonymous], 1992, The Tenth National Conference on Artificial Intelligence
  • [3] What Size Net Gives Valid Generalization?
    Baum, Eric B.
    Haussler, David
    [J]. NEURAL COMPUTATION, 1989, 1 (01) : 151 - 160
  • [4] Are artificial neural networks black boxes?
    Benitez, JM
    Castro, JL
    Requena, I
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1997, 8 (05) : 1156 - 1164
  • [5] Blake C.L., 1998, UCI repository of machine learning databases
  • [6] Bagging predictors
    Breiman, L
    [J]. MACHINE LEARNING, 1996, 24 (02) : 123 - 140
  • [7] Cherkauer Kevin J., 1996, Working Notes of the AAAI Workshop on Integrating Multiple Learned Models, P15
  • [8] Craven MW, 1994, P 11 INT C MACH LEAR, P37, DOI 10.1016/B978-1-55860-335-6.50013-1
  • [9] Efron B., 1993, INTRO BOOTSTRAP, 1st ed., DOI 10.1201/9780429246593
  • [10] Boosting a weak learning algorithm by majority
    Freund, Y
    [J]. INFORMATION AND COMPUTATION, 1995, 121 (02) : 256 - 285