Visualizing and Understanding Convolutional Networks

Cited by: 9837
Authors
Zeiler, Matthew D. [1]
Fergus, Rob [1]
Affiliation
[1] NYU, Dept Comp Sci, New York, NY 10003 USA
Source
COMPUTER VISION - ECCV 2014, PT I | 2014, Vol. 8689
DOI
10.1007/978-3-319-10590-1_53
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
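
As an illustration of the transfer setting described in the abstract (keeping the ImageNet-trained convolutional layers fixed and retraining only the softmax classifier on Caltech-101/256), the sketch below shows the general recipe. It uses PyTorch/torchvision and an AlexNet-style model as assumed stand-ins, not the authors' original implementation; num_classes is a placeholder.

    # Sketch: freeze an ImageNet-pretrained convnet and retrain only the
    # final softmax classifier on a new dataset (e.g. Caltech-101).
    # PyTorch/torchvision stand-in, not the paper's own code.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    num_classes = 101  # placeholder, e.g. Caltech-101

    # AlexNet is the closest torchvision analogue to the Krizhevsky et al.
    # architecture the paper builds on.
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

    # Freeze all pretrained parameters; only the new classifier is trained.
    for p in net.parameters():
        p.requires_grad = False

    # Replace the final fully connected layer with a fresh classifier sized
    # for the target dataset (softmax is applied inside CrossEntropyLoss).
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)

    optimizer = torch.optim.SGD(net.classifier[6].parameters(),
                                lr=1e-2, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One optimization step on the retrained classifier only."""
        optimizer.zero_grad()
        logits = net(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        return loss.item()
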
Pages: 818 - 833
Number of pages: 16
References
29 in total
  • [1] [Anonymous], 2011, ICCV
  • [2] [Anonymous], 2009, Technical Report
  • [3] [Anonymous], 2010, NIPS
  • [4] [Anonymous], 2011, CVPR
  • [5] [Anonymous], 2008, Proc. ICML, DOI 10.1145/1390156.1390294
  • [6] [Anonymous], 2014, CVPR
  • [7] [Anonymous], 2013, 31st International Conference on Machine Learning
  • [8] [Anonymous], 2014, arXiv:1311.2524
  • [9] [Anonymous], ImageNet Competition
  • [10] [Anonymous], Caltech-256