SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning

Cited by: 1528
Authors
Chen, Long [1 ]
Zhang, Hanwang [2 ]
Xiao, Jun [1 ]
Nie, Liqiang [3 ]
Shao, Jian [1 ]
Liu, Wei [4 ]
Chua, Tat-Seng [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[2] Columbia Univ, New York, NY 10027 USA
[3] Shandong Univ, Jinan, Shandong, Peoples R China
[4] Tencent AI Lab, Shenzhen, Peoples R China
[5] Natl Univ Singapore, Singapore, Singapore
Source
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) | 2017
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR.2017.667
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline code
140502 [Artificial Intelligence]
Abstract
Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism-a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.
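The abstract's "where and what" idea — first reweight the channels of a conv feature map given the decoder's context, then reweight its spatial locations — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the attention hidden size `k`, the mean-pooling of channels for the channel scores, and all projection matrices here are random stand-ins for learned parameters, chosen only to show the channel-then-spatial weighting pattern.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_spatial_attention(V, h, rng):
    """Channel-wise then spatial attention over a conv feature map.

    V: (C, H, W) feature map; h: (d,) decoder hidden state.
    All projection matrices are random stand-ins for learned weights.
    """
    C, H, W = V.shape
    d = h.shape[0]
    k = 16  # attention hidden size (illustrative choice)

    # Channel attention: score each channel from its mean activation plus context.
    Wc, Whc = rng.standard_normal((k, 1)), rng.standard_normal((k, d))
    bc, wc = rng.standard_normal(k), rng.standard_normal(k)
    v_bar = V.reshape(C, -1).mean(axis=1)                        # (C,) pooled channels
    b = np.tanh(Wc @ v_bar[None, :] + (Whc @ h + bc)[:, None])   # (k, C)
    beta = softmax(wc @ b)                                       # (C,) channel weights
    Vc = beta[:, None, None] * V                                 # channel-reweighted map

    # Spatial attention: score each of the H*W locations of the reweighted map.
    Ws, Whs = rng.standard_normal((k, C)), rng.standard_normal((k, d))
    bs, ws = rng.standard_normal(k), rng.standard_normal(k)
    a = np.tanh(Ws @ Vc.reshape(C, H * W) + (Whs @ h + bs)[:, None])  # (k, H*W)
    alpha = softmax(ws @ a)                                      # (H*W,) spatial weights
    return alpha.reshape(H, W), beta, Vc * alpha.reshape(1, H, W)

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels on a 4x4 grid
h = rng.standard_normal(10)          # toy decoder hidden state
alpha, beta, X = channel_spatial_attention(V, h, rng)
print(alpha.sum(), beta.sum(), X.shape)  # both weight maps sum to 1
```

Because both `beta` and `alpha` are softmax outputs, each forms a proper distribution (over channels and over locations respectively), which is what lets the decoder interpret them as "what" and "where" to attend; SCA-CNN applies this modulation at multiple conv layers rather than only the last one.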
Pages: 6298-6306
Page count: 9
References (43 entries)
[1] [Anonymous], 2016, IJCV.
[2] [Anonymous], 2016, arXiv:1511.05960.
[3] Antol, Stanislaw; Agrawal, Aishwarya; Lu, Jiasen; Mitchell, Margaret; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi. VQA: Visual Question Answering. 2015 IEEE International Conference on Computer Vision (ICCV), 2015: 2425-2433.
[4] Bahdanau, D., 2016, arXiv:1409.0473. DOI: 10.48550/arXiv.1409.0473.
[5] Banerjee, S., 2005, Proc. ACL Workshop on Intrinsic and Extrinsic Evaluation Measures, p. 65.
[6] Corbetta, M.; Shulman, G. L. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 2002, 3(3): 201-215.
[7] Donahue, J., 2015, Proc. CVPR IEEE, p. 2625. DOI: 10.1109/CVPR.2015.7298878.
[8] Gao, H., 2015, NIPS.
[9] He, K., 2016, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, p. 770. DOI: 10.1109/CVPR.2016.90.
[10] Hochreiter, S., 1997, Neural Computation, 9: 1735. DOI: 10.1162/neco.1997.9.1.1.