VQA: Visual Question Answering

Cited by: 2671
Authors
Antol, Stanislaw [1 ]
Agrawal, Aishwarya [1 ]
Lu, Jiasen [1 ]
Mitchell, Margaret [2 ]
Batra, Dhruv [1 ]
Zitnick, C. Lawrence [2 ]
Parikh, Devi [1 ]
Affiliations
[1] Virginia Tech, Blacksburg, VA 24061 USA
[2] Microsoft Res, Cambridge, MA USA
Source
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2015
Funding
US National Science Foundation (NSF);
Keywords
DOI
10.1109/ICCV.2015.279
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance.
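The abstract notes that open-ended VQA answers are short enough to be scored automatically against the multiple human answers collected per question. The following is a minimal Python sketch of that kind of consensus-based scoring, assuming the min(matches/3, 1) accuracy rule described in the full paper; the function names, the normalization step, and the example answers are illustrative assumptions, not the authors' released evaluation code.

def normalize(ans: str) -> str:
    # Lowercase, trim whitespace, and drop a trailing period so that
    # "Red" and "red." both match "red".
    return ans.strip().lower().rstrip(".")

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    # Full credit if at least 3 human annotators gave the same
    # (normalized) answer; partial credit otherwise.
    pred = normalize(predicted)
    matches = sum(1 for a in human_answers if normalize(a) == pred)
    return min(matches / 3.0, 1.0)

# Example: 10 human answers collected for one question.
humans = ["red"] * 7 + ["dark red", "maroon", "red."]
print(vqa_accuracy("Red", humans))     # 1.0  (at least 3 humans agree)
print(vqa_accuracy("maroon", humans))  # ~0.33 (only 1 human agrees)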
Pages: 2425-2433
Page count: 9