Learning Rich Features from RGB-D Images for Object Detection and Segmentation

Cited by: 995
Authors
Gupta, Saurabh [1 ]
Girshick, Ross [1 ]
Arbelaez, Pablo [2 ]
Malik, Jitendra [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Univ Los Andes, Bogota, Colombia
Source
COMPUTER VISION - ECCV 2014, PT VII | 2014 / Vol. 8695
Keywords
RGB-D perception; object detection; object segmentation;
DOI
10.1007/978-3-319-10584-0_23
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.
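The geocentric embedding described in the abstract maps each depth pixel to three channels: horizontal disparity, height above ground, and the angle of the local surface normal with the gravity direction. The sketch below illustrates one way such an encoding could be computed; the function name, the camera parameters, and the assumption of a gravity-aligned camera at a known height are illustrative only, not the paper's actual implementation (which estimates the gravity direction from the data).

import numpy as np

def geocentric_encoding(depth_m, camera_height_m=1.2, focal_length_px=525.0,
                        baseline_m=0.075):
    # Illustrative sketch of a three-channel geocentric depth encoding:
    # horizontal disparity, height above ground, angle with gravity.
    # Assumes a gravity-aligned pinhole camera at a known height; these
    # are simplifying assumptions, not the method of the paper.
    h, w = depth_m.shape

    # Channel 1: horizontal disparity (inversely proportional to depth).
    disparity = baseline_m * focal_length_px / np.maximum(depth_m, 1e-3)

    # Back-project pixels to 3D camera coordinates with a pinhole model.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth_m / focal_length_px
    y = (v - h / 2.0) * depth_m / focal_length_px
    points = np.stack([x, y, depth_m], axis=-1)

    # Channel 2: height above ground. With a gravity-aligned camera the
    # image y-axis points downward, so height = camera height - y.
    height = camera_height_m - y

    # Channel 3: angle between the local surface normal and gravity.
    # Normals are approximated from depth-map gradients (cross product of
    # the two tangent vectors at each pixel).
    tangent_u = np.gradient(points, axis=1)
    tangent_v = np.gradient(points, axis=0)
    normals = np.cross(tangent_u, tangent_v)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    up = np.array([0.0, -1.0, 0.0])  # direction opposite to gravity
    angle_deg = np.degrees(np.arccos(np.clip(normals @ up, -1.0, 1.0)))

    # Stack into an HxWx3 image that a CNN can consume like an RGB image.
    return np.stack([disparity, height, angle_deg], axis=-1)

Feeding such a three-channel encoding to a convolutional network in place of a raw depth map is the design choice the abstract reports as working better for learning depth feature representations.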
Pages: 345-360
Page count: 16