Plenoptic cameras in real-time robotics

Cited by: 54
Authors
Dong, Fengchun [1 ]
Ieng, Sio-Hoi [2 ]
Savatier, Xavier [1 ]
Etienne-Cummings, Ralph [3 ]
Benosman, Ryad [2 ]
Affiliations
[1] Instrumentat IT & Syst Dept IRSEEM, Rouen, France
[2] Univ Paris 06, UMR 7210, UMR S968, Vis Inst, F-75005 Paris, France
[3] Johns Hopkins Univ, Computat Sensory Motor Syst Lab, Baltimore, MD USA
Funding
US National Science Foundation;
Keywords
plenoptic function; egomotion; non-central vision; scale-space; vision-based navigation; CALIBRATION; NAVIGATION; MOTION;
DOI
10.1177/0278364912469420
CLC classification number
TP24 [Robotics];
Subject classification code
080202; 1405;
Abstract
Real-time vision-based navigation is a difficult task, largely because of the limited optical properties of the single cameras usually mounted on robots. Multiple-camera systems such as polydioptric sensors provide more efficient and precise solutions for autonomous navigation. They are particularly suitable for motion estimation because they allow egomotion to be formulated as a linear optimization. These sensors capture visual information in a more complete form, the plenoptic function, which encodes the spatial and temporal light radiance of the scene. Polydioptric sensors are rarely used in robotics because they are usually thought to increase the amount of data produced and to require more computational power. This paper shows that, if designed properly, these cameras provide more accurate estimation results in mobile-robot navigation. It also shows that a plenoptic vision sensor whose individual cameras range from 3 x 3 to 40 x 30 pixels provides higher accuracy than mono-SLAM running on a 320 x 240 pixel camera. The paper also gives a complete scheme for designing usable real-time plenoptic cameras for mobile robotics by establishing the link between velocity, resolution, and motion-estimation accuracy. Finally, experiments on a mobile robot are presented, allowing a comparison between optimal plenoptic visual sensors and single high-resolution cameras. The estimation with the plenoptic sensor is more accurate than with a monocular high-definition camera, at a processing time 100 times lower.
Pages: 206-217
Page count: 12
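
The abstract's central point, that multi-camera sensors let egomotion be estimated by a linear optimization, can be illustrated with a small sketch. The snippet below is not the paper's plenoptic formulation; it is a minimal, hypothetical Python example of the underlying idea, assuming each measurement is the 3D velocity (scene flow) of a known point x, which under rigid motion satisfies x_dot = v + omega x x and is therefore linear in the six unknowns (v, omega). Stacking one block per measurement gives an overdetermined system solvable in closed form.

```python
import numpy as np

def skew(x):
    """3x3 cross-product matrix, so that skew(x) @ y == np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def estimate_twist(points, flows):
    """Least-squares rigid-motion estimate (v, omega) from per-point scene flow.

    Each 3D point x with observed velocity x_dot contributes the linear
    constraint x_dot = v + omega x x = [I | -skew(x)] @ [v; omega].
    Stacking one 3-row block per point yields an overdetermined linear
    system solved in one shot, no iterative optimization needed.
    """
    A = np.vstack([np.hstack([np.eye(3), -skew(x)]) for x in points])
    b = np.concatenate(flows)
    twist, *_ = np.linalg.lstsq(A, b, rcond=None)
    return twist[:3], twist[3:]  # translational, rotational velocity

# Synthetic check: recover a known twist from noiseless flow measurements.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
v_true = np.array([0.1, 0.0, 0.2])
w_true = np.array([0.0, 0.05, 0.0])
flows = [v_true + np.cross(w_true, p) for p in pts]
v_est, w_est = estimate_twist(pts, flows)
```

Because the system is linear, accuracy grows with the number of stacked constraints rather than with the resolution of any single camera, which is consistent with the abstract's finding that an array of very low-resolution cameras can outperform one high-resolution monocular camera.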