Colored 3D surface reconstruction using Kinect sensor

Cited by: 13
Authors
Guo L.-P. [1]
Chen X.-N. [1]
Chen Y. [1]
Liu B. [1]
Affiliations
[1] Department of Optical and Electronic Equipment, Equipment Academy of PLA, Beijing
Keywords
Color; Image segmentation; Surface reconstruction; Image enhancement; Image reconstruction; Nonlinear filtering; Color image processing; Rendering (computer graphics); Signal-to-noise ratio
DOI
10.1007/s11801-015-5013-2
Abstract
A colored 3D surface reconstruction method that effectively fuses depth and color information from a Microsoft Kinect is proposed and demonstrated experimentally. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which combines the depth and color data to improve depth quality. The registered depth data are then integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, an improved ray-casting step renders the fully colored surface, estimating the color texture of the reconstructed object. For captured depth and color images of a toy car, the improved region-segmentation-based joint-bilateral filter raises the peak signal-to-noise ratio (PSNR) of the depth images by approximately 4.57 dB, compared with 1.16 dB for the plain joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and capability of the proposed method. © 2015, Tianjin University of Technology and Springer-Verlag Berlin Heidelberg.
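The depth-enhancement step described in the abstract can be illustrated with a minimal joint-bilateral filter: the range kernel is computed from the aligned color image rather than the noisy depth map, so depth discontinuities that coincide with color edges are preserved while depth noise is smoothed. This is a sketch of the basic joint-bilateral idea only; the paper's region-segmentation refinement is not reproduced here, and all parameter names and defaults (`radius`, `sigma_s`, `sigma_r`) are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_filter(depth, color, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth a depth map using the aligned color image as the guide.

    Illustrative sketch: parameter names and defaults are assumptions,
    not the paper's values, and the region-segmentation variant the
    paper proposes is not implemented here.
    """
    h, w = depth.shape
    # Use a grayscale version of the color image as the guidance signal.
    gray = color.mean(axis=2) if color.ndim == 3 else color.astype(float)
    pad_d = np.pad(depth.astype(float), radius, mode="edge")
    pad_g = np.pad(gray, radius, mode="edge")

    # Precompute the spatial Gaussian kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))

    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            win_d = pad_d[y:y + 2*radius + 1, x:x + 2*radius + 1]
            win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
            # Range weights come from the color guide, not the noisy depth.
            rng = np.exp(-(win_g - gray[y, x])**2 / (2.0 * sigma_r**2))
            # Kinect encodes missing depth as 0; give those samples no weight,
            # which also fills small holes from valid neighbors.
            wgt = spatial * rng * (win_d > 0)
            s = wgt.sum()
            out[y, x] = (wgt * win_d).sum() / s if s > 0 else 0.0
    return out
```

Because invalid (zero) depth samples are excluded from the weighted average, the same pass both denoises the depth map and fills small holes, which is what the PSNR comparison in the abstract measures.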
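The colored truncated signed distance field (TSDF) fusion can likewise be sketched at the level of a single voxel: each voxel keeps a weight, a running average of the truncated signed distance, and a running average of the observed RGB color, updated per frame. This is a generic colored-TSDF update under assumed names and constants (`trunc`, `max_w`); the paper's exact formulation may weight or truncate differently.

```python
import numpy as np

def integrate_voxel(tsdf, weight, rgb, sdf_obs, rgb_obs, trunc=0.05, max_w=50.0):
    """One colored-TSDF voxel update from a new registered depth/color frame.

    Illustrative sketch; `trunc` (truncation distance, meters) and
    `max_w` (weight cap) are assumed constants, not the paper's values.
    """
    # Truncate the observed signed distance and normalize it to [-1, 1].
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)
    # Running weighted averages of distance and color; capping the weight
    # keeps the model responsive to new observations.
    tsdf_new = (tsdf * weight + d) / (weight + 1.0)
    rgb_new = (np.asarray(rgb, float) * weight + np.asarray(rgb_obs, float)) / (weight + 1.0)
    w_new = min(weight + 1.0, max_w)
    return tsdf_new, w_new, rgb_new
```

Storing color alongside distance in each voxel is what lets the subsequent ray-casting step interpolate an RGB value at the zero-crossing it finds, producing the fully colored rendering described in the abstract.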
Pages: 153-156