Progressive LiDAR Adaptation for Road Detection

Cited by: 122
Authors
Chen, Zhe [1 ,2 ]
Zhang, Jing [3 ,4 ,5 ]
Tao, Dacheng [1 ,2 ]
Affiliations
[1] Univ Sydney, UBTECH Sydney Artificial Intelligence Ctr, Fac Engn & Informat Technol, Sydney, NSW 2008, Australia
[2] Univ Sydney, Sch Comp Sci, Fac Engn & Informat Technol, Sydney, NSW 2008, Australia
[3] Hangzhou Dianzi Univ, Sch Automat, Hangzhou, Zhejiang, Peoples R China
[4] Univ Technol Sydney, Sch Software, Sydney, NSW, Australia
[5] Univ Technol Sydney, Adv Analyt Inst, Sydney, NSW, Australia
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Autonomous driving; computer vision; deep learning; LiDAR processing; road detection;
DOI
10.1109/JAS.2019.1911459
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Despite rapid developments in visual image-based road detection, robustly identifying road areas in visual images remains challenging due to issues like illumination changes and blurry images. To this end, LiDAR sensor data can be incorporated to improve visual image-based road detection, because LiDAR data is less susceptible to visual noise. However, the main difficulty in introducing LiDAR information into visual image-based road detection is that LiDAR data and its extracted features do not share the same space as the visual data and visual features. Such gaps between spaces may limit the benefits of LiDAR information for road detection. To overcome this issue, we introduce a novel Progressive LiDAR adaptation-aided road detection (PLARD) approach that adapts LiDAR information to visual image-based road detection and improves detection performance. In PLARD, progressive LiDAR adaptation consists of two subsequent modules: 1) data space adaptation, which transforms the LiDAR data to the visual data space to align with the perspective view by applying an altitude difference-based transformation; and 2) feature space adaptation, which adapts LiDAR features to visual features through a cascaded fusion structure. Comprehensive empirical studies on the well-known KITTI road detection benchmark demonstrate that PLARD takes advantage of both the visual and LiDAR information, achieving much more robust road detection even in challenging urban scenes. In particular, PLARD outperforms other state-of-the-art road detection models and currently ranks at the top of the publicly accessible benchmark leaderboard.
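The data space adaptation step summarized above rests on an altitude difference-based transformation of LiDAR altitudes projected into the camera's perspective view. The sketch below is an illustration only, not the authors' released implementation: the function name, neighbourhood definition, and distance normalization are assumptions about how such a transformation could be computed from a dense perspective-view altitude map.

```python
# Hedged sketch of an altitude-difference-based transformation (ADT) for
# data space adaptation. All names and the neighbourhood scheme are
# illustrative assumptions, not PLARD's exact formulation.
import numpy as np

def altitude_difference_transform(altitude_map, radius=2):
    """Turn a perspective-view LiDAR altitude map into an altitude-difference
    image that emphasizes height discontinuities (e.g., road vs. curb).

    altitude_map : (H, W) float array of projected LiDAR altitudes
                   (invalid pixels may be NaN).
    radius       : neighbourhood radius in pixels over which differences
                   are accumulated.
    """
    adt = np.zeros_like(altitude_map, dtype=np.float32)
    counts = np.zeros_like(altitude_map, dtype=np.float32)

    # Accumulate |Z(x, y) - Z(x+dx, y+dy)| / pixel distance over a local
    # neighbourhood (edge wrap-around from np.roll is ignored for simplicity).
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            shifted = np.roll(np.roll(altitude_map, dy, axis=0), dx, axis=1)
            diff = np.abs(altitude_map - shifted) / np.hypot(dx, dy)
            valid = ~np.isnan(diff)
            adt[valid] += diff[valid]
            counts[valid] += 1.0

    counts[counts == 0] = 1.0          # avoid division by zero on empty pixels
    return adt / counts                # mean normalized altitude difference
```

Smooth road surfaces yield low responses under such a transformation while curbs, vehicles, and vegetation yield high responses, which is what makes the adapted LiDAR channel complementary to the visual image before the cascaded feature-space fusion.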
Pages: 693-702
Number of pages: 10