Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges

Cited by: 709
Authors
Feng, Di [1 ,2 ]
Haase-Schütz, Christian [3, 4]
Rosenbaum, Lars [1 ]
Hertlein, Heinz [3 ]
Gläser, Claudius [1]
Timm, Fabian [1 ]
Wiesbeck, Werner [4 ]
Dietmayer, Klaus [2 ]
Affiliations
[1] Robert Bosch GmbH, Corp Res, Driver Assistance Syst & Automated Driving, D-71272 Renningen, Germany
[2] Ulm Univ, Inst Measurement Control & Microtechnol, D-89081 Ulm, Germany
[3] Robert Bosch GmbH, Chassis Syst Control, Engn Cognit Syst, Automated Driving, D-74232 Abstatt, Germany
[4] Karlsruhe Inst Technol, Inst Radio Frequency Engn & Elect, D-76131 Karlsruhe, Germany
Keywords
Multi-modality; object detection; semantic segmentation; deep learning; autonomous driving; neural networks; road; fusion; LiDAR; environments; set
DOI
10.1109/TITS.2020.2972974
Chinese Library Classification Code
TU [Building Science];
Subject Classification Code
0813;
Abstract
Recent advancements in perception for autonomous driving are driven by deep learning. To achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g., cameras, LiDARs, radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of "what to fuse", "when to fuse", and "how to fuse" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
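The "what/when/how to fuse" questions raised in the abstract can be illustrated with a toy sketch. The code below is not any specific method from the survey; it only contrasts two ends of the design space the paper discusses: early (feature-level) fusion, which concatenates modality features before further processing, and late (decision-level) fusion, which combines per-class scores from independent detectors. All names and dimensions are hypothetical.

```python
import numpy as np

def early_fusion(cam_feat: np.ndarray, lidar_feat: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate modality features before any shared head.
    Toy stand-in for 'early fusion' -- real systems fuse learned feature maps."""
    return np.concatenate([cam_feat, lidar_feat], axis=-1)

def late_fusion(cam_scores: np.ndarray, lidar_scores: np.ndarray) -> np.ndarray:
    """Decision-level fusion: average class scores from independent detectors.
    Toy stand-in for 'late fusion' -- real systems may use learned weighting."""
    return 0.5 * (cam_scores + lidar_scores)

# Hypothetical feature vectors for one region of interest.
cam_feat = np.random.rand(64)
lidar_feat = np.random.rand(32)
fused = early_fusion(cam_feat, lidar_feat)
print(fused.shape)  # (96,)

# Hypothetical per-class scores (e.g., [car, background]) from each detector.
cam_scores = np.array([0.9, 0.1])
lidar_scores = np.array([0.7, 0.3])
print(late_fusion(cam_scores, lidar_scores))  # [0.8 0.2]
```

Middle (deep) fusion, which the survey also covers, would sit between these extremes by exchanging intermediate feature maps across network layers rather than raw inputs or final scores.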
Pages: 1341 - 1360
Page count: 20