Learning traversability models for autonomous mobile vehicles

Cited by: 33
Authors
Shneier, Michael [1 ]
Chang, Tommy [1 ]
Hong, Tsai [1 ]
Shackleford, Will [1 ]
Bostelman, Roger [1 ]
Albus, James S. [1 ]
Affiliation
[1] Natl Inst Stand & Technol, Gaithersburg, MD 20899 USA
Keywords
learning; traversability; classification; color models; texture; range; mobile robotics;
DOI
10.1007/s10514-007-9063-6
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle's navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
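The pipeline the abstract describes — learn appearance models for terrain regions without supervision, attach to each model a traversability cost derived from range-data analysis, then classify new color-image regions by appearance alone so that costs extend beyond stereo/ladar range — can be sketched roughly as follows. This is an illustrative simplification, not the paper's implementation: the `TerrainModel` class, the RGB-histogram representation (standing in for the paper's color *and* texture models), and the matching threshold are all assumptions introduced here.

```python
import numpy as np

class TerrainModel:
    """One learned terrain model: a color histogram plus a traversability
    cost. The cost is assumed to come from range-data analysis of the
    occupancy-grid cells whose pixels trained this model."""

    def __init__(self, n_bins=8):
        self.n_bins = n_bins
        self.hist = np.zeros((n_bins,) * 3)   # 3-D RGB histogram
        self.traversability = 0.0             # running mean of cell costs
        self.count = 0

    def _bin(self, rgb):
        # Map an (R, G, B) triple in 0..255 to histogram bin indices.
        idx = np.asarray(rgb) * self.n_bins // 256
        return tuple(np.clip(idx, 0, self.n_bins - 1).astype(int))

    def update(self, pixels, traversability_cost):
        # Accumulate colors from pixels that projected into one grid
        # cell, and fold in the cost the range analysis gave that cell.
        for rgb in pixels:
            self.hist[self._bin(rgb)] += 1
        self.traversability = (self.traversability * self.count
                               + traversability_cost) / (self.count + 1)
        self.count += 1

    def similarity(self, pixels):
        # Histogram-intersection similarity between this model and a
        # normalized histogram of the query pixels (0.0 .. 1.0).
        q = np.zeros_like(self.hist)
        for rgb in pixels:
            q[self._bin(rgb)] += 1
        h = self.hist / max(self.hist.sum(), 1)
        q = q / max(q.sum(), 1)
        return float(np.minimum(h, q).sum())

def classify(region_pixels, models, threshold=0.5):
    """Classify an image region using color only: return the cost stored
    in the best-matching model, or None if no model matches well enough
    (the system would then learn a new model for this appearance)."""
    best = max(models, key=lambda m: m.similarity(region_pixels))
    if best.similarity(region_pixels) < threshold:
        return None
    return best.traversability
```

Note the key property the abstract emphasizes: `classify` touches no range or position data — traversability was baked into the models during model building, so regions far beyond sensor range can still be assigned a cost from their color alone.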
Pages: 69-86
Number of pages: 18
References
21 entries in total
  • [1] ALBUS J, 2002, 4D RCS VERSION 2 0 R
  • [2] Albus J. S., 2001, Engineering of Mind: An Introduction to the Science of Intelligent Systems
  • [3] Albus, Jim; Bostelman, Roger; Chang, Tommy; Hong, Tsai; Shackleford, Will; Shneier, Michael. Learning in a hierarchical control system: 4D/RCS in the DARPA LAGR program [J]. JOURNAL OF FIELD ROBOTICS, 2006, 23 (11-12): 975-1003
  • [4] [Anonymous], INT TRANSP SYST C 20
  • [5] Chakravarty S., 1999, NAT ASS WELF RES STA
  • [6] CHANG T, 1999, P ROB APPL C SANT BA, P147
  • [7] DeSouza, GN; Kak, AC. Vision for mobile robot navigation: A survey [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2002, 24 (02): 237-267
  • [8] HADSELL R, 2006, LEARN WORKSH SNOWB U
  • [9] Howard A, 2001, JOINT 9TH IFSA WORLD CONGRESS AND 20TH NAFIPS INTERNATIONAL CONFERENCE, PROCEEDINGS, VOLS. 1-5, P7, DOI 10.1109/NAFIPS.2001.944218
  • [10] Jackel, L. D.; Krotkov, Eric; Perschbacher, Michael; Pippine, Jim; Sullivan, Chad. The DARPA LAGR Program: Goals, challenges, methodology, and phase I results [J]. JOURNAL OF FIELD ROBOTICS, 2006, 23 (11-12): 945-973