Robot homing by exploiting panoramic vision

Cited by: 84
Authors
Argyros, AA [1]
Bekris, KE
Orphanoudakis, SC
Kavraki, LE
Affiliations
[1] Fdn Res & Technol Hellas FORTH, Inst Comp Sci, Iraklion, Crete, Greece
[2] Rice Univ, Dept Comp Sci, Houston, TX 77251 USA
Keywords
robot homing; omni-directional vision; panoramic cameras; vision-based robot navigation
DOI
10.1007/s10514-005-0603-7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We propose a novel, vision-based method for robot homing, the problem of computing a route so that a robot can return to its initial "home" position after the execution of an arbitrary "prior" path. The method assumes that the robot tracks visual features in panoramic views of the environment that it acquires as it moves. By exploiting only angular information regarding the tracked features, a local control strategy moves the robot between two positions, provided that there are at least three features that can be matched in the panoramas acquired at these positions. The strategy is successful when certain geometric constraints on the configuration of the two positions relative to the features are fulfilled. In order to achieve long-range homing, the features' trajectories are organized in a visual memory during the execution of the "prior" path. When homing is initiated, the robot selects Milestone Positions (MPs) on the "prior" path by exploiting information in its visual memory. The MP selection process aims at picking positions that guarantee the success of the local control strategy between two consecutive MPs. The sequential visit of successive MPs successfully guides the robot even if the visual context in the "home" position is radically different from the visual context at the position where homing was initiated. Experimental results from a prototype implementation of the method demonstrate that homing can be achieved with high accuracy, independent of the distance traveled by the robot. The contribution of this work is that it shows how a complex navigational task such as homing can be accomplished efficiently, robustly and in real-time by exploiting primitive visual cues. Such cues carry implicit information regarding the 3D structure of the environment. Thus, the computation of explicit range information and the existence of a geometric map are not required.
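The abstract describes a local control strategy driven purely by the bearings of at least three features matched between two panoramas. The record does not reproduce the paper's control law, so the sketch below is only a minimal illustration of one well-known bearing-only scheme (an average-landmark-vector-style computation) showing the kind of angular input such a strategy consumes; the function `homing_vector`, its argument names, and the assumption of a shared angular reference (e.g. a compass or a common anchor feature) between the two panoramas are illustrative choices, not the authors' method.

```python
import math


def homing_vector(current_bearings, target_bearings):
    """Bearing-only homing sketch (not the paper's exact control law).

    current_bearings / target_bearings: angles (radians) to the same
    matched features, measured in the panorama at the current position
    and at the position to be reached (e.g. the next milestone position).
    At least three matched features are assumed, mirroring the paper's
    requirement for the local control strategy. Both bearing sets are
    assumed to share a common angular reference.
    Returns a unit vector (dx, dy) pointing roughly toward the target.
    """
    if len(current_bearings) < 3 or len(current_bearings) != len(target_bearings):
        raise ValueError("need at least three matched feature bearings")

    # Average-landmark-vector style: represent each bearing as a unit
    # vector, average them at both positions, and move along the
    # difference of the two averages. Under the usual isotropic-landmark
    # assumptions this difference points approximately toward the target.
    def avg_vector(bearings):
        x = sum(math.cos(b) for b in bearings) / len(bearings)
        y = sum(math.sin(b) for b in bearings) / len(bearings)
        return x, y

    cx, cy = avg_vector(current_bearings)
    tx, ty = avg_vector(target_bearings)
    dx, dy = cx - tx, cy - ty

    norm = math.hypot(dx, dy)
    if norm < 1e-9:
        return 0.0, 0.0  # bearings already agree: target (approximately) reached
    return dx / norm, dy / norm


if __name__ == "__main__":
    # Toy example: three landmarks whose bearings shift between positions.
    current = [math.radians(a) for a in (10.0, 130.0, 250.0)]
    target = [math.radians(a) for a in (30.0, 120.0, 245.0)]
    print(homing_vector(current, target))
```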
Pages: 7-25
Number of pages: 19