Visual Attention for Robotic Cognition: A Survey

Cited by: 44
Authors
Begum, Momotaz [1]
Karray, Fakhri [2]
Affiliations
[1] Georgia Inst Technol, Sch Interact Comp, Atlanta, GA USA
[2] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1, Canada
Keywords
Human-robot interaction; joint attention; overt attention; robotic cognition; visual attention; OBJECT-BASED ATTENTION; FEATURE-INTEGRATION-THEORY; SELECTIVE ATTENTION; BIASED COMPETITION; NEURAL MECHANISMS; GUIDED SEARCH; MODEL; IMITATION; MODULATION; GENERATION;
DOI
10.1109/TAMD.2010.2096505
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
The goal of cognitive robotics research is to design robots with human-like (albeit less complex) cognition in perception, reasoning, action planning, and decision making. This venture has produced robots equipped with a large, often redundant, set of sensors and actuators so that they can perceive the world and act upon it in a human-like fashion. A major challenge in working with such robots is managing the enormous amount of information continuously arriving through multiple sensors. Primates master this information-management skill through their custom-built attention mechanism. Mimicking the attention behavior of primates has therefore gained tremendous popularity in robotics research in recent years (Bar-Cohen et al., Biologically Inspired Intelligent Robots, 2003, and B. Webb et al., Biorobotics, 2003). The difficulty of managing redundant information, however, is most severe in the case of robot visual perception: even a moderately sized image of a natural scene generally contains enough visual information to overload the online decision-making process of an autonomous robot. Modeling a primate-like visual attention mechanism for robots is therefore becoming increasingly popular among robotics researchers. A visual attention model enables a robot to selectively (and autonomously) choose a "behaviorally relevant" segment of the visual information for further processing while largely excluding the rest. This paper sheds light on the ongoing journey of robotics research toward a visual attention model that can serve as a component of cognition in modern-day robots.
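To make the selection step concrete, the sketch below (not taken from the survey; a minimal illustration assuming a single grayscale intensity channel, NumPy only, and hypothetical function names) computes an Itti-Koch-style bottom-up saliency map from center-surround contrast at a few scales and returns the most salient pixel as the candidate focus of attention. Full models of the kind the survey reviews would add color, orientation, and motion channels plus top-down biasing; this is only the skeleton of the idea.

import numpy as np

def box_blur(img, k):
    # Box filter of odd size k, implemented with a summed-area table
    # so the sketch needs only NumPy.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    sat = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    sat = np.pad(sat, ((1, 0), (1, 0)))  # zero row/col for clean window sums
    h, w = img.shape
    win = (sat[k:k + h, k:k + w] - sat[:h, k:k + w]
           - sat[k:k + h, :w] + sat[:h, :w])
    return win / (k * k)

def saliency_map(gray):
    # Center-surround contrast at a few (center, surround) scale pairs,
    # each normalized to [0, 1] and summed -- a crude stand-in for the
    # multi-channel conspicuity maps used in Itti-Koch-style models.
    sal = np.zeros_like(gray, dtype=float)
    for center, surround in [(3, 15), (5, 31), (7, 63)]:
        cs = np.abs(box_blur(gray, center) - box_blur(gray, surround))
        rng = cs.max() - cs.min()
        if rng > 0:
            cs = (cs - cs.min()) / rng
        sal += cs
    return sal / max(sal.max(), 1e-12)

def focus_of_attention(gray):
    # Return the (row, col) of the most salient location: the candidate
    # "behaviorally relevant" region a robot would process first.
    sal = saliency_map(np.asarray(gray, dtype=float))
    return np.unravel_index(np.argmax(sal), sal.shape)

if __name__ == "__main__":
    img = np.zeros((120, 160))
    img[50:70, 90:110] = 1.0          # a bright patch pops out from the background
    print(focus_of_attention(img))    # prints a location on the bright patch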
Pages: 92-105
Number of pages: 14
Cited references
117 items in total
[71]  Manfredi L., Maini E. S., Dario P., Laschi C., Girard B., Tabareau N., Berthoz A. Implementation of a neurophysiological model of saccadic eye movements on an anthropomorphic robotic head. Proc. 6th IEEE-RAS International Conference on Humanoid Robots, 2006, pp. 438 ff.
[72]  McGuire P. Proc. 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2002, p. 1082. DOI: 10.1109/IRDS.2002.1043875.
[73]  Metta G. Proc. IEEE-RAS International Conference on Humanoid Robots, 2001.
[74]  Milanese R. Proc. 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, p. 781. DOI: 10.1109/CVPR.1994.323898.
[75]  Milanese R. Ph.D. thesis, University of Geneva, Geneva, 1993.
[76]  Murata A. Proc. IEEE International Conference on Systems, Man, and Cybernetics, 1999, p. 60.
[77]  Nagai Y. Proc. 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 2003, p. 168.
[78]  Nagai Y., Asada M., Hosoda K. A developmental approach accelerates learning of joint attention. Proc. 2nd International Conference on Development and Learning, 2002, pp. 277-282.
[79]  Nagai Y. IEEE Transactions on Autonomous Mental Development, vol. 1, 2009.
[80]  Navalpakkam V., Itti L. Top-down attention selection is fine grained. Journal of Vision, 2006, 6(11): 1180-1193.