Developing haptic and visual perceptual categories for reaching and grasping with a humanoid robot

Cited by: 42
Authors
Coelho, J [1 ]
Piater, J [1 ]
Grupen, R [1 ]
Affiliations
[1] Univ Massachusetts, Lab Perceptual Robot, Dept Comp Sci, Amherst, MA 01003 USA
Funding
National Science Foundation (USA)
Keywords
humanoid; learning; haptics; vision; development;
DOI
10.1016/S0921-8890(01)00158-0
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Properties of the human embodiment - sensorimotor apparatus and neurological structure - participate directly in the growth and development of cognitive processes against enormous worst-case complexity. It is our position that relationships between morphology and perception, over time, lead to increasingly comprehensive models that describe the agent's relationship to the world. We are applying insight derived from neuroscience, neurology, and developmental psychology to the design of advanced robot architectures. To investigate developmental processes, we have begun to approximate the human sensorimotor configuration and to engage sensory and motor subsystems in developmental sequences. Many such sequences have been documented in studies of infant development, so we intend to bootstrap cognitive structures in robots by emulating some of these growth processes in robots that bear an essential resemblance to the human morphology. In this paper, we will show two related examples in which a humanoid robot determines the models and representations that govern its behavior. The first is a model that captures the dynamics of a haptic exploration of an object with a dextrous robot hand that supports skillful grasping. The second example constructs constellations of visual features to predict relative hand/object postures that lead reliably to haptic utility. The result is a first step in a trajectory toward associative visual-haptic categories that bounds the incremental complexity of each stage of development. (C) 2001 Published by Elsevier Science B.V.
Pages: 195-218
Page count: 24