A syntactic approach to robot imitation learning using probabilistic activity grammars

Cited by: 51
Authors
Lee, Kyuhwa [1 ]
Su, Yanyu [1 ]
Kim, Tae-Kyun [1 ]
Demiris, Yiannis [1 ]
Affiliation
[1] Univ London Imperial Coll Sci Technol & Med, Dept Elect & Elect Engn, Personal Robot Lab, London SW7 2BT, England
Keywords
Robot imitation learning; Probabilistic grammars; Activity representation; RECOGNITION; ROUTE; TASKS;
DOI: 10.1016/j.robot.2013.08.003
Chinese Library Classification (CLC): TP [automation technology, computer technology]
Discipline code: 0812
Abstract
This paper describes a syntactic approach to imitation learning that captures important task structures in the form of probabilistic activity grammars from a reasonably small number of samples under noisy conditions. We show that these learned grammars can be recursively applied to help recognize unforeseen, more complicated tasks that share underlying structures. The grammars constrain an observation to be consistent with previously observed behaviors, which makes it possible to correct unexpected, out-of-context actions caused by errors of the observer and/or demonstrator. To achieve this goal, our method (1) actively searches for frequently occurring action symbols that are subsets of input samples to uncover the hierarchical structure of the demonstration, and (2) considers the uncertainties of input symbols due to imperfect low-level detectors. We evaluate the proposed method using both synthetic data and two sets of real-world humanoid robot experiments. In our Towers of Hanoi experiment, the robot learns the important constraints of the puzzle after observing demonstrators solving it. In our Dance Imitation experiment, the robot learns three types of dances from human demonstrations. The results suggest that, under a reasonable amount of noise, our method is capable of capturing reusable task structures and generalizing them to cope with recursions. (C) 2013 Elsevier B.V. All rights reserved.
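The core idea sketched in the abstract — uncovering hierarchical task structure by repeatedly replacing frequently occurring action subsequences with grammar non-terminals — can be illustrated with a simplified, Sequitur-style digram substitution. This is not the authors' probabilistic method (it ignores symbol uncertainties and rule probabilities); all names below are illustrative:

```python
from collections import Counter

def induce_rules(sequences, max_rules=5, min_count=2):
    """Greedily replace the most frequent adjacent symbol pair with a
    new non-terminal, yielding a simple hierarchical rule set."""
    rules = {}
    seqs = [list(s) for s in sequences]
    for i in range(max_rules):
        # count all adjacent symbol pairs across the demonstrations
        pairs = Counter()
        for s in seqs:
            for a, b in zip(s, s[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        nt = f"R{i}"        # fresh non-terminal symbol
        rules[nt] = (a, b)  # grammar rule: R_i -> a b
        # rewrite every sequence using the new rule
        new_seqs = []
        for s in seqs:
            out, j = [], 0
            while j < len(s):
                if j + 1 < len(s) and s[j] == a and s[j + 1] == b:
                    out.append(nt)
                    j += 2
                else:
                    out.append(s[j])
                    j += 1
            new_seqs.append(out)
        seqs = new_seqs
    return rules, seqs

# two noisy-free toy demonstrations sharing the sub-task "abc"
rules, compressed = induce_rules(["abcabc", "abcx"])
print(rules)       # {'R0': ('a', 'b'), 'R1': ('R0', 'c')}
print(compressed)  # [['R1', 'R1'], ['R1', 'x']]
```

The nested rule `R1 -> R0 c` shows how repeated action subsequences become reusable, hierarchical structure; the paper's method additionally attaches probabilities to rules and accounts for detector uncertainty.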
Pages: 1323-1334
Page count: 12
Cited References
47 references in total (entries [21]-[30] shown)
[21] Fod A., Mataric M.J., Jenkins O.C. Automated derivation of primitives for movement classification. Autonomous Robots, 2002, 12(1): 39-54.
[22] Gurbuz S. IEEE-RAS Int. Conf. Humanoid Robots, 2005: 363.
[23] Ivanov Y.A., Bobick A.F. Recognition of visual activities and interactions by stochastic parsing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 852-872.
[24] Kitani K.M., Sato Y., Sugimoto A. Recovering the basic structure of human activities from noisy video-based symbol strings. International Journal of Pattern Recognition and Artificial Intelligence, 2008, 22(8): 1621-1646.
[25] Kuniyoshi Y., Inaba M., Inoue H. Learning by watching: extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation, 1994, 10(6): 799-822.
[26] Langley P. Lecture Notes in Artificial Intelligence, 2000, 1810: 220.
[27] Lee K.H., Lee J., Thomaz A.L., Bobick A.F. Effective robot task learning by focusing on task-relevant objects. 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009: 2551-2556.
[28] Lee K. International Conference on Pattern Recognition, 2012: 3778.
[29] Lin C.D. 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Vols 1 and 2, 2008: 1.
[30] Lopes M., Santos-Victor J. Visual learning by imitation with motor representations. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2005, 35(3): 438-449.