State-chain sequential feedback reinforcement learning for path planning of autonomous mobile robots

Cited by: 18
Authors
Ma, Xin [1 ]
Xu, Ya [1 ]
Sun, Guo-qiang [1 ]
Deng, Li-xia [1 ]
Li, Yi-bin [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Peoples R China
Source
JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE C-COMPUTERS & ELECTRONICS | 2013, Vol. 14, Issue 3
Funding
National Natural Science Foundation of China
Keywords
Path planning; Q-learning; Autonomous mobile robot; Reinforcement learning; Genetic algorithms; Initialization; Environments; Optimization; Exploration; Navigation; Knowledge; Strategy
DOI
10.1631/jzus.C1200226
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
This paper presents a new Q-learning-based approach to mobile robot path planning in complex unknown static environments. As a computational approach to learning through interaction with the environment, reinforcement learning has been widely used for intelligent robot control, especially for autonomous mobile robots. However, the learning process is slow, and practical applications require rapid convergence. To address the slow convergence and long learning time of Q-learning-based mobile robot path planning, a state-chain sequential feedback Q-learning algorithm is proposed for quickly finding the optimal path of a mobile robot in complex unknown static environments. The state chain is built during the search process. After an action is chosen and the reward is received, the Q-values of the state-action pairs on the previously built state chain are sequentially updated with one-step Q-learning. As the number of Q-values updated after each action grows, the number of steps (state transitions) required for convergence decreases, and hence so does the learning time. Extensive simulations validate the efficiency of the proposed approach for mobile robot path planning in complex environments. The results show that the new approach converges quickly and that the robot finds the collision-free optimal path in complex unknown static environments in much less time than with the one-step Q-learning algorithm or the Q(lambda)-learning algorithm.
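To make the update scheme described in the abstract concrete, the sketch below shows how the state-chain sequential feedback step might look in Python for a discrete grid world. Everything beyond what the abstract states is an assumption rather than the authors' implementation: the parameter values, the epsilon-greedy policy, the reward interface, the backward sweep order over the chain, and the helper names (env_step, run_episode) are all illustrative.

```python
import random
from collections import defaultdict

# A minimal sketch of state-chain sequential feedback Q-learning, assuming a
# discrete grid world. Parameter values, the reward scheme, and the helper
# names (env_step, run_episode) are illustrative, not taken from the paper.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four grid moves

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the grid actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def one_step_update(s, a, r, s2):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def sequential_feedback_update(chain, s, a, r, s2):
    """Apply the one-step update to the current transition, then sweep the
    previously built state chain so the new information propagates back to
    earlier state-action pairs in the same episode. Sweeping most recent
    first is an assumption about the update order."""
    one_step_update(s, a, r, s2)
    for (cs, ca, cr, cs2) in reversed(chain):
        one_step_update(cs, ca, cr, cs2)
    chain.append((s, a, r, s2))  # extend the chain with the new transition

def run_episode(env_step, start, goal, max_steps=500):
    """One training episode; env_step(state, action) -> (next_state, reward)
    is a hypothetical environment hook."""
    state, chain = start, []
    for _ in range(max_steps):
        action = choose_action(state)
        next_state, reward = env_step(state, action)
        sequential_feedback_update(chain, state, action, reward, next_state)
        if next_state == goal:
            break
        state = next_state
```

Under these assumptions, each real step costs one update per link of the chain instead of a single update, trading extra computation per action for fewer actual steps to convergence, which is the trade-off the abstract claims in comparison with one-step Q-learning and Q(lambda)-learning.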
Pages: 167-178
Number of pages: 12