Hierarchical reinforcement learning with the MAXQ value function decomposition

Cited by: 712
Author: Dietterich, TG [1]
Affiliation: [1] Oregon State Univ, Dept Comp Sci, Corvallis, OR 97331 USA
DOI: 10.1613/jair.639
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics, as a subroutine hierarchy, and a declarative semantics, as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, by Kaelbling, and by Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
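A compact sketch of the decomposition described in the abstract, written in the paper's Q/V/C notation (the pseudo-reward term used for subgoals is omitted here): the value of invoking a child action a inside a subtask i splits into the value of a itself plus a "completion" term,

    Q(i, s, a) = V(a, s) + C(i, s, a),
    V(a, s) = max_{a'} Q(a, s, a')    if a is a composite subtask,
    V(a, s) = E[ r | s, a ]           if a is a primitive action.

Unrolling this recursion along the path of subtasks a_0 (the root), a_1, ..., a_m (a primitive action) chosen by a hierarchical policy at state s yields the additive combination referred to above:

    V(a_0, s) = V(a_m, s) + C(a_{m-1}, s, a_m) + ... + C(a_0, s, a_1).

MAXQ-Q learns the completion functions C (and the values of the primitive actions) with temporal-difference-style updates, one table per node of the hierarchy.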
Pages: 227-303 (77 pages)
References (showing 10 of 33)
[1] [Anonymous], Tech. Rep. CS-95-10, Brown Univ., Dept. of Computer Science.
[2] [Anonymous], Thesis, University of California.
[3] [Anonymous], Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence.
[4] [Anonymous], Thesis, King's College, Oxford.
[5] Bellman R., 1957, Dynamic Programming.
[6] Bertsekas D. P., 1996, Neuro-Dynamic Programming, 1st ed.
[7] Boutilier C., 1995, Proceedings of the 14th International Joint Conference on Artificial Intelligence, p. 1104.
[8] Currie K., Tate A., 1991, "O-Plan: The open planning architecture," Artificial Intelligence, 52(1), pp. 49-86.
[9] Dayan P., 1992, Advances in Neural Information Processing Systems, vol. 5, p. 271, DOI 10.5555/2987061.2987095.
[10] Dietterich T. G., 1998, Machine Learning: Proceedings of the Fifteenth International Conference (ICML'98), p. 118.