Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance

Cited by: 35
Authors
Knox, W. Bradley [1 ]
Stone, Peter [2 ]
Affiliations
[1] MIT, Media Lab, Cambridge, MA 02139 USA
[2] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
Funding
US National Science Foundation;
Keywords
Reinforcement learning; Modeling user behavior; End-user programming; Human-agent interaction; Interactive machine learning; Human teachers; ROBOT;
DOI
10.1016/j.artint.2015.03.009
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Several studies have demonstrated that reward from a human trainer can be a powerful feedback signal for control-learning algorithms. However, the space of algorithms for learning from such human reward has hitherto not been explored systematically. Using model-based reinforcement learning from human reward, this article investigates learning from human reward through six experiments, focusing on the relationships between reward positivity, how generally positive a trainer's reward values are; temporal discounting, the extent to which future reward is discounted in value; episodicity, whether task learning occurs in discrete learning episodes instead of one continuing session; and task performance, the agent's performance on the task the trainer intends to teach. This investigation is motivated by the observation that an agent can pursue different learning objectives, leading to different resulting behaviors. We search for learning objectives that lead the agent to behave as the trainer intends. We identify and empirically support a "positive circuits" problem with low discounting (i.e., high discount factors) for episodic, goal-based tasks, which arises from an observed bias among humans towards giving positive reward and results in an endorsement of myopic learning for such domains. We then show that converting simple episodic tasks to be non-episodic (i.e., continuing) reduces, and in some cases resolves, issues present in episodic tasks with generally positive reward and, relatedly, enables highly successful learning with non-myopic valuation in multiple user studies. The primary learning algorithm introduced in this article, which we call "VI-TAMER", is the first algorithm to successfully learn non-myopically from reward generated by a human trainer; we also empirically show that such non-myopic valuation facilitates higher-level understanding of the task. Anticipating the complexity of real-world problems, we perform further studies, one with a failure state added, that compare (1) learning when states are updated asynchronously with local bias, i.e., states quickly reachable from the agent's current state are updated more often than other states, to (2) learning with the fully synchronous sweeps across each state of the VI-TAMER algorithm. With these locally biased updates, we find that the general positivity of human reward creates problems even for continuing tasks, revealing a distinct research challenge for future work. (C) 2015 Elsevier B.V. All rights reserved.
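The "positive circuits" problem mentioned in the abstract can be made concrete with a small worked example. Below is a minimal, hypothetical Python sketch, not the authors' VI-TAMER implementation: the toy chain task and all reward values are invented for illustration. It runs value iteration over an assumed learned model of human reward in which every transition earns mildly positive reward (the positivity bias) and entering the terminal goal earns a larger reward. With myopic valuation (discount factor gamma = 0) the greedy agent enters the goal; with gamma = 0.99 the discounted value of circling forever on positive reward exceeds the one-time goal reward, so the agent never terminates.

# Minimal illustrative sketch (assumption: toy chain task and reward values are
# invented; this is not the paper's code or experimental domain).
import numpy as np

N_STATES = 5            # states 0..4; state 4 is a terminal goal
GOAL = N_STATES - 1
ACTIONS = (0, 1)        # 0 = stay in place, 1 = advance one step toward the goal

def next_state(s, a):
    return min(s + 1, GOAL) if a == 1 else s

def human_reward(s, a):
    # Hypothetical learned model of human reward: mildly positive everywhere
    # (reward positivity), with a larger reward for entering the goal.
    return 1.0 if (a == 1 and next_state(s, a) == GOAL) else 0.2

def value_iteration(gamma, iters=1000):
    V = np.zeros(N_STATES)              # terminal goal keeps value 0 (episode ends)
    for _ in range(iters):
        for s in range(GOAL):           # never update the terminal state
            V[s] = max(human_reward(s, a) + gamma * V[next_state(s, a)]
                       for a in ACTIONS)
    return V

def greedy_action(s, gamma):
    V = value_iteration(gamma)
    return max(ACTIONS, key=lambda a: human_reward(s, a) + gamma * V[next_state(s, a)])

for gamma in (0.0, 0.99):
    a = greedy_action(GOAL - 1, gamma)  # decision in the state adjacent to the goal
    print(f"gamma = {gamma}:", "advances to the goal" if a == 1 else
          "stays on the positive circuit")

In this toy setting, the high-discount-factor agent avoids the goal because termination cuts off an otherwise unbounded stream of positive reward, which is the intuition behind the abstract's move from episodic to continuing task formulations.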
Pages: 24-50
Page count: 27