Human-level control through deep reinforcement learning

Cited by: 18208
Authors
Mnih, Volodymyr [1 ]
Kavukcuoglu, Koray [1 ]
Silver, David [1 ]
Rusu, Andrei A. [1 ]
Veness, Joel [1 ]
Bellemare, Marc G. [1 ]
Graves, Alex [1 ]
Riedmiller, Martin [1 ]
Fidjeland, Andreas K. [1 ]
Ostrovski, Georg [1 ]
Petersen, Stig [1 ]
Beattie, Charles [1 ]
Sadik, Amir [1 ]
Antonoglou, Ioannis [1 ]
King, Helen [1 ]
Kumaran, Dharshan [1 ]
Wierstra, Daan [1 ]
Legg, Shane [1 ]
Hassabis, Demis [1 ]
Affiliations
[1] Google DeepMind, London EC4A 3TW, England
Keywords
RECOGNITION;
DOI
10.1038/nature14236
CLC classification numbers
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biosciences]; N [General natural sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
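For context, the deep Q-network described in the abstract builds on Watkins's Q-learning (reference [30] below), replacing the tabular action-value function with a deep convolutional network trained end-to-end from raw pixels. The sketch below illustrates only the underlying temporal-difference Q-learning update on a toy tabular problem; the state/action counts and hyperparameters are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal tabular Q-learning sketch (cf. Watkins & Dayan, 1992, reference [30]).
# The deep Q-network of the paper replaces the table Q[s, a] with a convolutional
# network over raw pixels; this only illustrates the temporal-difference update.

n_states, n_actions = 16, 4              # hypothetical toy environment sizes
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # hypothetical learning rate, discount, exploration

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def epsilon_greedy(state):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(s, a, r, s_next, done):
    """Move Q(s, a) toward the TD target r + gamma * max_a' Q(s', a')."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Example of a single hypothetical transition (state 0, action 2, reward 1.0, next state 5):
q_update(0, 2, 1.0, 5, done=False)
```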
Pages: 529-533
Page count: 5
Related papers
30 records in total
  • [21] Riedmiller M. Lecture Notes in Artificial Intelligence, 2005, 3720: 317. DOI: 10.1007/11564096_32
  • [22] Riedmiller M, Gabel T, Hafner R, Lange S. Reinforcement learning for robot soccer. Autonomous Robots, 2009, 27(1): 55-73
  • [23] Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science, 1997, 275(5306): 1593-1599
  • [24] Serre T. Proc. CVPR IEEE, 2005: 994
  • [25] Sigala N, Logothetis NK. Visual categorization shapes feature selectivity in the primate temporal cortex. Nature, 2002, 415(6869): 318-320
  • [26] Sutton RS. Adaptive Computation and Machine Learning, 2018: 1
  • [27] Tesauro G. Temporal difference learning and TD-Gammon. Communications of the ACM, 1995, 38(3): 58-68
  • [28] Tsitsiklis JN, Van Roy B. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997, 42(5): 674-690
  • [29] van der Maaten L. Journal of Machine Learning Research, 2008, 9: 2579
  • [30] Watkins CJCH. Machine Learning, 1992, 8: 279. DOI: 10.1007/BF00992698