Never-Ending Learning

Cited by: 509
Authors
Mitchell, T. [1 ]
Cohen, W. [1 ]
Hruschka, E. [2 ]
Talukdar, P. [3 ]
Yang, B. [1 ]
Betteridge, J. [1 ]
Carlson, A. [4 ]
Dalvi, B. [1 ]
Gardner, M. [1 ]
Kisiel, B. [1 ]
Krishnamurthy, J. [1 ]
Lao, N. [4 ]
Mazaitis, K. [1 ]
Mohamed, T. [1 ]
Nakashole, N. [1 ]
Platanios, E. [1 ]
Ritter, A. [5 ]
Samadi, M. [1 ]
Settles, B. [6 ]
Wang, R. [1 ]
Wijaya, D. [1 ]
Gupta, A. [1 ]
Chen, X. [1 ]
Saparov, A. [1 ]
Greaves, M. [7 ]
Welling, J. [8 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Univ Fed Sao Carlos, Sao Carlos, SP, Brazil
[3] Indian Inst Sci, Bangalore, Karnataka, India
[4] Google Inc, Mountain View, CA USA
[5] Ohio State Univ, Columbus, OH 43210 USA
[6] Duolingo, Columbus, OH USA
[7] Alpine Data Labs, San Francisco, CA USA
[8] Pittsburgh Supercomp Ctr, Pittsburgh, PA USA
Funding
U.S. National Science Foundation; São Paulo Research Foundation (FAPESP, Brazil);
Keywords
Knowledge-based systems; Machine learning;
DOI
10.1145/3191513
Chinese Library Classification (CLC) Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
080201 [Mechanical Manufacturing and Automation];
Abstract
Whereas people learn many different types of knowledge from diverse experiences over many years, and become better learners over time, most current machine learning systems are much narrower, learning just a single function or data model from statistical analysis of a single data set. We suggest that people learn better than computers precisely because of this difference, and that a key direction for machine learning research is to develop software architectures that enable intelligent agents to likewise learn many types of knowledge, continuously over many years, and to become better learners over time. In this paper we define this never-ending learning paradigm for machine learning more precisely, and we present one case study: the Never-Ending Language Learner (NELL), which achieves a number of the desired properties of a never-ending learner. NELL has been learning to read the Web 24 hours a day since January 2010, and so far has acquired a knowledge base of 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time. NELL has also learned to reason over its knowledge base to infer new beliefs it has not yet read from those it has, and NELL is inventing new relational predicates to extend the ontology it uses to represent beliefs. We describe the design of NELL, experimental results illustrating its behavior, and discuss both its successes and shortcomings as a case study in never-ending learning. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
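The abstract describes NELL's knowledge base as a collection of confidence-weighted relational beliefs such as servedWith(tea, biscuits), over which inference rules derive new beliefs. The following minimal sketch illustrates that idea only; it is not NELL's implementation, and the class, the hand-written symmetry rule, and the confidence discount are illustrative assumptions (NELL learns its inference rules rather than hard-coding them).

```python
from typing import Dict, Tuple

# A belief is a (relation, subject, object) triple with a confidence score.
Belief = Tuple[str, str, str]

class BeliefStore:
    """Toy store of confidence-weighted beliefs (illustrative, not NELL's code)."""

    def __init__(self) -> None:
        self.beliefs: Dict[Belief, float] = {}

    def assert_belief(self, rel: str, subj: str, obj: str, conf: float) -> None:
        key = (rel, subj, obj)
        # Keep the highest confidence seen for a repeated belief.
        self.beliefs[key] = max(conf, self.beliefs.get(key, 0.0))

    def infer_symmetric(self, rel: str, discount: float = 0.9) -> None:
        # One hand-written rule of the kind NELL learns automatically:
        # if rel(x, y) holds, infer rel(y, x) with discounted confidence.
        for (r, s, o), conf in list(self.beliefs.items()):
            if r == rel:
                self.assert_belief(rel, o, s, conf * discount)

kb = BeliefStore()
kb.assert_belief("servedWith", "tea", "biscuits", 0.95)
kb.infer_symmetric("servedWith")
print(kb.beliefs[("servedWith", "biscuits", "tea")])
```

The discount factor stands in for the fact that inferred beliefs are held with less confidence than directly read ones, one of the properties the abstract attributes to NELL's reasoning over its knowledge base.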
Pages: 103-115
Page count: 13