Human-level concept learning through probabilistic program induction

Cited by: 1762
Authors
Lake, Brenden M. [1 ]
Salakhutdinov, Ruslan [2 ,3 ]
Tenenbaum, Joshua B. [4 ]
Affiliations
[1] NYU, Ctr Data Sci, New York, NY 10003 USA
[2] Univ Toronto, Dept Comp Sci, Toronto, ON M5S 3G4, Canada
[3] Univ Toronto, Dept Stat, Toronto, ON M5S 3G4, Canada
[4] MIT, Dept Brain & Cognit Sci, Cambridge, MA 02139 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
NEURAL-NETWORKS; MODEL; EXEMPLAR; LANGUAGE;
DOI
10.1126/science.aab3050
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07 ; 0710 ; 09 ;
Abstract
People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
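The abstract's central idea, scoring candidate concepts by how well a generative model explains a single observed example, can be illustrated with a toy sketch. This is not the paper's Bayesian Program Learning model (which composes stroke-level programs); it only shows the general Bayesian one-shot scoring scheme, with each class modeled, purely as an assumption for illustration, as an isotropic Gaussian centered on its single exemplar.

```python
import math

def log_likelihood(x, exemplar, sigma=1.0):
    """Log-density of feature vector x under an isotropic Gaussian
    centered on the class's single training exemplar."""
    return sum(-((a - b) ** 2) / (2 * sigma ** 2)
               - 0.5 * math.log(2 * math.pi * sigma ** 2)
               for a, b in zip(x, exemplar))

def one_shot_classify(x, exemplars):
    """Assign x to the class whose lone exemplar best explains it
    (maximum likelihood; with a uniform prior this is the Bayesian
    posterior-maximizing choice)."""
    return max(exemplars, key=lambda label: log_likelihood(x, exemplars[label]))

# One training example per class -- the "one-shot" setting:
exemplars = {"A": [0.0, 0.0], "B": [5.0, 5.0]}
print(one_shot_classify([0.5, -0.2], exemplars))  # -> A
```

The actual model replaces the Gaussian with rich, compositional programs over strokes, which is what lets it generalize from one example as well as humans do; the scoring logic above is only the skeleton of that idea.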
Pages: 1332-1338
Page count: 7
Related papers
60 records