A Primer on Neural Network Models for Natural Language Processing

Cited by: 522
Author
Goldberg, Yoav [1]
Affiliation
[1] Bar-Ilan University, Department of Computer Science, Ramat Gan, Israel
Keywords
Distributed representations
DOI
10.1613/jair.4992
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. It covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks, and recursive networks, as well as the computation-graph abstraction for automatic gradient computation.
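Of the topics the abstract lists, the computation-graph abstraction lends itself to a compact illustration: each operation is recorded as a node that remembers its inputs and a local-gradient rule, and a backward pass walks the graph in reverse to accumulate gradients (reverse-mode automatic differentiation). The sketch below is not code from the primer; the Value class, its operator set, and the tiny example network are illustrative assumptions, using only the Python standard library.

```python
# Minimal sketch of a computation graph with reverse-mode autodiff.
# Hypothetical Value class for illustration; not from the primer.
import math

class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data          # scalar stored at this graph node
        self.grad = 0.0           # accumulated d(output)/d(node)
        self.parents = parents    # upstream nodes this value depends on
        self.grad_fns = grad_fns  # local gradient w.r.t. each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def tanh(self):
        t = math.tanh(self.data)
        return Value(t, (self,), (lambda g: g * (1 - t * t),))

    def backward(self):
        # Topologically order the graph, then push gradients backwards.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, fn in zip(v.parents, v.grad_fns):
                p.grad += fn(v.grad)

# One hidden unit of a feed-forward network: y = tanh(w1*x1 + w2*x2 + b)
x1, x2 = Value(0.5), Value(-1.0)
w1, w2, b = Value(0.3), Value(0.8), Value(0.1)
y = ((w1 * x1) + (w2 * x2) + b).tanh()
y.backward()
print(y.data, w1.grad, w2.grad)  # gradients computed automatically
```

Because the graph, not the user, carries the differentiation logic, the same backward() call serves any network built from these operations; this is the property that lets frameworks generalize from feed-forward to convolutional, recurrent, and recursive architectures.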
Pages: 345-420
Page count: 76