Empirical tests of the Gradual Learning Algorithm

Cited by: 341
Authors
Boersma, P
Hayes, B
Affiliations
[1] Univ Amsterdam, Inst Phonet Sci, NL-1016 CG Amsterdam, Netherlands
[2] Univ Calif Los Angeles, Dept Linguist, Los Angeles, CA 90095 USA
Keywords
learnability; optimality theory; variation; Ilokano; Finnish
DOI
10.1162/002438901554586
Chinese Library Classification (CLC)
H0 [Linguistics];
Discipline classification codes
030303 ; 0501 ; 050102 ;
Abstract
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient well-formedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
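For illustration, the Python sketch below shows a GLA-style stochastic ranking update of the general kind the abstract describes: each constraint carries a numeric ranking value, evaluation adds noise to those values, and on a learning error the constraints favoring the learner's incorrect output are demoted while those favoring the observed datum are promoted. The noise distribution, plasticity value, and all function and variable names here are assumptions made for the sketch, not details taken from the paper.

```python
import random

def evaluate(ranking_values, noise_sd=2.0):
    """Return constraints ordered by ranking value plus Gaussian evaluation noise.
    (Gaussian noise and its standard deviation are assumptions of this sketch.)"""
    noisy = {c: v + random.gauss(0.0, noise_sd) for c, v in ranking_values.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

def gla_update(ranking_values, learner_violations, datum_violations, plasticity=0.1):
    """On a mismatch between the learner's output and the observed datum,
    nudge ranking values by a small plasticity step (value assumed here):
    demote constraints that the datum violates more than the learner's form,
    promote constraints that the learner's form violates more than the datum."""
    for c in ranking_values:
        if datum_violations.get(c, 0) > learner_violations.get(c, 0):
            ranking_values[c] -= plasticity   # constraint favored the learner's wrong form
        elif datum_violations.get(c, 0) < learner_violations.get(c, 0):
            ranking_values[c] += plasticity   # constraint favors the observed datum

# Hypothetical usage: two constraints, one learning error observed.
ranks = {"FAITH": 100.0, "MARKEDNESS": 100.0}
gla_update(ranks, learner_violations={"FAITH": 1}, datum_violations={"MARKEDNESS": 1})
print(evaluate(ranks))
```

Because updates are small and evaluation is noisy, repeated exposure lets two constraints settle at nearby ranking values and surface in either order on different evaluations, which is how this style of learner can model free variation and gradient judgments.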
Pages: 45-86
Page count: 42