A Winnow-based approach to context-sensitive spelling correction

Cited by: 118
Authors
Golding, AR
Roth, D
Affiliations
[1] Mitsubishi Elect Res Lab, Cambridge, MA 02139 USA
[2] Univ Illinois, Dept Comp Sci, Urbana, IL 61801 USA
Keywords
Winnow; multiplicative weight-update algorithms; context-sensitive spelling correction; Bayesian classifiers
DOI
10.1023/A:1007545901558
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A large class of machine-learning problems in natural language requires the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts depend on only a small subset of the features in the space. Under such conditions, multiplicative weight-update algorithms such as Winnow have been shown to have exceptionally good theoretical properties. In the work reported here, we present an algorithm combining variants of Winnow and weighted-majority voting, and apply it to a problem in the aforementioned class: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a statistics-based method representing the state of the art for this task. We find: (1) When run with a full (unpruned) set of features, WinSpell achieves accuracies significantly higher than BaySpell was able to achieve in either the pruned or unpruned condition; (2) When compared with other systems in the literature, WinSpell exhibits the highest performance; (3) While several aspects of WinSpell's architecture contribute to its superiority over BaySpell, the primary factor is that it is able to learn a better linear separator than BaySpell learns; (4) When run on a test set drawn from a different corpus than the training set was drawn from, WinSpell is better able than BaySpell to adapt, using a strategy we will present that combines supervised learning on the training set with unsupervised learning on the (noisy) test set.
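The multiplicative weight-update idea the abstract refers to can be illustrated with a minimal sketch of the classic Winnow algorithm (Littlestone-style promotion/demotion over binary features). This is not the paper's WinSpell implementation; the function names and the toy task below are illustrative assumptions.

```python
# Minimal Winnow sketch: weights start at 1 and are updated
# multiplicatively only on mistakes, which yields mistake bounds
# that grow logarithmically in the number of features -- the reason
# Winnow suits high-dimensional, sparse-target problems like
# modeling linguistic context.

def winnow_train(examples, n_features, alpha=2.0, epochs=10):
    """Learn a linear separator over binary feature vectors."""
    weights = [1.0] * n_features
    theta = float(n_features)   # conventional threshold: theta = n
    for _ in range(epochs):
        for x, y in examples:
            score = sum(w for w, xi in zip(weights, x) if xi)
            pred = 1 if score >= theta else 0
            if pred == 0 and y == 1:
                # false negative: promote weights of active features
                weights = [w * alpha if xi else w
                           for w, xi in zip(weights, x)]
            elif pred == 1 and y == 0:
                # false positive: demote weights of active features
                weights = [w / alpha if xi else w
                           for w, xi in zip(weights, x)]
    return weights, theta

def winnow_predict(weights, theta, x):
    score = sum(w for w, xi in zip(weights, x) if xi)
    return 1 if score >= theta else 0
```

Because updates are multiplicative, weights of the few relevant features grow exponentially fast while irrelevant weights are driven down, so the number of mistakes scales with the number of relevant features times the log of the total dimensionality.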
Pages: 107-130
Page count: 24