Text classification based on multi-word with support vector machine

Cited by: 194
Authors
Zhang, Wen [1 ]
Yoshida, Taketoshi [1 ]
Tang, Xijin [2 ]
Affiliations
[1] Japan Adv Inst Sci & Technol, Sch Knowledge Sci, Tatsunokuchi, Ishikawa 9231292, Japan
[2] Chinese Acad Sci, Inst Syst Sci, Acad Math & Syst Sci, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Text classification; Multi-word; Feature selection; Information gain; Support vector machine;
DOI
10.1016/j.knosys.2008.03.044
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One of the main themes supporting text mining is text representation, that is, the task of finding appropriate terms to transform documents into numerical vectors. Recently, much effort has been invested in enriching text representation under the vector space model (VSM) to improve the performance of text mining techniques such as text classification and text clustering. The main concern of this paper is to investigate the effectiveness of using multi-words for text representation on text classification performance. Firstly, a practical method is proposed to extract multi-words from documents based on syntactic structure. Secondly, two strategies, general concept representation and subtopic representation, are presented for representing documents with the extracted multi-words. In particular, dynamic k-mismatch is proposed to determine the presence of a long multi-word that is a subtopic of a document's content. Finally, we carried out a series of experiments on classifying Reuters-21578 documents using the multi-word representations. We used the performance of representation with individual words as the baseline; this representation has the largest feature-set dimension and requires no linguistic preprocessing. Moreover, the linear kernel and the non-linear polynomial kernel of support vector machines (SVM) are compared for classification to investigate the effect of kernel type on performance. Index terms with low information gain (IG) are removed from the feature set at different percentages to observe the robustness of each classification method. Our experiments demonstrate that, within multi-word representation, subtopic representation outperforms general concept representation, and the linear kernel outperforms the non-linear kernel of SVM in classifying the Reuters data.
The effect of applying different representation strategies on classification performance is greater than that of applying different SVM kernels. Furthermore, the representation using individual words outperforms any representation using multi-words. This is consistent with the prevailing view of the role of linguistic preprocessing of documents' features when using SVM for text classification. (C) 2008 Elsevier B.V. All rights reserved.
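The abstract does not specify how the dynamic k-mismatch test works; a minimal sketch of a plain sliding-window k-mismatch presence check, assuming word-level tokenization and a mismatch budget `k` chosen per multi-word (the "dynamic" choice of `k`, e.g. in proportion to the multi-word's length, is an assumption here, not the paper's definition), could look like:

```python
def k_mismatch_present(multiword, doc_tokens, k):
    """Return True if `multiword` (a sequence of words) occurs in
    `doc_tokens` with at most `k` mismatched word positions.

    Plain sliding-window check: compare the multi-word against every
    window of the same length and count position-wise mismatches.
    """
    m = len(multiword)
    for i in range(len(doc_tokens) - m + 1):
        window = doc_tokens[i:i + m]
        mismatches = sum(1 for a, b in zip(multiword, window) if a != b)
        if mismatches <= k:
            return True
    return False


# Hypothetical usage: a long multi-word is counted as present even when
# one of its words is replaced in the document.
doc = "the support vector machine classifier works well".split()
print(k_mismatch_present(("support", "vector", "machine"), doc, 0))  # exact hit
print(k_mismatch_present(("support", "kernel", "machine"), doc, 1))  # 1 mismatch allowed
```

A dynamic variant would set `k` as a function of `len(multiword)` rather than a fixed constant, so that longer multi-words tolerate proportionally more mismatches.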
Pages: 879-886
Page count: 8