A Nested Chinese Restaurant Topic Model for Short Texts with Document Embeddings

Cited by: 5
Authors
Niu, Yue [1 ]
Zhang, Hongjie [1 ]
Li, Jing [1 ]
Affiliations
[1] Univ Sci & Technol China, Dept Comp Sci & Technol, Hefei 230052, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 18
Keywords
topic model; text mining; document embeddings; short text;
DOI
10.3390/app11188708
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
In recent years, short texts have become a prevalent form of text on the internet. Because each text is short, conventional topic models suffer from the sparsity of word co-occurrence information when applied to short texts. Researchers have proposed various customized topic models for short texts that supply additional word co-occurrence information. However, these models cannot incorporate sufficient semantic word co-occurrence information and may introduce additional noise. To address these issues, we propose a self-aggregated topic model that incorporates document embeddings. Aggregating short texts into long documents according to their document embeddings provides sufficient word co-occurrence information while avoiding non-semantic word co-occurrence information. However, document embeddings of short texts contain considerable noise resulting from the sparsity of word co-occurrence information. We therefore discard this noise by transforming the document embeddings into global and local semantic information: the global semantic information is a similarity probability distribution over the entire dataset, and the local semantic information is the distances between similar short texts. We then adopt a nested Chinese restaurant process to incorporate these two kinds of information. Finally, we compare our model with several state-of-the-art models on four real-world short-text corpora. The experimental results show that our model achieves better performance in terms of topic coherence and classification accuracy.
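For context on the aggregation mechanism the abstract names, the sketch below shows the vanilla Chinese restaurant process (CRP) seating rule on which the paper's nested variant builds: a new customer (short text) joins an existing table (pseudo-document) with probability proportional to its current occupancy, or opens a new table with probability proportional to a concentration parameter alpha. This is a generic illustration, not the authors' model; their nested process additionally weights these choices with the global and local semantic information derived from document embeddings, and all names here are illustrative.

```python
import random

def crp_assign(table_counts, alpha, rng=random):
    """Sample a table index for a new customer under the CRP:
    join table k with probability proportional to table_counts[k],
    or open a new table with probability proportional to alpha.
    Returning len(table_counts) means "open a new table"."""
    weights = list(table_counts) + [alpha]
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if r < acc:
            return k
    return len(table_counts)

# Seat 1000 customers sequentially with a fixed seed for reproducibility.
rng = random.Random(0)
counts = []
for _ in range(1000):
    k = crp_assign(counts, alpha=1.0, rng=rng)
    if k == len(counts):
        counts.append(1)   # new table opened
    else:
        counts[k] += 1     # joined an existing table
```

With alpha = 1.0 the expected number of tables grows roughly logarithmically in the number of customers, which is why the CRP is a natural prior for aggregating many short texts into a much smaller, data-driven number of long pseudo-documents.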
Pages: 19