Summarization of changes in dynamic text collections using Latent Dirichlet Allocation model

Cited by: 41
Authors
Kar, Manika [1 ]
Nunes, Sergio [1 ]
Ribeiro, Cristina [1 ]
Affiliations
[1] Univ Porto, Fac Engn, DEI, INESC TEC, P-4200465 Oporto, Portugal
Keywords
Changes summarization; Temporal term weighting; Sentence ranking; Latent Dirichlet Allocation; Wikipedia;
DOI
10.1016/j.ipm.2015.06.002
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
In the area of Information Retrieval, the task of automatic text summarization usually assumes a static underlying collection of documents, disregarding the temporal dimension of each document. However, in real-world settings, collections and individual documents rarely remain unchanged over time. The World Wide Web is a prime example of a collection where information changes both frequently and significantly, with documents being added, modified, or deleted at different times. In this context, previous work on the summarization of web documents has simply discarded the dynamic nature of the web, considering only the latest published version of each individual document. This paper proposes and addresses a new challenge: the automatic summarization of changes in dynamic text collections. In standard text summarization, the goal is to present the user with a condensed summary capturing the major points expressed in the most recent version of a document. In this new task, the goal is instead to produce a summary that describes the most significant changes made to a document during a given period, that is, a summary of the revisions made to the document over a specific interval of time. This paper proposes several approaches that generate such summaries using extractive summarization techniques. First, individual terms are scored, and this information is then used to rank and select sentences for the final summary. A system based on the Latent Dirichlet Allocation (LDA) model is used to uncover the hidden topic structure of the changes; the purpose of using LDA is to identify separate topics in which the changed terms are likely to carry at least one significant change each. The different approaches are then compared with previous work in this area. A collection of Wikipedia articles, including their revision histories, is used to evaluate the proposed system. For each article, a temporal interval and a reference summary drawn from the article's content are selected manually; articles and intervals in which a significant event occurred are carefully chosen. The summaries produced by each approach are evaluated against the manual summaries using ROUGE metrics. The approach using the LDA model outperforms all the others, and statistical tests reveal that its improvements in ROUGE scores over the baseline are statistically significant at the 99% level. (C) 2015 Elsevier Ltd. All rights reserved.
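To make the pipeline concrete, the following is a minimal sketch in Python of the general idea described in the abstract: identify terms that changed between two revisions, fit an LDA model to recover the topic structure of the new revision, weight the changed terms by their topic salience, and extract the top-scoring sentences. This is an illustration only, not the authors' implementation; the use of scikit-learn, the naive whitespace diff, the sentence splitting, the topic count, and the scoring scheme are all assumptions made for the example.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def summarize_changes(old_text: str, new_text: str,
                      n_topics: int = 5, n_sentences: int = 3):
    """Rank sentences of the new revision by how strongly they carry
    topic-salient terms absent from the old revision (illustrative only)."""
    # Terms of the old revision (naive whitespace tokenization, an assumption).
    old_terms = set(old_text.lower().split())
    # Naive sentence splitting on full stops (also an assumption).
    sentences = [s.strip() for s in new_text.split(".") if s.strip()]

    # Fit LDA over the sentences of the new revision to recover hidden topics.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(sentences)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()

    # Per-term salience: the term's highest within-topic probability,
    # zeroed out for terms that already appeared in the old revision.
    topic_term = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    changed = np.array([term not in old_terms for term in vocab], dtype=float)
    term_score = topic_term.max(axis=0) * changed

    # Sentence score: sum of the scores of the changed terms it contains.
    sent_scores = counts.multiply(term_score).sum(axis=1).A1
    top = np.argsort(sent_scores)[::-1][:n_sentences]
    return [sentences[i] for i in sorted(top)]  # preserve original order

For instance, calling summarize_changes(old_revision, new_revision) on two revisions of a Wikipedia article would return the sentences of the newer revision that concentrate the most topic-salient changed terms, in their original order.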
Pages: 809-833
Page count: 25