Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News

Cited by: 34
Authors
Borges, Luis [1 ]
Martins, Bruno [1 ]
Calado, Pavel [1 ]
Affiliations
[1] Univ Lisbon, Inst Super Tecn, INESC ID, Rua Alves Redol 9, P-1000029 Lisbon, Portugal
Source
ACM JOURNAL OF DATA AND INFORMATION QUALITY | 2019, Vol. 11, No. 3
Keywords
Fake news; fact checking; stance detection; deep learning; natural language processing; recurrent neural networks
DOI
10.1145/3287763
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Fake news is nowadays an issue of pressing concern, given its recent rise as a potential threat to high-quality journalism and well-informed public discourse. The Fake News Challenge (FNC-1) was organized in early 2017 to encourage the development of machine-learning-based classification systems for stance detection (i.e., for identifying whether a particular news article agrees, disagrees, discusses, or is unrelated to a particular news headline), thus helping in the detection and analysis of possible instances of fake news. This article presents a novel approach to tackle this stance detection problem, based on the combination of string similarity features with a deep neural network architecture that leverages ideas previously advanced in the context of learning efficient text representations, document classification, and natural language inference. Specifically, we use bi-directional Recurrent Neural Networks (RNNs), together with max-pooling over the temporal/sequential dimension and neural attention, for representing (i) the headline, (ii) the first two sentences of the news article, and (iii) the entire news article. These representations are then combined/compared, complemented with similarity features inspired by other FNC-1 approaches, and passed to a final layer that predicts the stance of the article toward the headline. We also explore the use of external sources of information, specifically large datasets of sentence pairs originally proposed for training and evaluating natural language inference methods, to pre-train specific components of the neural network architecture (e.g., the RNNs used for encoding sentences). The obtained results attest to the effectiveness of the proposed ideas and show that our model, particularly when considering pre-training and the combination of neural representations together with similarity features, slightly outperforms the previous state of the art.
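To make the architecture sketched in the abstract more concrete, the following is a minimal illustration in PyTorch rather than the authors' code: a shared bi-directional GRU encoder with max-pooling over the temporal dimension produces vectors for the headline and the article body, the two vectors are combined in an InferSent-style fashion (concatenation, absolute difference, and element-wise product, one plausible reading of "combined/compared"), concatenated with a vector of hand-crafted similarity features, and passed to a final layer over the four stance classes. The attention mechanism, the separate encoding of the first two sentences, and the pre-training on natural language inference data are omitted for brevity; all dimensions, the GRU cell choice, and the number of similarity features are hypothetical.

# Hedged sketch of the described architecture (assumptions noted above), not the authors' implementation.
import torch
import torch.nn as nn

class BiGRUMaxEncoder(nn.Module):
    """Encode a padded token-embedding sequence with a BiGRU + max-pooling over time."""

    def __init__(self, embed_dim: int = 300, hidden_dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, embedded: torch.Tensor) -> torch.Tensor:
        # embedded: (batch, seq_len, embed_dim)
        outputs, _ = self.rnn(embedded)        # (batch, seq_len, 2 * hidden_dim)
        return outputs.max(dim=1).values       # max-pool over the temporal dimension

class StanceClassifier(nn.Module):
    """Combine headline/body encodings with similarity features (hypothetical sizes)."""

    def __init__(self, hidden_dim: int = 256, num_sim_features: int = 20, num_classes: int = 4):
        super().__init__()
        self.encoder = BiGRUMaxEncoder(hidden_dim=hidden_dim)
        enc_dim = 2 * hidden_dim               # BiGRU output size
        # Input is [u; v; |u - v|; u * v] plus the similarity-feature vector.
        self.classifier = nn.Sequential(
            nn.Linear(4 * enc_dim + num_sim_features, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, headline_emb, body_emb, sim_features):
        u = self.encoder(headline_emb)          # headline representation
        v = self.encoder(body_emb)              # article-body representation
        combined = torch.cat([u, v, (u - v).abs(), u * v, sim_features], dim=-1)
        return self.classifier(combined)        # logits: agree / disagree / discuss / unrelated

# Usage with hypothetical shapes (batch of 8, pre-computed word embeddings and 20 similarity features):
# logits = StanceClassifier()(torch.randn(8, 12, 300), torch.randn(8, 400, 300), torch.randn(8, 20))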
Pages: 26