Efficient Content-Based Sparse Attention with Routing Transformers

Cited by: 252
Authors
Roy, Aurko [1]
Saffar, Mohammad [1]
Vaswani, Ashish [1]
Grangier, David [1]
Affiliations
[1] Google Research, Mountain View, CA 94043, USA
Keywords
723 Computer Software, Data Handling and Applications; 903.1 Information Sources and Analysis; 922.2 Mathematical Statistics
DOI
10.1162/tacl_a_00353
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic computation and memory requirements with respect to sequence length. Successful approaches to reduce this complexity have focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention from O(n^2 d) to O(n^1.5 d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs. 18.3 perplexity), as well as on image generation on ImageNet-64 (3.43 vs. 3.44 bits/dim), while using fewer self-attention layers. Additionally, we set a new state of the art on the newly released PG-19 dataset, obtaining a test perplexity of 33.2 with a 22-layer Routing Transformer model trained on sequences of length 8192. We open-source the code for the Routing Transformer in TensorFlow.
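The routing idea described in the abstract can be illustrated with a short, simplified sketch (this is not the authors' TensorFlow implementation): queries and keys are assigned to the nearest of roughly sqrt(n) centroids, and each query attends only to the keys routed to the same cluster, so each of the n queries compares against about sqrt(n) keys, which is where the O(n^1.5 d) cost comes from. The sketch below assumes plain nearest-centroid assignment with fixed random centroids; the paper additionally keeps the centroids updated with online k-means during training and balances cluster sizes, which this toy version omits, and all function and variable names here are illustrative.

```python
import numpy as np

def routing_attention(Q, K, V, num_clusters, seed=0):
    """Simplified content-based sparse attention via cluster routing.

    Queries and keys are routed to their nearest centroid; each query
    attends only to keys in the same cluster. With num_clusters ~ sqrt(n),
    the cost is roughly O(n**1.5 * d) instead of O(n**2 * d).
    """
    n, d = Q.shape
    rng = np.random.default_rng(seed)

    # Fixed random centroids for illustration; the paper instead maintains
    # centroids with online k-means updates during training.
    centroids = Q[rng.choice(n, size=num_clusters, replace=False)]

    # Nearest-centroid assignment for queries and keys.
    q_assign = np.argmin(((Q[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    k_assign = np.argmin(((K[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

    out = np.zeros_like(V)
    for c in range(num_clusters):
        q_idx = np.where(q_assign == c)[0]
        k_idx = np.where(k_assign == c)[0]
        if q_idx.size == 0 or k_idx.size == 0:
            continue  # queries in an empty cluster receive no attention output
        # Dense softmax attention restricted to the members of cluster c.
        scores = Q[q_idx] @ K[k_idx].T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[q_idx] = weights @ V[k_idx]
    return out

# Toy example: sequence length 64 routed into sqrt(64) = 8 clusters.
rng = np.random.default_rng(1)
n, d = 64, 32
Q, K, V = rng.normal(size=(3, n, d))
print(routing_attention(Q, K, V, num_clusters=8).shape)  # (64, 32)
```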
Pages: 53-68
Page count: 16