18 entries in total
- [3] Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training[J]. Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang. Proceedings of the AAAI Conference on Artificial Intelligence, 2020(07).
- [4] Symbolic, Distributed, and Distributional Representations for Natural Language Processing in the Era of Deep Learning: A Survey[J]. Lorenzo Ferrone, Fabio Massimo Zanzotto. Frontiers in Robotics and AI, 2020.
- [5] UNITER: Learning UNiversal Image-TExt Representations[J]. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu. CoRR, 2019.
- [6] RoBERTa: A Robustly Optimized BERT Pretraining Approach[J]. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. CoRR, 2019.
- [7] DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter[J]. Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf. CoRR, 2019.
- [8] Multi-Task Deep Neural Networks for Natural Language Understanding[J]. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. CoRR, 2019.
- [9] Distilling Task-Specific Knowledge from BERT into Simple Neural Networks[J]. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin. CoRR, 2019.
- [10] Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks[J]. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou. CoRR, 2019.