1. Poolingformer: Long Document Modeling with Pooling Attention
2. ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation
3. XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
4. GLGE: A New General Language Generation Evaluation Benchmark

Liu, Dayiheng; Yan, Yu; Gong, Yeyun; Qi, Weizhen; Zhang, Hang; Jiao, Jian; Chen, Weizhu; Fu, Jie; Shou, Linjun; Gong, Ming; Wang, Pengcheng; Chen, Jiusheng; Jiang, Daxin; Lv, Jiancheng; Zhang, Ruofei; Wu, Winnie; Zhou, Ming; Duan, Nan. arXiv, 2020.

Abstract:
Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress in pretraining and transfer learning in Natural Language Processing (NLP). These benchmarks mostly focus on a range of Natural Language Understanding (NLU) tasks, without considering Natural Language Generation (NLG) models. In this paper, we present the General Language Generation Evaluation (GLGE) benchmark, a new multi-task benchmark for evaluating the generalization capabilities of NLG models across eight language generation tasks. For each task, we further design three subtasks by difficulty (GLGE-Easy, GLGE-Medium, and GLGE-Hard), yielding 24 subtasks in total for comprehensive comparison of model performance. To encourage research on pretraining and transfer learning for NLG models, we make GLGE publicly available and build a leaderboard with strong baselines including MASS, BART, and ProphetNet. The source code and dataset are publicly available at https://github.com/microsoft/glge.

Published in: Findings of the Association for Computational Linguistics: ACL 2021.

Keyword:
Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2011.11928 ; https://arxiv.org/abs/2011.11928
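
As a quick, hedged illustration of the benchmark layout the abstract describes, the short Python sketch below crosses eight generation tasks with the three difficulty tiers to enumerate the 24 subtask identifiers. The task names here are hypothetical placeholders, not the official GLGE task list; see the repository linked above for the actual configuration.

from itertools import product

# Eight language generation tasks; placeholder names, not the official GLGE task list.
TASKS = [f"task_{i}" for i in range(1, 9)]

# Three difficulty tiers defined per task in the paper.
TIERS = ["easy", "medium", "hard"]

# Cross the two axes: 8 tasks x 3 tiers = 24 subtasks.
SUBTASKS = [f"{task}-{tier}" for task, tier in product(TASKS, TIERS)]

assert len(SUBTASKS) == 24
print(SUBTASKS[:3])  # ['task_1-easy', 'task_1-medium', 'task_1-hard']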