1 |
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
|
|
Gehrmann, Sebastian; Adewumi, Tosin; Aggarwal, Karmanya; Ammanamanchi, Pawan Sasanka; Aremu, Anuoluwapo; Bosselut, Antoine; Chandu, Khyathi Raghavi; Clinciu, Miruna-Adriana; Das, Dipanjan; Dhole, Kaustubh; Du, Wanyu; Durmus, Esin; Dušek, Ondřej; Emezue, Chris Chinenye; Gangal, Varun; Garbacea, Cristina; Hashimoto, Tatsunori; Hou, Yufang; Jernite, Yacine; Jhamtani, Harsh; Ji, Yangfeng; Jolly, Shailza; Kale, Mihir; Kumar, Dhruv; Ladhak, Faisal; Madaan, Aman; Maddela, Mounica; Mahajan, Khyati; Mahamood, Saad; Majumder, Bodhisattwa Prasad; Martins, Pedro Henrique; McMillan-Major, Angelina; Mille, Simon; van Miltenburg, Emiel; Nadeem, Moin; Narayan, Shashi; Nikolaev, Vitaly; Niyongabo Rubungo, Andre; Osei, Salomey; Parikh, Ankur; Perez-Beltrachini, Laura; Rao, Niranjan Ramesh; Raunak, Vikas; Rodriguez, Juan Diego; Santhanam, Sashank; Sedoc, João; Sellam, Thibault; Shaikh, Samira; Shimorina, Anastasia; Sobrevilla Cabezudo, Marco Antonio; Strobelt, Hendrik; Subramani, Nishant; Xu, Wei; Yang, Diyi; Yerukola, Akhila; Zhou, Jiawei
|
|
In: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), Aug 2021, Online, France, pp. 96–120. DOI: ⟨10.18653/v1/2021.gem-1.10⟩. HAL: https://hal.archives-ouvertes.fr/hal-03466171 (2021)
|
|
Abstract:
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL2021 Workshop and to which we invite the entire NLG community to participate.
|
|
Keyword:
[INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]
|
|
URL: https://hal.archives-ouvertes.fr/hal-03466171
DOI: https://doi.org/10.18653/v1/2021.gem-1.10
|
|
BASE
5 |
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models ...