
Search in the Catalogues and Directories

Hits 1 – 2 of 2

1
End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? ...
BASE
2
Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.701/
Abstract: Evaluation metrics are a key ingredient for progress of text generation systems. In recent years, several BERT-based evaluation metrics have been proposed (including BERTScore, MoverScore, and BLEURT) which correlate much better with human assessment of text generation quality than BLEU or ROUGE, invented two decades ago. However, little is known about what these metrics, which are based on black-box language model representations, actually capture (it is typically assumed that they model semantic similarity). In this work, we use a simple regression-based global explainability technique to disentangle metric scores along linguistic factors, including semantics, syntax, morphology, and lexical overlap. We show that the different metrics capture all aspects to some degree, but that they are all substantially sensitive to lexical overlap, just like BLEU and ROUGE. This exposes limitations of these newly proposed metrics, which we also ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://dx.doi.org/10.48448/aajb-9k90
https://underline.io/lecture/37492-global-explainability-of-bert-based-evaluation-metrics-by-disentangling-along-linguistic-factors
BASE
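The abstract above describes regressing metric scores on linguistic factors to obtain global importance weights. A minimal sketch of that idea, not the authors' code, using simulated data in which the factor features and scores are illustrative stand-ins (any real analysis would use annotated sentence pairs and actual metric outputs):

```python
# Sketch of regression-based global explainability: regress metric
# scores on linguistic-factor features; the fitted coefficients act
# as global importance scores per factor. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sentence-pair factor features (columns):
# semantics, syntax, morphology, lexical overlap.
n = 200
factor_names = ["semantics", "syntax", "morphology", "lexical_overlap"]
X = rng.random((n, 4))

# Simulated metric scores, deliberately dominated by lexical overlap,
# mimicking the paper's finding.
y = 0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 2] + 0.6 * X[:, 3]

# Ordinary least squares with an intercept column.
design = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
weights = dict(zip(factor_names, coef[:4]))

print(max(weights, key=weights.get))  # factor with the largest weight
```

On this simulated data the regression recovers lexical overlap as the dominant factor, which is the kind of disentangled, metric-level conclusion the paper draws for BERTScore, MoverScore, and BLEURT.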

Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 2
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings