1 | Towards Explainable Evaluation Metrics for Natural Language Generation ...
2 | Pushing the right buttons: adversarial evaluation of quality estimation. In: Proceedings of the Sixth Conference on Machine Translation, pp. 625–638 (2022)
5 | Continual Quality Estimation with Online Bayesian Meta-Learning ...
11 | Findings of the WMT 2021 shared task on quality estimation. In: pp. 689–730 (2021)
12 | deepQuest-py: large and distilled models for quality estimation. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382–389 (2021)
13 | Backtranslation feedback improves user confidence in MT, not quality
14 | Knowledge distillation for quality estimation. In: pp. 5091–5099 (2021)
15 | MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset ...
16 | Unsupervised quality estimation for neural machine translation. In: vol. 8, pp. 539–555 (2020)
17 | An exploratory study on multilingual quality estimation. In: pp. 366–377 (2020)
18 | BERGAMOT-LATTE submissions for the WMT20 quality estimation shared task. In: pp. 1010–1017 (2020)

Abstract: This paper presents our submission to the WMT2020 Shared Task on Quality Estimation (QE). We participate in Task 1 and Task 2, focusing on sentence-level prediction. We explore (a) a black-box approach to QE based on pre-trained representations; and (b) glass-box approaches that leverage various indicators that can be extracted from the neural MT systems. In addition to training a feature-based regression model using glass-box quality indicators, we also test whether they can be used to predict MT quality directly with no supervision. We assess our systems in a multilingual setting and show that both types of approaches generalise well across languages. Our black-box QE models tied for the winning submission in four out of seven language pairs in Task 1, demonstrating very strong performance. The glass-box approaches also performed competitively, representing a lightweight alternative to the neural-based models.

URL: http://hdl.handle.net/2436/623856
Published version: https://www.aclweb.org/anthology/2020.wmt-1.116/
19 | Findings of the WMT 2020 shared task on quality estimation. In: pp. 743–764 (2020)