3. Controllable Text Simplification with Explicit Paraphrasing ... (BASE)
4. The (Un)Suitability of Automatic Evaluation Metrics for Text Simplification
6. deepQuest-py: large and distilled models for quality estimation
7. IAPUCP at SemEval-2021 task 1: Stacking fine-tuned transformers is almost all you need for lexical complexity prediction
8. The (un)suitability of automatic evaluation metrics for text simplification

Abstract:
In order to simplify sentences, several rewriting operations can be performed, such as replacing complex words with simpler synonyms, deleting unnecessary information, and splitting long sentences. Despite this multi-operation nature, evaluation of automatic simplification systems relies on metrics that moderately correlate with human judgments on the simplicity achieved by executing specific operations (e.g., simplicity gain based on lexical replacements). In this article, we investigate how well existing metrics can assess sentence-level simplifications where multiple operations may have been applied and which, therefore, require more general simplicity judgments. For that, we first collect a new and more reliable data set for evaluating the correlation of metrics and human judgments of overall simplicity. Second, we conduct the first meta-evaluation of automatic metrics in Text Simplification, using our new data set (and other existing data) to analyze the variation of the correlation between metrics’ scores and human judgments across three dimensions: the perceived simplicity level, the system type, and the set of references used for computation. We show that these three aspects affect the correlations and, in particular, highlight the limitations of commonly used operation-specific metrics. Finally, based on our findings, we propose a set of recommendations for automatic evaluation of multi-operation simplifications, suggesting which metrics to compute and how to interpret their scores.
URL: https://orca.cardiff.ac.uk/147256/
DOI: https://doi.org/10.1162/coli_a_00418
PDF: https://orca.cardiff.ac.uk/147256/1/coli_a_00418.pdf
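The abstract above centers on how well automatic metrics correlate with human simplicity judgments. As a minimal illustration of that kind of meta-evaluation, the sketch below computes Pearson's r between per-sentence metric scores and human ratings; all numbers are invented for illustration, and `pearson()` is a plain stdlib implementation, not a function from the paper.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-sentence data: automatic metric scores (0-100 scale)
# and human judgments of overall simplicity (1-5 Likert ratings).
metric_scores = [34.2, 41.7, 28.9, 55.0, 47.3, 38.1]
human_ratings = [2.1, 3.4, 1.8, 4.2, 3.9, 2.6]

r = pearson(metric_scores, human_ratings)
print(f"Pearson r = {r:.3f}")
```

The paper's point is that a single such correlation can hide a lot: the same metric may correlate very differently once the data are sliced by perceived simplicity level, system type, or reference set.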
10. deepQuest-py: large and distilled models for quality estimation
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382-389 (2021)
11. Knowledge distillation for quality estimation
In: pp. 5091-5099 (2021)
12. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations
In: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Jul 2020, Seattle / Virtual, United States. https://hal.inria.fr/hal-02889823 (2020)
13. Controllable Text Simplification with Explicit Paraphrasing ...
14. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations
15. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations
16. Data-Driven Sentence Simplification: Survey and Benchmark
In: Computational Linguistics, Vol 46, Iss 1, pp. 135-187 (2020)
17. Automatic Sentence Simplification with Multiple Rewriting Transformations
18. Distributed knowledge based clinical auto-coding system
Kaur, Rajvir. Association for Computational Linguistics, 2019
19. Towards semi-supervised Brazilian Portuguese semantic role labeling: Building a benchmark