3. AUTOLEX: An Automatic Framework for Linguistic Exploration … (source: BASE)

4. MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages …

5. A Systematic Evaluation of Large Language Models of Code …

6. Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation …

7. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation
   In: Transactions of the Association for Computational Linguistics, 7, 313–325; ISSN: 2307-387X (2022)

9. MasakhaNER: Named entity recognition for African languages
   In: Transactions of the Association for Computational Linguistics, The MIT Press, 2021; EISSN: 2307-387X; DOI: ⟨10.1162/tacl⟩; https://hal.inria.fr/hal-03350962

10. Phoneme Recognition through Fine Tuning of Phonetic Representations: a Case Study on Luhya Language Varieties …

11. Few-shot Language Coordination by Modeling Theory of Mind …

12. Systematic Inequalities in Language Technology Performance across the World's Languages …

13. Multilingual Multimodal Pre-training for Zero-Shot Cross-Lingual Transfer of Vision-Language Models …

15. MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning …

16. XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation …

17. When Does Translation Require Context? A Data-driven, Multilingual Exploration …
    Abstract: Although proper handling of discourse phenomena significantly contributes to the quality of machine translation (MT), common translation quality metrics do not adequately capture them. Recent work in context-aware MT attempts to target a small set of these phenomena during evaluation. In this paper, we propose a new metric, P-CXMI, which allows us to systematically identify translations that require context, confirming the difficulty of previously studied phenomena and uncovering new ones that have not been addressed in prior work. We then develop the Multilingual Discourse-Aware (MuDA) benchmark, a series of taggers for these phenomena in 14 different language pairs, which we use to evaluate context-aware MT. We find that state-of-the-art context-aware MT models show only marginal improvements over context-agnostic models on our benchmark, which suggests current models do not handle these ambiguities effectively. We release code and data to invite the MT research community to increase efforts on …
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
    URL: https://dx.doi.org/10.48550/arxiv.2109.07446 ; https://arxiv.org/abs/2109.07446

19. Efficient Test Time Adapter Ensembling for Low-resource Language Varieties …

20. Distributionally Robust Multilingual Machine Translation …