
Search in the Catalogues and Directories

Hits 1 – 20 of 20

1. The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
2. LAWDR: Language-Agnostic Weighted Document Representations from Pre-trained Models
3. Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications
4. Few-shot Learning with Multilingual Language Models
5. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas
   Mager, Manuel; Oncevay, Arturo; Ebrahimi, Abteen. Association for Computational Linguistics, 2021
6. Alternative Input Signals Ease Transfer in Multilingual Machine Translation
   Sun, Simeng; Fan, Angela; Cross, James. arXiv, 2021
7. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data
8. Findings of the WMT 2021 Shared Task on Quality Estimation
9. AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages
   Abstract: Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R's zero-shot performance is poor for all 10 languages, with an average performance of 38.62%. Continued pretraining offers improvements, with an average accuracy of 44.05%. Surprisingly, training on poorly translated data by far outperforms all other ... (Accepted to ACL 2022; a minimal zero-shot sketch follows the hit list below.)
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2104.08726
   DOI: https://dx.doi.org/10.48550/arxiv.2104.08726
10. Findings of the WMT 2021 shared task on quality estimation
    In: pp. 689–730 (2021)
11. Multilingual Translation with Extensible Multilingual Pretraining and Finetuning
    Tang, Yuqing; Tran, Chau; Li, Xian. arXiv, 2020
12. MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset
13. Beyond English-Centric Multilingual Machine Translation
14. Unsupervised quality estimation for neural machine translation
    In: vol. 8, pp. 539–555 (2020)
15. An exploratory study on multilingual quality estimation
    In: pp. 366–377 (2020)
16. BERGAMOT-LATTE submissions for the WMT20 quality estimation shared task
    In: pp. 1010–1017 (2020)
17. Findings of the WMT 2020 shared task on quality estimation
    In: pp. 743–764 (2020)
18. MLQE-PE: A multilingual quality estimation and post-editing dataset
19. Unsupervised Cross-lingual Representation Learning at Scale
20. WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia
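Hit 9 (AmericasNLI) evaluates zero-shot transfer: an XLM-R model fine-tuned on NLI data in high-resource languages is applied unchanged to premise/hypothesis pairs in a language it was never fine-tuned on. The sketch below illustrates that inference setup only, assuming the Hugging Face transformers library and the public joeddav/xlm-roberta-large-xnli checkpoint (an XLM-R model fine-tuned on XNLI); the checkpoint and the example sentences are illustrative assumptions, not the exact models or data used in the paper.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Assumed checkpoint: XLM-R large fine-tuned on XNLI (not the paper's exact model).
    MODEL_NAME = "joeddav/xlm-roberta-large-xnli"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    model.eval()

    def predict_nli(premise: str, hypothesis: str) -> str:
        # Encode the sentence pair exactly as for the fine-tuning languages;
        # zero-shot transfer means no adaptation is done for the target language.
        inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return model.config.id2label[int(logits.argmax(dim=-1))]

    # Hypothetical premise/hypothesis pair (Spanish used purely as a stand-in
    # for an unseen target language).
    print(predict_nli("El clima esta muy frio hoy.", "Hoy hace mucho calor."))

The continued-pretraining variant mentioned in the abstract would further train the underlying XLM-R encoder on monolingual text in the target language before running this same classification step.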

Catalogues: 0; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 20