
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation ...
BASE
2. LAWDR: Language-Agnostic Weighted Document Representations from Pre-trained Models ...
BASE
3. Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications ...
BASE
4. Few-shot Learning with Multilingual Language Models ...
BASE
5. Alternative Input Signals Ease Transfer in Multilingual Machine Translation ...
Sun, Simeng; Fan, Angela; Cross, James. - : arXiv, 2021
BASE
6. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data ...
BASE
7. AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages ...
BASE
8. Multilingual Translation with Extensible Multilingual Pretraining and Finetuning ...
Tang, Yuqing; Tran, Chau; Li, Xian. - : arXiv, 2020
BASE
9. MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset ...
BASE
10. Beyond English-Centric Multilingual Machine Translation ...
BASE
11. Unsupervised Cross-lingual Representation Learning at Scale ...
BASE
12. WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia ...
Abstract: We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus of parallel sentences is freely available at https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix. To get an indication of the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly well suited to training MT systems between distant languages without the need to pivot through English.
Comments: 13 pages, 3 figures, 6 tables
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1907.05791
https://arxiv.org/abs/1907.05791
BASE
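The abstract above describes scoring candidate sentence pairs with multilingual sentence embeddings: pairs are ranked by cosine similarity, normalized by the similarity of each sentence's nearest neighbours (margin-based mining). Below is a minimal NumPy sketch of that scoring step, assuming embeddings are already computed (e.g., with LASER). The function names and the k=4 neighbourhood are illustrative, and the 1.04 threshold reflects the margin reported for the released WikiMatrix bitexts but should be treated as an assumption, not the released implementation.

import numpy as np

def l2_normalize(v):
    # L2-normalize rows so that dot products equal cosine similarities.
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def margin_scores(src_emb, tgt_emb, k=4):
    # Ratio margin: cosine of a candidate pair divided by the mean cosine
    # of each side's k nearest neighbours. This penalizes "hub" sentences
    # that look similar to everything and would otherwise yield false pairs.
    src, tgt = l2_normalize(src_emb), l2_normalize(tgt_emb)
    sim = src @ tgt.T                                    # cosine matrix
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # per source sentence
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # per target sentence
    return sim / ((knn_src[:, None] + knn_tgt[None, :]) / 2.0)

def mine_pairs(src_emb, tgt_emb, threshold=1.04):
    # Forward mining: keep each source sentence's best-scoring target when
    # the margin clears the threshold (1.04 is an assumed value here).
    scores = margin_scores(src_emb, tgt_emb)
    best = scores.argmax(axis=1)
    kept = scores[np.arange(scores.shape[0]), best] >= threshold
    return [(int(i), int(best[i])) for i in np.where(kept)[0]]

This brute-force version computes the full similarity matrix for clarity; at Wikipedia scale, the mining pipeline relies on approximate nearest-neighbour search (e.g., FAISS) rather than exhaustive comparison.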

Result counts by collection: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 12.