3. AUTOLEX: An Automatic Framework for Linguistic Exploration ... [Source: BASE]

4. MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages ...

5. A Systematic Evaluation of Large Language Models of Code ...

6. Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation ...

7. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation
   In: Transactions of the Association for Computational Linguistics, 7, 313–325; ISSN: 2307-387X (2022)

9. MasakhaNER: Named entity recognition for African languages
   In: Transactions of the Association for Computational Linguistics, The MIT Press, 2021; EISSN: 2307-387X; DOI: 10.1162/tacl; https://hal.inria.fr/hal-03350962

10. Phoneme Recognition through Fine Tuning of Phonetic Representations: a Case Study on Luhya Language Varieties ...

11. Few-shot Language Coordination by Modeling Theory of Mind ...

12. Systematic Inequalities in Language Technology Performance across the World's Languages ...

13. Multilingual Multimodal Pre-training for Zero-Shot Cross-Lingual Transfer of Vision-Language Models ...

15. MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning ...

16. XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation ...

17. When Does Translation Require Context? A Data-driven, Multilingual Exploration ...

18. Breaking Down Multilingual Machine Translation ...
    Abstract: While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning. These training settings expose the encoder and the decoder of a machine translation model to different data distributions. In this paper, we examine how the different varieties of multilingual training contribute to learning these two components of the MT model. Specifically, we compare bilingual models with encoders and/or decoders initialized by multilingual training. We show that multilingual training is generally beneficial to encoders, while it only benefits decoders for low-resource languages (LRLs). We further identify the important attention heads for each language pair and compare their correlations during inference. Our analysis sheds light on how multilingual translation models work and enables us to propose methods to improve performance by training with ... (ACL 2022 Findings)
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
    URL: https://dx.doi.org/10.48550/arxiv.2110.08130 ; https://arxiv.org/abs/2110.08130

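The abstract of entry 18 describes a setup in which a bilingual MT model's encoder and/or decoder is initialized from a multilingually trained model while the other component starts fresh. A minimal, hypothetical sketch of that initialization scheme is below; plain dicts stand in for framework parameter tensors, and all names (`init_from_multilingual`, the `"encoder."`/`"decoder."` key prefixes) are illustrative assumptions, not the paper's actual code.

```python
# Sketch: selectively initialize components of a bilingual model from a
# multilingual checkpoint. State dicts are keyed "<component>.<param>".

def init_from_multilingual(bilingual, multilingual, components=("encoder",)):
    """Return a copy of the bilingual state dict in which parameters
    belonging to the named components are taken from the multilingual
    checkpoint; all other parameters keep their original values."""
    merged = dict(bilingual)
    for name, value in multilingual.items():
        if name.split(".", 1)[0] in components:
            merged[name] = value
    return merged

# Toy state dicts (random init vs. multilingually trained values).
bi = {"encoder.w": 0.0, "decoder.w": 0.0}
multi = {"encoder.w": 1.5, "decoder.w": -2.0}

# Encoder-only initialization — the setting the abstract reports as
# generally beneficial.
print(init_from_multilingual(bi, multi))
# Both components initialized from multilingual training.
print(init_from_multilingual(bi, multi, components=("encoder", "decoder")))
```

Comparing translation quality across these initialization choices (encoder only, decoder only, both, neither) is what lets the authors attribute the benefit of multilingual training to each component separately.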
19. Efficient Test Time Adapter Ensembling for Low-resource Language Varieties ...

20. Distributionally Robust Multilingual Machine Translation ...