3. AUTOLEX: An Automatic Framework for Linguistic Exploration
4. MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
5. A Systematic Evaluation of Large Language Models of Code
6. Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation
7. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation
   In: Transactions of the Association for Computational Linguistics, 7, 313–325; ISSN: 2307-387X (2022)
9. MasakhaNER: Named Entity Recognition for African Languages
   In: Transactions of the Association for Computational Linguistics, The MIT Press, 2021; EISSN: 2307-387X; ⟨10.1162/tacl⟩; https://hal.inria.fr/hal-03350962 (2021)
10. Phoneme Recognition through Fine-Tuning of Phonetic Representations: A Case Study on Luhya Language Varieties
11. Few-shot Language Coordination by Modeling Theory of Mind
12. Systematic Inequalities in Language Technology Performance across the World's Languages
13. Multilingual Multimodal Pre-training for Zero-Shot Cross-Lingual Transfer of Vision-Language Models
14. Multi-view Subword Regularization

    Abstract: Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary. However, standard heuristic algorithms often lead to sub-optimal segmentation, especially for languages with limited amounts of data. In this paper, we take two major steps towards alleviating this problem. First, we demonstrate empirically that applying existing subword regularization methods (Kudo, 2018; Provilkov et al., 2020) during fine-tuning of pre-trained multilingual representations improves the effectiveness of cross-lingual transfer. Second, to take full advantage of different possible input segmentations, we propose Multi-view Subword Regularization (MVR), a method that enforces consistency between predictions made on inputs tokenized by the standard and probabilistic segmentations. Results on the XTREME multilingual benchmark (Hu et al., 2020) show that MVR brings consistent improvements of up to 2.5 points over standard segmentation algorithms. Accepted at NAACL 2021.

    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

    URL: https://arxiv.org/abs/2103.08490 ; DOI: https://dx.doi.org/10.48550/arxiv.2103.08490
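The MVR abstract above describes two ingredients: a probabilistic alternative to the standard deterministic segmentation, and a consistency term that pushes the model's predictions on the two views to agree. The following is a minimal, self-contained sketch of that idea with a toy greedy segmenter, a random-split "sampled" segmenter (a crude stand-in for unigram sampling or BPE-dropout), and a symmetric-KL consistency loss; the actual paper fine-tunes a pretrained multilingual model, and all function names here are illustrative assumptions, not the authors' code.

```python
import math
import random

def segment_deterministic(word, vocab):
    # Greedy longest-match segmentation (stand-in for standard BPE/unigram decoding).
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:  # no vocab piece matched: back off to a single character
            pieces.append(word[i])
            i += 1
    return pieces

def segment_sampled(word, vocab, rng, p_split=0.5):
    # Probabilistic segmentation: randomly shatter a greedy piece into characters,
    # loosely mimicking subword regularization (Kudo, 2018) / BPE-dropout.
    pieces = []
    for piece in segment_deterministic(word, vocab):
        if len(piece) > 1 and rng.random() < p_split:
            pieces.extend(piece)
        else:
            pieces.append(piece)
    return pieces

def predict(pieces, weights, labels):
    # Toy bag-of-subwords classifier: sum per-(piece, label) weights, then softmax.
    scores = [sum(weights.get((p, y), 0.0) for p in pieces) for y in labels]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def mvr_consistency_loss(p, q, eps=1e-12):
    # Symmetric KL divergence between the two prediction distributions;
    # MVR adds a term like this to the supervised loss to enforce agreement
    # between the deterministic-segmentation and sampled-segmentation views.
    kl_pq = sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log((qi + eps) / (pi + eps)) for pi, qi in zip(p, q))
    return 0.5 * (kl_pq + kl_qp)
```

For example, with `vocab = {"un", "fortunate", "ly"}`, the deterministic view of "unfortunately" is `["un", "fortunate", "ly"]`, while `segment_sampled("unfortunately", vocab, random.Random(0))` may break some pieces into characters; the consistency loss is zero when the two views yield identical predictions and grows as they disagree.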
15. MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning
16. XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation
17. When Does Translation Require Context? A Data-driven, Multilingual Exploration
19. Efficient Test Time Adapter Ensembling for Low-resource Language Varieties
20. Distributionally Robust Multilingual Machine Translation