
Search in the Catalogues and Directories

Hits 1 – 20 of 38

1
XTREME-S: Evaluating Cross-lingual Speech Representations ...
2
One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia ...
3
Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation ...
4
MasakhaNER: Named entity recognition for African languages
In: Transactions of the Association for Computational Linguistics, The MIT Press, 2021. EISSN: 2307-387X; DOI: 10.1162/tacl; https://hal.inria.fr/hal-03350962
5
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization ...
6
Multi-view Subword Regularization ...
7
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation ...
8
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties ...
9
Analogy Training Multilingual Encoders ...
Garneau, Nicolas; Hartmann, Mareike; Sandholm, Anders. Apollo - University of Cambridge Repository, 2021
10
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation ...
11
A Call for More Rigor in Unsupervised Cross-lingual Learning ...
12
Rethinking embedding coupling in pre-trained language models ...
13
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer ...
14
How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models ...
15
UNKs Everywhere: Adapting Multilingual Language Models to New Scripts ...
Abstract: Massively multilingual language models such as multilingual BERT offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks. However, due to limited capacity and large differences in pretraining data sizes, there is a profound performance gap between resource-rich and resource-poor target languages. The ultimate challenge is dealing with under-resourced languages not covered at all by the models and written in scripts unseen during pretraining. In this work, we propose a series of novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts. Relying on matrix factorization, our methods capitalize on the existing latent knowledge about multiple languages already available in the pretrained model's embedding matrix. Furthermore, we show that learning of the new dedicated embedding matrix in the target language can be improved by leveraging a small number of vocabulary items (i.e., the so-called ...). EMNLP 2021.
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2012.15562
https://dx.doi.org/10.48550/arxiv.2012.15562
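The abstract above describes adaptation via matrix factorization of the pretrained embedding matrix. The following minimal Python sketch illustrates that general idea only; it is not the paper's released implementation. It factorizes a mock multilingual embedding matrix into per-token coefficients and a shared latent basis, then initializes a new target-language vocabulary over that frozen basis, copying coefficients for lexically overlapping tokens. All names (latent_dim, the toy vocabularies, the overlap heuristic) are assumptions made for illustration.

```python
# Illustrative sketch (assumed setup, not the authors' code): factorize a
# pretrained embedding matrix so new target-language embeddings can be
# expressed as small coefficient matrices over a shared latent basis.
import numpy as np

def factorize_embeddings(E: np.ndarray, latent_dim: int):
    """Approximate E (vocab x hidden) as F @ G via truncated SVD.

    F: vocab x latent_dim   (per-token coefficients)
    G: latent_dim x hidden  (shared latent basis, reused for new languages)
    """
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    F = U[:, :latent_dim] * S[:latent_dim]   # token-specific coefficients
    G = Vt[:latent_dim, :]                   # language-agnostic basis
    return F, G

def init_target_embeddings(target_vocab, source_vocab_index, F_src, latent_dim, seed=0):
    """Initialize coefficients for a new target-language vocabulary.

    Tokens also present in the source vocabulary copy their coefficients
    (lexical overlap); unseen tokens start from small random values and
    would be trained while the rest of the model stays frozen.
    """
    rng = np.random.default_rng(seed)
    F_tgt = rng.normal(scale=0.02, size=(len(target_vocab), latent_dim))
    for i, tok in enumerate(target_vocab):
        j = source_vocab_index.get(tok)
        if j is not None:
            F_tgt[i] = F_src[j]
    return F_tgt

# Toy usage with a fake 1000-token, 768-dimensional embedding matrix.
E = np.random.default_rng(1).normal(size=(1000, 768))
F_src, G = factorize_embeddings(E, latent_dim=128)
source_index = {f"tok{i}": i for i in range(1000)}
target_vocab = ["tok3", "tok42", "newtok_a", "newtok_b"]
F_tgt = init_target_embeddings(target_vocab, source_index, F_src, latent_dim=128)
E_tgt = F_tgt @ G   # reconstructed embeddings for the new vocabulary
print(E_tgt.shape)  # (4, 768)
```

In a real adaptation setting, the target coefficients would then be trained on target-language text while the transformer body and the shared basis remain frozen.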
16
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer ...
Pfeiffer, Jonas; Vulic, Ivan; Gurevych, Iryna. Apollo - University of Cambridge Repository, 2020
17
Morphologically Aware Word-Level Translation ...
18
Morphologically Aware Word-Level Translation
In: Proceedings of the 28th International Conference on Computational Linguistics (2020)
19
Morphologically Aware Word-Level Translation ...
20
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization ...


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 38