1 | Delving Deeper into Cross-lingual Visual Question Answering ...
2 | Combating Temporal Drift in Crisis with Adapted Embeddings ...
3 | Annotation Curricula to Implicitly Train Non-Expert Annotators ...
4 | Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation ...
6 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models ...
7 | Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning ...
9 | GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval ...
10 | Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs
In: Transactions of the Association for Computational Linguistics, The MIT Press, 2020, vol. 8, EISSN 2307-387X, DOI: 10.1162/tacl_a_00332, https://hal.archives-ouvertes.fr/hal-03020314
11 | Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation ...
12 | How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation ...
13 | MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer ...
14 | Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning ...
15 | How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models ...
16 | UNKs Everywhere: Adapting Multilingual Language Models to New Scripts ...
Abstract: Massively multilingual language models such as multilingual BERT offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks. However, due to limited capacity and large differences in pretraining data sizes, there is a profound performance gap between resource-rich and resource-poor target languages. The ultimate challenge is dealing with under-resourced languages not covered at all by the models and written in scripts unseen during pretraining. In this work, we propose a series of novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts. Relying on matrix factorization, our methods capitalize on the existing latent knowledge about multiple languages already available in the pretrained model's embedding matrix. Furthermore, we show that learning of the new dedicated embedding matrix in the target language can be improved by leveraging a small number of vocabulary items (i.e., the so-called ... : EMNLP 2021 ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2012.15562 ; https://dx.doi.org/10.48550/arxiv.2012.15562
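Since this abstract sketches a concrete technique (factorizing the pretrained embedding matrix and reusing its latent knowledge to initialize embeddings for a new vocabulary), a minimal toy sketch follows. It is only a hedged reading of the truncated abstract, not the paper's implementation: the truncated-SVD factorization, the helper names factorize_embeddings and init_target_embeddings, and all shapes are illustrative assumptions.

# Toy sketch of the matrix-factorization idea from the abstract above.
# Assumptions (not from the paper): truncated SVD as the factorization;
# helper names and shapes are made up for illustration.
import numpy as np

def factorize_embeddings(E, k):
    # Factorize the pretrained embedding matrix E (V x d) into per-token
    # latent factors F (V x k) and a shared up-projection G (k x d),
    # so that E is approximated by F @ G.
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, :k] * S[:k], Vt[:k, :]

def init_target_embeddings(F_src, G, src_vocab, tgt_vocab, seed=0):
    # Build an embedding matrix for a new target-language vocabulary:
    # tokens shared with the source vocabulary reuse their latent factors
    # (the "lexically overlapping" items the abstract alludes to), while
    # tokens from an unseen script start from a small random draw.
    rng = np.random.default_rng(seed)
    F_tgt = rng.normal(scale=0.02, size=(len(tgt_vocab), F_src.shape[1]))
    src_index = {tok: i for i, tok in enumerate(src_vocab)}
    for j, tok in enumerate(tgt_vocab):
        if tok in src_index:
            F_tgt[j] = F_src[src_index[tok]]
    return F_tgt @ G  # new (V_tgt x d) embeddings, ready for fine-tuning

Keeping G fixed and learning only the low-dimensional F_tgt is one way such a scheme could stay data-efficient, since far fewer parameters are trained per new language; whether the paper does exactly this is not recoverable from the truncated abstract.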
17 | PuzzLing Machines: A Challenge on Learning From Small Data ...
18 | A Matter of Framing: The Impact of Linguistic Formalism on Probing Results ...
19 | Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs ...
20 | Empowering Active Learning to Jointly Optimize System and User Demands ...