1 | First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 ; 2021
BASE

2 | Can Multilingual Language Models Transfer to an Unseen Dialect? A Case Study on North African Arabizi
In: https://hal.inria.fr/hal-03161677 ; 2021
BASE

3 | Cross-Lingual GenQA: A Language-Agnostic Generative Question Answering Approach for Open-Domain Question Answering ...
BASE

4 | First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
Abstract:
Multilingual pretrained language models have demonstrated remarkable zero-shot cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a task of interest in one language and evaluating on a distinct language not seen during fine-tuning. Despite promising results, we still lack a proper understanding of the source of this transfer. Using a novel layer ablation technique and analyses of the model's internal representations, we show that multilingual BERT, a popular multilingual language model, can be viewed as the stacking of two sub-networks: a multilingual encoder followed by a task-specific, language-agnostic predictor. While the encoder is crucial for cross-lingual transfer and remains mostly unchanged during fine-tuning, the task predictor has little impact on transfer and can be reinitialized during fine-tuning. We present extensive experiments with three distinct tasks, seventeen typologically diverse languages, and multiple domains to support our hypothesis. (Accepted at EACL 2021.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2101.11109 ; https://dx.doi.org/10.48550/arxiv.2101.11109
BASE
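
The abstract above reports that the upper, task-predictor part of multilingual BERT can be reinitialized before fine-tuning without hurting cross-lingual transfer. The following is a minimal sketch of that reinitialization idea using the Hugging Face transformers library; the split point (resetting the top 4 of 12 transformer layers) is an illustrative assumption, not the paper's empirically determined procedure, which locates the split via layer ablation.

from transformers import BertModel

# Load pretrained multilingual BERT (12 transformer layers).
model = BertModel.from_pretrained("bert-base-multilingual-cased")

# Hypothetical split: treat the top 4 layers as the task-specific
# "predictor" and the lower 8 as the "multilingual encoder".
NUM_PREDICTOR_LAYERS = 4

for layer in model.encoder.layer[-NUM_PREDICTOR_LAYERS:]:
    # _init_weights re-draws each submodule's parameters from the model's
    # original initialization scheme, discarding the pretrained weights.
    layer.apply(model._init_weights)

# The lower layers keep their pretrained, alignment-carrying weights;
# fine-tuning on source-language task data then proceeds as usual.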