2 | IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
Source: BASE
3 | Multilingual Pre-training with Language and Task Adaptation for Multilingual Text Style Transfer

Abstract: We exploit the pre-trained seq2seq model mBART for multilingual text style transfer. Using machine-translated data as well as gold aligned English sentences yields state-of-the-art results in the three target languages we consider. In addition, given the general scarcity of parallel data, we propose a modular approach for multilingual formality transfer, which consists of two training strategies that target adaptation to both language and task. Our approach achieves competitive performance without monolingual task-specific parallel data and can be applied to other style transfer tasks as well as to other languages. (Accepted to ACL 2022)

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2203.08552
DOI: https://dx.doi.org/10.48550/arxiv.2203.08552
5 | Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer
6 | Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
7 | Generic resources are what you need: Style transfer tasks without task-specific parallel training data
8 | Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
9 | As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages
10 | Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
11 | Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
12 | What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models
13 | Personal-ITY: A Novel YouTube-based Corpus for Personality Prediction in Italian
14 | Datasets and Models for Authorship Attribution on Italian Personal Writings
15 | Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
16 | Matching Theory and Data with Personal-ITY: What a Corpus of Italian YouTube Comments Reveals About Personality
17 | Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
18 | As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages
19 | Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor
In: Computational Linguistics, Vol. 46, Iss. 2, pp. 487-497 (2020)