1 | Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets

In: https://hal.inria.fr/hal-03177623 (2021)
2 | Transformers without Tears: Improving the Normalization of Self-Attention

Abstract:
We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.

Comment: Accepted to IWSLT 2019 (oral); code is available at https://github.com/tnq177/transformers_without_tears

Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML)

URL: https://arxiv.org/abs/1910.05895
DOI: https://dx.doi.org/10.48550/arxiv.1910.05895
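
The ScaleNorm idea summarized in the abstract is compact enough to illustrate directly. The following is a minimal PyTorch sketch based only on the abstract's description, not the authors' reference implementation (which lives in the linked repository): each activation vector is $\ell_2$-normalized along the feature dimension and rescaled by a single learned scalar g, initialized here to sqrt(d_model) as the paper recommends. The tensor shapes in the usage lines are illustrative assumptions.

import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """l2-normalize along the feature dimension, then rescale by a single
    learned scalar g (vs. LayerNorm's per-feature gain and bias)."""

    def __init__(self, d_model: int, eps: float = 1e-5):
        super().__init__()
        # The paper initializes the scale to sqrt(d_model).
        self.g = nn.Parameter(torch.tensor(float(d_model) ** 0.5))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp the norm to avoid division by zero on all-zero vectors.
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm

# FixNorm is the same normalization applied to the word embeddings with a
# fixed length, so every embedding lies on a hypersphere of that radius.
x = torch.randn(2, 5, 512)          # (batch, seq_len, d_model), illustrative
out = ScaleNorm(d_model=512)(x)
assert out.shape == x.shape

Compared with LayerNorm, this replaces the d_model-sized gain and bias with one scalar, which is where the abstract's faster-training claim comes from.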