
Search in the Catalogues and Directories

Hits 1 – 3 of 3

1
Voice Query Auto Completion ...
BASE
2
The Art of Abstention: Selective Prediction and Error Regularization for Natural Language Processing ...
BASE
3
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning ...
Lee, Jaejun; Tang, Raphael; Lin, Jimmy. - : arXiv, 2019
Abstract: Pretrained transformer-based language models have achieved state of the art across countless tasks in natural language processing. These models are highly expressive, comprising at least a hundred million parameters and a dozen layers. Recent evidence suggests that only a few of the final layers need to be fine-tuned for high quality on downstream tasks. Naturally, a subsequent research question is, "how many of the last layers do we need to fine-tune?" In this paper, we precisely answer this question. We examine two recent pretrained language models, BERT and RoBERTa, across standard tasks in textual entailment, semantic similarity, sentiment analysis, and linguistic acceptability. We vary the number of final layers that are fine-tuned, then study the resulting change in task-specific effectiveness. We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality. Surprisingly, we also find that fine-tuning all layers does not always help.
Comment: 5 pages
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1911.03090
https://dx.doi.org/10.48550/arxiv.1911.03090
BASE
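The approach summarized in the abstract (fine-tuning only the last few transformer layers) maps onto a simple training setup. The following is a minimal sketch, not taken from the paper's code, of how one might freeze all but the last k encoder layers of a pretrained BERT model using the Hugging Face transformers library; the model name, the classification head, and the choice k = 3 are illustrative assumptions.

from transformers import BertForSequenceClassification

def freeze_all_but_last_k(model, k):
    # Disable gradients for the embeddings and all encoder layers except the
    # last k, so only the final k layers (plus the task head) are updated
    # during fine-tuning. Sketch only; the paper's exact setup may differ.
    for param in model.bert.embeddings.parameters():
        param.requires_grad = False
    layers = model.bert.encoder.layer  # 12 layers for bert-base
    for layer in layers[: len(layers) - k]:
        for param in layer.parameters():
            param.requires_grad = False
    return model

# Illustrative usage: fine-tune only the last 3 of 12 layers.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = freeze_all_but_last_k(model, k=3)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")

Whether the pooler and task head remain trainable is a design choice; here only the embeddings and the earlier encoder layers are frozen, which matches the spirit of fine-tuning just the final layers.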

Source facets: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 3