2. Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent ...

Anthology paper link: https://aclanthology.org/2021.emnlp-main.133/

Abstract: The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine, typically a variant of gradient descent (GD). To better understand this bias, we study the tendency for transformer parameters to grow in magnitude ($\ell_2$ norm) during training, and its implications for the emergent representations within self-attention layers. Empirically, we document norm growth in the training of transformer language models, including T5 during its pretraining. As the parameters grow in magnitude, we prove that the network approximates a discretized network with saturated activation functions. Such "saturated" networks are known to have a reduced capacity compared to the full network family, one that can be described in terms of formal languages and automata. Our results suggest saturation is a new characterization of an ...
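The saturation effect the abstract describes can be illustrated numerically: scaling attention logits by a growing constant (a stand-in for growing parameter norm) drives the softmax toward a hard, discretized distribution. A minimal sketch, with made-up logit values not taken from the paper:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative attention logits for a single query position.
logits = [1.0, 2.0, 1.5]

# Multiplying the logits by a growing constant c mimics parameter norm
# growth: softmax(c * x) approaches a one-hot distribution on the argmax
# (uniform over ties), i.e. the "saturated" attention of the abstract.
for c in [1, 10, 100]:
    weights = softmax([c * x for x in logits])
    print(c, [round(w, 4) for w in weights])
```

At c = 100 the weight on the largest logit is numerically 1, matching the claim that the scaled network approximates a discretized one.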
Keywords: Language Models; Natural Language Processing; Semantic Evaluation; Sociolinguistics

URL: https://underline.io/lecture/37533-effects-of-parameter-norm-growth-during-transformer-training-inductive-bias-from-gradient-descent
DOI: https://dx.doi.org/10.48448/2yr8-q466
3. Softmax Tree: An Accurate, Fast Classifier When the Number of Classes Is Large ...
5. GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation ...
6. RuleBERT: Teaching Soft Rules to Pre-Trained Language Models ...
7. Implicit Premise Generation with Discourse-aware Commonsense Knowledge Models ...
8. On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings ...
11. Enhanced Language Representation with Label Knowledge for Span Extraction ...
12. The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers ...
13. VeeAlign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment ...
14. Shortcutted Commonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning ...
15. On Classifying whether Two Texts are on the Same Side of an Argument ...
16. Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP ...
17. MTAdam: Automatic Balancing of Multiple Training Loss Terms ...
18. Types of Out-of-Distribution Texts and How to Detect Them ...
19. Asking It All: Generating Contextualized Questions for any Semantic Role ...
20. Competency Problems: On Finding and Removing Artifacts in Language Data ...