
Search in the Catalogues and Directories

Hits 1 – 20 of 143 (page 1 of 8)

1. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation ... (BASE)
2. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection ... (BASE)
3. Probing Across Time: What Does RoBERTa Know and When? ... (BASE)
4. Specializing Multilingual Language Models: An Empirical Study ... Chau, Ethan C.; Smith, Noah A. - arXiv, 2021 (BASE)
5. Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? ... (BASE)
6. Finetuning Pretrained Transformers into RNNs ... (BASE)
7. Green NLP panel ... (BASE)
8. Sentence Bottleneck Autoencoders from Transformer Language Models ... (BASE)
9. All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text ... (BASE)
10. Measuring Association Between Labels and Free-Text Rationales ... (BASE)
11. Promoting Graph Awareness in Linearized Graph-to-Text Generation ... (BASE)
12. Shortformer: Better Language Modeling using Shorter Inputs ... (BASE)
13. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts ... (BASE)
14. Specializing Multilingual Language Models: An Empirical Study ... (BASE)
15. Challenges in Automated Debiasing for Toxic Language Detection ... (BASE)
16. NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics ... Lu, Ximing; Welleck, Sean; West, Peter. - arXiv, 2021 (BASE)
17. Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent ... (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.133/
Abstract: The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine, typically a variant of gradient descent (GD). To better understand this bias, we study the tendency for transformer parameters to grow in magnitude ($\ell_2$ norm) during training, and its implications for the emergent representations within self-attention layers. Empirically, we document norm growth in the training of transformer language models, including T5 during its pretraining. As the parameters grow in magnitude, we prove that the network approximates a discretized network with saturated activation functions. Such "saturated" networks are known to have a reduced capacity compared to the full network family, which can be described in terms of formal languages and automata. Our results suggest saturation is a new characterization of an ...
Keywords: Language Models; Natural Language Processing; Semantic Evaluation; Sociolinguistics
URL: https://underline.io/lecture/37533-effects-of-parameter-norm-growth-during-transformer-training-inductive-bias-from-gradient-descent
DOI: https://dx.doi.org/10.48448/2yr8-q466
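The saturation effect this abstract describes can be illustrated with a short numeric sketch (not the authors' code): in the sketch below, a scale factor c stands in for growing parameter norm, and scaling a set of hypothetical pre-softmax attention scores by c drives the attention weights toward a hard argmax, i.e. a "saturated" attention head.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over a 1-D score vector.
        z = x - x.max()
        e = np.exp(z)
        return e / e.sum()

    # Hypothetical pre-softmax attention scores for one query over five keys.
    scores = np.array([1.2, 0.3, 2.1, 1.9, -0.5])

    # A growing parameter norm acts like a shrinking softmax temperature:
    # as c grows, the weights approach a one-hot argmax distribution,
    # the "saturated" regime described in the abstract.
    for c in [1, 4, 16, 64]:
        weights = softmax(c * scores)
        print(f"c={c:>2}: {np.round(weights, 3)}")

With these example scores the weights concentrate almost entirely on the third key by c = 64; the saturated networks the abstract refers to correspond to this limit, which is what makes them describable in terms of formal languages and automata.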
18. Competency Problems: On Finding and Removing Artifacts in Language Data ... (BASE)
19. Infusing Finetuning with Semantic Dependencies ... (BASE)
20. Extracting and Inferring Personal Attributes from Dialogue. Wang, Zhilin. - 2021 (BASE)


Hits by source type: Catalogues 4; Bibliographies 7; Linked Open Data catalogues 0; Online resources 0; Open access documents 135.