
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. Towards Interactive Language Modeling ... (BASE)
2. Language Models Use Monotonicity to Assess NPI Licensing ... (BASE)
3. Language Modelling as a Multi-Task Problem ... (BASE)
4. How BPE Affects Memorization in Transformers ... (BASE)
Abstract: Training data memorization in NLP can both be beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both ability and tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce the training data more often. We ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2110.02782
https://dx.doi.org/10.48550/arxiv.2110.02782
5. Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans ... (BASE)
6. Sparse Interventions in Language Models with Differentiable Masking ... (BASE)
7. Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans ... (BASE)
8. Assessing incrementality in sequence-to-sequence models ... (BASE)
9. Compositionality decomposed: how do neural networks generalise? ... (BASE)
10. Formal models of structure building in music, language, and animal song (MPI für Psycholinguistik)
In: The origins of musicality (Cambridge, 2018), p. 253-286
11. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information ... (BASE)
12. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items ... (BASE)
Jumelet, Jaap; Hupkes, Dieuwke. arXiv, 2018

Hits by resource type:
Catalogues: 0
Bibliographies: 1
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 11