2. Language Models Use Monotonicity to Assess NPI Licensing

4. How BPE Affects Memorization in Transformers

Abstract: Training data memorization in NLP can be both beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies, and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both the ability and the tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce the training data more often. We ...

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://arxiv.org/abs/2110.02782
DOI: https://dx.doi.org/10.48550/arxiv.2110.02782
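
The one record expanded above turns on Byte-Pair Encoding's merge budget: BPE starts from characters and repeatedly merges the most frequent adjacent symbol pair, so the number of merges directly sets the subword vocabulary size that the abstract links to memorization. The sketch below is a minimal character-level BPE trainer in the textbook style (Sennrich et al.), not the paper's implementation; the helper name `train_bpe`, the toy text, and the merge budgets are all invented for illustration.

```python
from collections import Counter

def train_bpe(corpus: str, num_merges: int):
    """Learn BPE merges from whitespace-split text.

    num_merges sets the vocabulary size: base characters
    plus one new symbol per merge.
    """
    # Each word type starts as a tuple of single characters.
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break  # every word is already a single symbol
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the new merged symbol.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges, words

# Toy demonstration: a larger merge budget (bigger vocabulary)
# segments the same text into fewer, longer tokens overall.
text = "low lower lowest low low newer new"
for budget in (0, 3, 10):
    _, segmented = train_bpe(text, budget)
    tokens = sum(len(w) * f for w, f in segmented.items())
    print(f"merges={budget:2d}  tokens={tokens}")
```

With more merges, frequent strings collapse into single tokens and each training sequence occupies fewer positions; that shift in sequence statistics is one plausible route by which vocabulary size could interact with memorization, which the abstract reports measuring via random-mapping fitting, membership inference, and prompted reproduction of training data.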

5. Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans

6. Sparse Interventions in Language Models with Differentiable Masking

7. Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans

9. Compositionality decomposed: how do neural networks generalise?

11. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information

12. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items