Catalogue search
Hits 1 – 9 of 9
1. Towards Interactive Language Modeling. ter Hoeve, Maartje; Kharitonov, Evgeny; Hupkes, Dieuwke. arXiv, 2021.
2. Language Models Use Monotonicity to Assess NPI Licensing. Jumelet, Jaap; Denić, Milica; Szymanik, Jakub. arXiv, 2021.
3. Language Modelling as a Multi-Task Problem. Weber, Lucas; Jumelet, Jaap; Bruni, Elia. arXiv, 2021.
4. How BPE Affects Memorization in Transformers. Kharitonov, Eugene; Baroni, Marco; Hupkes, Dieuwke. arXiv, 2021.
Abstract: Training data memorization in NLP can be both beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies, and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both the ability and the tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce the training data more often. We ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2110.02782
https://dx.doi.org/10.48550/arxiv.2110.02782
5. Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans. Lakretz, Yair; Desbordes, Théo; Hupkes, Dieuwke. arXiv, 2021.
6. Sparse Interventions in Language Models with Differentiable Masking. De Cao, Nicola; Schmid, Leon; Hupkes, Dieuwke. arXiv, 2021.
7. Language Models Use Monotonicity to Assess NPI Licensing. The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing 2021; ., Dieuwke; ., Jakub. Underline Science Inc., 2021.
8. Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network. The 2021 Conference on Empirical Methods in Natural Language Processing 2021; ., Dieuwke; ., Kate. Underline Science Inc., 2021.
9. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. The 2021 Conference on Empirical Methods in Natural Language Processing 2021; ., Dieuwke; Jia, Robin. Underline Science Inc., 2021.