2. Language Models Use Monotonicity to Assess NPI Licensing
   Source: BASE
3. Language Modelling as a Multi-Task Problem
   Abstract: In this paper, we propose to study language modelling as a multi-task problem, bringing together three strands of research: multi-task learning, linguistics, and interpretability. Based on hypotheses derived from linguistic theory, we investigate whether language models adhere to learning principles of multi-task learning during training. To showcase the idea, we analyse the generalisation behaviour of language models as they learn the linguistic concept of Negative Polarity Items (NPIs). Our experiments demonstrate that a multi-task setting naturally emerges within the objective of the more general task of language modelling. We argue that this insight is valuable for multi-task learning, linguistics, and interpretability research and can lead to exciting new findings in all three domains.
   Comments: Accepted for publication at EACL 2021
   Keywords: Computation and Language (cs.CL); Machine Learning (cs.LG); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2101.11287
   DOI: https://dx.doi.org/10.48550/arxiv.2101.11287
5. Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans
6. Sparse Interventions in Language Models with Differentiable Masking
7. Language Models Use Monotonicity to Assess NPI Licensing
8. Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network
9. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
10. Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans
11. Assessing Incrementality in Sequence-to-Sequence Models
12. Compositionality Decomposed: How Do Neural Networks Generalise?
14. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information
15. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
20. The Time Course of Verb Processing in Dutch Sentences
   In: http://www.cogsci.northwestern.edu/cogsci2004/papers/paper389.pdf (2004)