2 | Language Models Use Monotonicity to Assess NPI Licensing

5 | Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans

6 | Sparse Interventions in Language Models with Differentiable Masking

8 | Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network

9 | Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little

10 | Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans

11 | Assessing incrementality in sequence-to-sequence models

12 | Compositionality decomposed: how do neural networks generalise?

14 | Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information

15 | Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

Abstract: In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.

Comment: Accepted to the EMNLP workshop "Analyzing and interpreting neural networks for NLP"

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.1808.10627 https://arxiv.org/abs/1808.10627

20 | The time course of verb processing in Dutch sentences

In: http://www.cogsci.northwestern.edu/cogsci2004/papers/paper389.pdf (2004)