1 | Using the Interpolated Maze Task to Assess Incremental Processing in English Relative Clauses
In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 43, iss. 43 (2021)

2 | Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations
In: Association for Computational Linguistics (2021)

3 | SyntaxGym: An Online Platform for Targeted Evaluation of Language Models
In: Association for Computational Linguistics (2021)

4 | Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization
In: Association for Computational Linguistics (2021)

5 | Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study
In: Association for Computational Linguistics (2021)

6 | Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
In: Association for Computational Linguistics (2021)

7 | Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
In: Association for Computational Linguistics (2021)

8 | What do RNN Language Models Learn about Filler–Gap Dependencies?
In: Association for Computational Linguistics (2021)

9 | A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
Abstract:
We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena. Human reaction time data comes from a novel online experimental paradigm called the Interpolated Maze task. We compare human reaction times to by-word probabilities for four contemporary language models, with different architectures and trained on a range of data set sizes. We find that across many phenomena, both humans and language models show increased processing difficulty in ungrammatical sentence regions, with human and model 'accuracy' scores (a la Marvin and Linzen (2018)) about equal. However, although language model outputs match humans in direction, we show that models systematically under-predict the difference in magnitude of incremental processing difficulty between grammatical and ungrammatical sentences. Specifically, when models encounter syntactic violations they ...
Comment: To appear at ACL 2021
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2106.03232
DOI: https://dx.doi.org/10.48550/arxiv.2106.03232
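The abstract's core measurement, comparing human by-word reaction times against a model's by-word probabilities, is standardly operationalized as surprisal: the negative log probability of each word in context. Below is a minimal sketch of that computation, not the authors' released code; it assumes GPT-2 via the Hugging Face transformers library and a simple sum-over-subwords aggregation, whereas the paper itself evaluates four different models.

```python
# Minimal sketch: by-word surprisal from a pretrained autoregressive LM.
# Assumption: GPT-2 via Hugging Face `transformers`; the paper evaluates
# four models of varying architecture and training-data size.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisals(sentence: str) -> list[tuple[str, float]]:
    """Surprisal (in bits) per whitespace word, summed over its subword tokens."""
    words = sentence.split()
    # Encode word by word so we know which subword tokens belong to which word.
    ids, word_of_token = [], []
    for i, w in enumerate(words):
        # GPT-2's tokenizer is whitespace-sensitive: prefix a space for all
        # words after the first so tokenization matches running text.
        piece = w if i == 0 else " " + w
        for tid in tokenizer.encode(piece):
            ids.append(tid)
            word_of_token.append(i)
    input_ids = torch.tensor([ids])
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab)
    # Log-probability the model assigned to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = input_ids[0, 1:]
    nll = -log_probs[torch.arange(len(next_ids)), next_ids]  # in nats
    bits = nll / torch.log(torch.tensor(2.0))
    # Sum subword surprisals within each word.
    totals = [0.0] * len(words)
    for b, wi in zip(bits.tolist(), word_of_token[1:]):
        totals[wi] += b
    return list(zip(words, totals))

for word, s in word_surprisals("The keys to the cabinet are on the table."):
    print(f"{word}\t{s:.2f} bits")
```

Summing subword surprisals within each whitespace word is one common way to align model probabilities with by-word reading-time measures; note the first word receives no estimate, since an autoregressive model assigns no probability to its initial token.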
10 | Which Presuppositions are Subject to Contextual Felicity Constraints?
In: Semantics and Linguistic Theory, Proceedings of SALT 31, pp. 345-364, ISSN 2163-5951 (2021)

11 | A Systematic Assessment of Syntactic Generalization in Neural Language Models

12 | RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency