1. Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning
3. All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality
4. To Point or Not to Point: Understanding How Abstractive Summarizers Paraphrase Text
5. Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
6. Discourse structure interacts with reference but not syntax in neural language models
7. Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
8. Can Entropy Explain Successor Surprisal Effects in Reading? In: Proceedings of the Society for Computation in Linguistics (2019)
11. The Influence of Syntactic Frequencies on Human Sentence Processing. In: http://rave.ohiolink.edu/etdc/view?acc_num=osu1502452939626929 (2017)