1. Horse or pony? Visual Typicality and Lexical Frequency Affect Variability in Object Naming
   In: Proceedings of the Society for Computation in Linguistics (2022)

2. The interaction between cognitive ease and informativeness shapes the lexicons of natural languages
   In: Proceedings of the Society for Computation in Linguistics (2022)
|
5. Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution
|
7. Deep daxes: Mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks
|
8. A closer look at scalar diversity using contextualized semantic similarity
   In: Proceedings of Sinn und Bedeutung 24(2), 439-454 (2020)
|
9. Recurrent Instance Segmentation using Sequences of Referring Expressions
|
10. Don't Blame Distributional Semantics if it can't do Entailment
|
11. What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue
|
|
12. Putting words in context: LSTM language models and lexical ambiguity
|
|
14. Living a discrete life in a continuous world: Reference with distributed representations
|
|
16. The LAMBADA dataset
    Paperno, Denis; Kruszewski, Germán; Lazaridou, Angeliki; Pham, Quan Ngoc; Bernardi, Raffaella; Pezzelle, Sandro; Baroni, Marco; Boleda, Gemma; Fernández, Raquel. Zenodo, 2016

    Abstract: We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.

    URL: https://zenodo.org/record/2630551  DOI: https://dx.doi.org/10.5281/zenodo.2630551
|
17. The LAMBADA dataset: Word prediction requiring a broad discourse context
|
18. "Show me the cup": Reference with Continuous Representations
|
19. Zipf’s Law for Word Frequencies: Word Forms versus Lemmas in Long Texts