
Search in the Catalogues and Directories

Hits 1 – 16 of 16

1
Deep daxes: Mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks ...
BASE
2
Putting words in context: LSTM language models and lexical ambiguity ...
BASE
3
Towards incremental learning of word embeddings using context informativeness
In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics : Student Research Workshop, pp. 162-168 (2019)
BASE
4
Colorless green recurrent networks dream hierarchically
In: Proceedings of the Society for Computation in Linguistics (2019)
BASE
5
Colorless green recurrent networks dream hierarchically ...
BASE
6
Word order variation and dependency length minimisation : a cross-linguistic computational approach ...
Gulordava, Kristina. - : Université de Genève, 2018
BASE
7
Word order variation and dependency length minimisation : a cross-linguistic computational approach
Gulordava, Kristina. - : Université de Genève, 2018
BASE
8
Colorless green recurrent networks dream hierarchically
In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pp. 1195–1205 (2018)
BASE
9
Discontinuous Verb Phrases in Parsing and Machine Translation of English and German
In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) (2016)
BASE
10
Multi-lingual Dependency Parsing Evaluation : a Large-scale Analysis of Word Order Properties using Artificial Data
In: Transactions of the Association for Computational Linguistics, Vol. 4, pp. 343-356 (2016). ISSN: 2307-387X
BASE
11
Dependency length minimisation effects in short spans: a large-scale analysis of adjective placement in complex noun phrases
In: ACL 2015 - the 53rd Annual Meeting of the Association for Computational Linguistics, Jul 2015, Beijing, China (2015). https://hal.inria.fr/hal-01174617
BASE
12
Deep daxes: mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks
Gulordava, Kristina; Brochhagen, Thomas; Boleda, Gemma. - : Cognitive Science Society
Abstract: Children’s tendency to associate novel words with novel referents has been taken to reflect a bias toward mutual exclusivity. This tendency may be advantageous both as (1) an ad-hoc referent selection heuristic to single out referents lacking a label and as (2) an organizing principle of lexical acquisition. This paper investigates under which circumstances cross-situational neural models can come to exhibit analogous behavior to children, focusing on these two possibilities and their interaction. To this end, we evaluate neural networks on both symbolic data and, for the first time, on large-scale image data. We find that constraints in both learning and selection can foster mutual exclusivity, as long as they put words in competition for lexical meaning. For computational models, these findings clarify the role of available options for better performance in tasks where mutual exclusivity is advantageous. For cognitive research, they highlight latent interactions between word learning, referent selection mechanisms, and the structure of stimuli of varying complexity: symbolic and visual.
Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Spanish Ramón y Cajal programme (grant RYC-2015-18907). We thankfully acknowledge the computer resources at CTE-POWER and the technical support provided by the Barcelona Supercomputing Center (RES-IM-2019-3-0006). We are grateful to the NVIDIA Corporation for the donation of GPUs used for this research.
Keyword: Acquisition; Learning biases; Lexical meaning; Mutual exclusivity; Neural networks; Pragmatics; Referent selection
URL: http://hdl.handle.net/10230/48508
BASE
13
Putting words in context: LSTM language models and lexical ambiguity
Boleda, Gemma; Gulordava, Kristina; Aina, Laura. - : ACL (Association for Computational Linguistics)
BASE
14
Probing for referential information in language models
Sorodoc, Ionut-Teodor; Gulordava, Kristina; Boleda, Gemma. - : ACL (Association for Computational Linguistics)
BASE
15
How to represent a word and predict it, too: improving tied architectures for language modelling
Gulordava, Kristina; Aina, Laura; Boleda, Gemma. - : ACL (Association for Computational Linguistics)
BASE
16
How to represent a word and predict it, too: improving tied architectures for language modelling
Boleda, Gemma; Aina, Laura; Gulordava, Kristina. - : ACL (Association for Computational Linguistics)
BASE
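The abstract in entry 12 above argues that mutual exclusivity emerges when constraints put words in competition for lexical meaning, whether as a learning bias or as a pragmatic referent-selection step. The sketch below is a minimal, hypothetical illustration of the selection side of that idea, assuming a toy word-referent score matrix; it is not the paper's neural model, and the vocabulary, scores, and per-referent normalisation are invented for the example.

```python
# Minimal, hypothetical sketch of mutual exclusivity via word competition.
# NOT the model from entry 12: the vocabulary, association scores, and the
# per-referent softmax normalisation below are invented for illustration.

import numpy as np

words = ["ball", "cup", "dax"]          # "dax" is the novel word
referents = ["BALL", "CUP", "NOVEL"]    # "NOVEL" is the unlabelled object

# Rows: words, columns: referents. Familiar words have strong learned
# associations; the novel word "dax" has only weak, near-uniform scores.
scores = np.array([
    [5.00, 0.50, 0.60],   # ball
    [0.40, 5.00, 0.50],   # cup
    [0.35, 0.30, 0.30],   # dax: no real preference of its own
])

def literal_choice(word_idx: int) -> str:
    """Pick the referent with the highest raw score for the word alone."""
    return referents[int(np.argmax(scores[word_idx]))]

def pragmatic_choice(word_idx: int) -> str:
    """Normalise scores over words for each referent (softmax per column),
    so referents already 'claimed' by familiar words are penalised."""
    col_probs = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    return referents[int(np.argmax(col_probs[word_idx]))]

dax = words.index("dax")
print("literal  :", literal_choice(dax))    # BALL  -> no mutual exclusivity
print("pragmatic:", pragmatic_choice(dax))  # NOVEL -> mutual exclusivity
```

Because the familiar words claim BALL and CUP under the per-referent competition, the low-scoring novel word wins only the unlabelled referent, which is the mutual-exclusivity pattern the abstract describes; the raw argmax, by contrast, shows no such preference.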

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 16 (BASE)