
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation ...
BASE
2
Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution ...
BASE
3
Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution ...
BASE
4
Similarity is closeness: using distributional semantic spaces to model similarity in visual and linguistic metaphors
In: Corpus linguistics and linguistic theory. - Berlin ; New York : Mouton de Gruyter 15 (2019) 1, 101-137
BLLDB
5
What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue ...
BASE
6
Putting words in context: LSTM language models and lexical ambiguity ...
BASE
7
Negated adjectives and antonyms in distributional semantics: not similar?
Aina, Laura; Bernardi, Raffaella; Fernández, Raquel. - : Associazione Italiana di Linguistica Computazionale
BASE
8
Putting words in context: LSTM language models and lexical ambiguity
Boleda, Gemma; Gulordava, Kristina; Aina, Laura. - : ACL (Association for Computational Linguistics)
Abstract: Paper presented at the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), held 28 July to 2 August 2019 in Florence, Italy. ; In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information. ; This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Ramón y Cajal programme (grant RYC-2015-18907).
Keyword: Language models; Lexical ambiguity; Neural networks
URL: https://doi.org/10.18653/v1/P19-1324
http://hdl.handle.net/10230/42372
BASE
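The abstract above describes a diagnostic-probing method: the hidden states of an LSTM language model are tested for how much lexical and contextual information about each word they encode. The following is a minimal sketch of that general technique, not the authors' code: it substitutes an untrained toy LSTM (in PyTorch) for the trained language model and trains a linear probe to recover each word's identity from the hidden state; the vocabulary, sentences, and all identifiers are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and LSTM language-model components (illustrative only;
# the paper probes a trained LSTM language model over English).
vocab = ["the", "bank", "is", "near", "money", "river"]
word2id = {w: i for i, w in enumerate(vocab)}
emb = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(16, 32, batch_first=True)

sentences = [
    ["the", "bank", "is", "near", "the", "river"],
    ["the", "money", "is", "near", "the", "bank"],
]

# Run the model and collect (hidden state, word id) pairs as probe data.
states, labels = [], []
with torch.no_grad():
    for sent in sentences:
        ids = torch.tensor([[word2id[w] for w in sent]])
        hidden, _ = lstm(emb(ids))        # shape: (1, seq_len, 32)
        states.append(hidden.squeeze(0))
        labels.append(ids.squeeze(0))
X, y = torch.cat(states), torch.cat(labels)

# Linear diagnostic probe: can the word's identity be read off the state?
probe = nn.Linear(32, len(vocab))
opt = torch.optim.Adam(probe.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(X), y)
    loss.backward()
    opt.step()

acc = (probe(X).argmax(dim=1) == y).float().mean().item()
print(f"probe accuracy on training tokens: {acc:.2f}")

In the paper's setting the probe targets are lexical and contextual properties of words measured on a trained model; high probe accuracy indicates that the property is linearly readable from the hidden representation.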
9
Modeling word interpretation with deep language models: the interaction between expectations and lexical information
Aina, Laura; Brochhagen, Thomas; Boleda, Gemma. - : Cognitive Science Society
BASE
10
A distributional study of negated adjectives and antonyms
Fernández, Raquel; Bernardi, Raffaella; Aina, Laura. - : CEUR Workshop Proceedings
BASE
11
AMORE-UPF at SemEval-2018 Task 4: BiLSTM with entity library
Westera, Matthijs; Silberer, Carina; Aina, Laura. - : ACL (Association for Computational Linguistics)
BASE
12
How to represent a word and predict it, too: improving tied architectures for language modelling
Gulordava, Kristina; Aina, Laura; Boleda, Gemma. - : ACL (Association for Computational Linguistics)
BASE
13
How to represent a word and predict it, too: improving tied architectures for language modelling
Boleda, Gemma; Aina, Laura; Gulordava, Kristina. - : ACL (Association for Computational Linguistics)
BASE
14
What do entity-centric models learn? Insights from entity linking in multi-party dialogue
Westera, Matthijs; Silberer, Carina; Aina, Laura. - : ACL (Association for Computational Linguistics)
BASE

Catalogues: 0
Bibliographies: 1 (BLLDB)
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 13 (BASE)