1. A Self-Paced Reading Study on Processing Constructions with Different Degrees of Compositionality
In: The 35th Annual Conference on Human Sentence Processing, Mar 2022, UC Santa Cruz, United States (2022). https://hal.archives-ouvertes.fr/hal-03620795

3. Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task ...

5. Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge
In: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, Aug 2021, Online, France, pp. 1-11. ⟨10.18653/v1/2021.starsem-1.1⟩ (2021). https://hal.archives-ouvertes.fr/hal-03312774

6. Not all arguments are processed equally: a distributional model of argument complexity
In: Language Resources and Evaluation, Springer Verlag, 2021, 55 (4), pp. 873-900. ISSN: 1574-020X; EISSN: 1574-0218. ⟨10.1007/s10579-021-09533-9⟩. https://hal.archives-ouvertes.fr/hal-03533181

10. Not all arguments are processed equally: a distributional model of argument complexity
In: Springer Netherlands (2021)

11. Decoding Word Embeddings with Brain-Based Semantic Features ...

12. Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge ...

13. A comparative evaluation and analysis of three generations of Distributional Semantic Models ...

14. Constructional associations trump lexical associations in processing valency coercion

15. Common-Sense and Common-Knowledge. How much do Neural Language Models know about the world?
In: http://etd.adm.unipi.it/theses/available/etd-03122021-000321/ (2021)

16. I neologismi nelle edizioni del 2010 e del 2021 del dizionario "Zingarelli della lingua italiana" [Neologisms in the 2010 and 2021 editions of the dictionary "Zingarelli della lingua italiana"]
In: http://etd.adm.unipi.it/theses/available/etd-05102021-214231/ (2021)

17. Large-scale Cross-lingual Word Sense Disambiguation using Parallel Corpora
In: http://etd.adm.unipi.it/theses/available/etd-09112021-110903/ (2021)

18. Probing the linguistic knowledge of word embeddings: A case study on colexification
In: http://etd.adm.unipi.it/theses/available/etd-06212021-172428/ (2021)

19. "Love is an open door but not a table". Come uomini e macchine 'comprendono' le metafore lessicalizzate e creative. ["Love is an open door but not a table". How humans and machines 'understand' lexicalized and creative metaphors.]
In: http://etd.adm.unipi.it/theses/available/etd-03242021-214055/ (2021)

Abstract:
Metaphor is a widespread linguistic and cognitive phenomenon, and many studies have investigated how humans understand and produce metaphors. A key aspect of the phenomenon is the difference between frozen and creative metaphors: humans have been shown to interpret conventional and novel metaphors differently and to be sensitive to this difference. Our study confirms that result. Metaphors, especially creative ones, are also difficult to model computationally. Recent progress has been made in metaphor identification, thanks in part to contextualized embeddings from models like BERT. To test what BERT, RoBERTa and GPT2 know about metaphors, we challenge them with a new dataset of conventional and creative metaphors accompanied by various types of human judgments. We find that the models can "recognize" metaphors and show interesting abilities, such as predicting creative metaphors. At the same time, the models still struggle to "interpret" metaphorical language, even though they outperform traditional static vectors. Our findings confirm previous claims about the abilities and limitations of these models. They also show that RoBERTa outperforms BERT and GPT2 in the first experiment, and that BERT-large performs reasonably well at "interpreting" metaphors in its upper-intermediate layers, as suggested by the results of the second experiment. Finally, the models show some similarities with humans, but still miss a significant part of human intuitions about the meaning of metaphors.
Keyword: FILOLOGIA; LETTERATURA E LINGUISTICA (Philology; Literature and Linguistics)
URL: http://etd.adm.unipi.it/theses/available/etd-03242021-214055/

20. Le interpretazioni del concetto di composizionalità delle espressioni idiomatiche nella letteratura psicolinguistica [Interpretations of the concept of compositionality of idiomatic expressions in the psycholinguistic literature]
In: http://etd.adm.unipi.it/theses/available/etd-09132021-151853/ (2021)