
Search in the Catalogues and Directories

Hits 1 – 20 of 30 (page 1 of 2)

1
Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
In: Computational Linguistics 46(4), pp. 847-897. MIT Press, 2020. ISSN 0891-2017; EISSN 1530-9312. https://hal.archives-ouvertes.fr/hal-02975786 ; https://direct.mit.edu/coli/article/46/4/847/97326/Multi-SimLex-A-Large-Scale-Evaluation-of
2
Multidirectional Associative Optimization of Function-Specific Word Representations
Gerz, Daniela; Vulic, Ivan; Rei, Marek. Apollo - University of Cambridge Repository, 2020
3
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
4
Emergent Communication Pretraining for Few-Shot Machine Translation
5
Manual Clustering and Spatial Arrangement of Verbs for Multilingual Evaluation and Typology Analysis
Majewska, Olga; Vulic, Ivan; McCarthy, Diana. Apollo - University of Cambridge Repository, 2020
6
Emergent Communication Pretraining for Few-Shot Machine Translation
Li, Yaoyiran; Ponti, Edoardo; Vulic, Ivan. Apollo - University of Cambridge Repository, 2020
7
Emergent Communication Pretraining for Few-Shot Machine Translation
8
Manual Clustering and Spatial Arrangement of Verbs for Multilingual Evaluation and Typology Analysis
9
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
10
Verb Knowledge Injection for Multilingual Event Processing
11
Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
12
Probing Pretrained Language Models for Lexical Semantics
13
The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures
14
SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment
15
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
16
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Lauscher, Anne; Vulic, Ivan; Ponti, Edoardo. Apollo - University of Cambridge Repository, 2020
17
Classification-Based Self-Learning for Weakly Supervised Bilingual Lexicon Induction
Karan, Mladen; Vulic, Ivan; Korhonen, Anna. Apollo - University of Cambridge Repository, 2020
18
Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces
Vulic, Ivan; Korhonen, Anna; Glavas, Goran. Apollo - University of Cambridge Repository, 2020
19
Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces
Vulic, Ivan; Korhonen, Anna; Glavas, Goran. In: 5th Workshop on Representation Learning for NLP (RepL4NLP-2020), 2020
20
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Lauscher, Anne; Vulic, Ivan; Ponti, Edoardo; Korhonen, Anna; Glavas, Goran. In: Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). International Committee on Computational Linguistics, 2020. https://www.aclweb.org/anthology/2020.coling-main.118
Abstract: Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge; that is, we integrate discrete knowledge of word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting where we couple BERT's masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our "Lexically Informed" BERT (LIBERT), specialized for word-level semantic similarity, yields better performance than the lexically blind "vanilla" BERT on several language understanding tasks. Concretely, LIBERT outperforms BERT on 9 out of 10 tasks of the GLUE benchmark and is on a par with BERT on the remaining one. Moreover, we show consistent gains on 3 benchmarks for lexical simplification, a task where knowledge about word-level semantic similarity is paramount, as well as large gains on lexical reasoning probes.
URL: https://doi.org/10.17863/CAM.62219
https://www.repository.cam.ac.uk/handle/1810/315112
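
The multi-task setup described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration of coupling BERT's masked language modeling (MLM) and next sentence prediction (NSP) losses with an auxiliary binary word-relation classifier; the names RelationHead and libert_loss, the pair-encoding scheme, and the equal loss weighting are all assumptions for illustration, not the authors' released implementation.

    # Hypothetical sketch of a LIBERT-style multi-task objective; not the
    # authors' code. MLM and NSP losses are combined with an auxiliary
    # binary word-relation classification loss over word pairs (e.g., a
    # synonym pair vs. a random pair).
    import torch
    import torch.nn as nn

    class RelationHead(nn.Module):
        """Binary classifier over a pair of word encodings (related vs. not)."""
        def __init__(self, hidden_size: int):
            super().__init__()
            self.classifier = nn.Linear(2 * hidden_size, 2)

        def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
            # Concatenate the two word encodings and score the pair.
            return self.classifier(torch.cat([left, right], dim=-1))

    def libert_loss(mlm_logits, mlm_labels, nsp_logits, nsp_labels,
                    rel_logits, rel_labels, vocab_size):
        """Sum the three task losses; equal weighting is an assumption."""
        ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks non-masked positions
        loss_mlm = ce(mlm_logits.view(-1, vocab_size), mlm_labels.view(-1))
        loss_nsp = ce(nsp_logits, nsp_labels)
        loss_rel = ce(rel_logits, rel_labels)  # auxiliary lexical task
        return loss_mlm + loss_nsp + loss_rel

In training, batches for the word-relation task would be interleaved with standard pretraining batches; the paper's actual sampling and weighting scheme may differ.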

Hit counts by source type: Catalogues 0, Bibliographies 0, Linked Open Data catalogues 0, Online resources 0, Open access documents 30.