1. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval ...
   BASE
|
2. Data for paper: "Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval" ...
|
5. Data for paper: "Evaluating Resource-Lean Cross-Lingual Embedding Models in Unsupervised Retrieval" ...
|
6. On Cross-Lingual Retrieval with Multilingual Text Encoders ...
|
7. Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval ...
|
8. Data for paper: "Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval" ...
|
9. Evaluating multilingual text encoders for unsupervised cross-lingual retrieval
|
10. Probing Pretrained Language Models for Lexical Semantics ...

    Abstract: The success of large pretrained language models (LMs) such as BERT and RoBERTa has sparked interest in probing their representations in order to unveil what types of knowledge they implicitly capture. While prior research has focused on morphosyntactic, semantic, and world knowledge, it remains unclear to what extent LMs also derive lexical type-level knowledge from words in context. In this work, we present a systematic empirical analysis across six typologically diverse languages and five different lexical tasks, addressing the following questions: 1) How do different lexical knowledge extraction strategies (monolingual versus multilingual source LM, out-of-context versus in-context encoding, inclusion of special tokens, and layer-wise averaging) impact performance? How consistent are the observed effects across tasks and languages? 2) Is lexical knowledge stored in few parameters, or is it scattered throughout the network? 3) How do these representations fare against traditional static word vectors in ...

    Venue: EMNLP 2020 (long paper)
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
    URL: https://arxiv.org/abs/2010.05731 ; https://dx.doi.org/10.48550/arxiv.2010.05731
11. Probing Pretrained Language Models for Lexical Semantics ...
|
13. Towards Instance-Level Parser Selection for Cross-Lingual Transfer of Dependency Parsers

    Glavaš, Goran; Agić, Željko; Vulić, Ivan. In: Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). International Committee on Computational Linguistics, 2020. https://www.aclweb.org/anthology/2020.coling-main.345
|
15. Towards instance-level parser selection for cross-lingual transfer of dependency parsers
|
16. How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions ...
|
17. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions
|
18. Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only ...
|
19. Unsupervised Cross-Lingual Information Retrieval Using Monolingual Data Only ...
|
20. Unsupervised Cross-Lingual Information Retrieval Using Monolingual Data Only