1 | Delving Deeper into Cross-lingual Visual Question Answering
2 | Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation
3 | Improving Word Translation via Two-Stage Contrastive Learning
5 | Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems
6 | Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking
7 | Combining Deep Generative Models and Multi-lingual Pretraining for Semi-supervised Document Classification
8 | MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
9 | Context vs Target Word: Quantifying Biases in Lexical Semantic Datasets
10 | AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples
11 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders
12 | XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
13 | Emergent Communication Pretraining for Few-Shot Machine Translation
14 | A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
15 | Verb Knowledge Injection for Multilingual Event Processing
16 | Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
17 | Probing Pretrained Language Models for Lexical Semantics

Abstract:
The success of large pretrained language models (LMs) such as BERT and RoBERTa has sparked interest in probing their representations, in order to unveil what types of knowledge they implicitly capture. While prior research focused on morphosyntactic, semantic, and world knowledge, it remains unclear to what extent LMs also derive lexical type-level knowledge from words in context. In this work, we present a systematic empirical analysis across six typologically diverse languages and five different lexical tasks, addressing the following questions: 1) How do different lexical knowledge extraction strategies (monolingual versus multilingual source LM, out-of-context versus in-context encoding, inclusion of special tokens, and layer-wise averaging) impact performance? How consistent are the observed effects across tasks and languages? 2) Is lexical knowledge stored in few parameters, or is it scattered throughout the network? 3) How do these representations fare against traditional static word vectors in ...

EMNLP 2020 (long paper)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2010.05731
DOI: https://dx.doi.org/10.48550/arxiv.2010.05731
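
The extraction strategies named in this abstract are easy to prototype. Below is a minimal sketch (not the authors' code) of one configuration: out-of-context encoding of a single word with a monolingual LM, optional special tokens, and layer-wise averaging over the lower Transformer layers. The model name (bert-base-uncased), the choice of six layers, and the type_level_embedding helper are illustrative assumptions, not taken from the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Monolingual source LM (assumption; the paper also compares multilingual LMs).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def type_level_embedding(word, num_layers=6, use_special_tokens=False):
    # Encode the word in isolation ("out-of-context" strategy), with or
    # without [CLS]/[SEP] (the "inclusion of special tokens" knob).
    inputs = tokenizer(word, return_tensors="pt",
                       add_special_tokens=use_special_tokens)
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states is a tuple of (num_layers + 1) tensors, each of
    # shape (1, seq_len, dim); index 0 is the static embedding layer.
    layers = torch.stack(out.hidden_states[1:num_layers + 1])
    # Layer-wise averaging, then averaging over the word's subword tokens.
    return layers.mean(dim=0).squeeze(0).mean(dim=0)

# Compare two type-level vectors, e.g. for a semantic similarity task.
v1 = type_level_embedding("car")
v2 = type_level_embedding("automobile")
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```

The in-context variant would instead encode the word inside example sentences and average the vectors of its occurrences; only the input construction changes, not the layer averaging.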
18 | The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures
19 | Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
20 | Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?