1. Negative language transfer in learner English: A new dataset
3. Parallel sentences mining with transfer learning in an unsupervised setting
4. Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation
5. Detoxifying Language Models Risks Marginalizing Minority Voices
6. Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding
7. Knowledge Enhanced Masked Language Model for Stance Detection
9. Frustratingly Easy Edit-based Linguistic Steganography with a Masked Language Model
10. MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories
11. DirectProbe: Studying Representations without Classifiers
Abstract:
Read the paper at the following link: https://www.aclweb.org/anthology/2021.naacl-main.401/
Understanding how linguistic structure is encoded in contextualized embeddings could help explain their impressive performance across NLP. Existing approaches for probing them usually call for training classifiers and use accuracy, mutual information, or complexity as a proxy for the representation's goodness. In this work, we argue that doing so can be unreliable because different representations may need different classifiers. We develop a heuristic, DirectProbe, that directly studies the geometry of a representation by building upon the notion of a version space for a task. Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shed light on how an embedding space represents labels and also anticipate classifier performance for the representation.
Keywords:
Artificial Intelligence; Computer Science and Engineering; Intelligent System; Natural Language Processing
URL: https://dx.doi.org/10.48448/vyzv-j336
URL: https://underline.io/lecture/19706-directprobe-studying-representations-without-classifiers
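
The DirectProbe abstract above describes probing an embedding space by examining its geometry directly, rather than by training classifier probes. As a rough, hypothetical illustration of that idea (a sketch, not the paper's actual algorithm), the snippet below measures the minimum distance between label regions in a toy embedding space; every name in it (label_region_margins, the toy data) is an invention for this example.

```python
# A minimal, hypothetical sketch of classifier-free geometric probing in the
# spirit described by the DirectProbe abstract: instead of training a probe,
# inspect how label regions sit in the embedding space. Function and variable
# names are illustrative, not the paper's implementation.
import numpy as np

def label_region_margins(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """For each pair of labels, return the minimum distance between their
    point sets; a large positive margin suggests even a simple classifier
    could separate them, with no probe training required."""
    margins = {}
    classes = np.unique(labels)
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            xa = embeddings[labels == a]
            xb = embeddings[labels == b]
            # All pairwise Euclidean distances between the two label regions.
            dists = np.linalg.norm(xa[:, None, :] - xb[None, :, :], axis=-1)
            margins[(a, b)] = float(dists.min())
    return margins

# Toy usage: two well-separated label clusters in a 2-D "embedding" space.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(5.0, 0.5, (20, 2))])
lab = np.array([0] * 20 + [1] * 20)
print(label_region_margins(emb, lab))
```

A large margin between two label regions hints that the representation already separates those labels, which is the kind of classifier-free signal about an embedding space the abstract alludes to.
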
12. Challenging distributional models with a conceptual network of philosophical terms
14. ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
15. Proteno: Text Normalization with Limited Data for Fast Deployment in Text to Speech Systems
16. CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems
17. multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
18. Modeling Framing in Immigration Discourse on Social Media
19. Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve
20. SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding