
Search in the Catalogues and Directories

Page: 1 2
Hits 1 – 20 of 37

1. A Latent-Variable Model for Intrinsic Probing ... (BASE)
2. A Neighbourhood Framework for Resource-Lean Content Flagging ... (BASE)
3. A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives ... (BASE)
4. QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension ... (BASE)
5. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
6. Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training ... (BASE)
7. Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models? ... (BASE)
8. CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding ... (BASE)
9. How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? ... (BASE)
10. Is Sparse Attention more Interpretable? ... (BASE)
11. Is Sparse Attention more Interpretable? ... (BASE)
12. Quantifying Gender Biases Towards Politicians on Reddit ... (BASE)
13. A Survey on Gender Bias in Natural Language Processing ... (BASE)
14. Semi-Supervised Exaggeration Detection of Health Science Press Releases ... (BASE)
15. Inducing Language-Agnostic Multilingual Representations ... (BASE)
16. Zero-Shot Cross-Lingual Transfer with Meta Learning ... (BASE)
17. SIGTYP 2020 Shared Task: Prediction of Typological Features ... (BASE)
18. Generating Fact Checking Explanations ... (BASE)
19. X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension ... (BASE)
20. TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP ... (BASE)
Abstract: While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of 'visualizing preferred inputs of neurons' to make it usable for transfer analysis and NLP. This allows one to analyze, track, and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer these abstractions to a new domain (2), or adapt them during supervised fine-tuning (3). TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like ...
Keywords: Computation and Language (cs.CL); Machine Learning (cs.LG); Machine Learning (stat.ML); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1912.00982
https://dx.doi.org/10.48550/arxiv.1912.00982
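The TX-Ray abstract's core idea — expressing a neuron as a preference distribution over input features and comparing those distributions across training stages — can be sketched roughly as follows. This is a minimal illustration based only on the abstract: the activation aggregation, the categorical feature set, and the use of Hellinger distance are assumptions, not the paper's actual implementation.

```python
import numpy as np

def preference_distribution(activations, feature_ids, n_features):
    """Aggregate one neuron's activations into a normalized preference
    distribution over categorical input features (e.g. token ids or POS tags)."""
    dist = np.zeros(n_features)
    # Accumulate activation mass per feature; clip negative activations to zero.
    np.add.at(dist, feature_ids, np.maximum(activations, 0.0))
    total = dist.sum()
    # Fall back to uniform if the neuron never fired on this data.
    return dist / total if total > 0 else np.full(n_features, 1.0 / n_features)

def hellinger(p, q):
    """Hellinger distance between two distributions, in [0, 1]:
    0 = identical preferences, 1 = disjoint support."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Hypothetical usage: compare one neuron's feature preferences before and
# after fine-tuning to quantify how much its learned abstraction shifted.
feats = np.array([0, 1, 1, 2])  # feature id observed at each token position
before = preference_distribution(np.array([1.0, 2.0, 1.0, 0.0]), feats, 3)
after = preference_distribution(np.array([0.0, 0.5, 0.5, 3.0]), feats, 3)
shift = hellinger(before, after)
```

A large distance would flag a neuron whose preferred inputs changed substantially between checkpoints; aggregating such distances over all neurons would give a crude per-layer transfer or adaptation score in the spirit the abstract describes.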


Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 37
© 2013 - 2024 Lin|gu|is|tik