
Search in the Catalogues and Directories

Hits 1 – 20 of 37

1. A Latent-Variable Model for Intrinsic Probing
2. A Neighbourhood Framework for Resource-Lean Content Flagging
3. A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives
4. QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension
5. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
6. Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training
7. Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
8. CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding
9. How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?
10. Is Sparse Attention more Interpretable?
Abstract: Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on the recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists—under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for ...
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
Paper: https://www.aclanthology.org/2021.acl-short.17
URL: https://underline.io/lecture/25435-is-sparse-attention-more-interpretablequestion
DOI: https://dx.doi.org/10.48448/90jh-y922
(A brief sparse-attention sketch follows this results list.)
11. Is Sparse Attention more Interpretable?
12. Quantifying Gender Biases Towards Politicians on Reddit
13. A Survey on Gender Bias in Natural Language Processing
14. Semi-Supervised Exaggeration Detection of Health Science Press Releases
15. Inducing Language-Agnostic Multilingual Representations
16. Zero-Shot Cross-Lingual Transfer with Meta Learning
17. SIGTYP 2020 Shared Task: Prediction of Typological Features
18. Generating Fact Checking Explanations
19. X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension
20. TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP
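The abstract in item 10 contrasts dense and sparse attention distributions as explanation tools. As a rough, self-contained illustration of what "sparse attention" means there, the sketch below (an illustrative assumption, not code from the paper) compares a standard softmax distribution with sparsemax (Martins & Astudillo, 2016), one common way to induce sparsity: sparsemax drives several attention weights to exactly zero, while softmax gives every position some mass. The variable names and the toy score vector are hypothetical.

```python
# Minimal sketch: dense (softmax) vs. sparse (sparsemax) attention weights
# over a toy score vector. Not the paper's code; for illustration only.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex:
    # low-scoring positions receive exactly zero weight.
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum       # prefix of positions kept in the support
    k_z = k[support][-1]                      # support size
    tau = (cumsum[support][-1] - 1.0) / k_z   # threshold shared by supported positions
    return np.maximum(z - tau, 0.0)

rng = np.random.default_rng(0)
scores = rng.normal(size=6)  # toy attention scores over 6 (internal) positions

print("softmax  :", np.round(softmax(scores), 3))    # dense: every position gets some mass
print("sparsemax:", np.round(sparsemax(scores), 3))  # sparse: several weights are exactly 0
```

In the abstract's framing, the key caveat is that these weights sit over intermediate representations rather than the raw inputs, so a sparse distribution like the one above is not automatically a faithful explanation of which inputs mattered.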


Results by collection: all 37 hits are open access documents indexed in BASE; the catalogues, bibliographies, Linked Open Data catalogues, and online resources facets each return 0 results.