
Search in the Catalogues and Directories

Hits 1 – 20 of 37

1. A Latent-Variable Model for Intrinsic Probing
2. A Neighbourhood Framework for Resource-Lean Content Flagging
3. A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives
4. QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension
5. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
6. Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training
7. Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
Abstract: There have been many efforts to understand what grammatical knowledge (e.g., the ability to recognize the part of speech of a token) is encoded in large pre-trained language models (LMs). This is done through 'Edge Probing' (EP) tests: simple ML models that predict the grammatical properties of a span (e.g., whether it has a particular part of speech) using only the LM's token representations. However, most NLP applications use fine-tuned LMs. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Conducting experiments on multiple question-answering (QA) datasets, we answer that question negatively: the EP test results do not change significantly, whether the fine-tuned QA model performs well or is placed in adversarial situations where it is forced to learn wrong correlations. However, a critical analysis of the EP task datasets reveals that EP models may rely on spurious correlations to make predictions. This indicates that even if fine-tuning changes the ...
Keyword: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2109.07102
https://dx.doi.org/10.48550/arxiv.2109.07102
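
To make the 'Edge Probing' setup described in this abstract concrete, here is a minimal sketch of such a probe: a small classifier trained on top of frozen LM token representations to predict a property of a text span. The probe architecture, pooling choice, and tensor sizes below are illustrative assumptions, not the paper's implementation.

# Minimal edge-probing sketch: a small classifier over frozen LM states.
# All names, shapes, and the architecture are assumptions for illustration.
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    """Predict a span-level label (e.g. part of speech) from frozen LM states."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, token_states: torch.Tensor, span: tuple) -> torch.Tensor:
        # Mean-pool the token representations inside the span, then classify.
        # The LM itself is never updated; only this small probe is trained.
        start, end = span
        pooled = token_states[start:end].mean(dim=0)
        return self.classifier(pooled)

# Dummy usage: 12 tokens with 768-dim states and 17 POS tags (assumed sizes).
states = torch.randn(12, 768)        # stand-in for frozen LM token outputs
probe = EdgeProbe(hidden_dim=768, num_labels=17)
logits = probe(states, span=(3, 5))  # probe the span covering tokens 3-4

Because only the probe's parameters are trained, its accuracy is read as a measure of what the frozen representations encode; the paper's question is whether these scores shift after the underlying LM is fine-tuned for QA.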
8. CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding
9. How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?
10. Is Sparse Attention more Interpretable?
11. Is Sparse Attention more Interpretable?
12. Quantifying Gender Biases Towards Politicians on Reddit
13. A Survey on Gender Bias in Natural Language Processing
14. Semi-Supervised Exaggeration Detection of Health Science Press Releases
15. Inducing Language-Agnostic Multilingual Representations
16. Zero-Shot Cross-Lingual Transfer with Meta Learning
17. SIGTYP 2020 Shared Task: Prediction of Typological Features
18. Generating Fact Checking Explanations
19. X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension
20. TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP


Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 37