
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 (2021)
BASE
2
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine ; https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/ (2021)
BASE
3
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT ...
BASE
4
Contrastive Explanations for Model Interpretability ...
BASE
5
Measuring and Improving Consistency in Pretrained Language Models ...
BASE
6
Amnesic Probing: Behavioral Explanation With Amnesic Counterfactuals ...
BASE
7
It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT ...
BASE
8
The Extraordinary Failure of Complement Coercion Crowdsourcing ...
BASE
9
Do Language Embeddings Capture Scales? ...
BASE
10
Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals ...
BASE
11
Evaluating Models' Local Decision Boundaries via Contrast Sets ...
Abstract: Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2004.02709
https://dx.doi.org/10.48550/arxiv.2004.02709
BASE
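The abstract in hit 11 sketches an evaluation protocol: perturb each test instance in small, label-changing ways and then check whether a model handles the whole resulting contrast set. A minimal Python sketch of that protocol follows; the toy IMDb-style sentences and the predict() stub are hypothetical placeholders for illustration, not the authors' released data or code.

# Minimal sketch of the contrast-set idea from hit 11 (Evaluating Models'
# Local Decision Boundaries via Contrast Sets). The tiny sentiment example
# and the `predict` stub are hypothetical placeholders.

from typing import Callable, List, Tuple

# A contrast set: one original test instance plus small manual perturbations
# that (typically) flip the gold label.
ContrastSet = List[Tuple[str, str]]  # (text, gold_label)

example_set: ContrastSet = [
    ("The acting was subtle and the pacing was perfect.", "positive"),   # original
    ("The acting was subtle but the pacing was dreadful.", "negative"),  # perturbed
    ("The acting was wooden and the pacing was perfect.", "negative"),   # perturbed
]

def contrast_consistency(sets: List[ContrastSet],
                         predict: Callable[[str], str]) -> float:
    """Fraction of contrast sets on which the model labels *every* instance
    correctly; stricter than per-instance accuracy, it probes the local
    decision boundary around each original example."""
    correct = sum(all(predict(text) == gold for text, gold in s) for s in sets)
    return correct / len(sets)

if __name__ == "__main__":
    # Trivial keyword "model", used only to make the sketch runnable.
    def predict(text: str) -> str:
        return "negative" if any(w in text for w in ("dreadful", "wooden")) else "positive"

    print(contrast_consistency([example_set], predict))  # 1.0 for this toy model

The strict all-instances-correct criterion is what distinguishes this evaluation from aggregate test-set accuracy: a model that relies on a shallow decision rule will typically fail at least one perturbation in a set.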
12
Unsupervised Distillation of Syntactic Information from Contextualized Word Representations ...
BASE
13
How Large Are Lions? Inducing Distributions over Quantitative Attributes ...
BASE
14
Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution
In: Transactions of the Association for Computational Linguistics, Vol. 7, pp. 519-535 (2019)
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 14