
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 (2021)
BASE
2
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine ; https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/ (2021)
BASE
3
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
BASE
4
Contrastive Explanations for Model Interpretability
BASE
5
Measuring and Improving Consistency in Pretrained Language Models
BASE
6
Amnesic Probing: Behavioral Explanation With Amnesic Counterfactuals
BASE
7
It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT
BASE
8
The Extraordinary Failure of Complement Coercion Crowdsourcing
Abstract: Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action -- e.g., "I started a new book I bought last week", where the implied action is reading. We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference. However, in both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work. Why does the same process fail to yield high agreement scores? We specify our modeling schemes, highlight the differences with previous work and provide some insights about the task and possible explanations for the failure. We conclude that specific phenomena require tailored solutions, not only in specialized algorithms, but also in data collection methods.
Comment: Workshop on Insights from Negative Results in NLP, co-located with EMNLP 2020
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2010.05971
https://dx.doi.org/10.48550/arxiv.2010.05971
BASE
9
Do Language Embeddings Capture Scales?
BASE
10
Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
BASE
11
Evaluating Models' Local Decision Boundaries via Contrast Sets
BASE
12
Unsupervised Distillation of Syntactic Information from Contextualized Word Representations
BASE
13
How Large Are Lions? Inducing Distributions over Quantitative Attributes
BASE
14
Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution
In: Transactions of the Association for Computational Linguistics, Vol 7, Pp 519-535 (2019)
BASE

Results by source: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 14