
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1. First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
   In: https://hal.inria.fr/hal-03161685 (2021)
   Source: BASE

2. First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
   In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine ; https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/
   Source: BASE

3. First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT ...
   Source: BASE

4. Contrastive Explanations for Model Interpretability ...
   Source: BASE

5. Measuring and Improving Consistency in Pretrained Language Models ...
   Source: BASE

6. Amnesic Probing: Behavioral Explanation With Amnesic Counterfactuals ...
   Source: BASE

7. It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT ...
   Source: BASE

8. The Extraordinary Failure of Complement Coercion Crowdsourcing ...
   Source: BASE

9. Do Language Embeddings Capture Scales? ...
   Abstract: Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance and show that a simple method of canonicalizing numbers can have a significant effect on the results.
   Note: Accepted at EMNLP Findings 2020 and the EMNLP BlackboxNLP workshop 2020; 8 pages, 2 figures; minor changes to the acknowledgment section.
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; I.2.7
   URL: https://dx.doi.org/10.48550/arxiv.2010.05345
   https://arxiv.org/abs/2010.05345
   Source: BASE

10. Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals ...
    Source: BASE

11. Evaluating Models' Local Decision Boundaries via Contrast Sets ...
    Source: BASE

12. Unsupervised Distillation of Syntactic Information from Contextualized Word Representations ...
    Source: BASE

13. How Large Are Lions? Inducing Distributions over Quantitative Attributes ...
    Source: BASE

14. Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution
    In: Transactions of the Association for Computational Linguistics, Vol 7, pp. 519-535 (2019)
    Source: BASE

Facet counts: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 14
© 2013 - 2024 Lin|gu|is|tik