
Search in the Catalogues and Directories

Hits 1 – 20 of 64

1. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing ... (BASE)
2. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing ... (BASE)
3. UK-LEX Dataset - Part of Chalkidis and Søgaard (2022) ...
   Chalkidis, Ilias; Søgaard, Anders. Zenodo, 2022. (BASE)
4. UK-LEX Dataset - Part of Chalkidis and Søgaard (2022) ...
   Chalkidis, Ilias; Søgaard, Anders. Zenodo, 2022. (BASE)
5. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing ... (BASE)
6. Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks ... (BASE)
7. Challenges and Strategies in Cross-Cultural NLP ... (BASE)
8. Factual Consistency of Multilingual Pretrained Language Models ... (BASE)
   Abstract: Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and in tasks that require inference. For that, however, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill in the blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, mParaRel, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts, and (ii) whether such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT on English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English, and even more so for all the other 45 languages. ...
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
   URL: https://arxiv.org/abs/2203.11552
   DOI: https://dx.doi.org/10.48550/arxiv.2203.11552
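
   The fill-in-the-blank consistency probe described in this abstract can be illustrated with a short sketch: query a multilingual masked language model with two paraphrases of the same fact and check whether its top predictions agree. This is a minimal illustration, not the mParaRel evaluation itself; it assumes the Hugging Face transformers library, and the two hand-written templates stand in for the paraphrase pairs the resource actually provides.

    # Minimal consistency probe (sketch): query a multilingual masked LM with
    # two paraphrases of the same fact and compare its top predictions.
    # Assumes the Hugging Face `transformers` library; the two templates below
    # are hand-written stand-ins for the paraphrase pairs in mParaRel.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

    paraphrases = [
        "Dante Alighieri was born in [MASK].",
        "The birthplace of Dante Alighieri is [MASK].",
    ]

    # Top-1 prediction for each paraphrase; the model is "consistent" on this
    # fact if both paraphrases elicit the same answer, regardless of whether
    # that answer is factually correct.
    predictions = [fill(sentence)[0]["token_str"].strip() for sentence in paraphrases]
    print(predictions, "consistent" if len(set(predictions)) == 1 else "inconsistent")
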
9. Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning ... (BASE)
10. How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns ... (BASE)
11. Replicating and Extending "Because Their Treebanks Leak": Graph Isomorphism, Covariants, and Parser Performance ... (BASE)
12. The Impact of Positional Encodings on Multilingual Compression ... (BASE)
13. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
14. Evaluation of Summarization Systems across Gender, Age, and Race ... (BASE)
15. Locke's Holiday: Belief Bias in Machine Reading ... (BASE)
16. Dynamic Forecasting of Conversation Derailment ... (BASE)
17. Replicating and Extending "Because Their Treebanks Leak": Graph Isomorphism, Covariants, and Parser Performance ... (BASE)
18. Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color ... (BASE)
19. Spurious Correlations in Cross-Topic Argument Mining ... (BASE)
20. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)


Hits by source type:
Catalogues: 8
Bibliographies: 10
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 51