
Search in the Catalogues and Directories

Hits 1 – 20 of 51

1
SOCIOFILLMORE: A Tool for Discovering Perspectives ...
2
IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation ...
Sarti, Gabriele; Nissim, Malvina. arXiv, 2022
3
Multilingual Pre-training with Language and Task Adaptation for Multilingual Text Style Transfer ...
4
DALC: the Dutch Abusive Language Corpus ...
5
Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer ...
6
Adapting Monolingual Models: Data can be Scarce when Language Similarity is High ...
7
Generic resources are what you need: Style transfer tasks without task-specific parallel training data ...
8
Adapting Monolingual Models: Data can be Scarce when Language Similarity is High ...
9
As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages ...
10
Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students ...
11
Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
Pannitto, Ludovica; Busso, Lucia; Combei, Claudia Roberta. Association for Computational Linguistics, 2021
12
What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models ...
13
Personal-ITY: A Novel YouTube-based Corpus for Personality Prediction in Italian ...
14
Datasets and Models for Authorship Attribution on Italian Personal Writings ...
15
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias ...
16
Matching Theory and Data with Personal-ITY: What a Corpus of Italian YouTube Comments Reveals About Personality ...
17
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias ...
Abstract: Is it possible to draw a line between workforce statistics and gender bias in contextualized word embeddings? Focusing on BERT (Devlin et al., 2018), we measure gender bias by studying associations between gender-denoting target words and names of professions (Kurita et al., 2019), offering two further perspectives: a comparison with U.S. labor statistics as well as a cross-lingual approach. We mitigate bias by fine-tuning BERT on the GAP corpus (Webster et al., 2018), after applying Counterfactual Data Substitution (CDS; Maudslay et al., 2019). While our method of measuring bias is appropriate for languages such as English, it is not suitable for languages with gender-marking, such as German. Our results highlight the importance of investigating bias and mitigation techniques cross-linguistically and connect large-scale language models to real-world data. ...
Keyword: Natural Language Processing
URL: https://underline.io/lecture/6595-unmasking-contextual-stereotypes-measuring-and-mitigating-bert's-gender-bias
https://dx.doi.org/10.48448/j5m3-4d93
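The association measure this abstract cites (Kurita et al., 2019) can be approximated in a few lines of Python. The sketch below is illustrative only and is not the paper's released code: the model name (bert-base-uncased), the "he"/"she" target words, the profession list, and the template sentence are assumptions made for the example. It compares how strongly a masked BERT predicts each gendered target in a profession template, normalised by a prior in which the profession is also masked.

```python
# Minimal sketch of a Kurita-style masked-LM association probe.
# Assumptions (not from the record): model, targets, professions, template.
import math
import torch
from transformers import BertTokenizer, BertForMaskedLM

MODEL_NAME = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForMaskedLM.from_pretrained(MODEL_NAME).eval()

def prob_at_first_mask(sentence: str, token: str) -> float:
    """Probability BERT assigns to `token` at the first [MASK] in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(token)].item()

def log_bias_score(target: str, profession: str) -> float:
    """Target probability in the profession template, normalised by its
    prior probability when the profession is masked as well."""
    p_target = prob_at_first_mask(f"[MASK] is a {profession}.", target)
    p_prior = prob_at_first_mask("[MASK] is a [MASK].", target)
    return math.log(p_target / p_prior)

# A positive difference means "he" is more strongly associated with the
# profession than "she" under this template.
for profession in ["nurse", "engineer"]:
    diff = log_bias_score("he", profession) - log_bias_score("she", profession)
    print(f"{profession:>8s}: he-vs-she association difference = {diff:+.3f}")
```

The mitigation step the abstract describes (fine-tuning BERT on the GAP corpus after Counterfactual Data Substitution) would then be assessed by re-running the same probe on the fine-tuned model.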
18
As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages ...
de Vries, Wietse; Nissim, Malvina. arXiv, 2020
19
Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor
In: Computational Linguistics, Vol 46, Iss 2, Pp 487-497 (2020)
20
BERTje: A Dutch BERT Model ...

