
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ... (BASE)
2. Do We Really Need Fully Unsupervised Cross-Lingual Embeddings? ... (BASE)
3. SEAGLE: A platform for comparative evaluation of semantic encoders for information retrieval
   Schmidt, Fabian David; Dietsche, Markus; Ponzetto, Simone Paolo. Association for Computational Linguistics, 2019. (BASE)
4. Multilingual and cross-lingual graded lexical entailment
   Glavaš, Goran; Vulić, Ivan; Ponzetto, Simone Paolo. Association for Computational Linguistics, 2019. (BASE)
5. Specializing distributional vectors of all words for lexical entailment
   Ponti, Edoardo Maria; Kamath, Aishwarya; Pfeiffer, Jonas. Association for Computational Linguistics, 2019. (BASE)
6. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions
   Glavaš, Goran; Litschko, Robert; Ruder, Sebastian. Association for Computational Linguistics, 2019. (BASE)
7. Cross-lingual semantic specialization via lexical relation induction
   Glavaš, Goran; Vulić, Ivan; Korhonen, Anna. Association for Computational Linguistics, 2019. (BASE)
8. Generalized tuning of distributional word vectors for monolingual and cross-lingual lexical entailment
   Vulić, Ivan; Glavaš, Goran. Association for Computational Linguistics, 2019. (BASE)
9. SenZi: A sentiment analysis lexicon for the latinised Arabic (Arabizi) (BASE)
10. Informing unsupervised pretraining with external linguistic knowledge
    Lauscher, Anne; Vulić, Ivan; Ponti, Edoardo Maria. Cornell University, 2019. (BASE)
11. Do we really need fully unsupervised cross-lingual embeddings?
    Vulić, Ivan; Glavaš, Goran; Reichart, Roi. Association for Computational Linguistics, 2019. (BASE)
12. Are we consistently biased? Multidimensional analysis of biases in distributional word vectors
    Lauscher, Anne; Glavaš, Goran. Association for Computational Linguistics, 2019. (BASE)
    Abstract: Word embeddings have recently been shown to reflect many of the pronounced societal biases (e.g., gender bias or racial bias). Existing studies are, however, limited in scope and do not investigate the consistency of biases across relevant dimensions like embedding models, types of texts, and different languages. In this work, we present a systematic study of biases encoded in distributional word vector spaces: we analyze how consistent the bias effects are across languages, corpora, and embedding models. Furthermore, we analyze the cross-lingual biases encoded in bilingual embedding spaces, indicative of the effects of bias transfer encompassed in cross-lingual transfer of NLP models. Our study yields some unexpected findings, e.g., that biases can be emphasized or downplayed by different embedding models or that user-generated content may be less biased than encyclopedic text. We hope our work catalyzes bias research in NLP and informs the development of bias reduction techniques.
    Keyword: 004 Informatik (Computer Science)
    URL: https://madoc.bib.uni-mannheim.de/49699/
    https://doi.org/10.18653/v1/S19-1010
    https://www.aclweb.org/anthology/S19-1010/

Hits by source type: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 12
© 2013 – 2024 Lin|gu|is|tik