
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. Please Mind the Root: Decoding Arborescences for Dependency Parsing
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
2. Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
3. Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
4. Learning a Cost-Effective Annotation Policy for Question Answering
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
5. Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
6. Speakers Fill Lexical Semantic Gaps with Context
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
7. Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
Abstract: Bolukbasi et al. (2016) present one of the first gender bias mitigation techniques for word embeddings. Their method takes pre-trained word embeddings as input and attempts to isolate a linear subspace that captures most of the gender bias in the embeddings. As judged by an analogical evaluation task, their method virtually eliminates gender bias in the embeddings. However, an implicit and untested assumption of their method is that the bias subspace is actually linear. In this work, we generalize their method to a kernelized, non-linear version. We take inspiration from kernel principal component analysis and derive a non-linear bias isolation technique. We discuss and overcome some of the practical drawbacks of our method for non-linear gender bias mitigation in word embeddings and analyze empirically whether the bias subspace is actually linear. Our analysis shows that gender bias is in fact well captured by a linear subspace, justifying the assumption of Bolukbasi et al. (2016).
URL: https://hdl.handle.net/20.500.11850/462322
https://doi.org/10.3929/ethz-b-000462322
BASE
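The abstract above hinges on one technical step: isolating a bias subspace in pre-trained word embeddings and projecting it out. The sketch below shows only the linear variant whose adequacy the paper tests; the kernelized, non-linear generalization derived in the paper is not reproduced here. The function names, the embeddings dictionary, and the word-pair list are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of linear bias-subspace removal in the spirit of
# Bolukbasi et al. (2016); the paper above generalizes this step with kernel PCA.
import numpy as np

def gender_directions(embeddings, pairs, k=1):
    """Estimate the top-k bias directions via PCA over difference vectors.

    embeddings: dict mapping word -> 1-D numpy array (all of the same dimension)
    pairs:      list of (female, male) word pairs, e.g. [("she", "he"), ("woman", "man")]
    """
    diffs = np.stack([embeddings[f] - embeddings[m] for f, m in pairs])
    diffs -= diffs.mean(axis=0)                    # centre the differences before PCA
    # PCA via SVD: the rows of vt are orthonormal principal directions
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]                                  # shape (k, dim)

def remove_bias(vec, directions):
    """Subtract vec's projection onto each (orthonormal) bias direction."""
    for d in directions:
        vec = vec - np.dot(vec, d) * d
    return vec
```

Because the directions returned by the SVD are orthonormal, subtracting the projections leaves the component of an embedding orthogonal to the estimated bias subspace. The paper's finding, per the abstract, is that such a linear subspace already captures gender bias well, justifying the linear assumption.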
8. Intrinsic Probing through Dimension Selection
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
9. Textual Data Augmentation for Efficient Active Learning on Tiny Datasets
Sutcliffe, Richard; Samothrakis, Spyridon; Quteineh, Husam. Association for Computational Linguistics, 2020
BASE
10. Probing pretrained language models for lexical semantics
Vulić, Ivan; Korhonen, Anna; Litschko, Robert. Association for Computational Linguistics, 2020
BASE
11. XCOPA: A multilingual dataset for causal commonsense reasoning
Ponti, Edoardo Maria; Majewska, Olga; Liu, Qianchu. Association for Computational Linguistics, 2020
BASE
12. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
Ravishankar, Vinit; Glavaš, Goran; Lauscher, Anne. Association for Computational Linguistics, 2020
BASE

Sources: all 12 hits are open access documents (BASE); no hits in catalogues, bibliographies, Linked Open Data catalogues, or online resources.