
Search in the Catalogues and Directories

Hits 1 – 15 of 15

1
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models ...
BASE
2
AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings ...
BASE
3
From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...
BASE
4
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
BASE
5
Specializing unsupervised pretraining models for word-level semantic similarity
Ponti, Edoardo Maria; Korhonen, Anna; Vulić, Ivan. - : Association for Computational Linguistics, ACL, 2020
BASE
6
AraWEAT: Multidimensional analysis of biases in Arabic word embeddings
Lauscher, Anne; Takieddin, Rafik; Ponzetto, Simone Paolo. - : Association for Computational Linguistics, 2020
BASE
7
Common sense or world knowledge? Investigating adapter-based knowledge injection into pretrained transformers
Lauscher, Anne; Majewska, Olga; Ribeiro, Leonardo F. R.. - : Association for Computational Linguistics, 2020
BASE
8
From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
Ravishankar, Vinit; Glavaš, Goran; Lauscher, Anne. - : Association for Computational Linguistics, 2020
BASE
9
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
BASE
10
Informing unsupervised pretraining with external linguistic knowledge
Abstract: Unsupervised pretraining models have been shown to facilitate a wide range of downstream applications. These models, however, still encode only distributional knowledge, incorporated through language modeling objectives. In this work, we complement the encoded distributional knowledge with external lexical knowledge. We generalize the recently proposed (state-of-the-art) unsupervised pretraining model BERT to a multi-task learning setting: we couple BERT's masked language modeling and next sentence prediction objectives with an auxiliary binary word relation classification task, through which we inject clean linguistic knowledge into the model. Our initial experiments suggest that our "linguistically-informed" BERT (LIBERT) yields performance gains over the linguistically-blind "vanilla" BERT on several language understanding tasks. (A rough sketch of this multi-task setup follows this entry.)
Keyword: 004 Informatik
URL: https://madoc.bib.uni-mannheim.de/51956/
https://arxiv.org/pdf/1909.02339.pdf
BASE
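The LIBERT abstract above describes a multi-task pretraining setup: BERT's masked language modeling and next sentence prediction objectives are trained jointly with an auxiliary binary classifier that decides whether a word pair stands in a lexical relation drawn from an external resource. The following is a minimal sketch of that idea, assuming a Hugging Face transformers BERT encoder; the class name LinguisticallyInformedBert and the relation head are hypothetical illustrations, not the authors' released implementation.

# Sketch of the multi-task idea behind LIBERT (an assumption-laden
# illustration, not the authors' code): a shared BERT encoder whose
# [CLS] representation feeds an auxiliary binary word-relation
# classifier, trained alongside the usual pretraining objectives.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class LinguisticallyInformedBert(nn.Module):  # hypothetical name
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Binary head: does the input word pair stand in a lexical
        # relation (e.g., synonymy from an external resource) or not?
        self.relation_head = nn.Linear(hidden, 2)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.relation_head(cls)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = LinguisticallyInformedBert()

# A word pair encoded as a sentence pair; label 1 = related, 0 = unrelated.
batch = tokenizer(["car"], ["automobile"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss_rel = nn.CrossEntropyLoss()(logits, torch.tensor([1]))

# In the multi-task setting, this auxiliary loss would be combined with
# (or alternated against) the standard MLM/NSP losses on text batches.
loss_rel.backward()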
11
Are we consistently biased? Multidimensional analysis of biases in distributional word vectors
Lauscher, Anne; Glavaš, Goran. - : Association for Computational Linguistics, 2019
BASE
12
ArguminSci: a tool for analyzing argumentation and rhetorical aspects in scientific writing
Glavaš, Goran; Lauscher, Anne; Eckert, Kai. - : Association for Computational Linguistics, 2018
BASE
13
An argument-annotated corpus of scientific publications
Ponzetto, Simone Paolo; Lauscher, Anne; Glavaš, Goran. - : Association for Computational Linguistics, 2018
BASE
14
Investigating the role of argumentation in the rhetorical analysis of scientific publications with neural multi-task learning models
Ponzetto, Simone Paolo; Eckert, Kai; Lauscher, Anne. - : Association for Computational Linguistics, 2018
BASE
15
University of Mannheim @ CLSciSumm-17: Citation-Based Summarization of Scientific Articles Using Semantic Textual Similarity
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 15