
Search in the Catalogues and Directories

Hits 1 – 20 of 52

1
Exploiting emojis for abusive language detection
Wiegand, Michael [author]; Ruppenhofer, Josef [author]; Merlo, Paola [editor]. - Mannheim : Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek, 2021
DNB Subject Category Language
2
Implicitly abusive comparisons – a new dataset and linguistic analysis
Wiegand, Michael [author]; Geulig, Maja [author]; Ruppenhofer, Josef [author]. - Mannheim : Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek, 2021
DNB Subject Category Language
3
Applying the Transformer to Character-level Transduction
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
BASE
4
Telling BERT's Full Story: from Local Attention to Global Aggregation
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
Abstract: We take a deep look into the behaviour of self-attention heads in the transformer architecture. In light of recent work discouraging the use of attention distributions for explaining a model’s behaviour, we show that attention distributions can nevertheless provide insights into the local behaviour of attention heads. This way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles. We use gradient attribution to analyze how the output of an attention head depends on the input tokens, effectively extending the local attention-based analysis to account for the mixing of information throughout the transformer layers. We find that there is a significant mismatch between attention and attribution distributions, caused by the mixing of context inside the model. We quantify this discrepancy and observe that interestingly, there are some patterns that persist across all layers despite the mixing.
URL: https://doi.org/10.3929/ethz-b-000496002
https://hdl.handle.net/20.500.11850/496002
BASE
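The abstract above mentions gradient attribution as a way of measuring how a transformer's output depends on each input token. The following is a minimal, hypothetical sketch of that general idea, not the paper's actual code: it assumes PyTorch and the HuggingFace transformers library, uses bert-base-uncased, and picks an arbitrary scalar target (the [CLS] representation's norm) purely for illustration, whereas the paper attributes the outputs of individual attention heads.

```python
# Minimal sketch of per-token gradient attribution (illustrative only; not the
# paper's implementation). Assumes PyTorch and HuggingFace transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("Attention is not explanation.", return_tensors="pt")

# Look up the input embeddings ourselves so we can take gradients w.r.t. them.
embeddings = model.embeddings.word_embeddings(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])

# Arbitrary scalar target for the backward pass: the norm of the [CLS] vector.
target = outputs.last_hidden_state[0, 0].norm()
target.backward()

# Per-token attribution score: L2 norm of the gradient w.r.t. each input embedding.
scores = embeddings.grad[0].norm(dim=-1)
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), scores):
    print(f"{token:>15s}  {score.item():.4f}")
```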
5
Disambiguatory Signals are Stronger in Word-initial Positions
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
BASE
6
Multi-Adversarial Learning for Cross-Lingual Word Embeddings ...
NAACL 2021; Henderson, James; Merlo, Paola. - : Underline Science Inc., 2021
BASE
7
RelWalk - A Latent Variable Model Approach to Knowledge Graph Embedding.
Bollegala, Danushka; Kawarabayashi, Ken-ichi; Yoshida, Yuichi. - : Association for Computational Linguistics, 2021
BASE
8
Dictionary-based Debiasing of Pre-trained Word Embeddings.
Bollegala, Danushka; Kaneko, Masahiro. - : Association for Computational Linguistics, 2021
BASE
9
Debiasing Pre-trained Contextualised Embeddings.
Kaneko, Masahiro; Bollegala, Danushka. - : Association for Computational Linguistics, 2021
BASE
10
Is supervised syntactic parsing beneficial for language understanding tasks? An empirical investigation
Glavaš, Goran; Vulić, Ivan. - : Association for Computational Linguistics, 2021
BASE
11
Multi-Adversarial Learning for Cross-Lingual Word Embeddings ...
BASE
12
Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings
In: http://infoscience.epfl.ch/record/275419 (2020)
BASE
13
Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings ...
BASE
14
The probability of external causation: an empirical account of crosslinguistic variation in lexical causatives
In: Linguistics. - Berlin [etc.] : Mouton de Gruyter, 56 (2018) 5, pp. 895-938
BLLDB
15
Movement and structure effects on Universal 20 word order frequencies: A quantitative study
In: Glossa: a journal of general linguistics, Vol. 3, No. 1 (2018), 84; ISSN 2397-1835
BASE
16
Word order variation and dependency length minimisation : a cross-linguistic computational approach
Gulordava, Kristina. - : Université de Genève, 2018
BASE
17
CoNLL 2017 Shared Task System Outputs
Zeman, Daniel; Potthast, Martin; Straka, Milan. - : Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics (UFAL), 2017
BASE
18
CLCL (Geneva) DINN Parser : a Neural Network Dependency Parser Ten Years Later
In: Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pp. 228–236 (2017)
BASE
19
Some Recent Results on Cross-Linguistic, Corpus-Based Quantitative Modelling of Word Order and Aspect
In: Formal Models in the Study of Language, pp. 451-464 (2017); ISBN 978-3-319-48831-8
BASE
20
Quantitative computational syntax : some initial results
In: Italian Journal of Computational Linguistics, Vol. 2, No. 1 (2016), pp. 11-29; ISSN 2499-4553
BASE


Hits by source type:
Catalogues: 6, 3, 5, 0, 2, 0, 0
Bibliographies: 14, 0, 1, 0, 0, 0, 0, 0, 2
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 25, 0, 0, 0, 0