Search in the Catalogues and Directories

Hits 1 – 20 of 52

1
Neural Token Segmentation for High Token-Internal Complexity ...
Brusilovsky, Idan; Tsarfaty, Reut. - : arXiv, 2022
BASE
2
Morphological Reinflection with Multiple Arguments: An Extended Annotation Schema and a Georgian Case Study ...
BASE
3
Exploiting emojis for abusive language detection
Wiegand, Michael [author]; Ruppenhofer, Josef [author]; Merlo, Paola [editor]. - Mannheim : Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek, 2021
DNB Subject Category Language
4
Implicitly abusive comparisons – a new dataset and linguistic analysis
Wiegand, Michael [author]; Geulig, Maja [author]; Ruppenhofer, Josef [author]. - Mannheim : Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek, 2021
DNB Subject Category Language
5
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
6
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
7
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
8
Minimal Supervision for Morphological Inflection ...
Goldman, Omer; Tsarfaty, Reut. - : arXiv, 2021
BASE
9
Well-Defined Morphology is Sentence-Level Morphology ...
BASE
10
Applying the Transformer to Character-level Transduction
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
Abstract: The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks. Yet for character-level transduction tasks, e.g. morphological inflection generation and historical text normalization, there are few works that outperform recurrent models using the transformer. In an empirical study, we uncover that, in contrast to recurrent sequence-to-sequence models, the batch size plays a crucial role in the performance of the transformer on character-level tasks, and we show that with a large enough batch size, the transformer does indeed outperform recurrent models. We also introduce a simple technique to handle feature-guided character-level transduction that further improves performance. With these insights, we achieve state-of-the-art performance on morphological inflection and historical text normalization. We also show that the transformer outperforms a strong baseline on two other character-level transduction tasks: grapheme-to-phoneme conversion and transliteration.
URL: https://hdl.handle.net/20.500.11850/518998
https://doi.org/10.3929/ethz-b-000518998
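
The abstract above frames morphological inflection as feature-guided character-level transduction. As a minimal sketch of how such an example is commonly encoded for a sequence-to-sequence model (assuming the widespread convention of prepending morphological feature tags as special tokens to the lemma's character sequence; the helper below is illustrative and is not the paper's own code or its specific technique):

# Sketch: encoding one inflection example for a character-level
# sequence-to-sequence model; the feature tags guide the transduction.
def make_example(lemma, features, target=None):
    # Prepend each feature as a special token, then split the lemma into
    # characters; the target inflected form is likewise split into
    # characters (None at inference time).
    source = ["<%s>" % f for f in features] + list(lemma)
    return source, (list(target) if target is not None else None)

src, tgt = make_example("walk", ["V", "PST"], "walked")
print(src)  # ['<V>', '<PST>', 'w', 'a', 'l', 'k']
print(tgt)  # ['w', 'a', 'l', 'k', 'e', 'd']

Both source and target are sequences of characters (plus tag tokens), which is what makes this a character-level rather than word-level task in the sense contrasted in the abstract.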
BASE
11
Telling BERT's Full Story: from Local Attention to Global Aggregation
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
BASE
12
Disambiguatory Signals are Stronger in Word-initial Positions
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)
BASE
13
Minimal Supervision for Morphological Inflection ...
BASE
14
Asking It All: Generating Contextualized Questions for any Semantic Role ...
BASE
15
The Possible, the Plausible, and the Desirable: Event-Based Modality Detection for Language Processing ...
BASE
16
The Possible, the Plausible, and the Desirable: Event-Based Modality Detection for Language Processing ...
BASE
17
Formae reformandae: for a reorganisation of verb form annotation in Universal Dependencies illustrated by the specific case of Latin
Cecchini, Flavio Massimiliano (orcid:0000-0001-9029-1822). - Sofia, Bulgaria : Association for Computational Linguistics, 2021
BASE
18
RelWalk - A Latent Variable Model Approach to Knowledge Graph Embedding.
Bollegala, Danushka; Kawarabayashi, Ken-ichi; Yoshida, Yuichi. - : Association for Computational Linguistics, 2021
BASE
19
Dictionary-based Debiasing of Pre-trained Word Embeddings.
Bollegala, Danushka; Kaneko, Masahiro. - : Association for Computational Linguistics, 2021
BASE
20
Debiasing Pre-trained Contextualised Embeddings.
Kaneko, Masahiro; Bollegala, Danushka. - : Association for Computational Linguistics, 2021
BASE
