
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
Please Mind the Root: Decoding Arborescences for Dependency Parsing
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
2
Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
3
Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
4
Learning a Cost-Effective Annotation Policy for Question Answering
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
5
Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
6
Speakers Fill Lexical Semantic Gaps with Context
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
7
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
8
Intrinsic Probing through Dimension Selection
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
9
Control, Generate, Augment: A Scalable Framework for Multi-Attribute Text Generation
In: Findings of the Association for Computational Linguistics: EMNLP 2020 (2020)
Abstract: We introduce CGA, a conditional VAE architecture, to control, generate, and augment text. CGA is able to generate natural English sentences controlling multiple semantic and syntactic attributes by combining adversarial learning with a context-aware loss and a cyclical word dropout routine. We demonstrate the value of the individual model components in an ablation study. The scalability of our approach is ensured through a single discriminator, independently of the number of attributes. We show high quality, diversity, and attribute control in the generated sentences through a series of automatic and human assessments. As the main application of our work, we test the potential of this new NLG model in a data augmentation scenario. In a downstream NLP task, the sentences generated by our CGA model show significant improvements over a strong baseline, and a classification performance often comparable to adding the same amount of additional real data.
URL: https://hdl.handle.net/20.500.11850/464810
https://doi.org/10.3929/ethz-b-000464810
BASE
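(An illustrative code sketch of the single-discriminator, multi-attribute conditional VAE idea outlined in this abstract follows the hit list below.)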
10
Textual Data Augmentation for Efficient Active Learning on Tiny Datasets
Sutcliffe, Richard; Samothrakis, Spyridon; Quteineh, Husam. - : Association for Computational Linguistics, 2020
BASE
11
Probing pretrained language models for lexical semantics
Vulić, Ivan; Korhonen, Anna; Litschko, Robert. - : Association for Computational Linguistics, 2020
BASE
12
XCOPA: A multilingual dataset for causal commonsense reasoning
Ponti, Edoardo Maria; Majewska, Olga; Liu, Qianchu. - : Association for Computational Linguistics, 2020
BASE
13
From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
Ravishankar, Vinit; Glavaš, Goran; Lauscher, Anne. - : Association for Computational Linguistics, 2020
BASE
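Note on hit 9: the CGA abstract above describes the architecture only at a high level. As a rough illustration of one idea it mentions, a single discriminator whose size does not grow with the number of controlled attributes, the following PyTorch sketch concatenates per-attribute embeddings to the latent code and attaches one adversarial head that predicts all attributes at once. Layer names, sizes, and the overall layout are hypothetical; the paper's context-aware loss and cyclical word dropout routine are not reproduced here.

# Minimal sketch of a multi-attribute conditional VAE in the spirit of CGA.
# Hypothetical layer names and sizes; not the authors' implementation.
import torch
import torch.nn as nn


class MultiAttributeCVAE(nn.Module):
    def __init__(self, vocab_size, n_attrs, attr_cardinality, emb=128, hid=256, z=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z)
        self.to_logvar = nn.Linear(hid, z)
        # One embedding table per controlled attribute (e.g. tense, sentiment).
        self.attr_embs = nn.ModuleList(
            [nn.Embedding(attr_cardinality, emb) for _ in range(n_attrs)]
        )
        self.dec_init = nn.Linear(z + n_attrs * emb, hid)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)
        # A single discriminator predicts every attribute from the latent code,
        # so adding attributes does not add discriminators.
        self.discriminator = nn.Linear(z, n_attrs * attr_cardinality)

    def forward(self, tokens, attrs):
        # tokens: (B, T) token ids; attrs: (B, n_attrs) attribute ids.
        x = self.embed(tokens)
        _, h = self.encoder(x)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        zlat = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        cond = torch.cat([tab(attrs[:, i]) for i, tab in enumerate(self.attr_embs)], dim=-1)
        h0 = torch.tanh(self.dec_init(torch.cat([zlat, cond], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(x, h0)           # teacher forcing on the input
        logits = self.out(dec_out)                 # reconstruction logits (B, T, vocab)
        attr_logits = self.discriminator(zlat)     # adversarial attribute head
        return logits, mu, logvar, attr_logits

Training would typically combine a reconstruction loss and a KL term with an adversarial objective on attr_logits (reshaped to (B, n_attrs, attr_cardinality)); that shared head is what keeps the setup independent of how many attributes are controlled.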

Result sources: Catalogues 0 | Bibliographies 0 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 13