
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
Please Mind the Root: Decoding Arborescences for Dependency Parsing
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
2
Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
3
Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
4
Learning a Cost-Effective Annotation Policy for Question Answering
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
5
Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
6
Speakers Fill Lexical Semantic Gaps with Context
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
7
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
8
Intrinsic Probing through Dimension Selection
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
Abstract: Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks. Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it. In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted. To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal. We then probe fastText and BERT for various morphosyntactic attributes across 36 languages. We find that most attributes are reliably encoded by only a few neurons, with fastText concentrating its linguistic structure more than BERT.
URL: https://doi.org/10.3929/ethz-b-000462314
https://hdl.handle.net/20.500.11850/462314
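(A short illustrative sketch of the Gaussian probing framework described in this abstract appears after the results list.)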
9
Control, Generate, Augment: A Scalable Framework for Multi-Attribute Text Generation
In: Findings of the Association for Computational Linguistics: EMNLP 2020 (2020)
10
Textual Data Augmentation for Efficient Active Learning on Tiny Datasets
Sutcliffe, Richard; Samothrakis, Spyridon; Quteineh, Husam. Association for Computational Linguistics, 2020
11
Probing pretrained language models for lexical semantics
Vulić, Ivan; Korhonen, Anna; Litschko, Robert. Association for Computational Linguistics, 2020
12
XCOPA: A multilingual dataset for causal commonsense reasoning
Ponti, Edoardo Maria; Majewska, Olga; Liu, Qianchu. Association for Computational Linguistics, 2020
13
From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
Ravishankar, Vinit; Glavaš, Goran; Lauscher, Anne. Association for Computational Linguistics, 2020
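Note on entry 8: the abstract describes intrinsic probing with a decomposable multivariate Gaussian probe, used to test whether a morphosyntactic attribute is dispersed across many embedding dimensions or focal in a few. The following is a minimal sketch of that idea on synthetic data, not the authors' implementation; the function name, the toy labels, and the dimension counts are illustrative assumptions.

    # Hypothetical sketch: class-conditional Gaussian probe over a greedily
    # grown subset of embedding dimensions. If accuracy saturates after a
    # few dimensions, the attribute is "focal"; if it keeps climbing, it is
    # "dispersed". Synthetic data stands in for fastText/BERT embeddings.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    # Toy "embeddings": 500 words x 20 dims, with a binary attribute
    # (e.g. grammatical gender) encoded mainly in dims 3 and 7.
    n, d = 500, 20
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d))
    X[:, 3] += 2.0 * y
    X[:, 7] -= 1.5 * y

    def gaussian_probe_accuracy(X, y, dims):
        """Accuracy of a class-conditional Gaussian classifier restricted
        to the dimensions in `dims`."""
        Xs = X[:, dims]
        log_post = np.zeros((len(y), 2))
        for c in (0, 1):
            Xc = Xs[y == c]
            mu = Xc.mean(axis=0)
            # Small ridge keeps the covariance invertible for tiny subsets.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(len(dims))
            prior = np.log(len(Xc) / len(y))
            log_post[:, c] = multivariate_normal.logpdf(Xs, mu, cov) + prior
        return (log_post.argmax(axis=1) == y).mean()

    # Greedy forward selection: at each step add the single dimension that
    # most improves probe accuracy.
    selected, remaining = [], list(range(d))
    for _ in range(5):
        best = max(remaining,
                   key=lambda j: gaussian_probe_accuracy(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
        print(f"dims={selected} acc={gaussian_probe_accuracy(X, y, selected):.3f}")

On this toy data the first two selected dimensions (3 and 7) already reach near-ceiling accuracy, mirroring the paper's finding that most attributes are reliably encoded by only a few neurons.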
