2. Frustratingly Simple but Surprisingly Strong: Using Language-Independent Features for Zero-shot Cross-lingual Semantic Parsing ...

3. Please Mind the Root: Decoding Arborescences for Dependency Parsing
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

4. Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

5. Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

6. Learning a Cost-Effective Annotation Policy for Question Answering
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

7. Pareto Probing: Trading Off Accuracy for Complexity
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

8. Speakers Fill Lexical Semantic Gaps with Context
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

9. Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
   In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

10. Intrinsic Probing through Dimension Selection
    In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

11. Textual Data Augmentation for Efficient Active Learning on Tiny Datasets
    Abstract: In this paper we propose a novel data augmentation approach in which guided outputs of a language generation model, e.g. GPT-2, when labeled, can improve the performance of text classifiers through an active learning process. We transform the data generation task into an optimization problem that maximizes the usefulness of the generated output, using Monte Carlo Tree Search (MCTS) as the optimization strategy and incorporating entropy as one of the optimization criteria. We test our approach against a Non-Guided Data Generation (NGDG) process that does not optimize for a reward function. Starting with a small set of data, our results show an increased performance with MCTS of 26% on the TREC-6 Questions dataset and 10% on the Stanford Sentiment Treebank SST-2 dataset. Compared with NGDG, we achieve increases of 3% and 5% on TREC-6 and SST-2, respectively.
    URL: https://www.aclweb.org/anthology/2020.emnlp-main.600 http://repository.essex.ac.uk/29084/ http://repository.essex.ac.uk/29084/1/2020.emnlp-main.600.pdf
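
The entropy criterion mentioned in the abstract above is a standard uncertainty-sampling idea: generated candidates on which the current classifier is least confident are the most useful to label. The following is a minimal sketch of that scoring step only (it does not implement the paper's MCTS search); all names (`entropy`, `select_most_uncertain`) and the toy probabilities are hypothetical, not from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(candidates, classifier_probs, k=2):
    """Rank generated candidates by the classifier's predictive entropy
    and return the k most uncertain ones for human labeling."""
    ranked = sorted(candidates,
                    key=lambda c: entropy(classifier_probs[c]),
                    reverse=True)
    return ranked[:k]

# Toy example: three generated sentences with made-up class probabilities.
probs = {
    "sent_a": [0.95, 0.05],  # classifier confident -> low entropy
    "sent_b": [0.50, 0.50],  # maximally uncertain -> highest entropy
    "sent_c": [0.70, 0.30],
}
print(select_most_uncertain(list(probs), probs, k=2))  # ['sent_b', 'sent_c']
```

In the paper this kind of uncertainty score would serve as (part of) the reward guiding generation, rather than a post-hoc filter as sketched here.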

12. LitCrit: Exploring Intentions as a Basis for Automated Feedback on Related Work

13. Unsupervised Stance Detection for Arguments from Consequences

15. XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning

16. From Zero to Hero: On the Limitations of Zero-shot Language Transfer with Multilingual Transformers

17. Inference in an Approach to Discourse Anaphora
    In: North East Linguistics Society (2020)

20. A Survey of Cross-lingual Features for Zero-shot Cross-lingual Semantic Parsing ...