1 |
WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation ...
Abstract:
A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach to dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 108,079 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain ... (February 2022 ARR submission version)
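The selection step the abstract describes (dataset cartography) ranks training examples by the model's mean confidence in the gold label across training epochs, and keeps the hardest ones as seeds for generation. A minimal sketch of that idea follows; the function names and toy data are illustrative assumptions, not the authors' code.

```python
import statistics

def cartography_scores(prob_history):
    """Dataset-cartography statistics for one training example:
    mean confidence and variability of the gold-label probability
    recorded at each training epoch."""
    confidence = statistics.mean(prob_history)
    variability = statistics.pstdev(prob_history)
    return confidence, variability

def pick_ambiguous(examples, histories, k):
    """Keep the k examples the model found hardest (lowest mean
    confidence) -- the kind of pool used to seed generation prompts."""
    scored = sorted(zip(examples, histories),
                    key=lambda pair: cartography_scores(pair[1])[0])
    return [ex for ex, _ in scored[:k]]

# Toy demo: three examples with per-epoch gold-label probabilities.
examples = ["easy", "ambiguous", "medium"]
histories = [[0.9, 0.95, 0.97], [0.3, 0.6, 0.4], [0.5, 0.7, 0.8]]
print(pick_ambiguous(examples, histories, 1))  # lowest-confidence example
```

The remaining stages (prompting GPT-3 with similar examples, automatic filtering, human revision) build on the pool this step returns.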

Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2201.05955 https://arxiv.org/abs/2201.05955

BASE

2 |
Annotators with Attitudes: How Annotator Beliefs and Identities Bias Toxic Language Detection ...

BASE

4 |
Specializing Multilingual Language Models: An Empirical Study ...

BASE

5 |
Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? ...

BASE

8 |
Sentence Bottleneck Autoencoders from Transformer Language Models ...

BASE

9 |
All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text ...

BASE

10 |
Measuring Association Between Labels and Free-Text Rationales ...

BASE

11 |
Promoting Graph Awareness in Linearized Graph-to-Text Generation ...

BASE

12 |
Shortformer: Better Language Modeling using Shorter Inputs ...

BASE

13 |
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts ...

BASE

15 |
Challenges in Automated Debiasing for Toxic Language Detection ...

BASE

16 |
NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics ...

BASE

17 |
Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent ...

BASE

18 |
Competency Problems: On Finding and Removing Artifacts in Language Data ...

BASE