
Search in the Catalogues and Directories

Hits 161–180 of 1,423

161
KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference ...
162
Alpha at SemEval-2021 Tasks 6: Transformer Based Propaganda Classification ...
163
Structural Pre-training for Dialogue Comprehension ...
164
A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples ...
165
Multimodal or Text? Retrieval or BERT? Benchmarking Classifiers for the Shared Task on Hateful Memes ...
166
Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features ...
167
Personalized Transformer for Explainable Recommendation ...
168
Neural-Symbolic Commonsense Reasoner with Relation Predictors ...
169
Joint Detection and Coreference Resolution of Entities and Events with Document-level Context Aggregation ...
170
Bi-Granularity Contrastive Learning for Post-Training in Few-Shot Scene ...
171
Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence ...
172
Semantic Frame Induction using Masked Word Embeddings and Two-Step Clustering ...
173
Self-Attention Networks Can Process Bounded Hierarchical Languages ...
174
DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions ...
175
KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers ...
176
"We will Reduce Taxes" - Identifying Election Pledges with Language Models ...
177
Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization ...
178
Quantifying and Avoiding Unfair Qualification Labour in Crowdsourcing ...
Abstract: Extensive work has argued in favour of paying crowd workers a wage that is at least equivalent to the U.S. federal minimum wage. Meanwhile, research on collecting high-quality annotations suggests using a qualification that requires workers to have previously completed a certain number of tasks. If most requesters who pay fairly require workers to have already completed a large number of tasks, then workers need to complete a substantial amount of poorly paid work before they can earn a fair wage. Through analysis of worker discussions and guidance for researchers, we estimate that workers spend approximately 2.25 months of full-time effort on poorly paid tasks in order to get the qualifications needed for better-paid tasks. We discuss alternatives to this qualification and conduct a study of the correlation between qualifications and work quality on two NLP tasks. We find that it is possible to reduce the burden on workers while still ...
Read paper: https://www.aclanthology.org/2021.acl-short.44
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/p86m-vy17
https://underline.io/lecture/25466-quantifying-and-avoiding-unfair-qualification-labour-in-crowdsourcing
179
Domain-Adaptive Pretraining Methods for Dialogue Understanding ...
180
Semi-Automatic Construction of Text-to-SQL Data for Domain Transfer ...


Results by source: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 1,423
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings