22 | Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation ...
24 | Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification ...
25 | Improving Graph-based Sentence Ordering with Iteratively Predicted Pairwise Orderings ...
28 | Unsupervised Multi-View Post-OCR Error Correction With Language Models ...
29 | AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ...
30 | ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection ...
31 | Multi-granularity Textual Adversarial Attack with Behavior Cloning ...
32 | Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ...
33 | Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
34 | TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning ...
35 | WebSRC: A Dataset for Web-Based Structural Reading Comprehension ...
36 | Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning ...
37 | Semantic Categorization of Social Knowledge for Commonsense Question Answering ...
38 | Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
40 | Learning with Different Amounts of Annotation: From Zero to Many Labels ...

Anthology paper link: https://aclanthology.org/2021.emnlp-main.601/

Abstract: Training NLP systems typically assumes access to annotated data that has a single human label per example. Given imperfect labeling from annotators and the inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the spectrum of language interpretation. We explore new annotation distribution schemes, assigning multiple labels per example for a small subset of training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on the natural language inference and entity typing tasks, even when we simply first train with single-label data and then fine-tune with multi-label examples. Extending a MixUp data augmentation framework, we propose a learning algorithm that can learn from training examples with different amounts of annotation (with zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings ...

Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing

URL: https://underline.io/lecture/37576-learning-with-different-amounts-of-annotation-from-zero-to-many-labels
DOI: https://dx.doi.org/10.48448/ys77-8923
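The MixUp extension described in the abstract above interpolates training examples that carry soft label distributions rather than single hard labels, which is what lets zero-, one-, and many-label examples share one objective. Below is a minimal PyTorch sketch of that core step; it is an illustration under stated assumptions (the model, the batch tensors, the alpha value, and the treatment of zero-label examples are hypothetical choices of this sketch), not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def mixup_loss(model, x_a, y_a, x_b, y_b, alpha=0.4):
        """One MixUp step over soft label targets.

        y_a, y_b have shape (batch, num_classes): a one-hot row for a
        single-label example, or the empirical distribution over annotator
        labels for a multi-label example. For a zero-label example, one
        plausible choice (an assumption here) is the model's own predicted
        distribution. For text, x_a and x_b would typically be embeddings
        or pooled encodings, since raw token ids cannot be interpolated.
        """
        # Interpolation weight drawn from Beta(alpha, alpha), as in standard MixUp.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        x = lam * x_a + (1.0 - lam) * x_b   # mix the inputs
        y = lam * y_a + (1.0 - lam) * y_b   # mix the label distributions
        logits = model(x)
        # Cross-entropy against the mixed soft targets.
        return -(y * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

Because inputs and targets are mixed with the same weight, the per-example soft target simply encodes however many labels happen to be available, so examples with different amounts of annotation can be combined in a single training batch.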