
Search in the Catalogues and Directories

Hits 21 – 40 of 812

21. Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation
22. Characterizing Test Anxiety on Social Media
23. Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification
24. Improving Graph-based Sentence Ordering with Iteratively Predicted Pairwise Orderings
25. Contrastive Code Representation Learning
26. Machine Translation Decoding beyond Beam Search
27. Unsupervised Multi-View Post-OCR Error Correction With Language Models
28. AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions
29. ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection
30. Multi-granularity Textual Adversarial Attack with Behavior Cloning
31. Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning
32. Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach
33. WebSRC: A Dataset for Web-Based Structural Reading Comprehension
34. Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning
35. Semantic Categorization of Social Knowledge for Commonsense Question Answering
36. Pre-train or Annotate? Domain Adaptation with a Constrained Budget
37. Corpus-based Open-Domain Event Type Induction
38. Learning with Different Amounts of Annotation: From Zero to Many Labels
    Abstract: Training NLP systems typically assumes access to annotated data with a single human label per example. Given imperfect labeling from annotators and the inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the spectrum of language interpretation. We explore new annotation distribution schemes, assigning multiple labels per example for a small subset of training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on a natural language inference task and an entity typing task, even when we simply first train with single-label data and then fine-tune with multi-label examples. Extending a MixUp data augmentation framework, we propose a learning algorithm that can learn from training examples with different amounts of annotation (zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings ...
    Anthology: https://aclanthology.org/2021.emnlp-main.601/
    Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://underline.io/lecture/37576-learning-with-different-amounts-of-annotation-from-zero-to-many-labels
    DOI: https://dx.doi.org/10.48448/ys77-8923
    Source: BASE
39. Extracting Event Temporal Relations via Hyperbolic Geometry
40. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 812
© 2013 - 2024 Lin|gu|is|tik