
Search in the Catalogues and Directories

Page: 1 2 3 4 5 6 7...72
Hits 41 – 60 of 1,423

41. 15D: Summarization #2
42. Explanations for CommonsenseQA: New Dataset and Models
43. Societal Biases in Language Generation: Progress and Challenges
44. UserAdapter: Few-Shot User Learning in Sentiment Analysis
45. QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining
46. 9C: Question Answering #2
47. GEM: Natural Language Generation, Evaluation, and Metrics - Part 2
48. Matching Distributions between Model and Data: Cross-domain Knowledge Distillation for Unsupervised Domain Adaptation
Read paper: https://www.aclanthology.org/2021.acl-long.421
Abstract: Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a source domain to an unlabeled target domain. Existing methods typically learn to adapt the target model by exploiting the source data and sharing the network architecture across domains. However, this pipeline puts the source data at risk and is inflexible when deploying the target model. This paper tackles a novel setting in which only a trained source model is available, and different network architectures can be adapted for the target domain depending on the deployment environment. We propose a generic framework named Cross-domain Knowledge Distillation (CdKD) that requires no source data. CdKD matches the joint distributions between a trained source model and a set of target data while distilling knowledge from the source model to the target domain. As a type of important knowledge in the source domain, for the first time, the gradient information is ...
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/z0jg-5x37
https://underline.io/lecture/25847-matching-distributions-between-model-and-data-cross-domain-knowledge-distillation-for-unsupervised-domain-adaptation
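The abstract above describes source-free distillation: a student model for the target domain learns from a frozen source model's predictions on unlabeled target data alone. As a rough illustration of that general idea only (not the paper's CdKD objective, which matches joint distributions and exploits gradient information), here is a minimal numpy sketch in which a linear student is fitted to a frozen linear teacher's soft labels; all models and names are hypothetical toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Frozen "source" model: a fixed linear classifier standing in for a trained teacher.
W_teacher = rng.normal(size=(5, 3))

# Unlabeled target-domain inputs; note that no source-domain data is touched.
X_target = rng.normal(size=(200, 5))

# Student model for the target domain (same shape here for simplicity,
# though the source-free setting allows a different architecture).
W_student = np.zeros((5, 3))

lr = 0.5
p_teacher = softmax(X_target @ W_teacher)  # teacher soft labels, computed once
for step in range(300):
    p_student = softmax(X_target @ W_student)
    # Gradient of the mean cross-entropy H(p_teacher, p_student) w.r.t. W_student.
    grad = X_target.T @ (p_student - p_teacher) / len(X_target)
    W_student -= lr * grad

# Average KL divergence between teacher and student predictions after distillation.
kl = np.mean(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                 - np.log(softmax(X_target @ W_student) + 1e-12)),
                    axis=1))
```

Minimizing the cross-entropy against the teacher's soft labels is equivalent to minimizing the KL divergence from the teacher, so `kl` shrinks toward zero as the student absorbs the teacher's decision behavior on the target inputs.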
49. Measuring and Increasing Context Usage in Context-Aware Machine Translation
50. UMIC: An Unreferenced Metric for Image Captioning via Contrastive Learning
51. Uncertainty and Surprisal Jointly Deliver the Punchline: Exploiting Incongruity-Based Features for Humor Recognition
52. GEM: Natural Language Generation, Evaluation, and Metrics - Part 1
53. CoDesc: A Large Code–Description Parallel Dataset
54. Meta Learning and Its Applications to Natural Language Processing
55. CausaLM: Causal Model Explanation Through Counterfactual Language Models
56. Beyond Metadata: What Paper Authors Say About Corpora They Use
57. Learning Latent Structures for Cross Action Phrase Relations in Wet Lab Protocols
58. A Knowledge-Guided Framework for Frame Identification
59. One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers
60. Exploiting Auxiliary Data for Offensive Language Detection with Bidirectional Transformers

© 2013 - 2024 Lin|gu|is|tik