
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation ... (BASE)
2. RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge ... (BASE)
3. Learning Contextualized Knowledge Structures for Commonsense Reasoning ... (BASE)
4. AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding ... (BASE)
5. RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms ... (BASE)
6. RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models ... (BASE)
7. ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning ... (BASE)
8. Discretized Integrated Gradients for Explaining Language Models ... (BASE)
9. Improving Counterfactual Generation for Fair Hate Speech Detection ... (BASE)
10. Learning to Generate Task-Specific Adapters from Task Description ... (BASE)
Abstract: Pre-trained text-to-text transformers such as BART have achieved impressive performance across a range of NLP tasks. A recent study further shows that they can learn to generalize to novel tasks by including task descriptions as part of the source sequence and training the model with (source, target) examples. At test time, these fine-tuned models can make inferences on new tasks using the new task descriptions as part of the input. However, this approach has a potential limitation: the model learns to solve individual (source, target) examples (i.e., at the instance level) instead of learning to solve tasks by taking all examples within a task as a whole (i.e., at the task level). To this end, we introduce Hypter, a framework that improves a text-to-text transformer's generalization ability to unseen tasks by training a hypernetwork to generate task-specific, light-weight adapters from task descriptions. Experiments on the ZEST dataset and a ...
Read paper: https://www.aclanthology.org/2021.acl-short.82
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/26044-learning-to-generate-task-specific-adapters-from-task-description
DOI: https://dx.doi.org/10.48448/08rn-0j25
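The abstract above describes a hypernetwork that maps a task description to the weights of a lightweight adapter inserted into a frozen text-to-text transformer. As a rough illustration of that idea only (a minimal PyTorch sketch, not the paper's implementation; the bottleneck-adapter form, all dimensions, and all names here are assumptions):

# Hypothetical sketch of the Hypter idea: a hypernetwork generates the
# weights of a small bottleneck adapter from a task-description embedding.
import torch
import torch.nn as nn

class AdapterHypernetwork(nn.Module):
    def __init__(self, desc_dim=768, hidden_dim=768, bottleneck=16):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.bottleneck = bottleneck
        # Predict the down- and up-projection matrices of one adapter layer.
        self.generator = nn.Linear(desc_dim, 2 * hidden_dim * bottleneck)

    def forward(self, desc_emb):
        # desc_emb: (desc_dim,) pooled encoding of the task description.
        flat = self.generator(desc_emb)
        down, up = flat.split(self.hidden_dim * self.bottleneck)
        return (down.view(self.hidden_dim, self.bottleneck),
                up.view(self.bottleneck, self.hidden_dim))

def apply_adapter(h, down, up):
    # Residual bottleneck adapter: h + relu(h @ down) @ up
    return h + torch.relu(h @ down) @ up

# Usage: generate the adapter once per task, then reuse it for every
# example of that task (stand-in tensors below, for illustration only).
hyper = AdapterHypernetwork()
desc_emb = torch.randn(768)      # encoded task description
down, up = hyper(desc_emb)
hidden = torch.randn(4, 768)     # transformer hidden states
out = apply_adapter(hidden, down, up)

Training such a hypernetwork across many tasks is what moves learning to the task level: at test time, a new task description alone yields a new adapter, with no gradient updates on the new task's examples.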
11. Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources ... (BASE)
12. Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning ... (BASE)
13. Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation ... (BASE)
Hits by source type: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 13