
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation (BASE)
2. RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge (BASE)
3. Learning Contextualized Knowledge Structures for Commonsense Reasoning (BASE)
4. AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding (BASE)
5. RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms (BASE)
6. RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models (BASE)
7. ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning (BASE)
8. Discretized Integrated Gradients for Explaining Language Models (BASE)
9. Improving Counterfactual Generation for Fair Hate Speech Detection (BASE)
10. Learning to Generate Task-Specific Adapters from Task Description (BASE)
11. Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources (BASE)
12. Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning (BASE)
13. Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.413/
Abstract: Prior studies on text-to-text generation typically assume that the model can figure out what to attend to in the input and what to include in the output via seq2seq learning, with only the parallel training data and no additional guidance. However, it remains unclear whether current models can preserve important concepts in the source input, as seq2seq learning places no explicit focus on concepts and commonly used evaluation metrics treat concepts as no more important than other tokens. In this paper, we present a systematic analysis that studies whether current seq2seq models, especially pre-trained language models, are good enough at preserving important input concepts, and to what extent explicitly guiding generation with the concepts as lexical constraints is beneficial. We answer the above questions by conducting extensive analytical experiments on four representative text-to-text generation tasks. Based on the ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Text Generation
URL: https://dx.doi.org/10.48448/zxwn-5w24
https://underline.io/lecture/37354-extract,-denoise-and-enforce-evaluating-and-improving-concept-preservation-for-text-to-text-generation
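
The abstract above mentions guiding generation with concepts as lexical constraints. As a purely illustrative, minimal sketch (not the paper's actual method), constrained beam search in the Hugging Face transformers library can force chosen concept words to appear in the output via the force_words_ids argument; the model name, source sentence, and concept list below are assumptions for the example.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical setup: any seq2seq model would do; t5-small is used only as a small example.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

source = "summarize: The new vaccine protects against both known variants of the virus."
concepts = ["vaccine", "variants"]  # concepts we want preserved in the output

input_ids = tokenizer(source, return_tensors="pt").input_ids

# Each concept becomes a phrase constraint (a list of token ids) that the
# constrained beam search must include in the generated sequence.
force_words_ids = [
    tokenizer(c, add_special_tokens=False).input_ids for c in concepts
]

# Constrained generation requires beam search (num_beams > 1).
outputs = model.generate(
    input_ids,
    force_words_ids=force_words_ids,
    num_beams=5,
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This only shows the general idea of lexical-constraint decoding; how the paper extracts, denoises, and enforces concepts is described in the linked publication.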
