
Search in the Catalogues and Directories

Hits 1 – 20 of 46

1. Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization ...
2. Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining ...
3. HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization ...
4. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
5. Weakly supervised discourse segmentation for multiparty oral conversations ...
6. Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization ...
7. Evaluation of Summarization Systems across Gender, Age, and Race ...
8. Controllable Neural Dialogue Summarization with Personal Named Entity Planning ...
9. CSDS: A Fine-Grained Chinese Dataset for Customer Service Dialogue Summarization ...
10. A Thorough Evaluation of Task-Specific Pretraining for Summarization ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.12/
Abstract: Task-agnostic pretraining objectives like masked language models or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed by task-specific pretraining objectives such as predicting extracted gap sentences for summarization (Zhang et al., 2020). We compare three summarization-specific pretraining objectives with task-agnostic corrupted span prediction pretraining in a controlled study. We also extend our study to a low-resource and zero-shot setup to understand how many training examples are needed before the task-specific pretraining can be ablated without quality loss. Our results show that task-agnostic pretraining is sufficient for most cases, which hopefully reduces the need for costly task-specific pretraining. We also report new state-of-the-art numbers for two summarization tasks using a T5 model with 11 billion parameters and an optimal beam search length ...
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Text Summarization
URL: https://dx.doi.org/10.48448/rxsd-pa23
https://underline.io/lecture/38078-a-thorough-evaluation-of-task-specific-pretraining-for-summarization
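The abstract above contrasts task-agnostic corrupted span prediction (as in T5) with task-specific gap-sentence prediction (as in PEGASUS). The sketch below illustrates how training pairs for these two objective families can be constructed; the function names, masking rates, and the length-based sentence-importance heuristic are illustrative assumptions, not the paper's implementation (PEGASUS selects gap sentences by ROUGE against the rest of the document).

```python
# Illustrative sketch (not the paper's code) of the two pretraining objectives
# contrasted in the abstract: corrupted span prediction (T5-style) and
# gap-sentence prediction (PEGASUS-style).
import random

def corrupted_span_prediction(tokens, mask_rate=0.15, mean_span_len=3):
    """Replace random token spans with sentinel tokens; the target restores the spans."""
    inputs, targets = [], []
    i, sentinel = 0, 0
    budget = max(1, int(len(tokens) * mask_rate))  # roughly mask_rate of tokens get masked
    while i < len(tokens):
        if budget > 0 and random.random() < mask_rate:
            span = tokens[i:i + mean_span_len]
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}> " + " ".join(span))
            sentinel += 1
            budget -= len(span)
            i += len(span)
        else:
            inputs.append(tokens[i])
            i += 1
    return " ".join(inputs), " ".join(targets)

def gap_sentence_prediction(sentences, gap_rate=0.3):
    """Mask whole 'important' sentences; the target is their concatenation.

    Importance is approximated here by sentence length; PEGASUS instead scores
    sentences by ROUGE against the remaining document.
    """
    n_gaps = max(1, int(len(sentences) * gap_rate))
    gaps = set(sorted(range(len(sentences)), key=lambda j: -len(sentences[j]))[:n_gaps])
    inputs = [("<mask_sent>" if j in gaps else s) for j, s in enumerate(sentences)]
    targets = [sentences[j] for j in sorted(gaps)]
    return " ".join(inputs), " ".join(targets)

if __name__ == "__main__":
    doc = "the model is pretrained on large corpora . it is then finetuned on summarization .".split()
    print(corrupted_span_prediction(doc))
    sents = ["The model is pretrained on large corpora.", "It is then finetuned.", "Results improve."]
    print(gap_sentence_prediction(sents))
```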
11. Effective Sequence-to-Sequence Dialogue State Tracking ...
12. Context or No Context? A preliminary exploration of human-in-the-loop approach for Incremental Temporal Summarization in meetings ...
13. Exploring Multitask Learning for Low-Resource Abstractive Summarization ...
14. Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization ...
15. Narrative Embedding: Re-Contextualization Through Attention ...
16. TWEETSUMM - A Dialog Summarization Dataset for Customer Service ...
17. SUBSUME: A Dataset for Subjective Summary Extraction from Wikipedia Documents ...
18. AUTOSUMM: Automatic Model Creation for Text Summarization ...
19. A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods ...
20. Retrieval Augmented Code Generation and Summarization ...

