1. Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization
2. Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining
3. HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization
4. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization
5. Weakly supervised discourse segmentation for multiparty oral conversations
6. Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization
7. Evaluation of Summarization Systems across Gender, Age, and Race
8. Controllable Neural Dialogue Summarization with Personal Named Entity Planning
9. CSDS: A Fine-Grained Chinese Dataset for Customer Service Dialogue Summarization
10. A Thorough Evaluation of Task-Specific Pretraining for Summarization
11. Effective Sequence-to-Sequence Dialogue State Tracking

Anthology: https://aclanthology.org/2021.emnlp-main.593/

Abstract: Sequence-to-sequence models have been applied to a wide variety of NLP tasks, but how to properly use them for dialogue state tracking has not been systematically investigated. In this paper, we study this problem from the perspectives of pre-training objectives as well as the formats of context representations. We demonstrate that the choice of pre-training objective makes a significant difference to the state tracking quality. In particular, we find that masked span prediction is more effective than auto-regressive language modeling. We also explore using Pegasus, a span prediction-based pre-training objective for text summarization, for the state tracking model. We found that pre-training for the seemingly distant summarization task works surprisingly well for dialogue state tracking. In addition, we found that while the recurrent state context representation also works reasonably well, the model may have a hard time recovering from ...

Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Text Summarization

Talk: https://underline.io/lecture/38022-effective-sequence-to-sequence-dialogue-state-tracking
DOI: https://dx.doi.org/10.48448/nj9n-m011
12. Context or No Context? A preliminary exploration of human-in-the-loop approach for Incremental Temporal Summarization in meetings
13. Exploring Multitask Learning for Low-Resource Abstractive Summarization
14. Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization
15. Narrative Embedding: Re-Contextualization Through Attention
16. TWEETSUMM - A Dialog Summarization Dataset for Customer Service
17. SUBSUME: A Dataset for Subjective Summary Extraction from Wikipedia Documents
18. AUTOSUMM: Automatic Model Creation for Text Summarization
19. A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods