
Search in the Catalogues and Directories

Hits 121–140 of 1,029

121. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? ... (BASE)
122. Memory and Knowledge Augmented Language Models for Inferring Salience in Long-Form Stories ... (BASE)
123. A Thorough Evaluation of Task-Specific Pretraining for Summarization ... (BASE)
124. Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach ... (BASE)
125. PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them ... (BASE)
126. Good-Enough Example Extrapolation ... (BASE)
127. To what extent do human explanations of model behavior align with actual model behavior? ... (BASE)
128. Sequence Length is a Domain: Length-based Overfitting in Transformer Models ... (BASE)
129. Enhancing Multiple-choice Machine Reading Comprehension by Punishing Illogical Interpretations ... (BASE)
130. Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders ... (BASE)
131. Effective Sequence-to-Sequence Dialogue State Tracking ... (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.593/
Abstract: Sequence-to-sequence models have been applied to a wide variety of NLP tasks, but how to properly use them for dialogue state tracking has not been systematically investigated. In this paper, we study this problem from the perspectives of pre-training objectives as well as the formats of context representations. We demonstrate that the choice of pre-training objective makes a significant difference to state tracking quality. In particular, we find that masked span prediction is more effective than auto-regressive language modeling. We also explore using Pegasus, a span prediction-based pre-training objective for text summarization, for the state tracking model. We find that pre-training on the seemingly distant summarization task works surprisingly well for dialogue state tracking. In addition, we find that while a recurrent state context representation also works reasonably well, the model may have a hard time recovering from ...
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Text Summarization
URL: https://underline.io/lecture/38022-effective-sequence-to-sequence-dialogue-state-tracking
DOI: https://dx.doi.org/10.48448/nj9n-m011
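The abstract above frames dialogue state tracking as plain sequence-to-sequence generation: the dialogue history is serialized into one input string, and the model decodes the state as text. Below is a minimal sketch of that framing using a T5-style checkpoint from Hugging Face Transformers. The "track state:" prefix, the slot serialization, and the t5-small checkpoint are illustrative assumptions, not the paper's actual setup; the paper's point is that span-prediction pre-training (as in T5 or Pegasus) suits this task better than auto-regressive language modeling.

```python
# Minimal sketch: dialogue state tracking cast as sequence-to-sequence
# generation, in the spirit of the abstract above. The serialization
# format and checkpoint are illustrative assumptions, not the paper's
# exact configuration.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Serialize the full dialogue history into one input string. A
# "recurrent state" variant (discussed in the abstract) would instead
# feed the previously predicted state plus only the latest turn.
turns = [
    ("user", "I need a cheap Italian restaurant in the centre."),
    ("system", "Pizza Hut City Centre is a cheap Italian place."),
    ("user", "Great, book a table for two at 18:00."),
]
source = "track state: " + " ".join(f"[{spk}] {utt}" for spk, utt in turns)

# The target is the dialogue state rendered as text, e.g.
# "restaurant-food=italian; restaurant-pricerange=cheap; ..." --
# a fine-tuned model would be trained to emit strings in this format;
# an off-the-shelf checkpoint will not produce meaningful states.
inputs = tokenizer(source, return_tensors="pt")
state_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(state_ids[0], skip_special_tokens=True))
```

Because the state is just generated text, swapping the pre-training objective (T5's masked span prediction vs. an auto-regressive LM vs. Pegasus) only changes the checkpoint loaded here, which is what makes the comparison in the paper clean.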
132. An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Identification ... (BASE)
133. Solving Aspect Category Sentiment Analysis as a Text Generation Task ... (BASE)
134. Discourse-Driven Integrated Dialogue Development Environment for Open-Domain Dialogue Systems ... (BASE)
135. Context or No Context? A preliminary exploration of human-in-the-loop approach for Incremental Temporal Summarization in meetings ... (BASE)
136. Learning Data Augmentation Schedules for Natural Language Processing ... (BASE)
137. Locke's Holiday: Belief Bias in Machine Reading ... (BASE)
138. Searching for More Efficient Dynamic Programs ... (BASE)
139. Logic-level Evidence Retrieval and Graph-based Verification Network for Table-based Fact Verification ... (BASE)
140. Improving Synonym Recommendation Using Sentence Context ... (BASE)


Hits by source type: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 1,029