
Search in the Catalogues and Directories

Hits 81–100 of 830

81. Wikily Supervised Neural Translation Tailored to Cross-Lingual Tasks (BASE)
82. Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization (BASE)
83. Sorting through the noise: Testing robustness of information processing in pre-trained language models (BASE)
84. Building the Directed Semantic Graph for Coherent Long Text Generation (BASE)
85. Detect and Classify – Joint Span Detection and Classification for Health Outcomes (BASE)
86. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation (BASE)
87. Evaluation of Summarization Systems across Gender, Age, and Race (BASE)
88. A Language Model-based Generative Classifier for Sentence-level Discourse Parsing (BASE)
89. Controllable Neural Dialogue Summarization with Personal Named Entity Planning (BASE)
90. Foreseeing the Benefits of Incidental Supervision (BASE)
91. Graphine: A Dataset for Graph-aware Terminology Definition Generation (BASE)
92. CSDS: A Fine-Grained Chinese Dataset for Customer Service Dialogue Summarization (BASE)
93. Connecting Attributions and QA Model Behavior on Realistic Counterfactuals (BASE)
Abstract: When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the input were to change, the model's prediction might change as well. This paper investigates how well different attribution techniques align with this assumption on realistic counterfactuals in the case of reading comprehension (RC). RC is a particularly challenging test case, as token-level attributions that have been extensively studied in other NLP tasks such as sentiment analysis are less suitable to represent the reasoning that RC models perform. We construct counterfactual sets for three different RC settings, and through heuristics that can connect attribution methods' outputs to high-level model behavior, we can evaluate how useful different attribution methods and even different formats are for understanding counterfactuals. We ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Sentiment Analysis
Anthology: https://aclanthology.org/2021.emnlp-main.447/
DOI: https://dx.doi.org/10.48448/rxrd-6468
Lecture: https://underline.io/lecture/37547-connecting-attributions-and-qa-model-behavior-on-realistic-counterfactuals
94. Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? (BASE)
95. Generation and Extraction Combined Dialogue State Tracking with Hierarchical Ontology Integration (BASE)
96. Error-Sensitive Evaluation for Ordinal Target Variables (BASE)
97. CDLM: Cross-Document Language Modeling (BASE)
98. Data-to-text Generation by Splicing Together Nearest Neighbors (BASE)
99. Natural Language Processing Meets Quantum Physics: A Survey and Categorization (BASE)
100. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? (BASE)


© 2013–2024 Lin|gu|is|tik