
Search in the Catalogues and Directories

Page: 1 2 3 4 5
Hits 1 – 20 of 89

1
Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics ...
Deutsch, Daniel; Roth, Dan. - : arXiv, 2022
BASE
2
Question-Based Salient Span Selection for More Controllable Text Summarization ...
Deutsch, Daniel; Roth, Dan. - : arXiv, 2021
BASE
3
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies ...
BASE
4
ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations ...
BASE
5
What is Your Article Based On? Inferring Fine-grained Provenance ...
BASE
6
BabyBERTa: Learning More Grammar With Small-Scale Child-Directed Language ...
BASE
7
Zero-shot Label-Aware Event Trigger and Argument Classification ...
BASE
8
Coreference Reasoning in Machine Reading Comprehension ...
BASE
9
Event-Centric Natural Language Processing ...
BASE
10
Zero-shot Event Extraction via Transfer Learning: Challenges and Insights ...
BASE
11
Do We Know What We Don't Know? Studying Unanswerable Questions beyond SQuAD 2.0 ...
Abstract: Understanding when a text snippet does not provide sought-after information is an essential part of natural language understanding. Recent work (SQuAD 2.0, Rajpurkar et al., 2018) has attempted to make some progress in this direction by enriching the SQuAD dataset for the Extractive QA task with unanswerable questions. However, as we show, the performance of a top system trained on SQuAD 2.0 drops considerably in out-of-domain scenarios, limiting its use in practical situations. To study this, we build an out-of-domain corpus, focusing on simple event-based questions, and distinguish between two types of IDK questions: competitive questions, where the context includes an entity of the same type as the expected answer, and simpler, non-competitive questions, where there is no entity of the same type in the context. We find that SQuAD 2.0-based models fail even on the simpler questions. We then analyze the similarities and differences between the IDK phenomenon in Extractive QA and the ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Question-Answering Systems
URL: https://underline.io/lecture/39616-do-we-know-what-we-don't-knowquestion-studying-unanswerable-questions-beyond-squad-2.0
https://dx.doi.org/10.48448/72cd-f989
BASE
12
Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary ...
BASE
13
Constrained Labeled Data Generation for Low-Resource Named Entity Recognition ...
BASE
14
Extending Multilingual BERT to Low-Resource Languages ...
BASE
15
Cross-lingual Entity Alignment with Incidental Supervision ...
Chen, Muhao; Shi, Weijia; Zhou, Ben. - : arXiv, 2020
BASE
16
Do Language Embeddings Capture Scales? ...
BASE
17
TransOMCS: From Linguistic Graphs to Commonsense Knowledge ...
BASE
18
Is Killed More Significant than Fled? A Contextual Model for Salient Event Detection ...
BASE
19
Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary ...
BASE
20
Extending Wikification: Nominal discovery, nominal linking, and the grounding of nouns
Chen, Liang-Wei. - 2020
BASE


Hits by source type:
Catalogues: 5
Bibliographies: 5
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 81