
Search in the Catalogues and Directories

Hits 1.201 – 1.220 of 1.255

1201
Paths to Relation Extraction through Semantic Structure ...
BASE
1202
Rule Augmented Unsupervised Constituency Parsing ...
BASE
1203
Transition-based Bubble Parsing: Improvements on Coordination Structure Prediction ...
BASE
1204
Dodrio: Exploring Transformer Models with Interactive Visualization ...
BASE
1205
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages ...
BASE
1206
What if This Modified That? Syntactic Interventions with Counterfactual Embeddings ...
BASE
1207
Annotations Matter: Leveraging Multi-task Learning to Parse UD and SUD ...
BASE
1208
The Limitations of Limited Context for Constituency Parsing ...
BASE
1209
Effective Batching for Recurrent Neural Network Grammars ...
BASE
1210
Factorising Meaning and Form for Intent-Preserving Paraphrasing ...
BASE
1211
Infusing Finetuning with Semantic Dependencies ...
BASE
1212
7D: Syntax: Tagging, Chunking, and Parsing #1 ...
BASE
1213
OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12 More Genres ...
BASE
1214
Topicalization in Language Models: A Case Study on Japanese ...
BASE
1215
An In-depth Study on Internal Structure of Chinese Words ...
BASE
1216
To Point or Not to Point: Understanding How Abstractive Summarizers Paraphrase Text ...
Read paper: https://www.aclanthology.org/2021.findings-acl.298
Abstract: Abstractive neural summarization models have seen great improvements in recent years, as shown by ROUGE scores of the generated summaries. But despite these improved metrics, there is limited understanding of the strategies different models employ, and how those strategies relate to their understanding of language. To understand this better, we run several experiments to characterize how one popular abstractive model, the pointer-generator model of See et al. (2017), uses its explicit copy/generation switch to control its level of abstraction (generation) vs. extraction (copying). On an extractive-biased dataset, the model utilizes syntactic boundaries to truncate sentences that are otherwise often copied verbatim. When we modify the copy/generation switch and force the model to generate, only simple paraphrasing abilities are revealed alongside factual inaccuracies and hallucinations. On an abstractive-biased dataset, the model copies ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/vaxn-gy13
https://underline.io/lecture/26389-to-point-or-not-to-point-understanding-how-abstractive-summarizers-paraphrase-text
BASE
1217
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information ...
BASE
1218
When Do You Need Billions of Words of Pretraining Data? ...
BASE
1219
Bridge-Based Active Domain Adaptation for Aspect Term Extraction ...
BASE
1220
Recursive Tree-Structured Self-Attention for Answer Sentence Selection ...
BASE


Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 1.255