
Search in the Catalogues and Directories

Hits 1 – 20 of 38

1. SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics
2. Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension
3. SHAPE: Shifted Absolute Position Embedding for Transformers
4. Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
5. Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution
6. Exploring Methods for Generating Feedback Comments for Writing Learning
7. Transformer-based Lexically Constrained Headline Generation
8. Transformer-based Lexically Constrained Headline Generation
9. Topicalization in Language Models: A Case Study on Japanese
10. Lower Perplexity is Not Always Human-Like
11. Lower Perplexity is Not Always Human-Like
12. An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution
13. PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents
    Fujii, Ryo; Mita, Masato; Abe, Kaori. arXiv, 2020
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension
15. Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese
16. Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction
17. Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
18. Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness
    Akama, Reina; Yokoi, Sho; Suzuki, Jun. arXiv, 2020
19. Modeling Event Salience in Narratives via Barthes' Cardinal Functions
    Abstract: Events in a narrative differ in salience: some are more important to the story than others. Estimating event salience is useful for tasks such as story generation, and as a tool for text analysis in narratology and folkloristics. To compute event salience without any annotations, we adopt Barthes' definition of event salience and propose several unsupervised methods that require only a pre-trained language model. Evaluating the proposed methods on folktales with event salience annotation, we show that the proposed methods outperform baseline methods and find that fine-tuning a language model on narrative texts is a key factor in improving the proposed methods.
    Keywords: Computer and Information Science; Natural Language Processing; Neural Network
    URL: https://dx.doi.org/10.48448/tp8a-3y38
    https://underline.io/lecture/6264-modeling-event-salience-in-narratives-via-barthes'-cardinal-functions
    (An illustrative code sketch of this salience idea follows the result list.)
20. Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?
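Entry 19's abstract describes scoring event salience using nothing but a pre-trained language model. Below is a minimal sketch of one plausible unsupervised scheme in that spirit, an illustrative assumption rather than the paper's exact method: a sentence counts as salient if deleting it makes the rest of the story harder for a causal LM to predict. The GPT-2 checkpoint, the deletion-based score, and the toy story are all illustrative choices.

# Illustrative sketch only; assumes deletion-based salience scoring with GPT-2,
# which is not confirmed to be the paper's exact formulation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_nll(text):
    # Average per-token negative log-likelihood of the text under the LM.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def salience(sentences, i):
    # Loss increase on the story when sentence i is deleted: the larger
    # the increase, the more "cardinal" the event is taken to be.
    full = " ".join(sentences)
    ablated = " ".join(s for j, s in enumerate(sentences) if j != i)
    return avg_nll(ablated) - avg_nll(full)

story = [  # toy folktale-style example, made up for illustration
    "A poor miller had a beautiful daughter.",
    "He boasted to the king that she could spin straw into gold.",
    "The king locked her in a room full of straw.",
    "It rained that evening.",
]
scores = [(s, salience(story, i)) for i, s in enumerate(story)]
for sent, score in sorted(scores, key=lambda pair: -pair[1]):
    print(f"{score:+.3f}  {sent}")

Under this scheme, plot-critical sentences (the boast, the locked room) should score higher than incidental detail (the rain). The abstract's finding that fine-tuning on narrative text helps would correspond here to swapping "gpt2" for a checkpoint fine-tuned on narratives.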


Hits by source type: Catalogues 5; Bibliographies 3; Linked Open Data catalogues 0; Online resources 0; Open access documents 32