
Search in the Catalogues and Directories

Hits 1 – 20 of 38

1. SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics ... (BASE)
2. Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension ... (BASE)
3. SHAPE: Shifted Absolute Position Embedding for Transformers ... (BASE)
4. Incorporating Residual and Normalization Layers into Analysis of Masked Language Models ... (BASE)
5. Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution ... (BASE)
6. Exploring Methods for Generating Feedback Comments for Writing Learning ... (BASE)
7. Transformer-based Lexically Constrained Headline Generation ... (BASE)
8. Transformer-based Lexically Constrained Headline Generation ... (BASE)
9. Topicalization in Language Models: A Case Study on Japanese ... (BASE)
10. Lower Perplexity is Not Always Human-Like ... (BASE)
11. Lower Perplexity is Not Always Human-Like ... (BASE)
Abstract: In computational psycholinguistics, various language models have been evaluated against human reading behavior (e.g., eye movement) to build human-like computational models. However, most previous efforts have focused almost exclusively on English, despite the recent trend towards linguistic universals within the general community. To fill this gap, this paper investigates whether established results in computational psycholinguistics generalize across languages. Specifically, we re-examine an established generalization -- the lower perplexity a language model has, the more human-like the language model is -- in Japanese, a language with typologically different structures from English. Our experiments demonstrate that this established generalization exhibits a surprising lack of universality; namely, lower perplexity is not always human-like. Moreover, this discrepancy between English and Japanese is further explored from the perspective of (non-)uniform information density. Overall, our results ... (Accepted by ACL 2021)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2106.01229
https://arxiv.org/abs/2106.01229
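The generalization this abstract re-examines can be made concrete: perplexity is the exponential of the average per-token negative log-probability a language model assigns to a text. A minimal sketch of that computation, with hypothetical log-probability values purely for illustration (not data from the paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative natural-log probability
    that a model assigned to each token of a held-out text."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# Hypothetical per-token log-probs from two models on the same sentence.
model_a = [-2.0, -1.5, -3.0, -0.5]
model_b = [-4.0, -3.5, -2.5, -3.0]

# Lower perplexity means the model predicts the text better on average;
# the paper's point is that this need not make it more human-like.
assert perplexity(model_a) < perplexity(model_b)
```

A sanity check on the definition: a model assigning probability 1/2 to every token has perplexity exactly 2, regardless of text length.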
12. An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution ... (BASE)
13. PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents ... Fujii, Ryo; Mita, Masato; Abe, Kaori. arXiv, 2020. (BASE)
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ... (BASE)
15. Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese ... (BASE)
16. Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction ... (BASE)
17. Attention is Not Only a Weight: Analyzing Transformers with Vector Norms ... (BASE)
18. Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness ... Akama, Reina; Yokoi, Sho; Suzuki, Jun. arXiv, 2020. (BASE)
19. Modeling Event Salience in Narratives via Barthes' Cardinal Functions ... (BASE)
20. Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? ... (BASE)


Result sources: Catalogues (5), Bibliographies (3), Linked Open Data catalogues (0), Online resources (0), Open access documents (32)
© 2013 - 2024 Lin|gu|is|tik