
Search in the Catalogues and Directories

Hits 1 – 5 of 5

1
Learning Argument Structures with Recurrent Neural Network Grammars
In: Proceedings of the Society for Computation in Linguistics (2022)
BASE
2
Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
Abstract: In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like. However, the previous literature has been agnostic about a parsing strategy of the hierarchical models. In this paper, we investigated whether hierarchical structures make LMs more human-like, and if so, which parsing strategy is most cognitively plausible. In order to address this question, we evaluated three LMs against human reading times in Japanese with head-final left-branching structures: Long Short-Term Memory (LSTM) as a sequential model and Recurrent Neural Network Grammars (RNNGs) with top-down and left-corner parsing strategies as hierarchical models. Our computational modeling demonstrated that left-corner RNNGs outperformed top-down RNNGs and LSTM, suggesting that hierarchical and left-corner architectures are more cognitively plausible than top-down or sequential architectures. In addition, the relationships between the cognitive plausibility and (i) perplexity, (ii) ...
Note: Accepted by EMNLP 2021. A sketch of the surprisal-based reading-time evaluation this abstract describes appears after the results list.
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2109.04939
https://dx.doi.org/10.48550/arxiv.2109.04939
BASE
3
Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
BASE
4
Lower Perplexity is Not Always Human-Like
BASE
5
Lower Perplexity is Not Always Human-Like
BASE
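The abstract in hit 2 describes evaluating language models against human reading times. As a rough, self-contained illustration of that kind of surprisal-based evaluation (not the authors' code: the log-probabilities and reading times below are made up, and the actual studies use mixed-effects regressions with baseline predictors rather than a single linear fit):

    # Hypothetical sketch of surprisal-based evaluation (illustrative data only).
    # An LM is scored by how well per-word surprisal, -log2 p(w_i | w_<i),
    # predicts per-word human reading times.
    import math
    import numpy as np

    def surprisals(logprobs):
        """Convert natural-log conditional word probabilities to surprisal in bits."""
        return [-lp / math.log(2) for lp in logprobs]

    # Made-up data: per-word log p(w_i | w_<i) from some LM, and
    # self-paced reading times (ms) for the same words.
    logprobs = [-2.3, -0.7, -4.1, -1.2, -3.0]
    reading_times = [310.0, 270.0, 405.0, 295.0, 362.0]

    s = surprisals(logprobs)
    # Fit RT ~ a * surprisal + b as a crude stand-in for the mixed-effects
    # models used in the papers above.
    slope, intercept = np.polyfit(s, reading_times, 1)
    r = np.corrcoef(s, reading_times)[0, 1]
    print(f"slope = {slope:.1f} ms/bit, r = {r:.2f}")

A model whose surprisal predicts reading times more closely is taken to be more cognitively plausible in this line of work; hits 4 and 5 make the related point that lower perplexity alone does not guarantee such human-likeness.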

Result counts by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 5