
Search in the Catalogues and Directories

Hits 1 – 20 of 38

1
SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics ...
Read paper: https://www.aclanthology.org/2021.findings-acl.10
Abstract: Recently, deep neural networks (DNNs) have achieved great success in semantically challenging NLP tasks, yet it remains unclear whether DNN models can capture compositional meanings, i.e., those aspects of meaning that have long been studied in formal semantics. To investigate this issue, we propose a Systematic Generalization testbed based on Natural language Semantics (SyGNS), whose challenge is to map natural language sentences to multiple forms of scoped meaning representations designed to account for various semantic phenomena. Using SyGNS, we test whether neural networks can systematically parse sentences involving novel combinations of logical expressions such as quantifiers and negation. Experiments show that Transformer and GRU models can generalize to unseen combinations of quantifiers, negations, and modifiers that are similar in form to the training instances, but not to the others. We also find that the generalization ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/26101-sygns-a-systematic-generalization-testbed-based-on-natural-language-semantics
https://dx.doi.org/10.48448/cgwb-tv42
BASE
2
Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension ...
BASE
3
SHAPE: Shifted Absolute Position Embedding for Transformers ...
BASE
4
Incorporating Residual and Normalization Layers into Analysis of Masked Language Models ...
BASE
5
Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution ...
BASE
6
Exploring Methods for Generating Feedback Comments for Writing Learning ...
BASE
7
Transformer-based Lexically Constrained Headline Generation ...
BASE
8
Transformer-based Lexically Constrained Headline Generation ...
BASE
9
Topicalization in Language Models: A Case Study on Japanese ...
BASE
10
Lower Perplexity is Not Always Human-Like ...
BASE
11
Lower Perplexity is Not Always Human-Like ...
BASE
12
An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution ...
BASE
13
PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents ...
Fujii, Ryo; Mita, Masato; Abe, Kaori. - : arXiv, 2020
BASE
14
Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ...
BASE
15
Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese ...
BASE
16
Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction ...
BASE
17
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms ...
BASE
18
Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness ...
Akama, Reina; Yokoi, Sho; Suzuki, Jun. - : arXiv, 2020
BASE
19
Modeling Event Salience in Narratives via Barthes' Cardinal Functions ...
BASE
20
Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? ...
BASE


© 2013–2024 Lin|gu|is|tik