1 | A Deep Fusion Matching Network Semantic Reasoning Model
In: Applied Sciences; Volume 12; Issue 7; Pages: 3416 (2022)
BASE

2 | Meanings Expressed by Primary Schoolchildren When Solving a Partitioning Task
In: Mathematics; Volume 10; Issue 8; Pages: 1339 (2022)
BASE

3 | Mockery and Provocation for Fun: Lexical and Semantic Representation in the Russian Language ...
BASE

4 | Formalization of AMR Inference via Hybrid Logic Tableaux ...
BASE

6 | Semantisch-konzeptuelle Vernetzungen im bilingualen mentalen Lexikon : eine psycholinguistische Studie mit deutsch-türkischsprachigen Jugendlichen [Semantic-conceptual networks in the bilingual mental lexicon: a psycholinguistic study with German-Turkish-speaking adolescents]
BLLDB
UB Frankfurt Linguistik

7 | Contextualization of Web contents through semantic enrichment from linked open data
In: https://tel.archives-ouvertes.fr/tel-03561788 ; Databases [cs.DB]. Normandie Université, 2021. English. ⟨NNT : 2021NORMC243⟩ (2021)
BASE

8 | Research compendium for Montero-Melis et al. (2021) "No evidence for embodiment: The motor system is not needed to keep action words in working memory" (Cortex) ...
BASE

9 | Graph-to-Graph Translations To Augment Abstract Meaning Representation Tense And Aspect ...
BASE

12 | APiCS-Ligt: Towards Semantic Enrichment of Interlinear Glossed Text ...
Ionov, Maxim. - : Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021
BASE

13 | Essential Features in a Theory of Context for Enabling Artificial General Intelligence
In: Applied Sciences; Volume 11; Issue 24; Pages: 11991 (2021)
BASE

14 | Mapping Directional Mid-Air Unistroke Gestures to Interaction Commands: A User Elicitation and Evaluation Study
In: Symmetry; Volume 13; Issue 10 (2021)
BASE

15 | Achieving Semantic Consistency for Multilingual Sentence Representation Using an Explainable Machine Natural Language Parser (MParser)
In: Applied Sciences; Volume 11; Issue 24; Pages: 11699 (2021)
BASE

17 | AAA4LLL - Acquisition, Annotation, Augmentation for Lively Language Learning ...
BASE

18 | Graph-to-Graph Translations To Augment Abstract Meaning Representation Tense And Aspect
BASE

19 | Hy-NLI: a Hybrid system for state-of-the-art Natural Language Inference
BASE

20 | Graph-based broad-coverage semantic parsing

Abstract:
Many broad-coverage meaning representations can be characterized as directed graphs, where nodes represent semantic concepts and directed edges represent semantic relations among the concepts. The task of semantic parsing is to generate such a meaning representation from a sentence. It is quite natural to adopt a graph-based approach for parsing, where nodes are identified conditioning on the individual words, and edges are labeled conditioning on pairs of nodes. However, there are two issues with applying this simple and interpretable graph-based approach to semantic parsing. First, the anchoring of nodes to words can be implicit and non-injective in several formalisms (Oepen et al., 2019, 2020). This means we do not know which nodes should be generated from which word, or how many, which makes a probabilistic formulation of the training objective problematic. Second, graph-based parsers typically predict edge labels independently of each other. Such an independence assumption, while sensible from an algorithmic point of view, can limit the expressiveness of statistical modeling, so the model might fail to capture the true distribution of semantic graphs. In this thesis, instead of a pipeline approach to obtaining the anchoring, we propose to model the implicit anchoring as a latent variable in a probabilistic model. We induce this latent variable jointly with the graph-based parser in end-to-end differentiable training. In particular, we test our method on Abstract Meaning Representation (AMR) parsing (Banarescu et al., 2013). AMR represents sentence meaning with a directed acyclic graph, where the anchoring of nodes to words is implicit and can be many-to-one. Initially, we propose a rule-based system that circumvents the many-to-one anchoring by combining nodes in some pre-specified subgraphs in AMR and treats the alignment as a latent variable.
Next, we remove the need for such a rule-based system by treating both graph segmentation and alignment as latent variables. Still, our graph-based parsers are parameterized by neural modules that require gradient-based optimization, so training them with discrete latent variables can be challenging. By combining deep variational inference and differentiable sampling, our models can be trained end-to-end. To overcome the limitation of graph-based parsing and capture interdependency in the output, we further adopt iterative refinement: starting with an output whose parts are independently predicted, we iteratively refine it conditioning on the previous prediction. We test this method on semantic role labeling (Gildea and Jurafsky, 2000), the task of predicting predicate-argument structure. In particular, the semantic roles between a predicate and its arguments need to be labeled, and those roles are interdependent. Overall, our refinement strategy results in an effective model, outperforming strong factorized baseline models.
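The factorized scoring the abstract describes can be sketched in a few lines. This is a toy illustration, not the thesis's model: the label set, dimensions, and weights below are hypothetical and untrained, but the structure shows the key point that each (head, dependent) pair receives its edge label independently of every other pair, which is precisely the independence assumption iterative refinement is meant to relax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical edge-label inventory; real AMR relation sets are much larger.
EDGE_LABELS = ["<none>", ":ARG0", ":ARG1", ":mod"]

def score_edges(node_vecs, W):
    """Bilinear scoring: one score per (head, dependent, label) triple.

    node_vecs : (n, d) array of node representations
    W         : (L, d, d) array, one bilinear form per edge label
    Returns an (n, n, L) tensor; each pair's scores are computed
    independently of all other pairs (the factorization assumption).
    """
    # s[i, j, l] = node_vecs[i] @ W[l] @ node_vecs[j]
    return np.einsum("id,lde,je->ijl", node_vecs, W, node_vecs)

def decode(scores):
    """Pick the best label for every ordered node pair independently."""
    best = scores.argmax(axis=-1)
    n = scores.shape[0]
    return {(i, j): EDGE_LABELS[best[i, j]]
            for i in range(n) for j in range(n)
            if i != j and EDGE_LABELS[best[i, j]] != "<none>"}

# Three toy "nodes" (e.g. concepts for "boy", "want-01", "go-02").
n, d = 3, 8
nodes = rng.normal(size=(n, d))
W = rng.normal(size=(len(EDGE_LABELS), d, d))

edges = decode(score_edges(nodes, W))
print(edges)  # arbitrary labels here, since the weights are random
```

In a trained parser these scores would feed a loss per pair; the refinement strategy in the abstract instead re-scores conditioning on the previous full prediction, restoring interdependence between edges.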

Keywords:
Abstract Meaning Representation parsing; AMR parsing; graph-based parsers; hand-crafted pipelines; semantic parsing; semantic role labeling

URL: https://doi.org/10.7488/era/1390 ; https://hdl.handle.net/1842/38121

BASE