
Search in the Catalogues and Directories

Hits 41–60 of 1,648

41
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
In: Computational Linguistics, Vol 46, Iss 4, Pp 763-784 (2021)
Abstract: There is a growing interest in investigating what neural NLP models learn about language. A prominent open question is the question of whether or not it is necessary to model hierarchical structure. We present a linguistic investigation of a neural parser adding insights to this question. We look at transitivity and agreement information of auxiliary verb constructions (AVCs) in comparison to finite main verbs (FMVs). This comparison is motivated by theoretical work in dependency grammar and in particular the work of Tesnière (1959), where AVCs and FMVs are both instances of a nucleus, the basic unit of syntax. An AVC is a dissociated nucleus; it consists of at least two words, and an FMV is its non-dissociated counterpart, consisting of exactly one word. We suggest that the representation of AVCs and FMVs should capture similar information. We use diagnostic classifiers to probe agreement and transitivity information in vectors learned by a transition-based neural parser in four typologically different languages. We find that the parser learns different information about AVCs and FMVs if only sequential models (BiLSTMs) are used in the architecture but similar information when a recursive layer is used. We find explanations for why this is the case by looking closely at how information is learned in the network and looking at what happens with different dependency representations of AVCs. We conclude that there may be benefits to using a recursive layer in dependency parsing and that we have not yet found the best way to integrate it in our parsers.
Keyword: Computational linguistics. Natural language processing; P98-98.5
URL: https://doi.org/10.1162/coli_a_00392
https://doaj.org/article/7b4d6e78cc6e49f294e0664fb59690c8
BASE
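The abstract above relies on diagnostic classifiers: small supervised probes trained on parser-internal vectors to test whether a linguistic property (here, agreement or transitivity) is recoverable from them. The following is a minimal sketch of that general technique only; the vectors, labels, dimensions, and the scikit-learn probe are hypothetical stand-ins, not the paper's actual parser, data, or setup.

# Minimal sketch of a diagnostic-classifier probe (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for parser-internal representations: one vector per nucleus
# (an AVC or an FMV) and a linguistic label to probe for, e.g. transitivity.
vectors = rng.normal(size=(1000, 128))   # hypothetical hidden states
labels = rng.integers(0, 2, size=1000)   # 0 = intransitive, 1 = transitive

X_train, X_test, y_train, y_test = train_test_split(
    vectors, labels, test_size=0.2, random_state=0
)

# The diagnostic classifier itself: a simple linear probe. If it predicts the
# label well above chance, the property is (linearly) encoded in the vectors.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))

With real parser states in place of the random vectors, comparing probe accuracy for AVC vectors against FMV vectors (and across BiLSTM-only versus recursive architectures) is the kind of comparison the abstract describes.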
42
Efficient Computation of Expectations under Spanning Tree Distributions
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 675-690 (2021)
BASE
43
Revisiting Multi-Domain Machine Translation
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 17-35 (2021)
BASE
44
Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve
In: Computational Linguistics, Vol 47, Iss 1, Pp 117-140 (2021)
BASE
45
Semantic Data Set Construction from Human Clustering and Spatial Arrangement
In: Computational Linguistics, Vol 47, Iss 1, Pp 69-116 (2021)
BASE
46
WikiAsp: A Dataset for Multi-domain Aspect-based Summarization
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 211-225 (2021)
BASE
47
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 195-210 (2021)
BASE
48
Aligning Faithful Interpretations with their Social Attribution
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 294-310 (2021)
BASE
49
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1408-1424 (2021)
BASE
50
Evaluating Document Coherence Modeling
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 621-640 (2021)
BASE
51
Comparing Knowledge-Intensive and Data-Intensive Models for English Resource Semantic Parsing
In: Computational Linguistics, Vol 47, Iss 1, Pp 43-68 (2021)
BASE
52
Model Compression for Domain Adaptation through Causal Effect Estimation
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1355-1373 (2021)
BASE
53
Planning with Learned Entity Prompts for Abstractive Summarization
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1475-1492 (2021)
BASE
54
Supertagging the Long Tail with Tree-Structured Decoding of Complex Categories
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 243-260 (2021)
BASE
55
Lexically Aware Semi-Supervised Learning for OCR Post-Correction
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1285-1302 (2021)
BASE
56
Supervised and Unsupervised Neural Approaches to Text Readability
In: Computational Linguistics, Vol 47, Iss 1, Pp 141-179 (2021)
BASE
57
A Graph-Based Framework for Structured Prediction Tasks in Sanskrit
In: Computational Linguistics, Vol 46, Iss 4, Pp 785-845 (2021)
BASE
58
What Helps Transformers Recognize Conversational Structure? Importance of Context, Punctuation, and Labels in Dialog Act Recognition
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 1163-1179 (2021)
BASE
59
Efficient Outside Computation
In: Computational Linguistics, Vol 46, Iss 4, Pp 745-762 (2021)
BASE
60
There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It
In: Transactions of the Association for Computational Linguistics, Vol 9, Pp 605-620 (2021)
BASE


Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 1,648