
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
In: Dagstuhl Reports, Volume 11, Issue 7, Aug 2021, pp. 89–138, ISSN 2192-5283. ⟨10.4230/DagRep.11.7.89⟩ ; https://hal.archives-ouvertes.fr/hal-03507948 ; https://gitlab.com/unlid/dagstuhl-seminar/-/wikis/home (2021)
BASE
2
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
Croft, William; Savary, Agata; Baldwin, Timothy. - : Dagstuhl Reports. DagRep, Volume 11, Issue 7, 2021
BASE
3
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
4
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
5
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
6
Universal Dependencies ...
BASE
7
Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration ...
Basirat, Ali; Nivre, Joakim. - : arXiv, 2021
BASE
8
Revisiting Negation in Neural Machine Translation ...
BASE
9
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351) ...
Baldwin, Timothy; Croft, William; Nivre, Joakim. - : Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021
BASE
10
Attention Can Reflect Syntactic Structure (If You Let It) ...
BASE
11
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions? ...
de Lhoneux, Miryam; Nivre, Joakim; Stymne, Sara. - : Underline Science Inc., 2021 (NAACL 2021)
Abstract: Read the paper at the following link: https://direct.mit.edu/coli/article/46/4/763/97325/What-Should-Do-Can-LSTMs-Learn-When-Parsing — There is a growing interest in investigating what neural NLP models learn about language. A prominent open question is whether or not it is necessary to model hierarchical structure. We present a linguistic investigation of a neural parser that adds insights to this question. We look at transitivity and agreement information of auxiliary verb constructions (AVCs) in comparison to finite main verbs (FMVs). This comparison is motivated by theoretical work in dependency grammar, in particular the work of Tesnière (1959), where AVCs and FMVs are both instances of a nucleus, the basic unit of syntax. An AVC is a dissociated nucleus: it consists of at least two words, while an FMV is its non-dissociated counterpart, consisting of exactly one word. We suggest that the representations of AVCs and FMVs should capture similar information. We use diagnostic ...
Keyword: Artificial Intelligence; Computer Science and Engineering; Intelligent System; Natural Language Processing
URL: https://underline.io/lecture/20052-what-shoulddashdodashcan-lstms-learn-when-parsing-auxiliary-verb-constructionsquestion
https://dx.doi.org/10.48448/ys11-hq56
BASE
12
Schrödinger's Tree -- On Syntax and Neural Language Models ...
Kulmizev, Artur; Nivre, Joakim. - : arXiv, 2021
BASE

Results by source type: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 12
© 2013 - 2024 Lin|gu|is|tik