1 | Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics: Dagstuhl Seminar 21351
    In: Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics, Aug 2021, pp. 89--138, 2021, ISSN 2192-5283. DOI: 10.4230/DagRep.11.7.89
    https://hal.archives-ouvertes.fr/hal-03507948
    https://gitlab.com/unlid/dagstuhl-seminar/-/wikis/home (2021)
7 | Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
10 | Attention Can Reflect Syntactic Structure (If You Let It)
11 | What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
12 | Schrödinger's Tree -- On Syntax and Neural Language Models

    Abstract: In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune). Amidst this process, language models have emerged as NLP's workhorse, displaying increasingly fluent generation capabilities and proving to be an indispensable means of knowledge transfer downstream. Due to the otherwise opaque, black-box nature of such models, researchers have employed aspects of linguistic theory in order to characterize their behavior. Questions central to syntax -- the study of the hierarchical structure of language -- have factored heavily into such work, shedding invaluable insights about models' inherent biases and their ability to make human-like generalizations. In this paper, we attempt to take stock of this growing body of literature. In doing so, we observe a lack of clarity across numerous dimensions, which influences the hypotheses ...
    Note: preprint, submitted to Frontiers in Artificial Intelligence: Perspectives for Natural Language Processing between AI, Linguistics and Cognitive Science

    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

    URL: https://arxiv.org/abs/2110.08887
    DOI: https://dx.doi.org/10.48550/arxiv.2110.08887
16 | Køpsala: Transition-Based Graph Parsing via Efficient Training and Effective Encoding
17 | Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
19 | Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection
20 | Do Neural Language Models Show Preferences for Syntactic Formalisms?