1 | Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
In: Dagstuhl Reports, vol. 11, no. 7, Aug 2021, pp. 89--138. ISSN 2192-5283. DOI: 10.4230/DagRep.11.7.89. HAL: https://hal.archives-ouvertes.fr/hal-03507948. Seminar wiki: https://gitlab.com/unlid/dagstuhl-seminar/-/wikis/home (2021)
2 | Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
7 | Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
9 | Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
10 | Attention Can Reflect Syntactic Structure (If You Let It)
Abstract:
Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of such work focused almost exclusively on English -- a language with rigid word order and a lack of inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We show that full trees can be decoded above baseline accuracy from single attention heads, and that individual relations are often tracked by the same heads across languages. Furthermore, in an attempt to address recent debates about the status of attention as an explanatory mechanism, we experiment with fine-tuning mBERT on a supervised parsing objective while freezing different series of parameters. Interestingly, in steering the objective to learn explicit linguistic structure, ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2101.10927 ; https://arxiv.org/abs/2101.10927
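As a rough illustration of the decoding setup summarized in the abstract of entry 10: one common way to read an unlabeled dependency tree out of a single attention head is to treat the head's weight matrix as arc scores and extract a maximum spanning arborescence over them, then compare against gold heads with unlabeled attachment score (UAS). The sketch below uses numpy and networkx, a random stand-in "attention" matrix, and made-up gold heads; these are illustrative assumptions, not the paper's exact procedure or data.

```python
# Hedged sketch: MST-style decoding of an unlabeled dependency tree from one
# attention-like score matrix, in the spirit of entry 10. The matrix is random
# stand-in data; in the paper's setting it would come from a head of
# multilingual BERT for a real sentence.
import numpy as np
import networkx as nx

def decode_tree(attn: np.ndarray) -> dict[int, int]:
    """Return {dependent: head} from an (n x n) score matrix,
    where index 0 plays the role of an artificial ROOT token."""
    n = attn.shape[0]
    g = nx.DiGraph()
    for head in range(n):
        for dep in range(1, n):          # ROOT (0) never receives a head
            if head != dep:
                # interpret attn[head, dep] as the score of the arc head -> dep
                g.add_edge(head, dep, weight=float(attn[head, dep]))
    # Chu-Liu/Edmonds-style maximum spanning arborescence over the arc scores
    tree = nx.maximum_spanning_arborescence(g, attr="weight")
    return {dep: head for head, dep in tree.edges()}

def uas(pred: dict[int, int], gold: dict[int, int]) -> float:
    """Unlabeled attachment score: fraction of tokens with the correct head."""
    correct = sum(pred[d] == h for d, h in gold.items())
    return correct / len(gold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((6, 6))                 # toy 5-token sentence plus ROOT
    gold = {1: 2, 2: 0, 3: 2, 4: 5, 5: 2}     # made-up gold heads, illustration only
    pred = decode_tree(attn)
    print("predicted heads:", pred)
    print("UAS:", uas(pred, gold))
```

Because the scores are random, the toy UAS is meaningless; the point is only the decoding step, with the ROOT node excluded from receiving a head so the arborescence is rooted at position 0.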
11 | What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
12 | Schrödinger's Tree -- On Syntax and Neural Language Models
16 | Køpsala: Transition-Based Graph Parsing via Efficient Training and Effective Encoding
17 | Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
18 | Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
19 | Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection
20 | Do Neural Language Models Show Preferences for Syntactic Formalisms?