2. Can a Transformer Pass the Wug Test? Tuning Copying Bias in Neural Morphological Inflection Models
Abstract: Deep learning sequence models have been successfully applied to the task of morphological inflection. The results of the SIGMORPHON shared tasks in the past several years indicate that such models can perform well, but only if the training data cover a good amount of different lemmata, or if the lemmata that are inflected at test time have also been seen in training, as has indeed been largely the case in these tasks. Surprisingly, standard models such as the Transformer almost completely fail at generalizing inflection patterns when asked to inflect previously unseen lemmata -- i.e., under "wug test"-like circumstances. While established data augmentation techniques can be employed to alleviate this shortcoming by introducing a copying bias through hallucinating synthetic new word forms using the alphabet in the language at hand, we show that, to be more effective, the hallucination process needs to pay attention to substrings of syllable-like length rather than individual characters or stems. We report a ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2104.06483
DOI: https://dx.doi.org/10.48550/arxiv.2104.06483
Source: BASE
3. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection
4. The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection
5. Marrying Universal Dependencies and Universal Morphology
6. On the Complexity and Typology of Inflectional Morphological Systems
7. A Comparison of Feature-Based and Neural Scansion of Poetry