
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN ...
BASE
2
Universal linguistic inductive biases via meta-learning ...
BASE
3
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs ...
Abstract: Sequence-based neural networks show significant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks. Such tree-based networks can be provided with a constituency parse, a dependency parse, or both. We evaluate which of these two representational schemes more effectively introduces biases for syntactic structure that increase performance on the subject-verb agreement prediction task. We find that a constituency-based network generalizes more robustly than a dependency-based one, and that combining the two types of structure does not yield further improvement. Finally, we show that the syntactic robustness of sequential models can be substantially improved by fine-tuning on a small amount of constructed data, suggesting that data augmentation is a viable alternative to explicit constituency structure for imparting the syntactic biases that sequential models are lacking.
Comment: To appear in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
Keyword: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2005.00019
https://arxiv.org/abs/2005.00019
BASE
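The abstract above ties syntactic robustness to the subject-verb agreement prediction task and to fine-tuning on a small amount of constructed data. The short Python sketch below (not taken from the paper; vocabulary, templates, and names are illustrative assumptions) shows one way such constructed agreement items, with an attractor noun of the opposite number intervening between subject and verb, might be generated.

# Minimal sketch (not code from the paper): build subject-verb agreement
# items with an intervening "attractor" noun of the opposite number, the
# kind of small constructed dataset the abstract refers to for fine-tuning
# or evaluating sequential models. Vocabulary, templates, and function
# names are illustrative assumptions, not the authors' materials.
import random

SING_NOUNS = ["author", "pilot", "surgeon"]
PLUR_NOUNS = ["authors", "pilots", "surgeons"]
# (singular form, plural form) of each verb
VERBS = [("laughs", "laugh"), ("smiles", "smile"), ("writes", "write")]


def make_item(rng: random.Random) -> dict:
    """Return one agreement item: a prefix ending just before the verb,
    plus the grammatical and ungrammatical continuations."""
    head_is_singular = rng.random() < 0.5
    head = rng.choice(SING_NOUNS if head_is_singular else PLUR_NOUNS)
    attractor = rng.choice(PLUR_NOUNS if head_is_singular else SING_NOUNS)
    verb_sg, verb_pl = rng.choice(VERBS)
    good, bad = (verb_sg, verb_pl) if head_is_singular else (verb_pl, verb_sg)
    prefix = f"the {head} near the {attractor}"
    return {"prefix": prefix,
            "grammatical": f"{prefix} {good}",
            "ungrammatical": f"{prefix} {bad}"}


if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        item = make_item(rng)
        print(item["grammatical"], "/", item["ungrammatical"])

A model would then be credited with the correct syntactic generalization if, for each prefix, it prefers the grammatical continuation over the ungrammatical one.
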
4
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
In: Transactions of the Association for Computational Linguistics, Vol 8, Pp 125-140 (2020)
BASE
5
RNNs Implicitly Implement Tensor Product Representations
In: ICLR 2019 - International Conference on Learning Representations, May 2019, New Orleans, United States ; https://hal.archives-ouvertes.fr/hal-02274498 (2019)
BASE
6
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference ...
BASE
7
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance ...
BASE
8
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks ...
BASE

Hits by source: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 8