Search in the Catalogues and Directories

Hits 1 – 20 of 86

1. Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity ...
2. Do Language Models Learn Position-Role Mappings? ...
3. Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models ...
4. Arguments for top-down derivations in syntax
   In: Proceedings of the Linguistic Society of America, Vol 7, No 1 (2022), Article 5264; ISSN 2473-8689
5. Structure Here, Bias There: Hierarchical Generalization by Jointly Learning Syntactic Transformations
   In: Proceedings of the Society for Computation in Linguistics (2021)
   Abstract: When learning syntactic transformations, children consistently induce structure-dependent generalizations, even though the primary linguistic data may be consistent with both linear and hierarchical rules. What is the source of this inductive bias? In this paper, we use computational models to investigate the hypothesis that evidence for the structure-sensitivity of one syntactic transformation can bias the acquisition of another transformation in favor of a hierarchical rule. We train sequence-to-sequence models based on artificial neural networks to learn multiple syntactic transformations at the same time in a fragment of English; we hold out cases that disambiguate linear and hierarchical rules for one of those transformations, and then test for hierarchical generalization to these held-out sentence types. Consistent with our hypothesis, we find that multitask learning induces a hierarchical bias for certain combinations of tasks, and that this bias is stronger for transformations that share computational building blocks. At the same time, the bias is in general insufficient to lead the learner to categorically acquire the hierarchical generalization for the target transformation.
   Keywords: Computational Linguistics; inductive bias; multitask learning; poverty of the stimulus; structure dependence
   URL: https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1221&context=scil
   URL: https://scholarworks.umass.edu/scil/vol4/iss1/13
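
   The abstract above describes a multitask data manipulation that is easy to picture in code. The sketch below is purely illustrative (the grammar fragment, task tokens, and sentence strings are assumptions, not the paper's materials): source sequences for two transformations are tagged with a task prefix and pooled into one training set, while the sentence types that disambiguate linear from hierarchical rules are withheld from the target task and reserved for a held-out generalization test.

   ```python
   # Illustrative sketch only: a toy version of the multitask data split
   # described in the abstract. The fragment, task tokens, and sentences
   # below are invented for illustration, not taken from the paper.

   def make_pair(task: str, source: str, target: str) -> tuple[str, str]:
       """Prefix the source with a task token, as in common multitask seq2seq setups."""
       return (f"{task} {source}", target)

   # Unambiguous training items: the subject carries no relative clause, so a
   # linear rule ("front the first auxiliary") and a hierarchical rule
   # ("front the main-clause auxiliary") produce the same question.
   train = [
       make_pair("QUEST", "the dog can see the bird",
                          "can the dog see the bird"),
       make_pair("PASS",  "the dog can see the bird",
                          "the bird can be seen by the dog"),
       # The disambiguating sentence type (relative clause on the subject)
       # appears in training only for the OTHER task, never for QUEST:
       make_pair("PASS",  "the dog that the cat admires can see the bird",
                          "the bird can be seen by the dog that the cat admires"),
   ]

   # Held-out generalization set for the target task: here the two rules
   # diverge, and only the hierarchical rule yields the target output below.
   test = [
       make_pair("QUEST", "the dog that the cat admires can see the bird",
                          "can the dog that the cat admires see the bird"),
   ]

   if __name__ == "__main__":
       for split, pairs in (("train", train), ("test", test)):
           for src, tgt in pairs:
               print(f"[{split}] {src}  ->  {tgt}")
   ```

   In the experiments themselves, any standard sequence-to-sequence model (for instance an LSTM or Transformer encoder-decoder) would be trained on the pooled pairs and evaluated on the held-out set; the sketch shows only the data split that isolates the hierarchical generalization.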
6. Comparing methods of tree-construction across mildly context-sensitive formalisms
   In: Proceedings of the Society for Computation in Linguistics (2021)
7. The Role of Linguistic Features in Domain Adaptation: TAG Parsing of Questions ...
   Srivastava, Aarohi; Frank, Robert; Widder, Sarah. University of Massachusetts Amherst, 2020
8. Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora ...
   Frank, Robert; Petty, Jackson. arXiv, 2020
9. Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora ...
10. Probabilistic Predictions of People Perusing: Evaluating Metrics of Language Model Performance for Psycholinguistic Modeling ...
11. The Role of Linguistic Features in Domain Adaptation: TAG Parsing of Questions
    In: Proceedings of the Society for Computation in Linguistics (2020)
12. Primitive Asymmetric C-Command Derives X̄-Theory
    In: North East Linguistics Society (2020)
13. Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
    In: Transactions of the Association for Computational Linguistics, Vol 8, pp. 125-140 (2020)
14. Jabberwocky Parsing: Dependency Parsing with Lexical Noise ...
    Kasai, Jungo; Frank, Robert. University of Massachusetts Amherst, 2019
15. Open Sesame: Getting Inside BERT's Linguistic Knowledge ...
16. Finding Syntactic Representations in Neural Stacks ...
17. A Unified Analysis of Reflexives and Reciprocals in Synchronous Tree Adjoining Grammar
18. Jabberwocky Parsing: Dependency Parsing with Lexical Noise
    In: Proceedings of the Society for Computation in Linguistics (2019)
19. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks ...
20. Phonologically Informed Edit Distance Algorithms for Word Alignment with Low-Resource Languages ...
    McCoy, Richard T.; Frank, Robert. University of Massachusetts Amherst, 2018
