
Search in the Catalogues and Directories

Hits 1 – 20 of 134

1. Learning Stress Patterns with a Sequence-to-Sequence Neural Network
In: Proceedings of the Society for Computation in Linguistics (2022)
BASE
2. Learning Repetition, but not Syllable Reversal
In: Proceedings of the Annual Meetings on Phonology: Proceedings of the 2020 Annual Meeting on Phonology; ISSN 2377-3324 (2021)
BASE
3. Learning syntactic parameter settings without triggers by assigning credit and blame
In: CLS 55, 2019: proceedings of the fifty-fifth annual meeting of the Chicago Linguistic Society (2020), pp. 337-350
Leibniz-Zentrum Allgemeine Sprachwissenschaft
4. French schwa and gradient cumulativity
In: Glossa: a journal of general linguistics, Vol. 5, No. 1, Art. 24 (2020); ISSN 2397-1835
BASE
5. Assimilation triggers metathesis in Balantak: Implications for theories of possible repair in Optimality Theory
In: University of Massachusetts Occasional Papers in Linguistics (2020)
BASE
6. *NC
In: North East Linguistics Society (2020)
BASE
7. French schwa and gradient cumulativity
In: Joe Pater (2019)
BASE
8. Learning Reduplication with a Neural Network without Explicit Variables
In: Joe Pater (2019)
BASE
9. Phonological typology in Optimality Theory and Formal Language Theory: Goals and future directions
In: Joe Pater (2019)
BASE
10. Learning syntactic parameters without triggers by assigning credit and blame
In: Joe Pater (2019)
BASE
11. Generative linguistics and neural networks at 60: foundation, friction, and fusion
In: Joe Pater (2019)
BASE
12. Preface: SCiL 2019 Editors’ Note
In: Proceedings of the Society for Computation in Linguistics (2019)
BASE
13. Substance matters: A reply to Jardine 2016
In: Joe Pater (2018)
BASE
14. Seq2Seq Models with Dropout can Learn Generalizable Reduplication
In: Joe Pater (2018)
Abstract: Natural language reduplication can pose a challenge to neural models of language and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well on a number of other morphological tasks (Cotterell et al., 2016) and to produce results that correlate highly with human behavior (Kirov, 2017; Kirov & Cotterell, 2018), yet they include no explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
Keywords: Artificial Intelligence and Robotics; Cognitive Psychology; Computational Linguistics; Linguistics
URL: https://works.bepress.com/joe_pater/36
BASE
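A minimal sketch of the setup the abstract describes, in PyTorch. This is not the authors' implementation: the toy segment inventory SEGMENTS, the GRU encoder-decoder Seq2Seq, and all hyperparameters are illustrative assumptions. A network is trained with dropout (Srivastava et al., 2014) on total reduplication (output = stem repeated) and then asked to copy a stem containing a segment withheld from training.

# Minimal sketch (not the authors' code): a seq2seq GRU encoder-decoder
# trained with dropout on total reduplication, then tested on segments
# withheld from training. The segment inventory, architecture, and
# hyperparameters are illustrative assumptions.
import random
import torch
import torch.nn as nn

SEGMENTS = list("bdgptkmnszaeiou")            # toy inventory (assumption)
PAD, SOS, EOS = 0, 1, 2
stoi = {s: i + 3 for i, s in enumerate(SEGMENTS)}
itos = {i: s for s, i in stoi.items()}
VOCAB = len(stoi) + 3

def make_pair(segs):
    """Random stem and its total reduplication, e.g. 'bad' -> 'badbad'."""
    stem = [random.choice(segs) for _ in range(random.randint(2, 4))]
    return stem, stem + stem

def encode(seq, length):
    ids = [stoi[s] for s in seq]
    return ids + [PAD] * (length - len(ids))

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hid=64, drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab, hid, padding_idx=PAD)
        self.drop = nn.Dropout(drop)          # dropout is the key ingredient
        self.enc = nn.GRU(hid, hid, batch_first=True)
        self.dec = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.drop(self.emb(src)))        # encode the stem
        d, _ = self.dec(self.drop(self.emb(tgt_in)), h)  # teacher-forced decode
        return self.out(d)

def make_batch(segs, n=32):
    pairs = [make_pair(segs) for _ in range(n)]
    L = max(len(r) for _, r in pairs) + 1     # room for EOS
    src = torch.tensor([encode(s, L) for s, _ in pairs])
    tout = torch.tensor([[stoi[x] for x in r] + [EOS] + [PAD] * (L - len(r) - 1)
                         for _, r in pairs])
    tin = torch.cat([torch.full((n, 1), SOS), tout[:, :-1]], dim=1)
    return src, tin, tout

model = Seq2Seq(VOCAB)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

train_segs, novel_segs = SEGMENTS[:-3], SEGMENTS[-3:]  # withhold three segments
for step in range(2000):
    src, tin, tout = make_batch(train_segs)
    loss = loss_fn(model(src, tin).reshape(-1, VOCAB), tout.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def reduplicate(stem):
    model.eval()                              # disables dropout at test time
    src = torch.tensor([encode(stem, len(stem))])
    _, h = model.enc(model.emb(src))
    tok, out = torch.tensor([[SOS]]), []
    for _ in range(2 * len(stem) + 1):
        o, h = model.dec(model.emb(tok), h)
        tok = model.out(o).argmax(-1)
        if tok.item() == EOS:
            break
        out.append(itos.get(tok.item(), "?"))
    return "".join(out)

print(reduplicate(list("ba")))                # trained segments: expect 'baba'
print(reduplicate(["b", novel_segs[-1]]))     # contains a withheld segment

Re-running the sketch with drop=0.0 would be expected to show the contrast the abstract describes: the network still reduplicates trained segments but copies withheld segments much less reliably.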
15. Preface: SCiL 2018 Editors’ Note
In: Proceedings of the Society for Computation in Linguistics (2018)
BASE
16. Learning opacity in Stratal Maximum Entropy Grammar
Nazarov, Aleksei; Pater, Joe. - arXiv, 2017
BASE
17. The Oxford handbook of developmental linguistics
Lidz, Jeffrey (Ed.); Snyder, William (Ed.); Pater, Joe (Ed.). - Oxford: Oxford University Press, 2016
UB Frankfurt Linguistik
18. The Oxford handbook of developmental linguistics
Lidz, Jeffrey; Snyder, William; Pater, Joe. - Oxford: Oxford University Press, 2016
MPI für Psycholinguistik
19. Harmonic grammar and harmonic serialism
McCarthy, John J. (Ed.); Pater, Joe (Ed.). - Bristol, CT: Equinox Publishing, 2016
BLLDB
UB Frankfurt Linguistik
20. Gradient Exceptionality in Maximum Entropy Grammar with Lexically Specific Constraints
BASE


Hits by source type: Catalogues 23; Bibliographies 39; Linked Open Data catalogues 0; Online resources 0; Open access documents 80.