1 | Learning Stress Patterns with a Sequence-to-Sequence Neural Network
In: Proceedings of the Society for Computation in Linguistics (2022)

2 | Learning Repetition, but not Syllable Reversal
In: Proceedings of the 2020 Annual Meeting on Phonology; ISSN 2377-3324 (2021)

4 | French schwa and gradient cumulativity
In: Glossa: a journal of general linguistics, Vol. 5, No. 1, Article 24; ISSN 2397-1835 (2020)

Abstract:
We explore the interaction of two phonological factors that condition schwa–zero alternations in French: schwa is more likely after two consonants than after a singleton; and schwa is more likely between stressed syllables than elsewhere. Using new data from a judgment study, we show that both factors play a role in schwa epenthesis and deletion, and that the two factors interact cumulatively: they have a stronger effect together than individually. Treating each factor as a constraint, we find that their cumulative interaction is better modeled with weighted than with ranked constraints. We provide a characterization of patterns of cumulativity in probability space in terms of the effect of a constraint on its own versus its effect in a cumulative interaction with another constraint. Stochastic OT can model cumulative interactions, but only sublinear ones, where the effect of a constraint is weaker in the cumulative context than on its own. Weighted constraint models, MaxEnt and Noisy HG, can model the full range of cumulativity: sublinear, linear, and superlinear. In examining the ability of these models to fit our experimental data, we find that Stochastic OT is hampered by the fact that the data displays superlinear cumulativity. Noisy HG and MaxEnt fare better on this dataset, with MaxEnt yielding the best fit.

Keywords: French; gradient cumulativity; harmonic grammar; maximum entropy grammars; noisy harmonic grammar; phonology; stochastic optimality theory

URL: https://www.glossa-journal.org/jms/article/view/583
DOI: https://doi.org/10.5334/gjgl.583
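
The abstract describes modeling cumulative constraint interaction with weighted constraints (MaxEnt). As a rough illustration of the mechanism only, the Python sketch below uses hypothetical constraint names and weights (not the paper's fitted values) to show how weights that add in log-odds space can yield a joint effect in probability space larger than the sum of the individual effects, i.e. superlinear cumulativity, which the abstract notes Stochastic OT cannot capture.

```python
import math

def maxent_p_schwa(w_dep: float, w_cluster: float, w_stress: float,
                   cluster: bool, interstress: bool) -> float:
    """Probability of the schwa candidate in a two-candidate MaxEnt tableau.

    Harmony is minus the weighted violation sum; candidate probabilities are
    proportional to exp(harmony). Hypothetical constraints (for illustration):
      w_dep     - penalizes the schwa candidate everywhere
      w_cluster - penalizes the schwa-less candidate after two consonants
      w_stress  - penalizes the schwa-less candidate between stressed syllables
    """
    h_schwa = -w_dep
    h_zero = -(w_cluster * cluster + w_stress * interstress)
    z = math.exp(h_schwa) + math.exp(h_zero)
    return math.exp(h_schwa) / z

# Illustrative weights only; the paper fits its own values to the judgment data.
W_DEP, W_CLUSTER, W_STRESS = 3.0, 1.0, 1.5

base = maxent_p_schwa(W_DEP, W_CLUSTER, W_STRESS, False, False)  # ~0.05
clus = maxent_p_schwa(W_DEP, W_CLUSTER, W_STRESS, True, False)   # ~0.12
strs = maxent_p_schwa(W_DEP, W_CLUSTER, W_STRESS, False, True)   # ~0.18
both = maxent_p_schwa(W_DEP, W_CLUSTER, W_STRESS, True, True)    # ~0.38

# Because weights add in log-odds space, the joint boost over the baseline
# (~0.33) exceeds the sum of the two individual boosts (~0.21): superlinear
# cumulativity in probability space.
print(f"base={base:.3f} cluster={clus:.3f} stress={strs:.3f} both={both:.3f}")
```
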
5 | Assimilation triggers metathesis in Balantak: Implications for theories of possible repair in Optimality Theory
In: University of Massachusetts Occasional Papers in Linguistics (2020)

8 | Learning Reduplication with a Neural Network without Explicit Variables
In: Joe Pater (2019)

9 | Phonological typology in Optimality Theory and Formal Language Theory: Goals and future directions
In: Joe Pater (2019)

10 | Learning syntactic parameters without triggers by assigning credit and blame
In: Joe Pater (2019)

11 | Generative linguistics and neural networks at 60: foundation, friction, and fusion
In: Joe Pater (2019)

12 | Preface: SCiL 2019 Editors’ Note
In: Proceedings of the Society for Computation in Linguistics (2019)

13 | Substance matters: A reply to Jardine 2016
In: Joe Pater (2018)

14 | Seq2Seq Models with Dropout can Learn Generalizable Reduplication
In: Joe Pater (2018)

15 | Preface: SCiL 2018 Editors’ Note
In: Proceedings of the Society for Computation in Linguistics (2018)

20 | Gradient Exceptionality in Maximum Entropy Grammar with Lexically Specific Constraints