
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
Computational Modeling of Nonfinality Effects on Stress Typology
In: Proceedings of the 33rd West Coast Conference on Formal Linguistics (held March 27-29, 2015, at Simon Fraser University, Vancouver, British Columbia) (2016), pp. 371-380
Leibniz-Zentrum Allgemeine Sprachwissenschaft
2
Learning bias in stress windows: Frequency and attestation
In: Proceedings of the 2014 Annual Meeting on Phonology (2016), ISSN 2377-3324
BASE
3
Sign constraints on feature weights improve a joint model of word segmentation and phonology
Johnson, Mark; Pater, Joe; Staubs, Robert. - Red Hook, New York: The Association for Computational Linguistics, 2015
BASE
4
Event-Related Potential Evidence of Abstract Phonological Learning in the Laboratory
In: Joe Pater (2015)
BASE
5
Sign Constraints on Feature Weights Improve a Joint Model of Word Segmentation and Phonology
In: Joe Pater (2015)
BASE
6
Emergent Contrast in Agent-Based Modeling of Language
In: Linguistics Department Faculty Publication Series (2015)
BASE
7
Emergent Contrast in Agent-Based Modeling of Language
In: Joe Pater (2015)
BASE
8
Computational Modeling of Learning Biases in Stress Typology
In: Doctoral Dissertations (2014)
Abstract: This dissertation demonstrates a strong connection between the frequency of stress patterns and their relative learnability under a wide class of learning algorithms. These frequency results follow from hypotheses about the learner's available representations and the distribution of input data. Such hypotheses are combined with a model of learning to derive distinctions between classes of stress patterns, addressing frequency biases not modeled by traditional generative theory. I present a series of results for error-driven learners of constraint-based grammars, both for single learners and for learners in an iterated learning model. First, I show that with general n-gram constraints, learners exhibit biases in their learning of stress patterns, mirroring frequency effects in the observed typology. These include biases toward full alternation and toward fixed stress near word edges. I show that these effects arise from the learner's representation of the consistency and distinctiveness of the learning data, a notion I formalize within error-driven, constraint-based learners. I show how specific representational assumptions can lead to distinct predictions about frequency, potentially adjudicating between theories. Languages with primary stress placement independent of word parity are shown to be, with the right constraint set, more consistent and thus more readily learned, offering an explanation for their relative frequency. This explanation is especially valuable because, while parity-dependent languages exist, they are a small minority. I continue by showing how such a model predicts biases in the size of stress windows, and I discuss the role of this approach in deciding the nature of potentially "accidental" gaps. I demonstrate that such a model can incorporate sources of bias outside the learner's representations, giving a model of a perceptual nonfinality effect based on probabilistic misperception. This modification helps account for typological skews in the edge of fixed stress and windows, as well as in foot type for iterative stress. The methods used and conclusions drawn in this dissertation are potentially extendable to a wide range of linguistic phenomena. This foundation offers a way of approaching some otherwise-unexplained frequency biases by grounding them in theories of linguistic representation and learning.
Keyword: Computational Linguistics; computational phonology; learning biases; Maximum Entropy grammar; Phonetics and Phonology; phonological learning; phonology; stress typology
URL: https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1249&context=dissertations_2
https://scholarworks.umass.edu/dissertations_2/230
BASE
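The error-driven, constraint-based (Harmonic Grammar) learning that this dissertation builds on can be illustrated with a minimal sketch. This is not code from the dissertation: the constraint names (NonFinality, Align-Left), the candidate violation vectors, and the learning rate are hypothetical. On each error, a perceptron-style update nudges weights toward the observed winner, and a nonnegativity sign constraint (cf. entries 3 and 5 above) keeps weights from going negative.

```python
# A minimal sketch of an error-driven Harmonic Grammar learner.
# Constraints, candidates, and the learning rate are illustrative only.

def harmony(weights, violations):
    # Higher harmony = fewer weighted constraint violations.
    return -sum(w * v for w, v in zip(weights, violations))

def update(weights, winner_viols, loser_viols, rate=0.1):
    """Perceptron-style update, applied only when the current grammar
    fails to prefer the observed winner; weights stay nonnegative."""
    if harmony(weights, loser_viols) >= harmony(weights, winner_viols):
        weights = [max(0.0, w + rate * (l - n))
                   for w, l, n in zip(weights, loser_viols, winner_viols)]
    return weights

# Hypothetical constraint set: [NonFinality, Align-Left]
w = [0.0, 0.0]
winner = [0, 1]  # penultimate stress: one Align-Left violation
loser = [1, 0]   # final stress: one NonFinality violation
for _ in range(20):
    w = update(w, winner, loser)
assert harmony(w, winner) > harmony(w, loser)
```

After training, NonFinality outweighs Align-Left, so the grammar prefers the attested penultimate-stress candidate; the frequency-bias results in the dissertation concern how quickly and reliably different pattern classes reach such a state.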
9
Modeling Morphological Subgeneralizations
In: Proceedings of the 2013 Annual Meeting on Phonology (2014), ISSN 2377-3324
BASE
10
Editors' note
In: Proceedings of the 2013 Annual Meeting on Phonology (2014), ISSN 2377-3324
BASE
11
Disjunction in wh-questions
Haida, Andreas; Repp, Sophie. - GLSA Publications, 2013
BASE
12
Learning Probabilities Over Underlying Representations
In: Joe Pater (2012)
BASE
13
Specialization Methods and Cataphoricity in Coreference Resolution
Staubs, Robert. - 2009
BASE

© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings