1 | POS induction with distributional and morphological information using a distance-dependent Chinese Restaurant Process
BASE
2 | Semantic title evaluation and recommendation based on topic models
4 | A Summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition
5 | Why is English so easy to segment?
6 | Dynamic 3-D visualization of vocal tract shaping during speech
7 | Working with a small dataset - semi-supervised dependency parsing for Irish
8 | A Joint model of word segmentation and phonological variation for English word-final t-deletion
9 | Parsing entire discourses as very long strings: capturing topic continuity in grounded language learning
10 | A Non-monotonic Arc-Eager transition system for dependency parsing
12 | Modeling graph languages with grammars extracted via tree decompositions
13 | Learning from OzCLO, the Australian Computational and Linguistics Olympiad
14 | The effect of non-tightness on Bayesian estimation of PCFGs

Abstract:
Probabilistic context-free grammars have the unusual property of not always defining tight distributions (i.e., the sum of the “probabilities” of the trees the grammar generates can be less than one). This paper reviews how this non-tightness can arise and discusses its impact on Bayesian estimation of PCFGs. We begin by presenting the notion of “almost everywhere tight grammars” and show that linear CFGs follow it. We then propose three different ways of reinterpreting non-tight PCFGs to make them tight, show that the Bayesian estimators in Johnson et al. (2007) are correct under one of them, and provide MCMC samplers for the other two. We conclude with an empirical discussion of the impact of tightness.

9 page(s)

Keyword:
080100 Artificial Intelligence and Image Processing

URL: http://hdl.handle.net/1959.14/286332
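The non-tightness described in the abstract above can be seen in the textbook one-nonterminal grammar S -> S S (prob. p) | a (prob. 1-p); this sketch is an illustration of that standard example, not code from the paper. The total probability mass of finite derivations is the least fixed point of q = p*q^2 + (1-p), which equals 1 only when p <= 1/2.

```python
def tree_mass(p, iters=10000):
    """Least fixed point of q = p*q**2 + (1-p), computed by iterating from q=0.

    This is the total probability the grammar S -> S S (p) | a (1-p)
    assigns to finite trees; the grammar is tight iff this equals 1.
    """
    q = 0.0
    for _ in range(iters):
        q = p * q * q + (1.0 - p)
    return q

# For p <= 1/2 the mass converges to 1 (tight); for p > 1/2 it
# converges to (1-p)/p < 1, so some probability "leaks" to
# non-terminating derivations.
for p in (0.4, 0.5, 0.8):
    print(p, tree_mass(p))
```

For p = 0.8 the fixed-point equation has roots 0.25 and 1, and iteration from 0 converges to the smaller root, so a quarter of the probability mass survives on finite trees.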
20 | Exploring adaptor grammars for native language identification