
Search in the Catalogues and Directories

Hits 1 – 10 of 10

1. Maximum Entropy Language Modeling with Non-Local and Syntactic Dependencies
In: http://www.cs.jhu.edu/~junwu/gbo.ps (2002)
2. Smoothing Issues in the Structured Language Model
In: http://cs.jhu.edu/~junwu/eurospeech01.ps (2001)
3. Smoothing Issues in the Structured Language Model
In: http://www.clsp.jhu.edu/~woosung/pdf/euro01.pdf (2001)
4. Syntactic Heads In Statistical Language Modeling
In: http://www.cs.jhu.edu/~junwu/icassp2000.ps (2000)
5. Combining Nonlocal, Syntactic And N-Gram Dependencies In Language Modeling
In: http://www.cs.jhu.edu/~junwu/eurospeech.ps (1999)
6. A Maximum Entropy Language Model Integrating N-Grams And Topic Dependencies For Conversational Speech Recognition
In: http://www.cs.jhu.edu/~junwu/topic-lm.ps (1999)
7. A maximum entropy language model integrating n-grams and topic dependencies for conversational speech recognition
In: http://www.mirlab.org/conference_papers/International_Conference/ICASSP 1999/PDF/AUTHOR/IC992192.PDF (1999)
8. A Maximum Entropy Language Model with Topic Sensitive Features
In: http://www.cs.jhu.edu/~junwu/memodel.ps (1998)
9. Building A Topic-Dependent Maximum Entropy Model For Very Large Corpora
In: http://www.cs.jhu.edu/~junwu/icassp02.ps (2002)
10. Maximum Entropy Techniques for Exploiting Syntactic, Semantic and Collocational Dependencies in Language Modeling
In: http://www.cs.jhu.edu/~junwu/cslpaper.ps
Abstract: A new statistical language model is presented which combines collocational dependencies with two important sources of long-range statistical dependence: the syntactic structure and the topic of a sentence. These dependencies or constraints are integrated using the maximum entropy technique. Substantial improvements are demonstrated over a trigram model in both perplexity and speech recognition accuracy on the Switchboard task. A detailed analysis of the performance of this language model is provided in order to characterize the manner in which it performs better than a standard N-gram model. It is shown that topic dependencies are most useful in predicting words which are semantically related by the subject matter of the conversation. Syntactic dependencies on the other hand are found to be most helpful in positions where the best predictors of the following word are not within N-gram range due to an intervening phrase or clause. It is also shown that these two methods ind.
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.8552
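The abstract above describes integrating collocational (n-gram), topic, and syntactic dependencies as constraints in a single maximum entropy model. The following is a minimal, hypothetical sketch of that general formulation: P(w | history) is proportional to exp of the summed weights of the features that fire for (history, w), normalized over the vocabulary. The feature set, weights, vocabulary, and function names here are invented for illustration and are not taken from the paper; real weights would be estimated from training data.

```python
# Illustrative sketch (not the authors' code): a maximum entropy language model
# that combines several feature families -- an n-gram (collocational) feature,
# a topic feature, and a syntactic-head feature -- in the spirit of the
# abstract above. All weights and vocabulary items are made up.

import math

# Hand-picked lambda weights for binary features; a real system would train
# these (e.g. with iterative scaling or gradient-based methods).
WEIGHTS = {
    ("bigram", "stock", "market"): 1.2,   # collocational / n-gram dependency
    ("topic", "finance", "market"): 0.8,  # topic dependency
    ("head", "rose", "market"): 0.5,      # syntactic-head dependency (hypothetical)
    ("bigram", "stock", "broker"): 0.9,
    ("topic", "finance", "broker"): 0.4,
}

VOCAB = ["market", "broker", "the", "rose"]

def active_features(word, prev_word, topic, head):
    """Return the binary features that fire for this (history, word) pair."""
    candidates = [("bigram", prev_word, word),
                  ("topic", topic, word),
                  ("head", head, word)]
    return [f for f in candidates if f in WEIGHTS]

def prob(word, prev_word, topic, head):
    """P(word | history) = exp(sum of active weights) / Z(history)."""
    def score(w):
        return math.exp(sum(WEIGHTS[f] for f in active_features(w, prev_word, topic, head)))
    z = sum(score(w) for w in VOCAB)  # normalization over the vocabulary
    return score(word) / z

if __name__ == "__main__":
    # The topic and the exposed syntactic head both push probability toward "market".
    for w in VOCAB:
        print(w, round(prob(w, prev_word="stock", topic="finance", head="rose"), 3))
```

The point of the sketch is that each dependency type contributes an additive term in the exponent, so the model can exploit whichever constraints are active for a given history without partitioning the training data.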
