
Search in the Catalogues and Directories

Hits 1 – 20 of 70

1. Automatic Dialect Density Estimation for African American English ...
   Source: BASE
2. DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization ...
   Source: BASE
3. Dialogue State Tracking with a Language Model using Schema-Driven Prompting ...
   Source: BASE
4. A Controllable Model of Grounded Response Generation ...
   Source: BASE
5. Neural Models for Integrating Prosody in Spoken Language Understanding
   Tran, Trang. 2020.
   Source: BASE
6. Automatic Analysis of Language Use in K-16 STEM Education and Impact on Student Performance
   Nadeem, Farah. 2020.
   Source: BASE
7. Asynchronous Speech Recognition Affects Physician Editing of Notes
   Lybarger, Kevin J.; Ostendorf, Mari; Riskin, Eve. Georg Thieme Verlag KG, 2018.
   Source: BASE
8. Low-Rank RNN Adaptation for Context-Aware Language Modeling
   Jaech, Aaron. 2018.
   Source: BASE
9. Parsing Speech: A Neural Approach to Integrating Lexical and Acoustic-Prosodic Information ...
   Source: BASE
10. Effective Use of Cross-Domain Parsing in Automatic Speech Recognition and Error Detection
   Marin, Marius. 2015.
   Source: BASE
11. Automatic Characterization of Text Difficulty
   Medero, Julie. 2014.
   Source: BASE
12. Data Selection for Statistical Machine Translation
   Source: BASE
13. Graph-based query strategies for active learning
   In: IEEE Transactions on Audio, Speech, and Language Processing 21 (2013) 2, 260-269
   Source: OLC Linguistik
14. Rank and Sparsity in Language Processing
   Thesis (Ph.D.), University of Washington, 2013.
   Abstract: Language modeling is one of many problems in language processing that must grapple with naturally high ambient dimensions. Even in large datasets, the number of unseen sequences is overwhelmingly larger than the number of observed ones, posing clear challenges for estimation. Although existing methods for building smooth language models tend to work well in general, they make assumptions that are not well suited to training with limited data. This thesis introduces a new approach to language modeling that makes different assumptions about how best to smooth the distributions, aimed at better handling the limited-data scenario. Among these, it assumes that some words and word sequences behave similarly to others, and that these similarities can be learned by parameterizing a model with matrices or tensors and controlling the matrix or tensor rank. The thesis also demonstrates that sparsity complements the low-rank parameters: a low-rank component learns the regularities that exist in language, while a sparse one captures exceptional sequence phenomena. The sparse component not only improves the quality of the model; the exceptions it identifies also prove meaningful for other language processing tasks, making the models useful not only for computing probabilities but as tools for the analysis of language.
   Three new language models are introduced. The first uses a factored low-rank tensor to encode joint probabilities; it can be interpreted as a "mixture of unigrams" model and is evaluated on an English genre-adaptation task. The second is an exponential model parameterized by two matrices, one sparse and one low rank. This "Sparse Plus Low Rank Language Model" (SLR-LM) is evaluated on data from six languages, with consistent gains over the standard baseline. Its ability to exploit features of words is used to incorporate morphological information in a Turkish language modeling experiment, yielding some improvements over a word-only model, and it is also used to discover words in an unsupervised fashion from sub-word-segmented data, showing good performance in finding dictionary words. The third model extends the SLR-LM to capture diverse and overlapping influences on text (e.g., topic, genre, speaker) using additive sparse matrices. This "Multi-Factor SLR-LM" is evaluated on three corpora with different factoring structures, showing improvements in perplexity and the ability to find high-quality factor-dependent keywords. Finally, models and training algorithms are presented that extend the low-rank ideas of the thesis to sequence tagging and acoustic modeling.
   Keywords: Computer science; continuous space; Electrical engineering; language model; Linguistics; low rank; matrix; sparse; tensor
   URL: http://hdl.handle.net/1773/24250
   Source: BASE
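
   The SLR-LM summarized in the abstract above parameterizes next-word logits as the sum of a low-rank matrix and a sparse matrix. As a rough illustration of that decomposition only (not the thesis's actual formulation or training procedure; the vocabulary size, rank, and example bigram below are all hypothetical), a minimal Python sketch:

       import numpy as np

       # Sparse-plus-low-rank logits: A @ B is low rank (regularities shared
       # across many words); S is sparse (exceptional sequences only).
       V, R = 1000, 16                          # hypothetical vocab size and rank
       rng = np.random.default_rng(0)
       A = 0.01 * rng.standard_normal((V, R))   # context embeddings (low rank)
       B = 0.01 * rng.standard_normal((R, V))   # next-word embeddings (low rank)
       S = np.zeros((V, V))                     # sparse exceptions, mostly zero
       S[3, 7] = 2.0                            # one hypothetical exceptional bigram

       def next_word_distribution(context: int) -> np.ndarray:
           # Exponential model: P(w | context) = softmax(A[context] @ B + S[context]).
           logits = A[context] @ B + S[context]
           logits -= logits.max()               # stabilize the exponential
           p = np.exp(logits)
           return p / p.sum()

       p = next_word_distribution(3)
       print(p[7], p.sum())                     # the exceptional bigram is boosted

   The point the abstract emphasizes is the division of labor: the low-rank term generalizes across similar words, while the sparse term absorbs exceptions that would otherwise distort it; per the abstract, training controls the rank of the one component and the sparsity of the other.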
15. Joint reranking of parsing and word recognition with automatic segmentation
   In: Computer Speech and Language 26 (2012) 1, 1-19
   Sources: BLLDB, OLC Linguistik
16. Graph-based Algorithms for Lexical Semantics and its Applications
   Wu, Wei. 2012.
   Source: BASE
17. Expected dependency pair match: predicting translation quality with expected syntactic structure
   In: Machine Translation 23 (2010) 2-3, 169-179
   Sources: BLLDB, OLC Linguistik
18-19. A machine learning approach to reading level assessment
   In: Computer Speech and Language 23 (2009) 1, 89-106
   Sources: BLLDB, OLC Linguistik
20. Improving robustness of MLLR adaptation with speaker-clustered regression class trees
   In: Computer Speech and Language 23 (2009) 2, 176-199
   Sources: BLLDB, OLC Linguistik
