
Search in the Catalogues and Directories

Hits 1 – 20 of 79

1
Parallel processing in speech perception with local and global representations of linguistic context
In: eLife (2022)
BASE
2
Using surprisal and fMRI to map the neural bases of broad and local contextual prediction during natural language comprehension ...
BASE
3
Community-level Research on Suicidality Prediction in a Secure Environment: Overview of the CLPsych 2021 Shared Task
BASE
4
Connecting Documents, Words, and Languages Using Topic Models
Yang, Weiwei. - 2019
BASE
5
Assessing Composition in Sentence Vector Representations ...
BASE
6
Relating lexical and syntactic processes in language: Bridging research in humans and machines
BASE
7
Guided Probabilistic Topic Models for Agenda-setting and Framing
Nguyen, Viet An. - 2015
BASE
8
Soft syntactic constraints for Arabic-English hierarchical phrase-based translation
In: Machine translation. - Dordrecht [etc.]: Springer Science + Business Media 26 (2012) 1-2, 137-157
BLLDB
OLC Linguistik
9
Crowdsourced Monolingual Translation
Hu, Chang. - 2012
BASE
10
Decision Tree-based Syntactic Language Modeling
BASE
11
Modeling Dependencies in Natural Languages with Latent Variables
Abstract: In this thesis, we investigate the use of latent variables to model complex dependencies in natural languages. Traditional models, which have a fixed parameterization, often make strong independence assumptions that lead to poor performance. This problem is often addressed by incorporating additional dependencies into the model (e.g., using higher order N-grams for language modeling). These added dependencies can increase data sparsity and/or require expert knowledge, together with trial and error, in order to identify and incorporate the most important dependencies (as in lexicalized parsing models). Traditional models, when developed for a particular genre, domain, or language, are also often difficult to adapt to another. In contrast, previous work has shown that latent variable models, which automatically learn dependencies in a data-driven way, are able to flexibly adjust the number of parameters based on the type and the amount of training data available. We have created several different types of latent variable models for a diverse set of natural language processing applications, including novel models for part-of-speech tagging, language modeling, and machine translation, and an improved model for parsing. These models perform significantly better than traditional models. We have also created and evaluated three different methods for improving the performance of latent variable models. While these methods can be applied to any of our applications, we focus our experiments on parsing. The first method involves self-training, i.e., we train models using a combination of gold standard training data and a large amount of automatically labeled training data. We conclude from a series of experiments that the latent variable models benefit much more from self-training than conventional models, apparently due to their flexibility to adjust their model parameterization to learn more accurate models from the additional automatically labeled training data. The second method takes advantage of the variability among latent variable models to combine multiple models for enhanced performance. We investigate several different training protocols to combine self-training with model combination. We conclude that these two techniques are complementary to each other and can be effectively combined to train very high quality parsing models. The third method replaces the generative multinomial lexical model of latent variable grammars with a feature-rich log-linear lexical model to provide a principled solution to address data sparsity, handle out-of-vocabulary words, and exploit overlapping features during model induction. We conclude from experiments that the resulting grammars are able to effectively parse three different languages. This work contributes to natural language processing by creating flexible and effective latent variable models for several different languages. Our investigation of self-training, model combination, and log-linear models also provides insights into the effective application of these machine learning techniques to other disciplines.
Keywords: Computer science; language modeling; latent variable models; machine translation; natural language processing; parsing; tagging
URL: http://hdl.handle.net/1903/12295
BASE
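The abstract in record 11 describes a self-training protocol for latent variable parsers: train an initial model on gold-standard trees, use it to label a large pool of raw sentences automatically, then retrain on the combined data. The sketch below illustrates that loop only in outline; the names used here (TrainableParser, self_train, gold_trees, unlabeled_sentences) are hypothetical placeholders for this illustration and are not taken from the cited thesis.

    # Minimal sketch, in Python, of the self-training loop described in record 11.
    # All identifiers are illustrative placeholders, not code from the thesis.

    class TrainableParser:
        """Stand-in for a latent-variable parser with a train/parse interface."""

        def train(self, treebank):
            # Fit model parameters to a list of (sentence, tree) pairs (stubbed here).
            self.treebank = list(treebank)

        def parse(self, sentence):
            # Return the model's best tree for the sentence (stubbed here).
            return ("TREE", sentence)

    def self_train(parser, gold_trees, unlabeled_sentences):
        # 1. Train an initial model on the gold-standard treebank only.
        parser.train(gold_trees)
        # 2. Use that model to label a large pool of raw sentences automatically.
        auto_trees = [(s, parser.parse(s)) for s in unlabeled_sentences]
        # 3. Retrain on the union of gold and automatically labeled data.
        parser.train(gold_trees + auto_trees)
        return parser

According to the abstract, the benefit of step 3 is claimed to be larger for latent variable models than for conventional parsers, because they can adjust their parameterization to the additional automatically labeled data.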
12
Exploiting syntactic relationships in a phrase-based decoder: an exploration
In: Machine translation. - Dordrecht [etc.]: Springer Science + Business Media 24 (2010) 2, 123-140
BLLDB
OLC Linguistik
13
Gibbs Sampling for the Uninitiated
In: DTIC (2010)
BASE
14
Structured local exponential models for machine translation
BASE
15
A Formal Model of Ambiguity and its Applications in Machine Translation
BASE
16
Extending Phrase-Based Decoding with a Dependency-Based Reordering Model
In: DTIC (2009)
BASE
17
Extending Phrase-Based Decoding with a Dependency-Based Reordering Model
BASE
18
Computational Analysis of the Conversational Dynamics of the United States Supreme Court
Hawes, Timothy. - 2009
BASE
19
Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models
BASE
20
Generalizing Word Lattice Translation
In: DTIC (2008)
BASE

