
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
Feature-Rich Named Entity Recognition for Bulgarian Using Conditional Random Fields ...
BASE
2
Controlling Complexity in Part-of-Speech Induction
In: Departmental Papers (CIS) (2011)
BASE
3
Posterior Regularization for Learning with Side Information and Weak Supervision
In: Publicly Accessible Penn Dissertations (2010)
BASE
4
Posterior regularization for learning with side information and weak supervision
In: Dissertations available from ProQuest (2010)
Abstract: Supervised machine learning techniques have been very successful for a variety of tasks and domains including natural language processing, computer vision, and computational biology. Unfortunately, their use often requires creation of large problem-specific training corpora that can make these methods prohibitively expensive. At the same time, we often have access to external problem-specific information that we cannot always easily incorporate. We might know how to solve the problem in another domain (e.g. for a different language); we might have access to cheap but noisy training data; or a domain expert might be available who would be able to guide a human learner much more efficiently than by simply creating an IID training corpus. A key challenge for weakly supervised learning is then how to incorporate such kinds of auxiliary information arising from indirect supervision. In this thesis, we present Posterior Regularization, a probabilistic framework for structured, weakly supervised learning. Posterior Regularization is applicable to probabilistic models with latent variables and exports a language for specifying constraints or preferences about posterior distributions of latent variables. We show that this language is powerful enough to specify realistic prior knowledge for a variety of applications in natural language processing. Additionally, because Posterior Regularization separates model complexity from the complexity of structural constraints, it can be used for structured problems with relatively little computational overhead. We apply Posterior Regularization to several problems in natural language processing including word alignment for machine translation, transfer of linguistic resources across languages, and grammar induction. Additionally, we find that we can apply Posterior Regularization to the problem of multi-view learning, achieving particularly good results for transfer learning. We also explore the theoretical relationship between Posterior Regularization and other proposed frameworks for encoding this kind of prior knowledge, and show a close relationship to Constraint Driven Learning as well as to Generalized Expectation Constraints.
Keyword: Computer science
URL: https://repository.upenn.edu/dissertations/AAI3447566
BASE
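The abstract above describes constraining posterior distributions of latent variables. As a minimal sketch, following the standard Posterior Regularization formulation (constraint features $\phi$, bounds $\mathbf{b}$, and model $p_\theta(z \mid x)$ are assumed here and not given in this listing), the regularized objective can be written as:

\[
J_Q(\theta) \;=\; \mathcal{L}(\theta) \;-\; \min_{q \in Q_x} \mathrm{KL}\bigl(q(z) \,\|\, p_\theta(z \mid x)\bigr),
\qquad
Q_x = \{\, q : \mathbb{E}_q[\phi(x, z)] \le \mathbf{b} \,\},
\]

where $\mathcal{L}(\theta)$ is the marginal log-likelihood. The E-step of the resulting EM-style procedure projects the model posterior onto the constraint set $Q_x$, which is how the framework keeps the structural constraints separate from the model itself.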
5
Learning Tractable Word Alignment Models with Complex Constraints
In: Lab Papers (GRASP) (2010)
BASE
6
PostCAT - posterior constrained alignment toolkit
In: The Prague Bulletin of Mathematical Linguistics 91 (2009), 27-36
BLLDB
OLC Linguistik
7
Dependency Grammar Induction via Bitext Projection Constraints
In: Lab Papers (GRASP) (2009)
BASE
8
Penn/UMass/CHOP BioCreative II systems
In: Andrew McCallum (2007)
BASE

Hits by source type: Catalogues 1 | Bibliographies 1 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 7