1 |
Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models ...
3 |
MimicProp: Learning to Incorporate Lexicon Knowledge into Distributed Word Representation for Social Media Analysis
In: Proceedings of the International AAAI Conference on Web and Social Media, Vol. 14 (2020), pp. 738-749; ISSN 2334-0770, 2162-3449

4 |
Report on EMNLP Reviewer Survey
In: [Technical Report] Association for Computational Linguistics, 2017. https://hal.archives-ouvertes.fr/hal-01660886

7 |
Heuristic Sample Selection to Minimize Reference Standard Training Set for a Part-Of-Speech Tagger
11 |
Breaking the Resource Bottleneck for Multilingual Parsing
In: DTIC and NTIS (2005)

14 |
Evaluating Translational Correspondence Using Annotation Projection
In: DTIC (2003)

15 |
Evaluating Translational Correspondence using Annotation Projection
16 |
Improved Word-Level Alignment: Injecting Knowledge about MT Divergences
In: DTIC (2002)

17 |
Word-level Alignment for Multilingual Resource Acquisition
In: DTIC (2002)

19 |
On Minimizing Training Corpus for Parser Acquisition
In: DTIC (2001)

Abstract:
Many corpus-based natural language processing systems rely on using large quantities of annotated text as their training examples. Building this kind of resource is an expensive and labor-intensive project. To minimize the effort spent on annotating examples that are not helpful to the training process, recent research efforts have begun to apply active learning techniques to selectively choose data to be annotated. In this work, we consider selecting training examples with the tree-entropy metric. Our goal is to assess how well this selection technique can be applied to training different types of parsers. We find that tree-entropy can significantly reduce the amount of training annotation for both a history-based parser and an EM-based parser. Moreover, the examples selected for the history-based parser are also good for training the EM-based parser, suggesting that the technique is parser-independent. (Additional report no. UMIACS-TR-2001-40. Supported in part by NSF under Contract IR-9712068 and DARPA under Contract N66991-97-C-8540.)

Keyword:
PARSERS; ACQUISITION; CORPUS; LEARNING; Linguistics; NATURAL LANGUAGE; TRAINING; TREE-ENTROPY

URL: http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA458746
http://www.dtic.mil/docs/citations/ADA458746

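The selection idea in the abstract above — annotate the sentences on which the parser is most uncertain, as measured by the entropy of its distribution over candidate parse trees — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names `tree_entropy` and `select_for_annotation`, and the use of an n-best score list as a stand-in for the parser's tree distribution, are assumptions for the example.

```python
import math


def tree_entropy(tree_scores):
    """Entropy of a parser's (unnormalized) distribution over the
    candidate parse trees of one sentence. Higher entropy means the
    parser is less certain, so annotating that sentence is assumed
    to be more informative."""
    total = sum(tree_scores)
    probs = [s / total for s in tree_scores]
    # H = -sum p log p, skipping zero-probability trees
    return -sum(p * math.log(p) for p in probs if p > 0)


def select_for_annotation(sentences, nbest_scores, k):
    """Hypothetical active-learning step: rank sentences by the tree
    entropy of their n-best parse scores and return the k most
    uncertain ones for human annotation."""
    ranked = sorted(zip(sentences, nbest_scores),
                    key=lambda pair: tree_entropy(pair[1]),
                    reverse=True)
    return [sentence for sentence, _ in ranked[:k]]
```

For example, a sentence whose two best parses score 0.5/0.5 has entropy ln 2 and would be chosen for annotation before one whose best parse dominates at 0.9/0.1.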
20 |
Supervised Grammar Induction Using Training Data with Limited Constituent Information ...