Search in the Catalogues and Directories

Hits 1 – 16 of 16

1. Linking Semantic and Knowledge Representations in a Multi-Domain Dialogue System. In: DTIC (2007). [BASE]
2. Syntactic Simplification for Improving Content Selection in Multi-Document Summarization. In: DTIC (2004). [BASE]
3. Spoken Dialogue for Simulation Control and Conversational Tutoring. In: DTIC (2004). [BASE]
4. Consolidating the Results of the CIRCSIM-Tutor Project and Further Consolidation of the Results of the CIRCSIM-Tutor Project. In: DTIC and NTIS (2003). [BASE]
5. Development and Evaluation of a Korean Treebank and its Application to NLP. In: DTIC (2002). [BASE]
6. Automated Tutoring Dialogues for Training in Shipboard Damage Control. In: DTIC (2001). [BASE]
7. NLP Track at TREC-5. In: DTIC (1996). [BASE]
8. Talking to InterFIS: Adding Speech Input to a Natural Language Interface. In: DTIC and NTIS (1992). [BASE]
9. Experiments in Evaluating Interactive Spoken Language Systems. In: DTIC (1992). [BASE]
   Abstract: As the DARPA spoken language community moves towards developing useful systems for interactive problem solving, we must explore alternative evaluation procedures that measure whether these systems aid people in solving problems within the task domain. In this paper, we describe several experiments exploring new evaluation procedures. To look at end-to-end evaluation, we modified our data collection procedure slightly in order to experiment with several objective task completion measures. We found that the task completion time is well correlated with the number of queries used. We also explored log file evaluation, where evaluators were asked to judge the clarity of the query and the correctness of the response based on examination of the log file. Our results show that seven evaluators were unanimous on more than 80% of the queries, and that at least 6 out of 7 evaluators agreed over 90% of the time. Finally, we applied these new procedures to compare two systems, one system requiring a complete parse and the other using the more flexible robust parsing mechanism. We found that these metrics could distinguish between these systems: there were significant differences in ability to complete the task, number of queries required to complete the task, and score (as computed through a log file evaluation) between the robust and the non-robust modes.
   Keywords: *COMPUTATIONAL LINGUISTICS; *DATA ACQUISITION; *LANGUAGE; *PROBLEM SOLVING; *SPEECH; *SPOKEN LANGUAGE; *TEST AND EVALUATION; CAS (COMMON ANSWER SPECIFICATION); COLLECTION; COMMUNITIES; Computer Systems; DATA BASES; Information Science; INTERACTIONS; INTERROGATION; Linguistics; PARSERS
   URL: http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA460343
        http://www.dtic.mil/docs/citations/ADA460343
10. Tipster Shogun System (Joint GE-CMU): MUC-4 Test Results and Analysis. In: DTIC (1992). [BASE]
11. GE-CMU: Description of the Tipster/Shogun System as Used for MUC-4. In: DTIC (1992). [BASE]
12. BBN PLUM: MUC-4 Test Results and Analysis. In: DTIC (1992). [BASE]
13. BBN HARC and DELPHI Results on the ATIS Benchmarks - February 1991. In: DTIC (1991). [BASE]
14. BBN PLUM: MUC-3 Test Results and Analysis. In: DTIC (1991). [BASE]
15. Adaptive Natural Language Processing. In: DTIC and NTIS (1991). [BASE]
16. Research in Natural Language Processing. In: DTIC and NTIS (1990). [BASE]

Results by source type: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 16
© 2013 – 2024 Lin|gu|is|tik