
Search in the Catalogues and Directories

Hits 1 – 18 of 18

1
“There's no rules. It's hackathon.”: Negotiating Commitment in a Context of Volatile Sociality
In: American Anthropological Association (2015)
BASE
2
2007 NIST Language Recognition Evaluation Test Set
Martin, Alvin; Le, Audrey. Linguistic Data Consortium, 2009. https://www.ldc.upenn.edu
BASE
3
2007 NIST Language Recognition Evaluation Supplemental Training Set
Martin, Alvin; Le, Audrey; Graff, David. Linguistic Data Consortium, 2009. https://www.ldc.upenn.edu
BASE
4
2007 NIST Language Recognition Evaluation Supplemental Training Set ...
Martin, Alvin; Le, Audrey; Graff, David. Linguistic Data Consortium, 2009
BASE
5
2007 NIST Language Recognition Evaluation Test Set ...
Martin, Alvin; Le, Audrey. Linguistic Data Consortium, 2009
BASE
6
2005 NIST Language Recognition Evaluation
Le, Audrey; Martin, Alvin; Hadfield, Hannah. Linguistic Data Consortium, 2008. https://www.ldc.upenn.edu
BASE
7
2005 NIST Language Recognition Evaluation ...
Le, Audrey; Martin, Alvin; Hadfield, Hannah. Linguistic Data Consortium, 2008
BASE
8
NIST speaker recognition evaluations utilizing the Mixer Corpora - 2004, 2005, 2006
In: IEEE Transactions on Audio, Speech, and Language Processing. New York, NY. 15 (2007) 7, 1951-1959
BLLDB
OLC Linguistik
9
2004 Spring NIST Rich Transcription (RT-04S) Development Data
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
10
2004 Spring NIST Rich Transcription (RT-04S) Evaluation Data
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
11
2003 NIST Rich Transcription Evaluation Data
Fiscus, Jonathan G.; Doddington, George R.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
12
2004 Spring NIST Rich Transcription (RT-04S) Development Data ...
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
13
2003 NIST Rich Transcription Evaluation Data ...
Fiscus, Jonathan G.; Doddington, George R.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
14
2004 Spring NIST Rich Transcription (RT-04S) Evaluation Data ...
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
15
Effects of speech recognition accuracy on the performance of DARPA Communicator spoken dialogue systems
In: International Journal of Speech Technology. Boston, Mass. [et al.]: Kluwer Academic Publishers. 7 (2004) 4, 293-309
BLLDB
OLC Linguistik
16
2002 Rich Transcription Broadcast News and Conversational Telephone Speech
Garofolo, John S.; Fiscus, Jonathan G.; Le, Audrey. Linguistic Data Consortium, 2004. https://www.ldc.upenn.edu
BASE
17
2002 Rich Transcription Broadcast News and Conversational Telephone Speech ...
Garofolo, John S.; Fiscus, Jonathan G.; Le, Audrey. Linguistic Data Consortium, 2004
BASE
18
Effects of Speech Recognition Accuracy on the Performance of DARPA Communicator Spoken Dialogue Systems
In: DTIC (2004)
Abstract: The DARPA Communicator program explored ways to construct better spoken-dialogue systems, with which users interact via speech alone to perform relatively complex tasks such as travel planning. During 2000 and 2001, two large data sets were collected from sessions in which paid users did travel planning using the Communicator systems built by eight research groups. The research groups improved their systems intensively during the ten months between the two data collections. In this paper, we analyze these data sets to estimate the effects of speech recognition accuracy, as measured by Word Error Rate (WER), on other metrics. The effects that we found were linear. We found a correlation between WER and Task Completion, and that correlation, unexpectedly, remained more or less linear even for high values of WER. The picture for User Satisfaction metrics is more complex: we found little effect of WER on User Satisfaction for WER below about 35% to 40% in the 2001 data. The effect of WER on Task Completion was smaller in 2001 than in 2000, and we believe this difference is due to improved strategies for accomplishing tasks despite speech recognition errors, an important accomplishment of the research groups who built the Communicator implementations. We show that additional factors must account for much of the variability in task success, and we present multivariate linear regression models for task success on the 2001 data. We also discuss apparent gaps in the coverage of our metrics for spoken dialogue systems.
Keyword: *COMMUNICATOR SPOKEN DIALOGUE SYSTEM; *LINEAR REGRESSION ANALYSIS; *SPEECH RECOGNITION; ACCURACY; ASR(AUTOMATIC SPEECH RECOGNITION); DARPA COMMUNICATOR PROGRAM; DATA ACQUISITION; EFFICIENCY; MULTIVARIATE LINEAR REGRESSION MODELS; Statistics and Probability; TASK COMPLETION; TASK SUCCESS; USER SATISFACTION; Voice Communications; WER(WORD ERROR RATE)
URL: http://www.dtic.mil/docs/citations/ADA523241
http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA523241
BASE
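The Word Error Rate (WER) metric central to the abstract above is conventionally computed as the word-level edit distance (substitutions, insertions, deletions) between a reference transcript and the recognizer's hypothesis, divided by the number of reference words. A minimal sketch of that standard computation follows; the function name and example utterances are illustrative, not taken from the record:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over word sequences, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("a") and one substitution (boston -> austin)
# against a 5-word reference gives 2/5 = 0.4, i.e. 40% WER.
print(wer("book a flight to boston", "book flight to austin"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why the abstract's finding of a roughly linear relation even at high WER values is notable.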

Sources: Catalogues 2, Bibliographies 2, Linked Open Data catalogues 0, Online resources 0, Open access documents 16
© 2013 - 2024 Lin|gu|is|tik