
Search in the Catalogues and Directories

Hits 1 – 18 of 18

1
“There's no rules. It's hackathon.”: Negotiating Commitment in a Context of Volatile Sociality
In: American Anthropological Association (2015)
BASE
2
2007 NIST Language Recognition Evaluation Test Set
Martin, Alvin; Le, Audrey. Linguistic Data Consortium, 2009. https://www.ldc.upenn.edu
BASE
3
2007 NIST Language Recognition Evaluation Supplemental Training Set
Martin, Alvin; Le, Audrey; Graff, David. Linguistic Data Consortium, 2009. https://www.ldc.upenn.edu
BASE
4
2007 NIST Language Recognition Evaluation Supplemental Training Set ...
Martin, Alvin; Le, Audrey; Graff, David. Linguistic Data Consortium, 2009
BASE
5
2007 NIST Language Recognition Evaluation Test Set ...
Martin, Alvin; Le, Audrey. Linguistic Data Consortium, 2009
BASE
6
2005 NIST Language Recognition Evaluation
Le, Audrey; Martin, Alvin; Hadfield, Hannah; de Villiers, Jacques; Hosom, John-Paul; van Santen, Jan. Linguistic Data Consortium, 2008. https://www.ldc.upenn.edu
Abstract:

*Introduction*

The 2005 NIST Language Recognition Evaluation was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains 73 hours of conversational telephone speech in the following languages: English (American), English (Indian), Hindi, Japanese, Korean, Mandarin (Mainland), Mandarin (Taiwan), Spanish (Mexican), and Tamil.

The goal of NIST's Language Recognition Evaluation (LRE) is to establish the baseline of current performance capability for language recognition of conversational telephone speech and to lay the groundwork for further research efforts in the field. NIST conducted two previous evaluations, in 1996 and 2003. For the 2005 LRE, the emphasis was on research directed toward a general base of technology that could be ported to various language recognition tasks with minimum effort, and on developing the ability to make more difficult discriminations between similar languages and dialects of the same language. That focus augmented the traditional evaluation goals:

* to drive the technology forward
* to measure the state of the art
* to find the most promising algorithmic approaches

The task evaluated was the detection of a given target language or dialect: given a test segment of speech and a target language or dialect, the system under evaluation determined whether the speech was from that target. The 2005 NIST Language Recognition Evaluation Plan, which includes a description of the evaluation tasks, is included with this release.

LDC has released other LRE corpora:

* 2003 NIST Language Recognition Evaluation (LDC2006S31)
* 2007 NIST Language Recognition Evaluation Test Set (LDC2009S04)
* 2007 NIST Language Recognition Evaluation Supplemental Training Set (LDC2009S05)
* 2009 NIST Language Recognition Evaluation Test Set (LDC2014S06)
* 2011 NIST Language Recognition Evaluation Test Set (LDC2018S06)

*Data*

Each speech file is one side of a "4-wire" telephone conversation represented as 8-bit, 8-kHz mu-law data. There are 11,106 speech files in SPHERE (.sph) format, for a total of 73.2 hours of speech. The speech data was compiled from LDC's CALLFRIEND corpora and from data collected by Oregon Health and Science University (OHSU), Beaverton, Oregon.

Each test segment was prepared using an automatic speech activity detection algorithm to identify areas and durations of speech. The test segments were stored in SPHERE file format, one segment per file. Unlike previous evaluations, areas of silence were not removed from the segments. Segments were chosen to contain a specified approximate duration of actual speech. Auxiliary information was included in the SPHERE headers to document the source file, start time, and duration of all excerpts used to construct the segment.

The test segments contain three nominal durations of speech: 3 seconds, 10 seconds, and 30 seconds. Actual speech durations vary but were constrained to the ranges 2-4 seconds, 7-13 seconds, and 25-35 seconds, respectively. Note that this refers to the duration of actual speech contained in the segments as determined by the speech activity detection algorithm; signal durations are generally longer due to areas of silence in the segments. Shorter speech duration test segments are subsets of longer ones; i.e., each 10-second test segment is a subset of a corresponding 30-second test segment, and each 3-second test segment is a subset of a corresponding 10-second segment.

Performance was evaluated separately for test segments of each duration. NIST recommends using data from the 1996 and 2003 evaluations as development data; this data may be found in 2003 NIST Language Recognition Evaluation (LDC2006S31). Because the 1996 and 2003 evaluations did not cover Indian-accented English, this release includes a development data set of Indian-accented English.

*Samples*

For an example of the data in this corpus, please listen to the following samples:

* 3 second (WAV)
* 10 second (WAV)
* 30 second (WAV)

*Updates*

None at this time.
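The detection task described above pairs each test segment with one hypothesized target language, and systems are scored on misses and false alarms. A minimal sketch of that style of scoring, assuming the equal-cost, 0.5-target-prior model used in published NIST cost functions (the parameter values here are assumptions; the authoritative definition is the evaluation plan shipped with the corpus):

```python
def detection_cost(trials, threshold=0.0,
                   c_miss=1.0, c_fa=1.0, p_target=0.5):
    """NIST-style detection cost for one target language.

    trials: list of (score, is_target) pairs, where `score` is the
    system's detection score for the target language and `is_target`
    says whether the segment really was that language.
    c_miss, c_fa, p_target: cost-model parameters; the values a given
    LRE actually used are defined in its evaluation plan (assumed here).
    """
    target = [s for s, is_t in trials if is_t]
    nontarget = [s for s, is_t in trials if not is_t]
    p_miss = sum(s < threshold for s in target) / len(target)
    p_fa = sum(s >= threshold for s in nontarget) / len(nontarget)
    return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

# Toy example: four trials for a single target language.
trials = [(1.2, True), (-0.3, True), (0.4, False), (-1.1, False)]
print(detection_cost(trials))  # 1.0*0.5*0.5 + 1.0*0.5*0.5 -> 0.5
```

Sweeping the threshold and evaluating each duration condition (3 s, 10 s, 30 s) separately mirrors how performance was reported per segment duration.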
URL: https://catalog.ldc.upenn.edu/LDC2008S05
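The *Data* section above describes 8-bit mu-law audio in NIST SPHERE files, which have a simple ASCII header followed by raw samples. A minimal reader sketch, assuming uncompressed single-channel files (some LDC SPHERE releases use embedded shorten compression, in which case a dedicated tool such as NIST's sph2pipe is needed; the file name below is hypothetical):

```python
def read_sphere(path):
    """Parse a NIST SPHERE header and return (header_dict, sample_bytes).

    SPHERE files begin with an ASCII header: the magic line "NIST_1A",
    a line giving the total header size in bytes, then "name -type value"
    fields terminated by "end_head". Samples follow the header.
    """
    with open(path, "rb") as f:
        raw = f.read()
    first = raw[:1024].decode("ascii", errors="replace").splitlines()
    assert first[0].strip() == "NIST_1A"
    header_size = int(first[1].strip())
    header = {}
    for line in raw[:header_size].decode("ascii", errors="replace").splitlines()[2:]:
        if line.strip() == "end_head":
            break
        parts = line.split(None, 2)
        if len(parts) == 3:
            name, typ, value = parts
            header[name] = int(value) if typ == "-i" else value
    return header, raw[header_size:]

def mulaw_to_linear(byte):
    """Expand one G.711 mu-law byte to a 16-bit linear PCM sample."""
    u = ~byte & 0xFF
    t = ((u & 0x0F) << 3) + 0x84   # mantissa plus bias
    t <<= (u & 0x70) >> 4          # shift by the exponent field
    return 0x84 - t if u & 0x80 else t - 0x84

header, data = read_sphere("segment.sph")   # hypothetical file name
pcm = [mulaw_to_linear(b) for b in data]
print(header.get("sample_rate"), len(pcm) / 8000.0, "seconds")
```

The duration printed at the end is signal duration; as the abstract notes, the actual speech duration within a segment is shorter, since silence was not removed.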
BASE
7
2005 NIST Language Recognition Evaluation ...
Le, Audrey; Martin, Alvin; Hadfield, Hannah. Linguistic Data Consortium, 2008
BASE
8
NIST speaker recognition evaluations utilizing the Mixer Corpora - 2004, 2005, 2006
In: IEEE Transactions on Audio, Speech, and Language Processing 15 (2007) 7, 1951-1959
BLLDB
OLC Linguistik
9
2004 Spring NIST Rich Transcription (RT-04S) Development Data
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
10
2004 Spring NIST Rich Transcription (RT-04S) Evaluation Data
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
11
2003 NIST Rich Transcription Evaluation Data
Fiscus, Jonathan G.; Doddington, George R.; Le, Audrey. Linguistic Data Consortium, 2007. https://www.ldc.upenn.edu
BASE
12
2004 Spring NIST Rich Transcription (RT-04S) Development Data ...
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
13
2003 NIST Rich Transcription Evaluation Data ...
Fiscus, Jonathan G.; Doddington, George R.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
14
2004 Spring NIST Rich Transcription (RT-04S) Evaluation Data ...
Fiscus, Jonathan G.; Garofolo, John S.; Le, Audrey. Linguistic Data Consortium, 2007
BASE
15
Effects of speech recognition accuracy on the performance of DARPA Communicator spoken dialogue systems
In: International Journal of Speech Technology. Boston, Mass. [et al.]: Kluwer Academic Publishers. 7 (2004) 4, 293-309
BLLDB
OLC Linguistik
16
2002 Rich Transcription Broadcast News and Conversational Telephone Speech
Garofolo, John S.; Fiscus, Jonathan G.; Le, Audrey. Linguistic Data Consortium, 2004. https://www.ldc.upenn.edu
BASE
17
2002 Rich Transcription Broadcast News and Conversational Telephone Speech ...
Garofolo, John S.; Fiscus, Jonathan G.; Le, Audrey. Linguistic Data Consortium, 2004
BASE
18
Effects of Speech Recognition Accuracy on the Performance of DARPA Communicator Spoken Dialogue Systems
In: DTIC (2004)
BASE

Results by source: Catalogues 2, Bibliographies 2, Linked Open Data catalogues 0, Online resources 0, Open access documents 16