
Search in the Catalogues and Directories

Hits 1 – 6 of 6

1
Dynamic Grammars with Lookahead Composition for WFST-based Speech Recognition
In: http://www.gavo.t.u-tokyo.ac.jp/~mine/paper/PDF/2012/INTERSPEECH_1272_t2012-9.pdf
BASE
2
Painless WFST Cascade Construction for LVCSR: Transducersaurus (INTERSPEECH 2011)
In: http://www.gavo.t.u-tokyo.ac.jp/~mine/paper/PDF/2011/INTERSPEECH_p1537-1540_t2011-8.pdf
BASE
3
Open-Source WFST tools for LVCSR Cascade Construction
In: http://www.gavo.t.u-tokyo.ac.jp/~mine/paper/PDF/2011/FSMNLP_t2011-7.pdf
BASE
4
Open Source WFST tools for LVCSR cascade development
In: http://www.aclweb.org/anthology-new/W/W11/W11-4409.pdf
BASE
5
Failure Transitions for Joint N-gram Models and G2P Conversion
In: http://www.gavo.t.u-tokyo.ac.jp/~mine/paper/PDF/2013/INTERSPEECH_p1821-1825_t2013-8.PDF
BASE
6
CLEF 2009 Question Answering Experiments at Tokyo Institute of Technology
In: http://ceur-ws.org/Vol-1175/CLEF2009wn-QACLEF-HeieEt2009.pdf
Abstract: In this paper we describe the experiments carried out at Tokyo Institute of Technology for the CLEF 2009 Question Answering on Speech Transcriptions (QAST) task, where we participated in the English track. We apply a non-linguistic, data-driven approach to Question Answering (QA). Relevant sentences are first retrieved from the supplied corpus, using a language-model-based sentence retrieval module. Our probabilistic answer extraction module then pinpoints exact answers in these sentences. In this year's QAST task the question set contains both factoid and non-factoid questions, where the non-factoid questions ask for definitions of given named entities. We do not make any adjustments to our factoid QA system to account for non-factoid questions. Moreover, we are presented with the challenge of searching for the right answer in a relatively small corpus. Our system is built to take advantage of redundant information in large corpora; however, in this task such redundancy is not available. The results show that our QA framework does not perform well on this task: we place last of four participating teams in seven out of eight runs. However, our performance does not degrade when automatic transcriptions of speeches or questions are used instead of manual transcriptions. Thus the only run in which we are not placed last is the most difficult one, where spoken questions and ASR transcriptions with high WER are used.
Keywords: Question answering; questions beyond factoids. Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]; H.3.3 [Information Search and Retrieval]; H.3.4 [Systems and Software]. General Terms: Experimentation; Measurement; Performance.
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.664.1724
http://ceur-ws.org/Vol-1175/CLEF2009wn-QACLEF-HeieEt2009.pdf
BASE
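The retrieval step described in the abstract above can be illustrated with a minimal sketch. This is not the authors' code; it assumes a standard query-likelihood unigram language model with Jelinek-Mercer smoothing (the smoothing weight `lam` and all names are illustrative choices, not taken from the paper):

```python
# Minimal sketch of language-model-based sentence retrieval:
# each candidate sentence is scored by the log-likelihood of the
# question terms under a unigram LM of that sentence, smoothed
# with corpus-level term statistics (Jelinek-Mercer smoothing).
import math
from collections import Counter

def retrieve(question, sentences, lam=0.7):
    """Rank sentences by query likelihood P(question | sentence LM)."""
    corpus = Counter(w for s in sentences for w in s.lower().split())
    corpus_total = sum(corpus.values())
    q_terms = question.lower().split()
    scored = []
    for s in sentences:
        words = s.lower().split()
        counts = Counter(words)
        score = 0.0
        for t in q_terms:
            p_sent = counts[t] / len(words) if words else 0.0
            p_corp = corpus[t] / corpus_total if corpus_total else 0.0
            p = lam * p_sent + (1 - lam) * p_corp
            # Floor zero-probability terms to avoid log(0).
            score += math.log(p) if p > 0 else math.log(1e-12)
        scored.append((score, s))
    return [s for _, s in sorted(scored, reverse=True)]

sents = [
    "Tokyo Institute of Technology is located in Japan.",
    "The weather was sunny yesterday.",
    "Question answering systems extract exact answers.",
]
print(retrieve("where is Tokyo Institute of Technology", sents)[0])
```

An answer extraction module, as in the paper, would then pinpoint the exact answer span within the top-ranked sentences; that step is omitted here.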

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 6
© 2013 – 2024 Lin|gu|is|tik