
Search in the Catalogues and Directories

Hits 1 – 20 of 28

1
Speech-Centric Information Processing: An Optimization-Oriented Approach
In: http://research.microsoft.com/pubs/179540/ProcIEEE_He_deng_finalsub.pdf (2012)
BASE
2
High-quality speech-to-speech translation for computer-aided language learning
In: http://people.cs.pitt.edu/~litman/courses/slate/pdf/p1-wang.pdf (2006)
BASE
3
Conceptual decoding from word lattices: application to the spoken dialogue corpus MEDIA
In: http://lia.univ-avignon.fr/fileadmin/documents/Users/Intranet/chercheurs/bechet/publifred/FB_2006_INTERSPEECH_1.pdf (2006)
BASE
4
Conceptual decoding from word lattices: application to the spoken dialogue corpus MEDIA
In: http://www-lium.univ-lemans.fr/%7Eservan/publications/Servan_Interspeech2006.pdf (2006)
BASE
5
Conceptual decoding from word lattices: application to the spoken dialogue corpus MEDIA
In: http://www.ist-luna.eu/pdf/IS061416.pdf (2006)
BASE
6
Timing of visual and spoken input in robot instructions
In: http://www.swrtec.de/swrtec/research/publications/WolfBugmannsubmissionv3.pdf (2006)
Abstract: Trainable robots will need to understand instructions by humans who combine speech and gesture. This paper reports on the analysis of speech and gesture events in a corpus of human-to-human instructions for the dealing phase of a card game. Such instructions constitute an almost uninterrupted stream of words and gestures. One task of a multimodal robot interface is to determine which gesture is to be paired with which utterance. The analysis of event timing in the corpus shows that gestures can start at various times relative to the speech, from 5 seconds before speech starts to 4 seconds after speech ends. The end of a gesture never precedes the corresponding utterance. A simple algorithm based on temporal proximity correctly pairs 83% of gestures with their corresponding utterances. This indicates that timing carries significant information for pairing. For practical applications, however, more reliable pairing algorithms are needed. The paper also describes how individual actions can be grouped into a gesture and discusses the integration of semantic information from gesture and speech.
Keyword: human-computer interaction; multimodal interfaces; natural language understanding; service robots; speech
URL: http://www.swrtec.de/swrtec/research/publications/WolfBugmannsubmissionv3.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.369
BASE
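The temporal-proximity pairing described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the interval representation, the use of midpoints as the proximity measure, and the example timings are all assumptions.

```python
def pair_gestures(gestures, utterances):
    """Pair each gesture with the temporally closest utterance.

    gestures and utterances are lists of (start, end) times in seconds.
    Returns a list of (gesture_index, utterance_index) pairs.
    Sketch only: proximity is measured between interval midpoints,
    which is one plausible reading of "temporal proximity".
    """
    def midpoint(interval):
        start, end = interval
        return (start + end) / 2.0

    pairs = []
    for gi, gesture in enumerate(gestures):
        # Pick the utterance whose midpoint lies nearest the gesture's midpoint.
        ui = min(range(len(utterances)),
                 key=lambda i: abs(midpoint(utterances[i]) - midpoint(gesture)))
        pairs.append((gi, ui))
    return pairs

# Example: two gestures, each overlapping one of two utterances.
print(pair_gestures([(0.0, 1.0), (5.0, 6.0)], [(0.5, 2.0), (5.5, 7.0)]))
```

On the example data each gesture is paired with the utterance it overlaps, giving `[(0, 0), (1, 1)]`; the paper reports that such proximity-based pairing reaches 83% accuracy on its corpus, so real systems would need additional cues.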
7
Confidence Estimation for NLP Applications
In: http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-48755.pdf (2006)
BASE
8
Probabilistic grounding of situated speech using plan recognition and reference resolution
In: http://web.media.mit.edu/~dkroy/papers/pdf/gorniak_roy_2005.pdf (2005)
BASE
9
Probabilistic grounding of situated speech using plan recognition and reference resolution
In: http://www.media.mit.edu/cogmac/publications/speech_plan_gorniak_icmi2005.pdf (2005)
BASE
10
Corpus-based discourse understanding in spoken dialogue systems
In: http://www.kecl.ntt.co.jp/icl/kpro/rh/pdf/ACMTSLPFinalDraft.pdf (2003)
BASE
11
Trainable Videorealistic Speech Animation
In: http://cuneus.ai.mit.edu:8000/publications/siggraph02.ps.gz (2002)
BASE
12
Interacting with Virtual Characters In Interactive Storytelling
In: http://www-scm.tees.ac.uk/users/f.charles/publications/conferences/2002/aamas2002.pdf (2002)
BASE
13
Non-instructional Linguistic Communication with Virtual Actors
In: http://www-scm.tees.ac.uk/users/f.charles/publications/conferences/2001/roman2001.pdf (2001)
BASE
14
Jupiter: A Telephone-Based Conversational Interface for Weather Information
In: http://www.sls.lcs.mit.edu/sls/publications/2000/IEEE-jupiter.ps.gz (2000)
BASE
15
Steps Toward Flexible Speech Recognition
In: http://www.furui.cs.titech.ac.jp/english/publication/././publication/2000/sst2000_19.pdf (2000)
BASE
16
Connectionist Language Models For Speech Understanding: The Problem Of Word Order Variation
In: http://www-iupva.univ-ubs.fr/GT51/JYA/articles/99Eurospeech.ps (1999)
BASE
17
Speech Perception Using . . . The BeBe System
In: http://www.lcs.mit.edu/publications/pubs/pdf/MIT-LCS-TR-736.pdf (1997)
BASE
18
On the Role of Syntax in Speech Understanding (International Workshop on Speech Processing Proceedings, pp. 7–12)
In: ftp://ftp.sanpo.t.u-tokyo.ac.jp/pub/nigel/papers/waseda.ps.Z (1993)
BASE
19
Modeling Of Time Constituents For Speech Understanding
In: http://www.techfak.uni-bielefeld.de/ags/ai/publications/./papers/Hildebrandt1993-MOT.ps.gz (1993)
BASE
20
Speech Recognition Using Semantic Hidden Markov Networks
In: http://www.techfak.uni-bielefeld.de/ags/ai/publications/./papers/Fink1993-SRU.ps.gz (1993)
BASE


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 28
© 2013 - 2024 Lin|gu|is|tik