
Search in the Catalogues and Directories

Hits 1 – 20 of 40

1
Multimodal Lip-Reading for Tracheostomy Patients in the Greek Language
In: Computers; Volume 11; Issue 3; Pages: 34 (2022)
BASE
2
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
In: Multimodal Technologies and Interaction; Volume 2; Issue 4 (2018)
BASE
3
Mining a Multimodal Corpus of Doctor's Training for Virtual Patient's Feedbacks
In: 19th International Conference on Multimodal Interaction (ICMI), Nov 2017, Glasgow, United Kingdom; https://hal.archives-ouvertes.fr/hal-01654812; ⟨10.1145/3136755.3136816⟩ (2017)
BASE
4
Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations
Yoon, Dongwook. - 2017
BASE
5
A Bilingual Multimodal System for Audiovisual Synthesis of Speech and Sign Language from Text
Karpov, Alexey Anatolievich; Železný, Miloš. - : Saint Petersburg National Research University of Information Technologies, Mechanics and Optics (ITMO University), 2014
BASE
6
Robust Multimodal Cognitive Load Measurement
In: DTIC (2014)
BASE
7
Robust Multimodal Cognitive Load Measurement (RMCLM)
In: DTIC (2013)
BASE
8
Interactive multi-modal question-answering
Bosch, Antal van den; Bouma, Gosse. - Berlin : Springer, 2011
MPI für Psycholinguistik
9
Effective and Spurious Ambiguities due to some Co-verbal Gestures in multimodal dialogue
In: Eighth International Gesture Workshop (GW 2009), Feb 2009, Bielefeld, Germany; https://halshs.archives-ouvertes.fr/halshs-00436903 (2009)
BASE
10
Between linguistic attention and gaze fixations in multimodal conversational interfaces
In: http://web.cse.msu.edu/~fangrui/Papers/ICMI09.pdf (2009)
BASE
11
Ishizuka M.: Automatic generation of gaze and gestures for dialogues between embodied conversational agents
In: http://research.nii.ac.jp/%7Eprendinger/papers/werner-helmut-IJSC-08.pdf (2008)
BASE
12
Multi modal gesture identification for HCI using surface EMG
Naik, Ganesh R. (R19010); Kumar, Dinesh K.; Arjunan, Sridhar P.. - : U.S., Association for Computing Machinery, 2008
BASE
13
Timing of visual and spoken input in robot instructions
In: http://www.swrtec.de/swrtec/research/publications/WolfBugmannsubmissionv3.pdf (2006)
Abstract: Trainable robots will need to understand instructions by humans who combine speech and gesture. This paper reports on the analysis of speech and gesture events in a corpus of human-to-human instructions for the dealing phase of a card game. Such instructions constitute an almost uninterrupted stream of words and gestures. One task of a multimodal robot interface is to determine which gesture is to be paired with which utterance. The analysis of the timing of events in the corpus shows that gestures can start at various times relative to the speech, from 5 seconds before speech starts to 4 seconds after speech ends. The end of a gesture never precedes the corresponding utterance. A simple algorithm based on temporal proximity correctly pairs 83% of gestures with their corresponding utterances. This indicates that timing carries significant information for pairing. For practical applications, however, more reliable pairing algorithms are needed. The paper also describes how individual actions can be grouped into a gesture and discusses the integration of semantic information from gesture and speech.
(A minimal code sketch of such temporal-proximity pairing follows this entry.)
Keyword: human-computer interaction; multimodal interfaces; natural language understanding; service robots; speech
URL: http://www.swrtec.de/swrtec/research/publications/WolfBugmannsubmissionv3.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.369
BASE
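The abstract above describes pairing each gesture with an utterance purely by temporal proximity. The following Python sketch illustrates one plausible reading of that idea; the `Event` class, the midpoint-distance metric, and the toy card-game data are assumptions made here for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    start: float  # seconds from the start of the recording
    end: float    # seconds from the start of the recording

def pair_by_temporal_proximity(gestures, utterances):
    """Pair each gesture with the temporally closest utterance.

    Proximity is measured between interval midpoints; this is one possible
    interpretation of "temporal proximity" -- the abstract does not spell
    out the exact metric used in the paper.
    """
    pairs = []
    for g in gestures:
        g_mid = (g.start + g.end) / 2.0
        # Choose the utterance whose midpoint lies closest to the gesture's midpoint.
        best = min(utterances, key=lambda u: abs((u.start + u.end) / 2.0 - g_mid))
        pairs.append((g, best))
    return pairs

# Hypothetical toy data loosely modelled on card-dealing instructions.
utterances = [Event("deal two cards to each player", 1.0, 3.5),
              Event("then place the deck in the middle", 4.0, 6.0)]
gestures = [Event("sweep over players", 0.5, 2.0),
            Event("point at table centre", 5.0, 6.5)]

for gesture, utterance in pair_by_temporal_proximity(gestures, utterances):
    print(f"{gesture.label!r} -> {utterance.label!r}")
```

According to the abstract, a heuristic of this kind pairs about 83% of gestures correctly on the authors' corpus, so a deployed interface would still need to combine it with semantic cues from gesture and speech.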
14
Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations
In: https://pal.sri.com/CALOfiles/cstore/PAL-publications/calo/2006/p347-kaiser.pdf (2006)
BASE
15
Collaborative multimodal photo annotation over digital paper
In: http://pdf.aminer.org/000/334/219/collaborative_multimodal_photo_annotation_over_digital_paper.pdf (2006)
BASE
16
Effective error recovery strategies for multimodal form-filling applications
In: http://lands.let.kun.nl/literature/sturm.2005.1.pdf (2005)
BASE
17
Distributed pointing for multimodal collaboration over sketched diagrams
In: http://calosystem.com/publications/downloads/kaiser/distpointingmultimodal-kaiser.pdf (2005)
BASE
18
Distributed Pointing for Multimodal Collaboration Over Sketched Diagrams. ICMI
In: http://www.barthelmess.net/Publications/ICMI/p10-barthelmess.pdf (2005)
BASE
19
Linguistic theories in efficient multimodal reference resolution: An empirical investigation
In: http://www.soc.northwestern.edu/justine/discourse07/week2/Chai_LinguisticTheories.pdf (2005)
BASE
20
Multimodal new vocabulary recognition through speech and handwriting in a whiteboard scheduling application
In: http://www.cse.ogi.edu/CHCC/Publications/Multimodal_New_Vocabulary_Recognition_through_Speech_and_Handwriting_in_a_Whiteboard_Scheduling_Application.pdf (2005)
BASE


Hits by source type:
Catalogues: 0
Bibliographies: 1
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 39
© 2013 - 2024 Lin|gu|is|tik