
Search in the Catalogues and Directories

Hits 1 – 20 of 40

1
Multimodal Lip-Reading for Tracheostomy Patients in the Greek Language
In: Computers; Volume 11; Issue 3; Pages: 34 (2022)
BASE
2
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
In: Multimodal Technologies and Interaction; Volume 2; Issue 4 (2018)
BASE
3
Mining a Multimodal Corpus of Doctor's Training for Virtual Patient's Feedbacks
In: 19th International Conference on Multimodal Interaction (ICMI), Nov 2017, Glasgow, United Kingdom. ⟨10.1145/3136755.3136816⟩; https://hal.archives-ouvertes.fr/hal-01654812 (2017)
BASE
4
Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations
Yoon, Dongwook. - 2017
BASE
5
Bilingual Multimodal System for Audiovisual Speech and Sign Language Synthesis from Text
Karpov, Alexey Anatolyevich; Zhelezny, Milos. - : Saint Petersburg National Research University of Information Technologies, Mechanics and Optics, 2014
BASE
6
Robust Multimodal Cognitive Load Measurement
In: DTIC (2014)
BASE
7
Robust Multimodal Cognitive Load Measurement (RMCLM)
In: DTIC (2013)
Abstract: This report summarizes the main research activities, study results, and accomplishments of the RMCLM project over the past year. The project investigates the fundamental issues in using multiple input modalities and their fusion to enable robust, automatic cognitive load measurement (CLM) in the real world. First, we extended the literature review on physiological measures of cognitive workload to cover recent advances. In parallel, we examined various features of electroencephalogram (EEG) signals (e.g. spectral and approximate entropies, wavelet-based complexity measures, correlation dimension, Hurst exponent) to evaluate changes in working memory load during a cognitive task with varying difficulty/load levels. Eye-based CLM was also studied, covering three types of eye activity: pupillary response, blink, and eye movement (fixation and saccade). We further investigated linguistic-feature-based CLM and analyzed novel linguistic features as potential indices of cognitive load. Altogether, over the past year we studied CLM with three unobtrusive modalities: EEG, eye activity, and linguistic features.
Keyword: *COGNITION; *COGNITIVE SCIENCE; *MEASUREMENT; BRAIN SCIENCE AND ENGINEERING; COGNITIVE MODELING; COMPUTER AND USER INTERFACES; EEG(ELECTROENCEPHALOGRAM); ELECTROENCEPHALOGRAPHY; Psychology; RMCLM(ROBUST MULTIMODAL COGNITIVE LOAD MEASUREMENT)
URL: http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA582471
http://www.dtic.mil/docs/citations/ADA582471
BASE
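The RMCLM abstract above names several EEG complexity features, among them approximate entropy. The following is a minimal illustrative sketch of that one feature, not code from the cited report; the function name, the defaults m=2 and r = 0.2·std, and the toy signals are assumptions chosen for the example.

import numpy as np

def approximate_entropy(signal, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal; higher means more irregular."""
    u = np.asarray(signal, dtype=float)
    n = len(u)
    r = r_factor * u.std()  # tolerance, commonly a fraction of the signal's std

    def phi(dim):
        # Embed the signal: rows are overlapping windows of length dim.
        windows = np.array([u[i:i + dim] for i in range(n - dim + 1)])
        # Chebyshev (max-coordinate) distance between every pair of windows.
        dists = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of each window (self-matches included).
        counts = np.mean(dists <= r, axis=1)
        return np.mean(np.log(counts))

    # ApEn = Phi^m(r) - Phi^(m+1)(r)
    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)
    regular = np.sin(2 * np.pi * t)                        # regular signal: low ApEn
    noisy = regular + 0.5 * rng.standard_normal(t.size)    # noisier signal: higher ApEn
    print(approximate_entropy(regular), approximate_entropy(noisy))

On a real recording, the same function would be applied per EEG channel and epoch; the other features listed in the abstract (wavelet complexity, correlation dimension, Hurst exponent) would be computed separately.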
8
Interactive multi-modal question-answering
Bosch, Antal van den; Bouma, Gosse. - Berlin : Springer, 2011
MPI für Psycholinguistik
9
Effective and Spurious Ambiguities due to some Co-verbal Gestures in multimodal dialogue
In: Eighth International Gesture Workshop (GW 2009), Feb 2009, Bielefeld, Germany; https://halshs.archives-ouvertes.fr/halshs-00436903 (2009)
BASE
10
Between linguistic attention and gaze fixations in multimodal conversational interfaces
In: http://web.cse.msu.edu/~fangrui/Papers/ICMI09.pdf (2009)
BASE
11
Ishizuka M.: Automatic generation of gaze and gestures for dialogues between embodied conversational agents
In: http://research.nii.ac.jp/%7Eprendinger/papers/werner-helmut-IJSC-08.pdf (2008)
BASE
12
Multi modal gesture identification for HCI using surface EMG
Naik, Ganesh R.; Kumar, Dinesh K.; Arjunan, Sridhar P. - U.S. : Association for Computing Machinery, 2008
BASE
13
Timing of visual and spoken input in robot instructions
In: http://www.swrtec.de/swrtec/research/publications/WolfBugmannsubmissionv3.pdf (2006)
BASE
14
Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations
In: https://pal.sri.com/CALOfiles/cstore/PAL-publications/calo/2006/p347-kaiser.pdf (2006)
BASE
15
Collaborative multimodal photo annotation over digital paper
In: http://pdf.aminer.org/000/334/219/collaborative_multimodal_photo_annotation_over_digital_paper.pdf (2006)
BASE
16
Effective error recovery strategies for multimodal form-filling applications
In: http://lands.let.kun.nl/literature/sturm.2005.1.pdf (2005)
BASE
17
Distributed pointing for multimodal collaboration over sketched diagrams
In: http://calosystem.com/publications/downloads/kaiser/distpointingmultimodal-kaiser.pdf (2005)
BASE
18
Distributed Pointing for Multimodal Collaboration Over Sketched Diagrams. ICMI
In: http://www.barthelmess.net/Publications/ICMI/p10-barthelmess.pdf (2005)
BASE
19
Linguistic theories in efficient multimodal reference resolution: An empirical investigation
In: http://www.soc.northwestern.edu/justine/discourse07/week2/Chai_LinguisticTheories.pdf (2005)
BASE
20
Multimodal new vocabulary recognition through speech and handwriting in a whiteboard scheduling application
In: http://www.cse.ogi.edu/CHCC/Publications/Multimodal_New_Vocabulary_Recognition_through_Speech_and_Handwriting_in_a_Whiteboard_Scheduling_Application.pdf (2005)
BASE


Hits by source type: Open access documents 39; Bibliographies 1; Catalogues, Linked Open Data catalogues, and Online resources 0.