
Search in the Catalogues and Directories

Hits 1 – 7 of 7

1
How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing
In: Brain and Language, Elsevier, 2022, vol. 225, ISSN 0093-934X, EISSN 1090-2155, DOI: 10.1016/j.bandl.2021.105058; https://hal.archives-ouvertes.fr/hal-03472191 (2022)
2
A Sign Language Based ATM Accessing For Blind ...
3
A Sign Language Based ATM Accessing For Blind ...
4
Developmental Paths of Pointing for Various Motives in Infants with and without Language Delay
In: International Journal of Environmental Research and Public Health; Volume 19; Issue 9; Pages: 4982 (2022)
5
Integrating Gestures and Words to Communicate in Full-Term and Low-Risk Preterm Late Talkers
In: International Journal of Environmental Research and Public Health; Volume 19; Issue 7; Pages: 3918 (2022)
6
American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM
In: Sensors; Volume 22; Issue 4; Pages: 1406 (2022)
Abstract: Complex hand gesture interactions among dynamic sign words can lead to misclassification, which reduces the recognition accuracy of a ubiquitous sign language recognition system. This paper proposes augmenting the feature vector of dynamic sign words with knowledge of hand dynamics as a proxy and classifying dynamic sign words by motion pattern based on the extracted feature vector. Some double-hand dynamic sign words have ambiguous or similar features along the hand motion trajectory, which leads to classification errors; the similar or ambiguous hand motion trajectories are therefore identified by approximating a probability density function over a time frame. The extracted features are then enhanced by a transformation using maximal information correlation. These enhanced features of 3D skeletal videos captured by a leap motion controller are fed as a state transition pattern to a classifier for sign word classification. In an evaluation with 10 participants on 40 double-hand dynamic ASL words, the proposed method achieves 97.98% accuracy. It is further evaluated on the challenging ASL, SHREC, and LMDHG data sets and outperforms conventional methods by 1.47%, 1.56%, and 0.37%, respectively.
Keyword: American sign language words; bidirectional long short-term memory; computer vision; deep learning; dynamic hand gestures; leap motion controller sensor; sign language recognition; ubiquitous system; video processing
URL: https://doi.org/10.3390/s22041406
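To make the classifier stage of this abstract concrete, here is a minimal, hypothetical PyTorch sketch of a multi-stacked bidirectional LSTM over per-frame skeletal feature sequences such as those from a leap motion controller. The class name StackedBiLSTMSignClassifier, the feature dimension (63), hidden size, layer count, and class count (40) are illustrative assumptions, not code or values from the paper, and the feature-enhancement step is omitted.

# Minimal sketch, assuming per-frame skeletal features are already extracted.
import torch
import torch.nn as nn

class StackedBiLSTMSignClassifier(nn.Module):
    def __init__(self, feat_dim=63, hidden=128, layers=3, num_classes=40):
        super().__init__()
        # Multi-stacked bidirectional LSTM over the hand-motion feature sequence.
        self.lstm = nn.LSTM(
            input_size=feat_dim,
            hidden_size=hidden,
            num_layers=layers,
            batch_first=True,
            bidirectional=True,
        )
        # Linear head over the concatenated forward/backward hidden states.
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        # x: (batch, frames, feat_dim) -- enhanced per-frame skeletal features.
        out, _ = self.lstm(x)
        # Classify the sign word from the representation at the last time step.
        return self.head(out[:, -1, :])

if __name__ == "__main__":
    model = StackedBiLSTMSignClassifier()
    clip = torch.randn(2, 60, 63)  # 2 clips, 60 frames, 63-dim skeletal features
    logits = model(clip)
    print(logits.shape)  # torch.Size([2, 40])

Each input clip would correspond to one dynamic sign word; training and the probability-density-based disambiguation of similar trajectories are outside the scope of this sketch.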
7
The medium is still the message: Canadian federal politicians' gestural stance markers of credibility and opinion
Sie, Trevor. - 2022
