Catalogue search
Simple Search
Hits 1 – 2 of 2
1
ADFAC: Automatic detection of facial articulatory features
Garg, Saurabh; Hamarneh, Ghassan; Jongman, Allard ...
In: MethodsX (2020)
BASE
2
ADFAC: Automatic detection of facial articulatory features
Garg, Saurabh; Hamarneh, Ghassan; Jongman, Allard; Sereno, Joan A.; Wang, Yue
Elsevier, 2020
Abstract:
This work is licensed under a Creative Commons Attribution 4.0 International License.
Using computer-vision and image-processing techniques, we aim to identify specific visual cues induced by facial movements made during monosyllabic speech production. The method is named ADFAC: Automatic Detection of Facial Articulatory Cues. Four facial points of interest were detected automatically to represent head, eyebrow and lip movements: the nose tip (a proxy for head movement), the medial point of the left eyebrow, and the midpoints of the upper and lower lips. The detected points were then automatically tracked in the subsequent video frames. Critical features such as the distance, velocity and acceleration describing local facial movements with respect to each speaker's resting face were extracted from the positional profiles of each tracked point. A variant of random forest is proposed to determine which facial features are significant in classifying speech sound categories. The method takes both video and audio as input and extracts features from any video with a plain or simple background. The method is implemented in MATLAB, and the scripts are made available on GitHub for easy access.
• Uses computer-vision and image-processing techniques to automatically detect and track keypoints on the face during speech production in videos, allowing more natural articulation than previous sensor-based approaches.
• Measures multi-dimensional and dynamic facial movements by extracting time-related, distance-related and kinematics-related features in speech production.
• Adopts a novel random-forest classification approach to determine and rank the significance of facial features toward accurate speech sound categorization.
Funding: Social Sciences and Humanities Research Council of Canada (SSHRC Insight Grant 435–2012–1641); Natural Sciences and Engineering Research Council of Canada (NSERC Discovery Grant 2017–05978)
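The abstract outlines the core computation: track the four facial keypoints per frame, then derive distance, velocity and acceleration relative to each speaker's resting face, and rank the resulting features with a random-forest classifier. The record states that the published scripts are MATLAB; the sketch below is only an illustrative Python analogue, not the authors' code. The frame rate, the summary statistics (max/mean), and the use of scikit-learn's generic RandomForestClassifier in place of the paper's random-forest variant are all assumptions made here for the sake of a runnable example.

```python
# Illustrative sketch only: the published ADFAC scripts are MATLAB; this Python
# analogue assumes a fixed frame rate and simple frame-wise differencing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FPS = 30.0  # assumed video frame rate (not given in the record)

def kinematic_features(track, rest_xy, fps=FPS):
    """Distance/velocity/acceleration features for one tracked facial point.

    track   : (n_frames, 2) array of per-frame (x, y) positions of the point
    rest_xy : (2,) position of the same point in the speaker's resting face
    Returns a small summary vector (assumed statistics: max/mean of each profile).
    """
    track = np.asarray(track, dtype=float)
    dist = np.linalg.norm(track - np.asarray(rest_xy, dtype=float), axis=1)
    vel = np.gradient(dist) * fps   # first derivative of displacement from rest
    acc = np.gradient(vel) * fps    # second derivative
    return np.array([
        dist.max(), dist.mean(),
        np.abs(vel).max(), np.abs(vel).mean(),
        np.abs(acc).max(), np.abs(acc).mean(),
    ])

def token_features(tracks, rest_points):
    """Concatenate features over the four points of interest
    (nose tip, left-eyebrow medial point, upper- and lower-lip midpoints)."""
    return np.concatenate([
        kinematic_features(t, r) for t, r in zip(tracks, rest_points)
    ])

def rank_features(X, y, n_trees=500, seed=0):
    """Rank feature columns by importance for speech-sound classification.
    A stock RandomForestClassifier stands in for the paper's random-forest variant."""
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X, y)
    return np.argsort(forest.feature_importances_)[::-1]  # most important first
```

In use, each monosyllabic token would contribute one row of `token_features(...)` to the matrix `X`, with `y` holding the speech-sound category labels; `rank_features(X, y)` then orders the distance-, velocity- and acceleration-based features by their contribution to the classification.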
Keyword:
Computer vision; Discriminative analysis; Facial movements; Features; Image processing; Visual cues
URL:
http://hdl.handle.net/1808/30837
https://doi.org/10.1016/j.mex.2020.101006
BASE