
Search in the Catalogues and Directories

Hits 1 – 20 of 760

1
Finding the best way to put media bias research into practice via an annotation app ...
Hornung, Tilman. Open Science Framework, 2022
BASE
2
Multimodal Lip-Reading for Tracheostomy Patients in the Greek Language
In: Computers; Volume 11; Issue 3; Pages: 34 (2022)
BASE
3
Towards Portuguese Sign Language Identification Using Deep Learning
BASE
4
SonAmi: A Tangible Creativity Support Tool for Productive Procrastination
In: C&C ’21 - 13th ACM Conference on Creativity & Cognition, Jun 2021, Virtual Event, Italy, pp. 1-10. ⟨10.1145/3450741.3465250⟩ ; https://hal.inria.fr/hal-03442565 (2021)
BASE
5
A global-scale screening of non-native aquatic organisms to identify potentially invasive species under current and future climate conditions
In: Science of the Total Environment, Elsevier, 2021, 788, pp. 147868. ⟨10.1016/j.scitotenv.2021.147868⟩ ; https://hal.univ-lorraine.fr/hal-03544887 (2021)
BASE
6
Toucher le son d’avant d’écrire (Touching Sound, Learning Writing)
In: Actes ERGO'IA 2021, Oct 2021, Bidart, France ; https://hal.archives-ouvertes.fr/hal-03365473 (2021)
BASE
7
Potential yield simulated by global gridded crop models: using a process-based emulator to explain their differences
In: Geoscientific Model Development, European Geosciences Union, 2021, 14, pp. 1639-1656. ⟨10.5194/gmd-14-1639-2021⟩ ; https://hal.archives-ouvertes.fr/hal-03188035 (2021)
BASE
8
Dictionaries Integrated into English Learning Apps: Critical Comments and Suggestions for Improvement
In: Lexikos, Vol. 31 (2021), pp. 68-92. ISSN 2224-0039
BASE
9
Clitics are not enough: on agreement and null subjects in Brazilian Venetan
In: Glossa: a journal of general linguistics, Vol 6, No 1 (2021), 86. ISSN 2397-1835
BASE
10
Minimax Feature Merge: The Featural Linguistic Turing Machine ...
Van Steene, Louis. Zenodo, 2021
BASE
11
Minimax Feature Merge: The Featural Linguistic Turing Machine ...
Van Steene, Louis. Zenodo, 2021
BASE
12
Enter the Matrix: What the new brain-computer interfaces teach us about agency, privacy, and human subjectivity
In: The iJournal: Graduate Student Journal of the Faculty of Information, Vol 6 No 2 (Spring 2021). ISSN 2561-7397
BASE
13
Voice-user interfaces for TESOL: Potential and receptiveness among native and non-native English speaking instructors
Kent, David. University of Hawaii National Foreign Language Resource Center / Center for Language & Technology (co-sponsored by Center for Open Educational Resources and Language Learning, University of Texas at Austin), 2021
BASE
14
Evolution of human computer interaction
In: Scientific Visualization (2021)
BASE
15
Natural Language Processing for Lexical Corpus Analysis
In: Doctoral Dissertations (2021)
BASE
16
Audio-driven Character Animation
In: Doctoral Dissertations (2021)
Abstract: Generating believable character animations is a fundamentally important problem in computer graphics and computer vision, with applications ranging from entertainment (e.g., films, games) and medicine (e.g., facial therapy and prosthetics) to mixed reality and education (e.g., language/speech training and cyber-assistants). All of these applications are empowered by the ability to model and animate characters (human or non-human) convincingly. Existing key-framing or performance-capture approaches for creating animations, especially facial animations, are either laborious or hard to edit. In particular, automatically producing expressive animations from input speech remains an open challenge. In this thesis, I propose novel deep-learning-based approaches to produce speech-audio-driven character animations, including talking-head animations for character face rigs and portrait images, and reenacted gesture animations for natural human speech videos.
First, I propose a neural network architecture, called VisemeNet, that can automatically animate an input face rig using audio as input. The network has three stages: one that learns to predict a sequence of phoneme groups from audio; another that learns to predict the geometric locations of important facial landmarks from audio; and a final stage that combines the outcomes of the previous stages to produce animation motion curves for FACS-based (Facial Action Coding System) face rigs.
Second, I propose MakeItTalk, a method that takes as input a portrait image of a face along with audio and produces an expressive, synchronized talking-head animation. The portrait image can range from artistic cartoons to real human faces, and the method also generates whole-head motion dynamics matching the stresses and pauses of the audio. The key insight of the method is to disentangle content and speaker identity in the input audio signal and drive the animation from both: the content is used for robust synchronization of the lips and nearby facial regions, while the speaker information captures the remaining facial expressions and head motion dynamics that are important for generating expressive talking-head animations. I also show that MakeItTalk generalizes to new audio clips and face images not seen during training. Both VisemeNet and MakeItTalk lead to much more expressive talking-head animations with higher overall quality than the state of the art.
Lastly, I propose a method that generates speech-gesture animation by reenacting a given video to match a target speech audio. The key idea is to split and reassemble clips from an existing reference video through a novel video motion graph encoding valid transitions between clips. To seamlessly connect different clips in the reenactment, I propose a pose-aware video blending network that synthesizes video frames around the stitched frames between two clips. The method also incorporates an audio-based gesture-searching algorithm to find the optimal order of the reenacted frames. The resulting reenactments are consistent with both the audio rhythms and the speech content, and the synthesized videos have much higher quality and consistency with the target audio than previous work and baselines.
Keyword: Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
URL: https://scholarworks.umass.edu/dissertations_2/2393
https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=3336&context=dissertations_2
BASE
17
Zeitgeist: Modelando um projeto editorial com interface digital (Zeitgeist: Modeling an Editorial Project with a Digital Interface)
In: Pandaemonium Germanicum: Revista de Estudos Germanísticos, Vol 24, Iss 42 (2021)
BASE
18
Dictionaries Integrated into English Learning Apps: Critical Comments and Suggestions for Improvement
In: Lexikos, Vol 31, pp. 68-92 (2021)
BASE
19
Explaining Rainfall Accumulations over Several Days in the French Alps Using Low-Dimensional Atmospheric Predictors Based on Analogy
In: Journal of Applied Meteorology and Climatology, American Meteorological Society, 2020, 59 (2), pp. 237-250. ⟨10.1175/JAMC-D-19-0112.1⟩ ; https://hal.archives-ouvertes.fr/hal-03087661 (2020)
BASE
20
Voks: Digital instruments for chironomic control of voice samples
In: Speech Communication, Elsevier: North-Holland, 2020, 125, pp. 97-113. ⟨10.1016/j.specom.2020.10.002⟩ ; https://hal.archives-ouvertes.fr/hal-03009712 (2020)
BASE


Hits by source:
Catalogues: 1
Bibliographies: 8
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 751
© 2013 - 2024 Lin|gu|is|tik