82. Self-Supervised Speech Representations Preserve Speech Characteristics while Anonymizing Voices ...
83. Repeat after me: Self-supervised learning of acoustic-to-articulatory mapping by vocal imitation ...
84. Synthesizing Dysarthric Speech Using Multi-talker TTS for Dysarthric Speech Recognition ...
85. WLASL-LEX: a Dataset for Recognising Phonological Properties in American Sign Language ...
86. ConSLT: A Token-level Contrastive Framework for Sign Language Translation ...
87. Modeling Intensification for Sign Language Generation: A Computational Approach ...
88. A Transformer-Based Contrastive Learning Approach for Few-Shot Sign Language Recognition ...
89. Open Source HamNoSys Parser for Multilingual Sign Language Encoding ...
90. Including Facial Expressions in Contextual Embeddings for Sign Language Generation ...
91. Statistical and Spatio-temporal Hand Gesture Features for Sign Language Recognition using the Leap Motion Sensor ...
92. Searching for fingerspelled content in American Sign Language ...
93. A Comprehensive Review of Sign Language Recognition: Different Types, Modalities, and Datasets ...
94. ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign Language ...
95. Sign Language Video Retrieval with Free-Form Textual Queries ...
96. Machine Translation from Signed to Spoken Languages: State of the Art and Challenges ...
98. Extracting linguistic speech patterns of Japanese fictional characters using subword units ...

Abstract:
This study extracts and analyzes the linguistic speech patterns that characterize Japanese anime and game characters. Conventional morphological analyzers such as MeCab segment words with high accuracy, but they cannot segment the broken expressions or utterance endings that often appear in the lines of anime and game characters and are not listed in the dictionary. To overcome this challenge, we propose segmenting the lines of Japanese anime and game characters into subword units, which were originally proposed for deep learning, and extracting frequently occurring strings to obtain expressions that characterize their utterances. We analyzed the subword units weighted by TF-IDF according to gender, age, and individual anime character, and show that they capture linguistic speech patterns specific to each attribute. Additionally, a classification experiment shows that a model using subword units outperforms one using the conventional method. ...

Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2203.02632 https://arxiv.org/abs/2203.02632
100. What do complexity measures measure? Correlating and validating corpus-based measures of morphological complexity ...