
Search in the Catalogues and Directories

Hits 941 – 957 of 957

941. ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering ...
942. A Conditional Splitting Framework for Efficient Constituency Parsing ...
943. Event Detection as Graph Parsing ...
944. COSY: COunterfactual SYntax for Cross-Lingual Understanding ...
945. UnNatural Language Inference ...
946. Syntax-augmented Multilingual BERT for Cross-lingual Transfer ...
947. Dynamic and Multi-Channel Graph Convolutional Networks for Aspect-Based Sentiment Analysis ...
948. Structured Sentiment Analysis as Dependency Graph Parsing ...
949. Jointly Identifying Rhetoric and Implicit Emotions via Multi-Task Learning ...
950. Adapting Unsupervised Syntactic Parsing Methodology for Discourse Dependency Parsing ...
951. Representing Syntax and Composition with Geometric Transformations ...
952. Lower Perplexity is Not Always Human-Like ...
953. Surprisal Estimators for Human Reading Times Need Character Models ...
954. Psycholinguistic Tripartite Graph Network for Personality Detection ...
955. How is BERT surprised? Layerwise detection of linguistic anomalies ...
956. Catchphrase: Automatic Detection of Cultural References ...
957. Exploiting Language Relatedness for Low Web-Resource Language Model Adaptation: An Indic Languages Study ...
Paper: https://www.aclanthology.org/2021.acl-long.105
Abstract: Recent research in multilingual language models (LMs) has demonstrated their ability to handle multiple languages effectively in a single model. This holds promise for low web-resource languages (LRLs), as multilingual models can enable transfer of supervision from high-resource languages to LRLs. However, incorporating a new language into an LM remains a challenge, particularly for languages with limited corpora and unseen scripts. In this paper we argue that relatedness among languages in a language family may be exploited to overcome some of the corpora limitations of LRLs, and propose RelateLM. We focus on Indian languages and exploit relatedness along two dimensions: (1) script (since many Indic scripts originated from the Brahmic script), and (2) sentence structure. RelateLM uses transliteration to convert the unseen script of limited LRL text into the script of a Related Prominent Language (RPL) (Hindi in our case). While ...
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/x7q0-ax50
https://underline.io/lecture/25996-exploiting-language-relatedness-for-low-web-resource-language-model-adaptation-an-indic-languages-study


Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 957
© 2013 - 2024 Lin|gu|is|tik