
Search in the Catalogues and Directories

Page: 1 2 3 4 5...68
Hits 1 – 20 of 1,343

1. A Neural Pairwise Ranking Model for Readability Assessment ... (Lee, Justin; Vajjala, Sowmya. arXiv, 2022) [BASE]
2. Subspace-based Representation and Learning for Phonotactic Spoken Language Recognition ... [BASE]
3. A Deep CNN Architecture with Novel Pooling Layer Applied to Two Sudanese Arabic Sentiment Datasets ... [BASE]
4. Mono vs Multilingual BERT: A Case Study in Hindi and Marathi Named Entity Recognition ... [BASE]
5. Informative Causality Extraction from Medical Literature via Dependency-tree based Patterns ... [BASE]
6. WLASL-LEX: a Dataset for Recognising Phonological Properties in American Sign Language ... [BASE]
7. A Transformer-Based Contrastive Learning Approach for Few-Shot Sign Language Recognition ... [BASE]
8. Including Facial Expressions in Contextual Embeddings for Sign Language Generation ... [BASE]
9. Statistical and Spatio-temporal Hand Gesture Features for Sign Language Recognition using the Leap Motion Sensor ... (Bird, Jordan J. arXiv, 2022) [BASE]
10. pNLP-Mixer: an Efficient all-MLP Architecture for Language ... [BASE]
Abstract: Large pre-trained language models drastically changed the natural language processing (NLP) landscape. Nowadays, they represent the go-to framework for tackling diverse NLP tasks, even with a limited number of annotations. However, using these models in production, whether in the cloud or at the edge, remains a challenge due to their memory footprint and/or inference costs. As an alternative, recent work on efficient NLP has shown that small, weight-efficient models can reach competitive performance at a fraction of the cost. Here, we introduce pNLP-Mixer, an embedding-free model based on the MLP-Mixer architecture that achieves high weight efficiency thanks to a novel linguistically informed projection layer. We evaluate our model on two multilingual semantic parsing datasets, MTOP and multiATIS. On MTOP, our pNLP-Mixer almost matches the performance of mBERT, which has 38 times more parameters, and outperforms the state of the art among tiny models (pQRNN) with 3 times fewer parameters. On a long-sequence ... : Preprint ...
Keywords: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2202.04350
DOI: https://dx.doi.org/10.48550/arxiv.2202.04350
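The abstract describes a two-stage design: an embedding-free projection layer that hashes token features into fixed-size vectors, followed by MLP-Mixer blocks that mix across tokens and across channels. The sketch below illustrates only that general shape; the toy dimensions, the character-trigram hashing, and the single untrained mixer block are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of an embedding-free "projection + MLP-Mixer" pipeline.
# Sizes and the hashing scheme are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, PROJ_DIM, HIDDEN = 8, 16, 32  # assumed toy sizes

def project(tokens):
    """Embedding-free projection: hash character-trigram features of each
    token into a fixed-size binary vector (a stand-in for the paper's
    linguistically informed projection layer)."""
    out = np.zeros((SEQ_LEN, PROJ_DIM))
    for i, tok in enumerate(tokens[:SEQ_LEN]):
        for j in range(len(tok)):
            h = hash(tok[j:j + 3]) % PROJ_DIM
            out[i, h] = 1.0
    return out

def mlp(x, w1, w2):
    # Two-layer MLP with ReLU, as used inside Mixer blocks.
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(x):
    """One MLP-Mixer block: token mixing (across the sequence axis),
    then channel mixing (across the feature axis), with residuals."""
    w_tok1 = rng.standard_normal((SEQ_LEN, HIDDEN)) * 0.1
    w_tok2 = rng.standard_normal((HIDDEN, SEQ_LEN)) * 0.1
    w_ch1 = rng.standard_normal((PROJ_DIM, HIDDEN)) * 0.1
    w_ch2 = rng.standard_normal((HIDDEN, PROJ_DIM)) * 0.1
    x = x + mlp(x.T, w_tok1, w_tok2).T  # mix information across tokens
    x = x + mlp(x, w_ch1, w_ch2)        # mix information across channels
    return x

feats = project(["find", "flights", "to", "boston"])
out = mixer_block(feats)
print(out.shape)  # (8, 16)
```

Because the projection is computed on the fly from token surface features, the model needs no embedding table, which is where most of the weight savings over models like mBERT would come from.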
11. Multilingual Abusiveness Identification on Code-Mixed Social Media Text ... (Ranjan, Ekagra; Poddar, Naman. arXiv, 2022) [BASE]
12. hate-alert@DravidianLangTech-ACL2022: Ensembling Multi-Modalities for Tamil TrollMeme Classification ... [BASE]
13. StableMoE: Stable Routing Strategy for Mixture of Experts ... (Dai, Damai; Dong, Li; Ma, Shuming. arXiv, 2022) [BASE]
14. BERTuit: Understanding Spanish language in Twitter through a native transformer ... [BASE]
15. EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification ... [BASE]
16. Frame Shift Prediction ... [BASE]
17. Towards the Next 1000 Languages in Multilingual Machine Translation: Exploring the Synergy Between Supervised and Self-Supervised Learning ... [BASE]
18. Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages ... [BASE]
19. Out of Thin Air: Is Zero-Shot Cross-Lingual Keyword Detection Better Than Unsupervised? ... [BASE]
20. Assessment of Massively Multilingual Sentiment Classifiers ... [BASE]


Hits by source type: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 1,343
© 2013 - 2024 Lin|gu|is|tik