
Search in the Catalogues and Directories

Hits 1 – 7 of 7

1
Pre-Training BERT on Arabic Tweets: Practical Considerations ...
Abstract: Pretraining Bidirectional Encoder Representations from Transformers (BERT) for downstream NLP tasks is a non-trivial task. We pretrained five BERT models that differ in the size of their training sets, mixture of formal and informal Arabic, and linguistic preprocessing. All are intended to support Arabic dialects and social media. The experiments highlight the centrality of data diversity and the efficacy of linguistically aware segmentation. They also show that more data or more training steps do not necessarily yield better models. Our new models achieve new state-of-the-art results on several downstream tasks. The resulting models are released to the community under the name QARiB. (6 pages, 5 figures)
Keywords: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2102.10684
https://arxiv.org/abs/2102.10684
BASE
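The QARiB models named in the abstract are distributed as pretrained checkpoints; a minimal sketch of loading one with the Hugging Face transformers library follows. The checkpoint identifier qarib/bert-base-qarib is an assumption based on the release name given in the abstract, so verify it against the actual QARiB release page.

# Minimal sketch: load a QARiB checkpoint with Hugging Face transformers.
# The id "qarib/bert-base-qarib" is an assumption taken from the release
# name in the abstract; check the QARiB release for exact identifiers.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("qarib/bert-base-qarib")
model = AutoModelForMaskedLM.from_pretrained("qarib/bert-base-qarib")

# Encode an Arabic sentence and run a forward pass through the masked-LM head.
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence length, vocabulary size)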
2
Arabic Offensive Language on Twitter: Analysis and Experiments ...
BASE
3
Arabic Dialect Identification in the Wild ...
BASE
4
Arabic Curriculum Analysis ...
BASE
5
Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model ...
BASE
6
Diacritization of Maghrebi Arabic Sub-Dialects ...
BASE
7
Arabic Multi-Dialect Segmentation: bi-LSTM-CRF vs. SVM ...
BASE

Facet counts: Catalogues 0 | Bibliographies 0 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 7