Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. huggingface/datasets: 1.18.1 ... (BASE)
2. huggingface/transformers: v4.4.0: S2T, M2M100, I-BERT, mBART-50, DeBERTa-v2, XLSR-Wav2Vec2 ... (BASE)
3. huggingface/datasets: 1.16.0 ... (BASE)
4. Low-Complexity Probing via Finding Subnetworks ... (BASE)
5. Block Pruning For Faster Transformers ... (BASE)
6. Low-Complexity Probing via Finding Subnetworks ... NAACL 2021; Cao, Steven; Rush, Alexander. Underline Science Inc., 2021 (BASE)
7. Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning ... (BASE)
8. Transformers: State-of-the-Art Natural Language Processing ... (BASE)
9. Transformers: State-of-the-Art Natural Language Processing ... (BASE)
10. huggingface/transformers: ProphetNet, Blenderbot, SqueezeBERT, DeBERTa ... (BASE)
11. huggingface/transformers: Trainer, TFTrainer, Multilingual BART, Encoder-decoder improvements, Generation Pipeline ... (BASE)
12. huggingface/pytorch-transformers: DistilBERT, GPT-2 Large, XLM multilingual models, bug fixes ... (BASE)
Abstract: New model architecture: DistilBERT. Adds Hugging Face's new transformer architecture, DistilBERT, described in "Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT" by Victor Sanh, Lysandre Debut, and Thomas Wolf. The new architecture comes with two pretrained checkpoints: distilbert-base-uncased, the base DistilBERT model, and distilbert-base-uncased-distilled-squad, a DistilBERT model fine-tuned with distillation on SQuAD.
An awaited new pretrained checkpoint: GPT-2 large (774M parameters). The third OpenAI GPT-2 checkpoint is available in the library under the shortcut name gpt2-large: 774M parameters, 36 layers, and 20 heads.
New XLM multilingual pretrained checkpoints: two new XLM models in 17 and 100 languages, which obtain better performance than multilingual BERT on the XNLI cross-lingual classification task.
New dependency: sacremoses. Support for XLM is improved by carefully reproducing the original tokenization ...
URL: https://dx.doi.org/10.5281/zenodo.3385998
https://zenodo.org/record/3385998
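Since the abstract above names concrete checkpoint shortcut names, a minimal sketch of loading one of them may help. It assumes the current transformers package (the release described here shipped as pytorch-transformers, which Hugging Face later renamed); the checkpoint name distilbert-base-uncased is taken from the release notes above.

import torch
from transformers import DistilBertModel, DistilBertTokenizer

# Load the base DistilBERT checkpoint named in the release notes.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

# Encode a sample sentence and run a forward pass without gradients.
inputs = tokenizer("Hello, DistilBERT!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final hidden states: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)

The other checkpoints mentioned load the same way, only the class and shortcut name change: distilbert-base-uncased-distilled-squad via DistilBertForQuestionAnswering, and gpt2-large via GPT2Model.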

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 12