
Search in the Catalogues and Directories

Hits 1 – 20 of 254

1
The Transformer model: a "Swiss Army knife" for natural language processing
In: Techniques de l'Ingénieur, 2022, DOI: 10.51257/a-v1-in195 ; https://hal.archives-ouvertes.fr/hal-03619077 ; https://www.techniques-ingenieur.fr/base-documentaire/innovation-th10/innovations-en-electronique-et-tic-42257210/transformer-des-reseaux-de-neurones-pour-le-traitement-automatique-des-langues-in195/
BASE
2
Automatic Error Type Annotation for Arabic ...
BASE
3
Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning ...
BASE
4
HittER: Hierarchical Transformers for Knowledge Graph Embeddings ...
BASE
5
Detecting Gender Bias using Explainability ...
BASE
6
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization ...
BASE
7
Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification ...
BASE
8
Contrastive Code Representation Learning ...
BASE
9
Unsupervised Multi-View Post-OCR Error Correction With Language Models ...
BASE
10
AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ...
BASE
11
Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ...
BASE
12
Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
BASE
13
Semantic Categorization of Social Knowledge for Commonsense Question Answering ...
BASE
14
Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.409/
Abstract: Recent work has demonstrated that pre-training in-domain language models can boost performance when adapting to a new domain. However, the costs associated with pre-training raise an important question: given a fixed budget, what steps should an NLP practitioner take to maximize performance? In this paper, we study domain adaptation under budget constraints, and approach it as a customer choice problem between data annotation and pre-training. Specifically, we measure the annotation cost of three procedural text datasets and the pre-training cost of three in-domain language models. Then we evaluate the utility of different combinations of pre-training and data annotation under varying budget constraints to assess which combination strategy works best. We find that, for small budgets, spending all funds on annotation leads to the best performance; once the budget becomes large enough, a combination of data annotation and in-domain ... (A toy sketch of this trade-off appears after the results list.)
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/37963-pre-train-or-annotatequestion-domain-adaptation-with-a-constrained-budget
https://dx.doi.org/10.48448/z1gf-n855
BASE
15
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? ...
BASE
16
CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
BASE
17
Automatic Text Evaluation through the Lens of Wasserstein Barycenters ...
BASE
18
Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa ...
BASE
19
Meta Distant Transfer Learning for Pre-trained Language Models ...
BASE
20
How to Train BERT with an Academic Budget ...
BASE
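
Entry 14's abstract frames domain adaptation under a fixed budget as a choice between spending on data annotation and spending on in-domain pre-training. As a purely illustrative sketch of that trade-off (the costs and the utility function below are invented assumptions, not figures from the paper), the comparison might look like this:

```python
# Toy sketch of the budget trade-off described in entry 14's abstract.
# All numbers and the utility function are invented for illustration;
# the paper measures real annotation and pre-training costs instead.

def best_strategy(budget, cost_per_label, pretraining_cost, utility):
    """Compare spending the whole budget on annotation against
    pre-training first and annotating with the remainder."""
    options = [("annotate only", utility(int(budget / cost_per_label), False))]
    if budget >= pretraining_cost:
        labels = int((budget - pretraining_cost) / cost_per_label)
        options.append(("pre-train + annotate", utility(labels, True)))
    return max(options, key=lambda option: option[1])

# Assumed utility: labels help with diminishing returns, and in-domain
# pre-training multiplies the score (a made-up stand-in for task accuracy).
def toy_utility(labels, pretrained):
    return (labels ** 0.5) * (1.5 if pretrained else 1.0)

# With a small budget, annotation alone wins; with a larger one, the
# combined strategy wins -- mirroring the abstract's reported finding.
print(best_strategy(1_000, 0.5, 600, toy_utility))    # ('annotate only', ...)
print(best_strategy(10_000, 0.5, 600, toy_utility))   # ('pre-train + annotate', ...)
```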


Hits by collection:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 254