
Search in the Catalogues and Directories

Hits 1 – 20 of 246

1. Automatic Error Type Annotation for Arabic ...
2. Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning ...
3. HittER: Hierarchical Transformers for Knowledge Graph Embeddings ...
4. Detecting Gender Bias using Explainability ...
5. HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization ...
6. Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification ...
7. Contrastive Code Representation Learning ...
8. Unsupervised Multi-View Post-OCR Error Correction With Language Models ...
9. AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ...
10. Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ...
11. Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
12. Semantic Categorization of Social Knowledge for Commonsense Question Answering ...
13. Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
14. Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? ...
15. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
16. Automatic Text Evaluation through the Lens of Wasserstein Barycenters ...
17. Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa ...
18. Meta Distant Transfer Learning for Pre-trained Language Models ...
19. How to Train BERT with an Academic Budget ...
20. Temporal Adaptation of BERT and Performance on Downstream Document Classification: Insights from Social Media ...
Abstract: Language use differs between domains, and even within a domain, language use changes over time. For pre-trained language models like BERT, domain adaptation through continued pre-training has been shown to improve performance on in-domain downstream tasks. In this article, we investigate whether temporal adaptation can bring additional benefits. For this purpose, we introduce a corpus of social media comments sampled over three years. It contains unlabelled data for adaptation and evaluation on an upstream masked language modelling task, as well as labelled data for fine-tuning and evaluation on a downstream document classification task. We find that temporality matters for both tasks: temporal adaptation improves upstream task performance, and temporal fine-tuning improves downstream task performance. Time-specific models generally perform better on past than on future test sets, which matches evidence on the bursty usage of topical words. However, adapting BERT to time and domain does not improve performance on the downstream task ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://dx.doi.org/10.48448/xfn5-0714
https://underline.io/lecture/40586-temporal-adaptation-of-bert-and-performance-on-downstream-document-classification-insights-from-social-media
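The abstract describes a two-step recipe: temporal adaptation of BERT via continued masked language modelling on comments from a given time slice, followed by fine-tuning on a downstream document classification task. The record does not specify an implementation; the sketch below is a minimal illustration of that recipe using the Hugging Face transformers and datasets libraries, where the file names, column names, and label count are hypothetical placeholders.

```python
# Minimal sketch, assuming the Hugging Face transformers/datasets APIs.
# Step 1: temporal adaptation = continued MLM pre-training on unlabelled
#         comments from one time slice (hypothetical file comments_2019.jsonl).
# Step 2: fine-tuning on labelled comments for document classification
#         (hypothetical file labelled_2019.jsonl with "text" and "label" fields).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Fixed-length padding keeps the default collators simple in this sketch.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# --- Step 1: temporal adaptation (continued MLM pre-training) ---
unlabelled = load_dataset("json", data_files="comments_2019.jsonl")["train"]
unlabelled = unlabelled.map(tokenize, batched=True, remove_columns=unlabelled.column_names)

mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bert-adapted-2019", num_train_epochs=1),
    train_dataset=unlabelled,
    data_collator=mlm_collator,
).train()
mlm_model.save_pretrained("bert-adapted-2019")
tokenizer.save_pretrained("bert-adapted-2019")

# --- Step 2: downstream fine-tuning (document classification) ---
labelled = load_dataset("json", data_files="labelled_2019.jsonl")["train"]
labelled = labelled.map(tokenize, batched=True, remove_columns=["text"])

clf_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-adapted-2019", num_labels=2  # label count is an assumption
)

Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="bert-clf-2019", num_train_epochs=3),
    train_dataset=labelled,
).train()
```

The paper's finding can be probed by repeating the sketch with checkpoints adapted to different years and comparing classification performance on past versus future test slices.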


Results by source type: Catalogues: 0; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 246.