
Search in the Catalogues and Directories

Hits 1 – 20 of 254

1
Le modèle Transformer: un « couteau suisse » pour le traitement automatique des langues [The Transformer model: a "Swiss Army knife" for natural language processing]
In: Techniques de l'Ingénieur, 2022, ⟨10.51257/a-v1-in195⟩ ; https://hal.archives-ouvertes.fr/hal-03619077 ; https://www.techniques-ingenieur.fr/base-documentaire/innovation-th10/innovations-en-electronique-et-tic-42257210/transformer-des-reseaux-de-neurones-pour-le-traitement-automatique-des-langues-in195/
BASE
2
Automatic Error Type Annotation for Arabic ...
BASE
3
Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning ...
BASE
4
HittER: Hierarchical Transformers for Knowledge Graph Embeddings ...
BASE
5
Detecting Gender Bias using Explainability ...
BASE
6
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization ...
BASE
7
Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification ...
Abstract: Anthology paper link: https://aclanthology.org/2021.emnlp-main.359/ Fine-grained classification involves datasets with a larger number of classes separated by subtle differences. Guiding the model to focus on the dimensions that differentiate these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks: emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to weigh positives and negatives differently, in particular weighting closely confusable negatives more heavily than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods in the presence of a larger number of classes and/or more confusable classes, and helps models produce more differentiated output distributions. ...
(A brief code sketch of this label-aware weighting follows this record.)
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Sentiment Analysis
URL: https://dx.doi.org/10.48448/ywz3-2188
https://underline.io/lecture/38032-not-all-negatives-are-equal-label-aware-contrastive-loss-for-fine-grained-text-classification
BASE
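
The abstract above describes the core mechanism: class relationships are folded into the contrastive objective so that closely confusable negatives weigh more than distant ones. Below is a minimal PyTorch sketch of that weighting idea under stated assumptions; the function name, the class-similarity matrix class_sim, and the exact weighting scheme are illustrative, not the authors' published implementation.

import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(embeddings, labels, class_sim, temperature=0.1):
    # embeddings: (N, D) encoder outputs; labels: (N,) integer class ids;
    # class_sim: (C, C) inter-class similarity in [0, 1], assumed given
    # (e.g. cosine similarity between label embeddings).
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.t() / temperature                      # (N, N) pairwise scores
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Label-aware step: each candidate pair is weighted by how similar its
    # two classes are, so confusable negatives dominate the denominator
    # while easy negatives are down-weighted. Positives keep weight 1.
    pair_w = class_sim[labels.unsqueeze(1), labels.unsqueeze(0)]
    weights = torch.where(pos_mask, torch.ones_like(pair_w), pair_w)
    weights = weights.masked_fill(self_mask, 0.0)

    denom = (weights * torch.exp(logits)).sum(dim=1, keepdim=True)
    log_prob = logits - torch.log(denom + 1e-12)

    # Average log-probability over each anchor's positives; anchors whose
    # class appears only once in the batch are skipped.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return per_anchor[pos_mask.any(dim=1)].mean()

# Example: three classes where classes 0 and 1 are highly confusable.
emb = torch.randn(8, 16)
labels = torch.randint(0, 3, (8,))
class_sim = torch.tensor([[1.0, 0.9, 0.1],
                          [0.9, 1.0, 0.1],
                          [0.1, 0.1, 1.0]])
loss = label_aware_contrastive_loss(emb, labels, class_sim)

Setting class_sim to all ones recovers a standard supervised contrastive loss, so the similarity matrix is the single knob that encodes which classes are confusable.
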
8
Contrastive Code Representation Learning ...
BASE
9
Unsupervised Multi-View Post-OCR Error Correction With Language Models ...
BASE
10
AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ...
BASE
11
Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ...
BASE
12
Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
BASE
13
Semantic Categorization of Social Knowledge for Commonsense Question Answering ...
BASE
14
Pre-train or Annotate? Domain Adaptation with a Constrained Budget ...
BASE
15
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? ...
BASE
16
CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
BASE
17
Automatic Text Evaluation through the Lens of Wasserstein Barycenters ...
BASE
18
Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa ...
BASE
19
Meta Distant Transfer Learning for Pre-trained Language Models ...
BASE
20
How to Train BERT with an Academic Budget ...
BASE


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 254