
Search in the Catalogues and Directories

Hits 1 – 20 of 626

1. Cross-media Scientific Research Achievements Query based on Ranking Learning ... — Wang, Benzhi; Liang, Meiyu; Li, Ang. arXiv, 2022 (BASE)
2. Exploring Sub-skeleton Trajectories for Interpretable Recognition of Sign Language ... (BASE)
3. Cross-Lingual Query-Based Summarization of Crisis-Related Social Media: An Abstractive Approach Using Transformers ... — Vitiugin, Fedor; Castillo, Carlos. arXiv, 2022 (BASE)
4. Simplifying Multilingual News Clustering Through Projection From a Shared Space ... (BASE)
5. Towards Best Practices for Training Multilingual Dense Retrieval Models ... (BASE)
6. Addressing Issues of Cross-Linguality in Open-Retrieval Question Answering Systems For Emergent Domains ... (BASE)
7. C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval ... (BASE)
8. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval ... (BASE)
9. QALD-9-plus: A Multilingual Dataset for Question Answering over DBpedia and Wikidata Translated by Native Speakers ... (BASE)
10. MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset ... (BASE)
11. From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction ... (BASE)
12. Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations ... (BASE)
Abstract: Topic models have been the prominent tools for automatic topic discovery from text corpora. Despite their effectiveness, topic models suffer from several limitations, including the inability to model word-order information in documents, the difficulty of incorporating external linguistic knowledge, and the lack of accurate and efficient inference methods for approximating the intractable posterior. Recently, pretrained language models (PLMs) have brought astonishing performance improvements to a wide variety of tasks thanks to their superior representations of text. Interestingly, no standard approach has emerged for deploying PLMs for topic discovery as a better alternative to topic models. In this paper, we begin by analyzing the challenges of using PLM representations for topic discovery, and then propose a joint latent space learning and clustering framework built upon PLM embeddings. In the latent space, topic-word and document-topic distributions are jointly modeled so that the discovered ... (WWW 2022. Code: https://github.com/yumeng5/TopClus)
Keywords: Computation and Language (cs.CL); Computer and information sciences (FOS); Information Retrieval (cs.IR); Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2202.04582
DOI: https://dx.doi.org/10.48550/arxiv.2202.04582
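The abstract above describes discovering topics by clustering documents in the embedding space of a pretrained language model. As a rough illustration only — not the paper's TopClus method, which jointly learns the latent space — a minimal sketch can cluster fixed document embeddings with k-means. The embeddings below are synthetic stand-ins (random vectors around three artificial "topic" centers); in practice they would come from a PLM encoder.

```python
# Minimal sketch: topic discovery as clustering of document embeddings.
# NOTE: synthetic embeddings stand in for real PLM outputs; TopClus itself
# additionally learns a latent space where topics and documents are modeled jointly.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Three artificial "topic" centers in a 768-d space (BERT-sized, by analogy).
centers = rng.normal(size=(3, 768))

# 50 "documents" per topic: small perturbations around each center.
docs = np.vstack([c + 0.05 * rng.normal(size=(50, 768)) for c in centers])

# Cluster the embeddings; each cluster is read as one discovered topic.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(docs)
labels = km.labels_

# With well-separated centers, documents from the same center share a label.
print(sorted(set(labels)))
```

In a real pipeline, the nearest documents (or highest-scoring words) per cluster centroid would then be inspected to name each topic.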
13. Offensive Language Detection in Under-resourced Algerian Dialectal Arabic Language ... (BASE)
14. Shedding New Light on the Language of the Dark Web ... (BASE)
15. Query Expansion and Entity Weighting for Query Reformulation Retrieval in Voice Assistant Systems ... (BASE)
16. LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback ... (BASE)
17. Finding Inverse Document Frequency Information in BERT ... (BASE)
18. Improving Word Translation via Two-Stage Contrastive Learning ... (BASE)
19. nigam@COLIEE-22: Legal Case Retrieval and Entailment using Cascading of Lexical and Semantic-based models ... (BASE)
20. Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models ... — Chen, Tao; Zhang, Mingyang; Lu, Jing. arXiv, 2022 (BASE)


Result counts by source type: Catalogues: 0; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 626
© 2013 - 2024 Lin|gu|is|tik