1. Universal Conditional Masked Language Pre-training for Neural Machine Translation
   Source: BASE
2. Conditional Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation
3. Towards Contextual Spelling Correction for Customization of End-to-end Speech Recognition Systems
4. The Past Mistake is the Future Wisdom: Error-driven Contrastive Probability Optimization for Chinese Spell Checking
5. Hierarchical Softmax for End-to-End Low-resource Multilingual Speech Recognition
6. Sememe Prediction for BabelNet Synsets using Multilingual and Multimodal Information
7. USTC-NELSLIP at SemEval-2022 Task 11: Gazetteer-Adapted Integration Network for Multilingual Complex Named Entity Recognition
   Abstract: This paper describes the system developed by the USTC-NELSLIP team for SemEval-2022 Task 11, Multilingual Complex Named Entity Recognition (MultiCoNER). We propose a gazetteer-adapted integration network (GAIN) to improve the performance of language models for recognizing complex named entities. The method first adapts the representations of gazetteer networks to those of language models by minimizing the KL divergence between them. After adaptation, these two networks are then integrated for backend supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on three tracks (Chinese, Code-mixed and Bangla) and 2nd on the ...
   Note: Winner system (USTC-NELSLIP) of the SemEval 2022 MultiCoNER shared task on 3 tracks (Chinese, Bangla, Code-mixed)
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2203.03216
   DOI: https://dx.doi.org/10.48550/arxiv.2203.03216
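The adaptation step described in the abstract above — aligning a gazetteer network's representations with a language model's by minimizing the KL divergence between them — can be illustrated with a toy sketch. This is not the authors' implementation; the array shapes, the use of a softmax over feature dimensions, and all variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Per-row KL(p || q) between two batches of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Hypothetical setup: 4 token positions, 8-dim representations.
rng = np.random.default_rng(0)
lm_repr = rng.normal(size=(4, 8))   # frozen language-model representations (target)
gaz_repr = rng.normal(size=(4, 8))  # gazetteer-network representations (to adapt)

p = softmax(lm_repr)   # target distributions derived from the LM
q = softmax(gaz_repr)  # gazetteer distributions to be pulled toward p

# The adaptation objective: average KL divergence across positions.
# In training, this loss would be minimized w.r.t. the gazetteer network's weights.
adaptation_loss = float(kl_divergence(p, q).mean())
```

After this loss is driven down, the two (now-aligned) networks would be combined for supervised NER training, per the abstract.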
8. Delving Deeper into Cross-lingual Visual Question Answering
9. Zero-shot Cross-lingual Conversational Semantic Role Labeling
10. Multi-Level Contrastive Learning for Cross-Lingual Alignment
11. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
12. Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques
13. Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation
14. Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization
15. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
17. Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
18. Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
19. Separate What You Describe: Language-Queried Audio Source Separation
20. Learning Functional Distributional Semantics with Visual Data