1. Improving Pre-trained Language Models with Syntactic Dependency Prediction Task for Chinese Semantic Error Recognition ...
2. ExpMRC: explainability evaluation for machine reading comprehension. In: Heliyon (2022)
3. Multilingual multi-aspect explainability analyses on machine reading comprehension models. In: iScience (2022)
4. Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models ...
5. Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training ...
6. Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL ...
7. GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling ...
8. A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples ...
10. Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling ...
11. Neural Stylistic Response Generation with Disentangled Latent Variables ...
13. Language learners' enjoyment and emotion regulation in online collaborative learning
14. Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural Network ...
15. TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching ...
16. N-LTP: An Open-source Neural Language Technology Platform for Chinese ...
17. Cross-Lingual Machine Reading Comprehension ...

Abstract: Though the community has made great progress on the Machine Reading Comprehension (MRC) task, most previous work addresses English-based MRC problems, and there have been few efforts on other languages, mainly due to the lack of large-scale training data. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task for languages other than English. First, we present several back-translation approaches for the CLMRC task, which are straightforward to adopt. However, accurately aligning the answer into another language is difficult and could introduce additional noise. In this context, we propose a novel model called Dual BERT, which takes advantage of the large-scale training data provided by a rich-resource language (such as English), learns the semantic relations between the passage and question in a bilingual context, and then utilizes the learned knowledge to improve the reading comprehension performance of the low-resource language. We conduct experiments on two Chinese machine reading ... (10 pages, accepted as a conference paper at EMNLP-IJCNLP 2019, long paper)

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)

URL: https://arxiv.org/abs/1909.00361 ; https://dx.doi.org/10.48550/arxiv.1909.00361
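The "alignment noise" the abstract mentions can be illustrated with a minimal sketch: after back-translating a passage, the gold answer span must be re-located in the translated text, and a naive substring search fails whenever the translator renders the answer differently in context. The `translate` function below is a hypothetical stand-in (a toy word map), not a real MT system, and `align_answer` is only an illustration of the naive strategy, not the paper's method.

```python
def translate(text, table):
    """Toy word-level 'translator': a stand-in for a real MT system."""
    return " ".join(table.get(w, w) for w in text.split())

def align_answer(passage_src, answer_src, table):
    """Translate passage and answer independently, then try to re-locate
    the answer span by substring search.
    Returns (start, end) character offsets, or None when alignment fails."""
    passage_tgt = translate(passage_src, table)
    answer_tgt = translate(answer_src, table)
    start = passage_tgt.find(answer_tgt)
    if start == -1:
        return None  # alignment noise: span not recoverable verbatim
    return start, start + len(answer_tgt)

# A consistent word-for-word mapping keeps the span recoverable:
table = {"the": "le", "cat": "chat", "black": "noir"}
print(align_answer("the black cat", "black cat", table))  # → (3, 12)
# A real translator may reorder words or change agreement, in which
# case the substring search above returns None and the example is lost.
```

This failure mode is what motivates learning the alignment jointly (as Dual BERT does) rather than relying on post-hoc string matching.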
20. Towards Better UD Parsing: Deep Contextualized Word Embeddings, Ensemble, and Treebank Concatenation ...