
Search in the Catalogues and Directories

Hits 1 – 17 of 17

1
A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations ...
Yang, Ziyi; Yang, Yinfei; Cer, Daniel. - : arXiv, 2021
BASE
2
Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve ...
NAACL 2021; Agarwal, Oshin; Nenkova, Ani. - : Underline Science Inc., 2021
BASE
3
Neural Retrieval for Question Answering with Cross-Attention Supervised Data Augmentation ...
BASE
4
Universal Sentence Representation Learning with Conditional Masked Language Model ...
BASE
5
A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations ...
BASE
6
LAReQA: Language-agnostic answer retrieval from a multilingual pool ...
BASE
7
Language-agnostic BERT Sentence Embedding ...
BASE
8
Universal Sentence Representation Learning with Conditional Masked Language Model ...
Yang, Ziyi; Yang, Yinfei; Cer, Daniel. - : arXiv, 2020
BASE
9
Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve ...
BASE
10
End-to-end Semantics-based Summary Quality Assessment for Single-document Summarization ...
Bao, Forrest Sheng; Li, Hebi; Luo, Ge. - : arXiv, 2020
BASE
11
Neural Passage Retrieval with Improved Negative Contrast ...
BASE
12
PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification ...
Yang, Yinfei; Zhang, Yuan; Tar, Chris. - : arXiv, 2019
BASE
13
Multilingual Universal Sentence Encoder for Semantic Retrieval ...
Yang, Yinfei; Cer, Daniel; Ahmad, Amin. - : arXiv, 2019
BASE
14
Improving Multilingual Sentence Embedding using Bi-directional Dual Encoder with Additive Margin Softmax ...
Abstract: In this paper, we present an approach to learn multilingual sentence embeddings using a bi-directional dual-encoder with additive margin softmax. The embeddings are able to achieve state-of-the-art results on the United Nations (UN) parallel corpus retrieval task. In all the languages tested, the system achieves P@1 of 86% or higher. We use pairs retrieved by our approach to train NMT models that achieve similar performance to models trained on gold pairs. We explore simple document-level embeddings constructed by averaging our sentence embeddings. On the UN document-level retrieval task, document embeddings achieve around 97% on P@1 for all experimented language pairs. Lastly, we evaluate the proposed model on the BUCC mining task. The learned embeddings with raw cosine similarity scores achieve competitive results compared to current state-of-the-art models, and with a second-stage scorer we achieve a new state-of-the-art level on this task. (Accepted by IJCAI'19, the International Joint Conference on Artificial Intelligence.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1902.08564
https://arxiv.org/abs/1902.08564
BASE
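The additive margin softmax objective described in the abstract of hit 14 can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the function name, the margin of 0.3, and the scale of 10.0 are illustrative assumptions. It only shows the core idea of subtracting an additive margin from the matching pair's cosine similarity before an in-batch softmax, applied in both retrieval directions (source-to-target and target-to-source).

# Minimal sketch (assumed, not the paper's code) of bidirectional additive
# margin softmax over cosine similarities with in-batch negatives.
import numpy as np

def additive_margin_loss(src_emb, tgt_emb, margin=0.3, scale=10.0):
    """src_emb, tgt_emb: (batch, dim) L2-normalised sentence embeddings.
    Returns the summed bidirectional loss; margin and scale are illustrative."""
    # scaled cosine similarity between every source/target pair in the batch
    sim = scale * (src_emb @ tgt_emb.T)              # (batch, batch)
    # subtract the additive margin from the matching (diagonal) scores only
    batch = sim.shape[0]
    sim_m = sim - scale * margin * np.eye(batch)
    # softmax cross-entropy with the diagonal entry as the positive class
    def ce(logits):
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))
    # both retrieval directions: src -> tgt and tgt -> src
    return ce(sim_m) + ce(sim_m.T)

# toy usage: random unit vectors standing in for sentence embeddings
rng = np.random.default_rng(0)
e = rng.normal(size=(4, 8)); f = e + 0.1 * rng.normal(size=(4, 8))
e /= np.linalg.norm(e, axis=1, keepdims=True)
f /= np.linalg.norm(f, axis=1, keepdims=True)
print(additive_margin_loss(e, f))

Because only the positive pair is penalised by the margin, the model must separate translations from in-batch negatives by at least that margin, which is what makes raw cosine scores usable for the retrieval and mining tasks mentioned in the abstract.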
15
Effective Parallel Corpus Mining using Bilingual Sentence Embeddings ...
Guo, Mandy; Shen, Qinlan; Yang, Yinfei. - : arXiv, 2018
BASE
16
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model ...
BASE
17
Combining Lexical and Syntactic Features for Detecting Content-dense Texts in News ...
Yang, Yinfei; Nenkova, Ani. - : arXiv, 2017
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 17 (BASE)