Search in the Catalogues and Directories

Hits 1 – 20 of 131

1. Integrating Vectorized Lexical Constraints for Neural Machine Translation ...
Wang, Shuo; Tan, Zhixing; Liu, Yang. arXiv, 2022. (BASE)
2. Contextual Semantic-Guided Entity-Centric GCN for Relation Extraction
In: Mathematics, Volume 10, Issue 8, Pages: 1344 (2022). (BASE)
3. Virtual Reality-Integrated Immersion-Based Teaching to English Language Learning Outcome
In: Front Psychol (2022). (BASE)
4. Unframing and reframing shanshui
Liu, Yang. 2022. (BASE)
5. Alternated Training with Synthetic and Authentic Data for Neural Machine Translation ...
Jiao, Rui; Yang, Zonghan; Sun, Maosong. arXiv, 2021. (BASE)
6. CPM-2: Large-scale Cost-effective Pre-trained Language Models ...
Zhang, Zhengyan; Gu, Yuxian; Han, Xu. arXiv, 2021. (BASE)
7. VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator ... (BASE)
8. Assessing Multilingual Fairness in Pre-trained Multimodal Representations ...
Wang, Jialu; Liu, Yang; Wang, Xin Eric. arXiv, 2021. (BASE)
9. DialogSum: A Real-Life Scenario Dialogue Summarization Dataset ... (BASE)
10. Transfer Learning for Sequence Generation: from Single-source to Multi-source ... (BASE)
11. Mask-Align: Self-Supervised Neural Word Alignment ...
Abstract: Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks. Current unsupervised neural alignment methods focus on inducing alignments from neural machine translation models, which does not leverage the full context in the target sequence. In this paper, we propose Mask-Align, a self-supervised word alignment model that takes advantage of the full context on the target side. Our model masks out each target token and predicts it conditioned on both the source and the remaining target tokens. This two-step process is based on the assumption that the source token contributing most to recovering the masked target token should be aligned. We also introduce an attention variant called leaky attention, which alleviates the problem of unexpectedly high cross-attention weights on special tokens such as periods. Experiments on four language ... (See the sketch after this hit list.)
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URLs: https://www.aclanthology.org/2021.acl-long.369
https://dx.doi.org/10.48448/b7ad-g040
https://underline.io/lecture/25688-mask-align-self-supervised-neural-word-alignment
(BASE)
12. Segment, Mask, and Predict: Augmenting Chinese Word Segmentation with Self-Supervision ... (BASE)
13. Learning to Selectively Learn for Weakly-supervised Paraphrase Generation ... (BASE)
14. SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection ...
Jiang, Aiqi; Yang, Xiaohan; Liu, Yang. arXiv, 2021. (BASE)
15. Analyzing the Limits of Self-Supervision in Handling Bias in Language ... (BASE)
16. Statistically significant detection of semantic shifts using contextual word embeddings ... (BASE)
17. SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection ...
Jiang, Aiqi; Yang, Xiaohan; Liu, Yang. Zenodo, 2021. (BASE)
18. Statistically Significant Detection of Semantic Shifts using Contextual Word Embeddings ... (BASE)
19. Leveraging Word-Formation Knowledge for Chinese Word Sense Disambiguation ... (BASE)
20. SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection ...
Jiang, Aiqi; Yang, Xiaohan; Liu, Yang. Zenodo, 2021. (BASE)
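
The abstract in hit 11 describes a concrete alignment procedure: mask each target token, predict it from the source sentence plus the remaining target tokens, and link it to the source token that contributes most to the recovery. Below is a minimal Python sketch of that loop, not the authors' implementation: cross_attention is a hypothetical stand-in for a trained model's source-target attention, and the paper's leaky-attention variant is omitted.

import numpy as np

def align_by_masked_prediction(src_tokens, tgt_tokens, cross_attention):
    # For each target position j: mask it, let the model predict it from
    # the source and the remaining target tokens, then align j to the
    # source token with the highest cross-attention weight.
    links = []
    for j in range(len(tgt_tokens)):
        masked = tgt_tokens[:j] + ["[MASK]"] + tgt_tokens[j + 1:]
        weights = cross_attention(src_tokens, masked, j)  # one weight per source token
        links.append((int(np.argmax(weights)), j))        # (source index, target index)
    return links

# Toy usage with a dummy attention function (random weights), purely illustrative:
rng = np.random.default_rng(0)
dummy = lambda src, tgt, j: rng.random(len(src))
print(align_by_masked_prediction(["wir", "sehen"], ["we", "see"], dummy))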


Hits by source:
Catalogues: 2, 0, 15, 0, 3, 0, 0
Bibliographies: 11, 0, 0, 0, 0, 0, 0, 0, 0
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 110, 0, 0, 0, 0