
Search in the Catalogues and Directories

Page: 1 2 3 4
Hits 1 – 20 of 66

1
Subword Mapping and Anchoring across Languages ...
4
Integrating Weakly Supervised Word Sense Disambiguation into Neural Machine Translation ...
7
Machine Translation of Low-Resource Spoken Dialects: Strategies for Normalizing Swiss German ...
8
DiscoMT 2016 Shared Task on Cross-lingual Pronoun Prediction
Guillou, Liane; Hardmeier, Christian; Nakov, Preslav. Uppsala University, 2017
9
Consistent Translation of Repeated Nouns using Syntactic and Semantic Cues ...
10
Multilingual Hierarchical Attention Networks for Document Classification ...
11
The Summa Platform Prototype ...
15
Machine Translation of Spanish Personal and Possessive Pronouns Using Anaphora Probabilities ...
Luong, Ngoc Quang; Popescu-Belis, Andrei; Rios Gonzales, Annette. Association for Computational Linguistics, 2017
16
Self-Attentive Residual Decoder for Neural Machine Translation ...
Abstract: Neural sequence-to-sequence networks with attention have achieved remarkable performance for machine translation. One reason for their effectiveness is their ability to capture relevant source-side contextual information at each time-step prediction through an attention mechanism. However, the target-side context is based solely on the sequence model, which in practice is prone to a recency bias and struggles to capture non-sequential dependencies among words. To address this limitation, we propose a target-side-attentive residual recurrent network for decoding, where attention over previous words contributes directly to the prediction of the next word. The residual learning facilitates the flow of information from the distant past and can emphasize any of the previously translated words, thus gaining access to a wider context. The proposed model outperforms a neural MT baseline as well as a memory and self-attention network on three language pairs. The analysis of the ...
Comment: Accepted at NAACL-HLT 2018; in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1709.04849
https://arxiv.org/abs/1709.04849
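The abstract above describes attending over previously generated target-side states and adding the attention summary as a residual to the current decoder state. A minimal sketch of that single decoding step, in plain Python and heavily simplified (all function names and the dot-product scoring are illustrative assumptions, not the paper's exact formulation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attentive_residual_step(prev_states, query):
    """One decoding step (hypothetical simplification):
    attend over all previously generated target-side states,
    then add the attention summary to the current state as a
    residual, so distant past words can influence the next
    prediction directly rather than only through recurrence."""
    weights = softmax([dot(h, query) for h in prev_states])
    context = [sum(w * h[i] for w, h in zip(weights, prev_states))
               for i in range(len(query))]
    # Residual connection: query + attended target-side context.
    return [q + c for q, c in zip(query, context)]
```

With no informative history (all-zero previous states) the residual contributes nothing and the state passes through unchanged; with a single matching previous state, that state is re-emphasized in full.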
17
Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering ...
19
Multilingual Hierarchical Attention Networks for Document Classification
In: http://infoscience.epfl.ch/record/231134 (2017)
20
Cross-lingual Transfer for News Article Labeling: Benchmarking Statistical and Neural Models
In: http://infoscience.epfl.ch/record/231130 (2017)


© 2013 – 2024 Lin|gu|is|tik