
Search in the Catalogues and Directories

Hits 1 – 6 of 6

1
Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization
Ponti, Edoardo; Vulić, I; Glavaš, G. - : Apollo - University of Cambridge Repository, 2020
2
Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization
Ponti, Edoardo; Vulić, I; Glavaš, G. - : Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 2020
3
Fully statistical neural belief tracking
Mrkšić, N; Vulić, I. - : Association for Computational Linguistics, 2018. : ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 2018
4
Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules
Vulić, Ivan; Mrkšić, N; Reichart, R. - : Apollo - University of Cambridge Repository, 2017
5
Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules
Vulić, Ivan; Mrkšić, N; Reichart, R. - : Association for Computational Linguistics, 2017. : ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 2017
6
Multi-domain neural network language generation for spoken dialogue systems
Wen, TH; Gašić, M; Mrkšić, N; Rojas-Barahona, LM; Su, PH; Vandyke, D; Young, Steve. - : Association for Computational Linguistics, 2016. : 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference, 2016
Abstract: Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine-tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.
Funding: Toshiba Research Europe Ltd.
Note: This is the accepted manuscript. It is currently embargoed pending publication.
URL: https://www.repository.cam.ac.uk/handle/1810/256144
https://doi.org/10.17863/CAM.84

© 2013 - 2024 Lin|gu|is|tik