
Search in the Catalogues and Directories

Page: 1 2 3 4
Hits 1 – 20 of 74

1
Challenges and responses: A Complex Dynamic Systems approach to exploring language teacher agency in a blended classroom
Qi, GY; Wang, Y. - : The JALT CALL SIG, 2022
BASE
2
Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding
Cao, R; Wang, Y; Liang, Y. - : Association for Computational Linguistics, 2022
BASE
3
Eliciting positive emotion through strategic responses to COVID-19 crisis: evidence from the tourism sector
Li, S.; Wang, Y.; Filieri, R. - : Elsevier, 2022
BASE
4
Exploring the paths to big data analytics implementation success in banking and financial service: an integrated approach
Hajiheydari, N.; Delgosha, M.S.; Wang, Y. - : Emerald, 2021
BASE
5
Voice for oneself: Self-interested voice and its antecedents and consequences
Duan, J; Xu, Y; Wang, X. - : Wiley, 2021
BASE
6
The trans-ancestral genomic architecture of glycemic traits
Chen, J; Spracklen, CN; Marenne, G. - : Nature Research, 2021
BASE
7
The trans-ancestral genomic architecture of glycemic traits.
BASE
8
Into the Real World: Autonomous and Integrated Chinese Language Learning Through a 3D Immersive Experience
Wang, Y; Grist, M; Grant, S. - : Springer, 2021
BASE
9
The trans-ancestral genomic architecture of glycemic traits.
In: Nature genetics, vol. 53, no. 6, pp. 840-860 (2021)
BASE
10
Finding my voice: an interdisciplinary and multi-methodological investigation into the relationship between performers’ speech and musical expression
Wang, Y. - 2020
BASE
11
Non-native children's automatic speech recognition: The INTERSPEECH 2020 shared task ALTA systems ...
Knill, Katherine; Wang, L; Wang, Y. - : Apollo - University of Cambridge Repository, 2020
BASE
12
Non-native children's automatic speech recognition: The INTERSPEECH 2020 shared task ALTA systems
Knill, Katherine; Wang, L; Wang, Y. - : ISCA, 2020. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2020
BASE
13
Study of central exclusive [Image: see text] production in proton-proton collisions at [Formula: see text] and 13 TeV
In: Eur Phys J C Part Fields (2020)
BASE
14
Enhancing the learning of multi-level undergraduate Chinese language with a 3D immersive experience - an exploratory study
Wang, Y; Grant, S; Grist, M. - : Routledge, 2020
BASE
15
Internal Meanings of the Language in Russian-Chinese Translation ... : Передача внутрилингвистических значений в русско-китайском переводе ... [Russian parallel title: "Conveying intralinguistic meanings in Russian-Chinese translation"]
Wang, Y. - : ООО «Книжный дом» ("Book House" Ltd.), 2019
BASE
16
Exploiting future word contexts in neural network language models for speech recognition
Chen, X.; Liu, X.; Wang, Y.; Ragni, A.; Wong, J.H.M.; Gales, M.J.F. - : Institute of Electrical and Electronics Engineers (IEEE), 2019
Abstract: Language modeling is a crucial component in a wide range of applications including speech recognition. Language models (LMs) are usually constructed by splitting a sentence into words and computing the probability of a word based on its word history. This sentence probability calculation, making use of conditional probability distributions, assumes that there is little impact from approximations used in the LMs, including the word history representations and finite training data. This motivates examining models that make use of additional information from the sentence. In this paper, future word information, in addition to the history, is used to predict the probability of the current word. For recurrent neural network LMs (RNNLMs), this information can be encapsulated in a bi-directional model. However, if used directly, this form of model is computationally expensive when trained on large quantities of data, and can be problematic when used with word lattices. This paper proposes a novel neural network language model structure, the succeeding-word RNNLM, su-RNNLM, to address these issues. Instead of using a recurrent unit to capture the complete future word contexts, a feedforward unit is used to model a fixed finite number of succeeding words. This is more efficient in training than bi-directional models and can be applied to lattice rescoring. The generated lattices can be used for downstream applications, such as confusion network decoding and keyword search. Experimental results on speech recognition and keyword spotting tasks illustrate the empirical usefulness of future word information, and the flexibility of the proposed model to represent this information.
URL: http://eprints.whiterose.ac.uk/150520/1/article.pdf
http://eprints.whiterose.ac.uk/150520/
BASE
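The abstract above describes the key idea of the su-RNNLM: keep a recurrent unit for the word history, but replace the backward recurrence of a bi-directional model with a feedforward unit over a fixed number K of succeeding words, so lattice rescoring stays tractable. The following is a minimal NumPy sketch of that structure only; every parameter name, dimension, and weight value here is illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H, K = 20, 8, 16, 2  # vocab size, embedding dim, hidden dim, future window

# Hypothetical randomly initialised parameters (for illustration only)
Emb = rng.normal(size=(V, E))            # word embeddings
W_x = rng.normal(size=(E, H)) * 0.1      # input-to-hidden (history)
W_h = rng.normal(size=(H, H)) * 0.1      # recurrent weights (history)
W_f = rng.normal(size=(K * E, H)) * 0.1  # feedforward weights over K future words
W_o = rng.normal(size=(2 * H, V)) * 0.1  # output projection


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def su_rnnlm_step(history, future):
    """P(w_t | history, K succeeding words): a recurrent encoding of the
    past combined with a feedforward encoding of exactly K future words."""
    assert len(future) == K
    h = np.zeros(H)
    for w in history:                      # recurrent unit over the history
        h = np.tanh(Emb[w] @ W_x + h @ W_h)
    # fixed-size future context: concatenate K embeddings, one feedforward layer
    f = np.tanh(np.concatenate([Emb[w] for w in future]) @ W_f)
    return softmax(np.concatenate([h, f]) @ W_o)


p = su_rnnlm_step(history=[3, 7, 1], future=[5, 9])  # distribution over V words
```

Because the future context has fixed size K, each prediction needs no backward pass over the whole sentence, which is what makes the model cheaper to train than a bi-directional RNNLM and applicable to lattice rescoring.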
17
Impact of ASR performance on free speaking language assessment ...
Knill, Katherine; Gales, Mark; Kyriakopoulos, Konstantinos. - : Apollo - University of Cambridge Repository, 2018
BASE
18
Phonetic and graphemic systems for multi-genre broadcast transcription
Wang, Y.; Chen, X.; Gales, M.J.F. - : IEEE, 2018
BASE
19
Impact of ASR performance on free speaking language assessment
Knill, K.; Gales, M.; Kyriakopoulos, K. - : International Speech Communication Association (ISCA), 2018
BASE
20
Future word contexts in neural network language models
Chen, X.; Liu, X.; Ragni, A.. - : IEEE, 2018
BASE


Hits by source type:
Catalogues: 26
Bibliographies: 1
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 47
© 2013 - 2024 Lin|gu|is|tik