
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
Learning with joint inference and latent linguistic structure in graphical models ...
Naradowsky, Jason. Macquarie University, 2022
BASE
2
Learning with joint inference and latent linguistic structure in graphical models ...
Naradowsky, Jason. Macquarie University, 2022
BASE
3
Rethinking Offensive Text Detection as a Multi-Hop Reasoning Problem ...
BASE
4
Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction ...
Gerz, Daniela; Vulić, Ivan; Ponti, Edoardo. Apollo - University of Cambridge Repository, 2018
BASE
5
Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
Gerz, Daniela; Vulić, Ivan; Ponti, Edoardo; Naradowsky, Jason; Reichart, Roi; Korhonen, Anna-Leena. MIT Press - Journals, 2018. Transactions of the Association for Computational Linguistics, 2018
Abstract: Neural architectures are prominent in the construction of language models (LMs). However, word-level prediction is typically agnostic of subword-level information (characters and character sequences) and operates over a closed vocabulary, consisting of a limited word set. Indeed, while subword-aware models boost performance across a variety of NLP tasks, previous work did not evaluate the ability of these models to assist next-word prediction in language modeling tasks. Such subword-level informed models should be particularly effective for morphologically-rich languages (MRLs) that exhibit high type-to-token ratios. In this work, we present a large-scale LM study on 50 typologically diverse languages covering a wide variety of morphological systems, and offer new LM benchmarks to the community, while considering subword-level information. The main technical contribution of our work is a novel method for injecting subword-level information into semantic word vectors, integrated into the neural language modeling training, to facilitate word-level prediction. We conduct experiments in the LM setting where the number of infrequent words is large, and demonstrate strong perplexity gains across our 50 languages, especially for morphologically-rich languages. Our code and data sets are publicly available. This work is supported by the ERC Consolidator Grant LEXICAL (648909).
URL: https://www.repository.cam.ac.uk/handle/1810/279936
https://doi.org/10.17863/CAM.27304
BASE
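
The core idea in the abstract above, injecting character-level (subword) information into word vectors inside a word-level language model, can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the class name, dimensions, and the particular composition (a character BiLSTM concatenated with word embeddings) are hypothetical stand-ins, not the authors' actual method or code.

import torch
import torch.nn as nn

class CharAwareWordLM(nn.Module):
    """Hypothetical sketch: character-informed word vectors feeding a word-level LM."""
    def __init__(self, n_words, n_chars, word_dim=128, char_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)   # closed-vocabulary word vectors
        self.char_emb = nn.Embedding(n_chars, char_dim)   # character vectors
        # BiLSTM summarizes each word's character sequence into a word_dim-sized vector
        self.char_rnn = nn.LSTM(char_dim, word_dim // 2,
                                batch_first=True, bidirectional=True)
        # Word-level LM runs over the concatenation of word and character-derived vectors
        self.lm_rnn = nn.LSTM(2 * word_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_words)             # next-word prediction over the vocabulary

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, t, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * t, c))
        _, (h, _) = self.char_rnn(chars)                  # final states of both directions
        char_vec = torch.cat([h[0], h[1]], dim=-1).view(b, t, -1)
        word_vec = torch.cat([self.word_emb(word_ids), char_vec], dim=-1)
        states, _ = self.lm_rnn(word_vec)
        return self.out(states)                           # logits for next-word prediction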
6
A Structured Variational Autoencoder for Contextual Morphological Inflection
Naradowsky, Jason; Cotterell, Ryan; Mielke, Sebastian J. Association for Computational Linguistics, 2018. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018
BASE
7
Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources ...
BASE
8
Learning with joint inference and latent linguistic structure in graphical models
Naradowsky, Jason. Sydney, Australia: Macquarie University, 2015
BASE
9
Grammarless parsing for joint inference
Naradowsky, Jason; Vieira, Tim; Smith, David A. Mumbai, India: The COLING 2012 Organizing Committee, 2012
BASE
10
Polylingual Topic Models
In: Hanna M. Wallach (2009)
BASE
11
Polylingual Topic Models
In: Andrew McCallum (2009)
BASE

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 11