1 | Multilingual and Cross-Lingual Intent Detection from Spoken Data ...
8 | On the relation between linguistic typology and (limitations of) multilingual language modeling
9 | Representation Learning beyond Semantic Similarity: Character-aware and Function-specific Approaches
Gerz, Daniela Susanne. University of Cambridge, Theoretical and Applied Linguistics, Lucy Cavendish College, 2020.
10 | Multidirectional Associative Optimization of Function-Specific Word Representations
Gerz, Daniela; Vulić, Ivan; Rei, Marek. Association for Computational Linguistics, 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020.
12 | Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations
13 | Show Some Love to Your n-grams: A Bit of Progress and Stronger n-gram Language Modeling Baselines ...

Abstract: In recent years neural language models (LMs) have set state-of-the-art performance on several benchmark datasets. While the reasons for their success and their computational demands are well documented, a comparison between neural models and more recent developments in n-gram models has been neglected. In this paper, we examine recent progress in the n-gram literature, running experiments on 50 languages covering all morphological language families. Experimental results illustrate that a simple extension of Modified Kneser-Ney outperforms an LSTM language model on 42 languages, while a word-level Bayesian n-gram LM outperforms the character-aware neural model on average across all languages, and outperforms its extension, which explicitly injects linguistic knowledge, on 8 languages. Further experiments on larger Europarl datasets for 3 languages indicate that neural architectures are able to outperform the computationally much cheaper n-gram models: n-gram training is up to 15,000 times quicker. Our experiments illustrate that ...

URL: https://www.repository.cam.ac.uk/handle/1810/292617
DOI: https://dx.doi.org/10.17863/cam.39778
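The abstract's headline result builds on Modified Kneser-Ney smoothing. As a rough illustration of the idea only (not the paper's implementation), here is a minimal interpolated Kneser-Ney bigram model in plain Python: a single absolute discount frees probability mass from observed bigrams, and the freed mass is redistributed via a continuation probability that asks how many distinct histories a word follows. The full modified variant uses separate discounts per count class; all names below are invented for this sketch.

```python
from collections import Counter, defaultdict

def train_kn_bigram(tokens, discount=0.75):
    """Interpolated Kneser-Ney for bigrams with one absolute discount.

    Returns a function prob(w, v) = P_KN(w | v); only handles
    histories v that were observed in training.
    """
    bigrams = Counter(zip(tokens, tokens[1:]))
    history_count = Counter(tokens[:-1])   # c(v): count of v as a history
    followers = defaultdict(set)           # distinct continuations of v
    histories = defaultdict(set)           # distinct histories of w
    for (v, w) in bigrams:
        followers[v].add(w)
        histories[w].add(v)
    n_bigram_types = len(bigrams)

    def prob(w, v):
        c_vw = bigrams.get((v, w), 0)
        c_v = history_count[v]
        # Discounted maximum-likelihood term.
        main = max(c_vw - discount, 0) / c_v
        # Back-off weight: the mass freed by discounting c(v, *).
        lam = discount * len(followers[v]) / c_v
        # Continuation probability: fraction of bigram types ending in w.
        p_cont = len(histories[w]) / n_bigram_types
        return main + lam * p_cont

    return prob

corpus = "the cat sat on the mat the cat ate".split()
p = train_kn_bigram(corpus)
# The distribution over the vocabulary sums to 1 for any seen history.
total = sum(p(w, "the") for w in set(corpus))
```

The continuation term is what distinguishes Kneser-Ney from plain absolute discounting: a word like "Francisco" may be frequent yet follow only one history, so it receives little back-off mass.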
17 | Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
18 | Scoring lexical entailment with a supervised directional similarity network
Rei, Marek; Gerz, Daniela; Vulić, I. Association for Computational Linguistics, 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Proceedings of the Conference (Long Papers), 2018.
19 | HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment ...