1. Understanding the effects of negative (and positive) pointwise mutual information on word vectors
3. Assessing idiomaticity representations in vector models with a noun compound dataset labeled at type and token levels
5. AStitchInLanguageModels: dataset and methods for the exploration of idiomaticity in pre-trained language models
6. CogNLP-Sheffield at CMCL 2021 Shared Task: Blending cognitively inspired features with transformer-based language models for predicting eye tracking patterns
7. Investigating language impact in bilingual approaches for computational language documentation
8. Unsupervised compositionality prediction of nominal compounds
9. A dual-attention hierarchical recurrent neural network for dialogue act classification
10. When the whole is greater than the sum of its parts: multiword expressions and idiomaticity
11. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
13. Empirical evaluation of sequence-to-sequence models for word discovery in low-resource settings
15. Similarity Measures for the Detection of Clinical Conditions with Verbal Fluency Tasks
16. A corpus study of verbal multiword expressions in Brazilian Portuguese
17. Unwritten languages demand attention too! Word discovery with encoder-decoder models
18. Restricted recurrent neural tensor networks: Exploiting word frequency and compositionality
Abstract: Increasing the capacity of recurrent neural networks (RNNs) usually involves augmenting the size of the hidden layer, with a significant increase in computational cost. Recurrent neural tensor networks (RNTNs) increase capacity by using distinct hidden layer weights for each word, but at a greater cost in memory usage. In this paper, we introduce restricted recurrent neural tensor networks (r-RNTNs), which reserve distinct hidden layer weights for frequent vocabulary words while sharing a single set of weights for infrequent words. Perplexity evaluations show that for fixed hidden layer sizes, r-RNTNs improve language model performance over RNNs while using only a small fraction of the parameters of unrestricted RNTNs. These results hold for r-RNTNs using Gated Recurrent Units and Long Short-Term Memory.
URL: http://eprints.whiterose.ac.uk/153558/
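
To make the mechanism in this abstract concrete, here is a minimal PyTorch sketch of the r-RNTN recurrence: each frequent word indexes its own hidden-to-hidden matrix, while every infrequent word falls back to a single shared one. All names (RestrictedRNTNCell, num_frequent) and the plain tanh update are illustrative assumptions, not the paper's implementation; the paper also reports GRU and LSTM variants, which this sketch does not cover.

import torch
import torch.nn as nn

class RestrictedRNTNCell(nn.Module):
    """Sketch of an r-RNTN cell: frequent words get their own recurrent
    weight matrix; all infrequent words share one matrix. Names and
    hyperparameters are illustrative, not taken from the paper."""

    def __init__(self, vocab_size: int, hidden_size: int, num_frequent: int):
        super().__init__()
        self.num_frequent = num_frequent
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # num_frequent word-specific recurrent matrices, plus one shared
        # matrix (index num_frequent) used by every infrequent word.
        self.W_hh = nn.Parameter(
            0.01 * torch.randn(num_frequent + 1, hidden_size, hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, word_ids: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # word_ids: (batch,) token ids, assumed ordered by corpus frequency
        # so that ids below num_frequent are the frequent words.
        idx = word_ids.clamp(max=self.num_frequent)  # rare words -> shared slot
        W = self.W_hh[idx]                           # (batch, H, H)
        x = self.embed(word_ids)                     # (batch, H)
        # Plain tanh RNN update with a word-dependent recurrent matrix.
        return torch.tanh(x + torch.bmm(W, h.unsqueeze(-1)).squeeze(-1) + self.bias)

# Toy usage: 10k-word vocabulary, 128-dim hidden state, 100 "frequent" words.
cell = RestrictedRNTNCell(vocab_size=10_000, hidden_size=128, num_frequent=100)
h = torch.zeros(4, 128)
for word_ids in torch.randint(0, 10_000, (3, 4)):  # 3 time steps, batch of 4
    h = cell(word_ids, h)

This keeps the parameter count at roughly (num_frequent + 1) matrices instead of one per vocabulary word, which is the memory saving the abstract describes.
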
19. UFRGS&LIF at SemEval-2016 task 10: Rule-based MWE identification and predominant-supersense tagging
20. How naked is the naked truth? A multilingual lexicon of nominal compound compositionality