2. Masked language models directly encode linguistic uncertainty. In: Proceedings of the Society for Computation in Linguistics (2022).

3. Will it Unblend? In: Proceedings of the Society for Computation in Linguistics (2021).

Abstract: Natural language processing systems often struggle with out-of-vocabulary (OOV) terms, which do not appear in training data. Blends, such as *innoventor*, are one particularly challenging class of OOV, as they are formed by fusing together two or more bases that relate to the intended meaning in unpredictable manners and degrees. In this work, we run experiments on a novel dataset of English OOV blends to quantify the difficulty of interpreting the meanings of blends by large-scale contextual language models such as BERT. We first show that BERT's processing of these blends does not fully access the component meanings, leaving their contextual representations semantically impoverished. We find this is mostly due to the loss of characters resulting from blend formation. Then, we assess how easily different models can recognize the structure and recover the origin of blends, and find that context-aware embedding systems outperform character-level and context-free embeddings, although their results are still far from satisfactory.

Keywords: blends; compounds; Computational Linguistics; contextual-models; oov; out-of-vocabulary; portmanteaux; segmentation

URL: https://scholarworks.umass.edu/scil/vol4/iss1/62
URL: https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1189&context=scil

5. UniMorph 3.0: Universal Morphology. In: Proceedings of the 12th Language Resources and Evaluation Conference (2020).

7. Compositionality in distributionally acquired phonetic category representations ...

8. Masking auditory feedback does not eliminate repetition reduction ...

10. Downstream Behavioral and Electrophysiological Consequences of Word Prediction on Recognition Memory.

11. The world is not enough to explain lengthening of phonological competitors ...

12. Self-priming in production: evidence for a hybrid model of syntactic priming ...

13. Remembering you read “doctoral dissertation”: Phrase frequency effects in recall and recognition memory.

14. Knowing a thing is "a thing": The use of acoustic features in multiword expression extraction.

15. “hotdog”, not “hot” “dog”: The phonological planning of compound words.

16. Hotdog not hot dog: The phonological planning of compound words.