Catalogue search
Hits 1 – 3 of 3
1. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
Liu, Qianchu; Liu, Fangyu; Collier, Nigel. - arXiv, 2021
BASE
2. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
Liu, Qianchu; Liu, Fangyu; Collier, Nigel. - Apollo - University of Cambridge Repository, 2021
BASE
3. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
The 2021 Conference on Empirical Methods in Natural Language Processing 2021; Collier, Nigel; Korhonen, Anna; Liu, Fangyu; Liu, Qianchu; Vulić, Ivan. - Underline Science Inc., 2021
Abstract:
Recent work indicated that pretrained language models (PLMs) such as BERT and RoBERTa can be transformed into effective sentence and word encoders even via simple self-supervised techniques. Inspired by this line of work, in this paper we propose a fully unsupervised approach to improving word-in-context (WiC) representations in PLMs, achieved via a simple and efficient WiC-targeted fine-tuning procedure: MirrorWiC. The proposed method leverages only raw texts sampled from Wikipedia, assuming no sense-annotated data, and learns context-aware word representations within a standard contrastive learning setup. We experiment with a series of standard and comprehensive WiC benchmarks across multiple languages. Our proposed fully unsupervised MirrorWiC models obtain substantial gains over off-the-shelf PLMs across all monolingual, multilingual and cross-lingual setups. Moreover, on some standard WiC benchmarks, MirrorWiC is even on-par with supervised models fine-tuned with in-task data and sense labels. ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL:
https://underline.io/lecture/39862-mirrorwic-on-eliciting-word-in-context-representations-from-pretrained-language-models
https://dx.doi.org/10.48448/hs20-qq06
BASE
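The abstract above describes a WiC-targeted contrastive fine-tuning procedure that relies only on raw text. Below is a minimal sketch of that general kind of setup (not the authors' released implementation): two dropout-noised encodings of the same word in context form a positive pair for an InfoNCE loss, with the other words in the batch acting as negatives. The model name, span handling, temperature and toy sentences are illustrative assumptions; the paper's actual augmentation and training details may differ.

```python
# Minimal sketch of unsupervised word-in-context (WiC) contrastive fine-tuning,
# in the spirit of the abstract above (not the MirrorWiC authors' code).
# Assumes: pip install torch transformers. Model name, spans and hyperparameters
# below are illustrative choices, not taken from the paper.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"          # any BERT/RoBERTa-style pretrained LM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)
encoder.train()                      # keep dropout active: two passes over the
                                     # same input yield two "views" of each word

def word_in_context_embedding(sentences, char_spans):
    """Mean-pool the subword vectors covering the target word in each sentence."""
    enc = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    hidden = encoder(**enc).last_hidden_state            # (batch, seq, dim)
    vecs = []
    for i, (start, end) in enumerate(char_spans):
        tok_ids = {enc.char_to_token(i, c) for c in range(start, end)}
        tok_ids.discard(None)
        vecs.append(hidden[i, sorted(tok_ids)].mean(dim=0))
    return torch.stack(vecs)                              # (batch, dim)

def info_nce(a, b, temperature=0.05):
    """Contrastive loss: each word's two views are positives; the rest of the
    batch provides in-batch negatives."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

# Toy batch of raw sentences with character spans of the target word "bank".
sentences = ["The bank raised interest rates.", "She sat on the river bank."]
spans = [(4, 8), (21, 25)]

z1 = word_in_context_embedding(sentences, spans)          # view 1 (dropout noise)
z2 = word_in_context_embedding(sentences, spans)          # view 2 (dropout noise)
loss = info_nce(z1, z2)
loss.backward()       # gradients for one update; optimizer step omitted here
print(float(loss))
```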