Catalogue search
Hits 1 – 2 of 2
1. Complexity-Weighted Loss and Diverse Reranking for Sentence Simplification ...
Kriz, Reno; Sedoc, João; Apidianaki, Marianna; Zheng, Carolina; Kumar, Gaurav; Miltsakaki, Eleni; Callison-Burch, Chris. - arXiv, 2019
Abstract:
Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models to simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through two main techniques. First, we incorporate content-word complexities, as predicted with a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time and rerank these to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler ...
Note: 11 pages, North American Chapter of the Association for Computational Linguistics (NAACL 2019) ...
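The first technique in the abstract, folding predicted word complexities into the training loss, can be illustrated with a minimal sketch. The abstract only says complexities enter the loss function; the weighting scheme, the `WORD_COMPLEXITY` table, and the `alpha` parameter below are illustrative assumptions, not the authors' actual formulation.

```python
import math

# Hypothetical per-word complexity levels (0 = simplest), standing in for the
# paper's leveled word-complexity model. These entries are invented examples.
WORD_COMPLEXITY = {"help": 0, "aid": 1, "facilitate": 3}

def complexity_weight(word, alpha=0.5):
    """Map a word's predicted complexity level to a loss weight (assumed linear)."""
    return 1.0 + alpha * WORD_COMPLEXITY.get(word, 0)

def complexity_weighted_loss(target_words, predicted_probs, alpha=0.5):
    """Per-word cross-entropy, with each term scaled by the word's
    complexity weight, so mistakes on complex content words cost more."""
    total = 0.0
    for word in target_words:
        p = predicted_probs.get(word, 1e-9)  # model's probability for the word
        total += complexity_weight(word, alpha) * -math.log(p)
    return total / len(target_words)
```

Under this sketch, a complex word such as "facilitate" contributes more loss than a simple word such as "help" at the same predicted probability, which is one plausible way to make a Seq2Seq model sensitive to output complexity during training.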
Keyword: Computation and Language (cs.CL); FOS: Computer and information sciences
URL:
https://dx.doi.org/10.48550/arxiv.1904.02767
https://arxiv.org/abs/1904.02767
BASE
2. Simplification Using Paraphrases and Context-Based Lexical Substitution
Kriz, Reno; Miltsakaki, Eleni; Apidianaki, Marianna ...
In: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Jun 2018, New Orleans, United States (2018). https://hal.archives-ouvertes.fr/hal-01838519
BASE
© 2013 - 2024 Lin|gu|is|tik