
Search in the Catalogues and Directories

Hits 1 – 5 of 5

1
QuestEval: Summarization Asks for Fact-based Evaluation
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Nov 2021, Punta Cana (online), Dominican Republic, pp. 6594-6604, ⟨10.18653/v1/2021.emnlp-main.529⟩ ; https://hal.sorbonne-universite.fr/hal-03541895 ; https://2021.emnlp.org/ (2021)
BASE
2
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering ...
BASE
3
MLSUM: The Multilingual Summarization Corpus
In: 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2020, Online, France, pp. 8051-8067, ⟨10.18653/v1/2020.emnlp-main.647⟩ ; https://hal.sorbonne-universite.fr/hal-03364407 (2020)
BASE
4
Answers Unite! Unsupervised Metrics for Reinforced Summarization Models
In: 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Nov 2019, Hong Kong, China, pp. 3237-3247, ⟨10.18653/v1/D19-1320⟩ ; https://hal.sorbonne-universite.fr/hal-02350999 (2019)
BASE
5
Self-Attention Architectures for Answer-Agnostic Neural Question Generation
In: ACL 2019 - Annual Meeting of the Association for Computational Linguistics, Jul 2019, Florence, Italy, pp. 6027-6032, ⟨10.18653/v1/P19-1604⟩ ; https://hal.sorbonne-universite.fr/hal-02350993 (2019)
Abstract: Neural architectures based on self-attention, such as Transformers, have recently attracted interest from the research community and obtained significant improvements over the state of the art in several tasks. We explore how Transformers can be adapted to the task of Neural Question Generation without constraining the model to focus on a specific answer passage. We study the effect of several strategies for dealing with out-of-vocabulary words, such as copy mechanisms, placeholders, and contextual word embeddings. We report improvements over the state of the art on the SQuAD dataset according to automated metrics (BLEU, ROUGE), as well as qualitative human assessments of the system outputs.
Keyword: Computer Science / Artificial Intelligence [cs.AI]; Information Retrieval [cs.IR]; Machine Learning [cs.LG]; Document and Text Processing
URL: https://doi.org/10.18653/v1/P19-1604
https://hal.sorbonne-universite.fr/hal-02350993
BASE
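
To make the abstract above concrete, here is a minimal Python sketch of the two steps it describes: generating a question from a passage alone (answer-agnostic, with no answer span marked) using a generic seq2seq Transformer, then scoring the output with sentence-level BLEU. The "t5-small" checkpoint and the "generate question:" prompt prefix are placeholder assumptions for illustration, not the authors' released model; only documented Hugging Face transformers and NLTK calls are used.

# Hedged sketch: answer-agnostic question generation with a generic seq2seq
# Transformer, scored with BLEU. The checkpoint and prompt prefix below are
# illustrative assumptions, not the paper's released system.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

MODEL_NAME = "t5-small"  # placeholder; substitute a checkpoint fine-tuned for QG

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

passage = "The Eiffel Tower was completed in 1889 and is located in Paris."

# Answer-agnostic setup: the model sees only the passage, with no answer span
# marked, so it must decide for itself what to ask about.
inputs = tokenizer("generate question: " + passage, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
question = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print("generated:", question)

# Automated evaluation as in the paper: compare against a reference question
# with sentence-level BLEU (smoothing avoids zero scores on short outputs).
reference = "When was the Eiffel Tower completed ?".split()
score = sentence_bleu([reference], question.split(),
                      smoothing_function=SmoothingFunction().method1)
print("BLEU:", round(score, 3))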

Result counts by source: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 5