1. ProtAugment: Intent Detection Meta-Learning through Unsupervised Diverse Paraphrasing
   In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Aug 2021, Online, France. pp. 2454-2466. ⟨10.18653/v1/2021.acl-long.191⟩ (2021)
   URL: https://hal-ujm.archives-ouvertes.fr/ujm-03353731
   Source: BASE

2. A Neural Few-Shot Text Classification Reality Check
   In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Apr 2021, Kyiv (virtual), Ukraine. (2021)
   Abstract: Modern classification models tend to struggle when the amount of annotated data is scarce. To overcome this issue, several neural few-shot classification models have emerged, yielding significant progress over time, both in Computer Vision and Natural Language Processing. In the latter, such models used to rely on fixed word embeddings before the advent of transformers. Additionally, some models used in Computer Vision have yet to be tested in NLP applications. In this paper, we compare all these models: first adapting those designed for image processing to NLP, and second giving them access to transformers. We then test these models, equipped with the same transformer-based encoder, on the intent detection task, known for having a large number of classes. Our results reveal that while the methods perform almost equally well on the ARSC dataset, this is not the case for intent detection, where the most recent and supposedly best competitors perform worse than older and simpler ones (even though all are given access to transformers). We also show that a simple baseline is surprisingly strong. All the newly developed models, as well as the evaluation framework, are made publicly available.
   Keywords: [INFO.INFO-AI] Computer Science [cs] / Artificial Intelligence [cs.AI]; ACM I.2: Artificial Intelligence
   URL: https://hal-ujm.archives-ouvertes.fr/ujm-03267869 ; https://hal-ujm.archives-ouvertes.fr/ujm-03267869/file/2021.eacl-main.79.pdf

3. ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning ...

6. Few-shot Pseudo-Labeling for Intent Detection
   In: Proceedings of the 28th International Conference on Computational Linguistics, Dec 2020, Barcelona, Spain (online). pp. 4993-5003. ⟨10.18653/v1/2020.coling-main.438⟩ (2020)
   URL: https://hal-ujm.archives-ouvertes.fr/ujm-03267832