Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
How to Evaluate ASR Output for Named Entity Recognition?
In: 16th Annual Conference of the International Speech Communication Association (Interspeech'15), Sep 2015, Dresden, Germany. https://hal.archives-ouvertes.fr/hal-01251370
2
How to assess the quality of automatic transcriptions for the extraction of named entities?
In: Actes des XXXe Journées d'Études sur la Parole (JEP'14), Jun 2014, Le Mans, France, pp. 430–437. https://hal.archives-ouvertes.fr/hal-01134868 ; http://www-lium.univ-lemans.fr/jep2014/
3
ETER: a New Metric for the Evaluation of Hierarchical Named Entity Recognition
In: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), European Language Resources Association (ELRA), May 2014, Reykjavik, Iceland, pp. 3987–3994. https://hal.archives-ouvertes.fr/hal-01134713 ; http://lrec2014.lrec-conf.org/en/
4
Automatic named entity pre-annotation for out-of-domain human annotation
In: Linguistic Annotation Workshop, ACL, Jan 2013, Sofia, Bulgaria. https://hal.archives-ouvertes.fr/hal-01831229
Abstract: Automatic pre-annotation is often used to improve human annotation speed and accuracy. We address here out-of-domain named entity annotation, and examine whether automatic pre-annotation is still beneficial in this setting. Our study design includes two different corpora, three pre-annotation schemes linked to two annotation levels, both expert and novice annotators, a questionnaire-based subjective assessment and a corpus-based quantitative assessment. We observe that pre-annotation helps in all cases, both for speed and for accuracy, and that the subjective assessment of the annotators does not always match the actual benefits measured in the annotation outcome.
Keyword: Computer Science > Computation and Language [cs.CL]; Computer Science [cs]
URL: https://hal.archives-ouvertes.fr/hal-01831229

Results by source: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 4
© 2013 – 2024 Lin|gu|is|tik