
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
How to Evaluate ASR Output for Named Entity Recognition?
In: 16th Annual Conference of the International Speech Communication Association (Interspeech'15), Sep 2015, Dresden, Germany. https://hal.archives-ouvertes.fr/hal-01251370 (2015)
BASE
2
How to assess the quality of automatic transcriptions for the extraction of named entities? (original title: Comment évaluer la qualité des transcriptions automatiques pour la détection d’entités nommées ?)
In: Actes des XXXe Journées d'Études sur la Parole (JEP'14), Jun 2014, Le Mans, France, pp. 430-437. https://hal.archives-ouvertes.fr/hal-01134868 ; http://www-lium.univ-lemans.fr/jep2014/ (2014)
BASE
3
ETER: a New Metric for the Evaluation of Hierarchical Named Entity Recognition
In: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), European Language Resources Association (ELRA), May 2014, Reykjavik, Iceland, pp. 3987-3994. https://hal.archives-ouvertes.fr/hal-01134713 ; http://lrec2014.lrec-conf.org/en/ (2014)
Abstract: This paper addresses the question of hierarchical named entity evaluation. In particular, we focus on metrics to deal with complex named entity structures such as those introduced within the QUAERO project. The intended goal is to propose a smart way of evaluating partially correctly detected complex entities, beyond the scope of traditional metrics. None of the existing metrics is fully adequate to evaluate the proposed QUAERO task involving entity detection, classification and decomposition. We discuss the strong and weak points of the existing metrics. We then introduce a new metric, the Entity Tree Error Rate (ETER), to evaluate hierarchical and structured named entity detection, classification and decomposition. The ETER metric builds upon the commonly accepted SER metric, but takes the complex entity structure into account by measuring errors not only at the slot (or complex entity) level but also at a basic (atomic) entity level. We compare our new metric to the standard one, first using some examples and then a set of real data selected from the ETAPE evaluation results.
Keyword: [INFO.INFO-CL] Computer Science [cs]/Computation and Language [cs.CL]; [SHS.LANGUE] Humanities and Social Sciences/Linguistics; evaluation; hierarchical named entity; metrics
URL: https://hal.archives-ouvertes.fr/hal-01134713
BASE
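To make the slot-level error counting described in the abstract above concrete, the following is a minimal illustrative sketch in Python of a simplified SER-style computation. It is not the authors' ETER implementation: the entity types shown and the alignment of reference and hypothesis slots by text span are assumptions chosen for brevity, and ETER additionally scores errors on the atomic components inside each complex entity, which this toy version does not do.

# Illustrative sketch only: a simplified Slot Error Rate (SER) style count,
# not the authors' ETER implementation. Entity types and the text-span
# alignment below are hypothetical simplifications.

def slot_error_rate(reference, hypothesis):
    """Compute a simplified SER: (insertions + deletions + substitutions)
    divided by the number of reference slots.

    Each entity is a (type, text) tuple; slots are aligned by matching
    on the entity text, an assumption made here for brevity.
    """
    ref = {text: etype for etype, text in reference}
    hyp = {text: etype for etype, text in hypothesis}

    deletions = sum(1 for text in ref if text not in hyp)           # missed slots
    insertions = sum(1 for text in hyp if text not in ref)          # spurious slots
    substitutions = sum(1 for text in ref
                        if text in hyp and hyp[text] != ref[text])  # wrong type

    return (insertions + deletions + substitutions) / max(len(ref), 1)


if __name__ == "__main__":
    reference = [("pers.ind", "Barack Obama"), ("loc.adm.town", "Reykjavik")]
    hypothesis = [("pers.ind", "Barack Obama"), ("org.ent", "Reykjavik"),
                  ("time.date.abs", "May 2014")]
    # 1 substitution + 1 insertion over 2 reference slots -> 1.0
    print(slot_error_rate(reference, hypothesis))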
4
Automatic named entity pre-annotation for out-of-domain human annotation
In: Linguistic Annotation Workshop, ACL, Jan 2013, Sofia, Bulgaria. https://hal.archives-ouvertes.fr/hal-01831229 (2013)
BASE

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 4