Catalogue search
Simple Search
Hits 1 – 2 of 2
1. Overview of the CLEF 2005 Multilingual Question Answering Track
Alessandro Vallin; Bernardo Magnini; Danilo Giampiccolo ...
In: http://www.clef-campaign.org/2005/working_notes/workingnotes2005/vallin05.pdf (2005)
Source: BASE
2. Overview of the CLEF 2005 Multilingual Question Answering Track
Alessandro Vallin; Bernardo Magnini; Danilo Giampiccolo; Lili Aunimo; Christelle Ayache; Petya Osenova; Anselmo Peñas; Maarten de Rijke; Bogdan Sacaleanu; Diana Santos; Richard Sutcliffe
In: http://www.science.uva.nl/~mdr/Publications/Files/clef-2005-qa-overview-wn.pdf (2005)
Abstract:
The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall, results showed a general increase in performance in comparison to the previous year. The best performing monolingual system, irrespective of target language, answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language step, in contrast, entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system's stated confidence in it, showing that the best systems did not always provide the most reliable confidence scores.
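The abstract mentions two kinds of measurement: plain accuracy and the relation between answer correctness and a system's self-reported confidence. As a rough illustration only (not the track's official scoring code), the sketch below computes accuracy and a TREC-2002-style confidence-weighted score, which rewards systems that assign their highest confidence to correct answers; the judged answers and confidence values are invented sample data.

```python
def accuracy(judgements):
    """Fraction of questions answered correctly (1 = correct, 0 = wrong)."""
    return sum(judgements) / len(judgements)

def confidence_weighted_score(judgements, confidences):
    """TREC-2002-style CWS: sort answers by self-reported confidence
    (descending), then average the running precision at each rank."""
    ranked = [j for _, j in sorted(zip(confidences, judgements),
                                   key=lambda pair: -pair[0])]
    correct_so_far = 0
    total = 0.0
    for rank, judged_correct in enumerate(ranked, start=1):
        correct_so_far += judged_correct
        total += correct_so_far / rank
    return total / len(ranked)

# Hypothetical run of five judged answers.
judged = [1, 0, 1, 1, 0]
conf = [0.9, 0.8, 0.7, 0.4, 0.1]
print(f"accuracy = {accuracy(judged):.2f}")
print(f"CWS      = {confidence_weighted_score(judged, conf):.3f}")
```

With identical judgements, CWS rises or falls with how well the confidence ordering matches correctness, which is exactly the reliability gap the organisers observed in the best systems.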
Keyword:
Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]; H.3.1 Content Analysis and Indexing; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; I.2 [Artificial Intelligence]; I.2.7 Natural Language Processing
General Terms: Measurement, Performance, Experimentation
Keywords: Question answering
URL:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.77.2950
http://www.science.uva.nl/~mdr/Publications/Files/clef-2005-qa-overview-wn.pdf
Source: BASE
© 2013 - 2024 Lin|gu|is|tik