
Search in the Catalogues and Directories

Hits 1 – 20 of 92

1. Application of Axiomatic Approaches to Crosslanguage Retrieval
In: http://clef.isti.cnr.it/2009/working_notes/rkern-paperCLEF2009.pdf (2010)
2. Using semantic relatedness and word sense disambiguation for (CL)IR
In: http://clef.isti.cnr.it/2009/working_notes/agirre-robust-paperCLEF2009.pdf (2009)
3. Overview of VideoCLEF 2009: New perspectives on speech-based multimedia content enrichment
In: http://clef.isti.cnr.it/2009/working_notes/larsonVideoCLEF2009_overview.pdf (2009)
4. Multiple retrieval models and regression models for prior art search
In: http://clef.isti.cnr.it/2009/working_notes/lopez-paperCLEF2009.pdf (2009)
5. Overview of iCLEF 2009: Exploring Search Behaviour in a Multilingual Folksonomy environment
In: http://clef.isti.cnr.it/2009/working_notes/iclef_overview_2009.pdf (2009)
6. Multiple retrieval models and regression models for prior art search
In: http://hal.archives-ouvertes.fr/docs/00/41/18/35/PDF/technote.pdf (2009)
7. German, French, English and Persian Retrieval Experiments at CLEF 2009
In: http://clef.isti.cnr.it/2009/working_notes/tomlinson-paperCLEF2009.pdf (2009)
8. IXA at CLEF 2008 Robust-WSD Task: using Word Sense Disambiguation for (Cross Lingual) Information Retrieval
In: http://ixa.si.ehu.es/Ixa/Argitalpenak/Artikuluak/1226564935/publikoak/otegi-paperCLEF2008.pdf (2009)
9. Overview of the ImageCLEFphoto 2008 Photographic Retrieval Task. 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access (CLEF 2008)
In: http://clef.isti.cnr.it/2008/working_notes/ImageCLEFphoto2008-final.pdf (2008)
10. Back to Basics - Again - for Domain Specific Retrieval
In: http://clef.isti.cnr.it/2008/working_notes/Berkeley_Domain_Specific_08.pdf (2008)
11. CLEF 2008 Ad-Hoc Track: On-line Processing Experiments with Xtrieval
In: http://ceur-ws.org/Vol-1174/CLEF2008wn-adhoc-KurstenEt2008.pdf (2008)
12. Overview of iCLEF 2008: search log analysis for Multilingual Image Retrieval
In: http://ceur-ws.org/Vol-1174/CLEF2008wn-iCLEF-GonzaloEt2008.pdf (2008)
13. German, French, English and Persian Retrieval Experiments at CLEF 2008
In: http://clef.isti.cnr.it/2008/working_notes/tomlinson-paperCLEF2008.pdf (2008)
14. Unsupervised morpheme analysis evaluation by IR experiments – Morpho Challenge 2008
In: http://clef.isti.cnr.it/2008/working_notes/kurimo2-paperCLEF2008.pdf (2008)
15. The Xtrieval Framework at CLEF 2008: Domain-Specific Track
In: http://ceur-ws.org/Vol-1174/CLEF2008wn-DomainSpecific-KurstenEt2008.pdf (2008)
16. SINAI at QA@CLEF2007. Answer Validation Exercise
In: http://ceur-ws.org/Vol-1173/CLEF2007wn-QACLEF-GarciaCumbreras2007.pdf (2007)
17. Unsupervised morpheme analysis evaluation by a comparison to a linguistic Gold Standard – Morpho Challenge 2007
In: http://clef.isti.cnr.it/2008/working_notes/kurimo1-paperCLEF2008.pdf (2007)
Abstract: The goal of Morpho Challenge 2008 was to find and evaluate unsupervised algorithms that provide morpheme analyses for words in different languages. Especially in morphologically complex languages, such as Finnish, Turkish and Arabic, morpheme analysis is important for lexical modeling of words in speech recognition, information retrieval and machine translation. The evaluation in Morpho Challenge competitions consisted of both a linguistic and an application-oriented performance analysis. This paper describes an evaluation where the competition entries were compared to a linguistic morpheme analysis gold standard. Because the morpheme labels in an unsupervised analysis can be arbitrary, the evaluation is based on matching the morpheme-sharing words between the proposed and the gold standard analyses. In addition to the Finnish, Turkish, German and English evaluations performed in Morpho Challenge 2007, the competition this year had an additional evaluation in Arabic. The results in 2008 show that although the level of precision and recall varies substantially between the tasks in different languages, the best methods seem to manage all the tested languages.
Keyword: Morphological analysis; Machine learning; H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; H.3.7 Digital Libraries; General Terms: Algorithms, Experimentation, Performance
URL: http://clef.isti.cnr.it/2008/working_notes/kurimo1-paperCLEF2008.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.410.7235
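The abstract of this record describes an evaluation in which proposed morpheme analyses are compared to a gold standard not by matching morpheme labels directly (which are arbitrary in unsupervised output) but by checking whether word pairs that share a morpheme in one analysis also share one in the other. The sketch below is a minimal, hypothetical illustration of that pair-matching idea, not the official Morpho Challenge scoring script; the function names and toy data are assumptions, and the real evaluation samples word pairs from large word lists rather than enumerating all pairs.

```python
from itertools import combinations

def shares_morpheme(morphs_a, morphs_b):
    """True if two analyses (iterables of morpheme labels) have a label in common."""
    return bool(set(morphs_a) & set(morphs_b))

def pairwise_match_score(source, reference):
    """Of all word pairs that share a morpheme in `source`, return the fraction
    that also share a morpheme in `reference`. With the proposed analysis as
    `source` and the gold standard as `reference` this behaves like precision;
    swapping the arguments gives recall."""
    hits = total = 0
    for w1, w2 in combinations(sorted(source), 2):
        if shares_morpheme(source[w1], source[w2]):
            total += 1
            if shares_morpheme(reference.get(w1, []), reference.get(w2, [])):
                hits += 1
    return hits / total if total else 0.0

# Hypothetical toy data: word -> list of morpheme labels. Labels need not
# agree across the two analyses; only the sharing structure between words matters.
proposed = {"cats": ["KAT", "+S"], "dogs": ["DOG", "+S"], "cat": ["KAT"]}
gold = {"cats": ["cat", "PL"], "dogs": ["dog", "PL"], "cat": ["cat"]}

precision = pairwise_match_score(proposed, gold)
recall = pairwise_match_score(gold, proposed)
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} f-measure={f1:.2f}")
```

In this sketch precision and recall are combined into an F-measure, mirroring how the abstract reports per-language precision and recall for the competing methods.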
18. Cross Lingual Question Answering using QRISTAL for CLEF 2007
In: http://www.qristal.fr/pub/Cross Lingual Question Answering using QRISTAL for CLEF 2007.pdf (2007)
19. The University of West Bohemia at CLEF 2006, the CL-SR track
In: http://ceur-ws.org/Vol-1172/CLEF2006wn-CLSR-IrcingEt2006.pdf (2006)
20. BRUJA System. The University of Jaén at the Spanish task of QA@CLEF 2006
In: http://ceur-ws.org/Vol-1172/CLEF2006wn-QACLEF-GarciaCumbrerasEt2006.pdf (2006)
