
Search in the Catalogues and Directories

Hits 1 – 20 of 95

1
ALCIDE: An online platform for the Analysis of Language and Content In a Digital Environment
Sara Tonelli; Giovanni Moretti; R. Sprugnoli (ORCID: 0000-0001-6861-5595). Pisa: Pisa University Press srl, 2014.
BASE
2
Converting the parallel treebank ParTUT in Universal Stanford Dependencies
M. Sanguinetti; C. Bosco. Pisa: Pisa University Press, 2014.
BASE
3
Developing corpora and tools for sentiment analysis: the experience of the University of Turin group
Manuela Sanguinetti; Emilio Sulis; Viviana Patti. Pisa: Pisa University Press, 2014.
BASE
4
Decomposing Semantic Inferences
In: http://elanguage.net/journals/lilt/article/viewFile/3735/3638/ (2013)
BASE
5
Building textual entailment specialized data sets: a methodology for isolating linguistic phenomena relevant to inference
In: http://www.lrec-conf.org/proceedings/lrec2010/pdf/478_Paper.pdf (2010)
BASE
6
Toward qualitative evaluation of textual entailment systems
In: http://aclweb.org/anthology-new/C/C10/C10-2012.pdf (2010)
BASE
7
Using Lexical Resources in a Distance-Based Approach to RTE
In: http://www.nist.gov/tac/publications/2009/participant.papers/FBKirst.proceedings.pdf (2009)
BASE
8
Combining specialized entailment engines for RTE-4
In: http://www.nist.gov/tac/publications/2008/participant.papers/fbkirst.proceedings.pdf (2008)
BASE
9
The QALL-ME Benchmark: a Multilingual Resource of Annotated Spoken Requests for Question Answering
In: http://www.lrec-conf.org/proceedings/lrec2008/pdf/628_paper.pdf (2008)
BASE
10
Building a large-scale repository of textual entailment rules
In: http://www.cs.brandeis.edu/~marc/misc/proceedings/lrec-2006/pdf/707_pdf.pdf (2006)
BASE
11
The query answering system Prodicos
In: Proceedings of Accessing Multilingual Information Repositories: 6th Workshop of the Cross-Language Evaluation Forum, CLEF 2005, Vienna, Austria, September 2005, Revised Selected Papers, pp. 527–534 (2006). https://hal.archives-ouvertes.fr/hal-00444437
BASE
12
Exploiting Linguistic Indices and Syntactic Structures for Multilingual Question Answering: ITC-irst at CLEF 2005
In: http://tanev.dir.bg/CLEF05.pdf (2005)
BASE
13
Overview of the CLEF 2005 Multilingual Question Answering Track
In: http://www.clef-campaign.org/2005/working_notes/workingnotes2005/vallin05.pdf (2005)
BASE
14
Overview of the CLEF 2005 Multilingual Question Answering Track
In: http://www.science.uva.nl/~mdr/Publications/Files/clef-2005-qa-overview-wn.pdf (2005)
BASE
15
Overview of the CLEF 2005 Multilingual Question Answering Track
In: http://www.science.uva.nl/~mdr/Publications/Files/clef2005-qa-overview-proceedings.pdf (2005)
Abstract: The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance in comparison to last year. The best performing monolingual system irrespective of target language answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language step instead entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system's stated confidence in it, showing that the best systems did not always provide the most reliable confidence score. We provide an overview of the 2005 QA track, detail the procedure followed to build the test sets and present a general analysis of the results.
URL: http://www.science.uva.nl/~mdr/Publications/Files/clef2005-qa-overview-proceedings.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.75.245
BASE
16
Overview of the CLEF 2004 Multilingual Question Answering Track
In: http://www.science.uva.nl/~mdr/Publications/Files/clef2004-qa-overview.pdf (2005)
BASE
17
Overview of the CLEF 2004 Multilingual Question Answering Track
In: http://www.science.uva.nl/~mdr/Publications/Files/clef-2004-qa-overview-proceedings.pdf (2005)
BASE
18
Revising WordNet Domains hierarchy: Semantics, coverage, and balancing
In: http://tcc.itc.it/people/bentivogli/papers/coling04-ws-WDH.pdf (2004)
BASE
19
Bridging Languages for Question Answering: DIOGENE at CLEF 2003
In: http://www.clef-campaign.org/2003/WN_web/38.pdf (2004)
BASE
20
Bridging Languages for Question Answering: DIOGENE at CLEF 2003
In: http://tcc.fbk.eu/people/negri/papers/CLEF2003/Final-clef03.pdf (2004)
BASE


Results by source type: Catalogues 0 | Bibliographies 0 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 95