1. Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference. In: Transactions of the Association for Computational Linguistics, Vol. 10, pp. 240-256 (2022). Source: BASE
2. Sentiment Analysis of Short Informal Texts. In: http://saifmohammad.com/WebDocs/NRC-Sentiment-JAIR-2014.pdf (2014)
3. NRC-Canada-2014: Recent Improvements in Sentiment Analysis of Tweets. In: http://www.cs.toronto.edu/%7Exzhu/SemEval2014_NRC_t9.pdf (2014)
4. NRC-Canada-2014: Recent Improvements in Sentiment Analysis of Tweets. In: http://saifmohammad.com/WebDocs/SemEval2014-Task9.pdf (2014)
5. NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. In: http://www.aclweb.org/anthology/S/S13/S13-2053.pdf (2013)
6. Prior Derivation Models for Formally Syntax-Based Translation Using Linguistically Syntactic Parsing and Tree Kernels. In: http://aclweb.org/anthology-new/W/W08/W08-0403.pdf (2008)
7. Utterance-Level Extractive Summarization of Open-Domain Spontaneous Conversations with Rich Features. In: http://www.cecs.uci.edu/~papers/icme06/pdfs/0000793.pdf (2006)
8. Summarization of Spontaneous Conversations. In: http://www.cs.toronto.edu/%7Egpenn/papers/zhu-penn-cscw06.pdf (2006)
9. Analysis of Polarity Information in Medical Text. In: http://ftp.cs.toronto.edu/pub/gh/Niu-etal-2005.pdf (2005)
10. Analysis of Polarity Information in Medical Text. In: http://www.cs.toronto.edu/~yun/papers/Niu_amia05.pdf (2005)
11. Single Character Chinese Named Entity Recognition. In: http://acl.ldc.upenn.edu/acl2003/sighan/pdf/Zhu.pdf (2003)
12. Single Character Chinese Named Entity Recognition. In: http://acl.ldc.upenn.edu/acl2003/sighan/pdfs/Zhu.pdf (2003)
13. Sentiment, Emotion, Purpose, and Style in Electoral Tweets. In: http://saifmohammad.com/WebDocs/tweetSentiment.pdf
14. Prior Derivation Models for Formally Syntax-Based Translation Using Linguistically Syntactic Parsing and Tree Kernels. In: http://www.mt-archive.info/ACL-SSST-2008-Zhou.pdf
15. Summarizing Multiple Spoken Documents: Finding Evidence from Untranscribed Audio. In: http://aclweb.org/anthology-new/P/P09/P09-1062.pdf
16. Ecological Validity and the Evaluation of Speech Summarization Quality. In: http://www.aclweb.org/anthology/W/W12/W12-2604.pdf

Abstract: There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. For this, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that in the domain of lecture summarization, the well-known baseline of maximal marginal relevance (Carbonell and Goldstein, 1998) is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all in a simple quiz-taking task. Priming seems to have no statistically significant effect on the usefulness of the human summaries. In addition, ROUGE scores and, in particular, the context-free annotations that are often supplied to ROUGE as references, may not always be reliable as inexpensive proxies for ecologically valid evaluations. In fact, under some conditions, relying exclusively on ROUGE may even lead to favourably scoring human-generated summaries whose usefulness is inconsistent relative to using no summaries at all.

URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.361.8308 http://www.aclweb.org/anthology/W/W12/W12-2604.pdf