1. The Utility and Interplay of Gazetteers and Entity Segmentation for Named Entity Recognition in English ... (BASE)
2. Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve ... (BASE)
6. Combining Lexical and Syntactic Features for Detecting Content-dense Texts in News ... (BASE)
9. Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints (BASE)
10. Prosodic cues for emotion: analysis with discrete characterization of intonation. In: Speech Prosody (2014). (BASE)
11. Acoustic and Lexical Representations for Affect Prediction in Spontaneous Conversations (BASE)

    Abstract: In this article we investigate which representations of acoustics and word usage are most suitable for predicting dimensions of affect (AROUSAL, VALENCE, POWER and EXPECTANCY) in spontaneous interactions. Our experiments are based on the AVEC 2012 challenge dataset. For lexical representations, we compare corpus-independent features based on psychological word norms of emotional dimensions with corpus-dependent representations. We find that a corpus-dependent bag-of-words approach using mutual information between words and emotion dimensions is by far the best representation. For the analysis of acoustics, we zero in on the question of granularity. We confirm on our corpus that utterance-level features are more predictive than word-level features. Further, we study more detailed representations in which the utterance is divided into regions of interest (ROI), each with a separate representation. We introduce two ROI representations, which significantly outperform less informed approaches. In addition, we show that acoustic models of emotion can be improved considerably by taking annotator agreement into account and training the model on a smaller but more reliable dataset. Finally, we discuss the potential for improving prediction by combining the lexical and acoustic modalities. Simple fusion methods do not lead to consistent improvements over lexical classifiers alone, but do improve over acoustic models.

    Keyword: Article

    URLs: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4219625 | http://www.ncbi.nlm.nih.gov/pubmed/25382936 | https://doi.org/10.1016/j.csl.2014.04.002
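The abstract above reports that a bag-of-words representation scored by mutual information between word presence and an emotion dimension worked best. A minimal sketch of that kind of word selection is given below; the toy corpus, the binarized arousal labels, and the top-k cutoff are invented for illustration and are not from the paper.

```python
import math
from collections import Counter

def mutual_information(word_presence, labels):
    """MI (in bits) between a binary word-presence variable and binary labels."""
    n = len(labels)
    joint = Counter(zip(word_presence, labels))   # counts of (x, y) pairs
    px = Counter(word_presence)                   # marginal counts of x
    py = Counter(labels)                          # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy / (p_x * p_y) simplifies to c * n / (px[x] * py[y])
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Toy corpus: utterances with a binarized arousal label (1 = high arousal)
utterances = [("great amazing day", 1), ("so boring", 0),
              ("amazing news", 1), ("quiet boring evening", 0)]
vocab = {w for text, _ in utterances for w in text.split()}
labels = [y for _, y in utterances]

# Score each vocabulary word by its MI with the emotion dimension
scores = {}
for w in vocab:
    presence = [int(w in text.split()) for text, _ in utterances]
    scores[w] = mutual_information(presence, labels)

# Keep the top-k words as the bag-of-words feature set
top_words = sorted(scores, key=scores.get, reverse=True)[:2]
```

In this toy data, "amazing" and "boring" perfectly track the label, so each carries the maximum 1 bit of mutual information and both are selected as features.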
12. Action Unit Models of Facial Expression of Emotion in the Presence of Speech (BASE)
13. Combining Video, Audio and Lexical Indicators of Affect in Spontaneous Conversation via Particle Filtering (BASE)
15. Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries (BASE)
17. General Versus Specific Sentences: Automatic Identification and Application to Analysis of News Summaries. In: Technical Reports (CIS) (2011). (BASE)
18. Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries. In: Departmental Papers (CIS) (2011). (BASE)