2. Overcoming Language Variation in Sentiment Analysis with Social Attention
Abstract:
Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models depending on the author's position in the social network. This has the effect of smoothing the classification function across the social network, and it makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracy of sentiment analysis on Twitter and on review data.

Published in Transactions of the Association for Computational Linguistics (TACL), 2017. Please cite the TACL version: https://transacl.org/ojs/index.php/tacl/article/view/1024
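The mechanism the abstract describes, attention distributed over several basis models according to the author's position in the social network, can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: the weight matrices, the node embedding `v_node`, and the linear basis scorers are all assumptions standing in for the learned components.

```python
import numpy as np

# Hypothetical sketch of a social-attention mixture: K "basis" sentiment
# scorers are shared across authors, and an author's social-network
# embedding determines how much attention each basis model receives.
# All names, shapes, and the linear scorers are illustrative assumptions.

rng = np.random.default_rng(0)
K, D_text, D_node = 3, 5, 4              # basis models, text features, node-embedding size

W_basis = rng.normal(size=(K, D_text))   # one linear sentiment scorer per basis model
W_attn = rng.normal(size=(K, D_node))    # maps a node embedding to attention logits

def softmax(z):
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x_text, v_node):
    """Mix the K basis scores with attention derived from the author's embedding."""
    attn = softmax(W_attn @ v_node)      # (K,) attention over basis models
    scores = W_basis @ x_text            # (K,) per-basis sentiment scores
    return float(attn @ scores)          # attention-weighted prediction

x = rng.normal(size=D_text)              # toy text-feature vector
v = rng.normal(size=D_node)              # toy social-network embedding
print(predict(x, v))
```

Because socially linked authors tend to have similar embeddings, their attention weights, and hence their effective classifiers, are similar, which is the smoothing effect the abstract refers to.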
Keywords:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences; Social and Information Networks (cs.SI)
URL: https://arxiv.org/abs/1511.06052 https://dx.doi.org/10.48550/arxiv.1511.06052
Source: BASE
3. Better Document-level Sentiment Analysis from RST Discourse Parsing
Source: BASE