Unsupervised Sentiment Analysis

I've been reading a lot of articles that explain the need for an initial set of texts that are classified as either 'positive' or 'negative' before a sentiment analysis system will really work.

My question is: Has anyone attempted just doing a rudimentary check of 'positive' adjectives vs 'negative' adjectives, taking into account any simple negators to avoid classing 'not happy' as positive? If so, are there any articles that discuss just why this strategy isn't realistic?
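To make that concrete, here is roughly the kind of check I have in mind (the tiny word lists and the one-token negation window are just placeholders, not a real lexicon):

    POSITIVE = {"good", "great", "happy", "excellent"}
    NEGATIVE = {"bad", "poor", "sad", "terrible"}
    NEGATORS = {"not", "never", "no"}

    def naive_sentiment(text):
        tokens = text.lower().split()
        score = 0
        for i, tok in enumerate(tokens):
            polarity = 1 if tok in POSITIVE else -1 if tok in NEGATIVE else 0
            # flip polarity when the preceding token is a simple negator
            if polarity and i > 0 and tokens[i - 1] in NEGATORS:
                polarity = -polarity
            score += polarity
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(naive_sentiment("I am not happy with this product"))  # -> negative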

Casto answered 13/10, 2010 at 4:25

A classic paper by Peter Turney (2002) explains a method to do unsupervised sentiment analysis (positive/negative classification) using only the words excellent and poor as a seed set. Turney uses the pointwise mutual information (PMI) of extracted phrases with these two seed words to achieve an accuracy of 74%.
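In outline, Turney scores a phrase by how strongly it co-occurs with each seed word. Here is a minimal sketch of the scoring step, where hits() is a placeholder for a co-occurrence count (Turney got his from a search engine's NEAR operator):

    import math

    def hits(query):
        # Placeholder: return a hit count for `query` from a search
        # engine or corpus index (Turney used AltaVista's NEAR operator).
        raise NotImplementedError

    def semantic_orientation(phrase):
        # SO(phrase) = log2 of the odds of co-occurring with "excellent"
        # versus "poor"; the 0.01 smoothing avoids division by zero.
        num = (hits(phrase + ' NEAR excellent') + 0.01) * (hits('poor') + 0.01)
        den = (hits(phrase + ' NEAR poor') + 0.01) * (hits('excellent') + 0.01)
        return math.log(num / den, 2)

A positive score marks the phrase as positive, a negative score as negative; averaging the scores of a review's phrases classifies the review.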

Wingo answered 14/10, 2010 at 13:52
The link is broken. – Alishiaalisia

I haven't tried doing untrained sentiment analysis like you are describing, but off the top of my head I'd say you're oversimplifying the problem. Simply analyzing adjectives is not enough to get a good grasp of the sentiment of a text; for example, consider the word 'stupid.' Alone, you would classify that as negative, but if a product review were to say '... [x] product makes their competitors look stupid for not thinking of this feature first...' then the sentiment there would definitely be positive. The greater context in which words appear matters in something like this. This is why an untrained bag-of-words approach alone (let alone an even more limited bag-of-adjectives) is not enough to tackle the problem adequately.

The pre-classified data ('training data') helps in that the problem shifts from trying to determine whether a text is of positive or negative sentiment from scratch, to trying to determine whether the text is more similar to the positive texts or to the negative texts, and classifying it that way. The other big point is that textual analyses such as sentiment analysis are often greatly affected by how the characteristics of texts differ from one domain to another. This is why having a good set of data to train on (that is, data that is accurate, drawn from the domain you are working in, and hopefully representative of the texts you will have to classify) is as important as building a good system to classify with.
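To make the shift concrete: with labeled data, the task becomes ordinary supervised classification. Here's a toy bag-of-words Naive Bayes sketch with NLTK (the four example 'reviews' are placeholders; a real training set needs many in-domain examples):

    import nltk

    train = [
        ("great product works perfectly", "pos"),
        ("excellent value highly recommend", "pos"),
        ("terrible quality broke quickly", "neg"),
        ("poor support waste of money", "neg"),
    ]

    def features(text):
        # bag-of-words presence features
        return {word: True for word in text.split()}

    classifier = nltk.NaiveBayesClassifier.train(
        [(features(text), label) for text, label in train])
    print(classifier.classify(features("great value")))  # -> pos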

Not exactly an article, but hope that helps.

Accelerometer answered 13/10, 2010 at 6:35
Thanks for your response waffle! I appreciate all the input I can get on this topic. – Casto

The Turney (2002) paper mentioned by larsmans is a good basic one. In more recent research, Li and He (2009) introduce an approach using Latent Dirichlet Allocation (LDA) to train a model that can classify an article's overall sentiment and topic simultaneously in a totally unsupervised manner. The accuracy they report is 84.6%.
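Their joint sentiment-topic model extends LDA and can't be reproduced in a few lines, but to give a feel for the unsupervised side of the pipeline, plain LDA with gensim looks like this (ordinary topic modelling on a toy corpus, not a reimplementation of Li and He's model):

    from gensim import corpora, models

    # tiny toy corpus; tokenization and stopword removal omitted
    docs = [["screen", "bright", "excellent", "battery"],
            ["battery", "poor", "died", "disappointed"],
            ["camera", "excellent", "photos", "sharp"],
            ["shipping", "slow", "poor", "packaging"]]

    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
    for topic_id, words in lda.print_topics():
        print(topic_id, words)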

Oke answered 2/2, 2012 at 16:19
Did you actually end up trying it? I'm working on a similar problem trying to do sentiment analysis on the Enron email archives. – Shalom
@TrungHuynh I'm posting this nearly 4 years after the answer was posted, but the link to the paper has been changed now. Can you tell me the name of the journal paper so I can search for it online? – Effete
Reviewing this question in mid-2018, I am tempted to suggest that the Li & He model is now the mainstream GuidedLDA model. See here: github.com/vi3k6i5/GuidedLDA, and a related blog post. – Folacin

I tried several methods of sentiment analysis for opinion mining in reviews. What worked best for me is the method described in Liu's book (http://www.cs.uic.edu/~liub/WebMiningBook.html), in which Liu and others compare many strategies and discuss different papers on sentiment analysis and opinion mining.

Although my main goal was to extract features from the opinions, I implemented a sentiment classifier to detect the positive or negative polarity of these features.

I used NLTK for the pre-processing (word tokenization, POS tagging) and for the trigram creation. I also used the Bayesian classifiers in this toolkit to compare against the other strategies Liu pinpoints.
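That preprocessing pipeline is only a few lines of NLTK; a minimal sketch (the example sentence is made up):

    import nltk
    # one-time downloads: nltk.download('punkt'), nltk.download('averaged_perceptron_tagger')

    sentence = "The battery life is not good at all"
    tokens = nltk.word_tokenize(sentence)    # word tokenization
    tagged = nltk.pos_tag(tokens)            # POS tagging
    trigrams = list(nltk.trigrams(tagged))   # trigram creation
    print(trigrams[:2])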

One of the methods relies on tagging every trigram that expresses this information as pos/neg and using some classifier on this data. Another method I tried, which worked better (around 85% accuracy on my dataset), was calculating the sum of PMI (pointwise mutual information) scores between every word in the sentence and the seed words excellent/poor for the pos/neg classes.
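The scoring in that second method looks roughly like this (a sketch; the sentence-level co-occurrence counting is my assumption, and the counts could equally come from search-engine hits):

    import math

    def pmi(word, seed, word_counts, pair_counts, total):
        # PMI(word, seed) = log2( P(word, seed) / (P(word) * P(seed)) ),
        # with 0.01 smoothing on the counts to avoid division by zero
        joint = pair_counts.get((word, seed), 0) + 0.01
        return math.log(joint * total /
                        ((word_counts.get(word, 0) + 0.01) *
                         (word_counts.get(seed, 0) + 0.01)), 2)

    def sentence_score(words, word_counts, pair_counts, total):
        # positive sum -> closer to "excellent", negative -> closer to "poor"
        return sum(pmi(w, "excellent", word_counts, pair_counts, total)
                   - pmi(w, "poor", word_counts, pair_counts, total)
                   for w in words)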

Eroto answered 7/3, 2012 at 15:35
Hi Luchux, I am working on a similar domain. Can you please share your dataset? It would be very helpful. – Bulwerlytton

I tried spotting keywords using a dictionary of affect to predict the sentiment label at the sentence level. Given the generality of the vocabulary (not domain dependent), the results were only about 61% accurate. The paper is available on my homepage.

In a somewhat improved version, negation adverbs were considered. The whole system, named EmoLib, is available for demo:

http://dtminredis.housing.salle.url.edu:8080/EmoLib/

Regards,

Jecoa answered 13/10, 2010 at 7:33
Thanks for this atrilla. It ran pretty well for the testing I did. – Casto

David,

I'm not sure if this helps, but you may want to look into Jacob Perkins' blog post on using NLTK for sentiment analysis.

Mercado answered 22/11, 2010 at 8:28
He is doing supervised classification. – Penninite

There are no magic "shortcuts" in sentiment analysis, as with any other sort of text analysis that seeks to discover the underlying "aboutness" of a chunk of text. Attempting to shortcut proven text analysis methods through simplistic "adjective" checking or similar approaches leads to ambiguity, incorrect classification, etc., that at the end of the day gives you a poor accuracy read on sentiment. The more terse the source (e.g. Twitter), the more difficult the problem.

Dingbat answered 18/9, 2011 at 15:10
