1 million sentences to save in DB - removing non-relevant English words
Asked Answered

3

6

I am trying to train a Naive Bayes classifier with positive/negative words extracted from sentiment-labelled sentences. For example:

I love this movie :))

I hate when it rains :(

The idea is that I extract positive or negative sentences based on the emoticons used, in order to train a classifier and persist it into a database.

The problem is that I have more than 1 million such sentences, so if I train it word by word, the database will go for a toss. I want to remove all non-relevant words, for example 'I', 'this', 'when', 'it', so that the number of database queries I have to make is smaller.

Please help me resolve this issue, or suggest better ways of doing it.

Thank you

Stellastellar answered 23/11, 2010 at 17:39 Comment(3)
I would guess that your "non-relevant" words, including 'I', 'this', 'when' and 'it', should appear very frequently in both positive and negative sentences. Maybe this can help you design an algorithm to automatically disqualify some words, either as you go or as a pre-pass.Netherlands
+1 for the phrase "the database will go for a toss"Bidle
Does this have to be a database? How about a full text search engine? Or a simple data structure? lucidimagination.com/Community/Hear-from-the-Experts/Articles/…Eslinger
8

There are two common approaches:

  1. Compile a stop list.
  2. POS tag the sentences and throw out those parts of speech that you think are not interesting.

In both cases, determining which words/POS tags are relevant can be done using a measure such as PMI (pointwise mutual information); a sketch of both approaches follows below.
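For illustration, here is a minimal sketch of both approaches in Python with NLTK. The library choice, the set of POS tags to keep, and the function names are my assumptions, not something this answer prescribes; it also assumes the NLTK data packages for tokenization, tagging and stopwords have been downloaded.

```python
import math
import nltk
from nltk.corpus import stopwords

STOP = set(stopwords.words("english"))            # approach 1: stop list
KEEP = {"JJ", "JJR", "JJS", "RB", "RBR", "RBS",   # approach 2: keep only
        "VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}  # adjectives, adverbs, verbs

def relevant_tokens(sentence):
    """Drop stop words, then keep only the POS tags listed in KEEP."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence.lower()))
    return [word for word, tag in tagged if word not in STOP and tag in KEEP]

def pmi(n_word_and_pos, n_word, n_pos, n_total):
    """PMI(word, positive) = log2(P(word, positive) / (P(word) * P(positive))),
    computed from raw counts over the training set."""
    return math.log2((n_word_and_pos / n_total) /
                     ((n_word / n_total) * (n_pos / n_total)))

print(relevant_tokens("I love this movie"))       # e.g. ['love']
```

Words whose PMI with both classes is near zero are the ones a learned (rather than hand-compiled) stop list would discard.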

Mind you: standard stop lists from information retrieval may or may not work in sentiment analysis. I recently read a paper (no reference, sorry) which claimed that '!' and '?', commonly removed in search engines, are valuable clues for sentiment analysis. (So may 'I' be, especially when you also have a neutral category.)

Edit: you can also safely throw away everything that occurs only once in the training set (so-called hapax legomena). Words that occur only once have little information value for your classifier, but may take up a lot of space; a sketch of this filter follows below.
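As a rough sketch (mine, not this answer's), the hapax filter takes only a few lines with collections.Counter; `corpus` here is a hypothetical list of already-tokenized sentences:

```python
from collections import Counter

def drop_hapaxes(corpus):
    """Remove every token that occurs exactly once across the whole corpus."""
    freq = Counter(token for sentence in corpus for token in sentence)
    return [[t for t in sentence if freq[t] > 1] for sentence in corpus]

corpus = [["love", "movie"], ["hate", "rains"], ["love", "sunshine"]]
print(drop_hapaxes(corpus))   # [['love'], [], ['love']] -- only 'love' repeats
```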

Purpure answered 24/11, 2010 at 10:58 Comment(1)
+1; it is hard to find out which words to remove before training an algorithm and seeing which words are less significant.Countless
4

You might want to check this out: http://books.google.com/books?id=CE1QzecoVf4C&lpg=PA390&ots=OHuYwLRhag&dq=sentiment%20%20mining%20for%20fortune%20500&pg=PA379#v=onepage&q=sentiment%20%20mining%20for%20fortune%20500&f=false

Teahouse answered 30/11, 2010 at 3:55 Comment(1)
Indeed, thanks for that link; interesting to see how other people are doing this...Mattox
0

To reduce the amount of data retrieved from your database, you could create a dictionary in your database: a table that maps words* to numbers**. You would then retrieve only a vector of numbers for training, and the complete sentence only for manually marking a sentiment. (A sketch follows after the footnotes.)

* No scientific publication comes to mind, but maybe it is enough to use only stems or lemmas instead of words; that would reduce the size of the dictionary.

** If this operation kills your database, you can create the dictionary in a local application that uses a text-indexing engine (e.g., Apache Lucene) and store only the result in your database.
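Here is a minimal sketch of that dictionary table using Python's built-in sqlite3; all table, column, and function names are invented for illustration, and a production schema would differ:

```python
import sqlite3

conn = sqlite3.connect("sentiment.db")
conn.execute("CREATE TABLE IF NOT EXISTS dictionary "
             "(id INTEGER PRIMARY KEY, word TEXT UNIQUE)")
conn.execute("CREATE TABLE IF NOT EXISTS sentences "
             "(id INTEGER PRIMARY KEY, text TEXT, word_ids TEXT, label TEXT)")

def word_id(word):
    """Map a word to its integer id, inserting it on first sight."""
    conn.execute("INSERT OR IGNORE INTO dictionary (word) VALUES (?)", (word,))
    (wid,) = conn.execute("SELECT id FROM dictionary WHERE word = ?",
                          (word,)).fetchone()
    return wid

def store(sentence, tokens, label):
    """Keep the full text for manual inspection, but store the compact
    id vector that training will actually read."""
    ids = ",".join(str(word_id(t)) for t in tokens)
    conn.execute("INSERT INTO sentences (text, word_ids, label) "
                 "VALUES (?, ?, ?)", (sentence, ids, label))
    conn.commit()

store("I love this movie", ["love", "movie"], "pos")
```

Training then queries only the small `word_ids` column; each distinct word hits the database once, when it is first assigned an id.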

Countless answered 24/11, 2010 at 18:28 Comment(1)
PS: I would also include the length of a sentence as a feature.Countless
