Simplest feature selection algorithm
I am trying to create my own simple feature selection algorithm. The data set that I am going to work with is here (a very famous data set). Can someone give me a pointer on how to do this?

I am planning to write a feature ranking algorithm for text classification. This is for sentiment analysis of movie reviews, classifying them as either positive or negative.

So my question is how to write a simple feature selection algorithm for a text data set.

Middleweight answered 7/3, 2011 at 17:10 Comment(2)
That's a big topic. Is there something specific you're having trouble with, or do you need ideas of where to start? – Snail
I just want to eliminate features that add noise to the classification. But how do I pick these kinds of words systematically? What is the appropriate number of features that gives me the best accuracy, and which words should they be? I guess that's what I want the final result of my algorithm to be. – Middleweight

Feature selection methods are a big topic. You can start with the following:

  1. Chi square

  2. Mutual information

  3. Term frequency

etc. Read this paper if you have time: A Comparative Study on Feature Selection in Text Categorization; it will help you a lot.

The actual implementation depends on how you pre-process the data. Basically it comes down to keeping counts, whether in a hash table or a database.
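As a concrete starting point, here is a minimal sketch of chi-square feature ranking, assuming scikit-learn is available; the reviews/labels toy data is a placeholder of mine, not from the original question:

```python
# A minimal sketch of chi-square feature ranking for text classification.
# `reviews` and `labels` are placeholder toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

reviews = ["a wonderful, moving film", "a dull and tedious mess"]
labels = [1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)   # term-count matrix (the "counts")
scores, p_values = chi2(X, labels)      # chi-square score per term

# Rank terms from most to least class-dependent.
ranked = sorted(zip(vectorizer.get_feature_names_out(), scores),
                key=lambda pair: pair[1], reverse=True)
for term, score in ranked[:10]:
    print(term, round(score, 3))
```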

Conformist answered 7/3, 2011 at 18:18 Comment(4)
Amongst all that, term frequency seems to be the least powerful, right? – Middleweight
No. You want to remove noisy terms, and if a term occurs just once, it is very probably noise (maybe a misspelled name). You need to run a few tests before you can decide. – Conformist
A few tests such as? Remove the terms ranked in the bottom 50 by frequency, then test the accuracy, and keep going until the accuracy drops? – Middleweight
The optimal answer depends on the data set you have. What you described can be one of those tests. – Conformist

Random feature subsets work well when you are building ensembles. It's known as feature bagging (the random subspace method).
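A minimal sketch of this idea using scikit-learn's BaggingClassifier, whose max_features option gives each base model a random subset of the features; the X/y toy data is a placeholder of mine:

```python
# Feature bagging (random subspaces): each tree in the ensemble is trained
# on a random half of the features. X and y below are placeholder toy data.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(100, 20)           # 100 samples, 20 features
y = np.random.randint(0, 2, 100)      # binary labels

ensemble = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_features=0.5,   # random half of the features per tree
    bootstrap=False,    # vary the feature subsets, not the rows
    random_state=0,
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```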

Lento answered 10/5, 2012 at 20:37 Comment(0)

Here's one option: Use pointwise mutual information. Your features will be tokens, and the information should be measured against the sentiment label. Be careful with frequent words (stop words), because in this type of task they may actually be useful.
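A minimal sketch of pointwise mutual information between token presence and the sentiment label; the toy corpus is a placeholder of mine:

```python
# PMI(token, label) = log2( p(token, label) / (p(token) * p(label)) ),
# computed over a toy corpus of (text, label) pairs.
import math
from collections import Counter

docs = [("a truly great film", 1), ("boring and awful", 0),
        ("great acting and a great story", 1), ("awful pacing", 0)]

token_counts, joint_counts, label_counts = Counter(), Counter(), Counter()
for text, label in docs:
    label_counts[label] += 1
    for token in set(text.split()):        # token presence, not frequency
        token_counts[token] += 1
        joint_counts[(token, label)] += 1

n = len(docs)

def pmi(token, label):
    p_joint = joint_counts[(token, label)] / n
    if p_joint == 0:
        return float("-inf")               # token never seen with this label
    p_token = token_counts[token] / n
    p_label = label_counts[label] / n
    return math.log2(p_joint / (p_token * p_label))

print(pmi("great", 1), pmi("awful", 0))    # both 1.0 on this toy corpus
```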

Psf answered 7/3, 2011 at 17:59 Comment(0)

I currently use this approach:

Calculate the mean and variance of each feature for each class. A good feature candidate should have small within-class variance, and its mean should differ from the mean values of the other classes.

Currently, having fewer than 50 features, I select them manually. To automate this process, one could calculate the variance of the per-class means across all classes and give higher priority to features where this variance is larger, then prefer among those the features with smaller variance within each class.

Of course, this doesn't remove redundant features.
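A minimal numpy sketch of this heuristic, scoring each feature by the spread of its per-class means relative to its average within-class variance (a Fisher-score-like criterion); X and y are placeholder toy data:

```python
# Rank features: high between-class mean spread, low within-class variance.
import numpy as np

X = np.random.rand(100, 8)            # toy data: 100 samples, 8 features
y = np.random.randint(0, 2, 100)      # two classes

classes = np.unique(y)
class_means = np.array([X[y == c].mean(axis=0) for c in classes])
class_vars = np.array([X[y == c].var(axis=0) for c in classes])

between = class_means.var(axis=0)     # spread of the per-class means
within = class_vars.mean(axis=0)      # average within-class variance

score = between / (within + 1e-12)    # higher = better candidate
print("features ranked best to worst:", np.argsort(score)[::-1])
```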

Amygdaline answered 1/2, 2015 at 12:6 Comment(0)

Feature selection methods are divided into four groups:

  • Filter: uses statistical measures to score and select features
  • Wrapper: evaluates feature subsets with a learning algorithm
  • Embedded: performs the selection as part of training the learning algorithm itself
  • Hybrid: combines filter and wrapper steps

The simplest way to do feature selection is with filter approaches, which are also very fast compared to the other groups.

Here are some of them (a small information-gain sketch follows the list):

  1. Chi-square
  2. Cross Entropy
  3. Fuzzy Entropy Measure
  4. Gini index
  5. Information Gain
  6. Mutual Information
  7. Relative Discrimination Criteria
  8. Term Strength
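As an illustration of one filter from this list, here is a minimal sketch of information gain for binary term presence; the toy corpus is a placeholder of mine:

```python
# IG(term) = H(label) - H(label | term present/absent), over a toy corpus.
import math
from collections import Counter

docs = [("great film great fun", 1), ("dull film", 0),
        ("great story", 1), ("dull and slow", 0)]

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(term):
    present = [lab for text, lab in docs if term in text.split()]
    absent = [lab for text, lab in docs if term not in text.split()]
    gain = entropy([lab for _, lab in docs])
    for part in (present, absent):
        if part:
            gain -= (len(part) / len(docs)) * entropy(part)
    return gain

for term in ("great", "film", "dull"):
    print(term, round(information_gain(term), 3))  # great/dull: 1.0, film: 0.0
```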

I have also used a hybrid method for feature selection in text categorization; you can check my article here.

Scarron answered 26/5, 2022 at 19:4 Comment(0)
