SkLearn Multinomial NB: Most Informative Features

As my classifier yields about 99% accuracy on the test data, I am a bit suspicious and want to gain insight into the most informative features of my NB classifier to see what kind of features it is learning. The following topic has been very useful: How to get most informative features for scikit-learn classifiers?

As for my feature input, I am still experimenting; at the moment I am testing a simple unigram model, using CountVectorizer:

    vectorizer = CountVectorizer(ngram_range=(1, 1), min_df=2, stop_words='english')
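
The question does not show the rest of the pipeline, but for context, a minimal sketch of the assumed setup might look like this (the documents and labels below are hypothetical placeholders, not the actual data):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Hypothetical placeholder corpus and labels, for illustration only
    docs = ["the senate passed the bill", "the senate debated the bill",
            "the team won the game", "the team lost the game"]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer(ngram_range=(1, 1), min_df=2, stop_words='english')
    X_train = vectorizer.fit_transform(docs)    # document-term count matrix
    clf = MultinomialNB().fit(X_train, labels)  # the NB classifier discussed below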

On the aforementioned topic I found the following function:

    def show_most_informative_features(vectorizer, clf, n=20):
        feature_names = vectorizer.get_feature_names()  # get_feature_names_out() in newer scikit-learn
        # pair each coefficient with its feature name, sorted ascending by coefficient
        coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
        # walk the n lowest and the n highest coefficients side by side
        top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
        for (coef_1, fn_1), (coef_2, fn_2) in top:
            print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))

Which gives the following result:

    -16.2420        114th                   -4.0020 said           
    -16.2420        115                     -4.6937 obama          
    -16.2420        136                     -4.8614 house          
    -16.2420        14th                    -5.0194 president      
    -16.2420        15th                    -5.1236 state          
    -16.2420        1600                    -5.1370 senate         
    -16.2420        16th                    -5.3868 new            
    -16.2420        1920                    -5.4004 republicans    
    -16.2420        1961                    -5.4262 republican     
    -16.2420        1981                    -5.5637 democrats      
    -16.2420        19th                    -5.6182 congress       
    -16.2420        1st                     -5.7314 committee      
    -16.2420        31st                    -5.7732 white          
    -16.2420        3rd                     -5.8227 security       
    -16.2420        4th                     -5.8256 states         
    -16.2420        5s                      -5.8530 year           
    -16.2420        61                      -5.9099 government     
    -16.2420        900                     -5.9464 time           
    -16.2420        911                     -5.9984 department     
    -16.2420        97                      -6.0273 gop 

It works, but I would like to know what this function does in order to interpret the results. Mostly, I struggle with what the coef_ attribute represents.

I understand that the left side is the top 20 feature names with the lowest coefficients, and the right side the features with the highest coefficients. But how exactly does this work, and how do I interpret this overview? Does it mean that the left side holds the most informative features for the negative class, and the right side the most informative features for the positive class?

Also, on the left side it looks as if the feature names are sorted alphabetically; is that correct?

Tillage answered 25/4, 2015 at 15:51

The coef_ attribute of MultinomialNB is a re-parameterization of the naive Bayes model as a linear classifier model. For a binary classification problem it is basically the log of the estimated probability of a feature given the positive class, which means that higher values indicate more important features for the positive class.

The output above shows the 20 lowest values (least predictive features) in the first column and the 20 highest values (most predictive features) in the second column.
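
A minimal sketch of that re-parameterization on toy data; the corpus is hypothetical, and it assumes a scikit-learn version old enough to still expose coef_ on MultinomialNB (the alias was later deprecated and removed in favor of feature_log_prob_):

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Hypothetical two-class corpus, for illustration only
    docs = ["the senate passed the bill", "the senate debated the bill",
            "the team won the game", "the team lost the game"]
    y = [1, 1, 0, 0]   # 1 = "positive" class

    X = CountVectorizer().fit_transform(docs)
    clf = MultinomialNB().fit(X, y)

    # For a binary model, coef_[0] was simply an alias for feature_log_prob_[1],
    # i.e. the log of P(feature | classes_[1])
    print(np.array_equal(clf.coef_[0], clf.feature_log_prob_[1]))  # True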

Hasa answered 28/4, 2015 at 9:39

Comments:

- Thank you! That makes sense. Though now I wonder: how do I get the most important features for the other class, the negative class? (Tillage)
- np.array_equal(clf.coef_[0], clf.feature_log_prob_[1]) returns True, so clf.feature_log_prob_[0] presumably gives the feature coefficients for the negative class. (Landonlandor)
- I have two classes: array([0, 1]). When I call coef_[0], does it give me the coefficients for the positive class (1) or the negative class (0)? (Jabiru)
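
Regarding the three comments above, a hedged sketch of how to read per-class log probabilities: the rows of feature_log_prob_ follow the order of clf.classes_, so coef_[0] corresponded to classes_[1], and the other class is simply the other row (toy data again, for illustration only):

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["the senate passed the bill", "the senate debated the bill",
            "the team won the game", "the team lost the game"]  # hypothetical
    y = [1, 1, 0, 0]

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(docs), y)

    names = np.asarray(vec.get_feature_names_out())  # get_feature_names() on older versions
    for i, label in enumerate(clf.classes_):
        # row i of feature_log_prob_ holds log P(feature | classes_[i])
        top = np.argsort(clf.feature_log_prob_[i])[::-1][:3]
        print(label, names[top])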

The numbers shown in the coef_ attribute are logs of probabilities. Exponentiating a row recovers the conditional feature probabilities, which sum to 1 for each class, and the length of a row equals the number of features. You can check this for yourself:

    import numpy as np

    np.exp(clf.coef_[0]).sum()  # the probabilities sum to 1

Also, to answer the comment above asking which class coef_[0] refers to: the .classes_ attribute shows the order of the classes that the rows of coef_ (and feature_log_prob_) refer to.

Here is a similar post I came across: How to calculate feature_log_prob_ in the naive_bayes MultinomialNB

Chemisette answered 17/3, 2021 at 4:24
