How can I get the relative importance of features of a logistic regression for a particular prediction?

I am using a Logistic Regression (in scikit) for a binary classification problem, and am interested in being able to explain each individual prediction. To be more precise, I'm interested in predicting the probability of the positive class, and having a measure of the importance of each feature for that prediction.

Using the coefficients (betas) as a measure of importance is generally a bad idea, as answered here, but I have yet to find a good alternative.

So far the best I have found are the following three options:

  1. Monte Carlo option: fixing all other features, re-run the prediction, replacing the feature we want to evaluate with random samples from the training set. Doing this a large number of times establishes a baseline probability for the positive class; comparing it with the probability of the positive class from the original run gives a measure of the importance of the feature (a code sketch of this option follows the list).
  2. "Leave-one-out" classifiers: To evaluate the importance of a feature, first create a model which uses all features, and then another that uses all features except the one being tested. Predict the new observation using both models. The difference between the two would be the importance of the feature.
  3. Adjusted betas: Based on this answer, rank each feature by 'the magnitude of its coefficient times the standard deviation of the corresponding parameter in the data.'
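
Here is a minimal sketch of the Monte Carlo option. All names are hypothetical (clf for a fitted scikit-learn LogisticRegression, X_train for the training matrix, x for the observation being explained):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: a fitted model and one observation to explain.
X_train, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X_train, y)
x = X_train[0]

rng = np.random.default_rng(0)
p_original = clf.predict_proba(x.reshape(1, -1))[0, 1]

importances = []
for i in range(X_train.shape[1]):
    # Keep all other features fixed and replace feature i with
    # random draws from the training set.
    n_draws = 1000
    X_perturbed = np.tile(x, (n_draws, 1))
    X_perturbed[:, i] = rng.choice(X_train[:, i], size=n_draws)
    p_baseline = clf.predict_proba(X_perturbed)[:, 1].mean()
    # The gap between the actual prediction and this baseline is the
    # Monte Carlo importance of feature i for this one prediction.
    importances.append(p_original - p_baseline)

print(importances)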

All three options (Monte Carlo, "leave-one-out" and adjusted betas) seem like poor solutions to me:

  1. The Monte Carlo option depends on the distribution of the training set, and I cannot find any literature to support it.
  2. The "leave one out" would be easily tricked by two correlated features (when one were absent, the other one would step in to compensate, and both would be given 0 importance).
  3. The adjusted betas sound plausible, but I cannot find any literature to support them either (the ranking itself is a one-liner; see the sketch after this list).
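
For reference, the adjusted-betas ranking from option 3 is a one-liner, continuing with the hypothetical clf and X_train from the sketch above:

import numpy as np

# Rank features by |coefficient| times the standard deviation of that feature.
importance = np.abs(clf.coef_[0]) * X_train.std(axis=0)
ranking = np.argsort(importance)[::-1]  # feature indices, most important first

Note this is a global ranking (the same for every prediction), unlike the per-prediction measures above.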

Actual question: What is the best way to interpret the importance of each feature, at the moment of a decision, with a linear classifier?

Quick note #1: for Random Forests this is trivial: we can simply use the prediction + bias decomposition, as explained beautifully in this blog post. The problem here is how to do something similar with linear classifiers such as Logistic Regression.

Quick note #2: there are a number of related questions on Stack Overflow (1 2 3 4 5). I have not been able to find an answer to this specific question.

Tremolo answered 30/12, 2015 at 12:23
Can't you use any feature selection technique for this on the training data itself? – Farber
I can, but that's not quite the point here. Let's say I build a model (it has a number of features, selected at that moment). Now I make a prediction. At the moment of that prediction, I want to know the importance of each individual feature, the way I would in a Random Forest with tree importances. Mind you, maybe it is always the same for each prediction; I have not figured this out yet. – Tremolo

If you want the importance of the features for a particular decision, why not simulate the decision_function (which is provided by scikit-learn, so you can test whether you get the same value) step by step? The decision function for linear classifiers is simply:

intercept_ + coef_[0]*feature[0] + coef_[1]*feature[1] + ...

The importance of feature i is then just coef_[i]*feature[i]. Of course, this is similar to looking at the magnitude of the coefficients, but since the coefficient is multiplied by the actual feature value, and since this is exactly what happens under the hood, it might be your best bet.
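
A minimal sketch of this decomposition, on illustrative data (the dataset and variable names are not from the question):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative data and model; any fitted binary LogisticRegression works.
X, y = make_classification(n_samples=200, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)
x = X[0]  # the single prediction we want to explain

# Per-feature contributions to the decision function: coef_[i] * feature[i].
contributions = clf.coef_[0] * x
score = clf.intercept_[0] + contributions.sum()

# Sanity check: the step-by-step sum matches scikit-learn's decision_function.
assert np.isclose(score, clf.decision_function(x.reshape(1, -1))[0])
print(contributions)

A positive contribution pushes this prediction toward the positive class and a negative one pushes it away; the predicted probability is the logistic sigmoid of the total score.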

Sideshow answered 11/1, 2016 at 14:05

I suggest using eli5, which already has similar things implemented.

For your actual question ("What is the best way to interpret the importance of each feature, at the moment of a decision, with a linear classifier?"), I would say the answer comes from the function show_weights() in eli5.

Furthermore, this works with many other classifiers as well.

For more info, you can see this related question.
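
A short sketch of what that looks like, assuming eli5 is installed (show_weights() renders HTML in notebooks, so the text formatter is used here; explain_prediction() is eli5's per-observation counterpart, which is closer to what the question asks for):

import eli5
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative model; in a notebook you could call eli5.show_weights(clf) directly.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

# Global feature weights, as plain text:
print(eli5.format_as_text(eli5.explain_weights(clf)))

# Contributions of each feature to one particular prediction:
print(eli5.format_as_text(eli5.explain_prediction(clf, X[0])))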

Baty answered 20/4, 2018 at 8:54
