How to get feature importance in xgboost?

I'm using xgboost to build a model and trying to find the importance of each feature using get_fscore(), but it returns {}.

My training code is:

import xgboost as xgb

dtrain = xgb.DMatrix(X, label=Y)
watchlist = [(dtrain, 'train')]
param = {'max_depth': 6, 'learning_rate': 0.03}
num_round = 200
bst = xgb.train(param, dtrain, num_round, watchlist)

Is there a mistake in my training code? How do I get feature importance in xgboost?

Audiogenic answered 4/6, 2016 at 8:5

Comments:
  • See #38213149 – Unbolted
  • Check this function for getting an xgboost feature importance data frame. – Homologate
  • You need to name the features first, e.g. bst.feature_names = ['foo', 'bar', ...] (see the sketch below). – Incumber
  • Simplest case of calculating and reporting feature importance: xgboosting.com/xgboost-feature_importances_-property – Moneyer
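
To illustrate the comment about naming features, here is a minimal sketch (the feature names are made up) that builds the DMatrix with feature_names so the importance dict is keyed by readable names:

import xgboost as xgb

# hypothetical names, one per column of X; X and Y are as in the question
feature_names = ['foo', 'bar', 'baz']
dtrain = xgb.DMatrix(X, label=Y, feature_names=feature_names)
bst = xgb.train({'max_depth': 6, 'learning_rate': 0.03}, dtrain, 200, [(dtrain, 'train')])
print(bst.get_fscore())  # keys are 'foo', 'bar', ... for the features actually used in splits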

In your code you can get the importance of each feature as a dict:

bst.get_score(importance_type='gain')

>>{'ftr_col1': 77.21064539577829,
   'ftr_col2': 10.28690566363971,
   'ftr_col3': 24.225014841466294,
   'ftr_col4': 11.234086283060112}

Explanation: the Booster returned by the train() API has a get_score() method, defined as:

get_score(fmap='', importance_type='weight')

  • fmap (str (optional)) – The name of feature map file.
  • importance_type
    • ‘weight’ - the number of times a feature is used to split the data across all trees.
    • ‘gain’ - the average gain across all splits the feature is used in.
    • ‘cover’ - the average coverage across all splits the feature is used in.
    • ‘total_gain’ - the total gain across all splits the feature is used in.
    • ‘total_cover’ - the total coverage across all splits the feature is used in.

https://xgboost.readthedocs.io/en/latest/python/python_api.html
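
For example, a small sketch (assuming bst is a trained Booster with named features) that prints the top 5 features under each importance type:

# bst is assumed to be the Booster returned by xgb.train()
for imp_type in ('weight', 'gain', 'cover', 'total_gain', 'total_cover'):
    scores = bst.get_score(importance_type=imp_type)
    top5 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
    print(imp_type, top5)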

Selfanalysis answered 2/8, 2018 at 3:29

Comments:
  • Why do I get the following error: AttributeError: 'XGBClassifier' object has no attribute 'get_score'? @Selfanalysis – Portis
  • @Portis You need to use bst.get_booster().get_score(importance_type='gain') instead. – Tervalent

Get the table containing scores and feature names, and then plot it.

import pandas as pd

feature_important = model.get_booster().get_score(importance_type='weight')
keys = list(feature_important.keys())
values = list(feature_important.values())

data = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(by="score", ascending=False)
data.nlargest(40, columns="score").plot(kind='barh', figsize=(20, 10))  # plot top 40 features

For example, the output is a horizontal bar chart of the top 40 features ranked by score.

Counterproposal answered 12/10, 2018 at 10:47

Using the sklearn API and XGBoost >= 0.81:

clf.get_booster().get_score(importance_type="gain")

or

regr.get_booster().get_score(importance_type="gain")

For this to work correctly, when you call regr.fit (or clf.fit), X must be a pandas.DataFrame.
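
A minimal end-to-end sketch of that pattern (the data and column names are made up):

import pandas as pd
from xgboost import XGBClassifier

# toy data; the DataFrame column names become the booster's feature names
X = pd.DataFrame({'age': [21, 35, 47, 52, 30, 61],
                  'income': [20, 40, 60, 80, 35, 70]})
y = [0, 0, 1, 1, 0, 1]

clf = XGBClassifier(n_estimators=10)
clf.fit(X, y)

# keys are 'age' and 'income' because X was a DataFrame
print(clf.get_booster().get_score(importance_type='gain'))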

Tee answered 20/3, 2019 at 19:15

Comments:
  • For some reason xgboost seems to have broken model.feature_importances_, so this is what I was looking for. Thank you. – Tacnaarica
  • My experience is that "X passed to .fit must be a pandas.DataFrame" is still true as of 0.9; otherwise you get an empty dict. – Imamate

First, build the model with XGBoost:

import numpy as np
from matplotlib import pyplot
from xgboost import XGBClassifier, plot_importance

model = XGBClassifier()
model.fit(train, label)

model.feature_importances_ is an array, so we can sort its indices in descending order:

sorted_idx = np.argsort(model.feature_importances_)[::-1]

Then print each column name together with its importance, in sorted order (assuming the data was loaded with pandas):

for index in sorted_idx:
    print([train.columns[index], model.feature_importances_[index]]) 

Furthermore, we can plot the importances with XGBoost's built-in function:

plot_importance(model, max_num_features = 15)
pyplot.show()

Use max_num_features in plot_importance to limit the number of features shown if you want.

Jinx answered 14/6, 2018 at 1:37

Comments:
  • plot_importance() should be called as plot_importance(model, importance_type='gain'), otherwise it gives different results than the sorted_idx method, because the default importance_type for plot_importance is 'weight'. This is for xgboost version 1.5.0. – Projection

According to this post, there are 3 different ways to get feature importance from XGBoost:

  • use built-in feature importance,
  • use permutation based importance,
  • use shap based importance.

Built-in feature importance

Code example:

import matplotlib.pyplot as plt
from xgboost import XGBRegressor

# X_train, y_train and boston come from the Boston housing example mentioned below
xgb = XGBRegressor(n_estimators=100)
xgb.fit(X_train, y_train)
sorted_idx = xgb.feature_importances_.argsort()
plt.barh(boston.feature_names[sorted_idx], xgb.feature_importances_[sorted_idx])
plt.xlabel("Xgboost Feature Importance")

Please be aware of what type of feature importance you are using. There are several types of importance, see the docs. The scikit-learn-like API of XGBoost returns gain importance, while get_fscore returns the weight type.
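
If in doubt about what your installed version returns, a quick sketch to compare them side by side (model is assumed to be a fitted XGBRegressor or XGBClassifier trained on a DataFrame):

booster = model.get_booster()
print(dict(zip(booster.feature_names, model.feature_importances_)))  # scikit-learn wrapper's importances
print(booster.get_fscore())                                          # same as importance_type='weight'
print(booster.get_score(importance_type='gain'))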

Permutation based importance

from sklearn.inspection import permutation_importance

perm_importance = permutation_importance(xgb, X_test, y_test)
sorted_idx = perm_importance.importances_mean.argsort()
plt.barh(boston.feature_names[sorted_idx], perm_importance.importances_mean[sorted_idx])
plt.xlabel("Permutation Importance")

This is my preferred way to compute importance. However, it can fail in the case of highly collinear features, so be careful! It uses permutation_importance from scikit-learn.

SHAP based importance

import shap

explainer = shap.TreeExplainer(xgb)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, plot_type="bar")

To use the above code, you need to have the shap package installed.

I ran the example analysis on the Boston data (house price regression from scikit-learn). Below are the 3 feature importance plots:

(Plots: built-in XGBoost importance, permutation based importance, and SHAP importance.)

All plots are for the same model! As you see, there is a difference in the results. I prefer permutation-based importance because I have a clear picture of which feature impacts the performance of the model (if there is no high collinearity).

Valeric answered 28/8, 2020 at 10:47

For feature importance, try this:

Classification:

import pandas as pd
pd.DataFrame(list(bst.get_fscore().items()), columns=['feature', 'importance']).sort_values('importance', ascending=False)

Regression:

xgb.plot_importance(bst)
Forepeak answered 23/8, 2016 at 17:58

Comments:
  • Neither of these solutions currently works; for some reason the model loses the feature names and returns an empty dict. – Belak
  • Is it a model you just trained or are you loading a pickled model? – Forepeak

For anyone who comes across this issue while using xgb.XGBRegressor(), the workaround I'm using is to keep the data in a pandas.DataFrame() or numpy.array() and not to convert it to a DMatrix. Also, I had to make sure the gamma parameter is not specified for the XGBRegressor.

# alg is an xgb.XGBRegressor instance and dtrain is a pandas DataFrame here
fit = alg.fit(dtrain[ft_cols].values, dtrain['y'].values)
ft_weights = pd.DataFrame(fit.feature_importances_, columns=['weights'], index=ft_cols)

After fitting the regressor, fit.feature_importances_ returns an array of weights, which I'm assuming is in the same order as the feature columns of the pandas DataFrame.

My current setup is Ubuntu 16.04, Anaconda distro, python 3.6, xgboost 0.6, and sklearn 18.1.

Belak answered 17/2, 2017 at 17:54

I don't know how to get the values exactly, but there is a good way to plot feature importance:

import matplotlib.pyplot as plt
import xgboost as xgb

model = xgb.train(params, d_train, 1000, watchlist)
fig, ax = plt.subplots(figsize=(12, 18))
xgb.plot_importance(model, max_num_features=50, height=0.8, ax=ax)
plt.show()
Steinbach answered 8/7, 2017 at 20:12

Try this

fscore = clf.best_estimator_.booster().get_fscore()
Sabian answered 16/2, 2017 at 13:0

Comments:
  • Not sure if this is applicable for regression, but this does not work either, as the clf doesn't have a best_estimator_ attribute and get_fscore() returns an empty dict. – Belak
  • It's for the XGBClassifier. – Sabian
  • AttributeError: 'XGBClassifier' object has no attribute 'best_estimator_'. Something is wrong here. – Henbane
  • best_estimator_ is required only if you are using something like GridSearchCV for parameter tuning. If you are using xgboost without this, you should just do clf.booster().get_fscore(). – Galoot
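
To illustrate the last comment, a rough sketch of where best_estimator_ comes from (GridSearchCV is assumed; on recent xgboost versions the accessor is get_booster() rather than booster()):

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

search = GridSearchCV(XGBClassifier(), {'max_depth': [3, 6]}, cv=3)
search.fit(X, y)  # X as a pandas DataFrame keeps the real feature names

# best_estimator_ exists only after fitting a search object like this
print(search.best_estimator_.get_booster().get_fscore())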

In case you are using XGBRegressor, try with: model.get_booster().get_score().

That returns the results, which you can then visualize directly with the plot_importance command.
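
A short sketch of that combination (model is assumed to be a fitted XGBRegressor trained on a DataFrame; plot_importance also accepts the score dict directly):

import matplotlib.pyplot as plt
import xgboost as xgb

scores = model.get_booster().get_score(importance_type='weight')
xgb.plot_importance(scores)
plt.show()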

Bargainbasement answered 24/4, 2020 at 18:9

Comments:
  • I am using XGBClassifier, however this is the only code that returns values for the features; I am wondering why! – Portis

None of the above worked for me; this is the code I ended up with to sort features by importance.

from collections import Counter
Counter({k: v for k, v in sorted(model.get_fscore().items(), key=lambda item: item[1], reverse=True)}).most_common()

Just replace model with the name of your model and everything will be there. Of course I'm doing the same thing twice; there's no need to order the dict before passing it to Counter, but I figured it wouldn't hurt to leave it there in case anyone hates Counters.
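
If you do drop the redundant step, the same ordering comes straight from sorted() (a one-line sketch):

top_features = sorted(model.get_fscore().items(), key=lambda item: item[1], reverse=True)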

Overestimate answered 16/8, 2021 at 3:52
