xgboost: AttributeError: 'DMatrix' object has no attribute 'handle'

The problem is really strange, because this piece of code worked fine with another dataset.

The full code:

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.cross_validation import train_test_split

# Split the learning set
X_fit, X_eval, y_fit, y_eval = train_test_split(
    train, target, test_size=0.2, random_state=1
)

clf = xgb.XGBClassifier(missing=np.nan, max_depth=6, 
                        n_estimators=5, learning_rate=0.15, 
                        subsample=1, colsample_bytree=0.9, seed=1400)

# fitting
clf.fit(X_fit, y_fit, early_stopping_rounds=50, eval_metric="logloss", eval_set=[(X_eval, y_eval)])
# predict on the test set
y_pred = clf.predict_proba(test)[:, 1]

The last line causes the error below (full output provided):

Will train until validation_0 error hasn't decreased in 50 rounds.
[0] validation_0-logloss:0.554366
[1] validation_0-logloss:0.451454
[2] validation_0-logloss:0.372142
[3] validation_0-logloss:0.309450
[4] validation_0-logloss:0.259002
Traceback (most recent call last):
  File "../src/script.py", line 57, in 
    y_pred= clf.predict_proba(test)[:,1]
  File "/opt/conda/lib/python3.4/site-packages/xgboost-0.4-py3.4.egg/xgboost/sklearn.py", line 435, in predict_proba
    test_dmatrix = DMatrix(data, missing=self.missing)
  File "/opt/conda/lib/python3.4/site-packages/xgboost-0.4-py3.4.egg/xgboost/core.py", line 220, in __init__
    feature_types)
  File "/opt/conda/lib/python3.4/site-packages/xgboost-0.4-py3.4.egg/xgboost/core.py", line 147, in _maybe_pandas_data
    raise ValueError('DataFrame.dtypes for data must be int, float or bool')
ValueError: DataFrame.dtypes for data must be int, float or bool
Exception ignored in: <bound method DMatrix.__del__ of <xgboost.core.DMatrix object>>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.4/site-packages/xgboost-0.4-py3.4.egg/xgboost/core.py", line 289, in __del__
    _check_call(_LIB.XGDMatrixFree(self.handle))
AttributeError: 'DMatrix' object has no attribute 'handle'

What is wrong here? I have no idea how to fix it.

UPD1: Actually this is a Kaggle problem: https://www.kaggle.com/insaff/bnp-paribas-cardif-claims-management/xgboost

Textualist answered 17/3, 2016 at 15:54 Comment(2)
What is the output of X_fit.dtypes and X_eval.dtypes? (Hillegass)
This is for X_fit.dtypes: target int64, v1 float64, v2 float64, v3 int64, v4 float64; test even has object columns. (Textualist)

The problem here is related to the initial data: some of the values are float or integer, while others are object (strings), and xgboost's DMatrix only accepts numeric dtypes. The trailing AttributeError is just a side effect: DMatrix.__init__ raised before self.handle was assigned, so the __del__ cleanup fails. This is why we need to encode the object columns:

from sklearn import preprocessing

# Integer-encode every string (object) column
for f in train.columns:
    if train[f].dtype == 'object':
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(train[f].values))
        train[f] = lbl.transform(list(train[f].values))

for f in test.columns:
    if test[f].dtype == 'object':
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(test[f].values))
        test[f] = lbl.transform(list(test[f].values))

# Replace missing values with a sentinel value
train.fillna(-999, inplace=True)
test.fillna(-999, inplace=True)

# Convert to plain float arrays so xgboost's DMatrix accepts them
train = np.array(train).astype(float)
test = np.array(test).astype(float)
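
One caveat worth noting (my addition, not part of the original answer): fitting a separate LabelEncoder on train and test can map the same string to different integers in the two frames. A minimal sketch of a combined fit, assuming train and test are DataFrames with identical column names:

import pandas as pd
from sklearn import preprocessing

for f in train.columns:
    if train[f].dtype == 'object':
        lbl = preprocessing.LabelEncoder()
        # Fit on the union of both frames so a given string maps to
        # the same integer everywhere; astype(str) also turns NaN into
        # the string 'nan' so the encoder can handle missing values.
        lbl.fit(pd.concat([train[f], test[f]]).astype(str))
        train[f] = lbl.transform(train[f].astype(str))
        test[f] = lbl.transform(test[f].astype(str))

The fillna/astype steps above then work unchanged.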
Textualist answered 18/3, 2016 at 20:25 Comment(0)

You might also want to take a look at the categorical-variable approach shown below:

# Convert object columns to pandas' category dtype
for col in train.select_dtypes(include=['object']).columns:
    train[col] = train[col].astype('category')
    test[col] = test[col].astype('category')

# Encoding categorical features
for col in train.select_dtypes(include=['category']).columns:
    train[col] = train[col].cat.codes
    test[col] = test[col].cat.codes

train.fillna(-999, inplace=True)
test.fillna(-999, inplace=True)

train = np.array(train)
test = np.array(test)
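
A caveat here as well (my note, not part of the original answer): cat.codes assigns category numbers per frame, so the same string can get different codes in train and test, and missing values become -1 rather than being left for fillna. Building one shared category list avoids the mismatch; a minimal sketch, again assuming both frames have the same object columns:

import pandas as pd

for col in train.select_dtypes(include=['object']).columns:
    # One category list derived from both frames, so identical strings
    # receive identical codes; values missing from the list become -1.
    cats = pd.Categorical(pd.concat([train[col], test[col]])).categories
    train[col] = pd.Categorical(train[col], categories=cats).codes
    test[col] = pd.Categorical(test[col], categories=cats).codes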
Dichlorodifluoromethane answered 4/12, 2017 at 14:55 Comment(1)
Wow, thank you, I didn't know about that data type in pandas. (Textualist)
