For an imbalanced dataset, I used per-sample weights with XGBoost (passed via the sample_weight argument of fit()), where each sample's weight is assigned according to the class it belongs to.
import numpy as np

def CreateBalancedSampleWeights(y_train, largest_class_weight_coef):
    # Distinct class labels (sorted) and the number of samples in each
    classes, class_samples = np.unique(y_train, return_counts=True)
    total_samples = class_samples.sum()
    n_classes = len(classes)
    # "Balanced" weights: inversely proportional to class frequency
    weights = total_samples / (n_classes * class_samples)
    class_weight_dict = dict(zip(classes, weights))
    # Scale the weight of the most frequent class by the given coefficient
    largest_class = classes[np.argmax(class_samples)]
    class_weight_dict[largest_class] *= largest_class_weight_coef
    # Map each sample's label to its class weight
    sample_weights = [class_weight_dict[y] for y in y_train]
    return sample_weights
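As a quick sanity check (not part of the original answer), here is what the function produces on a small made-up label array:

import numpy as np

# Toy labels, illustrative only: class 0 appears 6 times, class 1 twice
y_toy = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# Majority class covers 6 of 8 samples, so the coefficient is 0.75
w = CreateBalancedSampleWeights(y_toy, largest_class_weight_coef=0.75)
# The six majority samples each get weight 0.5, the two minority samples 2.0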
Just pass the target column and the occurrence rate of the most frequent class (if the most frequent class has 75 out of 100 samples, then it's 0.75):
largest_class_weight_coef = max(df['Category'].value_counts().values) / df.shape[0]
# pass y_train as a numpy array
weight = CreateBalancedSampleWeights(y_train, largest_class_weight_coef)

# And then use it like this (sample_weight goes to fit(), not the constructor)
from xgboost import XGBClassifier

xg = XGBClassifier(n_estimators=1000, max_depth=20)
xg.fit(X_train, y_train, sample_weight=weight)
That's it :)
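If your problem is binary, a lighter alternative (not part of the answer above) is XGBoost's built-in scale_pos_weight parameter, which the docs suggest setting to the ratio of negative to positive samples. A minimal sketch, assuming y_train holds 0/1 labels and X_train is the same training matrix as above:

from xgboost import XGBClassifier
import numpy as np

# Count negatives and positives (assumes labels are 0/1)
neg, pos = np.bincount(y_train)
xg = XGBClassifier(n_estimators=1000, max_depth=20, scale_pos_weight=neg / pos)
xg.fit(X_train, y_train)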
You can also use

class_weights = dict(enumerate(len(y_train) / (len(np.unique(y_train)) * np.bincount(y_train))))

to calculate class weights for imbalanced classification. Also see this worked example. – Holp
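For instance, a quick illustration of that one-liner on made-up labels (the counts are illustrative, not from the question):

import numpy as np

y_train = np.array([0, 0, 0, 0, 0, 0, 1, 1])  # 6 samples of class 0, 2 of class 1
class_weights = dict(enumerate(len(y_train) / (len(np.unique(y_train)) * np.bincount(y_train))))
# -> {0: 0.666..., 1: 2.0}: the minority class gets the larger weight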