Optuna's TPESampler and RandomSampler suggest the same value for a parameter more than once (this happens for integer, float, and loguniform parameters alike), and I can't find a way to stop them from repeating values. Out of 100 trials quite a few are duplicates: the number of unique suggested values ends up around 80-90 out of 100. If I include more parameters for tuning, say 3, I even see trials where all 3 get exactly the same values a few times within 100 trials.
It looks like this: 75 was suggested for min_data_in_leaf three times:
[I 2020-11-14 14:44:05,320] Trial 8 finished with value: 45910.54012028659 and parameters: {'min_data_in_leaf': 75}. Best is trial 4 with value: 45805.19030897498.
[I 2020-11-14 14:44:07,876] Trial 9 finished with value: 45910.54012028659 and parameters: {'min_data_in_leaf': 75}. Best is trial 4 with value: 45805.19030897498.
[I 2020-11-14 14:44:10,447] Trial 10 finished with value: 45831.75933279074 and parameters: {'min_data_in_leaf': 43}. Best is trial 4 with value: 45805.19030897498.
[I 2020-11-14 14:44:13,502] Trial 11 finished with value: 46125.39810101329 and parameters: {'min_data_in_leaf': 4}. Best is trial 4 with value: 45805.19030897498.
[I 2020-11-14 14:44:16,547] Trial 12 finished with value: 45910.54012028659 and parameters: {'min_data_in_leaf': 75}. Best is trial 4 with value: 45805.19030897498.
Example code below:
import numpy as np
import lightgbm as lgb
import optuna
from optuna.samplers import TPESampler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import mean_squared_error

def lgb_optuna(trial):
    rmse = []
    params = {
        "seed": 42,
        "objective": "regression",
        "metric": "rmse",
        "verbosity": -1,
        "boosting": "gbdt",
        "num_iterations": 1000,
        # the only parameter being tuned
        "min_data_in_leaf": trial.suggest_int("min_data_in_leaf", 1, 100),
    }
    # 5-fold CV stratified on the last column; shuffle=True so random_state takes effect
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for train_index, test_index in cv.split(tfd_train, tfd_train[:, -1]):
        X_train, X_test = tfd_train[train_index], tfd_train[test_index]
        y_train = X_train[:, -2].copy()
        y_test = X_test[:, -2].copy()
        dtrain = lgb.Dataset(X_train[:, :-2], label=y_train)
        dtest = lgb.Dataset(X_test[:, :-2], label=y_test)
        booster_gbm = lgb.train(params, dtrain, valid_sets=[dtest], verbose_eval=False)
        y_predictions = booster_gbm.predict(X_test[:, :-2])
        final_mse = mean_squared_error(y_test, y_predictions)
        final_rmse = np.sqrt(final_mse)
        rmse.append(final_rmse)
    # objective value: mean RMSE over the folds
    return np.mean(rmse)
study = optuna.create_study(sampler=TPESampler(seed=42), direction='minimize')
study.optimize(lgb_optuna, n_trials=100)
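
For reference, this is roughly how I count the duplicates once the study has finished. It's just a sketch that reads study.trials and each trial's params dict (both part of Optuna's public API); collections.Counter is only my choice for tallying:

from collections import Counter

suggested = [t.params["min_data_in_leaf"] for t in study.trials]
counts = Counter(suggested)
print("unique values:", len(counts), "out of", len(suggested), "trials")
print("values suggested more than once:", {v: c for v, c in counts.items() if c > 1})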