GridSearchCV has no attribute grid.grid_scores_
I tried grid.cv_results_, but it didn't fix the problem.

from sklearn.model_selection import GridSearchCV
params = {
    'decisiontreeclassifier__max_depth': [1, 2],
    'pipeline-1__clf__C': [0.001, 0.1, 100.0]
}
grid = GridSearchCV(estimator = mv_clf,
    param_grid = params,
    cv = 10,
    scoring = 'roc_auc')
grid.fit(X_train, y_train)
for params, mean_score, scores in grid.grid_scores_:
    print("%0.3f+/-%0.2f %r" %
        (mean_score, scores.std() / 2, params))
#AttributeError: 'GridSearchCV' object has no attribute 'grid_scores_'

I tried replacing grid.grid_scores_ with grid.cv_results_, but that didn't fix the problem either. The objective is to print the different hyperparameter value combinations and the average ROC AUC scores computed via 10-fold cross-validation.

Danidania answered 5/4, 2019 at 16:26

Comment: grid.cv_results_ works in the latest scikit-learn v0.20.1 (where indeed a grid_scores_ attribute does not exist) - check the documentation. - Headcheese
In the latest scikit-learn library, grid_scores_ has been deprecated and replaced with cv_results_.

cv_results_ gives the detailed results of a grid-search run.

grid.cv_results_.keys()

Output: dict_keys(['mean_fit_time', 'std_fit_time', 'mean_score_time', 'std_score_time', 'param_n_estimators', 'params', 'split0_test_score', 
'split1_test_score', 'split2_test_score', 'split3_test_score', 'split4_test_score',
'mean_test_score', 'std_test_score', 'rank_test_score'])

cv_results_ gives more detailed output than grid_scores_ did. The result is a dictionary, and we can extract the relevant metrics by iterating over its keys. Below is an example from a grid search run with cv=5:

for i in ['mean_test_score', 'std_test_score', 'param_n_estimators']:
    print(i, " : ", grid.cv_results_[i])

Output: mean_test_score  :  [0.833 0.83 0.83 0.837 0.838 0.8381 0.83]
        std_test_score  :  [0.011 0.009 0.010 0.0106 0.010 0.0102 0.0099]
        param_n_estimators  :  [20 30 40 50 60 70 80]
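
To print each hyperparameter combination together with its mean cross-validated score, as the question asks, you can zip the relevant cv_results_ keys together. A minimal sketch, assuming a fitted grid as in the question (dividing the standard deviation by 2 just mirrors the asker's original print format):

results = grid.cv_results_
# 'params' holds the parameter dict for each candidate;
# 'mean_test_score' and 'std_test_score' are aggregated across the CV folds.
for mean_score, std_score, params in zip(results['mean_test_score'],
                                         results['std_test_score'],
                                         results['params']):
    print("%0.3f+/-%0.2f %r" % (mean_score, std_score / 2, params))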
Erinerina answered 15/11, 2019 at 13:03

Comment: The answer given above and the answer at this link https://mcmap.net/q/906545/-attributeerror-39-str-39-object-has-no-attribute-39-parameters-39-due-to-new-version-of-sklearn really helped. - Kalmick
Comment: Is it possible to make it include the validation scores for each epoch as well? I cannot find a way. I'm trying to plot validation vs. training histories. - Quipu
