I am trying to compute the area under the ROC curve using sklearn.metrics.roc_auc_score
as follows:
roc_auc = sklearn.metrics.roc_auc_score(actual, predicted)
where actual
is a binary vector of ground-truth classification labels and predicted
is a binary vector of the labels my classifier predicted.
However, the value of roc_auc
that I get is exactly equal to my accuracy (the proportion of samples whose labels are correctly predicted). This is not a one-off: I have run my classifier with various parameter settings, and every time the two values match.
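For example, here is a small made-up input (hypothetical numbers with balanced classes, just to illustrate what I am seeing; my real vectors come from the classifier) that reproduces the behavior:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical ground truth and hard 0/1 predictions (not probability scores)
actual = np.array([0, 0, 0, 0, 1, 1, 1, 1])
predicted = np.array([0, 1, 0, 0, 1, 1, 1, 0])

roc_auc = roc_auc_score(actual, predicted)
accuracy = accuracy_score(actual, predicted)
print(roc_auc, accuracy)  # both print 0.75
```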
What am I doing wrong here?