I am using sklearn.svm.SVC from scikit-learn for binary classification, and its predict_proba() method to get probability estimates. Can anyone tell me how predict_proba() internally calculates the probability?
Scikit-learn uses LibSVM internally, and this in turn uses Platt scaling, as detailed in this note by the LibSVM authors, to calibrate the SVM to produce probabilities in addition to class predictions.
Platt scaling requires first training the SVM as usual, then optimizing scalar parameters A and B such that

P(y|X) = 1 / (1 + exp(A * f(X) + B))

where f(X) is the signed distance of a sample from the hyperplane (scikit-learn's decision_function method). You may recognize the logistic sigmoid in this definition, the same function that logistic regression and neural nets use to turn decision values into probability estimates.
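As a sketch (not scikit-learn's actual code path), the calibration step can be written as follows; the A and B values here are illustrative stand-ins for the parameters LibSVM fits internally:

```python
import numpy as np

def platt_probability(decision_values, A, B):
    # Platt scaling: squash signed distances f(X) (decision_function output)
    # through a logistic sigmoid. A and B are fitted by LibSVM internally;
    # the values passed below are purely illustrative.
    f = np.asarray(decision_values, dtype=float)
    return 1.0 / (1.0 + np.exp(A * f + B))

# With a negative A (the typical fitted sign), larger margins map to
# probabilities closer to 1, and f(X) = 0 maps to 0.5 when B = 0.
probs = platt_probability([-2.0, 0.0, 2.0], A=-1.5, B=0.0)
```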
Mind you: the B parameter, the "intercept" or "bias" or whatever you like to call it, can cause predictions based on probability estimates from this model to be inconsistent with the ones you get from the SVM decision function f. E.g. suppose that f(X) = 10, so the prediction for X is positive; but if B = -9.9 and A = 1, then P(y|X) = .475, and thresholding that probability at .5 yields the negative class. I'm pulling these numbers out of thin air, but you've noticed that this inconsistency can occur in practice.
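Plugging those illustrative numbers into the formula confirms the arithmetic:

```python
import math

# Illustrative numbers from above: f(X) = 10 (positive SVM prediction),
# A = 1, B = -9.9.
f, A, B = 10.0, 1.0, -9.9
p = 1.0 / (1.0 + math.exp(A * f + B))  # 1 / (1 + e^0.1)

# p is just under 0.5, so thresholding the probability at 0.5 would
# predict the negative class even though the SVM itself predicts positive.
```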
Effectively, Platt scaling trains a probability model on top of the SVM's outputs under a cross-entropy loss function. To prevent this model from overfitting, it uses an internal five-fold cross-validation, meaning that training SVMs with probability=True can be quite a lot more expensive than training a vanilla, non-probabilistic SVM.
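A small runnable sketch of this (the toy data and settings are my own, not from the answer): with probability=True, predict still follows the hyperplane while predict_proba goes through the cross-validated Platt model, so the two routes can disagree near the boundary:

```python
import numpy as np
from sklearn.svm import SVC

# Toy binary problem; any small dataset will do.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# probability=True triggers the internal five-fold CV plus the Platt fit,
# which is the expensive part of training.
clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)

pred_hyperplane = clf.predict(X)                       # hyperplane side
pred_from_proba = clf.predict_proba(X).argmax(axis=1)  # Platt model

# Count points where the two prediction routes disagree.
n_disagreements = int((pred_hyperplane != pred_from_proba).sum())
```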
predict does not actually use the probabilities, but rather the SVM hyperplane. – Cerracchio

Because of the B parameter, you can get predictions that are really different from the ones you get from the SVM predict and decision_function methods. I fear that when you use Platt scaling, you'll have to commit yourself to either believing predict or believing predict_proba, as the two may be inconsistent. – Cerracchio

probability=True does not affect the outcome of decision_function, so there's going to be an inconsistency either way. (The more I think about this, the more I become convinced that Platt scaling is just a hack and RVMs should be used instead of SVMs for probability estimates.) – Cerracchio

Regarding predict_proba: @RockTheStar is correct, but you can make your result repeatable by setting random_state=0 in the SVC constructor (see: scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). Note that this has the potential to 'paper over' real issues with the model or training set, so you should proceed with caution. –
Walking

Actually, I found a slightly different answer: this code is used to convert a decision value into a probability:
double fApB = decision_value * A + B;
// Equivalent to 1.0 / (1.0 + Math.exp(fApB)); the two branches avoid
// overflow in Math.exp when |fApB| is large.
if (fApB >= 0)
    return Math.exp(-fApB) / (1.0 + Math.exp(-fApB));
else
    return 1.0 / (1.0 + Math.exp(fApB));
Here the A and B values can be found in the model file (probA and probB). Inverting this sigmoid also offers a way to convert a probability back into a decision value, and thus into a hinge loss. When inverting, note that ln(0) is -∞; substitute a large negative value such as -200 to keep the result finite.
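A minimal sketch of that inversion in Python (the A and B values below are illustrative stand-ins for a model's probA and probB):

```python
import math

def probability_to_decision(p, A, B):
    # Invert the Platt sigmoid p = 1 / (1 + exp(A*f + B)) to recover f:
    #   f = (ln((1 - p) / p) - B) / A
    # ln(0) is -infinity, so follow the convention above and substitute
    # -200 (and symmetrically +200) when p is exactly 1 or 0.
    if p >= 1.0:
        log_term = -200.0
    elif p <= 0.0:
        log_term = 200.0
    else:
        log_term = math.log((1.0 - p) / p)
    return (log_term - B) / A

# Round trip with illustrative parameters standing in for probA/probB:
A, B = -1.5, 0.1
f = 2.0
p = 1.0 / (1.0 + math.exp(A * f + B))
f_back = probability_to_decision(p, A, B)
```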