How does sklearn.svm.SVC's predict_proba() method work internally?
I am using sklearn.svm.SVC from scikit-learn for binary classification and calling its predict_proba() method to get probability estimates. Can anyone tell me how predict_proba() calculates these probabilities internally?

Redblooded answered 27/2, 2013 at 11:50 Comment(0)
Scikit-learn uses LibSVM internally, and this in turn uses Platt scaling, as detailed in this note by the LibSVM authors, to calibrate the SVM to produce probabilities in addition to class predictions.

Platt scaling requires first training the SVM as usual, then fitting two scalar parameters A and B such that

P(y|X) = 1 / (1 + exp(A * f(X) + B))

where f(X) is the signed distance of a sample from the hyperplane (scikit-learn's decision_function method). You may recognize the logistic sigmoid in this definition, the same function that logistic regression and neural nets use for turning decision functions into probability estimates.
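For concreteness, here is a minimal sketch of that mapping in Python (not scikit-learn's actual code path; A and B stand in for the fitted Platt parameters and the values below are made up):

    import numpy as np

    def platt_sigmoid(f, A, B):
        # P(y | X) = 1 / (1 + exp(A * f(X) + B)), with f(X) = decision_function(X)
        return 1.0 / (1.0 + np.exp(A * f + B))

    # Made-up parameters matching the example in the next paragraph:
    # f(X) = 10, A = 1, B = -9.9 gives roughly 0.475.
    print(platt_sigmoid(10.0, A=1.0, B=-9.9))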

Mind you: the B parameter, the "intercept" or "bias" or whatever you like to call it, can cause predictions based on probability estimates from this model to be inconsistent with the ones you get from the SVM decision function f. E.g. suppose that f(X) = 10, so the prediction for X is positive; but if B = -9.9 and A = 1, then P(y|X) = 0.475. I'm pulling these numbers out of thin air, but as you may have noticed, this can occur in practice.
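One way to see this on real data is to compare predict with the argmax of predict_proba; the snippet below is only a sketch on a toy dataset, and whether the two actually disagree depends on the fitted Platt parameters:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(X, y)

    hard = clf.predict(X)                                          # hyperplane-based
    soft = clf.classes_[np.argmax(clf.predict_proba(X), axis=1)]   # Platt-based

    print("disagreements:", np.sum(hard != soft))                  # can be non-zero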

Effectively, Platt scaling trains a probability model on top of the SVM's outputs under a cross-entropy loss function. To prevent this model from overfitting, it uses internal five-fold cross-validation, which means that training SVMs with probability=True can be quite a lot more expensive than training a vanilla, non-probabilistic SVM.
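As a rough illustration of that extra cost (timings will vary with the data and hardware; this is just a sketch):

    import time
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, random_state=0)

    for prob in (False, True):
        t0 = time.time()
        SVC(probability=prob, random_state=0).fit(X, y)
        print(f"probability={prob}: {time.time() - t0:.2f} s")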

Cerracchio answered 27/2, 2013 at 12:49 Comment(10)
Great answer @larsmans. I'm just wondering if the probabilities can be interpreted as a confidence measure for the classification decisions? E.g. very close probabilities for the positive and negative classes for a sample mean the learner is less sure about its classification? – Chinfest
Thanks @larsmans. I've actually observed much more dramatic cases -- predictions of 1, but with probability 0.45. I thought that the Bayes-optimal cutoff used is precisely 0.5. Do you reckon that such dramatic cases can still be explained by numerical instability in LibSVM? – Chinfest
@MosesXu: this is something worth investigating, but I don't have the time to dig into the LibSVM code at the moment. It seems to be inconsistent behavior at first sight, but I think predict does not actually use the probabilities, but rather the SVM hyperplane. – Cerracchio
@MosesXu: I stared at the math a little longer and I realized that with an appropriate value of B, you can get predictions that are really different from the ones you get from the SVM predict and decision_function methods. I fear that when you use Platt scaling, you'll have to commit yourself to either believing predict or believing predict_proba, as the two may be inconsistent. – Cerracchio
@larsmans: it is somewhat surprising that the predict function always sticks to the hyperplane regardless of the probability parameter -- is this because the learned hyperplane always represents minimum structural risk, while the fitted logistic regression, though fitted using n-fold cross-validation, is still prone to overfitting? – Chinfest
@MosesXu: I have no rationale for this behavior except that it is what LibSVM does, and scikit-learn tries to stay compatible with that. A possible reason might be, though, that probability=True does not affect the outcome of decision_function, so there's going to be an inconsistency either way. (The more I think about this, the more I become convinced that Platt scaling is just a hack and RVMs should be used instead of SVMs for probability estimates.) – Cerracchio
@AndreasMueller: it's already there, in the dev version. – Cerracchio
@FredFoo One question: does predict_proba always give the same output probabilities for a given test set? I was debugging my code for almost two days and finally observed that predict_proba does not always give the same output -- why would this happen? – Algarroba
@Algarroba I think this might be because it uses 5-fold CV to estimate the probabilities. If you randomise the sequence before running the SVM, it will produce slightly different results each time unless you set the seed. – Catamite
For future reference regarding @FredFoo's remark about unrepeatable results from predict_proba: @RockTheStar is correct, but you can make your results repeatable by setting random_state=0 in the SVC constructor (see: scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). Note that this has the potential to 'paper over' real issues with the model or training set, so you should proceed with caution. – Walking
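To make the repeatability point concrete, a small sketch (the dataset is arbitrary; with a fixed random_state the internal five-fold split used by Platt scaling, and hence the fitted probabilities, come out the same on every run):

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)

    p1 = SVC(probability=True, random_state=0).fit(X, y).predict_proba(X[:3])
    p2 = SVC(probability=True, random_state=0).fit(X, y).predict_proba(X[:3])
    print((p1 == p2).all())   # True: fixing random_state makes the Platt CV split deterministic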
Actually I found a slightly different answer: they use the following code to convert the decision value to a probability

    double fApB = decision_value * A + B;
    if (fApB >= 0)
        return Math.exp(-fApB) / (1.0 + Math.exp(-fApB));
    else
        return 1.0 / (1.0 + Math.exp(fApB));

Here the A and B values can be found in the model file (probA and probB). This also offers a way to convert a probability back into a decision value, and thus into hinge loss.

When inverting, use the convention that ln(0) = -200.
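Inverting that sigmoid recovers the decision value from a probability; below is a small sketch of the idea in Python (the function and variable names are mine, and the clamp mirrors the ln(0) = -200 convention mentioned above):

    import math

    def probability_to_decision_value(p, A, B):
        # P = 1 / (1 + exp(A * f + B))  =>  f = (ln((1 - p) / p) - B) / A
        def safe_log(x):
            return -200.0 if x <= 0 else math.log(x)   # treat ln(0) as -200
        return (safe_log(1.0 - p) - safe_log(p) - B) / A

    # With A = 1 and B = -9.9 this maps 0.475 back to roughly 10, matching the earlier example.
    print(probability_to_decision_value(0.475, A=1.0, B=-9.9))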

Channing answered 6/2, 2014 at 21:21 Comment(0)
