Why do svm_predict and svm_predict_probability give different results in Java libsvm for an XOR problem?

I have a simple XOR problem that I want to learn using the RBF kernel in libsvm. When I train the Java libsvm on an XOR problem like:

 x    y
0,0   -1
0,1   1
1,0   1
1,1   -1

The result I get for classifying a test vector (0,0) is -1 if I use svm.svm_predict, but +1 if I use svm.svm_predict_probability. Even the returned probabilities are reversed. The code I use and the results are below. Can anyone please tell me what I am doing wrong here?

public static void main(String[] args) {
    svm_problem sp = new svm_problem();
    svm_node[][] x = new svm_node[4][2];
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 2; j++) {
            x[i][j] = new svm_node();
        }
    }
    x[0][0].value = 0;
    x[0][1].value = 0;

    x[1][0].value = 1;
    x[1][1].value = 1;

    x[2][0].value = 0;
    x[2][1].value = 1;

    x[3][0].value = 1;
    x[3][1].value = 0;


    double[] labels = new double[]{-1,-1,1,1};
    sp.x = x;
    sp.y = labels;
    sp.l = 4;
    svm_parameter prm = new svm_parameter();
    prm.svm_type = svm_parameter.C_SVC;
    prm.kernel_type = svm_parameter.RBF;
    prm.C = 1000;
    prm.eps = 0.0000001;
    prm.gamma = 10;
    prm.probability = 1;
    prm.cache_size=1024;
    System.out.println("Param Check " + svm.svm_check_parameter(sp, prm));
    svm_model model = svm.svm_train(sp, prm);
    System.out.println(" PA "+ model.probA[0] );
    System.out.println(" PB " + model.probB[0] );
    System.out.println(model.sv_coef[0][0]);
    System.out.println(model.sv_coef[0][1]);
    System.out.println(model.sv_coef[0][2]);
    System.out.println(model.sv_coef[0][3]);
    System.out.println(model.SV[0][0].value + "\t" + model.SV[0][1].value);
    System.out.println(model.SV[1][0].value + "\t" + model.SV[1][1].value);
    System.out.println(model.SV[2][0].value + "\t" + model.SV[2][1].value);
    System.out.println(model.SV[3][0].value + "\t" + model.SV[3][1].value);
    System.out.println(model.label[0]);
    System.out.println(model.label[1]);
    svm_node[] test = new svm_node[]{new svm_node(), new svm_node()};
    test[0].value = 0;
    test[1].value = 0;
    double[] l = new double[2]; 
    double result_prob = svm.svm_predict_probability(model, test,l);
    double result_normal = svm.svm_predict(model, test);
    System.out.println("Result with prob " + result_prob);
    System.out.println("Result normal " + result_normal);
    System.out.println("Probability " + l[0] + "\t" + l[1]);
}

--------- Result -------------

Param Check null
*
.
.
optimization finished, #iter = 3
nu = 0.0010000908050150552
obj = -2.000181612091545, rho = 0.0
nSV = 4, nBSV = 0
Total nSV = 4
 PA 3.2950351477129125
 PB -2.970957107176531E-12
1.0000908039844314
1.0000908060456788
-1.0000908039844314
-1.0000908060456788
0.0 0.0
1.0 1.0
0.0 1.0
1.0 0.0
-1
1
Result with prob 1.0
Result normal -1.0
Probability 0.03571492727188865     0.9642850727281113

Clearly the results are completely opposite. This seems to happen with any example I choose as the test vector.

Can anybody throw some light on this? Thanks in advance.

Gayn answered 13/5, 2011 at 7:20 Comment(0)

I asked Chih-Jen Lin about the XOR problem, because I had the same issue.

Quoting from his answer:

  • for -b 1, internally we need to do a 5-fold cv. Given so few instances, weird results may occur

This means it works once there are enough (even identical) inputs: copy/paste the input vectors 5-6 times, so you have 20 entries instead of 4, and it will work.

It also means that svm_predict will always give you the right answer, while svm_predict_probability will only do so if the data set is big enough. And don't forget that the outputs of the two methods aren't expected to be identical.
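To illustrate, here is a rough sketch of my own (not code from the reply) that replicates the four XOR points so the internal 5-fold cross-validation behind probability=1 has something to work with; REPS is an arbitrary repetition count, and the libsvm classes are the same ones used in the question:

// Sketch: replicate the 4 XOR samples REPS times before training.
int REPS = 5;                                   // assumption: 5 copies -> 20 training points
double[][] xor = {{0, 0}, {1, 1}, {0, 1}, {1, 0}};
double[] y = {-1, -1, 1, 1};

svm_problem sp = new svm_problem();
sp.l = xor.length * REPS;
sp.x = new svm_node[sp.l][];
sp.y = new double[sp.l];
for (int r = 0; r < REPS; r++) {
    for (int i = 0; i < xor.length; i++) {
        int k = r * xor.length + i;
        sp.x[k] = new svm_node[2];
        for (int j = 0; j < 2; j++) {
            sp.x[k][j] = new svm_node();
            sp.x[k][j].index = j + 1;           // 1-based feature index
            sp.x[k][j].value = xor[i][j];
        }
        sp.y[k] = y[i];
    }
}
// Train with the same svm_parameter as in the question;
// svm_predict_probability should now agree with svm_predict.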

Schematic answered 23/11, 2012 at 14:25 Comment(0)

As far as I know, the order of the probability output vector is the same as the order in which libsvm encounters the classes in the training data. Ensuring that you first have all the examples of one class (e.g. with label 1) and then all of the other class (e.g. with label -1) will make the output come out the way you probably expect. This worked for me when training through the MATLAB interface, but it should work the same for the C and Java versions.
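For illustration, a minimal sketch assuming the probability estimates come back in the order reported by svm_get_labels; this lets you map them explicitly instead of relying on the order of the training data (model and test are the variables from the question):

// Query the class order from the trained model and print the estimates against it.
int nrClass = svm.svm_get_nr_class(model);
int[] labels = new int[nrClass];
svm.svm_get_labels(model, labels);              // class order as seen during training, e.g. [-1, 1]
double[] prob = new double[nrClass];
double predicted = svm.svm_predict_probability(model, test, prob);
for (int i = 0; i < nrClass; i++) {
    System.out.println("P(class " + labels[i] + ") = " + prob[i]);
}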

Hinterland answered 16/10, 2011 at 15:7 Comment(0)

This is only half an answer as I can't get it to work either...

I think you are specifying your data incorrectly. libsvm uses a sparse data format, which means each svm_node has an index and a value. This is an efficiency measure that allows you to omit features which are zero in large vectors with few non-zero features.

So, your code should be:

// sample 0: (0, 0)
x[0][0].index = 1;
x[0][0].value = 0;
x[0][1].index = 2;
x[0][1].value = 0;

// sample 1: (1, 1)
x[1][0].index = 1;
x[1][0].value = 1;
x[1][1].index = 2;
x[1][1].value = 1;

// sample 2: (0, 1)
x[2][0].index = 1;
x[2][0].value = 0;
x[2][1].index = 2;
x[2][1].value = 1;

// sample 3: (1, 0)
x[3][0].index = 1;
x[3][0].value = 1;
x[3][1].index = 2;
x[3][1].value = 0;

and

test[0].index = 1;
test[0].value = 0;
test[1].index = 2;
test[1].value = 0;

This doesn't seem to fix the problem though. Hopefully it's a step in the right direction.
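For reference, a sketch (not part of the original answer) that sets the same 1-based indices inside the question's construction loop instead of sample by sample:

for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 2; j++) {
        x[i][j] = new svm_node();
        x[i][j].index = j + 1;   // libsvm feature indices are 1-based
    }
}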

Keijo answered 18/5, 2011 at 8:54 Comment(2)
Thanks, I did not know about the index stuff, will try adding it to my code.Gayn
I'll try to take a closer look at it; I fiddled with the parameters without any luck. From looking at the docs and the source, in a correctly trained model, svm_predict and svm_predict_probability should output the same thing.Keijo

I don't know libsvm, but judging from other libraries you could simply be misunderstanding the meaning of the probability output: it might not be the probability of being in the "positive" class, but of being in the class of the first input sample, which in your case has a label of -1. So, if you reorder your samples so that the first sample has a label of +1, you might get the output you expect.

Resh answered 16/5, 2011 at 13:0 Comment(1)
Hi, thanks for the reply. However, I tried interchanging the labels, but that does not seem to matter. It always assigns a higher probability to the wrong class, as if someone forgot to reverse the sign or something.Gayn

Your last index should be -1 in the training and testing data.

Vaginismus answered 29/5, 2013 at 2:3 Comment(0)
