Why is the accuracy coming out as 0%? MATLAB LIBSVM

I extracted PCA features using:

function [mn,A1,A2,Eigenfaces] = pca(T,f1,nf1)
mn=mean(T,2);        %T is the whole training set; mn is its column mean
train=size(T,2);
A=[];
for i=1:train
    temp=double(T(:,i))-mn;
    A=[A temp];
end

train=size(f1,2);    %f1 - Face 1 images from training set 'T'
A1=[];
for i=1:train
    temp=double(f1(:,i))-mn;
    A1=[A1 temp];
end

train=size(nf1,2);   %nf1 - Images other than face 1 from training set 'T'
A2=[];
for i=1:train
    temp=double(nf1(:,i))-mn;
    A2=[A2 temp];
end

L=A'*A;              %"snapshot" trick: eigendecompose the small NxN matrix
[V D]=eig(L);
L_eig=[];
for i=1:size(V,2)
    if(D(i,i)>1)     %keep eigenvectors whose eigenvalue exceeds 1
       L_eig=[L_eig V(:,i)];   %V(:,i), not V(:,1)
    end
end
Eigenfaces=A*L_eig;
end
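The function above uses the standard "snapshot" trick: instead of eigendecomposing the huge pixel-by-pixel covariance matrix, it eigendecomposes the small `A'*A` and lifts the kept eigenvectors back with `A*L_eig`. As an illustration only, the same computation in NumPy (with hypothetical data sizes, not the 65536-pixel images from the question) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((64, 10))      # hypothetical: 10 images, 64 pixels each,
                                       # one flattened image per column as in the MATLAB code

mn = T.mean(axis=1, keepdims=True)     # mean image (column vector), mean(T,2)
A = T - mn                             # centred data, one column per image

L = A.T @ A                            # small 10x10 matrix (snapshot trick)
eigvals, V = np.linalg.eigh(L)         # symmetric eigendecomposition

keep = eigvals > 1                     # same "eigenvalue > 1" threshold as the loop
eigenfaces = A @ V[:, keep]            # lift back to pixel space: one eigenface per column

print(eigenfaces.shape)                # (64, number of kept components)
```

Each kept column of `eigenfaces` is (up to scale) an eigenvector of the full covariance `A*A'`, which is why the trick works.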

Then I projected only face 1 (class +1) from the training data as such:

Function 1

proj_img1=[];
for i=1:15                       %number of images of face 1 in training set
    temp=Eigenfaces'*A1(:,i);
    proj_img1=[proj_img1 temp];
end

Then I projected the rest of the faces (class -1) from the training data as such:

Function 2

proj_img2=[];
for i=1:221              %number of images of faces other than face 1 in training set
    temp=Eigenfaces'*A2(:,i);
    proj_img2=[proj_img2 temp];
end
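Both projection loops are plain matrix products, so each one collapses to a single multiplication. A NumPy sketch of the same step, with hypothetical array sizes (9 kept eigenfaces, 15 and 221 centred images):

```python
import numpy as np

rng = np.random.default_rng(1)
eigenfaces = rng.standard_normal((64, 9))   # hypothetical: 64 pixels, 9 components
A1 = rng.standard_normal((64, 15))          # centred face-1 images, one per column
A2 = rng.standard_normal((64, 221))         # centred other-face images

# One matrix product replaces each per-image loop above.
proj_img1 = eigenfaces.T @ A1               # 9 x 15 feature matrix for class +1
proj_img2 = eigenfaces.T @ A2               # 9 x 221 feature matrix for class -1
```

In MATLAB the loops would likewise reduce to `proj_img1 = Eigenfaces'*A1;` and `proj_img2 = Eigenfaces'*A2;`.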

Function 3: the test feature vector was then obtained using:

diff=double(inputimg)-mn;   %mn is the mean of training data
testfeaturevector=Eigenfaces'*diff;
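The important detail here (also raised in the comments) is that the test image is centred with the *training* mean and projected onto the *training* eigenfaces, not onto a freshly computed basis. A NumPy sketch under the same hypothetical sizes as above:

```python
import numpy as np

rng = np.random.default_rng(2)
eigenfaces = rng.standard_normal((64, 9))            # basis from TRAINING data
mn = rng.standard_normal((64, 1))                    # mean of the TRAINING images
inputimg = rng.integers(0, 256, size=(64, 1)).astype(float)  # hypothetical test image

diff = inputimg - mn                                 # centre with the training mean
testfeaturevector = eigenfaces.T @ diff              # 9 x 1 feature vector for svmpredict
```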

I wrote the results of Function 1 and Function 2 to a CSV file with labels +1 and -1 respectively. I then used LIBSVM: when I supplied the true label it reported 0% accuracy, and when I tried to predict the label it returned -1 instead of +1.

Why is the accuracy coming out as 0%?

Basically my model is not trained properly and I am failing to see the error.

Any suggestions will be greatly appreciated.

Acceptation answered 31/1, 2014 at 7:8 Comment(6)
Just a note: You aren't using mn (mean along the second dimension) and then you go on to subtract the mean along the first dimension. Not sure what the intent is...Dodona
@Dodona : Sorry, the mean in function 3 had to be "mn"Acceptation
Does [f1 nf1] compose your T (or some column permutation of T)?Cognomen
@lennon310: Yes lennon, [f1 nf1] composes TAcceptation
Looks like you have been working on this for some time, and I have no clue what the problem is based on what you said here. What's the size of your data? I would like to have a try with libsvm if you can upload all your stuff somewhere (with a clear description included). ThanksCognomen
@Cognomen : Yes I have been :( . Please look at the answer I have given below. I don't know how to write "code" in this comments section; that's why I had to write it there.Acceptation

Use Eigenfaces as the training set, and compose a label vector of +1s and -1s (if the ith column of Eigenfaces refers to face 1, then the ith element of the label vector is +1, otherwise it is -1). Then use Eigenfaces and the label vector in the svmtrain function.
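One practical note: LIBSVM's MATLAB interface, `svmtrain(label_vector, instance_matrix, options)`, expects one instance per *row*, so features stored one per column (as in the question's `proj_img1`/`proj_img2`) must be transposed before training. A NumPy sketch of the label construction with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
proj_img1 = rng.standard_normal((9, 15))    # hypothetical class +1 features, one per column
proj_img2 = rng.standard_normal((9, 221))   # hypothetical class -1 features, one per column

# Stack the two classes side by side, then transpose so each ROW is one
# instance, matching the orientation svmtrain/LIBSVM expects.
instances = np.hstack([proj_img1, proj_img2]).T       # 236 x 9
labels = np.concatenate([np.ones(15), -np.ones(221)]) # +1 for face 1, -1 otherwise
```

If the orientation is wrong, the SVM is trained on 9 "instances" of 236 "features" each, which can easily produce the degenerate accuracy seen in the question.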

Cognomen answered 6/2, 2014 at 5:52 Comment(2)
Thank you, I will try this. So I don't need to project the "training data" onto Eigenfaces and then use "proj_imgs" for training?Acceptation
"for i=1:length(A) temp=Eigenfaces'*A(:,i); proj_imgs=[proj_imgs temp]; end"Acceptation

@lennon310:

proj_imgs=[];
for i=1:size(A,2)       %number of training images; length(Eigenfaces) would be wrong here
    temp=Eigenfaces'*A(:,i);
    proj_imgs=[proj_imgs temp];
end
Acceptation answered 6/2, 2014 at 9:22 Comment(4)
you don't need to project again. Eigenfaces is already your training samples.Cognomen
@lennon310: Okay Lennon, that's what I was confused about. But the way I have projected the test image is OK, right?Acceptation
No. You do the PCA again on the test image, the same way as you did on the training images, except you still use the same mean value from the training images. The newly generated eigenfaces will be the input to your svmpredictCognomen
Lennon, is the answer below right? I have subtracted the mean of training data and then projected into the eigenspaceAcceptation

@lennon310:

   diff=double(inputimg)-mn;   %mn is the mean of training data

   testfeaturevector=Eigenfaces'*diff;
Acceptation answered 7/2, 2014 at 3:20 Comment(6)
This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post - you can always comment on your own posts, and once you have sufficient reputation you will be able to comment on any post.Speer
just use testimg to generate a new eigenfaces array.Cognomen
@Charles: When I tried to write the code snippet in the comments section, it came out as a single line and not as code, so for convenience I wrote it in the answer section, but I will be more careful in the future.Acceptation
@lennon310: So I find the PCA of the test image as follows?Acceptation
@lennon310: temp=double(testimg)-m; %where 'm' is the mean of the training images L=temp'*temp; [V D]=eig(L); for i=1:size(V,2) if(D(i,i)>1) L_eig=[L_eig V(:,1)]; end end Eigenfaces=temp*L_eig;Acceptation
@Speer , lennon310: Sorry, I don't know how to write a code snippet in the comments section!Acceptation

Frankly, your code is a mess.

One questionable part:

data = reshape(data, M*N,1);

Doesn't this make data a matrix with just 1 column? This does not make sense.

Look at this tutorial on eigenfaces. There is code and examples within it to show you what to do. See the related webpage here for more details. The Matlab/Octave code can be found here.

Asthenia answered 4/2, 2014 at 2:18 Comment(6)
I changed the way I compute PCA, please see the edits in the question. Could you please look through it now and tell me where I am going wrong? I will go through the links you mentioned, thank you so much.Acceptation
I went through the link and it is helpful, but again it does not solve my problem of how to "label the projections", because it uses nearest neighbour for classification and not SVM.Acceptation
Look, if your original vector has a label k, then its projected version just has the same label.Asthenia
I then labeled proj_img1 with label +1, as these are projections from face 1, and proj_img2 with label -1, as these are projections from all other faces except face 1.Acceptation
Look, you can train LIBSVM/LIBLINEAR in "multi-class" mode where it will automatically set up many classifiers for one-vs-all or one-vs-one, which means you do not have to set up many binary SVMs yourself. This can be automatically taken care of by LIBSVM/LIBLINEAR. It is the STANDARD behavior of most ML packages.Asthenia
If I have only two classes, that is +1 and -1, it's "binary classification", right? And not "multi-class"?Acceptation
@lennon310: 

    temp=double(testimg)-m;  %where 'm' is the mean of the training images
    L=temp'*temp;
    [V D]=eig(L);
    L_eig=[];
    for i=1:size(V,2)
        if(D(i,i)>1)
           L_eig=[L_eig V(:,i)];   %V(:,i), not V(:,1)
        end
    end 
    Eigenfaces=temp*L_eig;
Acceptation answered 9/2, 2014 at 12:47 Comment(11)
Lennon I tried this and got 0% accuracy. I can't figure out what's wrong.Acceptation
Do you have the princomp function in your MATLAB? Maybe you can use that for the PCA as a comparison...Cognomen
@lennon310: I have princomp, I will try that and get back to you. The size of my eigenspace is 65536*10, where 65536 = 256*256, i.e. the size of the images, and 10 = the number of training images. This means that my code is retaining all the values and not only the principal components, is that right?Acceptation
I don't quite understand what you mean by retaining all the values. Actually the recovered image is the same size as your original one (65536*10 in your case). You can take a look at this post on how to implement princomp: #21641301, and make your own code consistent with the result obtained from princomp. If there are still problems even when you are using princomp, that may be libsvm's issue. But currently just make sure the pca+eigenspace part is all correct. ThanksCognomen
Lennon, what I mean is: after computing PCA, the size of the principal components with which I need to train should be less than all the pixels of the image multiplied by the number of images, right? I actually wanted to write the PCA code instead of using the inbuilt commandAcceptation
I don't think the row count will change; the way you compress your data is to remove the uncorrelated columns. In your case you keep only 10 columns, but the row count is still the size of each image (256*256)Cognomen
@lennon310: The column count, that is 10, is the number of images.Acceptation
@lennon310: Lennon, I trained the SVM with (256*256)*10. For the first few testing examples it was giving me the right accuracy, such as 93% accuracy if I gave a face from class 1 and below 50% if I gave a face not from class 1. But after I tested with more sample images the accuracy goes to 100% irrespective of the image I am giving! I don't really know what's happening. Any suggestions?Acceptation
What do you mean by testing with more sample images? Are you using the same 93%-accuracy model to test other images? If so, how can the accuracy be averaged to 100%?Cognomen
Lennon, I store the training and testing data in .train files and I just update the testing file and train the model again.Acceptation
@Cognomen : The training data consists of features from various images, whereas for testing I take a single image, extract its features and then test which class it belongs to. Is this approach right? Previously I was using the same model and testing with different testing images (one at a time); that's why I am assuming it didn't give me the required results (it gave 100% accuracy for all). But now I am training a model with the features from the training images and then testing the model with a single image.Acceptation
