Help--100% accuracy with LibSVM?

Nominally a good problem to have, but I'm pretty sure it is because something funny is going on...

As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, =or=, more likely, I've done something wrong on the SVM side.

I'm looking for suggestions to help understand what is going on--is it me (=my usage of LibSVM)? Or is it the data?

The details:

  • ~2500 labeled data vectors/instances (transformed video frames; fewer than 20 individuals total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
  • Ran subset.py to separate the data into test (500 instances) and train (remaining).
  • Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
  • Ran svm-predict on the test file. Accuracy=100%! (The full command sequence is sketched just after this list.)
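
For concreteness, the whole run boils down to roughly the sequence below (a sketch only: file names are hypothetical, the LibSVM binaries are assumed to be on the PATH, and subset.py's argument order should be double-checked against the tools README).

    # Rough reproduction of the pipeline described above, driving the stock
    # LibSVM command-line tools from Python. All file names are placeholders.
    import subprocess

    DATA = "faces.scaled"   # hypothetical LibSVM-format data file

    # 1. Hold out 500 instances for testing (-s 1 = random selection),
    #    writing the held-out subset and the remainder to separate files.
    subprocess.check_call(["python", "subset.py", "-s", "1", DATA, "500",
                           "faces.test", "faces.train"])

    # 2. Train a linear-kernel SVM (the '-t 0' run described above).
    subprocess.check_call(["svm-train", "-t", "0", "faces.train", "faces.model"])

    # 3. Predict on the held-out file; svm-predict prints the accuracy.
    subprocess.check_call(["svm-predict", "faces.test", "faces.model",
                           "faces.pred"])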

Things tried:

  • Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
  • re-ran subset.py (even with -s 1) multiple times and trained/tested on multiple different data splits (in case I had randomly hit upon the most magical train/test partition)
  • ran a simple diff-like check to confirm that the test file is not a subset of the training data (a sketch of such a check appears after this list)
  • svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
  • ((weird)) using the default RBF kernel (vice linear -- i.e., removing '-t 0') results in accuracy going to garbage(?!)
  • (sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that somehow svm-predict is nominally acting right on my machine.
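
On the overlap check above: a minimal sketch of such a "diff-like" test in Python (file names are placeholders). Note that an exact-match check like this only catches literal duplicates; near-identical adjacent frames would slip straight through it.

    # Treat each LibSVM-format line (label + sparse features) as a string
    # and look for exact duplicates across the train and test files.
    def load_lines(path):
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())

    train_lines = load_lines("faces.train")   # placeholder file names
    test_lines = load_lines("faces.test")
    overlap = train_lines & test_lines
    print("instances appearing verbatim in both files:", len(overlap))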

Tentative conclusion?:

Something with the data is wacked--somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.

(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)

Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.

Lutestring answered 23/8, 2011 at 0:21 Comment(16)
Mmmm, data and model analysis at arm's length. Doable, but really, really slow. This could be rather challenging. Any chance you can post the data? It's almost surely the data, but having someone else reproduce it could be helpful. Also, if you are proficient in R, that could make it easier to advise.Hotfoot
By "slow", I mean that it is possible to recommend a variety of tests, steps to try, things to investigate, etc., but the whole process can take some time. If you've seen the "House" episode "Frozen" - the idea is similar. Otherwise, the problem may be "too local".Hotfoot
The most likely :-) is that you have included your test data in the training set. I know you've checked, but check some more.Unblown
@bmargulies-certainly possible, but I've beat this thing multiple times at this point. Next step, I'm going to go back a layer and see if the scripts that compiled data munged it somewhere along the way.Lutestring
@Iterator-posting the data is difficult albeit not impossible. Somewhat large (140MB zip) & nominally proprietary (but sufficiently fuzzed at this point that I could probably get away with uploading it). Re:R...no background in it. Not afraid to try, however, if that would be efficient. Alternately, could take whatever suggestions and try to cook something equivalent up in matlab/python/perl/etc.Lutestring
Oh good, this will give you enough time to write something up for CVPR :)Coruscate
@carlosdc has a good point: are these independent samples, or are you choosing random train/test splits where images could be very close temporally?Hotfoot
@severian: if you process the data set to a minimum, it should be far less - 2500 * 900 * 15 (assuming roughly 15 characters per feature) is about 33 MB, uncompressed. My hunch is that it really is in the data - maybe the positive and negative samples were preprocessed somewhat differently, resulting in artifacts in the features that the classifier picks up on - has happened for other datasets too. As for the "RBF kernel returning garbage" - that's likely overfitting. Varying C and sigma should help there (but why bother with RBF if a linear kernel separates perfectly ;))Eldredge
@Ite: OK, now I feel stupid. I hadn't fully thought through the implication of carlos' statement...thank you for clarifying. Yes, this is almost certainly what is going on. Yep, I was just taking all the tagged video data, in one giant tagged data set, and then dividing it into two random subsets (so I was likely ending up with, e.g., frame t for subj i in training and frame t+1 for subj i in testing). Oof. Will go and do a better divide on the data set and see what happens. Many, many thanks...Lutestring
Consider it a useful lesson. ;) You had the good sense to know that 100% accuracy needs to be investigated.Hotfoot
Btw, as the question appears resolved, it would be good to accept @carlosdc's answer - it's a reminder to others who happen on a similar problem (esp in vision) to be sure to check for the possibility of interpolating classes based on the sequence of images.Hotfoot
@Ite: thanks for the reminder. My first time using stack overflow, wasn't/not fully familiar with how to use the site. Done.Lutestring
@Lutestring : I am facing the exact same problem. How did you finally overcome it? Just by searching for the right C and gamma values for the RBF kernel?Tien
@Sid: see the above thread--it was an issue (my fault) of not well-dividing the data set into test & training. E.g., consider a naive sort of dividing all frames 50-50 into test and training. Frame n+1 (and, to a lesser degree, n+2, n+3, etc.) looks a lot like frame n, and is generally (in my case) labeled identically to frame n. The classifier gets trained on frame n, and then sees n+1 in the "test" set and then magically gets it "right"...over and over. A more discrete partitioning (e.g., multi-second/minute/whatever subsets) into test & train was needed.Lutestring
@Lutestring : Basically I have two classes: class 1 has 10 images of "face 1", so it's an array of [19600 10], the rows being the dimensions of the image and the columns being the number of images. Class 2 has "all other faces" and an array of [19600 40]. For testing I just take a single image and the array is [19600 1]. Sorry, I don't know why we should split the data into training and testing.Tien
@Lutestring : I don't understand what you mean by "it was an issue (my fault) of not well-dividing the data set into test & training."Tien

Two other ideas:

Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling in different folds), that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
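
In the video-frame setting, that usually means holding out whole subjects (or at least whole contiguous clips) rather than sampling individual frames at random. A minimal sketch, assuming a hypothetical per-frame record that carries a subject id (not the OP's actual data format):

    # Leakage-free split for video data: hold out entire subjects so that no
    # subject contributes frames to both the training and the test set.
    import random

    def split_by_subject(records, test_fraction=0.2, seed=0):
        """records: list of dicts like {'subject': id, 'label': y, 'x': features}"""
        subjects = sorted({r["subject"] for r in records})
        random.Random(seed).shuffle(subjects)
        n_test = max(1, int(len(subjects) * test_fraction))
        held_out = set(subjects[:n_test])
        train = [r for r in records if r["subject"] not in held_out]
        test = [r for r in records if r["subject"] in held_out]
        return train, test

With fewer than 20 subjects the resulting test estimate will be noisy, so leave-one-subject-out cross-validation is a common variant of the same idea.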

Make sure you search for good gamma and C parameters for the RBF kernel. There are good theoretical (asymptotic) results showing that a linear classifier is just a degenerate RBF classifier, so you should just look for a good (C, gamma) pair.
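
A minimal sketch of such a search, using svm-train's built-in k-fold cross-validation (-v). The exponential grid follows the LibSVM practical guide's suggestion; the training-file name is a placeholder, and LibSVM's own tools/grid.py automates essentially the same loop.

    # Coarse grid search over (C, gamma) for the RBF kernel (-t 2),
    # scoring each pair by 5-fold cross-validation accuracy.
    import subprocess

    def cv_accuracy(c, g, train_file="faces.train", folds=5):
        out = subprocess.run(
            ["svm-train", "-t", "2", "-c", str(c), "-g", str(g),
             "-v", str(folds), train_file],
            capture_output=True, text=True).stdout
        # svm-train prints a line like "Cross Validation Accuracy = 97.5%"
        for line in out.splitlines():
            if "Cross Validation Accuracy" in line:
                return float(line.split("=")[1].strip().rstrip("%"))
        return 0.0

    best = max((cv_accuracy(c, g), c, g)
               for c in (2.0 ** e for e in range(-5, 16, 2))
               for g in (2.0 ** e for e in range(-15, 4, 2)))
    print("best CV accuracy %.2f%% at C=%g, gamma=%g" % best)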

Semiautomatic answered 23/8, 2011 at 2:31 Comment(1)
for RBF, thank you for stating (what should have been!) the obvious--I forgot to search for good parameters. Got overeager once 100% accuracy started coming back (even though I knew it likely meant something was wrong with what I was doing or how the data was set up).Lutestring

Notwithstanding that the devil is in the details, here are three simple tests you could try:

  1. Quickie (~2 minutes): Run the data through a decision tree algorithm. This is available in Matlab via classregtree, or you can load the data into R and use rpart. This could tell you if one or just a few features happen to give a perfect separation. (A rough scikit-learn version is sketched just after this list.)
  2. Not-so-quickie (~10-60 minutes, depending on your infrastructure): Iteratively split the features (i.e. from 900 to 2 sets of 450), train, and test. If one of the subsets gives you perfect classification, split it again. It would take fewer than 10 such splits to find out where the problem variables are. If it happens to "break" with many variables remaining (or even in the first split), select a different random subset of features, shave off fewer variables at a time, etc. It can't possibly need all 900 to split the data.
  3. Deeper analysis (minutes to several hours): try permuting the labels. If you can permute all of them and still get perfect separation, you have some problem in your train/test setup. If you permute increasingly larger subsets (or, going in the other direction, leave increasingly larger subsets static), you can see where you begin to lose separability. Alternatively, decrease your training set size; if you still get separability even with a very small training set, then something is weird. (A minimal sketch of the permutation check appears at the end of this answer.)
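
If Matlab or R aren't handy, a rough Python/scikit-learn stand-in for #1 (hypothetical file name; load_svmlight_file reads LibSVM-format data):

    # Fit a shallow decision tree and see whether one or two features
    # already separate the classes almost perfectly.
    from sklearn.datasets import load_svmlight_file
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_svmlight_file("faces.train")       # placeholder file name
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print("training accuracy:", tree.score(X, y))
    print(export_text(tree))                       # which features the splits use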

Method #1 is fast & should be insightful. There are some other methods I could recommend, but #1 and #2 are easy and it would be odd if they don't give any insights.
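
For #3, a minimal sketch of the label-permutation check using LibSVM's Python bindings (the import path depends on how the bindings are installed, and the file names are placeholders). With fully shuffled training labels, test accuracy far above the majority-class rate points at leakage somewhere in the setup.

    # Train on shuffled labels, then evaluate on the untouched test set.
    import random
    from libsvm.svmutil import svm_read_problem, svm_train, svm_predict

    y_train, x_train = svm_read_problem("faces.train")
    y_test, x_test = svm_read_problem("faces.test")

    y_shuffled = y_train[:]
    random.shuffle(y_shuffled)                 # break any real label/feature link

    model = svm_train(y_shuffled, x_train, "-t 0 -q")
    _, (acc, _, _), _ = svm_predict(y_test, x_test, model)
    print("test accuracy with permuted training labels: %.1f%%" % acc)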

Hotfoot answered 23/8, 2011 at 2:17 Comment(4)
I'm not sure what you mean by permuting labels, but if you mean permuting the order of the vector variables, this should never affect SVM results!Carbonize
@Ite: can you clarify the "permute" statement (#3)? Do you mean randomly permuting some number of variables in each data instance/point? (So that different instances are permuted slightly differently?) Re:#1 & #2, will start in on them shortly...really do appreciate the suggestions.Lutestring
The label is the response variable. In other words you are introducing noise. In a binary classification, this doesn't introduce misclassification per se, as it resamples from the response distribution. The key is that at some point there is a breakdown. You can intentionally mislabel, but resampling from the response distribution or permuting the labels are useful methods for data and model exploration.Hotfoot
@ldog: Your statement is correct, and that wasn't the suggestion. In fact, that is true of most modeling methods, not just SVMs, as most are permutation invariant, with the exception of those intentionally built on sequences, like time series models.Hotfoot
