I'll explain what I'm trying to do first, since the context seems relevant to my question.
I'm trying to recognize the faces of people who step in front of a camera, matching them against known pictures in a database.
These known pictures are collected either from an identifying Smart Card (which contains only a single frontal face picture) or from a frontal face profile picture on a social network. From what I've read so far, good face recognition requires a fairly large training set (50+ images per person). Since the pictures I can collect are far too few to build a reliable training set, I tried inverting the roles: I use live camera frame captures (currently 150 of them) as the training set, and the identified pictures collected previously as the test set. I'm not sure this approach is correct, so please let me know if I'm screwing up.
Here is the problem. Say I have 5 identified pictures obtained from Smart Cards, and I train the recognizer on the 150 frames the camera captured of my face. When I then try to recognize, the confidence values for each of the 5 test faces are EXTREMELY similar, which makes the whole program useless because I cannot reliably tell anyone apart. Often, with a different set of camera captures as training data, pictures of random people come back with higher confidence than the picture of myself.
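In case it helps, here is a condensed sketch of what my train/predict flow looks like. I've written it against the current bytedeco JavaCV bindings (which may differ slightly from the exact package version), and the directory names ("camera_frames", "smartcard_pics") and the 100x100 image size are placeholders for my real values:

```java
import org.bytedeco.javacpp.DoublePointer;
import org.bytedeco.javacpp.IntPointer;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.MatVector;
import org.bytedeco.opencv.opencv_core.Size;
import org.bytedeco.opencv.opencv_face.EigenFaceRecognizer;

import java.io.File;
import java.nio.IntBuffer;

import static org.bytedeco.opencv.global.opencv_core.CV_32SC1;
import static org.bytedeco.opencv.global.opencv_imgcodecs.IMREAD_GRAYSCALE;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgproc.resize;

public class RecognizeSketch {
    public static void main(String[] args) {
        // Training set: the 150 cropped camera frames of my face, all one label.
        File[] frames = new File("camera_frames").listFiles(); // placeholder dir
        MatVector images = new MatVector(frames.length);
        Mat labels = new Mat(frames.length, 1, CV_32SC1);
        IntBuffer labelsBuf = labels.createBuffer();
        for (int i = 0; i < frames.length; i++) {
            Mat img = imread(frames[i].getAbsolutePath(), IMREAD_GRAYSCALE);
            resize(img, img, new Size(100, 100)); // Eigenfaces needs equal sizes
            images.put(i, img);
            labelsBuf.put(i, 1); // every camera frame is "me" -> label 1
        }

        EigenFaceRecognizer recognizer = EigenFaceRecognizer.create();
        recognizer.train(images, labels);

        // Test set: the 5 smart-card pictures. The confidence printed here
        // comes out nearly identical for all of them, which is my problem.
        for (File card : new File("smartcard_pics").listFiles()) { // placeholder dir
            Mat test = imread(card.getAbsolutePath(), IMREAD_GRAYSCALE);
            resize(test, test, new Size(100, 100));
            IntPointer predictedLabel = new IntPointer(1);
            DoublePointer confidence = new DoublePointer(1);
            recognizer.predict(test, predictedLabel, confidence);
            System.out.printf("%s -> label %d, confidence %.2f%n",
                    card.getName(), predictedLabel.get(0), confidence.get(0));
        }
    }
}
```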
I would appreciate any help you can give me, because I'm at a loss here.
Thank you.
Note: I'm building the program with the JavaCV wrapper for OpenCV, using the Haar cascades that come included in the package; Eigenfaces is the recognition algorithm.
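And this is roughly how I crop the face out of each camera frame with the bundled Haar cascade before it goes to the recognizer (again a sketch against the bytedeco bindings; the cascade file path is a placeholder for wherever the package's data files live):

```java
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.RectVector;
import org.bytedeco.opencv.opencv_objdetect.CascadeClassifier;

import static org.bytedeco.opencv.global.opencv_imgproc.equalizeHist;

public class FaceCrop {
    private final CascadeClassifier cascade =
            new CascadeClassifier("haarcascade_frontalface_alt.xml"); // placeholder path

    /** Returns the first detected face, histogram-equalized, or null if none found. */
    public Mat cropFace(Mat frameGray) {
        RectVector faces = new RectVector();
        cascade.detectMultiScale(frameGray, faces);
        if (faces.size() == 0) {
            return null;
        }
        Mat face = new Mat(frameGray, faces.get(0)); // crop to the detection rect
        equalizeHist(face, face); // normalize lighting a bit
        return face;
    }
}
```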