Facial expression classification in real time using SVM

I am currently working on a project where I have to extract the facial expression of a user (only one user at a time, from a webcam), such as sad or happy.

My method for classifying facial expressions is:

  • Use OpenCV to detect the face in the image (a minimal sketch of this step follows below)
  • Use ASM and stasm to get the facial feature points

[image: facial landmarks]
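Roughly, the detection step looks like this (a minimal sketch only; the cascade file name below is just the default frontal-face cascade that ships with OpenCV, adjust the path to wherever it lives on your machine):

import cv2

# grab one frame from the webcam and detect the (single) face in it
cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y+h, x:x+w]  # this crop is what gets passed to stasm for the landmarks
cap.release()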

Now I am trying to do facial expression classification.

Is SVM a good option? And if it is, how can I start with SVM:

How am I going to train the SVM for every emotion using these landmarks?

Semipostal answered 5/9, 2013 at 15:50 Comment(5)
A deep neural network is always better than an SVM.Newtonnext
Due to time constraints I have to work with SVM, any help!Semipostal
@usamec, your statement is not always true. It depends on the definition of "better" to start with.Camouflage
@TIBOU: I am doing something very similar, are you using the points as features or are you doing some preprocessing first, like distances between points for example?Violoncellist
Old thread, but I must point out that stasm is the wrong tool for the job, as it is designed to work on neutral frontal faces only. The expressions you are looking to detect fall outside its scope.Hyperopia

Yes, SVMs have repeatedly been shown to perform well on this task; there have been dozens (if not hundreds) of papers describing such procedures.

Some basic resources on SVMs themselves can be found at http://www.support-vector-machines.org/ (book titles, software links, etc.).

And if you are just interested in using them rather than understanding them, you can pick up one of the basic libraries.
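For example, a minimal sketch with scikit-learn (one common choice of library; the data below is random and only illustrates the expected shapes, one row of flattened landmark coordinates per image and one emotion id per row):

import numpy as np
from sklearn.svm import SVC

# 20 fake training faces, each described by, say, 77 (x, y) landmarks flattened to 154 numbers
X = np.random.random((20, 2 * 77))
y = np.random.randint(0, 3, 20)      # 0 = happy, 1 = angry, 2 = disgust

clf = SVC(kernel='linear', C=1.0)    # linear SVM; multi-class is handled automatically
clf.fit(X, y)
print(clf.predict(X[:5]))            # predicted emotion ids for the first five faces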

Abdulabdulla answered 5/9, 2013 at 17:18 Comment(4)
I want to train an SVM to classify facial expressions (happy, angry, disgust, ...) using the landmark positions, how can I do that?Semipostal
#18647905Semipostal
If you don't mind me asking (I will sound like a noob here), why use SVM over logistic regression? Isn't this basically the same concept?Conformation
The only similarity is the fact that they are both linear models, and so are perceptrons, OMP, linear regression and dozens more. The true strength of SVM lies in a particular form of regularization, which has been proven to outperform LR in many tasks (both empirically and theoretically). But still, there is no such thing as a "better model"; there will always be a task for which it is better to take LR instead of SVM. Either way, they are not the same. In particular, SVM can easily be "de-linearized" (kernel trick) in a very efficient way; LR cannot.Abdulabdulla

If you are already using OpenCV, I suggest you use the built-in SVM implementation. Training/saving/loading in Python is as follows (C++ has a corresponding API to do the same in about the same amount of code; it also has 'train_auto' to find the best parameters):

import numpy as np
import cv2

# toy data: 4 samples with 5 random features each, and a 0/1 label per sample
samples = np.array(np.random.random((4,5)), dtype = np.float32)
labels = np.array(np.random.randint(0,2,4), dtype = np.float32)

# linear C-SVC (OpenCV 2.x API)
svm = cv2.SVM()
svmparams = dict( kernel_type = cv2.SVM_LINEAR,
                  svm_type = cv2.SVM_C_SVC,
                  C = 1 )

svm.train(samples, labels, params = svmparams)

# predict back on the training samples as a sanity check
testresult = np.float32( [svm.predict(s) for s in samples])

print samples
print labels
print testresult

# persist the trained model, then load it into a fresh SVM object
svm.save('model.xml')
svm2 = cv2.SVM()
svm2.load('model.xml')

and the output:

#print samples
[[ 0.24686454  0.07454421  0.90043277  0.37529686  0.34437731]
 [ 0.41088378  0.79261768  0.46119651  0.50203663  0.64999193]
 [ 0.11879266  0.6869216   0.4808321   0.6477254   0.16334397]
 [ 0.02145131  0.51843268  0.74307418  0.90667248  0.07163303]]
#print labels
[ 0.  1.  1.  0.]
#print testresult
[ 0.  1.  1.  0.]    

So you provide the n flattened shape models as samples and n labels and you are good to go. You probably don't even need the ASM part; just apply some filters which are sensitive to orientation, like Sobel or Gabor, concatenate the matrices, flatten them, and feed them directly to the SVM (see the sketch below). You can probably get maybe 70-90% accuracy.
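A rough sketch of that filter idea (the 64x64 size and the plain Sobel x/y pair are just illustrative choices, and face_crops / emotion_ids are placeholders for your own data):

import numpy as np
import cv2

def filter_features(face_gray):
    # resize the face crop, run orientation-sensitive filters, concatenate and flatten
    face = cv2.resize(face_gray, (64, 64))
    sx = cv2.Sobel(face, cv2.CV_32F, 1, 0)   # horizontal gradients
    sy = cv2.Sobel(face, cv2.CV_32F, 0, 1)   # vertical gradients
    return np.hstack([sx.flatten(), sy.flatten()]).astype(np.float32)

# samples = np.vstack([filter_features(f) for f in face_crops])
# labels  = np.float32(emotion_ids)
# then train exactly as in the snippet above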

As someone said, CNNs are an alternative to SVMs. Here are some links that implement LeNet-5. So far, I find SVMs much simpler to get started with.

https://github.com/lisa-lab/DeepLearningTutorials/

http://www.codeproject.com/Articles/16650/Neural-Network-for-Recognition-of-Handwritten-Digi

-edit-

Landmarks are just n (x, y) vectors, right? So why don't you try putting them into an array of size 2n and simply feed them directly to the code above?

For example, 3 training samples of 4 landmarks (0,0), (10,10), (50,50), (70,70):

samples = [[0,0,10,10,50,50,70,70],
           [0,0,10,10,50,50,70,70],
           [0,0,10,10,50,50,70,70]]

labels = [0., 1., 2.]

0=happy

1=angry

2=disgust
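Putting it together, a sketch of training on landmark features for three emotions with the same OpenCV 2.x SVM API as above (the landmark numbers below are made up; in practice each row would come from stasm and you would have many rows per emotion):

import numpy as np
import cv2

# one row per training image: flattened (x, y) landmark coordinates
landmark_rows = [
    [0, 0, 10, 10, 50, 50, 70, 70],   # an image labelled happy
    [1, 0, 11, 10, 51, 50, 71, 70],   # another happy image
    [0, 5, 10, 15, 50, 55, 70, 75],   # an angry image
    [5, 0, 15, 10, 55, 50, 75, 70],   # a disgust image
]
emotion_ids = [0, 0, 1, 2]            # 0 = happy, 1 = angry, 2 = disgust

samples = np.array(landmark_rows, dtype=np.float32)
labels = np.array(emotion_ids, dtype=np.float32)

svm = cv2.SVM()
svm.train(samples, labels, params=dict(kernel_type=cv2.SVM_LINEAR,
                                       svm_type=cv2.SVM_C_SVC,
                                       C=1))

print np.float32([svm.predict(s) for s in samples])   # predicted emotion ids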

Obsequies answered 5/9, 2013 at 21:22 Comment(6)
I want to train an SVM to classify facial expressions (happy, angry, disgust, ...) using the landmark positions, how can I do that?Semipostal
#18647905Semipostal
In the training I have many images for every emotion, how can I train the SVM then, for every emotion? Sorry, I don't get it.Semipostal
You will have to start writing code; it should become obvious what to do next once you have something working. If I have more time, I will try to post a more complete example this weekend.Obsequies
Thank you sir, I'm trying to code this, and I will be waiting for your help.Semipostal
@Semipostal - read this, it will save you a lot of trouble: csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdfCamouflage

You could check this code to get an idea of how this could be done using SVM.

You can find the algorithm explained here.

Echovirus answered 8/9, 2013 at 15:53 Comment(0)
