In my face recognition project a face is represented as a 128-dimensional embedding (face_descriptor), as in FaceNet. I can generate the embedding from an image in two ways.
The first uses the TensorFlow ResNet v1 model:
    emb_array = sess.run(embedding_layer,
                         {images_placeholder: images_array,
                          phase_train_placeholder: False})
An array of images can be passed and a list of embeddings is returned. This is a bit slow: it took about 1.6 s, although the time stays almost constant for a large number of images. Note: no GPU available.
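For context, this is roughly how that call fits together. It is a minimal sketch assuming TensorFlow 1.x and the tensor names used by the davidsandberg/facenet frozen graphs ("input:0", "embeddings:0", "phase_train:0"); the model path and the dummy images_array are placeholders.

    import numpy as np
    import tensorflow as tf

    MODEL_PATH = "20180402-114759.pb"  # placeholder path to a frozen FaceNet graph

    # Load the frozen graph once.
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")

    # Tensor names assumed from the facenet export; adjust to your graph.
    images_placeholder = graph.get_tensor_by_name("input:0")
    embedding_layer = graph.get_tensor_by_name("embeddings:0")
    phase_train_placeholder = graph.get_tensor_by_name("phase_train:0")

    with tf.Session(graph=graph) as sess:
        # images_array: (N, 160, 160, 3) float32 prewhitened face crops
        images_array = np.zeros((4, 160, 160, 3), dtype=np.float32)
        emb_array = sess.run(embedding_layer,
                             {images_placeholder: images_array,
                              phase_train_placeholder: False})
        print(emb_array.shape)  # (4, 128)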
The second uses dlib:
    face_rec_model = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")
    face_descriptor = face_rec_model.compute_face_descriptor(image, shape)
This is fast, about 0.05 s per image, but only one image can be passed at a time, so the total time grows with the number of images.
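For reference, the per-image dlib pipeline looks roughly like this. It is a minimal sketch assuming the standard dlib model files (paths are placeholders), RGB numpy arrays as input, and one face per image; the models are loaded once up front.

    import dlib

    detector = dlib.get_frontal_face_detector()
    shape_predictor = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")  # 68-point model also works
    face_rec_model = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

    def embeddings(images):
        """Compute one 128-D descriptor per image (assumes one face per image)."""
        result = []
        for img in images:                      # img: RGB numpy array
            dets = detector(img, 1)             # upsample once
            if not dets:
                result.append(None)
                continue
            shape = shape_predictor(img, dets[0])
            result.append(face_rec_model.compute_face_descriptor(img, shape))
        return result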
Is there any way to pass an array of images to dlib to compute the embeddings, or any other way to improve dlib's speed?
Or is there another, faster method to generate 128-dimensional face embeddings?
Update: I concatenated multiple images into a single image and passed that to dlib:
    face_rec_model.compute_face_descriptor(big_image, shapes)
i.e. I converted multiple images, each containing a single face, into one image containing multiple faces (a sketch is below). The time is still proportional to the number of images (i.e. the number of faces) concatenated, almost the same as iterating over the individual images.
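For reference, the concatenation can be done roughly as follows. This is a minimal sketch assuming equally sized face crops stacked horizontally, that shapes[i] is the full_object_detection found in the i-th crop, that your dlib version exposes the full_object_detection(rect, parts) constructor, and that compute_face_descriptor accepts a full_object_detections for a single image (as in the call above); shift_shape and descriptors_from_batch are hypothetical helper names.

    import numpy as np
    import dlib

    face_rec_model = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

    def shift_shape(shape, dx):
        """Copy a full_object_detection, translated dx pixels to the right."""
        r = shape.rect
        rect = dlib.rectangle(r.left() + dx, r.top(), r.right() + dx, r.bottom())
        parts = [dlib.point(shape.part(i).x + dx, shape.part(i).y)
                 for i in range(shape.num_parts)]
        return dlib.full_object_detection(rect, parts)

    def descriptors_from_batch(images, shapes):
        """One dlib call for many faces, one face per source crop."""
        big_image = np.hstack(images)                       # assumes equal heights
        offsets = np.cumsum([0] + [im.shape[1] for im in images[:-1]])
        shifted = dlib.full_object_detections()
        for s, dx in zip(shapes, offsets):
            shifted.append(shift_shape(s, int(dx)))
        return face_rec_model.compute_face_descriptor(big_image, shifted)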