Dlib (with GPU support) is not working properly, not sure why?
My System Configuration:

Windows 10, Nvidia 940mx 2GB GDDR5 GPU, 8GB RAM, i5 8th generation.

Software installed:

  1. CUDA toolkit 9.0
  2. cuDNN 7.1.4

I successfully installed dlib with GPU support after installing the above requirements, using the commands below:

$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ python setup.py install --clean

As suggested by dlib's creator @Davis King, in my Jupyter notebook I executed:

import dlib
dlib.DLIB_USE_CUDA
[Out 17]: True

This verifies that my dlib is using the GPU through CUDA, and that all other libraries that depend on dlib, like @ageitgey's 'face_recognition', will also use CUDA acceleration.
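As an extra sanity check (assuming the installed build exposes the dlib.cuda submodule, which CUDA-enabled builds should), the device count can be queried as well; this is only a sketch of the idea:

import dlib

# True only when dlib was compiled against CUDA/cuDNN.
print(dlib.DLIB_USE_CUDA)

# Number of CUDA devices dlib can see; should be >= 1 on this machine
# (assumes the dlib.cuda submodule is available in this build).
print(dlib.cuda.get_num_devices())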

So I ran the code below on my training images, so that I can later recognize faces in a video:

import face_recognition
img = face_recognition.load_image_file('./training images/John_Cena/Gifts-John-Cena-Fans.jpg')
locations = face_recognition.face_locations(img, model='cnn')

This prints the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Tushar\Anaconda3\lib\site-packages\face_recognition\api.py", line 116, in face_locations
    return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
  File "C:\Users\Tushar\Anaconda3\lib\site-packages\face_recognition\api.py", line 100, in _raw_face_locations
    return cnn_face_detector(img, number_of_times_to_upsample)
RuntimeError: Error while calling cudaMalloc(&data, n) in file C:\Users\Tushar\Desktop\face_recognition\dlib\dlib\cuda\cuda_data_ptr.cpp:28. code: 2, reason: out of memory

After trying again with another image:

img = face_recognition.load_image_file('./training images/John_Cena/Images.jpg')
locations = face_recognition.face_locations(img, model='cnn')

It gave this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Tushar\Anaconda3\lib\site-packages\face_recognition\api.py", line 116, in face_locations
    return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
  File "C:\Users\Tushar\Anaconda3\lib\site-packages\face_recognition\api.py", line 100, in _raw_face_locations
    return cnn_face_detector(img, number_of_times_to_upsample)
RuntimeError: Error while calling cudnnConvolutionForward( context(), &alpha, descriptor(data), data.device(), (const cudnnFilterDescriptor_t)filter_handle, filters.device(), (const cudnnConvolutionDescriptor_t)conv_handle, (cudnnConvolutionFwdAlgo_t)forward_algo, forward_workspace, forward_workspace_size_in_bytes, &beta, descriptor(output), output.device()) in file C:\Users\Tushar\Desktop\face_recognition\dlib\dlib\cuda\cudnn_dlibapi.cpp:1007. code: 3, reason: CUDNN_STATUS_BAD_PARAM

Then I restarted the Jupyter kernel and tried once again with a different image:

face_recognition.face_locations(face_recognition.load_image_file('./training images/John_Cena/images.jpg'), model='cnn')
[Out] : [(21, 136, 61, 97)]

This time it returned the coordinates of the face's location in the image.

This keeps happening: for some images it runs fine, and for others it gives one of the two errors stated above.

With model='hog' it runs fine on the same images that fail with model='cnn'.

So when I try to train the classifier on images in different folders using a for loop:

from face_recognition.face_detection_cli import image_files_in_folder
import os
import os.path
import face_recognition

for class_dir in os.listdir('./training images/'):
    count = 0
    for img_path in image_files_in_folder(os.path.join('./training images/', class_dir)):
        count += 1
        image = face_recognition.load_image_file(img_path)
        face_bounding_boxes = face_recognition.face_locations(image, model='cnn')
        print(face_bounding_boxes, count)

It always stops after processing some images, showing one of the same two errors stated above. I tried every possible way to install dlib with GPU support, the CUDA 9.0 toolkit, and cuDNN 7.1.4. They all seem to be working fine!
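For reference, here is a memory-lighter variant of the same loop. This is only a sketch: the number_of_times_to_upsample=0 value and the fallback to the HOG detector are guesses to reduce GPU memory use (mirroring the observation above that model='hog' works where model='cnn' fails), not a verified fix.

from face_recognition.face_detection_cli import image_files_in_folder
import os
import os.path
import face_recognition

for class_dir in os.listdir('./training images/'):
    count = 0
    for img_path in image_files_in_folder(os.path.join('./training images/', class_dir)):
        count += 1
        image = face_recognition.load_image_file(img_path)
        try:
            # Skipping upsampling (0 instead of the default 1) keeps the
            # detector's input smaller, which lowers GPU memory pressure.
            boxes = face_recognition.face_locations(
                image, number_of_times_to_upsample=0, model='cnn')
        except RuntimeError:
            # If the CNN detector still runs out of GPU memory,
            # fall back to the CPU-based HOG detector for this image.
            boxes = face_recognition.face_locations(image, model='hog')
        print(boxes, count)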

I don't know what the real issue is here. Is the 2 GB of graphics card memory too little, or is it something else?

I really want to use the GPU's power to make recognition in video faster.

Howlet asked 6/8, 2018 at 18:15 Comment(2)
Facing the same issue as you are! It's probably due to the low 2 GB memory? Because I am able to get YOLO/darknet (CNN-based) running on the GPU. – Virgy
Did you try resizing the images to something like 224x224 before using the CNN model? If the input resolution of the images is large, you may face out-of-memory problems. – Voccola
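A minimal sketch of the downscaling idea from the comment above (the 224x224 target and the cv2 dependency are assumptions, not requirements; any resize that shrinks large inputs should ease GPU memory pressure):

import cv2
import face_recognition

img = face_recognition.load_image_file('./training images/John_Cena/images.jpg')

# Downscale large inputs before running the CNN detector; the 224x224 target
# from the comment above is just an example size.
resized = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)

# Note: the returned boxes are in the resized image's coordinate system and
# would need to be scaled back to the original resolution if used for cropping.
locations = face_recognition.face_locations(resized, model='cnn')
print(locations)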

I found that face_encodings quickly gives an "IndexError" if the face is slightly rotated and not straight. Even though it found the face locations and their coordinates, when I supplied the cropped image at those coordinates to face_encodings, it failed with the index error...

Indivisible answered 9/1, 2021 at 14:29 Comment(1)
This answer seems to be completely unrelated to the question. – Envy
