VGG 16/19 Slow Runtimes

When I try to get an output from the pre-trained VGG 16/19 models using Caffe with Python (both 2.7 and 3.5), the net.forward() step takes over 15 seconds on my laptop's CPU.

I was wondering if anyone might advise me as to why this could be. With many other models (e.g. ResNet, AlexNet) I get an output in a split second; this is the only model I've found so far that performs this poorly.

The code I'm using is as follows:

import cv2
import caffe
from timeit import default_timer as timer

img = cv2.imread(path + img_name + '.jpg')
img = transform_img(img, 224, 224)  # Resizes image to 224x224.
net = caffe.Net(model_prototxt, model_trained, caffe.TEST)
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW
net.blobs['data'].data[...] = transformer.preprocess('data', img)
start = timer()
out = net.forward()
end = timer()
print('Runtime: ' + "{0:.2f}".format(end - start) + 's')
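As an aside, a single forward-pass timing on a laptop CPU can be noisy (OS scheduling, thermal throttling, caches warming up). A best-of-N measurement gives steadier numbers when comparing models. A minimal sketch, where `time_call` is a hypothetical helper and the lambda stands in for `net.forward`:

```python
from timeit import default_timer as timer

def time_call(fn, runs=5):
    """Call fn several times; return (best, mean) wall-clock seconds.
    Taking the best of several runs filters out one-off noise from
    other processes on the machine."""
    times = []
    for _ in range(runs):
        start = timer()
        fn()
        times.append(timer() - start)
    return min(times), sum(times) / len(times)

# Stand-in workload; with Caffe you would pass lambda: net.forward()
best, mean = time_call(lambda: sum(i * i for i in range(100000)), runs=3)
print("best {0:.4f}s, mean {1:.4f}s".format(best, mean))
```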

Sorry for what may be an extremely rookie question, and thanks in advance to anyone who takes the time to answer.

Multistage answered 22/3, 2017 at 2:14 Comment(2)
What Caffe distribution are you using? What hardware is on your laptop (CPU spec is enough for now)? What is your batch size (1 is all I can see in the posted code)?Sucre
Also, what speeds do you see (forward time per image would be fine) for the other topologies?Sucre

VGG-19 is much slower than its predecessors. Remember, the metric for the ILSVRC competition is accuracy (top-1 / top-5), regardless of training time. A model that trains in a week and gets 95.2% accuracy beats a model that trains in 2 hours and gets 95.1% accuracy.

Computing power continues to track Moore's Law, so we have the freedom to develop algorithms that won't be practical in real time for a few more doubling periods. What trains in a week now will take less than a day in five years.

In general, an earlier model will train faster, but with less accuracy, than a later one. This holds across AlexNet, GoogleNet v1, GoogleNet v2, ResNet, and VGG. There's a huge drop-off with VGG: the architectural choices that make it more accurate also severely slow down training.
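To put a number on the slowdown: a rough back-of-the-envelope count of the standard VGG-16 configuration (assumed here; not taken from the thread) shows ~138M parameters and ~15 billion multiply-accumulates per 224x224 image, most of it in the 3x3 conv stacks, versus roughly 60M parameters for AlexNet. A sketch of that arithmetic:

```python
# Rough parameter/compute count for the standard VGG-16 configuration.
convs = [  # (in_channels, out_channels, output_spatial_size) per 3x3 conv
    (3, 64, 224), (64, 64, 224),
    (64, 128, 112), (128, 128, 112),
    (128, 256, 56), (256, 256, 56), (256, 256, 56),
    (256, 512, 28), (512, 512, 28), (512, 512, 28),
    (512, 512, 14), (512, 512, 14), (512, 512, 14),
]
fcs = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]

# Weights (3x3 kernel per in/out channel pair) plus one bias per output channel.
conv_params = sum(3 * 3 * cin * cout + cout for cin, cout, _ in convs)
fc_params = sum(cin * cout + cout for cin, cout in fcs)
# Multiply-accumulates for one forward pass (conv layers only).
conv_macs = sum(s * s * cout * 3 * 3 * cin for cin, cout, s in convs)

print("params: {0:,}".format(conv_params + fc_params))      # 138,357,544
print("conv MACs per image: {0:.1f}G".format(conv_macs / 1e9))  # 15.3G
```

Each forward pass grinds through those ~15 GMACs, which is why a CPU takes seconds while the much shallower AlexNet finishes almost instantly.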

Sucre answered 22/3, 2017 at 16:40 Comment(5)
Thanks for your comments and answer. Right now I wasn't planning on training the model, I just wanted to use the freely available pre-trained model to see how it works - that's when I noticed that each forward pass/test was taking 15-20 seconds. If you think that this kind of time isn't anything surprising then I'm satisfied with that. :) Your second paragraph makes a lot of sense, and is something that I hadn't really considered in detail before!Multistage
You're quite welcome. Your performance is much worse than I get; that's why I was asking about your hardware. What is the speed ratio you see between VGG and some of the faster topologies?Sucre
Things like ResNet and AlexNet run in under a second for each forward pass (I haven't got any specific timings on hand unfortunately). I'm currently running on the CPU of my laptop only, so this might be the cause.Multistage
That ratio (not too much under a second vs 15-20 sec) is consistent with what I've seen bandied about.Sucre
Thanks! That's a relief, I had to fight a lot with caffe to get it to work with certain nets that needed custom layers, so I thought I might've messed something up doing so. :)Multistage
