Getting CUDA out of memory under PyTorch in Google Colab

I am trying to replicate the work described here (https://www.analyticsvidhya.com/blog/2019/02/building-crowd-counting-model-python/) in Google Colab. It worked at first, but after some time it started showing a CUDA out-of-memory error on this line of code:

output = model(img.unsqueeze(0))

Here is the error description:

RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 11.17 GiB total capacity; 10.78 GiB already allocated; 40.81 MiB free; 34.25 MiB cached)

I tried running this code under a different Google account, but it shows the same error.
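One thing worth checking before shrinking the input (a sketch, not from the original post): if the failing line runs during inference, wrapping it in `torch.no_grad()` stops autograd from caching activations for backprop, which often cuts inference memory substantially. The `model` and `img` names below stand in for the ones in the question:

```python
import torch

def infer(model, img):
    """Forward pass without autograd bookkeeping.

    torch.no_grad() prevents activations from being cached for a backward
    pass, so inference uses noticeably less GPU memory.
    """
    with torch.no_grad():
        return model(img.unsqueeze(0))
```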

Reparable answered 26/11, 2019 at 11:29 Comment(3)
Solved now. The problem was that the image resolution was too high, so I decreased it and now it works fine.Reparable
You could also resolve the issue by lowering the batch size rather than the image resolution, as lowering the latter will affect the classification accuracy.Moody
@MuhammadArslan's comment could be placed as an answer. It works.Bevatron
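The resolution fix described in the comments can be sketched as follows (the scale factor and function name are illustrative, not from the original post). Downscaling the image before the forward pass shrinks every intermediate activation tensor, at some cost in accuracy:

```python
import torch
import torch.nn.functional as F

def predict_downscaled(model, img, scale=0.5):
    """Run inference on a downscaled copy of `img` to cut activation memory.

    `img` is a CHW float tensor; `scale` < 1 shrinks both spatial dimensions,
    so activation memory drops roughly with scale squared.
    """
    small = F.interpolate(img.unsqueeze(0), scale_factor=scale,
                          mode="bilinear", align_corners=False)
    with torch.no_grad():  # inference only: skip autograd buffers
        return model(small)
```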

You could also resolve the issue by lowering the batch size rather than the image resolution, as lowering the latter will affect the classification accuracy.

Moody answered 11/2, 2022 at 6:37 Comment(1)
I am running the llama2-7b model on Google Colab on a T4 instance. I have reduced the batch size to 1 and still get the CUDA out-of-memory error. Has anyone tried running this model on Colab with a T4?Sensational
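For training workloads, the batch-size suggestion in this answer amounts to passing a smaller `batch_size` to the `DataLoader`, so fewer images (and their activations) sit on the GPU at once. A minimal sketch with a dummy dataset standing in for the crowd-counting images:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for the real images (illustrative shapes).
data = TensorDataset(torch.zeros(16, 3, 32, 32), torch.zeros(16))

# A smaller batch_size lowers peak GPU memory per step; unlike lowering
# the image resolution, it leaves per-image accuracy untouched (though
# very small batches can make training noisier).
loader = DataLoader(data, batch_size=2, shuffle=True)

for imgs, labels in loader:
    pass  # forward/backward pass would go here
```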