CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle)

I got the following error when I ran my PyTorch deep learning model in Google Colab:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1370         ret = torch.addmm(bias, input, weight.t())
   1371     else:
-> 1372         output = input.matmul(weight.t())
   1373         if bias is not None:
   1374             output += bias

RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

I even reduced the batch size from 128 to 64 (i.e., halved it), but I still got this error. Earlier, I ran the same code with a batch size of 128 and didn't get any error like this.

Seagoing answered 28/4, 2020 at 5:39 Comment(1)
The error and the answers seem to suggest that the GPU memory is somehow full and that this is not caught by the standard safety protocols. I got the error when too many (notebook) Python kernels were using the GPU at the same time.Tetrasyllable

No, batch size does not matter in this case.

The most likely reason is that there is an inconsistency between the number of labels and the number of output units.

  • Try printing the size of the final output in the forward pass and check it against the number of labels

print(model.fc1(x).size())
Here, replace fc1 with the name of the last linear layer your model applies before returning

  • Make sure that label.size() is equal to prediction.size() before calculating the loss (a minimal check is sketched below)
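
For example, a minimal shape check before computing the loss; model, criterion, inputs, and labels are placeholders for your own objects:

outputs = model(inputs)                      # expected shape: [batch_size, num_classes]
print(outputs.size(), labels.size())
assert outputs.size(0) == labels.size(0)     # batch dimensions must match
# for nn.CrossEntropyLoss, every label must lie in [0, num_classes - 1]
assert labels.max().item() < outputs.size(1)
loss = criterion(outputs, labels)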

Even after fixing that problem, you'll have to restart the GPU runtime (I needed to do this in my case when using a Colab GPU).

This GitHub issue comment might also be helpful.

Cobb answered 25/7, 2020 at 7:8 Comment(1)
I would remove the part about batch size not mattering in this case because, well, it does for some of us. :)Farnese

This error can actually have a number of different causes. It is recommended to debug CUDA errors by running the code on the CPU, if possible. If that's not possible, try to execute the script via:

CUDA_LAUNCH_BLOCKING=1 python [YOUR_PROGRAM]

This makes CUDA kernel launches synchronous, so the stack trace points at the exact line of code that raised the error, which makes it much easier to resolve.
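
If you are working in a notebook and cannot prefix the command, a common alternative is to set the variable from Python, assuming it runs before CUDA is initialized (otherwise it has no effect):

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch  # import torch only after setting the variable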

Peder answered 2/10, 2020 at 15:32 Comment(2)
Thanks @Peder, I ran my program using CUDA_LAUNCH_BLOCKING=1, but it still outputs RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`. Why is it still reporting a CUDA error?Cadi
That's strange. Try running directly on the CPU; that's usually the default, but you might need to modify your code if the GPU is prioritised. It depends on what you're executing.Peder

Reducing the batch size works for me, and the training proceeds as planned.

Necker answered 24/1, 2021 at 22:51 Comment(0)

This error means "Resource allocation failed inside the cuBLAS library".

Decreasing the batch size solved the issue for me. You said you reduced it to 64 and that didn't help; try 32, 8, 1, etc. as well.

Also, try running the same code on your CPU to check whether everything is fine with your tensors' shapes.
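
A quick way to do that is to force everything onto the CPU and rerun; shape mismatches then surface as readable Python exceptions instead of opaque CUDA errors. A sketch, with model, inputs, and labels standing in for your own objects:

import torch

device = torch.device("cpu")  # temporarily use "cpu" instead of "cuda"
model = model.to(device)
inputs, labels = inputs.to(device), labels.to(device)
outputs = model(inputs)       # a bad shape now raises a clear RuntimeError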

Bergman answered 24/9, 2020 at 5:50 Comment(1)
Yup, decreasing from batch size of 32 to 16 worked for me.Farnese

One cause of this problem is the number of labels not matching the number of network output channels, i.e. the number of output classes predicted. Adjust the output layer to match and it should fix the issue.
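
One defensive way to do that is to derive the output width from the labels themselves rather than hard-coding it. A sketch with placeholder sizes, where labels stands in for your tensor of class indices:

import torch
import torch.nn as nn

num_classes = int(labels.max().item()) + 1  # assumes labels run from 0 to C - 1
classifier = nn.Linear(512, num_classes)    # 512 is a placeholder input width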

Cohesive answered 26/3, 2021 at 6:9 Comment(0)

Reducing the maximum sequence length for a model that has a limit (e.g. BERT) solved this error for me.
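
If you are using a Hugging Face tokenizer, a common way to enforce that limit (the model name and max_length here are illustrative, and texts stands in for your own inputs) is:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer(texts, truncation=True, max_length=512, return_tensors="pt")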

Also, I faced the same issue when I resized the embedding layer of a model: model.resize_token_embeddings(NEW_SIZE), trained, and saved it.

At prediction time, when I loaded the model, I needed to resize the embedding layer again!
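
A sketch of that fix, assuming the Hugging Face transformers API and that the tokenizer (with its added tokens) was saved alongside the model; the directory name is hypothetical:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my-finetuned-model")
model = AutoModelForSequenceClassification.from_pretrained("my-finetuned-model")
model.resize_token_embeddings(len(tokenizer))  # resize again after loading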

Miasma answered 17/8, 2022 at 15:44 Comment(0)

I had the same problem. While I don't know the exact reason, I do know the cause: the last line of my nn.Module was

 self.fc3 = nn.Linear(84, num_classes) 

I had changed the real number of classes to be twice as large, but I did not update the value of the num_classes variable, which probably caused a mismatch somewhere when I was outputting the results.

After I fixed the value of num_classes, it just worked. I recommend going over the numbers in your model again.

Unpaidfor answered 8/2, 2022 at 20:9 Comment(1)
This seems like a totally different issue (caused by some unreproducible error in your code), unrelated to the question at hand.Sensitize

My model classifies two classes with only one neuron in the last layer. I had this problem when the last layer was nn.Linear(512, 1) in a PyTorch environment, but my labels were just 0 or 1. I solved this problem by adding an nn.Sigmoid() layer.
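
A sketch of that setup, with placeholder sizes, pairing the single sigmoid output with a binary cross-entropy loss (labels stands in for your own tensor of 0/1 targets):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 1),  # one output neuron for two classes
    nn.Sigmoid(),       # squashes the logit into [0, 1]
)
criterion = nn.BCELoss()              # expects probabilities and float targets
targets = labels.float().unsqueeze(1) # shape [batch, 1] to match the output

A numerically safer variant is to drop the nn.Sigmoid() and use nn.BCEWithLogitsLoss instead.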

Undeceive answered 6/2, 2022 at 13:22 Comment(0)

For a large-scale dataset, just delete the temporary variables at the end of each training iteration:

for batch_idx, (x, target) in enumerate(train_dataloader):
    ...
    # free the per-batch tensors so their GPU memory can be reclaimed
    del x, target, loss, outputs
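
If memory is still not released after the del, you can additionally hand PyTorch's cached blocks back to the driver (this call is standard PyTorch, not part of the original answer):

torch.cuda.empty_cache()  # free cached GPU memory held by the allocator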

Dogtrot answered 15/4, 2022 at 1:23 Comment(0)

I was instantiating the AutoModelForSequenceClassification class as follows:

model = AutoModelForSequenceClassification.from_pretrained(model_name)

and I ended up having this problem. I corrected it when I declared the number of labels in my dataset:

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=train_df['label'].nunique())
Morea answered 8/3 at 5:51 Comment(0)

I solved this problem by upgrading the GPU. In my case, I upgraded from a T4 to an L4.

Dulsea answered 30/4 at 0:7 Comment(0)
