"Attempting to perform BLAS operation using StreamExecutor without BLAS support" error occurs

My computer has only one GPU.

Below is the result I get when I run someone else's code to list the available devices:

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {}
 incarnation: 16894043898758027805,
 name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 10088284160
 locality {bus_id: 1 links {}}
 incarnation: 17925533084010082620
 physical_device_desc: "device: 0, name: GeForce RTX 3060, pci bus id: 0000:17:00.0, compute capability: 8.6"]
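
The listing above looks like TensorFlow's local device enumeration; a minimal snippet that produces this kind of output (an assumption on my part, since the original code isn't shown) would be:

from tensorflow.python.client import device_lib

# Print every device TensorFlow can see (CPU and GPU), including memory limits
print(device_lib.list_local_devices())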

I use Jupyter Notebook and am currently running two kernels. (TensorFlow 2.6.0, with CUDA and cuDNN installed as described in the TensorFlow guide.)

The first kernel runs my Keras Sequential model without any problem.

But when I run the same code in the second kernel, I get the error below.

Attempting to perform BLAS operation using StreamExecutor without BLAS support [[node sequential_3/dense_21/MatMul (defined at \AppData\Local\Temp/ipykernel_14764/3692363323.py:1) ]] [Op:__inference_train_function_7682]

Function call stack: train_function

How can I run multiple kernels at the same time and have them share a single GPU without any problem?

I am not familiar with the TensorFlow 1.x versions, though.


I just solved this problem as shown below. The problem occurs because when Keras runs on the GPU, it allocates almost all of the VRAM by default. So I needed to set a memory_limit for each notebook. Here is the code I used to solve it; just change the memory_limit value to suit your GPU.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Limit this notebook's process to ~5 GB of GPU memory (adjust as needed)
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])
  except RuntimeError as e:
    # Virtual devices must be configured before the GPU has been initialized
    print(e)
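
An alternative to a fixed memory_limit, not part of the original answer but a related option, is to enable memory growth so each process only takes VRAM as it actually needs it:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Allocate GPU memory on demand instead of reserving nearly all of it up front
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    # Memory growth must be set before any GPU has been initialized
    print(e)

This lets several processes coexist on one GPU as long as their combined usage stays within the available VRAM.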
Attenuant answered 1/10, 2021 at 6:14 Comment(0)

Providing the solution here for the benefit of the community.

This problem happens because when Keras runs on the GPU, it allocates almost all of the VRAM. So we need to set a memory_limit for each notebook, as shown below:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Cap this notebook's GPU memory allocation; adjust memory_limit (MB) as needed
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])
  except RuntimeError as e:
    print(e)

(Paraphrased from MCPMH)
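
In more recent TensorFlow releases the same idea can also be written with the non-experimental API (a sketch of mine, not taken from the original answer):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
  try:
    # Cap this process at roughly 5 GB of VRAM on the first GPU
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=5120)])
  except RuntimeError as e:
    # Logical device configuration must be set before the GPU is initialized
    print(e)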

Expostulation answered 1/10, 2021 at 6:14 Comment(2)
This didn't work for me but the answer to this did. #41118240 – Atreus
@RobertFranklin You linked to a question rather than a particular answer. – Allinclusive

I had this error when trying to run a Python script while a Jupyter notebook was open. Killing the notebook's kernel before running the script did the trick. It seems that only one program can use the GPU at a time.

Descendent answered 21/2, 2022 at 16:9 Comment(1)
This happened to me; having Blender open caused the same issue. – Gianna
