whisper AI error : FP16 is not supported on CPU; using FP32 instead
Asked Answered
I'm trying to use Whisper AI on my computer. I have an NVIDIA RTX 2060 GPU, and I've installed CUDA and FFmpeg.

I'm running this code:

import whisper

model = whisper.load_model("medium")
result = model.transcribe("venv/files/test1.mp3")
print(result["text"])

and getting this warning:

whisper\transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")

I don't understand why FP16 is not supported, since I have a good GPU and everything installed. Any help would be appreciated. Thanks.

I installed all the requirements and was expecting Whisper AI to use the GPU.

Zebrass answered 1/4, 2023 at 19:32 Comment(0)
You could try this:

result = model.transcribe("venv/files/test1.mp3", fp16=False)

That worked for me!
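For context, the warning fires because `transcribe` defaults to `fp16=True` while the model is sitting on the CPU, which only supports FP32. A minimal sketch of deriving the flag from the device (the `choose_fp16` helper and `transcribe_file` wrapper are illustrative, not part of Whisper's API):

```python
def choose_fp16(device: str) -> bool:
    # FP16 inference is only supported on CUDA devices; CPU must use FP32.
    return device == "cuda"

def transcribe_file(path: str, device: str = "cpu") -> str:
    # Hypothetical wrapper: pass fp16 explicitly so CPU runs stay silent.
    import whisper  # imported lazily so the helper above works without it
    model = whisper.load_model("medium", device=device)
    result = model.transcribe(path, fp16=choose_fp16(device))
    return result["text"]
```

Passing `fp16=False` on CPU doesn't change the result, it just matches the fallback Whisper was already applying, so the `UserWarning` disappears.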

Uprear answered 5/4, 2023 at 4:16 Comment(1)
Wouldn't that just suppress the warning without fixing anything? He wants FP16, and you suggest setting it to false. – Burthen
In order to utilize CUDA with Whisper, you have to:

  1. Uninstall the existing PyTorch.
  2. Install PyTorch with CUDA support.
  3. Import the torch library.
  4. Chain a call to the to method after the original load_model.

Full example

Terminal

pip3 uninstall -y torch torchvision torchaudio
# following command was generated using https://pytorch.org/get-started/locally/#with-cuda-1
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 

file.py

import torch
import whisper

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = whisper.load_model('medium').to(device)
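To confirm the reinstalled build can actually see the GPU before loading a model, the device choice above can be factored out into a small helper (the `pick_device` and `load_on_best_device` names are illustrative, not part of either library):

```python
def pick_device(cuda_available: bool) -> str:
    # Mirror the answer's ternary: prefer CUDA when the driver reports it.
    return "cuda" if cuda_available else "cpu"

def load_on_best_device(name: str = "medium"):
    # Hypothetical convenience wrapper around the answer's two lines.
    import torch
    import whisper
    device = pick_device(torch.cuda.is_available())
    return whisper.load_model(name, device=device)
```

If `torch.cuda.is_available()` still returns False after reinstalling, the CPU-only wheel is likely still being picked up, and FP16 will keep falling back to FP32.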
Mensch answered 25/7, 2023 at 15:24 Comment(0)
