How do I resolve this LoRA loading error?

I'm trying to run through the 🤗 LoRA tutorial. I've pulled down the dataset, trained the model, and have checkpoints on disk (in the form of several subdirectories and .safetensors files).

The last step is running inference. In particular,

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")

However, when I try to run that load_lora_weights line on my local machine, I get

>>> pipeline.load_lora_weights("path/to/my/lora", weight_name="pytorch_lora_weights.safetensors")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/to/my/site-packages/diffusers/loaders/lora.py", line 107, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
>>> 

I have PEFT installed, but the instructions don't seem to call for doing anything else with it in order to load a LoRA.

What am I doing wrong here? If the answer is "nothing, this is the 'it's an experimental API' note coming back to bite you", are there any workarounds?

Shaw answered 25/2 at 15:26 Comment(2)
This might be a totally simple thing, but did you actually change path/to/lora/model to the path to your LoRA model? – Flirtatious
@KaranShishoo - Yup; it was a local path on my machine in the raw transcript; path/to/lora/model is an after-the-fact anonymization :p Fair question though. – Shaw

I had the same issue and resolved it by upgrading the peft and transformers libraries.

(As of 1 April 2024) ensure your installed libraries are at least the following versions:

  • peft >= 0.6.0
  • transformers >= 4.34.0
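As a quick sanity check, something like the following sketch (standard library only; the minimum versions mirror the list above and may change in later diffusers releases) will report whether your installed versions are new enough:

```python
# Quick check that peft and transformers meet the minimum versions above.
# Note: the simple numeric comparison below assumes plain dotted versions
# like "0.10.0"; pre-release suffixes (e.g. "dev0") would need extra handling.
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '0.10.0' >= '0.6.0'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(minimum)

for pkg, minimum in [("peft", "0.6.0"), ("transformers", "4.34.0")]:
    try:
        installed = version(pkg)
        status = "OK" if meets_minimum(installed, minimum) else "too old"
        print(f"{pkg} {installed}: {status} (need >= {minimum})")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```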

Hope this helps, cheers!

Siddons answered 1/4 at 22:46 Comment(1)
haha stackoverflow to the rescue after so long! – Caddoan

I hit the same error and resolved it. Before you import the packages and load a model and LoRA, install the newest peft with

pip install -U peft

and then restart the kernel (a full restart, not just disconnecting or stopping it).

Now you can import the packages and load the models.
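For what it's worth, the restart matters because Python caches imports for the life of the interpreter, so an upgrade on disk doesn't refresh what's already loaded. A small sketch, using the standard-library json module as a stand-in for peft:

```python
import sys

# Python caches every imported module in sys.modules for the life of the
# interpreter, so upgrading a package on disk does not refresh a module
# that is already loaded -- only a fresh interpreter (a kernel restart)
# picks up the new version.
import json  # json stands in here for peft/diffusers

print("json" in sys.modules)  # the cached module; a pip upgrade won't touch it
```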

Considerate answered 13/4 at 1:50 Comment(0)
