I'm trying to run through the 🤗 LoRA tutorial. I've pulled the dataset down, run training, and now have checkpoints on disk (in the form of several subdirectories and .safetensors files).
The last part is running inference. In particular:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")
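(For context, the step after this would be actually generating an image, along the lines of the following; the prompt and filename here are just placeholders, not from the tutorial.)

# Placeholder prompt; substitute whatever concept the LoRA was trained on.
image = pipeline("a photo of the concept my LoRA was trained on").images[0]
image.save("lora_test.png")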
However, on my local machine, when I try to run that load_lora_weights line, I get:
>>> pipeline.load_lora_weights("path/to/my/lora", weight_name="pytorch_lora_weights.safetensors")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/my/site-packages/diffusers/loaders/lora.py", line 107, in load_lora_weights
raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
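For reference, the guard raising this looks roughly like the following (paraphrasing the diffusers source: USE_PEFT_BACKEND is a module-level flag in diffusers.utils that is True only when sufficiently recent versions of both peft and transformers can be imported):

# Simplified sketch of the check in diffusers/loaders/lora.py:
if not USE_PEFT_BACKEND:
    raise ValueError("PEFT backend is required for this method.")

So "peft is installed somewhere" isn't sufficient by itself; the minimum-version checks behind that flag also have to pass.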
I have PEFT installed, but the tutorial doesn't seem to call for doing anything else with it in order to load a LoRA.
What am I doing wrong here? If the answer is "nothing, this is the 'it's an experimental API' note coming back to bite you", are there any workarounds?
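In case it helps anyone answering, here is the diagnostic I'd run, assuming a diffusers release new enough to expose is_peft_available and USE_PEFT_BACKEND under diffusers.utils (both exist in recent versions, but that's an assumption about the installed one):

import diffusers, transformers, peft
from diffusers.utils import USE_PEFT_BACKEND, is_peft_available

# Print the versions the guard actually sees; a stale virtualenv or a second
# Python install is a common way to have peft "installed" yet invisible here.
print(diffusers.__version__, transformers.__version__, peft.__version__)
print(is_peft_available(), USE_PEFT_BACKEND)  # both need to print True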
Did you change path/to/lora/model to the path to your LoRA model? – Flirtatious
path/to/lora/model is an after-the-fact anonymization :p Fair question though. – Shaw