How to fix the NSFW error for Stable Diffusion?

I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using Stable Diffusion, even with the example code provided on Hugging Face:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
token = 'MY TOKEN'  # Hugging Face access token

# Load the fp16 weights and move the pipeline to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=token)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")
Heliotropin answered 23/9, 2022 at 12:59

Comment (Keeling): This did the trick for me: reddit.com/r/StableDiffusion/comments/wv2nw0/…
Comment (Heliotropin): I don't really want to disable the NSFW filter. I'm just asking whether I messed up the installation somewhere, because I always get that error with any given prompt.

The pipeline has a single argument to disable it: safety_checker.

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
)

However, depending on the pipeline you use, you may get a warning message if safety_checker is set to None while requires_safety_checker is True.

From pipeline_stable_diffusion_inpaint_legacy.py:

if safety_checker is None and requires_safety_checker:
    logger.warning(f"...")

So you can do this:

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
    requires_safety_checker=False,
)

This also works with from_single_file:

StableDiffusionPipeline.from_single_file(
    ...,
    safety_checker=None,
    requires_safety_checker=False,
)

You can also change it later on an existing pipeline if necessary:

pipeline.safety_checker = None
pipeline.requires_safety_checker = False
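
Putting the pieces together, here is a minimal end-to-end sketch, with the model id and prompt borrowed from the question (assumes a reasonably recent diffusers release, where the fp16 revision and autocast wrapper are no longer needed):

import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline with the safety checker disabled and the warning suppressed
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")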
Revolt answered 3/5, 2023 at 2:54

This covers a bit of what the checker does: https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/

If you want to simply disable it, you can now set the safety_checker argument to None (you no longer have to modify the source Python):

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
)
Bed answered 22/2, 2023 at 4:45

Depending on your use case, you can simply stub out the run_safety_checker method in the pipeline_stable_diffusion img2img or txt2img source. You can alter the method in this way:

def run_safety_checker(self, image, device, dtype):
    # Skip the check entirely: report no NSFW concepts and return the image unchanged
    has_nsfw_concept = None
    return image, has_nsfw_concept
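
If you would rather not edit the installed library source, a similar effect can be had by patching the method on the pipeline instance instead; a sketch, assuming the pipeline still defines run_safety_checker with this (image, device, dtype) signature:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Replace the method with a no-op that reports no NSFW concepts;
# since this is an instance attribute, no self argument is passed
pipe.run_safety_checker = lambda image, device, dtype: (image, None)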
Perdure answered 9/2, 2023 at 4:53

If you don't want to disable the NSFW check, try rephrasing the prompt to work around the false positive.

Without having tried it, I would suggest replacing "riding" with something more explicitly safe, like "sitting on the back of".
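
For example, reusing the pipeline from the question (the rewording is illustrative, not a tested fix):

prompt = "a photo of an astronaut sitting on the back of a horse on mars"
image = pipe(prompt, guidance_scale=7.5).images[0]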

Sashasashay answered 3/5, 2023 at 4:26

Folks, I'm having the same issue. The lambda approach throws a TypeError because 'bool' object is not iterable, and the other suggested syntax below still returns black images when they would be NSFW. This is being run in Google Colab.

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
)

I did the following and it worked for me:

from diffusers.pipelines.stable_diffusion import safety_checker

def sc(self, clip_input, images):
    return images, [False for _ in images]

# Patch StableDiffusionSafetyChecker.forward so that, when called, it just
# returns the images unchanged plus a list of False (not-NSFW) flags
safety_checker.StableDiffusionSafetyChecker.forward = sc
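
Note that this patches the StableDiffusionSafetyChecker class itself, so it affects every pipeline created in the same process; run it before generating any images, and be aware that a future diffusers release could change the forward signature.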
Cockleboat answered 2/4 at 20:27
