---
tags:
  - text-to-image
  - lora
  - diffusers
  - template:diffusion-lora
widget:
  - text: a woman, Futuristic bzonze-colored
    parameters:
      negative_prompt: (lowres, low quality, worst quality)
    output:
      url: images/b8b98770d257ab5b8fdeee37bcf61e85c562b45c5bb79f0c2708361b.jpg
  - text: a cup, Futuristic bzonze-colored
    parameters:
      negative_prompt: (lowres, low quality, worst quality)
    output:
      url: images/6371e4e34450732c155aa1205f0502dd7e9839ac61a6ac8a460c0282.jpg
  - text: a lion, Futuristic bzonze-colored
    parameters:
      negative_prompt: (lowres, low quality, worst quality)
    output:
      url: images/fefeaac1e88b5883abdf0bc0403cf7c592104729148cc93ffe838b26.jpg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: Futuristic bzonze-colored
license: other
license_name: stabilityai-ai-community
license_link: >-
  https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md
---

# SD3.5-LoRA-Futuristic-Bzonze-Colored


## Trigger words

You should use `Futuristic bzonze-colored` to trigger the image generation.
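Because the trigger phrase acts as a fixed style token appended to the subject, prompt construction can be kept in one place. A minimal sketch (the helper name `build_prompt` is illustrative, not part of the model or diffusers API):

```python
TRIGGER = "Futuristic bzonze-colored"

def build_prompt(subject: str) -> str:
    """Append the LoRA trigger phrase to a subject description."""
    return f"{subject}, {TRIGGER}"

# Matches the example prompts in the widget gallery above.
print(build_prompt("a cup"))    # a cup, Futuristic bzonze-colored
print(build_prompt("a lion"))   # a lion, Futuristic bzonze-colored
```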

## Inference

```python
import torch
from diffusers import StableDiffusion3Pipeline  # pip install "diffusers>=0.31.0"

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "Shakker-Labs/SD3.5-LoRA-Futuristic-Bzonze-Colored",
    weight_name="SD35-lora-Futuristic-Bzonze-Colored.safetensors",
)
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

prompt = "a cup, Futuristic bzonze-colored"
negative_prompt = "(lowres, low quality, worst quality)"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=24,
    guidance_scale=4.0,
    width=960,
    height=1280,
).images[0]
image.save("toy_example.jpg")
```