Using Stable Diffusion XL 1.0, I trained a custom model for a home-decor image-generation use case, and applied a Canny ControlNet to it for better structural control.
- Training notebook: https://www.kaggle.com/code/reemasirasna/sdxl-training-decor
- ControlNet notebook: https://www.kaggle.com/code/reemasirasna/sdxl-controlnet-decor
Below are the generated images from this model.
# SDXL LoRA DreamBooth - reemas-irasna/home-decor_LoRA

## Model description
These are reemas-irasna/home-decor_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
Use `a photo of home` in your prompt to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
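The weights can also be fetched programmatically with `huggingface_hub`. The filename below is the diffusers default for LoRA checkpoints (`pytorch_lora_weights.safetensors`) and is an assumption; check the Files & versions tab for the actual filename.

```python
from huggingface_hub import hf_hub_download

# Download the LoRA checkpoint to the local HF cache and return its path.
lora_path = hf_hub_download(
    repo_id="reemas-irasna/home-decor_LoRA",
    filename="pytorch_lora_weights.safetensors",  # assumed default name
)
print(lora_path)
```

`pipe.load_lora_weights` can then be pointed at the local file instead of the repo id.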
## Intended uses & limitations

### How to use
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Load the fp16-safe VAE that was also used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)

# Attach the DreamBooth LoRA adapter weights to the base pipeline.
pipe.load_lora_weights("reemas-irasna/home-decor_LoRA")
_ = pipe.to("cuda")

prompt = "a photo of bedroom in red and white combination"
image = pipe(prompt=prompt, num_inference_steps=25).images[0]
image
```