---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
base_model: fluently/Fluently-XL-v4
tags:
- safetensors
- stable-diffusion
- lora
- template:sd-lora
- sdxl
- flash
- sdxl-flash
- lightning
- turbo
- lcm
- hyper
- fast
- fast-sdxl
- sd-community
instance_prompt: <lora:sdxl-flash-lora:0.55>
inference:
parameters:
num_inference_steps: 7
guidance_scale: 3
negative_prompt: >-
(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
anatomy, extra limb, missing limb, floating limbs, (mutated hands and
fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
blurry, amputation
---
# **[SDXL Flash](https://huggingface.co/sd-community/sdxl-flash)** with LoRA *in collaboration with [Project Fluently](https://hf.co/fluently)*
![preview](https://huggingface.co/sd-community/sdxl-flash/resolve/main/images/preview.png)
Introducing SDXL Flash, a new fast model. All fast XL models trade quality for speed, so we took a different balance: SDXL Flash is not as fast as LCM, Turbo, Lightning, or Hyper, but the quality is higher. Below you will find our study of steps and CFG.
### --> **Work with LoRA** <--
- **Trigger word**:
```
<lora:sdxl-flash-lora:0.55>
```
- **Optimal LoRA multiplier**: 0.45-0.6 (0.55 works best)
- **Optimal base model**: [fluently/Fluently-XL-v4](https://huggingface.co/fluently/Fluently-XL-v4)
### Steps and CFG (Guidance)
![steps_and_cfg_grid_test](https://huggingface.co/sd-community/sdxl-flash/resolve/main/images/steps_cfg_grid.png)
### Optimal settings
- **Steps**: 6-9
- **CFG Scale**: 2.5-3.5
- **Sampler**: DPM++ SDE
### Diffusers usage
```bash
pip install torch diffusers
```
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler
# Load model.
pipe = StableDiffusionXLPipeline.from_pretrained("sd-community/sdxl-flash", torch_dtype=torch.float16).to("cuda")
# Ensure sampler uses "trailing" timesteps.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
# Image generation.
pipe("a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3).images[0].save("output.png")
```
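The example above runs the standalone SDXL Flash checkpoint. The `<lora:...:0.55>` trigger syntax is an A1111/WebUI convention and does not apply in diffusers; there, the LoRA is loaded and fused explicitly. A sketch along these lines should work with the recommended base model; note that the LoRA repo id below is an assumption, so point it at wherever the sdxl-flash-lora weights are actually hosted:

```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

# Load the recommended base model for the LoRA.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-v4", torch_dtype=torch.float16
).to("cuda")

# Ensure sampler uses "trailing" timesteps, as with the standalone model.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# Load the LoRA and fuse it at the recommended multiplier (0.55).
# NOTE: "sd-community/sdxl-flash-lora" is an assumed repo id.
pipe.load_lora_weights("sd-community/sdxl-flash-lora")
pipe.fuse_lora(lora_scale=0.55)

# Image generation with the recommended steps and CFG.
pipe("a happy dog, sunny day, realism",
     num_inference_steps=7, guidance_scale=3).images[0].save("output.png")
```

Fusing the LoRA bakes its weights into the base model, so subsequent generations pay no per-step LoRA overhead; `pipe.unfuse_lora()` restores the original weights if needed.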