dreamdrop-art committed
Commit dc58d77
1 Parent(s): d3e388a

Update README.md

Files changed (1)
  1. README.md +66 -3
README.md CHANGED
@@ -1,3 +1,66 @@
- ---
- license: creativeml-openrail-m
- ---
+ ---
+ license: creativeml-openrail-m
+ library_name: diffusers
+ pipeline_tag: text-to-image
+ base_model: stabilityai/stable-diffusion-xl-base-1.0
+ tags:
+ - safetensors
+ - stable-diffusion
+ - lora
+ - template:sd-lora
+ - sdxl
+ - flash
+ - sdxl-flash
+ - lightning
+ - turbo
+ - lcm
+ - hyper
+ - fast
+ - fast-sdxl
+ - sd-community
+ instance_prompt: <lora:sdxl-flash-lora:0.55>
+ inference:
+   parameters:
+     num_inference_steps: 7
+     guidance_scale: 3
+     negative_prompt: >-
+       (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
+       anatomy, extra limb, missing limb, floating limbs, (mutated hands and
+       fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
+       blurry, amputation
+ ---
+ # **SDXL Flash** *in collaboration with [Project Fluently](https://hf.co/fluently)*
+
+ ![preview](images/preview.png)
+
+ Introducing SDXL Flash, our new fast model. Existing fast SDXL models trade image quality for speed, so we made our own: it is not as fast as LCM, Turbo, Lightning, or Hyper, but the quality is noticeably higher. A study of step counts and CFG values is shown below.
+
+ ### **Work with LoRA**
+
+ Trigger word: `<lora:sdxl-flash-lora:0.55>`
+
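If you prefer to apply the LoRA through diffusers rather than the A1111-style trigger syntax, here is a minimal sketch. The LoRA repository path `sd-community/sdxl-flash-lora` and the output filename are assumptions (substitute the actual weights location); the 0.55 scale mirrors the weight in the trigger word above.

```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

# Load the SDXL base model listed in the metadata above.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Same "trailing" timestep setting as in the Diffusers usage section below.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

# Load the Flash LoRA (repository path is an assumption) and fuse it at 0.55,
# matching the weight used in the trigger word.
pipe.load_lora_weights("sd-community/sdxl-flash-lora")
pipe.fuse_lora(lora_scale=0.55)

pipe("a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3).images[0].save("output_lora.png")
```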
+ ### Steps and CFG (Guidance)
+
+ ![steps_and_cfg_grid_test](images/steps_cfg_grid.png)
+
+ ### Optimal settings
+ - **Steps**: 6-9
+ - **CFG Scale**: 2.5-3.5
+ - **Sampler**: DPM++ SDE
+
+ ### Diffusers usage
+
+ ```bash
+ pip install torch diffusers transformers accelerate
+ ```
+
+ ```py
+ import torch
+ from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler
+
+ # Load the model.
+ pipe = StableDiffusionXLPipeline.from_pretrained("sd-community/sdxl-flash", torch_dtype=torch.float16).to("cuda")
+
+ # Ensure the sampler uses "trailing" timesteps.
+ pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
+
+ # Generate an image and save it.
+ pipe("a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3).images[0].save("output.png")
+ ```
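The inference parameters in the YAML metadata above also recommend a negative prompt. A minimal sketch of passing it through the same pipeline (this reuses the `pipe` object from the block above; the positive prompt and output filename are only illustrative):

```py
# Negative prompt recommended in the model card metadata above.
negative = (
    "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, "
    "extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), "
    "disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
)

# Generate with the recommended steps, guidance scale, and negative prompt.
image = pipe(
    "a portrait of an astronaut, studio lighting, realism",
    negative_prompt=negative,
    num_inference_steps=7,
    guidance_scale=3,
).images[0]
image.save("output_negative.png")
```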