---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-audiences
---
# Core ML Converted SDXL Model:

- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- The `original` version is only compatible with the `CPU & GPU` compute option.
- Resolution is the `SDXL` default of `1024x1024`.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is quantized to `8-bits`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model cannot be used with ControlNet.
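The `8-bits` bullet above refers to weight quantization: each float weight is stored as one of 256 discrete levels plus a scale/offset. As a minimal sketch of the idea only (the actual conversion is done by Apple's Core ML tooling, not this code), linear min/max 8-bit quantization looks like:

```python
# Toy linear 8-bit quantization: map each float weight onto one of
# 256 evenly spaced levels between the tensor's min and max.
# Illustration only -- not the real Core ML / coremltools quantizer.

def quantize_8bit(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    q = [round((w - lo) / scale) for w in weights]   # integers in 0..255
    dq = [lo + v * scale for v in q]                 # dequantized floats
    return q, dq

q, dq = quantize_8bit([-1.0, -0.25, 0.5, 1.0])
print(q)   # → [0, 96, 191, 255]
```

Storing 0-255 codes plus a per-tensor scale roughly halves the size of a 16-bit model, at a small cost in fidelity.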

<br>

# DreamShaper-XL1.0-Alpha2_SDXL_8-bit:
Source(s): [CivitAI](https://civitai.com/models/112902/dreamshaper-xl10)<br>

## This is an SDXL base model converted and quantized to 8-bits.

Finetuned over SDXL 1.0.

Even though this is still an alpha version, I think it's already much better than the first alpha, which was based on XL 0.9.

Basically, I do the first gen with DreamShaperXL, then upscale to 2x, and finally do an img2img step with either DreamShaperXL itself or a 1.5 model that I find suited, such as DreamShaper 7 or AbsoluteReality.
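The upscale step in that workflow doubles the pixel grid before img2img re-adds detail. As a stand-in for a real upscaler (which would typically be ESRGAN-style or latent-based, not nearest-neighbour), here is a toy 2x upscale over a nested-list "image" in plain Python:

```python
# Toy nearest-neighbour 2x upscale: duplicate every column, then every
# row, so an HxW grid becomes 2Hx2W. Real pipelines use smarter
# upscalers; this only illustrates the resolution change.

def upscale_2x(pixels):
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(2)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                     # duplicate rows
    return out

print(upscale_2x([[1, 2], [3, 4]]))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

After the resize, the img2img pass at moderate denoising strength is what actually restores sharp detail at the higher resolution.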

What does it do better than SDXL1.0?

- No need for the refiner. Just do highres fix (upscale+i2i)
- Better looking people
- Less blurry edges
- 75% better dragons 🐉
- Better NSFW<br><br>

![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/3aeb02b7-31ce-4948-be31-ddaed4a384e4/width=450/xl_upscaled_00824_.jpeg)

![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a6d92d98-6f2c-4e47-a1b4-b1a34483adee/width=450/xl_upscaled_00819_.jpeg)

![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9d83e4fa-e2da-4f24-8185-dfe81d6bbb1e/width=450/xl_upscaled_00806_.jpeg)

![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/57a5eae3-5eae-495a-b705-56b123d08280/width=450/xl_upscaled_00811_.jpeg)