---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:

  - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
  - Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
  - The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
  - The `original` version is compatible only with the `CPU & GPU` option.
  - Custom resolution versions are tagged accordingly.
  - The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
  - This model was converted with a `vae-encoder` for use with `image2image`.
  - This model is `fp16`.
  - Descriptions are posted as-is from original model source.
  - Not all features and/or results may be available in `Core ML` format.
  - This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
  - This model does not include a `safety checker` (for NSFW content).
  - This model can be used with ControlNet.
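
As a sketch of how a converted model directory like this one can be used outside of an app, the upstream [apple/ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) repository ships a Python reference pipeline. The model directory and prompt below are placeholder examples, and flags may differ between releases of that repository:

```shell
# Hypothetical invocation of Apple's reference Core ML pipeline.
# -i points at the directory containing the converted Core ML bundles;
# --compute-unit ALL lets Core ML schedule work on the Neural Engine
# (suitable for the split_einsum variant; use CPU_AND_GPU for original).
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "portrait of a woman, oil painting, highly detailed" \
    -i ./DreamShaper-v5.0_cn_split-einsum \
    -o ./outputs \
    --compute-unit ALL \
    --seed 42
```

Apps such as Mochi Diffusion wrap the same Core ML models behind a GUI, so the command line is only needed for scripted or batch generation.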

# DreamShaper-v5.0_cn:
Source(s): [Hugging Face](https://huggingface.co/Lykon/DreamShaper) - [CivitAI](https://civitai.com/models/4384/dreamshaper)<br>

## DreamShaper 5

Please check out my newest models: [NeverEnding Dream](https://civitai.com/models/10028/neverending-dream) and [Anime Pastel Dream](https://civitai.com/models/23521/anime-pastel-dream)

Check the version description below for more info and add a ❤️ to receive future updates.

Do you like what I do? Consider supporting me on [Patreon](https://www.patreon.com/Lykon275) 🅿️ to get exclusive tips and tutorials, or feel free to [buy me a coffee](https://ko-fi.com/lykon) ☕


[Live demo available on HuggingFace](https://huggingface.co/spaces/Lykon/DreamShaper-webui) (CPU is slow but free).

Available on [Sinkin.ai](http://sinkin.ai/) and [Smugo](https://smugo.ai/create?model=dreamshaper) with GPU acceleration.

MY MODELS WILL ALWAYS BE FREE<br><br>

**NOTES**

Version 5 is the best at photorealism and has noise offset.

I get no money from any generative service, but you can buy me a coffee.

After a lot of tests I'm finally releasing my mix. This started as a model to make good portraits that do not look like CG or like photos with heavy filters, but more like actual paintings. The result is a model capable of doing the portraits I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks for making anime-style images.

I hope you'll enjoy it as much as I do.

Diffuser weights (courtesy of [/u/Different-Bet-1686](https://reddit.com/u/Different-Bet-1686)): https://huggingface.co/Lykon/DreamShaper

Official HF repository: https://huggingface.co/Lykon/DreamShaper

Suggested settings:
- I used CLIP skip 2 on the sample images
- I used ENSD: 31337 for all of them
- All of them used highres.fix
- I don't use restore faces, as it washes out the painting effect
- Version 4 requires no LoRA for anime style.

![image](https://huggingface.co/Lykon/DreamShaper/resolve/main/1.png)

![image](https://huggingface.co/Lykon/DreamShaper/resolve/main/4.png)

![image](https://huggingface.co/Lykon/DreamShaper/resolve/main/5.png)

![image](https://huggingface.co/Lykon/DreamShaper/resolve/main/2.png)