---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-audiences
---
# Core ML Converted Model:

- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
- The `original` version is only compatible with the `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.

<br>
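Besides GUI apps like Mochi Diffusion, a converted model can also be run from the command line with the reference Python pipeline in Apple's `ml-stable-diffusion` repository. The sketch below is illustrative only: the model directory and output path are placeholders, and the exact flags may vary between repository versions.

```shell
# Illustrative invocation of Apple's reference pipeline; <model-dir> is
# the folder that holds the converted .mlpackage files for this model.
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "portrait of an old fisherman, moody light" \
  -i <model-dir> \
  -o output_images \
  --compute-unit CPU_AND_NE \
  --seed 42
```

`CPU_AND_NE` targets the Neural Engine and needs the `split_einsum` variant; use `CPU_AND_GPU` with `original` variants.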

# epiCPhotoGasm-zUniversal_cn:
Source(s): [CivitAI](https://civitai.com/models/132632/epicphotogasm?modelVersionId=201259)<br>

## epiCPhotoGasm z-Universal<br><br>








### Welcome to epiCPhotoGasm

This model is highly tuned for photorealism and needs only minimal prompting to shine.

All showcase images were generated without negatives (V1) to show what is possible from the bare prompt.

### What's special?

The model has a strong sense of what a photo is, so you can usually leave "photo" out of your prompts. If a prompt leans toward the fantastical, the model will drift away from photorealism, and you will have to steer it back with terms the model was trained on and knows, so try those out too.

This should be the most versatile version of the epiCPhotoGasm model, and it will probably be the last.

Have fun trying it out!

### How to use

- Use simple prompts without "fake" enhancers like "masterpiece, photorealistic, 4k, 8k, super realistic, realism" etc.
- Don't use a ton of negative embeddings; focus on a few tokens or single embeddings
- You can still use atmospheric enhancers like "cinematic, dark, moody light" etc.
- Start sampling at 20 steps
- No extra noise-offset needed
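
The first two tips above boil down to pruning prompt tokens the model does not need. As a toy illustration (the helper name and enhancer list are made up for this sketch, not part of any shipped tool), a prompt cleaner could look like:

```python
# Illustrative only: drop "fake" quality enhancers this model does not
# need, while keeping atmospheric terms like "moody light".
FAKE_ENHANCERS = {"masterpiece", "photorealistic", "4k", "8k",
                  "super realistic", "realism"}

def simplify_prompt(prompt: str) -> str:
    """Remove comma-separated tokens that match known fake enhancers."""
    tokens = [t.strip() for t in prompt.split(",")]
    kept = [t for t in tokens if t.lower() not in FAKE_ENHANCERS]
    return ", ".join(kept)

print(simplify_prompt("portrait of an old sailor, masterpiece, 8k, moody light"))
# → portrait of an old sailor, moody light
```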

### Additional Resources

Style Negatives: colorful Photo | soft Photo

### Useful Extensions

After Detailer | ControlNet | Agent Scheduler | Ultimate SD Upscale