Core ML Converted Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Provide the model to an app such as Mochi Diffusion (GitHub / Discord) to generate images.
  • The split_einsum version is compatible with all compute unit options, including the Neural Engine.
  • The original version is only compatible with the CPU & GPU option (see the compute-unit sketch after this list).
  • Custom resolution versions are tagged accordingly.
  • The vae-ft-mse-840000-ema-pruned.ckpt VAE is embedded into the model.
  • This model was converted with a vae-encoder for use with image2image.
  • This model is fp16.
  • Descriptions are posted as-is from original model source.
  • Not all features and/or results may be available in Core ML format.
  • This model does not have the UNet split into chunks.
  • This model does not include a safety checker (for NSFW content).
  • This model can be used with ControlNet.
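
For readers loading the converted components programmatically rather than through an app, here is a minimal Python sketch of the compute-unit note above, using coremltools. The file path is a placeholder (it assumes the .mlpackage layout of the download), and which compute units actually work depends on whether you grabbed the split_einsum or original variant.

```python
import coremltools as ct

# Placeholder path: point this at one component of the converted model,
# e.g. the UNet .mlpackage inside the variant folder you downloaded.
UNET_PATH = "epiCRealism-pureEvolution-V3_cn/original/Unet.mlpackage"

# original variant: restrict execution to CPU & GPU.
# split_einsum variant: ct.ComputeUnit.ALL (or CPU_AND_NE) also enables the Neural Engine.
unet = ct.models.MLModel(UNET_PATH, compute_units=ct.ComputeUnit.CPU_AND_GPU)

# Quick sanity check: print the component's declared inputs and outputs.
print(unet.get_spec().description)
```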

epiCRealism-pureEvolution-V3_cn:

Source(s): CivitAI

V3 is here!

Since SDXL is right around the corner, let's say this is the final version for now; I put a lot of effort into it and probably cannot do much more.

I tried to refine the model's understanding of prompts, hands, and of course realism. Let's see what you guys can do with it.

Thanks to @drawaline for the in-depth review. Based on it, I'd like to give some advice on using this model.

Advice

Use simple prompts

No need to use keywords like "masterpiece, best quality, 8k, intricate, high detail" or "(extremely detailed face), (extremely detailed hands), (extremely detailed hair)", since they don't produce an appreciable change

Use simple negatives or small negative embeddings; this gives the most realistic look (check the samples to get an idea of the negatives I used; a short prompt sketch follows these tips)

Add "asian, chinese" to negative if you're looking for ethnicities other than Asian

Light, shadows, and details are excellent without extra keywords

If you're looking for a natural effect, avoid "cinematic"

Avoid using "1girl", since it pushes things toward a render/anime style

Too much description of the face will mostly turn out badly

For a more fantasy-like output, use the 2M Karras sampler

No extra noise offset is needed, but you can add one if you like 😉
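
As a concrete illustration of the tips above, here is a tiny Python sketch of a prompt pair built this way. The subject text is a made-up example; only the negative string comes from the "How to use?" section below.

```python
# Simple prompt: a plain description, no "masterpiece, best quality, 8k, ..." keywords.
prompt = "photo of a woman reading in a rainy cafe, natural light"  # hypothetical subject

# Simple negative, taken from the "How to use?" section below.
negative = "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"

# Optional: steer away from Asian features if you want a different ethnicity.
prefer_other_ethnicity = False
if prefer_other_ethnicity:
    negative = "asian, chinese, " + negative

print(prompt)
print(negative)
```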

How to use?

Prompt: a simple description of the image (try it first without extra keywords; a sketch of these settings follows below)

Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"

Steps: >20 (if the image has errors or artefacts, use more steps)

CFG Scale: 5 (a higher CFG scale can lose realism, depending on the prompt, sampler and steps)

Sampler: any sampler (SDE and DPM samplers give more realism)

Size: 512x768 or 768x512

Hires upscaler: 4x_NMKD-Superscale-SP_178000_G (Denoising: 0.35, Upscale: 2x)
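
These settings map directly onto a standard Stable Diffusion pipeline. As an illustration only (the checkpoint filename is a placeholder, and this Core ML release is meant for apps such as Mochi Diffusion rather than diffusers), a run of the original weights with the values above might look like the sketch below; the hires upscale pass with 4x_NMKD-Superscale is not shown.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder path to the original epiCRealism pureEvolution V3 checkpoint (not the Core ML files).
pipe = StableDiffusionPipeline.from_single_file(
    "epicrealism_pureEvolutionV3.safetensors", torch_dtype=torch.float16
)
# DPM++ 2M with Karras sigmas, matching the "2M Karras" / DPM sampler advice above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon

image = pipe(
    prompt="photo of a woman reading in a rainy cafe, natural light",  # simple prompt, made-up subject
    negative_prompt="cartoon, painting, illustration, "
                    "(worst quality, low quality, normal quality:2)",
    num_inference_steps=25,  # >20 steps
    guidance_scale=5.0,      # CFG scale 5
    width=512,
    height=768,              # 512x768 (or swap for 768x512)
).images[0]
image.save("epicrealism_sample.png")
```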
