FP16 (safetensors) & Q8_0 (gguf) example PNG (anime girl) comparison (with ComfyUI workflow!)

#36
by fireYtail - opened

This is the original Flux Dev example PNG of the anime girl, alongside a comparison image generated with "flux1-dev-Q8_0.gguf" and "t5xxl_fp8_e4m3fn.safetensors", each with its respective ComfyUI workflow embedded. Just drag and drop the PNG into the ComfyUI browser tab, or use the LOAD button and select the PNG file from your downloads folder. The FP16 version is the original, as-is. The Q8_0 version took 468.58 seconds on my NVIDIA GTX 1080 Ti.
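As a side note, drag-and-drop works because ComfyUI stores the workflow JSON inside the PNG itself, as tEXt metadata chunks (under keys like "workflow" and "prompt"). Here is a stdlib-only sketch of pulling those chunks back out; the demo PNG and its `{"nodes": []}` payload are made up for illustration:

```python
# Sketch: extract tEXt metadata chunks (where ComfyUI embeds the workflow
# JSON) from a PNG. The demo image built below is a fake 1x1 PNG carrying a
# made-up "workflow" entry; real ComfyUI output PNGs follow the same layout.
import struct
import zlib

def read_png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return chunks

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC-32."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal in-memory PNG carrying a fake workflow, then read it back.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1 grayscale, 8-bit
idat = zlib.compress(b"\x00\x00")                    # filter byte + one pixel
demo = (b"\x89PNG\r\n\x1a\n"
        + _chunk(b"IHDR", ihdr)
        + _chunk(b"tEXt", b"workflow\x00{\"nodes\": []}")
        + _chunk(b"IDAT", idat)
        + _chunk(b"IEND", b""))

print(read_png_text_chunks(demo)["workflow"])  # -> {"nodes": []}
```

Loading the PNG through ComfyUI's LOAD button does this parsing for you; the sketch just shows there is nothing magic about the file.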

Source: https://github.com/comfyanonymous/ComfyUI_examples/blob/master/flux/README.md

Model: flux1-dev.safetensors (FP16 version) __ flux1-dev-Q8_0.gguf (Q8_0 version)
Clip 1: t5xxl_fp16.safetensors (FP16 version) __ t5xxl_fp8_e4m3fn.safetensors (Q8_0 version)
Clip 2: clip_l.safetensors
VAE: ae.safetensors
Prompt: cute anime girl with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress and a white apron mouth open holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere
Noise seed: 219670278747233
Resolution: 1024x1024
Steps: 20
Sampler: Euler
Denoise: 1.00
Guidance conditioning: 3.5
base_shift: 0.50
max_shift: 1.15

FP16 version:

flux_dev_example_fp16_safetensors.png

Q8_0 version:

flux_dev_example_q8_gguf_t5xxl_fp8_e4m3fn.png

can you make the results of all versions Q2 - fp16😁

Yes, I can.
Have a nice day.

I'm waiting for the results 😁

You asked if I could do it. I can: if I spend 65.82 GB of my internet data waiting for all the model files to download, then generate all the images one by one. That is what you asked.

But I never said I was going to do it for you. I'm not going to download 65.82 GB and waste my data plan. Do it yourself. You have the workflow right above your message; it's embedded in the image's data. Just load that image's workflow into ComfyUI and run it with each model version.

If you expect other people to do all the work for you, you have a long and difficult road ahead of you in life.
Have a nice day.

ok, I will do it myself

It keeps giving me super blurry images. Any ideas?

Flux can produce professional-photography image quality, but if you use too few steps the results will be bad or even blurry. Schnell needs 4 steps; Dev needs 20+. Make sure you use enough steps and that nothing is wrong in your ComfyUI workflow. Try the example above and see if it's blurry too.

Using too few steps will produce blurry, nonsensical images on any other image generation model that runs on consumer hardware, such as Stable Diffusion. This is intrinsic to how the generation works: it begins with random noise, then transforms it step by step until it matches the prompt. Too few steps means not enough transformation. For some models, adding steps beyond a certain point changes little or nothing visible to the human eye. For others it's worse, making the image change completely or even become nonsensical (extra or missing body parts, etc.). But this doesn't mean you should keep steps as low as possible. Finding the balance is key.
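To make the step count intuition concrete: the "Euler" sampler in the workflow above is literally an Euler ODE integrator, and the step count sets its grid size. A toy stdlib sketch, using dx/dt = -x as a stand-in for the real diffusion ODE (which is far more complex), shows how coarser steps accumulate more error:

```python
# Toy illustration, NOT Flux's actual sampler: Euler integration of the
# simple ODE dx/dt = -x from t=0 to t=1. Fewer steps -> larger step size
# -> larger discretization error, analogous to the degraded images you get
# from running a diffusion sampler with too few steps.
import math

def euler_error(steps: int) -> float:
    """Absolute error vs. the exact solution exp(-1) after `steps` updates."""
    x, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        x += dt * (-x)  # one Euler step; a bigger dt overshoots more
    return abs(x - math.exp(-1.0))

for steps in (4, 20, 100):
    print(steps, euler_error(steps))  # error shrinks as steps grow
```

The analogy is loose (diffusion samplers also inject learned noise predictions at each step), but the trend is the same: 4 steps is far coarser than 20, which is why Dev wants 20+.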

Are you using the Q8_0 GGUF? Because if you have something like a Q4, maybe that's why it's blurry: the model lost too much precision.
