Thanks for converting! Would you be willing to share the script?

#1
by thamesdrawers - opened

Thanks for converting this, I've used it as a base for LoRA training using AI Toolkit and it worked flawlessly! Would you be willing to do the same for the new Afrodite Flux model? Or to share the script you used so that others can do it?

Owner
β€’
edited Oct 6

Thanks for your feedback. I'll convert Afrodite later.

https://huggingface.co/spaces/John6666/flux-to-diffusers-test
The script I use for the conversion is available to the public. However, conversion to BF16 isn't possible in a free Space due to lack of RAM; the script has to be copied to a Zero GPU Space for that. Converting to fp8 is just barely possible in a free Space.
Also because of the RAM shortage, the missing parts are copied over from the original repo as-is. It's more of a Frankenstein machine than a complete converter.
The converted model works, but it's hard to say whether it works entirely correctly.

The most accurate way to convert is to combine the parts of the model into one file using ComfyUI or something similar, then either use from_single_file or directly call some of the functions used inside from_single_file. I gave up on this because it is likewise impossible due to lack of RAM.
If anyone is able to do this, please convert models this way whenever possible.
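For reference, a minimal sketch of that "most accurate" path, assuming you already have a merged single-file checkpoint and a diffusers version where FluxPipeline supports from_single_file; the paths and the function name here are placeholders, not the actual script:

```python
def convert_merged_checkpoint(single_file_path: str, out_dir: str) -> None:
    """Load a merged single-file Flux checkpoint and re-save it in the
    diffusers multi-folder layout.

    Note: in BF16 this needs on the order of 40 GB of RAM/VRAM, which is
    exactly why it doesn't fit in a free Space.
    """
    # Heavy imports kept inside the function so the sketch reads fine
    # even without torch/diffusers installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_single_file(
        single_file_path, torch_dtype=torch.bfloat16
    )
    pipe.save_pretrained(out_dir)
```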

By the way, here is the script I based it on.
https://github.com/huggingface/diffusers/blob/main/scripts/convert_flux_to_diffusers.py
A reverse version of it was created by an individual:
https://huggingface.co/twodgirl/flux-devpro-schnell-merge-fp8-e4m3fn-diffusers

Thank you for your explanation and for converting - it is much appreciated. I tried a variety of scripts but kept running out of RAM and thought I was doing something wrong. Do you have an estimate of how much RAM it would need to do it properly?

Owner
β€’
edited Oct 7

My script will work in fp8 with 16 GB of RAM and 50 GB of disk. Why those numbers? Because that's the spec of HF's free CPU Space... though honestly, that's a lot for free, right?

If we were to use from_single_file in BF16, we would need at least 40 GB of RAM and VRAM combined. For fp8, 20 GB would be enough.
This is probably why normal scripts won't work.
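As a rough sanity check on those numbers, here is a back-of-envelope calculation. The parameter counts below are approximate public figures for FLUX.1-dev's components and should be treated as assumptions, not exact values:

```python
# Approximate parameter counts for FLUX.1-dev's components (assumed).
PARAMS = {
    "transformer": 12.0e9,
    "t5_xxl_text_encoder": 4.7e9,
    "clip_text_encoder": 0.12e9,
    "vae": 0.08e9,
}

def weights_gb(bytes_per_param: float) -> float:
    """Raw weight size in GiB at the given precision (2 = BF16, 1 = fp8)."""
    return sum(PARAMS.values()) * bytes_per_param / 1024**3

print(f"BF16 weights: ~{weights_gb(2):.0f} GiB")  # prints "BF16 weights: ~31 GiB"
print(f"fp8 weights:  ~{weights_gb(1):.0f} GiB")  # prints "fp8 weights:  ~16 GiB"
```

Add working copies and conversion overhead on top of the raw weights and the ~40 GB total for BF16 (and ~20 GB for fp8) is plausible.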

The only reason my scripts barely work is that I chop the model up and process it bit by bit.
Other people's scripts would probably also work at fp8 in a similar environment if they processed the model that way.

For reference, the Zero GPU Space apparently has about 80 GB of RAM, even without the GPU available. The disk space is also quite large; it's almost impossible to fill it during a short run, so no tricks are needed.
However, due to the limit of 10 Spaces per person, I switch between them often and can't keep them all running.
I wouldn't mind adding $10/month to what I spend on my hobbies; I wish there were an expansion plan, but there isn't...
