Tags: Text-to-Image · Diffusers · English · SVDQuant · FLUX.1-dev · INT4 · FLUX.1 · Diffusion · Quantization · LoRA

HyperSD LoRA for SVDQuant?

#2
by adhikjoshi - opened

HyperSD allows FLUX.1-dev to run in far fewer steps while keeping good output quality.

https://huggingface.co/ByteDance/Hyper-SD
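
For reference, this is roughly how I use the Hyper-SD FLUX LoRA with plain diffusers today (a minimal sketch; the weight file name and the 0.125 scale are taken from the Hyper-SD model card and may need adjusting):

```python
import torch
from diffusers import FluxPipeline

# Load FLUX.1-dev in BF16 and attach the Hyper-SD few-step LoRA.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "ByteDance/Hyper-SD",
    weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors",  # file name assumed from the repo
)
pipe.fuse_lora(lora_scale=0.125)  # scale recommended for the 8-step LoRA

# Few-step sampling: 8 steps instead of the usual 30-50.
image = pipe(
    "a photo of a cat",
    num_inference_steps=8,
    guidance_scale=3.5,
).images[0]
image.save("cat.png")
```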

How can I convert this LoRA to an SVDQuant-style LoRA?

Any guide?

MIT HAN Lab org

We are still cleaning up the conversion script and will release the guide soon. Do you need the Hyper-SD LoRA? If so, we can convert that one for you first.

Hi, first, thanks for your effort. I'm now trying to create the quantized SVDQuant INT4 model from a custom FLUX transformer with the Hyper-SD LoRA merged in, using the command below:

python -m deepcompressor.app.diffusion.ptq configs/model/flux.1-custom.yaml configs/svdquant/int4.yaml --save-model /root/autodl-tmp/flux.1-custom-svdquant-int4

https://github.com/mit-han-lab/deepcompressor/issues/24
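
For context, the merge step before quantization looked roughly like this (a minimal sketch; the output path and the LoRA weight file name are placeholders, and the 0.125 scale follows the Hyper-SD model card):

```python
import torch
from diffusers import FluxPipeline

# Load FLUX.1-dev in BF16 and fuse the Hyper-SD 8-step LoRA into the transformer.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "ByteDance/Hyper-SD",
    weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors",  # assumed file name
)
pipe.fuse_lora(lora_scale=0.125)
pipe.unload_lora_weights()

# Save only the fused transformer; this is the "custom FLUX transformer"
# that the flux.1-custom.yaml config points at (path is a placeholder).
pipe.transformer.save_pretrained("/root/autodl-tmp/flux.1-custom/transformer")
```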

I'm wondering how long the job would take to finish. I'm running it on an H800. @Lmxyy

We are still cleaning up the conversion script and will release the guide soon. Do you need the Hyper-SD LoRA? If so, we can convert that one for you first.

Please do provide it; it will allow high-quality images in few steps at much higher speed.

Both HyperSD and Alimama's Turbo FLUX Alpha perform very well at accelerating FLUX. In FP16 I've already switched from HyperSD to Turbo FLUX, since it produces far fewer horizontal stripes and much less noise.

We are still cleaning up the conversion script and will release the guide soon. Do you need the Hyper-SD LoRA? If so, we can convert that one for you first.

I also look forward to the early release of the LoRA conversion script.

@Lmxyy I'm here to suggest the same thing. Combining SVDQuant with Hyper-SD would be powerful.
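
Once a converted LoRA is available, I'd expect the combination to look roughly like this (a sketch based on the nunchaku examples; the exact import path and the `update_lora_params` / `set_lora_strength` calls are assumptions until the official guide is out, and the LoRA path is hypothetical):

```python
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel  # import path may differ by nunchaku version

# Load the SVDQuant INT4 FLUX.1-dev transformer and plug it into a diffusers pipeline.
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-dev")
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

# Attach a Hyper-SD LoRA converted to SVDQuant format (hypothetical path; converter pending).
transformer.update_lora_params("path/to/hyper-sd-converted-for-svdquant.safetensors")
transformer.set_lora_strength(0.125)

# Few-step sampling on top of the INT4 model.
image = pipe("a photo of a cat", num_inference_steps=8, guidance_scale=3.5).images[0]
image.save("cat_int4_hypersd.png")
```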
