fp8 inference · 1 comment · #26 opened about 2 months ago by Melody32768
wrong model · #25 opened about 2 months ago by sunhaha123
Update README.md · #24 opened about 2 months ago by WBD8
Unet? · #22 opened 2 months ago by aiRabbit0
quite slow to load the fp8 model · 11 comments · #21 opened 2 months ago by gpt3eth
How to load into VRAM? · 2 comments · #19 opened 2 months ago by MicahV
'float8_e4m3fn' attribute error · 5 comments · #17 opened 2 months ago by Magenta6
Loading flux-fp8 with diffusers · 1 comment · #16 opened 2 months ago by 8au
Quantization Method? · 7 comments · #7 opened 2 months ago by vyralsurfer
ComfyUI Workflow · 1 comment · #6 opened 2 months ago by Jebari
Diffusers? · 19 comments · #4 opened 2 months ago by tintwotin
FP16 · 1 comment · #2 opened 2 months ago by bsbsbsbs112321
Metadata lost from model · 4 comments · #1 opened 2 months ago by mcmonkey