---
license: apache-2.0
---
# FLUX schnell Quantized Models

This repo contains quantized versions of the FLUX schnell transformer for use in InvokeAI.
Contents:

- `transformer/base/` - Transformer in bfloat16, copied from here
- `transformer/bnb_nf4/` - Transformer quantized to bitsandbytes NF4 format using this script
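For reference, NF4 (4-bit NormalFloat, introduced in the QLoRA paper and implemented in bitsandbytes) quantizes weights block-wise: each block is scaled by its absolute maximum, then every value is mapped to the nearest entry in a fixed 16-value codebook. The sketch below is a minimal NumPy illustration of that idea, not the bitsandbytes implementation; the `block_size` and helper names are illustrative.

```python
import numpy as np

# The 16 NF4 code values from the QLoRA paper (quantiles of a standard
# normal, rescaled to [-1, 1]), listed here for illustration.
NF4_CODES = np.array([
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
])

def quantize_nf4(weights: np.ndarray, block_size: int = 64):
    """Block-wise NF4 quantization: scale each block by its absmax,
    then store the index of the nearest NF4 code for each weight."""
    flat = weights.reshape(-1, block_size)
    absmax = np.abs(flat).max(axis=1, keepdims=True)
    normalized = flat / absmax
    # Nearest-code lookup; each weight becomes a 4-bit index.
    idx = np.abs(normalized[..., None] - NF4_CODES).argmin(axis=-1)
    return idx.astype(np.uint8), absmax

def dequantize_nf4(idx, absmax, shape):
    """Reverse the mapping: code value times per-block scale."""
    return (NF4_CODES[idx] * absmax).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
idx, absmax = quantize_nf4(w)
w_hat = dequantize_nf4(idx, absmax, w.shape)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

In the real format each index occupies 4 bits (two per byte) and the per-block scales are stored alongside, which is where the roughly 4x memory saving over bfloat16 comes from.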