gguf quantized version of mochi (incl. gguf encoder and gguf vae)🐷🐷🐷
- drag mochi to >
./ComfyUI/models/diffusion_models
- drag t5xxl to >
./ComfyUI/models/text_encoders
- drag vae to >
./ComfyUI/models/vae
- drag demo video (below) to > your browser to load the workflow (a scripted download alternative is sketched right after this list)
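
If you prefer scripting the downloads over dragging files by hand, here is a minimal sketch using `huggingface_hub`; the gguf filenames below are placeholders, so check the repo's file list and substitute the quant you actually want.

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Placeholder filenames -- check the file list on the repo page and
# swap in the exact gguf names / quant level you want.
files = {
    "mochi-q8_0.gguf": "ComfyUI/models/diffusion_models",   # video model
    "t5xxl-q8_0.gguf": "ComfyUI/models/text_encoders",      # t5xxl text encoder
    "mochi-vae-fp32.gguf": "ComfyUI/models/vae",            # vae decoder
}

for filename, target_dir in files.items():
    path = hf_hub_download(
        repo_id="calcuis/mochi-gguf",
        filename=filename,
        local_dir=target_dir,
    )
    print(f"saved {filename} -> {path}")
```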

- Prompt
- a pinky pig moving quickly in a beautiful winter scenery nature trees sunset tracking camera

review
- new tensor-fixed version; loads faster with the full gguf set (model + encoder + decoder)
- upgraded encoder from fp16/fp8 to fp32; file size and memory consumption are unaffected; more compatible with older machines
- new fp32 gguf vae decoder; similar size to the fp16 safetensors, with better quality and a lower ram requirement
- q2 works but the output is not really usable; you could get q8 [here] (pig architecture); a rough quant-size sketch follows below
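
For context on those quant levels, here is a back-of-the-envelope size estimate: file size scales roughly with parameter count times bits per weight. The ~10B parameter figure for the mochi transformer and the bits-per-weight values are approximations based on common gguf block layouts, not numbers taken from this repo.

```python
# Rough size estimate per quant level: size ≈ params * bits_per_weight / 8.
# Parameter count and bits-per-weight values are approximations.
PARAMS = 10e9  # mochi-1 transformer is on the order of 10B parameters

bits_per_weight = {
    "fp16": 16.0,     # unquantized half precision
    "q8_0": 8.5,      # 32-weight blocks: 32 int8 values + one fp16 scale
    "q4_0": 4.5,      # 32-weight blocks: 16 bytes of nibbles + one fp16 scale
    "q2_k": 2.5625,   # k-quant 2-bit layout
}

for name, bpw in bits_per_weight.items():
    gib = PARAMS * bpw / 8 / 2**30
    print(f"{name:>5}: ~{gib:.1f} GiB")
```

The numbers make the trade-off concrete: q2 is several times smaller than q8, which is why it loads on modest hardware but loses too much quality to be practical.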
reference
- base model: genmo/mochi-1-preview
- repackaged: Comfy-Org/mochi_preview_repackaged