gguf quantized version of mochi (incl. gguf encoder and gguf vae)🐷🐷🐷

  • drag mochi to > ./ComfyUI/models/diffusion_models
  • drag t5xxl to > ./ComfyUI/models/text_encoders
  • drag vae to > ./ComfyUI/models/vae
  • drag demo video (below) to > your browser to load the workflow
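After dropping the files into place, you can sanity-check that each download is a valid gguf file by reading its header with the stdlib only. This is a minimal sketch; the path below is a placeholder for whichever file you downloaded.

```python
# Minimal sketch: verify a file is gguf by checking its header
# (magic "GGUF", then version, tensor count, and metadata kv count).
import struct

def check_gguf_header(path):
    """Return (version, tensor_count, kv_count) if the file looks like gguf."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a gguf file: magic={magic!r}")
        version, = struct.unpack("<I", f.read(4))        # format version (uint32)
        tensor_count, = struct.unpack("<Q", f.read(8))   # number of tensors (uint64)
        kv_count, = struct.unpack("<Q", f.read(8))       # metadata key/value pairs (uint64)
    return version, tensor_count, kv_count

# example (placeholder path):
# check_gguf_header("./ComfyUI/models/diffusion_models/mochi-q8_0.gguf")
```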
Prompt
a pinky pig moving quickly in a beautiful winter scenery nature trees sunset tracking camera

review

  • new tensor-fixed version; loads faster with the full gguf set (model + encoder + decoder)
  • upgraded encoder from fp16/fp8 to fp32; file size and memory consumption are unaffected; more compatible with older machines
  • new fp32 gguf vae decoder; similar in size to the fp16 safetensors, with better quality and a lower ram requirement
  • q2 works but the output quality is not really usable; you could get q8 [here] (pig architecture)
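A rough sense of why q2 is so much smaller (and lossier) than q8: file size scales roughly with bits per weight. This back-of-the-envelope sketch for a 10B-parameter model ignores metadata and mixed-precision overhead, so real files will differ somewhat.

```python
# Approximate gguf file size: parameters * bits per weight / 8 bytes,
# reported in GB (decimal). Real quants mix block scales in, so treat
# these as ballpark figures only.
def approx_size_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

for bits in (2, 4, 8, 16):
    print(f"q{bits}-ish: ~{approx_size_gb(10e9, bits):.1f} GB")
```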


Downloads last month: 696
Model size: 10B params
Architecture: mochi
Available gguf quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
