i1-GGUF quantization of redrix/patricide-12B-Unslop-Mell, made with SpongeQuant, the Oobabooga of LLM quantization. Chat & support at Sponge Engine.
![102. Rush hour traffic, India](https://huggingface.co/spaces/SpongeEngine/README/resolve/main/102.png)
Model tree for SpongeEngine/patricide-12B-Unslop-Mell-i1-GGUF:
- Base model: redrix/patricide-12B-Unslop-Mell