Quantized to i1-GGUF using SpongeQuant, the Oobabooga of LLM quantization. Chat & support at Sponge Engine.
![86. House (Africa)](https://huggingface.co/spaces/SpongeEngine/README/resolve/main/086.png)
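A minimal usage sketch with llama-cpp-python. The exact `.gguf` filename below is a placeholder (the i1-Q4_K_M name is an assumption); replace it with one of the quant files actually published in this repo, chosen to fit your hardware.

```python
# Sketch: download one quant from the repo and run a chat completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="SpongeEngine/Meta-Llama-3.1-8B-Instruct-abliterated-i1-GGUF",
    # Hypothetical filename -- substitute a .gguf file listed in the repo.
    filename="Meta-Llama-3.1-8B-Instruct-abliterated.i1-Q4_K_M.gguf",
)

# Load the GGUF model; adjust n_ctx / n_gpu_layers for your machine.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```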
Model tree for SpongeEngine/Meta-Llama-3.1-8B-Instruct-abliterated-i1-GGUF
- Base model: meta-llama/Llama-3.1-8B
- Finetuned: meta-llama/Llama-3.1-8B-Instruct