Melvin56/GLM-4-9B-0414-abliterated-GGUF

Original model: huihui-ai/GLM-4-9B-0414-abliterated

llama.cpp build: 1d735c0b (5165)

I created all of these quants with an importance matrix (imatrix) computed from this dataset.
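
For reference, imatrix quants are typically produced with llama.cpp's llama-imatrix and llama-quantize tools. Here is a minimal sketch of that workflow; the file names are hypothetical placeholders, not the exact commands used for this repo:

```shell
# Sketch of a typical imatrix quantization workflow (hypothetical file names).
# 1. Compute an importance matrix from a calibration dataset.
./llama-imatrix -m GLM-4-9B-0414-abliterated-F16.gguf \
    -f calibration_dataset.txt \
    -o imatrix.dat

# 2. Quantize, letting the importance matrix guide which weights
#    get higher precision.
./llama-quantize --imatrix imatrix.dat \
    GLM-4-9B-0414-abliterated-F16.gguf \
    GLM-4-9B-0414-abliterated-Q4_K_M.gguf Q4_K_M
```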

With llama.cpp (Ollama and LM Studio were not tested), you'll need to add these specific flags:

--override-kv glm4.rope.dimension_count=int:64 \
--override-kv tokenizer.ggml.eos_token_id=int:151336 \
--chat-template chatglm4
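
For example, a full llama-cli invocation could look like the following; this is a minimal sketch, where the GGUF file name, GPU offload count (-ngl), and conversation mode flag (-cnv) are placeholders to adapt to your setup:

```shell
# Hypothetical invocation: substitute the quant file you actually downloaded.
./llama-cli -m GLM-4-9B-0414-abliterated.Q4_K_M.gguf \
    --override-kv glm4.rope.dimension_count=int:64 \
    --override-kv tokenizer.ggml.eos_token_id=int:151336 \
    --chat-template chatglm4 \
    -ngl 99 \
    -cnv
```
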
|          | CPU (AVX2) | CPU (ARM NEON) | Metal | cuBLAS | rocBLAS | SYCL | CLBlast | Vulkan | Kompute |
|----------|------------|----------------|-------|--------|---------|------|---------|--------|---------|
| K-quants | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ 🐢⁵ | ✅ 🐢⁵ | 🚫 |
| I-quants | ✅ 🐢⁴ | ✅ 🐢⁴ | ✅ 🐢⁴ | ✅ | ✅ | Partial¹ | 🚫 | 🚫 | 🚫 |
✅: feature works
🚫: feature does not work
❓: unknown, please contribute if you can test it yourself
🐢: feature is slow
¹: IQ3_S and IQ1_S, see #5886
²: Only with -ngl 0
³: Inference is 50% slower
⁴: Slower than K-quants of comparable size
⁵: Slower than cuBLAS/rocBLAS on similar cards
⁶: Only q8_0 and iq4_nl
Format: GGUF
Model size: 9.4B params
Architecture: glm4

Available quants: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
