Please make a heavily quantized version like you did with R1

#1
by Blazgo - opened

Do the same thing you did to make that 1.58-bit R1:
https://unsloth.ai/blog/deepseekr1-dynamic
Or, if it doesn't need big GPUs, tell me how to do it.

Blazgo changed discussion status to closed
Unsloth AI org

We did! It's here: https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF
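If you have a recent llama.cpp build, you can pull and run a quant straight from that repo. This is a sketch, not an official command from the thread: the `:UD-IQ1_S` quant tag is an assumption, so check the repo's file list for the exact quant names available.

```shell
# Sketch: run a dynamic quant directly from the Hugging Face repo.
# Assumes llama.cpp's llama-cli is installed and on PATH, and that the
# repo ships a UD-IQ1_S quant (verify the tag against the repo's files).
llama-cli -hf unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF:UD-IQ1_S \
  -p "Hello" -n 64
```

Note that even at ~1.58 bits this model is very large, so expect a long download and substantial RAM/VRAM use.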
