DeepSeek-R1-Distill-Qwen-1.5B

This repository contains quantized GGUF versions of the original model, deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.

| Name | Quantization Method | Size (GB) |
|------|---------------------|-----------|
| deepseek-r1-distill-qwen-1.5b.Q2_K.gguf | q2_k | 0.70 |
| deepseek-r1-distill-qwen-1.5b.Q3_K_S.gguf | q3_k_s | 0.80 |
| deepseek-r1-distill-qwen-1.5b.Q3_K_M.gguf | q3_k_m | 0.86 |
| deepseek-r1-distill-qwen-1.5b.Q3_K_L.gguf | q3_k_l | 0.91 |
| deepseek-r1-distill-qwen-1.5b.Q4_0.gguf | q4_0 | 0.99 |
| deepseek-r1-distill-qwen-1.5b.Q4_K_S.gguf | q4_k_s | 1.00 |
| deepseek-r1-distill-qwen-1.5b.Q4_K_M.gguf | q4_k_m | 1.04 |
| deepseek-r1-distill-qwen-1.5b.Q5_0.gguf | q5_0 | 1.17 |
| deepseek-r1-distill-qwen-1.5b.Q5_K_S.gguf | q5_k_s | 1.17 |
| deepseek-r1-distill-qwen-1.5b.Q5_K_M.gguf | q5_k_m | 1.20 |
| deepseek-r1-distill-qwen-1.5b.Q6_K.gguf | q6_k | 1.36 |
| deepseek-r1-distill-qwen-1.5b.Q8_0.gguf | q8_0 | 1.76 |
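The file sizes above translate directly into effective bits per weight. A minimal sketch of that arithmetic, assuming decimal gigabytes (1 GB = 1e9 bytes) and the 1.78B parameter count reported for this model; actual bits per weight are slightly lower than the nominal quant level suggests because each file also carries metadata and some tensors kept at higher precision:

```python
# Effective bits per weight for each GGUF file, from the table above.
# Assumes 1 GB = 1e9 bytes and 1.78e9 parameters (values from this card).
PARAMS = 1.78e9

sizes_gb = {
    "q2_k": 0.70, "q3_k_s": 0.80, "q3_k_m": 0.86, "q3_k_l": 0.91,
    "q4_0": 0.99, "q4_k_s": 1.00, "q4_k_m": 1.04, "q5_0": 1.17,
    "q5_k_s": 1.17, "q5_k_m": 1.20, "q6_k": 1.36, "q8_0": 1.76,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Total file size in bits divided by parameter count."""
    return size_gb * 8e9 / params

for name, gb in sizes_gb.items():
    print(f"{name:8s} ~{bits_per_weight(gb):.2f} bits/weight")
```

For example, the q4_k_m file works out to roughly 4.7 bits per weight, and q8_0 to roughly 7.9, which matches the usual rule of thumb that GGUF quant names indicate approximate bit width.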
Downloads last month: 292
Format: GGUF
Model size: 1.78B params
Architecture: qwen2

The Inference API (serverless) has been turned off for this model.
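With the hosted API unavailable, the files can still be run locally. A minimal sketch using llama.cpp's CLI, assuming this repo is hosted on Hugging Face as pbatra/DeepSeek-R1-Distill-Qwen-1.5B-GGUF (the name given below) and that `huggingface-cli` and `llama-cli` are installed:

```shell
# Download one quantized file (Q4_K_M is a common size/quality trade-off).
huggingface-cli download pbatra/DeepSeek-R1-Distill-Qwen-1.5B-GGUF \
  deepseek-r1-distill-qwen-1.5b.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp.
llama-cli -m deepseek-r1-distill-qwen-1.5b.Q4_K_M.gguf -cnv
```

Any of the other quant files in the table above can be substituted; smaller quants trade output quality for lower memory use.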

Model tree for pbatra/DeepSeek-R1-Distill-Qwen-1.5B-GGUF: quantized from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.