roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Q3_K_M-GGUF

Repo: roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Q3_K_M-GGUF
Original Model: DeepSeek-R1-Distill-Qwen-14B
Organization: deepseek-ai
Quantized File: deepseek-r1-distill-qwen-14b-q3_k_m.gguf
Quantization: GGUF
Quantization Method: Q3_K_M
Use Imatrix: False
Split Model: False

Overview

This is a GGUF Q3_K_M quantized version of DeepSeek-R1-Distill-Qwen-14B.
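
A minimal usage sketch, assuming llama-cpp-python with huggingface_hub installed (`pip install llama-cpp-python huggingface_hub`); any GGUF-compatible runtime such as llama.cpp will also load this file. The context size and token limit below are illustrative values, not recommendations.

```python
from llama_cpp import Llama

# Download the quantized file from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Q3_K_M-GGUF",
    filename="deepseek-r1-distill-qwen-14b-q3_k_m.gguf",
    n_ctx=4096,  # context window; adjust to your available RAM/VRAM
)

# Simple chat-style generation.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```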

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model Size: 14.8B params
Architecture: qwen2
Precision: 3-bit (Q3_K_M)
