roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q8_0-GGUF

- Repo: roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q8_0-GGUF
- Original Model: deepseek-r1-qwen-2.5-32B-ablated
- Quantized File: deepseek-r1-qwen-2.5-32B-ablated-Q8_0.gguf
- Quantization: GGUF
- Quantization Method: Q8_0

Overview

This is a GGUF Q8_0 (8-bit) quantized version of deepseek-r1-qwen-2.5-32B-ablated, packaged for use with llama.cpp-compatible runtimes.
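
As a minimal sketch of how the file can be downloaded and run, assuming llama-cpp-python as the runtime (any llama.cpp-compatible loader works); the context size and GPU-offload settings below are illustrative, not recommendations from this repo:

```python
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q8_0 GGUF file from this repo.
model_path = hf_hub_download(
    repo_id="roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q8_0-GGUF",
    filename="deepseek-r1-qwen-2.5-32B-ablated-Q8_0.gguf",
)

# Load the model; n_ctx and n_gpu_layers are example values to tune
# for your hardware (-1 offloads all layers to GPU if available).
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```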

Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model Details

- Format: GGUF
- Model size: 32.8B params
- Architecture: qwen2
- Precision: 8-bit (Q8_0)
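
As a rough back-of-the-envelope sizing check (assuming Q8_0's typical ~8.5 bits per weight, since it stores 8-bit values plus a per-block scale; actual file and runtime memory will differ):

```python
# Approximate weight storage for a 32.8B-parameter Q8_0 GGUF.
# Assumption: ~8.5 bits/weight; excludes KV cache and runtime overhead.
params = 32.8e9
bits_per_weight = 8.5
approx_gb = params * bits_per_weight / 8 / 1e9
print(f"~{approx_gb:.1f} GB")  # roughly ~35 GB of weights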

Inference Providers

This model is not currently available through any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.