---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 3-bit
- Q3_K_M
- deepseek
- gguf
- llama-cpp
- qwen25
- text-generation
- uncensored
---

# roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q3_K_M-GGUF

**Repo:** `roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q3_K_M-GGUF`
**Original Model:** `Qwen2.5-14B-DeepSeek-R1-1M-Uncensored`
**Quantized File:** `Qwen2.5-14B-DeepSeek-R1-1M-Uncensored.Q3_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_M`

## Overview
This is a GGUF Q3_K_M quantized version of Qwen2.5-14B-DeepSeek-R1-1M-Uncensored.

## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
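
## Usage
Since this is a standard GGUF file, it can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the local file path, context size, and GPU offload values are assumptions to adjust for your setup, not settings confirmed by this repo.

```python
# Minimal sketch, assuming the GGUF file has already been downloaded locally
# (e.g. via `huggingface-cli download roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored-Q3_K_M-GGUF`).
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-14B-DeepSeek-R1-1M-Uncensored.Q3_K_M.gguf",  # quantized file from this repo
    n_ctx=8192,       # assumed context window; raise or lower to fit available memory
    n_gpu_layers=-1,  # offload all layers if llama-cpp-python was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```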