---
library_name: transformers
pipeline_tag: text-generation
tags:
- 70b
- 8-bit
- Q8_0
- deepseek
- distill
- gguf
- llama
- llama-cpp
- text-generation
- uncensored
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q8_0-GGUF

- **Repo:** roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q8_0-GGUF
- **Original Model:** DeepSeek-R1-Distill-Llama-70B-Uncensored-v2
- **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q8_0.gguf.part1of2` (split into parts; see the download sketch below)
- **Quantization:** GGUF
- **Quantization Method:** Q8_0
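Since the quantized file is split into parts, it needs to be reassembled into a single `.gguf` before loading. Here is a minimal sketch that downloads the parts with `huggingface_hub` and concatenates them; it assumes the usual raw-byte `.partNof2` split convention, and the `part2of2` filename is an assumption inferred from `part1of2`.

```python
# Sketch: download the split GGUF parts and merge them into one file.
# Assumes a simple byte-level split (.part1of2 + .part2of2 -> .gguf).
import shutil
from huggingface_hub import hf_hub_download

repo_id = "roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q8_0-GGUF"
base = "DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q8_0.gguf"

# Download each part from the Hub (cached locally).
# The part2of2 name is assumed from the part1of2 naming scheme.
parts = [
    hf_hub_download(repo_id=repo_id, filename=f"{base}.part{i}of2")
    for i in (1, 2)
]

# Concatenate the raw parts back into a single .gguf file.
with open(base, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```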

## Overview

This is a GGUF Q8_0 quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2. A hedged loading sketch follows below.
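Once merged, the file can be loaded with any GGUF-compatible runtime. A minimal sketch using llama-cpp-python is shown below; `n_ctx` and `n_gpu_layers` are illustrative values, not settings recommended by this repo.

```python
# Sketch: load the merged Q8_0 GGUF with llama-cpp-python and run a chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q8_0.gguf",
    n_ctx=4096,       # context window; illustrative value
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Note that a 70B model at Q8_0 requires substantial memory; with insufficient VRAM, lower `n_gpu_layers` to offload only part of the model.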

## Quantization By

I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.