roleplaiapp committed
Commit bb78014 (verified) · 1 Parent(s): 7fd8bee

Upload README.md with huggingface_hub

Files changed (1):
1. README.md +29 -0
README.md ADDED
@@ -0,0 +1,29 @@
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- IQ3_M
- deepseek
- gguf
- iq3
- llama-cpp
- qwen25
- text-generation
---

# roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-i1-IQ3_M-GGUF

**Repo:** `roleplaiapp/Qwen2.5-14B-DeepSeek-R1-1M-i1-IQ3_M-GGUF`
**Original Model:** `Qwen2.5-14B-DeepSeek-R1-1M-i1`
**Quantized File:** `Qwen2.5-14B-DeepSeek-R1-1M.i1-IQ3_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ3_M`

## Overview

This is a GGUF IQ3_M quantized version of Qwen2.5-14B-DeepSeek-R1-1M-i1.
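
As a quick reference, the sketch below shows one way to run the quantized file locally with the `llama-cpp-python` bindings. The local file path, context size, and generation settings are illustrative assumptions, not values shipped with this repo.

```python
# Minimal sketch: run the IQ3_M GGUF with llama-cpp-python (pip install llama-cpp-python).
# The model path and settings below are assumptions; adjust them for your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen2.5-14B-DeepSeek-R1-1M.i1-IQ3_M.gguf",  # downloaded quantized file
    n_ctx=8192,        # context window; raise if you need longer prompts and have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what GGUF quantization is."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```

The same file also works with the llama.cpp command-line tools; the Python bindings are used here only to keep the example self-contained.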
## Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).