mradermacher committed
Commit 6a5dcca
1 Parent(s): 032a537

auto-patch README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -5,7 +5,7 @@ language:
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterate/blob/main/LICENSE
-no_imatrix: "nan detected in blk.47.attn_q.weight"
+no_imatrix: nan detected in blk.47.attn_q.weight
 quantized_by: mradermacher
 tags:
 - code
@@ -26,7 +26,6 @@ tags:
 static quants of https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -43,8 +42,12 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
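
As a minimal sketch of the "Usage" pointer in the README above: the snippet below downloads one of the listed quants and runs it locally. It assumes the `huggingface_hub` and `llama-cpp-python` packages are installed; the Q4_K_S pick, context size, and prompt are illustrative choices, not part of this card.

```python
# Minimal sketch: fetch a quant from the table above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; Q4_K_S is the
# "fast, recommended" entry from the table, but any listed file works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF",
    filename="Qwen2.5-Coder-14B-Instruct-abliterated.Q4_K_S.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is illustrative
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```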