morriszms committed
Commit ee2aa85
Parent: 87d9f51

Update README.md

Files changed (1): README.md (+20 -12)
README.md CHANGED
@@ -23,8 +23,16 @@ This repo contains GGUF format model files for [CausalLM/34b-beta](https://huggi
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+Run them on the TensorBlock client using your local machine ↗
+</a>
+</div>
+
 ## Prompt template
 
+
 ```
 <|im_start|>system
 {system_prompt}<|im_end|>
@@ -37,18 +45,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [34b-beta-Q2_K.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q2_K.gguf) | Q2_K | 11.944 GB | smallest, significant quality loss - not recommended for most purposes |
-| [34b-beta-Q3_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q3_K_S.gguf) | Q3_K_S | 13.933 GB | very small, high quality loss |
-| [34b-beta-Q3_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q3_K_M.gguf) | Q3_K_M | 15.511 GB | very small, high quality loss |
-| [34b-beta-Q3_K_L.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q3_K_L.gguf) | Q3_K_L | 16.894 GB | small, substantial quality loss |
-| [34b-beta-Q4_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q4_0.gguf) | Q4_0 | 18.130 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [34b-beta-Q4_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q4_K_S.gguf) | Q4_K_S | 18.253 GB | small, greater quality loss |
-| [34b-beta-Q4_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q4_K_M.gguf) | Q4_K_M | 19.240 GB | medium, balanced quality - recommended |
-| [34b-beta-Q5_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q5_0.gguf) | Q5_0 | 22.080 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [34b-beta-Q5_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q5_K_S.gguf) | Q5_K_S | 22.080 GB | large, low quality loss - recommended |
-| [34b-beta-Q5_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q5_K_M.gguf) | Q5_K_M | 22.651 GB | large, very low quality loss - recommended |
-| [34b-beta-Q6_K.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q6_K.gguf) | Q6_K | 26.276 GB | very large, extremely low quality loss |
-| [34b-beta-Q8_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/tree/main/34b-beta-Q8_0.gguf) | Q8_0 | 34.033 GB | very large, extremely low quality loss - not recommended |
+| [34b-beta-Q2_K.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q2_K.gguf) | Q2_K | 11.944 GB | smallest, significant quality loss - not recommended for most purposes |
+| [34b-beta-Q3_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q3_K_S.gguf) | Q3_K_S | 13.933 GB | very small, high quality loss |
+| [34b-beta-Q3_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q3_K_M.gguf) | Q3_K_M | 15.511 GB | very small, high quality loss |
+| [34b-beta-Q3_K_L.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q3_K_L.gguf) | Q3_K_L | 16.894 GB | small, substantial quality loss |
+| [34b-beta-Q4_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q4_0.gguf) | Q4_0 | 18.130 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [34b-beta-Q4_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q4_K_S.gguf) | Q4_K_S | 18.253 GB | small, greater quality loss |
+| [34b-beta-Q4_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q4_K_M.gguf) | Q4_K_M | 19.240 GB | medium, balanced quality - recommended |
+| [34b-beta-Q5_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q5_0.gguf) | Q5_0 | 22.080 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [34b-beta-Q5_K_S.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q5_K_S.gguf) | Q5_K_S | 22.080 GB | large, low quality loss - recommended |
+| [34b-beta-Q5_K_M.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q5_K_M.gguf) | Q5_K_M | 22.651 GB | large, very low quality loss - recommended |
+| [34b-beta-Q6_K.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q6_K.gguf) | Q6_K | 26.276 GB | very large, extremely low quality loss |
+| [34b-beta-Q8_0.gguf](https://huggingface.co/tensorblock/34b-beta-GGUF/blob/main/34b-beta-Q8_0.gguf) | Q8_0 | 34.033 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
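
For context on the template this commit touches: it is ChatML-style, and the visible hunk shows only the system turn. A minimal Python sketch of filling it in; the user/assistant continuation and the `build_prompt` helper are assumptions following the usual ChatML layout, not text from the card:

```python
# Minimal sketch of the card's ChatML-style template.
# Only the system turn is visible in the diff; the user/assistant
# continuation below is assumed from the usual ChatML pattern.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Summarize GGUF in one line."))
```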
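The table's per-file links were switched from tree/ to blob/ URLs, which resolve to individual files. For a programmatic download, huggingface_hub's `hf_hub_download` can fetch a single quant; a sketch assuming the Q4_K_M file the table marks as recommended (this is not the card's own "Downloading instruction" section):

```python
# Sketch: download one quant file from the repo with huggingface_hub.
# The Q4_K_M filename is taken from the table above; any listed file works.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="tensorblock/34b-beta-GGUF",
    filename="34b-beta-Q4_K_M.gguf",
)
print(f"Downloaded to: {model_path}")
```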