Update README.md

This repo contains GGUF format model files for [Felladrin/Llama-68M-Chat-v1](https://huggingface.co/Felladrin/Llama-68M-Chat-v1).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
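
If you build llama.cpp yourself, a minimal sketch for pinning a build to that commit (assuming a standard CMake toolchain; backend flags such as -DGGML_CUDA=ON are optional extras):

```
# Clone llama.cpp and check out the compatible commit (b4011).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d

# Build the CLI tools; add backend flags (e.g. -DGGML_CUDA=ON) as needed.
cmake -B build
cmake --build build --config Release
```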

<div style="text-align: left; margin: 20px 0;">
    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
        Run them on the TensorBlock client using your local machine ↗
    </a>
</div>

## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
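
Once you have a quantized file locally, a quick smoke test with llama.cpp's llama-cli might look like this (a sketch; the model path is a placeholder, and -e turns the \n escapes into real newlines):

```
# Run a short ChatML-formatted generation against a downloaded quant.
./build/bin/llama-cli -m Llama-68M-Chat-v1-Q4_K_M.gguf -n 128 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is GGUF?<|im_end|>\n<|im_start|>assistant\n"
```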
## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-68M-Chat-v1-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q2_K.gguf) | Q2_K | 0.033 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-68M-Chat-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q3_K_S.gguf) | Q3_K_S | 0.037 GB | very small, high quality loss |
| [Llama-68M-Chat-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q3_K_M.gguf) | Q3_K_M | 0.038 GB | very small, high quality loss |
| [Llama-68M-Chat-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q3_K_L.gguf) | Q3_K_L | 0.039 GB | small, substantial quality loss |
| [Llama-68M-Chat-v1-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q4_0.gguf) | Q4_0 | 0.042 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-68M-Chat-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q4_K_S.gguf) | Q4_K_S | 0.042 GB | small, greater quality loss |
| [Llama-68M-Chat-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q4_K_M.gguf) | Q4_K_M | 0.043 GB | medium, balanced quality - recommended |
| [Llama-68M-Chat-v1-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q5_0.gguf) | Q5_0 | 0.047 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-68M-Chat-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q5_K_S.gguf) | Q5_K_S | 0.047 GB | large, low quality loss - recommended |
| [Llama-68M-Chat-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q5_K_M.gguf) | Q5_K_M | 0.048 GB | large, very low quality loss - recommended |
| [Llama-68M-Chat-v1-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q6_K.gguf) | Q6_K | 0.053 GB | very large, extremely low quality loss |
| [Llama-68M-Chat-v1-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-68M-Chat-v1-GGUF/blob/main/Llama-68M-Chat-v1-Q8_0.gguf) | Q8_0 | 0.068 GB | very large, extremely low quality loss - not recommended |

## Downloading instruction
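
For example, a minimal sketch using the Hugging Face CLI (assuming Python and pip are available; MY_LOCAL_DIR is a placeholder directory):

```
# Install the Hugging Face command-line client.
pip install -U "huggingface_hub[cli]"

# Download a single quant file from this repo into a local directory.
huggingface-cli download tensorblock/Llama-68M-Chat-v1-GGUF \
  --include "Llama-68M-Chat-v1-Q4_K_M.gguf" --local-dir MY_LOCAL_DIR
```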