chienweichang committed
Commit 0c4d4e6
1 parent: deb1165

Update README.md

Files changed (1): README.md (+7 −6)
@@ -1,3 +1,4 @@
+---
 language:
 - zh
 - en
@@ -16,12 +17,12 @@ This repo contains GGUF format model files for [yentinglin/Llama-3-Taiwan-8B-Ins
 ## Provided files
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ---- |
-| [llama-3-taiwan-8b-instruct-dpo-q5_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality |
-| [llama-3-taiwan-8b-instruct-dpo-q5_1.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_1.gguf) | Q5_1 | 5 | 6.07 GB| large, low quality loss |
-| [llama-3-taiwan-8b-instruct-dpo-q5_k_s.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_k_s.gguf) | Q5_K_S | 5 | 5.6 GB| large, very low quality loss |
-| [llama-3-taiwan-8b-instruct-dpo-q5_k_m.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_k_m.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss |
-| [llama-3-taiwan-8b-instruct-dpo-q6_k.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q6_k.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss |
-| [llama-3-taiwan-8b-instruct-dpo-q8_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss |
+| [llama-3-taiwan-8b-instruct-dpo-q5_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality |
+| [llama-3-taiwan-8b-instruct-dpo-q5_1.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_1.gguf) | Q5_1 | 5 | 6.07 GB| large, low quality loss |
+| [llama-3-taiwan-8b-instruct-dpo-q5_k_s.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_k_s.gguf) | Q5_K_S | 5 | 5.6 GB| large, very low quality loss |
+| [llama-3-taiwan-8b-instruct-dpo-q5_k_m.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q5_k_m.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss |
+| [llama-3-taiwan-8b-instruct-dpo-q6_k.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q6_k.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss |
+| [llama-3-taiwan-8b-instruct-dpo-q8_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF/blob/main/llama-3-taiwan-8b-instruct-dpo-q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss |
 
 ## Original model card
 
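The change above swaps the repo segment of every table link from `Llama-3-Taiwan-8B-Instruct-GGUF` to `Llama-3-Taiwan-8B-Instruct-DPO-GGUF`; all six links follow the same Hugging Face `blob/main` URL pattern. As a sketch (the helper function is illustrative, not part of the commit), the corrected rows can be generated from the repo id and quant names rather than edited by hand:

```python
# Rebuild the corrected table links from the repo id and quant suffixes.
# REPO_ID and the filenames are taken from the "+" lines of the diff above.
REPO_ID = "chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF"
QUANTS = ["q5_0", "q5_1", "q5_k_s", "q5_k_m", "q6_k", "q8_0"]

def blob_url(repo_id: str, filename: str) -> str:
    # "blob" URLs render the file page on huggingface.co;
    # replacing "blob" with "resolve" gives a direct-download URL.
    return f"https://huggingface.co/{repo_id}/blob/main/{filename}"

for quant in QUANTS:
    fname = f"llama-3-taiwan-8b-instruct-dpo-{quant}.gguf"
    print(f"[{fname}]({blob_url(REPO_ID, fname)})")
```

Generating the links this way keeps the repo id in one place, so a future rename touches a single constant instead of six table rows.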