Commit 1802986 by apepkuss79 (parent: 780a092): Upload README.md with huggingface_hub

README.md:
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-3-8B-Japanese-Instruct-GGUF

## Original Model

[...]
```bash
# ... (start of the run command not shown in the diff hunk)
  --prompt-template llama-3-chat \
  --ctx-size 4096
```
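The tail of the run command above selects the Llama 3 chat prompt template and a 4096-token context window. If the full (elided) command launches the LlamaEdge API server (llama-api-server.wasm), the model can then be queried over an OpenAI-compatible HTTP API; the sketch below assumes that server and its default port 8080, neither of which is confirmed by the truncated command:

```bash
# Hypothetical request against a LlamaEdge llama-api-server.wasm instance
# (assumed setup; the elided command above may launch a different app).
curl -s -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "日本の首都はどこですか?"}
        ]
      }'
# The user message asks, in Japanese: "What is the capital of Japan?"
```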
## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3-8B-Japanese-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q2_K.gguf) | Q2_K | 2 | 3.18 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB | small, substantial quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB | very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB | very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB | medium, balanced quality - recommended |
| [Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB | small, greater quality loss |
| [Llama-3-8B-Japanese-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB | large, very low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB | large, low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q6_K.gguf) | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
| [Llama-3-8B-Japanese-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss - not recommended |
| [Llama-3-8B-Japanese-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-f16.gguf) | f16 | 16 | 16.1 GB | 16-bit float; unquantized reference |
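To try one of these quants locally, a single file can be fetched without cloning the whole repository. A minimal sketch using the huggingface_hub CLI (the filename is the Q5_K_M entry from the table above; any other row works the same way):

```bash
# Install the CLI, then pull just the chosen GGUF into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download second-state/Llama-3-8B-Japanese-Instruct-GGUF \
  Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
  --local-dir .
```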

*Quantized with llama.cpp b2824.*
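For reference, regenerating one of the quants from the f16 file with llama.cpp would look roughly like the sketch below. This is an assumed reconstruction, not the uploader's exact command; around tag b2824 the converter binary was still named `quantize` (later releases renamed it `llama-quantize`):

```bash
# Hypothetical reproduction of the quantization step with llama.cpp b2824:
# build llama.cpp, then convert the f16 GGUF to a K-quant of your choice.
./quantize Llama-3-8B-Japanese-Instruct-f16.gguf \
  Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf Q5_K_M
```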