apepkuss79 committed
Commit 7949684
Parent(s): 0d5e465
Update README.md

README.md CHANGED
```diff
@@ -88,7 +88,10 @@ quantized_by: Second State Inc.
 | [Meta-Llama-3-70B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 32 GB | very large, extremely low quality loss - not recommended |
 | [Meta-Llama-3-70B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 32.1 GB | very large, extremely low quality loss - not recommended |
 | [Meta-Llama-3-70B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 10.9 GB | very large, extremely low quality loss - not recommended |
-
-
+| [Meta-Llama-3-70B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 32.1 GB | |
+| [Meta-Llama-3-70B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 32 GB | |
+| [Meta-Llama-3-70B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 32 GB | |
+| [Meta-Llama-3-70B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 31.7 GB | |
+| [Meta-Llama-3-70B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 13.1 GB | |
 
 *Quantized with llama.cpp b2715.*
```
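The f16 model in this commit is split into five shards that follow llama.cpp's split naming scheme (`-%05d-of-%05d.gguf`). A minimal shell sketch of working with that scheme — the shard-name loop below is directly runnable, while the merge command in the comment assumes a locally built llama.cpp `gguf-split` tool (binary name and flags can differ between versions; recent llama.cpp builds can also load the first shard directly):

```shell
# Reconstruct the five shard filenames from the %05d-of-%05d naming scheme.
for i in 1 2 3 4 5; do
  printf 'Meta-Llama-3-70B-Instruct-f16-%05d-of-00005.gguf\n' "$i"
done

# Hypothetical merge of downloaded shards into one file (tool path assumed):
#   ./gguf-split --merge Meta-Llama-3-70B-Instruct-f16-00001-of-00005.gguf \
#       Meta-Llama-3-70B-Instruct-f16.gguf
```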