Update README.md #2
opened by MaziyarPanahi

README.md CHANGED
@@ -26,6 +26,8 @@ quantized_by: MaziyarPanahi
 ## Description
 [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1).
 
+IMPORTANT: There is no need to merge the splits. By now, most libraries support automatically loading the splits by simply pointing to the first one.
+
 ### About GGUF
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
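
The added note says split GGUF files can be loaded by pointing a library at the first shard. A minimal sketch of what that looks like with llama-cpp-python is below; the shard filename and generation parameters are hypothetical placeholders, not taken from this repository.

```python
# Minimal sketch: loading a split GGUF model by pointing at the first shard only.
# Assumes llama-cpp-python is installed (`pip install llama-cpp-python`).
# The filename below is a hypothetical example of a split GGUF naming scheme.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-70B-Instruct-DPO-v0.1.Q4_K_M-00001-of-00002.gguf",  # first split; remaining shards are picked up automatically
    n_ctx=4096,
)

# Simple completion call to confirm the model loaded.
out = llm("What is GGUF?", max_tokens=64)
print(out["choices"][0]["text"])
```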