shenzhi-wang committed: Update README.md
README.md
CHANGED
@@ -21,6 +21,8 @@ tags:
- 🔥 We provide the official **Ollama model for the q8_0 GGUF** version of Llama3-70B-Chinese-Chat at [wangshenzhi/llama3-70b-chinese-chat-ollama-q8](https://ollama.com/wangshenzhi/llama3-70b-chinese-chat-ollama-q8)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-70b-chinese-chat-ollama-q8:latest`.

- 🔥 We provide the official **q4_0 GGUF** version of Llama3-70B-Chinese-Chat at [shenzhi-wang/Llama3-70B-Chinese-Chat-GGUF-4bit](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat-GGUF-4bit).

- 🌟 If you are in China, you can download our model from our [gitee repo](https://ai.gitee.com/shenzhi-wang/llama3-70b-chinese-chat).

+- 🌟 Online demo: https://ai.gitee.com/shenzhi-wang/llama3-70b-chinese-chat
+

# Model Summary