Upload README.md with huggingface_hub
README.md CHANGED
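For context, the commit was pushed with the huggingface_hub library (per the commit message above). A rough shell equivalent of the same upload, using huggingface_hub's bundled CLI, might look like the sketch below; the exact call made by the GGUF-my-repo space is an assumption:

```bash
# Hypothetical reproduction of this commit via huggingface_hub's CLI.
# Needs `pip install huggingface_hub` and a write token (`huggingface-cli login`).
huggingface-cli upload v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF \
  README.md README.md \
  --commit-message "Upload README.md with huggingface_hub"
```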
@@ -1,12 +1,12 @@
 ---
+base_model: Monero/Manticore-13b-Chat-Pyg-Guanaco
 tags:
 - manticore
 - llama-cpp
 - gguf-my-repo
-base_model: Monero/Manticore-13b-Chat-Pyg-Guanaco
 ---
 
-# v000000/Manticore-13b-Chat-Pyg-Guanaco-
+# v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF
 This model was converted to GGUF format from [`Monero/Manticore-13b-Chat-Pyg-Guanaco`](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) for more details on the model.
 
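The hunk above completes the previously truncated repo name in the title. With the full name known, the quantized file can also be fetched ahead of time instead of relying on the --hf-repo flags further down; a minimal sketch (repo id and file name taken from the commands later in this diff):

```bash
# Download only the Q6_K GGUF file into the current directory.
# Repo id and file name come from the updated README text in this commit.
huggingface-cli download v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF \
  manticore-13b-chat-pyg-guanaco-q6_k.gguf --local-dir .
```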
@@ -21,12 +21,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-
+llama-cli --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF --hf-file manticore-13b-chat-pyg-guanaco-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-
+llama-server --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF --hf-file manticore-13b-chat-pyg-guanaco-q6_k.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
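Once the corrected llama-server command above is running, it serves llama.cpp's built-in HTTP API. A sketch of a request against the OpenAI-compatible endpoint, assuming the default host and port (127.0.0.1:8080, since the command passes no --host/--port):

```bash
# Query the running llama-server; host and port are llama.cpp defaults,
# an assumption here since the README's command does not override them.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 64
      }'
```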
@@ -43,9 +43,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./
+./llama-cli --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF --hf-file manticore-13b-chat-pyg-guanaco-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./server --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-
+./llama-server --hf-repo v000000/Manticore-13b-Chat-Pyg-Guanaco-Q6_K-GGUF --hf-file manticore-13b-chat-pyg-guanaco-q6_k.gguf -c 2048
 ```
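Steps 1 and 2 are unchanged by this commit, so only a fragment of them survives in the hunk header above. Reconstructed from that fragment and the llama.cpp link in the Note line, they presumably amount to:

```bash
# Step 1: clone llama.cpp (upstream URL from the Note link in this README).
git clone https://github.com/ggerganov/llama.cpp
# Step 2: build with libcurl enabled so --hf-repo can download from Hugging Face;
# the `cd llama.cpp && LLAMA_CURL=1 make` fragment comes from the hunk header.
cd llama.cpp && LLAMA_CURL=1 make
```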