Change ggerganov -> ggml-org
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 - gguf-my-repo
 ---
 
-# ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF
+# ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF
 This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-1.5B`](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) for more details on the model.
 
@@ -30,12 +30,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -c 2048
+llama-server --hf-repo ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -52,9 +52,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -c 2048
+./llama-server --hf-repo ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --hf-file qwen2.5-coder-1.5b-q8_0.gguf -c 2048
 ```
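
The rename above touches the same `ggerganov/` prefix in five places (the heading and all four example commands), so it is the kind of edit that is easy to apply mechanically. A minimal sketch of producing the same change, assuming GNU sed and a local clone of the model repo (the path is illustrative):

```bash
# Rewrite every old-namespace reference in the model card in place.
# Assumes a local checkout; review with `git diff` before committing.
sed -i 's#ggerganov/Qwen2.5-Coder-1.5B-Q8_0-GGUF#ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF#g' README.md
```

Using `#` as the sed delimiter avoids having to escape the `/` inside the repo ids.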
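To confirm that the renamed repo id resolves before relying on the updated commands, the quantized file can also be fetched directly; a sketch assuming the `huggingface_hub` CLI is installed:

```bash
# Download the checkpoint from the new namespace
# (llama-cli/llama-server with --hf-repo will do this automatically).
huggingface-cli download ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF qwen2.5-coder-1.5b-q8_0.gguf --local-dir .
```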