---
tags:
- llama-factory
- llama-cpp
- gguf-my-repo
base_model: avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI
---

# avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF
This model was converted to GGUF format from [`avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI`](https://huggingface.co/avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI) for more details on the model.
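
If you want the quantized file locally without going through llama.cpp's built-in downloader, you can fetch it with the `huggingface_hub` CLI. A minimal sketch, assuming `huggingface-cli` is installed (e.g. via `pip install huggingface_hub`):
```bash
# Download the Q8_0 GGUF from this repo into the current directory
huggingface-cli download avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF \
  German-RAG-phi-3.5-mini-4b-merged-long-context-qa-hessian-ai-q8_0.gguf \
  --local-dir .
```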

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
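
```bash
brew install llama.cpp

```
Invoke the llama.cpp server or the CLI.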

### CLI:
```bash
llama-cli --hf-repo avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-phi-3.5-mini-4b-merged-long-context-qa-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
```
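
On the first run, `llama-cli` downloads the GGUF file from the Hugging Face Hub; subsequent runs reuse the locally cached copy, so startup is immediate.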

### Server:
```bash
llama-server --hf-repo avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-phi-3.5-mini-4b-merged-long-context-qa-hessian-ai-q8_0.gguf -c 2048
```
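
Once `llama-server` is up, you can send it requests over its OpenAI-compatible HTTP API. A minimal sketch with curl, assuming the server's default bind address of `127.0.0.1:8080`; the question is just an illustrative example:
```bash
# Ask the running server a question via the OpenAI-compatible chat endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Wofür steht RAG bei Sprachmodellen?"}
    ]
  }'
```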

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
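
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```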

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-phi-3.5-mini-4b-merged-long-context-qa-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo avemio-digital/German-RAG-PHI-3.5-MINI-4B-MERGED-Long-Context-QA-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-phi-3.5-mini-4b-merged-long-context-qa-hessian-ai-q8_0.gguf -c 2048
```