Update README.md
README.md (CHANGED)
````diff
@@ -102,17 +102,17 @@ Invoke the llama.cpp server or the CLI.
 CLI:
 
 ```bash
-llama-cli --hf-repo
+llama-cli --hf-repo Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF --model granite-8b-code-instruct.Q5_K_M.gguf -p "You are an AI assistant"
 ```
 
 Server:
 
 ```bash
-llama-server --hf-repo
+llama-server --hf-repo Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF --model granite-8b-code-instruct.Q5_K_M.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
 
 ```
-git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-8b-code-instruct.
+git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-8b-code-instruct.Q5_K_M.gguf -n 128
 ```
````
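For the manual build path at the end of the diff, the GGUF file must already exist locally before `./main` can load it. A minimal sketch of one way to fetch it, assuming the `huggingface_hub` CLI is installed (`pip install huggingface_hub`):

```bash
# Download only the quantized checkpoint from the Hub into the current
# directory (assumes huggingface-cli is on PATH).
huggingface-cli download Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF \
  granite-8b-code-instruct.Q5_K_M.gguf --local-dir .
```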
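Once `llama-server` is up (`-c 2048` in the diff sets its context window), a quick request confirms it is serving the model. This sketch assumes the server's default bind address of `localhost:8080` and its OpenAI-compatible chat endpoint; adjust the host, port, and prompt as needed:

```bash
# Smoke-test the running server via its OpenAI-compatible endpoint
# (assumes the default bind address localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are an AI assistant"},
          {"role": "user", "content": "Write a function that reverses a string."}
        ],
        "max_tokens": 128
      }'
```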