KimChen committed
Commit 41fc9c5 · verified · 1 Parent(s): b00c1b9

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -25,19 +25,19 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo bbvch-ai/bge-m3-GGUF --hf-file bge-m3.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo KimChen/bge-m3-GGUF --hf-file bge-m3.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo bbvch-ai/bge-m3-GGUF --hf-file bge-m3.gguf -c 2048
+llama-server --hf-repo KimChen/bge-m3-GGUF --hf-file bge-m3.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
 
 Step 1: Clone llama.cpp from GitHub.
 ```
-git clone https://github.com/ggerganov/llama.cpp
+git clone https://github.com/ggerganov/llama.cpp.git
 ```
 
 Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
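The hunk ends at Step 2 before the build command itself appears. For orientation, a minimal sketch of how that step usually continues, assuming the make-based build llama.cpp shipped at the time (newer releases build with CMake, so treat the invocation as illustrative rather than definitive):

```bash
# Sketch of the build step referenced above (not part of this diff).
# LLAMA_CURL=1 enables downloading models via --hf-repo;
# LLAMA_CUDA=1 targets Nvidia GPUs on Linux, per the README text.
cd llama.cpp
LLAMA_CURL=1 make
# or, with CUDA support:
# LLAMA_CURL=1 LLAMA_CUDA=1 make
```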
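Since bge-m3 is an embedding model, a quick smoke test once llama-server is running is to hit its OpenAI-compatible embeddings route. A minimal sketch, assuming the server was started with the --embedding flag and is listening on the default port 8080 (neither appears in the diffed commands):

```bash
# Hypothetical check: request an embedding vector from the local server.
# Assumes: llama-server ... --embedding (flag not shown in this README)
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "The meaning to life and the universe is"}'
```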