jon-tow committed
Commit cfeb7ee
1 Parent(s): feabb6e

update(README): advise downloading files with `hf_transfer` enabled

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -37,10 +37,12 @@ Make sure to install release [b2684](https://github.com/ggerganov/llama.cpp/rele
 Download any of the available GGUF files. For example, using the Hugging Face Hub CLI:
 
 ```bash
+pip install huggingface_hub[hf_transfer]
+export HF_HUB_ENABLE_HF_TRANSFER=1
 huggingface-cli download stabilityai/stablelm-2-12b-chat-GGUF stablelm-2-12b-chat-Q5_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-Then run the model with the [main](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) program:
+Then run the model with the [llama.cpp `main`](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) program:
 
 ```bash
 ./main -m stablelm-2-12b-chat-Q5_K_M.gguf -p "<|im_start|>user {PROMPT} <|im_end|><|im_start|>assistant"
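As context for the change: `hf_transfer` is an optional Rust-based download backend that the Hub CLI uses whenever `HF_HUB_ENABLE_HF_TRANSFER=1` is set. A minimal one-shot sketch of the new download step (the quoting of the pip extra is an assumption for zsh users; the repo and file names are taken from the README above):

```bash
# Install the optional hf_transfer backend; the quotes keep zsh from
# treating the square brackets as a glob pattern.
pip install "huggingface_hub[hf_transfer]"

# Scope the env var to a single command instead of exporting it.
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download \
  stabilityai/stablelm-2-12b-chat-GGUF \
  stablelm-2-12b-chat-Q5_K_M.gguf \
  --local-dir . --local-dir-use-symlinks False
```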
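Likewise, the README's one-shot `main` prompt can be turned into an interactive ChatML chat. A sketch assuming flags reported by llama.cpp's `main --help` around release b2684 (`-c`, `-n`, and `--chatml`); the values are illustrative, not part of the commit:

```bash
# Interactive chat: --chatml wraps each turn in the
# <|im_start|>/<|im_end|> template this model expects, so the prompt
# markers from the README don't have to be typed by hand.
./main -m stablelm-2-12b-chat-Q5_K_M.gguf \
  -c 4096 \
  -n 512 \
  --chatml
```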