Manel committed · verified
Commit bc97cf5 · 1 Parent(s): 729a643

Update README.md

Files changed (1): README.md +0 -69
README.md CHANGED
@@ -1,69 +0,0 @@
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: This is a form to enable access to Llama 2 on Hugging Face
  after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
  and accept our license terms and acceptable use policy before submitting this form.
  Requests will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
  ? I agree to share my name, email address and username with Meta and confirm that
    I have already been granted download access on the Meta website
  : checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-cpp
- gguf-my-repo
base_model: NousResearch/Llama-2-13b-chat-hf
---

# Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF

This model was converted to GGUF format from [`NousResearch/Llama-2-13b-chat-hf`](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) for more details on the model.

## Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
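
If the install succeeded, the CLI and server binaries should be on your `PATH`. A quick sanity check (assuming your llama.cpp build exposes the standard `--version` flag):

```bash
llama-cli --version
```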

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```
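
The CLI accepts the usual llama.cpp sampling flags. As a sketch, `-n` (i.e. `--n-predict`) caps how many tokens are generated for the prompt above:

```bash
# Same prompt, but stop after at most 256 generated tokens
llama-cli --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf \
  -p "The meaning to life and the universe is" -n 256
```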

### Server:

```bash
llama-server --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf -c 2048
```
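
Once the server is up, you can query it over HTTP; `llama-server` exposes an OpenAI-compatible chat endpoint. A minimal sketch, assuming the default host and port (`localhost:8080`):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "What is the meaning of life?"}
        ]
      }'
```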

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
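
For example, a CUDA-enabled build on Linux with an NVIDIA GPU would combine the two flags (a sketch; this assumes the CUDA toolkit is already installed):

```bash
# Build with libcurl download support and CUDA offload enabled
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```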

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf -c 2048
```
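
Since this is a chat-tuned model, interactive conversation mode may give better results than raw prompt completion; a sketch, assuming your build is recent enough to support the `-cnv` (conversation) flag:

```bash
# Start an interactive chat session using the model's chat template
./llama-cli --hf-repo Manel/Llama-2-13b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-13b-chat-hf-q4_k_m.gguf -cnv
```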