jrkns committed
Commit c95591b · 1 Parent(s): 245edd2

update README.md

Files changed (1): README.md (+62 / -0)
README.md CHANGED

---
base_model: KBTG-Labs/THaLLE-0.1-7B-fa
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- finance
- llama-cpp
---

# KBTG-Labs/THaLLE-0.1-7B-fa-GGUF

This model was converted to GGUF format from [`KBTG-Labs/THaLLE-0.1-7B-fa`](https://huggingface.co/KBTG-Labs/THaLLE-0.1-7B-fa) using llama.cpp.
Refer to the [original model card](https://huggingface.co/KBTG-Labs/THaLLE-0.1-7B-fa) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
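
To confirm the install, you can print the build info of the binaries (assuming the brew formula puts `llama-cli` on your PATH):

```bash
llama-cli --version
```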

Invoke the llama.cpp server or the CLI with your preferred quantization level (`q2_k`, `q3_k_m`, `q4_k_m`, `q5_k_m`, `q6_k`, `q8_0`, or `f16`).
Smaller quantizations are faster and use less memory, but are less accurate.
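
The `--hf-repo`/`--hf-file` flags below fetch the file on demand, but you can also download a quantized file ahead of time with `huggingface-cli`. A sketch, assuming the files follow the `thalle-0.1-7b-fa-<QUANTIZATION_LEVEL>.gguf` naming used in the commands below:

```bash
# downloads the q4_k_m quantization into the current directory
huggingface-cli download KBTG-Labs/THaLLE-0.1-7B-fa-GGUF thalle-0.1-7b-fa-q4_k_m.gguf --local-dir .
```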

### CLI:

```bash
llama-cli --hf-repo KBTG-Labs/THaLLE-0.1-7B-fa-GGUF --hf-file thalle-0.1-7b-fa-<QUANTIZATION_LEVEL>.gguf -p "The meaning to life and the universe is"
```
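
For a concrete invocation, substitute a real quantization level for the placeholder. A hypothetical example using the q4_k_m file, capping generation at 128 tokens with `-n`:

```bash
llama-cli --hf-repo KBTG-Labs/THaLLE-0.1-7B-fa-GGUF --hf-file thalle-0.1-7b-fa-q4_k_m.gguf -p "The meaning to life and the universe is" -n 128
```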

### Server:

```bash
llama-server --hf-repo KBTG-Labs/THaLLE-0.1-7B-fa-GGUF --hf-file thalle-0.1-7b-fa-<QUANTIZATION_LEVEL>.gguf -c 2048
```
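
Once the server is running, you can query it over HTTP. A minimal sketch, assuming the default listen address of `127.0.0.1:8080`:

```bash
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```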

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
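
For instance, a CUDA-enabled build on Linux would combine the flags mentioned above (`-j` simply parallelizes compilation):

```bash
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```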

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo KBTG-Labs/THaLLE-0.1-7B-fa-GGUF --hf-file thalle-0.1-7b-fa-<QUANTIZATION_LEVEL>.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo KBTG-Labs/THaLLE-0.1-7B-fa-GGUF --hf-file thalle-0.1-7b-fa-<QUANTIZATION_LEVEL>.gguf -c 2048
```