## Run with LlamaEdge

- LlamaEdge version: [v0.3.2](https://github.com/second-state/LlamaEdge/releases/tag/0.3.2) (coming soon)

- Prompt template

  - Prompt type: `gemma-instruct`

  - Prompt string

    ```text
    <start_of_turn>user
    {user_message}<end_of_turn>
    <start_of_turn>model
    {model_message}<end_of_turn>model
    ```

- Run as LlamaEdge service (a download sketch and a sample request follow this list)

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-7b-it-Q5_K_M.gguf llama-api-server.wasm -p gemma-instruct -c 4096
  ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-7b-it-Q5_K_M.gguf llama-chat.wasm -p gemma-instruct -c 4096
  ```
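
The commands above expect the model file and the LlamaEdge wasm apps to be present in the working directory. A minimal download sketch, assuming `curl` is available and that the wasm binaries are published as release assets under the names used above (both assumptions; adjust as needed):

```bash
# Quantized model file (Q5_K_M shown; any file from the table below works if the
# filename in the wasmedge command is changed to match)
curl -LO https://huggingface.co/second-state/Gemma-7b-it-GGUF/resolve/main/gemma-7b-it-Q5_K_M.gguf

# LlamaEdge apps (asset names assumed from the commands above)
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-chat.wasm
```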
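
Once the API server is running, it should expose an OpenAI-compatible HTTP endpoint. A sample request sketch, assuming the default listen port of 8080 and the `/v1/chat/completions` route (both assumptions; check the LlamaEdge documentation for the defaults of your version):

```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "model": "gemma-7b-it"
      }'
```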

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [gemma-7b-it-Q2_K.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q2_K.gguf) | Q2_K | 2 | 3.09 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-7b-it-Q3_K_L.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q3_K_L.gguf) | Q3_K_L | 3 | 4.4 GB | small, substantial quality loss |
| [gemma-7b-it-Q3_K_M.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q3_K_M.gguf) | Q3_K_M | 3 | 4.06 GB | very small, high quality loss |
| [gemma-7b-it-Q3_K_S.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q3_K_S.gguf) | Q3_K_S | 3 | 3.68 GB | very small, high quality loss |
| [gemma-7b-it-Q4_0.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q4_0.gguf) | Q4_0 | 4 | 4.81 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-7b-it-Q4_K_M.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q4_K_M.gguf) | Q4_K_M | 4 | 5.13 GB | medium, balanced quality - recommended |
| [gemma-7b-it-Q4_K_S.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q4_K_S.gguf) | Q4_K_S | 4 | 4.84 GB | small, greater quality loss |
| [gemma-7b-it-Q5_0.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q5_0.gguf) | Q5_0 | 5 | 5.88 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-7b-it-Q5_K_M.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q5_K_M.gguf) | Q5_K_M | 5 | 6.04 GB | large, very low quality loss - recommended |
| [gemma-7b-it-Q5_K_S.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q5_K_S.gguf) | Q5_K_S | 5 | 5.88 GB | large, low quality loss - recommended |
| [gemma-7b-it-Q6_K.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q6_K.gguf) | Q6_K | 6 | 7.01 GB | very large, extremely low quality loss |
| [gemma-7b-it-Q8_0.gguf](https://huggingface.co/second-state/Gemma-7b-it-GGUF/blob/main/gemma-7b-it-Q8_0.gguf) | Q8_0 | 8 | 9.08 GB | very large, extremely low quality loss - not recommended |

*Quantized with llama.cpp b2230*
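
For reference, files like these follow the standard llama.cpp quantization workflow. A rough sketch of how such a quant could be reproduced (script and tool names match llama.cpp around build b2230 and are assumptions about the process, not the exact commands used for this repository):

```bash
# Convert the original Hugging Face checkpoint to a full-precision GGUF
python convert-hf-to-gguf.py /path/to/gemma-7b-it --outtype f16 --outfile gemma-7b-it-f16.gguf

# Quantize to one of the formats listed above, e.g. Q5_K_M
./quantize gemma-7b-it-f16.gguf gemma-7b-it-Q5_K_M.gguf Q5_K_M
```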