fedric95 committed

Commit 3377cbf
1 Parent(s): 2445611

Update README.md

Files changed (1):
  1. README.md +11 −11
README.md CHANGED
@@ -23,17 +23,17 @@ Original model: https://huggingface.co/google/gemma-2-2b
 
 | Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
 | -------- | ---------- | --------- | ----------- |
-| [gemma-2-2b.FP32.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b.FP32.gguf) | FP32 | 10.50GB | 8.9236 +/- 0.06373 |
-| [gemma-2-2b-Q8_0.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.78GB | 8.9299 +/- 0.06377 |
-| [gemma-2-2b-Q6_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.15GB | 8.9570 +/- 0.06404 |
-| [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.92GB | 9.0061 +/- 0.06461 |
-| [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.88GB | 9.0096 +/- 0.06451 |
-| [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.71GB | 9.2260 +/- 0.06643 |
-| [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.64GB | 9.3116 +/- 0.06726 |
-| [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.55GB | 9.5683 +/- 0.06909 |
-| [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.46GB | 9.7759 +/- 0.07120 |
-| [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.36GB | 10.8067 +/- 0.08032 |
-| [gemma-2-2b-Q2_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.23GB | 13.8994 +/- 0.10723 |
+| [gemma-2-2b.FP32.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b.FP32.gguf) | FP32 | 10.50GB | 8.9236 +/- 0.06373 |
+| [gemma-2-2b-Q8_0.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.78GB | 8.9299 +/- 0.06377 |
+| [gemma-2-2b-Q6_K.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.15GB | 8.9570 +/- 0.06404 |
+| [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.92GB | 9.0061 +/- 0.06461 |
+| [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.88GB | 9.0096 +/- 0.06451 |
+| [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.71GB | 9.2260 +/- 0.06643 |
+| [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.64GB | 9.3116 +/- 0.06726 |
+| [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.55GB | 9.5683 +/- 0.06909 |
+| [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.46GB | 9.7759 +/- 0.07120 |
+| [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.36GB | 10.8067 +/- 0.08032 |
+| [gemma-2-2b-Q2_K.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.23GB | 13.8994 +/- 0.10723 |
 
 ## Downloading using huggingface-cli
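
The table in this commit pairs each quant with its measured wikitext-2 perplexity, which makes the size/quality tradeoff explicit. As a minimal sketch (not part of the README; the `smallest_within` helper is hypothetical, but the sizes and perplexities are copied from the table above), one could pick the smallest file whose perplexity stays within a chosen margin of the FP32 baseline:

```python
# Quant table from the README: (quant type, file size in GB, perplexity).
QUANTS = [
    ("Q8_0",   2.78,  8.9299),
    ("Q6_K",   2.15,  8.9570),
    ("Q5_K_M", 1.92,  9.0061),
    ("Q5_K_S", 1.88,  9.0096),
    ("Q4_K_M", 1.71,  9.2260),
    ("Q4_K_S", 1.64,  9.3116),
    ("Q3_K_L", 1.55,  9.5683),
    ("Q3_K_M", 1.46,  9.7759),
    ("Q3_K_S", 1.36, 10.8067),
    ("Q2_K",   1.23, 13.8994),
]

FP32_PPL = 8.9236  # perplexity of the unquantized FP32 reference file


def smallest_within(max_ppl_increase_pct: float) -> str:
    """Return the smallest quant whose perplexity is at most
    max_ppl_increase_pct percent above the FP32 baseline."""
    budget = FP32_PPL * (1 + max_ppl_increase_pct / 100)
    candidates = [q for q in QUANTS if q[2] <= budget]
    # Among quants that meet the perplexity budget, take the smallest file.
    return min(candidates, key=lambda q: q[1])[0]


print(smallest_within(1))  # Q5_K_S: 9.0096 fits under 8.9236 * 1.01
print(smallest_within(5))  # Q4_K_S: 9.3116 fits under 8.9236 * 1.05
```

Per these numbers, quants down to about Q4 cost only a few percent of perplexity, while Q3_K_S and Q2_K degrade noticeably faster than they shrink.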