|
--- |
|
base_model: google/gemma-2-2b |
|
library_name: transformers |
|
license: gemma |
|
pipeline_tag: text-generation |
|
tags: |
|
- conversational |
|
quantized_by: fedric95 |
|
extra_gated_heading: Access Gemma on Hugging Face |
|
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and |
|
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging |
|
Face and click below. Requests are processed immediately. |
|
extra_gated_button_content: Acknowledge license |
|
--- |
|
|
|
## Llamacpp Quantizations of gemma-2-2b
|
|
|
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a> for quantization. |
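
For reference, the flow for producing quants like these with llama.cpp around release b3583 looks roughly like the sketch below; this is a minimal sketch, assuming the stock `convert_hf_to_gguf.py` script and `llama-quantize` binary from that release, with placeholder paths:

```
# Convert the original Hugging Face checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./gemma-2-2b --outtype f32 --outfile gemma-2-2b.FP32.gguf

# Quantize the FP32 GGUF down to the desired type, e.g. Q4_K_M
./llama-quantize gemma-2-2b.FP32.gguf gemma-2-2b-Q4_K_M.gguf Q4_K_M
```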
|
|
|
Original model: https://huggingface.co/google/gemma-2-2b |
|
|
|
## Download a file (not the whole branch) from below: |
|
|
|
| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ----------- |
| [gemma-2-2b.FP32.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b.FP32.gguf) | FP32 | 10.50GB | 8.9236 +/- 0.06373 |
| [gemma-2-2b-Q8_0.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.78GB | 8.9299 +/- 0.06377 |
| [gemma-2-2b-Q6_K.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.15GB | 8.9570 +/- 0.06404 |
| [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.92GB | 9.0061 +/- 0.06461 |
| [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.88GB | 9.0096 +/- 0.06451 |
| [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.71GB | 9.2260 +/- 0.06643 |
| [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.64GB | 9.3116 +/- 0.06726 |
| [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.55GB | 9.5683 +/- 0.06909 |
| [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.46GB | 9.7759 +/- 0.07120 |
| [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.36GB | 10.8067 +/- 0.08032 |
| [gemma-2-2b-Q2_K.gguf](https://huggingface.co/fedric95/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.23GB | 13.8994 +/- 0.10723 |
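
The perplexity figures above are computed on the wikitext-2-raw-v1 test set (lower is better; the +/- value is the reported standard error). A minimal sketch of that kind of measurement, assuming the b3583 `llama-perplexity` binary and a locally downloaded copy of the dataset (the file path is a placeholder; see the Reproducibility section for the exact setup used here):

```
# Run the llama.cpp perplexity tool against the wikitext-2-raw test split
./llama-perplexity -m gemma-2-2b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```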
|
|
|
## Benchmark Results |
|
|
|
| Benchmark | Quant type | Score (%) |
| -------- | ---------- | --------- |
| WinoGrande (0-shot) | Q8_0 | 68.3504 +/- 1.3072 |
| WinoGrande (0-shot) | Q4_K_M | 67.5612 +/- 1.3157 |
| WinoGrande (0-shot) | Q3_K_M | 65.9037 +/- 1.3323 |
| WinoGrande (0-shot) | Q3_K_S | 66.6930 +/- 1.3246 |
| WinoGrande (0-shot) | Q2_K | 63.2991 +/- 1.3546 |
| HellaSwag (0-shot) | Q8_0 | 71.25074686 |
| HellaSwag (0-shot) | Q4_K_M | 69.95618403 |
| HellaSwag (0-shot) | Q3_K_M | 68.00438160 |
| HellaSwag (0-shot) | Q3_K_S | 69.95618403 |
| HellaSwag (0-shot) | Q2_K | 59.38060147 |
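
Both benchmarks can be scored with the same llama.cpp perplexity tool via its built-in `--winogrande` and `--hellaswag` modes. A minimal sketch, assuming the b3583 `llama-perplexity` binary; the dataset file names below are placeholders, not files shipped with this repo (see the Reproducibility section for the exact setup):

```
# WinoGrande accuracy (0-shot)
./llama-perplexity -m gemma-2-2b-Q8_0.gguf --winogrande -f winogrande-debiased-eval.csv

# HellaSwag accuracy (0-shot)
./llama-perplexity -m gemma-2-2b-Q8_0.gguf --hellaswag -f hellaswag_val_full.txt
```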
|
|
|
## Downloading using huggingface-cli |
|
|
|
First, make sure you have huggingface-cli installed:
|
|
|
```
pip install -U "huggingface_hub[cli]"
```
|
|
|
Then, you can target the specific file you want: |
|
|
|
```
huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q4_K_M.gguf" --local-dir ./
```
|
|
|
If the model is larger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
|
|
|
```
huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q8_0.gguf/*" --local-dir gemma-2-2b-Q8_0
```
|
|
|
You can either specify a new local-dir (gemma-2-2b-Q8_0) or download them all in place (./).
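
Once downloaded, a GGUF file can be run directly with llama.cpp. A minimal sketch, assuming the b3583 `llama-cli` binary (the prompt is purely illustrative):

```
# Load the quantized model and generate up to 128 tokens from a prompt
./llama-cli -m gemma-2-2b-Q4_K_M.gguf -p "Why is the sky blue?" -n 128
```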
|
|
|
|
|
## Reproducibility |
|
|
|
https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638 |