|
--- |
|
base_model: mistralai/Mistral-7B-v0.3 |
|
inference: false |
|
library_name: gguf |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
quantized_by: legraphista |
|
tags: |
|
- quantized |
|
- GGUF |
|
- imatrix |
|
- quantization |
|
- imat |
|
- static |
|
--- |
|
|
|
# Mistral-7B-v0.3-IMat-GGUF |
|
_Llama.cpp imatrix quantization of mistralai/Mistral-7B-v0.3_ |
|
|
|
Original Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) |
|
Original dtype: `BF16` (`bfloat16`) |
|
Quantized by: llama.cpp [b3003](https://github.com/ggerganov/llama.cpp/releases/tag/b3003) |
|
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) |
|
|
|
- [Mistral-7B-v0.3-IMat-GGUF](#mistral-7b-v0-3-imat-gguf) |
|
- [Files](#files) |
|
- [IMatrix](#imatrix) |
|
- [Common Quants](#common-quants) |
|
- [All Quants](#all-quants) |
|
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli) |
|
- [Inference](#inference) |
|
- [Llama.cpp](#llama-cpp) |
|
- [FAQ](#faq) |
|
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) |
|
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) |
|
|
|
--- |
|
|
|
## Files |
|
|
|
### IMatrix |
|
Status: ✅ Available
|
Link: [here](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/imatrix.dat) |
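
For reference, an importance matrix like this one is produced with llama.cpp's `imatrix` tool over a calibration dataset. A minimal sketch (file names are illustrative, not the exact command used for this repo):

```
# compute an importance matrix from a calibration text file
./imatrix -m Mistral-7B-v0.3.FP16.gguf -f imatrix.calibration.medium.raw -o imatrix.dat
```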
|
|
|
### Common Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Mistral-7B-v0.3.Q8_0.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q8_0.gguf) | Q8_0 | 7.70GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q6_K.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q6_K.gguf) | Q6_K | 5.95GB | ✅ Available | ⚪ No | 📦 No |
| Mistral-7B-v0.3.Q4_K | Q4_K | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K | Q3_K | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q2_K | Q2_K | - | ⏳ Processing | 🟢 Yes | - |
|
|
|
|
|
### All Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Mistral-7B-v0.3.FP16.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.FP16.gguf) | F16 | 14.50GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.BF16.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.BF16.gguf) | BF16 | 14.50GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q5_K.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q5_K.gguf) | Q5_K | 5.14GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q5_K_S.gguf](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.00GB | ✅ Available | ⚪ No | 📦 No |
| Mistral-7B-v0.3.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |
|
|
|
|
|
## Downloading using huggingface-cli |
|
If you do not have `huggingface-cli` installed:
|
``` |
|
pip install -U "huggingface_hub[cli]" |
|
``` |
|
Download the specific file you want: |
|
``` |
|
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0.gguf" --local-dir ./ |
|
``` |
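
If you also want the importance matrix itself (for example, to requantize locally), it can be fetched the same way:

```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "imatrix.dat" --local-dir ./
```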
|
If the model is too big for a single file, it has been split into multiple chunks. To download them all to a local folder, run:
|
``` |
|
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0/*" --local-dir Mistral-7B-v0.3.Q8_0 |
|
# see the FAQ below for merging GGUFs
|
``` |
|
|
|
--- |
|
|
|
## Inference |
|
|
|
### Llama.cpp |
|
``` |
|
llama.cpp/main -m Mistral-7B-v0.3.Q8_0.gguf --color -i -p "prompt here" |
|
``` |
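
The same binary accepts the usual llama.cpp flags for context size, generation length, and GPU offload. A sketch (the values are illustrative, tune them for your hardware):

```
# 4k context, generate up to 256 tokens, offload 32 layers to the GPU
llama.cpp/main -m Mistral-7B-v0.3.Q8_0.gguf --color -c 4096 -n 256 -ngl 32 -p "prompt here"
```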
|
|
|
--- |
|
|
|
## FAQ |
|
|
|
### Why is the IMatrix not applied everywhere? |
|
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
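
The imatrix is applied at quantization time. If you want to requantize locally with the imatrix from this repo, a sketch using llama.cpp's `quantize` tool (file names are illustrative):

```
# low-bit quants benefit most from importance-matrix guidance
./quantize --imatrix imatrix.dat Mistral-7B-v0.3.FP16.gguf Mistral-7B-v0.3.IQ2_M.gguf IQ2_M
```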
|
|
|
### How do I merge a split GGUF? |
|
1. Make sure you have `gguf-split` available |
|
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases |
|
- Download the appropriate zip for your system from the latest release |
|
- Unzip the archive and you should be able to find `gguf-split` |
|
2. Locate your GGUF chunks folder (ex: `Mistral-7B-v0.3.Q8_0`) |
|
3. Run `gguf-split --merge Mistral-7B-v0.3.Q8_0/Mistral-7B-v0.3.Q8_0-00001-of-XXXXX.gguf Mistral-7B-v0.3.Q8_0.gguf` |
|
- Make sure to point `gguf-split` to the first chunk of the split. |
|
|
|
--- |
|
|
|
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |