Exllama v2 Quantizations of Mistral-7B-claude-chat

Using turboderp's ExLlamaV2 v0.0.6 for quantization.

Each branch contains a quantization at a different bits per weight.

Conversion was done using wikitext.parquet as the calibration dataset.
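
For reference, conversions like this are typically produced with ExLlamaV2's convert.py script. A minimal sketch of such an invocation (the paths and the 4.0 target below are illustrative placeholders, not the exact command used for this repo):

# -i: original fp16 model dir, -o: scratch dir for intermediate files,
# -cf: output dir for the quantized model, -c: calibration data, -b: target bits per weight
python convert.py -i /path/to/Mistral-7B-claude-chat -o /path/to/work_dir \
  -cf /path/to/Mistral-7B-claude-chat-exl2 -c wikitext.parquet -b 4.0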

Original model: https://huggingface.co/Norquinal/Mistral-7B-claude-chat

4.0 bits per weight

6.0 bits per weight

8.0 bits per weight

To download, you can use one of the following methods:

With git:

git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/Mistral-7B-claude-chat-exl2
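
Note that Hugging Face repositories store the model weights with Git LFS, so the LFS hooks must be set up before cloning (a one-time step per machine):

git lfs install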

With huggingface hub (credit to TheBloke for instructions):

pip3 install huggingface-hub

To download the main branch (only useful if you only care about measurement.json) to a folder called Mistral-7B-claude-chat-exl2:

mkdir Mistral-7B-claude-chat-exl2
huggingface-cli download bartowski/Mistral-7B-claude-chat-exl2 --local-dir Mistral-7B-claude-chat-exl2 --local-dir-use-symlinks False

To download from a different branch, add the --revision parameter:

mkdir Mistral-7B-claude-chat-exl2
huggingface-cli download bartowski/Mistral-7B-claude-chat-exl2 --revision 4.0 --local-dir Mistral-7B-claude-chat-exl2 --local-dir-use-symlinks False
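
The same download can also be scripted from Python with huggingface_hub's snapshot_download. A minimal sketch (the local folder name is just an example):

from huggingface_hub import snapshot_download

# fetch the 4.0 bpw branch into a local folder
snapshot_download(
    repo_id="bartowski/Mistral-7B-claude-chat-exl2",
    revision="4.0",
    local_dir="Mistral-7B-claude-chat-exl2",
)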
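Once downloaded, the model can be loaded with the ExLlamaV2 Python API. A minimal sketch, assuming an ExLlamaV2 version from the v0.0.6 era and that the files live in the folder used above; check the ExLlamaV2 examples for the exact API of your installed version:

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# point the config at the downloaded quantized model
config = ExLlamaV2Config()
config.model_dir = "Mistral-7B-claude-chat-exl2"
config.prepare()

model = ExLlamaV2(config)
model.load()  # load the quantized weights onto the GPU
cache = ExLlamaV2Cache(model)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# generate 64 tokens from a short prompt
print(generator.generate_simple("Hello, my name is", settings, 64))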