Llamacpp Quantizations of dolphin-2.8-mistral-7b-v02

Using llama.cpp release b2536 for quantization.

Original model: https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02
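For reference, quants like these are typically produced in two steps: convert the original checkpoint to a full-precision GGUF, then re-quantize it to each target type. Below is a minimal sketch, assuming a local llama.cpp checkout built at the b2536 tag; paths and filenames are illustrative, and note that the `quantize` binary was renamed `llama-quantize` in later releases.

```python
# Hedged sketch of the usual llama.cpp quantization workflow (circa b2536).
# Run from the root of a built llama.cpp checkout; paths are placeholders.
import subprocess

# 1. Convert the original HF checkpoint to an f16 GGUF.
#    convert.py ships with llama.cpp and handles Mistral-architecture models.
subprocess.run(
    ["python", "convert.py", "path/to/dolphin-2.8-mistral-7b-v02",
     "--outtype", "f16", "--outfile", "dolphin-2.8-mistral-7b-v02-f16.gguf"],
    check=True,
)

# 2. Quantize the f16 GGUF to a target type, e.g. Q4_K_M.
subprocess.run(
    ["./quantize",
     "dolphin-2.8-mistral-7b-v02-f16.gguf",
     "dolphin-2.8-mistral-7b-v02-Q4_K_M.gguf",
     "Q4_K_M"],
    check=True,
)
```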

Download a single file (not the whole branch) from the table below; a scripted download example follows the table:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| dolphin-2.8-mistral-7b-v02-Q8_0.gguf | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| dolphin-2.8-mistral-7b-v02-Q6_K.gguf | Q6_K | 5.94GB | Very high quality, near perfect, recommended. |
| dolphin-2.8-mistral-7b-v02-Q5_K_M.gguf | Q5_K_M | 5.13GB | High quality, very usable. |
| dolphin-2.8-mistral-7b-v02-Q5_K_S.gguf | Q5_K_S | 4.99GB | High quality, very usable. |
| dolphin-2.8-mistral-7b-v02-Q5_0.gguf | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| dolphin-2.8-mistral-7b-v02-Q4_K_M.gguf | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight. |
| dolphin-2.8-mistral-7b-v02-Q4_K_S.gguf | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| dolphin-2.8-mistral-7b-v02-IQ4_NL.gguf | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, newer quantization method. |
| dolphin-2.8-mistral-7b-v02-IQ4_XS.gguf | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| dolphin-2.8-mistral-7b-v02-Q4_0.gguf | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| dolphin-2.8-mistral-7b-v02-Q3_K_L.gguf | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| dolphin-2.8-mistral-7b-v02-Q3_K_M.gguf | Q3_K_M | 3.51GB | Even lower quality. |
| dolphin-2.8-mistral-7b-v02-IQ3_M.gguf | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| dolphin-2.8-mistral-7b-v02-IQ3_S.gguf | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| dolphin-2.8-mistral-7b-v02-Q3_K_S.gguf | Q3_K_S | 3.16GB | Low quality, not recommended. |
| dolphin-2.8-mistral-7b-v02-Q2_K.gguf | Q2_K | 2.71GB | Extremely low quality, not recommended. |
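If you prefer scripting the download over the web UI, the huggingface_hub package can fetch a single file from the repo. A minimal sketch, assuming huggingface_hub is installed; the filename is one row from the table above and any other quant works the same way.

```python
from huggingface_hub import hf_hub_download

# Fetch exactly one quant file from the repo (not the whole branch).
path = hf_hub_download(
    repo_id="bartowski/dolphin-2.8-mistral-7b-v02-GGUF",
    filename="dolphin-2.8-mistral-7b-v02-Q4_K_M.gguf",  # pick any row from the table
)
print(path)  # local path of the downloaded GGUF
```

By default the file lands in the local Hugging Face cache; pass `local_dir="..."` to place it in a specific directory instead.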

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
