Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-V2-Lite
Original Model: deepseek-ai/DeepSeek-V2-Lite
Original dtype: BF16 (bfloat16)
Quantized by: llama.cpp https://github.com/ggerganov/llama.cpp/pull/7519
IMatrix dataset: here
Status: ✅ Available
Link: here
Common Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
DeepSeek-V2-Lite.Q8_0.gguf | Q8_0 | 16.70GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.Q6_K.gguf | Q6_K | 14.07GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.Q4_K.gguf | Q4_K | 10.36GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.Q3_K.gguf | Q3_K | 8.13GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.Q2_K.gguf | Q2_K | 6.43GB | ✅ Available | 🟢 Yes | 📦 No |
All Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
DeepSeek-V2-Lite.FP16.gguf | F16 | 31.42GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.BF16.gguf | BF16 | 31.42GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.Q5_K.gguf | Q5_K | 11.85GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.Q5_K_S.gguf | Q5_K_S | 11.14GB | ✅ Available | ⚪ No | 📦 No |
DeepSeek-V2-Lite.Q4_K_S.gguf | Q4_K_S | 9.53GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.Q3_K_L.gguf | Q3_K_L | 8.46GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.Q3_K_S.gguf | Q3_K_S | 7.49GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.Q2_K_S.gguf | Q2_K_S | 6.46GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ4_NL.gguf | IQ4_NL | 8.91GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ4_XS.gguf | IQ4_XS | 8.57GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ3_M.gguf | IQ3_M | 7.55GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ3_S.gguf | IQ3_S | 7.49GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ3_XS.gguf | IQ3_XS | 7.12GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ3_XXS.gguf | IQ3_XXS | 6.96GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ2_M.gguf | IQ2_M | 6.33GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ2_S.gguf | IQ2_S | 6.01GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ2_XS.gguf | IQ2_XS | 5.97GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ2_XXS.gguf | IQ2_XXS | 5.64GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ1_M.gguf | IQ1_M | 5.24GB | ✅ Available | 🟢 Yes | 📦 No |
DeepSeek-V2-Lite.IQ1_S.gguf | IQ1_S | 4.99GB | ✅ Available | 🟢 Yes | 📦 No |
If you do not have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:

```bash
huggingface-cli download legraphista/DeepSeek-V2-Lite-IMat-GGUF --include "DeepSeek-V2-Lite.Q8_0.gguf" --local-dir ./
```
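The `--include` flag accepts more than one glob pattern, so several quants can be fetched in a single call; a minimal sketch (the quant choices below are just examples from the tables above):

```bash
# Fetch two quants at once; swap the patterns for any filenames listed above
huggingface-cli download legraphista/DeepSeek-V2-Lite-IMat-GGUF \
  --include "DeepSeek-V2-Lite.Q4_K.gguf" "DeepSeek-V2-Lite.IQ4_XS.gguf" \
  --local-dir ./
```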
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```bash
huggingface-cli download legraphista/DeepSeek-V2-Lite-IMat-GGUF --include "DeepSeek-V2-Lite.Q8_0/*" --local-dir DeepSeek-V2-Lite.Q8_0
# see FAQ for merging GGUFs
```
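Putting both steps together, a minimal end-to-end sketch; `XXXXX` is the chunk-count placeholder from the filename above, and `gguf-split` is assumed to be on your `PATH` (see the FAQ below for where to get it):

```bash
# 1. Download every chunk of the split quant into a local folder
huggingface-cli download legraphista/DeepSeek-V2-Lite-IMat-GGUF \
  --include "DeepSeek-V2-Lite.Q8_0/*" --local-dir DeepSeek-V2-Lite.Q8_0

# 2. Merge the chunks; gguf-split reads the first chunk and locates the rest itself
gguf-split --merge \
  DeepSeek-V2-Lite.Q8_0/DeepSeek-V2-Lite.Q8_0-00001-of-XXXXX.gguf \
  DeepSeek-V2-Lite.Q8_0.gguf
```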
Inference with llama.cpp:

```bash
llama.cpp/main -m DeepSeek-V2-Lite.Q8_0.gguf --color -i -p "prompt here"
```
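For non-interactive use, llama.cpp also ships an HTTP server binary; a minimal sketch, assuming the same llama.cpp build directory layout as above:

```bash
# Serve the model over HTTP with a 4096-token context
llama.cpp/server -m DeepSeek-V2-Lite.Q8_0.gguf -c 4096 --host 127.0.0.1 --port 8080

# From another shell: query the built-in /completion endpoint
curl http://127.0.0.1:8080/completion -d '{"prompt": "prompt here", "n_predict": 64}'
```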
FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).
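For context, this is roughly how an importance matrix is produced and applied with llama.cpp's own tools; a hedged sketch, where `calibration.txt` stands in for the IMatrix dataset linked above and the quant type is just an example:

```bash
# Collect activation statistics from a calibration corpus using the full-precision model
llama.cpp/imatrix -m DeepSeek-V2-Lite.FP16.gguf -f calibration.txt -o imatrix.dat

# Re-quantize, letting the importance matrix decide where extra precision matters most
llama.cpp/quantize --imatrix imatrix.dat DeepSeek-V2-Lite.FP16.gguf DeepSeek-V2-Lite.IQ2_M.gguf IQ2_M
```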
How do I merge a split GGUF?

1. Make sure you have gguf-split available. To get hold of gguf-split, navigate to https://github.com/ggerganov/llama.cpp/releases and grab the latest release for your system.
2. Locate your GGUF chunks folder (ex: DeepSeek-V2-Lite.Q8_0).
3. Run:

```bash
gguf-split --merge DeepSeek-V2-Lite.Q8_0/DeepSeek-V2-Lite.Q8_0-00001-of-XXXXX.gguf DeepSeek-V2-Lite.Q8_0.gguf
```

Make sure to point gguf-split to the first chunk of the split.

Got a suggestion? Ping me @legraphista!