Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-V2-Lite-Chat
- Original Model: deepseek-ai/DeepSeek-V2-Lite-Chat
- Original dtype: BF16 (bfloat16)
- Quantized by: llama.cpp fork PR 7519
- IMatrix dataset: here
- Status: ✅ Available
- Link: here
Common Quants:

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| DeepSeek-V2-Lite-Chat.Q8_0.gguf | Q8_0 | 16.70GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.Q6_K.gguf | Q6_K | 14.07GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.Q4_K.gguf | Q4_K | 10.36GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.Q3_K.gguf | Q3_K | 8.13GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.Q2_K.gguf | Q2_K | 6.43GB | ✅ Available | 🟢 Yes | 📦 No |
All Quants:

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| DeepSeek-V2-Lite-Chat.FP16.gguf | F16 | 31.42GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.BF16.gguf | BF16 | 31.42GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.Q5_K.gguf | Q5_K | 11.85GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.Q5_K_S.gguf | Q5_K_S | 11.14GB | ✅ Available | ⚪ No | 📦 No |
| DeepSeek-V2-Lite-Chat.Q4_K_S.gguf | Q4_K_S | 9.53GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.Q3_K_L.gguf | Q3_K_L | 8.46GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.Q3_K_S.gguf | Q3_K_S | 7.49GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.Q2_K_S.gguf | Q2_K_S | 6.46GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ4_NL.gguf | IQ4_NL | 8.91GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ4_XS.gguf | IQ4_XS | 8.57GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ3_M.gguf | IQ3_M | 7.55GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ3_S.gguf | IQ3_S | 7.49GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ3_XS.gguf | IQ3_XS | 7.12GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ3_XXS.gguf | IQ3_XXS | 6.96GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ2_M.gguf | IQ2_M | 6.33GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ2_S.gguf | IQ2_S | 6.01GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ2_XS.gguf | IQ2_XS | 5.97GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ2_XXS.gguf | IQ2_XXS | 5.64GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ1_M.gguf | IQ1_M | 5.24GB | ✅ Available | 🟢 Yes | 📦 No |
| DeepSeek-V2-Lite-Chat.IQ1_S.gguf | IQ1_S | 4.99GB | ✅ Available | 🟢 Yes | 📦 No |
Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:
```bash
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```bash
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF --include "DeepSeek-V2-Lite-Chat.Q8_0.gguf" --local-dir ./
```
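If you prefer Python, the same file can be fetched with the `huggingface_hub` library directly. A minimal sketch using `hf_hub_download`, with the same repo and filename as the CLI example above:

```python
# Sketch: download a single quant file via the Python API instead of the CLI.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF",
    filename="DeepSeek-V2-Lite-Chat.Q8_0.gguf",
    local_dir=".",  # same effect as --local-dir ./
)
```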
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```bash
huggingface-cli download legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF --include "DeepSeek-V2-Lite-Chat.Q8_0/*" --local-dir DeepSeek-V2-Lite-Chat.Q8_0
# see FAQ for merging GGUF's
```
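The Python equivalent for a split quant is `snapshot_download` with an `allow_patterns` glob; a sketch mirroring the `--include` pattern above:

```python
# Sketch: fetch every chunk of a split quant into a local folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/DeepSeek-V2-Lite-Chat-IMat-GGUF",
    allow_patterns=["DeepSeek-V2-Lite-Chat.Q8_0/*"],
    local_dir="DeepSeek-V2-Lite-Chat.Q8_0",
)
```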
Simple chat template:

```
<｜begin▁of▁sentence｜>User: {user_message_1}
Assistant: {assistant_message_1}<｜end▁of▁sentence｜>User: {user_message_2}
Assistant:
```
Chat template with system prompt:

```
<｜begin▁of▁sentence｜>{system_message}
User: {user_message_1}
Assistant: {assistant_message_1}<｜end▁of▁sentence｜>User: {user_message_2}
Assistant:
```
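For programmatic use, the two templates above can be rendered from a message list. A minimal sketch, not part of the original card; the exact whitespace between turns is an assumption, so check the model's tokenizer config if it matters:

```python
# Sketch: render the DeepSeek V2 chat template from a list of messages.
def build_prompt(messages, system_message=None):
    prompt = "<｜begin▁of▁sentence｜>"
    if system_message is not None:
        prompt += f"{system_message}\n"
    for message in messages:
        if message["role"] == "user":
            prompt += f"User: {message['content']}\nAssistant:"
        else:
            # Assistant turns are closed with the end-of-sentence token.
            prompt += f" {message['content']}<｜end▁of▁sentence｜>"
    return prompt

print(build_prompt([{"role": "user", "content": "Hello!"}]))
# <｜begin▁of▁sentence｜>User: Hello!
# Assistant:
```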
Chat with the model using llama.cpp:

```bash
llama.cpp/main -m DeepSeek-V2-Lite-Chat.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
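If you would rather stay in Python, the llama-cpp-python bindings can load the same file. A sketch, assuming `pip install llama-cpp-python`; the prompt, context size, and sampling settings here are illustrative, not prescribed by this card:

```python
# Sketch: run the Q8_0 quant through llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-V2-Lite-Chat.Q8_0.gguf", n_ctx=4096)
out = llm(
    "<｜begin▁of▁sentence｜>User: What is DeepSeek V2?\nAssistant:",
    max_tokens=256,
    stop=["<｜end▁of▁sentence｜>"],
)
print(out["choices"][0]["text"])
```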
FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).
How do I merge a split GGUF?

1. Make sure you have `gguf-split` available.
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases and download the latest release for your system; it includes `gguf-split`.
2. Locate your GGUF chunks folder (ex: `DeepSeek-V2-Lite-Chat.Q8_0`).
3. Run `gguf-split --merge DeepSeek-V2-Lite-Chat.Q8_0/DeepSeek-V2-Lite-Chat.Q8_0-00001-of-XXXXX.gguf DeepSeek-V2-Lite-Chat.Q8_0.gguf`.
   - Make sure to point `gguf-split` to the first chunk of the split.

Got a suggestion? Ping me @legraphista!