qwp4w3hyb/deepseek-coder-7b-instruct-v1.5-iMat-GGUF
Tags: GGUF · conversational · Inference Endpoints
License: deepseek
README.md exists but content is empty.
Downloads last month: 155
GGUF
Model size: 6.91B params
Architecture: llama
Available quantizations (a usage sketch follows the list):
1-bit: IQ1_S
2-bit: IQ2_XXS
3-bit: IQ3_XXS
4-bit: IQ4_XS, Q4_K_M, Q4_K_L
5-bit: Q5_K_M, Q5_K_L
6-bit: Q6_K, Q6_K_L
8-bit: Q8_0, Q8_0_L
16-bit: BF16
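
Since the README is empty, here is a minimal, hedged sketch of how one of these GGUF quantizations could be pulled from the Hub and run locally with the llama-cpp-python bindings. The exact GGUF filename is not listed on this page, so the name used below is an assumption; check the repo's file listing for the real filenames before running.

```python
# Minimal sketch: download one GGUF quantization and run a completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "qwp4w3hyb/deepseek-coder-7b-instruct-v1.5-iMat-GGUF"
# Hypothetical filename for the Q4_K_M quant -- verify against the repo's files.
filename = "deepseek-coder-7b-instruct-v1.5-imat-Q4_K_M.gguf"

# Fetch the quantized weights from the Hub (cached locally after the first call).
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the model with llama.cpp bindings; n_ctx sets the context window size.
llm = Llama(model_path=model_path, n_ctx=4096)

# Simple text completion against the instruct model.
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

Lower-bit quants (IQ1_S through IQ3_XXS) trade accuracy for a smaller memory footprint; Q4_K_M and above are the usual starting points for a 7B model if VRAM allows.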