sayhan/gemma-2b-GGUF-quantized
Tags: GGUF, Inference Endpoints
No model card has been provided for this repository.
Downloads last month: 70
GGUF
Model size: 2.51B params
Architecture: gemma
Available quantizations:
2-bit: Q2_K
3-bit: Q3_K_S, Q3_K_M, Q3_K_L
4-bit: Q4_K_S, Q4_0, Q4_K_M
5-bit: Q5_K_S, Q5_0, Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
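
Since the repository has no model card, here is a minimal usage sketch for the quantizations listed above. It downloads one quantized file with huggingface_hub and runs it with llama-cpp-python, which is just one of several GGUF-compatible runtimes. The exact .gguf filename is an assumption; check the repository's file listing for the real names.

# Minimal sketch: fetch one quantized GGUF file from this repo and run it locally.
# The filename below is assumed, not confirmed by this page; verify it in the
# repository's file listing before use.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="sayhan/gemma-2b-GGUF-quantized",
    filename="gemma-2b.Q4_K_M.gguf",  # assumed name for the 4-bit Q4_K_M variant
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])

As a rough guide, the lower-bit variants (Q2_K, Q3_K_*) trade output quality for smaller memory footprints, Q4_K_M and Q5_K_M are common balanced choices, and Q8_0 is close to the unquantized model.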