Alcoft/Qwen2.5-0.5B-Instruct-GGUF

Text Generation · GGUF · English · conversational
License: apache-2.0
README.md exists in the repository but is empty, so no model card description is provided.
Downloads last month: 22

GGUF
Model size: 494M params
Architecture: qwen2
Quantized variants:

Bits    Quant    File size
2-bit   Q2_K     339 MB
3-bit   Q3_K_S   338 MB
3-bit   Q3_K_M   355 MB
3-bit   Q3_K_L   369 MB
4-bit   Q4_K_S   385 MB
4-bit   Q4_K_M   398 MB
5-bit   Q5_K_S   413 MB
5-bit   Q5_K_M   420 MB
6-bit   Q6_K     506 MB
8-bit   Q8_0     531 MB
(One additional variant is available in the repository but not listed above.)
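Since the repository ships several GGUF quantizations, a common workflow is to download only the variant you need and load it with a GGUF-capable runtime. The snippet below is a minimal sketch, assuming llama-cpp-python and huggingface_hub are installed and that the Q4_K_M file's name contains the substring "q4_k_m"; the actual filenames in the repository may differ.

```python
from huggingface_hub import list_repo_files, hf_hub_download
from llama_cpp import Llama

repo_id = "Alcoft/Qwen2.5-0.5B-Instruct-GGUF"

# List the repo's files and pick the Q4_K_M variant (398 MB per the table above).
# Assumption: the filename contains the quantization type, in any letter case.
files = list_repo_files(repo_id)
q4 = next(f for f in files if f.lower().endswith(".gguf") and "q4_k_m" in f.lower())

# Download only that one file rather than the whole repository.
model_path = hf_hub_download(repo_id=repo_id, filename=q4)

# Load the GGUF with llama-cpp-python and run a single chat turn.
llm = Llama(model_path=model_path, n_ctx=4096, verbose=False)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Any other variant from the table can be selected the same way by changing the substring; smaller quantizations reduce download size and memory use at some cost in output quality.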
Inference Providers (Text Generation): this model isn't deployed by any Inference Provider. HF Inference deployability: the model has no library tag.
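Because no Inference Provider hosts this repository, local serving is the practical alternative. The sketch below assumes llama-cpp-python's optional server extra is installed and that one of the GGUF files has already been downloaded (the path shown is a placeholder); the local server exposes an OpenAI-compatible chat endpoint on port 8000 by default, so the standard openai client can talk to it.

```python
# Start a local OpenAI-compatible server first (assumes llama-cpp-python[server]):
#   python -m llama_cpp.server --model ./qwen2.5-0.5b-instruct-q4_k_m.gguf   # placeholder path
from openai import OpenAI

# The local server does not check API keys, but the client requires a value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local")
resp = client.chat.completions.create(
    model="qwen2.5-0.5b-instruct",  # placeholder name; the server answers with the loaded GGUF
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is in one sentence."}],
)
print(resp.choices[0].message.content)
```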
Model tree for Alcoft/Qwen2.5-0.5B-Instruct-GGUF:
Base model: Qwen/Qwen2.5-0.5B
Finetuned: Qwen/Qwen2.5-0.5B-Instruct
Quantized (108): this model
Collection including Alcoft/Qwen2.5-0.5B-Instruct-GGUF:
TAO71-AI Quants: Qwen2.5 (collection, 4 items, updated Dec 2, 2024)