GGUF Quants for: yulan-team/YuLan-Mini
Model by: RUC-GSAI-YuLan (thank you!)
Quants by: quantflex
Run with llama.cpp
No K-quants are included because the model's tensor columns are not divisible by 256 (K-quants require row sizes that are a multiple of 256).
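A minimal sketch of running one of these quants with llama.cpp's CLI; the `.gguf` filename below is illustrative, so check the repository's file list for the exact name:

```shell
# Assumes llama.cpp is already built and the quant file has been downloaded.
# The filename "YuLan-Mini-Q8_0.gguf" is an assumption for illustration.
./llama-cli -m YuLan-Mini-Q8_0.gguf -p "Hello, world" -n 64
```

The same `-m` path works with `llama-server` if you prefer an OpenAI-compatible HTTP endpoint instead of a one-shot prompt.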
Available quantizations:
- 4-bit
- 5-bit
- 8-bit
- 16-bit
- 32-bit
Model tree for quantflex/YuLan-Mini-GGUF
Base model: yulan-team/YuLan-Mini