MLX-quantized version of https://huggingface.co/Almawave/Velvet-14B, created with mlx_lm.convert. It occupies 7.93 GB and runs very well on Apple Silicon Macs with 16 GB of RAM.
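As a rough sketch of how a conversion like this can be reproduced and tested with mlx-lm (the `quantize`/`q_bits`/`q_group_size` parameter names follow recent mlx-lm releases and may differ in your version; the output path is illustrative):

```python
# Requires: pip install mlx-lm (Apple Silicon only)
from mlx_lm import convert, load, generate

# Quantize the original model to 4-bit MLX format.
# Parameter names assume a recent mlx-lm release; check your installed version.
convert(
    "Almawave/Velvet-14B",
    mlx_path="almawave-Velvet-14B-MLX",  # hypothetical local output directory
    quantize=True,
    q_bits=4,          # 4-bit weights keep the 14B model under ~8 GB
    q_group_size=64,
)

# Load the quantized model and run a quick generation test.
model, tokenizer = load("almawave-Velvet-14B-MLX")
print(generate(model, tokenizer, prompt="Ciao! Come stai?", max_tokens=100))
```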
Base model: Almawave/Velvet-14B