Quantized MLX version of https://huggingface.co/Almawave/Velvet-14B, produced with mlx_lm.convert. It takes up 7.93 GB and runs very well on Apple Silicon Macs with 16 GB of RAM.
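
The exact conversion options are not stated here; below is a minimal sketch of how the conversion could be reproduced with mlx_lm's Python API, assuming the default 4-bit quantization (the 7.93 GB footprint is consistent with 4-bit weights for a 14B model). The output path is illustrative, not necessarily what was used for this repo.

```python
# Hypothetical reproduction of the conversion; the output path and the
# assumption of default 4-bit quantization are not confirmed by this card.
from mlx_lm import convert

convert(
    hf_path="Almawave/Velvet-14B",        # original full-precision model
    mlx_path="almawave-Velvet-14B-MLX",   # illustrative output directory
    quantize=True,                        # quantize weights (4-bit by default)
)
```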
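A usage sketch with the mlx-lm package, following its standard load/generate API (the repo id matches this model; the prompt is illustrative):

```python
# pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("fabiolecca/almawave-Velvet-14B-MLX")

prompt = "Qual è la capitale d'Italia?"  # illustrative prompt

# Apply the chat template if the tokenizer defines one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```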

Format: Safetensors · Model size: 2.2B params · Tensor types: FP16, U32
