This is a quantized GGUF version of Microsoft Phi-2, provided in 4-bit (Q4_0) and 8-bit (Q8_0) quantizations, along with the converted FP16 model.
(Link to the original model: https://huggingface.co/microsoft/phi-2)
Disclaimer: make sure you have a recent build of llama.cpp that includes commit b9e74f9bca5fdf7d0a22ed25e7a9626335fdfa48 or later.
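
As a minimal sketch of loading one of these files, here is an example using the llama-cpp-python bindings (an assumption; any llama.cpp-based runtime past the commit above should work). The local filename `phi-2.Q4_0.gguf` is hypothetical; substitute whichever quantization you downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename below is hypothetical; point it at your downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="phi-2.Q4_0.gguf", n_ctx=2048)

# Phi-2 was trained with an "Instruct: ... Output:" prompt format.
prompt = "Instruct: Explain what GGUF quantization is in one sentence.\nOutput:"
out = llm(prompt, max_tokens=64, stop=["Instruct:"])
print(out["choices"][0]["text"])
```

The smaller Q4_0 file trades some output quality for roughly half the memory footprint of Q8_0, so pick the quantization that fits your hardware.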