GGUF llama.cpp quantized version of:

Recommended Prompt Format (Phi-3)

<|system|>
Provide some context and/or instructions to the model.<|end|>
<|user|>
The user’s message goes here<|end|>
<|assistant|>
AI message goes here<|end|>
<|assistant|>
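
Below is a minimal sketch of running one of these GGUF files with the llama-cpp-python bindings, using the prompt format above. The model path, system text, and user text are placeholders, not the actual file names in this repo; adjust them to the quant you downloaded.

```python
# Sketch: run the quantized GGUF with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder -- point it at whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-Q8_0.gguf",  # placeholder file name
    n_ctx=4096,                      # context length; adjust to your hardware
)

# Build the prompt using the Phi-3 template shown above.
prompt = (
    "<|system|>\n"
    "You are a helpful assistant.<|end|>\n"
    "<|user|>\n"
    "Explain what an imatrix quant is in one sentence.<|end|>\n"
    "<|assistant|>\n"
)

# Stop on <|end|> so generation halts at the end of the assistant turn.
out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```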

Quant version: llama.cpp b3639, quantized with an importance matrix (imatrix)

Format: GGUF
Model size: 3.82B params
Architecture: phi3

Available quantizations: 2-bit, 5-bit, 8-bit, 32-bit
