Upload Phi-3-mini-4k-instruct.gguf

#20

I noticed that the existing Q4-quantized GGUF model was last updated three months ago and, more importantly, that its chat template does not handle the system prompt. I converted the model at revision "ba3e2e891adaf6b9e7471bcc80dec875d73ae4e9" to GGUF format with 4-bit quantization from bf16 using the llama.cpp converter. I recommend using this updated Q4 model so you get the latest weights and avoid those errors.

The original model revision is this one: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/commit/4f818b18e097c9ae8f93a29a57027cad54b75304
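
For reference, a minimal sketch of the two-step conversion described above, assuming a local llama.cpp checkout with the converter script and the `llama-quantize` tool built. The paths, model directory name, and the exact 4-bit quantization type (`Q4_K_M` is shown as an example) are assumptions; the PR does not state the precise commands used.

```python
import subprocess
from pathlib import Path

# Hypothetical paths -- adjust to your llama.cpp checkout and local HF snapshot
# of microsoft/Phi-3-mini-4k-instruct at the pinned revision.
LLAMA_CPP = Path("llama.cpp")
MODEL_DIR = Path("Phi-3-mini-4k-instruct")
BF16_GGUF = Path("Phi-3-mini-4k-instruct-bf16.gguf")
Q4_GGUF = Path("Phi-3-mini-4k-instruct-q4.gguf")

# Step 1: convert the bf16 Hugging Face checkpoint to a bf16 GGUF file.
subprocess.run(
    [
        "python",
        str(LLAMA_CPP / "convert_hf_to_gguf.py"),
        str(MODEL_DIR),
        "--outtype", "bf16",
        "--outfile", str(BF16_GGUF),
    ],
    check=True,
)

# Step 2: quantize the bf16 GGUF down to 4-bit (Q4_K_M used here as an example type).
subprocess.run(
    [
        str(LLAMA_CPP / "llama-quantize"),
        str(BF16_GGUF),
        str(Q4_GGUF),
        "Q4_K_M",
    ],
    check=True,
)
```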
