---
license: mit
language:
  - en
base_model:
  - unsloth/phi-4
  - microsoft/phi-4
pipeline_tag: text-generation
---

# Phi-4 converted for ExLlamaV3

ExLlamaV3 is an optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs.

This is an early preview release of ExLlamaV3.

| Quant | Bits per weight | File size | VRAM* |
|------------|-----------------|-----------|-------|
| phi-4_3bpw | 3 | 6.53 GB | 9.4 GB |
| phi-4_4bpw | 4 | 8.24 GB | 11.0 GB |
| phi-4_5bpw | 5 | 9.94 GB | 12.6 GB |
| phi-4_6bpw | 6 | 11.65 GB | 14.2 GB |
| phi-4_7bpw | 7 | 13.35 GB | 15.8 GB |
| phi-4_8bpw | 8 | 15.05 GB | 17.3 GB |

\*Approximate VRAM usage at 16k context.
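As a sanity check when picking a quant, the file sizes above are almost perfectly linear in the bit rate: roughly 1.7 GB per extra bit per weight, plus a fixed ~1.4 GB that does not scale with the bit rate (plausibly tensors stored at higher precision, such as embeddings; that reading is an assumption, not something stated in this repo). A quick verification against the table:

```python
# Fit a line to the table above: file_size ≈ overhead + per_bit * bpw.
# All numbers come straight from the table; the "overhead" interpretation
# (higher-precision tensors) is an assumption, not from this README.
sizes = {3: 6.53, 4: 8.24, 5: 9.94, 6: 11.65, 7: 13.35, 8: 15.05}  # GB

per_bit = (sizes[8] - sizes[3]) / (8 - 3)   # ~1.70 GB per extra bit per weight
overhead = sizes[3] - 3 * per_bit           # ~1.42 GB fixed overhead

for bpw, gb in sizes.items():
    est = overhead + per_bit * bpw
    print(f"{bpw} bpw: table {gb:.2f} GB, linear fit {est:.2f} GB")
```

The fit matches every row to within 0.01 GB, so interpolating a size for an intermediate bit rate is straightforward.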


## Phi-4 Model Card

[Phi-4 Technical Report](https://arxiv.org/abs/2412.08905)

### Model Summary

| | |
|---|---|
| **Developers** | Microsoft Research |
| **Description** | phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. |
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Context length** | 16,384 tokens |

### Usage

#### Input Formats

Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:

```
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
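If you assemble prompts programmatically, the chat template bundled with the upstream microsoft/phi-4 tokenizer produces the same structure (exact whitespace may differ slightly from the example above). A minimal sketch using Hugging Face `transformers`, which assumes the upstream repo's tokenizer and chat template are available:

```python
# Build the phi-4 chat prompt from a message list instead of writing
# the special tokens by hand. Uses the chat template shipped with the
# upstream microsoft/phi-4 tokenizer (assumed available for download).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

messages = [
    {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
    {"role": "user", "content": "How should I explain the Internet?"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant<|im_sep|>
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```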

With exllamav3's chat.py:

```
python examples\chat.py -m models\phi-4_exl3\4bpw -mode raw
```
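The quantized weights can also be loaded directly from Python. The sketch below follows the general pattern of exllamav3's example scripts, but since the library is an early preview, the class names and signatures here are assumptions and may have changed; if it doesn't run as-is, defer to the examples bundled with your exllamav3 version.

```python
# Hedged sketch of programmatic use. Class names and signatures are
# assumptions based on exllamav3's examples at the time of writing;
# they may differ in newer preview builds.
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

config = Config.from_directory("models/phi-4_exl3/4bpw")  # path to one quant
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=16384)  # phi-4's full context length
model.load()
tokenizer = Tokenizer.from_config(config)

generator = Generator(model=model, cache=cache, tokenizer=tokenizer)

# Prompt in the chat format shown above.
prompt = (
    "<|im_start|>system<|im_sep|>\n"
    "You are a medieval knight and must provide explanations to modern people.<|im_end|>\n"
    "<|im_start|>user<|im_sep|>\n"
    "How should I explain the Internet?<|im_end|>\n"
    "<|im_start|>assistant<|im_sep|>\n"
)
print(generator.generate(prompt=prompt, max_new_tokens=256))
```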