The Quantized Phi-3-small-128k-instruct Model

Original Base Model: microsoft/Phi-3-small-128k-instruct.
Link: https://huggingface.co/microsoft/Phi-3-small-128k-instruct

Quantization Configuration

"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
      "version": 1
    },
    "group_size": 128,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
  },
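
The configuration above describes a 4-bit GPTQ checkpoint with symmetric quantization, a group size of 128, and the ExLlama kernels enabled. Below is a minimal sketch of loading the model with the transformers library; it assumes the repository ID shuyuej/Phi-3-small-128k-instruct-GPTQ and that the optimum and auto-gptq packages are installed. trust_remote_code=True is needed because the Phi-3-small repository ships custom modeling code.

```python
# Minimal loading sketch (not an official example).
# Assumes: pip install transformers optimum auto-gptq
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuyuej/Phi-3-small-128k-instruct-GPTQ"

# trust_remote_code=True is required: Phi-3-small uses custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place layers on the available GPU(s)
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Simple chat-style generation using the model's chat template.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```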

Source Code

The quantization code is available at: https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
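
For reference, the sketch below shows how a configuration like the one above can be produced with transformers' GPTQConfig. It mirrors the listed hyperparameters (4 bits, group_size=128, damp_percent=0.1, symmetric, true-sequential) but is illustrative only; the actual script lives in the linked repository, and the "c4" calibration dataset here is an assumption.

```python
# Illustrative GPTQ quantization sketch, mirroring the configuration listed
# above; see the linked repository for the actual pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "microsoft/Phi-3-small-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

gptq_config = GPTQConfig(
    bits=4,                # 4-bit weights
    group_size=128,        # columns quantized per group
    damp_percent=0.1,      # Hessian dampening factor
    desc_act=False,        # no activation-order reordering
    sym=True,              # symmetric quantization
    true_sequential=True,  # quantize transformer blocks sequentially
    dataset="c4",          # calibration data (an assumption here)
    tokenizer=tokenizer,
)

# Quantization runs inside from_pretrained and can take a while on GPU.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
    trust_remote_code=True,
)
model.save_pretrained("Phi-3-small-128k-instruct-GPTQ")
tokenizer.save_pretrained("Phi-3-small-128k-instruct-GPTQ")
```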
