Gemma-2-9B-Instruct-4Bit-GPTQ

Quantization

  • This model was quantized to 4-bit with the AutoGPTQ library
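The card does not include usage code, so here is a minimal, hedged sketch of loading the checkpoint with `transformers` (which dispatches to the GPTQ kernels automatically when the checkpoint ships a `quantization_config`, given `optimum` plus `auto-gptq` or `gptqmodel` in the environment). The prompt helper and function names below are illustrative assumptions, not part of the card.

```python
MODEL_ID = "Granther/Gemma-2-9B-Instruct-4Bit-GPTQ"


def build_gemma_prompt(user_message: str) -> str:
    # Gemma instruction-tuned models use <start_of_turn>/<end_of_turn> chat markers.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imported inside the function so the prompt helper above is usable
    # even without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Example call: `generate(build_gemma_prompt("What is GPTQ?"))`.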

Metrics

Benchmark   Metric   Gemma 2 GPTQ   Gemma 2 9B it
PIQA        0-shot   80.52          80.79
MMLU        5-shot   52.00          50.00
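The card does not say which harness produced these numbers. A common way to reproduce 0-shot PIQA and 5-shot MMLU scores is EleutherAI's lm-evaluation-harness; the invocation below is a sketch under that assumption.

```shell
pip install lm-eval  # EleutherAI lm-evaluation-harness

# 0-shot PIQA on the quantized checkpoint
lm_eval --model hf \
  --model_args pretrained=Granther/Gemma-2-9B-Instruct-4Bit-GPTQ \
  --tasks piqa --num_fewshot 0

# 5-shot MMLU
lm_eval --model hf \
  --model_args pretrained=Granther/Gemma-2-9B-Instruct-4Bit-GPTQ \
  --tasks mmlu --num_fewshot 5
```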
Safetensors

  • Model size: 2.03B params
  • Tensor types: I32, FP16

Model tree for Granther/Gemma-2-9B-Instruct-4Bit-GPTQ

  • Base model: google/gemma-2-9b