Official AQLM quantization of meta-llama/Llama-3.2-3B, fine-tuned with PV-Tuning.

For this quantization, we used two codebooks of 8 bits each and a group size of 8.
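A minimal loading sketch (not part of the original card): AQLM checkpoints load through the standard `transformers` API, assuming the `aqlm` inference kernels (`pip install aqlm[gpu]`) and a recent `transformers` release with AQLM support are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-3.2-3B-AQLM-PV-2Bit-2x8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # codebooks and scales are stored in fp16
    device_map="auto",    # place the model on the available GPU(s)
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```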

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | HellaSwag | PiQA | WinoGrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Llama-3.2-3B | fp16 | 0.5642 | 0.4224 | 0.7449 | 0.5526 | 0.7666 | 0.6993 | 6.4 |
| | 2x8g8 | 0.3876 | 0.3328 | 0.6763 | 0.4817 | 0.7421 | 0.5975 | 1.5 |
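
Benchmarks of this kind are typically produced with EleutherAI's lm-evaluation-harness; below is a hedged reproduction sketch using its Python API (`pip install lm-eval`). The task names and the few-shot configuration are assumptions inferred from the table headers, not a confirmed record of the original evaluation setup.

```python
import lm_eval

# Evaluate the quantized checkpoint on the tasks reported above.
# Note: the table reports MMLU as 5-shot; the harness applies few-shot
# settings per task, so the exact original configuration may differ.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ISTA-DASLab/Llama-3.2-3B-AQLM-PV-2Bit-2x8",
    tasks=["mmlu", "arc_challenge", "arc_easy", "hellaswag", "piqa", "winogrande"],
)
print(results["results"])
```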