justheuristic committed on
Commit
2bcaf27
1 Parent(s): cd9cabb

Create README.md

Files changed (1): README.md +29 -0

---
library_name: transformers
tags:
- phi-3
- phi-3-mini
- phi-3-mini-4k-instruct
- conversational
- text-generation-inference
pipeline_tag: text-generation
language:
- en
---

Official quantization of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).

For this quantization, we used one 16-bit codebook for groups of 8 weights, i.e., the 1x16 scheme (roughly 2 bits per weight).
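
As a rough sanity check on that scheme, the sketch below estimates the raw storage cost of the quantized weights. It assumes Phi-3-mini's approximately 3.8B parameters (an assumption, not stated in this card); codebooks, embeddings, and un-quantized layers add overhead on top, which is why the results table below reports a larger size:

```python
# Back-of-the-envelope size estimate for the 1x16 scheme.
# Assumption: Phi-3-mini has ~3.8B parameters. Codebooks, embeddings, and
# un-quantized layers add overhead beyond the raw code storage, which is
# why the table below reports 1.4 GB rather than ~0.95 GB.
n_params = 3.8e9          # approximate parameter count (assumed)
bits_per_weight = 16 / 8  # one 16-bit code covers a group of 8 weights
size_gb = n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"~{size_gb:.2f} GB of quantized weight codes")  # ~0.95 GB
```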

Results (0-shot `acc`):

| Model | Quantization | ArcC | ArcE | HellaSwag | PiQA | WinoGrande | Model size, GB |
|------|------|------|------|------|------|------|------|
| microsoft/Phi-3-mini-4k-instruct | None | 0.5529 | 0.8325 | 0.6055 | 0.8020 | 0.7364 | 7.6 |
| | 1x16 | 0.5051 | 0.7950 | 0.5532 | 0.7949 | 0.7301 | 1.4 |
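
Zero-shot accuracies on these tasks are typically computed with EleutherAI's `lm-evaluation-harness` (`pip install lm-eval`), though this card does not state the exact setup used. A hedged sketch via its Python API; the model id below is a placeholder, substitute this repository's actual id:

```python
# Hedged sketch: 0-shot evaluation with lm-evaluation-harness (v0.4+ API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # Placeholder model id; point this at the actual quantized repo.
    model_args="pretrained=ISTA-DASLab/Phi-3-mini-4k-instruct-AQLM-PV-1x16,dtype=auto",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))  # 0-shot accuracy per task
```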

The 1x16g16 (1-bit) models are on the way; they will be released as soon as we update the inference library with the corresponding kernels.

To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
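
For convenience, here is a minimal, hedged sketch of what loading an AQLM-quantized chat model through `transformers` typically looks like once the inference library is installed (`pip install aqlm[gpu] transformers accelerate`); the model id is a placeholder for this repository's actual id:

```python
# Minimal inference sketch for an AQLM-quantized chat model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ISTA-DASLab/Phi-3-mini-4k-instruct-AQLM-PV-1x16"  # placeholder id

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # AQLM kernels run in half precision
    device_map="auto",   # requires `accelerate`; places weights on the GPU
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Phi-3-mini-4k-instruct is a chat model, so use its chat template.
messages = [{"role": "user", "content": "Summarize AQLM in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```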