fbaldassarri
committed on
Upload README.md
README.md CHANGED
````diff
@@ -23,8 +23,8 @@ tags:
 - llama-3
 - intel-autoround
 - intel
-model_name: Llama 3.2 1B
-base_model: meta-llama/Llama-3.2-1B
+model_name: Llama 3.2 1B Instruct
+base_model: meta-llama/Llama-3.2-1B-Instruct
 inference: false
 model_creator: meta-llama
 pipeline_tag: text-generation
@@ -35,7 +35,7 @@ quantized_by: fbaldassarri
 
 ## Model Information
 
-Quantized version of [meta-llama/Llama-3.2-1B](meta-llama/Llama-3.2-1B) using torch.float32 for quantization tuning.
+Quantized version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using torch.float32 for quantization tuning.
 - 8 bits (INT8)
 - group size = 128
 - Asymmetrical Quantization
@@ -43,7 +43,7 @@ Quantized version of [meta-llama/Llama-3.2-1B](meta-llama/Llama-3.2-1B) using to
 
 Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round)
 
-Note: this INT8 version of Llama-3.2-1B has been quantized to run inference through CPU.
+Note: this INT8 version of Llama-3.2-1B-Instruct has been quantized to run inference through CPU.
 
 ## Replication Recipe
 
@@ -68,14 +68,14 @@ pip install -vvv --no-build-isolation -e .[cpu]
 
 ```
 from transformers import AutoModelForCausalLM, AutoTokenizer
-model_name = "meta-llama/Llama-3.2-1B"
+model_name = "meta-llama/Llama-3.2-1B-Instruct"
 model = AutoModelForCausalLM.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 from auto_round import AutoRound
 bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False
 autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
 autoround.quantize()
-output_dir = "./AutoRound/meta-llama_Llama-3.2-1B-auto_gptq-int8-gs128-asym"
+output_dir = "./AutoRound/meta-llama_Llama-3.2-1B-Instruct-auto_gptq-int8-gs128-asym"
 autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
 ```
````
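For context on the "8 bits / group size = 128 / Asymmetrical Quantization" settings listed in the diff above, here is a minimal sketch of the affine (asymmetric) group-wise quantize/dequantize arithmetic those settings imply. It is illustrative only: AutoRound additionally tunes the rounding of each weight, so this shows the storage scheme rather than the tuning procedure, and the helper names are hypothetical.

```python
import torch

def quantize_group_asym(w: torch.Tensor, bits: int = 8):
    """Affine (asymmetric) quantization of one weight group.

    Hypothetical helper for illustration; not AutoRound's internal API.
    Returns integer codes plus the scale/zero-point needed to dequantize.
    """
    qmax = 2**bits - 1                                  # 255 for INT8
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-9) / qmax      # one scale per group
    zero_point = torch.round(-w_min / scale)            # asymmetric offset
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, qmax)
    return q.to(torch.uint8), scale, zero_point

def dequantize_group(q, scale, zero_point):
    # Reconstruct approximate float weights from the stored integers.
    return (q.to(torch.float32) - zero_point) * scale

# Example: one 128-element group, matching group size = 128 in the recipe.
group = torch.randn(128)
q, scale, zp = quantize_group_asym(group, bits=8)
error = (dequantize_group(q, scale, zp) - group).abs().max()
print(f"max abs reconstruction error: {error:.6f}")
```

With `group_size=128`, a separate scale and zero point are kept for every 128 weights, which appears to be what the `gs128-asym` suffix in the output directory name refers to.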
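The recipe ends at `save_quantized`. As a hedged sketch of the consumer side, the auto_gptq-format directory can typically be loaded back through the transformers GPTQ integration. The snippet below is an assumption, not something stated in the README: it presumes a transformers/optimum stack with a GPTQ backend that supports CPU execution, and it reloads the tokenizer from the base model in case tokenizer files were not exported alongside the quantized weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path produced by autoround.save_quantized(...) in the recipe above.
quantized_dir = "./AutoRound/meta-llama_Llama-3.2-1B-Instruct-auto_gptq-int8-gs128-asym"

# The quantization_config stored in the checkpoint tells transformers how to
# reconstruct the INT8 weights; a GPTQ-capable backend must be installed.
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

inputs = tokenizer("The capital of Italy is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```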