---
license: llama3.3
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-3.3
- meta
- autoawq
base_model:
- meta-llama/Llama-3.3-70B-Instruct
---

## Quantized Model Information

> [!IMPORTANT]
> This repository is an AWQ 4-bit quantized version of [`meta-llama/Llama-3.3-70B-Instruct`](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), originally released by Meta AI.

This model was quantized using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from FP16 down to INT4 using GEMM kernels, with zero-point quantization and a group size of 128.
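
For intuition, here is a minimal, illustrative sketch of what zero-point 4-bit quantization with a group size of 128 does to a single group of weights. This is not AutoAWQ's actual implementation (AWQ additionally applies activation-aware per-channel scaling before quantizing); it only shows the arithmetic the terms above refer to:

```python
import numpy as np

group_size = 128
w = np.random.randn(group_size).astype(np.float32)  # one group of full-precision weights

# Asymmetric (zero-point) quantization to 4 bits: 16 levels, integers 0..15
scale = (w.max() - w.min()) / 15
zero_point = np.round(-w.min() / scale)

w_int4 = np.clip(np.round(w / scale) + zero_point, 0, 15)  # stored as INT4
w_dequant = (w_int4 - zero_point) * scale                  # recovered at inference time

print("max abs error:", np.abs(w - w_dequant).max())
```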

Hardware: Intel Xeon CPU E5-2699A v4 @ 2.40GHz, 256GB of RAM, and 2x NVIDIA RTX 3090.

Model usage (inference) information for Transformers, AutoAWQ, Text Generation Inference (TGI), and vLLM, as well as quantization reproduction details, is below.

## Original Model Information

The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model in 70B (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.

## Model Usage

This quantized model is supported by several inference solutions, such as `transformers`, `autoawq`, or `text-generation-inference`.

> [!NOTE]
> In order to run inference with Llama 3.3 70B Instruct AWQ in INT4, around 35 GiB of VRAM is needed just to load the model checkpoint, not including the KV cache or the CUDA graphs, so a bit more than that should be available.
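
As a rough, back-of-the-envelope check of that figure (the parameter count and per-weight overhead below are approximations, not measurements):

```python
# Approximate VRAM needed for the INT4 weights alone:
params = 70.6e9         # approximate Llama 3.3 70B parameter count
bits_per_weight = 4.25  # ~4 bits plus assumed per-group scale/zero-point overhead
print(f"{params * bits_per_weight / 8 / 2**30:.1f} GiB")  # ≈ 35 GiB
```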

### 🤗 Transformers

In order to run inference with Llama 3.3 70B Instruct AWQ in INT4, you need to install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

To run inference of Llama 3.3 70B Instruct AWQ in INT4 precision, the AWQ model can be instantiated as any other causal language model via `AutoModelForCausalLM`, and inference run as usual.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

model_id = "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4"
quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512,  # Note: Update this as per your use-case
    do_fuse=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    quantization_config=quantization_config
)

prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```
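
For interactive use, generation can also be streamed token by token with the `TextStreamer` utility from `transformers` (a minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from above):

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated, rather than waiting for the full output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, do_sample=True, max_new_tokens=256, streamer=streamer)
```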

### AutoAWQ

In order to run inference with Llama 3.3 70B Instruct AWQ in INT4, you need to install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

Alternatively, you can run inference via `AutoAWQ` directly, even though it is built on top of 🤗 `transformers`, which remains the recommended approach as described above.

```python
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```
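
AutoAWQ also provides a `from_quantized` loader, which can fuse modules for faster decoding; a hedged sketch (argument names follow AutoAWQ's examples, and `fuse_layers` may need tuning for your setup):

```python
from awq import AutoAWQForCausalLM

# Alternative loader: fuses attention/MLP modules of the quantized checkpoint
model = AutoAWQForCausalLM.from_quantized(
    "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    fuse_layers=True,
)
```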

The AutoAWQ script has been adapted from [AutoAWQ/examples/generate.py](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).

### 🤗 Text Generation Inference (TGI)

To run the `text-generation-launcher` with Llama 3.3 70B Instruct AWQ in INT4 with Marlin kernels for optimized inference speed, you will need Docker installed (see the [installation notes](https://docs.docker.com/engine/install/)) and the `huggingface_hub` Python package, as you need to log in to the Hugging Face Hub.

```bash
pip install -q --upgrade huggingface_hub
huggingface-cli login
```

Then you just need to run the TGI v2.2.0 (or higher) Docker container as follows:

```bash
docker run --gpus all --shm-size 1g -ti -p 8080:80 \
  -v hf_cache:/data \
  -e MODEL_ID=ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4 \
  -e NUM_SHARD=4 \
  -e QUANTIZE=awq \
  -e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
  -e MAX_INPUT_LENGTH=4000 \
  -e MAX_TOTAL_TOKENS=4096 \
  ghcr.io/huggingface/text-generation-inference:2.2.0
```

> [!NOTE]
> TGI will expose different endpoints; to see all the available endpoints, check the [TGI OpenAPI Specification](https://huggingface.github.io/text-generation-inference/#/).
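
For example, besides the OpenAI-compatible route used below, TGI also serves its native `/generate` endpoint (shown here as a hedged example; see the specification above for the full schema):

```bash
curl 0.0.0.0:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": "What is Deep Learning?",
        "parameters": {
            "max_new_tokens": 128
        }
    }'
```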

To send a request to the deployed TGI endpoint, which is compatible with the [OpenAI OpenAPI specification](https://github.com/openai/openai-openapi), i.e. `/v1/chat/completions`:

```bash
curl 0.0.0.0:8080/v1/chat/completions \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "tgi",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

Or programmatically via the `huggingface_hub` Python client as follows:

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://0.0.0.0:8080", api_key=os.getenv("HF_TOKEN", "-"))

chat_completion = client.chat.completions.create(
    model="ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```

Alternatively, the OpenAI Python client can also be used (see the [installation notes](https://github.com/openai/openai-python?tab=readme-ov-file#installation)) as follows:

```python
import os
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8080/v1", api_key=os.getenv("OPENAI_API_KEY", "-"))

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```
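
Both clients also support token streaming; a minimal sketch with the OpenAI client, reusing the `client` from above:

```python
# Stream the completion chunk by chunk instead of waiting for the full response
stream = client.chat.completions.create(
    model="tgi",
    messages=[{"role": "user", "content": "What is Deep Learning?"}],
    max_tokens=128,
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```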

### vLLM

To run vLLM with Llama 3.3 70B Instruct AWQ in INT4, you will need to have Docker installed (see the [installation notes](https://docs.docker.com/engine/install/)) and run the latest vLLM Docker container as follows:

```bash
docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 \
  -v hf_cache:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4 \
  --tensor-parallel-size 4 \
  --max-model-len 4096
```

To send a request to the deployed vLLM endpoint, which is compatible with the [OpenAI OpenAPI specification](https://github.com/openai/openai-openapi), i.e. `/v1/chat/completions`:

```bash
curl 0.0.0.0:8000/v1/chat/completions \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

Or programmatically via the `openai` Python client (see the [installation notes](https://github.com/openai/openai-python?tab=readme-ov-file#installation)) as follows:

```python
import os
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key=os.getenv("VLLM_API_KEY", "-"))

chat_completion = client.chat.completions.create(
    model="ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```
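
vLLM can also be used offline, without the server, through its Python API. A hedged sketch based on vLLM's documented `LLM` entrypoint (install `vllm` first, and adjust `tensor_parallel_size` to your hardware):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    quantization="awq",
    tensor_parallel_size=4,
    max_model_len=4096,
)

# llm.chat applies the model's chat template before generating
outputs = llm.chat(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```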

## Quantization Reproduction Information

> [!NOTE]
> In order to quantize Llama 3.3 70B Instruct using AutoAWQ, you will need to use an instance with at least enough CPU RAM to fit the whole model, i.e. ~140 GiB (roughly 70B parameters × 2 bytes in FP16), and NVIDIA GPU(s) with around 40 GiB of VRAM in total to quantize it.

In order to quantize Llama 3.3 70B Instruct, first install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

This quantization was produced using a single node with an Intel Xeon CPU E5-2699A v4 @ 2.40GHz, 256GB of RAM, and 2x NVIDIA RTX 3090 (24 GB VRAM each, for a total of 48 GB VRAM).

I initially adapted [hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4/blob/main/README.md), so many thanks to the Hugging Quants team, the AutoAWQ team, and the MIT HAN Lab for [LLM-AWQ](https://github.com/mit-han-lab/llm-awq). I'd also like to thank Professor David Dobolyi at the University of Colorado Boulder and Marc Sun at Hugging Face for their work, specifically [AutoAWQ PR#630](https://github.com/casper-hansen/AutoAWQ/pull/630).

Adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py) and [hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4/blob/main/README.md):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
import torch

# Empty Cache
torch.cuda.empty_cache()

# Memory Limits - Set this according to your hardware limits
max_memory = {0: "22GiB", 1: "22GiB", "cpu": "160GiB"}

model_path = "meta-llama/Llama-3.3-70B-Instruct"
quant_path = "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4"
quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM"
}

# Load model - Note: while this loads the layers into the CPU, the GPUs (and the VRAM) are still required for quantization! (Verified with nvidia-smi)
model = AutoAWQForCausalLM.from_pretrained(
    model_path,
    use_cache=False,
    max_memory=max_memory,
    device_map="cpu"
)

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize
model.quantize(
    tokenizer,
    quant_config=quant_config
)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
```
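
Once saved, the quantized checkpoint can be pushed to the Hugging Face Hub, e.g. with `huggingface-cli upload` (a hedged example; replace the repository ID with your own):

```bash
huggingface-cli upload <your-username>/Meta-Llama-3.3-70B-Instruct-AWQ-INT4 ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
```

Since `quant_path` doubles as the local output directory in the script above, the second argument points at that directory.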