---
license: apache-2.0
datasets:
- AIDC-AI/Ovis-dataset
library_name: transformers
tags:
- MLLM
pipeline_tag: image-text-to-text
language:
- en
---

# Ovis1.6-Llama3.2-3B-GPTQ-Int4
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/3IK823BZ8w-mz_QfeYkDn.png" width="30%"/>
</div>

## Introduction
[GitHub](https://github.com/AIDC-AI/Ovis) | [Demo](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Gemma2-9B) | [Paper](https://arxiv.org/abs/2405.20797)

We are excited to announce the open-sourcing of **Ovis1.6**, our latest multimodal large language model. Ovis is a novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings.

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/658a8a837959448ef5500ce5/TIlymOb86R6_Mez3bpmcB.png" width="100%" />
</div>

## Model
Built upon Ovis1.5, **Ovis1.6** further enhances high-resolution image processing, is trained on a larger, more diverse, and higher-quality dataset, and refines the training process with DPO following instruction tuning.

| Ovis MLLMs | ViT | LLM | Model Weights | Demo |
|:------------------|:-----------:|:------------------:|:---------------------------------------------------------------:|:----------------------------------------------------------------:|
| Ovis1.6-Gemma2-9B | Siglip-400M | Gemma2-9B-It | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B) | [Space](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Gemma2-9B) |
| Ovis1.6-Llama3.2-3B | Siglip-400M | Llama-3.2-3B-Instruct | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Llama3.2-3B) | [Space](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Llama3.2-3B) |
| Ovis1.6-Gemma2-9B-GPTQ-Int4 | Siglip-400M | Gemma2-9B-It | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B-GPTQ-Int4) | - |
| Ovis1.6-Llama3.2-3B-GPTQ-Int4 | Siglip-400M | Llama-3.2-3B-Instruct | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Llama3.2-3B-GPTQ-Int4) | - |

## Quantized Model
We quantized Ovis1.6 with AutoGPTQ. Follow these steps to run it.

### Installation
1. Run the following commands to set up a basic environment. Be sure to run with CUDA 12.1.
```bash
conda create -n <your_env_name> python=3.10
conda activate <your_env_name>
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install numpy==1.24.3 transformers==4.44.2 pillow==10.3.0 gekko pandas
```
2. Build AutoGPTQ: we customized AutoGPTQ to support Ovis model quantization, so you need to build the customized version from source.
```bash
git clone https://github.com/AIDC-AI/AutoGPTQ.git
cd AutoGPTQ
pip install -vvv --no-build-isolation -e .
```
Check [this](https://github.com/AutoGPTQ/AutoGPTQ/issues/194) first if you are building inside a Docker container; a quick import check for the finished build is sketched below.

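To confirm the customized build is in place, you can try importing the Ovis-specific model class used throughout the snippets below (a minimal check; it only verifies that the class is importable):
```python
# Minimal sanity check for the customized AutoGPTQ build:
# the Ovis-specific GPTQ model class should be importable.
from auto_gptq.modeling import OvisLlamaGPTQForCausalLM  # noqa: F401

print("Customized AutoGPTQ with Ovis support is available.")
```
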
### Usage
Below is a code snippet to run **Ovis1.6-Llama3.2-3B-GPTQ-Int4** with multimodal inputs. For additional usage instructions, including the inference wrapper and Gradio UI, please refer to [Ovis GitHub](https://github.com/AIDC-AI/Ovis?tab=readme-ov-file#inference).
```python
import torch
from PIL import Image
from transformers import GenerationConfig
from auto_gptq.modeling import OvisLlamaGPTQForCausalLM

# load model
load_device = "cuda:0"  # customize load device
model = OvisLlamaGPTQForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.6-Llama3.2-3B-GPTQ-Int4",
    device=load_device,
    trust_remote_code=True
)
model.model.generation_config = GenerationConfig.from_pretrained("AIDC-AI/Ovis1.6-Llama3.2-3B-GPTQ-Int4")
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()

# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'

# format conversation
prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
input_ids = input_ids.unsqueeze(0).to(device=model.device)
attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
    output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
    print(f'Output:\n{output}')
```
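If you work with high-resolution images, `preprocess_inputs` also accepts a `max_partition` argument (the same argument used in the quantization example further below) that controls how many visual partitions an image is split into. A minimal sketch:
```python
# Optional: allow up to 9 visual partitions for a high-resolution image
# (max_partition is the same argument used in the quantization example below).
prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image], max_partition=9)
```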

<details>
<summary>Batch inference</summary>

```python
batch_inputs = [
    ('example_image1.jpeg', 'Describe the content of this image.'),
    ('example_image2.jpeg', 'What is the equation in the image?')
]

batch_input_ids = []
batch_attention_mask = []
batch_pixel_values = []

for image_path, text in batch_inputs:
    image = Image.open(image_path)
    query = f'<image>\n{text}'
    prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
    attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
    input_ids = input_ids.unsqueeze(0).to(device=model.device)
    attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
    pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]
    batch_input_ids.append(input_ids.squeeze())
    batch_attention_mask.append(attention_mask.squeeze())
    batch_pixel_values.append(pixel_values)

# left-pad the batch: flip each sequence, right-pad, then flip back
pad_batch_input_ids = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_input_ids], batch_first=True, padding_value=0.0).flip(dims=[1])
pad_batch_input_ids = pad_batch_input_ids[:, -model.config.multimodal_max_length:]
pad_batch_attention_mask = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_attention_mask], batch_first=True, padding_value=False).flip(dims=[1])
pad_batch_attention_mask = pad_batch_attention_mask[:, -model.config.multimodal_max_length:]
pad_batch_pixel_values = [item for sublist in batch_pixel_values for item in sublist]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(pad_batch_input_ids, pixel_values=pad_batch_pixel_values, attention_mask=pad_batch_attention_mask, **gen_kwargs)

for i in range(len(batch_input_ids)):
    output = text_tokenizer.decode(output_ids[i], skip_special_tokens=True)
    print(f'Output_{i}:\n{output}')
```
</details>


## Quantize Your Own Ovis Model with AutoGPTQ
We provide a demonstration code snippet for quantizing your own fine-tuned **Ovis1.6-Llama3.2-3B** model. Before running it, **follow the installation steps above** to set up an environment for quantization.
```python
from typing import Dict, Sequence, Union, List
import copy
import logging

from auto_gptq import BaseQuantizeConfig
from auto_gptq.modeling import OvisLlamaGPTQForCausalLM
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image


# Specify paths and hyperparameters for quantization
model_path = "path/to/finetuned/model"
quantize_save_path = "path/to/save/quantized/model"
IGNORE_ID = -100
device_idx = 2  # customize this
torch.cuda.set_device(device_idx)
quantize_config = BaseQuantizeConfig(
    bits=4,  # 4 or 8
    group_size=128,
    damp_percent=0.1,
    desc_act=False,  # False significantly speeds up inference, but perplexity may be slightly worse
    static_groups=False,
    sym=True,
    true_sequential=True,
)


# Load model
model = OvisLlamaGPTQForCausalLM.from_pretrained(
    model_path,
    quantize_config,
    torch_dtype=torch.bfloat16,
    multimodal_max_length=2624,
    llm_attn_implementation='eager',
    trust_remote_code=True
).cuda()
print(f"Model Loaded!")


# prepare calibration samples
class CalibrationDataset(Dataset):
    """
    Dataset class for calibration. Initialize with the loaded Ovis model, and a sample list in the following format:
    data_list = [
        {
            "image": "path/to/image/of/this/sample",
            "conversations": [
                {
                    "from": "human",
                    "value": "<image>\n[Your sample prompt]"
                },
                {
                    "from": "gpt",
                    "value": "[Anything]"
                }
            ]
        },
        ...
    ]
    """
    def __init__(self, model, text_max_length, data_list: List[Dict]):
        self.data = data_list
        self.model = model
        self.visual_tokenizer = model.get_visual_tokenizer()
        self.text_max_length = text_max_length

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i: int) -> Dict[str, torch.Tensor]:
        sample = self.data[i]
        conversations = copy.deepcopy(sample["conversations"])
        images = [Image.open(sample['image'])]
        max_partition = 9

        prompt, input_ids, pixel_values, labels = self.model.preprocess_inputs(
            conversations,
            images,
            max_partition=max_partition,
            generation_preface=None,
            return_labels=True,
            propagate_exception=False
        )

        if pixel_values is None:
            pixel_values, _ = self.visual_tokenizer.mock_input()

        input_ids = input_ids[:self.text_max_length]
        labels = labels[:self.text_max_length]

        return dict(
            pixel_values=pixel_values,
            input_ids=input_ids,
            labels=labels
        )


class DataCollatorForMultimodalDatasetGPTQ:
    def __init__(self, text_tokenizer):
        self.text_tokenizer = text_tokenizer

    def __call__(self, instances: Sequence[Dict]) -> Dict[str, Union[torch.Tensor, List[torch.Tensor]]]:
        pixel_values, input_ids, labels = tuple([instance[key] for instance in instances]
                                                for key in ("pixel_values", "input_ids", "labels"))
        input_ids = torch.nn.utils.rnn.pad_sequence(
            input_ids,
            batch_first=True,
            padding_value=self.text_tokenizer.pad_token_id)
        attention_mask = torch.ne(input_ids, self.text_tokenizer.pad_token_id)
        labels = torch.nn.utils.rnn.pad_sequence(
            labels,
            batch_first=True,
            padding_value=IGNORE_ID)

        num_valid_label = torch.not_equal(labels, IGNORE_ID).sum().item()
        if num_valid_label == 0:
            logging.warning(
                f'[DataCollatorForMultimodalDatasetGPTQ] All labels are ignored, which may cause training instability\n{input_ids=}\n{attention_mask=}\n{labels=}')

        return dict(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels,
            pixel_values=pixel_values
        )


class MyDataLoader(DataLoader):
    def __len__(self):
        return len(self.dataset) // self.batch_size  # requires drop_last=True


# prepare your own calibration samples here
data_list = [
    {
        "image": "path/to/image/of/this/sample",
        "conversations": [
            {
                "from": "human",
                "value": "<image>\n[Your sample prompt]"
            },
            {
                "from": "gpt",
                "value": "[Anything]"
            }
        ]
    }
]
train_dataset = CalibrationDataset(model, text_max_length=832, data_list=data_list)
print(f"Dataset Loaded!")
print(f"Total length of the training set: {len(train_dataset)}")

train_loader = MyDataLoader(
    train_dataset,
    collate_fn=DataCollatorForMultimodalDatasetGPTQ(model.get_text_tokenizer()),
    shuffle=False,
    batch_size=4,
    drop_last=True,
    pin_memory=True,
    num_workers=8
)
print(f"Dataloader Loaded!")


# start quantizing
model.quantize(train_loader, cache_examples_on_gpu=False)
print(f"Model Quantized! Now Saving...")

model.save_quantized(quantize_save_path, use_safetensors=True)
print(f"ALL Done!")
```
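Once quantization finishes, the saved checkpoint can be loaded in the same way as the released GPTQ model in the Usage section. A minimal sketch, assuming `quantize_save_path` contains everything written by `save_quantized` (depending on your setup, you may also need to copy the tokenizer and config files from your fine-tuned model directory):
```python
from auto_gptq.modeling import OvisLlamaGPTQForCausalLM

# Load the freshly quantized checkpoint just like the released GPTQ model above.
quantized_model = OvisLlamaGPTQForCausalLM.from_pretrained(
    quantize_save_path,  # directory passed to save_quantized(...)
    device="cuda:0",
    trust_remote_code=True
)
text_tokenizer = quantized_model.get_text_tokenizer()
visual_tokenizer = quantized_model.get_visual_tokenizer()
```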

## Performance
Here we report the performance of Ovis1.6-Llama3.2-3B-GPTQ-Int4. The results were obtained with VLMEvalKit.

Benchmark:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/645cb4b4a03f3ebb0bde20e0/Mjf7-rZ8eRk-58G9716l9.png)

VRAM usage:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/645cb4b4a03f3ebb0bde20e0/QcL3X_5-EvyD95-yP8MJ8.png)

## Citation
If you find Ovis useful, please cite the paper:
```bibtex
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```

## License
This project is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt) (SPDX-License-Identifier: Apache-2.0).

## Disclaimer
We used compliance-checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or that any generated content is improper, please contact us, and we will promptly address the matter.