g-h-chen committed
Commit 470fc69 • 1 Parent(s): 21cc538

Update README.md

Files changed (1)
  1. README.md +122 -60
README.md CHANGED
@@ -7,67 +7,129 @@ language:
  pipeline_tag: text-generation
  ---
 
- Quick start:
-
- ```shell
- from transformers import AutoModelForCausalLM
- from transformers import AutoTokenizer
- import torch
- import pdb
-
- dir = "FreedomIntelligence/ALLaVA-3B-Longer"
-
- device = 'cuda'
- model = AutoModelForCausalLM.from_pretrained(dir, trust_remote_code=True, device_map=device, torch_dtype=torch.bfloat16)
- tokenizer = AutoTokenizer.from_pretrained(dir)
- model.tokenizer = tokenizer
-
- gen_kwargs = {
- 'min_new_tokens': 20,
- 'max_new_tokens': 100,
- 'do_sample': False,
- 'eos_token_id': tokenizer.eos_token_id # this is a must since transformers ~4.37
- }
-
- #################################################################################
- # first round
- #################################################################################
- response, history = model.chat(
- texts='What is in the image?',
- images=['https://cdn-icons-png.flaticon.com/256/6028/6028690.png'],
- return_history=True,
- **gen_kwargs
- )
- print('response:')
- print(response)
- print('history:')
- print(history)
- # response:
- # The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.
-
- # history:
- # [['What is in the image?', 'The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.']]
-
- #################################################################################
- # second round
- #################################################################################
- response, history = model.chat(
- texts='Are you sure?',
- images=['https://cdn-icons-png.flaticon.com/256/6028/6028690.png'], # images need to be passed again in multi-round conversations
- history=history,
- return_history=True,
- **gen_kwargs
- )
-
- print('response:')
- print(response)
- print('history:')
- print(history)
- # response:
- # Yes, I'm sure. The image shows a large, stylized "HI!" in a bright pink color with a yellow outline, placed in a speech bubble shape.
-
- # history:
- # [['What is in the image?', 'The image contains a large, stylized "HI!" in a bright pink color with a yellow outline. The "HI!" is in a speech bubble shape.'], ['Are you sure?', 'Yes, I\'m sure. The image shows a large, stylized "HI!" in a bright pink color with a yellow outline, placed in a speech bubble shape.']]
+ # ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model
+
+
+
+ <p align="center">
+ ⚡ALLaVA is a project that provides a large-scale GPT4V-synthesized dataset for training LVLMs.⚡
+ </p>
+
+ <!-- <p align="center">
+
+ ![Python 3.10](https://img.shields.io/badge/Python-3.10-lightblue) ![PyTorch 2.1.1](https://img.shields.io/badge/PyTorch-2.1.1-lightblue) ![transformers](https://img.shields.io/badge/transformers-4.37.0-lightblue)
+ </p> -->
+
+
+
+ <p align="center">
+ 📃 <a href="https://arxiv.org/abs/2402.11684" target="_blank">Paper</a> • 🌐 <a href="https://allava.freedomai.cn/#/" target="_blank">Demo</a> • 👨🏻‍💻 <a href="https://github.com/FreedomIntelligence/ALLaVA" target="_blank">GitHub</a>
+ </p>
+ <p align="center">
+ 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V" target="_blank">ALLaVA-4V Dataset</a>
+ </p>
+
+ <p align="center">
+ 🤗 <a href="https://huggingface.co/FreedomIntelligence/ALLaVA-3B-Longer" target="_blank">ALLaVA-3B-Longer</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/ALLaVA-3B" target="_blank">ALLaVA-3B</a>
+ </p>
+
+ <!-- <p align="center">
+ 📃 <a href="https://arxiv.org/abs/2402.11684" target="_blank">Paper</a> • 🌐 <a href="https://allava.freedomai.cn/#/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V" target="_blank">ALLaVA-4V Dataset</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/ALLaVA-3B-Longer" target="_blank">ALLaVA-3B-Longer</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/ALLaVA-3B" target="_blank">ALLaVA-3B</a>
+ <br> <a href="https://github.com/FreedomIntelligence/CMB/blob/main/README_zh.md">中文</a> | <a href="https://github.com/FreedomIntelligence/CMB/blob/main/README.md">English</a>
+ </p> -->
+
+ ## Benchmark Result
+
+ Our models [**ALLaVA-3B-Longer**](https://huggingface.co/FreedomIntelligence/ALLaVA-3B-Longer) and [**ALLaVA-3B**](https://huggingface.co/FreedomIntelligence/ALLaVA-3B) achieve competitive results on 12 benchmarks. Bold numbers denote the SOTA performance among 3B-scale models.
+
+ | Model | Backbone | Vicuna-80 | MMB | SEEDBench-v1 (img) | MM-Vet | MMMU (val) | MME | TextVQA | GQA | EMT (CIFAR10) | MLLM-Bench | TouchStone | LLaVA (In-the-Wild) |
+ |-------|----------|-----------|-----|-------------|--------|----------|-----|------|-----|---------|----|----|--------|
+ | Qwen-VL-Chat | Qwen-7B | - | 60.6 | 65.4 | - | 35.9 | 1487.5 | 61.5 | 57.5 | - | 6.2 | 711.6 | - |
+ | LLaVA-v1.5-7B | Vicuna-7B | - | 64.3 | - | 31.1 | - | 1510.7 | 58.2 | 62.0 | - | - | - | 65.4 |
+ | LLaVA-v1.5-13B | Vicuna-13B | 22.50 | 67.7 | 68.2 | 35.4 | 36.4 | 1531.3 | 61.3 | 63.3 | 85.0 | 7.4 | 637.7 | 70.7 |
+ | ShareGPT4V-7B | Vicuna-7B | - | 68.8 | 69.7 | 37.6 | - | 1943.8 | 60.4 | 63.3 | - | - | - | 72.6 |
+ | TinyGPT-V | Phi2-2.7B | - | - | - | - | - | - | - | 33.6 | - | - | - | - |
+ | MobileVLM | MobileLLaMA-2.7B | - | 59.6 | - | - | - | 1288.9 | 47.5 | - | - | - | - | - |
+ | LLaVA-Phi | Phi2-2.7B | - | 59.8 | - | 28.9 | - | 1335.1 | 48.6 | - | - | - | - | - |
+ | **ALLaVA-3B** | Phi2-2.7B | 48.8 | 64.0 | 65.2 | 32.2 | **35.3** | **1623.2** | 49.5 | 48.8 | **90.2** | 6.7 | 632.0 | 69.4 |
+ | **ALLaVA-3B-Longer** | Phi2-2.7B | **52.5** | **64.6** | **65.6** | **35.5** | 33.2 | 1564.6 | **50.3** | **50.0** | 85.9 | **8.8** | **636.5** | **71.7** |
+
+ Detailed information on each benchmark is given in Table 4 of our [technical report](https://arxiv.org/pdf/2402.11684.pdf).
+
+
+
+
+ ## 🏭 Inference
+
+ ### Load from 🤗 (Recommended)
+ See the [example script](https://github.com/FreedomIntelligence/ALLaVA/blob/main/allava/serve/huggingface_inference.py); a minimal sketch is also shown below.
+
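+ For quick reference, here is a minimal sketch of loading the model with 🤗 Transformers and calling its `chat` helper (the helper lives in the model's remote code; the example script above is the authoritative version):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ model_dir = "FreedomIntelligence/ALLaVA-3B-Longer"
+
+ # trust_remote_code is required: chat() is implemented in the model repo
+ model = AutoModelForCausalLM.from_pretrained(
+     model_dir, trust_remote_code=True, device_map="cuda", torch_dtype=torch.bfloat16
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_dir)
+ model.tokenizer = tokenizer
+
+ gen_kwargs = {
+     "max_new_tokens": 100,
+     "do_sample": False,
+     "eos_token_id": tokenizer.eos_token_id,  # required since transformers ~4.37
+ }
+
+ # single round; for multi-round chat, pass history=... and the images again
+ response, history = model.chat(
+     texts="What is in the image?",
+     images=["https://cdn-icons-png.flaticon.com/256/6028/6028690.png"],
+     return_history=True,
+     **gen_kwargs,
+ )
+ print(response)
+ ```
+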
+ ### CLI
+ See [here](https://github.com/FreedomIntelligence/ALLaVA/tree/main?tab=readme-ov-file#cli) for a CLI code snippet.
+
+
+
+ ## 🏋️‍♂️ Training
+
+ ### Data
+ <div align=center>
+ <img src="training_datasets_by_stage.jpg" width="640" alt="training_datasets" align=center />
+ </div>
+
+ As shown in the table above, ALLaVA-3B uses 1M and 1.5M samples for the PT and FT stages, respectively.
+ ALLaVA-3B-Longer is trained for one more epoch in the FT stage (i.e., 3M samples in total).
+
+ ### Code
+ The training code is largely based on [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA).
+ We wholeheartedly express our gratitude for their invaluable contributions to open-sourcing LVLMs.
+
+ ### Cost
+ We train our models on 8×A800 GPUs.
+ [ALLaVA-3B-Longer](https://huggingface.co/FreedomIntelligence/ALLaVA-3B-Longer) takes 8.3h for PT and 21.3h for FT.
+ [ALLaVA-3B](https://huggingface.co/FreedomIntelligence/ALLaVA-3B) takes 8.3h for PT and 10.6h for FT.
+ These two models share the same PT procedure.
+
+
+ ### Hyperparameters
+
+ | Global Batch Size | ZeRO Stage | Optimizer | Max LR | Min LR | Scheduler | Max Length | Weight Decay |
+ | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
+ | 256 (PT) / 128 (FT) | 1 | AdamW | 2e-5 | 2e-6 | CosineAnnealingWarmRestarts | 2048 | 0 |
+
+ The LM backbone and the projector are trainable, while the vision encoder is kept frozen.
+ **The trainability of each module is the same in both stages.**
+
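+ For illustration only, the table roughly corresponds to the PyTorch sketch below. The `vision_tower` substring used for freezing and the `T_0` restart period are assumptions made for this sketch rather than values from our training code, and the global batch size / ZeRO stage are handled by the DeepSpeed launcher config:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ import torch
+ from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "FreedomIntelligence/ALLaVA-3B-Longer", trust_remote_code=True, torch_dtype=torch.bfloat16
+ )
+
+ # Freeze the vision encoder; keep the LM backbone and projector trainable.
+ for name, param in model.named_parameters():
+     param.requires_grad = "vision_tower" not in name  # hypothetical module-name match
+
+ optimizer = torch.optim.AdamW(
+     (p for p in model.parameters() if p.requires_grad),
+     lr=2e-5,          # max LR from the hyperparameter table
+     weight_decay=0.0,
+ )
+ # cosine schedule from 2e-5 down to eta_min=2e-6; T_0 (steps per cycle) is a placeholder
+ scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=1000, eta_min=2e-6)
+ ```
+
+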
+ ## 📚 ALLaVA-4V Data
+
+ The majority of the training data is [ALLaVA-4V](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V). See [here](https://github.com/FreedomIntelligence/ALLaVA/tree/main?tab=readme-ov-file#data-preparation) to prepare it for training.
+
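+ As one (hypothetical) convenience for grabbing a local copy of the raw files before following the preparation steps linked above, the dataset repo can be mirrored with `huggingface_hub`; this only downloads the data and is not the preparation pipeline itself:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the raw ALLaVA-4V dataset files into ./ALLaVA-4V;
+ # follow the data-preparation link above to convert them for training.
+ snapshot_download(
+     repo_id="FreedomIntelligence/ALLaVA-4V",
+     repo_type="dataset",
+     local_dir="./ALLaVA-4V",
+ )
+ ```
+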
+
+ ## 🙌 Contributors
+
+ - Project Leader: [Guiming Hardy Chen](https://g-h-chen.github.io/)
+
+ - Data: Shunian Chen, [Junying Chen](https://jymchen.github.io/), Xiangbo Wu
+
+ - Evaluation: [Ruifei Zhang](https://scholar.google.com/citations?user=W4zOhmEAAAAJ&hl=zh-CN)
+
+ - Deployment: Xiangbo Wu, Zhiyi Zhang
+
+ - Advising: [Zhihong Chen](https://zhjohnchan.github.io/), [Benyou Wang](https://wabyking.github.io/old.html)
+
+ - Others: Jianquan Li, [Xiang Wan](https://scholar.google.com/citations?user=e3_kWigAAAAJ&hl=zh-CN)
+
+
+ ## 📝 Citation
+ If you find our data useful, please consider citing our work! We are FreedomIntelligence from the [Shenzhen Research Institute of Big Data](http://sribd.cn/en) and [The Chinese University of Hong Kong, Shenzhen](https://sds.cuhk.edu.cn/en).
+ ```
+ @article{chen2024allava,
+   title={ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model},
+   author={Chen, Guiming Hardy and Chen, Shunian and Zhang, Ruifei and Chen, Junying and Wu, Xiangbo and Zhang, Zhiyi and Chen, Zhihong and Li, Jianquan and Wan, Xiang and Wang, Benyou},
+   journal={arXiv preprint arXiv:2402.11684},
+   year={2024}
+ }
  ```