Image-Text-to-Text · English
weizhiwang committed · Commit 9ab4fb0 · verified · 1 Parent(s): 7cbbadb

Update README.md

Files changed (1):
  1. README.md +36 -3
README.md CHANGED
@@ -1,11 +1,44 @@
 ---
-license: cc-by-nc-4.0
-library_name: transformers
+base_model:
+- Qwen/Qwen2.5-1.5B-Instruct
+- google/siglip-so400m-patch14-384
+datasets:
+- weizhiwang/Open-Qwen2VL-Data
+- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
+language:
+- en
+license: cc
 pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
+
 This repository contains the model described in [Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources](https://huggingface.co/papers/2504.00595).
 
 Project page: https://victorwz.github.io/Open-Qwen2VL
 
-For code and usage instructions, please refer to the official codebase: https://github.com/Victorwz/Open-Qwen2VL
+For code and usage instructions, please refer to the official codebase: https://github.com/Victorwz/Open-Qwen2VL
+
+
+# Model Card for Open-Qwen2VL
+
+Open-Qwen2VL-base is a pre-trained base multimodal model that takes images and text as input and produces text as output. This model is described in the paper [Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources](https://huggingface.co/papers/2504.00595). The code is available at [https://github.com/Victorwz/Open-Qwen2VL](https://github.com/Victorwz/Open-Qwen2VL).
+
+## Updates
+- [4/1/2025] The codebase, model, data, and paper are released.
+
+<!-- ## Model Details -->
+
+## How to Use
+
+The base model is released for further fine-tuning on public or customized SFT data; it is not suitable for direct task completion without such fine-tuning.
+
+## Citation
+```bibtex
+@article{Open-Qwen2VL,
+  title={Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources},
+  author={Wang, Weizhi and Tian, Yu and Yang, Linjie and Wang, Heng and Yan, Xifeng},
+  journal={arXiv preprint arXiv:2504.00595},
+  year={2025}
+}
+```
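
Since the updated front matter tags the checkpoint with `library_name: transformers` and `pipeline_tag: image-text-to-text`, a generic loading path along the lines below may work. This is a minimal sketch, not the documented route: the repo id `weizhiwang/Open-Qwen2VL-base`, the `AutoModelForVision2Seq` entry point, and `trust_remote_code=True` are all assumptions, and the codebase at https://github.com/Victorwz/Open-Qwen2VL remains the authoritative reference for loading and fine-tuning.

```python
# Hypothetical usage sketch for the base checkpoint. Everything below assumes
# the model exposes a standard transformers vision-to-seq interface; if it
# does not, follow the official Open-Qwen2VL codebase instead.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "weizhiwang/Open-Qwen2VL-base"  # assumed repo id; check the Hub page

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)

# Standard test image used throughout the transformers docs.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="A photo of", return_tensors="pt")

# This is a pre-trained base model, so expect caption-style continuations
# rather than instruction following until it is fine-tuned on SFT data.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```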