weizhiwang committed · Commit f075c7d · verified · 1 Parent(s): 9ab4fb0

Update README.md

Files changed (1): README.md (+1 −9)
README.md CHANGED
@@ -12,15 +12,7 @@ pipeline_tag: image-text-to-text
 library_name: transformers
 ---
 
-
- This repository contains the model described in [Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources](https://huggingface.co/papers/2504.00595).
-
- Project page: https://victorwz.github.io/Open-Qwen2VL
-
- For code and usage instructions, please refer to the official codebase: https://github.com/Victorwz/Open-Qwen2VL
-
-
- # Model Card for Open-Qwen2VL
+ # Model Card for Open-Qwen2VL-base
 
 Open-Qwen2VL-base is a pre-trained base multimodal model that takes images and text as input and produces text as output. This model is described in the paper [Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources](https://huggingface.co/papers/2504.00595). The code is available at [https://github.com/Victorwz/Open-Qwen2VL](https://github.com/Victorwz/Open-Qwen2VL).
 