wzk1015 committed
Commit 82e212b
1 Parent(s): a0ddf17

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
```diff
@@ -16,7 +16,7 @@ tags:
 
 # Mono-InternVL-2B
 
-[\[⭐️Project Page\]](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) [\[📜 Mono-InternVL Paper\]](https://arxiv.org/abs/2410.TODO) [\[🚀 Quick Start\]](#quick-start)
+[\[⭐️Project Page\]](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) [\[📜 Mono-InternVL Paper\]](https://arxiv.org/abs/2410.08202) [\[🚀 Quick Start\]](#quick-start)
 
 [切换至中文版](#简介)
 
@@ -38,7 +38,7 @@ Mono-InternVL achieves superior performance compared to state-of-the-art MLLM Mi
 
 
 
-This repository contains the instruction-tuned Mono-InternVL-2B model. It is built upon [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b). For more details, please refer to our [paper](https://arxiv.org/abs/2410.TODO).
+This repository contains the instruction-tuned Mono-InternVL-2B model. It is built upon [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b). For more details, please refer to our [paper](https://arxiv.org/abs/2410.08202).
 
 
 
@@ -222,7 +222,7 @@ If you find this project useful in your research, please consider citing:
 @article{luo2024mono,
   title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
   author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
-  journal={arXiv preprint arXiv:2410.TODO},
+  journal={arXiv preprint arXiv:2410.08202},
   year={2024}
 }
 
@@ -252,7 +252,7 @@ If you find this project useful in your research, please consider citing:
 
 Mono-InternVL在性能上优于当前最先进的MLLM Mini-InternVL-2B-1.5,并且显著超越了其他单体化MLLMs,如上方的[雷达图](#radar)所示。同时,它的部署效率也得到了提升,首个token的延迟降低了最多达67%。
 
-本仓库包含了经过指令微调的Mono-InternVL-2B模型,它是基于[internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b)搭建的。更多详细信息,请参阅我们的[论文](TODO)。
+本仓库包含了经过指令微调的Mono-InternVL-2B模型,它是基于[internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b)搭建的。更多详细信息,请参阅我们的[论文](https://arxiv.org/abs/2410.08202)。
 
 
 
@@ -310,7 +310,7 @@ Mono-InternVL在性能上优于当前最先进的MLLM Mini-InternVL-2B-1.5,并
 @article{luo2024mono,
   title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
   author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
-  journal={arXiv preprint arXiv:2410.TODO},
+  journal={arXiv preprint arXiv:2410.08202},
   year={2024}
 }
 
```
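Since the commit corrects the README's paper and Quick Start pointers, a brief illustration of what such a Quick Start typically involves may be useful context. The sketch below is a minimal, unverified example of loading a Hub checkpoint that ships its own modeling code; the repo id `OpenGVLab/Mono-InternVL-2B` and the `model.chat(...)` entry point are assumptions borrowed from the pattern other InternVL releases follow, not code taken from this README.

```python
# Minimal sketch of a Quick Start for this kind of repository.
# ASSUMPTIONS (not taken from this README): the Hub id
# "OpenGVLab/Mono-InternVL-2B" and the model.chat(...) entry point,
# which mirror the pattern used by other InternVL releases.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/Mono-InternVL-2B"  # assumed repo id

# trust_remote_code=True is required because the model class is defined
# inside the repository itself rather than in the transformers library.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # half precision so the 2B model fits one GPU
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# Text-only query, so pixel_values is None. The signature follows the
# chat() helper of other InternVL models and is an assumption here.
question = "Hello, who are you?"
response = model.chat(tokenizer, None, question, dict(max_new_tokens=64))
print(response)
```

If the repository's actual Quick Start differs (for example in the generation-config format or the image-preprocessing helpers), the README's version takes precedence; the point of the sketch is only the `trust_remote_code` loading pattern that the corrected Quick Start link refers to.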