---
license: apache-2.0
base_model:
- lmsys/vicuna-7b-v1.5
- openai/clip-vit-large-patch14-336
pipeline_tag: image-text-to-text
---

# p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay

Please refer to [this repository](https://github.com/MCG-NJU/p-MoD) for our code.

This model is pretrained on [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) image caption data, and instruction-tuned on [779K LLaVA-NeXT instruction data](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Data).
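
The weights and both datasets are hosted on the Hugging Face Hub, so they can be fetched programmatically. The sketch below is a minimal illustration rather than official p-MoD tooling: the model repo id is a placeholder for this model card's own id, the `train` split name is assumed, and actual inference should go through the code in the repository linked above, since the architecture is not a stock `transformers` class.

```python
# Minimal sketch: fetch the model weights and peek at the instruction-tuning
# data. Requires `huggingface_hub` and `datasets` to be installed.
from datasets import load_dataset
from huggingface_hub import snapshot_download

# NOTE: placeholder repo id -- substitute the actual id of this model card.
local_dir = snapshot_download(repo_id="MCG-NJU/p-MoD-7B")
print(f"Weights downloaded to: {local_dir}")

# Stream one sample from the 779K LLaVA-NeXT instruction data referenced above
# (streaming avoids downloading the full dataset; the `train` split is assumed).
sample = next(iter(load_dataset("lmms-lab/LLaVA-NeXT-Data", split="train", streaming=True)))
print(sample.keys())
```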
## Citation

If you find our model helpful for your research and applications, please cite our paper:

```bibtex
@article{zhang2024pmod,
  title={p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay},
  author={Zhang, Jun and Meng, Desen and Qi, Ji and Huang, Zhenpeng and Wu, Tao and Wang, Limin},
  journal={arXiv preprint arXiv:2412.04449},
  year={2024}
}
```

## License

Llama 2 is licensed under the LLAMA 2 Community License,