---
license: apache-2.0
base_model:
- lmsys/vicuna-7b-v1.5
- openai/clip-vit-large-patch14-336
pipeline_tag: image-text-to-text
---
# p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
This is the official model checkpoint of [p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay](https://arxiv.org/abs/2412.04449).
Please refer to [this repository](https://github.com/MCG-NJU/p-MoD) for our code.
## Model Description
This model is pretrained on [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) image caption data, and instruction-tuned on [779K LLaVA-NeXT instruction data](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Data).
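Below is a minimal, untested sketch of how this checkpoint might be loaded through a LLaVA-style builder, since the p-MoD codebase is built on the LLaVA stack. The module paths, the `load_pretrained_model` helper and its arguments, and the model path string are assumptions for illustration; please follow the instructions in the linked repository for the supported workflow.

```python
# Sketch (assumptions): loading the p-MoD-LLaVA-NeXT-7B checkpoint via a
# LLaVA-style builder as provided by the p-MoD code repository. Exact module
# paths and argument names may differ; consult the repository README.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "p-MoD-LLaVA-NeXT-7B"  # hypothetical local or Hub path to this checkpoint

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```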
## Citation
If you find our model helpful for your research and applications, please cite our paper:
```bibtex
@article{zhang2024pmod,
  title={p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay},
  author={Zhang, Jun and Meng, Desen and Qi, Ji and Huang, Zhenpeng and Wu, Tao and Wang, Limin},
  journal={arXiv preprint arXiv:2412.04449},
  year={2024}
}
```
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.