---
license: apache-2.0
base_model:
- lmsys/vicuna-7b-v1.5
- openai/clip-vit-large-patch14-336
pipeline_tag: image-text-to-text
---

# p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
This is the official model checkpoint of [p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay](https://arxiv.org/abs/2412.04449).

Please refer to [this repository](https://github.com/MCG-NJU/p-MoD) for our code.
## Model Description

This model is pretrained on [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) image caption data, and instruction-tuned on [llava-v1_5-mix-665k](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json).
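
For reference, a minimal loading sketch is given below. It assumes the p-MoD codebase keeps the LLaVA-style `load_pretrained_model` interface it builds on; the checkpoint path is a placeholder and the exact module paths may differ, so the inference scripts in the repository linked above are the authoritative reference.

```python
# Hypothetical loading sketch (assumption: the p-MoD repo exposes a LLaVA-style loader).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Placeholder: replace with the local path (or Hub repo id) of this checkpoint.
model_path = "/path/to/p-MoD-checkpoint"

# Returns the tokenizer, the multimodal model, the CLIP image processor,
# and the maximum context length, assuming a full (non-LoRA) checkpoint.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```

For end-to-end image-text inference (conversation templates, image preprocessing, generation), follow the evaluation scripts in the repository.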
## Citation

TBD
## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.