---
datasets:
- MBZUAI/VideoInstruct-100K
- Share14/ShareGemini
- xjtupanda/T2Vid-Synthetic
base_model:
- HuggingFaceM4/Idefics3-8B-Llama3
pipeline_tag: video-text-to-text
tags:
- Idefics3
- finetune
- MLLM
license: apache-2.0
language:
- en
library_name: transformers
---
<h1>T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs</h1>
<p>
πŸ’» <a href="https://github.com/xjtupanda/T2Vid">GitHub</a>&nbsp&nbsp | &nbsp&nbsp πŸ“‘ <a href="https://arxiv.org/pdf/2411.19951">Paper</a> &nbsp&nbsp </a>
</p>
## Model Summary
* This model is part of the project [T2Vid](https://github.com/xjtupanda/T2Vid).
* The video-LLM is fine-tuned from the image-LLM [Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3).
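Since the model is a `transformers` Idefics3 fine-tune that consumes sampled video frames as a multi-image input, a minimal inference sketch might look like the following. The repo id, frame-sampling scheme, frame count, and prompt format are illustrative assumptions, not confirmed by this card; the model-loading code follows the standard `transformers` Idefics3 API.

```python
# Sketch: uniform frame sampling + multi-image prompting for Idefics3.
# Everything marked "hypothetical" below is an assumption, not from this card.

def sample_frame_indices(total_frames: int, num_frames: int) -> list[int]:
    """Uniformly pick num_frames indices spanning a video of total_frames."""
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = total_frames / num_frames
    # Take the midpoint of each of num_frames equal segments.
    return [int(step * i + step / 2) for i in range(num_frames)]

def run_inference(frames, question: str) -> str:
    """Feed sampled frames (a list of PIL images) to the model as a
    multi-image prompt. The model id is a hypothetical placeholder."""
    from transformers import AutoProcessor, AutoModelForVision2Seq

    model_id = "xjtupanda/T2Vid-Idefics3"  # hypothetical repo id
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

    # One <image> slot per sampled frame, followed by the text question.
    messages = [{
        "role": "user",
        "content": [{"type": "image"}] * len(frames)
                   + [{"type": "text", "text": question}],
    }]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=frames, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return processor.batch_decode(out, skip_special_tokens=True)[0]
```

For a 100-frame clip sampled down to 4 frames, `sample_frame_indices(100, 4)` picks the segment midpoints `[12, 37, 62, 87]`.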
## License
#### Model License
* The model is built on top of the pre-trained model: [HuggingFaceM4/Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3). We release the fine-tuned Idefics3 checkpoints under the Apache 2.0 license.
* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
#### Statement
* As an LLM, Idefics3-8B-Llama3 generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by Idefics3-8B-Llama3 does not represent the views or positions of the model developers.
* We will not be liable for any problems arising from the use of the Idefics3-8B-Llama3 open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.
## Training dataset
- 10K video instruction samples from Video-ChatGPT
- 10K video caption samples from ShareGemini
- 10K synthetic samples derived from long-text instruction data
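The 30K training mixture above can be sketched as drawing 10K samples from each of the three sources and shuffling them together. Uniform random sampling and the function below are illustrative assumptions; the actual selection procedure is described in the paper.

```python
import random

def build_mixture(video_instruct, share_gemini, synthetic,
                  per_source=10_000, seed=0):
    """Sketch of the fine-tuning mixture: per_source samples from each of
    the three dataset lists, shuffled. Sampling scheme is an assumption."""
    rng = random.Random(seed)
    mix = (rng.sample(video_instruct, min(per_source, len(video_instruct)))
           + rng.sample(share_gemini, min(per_source, len(share_gemini)))
           + rng.sample(synthetic, min(per_source, len(synthetic))))
    rng.shuffle(mix)
    return mix
```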