---
datasets:
  - MBZUAI/VideoInstruct-100K
  - Share14/ShareGemini
  - xjtupanda/T2Vid-Synthetic
base_model:
  - HuggingFaceM4/Idefics3-8B-Llama3
pipeline_tag: video-text-to-text
tags:
  - Idefics3
  - finetune
  - MLLM
license: apache-2.0
language:
  - en
library_name: transformers
---

# T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs

💻 GitHub | 📑 Paper

## Model Summary
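
T2Vid is a video understanding model fine-tuned from HuggingFaceM4/Idefics3-8B-Llama3 for the video-text-to-text task. As the title suggests, the key idea is to translate long-text instruction data into synthetic multi-image samples and mix them with regular video instruction and caption data (see the Training dataset section below for the exact mixture).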

## License

### Model License

- The model is built on top of the pre-trained model HuggingFaceM4/Idefics3-8B-Llama3. We release the fine-tuned Idefics3 checkpoints under the Apache 2.0 license.
- The code in this repo is released under the Apache-2.0 License.

### Statement

- As an LLM, Idefics3-8B-Llama3 generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by Idefics3-8B-Llama3 does not represent the views and positions of the model developers.
- We will not be liable for any problems arising from the use of the open-source Idefics3-8B-Llama3 model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misguidance, misuse, or improper dissemination of the model.

## Training dataset

- 10K video instruction data from Video-ChatGPT
- 10K video caption data from ShareGemini
- 10K synthetic samples derived from long-text instruction data
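
## Usage

Below is a minimal inference sketch with 🤗 Transformers. It assumes this checkpoint keeps the Idefics3 architecture and processor of the base model, so a video is passed as a sequence of sampled frames; the model id, frame files, and generation settings are placeholders to adapt.

```python
# Minimal inference sketch (assumption: the checkpoint loads like the base
# Idefics3-8B-Llama3 via transformers' AutoModelForVision2Seq).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "xjtupanda/T2Vid-Idefics3"  # placeholder: replace with this repo's id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A video is fed as uniformly sampled frames: one image placeholder per frame.
frames = [Image.open(f"frame_{i}.jpg") for i in range(8)]  # pre-extracted frames
messages = [
    {
        "role": "user",
        "content": [{"type": "image"}] * len(frames)
        + [{"type": "text", "text": "Describe what happens in this video."}],
    }
]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=frames, return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

The number of sampled frames is a trade-off: more frames give better temporal coverage but consume more of the model's context.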