---
license: llama2
pipeline_tag: image-text-to-text
language:
- en
---
# LLaVA-NeXT-Video Model Card
Below is the model card of the LLaVA-NeXT-Video 7B model, copied from the original LLaVA model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b).
Also check out the Google Colab demo to run LLaVA on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing)
Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit)
## Model details
**Model type:**
LLaVA-NeXT-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
Base LLM: lmsys/vicuna-7b-v1.5
**Model date:**
LLaVA-NeXT-Video-7B was trained in April 2024.
**Paper or resources for more information:**
https://github.com/LLaVA-VL/LLaVA-NeXT
## How to use the model
First, make sure to have `transformers >= 4.42.0`.
The model supports multi-visual and multi-prompt generation, meaning you can pass multiple images/videos in your prompt. Make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` or `<video>` to the location where you want to query images/videos:
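Below is a minimal sketch of single-video inference with `transformers`, assuming the `llava-hf/LLaVA-NeXT-Video-7B-hf` checkpoint id, a CUDA device, and a local video file (`my_video.mp4` is a placeholder path):

```python
# Minimal single-video inference sketch. Assumes the checkpoint id
# "llava-hf/LLaVA-NeXT-Video-7B-hf" and a local "my_video.mp4"; adapt as needed.
import av
import numpy as np
import torch
from transformers import LlavaNextVideoProcessor, LlavaNextVideoForConditionalGeneration

model_id = "llava-hf/LLaVA-NeXT-Video-7B-hf"

model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")
processor = LlavaNextVideoProcessor.from_pretrained(model_id)

def read_video_pyav(path, num_frames=8):
    """Uniformly sample `num_frames` RGB frames from a video with PyAV."""
    container = av.open(path)
    total = container.streams.video[0].frames
    indices = set(np.linspace(0, total - 1, num_frames).astype(int))
    frames = []
    for i, frame in enumerate(container.decode(video=0)):
        if i in indices:
            frames.append(frame.to_ndarray(format="rgb24"))
    return np.stack(frames)

# The prompt follows the template above; the <video> token marks where
# the sampled frames are inserted.
prompt = "USER: <video>\nWhy is this video funny? ASSISTANT:"
clip = read_video_pyav("my_video.mp4", num_frames=8)  # hypothetical local file

inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```

For multi-prompt generation, pass a list of prompts (each containing its own `<image>` or `<video>` token) together with the matching list of visuals to the processor.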