---
tags:
  - vision
  - image-text-to-text
---

NOTE: this model can only be used once https://github.com/huggingface/transformers/pull/29012 is merged

# LLaVa-Next (also known as LLaVa-1.6)

The LLaVA-NeXT model was proposed in LLaVA-NeXT: Improved reasoning, OCR, and world knowledge by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, Yong Jae Lee. LLaVa-NeXT (also called LLaVa-1.6) builds on LLaVa by increasing the input image resolution and training on an improved visual instruction tuning dataset, which improves OCR and common-sense reasoning.

Disclaimer: The team releasing LLaVa-NeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

LLaVa combines a pre-trained large language model with a pre-trained vision encoder for multimodal chatbot use cases.


## Intended uses & limitations

You can use the raw model for tasks like image captioning, visual question answering, and multimodal chatbot use cases. See the model hub to look for other versions for a task that interests you.

## How to use

We refer to the documentation for detailed usage instructions.
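As a quick reference, below is a minimal inference sketch using the LLaVa-NeXT classes in `transformers` (available once the PR noted above is merged). The checkpoint name, prompt template, and image URL are illustrative placeholders, not taken from this model card; adjust them to the actual repository id and the prompt format expected by the underlying language model.

```python
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Illustrative checkpoint id; replace with the repository id of this model.
model_id = "llava-hf/llava-v1.6-mistral-7b-hf"

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The prompt template depends on the underlying language model; this one
# assumes a Mistral-style instruction format used by some LLaVa-NeXT checkpoints.
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```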

## BibTeX entry and citation info

```bibtex
@misc{liu2023improved,
      title={Improved Baselines with Visual Instruction Tuning},
      author={Haotian Liu and Chunyuan Li and Yuheng Li and Yong Jae Lee},
      year={2023},
      eprint={2310.03744},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```