A bilingual English/Chinese vision-language model (VLM) built on Llama2-7B-Chat and trained via LoRA for https://arxiv.org/abs/2406.11665.

The Chinese half of the training data used for multimodal alignment and visual instruction tuning is sampled from here.

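A minimal sketch of fetching the checkpoint from the Hugging Face Hub, using the repo ID `amitha/mllava-llama2-en-zh` named on this card. Only the download step is shown; loading and inference are assumed to follow the pipeline released with the paper rather than a stock `transformers` class.

```python
# Sketch: download the checkpoint weights and configs from the Hub.
# Assumes the repo ID amitha/mllava-llama2-en-zh from this model card;
# inference is expected to use the codebase accompanying
# https://arxiv.org/abs/2406.11665.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="amitha/mllava-llama2-en-zh",
    # Fetch weights, configs, and tokenizer files; skip auxiliary artifacts.
    allow_patterns=["*.safetensors", "*.json", "*.model"],
)
print(f"Checkpoint downloaded to: {local_dir}")
```
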
Model size: 7.06B params (Safetensors)
Tensor types: F32, FP16
