---
license: cc
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
language:
- en
---
# Model Card for LLaVA-LLaMA-3-8B
<!-- Provide a quick summary of what the model is/does. -->
A reproduction of the LLaVA large vision-language model (LVLM) built on the Llama-3-8B LLM backbone. This is not an official implementation.
## Model Details
The model follows the LLaVA-1.5 pre-training and supervised fine-tuning data recipe.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Please refer to the forked [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3) repository for usage instructions. The data-loading function and the FastChat conversation template were modified to accommodate the Llama-3 tokenizer.
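
The snippet below is a minimal loading sketch, assuming the forked repo preserves the original LLaVA Python API (`llava.model.builder.load_pretrained_model`, `llava.mm_utils.get_model_name_from_path`) and is installed in the current environment; the model path is a placeholder, and the Llama-3 conversation template from the fork should be used at inference time.

```python
# Minimal loading sketch, assuming the forked LLaVA-Llama-3 codebase keeps the
# original LLaVA API and is installed (e.g. `pip install -e .` inside the clone).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Placeholder: replace with the actual Hugging Face repo id or a local checkpoint path.
model_path = "path/to/LLaVA-LLaMA-3-8B"

# Returns the tokenizer, the LVLM, the vision-tower image processor, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```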