---
license: cc
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
language:
- en
---

# Model Card for LLaVA-LLaMA-3-8B

<!-- Provide a quick summary of what the model is/does. -->

A reproduction of the LLaVA large vision-language model (LVLM) built on the Llama-3-8B LLM backbone. This is not an official implementation.

## Model Details

Training follows the LLaVA-1.5 recipe, using the same pre-training and supervised fine-tuning data (LLaVA-Pretrain and LLaVA-Instruct-150K, listed above).

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Please refer to the forked [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3) repository for usage. Relative to the upstream LLaVA codebase, the data-loading function and the FastChat conversation template are modified to accommodate the Llama-3 tokenizer. A minimal inference sketch is shown below.
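
For orientation, here is a minimal inference sketch that follows the upstream LLaVA-1.5 Python API (`load_pretrained_model`, `process_images`, `tokenizer_image_token`). Because this fork replaces the tokenizer and conversation template, the `"llama_3"` template name and the checkpoint path below are illustrative assumptions; the forked repository is the authoritative reference.

```python
import torch
from PIL import Image

# These imports follow the upstream LLaVA codebase layout; the fork is
# assumed to keep the same module structure.
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
from llava.mm_utils import process_images, tokenizer_image_token
from llava.model.builder import load_pretrained_model

model_path = "path/to/llava-llama-3-8b"  # hypothetical checkpoint location
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, "llava-llama-3-8b"
)

# Preprocess the input image to the vision tower's expected format.
image = Image.open("example.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)

# Build the prompt; "llama_3" is an assumed template name registered by the
# fork, since the stock FastChat templates do not match the Llama-3 tokenizer.
conv = conv_templates["llama_3"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Splice the image placeholder in as IMAGE_TOKEN_INDEX while tokenizing.
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids, images=image_tensor, do_sample=False, max_new_tokens=256
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```

For end-to-end entry points (CLI demo, training, evaluation), see the corresponding scripts in the forked repository.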