---
license: cc
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
language:
- en
---
# Model Card for LLaVA-LLaMA-3-8B
<!-- Provide a quick summary of what the model is/does. -->
A reproduced LLaVA LVLM built on the Llama-3-8B LLM backbone. This is not an official implementation.
Please follow the reproduced implementation [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3/) for more details on fine-tuning a LLaVA model with Llama-3 as the foundation LLM.
## Model Details
The model follows the LLaVA-1.5 pre-training and supervised fine-tuning data recipe.
## How to Use
First, install the llava package:
```bash
pip install git+https://github.com/Victorwz/LLaVA-Llama-3.git
```
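As an optional sanity check, you can verify that the package imports and that a GPU is visible (this snippet only assumes the `llava` and `torch` packages installed above):
```python
# optional sanity check: the package should import and CUDA should be available
import torch
import llava

print("CUDA available:", torch.cuda.is_available())
```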
You can load the model and perform inference as follows:
```python
from llava.conversation import conv_templates, SeparatorStyle
from llava.model.builder import load_pretrained_model
from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
from PIL import Image
import requests
import torch
from io import BytesIO
# load the model, tokenizer, and image processor
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = get_model_name_from_path("weizhiwang/LLaVA-Llama-3-8B")
# args: model_path, model_base, model_name, load_8bit, load_4bit
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "weizhiwang/LLaVA-Llama-3-8B", None, model_name, False, False, device=device)
# build the prompt; the <image> token marks where image features are inserted
text = '<image>' + '\n' + "Describe the image."
conv = conv_templates["llama_3"].copy()
conv.append_message(conv.roles[0], text)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
# -200 is the image-token placeholder index the model expects
input_ids = tokenizer_image_token(prompt, tokenizer, -200, return_tensors='pt').unsqueeze(0).to(device)
# download an example image and preprocess it into pixel values
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert('RGB')
image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].half().to(device)
# greedily generate a caption
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=image_tensor,
        do_sample=False,
        max_new_tokens=512,
        use_cache=True)
# decode only the newly generated tokens, skipping the prompt
outputs = tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:], skip_special_tokens=True)
print(outputs[0])
```
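To caption a local image instead of one fetched over HTTP, only the image-loading step changes; the path below is a hypothetical placeholder, and the rest of the pipeline stays the same:
```python
from PIL import Image

# hypothetical path: replace with a real image on disk
image = Image.open("/path/to/local_image.jpg").convert('RGB')
image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].half().to(device)
```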
The generated image caption looks like:
```
The image features a blue and orange double-decker bus parked on a street. The bus is stopped at a bus stop, waiting for passengers to board. There are several people standing around the bus, some of them closer to the bus and others further away.
In the background, there are two cars parked on the street, one on the left side and the other on the right side. Additionally, there is a traffic light visible in the scene, indicating that the bus is stopped at an intersection.
```
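A follow-up question can reuse the same conversation object. The sketch below assumes the variables from the example above are still in scope; it records the model's first answer as the assistant turn before appending a new user turn:
```python
# fill in the assistant's answer from the first turn
conv.messages[-1][-1] = outputs[0]
# ask a follow-up question (no extra <image> token is needed; it is already in the history)
conv.append_message(conv.roles[0], "How many people are visible?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, -200, return_tensors='pt').unsqueeze(0).to(device)
with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, do_sample=False,
                                max_new_tokens=256, use_cache=True)
print(tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```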
Please refer to the forked [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3) repo for further usage details. The data loading function and the FastChat conversation template were modified to accommodate the Llama-3 tokenizer.