---
license: llama3
language:
  - en
pipeline_tag: image-text-to-text
tags:
  - text-generation-inference
extra_gated_fields:
  First Name: text
  Last Name: text
  Country: country
  Affiliation: text
  I want to use this model for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: Other
  I agree to use this model in accordance with the META LLAMA 3 COMMUNITY LICENSE AGREEMENT and to not use this model for commercial purposes: checkbox
---

Dragonfly-Med Model Card

Note: Users are permitted to use this model in accordance with the Llama 3 Community License Agreement. Additionally, because the dataset used to train this model prohibits commercial use, Dragonfly-Med is restricted to non-commercial use only.

Model Details

Dragonfly-Med is a multimodal biomedical visual-language model, trained by instruction tuning on Llama 3.

Model Sources

Paper: https://arxiv.org/abs/2406.00977

Uses

The primary use of Dragonfly-Med is research on large visual-language models. It is primarily intended for researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

How to Get Started with the Model

💿 Installation

Create a conda environment and install the necessary packages:

conda env create -f environment.yml
conda activate dragonfly_env

Install FlashAttention:

pip install flash-attn --no-build-isolation
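If you want to confirm the build succeeded before continuing, a quick check from a Python shell (our suggestion, not part of the official steps) is:

import flash_attn

# A clean import means the FlashAttention kernels built for this environment.
print(flash_attn.__version__)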

As a final step, install the package in editable mode:

pip install --upgrade -e .

🧠 Inference

If you have completed the installation successfully, you should be able to follow the steps below.
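Before running the full pipeline, a minimal sanity check (our suggestion, assuming the CUDA GPU used below) is to confirm that the dragonfly package imports and a GPU is visible:

import torch
from dragonfly.models.modeling_dragonfly import DragonflyForCausalLM

# Both imports should succeed; the example below places the model on cuda:0.
assert torch.cuda.is_available(), "the inference example expects a CUDA GPU"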

Question: Provide a brief description of the given image.

(Example image: ROCO_04197.jpg, a computed tomography scan from the ROCO radiology dataset)

Load the necessary packages:

import torch
from PIL import Image
from transformers import AutoProcessor, AutoTokenizer

from dragonfly.models.modeling_dragonfly import DragonflyForCausalLM
from dragonfly.models.processing_dragonfly import DragonflyProcessor
from pipeline.train.train_utils import random_seed

Instantiate the tokenizer, processor, and model.

device = torch.device("cuda:0")

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-3-8B-Dragonfly-Med-v1")
# Dragonfly reuses CLIP's image processor to prepare the visual inputs.
clip_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_processor = clip_processor.image_processor
processor = DragonflyProcessor(image_processor=image_processor, tokenizer=tokenizer, image_encoding_style="llava-hd")

model = DragonflyForCausalLM.from_pretrained("togethercomputer/Llama-3-8B-Dragonfly-Med-v1")
# Cast to bfloat16 to reduce memory usage, then move the model to the GPU.
model = model.to(torch.bfloat16)
model = model.to(device)
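As a quick sanity check that the weights loaded as expected, you can count the parameters (the 8B figure is implied by the model name; this check is our suggestion, not part of the original instructions):

# Roughly 8B parameters are expected for a Llama-3-8B-based model.
num_params = sum(p.numel() for p in model.parameters())
print(f"Loaded {num_params / 1e9:.1f}B parameters in {next(model.parameters()).dtype}")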

Now, let's load the image and process it.

image = Image.open("ROCO_04197.jpg")
image = image.convert("RGB")
images = [image]
# images = [None] # if you do not want to pass any images

text_prompt = "<|start_header_id|>user<|end_header_id|>\n\nSummarize the visual content of the image.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
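The prompt uses Llama 3 chat markup: a user turn terminated by <|eot_id|>, followed by an opened assistant header that the model completes. To ask other questions, a small convenience helper (our own, not part of the dragonfly API) keeps the markup consistent:

def format_llama3_prompt(question: str) -> str:
    # Wrap a single user question in Llama 3 chat markup and open the assistant turn.
    return (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

text_prompt = format_llama3_prompt("Summarize the visual content of the image.")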

inputs = processor(text=[text_prompt], images=images, max_length=2048, return_tensors="pt", is_generate=True)
inputs = inputs.to(device)

Finally, let us generate a response from the model.

temperature = 0

with torch.inference_mode():
    generation_output = model.generate(
        **inputs,
        max_new_tokens=1024,
        eos_token_id=tokenizer.encode("<|eot_id|>"),
        do_sample=temperature > 0,  # greedy decoding when temperature == 0
        temperature=temperature,
        use_cache=True,
    )

generation_text = processor.batch_decode(generation_output, skip_special_tokens=False)
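Because batch_decode is called with skip_special_tokens=False, the decoded string still contains the prompt and chat markup around the answer. One way to pull out just the model's reply (a sketch that assumes the output layout shown in the example below):

# Keep the text after the last assistant header and drop the end-of-turn tag.
response = generation_text[0].split("<|end_header_id|>")[-1]
response = response.replace("<|eot_id|>", "").strip()
print(response)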

An example response:

Computed tomography scan showing a large heterogenous mass in the pelvis<|eot_id|>
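As the commented-out line in the image-loading step suggests, the processor also accepts images=[None] for text-only prompts. A minimal sketch of that mode, assuming the rest of the pipeline stays unchanged:

# Text-only usage: pass no image and keep the same chat-formatted prompt.
inputs = processor(text=[text_prompt], images=[None], max_length=2048, return_tensors="pt", is_generate=True)
inputs = inputs.to(device)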

Training Details

See more details in the "Implementation" section of our paper.

Evaluation

See more details in the "Results" section of our paper.

πŸ† Credits

We would like to acknowledge the following resources that were instrumental in the development of Dragonfly:

- Meta Llama 3, the language model that Dragonfly-Med is instruction-tuned from
- OpenAI CLIP, whose image processor prepares the visual inputs in the pipeline above

📚 BibTeX

@misc{chen2024dragonfly,
      title={Dragonfly: Multi-Resolution Zoom Supercharges Large Visual-Language Model}, 
      author={Kezhen Chen and Rahul Thapa and Rahul Chalamala and Ben Athiwaratkun and Shuaiwen Leon Song and James Zou},
      year={2024},
      eprint={2406.00977},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Model Card Authors

Rahul Thapa, Kezhen Chen, Rahul Chalamala

Model Card Contact

Rahul Thapa ([email protected]), Kezhen Chen ([email protected])