---
library_name: transformers
pipeline_tag: image-text-to-text
license: mit
tags:
- multimodal
- image-classification
- explanation
- visual-reasoning
- fine-grained-classification
- llava
- fgvc
---
# Fine-Grained Visual Classification on FGVC-Aircraft
- **Project page:** [SelfSynthX](https://github.com/sycny/SelfSynthX)
- **Paper:** [Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data](https://arxiv.org/abs/2502.14044)
This model is a fine-tuned multimodal foundation model based on [LLaVA-1.5-7B-hf](https://huggingface.co/llava-hf/llava-1.5-7B-hf), optimized for fine-grained classification of aircraft types using the FGVC-Aircraft dataset.
## Key Details
- **Base Model:** LLaVA-1.5-7B
- **Dataset:** FGVC-Aircraft (Fine-Grained Visual Classification of Aircraft)
- **Innovation:**
  - **Self-Synthesized Data:** Extracts and highlights distinctive aircraft-specific visual features using the Information Bottleneck principle.
  - **Iterative Fine-Tuning:** Uses reward-model-free rejection sampling to improve classification accuracy and explanation quality (see the sketch after this list).
- **Intended Use:** Identification of aircraft models with human-verifiable explanations.
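A rough picture of the rejection-sampling loop described in the paper is given below. This is an illustrative sketch only, not the released training code; the helpers `generate_candidates`, `label_of`, and `fine_tune` are hypothetical placeholders used to make the control flow concrete.

```python
# Illustrative sketch of reward-model-free rejection sampling.
# generate_candidates, label_of, and fine_tune are hypothetical helpers.
def rejection_sampling_round(model, labeled_images, k=8):
    kept = []
    for image, true_label in labeled_images:
        # Sample k candidate answers (label + explanation) from the current model
        candidates = generate_candidates(model, image, k=k)
        # Keep only candidates whose predicted label matches the ground truth,
        # so label correctness acts as the filter -- no reward model is needed
        kept.extend((image, c) for c in candidates if label_of(c) == true_label)
    # Fine-tune on the verified self-synthesized samples, then repeat
    return fine_tune(model, kept)
```

Iterating this loop lets the model bootstrap progressively better explanations from its own verified outputs.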
## How to Use
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "YuchengShi/LLaVA-v1.5-7B-Fgvc"

# Load the fine-tuned model in half precision on the GPU
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)

# Single-turn conversation with one image placeholder
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What type of aircraft is this?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Open a local test image (for a remote image, fetch it with requests and
# pass the response stream to Image.open instead)
raw_image = Image.open("fgvc-aircraft/test1.png")

inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
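If the full float16 checkpoint does not fit on your GPU, it can alternatively be loaded in 4-bit. This is an optional sketch, assuming `bitsandbytes` is installed; it is not part of the original usage instructions for this model.

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "YuchengShi/LLaVA-v1.5-7B-Fgvc"

# Optional 4-bit quantized load for smaller GPUs (assumes bitsandbytes is installed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place the weights automatically
)
processor = AutoProcessor.from_pretrained(model_id)
```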
## Training & Evaluation
- **Training:** Fine-tuned with LoRA on FGVC-Aircraft, using iterative reward-model-free rejection sampling (a reference LoRA setup is sketched below).
- **Evaluation:** Distinguishes visually similar aircraft variants with high accuracy while producing detailed, human-verifiable explanations.
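For reference, a typical LoRA configuration for a LLaVA-style model with the `peft` library looks roughly like the following. The rank, alpha, and target modules shown are illustrative defaults, not the exact hyperparameters used to train this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration

# Illustrative LoRA setup; these hyperparameters are examples, not the
# exact values used for this checkpoint.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

base = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```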
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{shi2025enhancing,
title={Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data},
author={Yucheng Shi and Quanzheng Li and Jin Sun and Xiang Li and Ninghao Liu},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=lHbLpwbEyt}
}
``` |