MedRegA

Model for the paper "Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks".

🌐 Project Page: https://medrega.github.io/

📄 Paper: https://arxiv.org/abs/2410.18387

💻 Code: https://github.com/xmed-lab/MedRegA

Introduction

We propose MedRegA, a Region-Aware medical MLLM and the first bilingual generalist medical AI system to simultaneously handle image-level and region-level medical vision-language tasks across a broad range of modalities.

MedRegA not only supports three region-centric tasks but also achieves the best performance on visual question answering, report generation, and medical image classification across 8 modalities, demonstrating significant versatility.

![Overview of MedRegA](medrega.png)
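
Usage

For quick experimentation, the sketch below shows one way to load the checkpoint with the Hugging Face `transformers` library. This is a minimal sketch under assumptions not confirmed by this card: that the repository ships custom modeling code usable via `trust_remote_code=True` and exposes an InternVL-style `chat` generation method. The exact image preprocessing and prompt formats for the region-level tasks are defined in the official code repository linked above.

```python
# Minimal loading/inference sketch. Assumptions (not confirmed by this card):
# the checkpoint ships custom modeling code loadable with trust_remote_code,
# and exposes an InternVL-style `chat(...)` method for generation.
# See the official GitHub repo for the authoritative usage.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Luxuriant16/medrega"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    low_cpu_mem_usage=True,      # 40.2B params; avoid a full FP32 copy in RAM
    trust_remote_code=True,
).eval().cuda()

# Hypothetical call: `pixel_values` would come from the model's own image
# preprocessor, and the question string follows the bilingual prompt
# templates described in the paper.
# response = model.chat(tokenizer, pixel_values, question,
#                       generation_config=dict(max_new_tokens=512))
# print(response)
```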

Citation

```bibtex
@article{wang2024interpretable,
  title={Interpretable bilingual multimodal large language model for diverse biomedical tasks},
  author={Wang, Lehan and Wang, Haonan and Yang, Honglong and Mao, Jiaji and Yang, Zehong and Shen, Jun and Li, Xiaomeng},
  journal={arXiv preprint arXiv:2410.18387},
  year={2024}
}
```
Model Details

40.2B parameters, BF16 tensors, Safetensors format.
