---
license: apache-2.0
language:
- zh
- en
---

# PA-LLaVA: A Large Language-Vision Assistant for Human Pathology Image Understanding

We developed a domain-specific large language-vision assistant (PA-LLaVA) for pathology image understanding. Specifically: (1) we first construct a human pathology image-text dataset by cleaning public medical image-text data for domain-specific alignment; (2) using this dataset, we train a pathology language-image pretraining (PLIP) model as a specialized visual encoder for pathology images, and we develop a scale-invariant connector to avoid the information loss caused by image scaling; (3) we adopt two-stage learning to train PA-LLaVA: the first stage performs domain alignment, and the second stage trains the model end to end on the visual question answering (VQA) task.

##### Our code is publicly available on GitHub: [ddw2AIGROUP2CQUPT/PA-LLaVA (github.com)](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA)

## Architecture

![image/png](https://cdn-uploads.huggingface.co/production/uploads/663f06e01cd68975883a353e/SkFB0x3JunWE_Wae808Nq.png)

## Data Cleaning Process

![image/png](https://cdn-uploads.huggingface.co/production/uploads/663f06e01cd68975883a353e/IAeFWhH8brZYDaTJnew2N.png)

Only the image names of the cleaned dataset are provided here; for the training code, please visit our GitHub repository.

## Contact

mailto: [S230233056@stu.cqupt.edu.cn](mailto:S230233056@stu.cqupt.edu.cn) or [dw_dai@163.com](mailto:dw_dai@163.com)
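This card does not spell out how the scale-invariant connector is implemented. For illustration only, below is a minimal PyTorch sketch of one common way to get a fixed-length token sequence from inputs of varying resolution: a perceiver-style resampler with learnable queries. Every name here (`ScaleInvariantConnector`, `vis_dim`, `llm_dim`, `num_queries`) is a hypothetical assumption, not the released implementation; see the GitHub repository above for the actual code.

```python
import torch
import torch.nn as nn


class ScaleInvariantConnector(nn.Module):
    """Illustrative sketch (not the PA-LLaVA code): pools a variable-length
    sequence of visual tokens into a fixed-length sequence via cross-attention,
    so images of any resolution (hence any token count) produce the same
    number of tokens for the language model."""

    def __init__(self, vis_dim: int = 1024, llm_dim: int = 4096,
                 num_queries: int = 256, num_heads: int = 8):
        super().__init__()
        # Learnable queries define the fixed output length.
        self.queries = nn.Parameter(torch.randn(num_queries, vis_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(vis_dim)
        # Project the pooled tokens into the LLM embedding space.
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (batch, n_tokens, vis_dim); n_tokens may vary per image.
        b = vis_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        pooled, _ = self.attn(q, vis_tokens, vis_tokens)
        return self.proj(self.norm(pooled))  # (batch, num_queries, llm_dim)


# Token sequences of different lengths map to the same output shape.
connector = ScaleInvariantConnector()
for n in (196, 576, 1024):  # e.g. tokens from differently sized inputs
    out = connector(torch.randn(2, n, 1024))
    print(n, "->", tuple(out.shape))  # always (2, 256, 4096)
```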