---
license: apache-2.0
language:
- zh
- en
---
# PA-LLaVA: A Large Language-Vision Assistant for Human Pathology Image Understanding

We developed a domain-specific large language-vision assistant (PA-LLaVA) for pathology image understanding. Specifically, (1) we first construct a human pathology image-text dataset by cleaning public medical image-text data for domain-specific alignment; (2) using this image-text data, we train a pathology language-image pretraining (PLIP) model as the specialized visual encoder for pathology images, and we develop a scale-invariant connector to avoid the information loss caused by image scaling; (3) we adopt two-stage learning to train PA-LLaVA: the first stage performs domain alignment, and the second stage trains the model end to end for the visual question answering (VQA) task.
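
This card does not spell out the connector's internals, so the following is only a minimal sketch of one way a scale-invariant connector could behave: a fixed set of learnable queries cross-attends over visual patch features of arbitrary length, so the language model always receives the same number of visual tokens regardless of the input image scale. The class name, dimensions, and query-pooling design below are illustrative assumptions, not the released implementation (see the GitHub repository for the actual code).

```python
# Illustrative sketch only: fixed-length query pooling over variable-length
# patch features, followed by projection into the LLM embedding space.
import torch
import torch.nn as nn


class ScaleInvariantConnector(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096, num_queries=256, num_heads=8):
        super().__init__()
        # Learnable queries define a fixed-length output sequence.
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        # Cross-attention lets each query aggregate information from all patches.
        self.cross_attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        # Linear projection into the language model's embedding space.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, vision_dim); num_patches may vary.
        batch = patch_features.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.cross_attn(q, patch_features, patch_features)
        return self.proj(pooled)  # (batch, num_queries, llm_dim)


if __name__ == "__main__":
    connector = ScaleInvariantConnector()
    for num_patches in (196, 576, 1024):  # features from different image scales
        feats = torch.randn(2, num_patches, 1024)
        print(num_patches, "->", tuple(connector(feats).shape))  # always (2, 256, 4096)
```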

##### Our code is publicly available on GitHub: [ddw2AIGROUP2CQUPT/PA-LLaVA](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA)


## Architecture

![image](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA/blob/main/Architecture.png) 


## Data Cleaning Process
![image](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA/blob/main/DataCleanProcess.png)

Only the image names of the cleaned dataset are provided here; for the training code, please visit our GitHub repository.
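
Since only the cleaned image names are released, one way to reconstruct the training subset is to filter the original public image-text pairs against that list. The sketch below assumes the names are shipped as a plain text file with one name per line; the file name `cleaned_image_names.txt` and the dict key `"image"` are illustrative assumptions, not guaranteed by this repository.

```python
# Minimal sketch, assuming a one-name-per-line text file of kept image names.
from pathlib import Path


def load_cleaned_names(list_path):
    """Return the set of image names that survived data cleaning."""
    return {line.strip() for line in Path(list_path).read_text().splitlines() if line.strip()}


def filter_pairs(image_text_pairs, cleaned_names):
    """Keep only image-text pairs whose image name is in the cleaned list."""
    return [pair for pair in image_text_pairs if pair["image"] in cleaned_names]


# Example usage (paths and keys are placeholders):
# names = load_cleaned_names("cleaned_image_names.txt")
# cleaned_pairs = filter_pairs(original_pairs, names)
```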


## Contact

Email: [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected])