We developed a domain-specific large language-vision assistant (PA-LLaVA) for pathology image understanding. Specifically, (1) we first construct a human pathology image-text dataset by cleaning public medical image-text data for domain-specific alignment; (2) using the proposed image-text data, we train a pathology language-image pretraining (PLIP) model as the specialized visual encoder for pathology images, and then develop a scale-invariant connector to avoid the information loss caused by image scaling; (3) we adopt two-stage learning to train PA-LLaVA: the first stage for domain alignment, and the second stage for the end-to-end visual question answering (VQA) task.

Our code is publicly available on GitHub: [ddw2AIGROUP2CQUPT/PA-LLaVA (github.com)](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA)

## Architecture

![image](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA/blob/main/Architecture.png)

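For intuition about the scale-invariant connector mentioned above, here is a minimal PyTorch sketch of one plausible design: it pools the variable-size patch grid produced by the visual encoder down to a fixed token count before projecting into the LLM embedding space, so images of different resolutions yield the same number of visual tokens without rescaling the input. The module name, dimensions, and pooling scheme are illustrative assumptions, not the official implementation; see the GitHub repository for the actual code.

```python
# Minimal sketch of a scale-invariant connector (assumed design, not the
# official PA-LLaVA implementation; see the GitHub repo for the real code).
import torch
import torch.nn as nn


class ScaleInvariantConnector(nn.Module):
    """Pools a variable-size patch grid from the visual encoder down to a
    fixed number of tokens, then projects them into the LLM embedding space."""

    def __init__(self, vis_dim: int, llm_dim: int, out_grid: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(out_grid)  # always out_grid x out_grid tokens
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_tokens: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # patch_tokens: (batch, h*w, vis_dim) from the PLIP visual encoder
        b, _, c = patch_tokens.shape
        h, w = grid_hw
        x = patch_tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.pool(x)                  # (b, c, out_grid, out_grid)
        x = x.flatten(2).transpose(1, 2)  # (b, out_grid**2, c)
        return self.proj(x)               # (b, out_grid**2, llm_dim)


# Usage: a 24x24 patch grid and a 32x32 grid both map to 16x16 = 256 tokens.
connector = ScaleInvariantConnector(vis_dim=1024, llm_dim=4096)
for h, w in [(24, 24), (32, 32)]:
    tokens = torch.randn(1, h * w, 1024)
    print(connector(tokens, (h, w)).shape)  # torch.Size([1, 256, 4096])
```
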
## Data Cleaning Process

![image](https://github.com/ddw2AIGROUP2CQUPT/PA-LLaVA/blob/main/DataCleanProcess.png)

Only the image names of the cleaned dataset are provided here; for the specific training code, please visit our GitHub.

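Assuming the cleaned list is distributed as a plain-text file of image names, reconstructing the cleaned subset from a local copy of the public source data might look like the sketch below. Both file paths are hypothetical placeholders, not names guaranteed by this repository.

```python
# Hypothetical example: filter a local copy of the public image-text data
# down to the cleaned subset using the released image-name list.
# "cleaned_image_names.txt" and "images/" are placeholder paths.
from pathlib import Path

with open("cleaned_image_names.txt", encoding="utf-8") as f:
    kept = {line.strip() for line in f if line.strip()}

source_dir = Path("images")
cleaned = [p for p in source_dir.iterdir() if p.name in kept]
print(f"matched {len(cleaned)} of {len(kept)} cleaned image names")
```
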
## Contact