---
license: cc-by-nc-4.0
---

PathGen-CLIP

This is the official PathGen-CLIP model, trained on the dataset introduced in PathGen-1.6M: 1.6 Million Pathology Image-text Pairs Generation through Multi-agent Collaboration.

Usage of the trained PathGen-CLIP series models

The trained PathGen-CLIP can be downloaded via PathGen-CLIP, and PathGen-CLIP-L via PathGen-CLIP-L (we have also converted PathGen-CLIP-L to a Hugging Face version, PathGenCLIP-vit-large-patch14-hf, to ease integration into LLMs).
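
For pipelines built on the transformers library, the HF-format checkpoint can be loaded directly with CLIPModel and CLIPProcessor. The snippet below is a minimal sketch of zero-shot classification under that assumption; the repository id "jamessyx/PathGenCLIP-vit-large-patch14-hf" is an assumed placeholder and should be replaced with the actual repo id or a local path.

# Minimal sketch: zero-shot classification with the HF-format PathGen-CLIP-L.
# NOTE: the repo id below is an assumption; point it at the actual
# PathGenCLIP-vit-large-patch14-hf repository or a local checkpoint directory.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

hf_repo = "jamessyx/PathGenCLIP-vit-large-patch14-hf"  # assumed repo id
model = CLIPModel.from_pretrained(hf_repo)
processor = CLIPProcessor.from_pretrained(hf_repo)
model.eval()

image = Image.open("example.png")
texts = ["An H&E image of tumor patch", "An H&E image of normal patch"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_texts); softmax over texts
probs = outputs.logits_per_image.softmax(dim=-1)
print("Label probs:", probs)

For open_clip-based workflows, use the example below instead.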

pip install open_clip_torch

import torch
from PIL import Image
import open_clip

# PathGen-CLIP (ViT-B-16 backbone)
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-16', pretrained='path/pathgen-clip.pt')
# PathGen-CLIP-L (ViT-L-14 backbone)
# model, _, preprocess = open_clip.create_model_and_transforms('ViT-L-14', pretrained='path/pathgen-clip-l.pt')
model.eval()  # model is in train mode by default; affects models with BatchNorm or stochastic depth
tokenizer = open_clip.get_tokenizer('ViT-B-16')  # use 'ViT-L-14' when loading PathGen-CLIP-L

image = preprocess(Image.open("example.png")).unsqueeze(0)
text = tokenizer(["An H&E image of tumor patch", "An H&E image of normal patch"])

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)

Citation

@article{sun2024pathgen,
  title={PathGen-1.6M: 1.6 million pathology image-text pairs generation through multi-agent collaboration},
  author={Sun, Yuxuan and Zhang, Yunlong and Si, Yixuan and Zhu, Chenglu and Shui, Zhongyi and Zhang, Kai and Li, Jingxiong and Lyu, Xingheng and Lin, Tao and Yang, Lin},
  journal={arXiv preprint arXiv:2407.00203},
  year={2024}
}