---
tags:
- feature-extraction
- image-classification
- timm
- biology
- cancer
- histology
library_name: timm
model-index:
- name: tcga_brca
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: TCGA-BRCA
      type: image-classification
    metrics:
    - type: accuracy
      value: 0.886 ± 0.059
      name: AUC
      verified: false
license: gpl-3.0
pipeline_tag: feature-extraction
inference: false
---

# Model card for vit_small_patch16_256.tcga_brca_dino

A Vision Transformer (ViT) image classification model. \
Trained on 2M histology patches from TCGA-BRCA.
35 |
+
|
36 |
+
## Model Details
|
37 |
+
|
38 |
+
- **Model Type:** Feature backbone
|
39 |
+
- **Model Stats:**
|
40 |
+
- Params (M): 21.7
|
41 |
+
- Image size: 256 x 256 x 3
|
42 |
+
- **Papers:**
|
43 |
+
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology: https://arxiv.org/abs/2203.00585
|
44 |
+
- **Dataset:** TGCA BRCA: https://portal.gdc.cancer.gov/
|
45 |
+
- **Original:** https://github.com/Richarizardd/Self-Supervised-ViT-Path/
|
46 |
+
- **License:** [GPLv3](https://github.com/Richarizardd/Self-Supervised-ViT-Path/blob/master/LICENSE)
|
47 |
+
|
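
Given the stats above, the backbone's token geometry follows directly from the 256 px input and the patch size 16 in the model name. A quick sanity-check sketch (the standard non-overlapping patch embedding and the extra `[CLS]` token are assumptions from the usual ViT design, not stated in this card):

```python
# Token-count sanity check for a ViT with the stats listed above.
# Assumes standard non-overlapping patch embedding and one [CLS] token.
image_size = 256   # from "Image size: 256 x 256 x 3"
patch_size = 16    # from "patch16" in the model name

patches_per_side = image_size // patch_size   # 16
num_patches = patches_per_side ** 2           # 256 patch tokens
num_tokens = num_patches + 1                  # plus the [CLS] token

print(num_patches, num_tokens)  # 256 257
```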

## Model Usage

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get an example histology image
img = Image.open(
    urlopen(
        "https://github.com/owkin/HistoSSLscaling/raw/main/assets/example.tif"
    )
)

# load the model from the hub
model = timm.create_model(
    model_name="hf-hub:1aurent/vit_small_patch16_256.tcga_brca_dino",
    pretrained=True,
).eval()

# get model-specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # (batch_size, num_features) shaped tensor
```
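
Downstream, these `(batch_size, num_features)` embeddings are typically compared between patches or pooled into slide-level representations. A minimal sketch of patch-to-patch comparison via cosine similarity, assuming the outputs have been detached to NumPy arrays; the 384-dim width used here is the usual ViT-Small feature size and is an assumption, not taken from this card:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# stand-ins for two patch embeddings, e.g. output[0].detach().numpy()
rng = np.random.default_rng(0)
emb_a = rng.normal(size=384)  # 384 = assumed ViT-Small feature width
emb_b = rng.normal(size=384)

print(cosine_similarity(emb_a, emb_a))  # ≈ 1.0 for identical embeddings
```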

## Citation

```bibtex
@misc{chen2022selfsupervised,
  title         = {Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology},
  author        = {Richard J. Chen and Rahul G. Krishnan},
  year          = {2022},
  eprint        = {2203.00585},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CV}
}
```