bbexx committed
Commit
c7d1c9f
1 Parent(s): c633286
Files changed (2)
  1. .gitattributes +1 -0
  2. README.md +95 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: mit
datasets:
- mlfoundations/datacomp_1b
pipeline_tag: feature-extraction
---

# Model card for ViTamin-L-256px

Official Hugging Face models of **ViTamin**, from the following CVPR 2024 paper:

[ViTamin: Design Scalable Vision Models in the Vision-language Era](https://arxiv.org/pdf/2404.02132.pdf).\
✨ [Jieneng Chen](https://beckschen.github.io), [Qihang Yu](https://yucornetto.github.io/), [Xiaohui Shen](https://xiaohuishen.github.io/), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/) and [Liang-Chieh Chen](http://liangchiehchen.com/)\
🏠 Johns Hopkins University, ByteDance

🔥 ViTamin-L-256px is the pre-trained model that we transfer to open-vocabulary detection and segmentation, and to large multi-modal models, in our paper.

Load the model from Hugging Face with `transformers.AutoModel`:
```python
import torch
import open_clip
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the ViTamin image-text model (the modeling code ships with the repo).
model = AutoModel.from_pretrained(
    'jienengchen/ViTamin-L-256px',
    trust_remote_code=True).to(device).eval()

# Preprocess the input image.
image = Image.open('./image.png').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('jienengchen/ViTamin-L-256px')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(device=device, dtype=torch.bfloat16)

# Tokenize the candidate captions with the open_clip tokenizer.
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K')
text = tokenizer(["a photo of vitamin", "a dog", "a cat"]).to(device)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features, text_features, logit_scale = model(pixel_values, text)
    text_probs = (100.0 * image_features @ text_features.to(torch.float).T).softmax(dim=-1)

print("Label probs:", text_probs)
```
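
The same forward call can be reused for zero-shot tagging over a custom label set. The following is a minimal sketch, not an official API: the labels and prompt template are placeholders, and it assumes the `model`, `tokenizer`, `pixel_values`, and `device` objects from the snippet above, with the same `(image_features, text_features, logit_scale)` output convention.

```python
# Minimal sketch: zero-shot tagging with a custom label set.
# Assumes model, tokenizer, pixel_values and device from the snippet above.
labels = ["a dog", "a cat", "a bottle of vitamins"]     # hypothetical labels
prompts = [f"a photo of {label}" for label in labels]   # simple prompt template

text = tokenizer(prompts).to(device)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features, text_features, _ = model(pixel_values, text)
    probs = (100.0 * image_features.float() @ text_features.float().T).softmax(dim=-1)

# Report the best-matching prompt for the (single) input image.
best = probs[0].argmax().item()
print(f"best label: {labels[best]} (p={probs[0, best].item():.3f})")
```

For a fixed label set, the text embeddings can be computed once and cached, so only the image branch runs per query.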

## Main Results with CLIP Pre-training on DataComp-1B

| image encoder | image size | num patches | text encoder depth/width | seen samples (B) | trainable params Image+Text (M) | MACs Image+Text (G) | ImageNet Acc. | avg. 38 datasets | ImageNet dist. shift. | VTAB | retrieval |
|---------------|------------|-------------|--------------------------|------------------|---------------------------------|---------------------|---------------|------------------|-----------------------|------|-----------|
| ViTamin-L  | 224 | 196 | 12/768  | 12.8     | 333.3+123.7 | 72.6+6.6   | 80.8 | 66.7 | 69.8 | 65.3 | 60.3 |
| ViTamin-L  | 256 | 256 | 12/768  | 12.8+0.2 | 333.4+123.7 | 94.8+6.6   | 81.2 | 67.0 | 71.1 | 65.3 | 61.2 |
| ViTamin-L  | 336 | 441 | 12/768  | 12.8+0.2 | 333.6+123.7 | 163.4+6.6  | 81.6 | 67.0 | 72.1 | 64.4 | 61.6 |
| ViTamin-L  | 384 | 576 | 12/768  | 12.8+0.2 | 333.7+123.7 | 213.4+6.6  | 81.8 | 67.2 | 72.4 | 64.7 | 61.8 |
| ViTamin-L2 | 224 | 196 | 24/1024 | 12.8     | 333.6+354.0 | 72.6+23.3  | 80.9 | 66.4 | 70.6 | 63.4 | 61.5 |
| ViTamin-L2 | 256 | 256 | 24/1024 | 12.8+0.5 | 333.6+354.0 | 94.8+23.3  | 81.5 | 67.4 | 71.9 | 64.1 | 63.1 |
| ViTamin-L2 | 336 | 441 | 24/1024 | 12.8+0.5 | 333.8+354.0 | 163.4+23.3 | 81.8 | 67.8 | 73.0 | 64.5 | 63.6 |
| ViTamin-L2 | 384 | 576 | 24/1024 | 12.8+0.5 | 334.0+354.0 | 213.4+23.3 | 82.1 | 68.1 | 73.4 | 64.8 | 63.7 |
| ViTamin-XL | 256 | 256 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 125.3+33.1 | 82.1 | 67.6 | 72.3 | 65.4 | 62.7 |
| ViTamin-XL | 384 | 576 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 281.9+33.1 | 82.6 | 68.1 | 73.6 | 65.6 | 63.8 |
| ViTamin-XL | 256 | 256 | 27/1152 | 40       | 436.1+488.7 | 125.3+33.1 | 82.3 | 67.5 | 72.8 | 64.0 | 62.1 |
| ViTamin-XL | 336 | 441 | 27/1152 | 40+1     | 436.1+488.7 | 215.9+33.1 | 82.7 | 68.0 | 73.9 | 64.1 | 62.6 |
| ViTamin-XL | 384 | 576 | 27/1152 | 40+1     | 436.1+488.7 | 281.9+33.1 | 82.9 | 68.1 | 74.1 | 64.0 | 62.5 |

## Main Results on Downstream Tasks

**Open-Vocab Detection**

| image encoder | detector      | OV-COCO (AP<sub>50</sub><sup>novel</sup>) | OV-LVIS (AP<sub>r</sub>) |
|---------------|---------------|-------------------------------------------|--------------------------|
| ViT-L/14      | Sliding F-ViT | 36.1                                      | 32.5                     |
| ViTamin-L     | Sliding F-ViT | 37.5                                      | 35.6                     |

**Open-Vocab Segmentation**

| image encoder | segmentor       | ADE  | Cityscapes | MV   | A-150 | A-847 | PC-459 | PC-59 | PAS-21 |
|---------------|-----------------|------|------------|------|-------|-------|--------|-------|--------|
| ViT-L/14      | Sliding FC-CLIP | 24.6 | 40.7       | 16.5 | 31.8  | 14.3  | 18.3   | 55.1  | 81.5   |
| ViTamin-L     | Sliding FC-CLIP | 27.3 | 44.0       | 18.2 | 35.6  | 16.1  | 20.4   | 58.4  | 83.4   |

Note: the panoptic datasets (ADE, Cityscapes, MV) are reported in PQ; the semantic datasets (A-150, A-847, PC-459, PC-59, PAS-21) are reported in mIoU.

**Large Multi-modal Models**

| image encoder | image size | VQAv2 | GQA  | VizWiz | SQA  | T-VQA | POPE | MME  | MM-Bench | MM-B-CN | SEED | LLaVA-Wild | MM-Vet |
|---------------|------------|-------|------|--------|------|-------|------|------|----------|---------|------|------------|--------|
| ViTamin-L     | 336        | 78.4  | 61.6 | 51.1   | 66.9 | 58.7  | 84.6 | 1421 | 65.4     | 58.4    | 57.7 | 64.5       | 33.6   |
| ViTamin-L     | 384        | 78.9  | 61.6 | 55.4   | 67.6 | 59.8  | 85.5 | 1447 | 64.5     | 58.3    | 57.9 | 66.1       | 33.6   |

## Citing ViTamin

```bibtex
@inproceedings{chen2024vitamin,
  title={ViTamin: Design Scalable Vision Models in the Vision-language Era},
  author={Chen, Jieneng and Yu, Qihang and Shen, Xiaohui and Yuille, Alan and Chen, Liang-Chieh},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```