timm / mambaout_base_short_rw.sw_e500_in1k

Image Classification · timm · PyTorch · Safetensors

rwightman committed
Commit 419ee15
1 Parent(s): ea6d15b
Files changed (4)
  1. README.md +162 -0
  2. config.json +41 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,162 @@
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mambaout_base_short_rw.sw_e500_in1k

A MambaOut image classification model with `timm`-specific architecture customizations. Trained on ImageNet-1k by Ross Wightman using a Swin / ConvNeXt based recipe.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 88.8
  - GMACs: 16.3
  - Activations (M): 38.1
  - Image size: train = 224 x 224, test = 288 x 288
- **Dataset:** ImageNet-1k
- **Papers:**
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
  - MambaOut: Do We Really Need Mamba for Vision?: https://arxiv.org/abs/2405.07992
- **Original:** https://github.com/yuweihao/MambaOut

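As a quick sanity check, the parameter count above can be reproduced from the instantiated model; a minimal sketch (`pretrained=False` skips the weight download):

```python
import timm

model = timm.create_model('mambaout_base_short_rw.sw_e500_in1k', pretrained=False)
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f'{params_m:.1f}M params')  # expected to print ~88.8M per the stats above
```
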
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mambaout_base_short_rw.sw_e500_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
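The top-5 indices can be mapped to human-readable labels. A minimal sketch continuing from the block above, assuming a recent `timm` release that ships `timm.data.ImageNetInfo`:

```python
from timm.data import ImageNetInfo

info = ImageNetInfo()  # class index -> label metadata for imagenet-1k (assumed helper)
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{info.index_to_description(idx.item())}: {prob.item():.2f}%')
```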

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mambaout_base_short_rw.sw_e500_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 56, 56, 128])
    #  torch.Size([1, 28, 28, 256])
    #  torch.Size([1, 14, 14, 512])
    #  torch.Size([1, 7, 7, 768])
    print(o.shape)
```
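Note that the extracted feature maps are channels-last (NHWC), as the shapes above show. If a downstream consumer expects the conventional NCHW layout, a simple permute works; a sketch continuing from the loop above:

```python
for o in output:
    o_nchw = o.permute(0, 3, 1, 2).contiguous()  # NHWC -> NCHW
    print(o_nchw.shape)  # e.g. torch.Size([1, 128, 56, 56]) for the first map
```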

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mambaout_base_short_rw.sw_e500_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 7, 7, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
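For retrieval-style use, the pooled embedding is typically L2-normalized so that dot products become cosine similarities; a minimal sketch continuing from the block above:

```python
import torch.nn.functional as F

embedding = F.normalize(output, dim=-1)  # unit-length (1, num_features) vector
similarity = embedding @ embedding.T     # cosine similarity matrix, 1.0 on the diagonal
```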

## Model Comparison
### By Top-1

|model |img_size|top1 |top5 |param_count|
|---------------------------------------------------------------------------------------------------------------------|--------|------|------|-----------|
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](https://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|288 |86.912|98.236|101.66 |
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](https://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|224 |86.632|98.156|101.66 |
|[mambaout_base_tall_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k) |288 |84.974|97.332|86.48 |
|[mambaout_base_wide_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k) |288 |84.962|97.208|94.45 |
|[mambaout_base_short_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k) |288 |84.832|97.27 |88.83 |
|[mambaout_base.in1k](https://huggingface.co/timm/mambaout_base.in1k) |288 |84.72 |96.93 |84.81 |
|[mambaout_small_rw.sw_e450_in1k](https://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k) |288 |84.598|97.098|48.5 |
|[mambaout_small.in1k](https://huggingface.co/timm/mambaout_small.in1k) |288 |84.5 |96.974|48.49 |
|[mambaout_base_wide_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k) |224 |84.454|96.864|94.45 |
|[mambaout_base_tall_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k) |224 |84.434|96.958|86.48 |
|[mambaout_base_short_rw.sw_e500_in1k](https://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k) |224 |84.362|96.952|88.83 |
|[mambaout_base.in1k](https://huggingface.co/timm/mambaout_base.in1k) |224 |84.168|96.68 |84.81 |
|[mambaout_small.in1k](https://huggingface.co/timm/mambaout_small.in1k) |224 |84.086|96.63 |48.49 |
|[mambaout_small_rw.sw_e450_in1k](https://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k) |224 |84.024|96.752|48.5 |
|[mambaout_tiny.in1k](https://huggingface.co/timm/mambaout_tiny.in1k) |288 |83.448|96.538|26.55 |
|[mambaout_tiny.in1k](https://huggingface.co/timm/mambaout_tiny.in1k) |224 |82.736|96.1 |26.55 |
|[mambaout_kobe.in1k](https://huggingface.co/timm/mambaout_kobe.in1k) |288 |81.054|95.718|9.14 |
|[mambaout_kobe.in1k](https://huggingface.co/timm/mambaout_kobe.in1k) |224 |79.986|94.986|9.14 |
|[mambaout_femto.in1k](https://huggingface.co/timm/mambaout_femto.in1k) |288 |79.848|95.14 |7.3 |
|[mambaout_femto.in1k](https://huggingface.co/timm/mambaout_femto.in1k) |224 |78.87 |94.408|7.3 |

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{yu2024mambaout,
  title={MambaOut: Do We Really Need Mamba for Vision?},
  author={Yu, Weihao and Wang, Xinchao},
  journal={arXiv preprint arXiv:2405.07992},
  year={2024}
}
```
config.json ADDED
@@ -0,0 +1,41 @@
{
    "architecture": "mambaout_base_short_rw",
    "num_classes": 1000,
    "num_features": 768,
    "pretrained_cfg": {
        "tag": "sw_e500_in1k",
        "custom_load": false,
        "input_size": [
            3,
            224,
            224
        ],
        "test_input_size": [
            3,
            288,
            288
        ],
        "fixed_input_size": false,
        "interpolation": "bicubic",
        "crop_pct": 0.95,
        "test_crop_pct": 1.0,
        "crop_mode": "center",
        "mean": [
            0.485,
            0.456,
            0.406
        ],
        "std": [
            0.229,
            0.224,
            0.225
        ],
        "num_classes": 1000,
        "pool_size": [
            7,
            7
        ],
        "first_conv": "stem.conv1",
        "classifier": "head.fc"
    }
}
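The `pretrained_cfg` fields above are what `timm`'s transform factory consumes at inference time; a minimal sketch showing how they surface through the data-config helper used in the README:

```python
import timm

model = timm.create_model('mambaout_base_short_rw.sw_e500_in1k', pretrained=True)
data_config = timm.data.resolve_model_data_config(model)
print(data_config)
# expected to reflect the pretrained_cfg above, e.g.:
#   'input_size': (3, 224, 224), 'interpolation': 'bicubic', 'crop_pct': 0.95,
#   'mean': (0.485, 0.456, 0.406), 'std': (0.229, 0.224, 0.225)
```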
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93170a377b319652ce26e83386704adfec09ad048905c3d5959bd870897327d0
size 355343264
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d76d8c39680bdda05f5dd2bf4504125b94bcf73f11c7b8f63684a704f3a664ea
size 355426930