Update model config and README
README.md CHANGED

@@ -1,7 +1,7 @@
 ---
 tags:
-- image-classification
 - timm
+- image-classification
 library_name: timm
 license: apache-2.0
 datasets:
@@ -9,13 +9,13 @@ datasets:
 ---
 # Model card for vit_huge_patch14_224.orig_in21k
 
-A Vision Transformer (ViT) image classification model.
+A Vision Transformer (ViT) image classification model. Pretrained on ImageNet-21k in JAX by the paper authors and ported to PyTorch by Ross Wightman. This model has no classification head; it is useful for feature extraction and fine-tuning only.
 
 
 ## Model Details
 - **Model Type:** Image classification / feature backbone
 - **Model Stats:**
-  - Params (M):
+  - Params (M): 630.8
   - GMACs: 162.0
   - Activations (M): 95.1
   - Image size: 224 x 224