Export model 'openai/clip-vit-large-patch14-336', on 2025-01-29 03:28:39 JST
- README.md +8 -6
- models.parquet +2 -2
- openai/clip-vit-large-patch14-336/image_encode.onnx +3 -0
- openai/clip-vit-large-patch14-336/meta.json +3 -0
- openai/clip-vit-large-patch14-336/preprocessor.json +3 -0
- openai/clip-vit-large-patch14-336/text_encode.onnx +3 -0
- openai/clip-vit-large-patch14-336/tokenizer.json +3 -0
README.md CHANGED
@@ -4,6 +4,7 @@ base_model:
 - openai/clip-vit-base-patch16
 - openai/clip-vit-base-patch32
 - openai/clip-vit-large-patch14
+- openai/clip-vit-large-patch14-336
 language:
 - en
 tags:
@@ -18,11 +19,12 @@ ONNX exported version of CLIP models.
 
 # Models
 
-3 models exported in total.
+4 models exported in total.
 
-| Name | Image (Params/FLOPS) | Image Size | Image Width (Enc/Emb) | Text (Params/FLOPS) | Text Width (Enc/Emb) | Created At |
-|:----------------------------------------------------------------------------------------------|:-----------------------|-------------:|:------------------------|:----------------------|:-----------------------|:-------------|
-| [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) | 302.9M / 77.8G | 224 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-03-03 |
-| [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16) | 85.6M / 16.9G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
-| [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) | 87.4M / 4.4G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
+| Name | Image (Params/FLOPS) | Image Size | Image Width (Enc/Emb) | Text (Params/FLOPS) | Text Width (Enc/Emb) | Created At |
+|:----------------------------------------------------------------------------------------------|:-----------------------|-------------:|:------------------------|:----------------------|:-----------------------|:-------------|
+| [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | 302.9M / 174.7G | 336 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-04-22 |
+| [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) | 302.9M / 77.8G | 224 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-03-03 |
+| [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16) | 85.6M / 16.9G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
+| [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) | 87.4M / 4.4G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
 
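For context on what the export contains: image_encode.onnx and text_encode.onnx are standalone encoders that share the 768-dim embedding space listed in the table, with tokenizer.json and preprocessor.json carrying the input pipelines. Below is a minimal sketch of driving them with onnxruntime; the random stand-in image, the 77-token padding, and the pad id are assumptions not documented in this commit, so check preprocessor.json, tokenizer.json, and meta.json for the actual values.

```python
# Minimal sketch, assuming each graph takes a single tensor and returns the
# embedding first; input names are read from the sessions rather than guessed.
import numpy as np
import onnxruntime as ort
from tokenizers import Tokenizer

repo = "openai/clip-vit-large-patch14-336"
image_sess = ort.InferenceSession(f"{repo}/image_encode.onnx")
text_sess = ort.InferenceSession(f"{repo}/text_encode.onnx")
tok = Tokenizer.from_file(f"{repo}/tokenizer.json")

# Image branch: stand-in tensor for a real image preprocessed per
# preprocessor.json (resize/center-crop to 336, normalize), NCHW float32.
pixels = np.random.rand(1, 3, 336, 336).astype(np.float32)
image_emb = image_sess.run(None, {image_sess.get_inputs()[0].name: pixels})[0]

# Text branch: CLIP uses a 77-token context; whether this export pads
# internally is not documented here, so pad manually as a precaution.
ids = tok.encode("a photo of a cat").ids[:77]
ids += [0] * (77 - len(ids))  # assumed pad id; check tokenizer.json
input_ids = np.array([ids], dtype=np.int64)
text_emb = text_sess.run(None, {text_sess.get_inputs()[0].name: input_ids})[0]

# Per the table above, both encoders embed into 768 dims, so cosine
# similarity between L2-normalized vectors scores image-text agreement.
def l2norm(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

print(l2norm(image_emb) @ l2norm(text_emb).T)
```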
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9150f4745be893a842e26063ef95cc47aed9155eb2d3de7b8830c5cc8bb9e9af
+size 8637
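models.parquet presumably stores the machine-readable model index behind the README table (its 8637-byte size fits a few rows of metadata). A hedged sketch for inspecting it; the schema is an assumption, so list the columns before relying on any names:

```python
# Hedged sketch: inspect the model index in models.parquet.
# Column names are not documented in this commit, so discover them first.
import pandas as pd

df = pd.read_parquet("models.parquet")
print(df.columns.tolist())  # the actual schema
print(df.head())            # presumably one row per exported model
```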
openai/clip-vit-large-patch14-336/image_encode.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af2c86f35c06f13796d52c25ab3d3bc5903a5dfcdac758ac173657863971f907
+size 1217674510
openai/clip-vit-large-patch14-336/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2de78e50b7b894b0945875a1448cdcf0c118e0a53982879225fa2dcd44b9dd9a
+size 458
openai/clip-vit-large-patch14-336/preprocessor.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bedcc09958d284d75d7e864bd8e85ca10f997882e7621d0d684d9a2c863978b4
+size 826
openai/clip-vit-large-patch14-336/text_encode.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c040a8a429bae34d99fa83a1f2dd48a0f6b21ccbe878eccb7c40c30291fda6e
+size 494879632
openai/clip-vit-large-patch14-336/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:358e995b0cfd26ae9243ad4027fb5aa92bfa6c46ed22fe1adfd0fed53a9baeac
+size 3642240
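All five added files are committed as Git LFS pointers rather than the blobs themselves: three lines giving the spec version, the sha256 oid, and the byte size. A small standard-library sketch for checking a downloaded blob against its pointer:

```python
# Verify a blob against its LFS pointer (format copied from the diffs above:
# "version <url>", "oid sha256:<hex>", "size <bytes>").
import hashlib

def parse_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {"oid": fields["oid"].removeprefix("sha256:"), "size": int(fields["size"])}

def verify(blob_path: str, pointer_text: str) -> bool:
    ptr = parse_pointer(pointer_text)
    digest, size = hashlib.sha256(), 0
    with open(blob_path, "rb") as f:
        # Stream in 1 MiB chunks: the image encoder blob is ~1.2 GB.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == ptr["oid"] and size == ptr["size"]

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:358e995b0cfd26ae9243ad4027fb5aa92bfa6c46ed22fe1adfd0fed53a9baeac
size 3642240"""
# verify("openai/clip-vit-large-patch14-336/tokenizer.json", pointer)
```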