---
pipeline_tag: zero-shot-classification
base_model:
- openai/clip-vit-base-patch16
- openai/clip-vit-base-patch32
- openai/clip-vit-large-patch14
- openai/clip-vit-large-patch14-336
language:
- en
tags:
- transformers
- clip
- image
- dghs-realutils
library_name: dghs-realutils
---

ONNX-exported versions of the OpenAI CLIP models.
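
For reference, here is a minimal zero-shot classification sketch using the upstream PyTorch checkpoints via `transformers`; the ONNX exports in this repo compute the same image/text similarity. The image path and label prompts are hypothetical placeholders.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-base-patch32"  # any of the four base models listed below
model = CLIPModel.from_pretrained(name)
processor = CLIPProcessor.from_pretrained(name)

image = Image.open("example.jpg")  # hypothetical input image
labels = ["a photo of a cat", "a photo of a dog"]  # hypothetical label prompts

# CLIP scores each (image, text) pair; softmax over labels gives probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```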

# Models

4 models are exported in total.

| Name                                                                                          | Image (Params/FLOPS)   |   Image Size | Image Width (Enc/Emb)   | Text (Params/FLOPS)   | Text Width (Enc/Emb)   | Created At   |
|:----------------------------------------------------------------------------------------------|:-----------------------|-------------:|:------------------------|:----------------------|:-----------------------|:-------------|
| [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | 302.9M / 174.7G        |          336 | 1024 / 768              | 85.1M / 1.2G          | 768 / 768              | 2022-04-22   |
| [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)         | 302.9M / 77.8G         |          224 | 1024 / 768              | 85.1M / 1.2G          | 768 / 768              | 2022-03-03   |
| [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16)           | 85.6M / 16.9G          |          224 | 768 / 512               | 37.8M / 529.2M        | 512 / 512              | 2022-03-03   |
| [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)           | 87.4M / 4.4G           |          224 | 768 / 512               | 37.8M / 529.2M        | 512 / 512              | 2022-03-03   |
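
Below is a sketch of driving one of the ONNX exports directly with `onnxruntime`. The repository id and ONNX file name are hypothetical placeholders; check this repo's file listing for the actual paths. Real preprocessing must also match CLIP's resize/normalize pipeline rather than the random tensor used here.

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Hypothetical repo id and file name; consult the actual file listing.
model_path = hf_hub_download(
    repo_id="deepghs/clip_onnx",
    filename="openai/clip-vit-base-patch32/image_encode.onnx",
)
session = ort.InferenceSession(model_path)

# ViT-B/32 takes a 224x224 RGB image as a normalized NCHW float tensor
# (per the Image Size column above); a random tensor stands in here.
pixels = np.random.rand(1, 3, 224, 224).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: pixels})
for out, val in zip(session.get_outputs(), outputs):
    print(out.name, val.shape)  # expect a 512-d embedding for the base models
```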