qaihm-bot committed
Commit 736c554 · verified · 1 Parent(s): 24ce654

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +70 -28
README.md CHANGED

@@ -15,7 +15,7 @@ tags:
 
 Contrastive Language-Image Pre-Training (CLIP) uses a ViT-like transformer to get visual features and a causal language model to get the text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
 
-This model is an implementation of OpenAI-Clip found [here](https://github.com/openai/CLIP/).
+This model is an implementation of OpenAI-Clip found [here]({source_repo}).
 This repository provides scripts to run OpenAI-Clip on Qualcomm® devices.
 More details on model performance across various devices, can be found
 [here](https://aihub.qualcomm.com/models/openai_clip).
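For readers unfamiliar with how the two encoders are used together, the upstream openai/CLIP package (the source implementation this README references) exposes them behind a small API. Below is a minimal zero-shot classification sketch against that package; it is not part of the commit diff, and the checkpoint name, image path, and candidate labels are illustrative placeholders.

```python
# Minimal zero-shot classification sketch using the upstream openai/CLIP package
# (pip install git+https://github.com/openai/CLIP.git).
# The checkpoint name, image path, and labels are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

with torch.no_grad():
    # The encoders can also be run separately via model.encode_image / model.encode_text
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # similarity-derived probabilities over the candidate labels
```

The compiled assets in the table below run these same two encoders (CLIPTextEncoder and CLIPImageEncoder) on-device.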
@@ -33,17 +33,53 @@ More details on model performance across various devices, can be found
 - Number of parameters (CLIPImageEncoder): 115M
 - Model size (CLIPImageEncoder): 437 MB
 
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
+|---|---|---|---|---|---|---|---|---|
+| CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 5.779 ms | 0 - 3 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 4.774 ms | 0 - 16 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
+| CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 35.403 ms | 0 - 130 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
+| CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 4.079 ms | 0 - 194 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.405 ms | 0 - 66 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
+| CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 26.223 ms | 0 - 534 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
+| CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 5.717 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 4.856 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 5.711 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.794 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 5.652 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | SA8775 (Proxy) | SA8775P Proxy | QNN | 4.897 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 5.683 ms | 0 - 287 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.903 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 6.593 ms | 0 - 168 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 5.491 ms | 0 - 66 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 3.963 ms | 0 - 109 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
+| CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.266 ms | 0 - 65 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 23.78 ms | 0 - 319 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
+| CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 5.196 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
+| CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 38.329 ms | 126 - 126 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
+| CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 38.384 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 27.206 ms | 0 - 56 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
+| CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 174.036 ms | 0 - 194 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
+| CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 33.247 ms | 0 - 666 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 24.164 ms | 1 - 170 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
+| CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 118.868 ms | 1 - 3571 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
+| CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 37.343 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 22.015 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 37.324 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 22.477 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 36.58 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | SA8775 (Proxy) | SA8775P Proxy | QNN | 22.644 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 36.958 ms | 0 - 3 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 22.477 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 37.123 ms | 0 - 549 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 30.382 ms | 0 - 170 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 25.495 ms | 0 - 460 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
+| CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 17.137 ms | 1 - 172 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 17.137 ms | 1 - 172 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
+| CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 22.135 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
+| CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 162.155 ms | 188 - 188 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
 
 
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
-| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 6.808 ms | 0 - 2 MB | FP16 | NPU | [CLIPTextEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 41.61 ms | 0 - 4 MB | FP16 | NPU | [CLIPImageEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 5.858 ms | 0 - 19 MB | FP16 | NPU | [CLIPTextEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 32.966 ms | 0 - 58 MB | FP16 | NPU | [CLIPImageEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so)
-
-
 
 ## Installation
 
@@ -99,23 +135,25 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.openai_clip.export
 ```
-
 ```
-Profile Job summary of CLIPTextEncoder
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 6.20 ms
-Estimated Peak Memory Range: 0.12-0.12 MB
-Compute Units: NPU (445) | Total (445)
-
-Profile Job summary of CLIPImageEncoder
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 28.83 ms
-Estimated Peak Memory Range: 0.57-0.57 MB
-Compute Units: NPU (438) | Total (438)
-
-
+Profiling Results
+------------------------------------------------------------
+CLIPTextEncoder
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : TFLITE
+Estimated inference time (ms)   : 5.8
+Estimated peak memory usage (MB): [0, 3]
+Total # Ops                     : 660
+Compute Unit(s)                 : NPU (658 ops) CPU (2 ops)
+
+------------------------------------------------------------
+CLIPImageEncoder
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : TFLITE
+Estimated inference time (ms)   : 38.4
+Estimated peak memory usage (MB): [0, 2]
+Total # Ops                     : 659
+Compute Unit(s)                 : NPU (659 ops)
 ```
 
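The profiling summary above is produced by the export script, which submits compile and profile jobs to Qualcomm® AI Hub on your behalf. For readers who want to drive that step themselves, a rough sketch with the `qai_hub` Python client is below; the asset path is an assumption for illustration, and the export script remains the documented path.

```python
# Rough sketch: profile an exported asset on a hosted device via the qai_hub client.
# Requires `pip install qai-hub` and a configured AI Hub API token.
# The model file path below is an illustrative assumption (e.g. output of the export script).
import qai_hub as hub

device = hub.Device("Samsung Galaxy S23")
profile_job = hub.submit_profile_job(
    model="build/openai_clip/CLIPTextEncoder.tflite",
    device=device,
)
profile_job.wait()

results = profile_job.download_profile()  # per-op timing and memory details as reported by AI Hub
print(results)
```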
 
@@ -243,15 +281,19 @@ provides instructions on how to use the `.so` shared library in an Android application
 Get more details on OpenAI-Clip's performance across various devices [here](https://aihub.qualcomm.com/models/openai_clip).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
+
 ## License
-- The license for the original implementation of OpenAI-Clip can be found
-[here](https://github.com/openai/CLIP/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of OpenAI-Clip can be found [here](https://github.com/openai/CLIP/blob/main/LICENSE).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
+
 
 ## References
 * [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
 * [Source Model Implementation](https://github.com/openai/CLIP/)
 
+
+
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).