qaihm-bot committed
Commit 8a85c0d · verified · 1 Parent(s): 4d5f77d

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +40 -19

README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 
 QuickSRNet Large is designed for upscaling images on mobile platforms to sharpen in real-time.
 
-This model is an implementation of QuickSRNetLarge found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet).
+This model is an implementation of QuickSRNetLarge found [here]({source_repo}).
 This repository provides scripts to run QuickSRNetLarge on Qualcomm® devices.
 More details on model performance across various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetlarge).
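Both versions of this passage describe the same workflow: the `qai_hub_models` package wraps QuickSRNetLarge as an ordinary PyTorch module. A minimal sketch of loading the pretrained weights in Python, assuming the `Model.from_pretrained()` entry point these model cards typically expose; the input resolution below is illustrative and not taken from this commit:

```python
import torch

# Assumed entry point: qai_hub_models model cards typically re-export the
# PyTorch implementation as `Model` with a from_pretrained() helper.
from qai_hub_models.models.quicksrnetlarge import Model

model = Model.from_pretrained()  # load pretrained QuickSRNet Large weights
model.eval()

# Illustrative low-resolution RGB input; consult the model card for the
# actual expected input shape.
low_res = torch.rand(1, 3, 128, 128)

with torch.no_grad():
    upscaled = model(low_res)  # super-resolved output tensor

print(tuple(upscaled.shape))
```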
@@ -29,15 +29,32 @@ More details on model performance across various devices, can be found
 - Number of parameters: 424K
 - Model size: 1.63 MB
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
-| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 2.439 ms | 6 - 7 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 2.106 ms | 2 - 6 MB | FP16 | NPU | [QuickSRNetLarge.so](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.so)
-
-
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+|---|---|---|---|---|---|---|---|---|
+| QuickSRNetLarge | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 2.476 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 2.107 ms | 0 - 3 MB | FP16 | NPU | [QuickSRNetLarge.so](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.so) |
+| QuickSRNetLarge | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 2.75 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetLarge.onnx](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx) |
+| QuickSRNetLarge | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 1.933 ms | 0 - 32 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 1.901 ms | 0 - 11 MB | FP16 | NPU | [QuickSRNetLarge.so](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.so) |
+| QuickSRNetLarge | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 2.674 ms | 0 - 34 MB | FP16 | NPU | [QuickSRNetLarge.onnx](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx) |
+| QuickSRNetLarge | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 2.4 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 2.183 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 2.443 ms | 0 - 14 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | SA8255 (Proxy) | SA8255P Proxy | QNN | 2.184 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 2.482 ms | 0 - 8 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | SA8775 (Proxy) | SA8775P Proxy | QNN | 2.209 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 2.448 ms | 0 - 6 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | SA8650 (Proxy) | SA8650P Proxy | QNN | 2.238 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 4.174 ms | 6 - 38 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 3.471 ms | 0 - 15 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 1.859 ms | 0 - 16 MB | FP16 | NPU | [QuickSRNetLarge.tflite](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.tflite) |
+| QuickSRNetLarge | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 1.594 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 1.871 ms | 0 - 15 MB | FP16 | NPU | [QuickSRNetLarge.onnx](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx) |
+| QuickSRNetLarge | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 2.388 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetLarge | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.684 ms | 8 - 8 MB | FP16 | NPU | [QuickSRNetLarge.onnx](https://huggingface.co/qualcomm/QuickSRNetLarge/blob/main/QuickSRNetLarge.onnx) |
 
 ## Installation
 
@@ -92,16 +109,16 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.quicksrnetlarge.export
 ```
-
 ```
-Profile Job summary of QuickSRNetLarge
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 2.39 ms
-Estimated Peak Memory Range: 0.20-0.20 MB
-Compute Units: NPU (31) | Total (31)
-
-
+Profiling Results
+------------------------------------------------------------
+QuickSRNetLarge
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : TFLITE
+Estimated inference time (ms)   : 2.5
+Estimated peak memory usage (MB): [0, 1]
+Total # Ops                     : 31
+Compute Unit(s)                 : NPU (28 ops) CPU (3 ops)
 ```
 
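The hunk above replaces the old single-device "Profile Job summary" with the "Profiling Results" block that the export script now prints. As a rough sketch of what that script automates, a profile job can also be submitted directly with the separate `qai_hub` client; the call names, the local `QuickSRNetLarge.tflite` path, and the device choice here are assumptions for illustration rather than something this commit specifies:

```python
# Sketch only: assumes the qai_hub client is installed and an API token is
# configured (e.g. via `qai-hub configure`); the exact API surface may differ.
import qai_hub as hub

# Upload the compiled asset from this repo and profile it on a hosted device
# matching one of the rows in the performance table above.
profile_job = hub.submit_profile_job(
    model="QuickSRNetLarge.tflite",
    device=hub.Device("Samsung Galaxy S23"),
)

# Fetch the profiling data (per-layer timings, compute-unit placement) once
# the job completes.
profile_data = profile_job.download_profile()
print(profile_data)
```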
@@ -200,15 +217,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
 Get more details on QuickSRNetLarge's performance across various devices [here](https://aihub.qualcomm.com/models/quicksrnetlarge).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
+
 ## License
-- The license for the original implementation of QuickSRNetLarge can be found
-[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of QuickSRNetLarge can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
+
 
 ## References
 * [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
+
+
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).
 