---
library_name: pytorch
license: other
pipeline_tag: image-to-image
tags:
- quantized
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/quicksrnetsmall_quantized/web-assets/model_demo.png)

# QuickSRNetSmall-Quantized: Optimized for Mobile Deployment
## Upscale images and remove image noise

QuickSRNet Small is designed to upscale and sharpen images on mobile platforms in real time.

This model is an implementation of QuickSRNetSmall-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet).
This repository provides scripts to run QuickSRNetSmall-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/quicksrnetsmall_quantized).


### Model Details

- **Model Type:** Super resolution
- **Model Stats:**
  - Model checkpoint: quicksrnet_small_4x_checkpoint_int8
  - Input resolution: 128x128
  - Number of parameters: 33.3K
  - Model size: 42.5 KB


| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.365 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetSmall-Quantized.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.359 ms | 0 - 2 MB | INT8 | NPU | [QuickSRNet_Small_Quantized.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNet_Small_Quantized.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.997 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetSmall-Quantized.so](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNetSmall-Quantized.so) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.023 ms | 0 - 8 MB | INT8 | NPU | [QuickSRNet_Small_Quantized.so](https://huggingface.co/qualcomm/QuickSRNetSmall-Quantized/blob/main/QuickSRNet_Small_Quantized.so) |

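The pre-compiled assets in the table above can also be fetched directly from this repository. A minimal sketch using `huggingface_hub` (the filename must match one of the assets listed above):

```python
from huggingface_hub import hf_hub_download

# Download the pre-compiled TFLite asset listed in the table above.
tflite_path = hf_hub_download(
    repo_id="qualcomm/QuickSRNetSmall-Quantized",
    filename="QuickSRNetSmall-Quantized.tflite",
)
print(tflite_path)
```
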
## Installation

This model can be installed as a Python package via pip.

```bash
pip install qai-hub-models
```


## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to the [docs](https://app.aihub.qualcomm.com/docs/) for more information.


64
+
65
+ ## Demo off target
66
+
67
+ The package contains a simple end-to-end demo that downloads pre-trained
68
+ weights and runs this model on a sample input.
69
+
70
+ ```bash
71
+ python -m qai_hub_models.models.quicksrnetsmall_quantized.demo
72
+ ```
73
+
74
+ The above demo runs a reference implementation of pre-processing, model
75
+ inference, and post processing.
76
+
77
+ **NOTE**: If you want running in a Jupyter Notebook or Google Colab like
78
+ environment, please add the following to your cell (instead of the above).
79
+ ```
80
+ %run -m qai_hub_models.models.quicksrnetsmall_quantized.demo
81
+ ```
82
+
83
+
### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Checks the model's performance on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.quicksrnetsmall_quantized.export
```

```
Profile Job summary of QuickSRNetSmall-Quantized
--------------------------------------------------
Device: Samsung Galaxy S23 (13)
Estimated Inference Time: 1.36 ms
Estimated Peak Memory Range: 0.02-1.37 MB
Compute Units: NPU (8),CPU (3) | Total (11)

Profile Job summary of QuickSRNet_Small_Quantized
--------------------------------------------------
Device: Samsung Galaxy S23 (13)
Estimated Inference Time: 1.36 ms
Estimated Peak Memory Range: 0.03-1.52 MB
Compute Units: NPU (8),CPU (3) | Total (11)

Profile Job summary of QuickSRNetSmall-Quantized
--------------------------------------------------
Device: Samsung Galaxy S23 (13)
Estimated Inference Time: 1.00 ms
Estimated Peak Memory Range: 0.21-2.20 MB
Compute Units: NPU (12) | Total (12)

Profile Job summary of QuickSRNet_Small_Quantized
--------------------------------------------------
Device: Samsung Galaxy S23 (13)
Estimated Inference Time: 1.02 ms
Estimated Peak Memory Range: 0.22-7.55 MB
Compute Units: NPU (12) | Total (12)
```
## How does this work?

This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/QuickSRNetSmall-Quantized/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.quicksrnetsmall_quantized import Model

# Load the model
torch_model = Model.from_pretrained()
torch_model.eval()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```


Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
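
Once the job completes, you can also pull the raw metrics into Python rather than reading them off the web UI. A minimal sketch, assuming a recent `qai-hub` client where profile jobs expose `wait()` and `download_profile()`:

```python
# Block until the profile job finishes, then fetch the raw profile data.
profile_job.wait()
profile = profile_job.download_profile()

# The profile is a plain dictionary of on-device metrics.
print(sorted(profile.keys()))
```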

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)

on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output, as sketched below.

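A minimal sketch of such a check, assuming the model produces a single image-like output scaled to [0, 1] (`psnr` here is a hypothetical helper, not part of `qai_hub`):

```python
import numpy as np

def psnr(reference: np.ndarray, target: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float32) - target.astype(np.float32)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_value**2) / mse)

# `on_device_output` maps output names to lists of numpy arrays (one per input).
device_array = next(iter(on_device_output.values()))[0]

# Reference output from the PyTorch model on the same sample input.
with torch.no_grad():
    reference_array = torch_model(
        *[torch.tensor(data[0]) for data in input_data.values()]
    ).numpy()

print(f"PSNR: {psnr(reference_array, device_array):.2f} dB")
```
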
**Note**: On-device profiling and inference require access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).


## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.quicksrnetsmall_quantized.demo --on-device
```

**NOTE**: To run this in a Jupyter Notebook or Google Colab-like
environment, use the following in your cell instead of the above.
```
%run -m qai_hub_models.models.quicksrnetsmall_quantized.demo -- --on-device
```


## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the .tflite model in an Android application. A host-side
  sanity check for the asset is sketched after this list.

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.

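Before wiring the `.tflite` asset into an Android app, you can sanity-check it on a host machine with TensorFlow's Python interpreter. A minimal sketch, assuming TensorFlow is installed and the asset was downloaded as `tflite_path` (e.g., via `hf_hub_download` as shown earlier):

```python
import numpy as np
import tensorflow as tf

# Load the compiled TFLite asset and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random image with the expected input shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"]
)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# A 4x super-resolution model should produce a 4x larger spatial output.
upscaled = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", upscaled.shape)
```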

## View on Qualcomm® AI Hub
Get more details on QuickSRNetSmall-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/quicksrnetsmall_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- The license for the original implementation of QuickSRNetSmall-Quantized can be found
  [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)

## Community
* Join [our AI Hub Slack community](https://join.slack.com/t/qualcomm-ai-hub/shared_invite/zt-2dgf95loi-CXHTDRR1rvPgQWPO~ZZZJg) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).